Re: [music-dsp] Dither video and articles

2015-02-12 Thread gwenhwyfaer
On 12/02/2015, gwenhwyfaer gwenhwyf...@gmail.com wrote:
 On 11/02/2015, Andrew Simper a...@cytomic.com replied to me:
 ... I made 7 sawtooth
 waves with random (static) phases and one straightforward sawtooth
 wave, with all partials in phase. I just listened to it again, to
 check my memory. On a half-decent pair of headphones, the difference
 between the all-partials-in-phase sawtooth and the random-phase ones
 is readily audible, but it was rather harder to tell the difference
 between the various random-phase waves; they all kind of sounded
 pulse-wavey. On a pair of speakers through the same amp and soundcard,
 though, I can still *just about* pick out the in-phase sawtooth -
 but I couldn't confidently tell the difference between the 7 other
 waves. Which I'm guessing has something to do with the difference
 between the fairly one-dimensional travel of sound from headphone to
 ear, vs the bouncing-in-from-all-kinds-of-directions speaker-ear
 journey.

 Have you considered that headphones don't have crossovers?

 Nope. Good point.

Indeed, it does seem to be a bit easier to pick out the in-phase
sawtooth on the hideous tinny laptop piezo-buzzers I've got in front
of me... but I'm not randomising the order of them or anything, and I
really should be doing that, so interpret my report as subject to
confirmation bias.

Crest factor? I can't easily find out, but a visual inspection shows
that all the waves are hitting one rail or the other. Which makes me
think I normalised each wave individually, which means I introduced
RMS differences as a means of distinguishing them...
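
(If it helps, a quick Octave check of that -- illustrative signals, not the actual test waves: crest factor is peak over RMS, so peak-normalising each wave separately shifts their RMS levels apart by exactly the difference in crest factors.)

% crest factor (peak / RMS) in dB: two waves that are each peak-normalised
% but have different crest factors necessarily end up at different RMS levels
crest_db = @(w) 20*log10(max(abs(w)) / sqrt(mean(w.^2)));

t = (0:44099)/44100;
s = sin(2*pi*100*t);                   % a sine: crest factor of about 3 dB
n = randn(size(t));                    % white noise: a much higher crest factor
[crest_db(s), crest_db(n)]             % the difference is the RMS offset you get
                                       % after normalising each wave to its peak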

OK, forget I said anything. *pipes down*
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-12 Thread gwenhwyfaer
On 11/02/2015, Andrew Simper a...@cytomic.com replied to me:
 ... I made 7 sawtooth
 waves with random (static) phases and one straightforward sawtooth
 wave, with all partials in phase. I just listened to it again, to
 check my memory. On a half-decent pair of headphones, the difference
 between the all-partials-in-phase sawtooth and the random-phase ones
 is readily audible, but it was rather harder to tell the difference
 between the various random-phase waves; they all kind of sounded
 pulse-wavey. On a pair of speakers through the same amp and soundcard,
 though, I can still *just about* pick out the in-phase sawtooth -
 but I couldn't confidently tell the difference between the 7 other
 waves. Which I'm guessing has something to do with the difference
 between the fairly one-dimensional travel of sound from headphone to
 ear, vs the bouncing-in-from-all-kinds-of-directions speaker-ear
 journey.

 Have you considered that headphones don't have crossovers?

Nope. Good point.


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Andrew Simper
On 11 February 2015 at 05:52, gwenhwyfaer gwenhwyf...@gmail.com wrote:
 On 10/02/2015, Didier Dambrin di...@skynet.be wrote:
 Pretty easy to check the obvious difference between a pure low sawtooth, and
 the same sawtooth with all partials starting at random phases.

 Ah, this again? Good times. I remember playing. I made 7 sawtooth
 waves with random (static) phases and one straightforward sawtooth
 wave, with all partials in phase. I just listened to it again, to
 check my memory. On a half-decent pair of headphones, the difference
 between the all-partials-in-phase sawtooth and the random-phase ones
 is readily audible, but it was rather harder to tell the difference
 between the various random-phase waves; they all kind of sounded
 pulse-wavey. On a pair of speakers through the same amp and soundcard,
 though, I can still *just about* pick out the in-phase sawtooth -
 but I couldn't confidently tell the difference between the 7 other
 waves. Which I'm guessing has something to do with the difference
 between the fairly one-dimensional travel of sound from headphone to
 ear, vs the bouncing-in-from-all-kinds-of-directions speaker-ear
 journey.

Have you considered that headphones don't have crossovers?

All the best,

Andrew


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Andrew Simper
Didier,

I can hear hiss down at -72 dBFS while a 0 dBFS 440 Hz sine wave is
playing. There is no compressor anywhere in my signal chain. I use an
RME FireFace UCX with all gains set to 0 dBFS, and I only adjust the
headphone output gain. The FX % CPU on the soundcard is at 0%, and I
even double-checked all the power buttons for the EQs / compressors
on each channel; nothing is on.

I will not reply to you any further on this topic. I have made my
statements very clear, posted examples, and been very patient with
you, but you still don't want to believe me, so it is best not to
discuss it any further; it is just wasting everyone's time.

All the best,

Andrew


-- cytomic -- sound music software --

On 10 February 2015 at 21:35, Didier Dambrin di...@skynet.be wrote:

 Interestingly, I wasn't gonna suggest that a possible cause could have been a 
 compressor built into the soundcard, because... why would a soundcard even do 
 that?

 However.. I've polled some people in our forum with this same test, and one 
 guy could hear it. But it turns out that he owns an X-Fi, and it does feature 
 automatic gain compensation, which was on for him. Owning the same soundcard, 
 I turned it on, and yes, that made the noise at -80dB rather clear.

 I'm not saying it's what's happening for you, but are you 100% sure of 
 everything the signal goes through in your system?


 This said, the existence of a built-in compressor in a soundcard.. that alone 
 might be a point for dithering, if the common end listener leaves that kind 
 of thing on.




 -----Original Message----- From: Andrew Simper
 Sent: Tuesday, February 10, 2015 6:52 AM

 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 Hi Didier,

 I count myself as having good hearing, I always wear ear protection at
 any gigs / loud events and have always done so. My hearing is very
 important to me since it is essential for my livelihood.

 I made a new test: a 440 Hz sine wave with three 0.25-second white
 noise bursts at -66 dB, -72 dB and -75 dB below the sine (which is at -6
 dBFS). I can hear the first one very clearly, and can just hear the
 second one. I can't actually hear the hiss of the third one, but I can
 hear the amplitude of the sine wave fractionally lowering even though the
 actual amplitude of the test sine remains constant. I don't know why
 this is, but that's how I hear it.

 You will clearly see where the white noise bursts are if you use some
 sort of FFT display, but please just have a listen first and try to
 pick where each of the three bursts is in the file:

 www.cytomic.com/files/dsp/border-of-hearing.wav

 For the other way around, a constant noise file with bursts of 440
 Hz sine waves, the sine has to be very loud before I can hear it, up
 around -28 dB from memory. Noise added to a sine wave is much easier
 to pick, which is why I think low-pass-filtered tones that are largely
 sine-like in nature are the border case for dither.
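
 (A rough Octave sketch along these lines -- the burst placement, fades, and exact
 calibration of Andy's file are not known here, so this only approximates the idea:)

 % 440 Hz sine at -6 dBFS with three 0.25 s white-noise bursts at
 % -66, -72 and -75 dB relative to the sine (placement and scaling are approximate)
 Fs  = 44100;
 t   = (0:10*Fs-1)/Fs;                      % 10 seconds
 x   = 10^(-6/20) * sin(2*pi*440*t);        % -6 dBFS sine
 lev = [-66 -72 -75];                       % burst levels relative to the sine
 pos = [2 5 8];                             % burst start times in seconds
 for k = 1:3
     i = round(pos(k)*Fs) + (1:round(0.25*Fs));
     x(i) = x(i) + 10^((-6 + lev(k))/20) * randn(1, length(i));
 end
 sound(x, Fs);                              % listen for the three bursts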

 All the best,

 Andy


 -- cytomic -- sound music software --


 On 10 February 2015 at 10:56, Didier Dambrin di...@skynet.be wrote:

 I'm having a hard time finding anyone who could hear past the -72dB noise
 around here.

 Really, either you have super-ears, or the cause is (technically) somewhere
 else. But it matters, because the whole point of dithering to 16bit depends
 on how common that ability is.




 -----Original Message----- From: Andrew Simper
 Sent: Saturday, February 07, 2015 2:08 PM

 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:


 It was just several times the same fading in/out noise at different
 levels,
 just to see if you hear quieter things than I do, I thought you'd have
 guessed that.

 https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
 (0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

 Here if I make the starting noise annoying, then I hear the first 4 parts,
 until 18:00. Thus, if 0dB is my threshold of annoyance, I can't hear
 -72dB.

 So you hear it at -78dB? Would be interesting to know how many can, and if
 it's subjective or a matter of testing environment (the variable already
 being the 0dB annoyance starting point)



 Yep, I could hear all of them, and the point at which I couldn't hear the hiss
 any more was at the 28.7 second mark, just before the end of the file.
 For reference, this noise blast sounded much louder than the bass tone
 that Nigel posted when both were normalised. I had my headphone amp
 at -18 dB, so the first noise peak was loud but not uncomfortable.

 I thought it was an odd test since the test file just stopped before I
 couldn't hear the LFO amplitude modulation cycles, so I wasn't sure
 what you were trying to prove!

 All the best,

 Andy




 -----Original Message----- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:21 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music

Re: [music-dsp] Dither video and articles

2015-02-10 Thread robert bristow-johnson

On 2/10/15 8:49 AM, Didier Dambrin wrote:
What are you talking about - why would phase not matter? It's 
extremely important (well, phase relationship between neighboring 
partials).





well, it's unlikely you'll be able to hear the difference between this:

x(t) = cos(wt) - 1/3*cos(3wt) + 1/5*cos(5wt) - 1/7*cos(7wt)

and this:

x(t) = cos(wt) + 1/3*cos(3wt) + 1/5*cos(5wt) + 1/7*cos(7wt)

yet the waveshapes are much different.

so if you have MATLAB or Octave, try this file out and see what you can 
hear.  look at the waveforms and see how different they are.


%
%   square_phase.m
%
%   a test to see if we can really hear phase changes
%   in the harmonics of a Nyquist limited square wave.
%
%   (c) 2004 r...@audioimagination.com
%

if ~exist('Fs', 'var')
 Fs = 44100  % sample rate, Hz
end

if ~exist('f0', 'var')
 f0 = 110.25 % fundamental freq, Hz
end

if ~exist('tone_duration', 'var')
 tone_duration = 2.0 % seconds
end

if ~exist('change_rate', 'var')
 change_rate = 1.0   % Hz
end

if ~exist('max_harmonic', 'var')
 max_harmonic = floor((Fs/2)/f0) - 1
end

if ~exist('amplitude_factor', 'var')
 amplitude_factor = 0.25 % this just keeps things from clipping
end

if ~exist('outFile', 'var')
 outFile = 'square_phase.wav'
end


% make sure we don't uber-Nyquist anything
max_harmonic = min(max_harmonic, floor((Fs/2)/f0)-1);

t = linspace((-1/4)/f0, tone_duration-(1/4)/f0, Fs*tone_duration+1);

detune = change_rate;

x = cos(2*pi*f0*t);  % start with 1st harmonic

n = 3;                                   % continue with 3rd harmonic
while (n <= max_harmonic)
    if ((n-1) == 4*floor((n-1)/4))       % lessee if it's an even or odd term
        x = x + (1/n)*cos(2*pi*n*f0*t);
    else
        x = x - (1/n)*cos(2*pi*(n*f0+detune)*t);
        detune = -detune;                % comment this line in and see some
    end                                  % funky intermediate waveforms
    n = n + 2;                           % continue with next odd harmonic
end

x = amplitude_factor*x;

% x = sin((pi/2)*x);   % toss in a little soft clipping

plot(t, x);  % see
sound(x, Fs);% hear
wavwrite(x, Fs, outFile);% remember






16 bits is just barely enough for high-quality audio.


So to you, that Pono player isn't snake oil?


well, Vicki is the high-res guru here.

i certainly don't think we need 24-bit and 192 kHz just for listening to 
music in our living room.  but for intermediate nodes (or intermediate 
files), 24-bit is not a bad idea.  and if you have space or bandwidth to 
burn, why not, say, 96 kHz.  then people can't complain about the 
scrunching of the bell curve near Nyquist they get with cookbook EQ.


for a high-quality audio and music signal processor, i think that 16-bit 
pre-emphasized files (for sampled sounds or waveforms) is the minimum i 
want, 16-bit or more ADC and DAC, and 24-bit internal nodes for 
processing is the minimum i would want to not feel cheap about it.  if 
i were to use an ADI Blackfin (i never have) to process music and 
better-than-voice audio, i would end up doing a lot of double-precision 
math.


BTW, at this: 
http://www.aes.org/events/125/tutorials/session.cfm?code=T19 i 
demonstrated how good 7-bit audio sounds in a variety of different 
formats, including fixed, float (with 3 exponent bits and 4 mantissa 
bits), and block floating point (actually that was 7.001 bits per 
sample), dithered and not, noise-shaped and not.  but i still wouldn't 
want to listen to 7-bit audio if i had CD.


well dithered and noise-shaped 16-bits at 44.1 kHz is good enough for 
me.  i might not be able to hear much wrong with 128 kbit/sec MP3, but i 
still like CD audio better.





Besides, if it had mattered so much, non-linear (mu/A-law) encoding 
could have been applied to 16bit as well...




naw, then you get a sorta noise amplitude modulation with a signal of 
roughly constant amplitude.  and there are much better ways to do 
optimal bit reduction than companding.  companding is a quick and easy 
way they did it back in the old Bell System days.  and, even in 
companding, arcsinh() and sinh() would be smoother mapping than either 
mu or A-law.
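
(For reference, a small Octave sketch of quantizing through a mu-law compander -- a 
generic illustration of why the noise tracks the signal level, not a claim about any 
particular codec:)

% mu-law companding: compress, quantize uniformly, expand.
% the quantization noise then scales roughly with the signal level,
% which is the "noise amplitude modulation" referred to above.
mu = 255;
compress = @(x) sign(x) .* log(1 + mu*abs(x)) / log(1 + mu);
expand   = @(y) sign(y) .* ((1 + mu).^abs(y) - 1) / mu;

bits = 8;
q    = 2^(1 - bits);                       % quantizer step on the companded signal
x    = 0.3 * sin(2*pi*440*(0:44099)/44100);
y    = expand(q * round(compress(x) / q)); % companded 8-bit version of x
err  = y - x;                              % the error follows the envelope of x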



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Dither video and articles

2015-02-10 Thread Didier Dambrin
Interestingly, I wasn't gonna suggest that a possible cause could have been 
a compressor built into the soundcard, because... why would a soundcard even do 
that?


However.. I've polled some people in our forum with this same test, and one 
guy could hear it. But it turns out that he owns an X-Fi, and it does 
feature automatic gain compensation, which was on for him. Owning the same 
soundcard, I turned it on, and yes, that made the noise at -80dB rather 
clear.


I'm not saying it's what's happening for you, but are you 100% sure of 
everything the signal goes through in your system?



This said, the existence of a built-in compressor in a soundcard.. that 
alone might be a point for dithering, if the common end listener leaves that 
kind of thing on.





-----Original Message----- 
From: Andrew Simper

Sent: Tuesday, February 10, 2015 6:52 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Didier,

I count myself as having good hearing, I always wear ear protection at
any gigs / loud events and have always done so. My hearing is very
important to me since it is essential for my livelihood.

I made a new test: a 440 Hz sine wave with three 0.25-second white
noise bursts at -66 dB, -72 dB and -75 dB below the sine (which is at -6
dBFS). I can hear the first one very clearly, and can just hear the
second one. I can't actually hear the hiss of the third one, but I can
hear the amplitude of the sine wave fractionally lowering even though the
actual amplitude of the test sine remains constant. I don't know why
this is, but that's how I hear it.

You will clearly see where the white noise bursts are if you use some
sort of FFT display, but please just have a listen first and try to
pick where each of the three bursts is in the file:

www.cytomic.com/files/dsp/border-of-hearing.wav

For the other way around, a constant noise file with bursts of 440
Hz sine waves, the sine has to be very loud before I can hear it, up
around -28 dB from memory. Noise added to a sine wave is much easier
to pick, which is why I think low-pass-filtered tones that are largely
sine-like in nature are the border case for dither.

All the best,

Andy


-- cytomic -- sound music software --


On 10 February 2015 at 10:56, Didier Dambrin di...@skynet.be wrote:

I'm having a hard time finding anyone who could hear past the -72dB noise
around here.

Really, either you have super-ears, or the cause is (technically) 
somewhere
else. But it matters, because the whole point of dithering to 16bit 
depends

on how common that ability is.




-----Original Message----- From: Andrew Simper
Sent: Saturday, February 07, 2015 2:08 PM

To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:


It was just several times the same fading in/out noise at different
levels,
just to see if you hear quieter things than I do, I thought you'd have
guessed that.

https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
(0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

Here if I make the starting noise annoying, then I hear the first 4 
parts,

until 18:00. Thus, if 0dB is my threshold of annoyance, I can't hear
-72dB.

So you hear it at -78dB? Would be interesting to know how many can, and 
if

it's subjective or a matter of testing environment (the variable already
being the 0dB annoyance starting point)



Yep, I could hear all of them, and the point at which I couldn't hear the hiss
any more was at the 28.7 second mark, just before the end of the file.
For reference, this noise blast sounded much louder than the bass tone
that Nigel posted when both were normalised. I had my headphone amp
at -18 dB, so the first noise peak was loud but not uncomfortable.

I thought it was an odd test since the test file just stopped before I
couldn't hear the LFO amplitude modulation cycles, so I wasn't sure
what you were trying to prove!

All the best,

Andy





-----Original Message----- From: Andrew Simper
Sent: Friday, February 06, 2015 3:21 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Sorry, you said "until", which is even more confusing. There are
multiple points up to which I hear the noise, since it sounds like the
noise is modulated in amplitude by a sine-like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.

All the best,

Andrew Simper



On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:



On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:



Just out of curiosity, until which point

Re: [music-dsp] Dither video and articles

2015-02-10 Thread robert bristow-johnson

On 2/9/15 10:19 PM, Nigel Redmon wrote:

But it matters, because the whole point of dithering to 16bit depends on how 
common that ability is.

Depends on how common? I’m not sure what qualifies for common, but if it’s 1 in 
100, or 5 in 100, it’s still a no-brainer because it costs nothing, effectively.


i have had a similar argument with Andrew Horner about tossing phase 
information outa the line spectrum of wavetables for wavetable 
synthesis.  why bother to do that?  why not just keep the phase 
information and the waveshape when it costs nothing to do it.


regarding dithering and quantization, if it were me, for 32-bit or 
24-bit fixed-point *intermediate* values (like multiple internal nodes 
of an algorithm), simply because of the cost of dithering, i would 
simply use fraction saving, which is 1st-order noise shaping with a 
zero at DC, and not dither.  or just simply round, but the fraction 
saving is better and just about as cheap in computational cost.
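
(A minimal Octave sketch of fraction saving as described above; the variable names 
and the 16-bit grid are just for illustration:)

% fraction saving: 1st-order error feedback (noise-shaping zero at DC),
% no dither.  the truncated-off fraction is carried into the next sample.
x = 0.5 * sin(2*pi*440*(0:44099)/44100);   % some test signal
y = zeros(size(x));
e = 0;                                     % saved fraction from the previous sample
for n = 1:length(x)
    v    = x(n) * 2^15 + e;                % scale to the 16-bit integer grid
    y(n) = floor(v);                       % truncate to the grid
    e    = v - y(n);                       % save the fraction for next time
end
y = y / 2^15;                              % back to the +/-1.0 range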


but for quantizing to 16 bits (like for mastering a CD or a 16-bit 
uncompressed .wav or .aif file), i would certainly dither and optimally 
noise-shape that.  it costs a little more, but like the wavetable phase, 
once you do it the ongoing costs are nothing.  and you have better data 
stored in your lower resolution format.  so why not?
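
(And, for contrast, a sketch of plain TPDF dither ahead of 16-bit rounding -- no 
noise shaping here, which would add an error-feedback loop around the rounding step:)

% TPDF dither (2 LSB peak-to-peak) added before rounding to 16 bits
x = 0.5 * sin(2*pi*440*(0:44099)/44100);   % some test signal
q = 2^-15;                                 % one 16-bit LSB for a +/-1.0 signal
d = (rand(size(x)) - rand(size(x))) * q;   % triangular PDF, +/-1 LSB
y = q * round((x + d) / q);                % dithered 16-bit quantization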


16 bits is just barely enough for high-quality audio.  and it wouldn't 
have been if Stanley Lipshitz and John Vanderkooy and Robert Wanamaker 
didn't tell us in the 80's how to extract another few more dB outa the 
dynamic range of the 16-bit word.  they really rescued the 80 minute Red 
Book CD.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.




Re: [music-dsp] Dither video and articles

2015-02-10 Thread Didier Dambrin
What are you talking about - why would phase not matter? It's extremely 
important (well, phase relationship between neighboring partials).





16 bits is just barely enough for high-quality audio.


So to you, that Pono player isn't snake oil?

Besides, if it had mattered so much, non-linear (mu/A-law) encoding could 
have been applied to 16bit as well...






-----Original Message----- 
From: robert bristow-johnson

Sent: Tuesday, February 10, 2015 2:37 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles

On 2/9/15 10:19 PM, Nigel Redmon wrote:
But it matters, because the whole point of dithering to 16bit depends on 
how common that ability is.
Depends on how common? I’m not sure what qualifies for common, but if it’s 
1 in 100, or 5 in 100, it’s still a no-brainer because it costs nothing, 
effectively.


i have had a similar argument with Andrew Horner about tossing phase
information outa the line spectrum of wavetables for wavetable
synthesis.  why bother to do that?  why not just keep the phase
information and the waveshape when it costs nothing to do it.

regarding dithering and quantization, if it were me, for 32-bit or
24-bit fixed-point *intermediate* values (like multiple internal nodes
of an algorithm), simply because of the cost of dithering, i would
simply use fraction saving, which is 1st-order noise shaping with a
zero at DC, and not dither.  or just simply round, but the fraction
saving is better and just about as cheap in computational cost.

but for quantizing to 16 bits (like for mastering a CD or a 16-bit
uncompressed .wav or .aif file), i would certainly dither and optimally
noise-shape that.  it costs a little more, but like the wavetable phase,
once you do it the ongoing costs are nothing.  and you have better data
stored in your lower resolution format.  so why not?

16 bits is just barely enough for high-quality audio.  and it wouldn't
have been if Stanley Lipshitz and John Vanderkooy and Robert Wanamaker
didn't tell us in the 80's how to extract another few more dB outa the
dynamic range of the 16-bit word.  they really rescued the 80 minute Red
Book CD.

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.







Re: [music-dsp] Dither video and articles

2015-02-10 Thread Didier Dambrin
I'm talking about simple initial phase offsets, nothing dynamic. It's an old 
subject; you will find it in this mailing list's archive as "ghost tone", with 
audio examples.


I'll redo an audio demo if you insist, but simply randomizing the *initial* 
(yes, nothing dynamic) phases of all partials of a sawtooth will give a 
pretty distinctive metallic tone, absolutely nothing like a pure sawtooth, 
while differing only in partial phases.
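
(A rough Octave sketch of that comparison, with illustrative values, not Didier's 
original demo; both versions share one normalisation so only the partial phases differ:)

% bandlimited sawtooth: all partials in phase vs. random static initial phases
Fs = 44100;  f0 = 110;  dur = 2;
t  = (0:Fs*dur-1)/Fs;
N  = floor((Fs/2)/f0) - 1;                 % stay below Nyquist
phi = 2*pi*rand(1, N);                     % one fixed random phase per partial
saw_inphase = zeros(size(t));
saw_random  = zeros(size(t));
for k = 1:N
    saw_inphase = saw_inphase + (1/k)*sin(2*pi*k*f0*t);
    saw_random  = saw_random  + (1/k)*sin(2*pi*k*f0*t + phi(k));
end
peak = max(max(abs(saw_inphase)), max(abs(saw_random)));
sound([saw_inphase saw_random] / peak, Fs);   % same gain for both, back to back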





-----Original Message----- 
From: robert bristow-johnson

Sent: Tuesday, February 10, 2015 7:47 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles

On 2/10/15 1:22 PM, Didier Dambrin wrote:
Of course, a lot of visually different waveshapes sound the same, as soon 
as the phase relationship between neighboring partials is shifted by the 
same amount.

they can be shifted by *any* amount, as long as it's static.

in fact, what do you mean by same amount?  same amount of time?  then
that's just a delay.

same amount of phase?  well that *does* change the waveshape, but it can
be any amount of phase for a perfectly periodic waveform.  when things
get less than perfectly periodic, then you have changing harmonic
coefficients, both in amplitude and phase.



That doesn't mean it's always the case


i agree.  if the phase changes rapidly enough, you'll hear it as a
detuned or slightly non-harmonic partial.

i can't argue Andrew Horner's case for him (he just didn't think he
needed to deal with changing relative phases in all of his wavetable
synthesis papers he had in the JAES).  i know you can construct all-pass
filters with long delay times inside (and sufficient feedback
coefficient) and you'll *definitely* hear a difference.  APFs only
change the phase and nothing else.

and my argument to Andrew was that it costs nothing to preserve the
phase in wavetable synthesis, so why not?  my own work (which is now
about 2 and 3 decades old) didn't even use what is commonly called the
heterodyne oscillator to get the wavetables.  i yanked the
time-domain waveforms directly outa the time-domain data.  Andrew would
do something like a sinusoidal modeling analysis, get both amplitude and
phase of each harmonic, and then throw the phase away before creating
the wavetables.

and I've once posted here examples of how shifting the phase of 1 harmonic 
of a sawtooth sounded very different.

I think you were even part of the debate.


probably.  perhaps i posted the same MATLAB file for discussion.

Pretty easy to check the obvious difference between a pure low sawtooth, 
and the same sawtooth with all partials starting at random phases.




all partials?  it's a bandlimited saw, no?  the harmonic numbers stop
at some finite number.

well, it's a (bandlimited) square wave in the example below and the
partials are changing phase in some reasonable goofy manner.  some
partials are slightly detuned up and others are slightly detuned down.
and both in such a way that the waveform slowly changes from square wave
to something unrecognizable and slowly back to square.  and, if your
playback system is nice and linear, it's unlikely you'll hear it do that
(and you keep the number of harmonics low, don't uber-Nyquist it).

it can be rewritten to do it for saw.

BTW, because of word-wrapping that i cannot turn off, be sure to unwrap
some of the comment lines in the MATLAB program.

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.









Re: [music-dsp] Dither video and articles

2015-02-10 Thread Didier Dambrin
Of course 24bit isn't a bad idea for intermediate files, but 32bit float is 
a better idea, even just because you don't have to normalize & store gain 
information that pretty much no app will read from the file. And since the 
price of storage is negligible these days...





-----Original Message----- 
From: robert bristow-johnson

Sent: Tuesday, February 10, 2015 6:11 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles


i certainly don't think we need 24-bit and 192 kHz just for listening to
music in our living room.  but for intermediate nodes (or intermediate
files), 24-bit is not a bad idea. 




Re: [music-dsp] Dither video and articles

2015-02-10 Thread Ethan Duni
So to you, that Pono player isn't snake oil?

It's more the 192kHz sampling rate that renders the Pono player into snake
oil territory. The extra bits probably aren't getting you much, but the
ridiculous sampling rate can only *hurt* audio quality, while consuming
that much more battery and storage.


Re: [music-dsp] Dither video and articles

2015-02-10 Thread robert bristow-johnson

On 2/10/15 1:30 PM, Didier Dambrin wrote:
Of course 24bit isn't a bad idea for intermediate files, but 32bit 
float is a better idea, even just because you don't have to normalize & 
store gain information that pretty much no app will read from the 
file. And since the price of storage is negligible these days...


can't disagree with that.  now, even with float, you can dither and 
noise shape the quantization (from double to single-precision floats), 
but the code to do so is more difficult.  and i dunno *what* to do if 
adding your dither causes, for a single sample, the exponent to change.  
it's kinda messy.  i guess you just accept that this particular sample 
will not be perfectly dithered correctly and, whatever quantization 
error *does* result, use that in the noise-shaping feedback.
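
(A very rough Octave sketch of the straightforward part -- TPDF dither scaled to the 
local single-precision spacing before the cast; the exponent-boundary case and the 
noise-shaping feedback are exactly the messy parts left out here:)

% dithered double -> single quantization (no noise shaping):
% scale the dither to the single-precision spacing at each sample's magnitude
x   = 0.3 * sin(2*pi*440*(0:44099)/44100);      % double-precision source
ulp = double(eps(single(x)));                   % local single-precision step size
d   = (rand(size(x)) - rand(size(x))) .* ulp;   % TPDF dither, +/-1 ulp
y   = single(x + d);                            % the cast rounds to single precision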


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Dither video and articles

2015-02-10 Thread Didier Dambrin
Of course, a lot of visually different waveshapes sound the same, as soon as 
the phase relationship between neighboring partials is shifted by the same 
amount.


That doesn't mean it's always the case and I've once posted here examples of 
how shifting the phase of 1 harmonic of a sawtooth sounded very different.

I think you were even part of the debate.
Pretty easy to check the obvious difference between a pure low sawtooth, and 
the same sawtooth with all partials starting at random phases.






-----Original Message----- 
From: robert bristow-johnson

Sent: Tuesday, February 10, 2015 6:11 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles

On 2/10/15 8:49 AM, Didier Dambrin wrote:
What are you talking about - why would phase not matter? It's extremely 
important (well, phase relationship between neighboring partials).





well, it's unlikely you'll be able to hear the difference between this:

x(t) = cos(wt) - 1/3*cos(3wt) + 1/5*cos(5wt) - 1/7*cos(7wt)

and this:

x(t) = cos(wt) + 1/3*cos(3wt) + 1/5*cos(5wt) + 1/7*cos(7wt)

yet the waveshapes are much different.

so if you have MATLAB or Octave, try this file out and see what you can
hear.  look at the waveforms and see how different they are.

%
%   square_phase.m
%
%   a test to see if we can really hear phase changes
%   in the harmonics of a Nyquist limited square wave.
%
%   (c) 2004 r...@audioimagination.com
%

if ~exist('Fs', 'var')
 Fs = 44100  % sample rate, Hz
end

if ~exist('f0', 'var')
 f0 = 110.25 % fundamental freq, Hz
end

if ~exist('tone_duration', 'var')
 tone_duration = 2.0 % seconds
end

if ~exist('change_rate', 'var')
 change_rate = 1.0   % Hz
end

if ~exist('max_harmonic', 'var')
 max_harmonic = floor((Fs/2)/f0) - 1
end

if ~exist('amplitude_factor', 'var')
 amplitude_factor = 0.25 % this just keeps things from clipping
end

if ~exist('outFile', 'var')
 outFile = 'square_phase.wav'
end


% make sure we don't uber-Nyquist anything
max_harmonic = min(max_harmonic, floor((Fs/2)/f0)-1);

t = linspace((-1/4)/f0, tone_duration-(1/4)/f0, Fs*tone_duration+1);

detune = change_rate;

x = cos(2*pi*f0*t);  % start with 1st harmonic

n = 3;                                   % continue with 3rd harmonic
while (n <= max_harmonic)
    if ((n-1) == 4*floor((n-1)/4))       % lessee if it's an even or odd term
        x = x + (1/n)*cos(2*pi*n*f0*t);
    else
        x = x - (1/n)*cos(2*pi*(n*f0+detune)*t);
        detune = -detune;                % comment this line in and see some
    end                                  % funky intermediate waveforms
    n = n + 2;                           % continue with next odd harmonic
end

x = amplitude_factor*x;

% x = sin((pi/2)*x);   % toss in a little soft clipping

plot(t, x);  % see
sound(x, Fs);% hear
wavwrite(x, Fs, outFile);% remember






16 bits is just barely enough for high-quality audio.


So to you, that Pono player isn't snake oil?


well, Vicki is the high-res guru here.

i certainly don't think we need 24-bit and 192 kHz just for listening to
music in our living room.  but for intermediate nodes (or intermediate
files), 24-bit is not a bad idea.  and if you have space or bandwidth to
burn, why not, say, 96 kHz.  then people can't complain about the
scrunching of the bell curve near Nyquist they get with cookbook EQ.

for a high-quality audio and music signal processor, i think that 16-bit
pre-emphasized files (for sampled sounds or waveforms) is the minimum i
want, 16-bit or more ADC and DAC, and 24-bit internal nodes for
processing is the minimum i would want to not feel cheap about it.  if
i were to use an ADI Blackfin (i never have) to process music and
better-than-voice audio, i would end up doing a lot of double-precision
math.

BTW, at this:
http://www.aes.org/events/125/tutorials/session.cfm?code=T19 i
demonstrated how good 7-bit audio sounds in a variety of different
formats, including fixed, float (with 3 exponent bits and 4 mantissa
bits), and block floating point (actually that was 7.001 bits per
sample), dithered and not, noise-shaped and not.  but i still wouldn't
want to listen to 7-bit audio if i had CD.

well dithered and noise-shaped 16-bits at 44.1 kHz is good enough for
me.  i might not be able to hear much wrong with 128 kbit/sec MP3, but i
still like CD audio better.




Besides, if it had mattered so much, non-linear (mu/A-law) encoding could 
have been applied to 16bit as well...




naw, then you get a sorta noise amplitude modulation with a signal of
roughly constant amplitude.  and there are much better ways to do
optimal bit reduction than companding.  companding is a quick and easy
way they did it back in the old Bell System days.  and, even in
companding, arcsinh() and sinh() would be smoother mapping

Re: [music-dsp] Dither video and articles

2015-02-10 Thread robert bristow-johnson

On 2/10/15 1:51 PM, Ethan Duni wrote:

So to you, that Pono player isn't snake oil?

It's more the 192kHz sampling rate that renders the Pono player into snake
oil territory. The extra bits probably aren't getting you much, but the
ridiculous sampling rate can only *hurt* audio quality, while consuming
that much more battery and storage.


that's interesting.  why does higher-than-needed sample rate hurt audio 
quality?  might not be necessary, but how does it make it worse 
(excluding the increased computational burden)?  i always think that 
analog (or continuous-time) is like having an infinite sample rate.



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Dither video and articles

2015-02-10 Thread gwenhwyfaer
On 10/02/2015, Didier Dambrin di...@skynet.be wrote:
 Pretty easy to check the obvious difference between a pure low sawtooth, and
 the same sawtooth with all partials starting at random phases.

Ah, this again? Good times. I remember playing. I made 7 sawtooth
waves with random (static) phases and one straightforward sawtooth
wave, with all partials in phase. I just listened to it again, to
check my memory. On a half-decent pair of headphones, the difference
between the all-partials-in-phase sawtooth and the random-phase ones
is readily audible, but it was rather harder to tell the difference
between the various random-phase waves; they all kind of sounded
pulse-wavey. On a pair of speakers through the same amp and soundcard,
though, I can still *just about* pick out the in-phase sawtooth -
but I couldn't confidently tell the difference between the 7 other
waves. Which I'm guessing has something to do with the difference
between the fairly one-dimensional travel of sound from headphone to
ear, vs the bouncing-in-from-all-kinds-of-directions speaker-ear
journey.

I'm only a data point, though, so I'm not brave enough to actually
conclude anything. At least, not any more. ;)


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Tom Duffy

The only comment in that page that actually tells the story is buried:

--
Different media, different master

I've run across a few articles and blog posts that declare the virtues 
of 24 bit or 96/192kHz by comparing a CD to an audio DVD (or SACD) of 
the 'same' recording. This comparison is invalid; the masters are 
usually different.



The benefit end users get from Pono / hi-resolution files is exactly
this - the master was prepared without the usual requirements for
radio-play-ready compression or filtering.

Sure you can do the same in 44.1kHz/16bit or MP3 or MP4, but packaging
is everything, and it's a lot easier to market something that requires
a bigger file as being better.

Everyone gets hung up on the "but you can't hear the extra bits / extra
Fs" argument but ignores the advantage of less GIGO (garbage in, garbage out).

Tom.


On 2/10/2015 12:46 PM, Ethan Duni wrote:
why does higher-than-needed sample rate hurt audio quality?
might not be necessary, but how does it make it worse (excluding
the increased computational burden)?

The danger is that you are now including a bunch of out-of-band content in
your output signal, which can be transformed into in-band aliasing by any
nonlinearities in your playback chain. It's generally not a big deal, but
it is measurable and does hurt quality:

http://xiph.org/~xiphmont/demo/neil-young.html

This is an excellent example of the tension between audiophile
perfectionism (i.e., more sample rate must always be at least as good,
because digital audio is some kind of terrifying bogeyman) and actual
engineering quality control (i.e., overspec-ing systems drives up costs,
compromises the quality in other components, and generally creates more
headaches than it solves).

E


On Tue, Feb 10, 2015 at 10:54 AM, robert bristow-johnson 
r...@audioimagination.com wrote:

On 2/10/15 1:51 PM, Ethan Duni wrote:

So to you, that Pono player isn't snake oil?

It's more the 192kHz sampling rate that renders the Pono player into snake
oil territory. The extra bits probably aren't getting you much, but the
ridiculous sampling rate can only *hurt* audio quality, while consuming
that much more battery and storage.


that's interesting.  why does higher-than-needed sample rate hurt audio
quality?  might not be necessary, but how does it make it worse (excluding
the increased computational burden)?  i always think that analog (or
continuous-time) is like having an infinite sample rate.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.











Re: [music-dsp] Dither video and articles

2015-02-10 Thread Tom Duffy

So you like the bar being raised, but not the way that Neil Young has
attempted?

Whether the higher resolution actually degrades the quality is a
topic up for future debate.

From the ponomusic webpage:
"...and now, with the PonoPlayer, you can finally feel the master in all 
its glory, in its native resolution, CD quality or higher, the way the 
artist made it, exactly"


Even they are not saying it has to be higher than CD quality, just
that it has to have been made well in the first place.

I don't get why so many people are trying to paint this as
a snake oil pitch.

---
Tom.

On 2/10/2015 1:13 PM, Ethan Duni wrote:
I'm all for releasing stuff from improved masters. There's a trend in my
favorite genre (heavy metal) to rerelease a lot of classics in full
dynamic range editions lately. While I'm not sure that all of these
releases really sound much better (how much dynamic range was there in an
underground death metal recording from 1991 anyway?) I like the trend.
These are regular CD releases, no weird formats (demonstrating that such is
not required to sell the improved master releases).

But the thing is that you often *can* hear the extra sampling frequency -
in the form of additional distortion. It sounds, if anything, *worse* than
a release with an appropriate sample rate! Trying to sell people on better
audio, and then giving them a bunch of additional intermodulation
distortion is not a justified marketing ploy, it's outright deceptive and
abusive. This is working from the assumption that your customers are
idiots, and that you should exploit that to make money, irrespective of
whether audio quality is harmed or not. The fact that Neil Young is himself
one of the suckers renders this less objectionable, but only slightly.
Anyway Pono is already a byword for audiophile snake oil so hopefully the
damage will mostly be limited to the bank accounts of Mr. Young and his
various financial backers in this idiocy. Sounds like the product is a real
dog in industrial design terms anyway (no hold button, awkward shape,
etc.). Good riddance...

E







Re: [music-dsp] Dither video and articles

2015-02-10 Thread Michael Gogins
What I am interested in, regarding this discussion, is quite specific.
I make computer music using Csound, and usually using completely
synthesized sound, and so far only in stereo. Csound can run at any
sample rate, can output floating-point soundfiles, and can dither. My
sounds are not necessarily simple and cover the whole frequency range
and a wide dynamic range.

My only real question is, since the signal path right up to the point
where the soundfile is written is likely to be the same in all cases,
what kind of differences if any can I try to hear in CD audio versus
say 96 kHz floating-point?

These differences (if any) will be caused by the different Csound
sampling rate, the different soundfile sample word size/dynamic range,
and of course the different things that might happen to these two
kinds of soundfiles on their way out of a high-quality
DAC/amplifier/monitor speaker rig.

At times, my pieces have fortunately been presented in nice quiet
concert halls with really good amplifiers and speakers. I have also
been able to listen a few times in high-end recording studios designed
for this kind of music (this is a very different listening
experience).

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Tue, Feb 10, 2015 at 4:13 PM, Ethan Duni ethan.d...@gmail.com wrote:
 I'm all for releasing stuff from improved masters. There's a trend in my
 favorite genre (heavy metal) to rerelease a lot of classics in full
 dynamic range editions lately. While I'm not sure that all of these
 releases really sound much better (how much dynamic range was there in an
 underground death metal recording from 1991 anyway?) I like the trend.
 These are regular CD releases, no weird formats (demonstrating that such is
 not required to sell the improved master releases).

 But the thing is that you often *can* hear the extra sampling frequency -
 in the form of additional distortion. It sounds, if anything, *worse* than
 a release with an appropriate sample rate! Trying to sell people on better
 audio, and then giving them a bunch of additional intermodulation
 distortion is not a justified marketing ploy, it's outright deceptive and
 abusive. This is working from the assumption that your customers are
 idiots, and that you should exploit that to make money, irrespective of
 whether audio quality is harmed or not. The fact that Neil Young is himself
 one of the suckers renders this less objectionable, but only slightly.
 Anyway Pono is already a byword for audiophile snake oil so hopefully the
 damage will mostly be limited to the bank accounts of Mr. Young and his
 various financial backers in this idiocy. Sounds like the product is a real
 dog in industrial design terms anyway (no hold button, awkward shape,
 etc.). Good riddance...

 E


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Ethan Duni
I like the trend of releasing remastered material, where there is scope for
improved quality. Which isn't always, but there's an entire generation of
albums that were victims of the loudness wars, and various early work by
artists that hadn't access to quality mastering at the time, and so on,
that can benefit. This has been happening totally independent of Pono.

I don't like the Pono music scam because it confounds that (legitimate)
aspect with the snake oil about 24 bits and high sampling rates - while
charging a premium. There are zero meaningful test results that back Pono's
quality claims (and note how frequently their marketing adds caveats about
comparing to low-res MP3s, as if it's 1998 or something). And while there
isn't a definitive formal test showing that Pono sucks, there are multiple
informal tests without obvious methodological flaws which show that Pono is
inferior to your regular iTunes downloads. Neil Young says he's going to
give you better quality (for 2-3 times the price), and instead delivers
*lower* quality (or, maybe, the same, at best).

The fact that their own marketing material can't even seem to keep their
story straight regarding what the high resolution is or is not supposed to
provide you, seems to me to go to the point that this is all a marketing
exercise in bullshitting the consumer with a bunch of ill-founded claims.
For that matter, Pono's implication that one can't get improved masters via
other routes is itself deceptive.

I'm also somewhat bemused by Neil Young being the poster boy for this
high-resolution snake oil. While I admittedly haven't listened to his
entire catalogue, his whole style features low dynamic range, non-extreme
spectrum, and quite high noise floors (typically easily audible at even
moderate volume). Which is fine, nothing wrong with the crunchy/vintage
rock sound. It just doesn't fit with the whole "we need to be able to hear
stuff at 35kHz and -130dB" delusion.

That said, this statement seems problematic:

Whether the higher resolution actually degrades the quality is a topic up
for future debate.

I mean, if you personally don't want to debate it right here and now that's
fine. But nobody is obliged to set this stuff aside. It's immediately
topical, and the test files for evaluating it have been provided in the
xiph link.

E



On Tue, Feb 10, 2015 at 1:25 PM, Tom Duffy tdu...@tascam.com wrote:

 So you like the bar being raised, but not the way that Neil Young has
 attempted?

 Whether the higher resolution actually degrades the quality is a
 topic up for future debate.

 From the ponomusic webpage:
 ...and now, with the PonoPlayer, you can finally feel the master in all
 its glory, in its native resolution, CD quality or higher, the way the
 artist made it, exactly

 Even they are not saying it has to be higher than CD quality, just
 that it has to have been made well in the first place.

 I don't get why so many people are trying to paint this as
 a snake oil pitch.

 ---
 Tom.


 On 2/10/2015 1:13 PM, Ethan Duni wrote:
 I'm all for releasing stuff from improved masters. There's a trend in my
 favorite genre (heavy metal) to rerelease a lot of classics in full
 dynamic range editions lately. While I'm not sure that all of these
 releases really sound much better (how much dynamic range was there in an
 underground death metal recording from 1991 anyway?) I like the trend.
 These are regular CD releases, no weird formats (demonstrating that such is
 not required to sell the improved master releases).

 But the thing is that you often *can* hear the extra sampling frequency -
 in the form of additional distortion. It sounds, if anything, *worse* than
 a release with an appropriate sample rate! Trying to sell people on better
 audio, and then giving them a bunch of additional intermodulation
 distortion is not a justified marketing ploy, it's outright deceptive and
 abusive. This is working from the assumption that your customers are
 idiots, and that you should exploit that to make money, irrespective of
 whether audio quality is harmed or not. The fact that Neil Young is himself
 one of the suckers renders this less objectionable, but only slightly.
 Anyway Pono is already a byword for audiophile snake oil so hopefully the
 damage will mostly be limited to the bank accounts of Mr. Young and his
 various financial backers in this idiocy. Sounds like the product is a real
 dog in industrial design terms anyway (no hold button, awkward shape,
 etc.). Good riddance...

 E



Re: [music-dsp] Dither video and articles

2015-02-10 Thread Ethan Duni
why does higher-than-needed sample rate hurt audio quality?
might not be necessary, but how does it make it worse (excluding
the increased computational burden)?

The danger is that you are now including a bunch of out-of-band content in
your output signal, which can be transformed into in-band aliasing by any
nonlinearities in your playback chain. It's generally not a big deal, but
it is measurable and does hurt quality:

http://xiph.org/~xiphmont/demo/neil-young.html

This is an excellent example of the tension between audiophile
perfectionism (i.e., more sample rate must always be at least as good,
because digital audio is some kind of terrifying bogeyman) and actual
engineering quality control (i.e., overspec-ing systems drives up costs,
compromises the quality in other components, and generally creates more
headaches than it solves).
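
(A toy Octave illustration of the mechanism, with made-up tone frequencies and a
made-up 2nd-order nonlinearity standing in for a real playback chain; it is not
taken from the xiph article:)

% two ultrasonic tones through a slightly asymmetric (2nd-order) nonlinearity
% produce an audible difference tone at 1 kHz (f2 - f1)
Fs = 96000;
t  = (0:Fs-1)/Fs;                          % 1 second
x  = 0.4*sin(2*pi*23000*t) + 0.4*sin(2*pi*24000*t);   % inaudible for most listeners
y  = x + 0.2*x.^2;                         % stand-in for a nonlinear playback chain
sound(0.2*y, Fs);                          % the audible 1 kHz tone is the IMD product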

E


On Tue, Feb 10, 2015 at 10:54 AM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 2/10/15 1:51 PM, Ethan Duni wrote:

 So to you, that Pono player isn't snake oil?

 It's more the 192kHz sampling rate that renders the Pono player into snake
 oil territory. The extra bits probably aren't getting you much, but the
 ridiculous sampling rate can only *hurt* audio quality, while consuming
 that much more battery and storage.


 that's interesting.  why does higher-than-needed sample rate hurt audio
 quality?  might not be necessary, but how does it make it worse (excluding
 the increased computational burden)?  i always think that analog (or
 continuous-time) is like having an infinite sample rate.


 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.






Re: [music-dsp] Dither video and articles

2015-02-10 Thread Zhiguang Zhang
Re:Pono, what about the DAC in the device?  That could make an audible and real 
difference.  Also, there is undeniably more information in high res downloads, 
if the original master was recorded to tape or to hi-res in Pro Tools.  So, has 
anyone ever considered the sample-level ‘phase’ effect of listening to properly 
mastered hi-res audio if the playback chain is of a quality that diminishes 
intermodulation artifacts?




-EZ

On Tue, Feb 10, 2015 at 4:45 PM, Ethan Duni ethan.d...@gmail.com wrote:

 I like the trend of releasing remastered material, where there is scope for
 improved quality. Which isn't always, but there's an entire generation of
 albums that were victims of the loudness wars, and various early work by
 artists that hadn't access to quality mastering at the time, and so on,
 that can benefit. This has been happening totally independent of Pono.
 I don't like the Pono music scam because it confounds that (legitimate)
 aspect with the snake oil about 24 bits and high sampling rates - while
 charging a premium. There are zero meaningful test results that back Pono's
 quality claims (and note how frequently their marketing adds caveats about
 comparing to low-res MP3s, as if it's 1998 or something). And while there
 isn't a definitive formal test showing that Pono sucks, there are multiple
 informal tests without obvious methodological flaws which show that Pono is
 inferior to your regular iTunes downloads. Neil Young says he's going to
 give you better quality (for 2-3 times the price), and instead delivers
 *lower* quality (or, maybe, the same, at best).
 The fact that their own marketing material can't even seem to keep their
 story straight regarding what the high resolution is or is not supposed to
 provide you, seems to me to go to the point that this is all a marketing
 exercise in bullshitting the consumer with a bunch of ill-founded claims.
 For that matter, Pono's implication that one can't get improved masters via
 other routes is itself deceptive.
 I'm also somewhat bemused by Neil Young being the poster boy for this
 high-resolution snake oil. While I admittedly haven't listened to his
 entire catalogue, his whole style features low dynamic range, non-extreme
 spectrum, and quite high noise floors (typically easily audible at even
 moderate volume). Which is fine, nothing wrong with the crunchy/vintage
 rock sound. It just doesn't fit with the whole we need to be able to hear
 stuff at 35kHz and -130dB delusions.
 That said, this statement seems problematic:
Whether the higher resolution actually degrades the quality is a topic up
 for future debate.
 I mean, if you personally don't want to debate it right here and now that's
 fine. But nobody is obliged to set this stuff aside. It's immediately
 topical, and the test files for evaluating it have been provided in the
 xiph link.
 E
 On Tue, Feb 10, 2015 at 1:25 PM, Tom Duffy tdu...@tascam.com wrote:
 So you like the bar being raised, but not the way that Neil Young has
 attempted?

 Whether the higher resolution actually degrades the quality is a
 topic up for future debate.

 From the ponomusic webpage:
 ...and now, with the PonoPlayer, you can finally feel the master in all
 its glory, in its native resolution, CD quality or higher, the way the
 artist made it, exactly

 Even they are not saying it has to be higher than CD quality, just
 that it has to have been made well in the first place.

 I don't get why so many people are trying to paint this as
 a snake oil pitch.

 ---
 Tom.


 On 2/10/2015 1:13 PM, Ethan Duni wrote:
 I'm all for releasing stuff from improved masters. There's a trend in my
 favorite genre (heavy metal) to rerelease a lot of classics in full
 dynamic range editions lately. While I'm not sure that all of these
 releases really sound much better (how much dynamic range was there in an
 underground death metal recording from 1991 anyway?) I like the trend.
 These are regular CD releases, no weird formats (demonstrating that such is
 not required to sell the improved master releases).

 But the thing is that you often *can* hear the extra sampling frequency -
 in the form of additional distortion. It sounds, if anything, *worse* than
 a release with an appropriate sample rate! Trying to sell people on better
 audio, and then giving them a bunch of additional intermodulation
 distortion is not a justified marketing ploy, it's outright deceptive and
 abusive. This is working from the assumption that your customers are
 idiots, and that you should exploit that to make money, irrespective of
 whether audio quality is harmed or not. The fact the Neil Young is himself
 one of the suckers renders this less objectionable, but only slightly.
 Anyway Pono is already a byword for audiophile snake oil so hopefully the
 damage will mostly be limited to the bank accounts of Mr. Young and his
 various financial backers in this idiocy. Sounds like the product is a real
 dog in industrial design terms anyway (no hold 

Re: [music-dsp] Dither video and articles

2015-02-10 Thread Ethan Duni
How do the crest factors of these different sawtooth waveforms compare?
I'd expect one with randomized phase to have a much lower crest factor.
Which is to say that I'd expect the in-phase sawtooth to activate a lot
more nonlinearity in the playback chain, which explains why that one is
easy to pick out but the various randomized ones all sound similar. It also
implies that we'd need a very fancy playback system with excellent
linearity to draw any conclusions about the underlying audibility of the
sawtooth partial phases as such.
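
For anyone who wants to check the crest-factor point, a small sketch (the partial count, fundamental and sample rate are arbitrary choices, not the files from the earlier experiment):

import numpy as np

fs, f0, n_partials = 48000, 55.0, 64
t = np.arange(fs) / fs

def saw(phases):
    # sawtooth built as a sum of 1/k-weighted sine partials
    return sum(np.sin(2 * np.pi * k * f0 * t + phases[k - 1]) / k
               for k in range(1, n_partials + 1))

def crest_db(x):
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x**2)))

rng = np.random.default_rng(0)
print("in-phase saw:     %.2f dB" % crest_db(saw(np.zeros(n_partials))))
for _ in range(3):
    print("random-phase saw: %.2f dB" % crest_db(saw(rng.uniform(0, 2 * np.pi, n_partials))))

Typically the in-phase sum comes out several dB peakier than the randomised ones.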

E

On Tue, Feb 10, 2015 at 1:52 PM, gwenhwyfaer gwenhwyf...@gmail.com wrote:

 On 10/02/2015, Didier Dambrin di...@skynet.be wrote:
  Pretty easy to check the obvious difference between a pure low sawtooth,
 and
 
  the same sawtooth with all partials starting at random phases.

 Ah, this again? Good times. I remember playing. I made 7 sawtooth
 waves with random (static) phases and one straightforward sawtooth
 wave, with all partials in phase. I just listened to it again, to
 check my memory. On a half-decent pair of headphones, the difference
 between the all-partials-in-phase sawtooth and the random-phase ones
 is readily audible, but it was rather harder to tell the difference
 between the various random-phase waves; they all kind of sounded
 pulse-wavey. On a pair of speakers through the same amp and soundcard,
 though, I can still *just about* pick out the in-phase sawtooth -
 but I couldn't confidently tell the difference between the 7 other
 waves. Which I'm guessing has something to do with the difference
 between the fairly one-dimensional travel of sound from headphone to
 ear, vs the bouncing-in-from-all-kinds-of-directions speaker-ear
 journey.

 I'm only a data point, though, so I'm not brave enough to actually
 conclude anything. At least, not any more. ;)
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Eric Brombaugh

Here's the guts of the Pono:

http://mikebeauchamp.com/2014/12/pono-player-teardown/

DAC is an ESS ES9018K2M

http://www.esstech.com/PDF/ES9018-2M%20PB%20Rev%200.8%20130619.pdf

32-bit - Wonder what the actual ENOB is...
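
One rough way to answer that is the usual rule of thumb ENOB = (SINAD - 1.76 dB) / 6.02 dB; the 120 dB figure below is only a placeholder, not a number from the ES9018K2M datasheet:

def enob(sinad_db):
    # effective number of bits from a measured SINAD / dynamic-range figure
    return (sinad_db - 1.76) / 6.02

print("ENOB at 120 dB SINAD: %.1f bits" % enob(120.0))   # roughly 19.6 bits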

Output driver is a discrete design.

Main MCU is apparently a TI OMAP similar to those found on the Beagleboard.

Eric

On 02/10/2015 03:27 PM, Zhiguang Zhang wrote:

Actually scratch that 2nd thought.  It would be good to know what DAC the Pono 
device contains.

-EZ

On Tue, Feb 10, 2015 at 5:20 PM, Zhiguang Zhang ericzh...@gmail.com
wrote:


Re:Pono, what about the DAC in the device?  That could make an audible and real 
difference.  Also, there is undeniably more information in high res downloads, 
if the original master was recorded to tape or to hi-res in Pro Tools.  So, has 
anyone ever considered the sample-level ‘phase’ effect of listening to properly 
mastered hi-res audio if the playback chain is of a quality that diminishes 
intermodulation artifacts?
-EZ


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-09 Thread Vicki Melchior
Nigel, I looked at your video again and it seems to me it's confusing as to 
whether you mean 'don't dither the 24b final output' or 'don't ever dither at 
24b'.  You make statements several times that imply the former, but in your 
discussion about 24b on all digital interfaces, sends and receives etc, you 
clearly say to never dither at 24b.  Several people in this thread have pointed 
out the difference between intermediate stage truncation and final stage 
truncation, and the fact that if truncation is done repeatedly, any distortion 
spectra will continue to build.   It is not noise-like, the peaks are coherent 
peaks and correlated to the signal.  

You don't say in the video what the processing history is for the files you are 
using.  If they are simple captures with no processing, they probably reflect 
the additive gaussian noise present at  the 20th bit in the A/D, based on 
Andy's post, and are properly dithered for 24b truncation.   My point is that 
at the digital capture stage you have (S+N) and the amplitude distribution of 
the S+N signal might be fine for 24b truncation if N is dither-like.  After 
various stages of digital processing including non-linear steps, the (S+N) 
intermediate signal may no longer have an adequate amplitude distribution to be 
truncated without 24b dither.  

I think the whole subject of self dither might be better approached through FFT 
measurement than by listening.   Bob Katz shows an FFT of truncation spectra at 
24b in his book on 'Itunes Music, Mastering for High Resolution Audio Delivery' 
 but he uses a generated, dithered pure tone that doesn't start with added 
gaussian noise.  Haven't thought about it but I can imagine extending his 
approach into a research effort.  
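
Along those lines, a rough sketch of such an FFT comparison (8-bit quantisation and a 997 Hz tone are chosen only to make the effect obvious; the principle is the same at 16 or 24 bits). Truncation leaves coherent, signal-correlated peaks in the error spectrum, while TPDF dither spreads the error into a flat noise floor:

import numpy as np

fs, f, bits, N = 48000, 997.0, 8, 1 << 16
q = 2.0 ** -(bits - 1)                       # quantisation step, full scale = +/-1
t = np.arange(N) / fs
x = 0.5 * np.sin(2 * np.pi * f * t)

def spectrum_db(e):
    w = np.hanning(N)
    return 20 * np.log10(np.abs(np.fft.rfft(e * w)) / np.sum(w) * 2 + 1e-12)

rng = np.random.default_rng(1)
tpdf = (rng.random(N) - rng.random(N)) * q   # triangular dither, 2 LSB wide
truncated = q * np.floor(x / q)
dithered = q * np.floor((x + tpdf) / q)

for name, y in (("truncated", truncated), ("dithered ", dithered)):
    print(name, "largest peak in error spectrum: %.1f dB" % spectrum_db(y - x).max())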

Offhand I don't know anything that would go wrong in your difference file ( 
...if the error doesn't sound wrong).  It's a common method for looking at 
residuals.

Vicki


On Feb 8, 2015, at 6:11 PM, Nigel Redmon wrote:

 “Beyond that, Nigel raises this issue in the context of self-dither”...
 
 First, remember that I’m the guy who recommended “always” dithering 16-bit 
 (no “always” as in “always necessary”, but as in “do it always, unless you 
 know that it gives no improvement”), and to not bother dithering 24-bit. So, 
 I’m only interested in this discussion for 24-bit. That said:
 
 ...In situations where there is a clear external noise source present, 
 whether the situation is analog to digital conversion or digital to digital 
 bit depth change, the external noise may, or may not, be satisfactory as 
 dither but at least it's properties can be measured.
 
 For 24-bit audio, could you give an example of when it’s likely to not be 
 satisfactory (maybe you’ve already given a reference to determining 
 “satisfactory”)? Offhand, I’d say one case might be with extremely low noise, 
 then digitally faded such that you fade the noise level below the dithering 
 threshold while you still have enough signal to exhibit truncation 
 distortion, and the fade characteristics allow it to last long enough to 
 matter to your ears—if we weren’t talking about this distortion being down 
 near -140 dB in the first place. I’d think that, typically, you’d have 
 gaussian noise at a much higher level than is needed to dither 24-bit; that 
 could change with digital processing, but I think that in the usual recording 
 chain, it seems pretty hard to avoid for your “analog to digital conversion” 
 case.
 
 I’m still interested in what you have to say about my post yesterday (“...if 
 the error doesn’t sound wrong to the ear, can it still sound wrong added to 
 the music?”). Care to comment?
 
 
 On Feb 8, 2015, at 8:09 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 I have no argument at all with the cheap high-pass TPDF dither; whenever it 
 was published the original authors undoubtedly verified that the moment 
 decoupling occurred, as you say.  And that's what is needed for dither 
 effectiveness.   If you're creating noise for dither, you have the option to 
 verify its properties.  But in the situation of an analog signal with added, 
 independent instrument noise, you do need to verify that the composite noise 
 source actually satisfies the criteria for dither.  1/f noise in particular 
 has been questioned, which is why I raised the spectrum issue.  
 
 Beyond that, Nigel raises this issue in the context of self-dither.  In 
 situations where there is a clear external noise source present, whether the 
 situation is analog to digital conversion or digital to digital bit depth 
 change, the external noise may, or may not, be satisfactory as dither but at 
 least it's properties can be measured.  If the 'self-dithering' instead 
 refers to analog noise captured into the digitized signal with the idea that 
 this noise is going to be preserved and available at later truncation steps 
 to 'self dither' it is a very very hazy argument.   I'm aware of the various 
 caveats that are often 

Re: [music-dsp] Dither video and articles

2015-02-09 Thread Vicki Melchior
That's a clear explanation of the self-dither assumed in A/D conversion, thanks 
for posting it. 

Vicki
 
On Feb 8, 2015, at 9:11 PM, Andrew Simper wrote:

 Vicki,
 
 If you look at the limits of what is possible in a real world ADC
 there is a certain amount of noise in any electrical system due to
 gaussian thermal noise:
 http://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise
 
 For example if you look at an instrument / measurement grade ADC like
 this: 
 http://www.prismsound.com/test_measure/products_subs/dscope/dscope_spec.php
 They publish figures of a residual noise floor of 1.4 uV, which they
 say is -115 dBu. So if you digitise a 1 V peak (2 V peak to peak) sine
 wave with a 24-bit ADC then you will have hiss (which includes a large
 portion of gaussian noise) at around the 20 bit mark, so you will have
 4-bits of hiss to self dither. This has nothing to do with microphones
 or noise in air, this is in the near perfect case of transmission via
 a well shielded differential cable transferring the voltage directly
 to the ADC.
 
 All the best,
 
 Andy
 -- cytomic -- sound music software --
 
 
 On 9 February 2015 at 00:09, Vicki Melchior vmelch...@earthlink.net wrote:
 I have no argument at all with the cheap high-pass TPDF dither; whenever it 
 was published the original authors undoubtedly verified that the moment 
 decoupling occurred, as you say.  And that's what is needed for dither 
 effectiveness.   If you're creating noise for dither, you have the option to 
 verify its properties.  But in the situation of an analog signal with added, 
 independent instrument noise, you do need to verify that the composite noise 
 source actually satisfies the criteria for dither.  1/f noise in particular 
 has been questioned, which is why I raised the spectrum issue.
 
 Beyond that, Nigel raises this issue in the context of self-dither.  In 
 situations where there is a clear external noise source present, whether the 
 situation is analog to digital conversion or digital to digital bit depth 
 change, the external noise may, or may not, be satisfactory as dither but at 
 least it's properties can be measured.  If the 'self-dithering' instead 
 refers to analog noise captured into the digitized signal with the idea that 
 this noise is going to be preserved and available at later truncation steps 
 to 'self dither' it is a very very hazy argument.   I'm aware of the various 
 caveats that are often postulated, i.e. signal is captured at double 
 precision, no truncation, very selected processing.  But even in minimalist 
 recording such as live to two track, it's not clear to me that the signal 
 can get through the digital stages of the A/D and still retain an unaltered 
 noise distribution.  It certainly won't do so after considerable processing. 
 So the short answer is, dither!  At the 24th bit or at the 16th bit, whatever your output 
 is.  If you (Nigel or RBJ) have references to the contrary, please say so.
 
 Vicki
 
 On Feb 8, 2015, at 10:11 AM, robert bristow-johnson wrote:
 
 On 2/7/15 8:54 AM, Vicki Melchior wrote:
 Well, the point of dither is to reduce correlation between the signal and 
 quantization noise.  Its effectiveness requires that the error signal has 
 given properties; the mean error should be zero and the RMS error should 
 be independent of the signal.  The best known examples satisfying those 
 conditions are white Gaussian noise at ~ 6dB above the RMS quantization 
 level and white TPDF noise  at ~3dB above the same, with Gaussian noise 
 eliminating correlation entirely and TPDF dither eliminating correlation 
 with the first two moments of the error distribution.   That's all 
 textbook stuff.  There are certainly noise shaping algorithms that shape 
 either the sum of white dither and quantization noise or the white dither 
 and quantization noise independently, and even (to my knowledge) a few 
 completely non-white dithers that are known to work, but determining the 
 effectiveness of noise at dithering still requires examining the 
 statistical properties of the error signal and showing that
 the mean is 0 and the second moment is signal independent.  (I think 
 Stanley Lipschitz showed that the higher moments don't matter to 
 audibility.)
 
 but my question was not about the p.d.f. of the dither (to decouple both 
 the mean and the variance of the quantization error, you need triangular 
 p.d.f. dither of 2 LSBs width that is independent of the *signal*) but 
 about the spectrum of the dither.  and Nigel mentioned this already, but 
 you can cheaply make high-pass TPDF dither with a single (decent) uniform 
 p.d.f. random number per sample and running that through a simple 1st-order 
 FIR which has +1 an -1 coefficients (i.e. subtract the previous UPDF from 
 the current UPDF to get the high-pass TPDF).  also, i think Bart Locanthi 
 (is he still on this planet?) and someone else did a simple paper back in 
 the 90s about the possible benefits of 

Re: [music-dsp] Dither video and articles

2015-02-09 Thread Nigel Redmon
But it matters, because the whole point of dithering to 16bit depends on how 
common that ability is.

Depends on how common? I’m not sure what qualifies for common, but if it’s 1 in 
100, or 5 in 100, it’s still a no-brainer because it costs nothing, effectively.

But more importantly, I don’t think you’re impressed by my point that it’s the 
audio engineers, the folks making the music, that are in the best position to 
hear it, and to do something about it. They are the ones listening carefully, 
in studios built to be quiet and lack reflections and resonances that might 
mask things, on revealing monitors and with ample power. I don’t think that you 
understand that it’s these guys who are not going to let their work go out the 
door with grit on it, even if it’s below -90 dB. You wouldn’t get many 
sympathetic ears among them if you advocated that they cease this dithering 
nonsense :-) I get enough grief about telling them that dither at 24-bit is 
useless.

How common it is for the average listener is immaterial. It’s not done for 
the average listener.


 On Feb 9, 2015, at 6:56 PM, Didier Dambrin di...@skynet.be wrote:
 
 I'm having a hard time finding anyone who could hear past the -72dB noise, 
 here around.
 
 Really, either you have super-ears, or the cause is (technically) somewhere 
 else. But it matters, because the whole point of dithering to 16bit depends 
 on how common that ability is.
 
 
 
 
 -Message d'origine- From: Andrew Simper
 Sent: Saturday, February 07, 2015 2:08 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:
 It was just several times the same fading in/out noise at different levels,
 just to see if you hear quieter things than I do, I thought you'd have
 guessed that.
 https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
 (0dB, -36dB, -54dB, -66dB, -72dB, -78dB)
 
 Here if I make the starting noise annoying, then I hear the first 4 parts,
 until 18:00. Thus, if 0dB is my threshold of annoyance, I can't hear -72dB.
 
 So you hear it at -78dB? Would be interesting to know how many can, and if
 it's subjective or a matter of testing environment (the variable already
 being the 0dB annoyance starting point)
 
 Yep, I could hear all of them, and the time I couldn't hear the hiss
 any more as at the 28.7 second mark, just before the end of the file.
 For reference this noise blast sounded much louder than the bass tone
 that Nigel posted when both were normalised, I had my headphones amp
 at -18 dB so the first noise peak was loud but not uncomfortable.
 
 I thought it was an odd test since the test file just stopped before I
 couldn't hear the LFO amplitude modulation cycles, so I wasn't sure
 what you were trying to prove!
 
 All the best,
 
 Andy
 
 
 
 
 -Message d'origine- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:21 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Sorry, you said until, which is even more confusing. There are
 multiple points when I hear the noise until since it sounds like the
 noise is modulated in amplitude by a sine like LFO for the entire
 file, so the volume of the noise ramps up and down in a cyclic manner.
 The last ramping I hear fades out at around the 28.7 second mark when
 it is hard to tell if it just ramps out at that point or is just on
 the verge of ramping up again and then the file ends at 28.93 seconds.
 I have not tried to measure the LFO wavelength or any other such
 things, this is just going on listening alone.
 
 All the best,
 
 Andrew Simper
 
 
 
 On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:
 
 On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:
 
 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?
 
 
 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing
 
 
 I hear noise immediately in that recording, it's hard to tell exactly
 the time I can first hear it since there is some latency from when I
 press play to when the sound starts, but as far as I can tell it is
 straight away. Why do you ask such silly questions?
 
 All the best,
 
 Andrew Simper
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http

Re: [music-dsp] Dither video and articles

2015-02-09 Thread Didier Dambrin
I'm having a hard time finding anyone who could hear past the -72dB noise, 
around here.


Really, either you have super-ears, or the cause is (technically) somewhere 
else. But it matters, because the whole point of dithering to 16bit depends 
on how common that ability is.





-Message d'origine- 
From: Andrew Simper

Sent: Saturday, February 07, 2015 2:08 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:
It was just several times the same fading in/out noise at different 
levels,

just to see if you hear quieter things than I do, I thought you'd have
guessed that.
https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
(0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

Here if I make the starting noise annoying, then I hear the first 4 parts,
until 18:00. Thus, if 0dB is my threshold of annoyance, I can't 
hear -72dB.


So you hear it at -78dB? Would be interesting to know how many can, and if
it's subjective or a matter of testing environment (the variable already
being the 0dB annoyance starting point)


Yep, I could hear all of them, and the time I couldn't hear the hiss
any more as at the 28.7 second mark, just before the end of the file.
For reference this noise blast sounded much louder than the bass tone
that Nigel posted when both were normalised, I had my headphones amp
at -18 dB so the first noise peak was loud but not uncomfortable.

I thought it was an odd test since the test file just stopped before I
couldn't hear the LFO amplitude modulation cycles, so I wasn't sure
what you were trying to prove!

All the best,

Andy





-Message d'origine- From: Andrew Simper
Sent: Friday, February 06, 2015 3:21 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Sorry, you said until, which is even more confusing. There are
multiple points when I hear the noise until since it sounds like the
noise is modulated in amplitude by a sine like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.

All the best,

Andrew Simper



On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:


On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:


Just out of curiosity, until which point do you hear the noise in this
little test (a 32bit float wav), starting from a bearable first part?


https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing



I hear noise immediately in that recording, it's hard to tell exactly
the time I can first hear it since there is some latency from when I
press play to when the sound starts, but as far as I can tell it is
straight away. Why do you ask such silly questions?

All the best,

Andrew Simper


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp

links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp

links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-09 Thread Nigel Redmon
OK, I don’t want to diverge too much from the practical to the theoretical, so 
I’m going to run down what is usual, not what is possible, because it narrows 
the field of discussion.

Most people I know are using recording systems that bus audio at 32-bit 
float, minimum, and use 64-bit float calculations in plug-ins and significant 
processing. They may still be using 24-bit audio tracks on disk, but for the 
most part they are recorded and are dithered one way or another (primarily 
gaussian noise in the recording process). They may bounce things to tracks to 
free processor cycles. I think in the large majority of cases, these are 
self-dithered, but even if it doesn’t happen for some, I don’t think it will 
impact the audio. And if people are worried about it, I don’t understand why 
they aren’t using 32-bit float files, as I think most people have that choice 
these days.

Some of the more hard core will send audio out to a convertor (therefore 
truncated at 24-bit), and back in. Again, I think in the vast majority of cases 
these will self-dither, but then there’s the fact that the error is at a very low 
level and will get buried in the thermal noise of the electronics, etc.

Maybe I left out some other good ones, but to cut it short, yes, I’m mainly 
talking about final mixes. At 24-bit, that often goes to someone else to 
master. The funny thing is that some mastering engineers say “only dither 
once!”, and they want to be the one doing it. Others point out that they may 
want to mess with the dynamic range and boost frequencies, and any error from 
not dithering 24-bit will show up in…you know, the stereo imaging, depth, etc. 
I think it would be exceptional to actually have truncation distortion of 
significant duration, except for potential situations with unusual fades, so 
I’m not worried about saying don’t dither 24-bit, even heading to a mastering 
engineer (but again, do it if you want, it’s just no big deal for final 
outputs–in contrast to the pain in the rear it is to do it at every point for 
the items I mentioned in previous paragraphs).

Down the more theoretical paths, I’ve had people argue that this is a big deal 
because things like ProTools 56k plug-ins need to be dithered internally…but 
why argue legacy stuff that “is what it is”, and secondly, these people usually 
don’t think through how many 24-bit truncations occur in a 56k algorithm, and 
you only have so many cycles. The other thing I sometimes get is the specter of 
the cumulative effect (but what if you have so many tracks, and feedback, 
and…)—but it seems to me that the more of this you get going on, to approach a 
meaningful error magnitude, the more it’s jumbled up in chaos and the less easy 
it is for your ear to recognize it as “bad”.



 On Feb 9, 2015, at 7:54 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 Nigel, I looked at your video again and it seems to me it's confusing as to 
 whether you mean 'don't dither the 24b final output' or 'don't ever dither at 
 24b'.  You make statements several times that imply the former, but in your 
 discussion about 24b on all digital interfaces, sends and receives etc, you 
 clearly say to never dither at 24b.  Several people in this thread have 
 pointed out the difference between intermediate stage truncation and final 
 stage truncation, and the fact that if truncation is done repeatedly, any 
 distortion spectra will continue to build.   It is not noise-like, the peaks 
 are coherent peaks and correlated to the signal.  
 
 You don't say in the video what the processing history is for the files you 
 are using.  If they are simple captures with no processing, they probably 
 reflect the additive gaussian noise present at  the 20th bit in the A/D, 
 based on Andy's post, and are properly dithered for 24b truncation.   My 
 point is that at the digital capture stage you have (S+N) and the amplitude 
 distribution of the S+N signal might be fine for 24b truncation if N is 
 dither-like.  After various stages of digital processing including non-linear 
 steps, the (S+N) intermediate signal may no longer have an adequate amplitude 
 distribution to be truncated without 24b dither.  
 
 I think the whole subject of self dither might be better approached through 
 FFT measurement than by listening.   Bob Katz shows an FFT of truncation 
 spectra at 24b in his book on 'Itunes Music, Mastering for High Resolution 
 Audio Delivery'  but he uses a generated, dithered pure tone that doesn't 
 start with added gaussian noise.  Haven't thought about it but I can imagine 
 extending his approach into a research effort.  
 
 Offhand I don't know anything that would go wrong in your difference file ( 
 ...if the error doesn't sound wrong).  It's a common method for looking at 
 residuals.
 
 Vicki
 
 
 On Feb 8, 2015, at 6:11 PM, Nigel Redmon wrote:
 
 Beyond that, Nigel raises this issue in the context of self-dither”...
 
 First, remember that I’m the guy who recommended “always” 

Re: [music-dsp] Dither video and articles

2015-02-09 Thread Nigel Redmon
I’m thankful for Andy posting that clear explanation too. Sometimes I 
understate things—when I said that it would be “pretty hard to avoid” having 
ample gaussian noise to self-dither in the A/D process, I was thinking 
cryogenics (LOL).


 On Feb 9, 2015, at 7:54 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 That's a clear explanation of the self-dither assumed in A/D conversion, 
 thanks for posting it. 
 
 Vicki
 
 On Feb 8, 2015, at 9:11 PM, Andrew Simper wrote:
 
 Vicki,
 
 If you look at the limits of what is possible in a real world ADC
 there is a certain amount of noise in any electrical system due to
 gaussian thermal noise:
 http://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise
 
 For example if you look at an instrument / measurement grade ADC like
 this: 
 http://www.prismsound.com/test_measure/products_subs/dscope/dscope_spec.php
 They publish figures of a residual noise floor of 1.4 uV, which they
 say is -115 dBu. So if you digitise a 1 V peak (2 V peak to peak) sine
 wave with a 24-bit ADC then you will have hiss (which includes a large
 portion of gaussian noise) at around the 20 bit mark, so you will have
 4-bits of hiss to self dither. This has nothing to do with microphones
 or noise in air, this is in the near perfect case of transmission via
 a well shielded differential cable transferring the voltage directly
 to the ADC.
 
 All the best,
 
 Andy
 -- cytomic -- sound music software --
 
 
 On 9 February 2015 at 00:09, Vicki Melchior vmelch...@earthlink.net wrote:
 I have no argument at all with the cheap high-pass TPDF dither; whenever it 
 was published the original authors undoubtedly verified that the moment 
 decoupling occurred, as you say.  And that's what is needed for dither 
 effectiveness.   If you're creating noise for dither, you have the option 
 to verify its properties.  But in the situation of an analog signal with 
 added, independent instrument noise, you do need to verify that the 
 composite noise source actually satisfies the criteria for dither.  1/f 
 noise in particular has been questioned, which is why I raised the spectrum 
 issue.
 
 Beyond that, Nigel raises this issue in the context of self-dither.  In 
 situations where there is a clear external noise source present, whether 
 the situation is analog to digital conversion or digital to digital bit 
 depth change, the external noise may, or may not, be satisfactory as dither 
 but at least it's properties can be measured.  If the 'self-dithering' 
 instead refers to analog noise captured into the digitized signal with the 
 idea that this noise is going to be preserved and available at later 
 truncation steps to 'self dither' it is a very very hazy argument.   I'm 
 aware of the various caveats that are often postulated, i.e. signal is 
 captured at double precision, no truncation, very selected processing.  But 
 even in minimalist recording such as live to two track, it's not clear to 
 me that the signal can get through the digital stages of the A/D and still 
 retain an unaltered noise distribution.  It certainly won't do so after 
 considerable processing.  So the short answer is, dither!  At the 24th bit or at the 16th bit, whatever your 
 output is.  If you (Nigel or RBJ) have references to the contrary, please 
 say so.
 
 Vicki
 
 On Feb 8, 2015, at 10:11 AM, robert bristow-johnson wrote:
 
 On 2/7/15 8:54 AM, Vicki Melchior wrote:
 Well, the point of dither is to reduce correlation between the signal and 
 quantization noise.  Its effectiveness requires that the error signal has 
 given properties; the mean error should be zero and the RMS error should 
 be independent of the signal.  The best known examples satisfying those 
 conditions are white Gaussian noise at ~ 6dB above the RMS quantization 
 level and white TPDF noise  at ~3dB above the same, with Gaussian noise 
 eliminating correlation entirely and TPDF dither eliminating correlation 
 with the first two moments of the error distribution.   That's all 
 textbook stuff.  There are certainly noise shaping algorithms that shape 
 either the sum of white dither and quantization noise or the white dither 
 and quantization noise independently, and even (to my knowledge) a few 
 completely non-white dithers that are known to work, but determining the 
 effectiveness of noise at dithering still requires examining the 
 statistical properties of the error signal and showing that
 the mean is 0 and the second moment is signal independent.  (I think 
 Stanley Lipschitz showed that the higher moments don't matter to 
 audibility.)
 
 but my question was not about the p.d.f. of the dither (to decouple both 
 the mean and the variance of the quantization error, you need triangular 
 p.d.f. dither of 2 LSBs width that is independent of the *signal*) but 
 about the spectrum of the dither.  and Nigel mentioned this already, but 
 you can cheaply make high-pass TPDF dither with a single (decent) uniform 
 p.d.f. random 

Re: [music-dsp] Dither video and articles

2015-02-08 Thread Vicki Melchior
I have no argument at all with the cheap high-pass TPDF dither; whenever it was 
published the original authors undoubtedly verified that the moment decoupling 
occurred, as you say.  And that's what is needed for dither effectiveness.   If 
you're creating noise for dither, you have the option to verify its properties. 
 But in the situation of an analog signal with added, independent instrument 
noise, you do need to verify that the composite noise source actually satisfies 
the criteria for dither.  1/f noise in particular has been questioned, which is 
why I raised the spectrum issue.  

Beyond that, Nigel raises this issue in the context of self-dither.  In 
situations where there is a clear external noise source present, whether the 
situation is analog to digital conversion or digital to digital bit depth 
change, the external noise may, or may not, be satisfactory as dither but at 
least its properties can be measured.  If the 'self-dithering' instead refers 
to analog noise captured into the digitized signal with the idea that this 
noise is going to be preserved and available at later truncation steps to 'self 
dither' it is a very very hazy argument.   I'm aware of the various caveats 
that are often postulated, i.e. signal is captured at double precision, no 
truncation, very selected processing.  But even in minimalist recording such as 
live to two track, it's not clear to me that the signal can get through the 
digital stages of the A/D and still retain an unaltered noise distribution.  It 
certainly won't do so after considerable processing.  So the short 
 answer is, dither!  At the 24th bit or at the 16th bit, whatever your output 
is.  If you (Nigel or RBJ) have references to the contrary, please say so.

Vicki

On Feb 8, 2015, at 10:11 AM, robert bristow-johnson wrote:

 On 2/7/15 8:54 AM, Vicki Melchior wrote:
 Well, the point of dither is to reduce correlation between the signal and 
 quantization noise.  Its effectiveness requires that the error signal has 
 given properties; the mean error should be zero and the RMS error should be 
 independent of the signal.  The best known examples satisfying those 
 conditions are white Gaussian noise at ~ 6dB above the RMS quantization 
 level and white TPDF noise  at ~3dB above the same, with Gaussian noise 
 eliminating correlation entirely and TPDF dither eliminating correlation 
 with the first two moments of the error distribution.   That's all textbook 
 stuff.  There are certainly noise shaping algorithms that shape either the 
 sum of white dither and quantization noise or the white dither and 
 quantization noise independently, and even (to my knowledge) a few 
 completely non-white dithers that are known to work, but determining the 
 effectiveness of noise at dithering still requires examining the statistical 
 properties of the error signal and showing that
 the mean is 0 and the second moment is signal independent.  (I think 
 Stanley Lipschitz showed that the higher moments don't matter to audibility.)
 
 but my question was not about the p.d.f. of the dither (to decouple both the 
 mean and the variance of the quantization error, you need triangular p.d.f. 
 dither of 2 LSBs width that is independent of the *signal*) but about the 
 spectrum of the dither.  and Nigel mentioned this already, but you can 
 cheaply make high-pass TPDF dither with a single (decent) uniform p.d.f. 
 random number per sample and running that through a simple 1st-order FIR 
 which has +1 an -1 coefficients (i.e. subtract the previous UPDF from the 
 current UPDF to get the high-pass TPDF).  also, i think Bart Locanthi (is he 
 still on this planet?) and someone else did a simple paper back in the 90s 
 about the possible benefits of high-pass dither.  wasn't a great paper or 
 anything, but it was about the same point.
 
 i remember mentioning this at an AES in the 90's, and Stanley *did* address 
 it.  for straight dither it works okay, but for noise-shaping with feedback, 
 to be perfectly legitimate, you want white TPDF dither (which requires adding 
 or subtracting two independent UPDF random numbers).  and i agree with that.  
 it's just that if someone wanted to make a quick-and-clean high-pass dither 
 with the necessary p.d.f., you can do that with the simple subtraction trick. 
  and the dither is not white but perfectly decouples the first two moments of 
 the total quantization error.  it's just a simple trick that not good for too 
 much.
 
 -- 
 
 r b-j  r...@audioimagination.com
 
 Imagination is more important than knowledge.
 
 
 
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

Re: [music-dsp] Dither video and articles

2015-02-08 Thread robert bristow-johnson

On 2/7/15 8:54 AM, Vicki Melchior wrote:
Well, the point of dither is to reduce correlation between the signal and 
quantization noise.  Its effectiveness requires that the error signal has given 
properties; the mean error should be zero and the RMS error should be 
independent of the signal.  The best known examples satisfying those conditions 
are white Gaussian noise at ~ 6dB above the RMS quantization level and white 
TPDF noise at ~3dB above the same, with Gaussian noise eliminating correlation 
entirely and TPDF dither eliminating correlation with the first two moments of 
the error distribution.  That's all textbook stuff.  There are certainly noise 
shaping algorithms that shape either the sum of white dither and quantization 
noise or the white dither and quantization noise independently, and even (to my 
knowledge) a few completely non-white dithers that are known to work, but 
determining the effectiveness of noise at dithering still requires examining the 
statistical properties of the error signal and showing that the mean is 0 and 
the second moment is signal independent.  (I think Stanley Lipschitz showed 
that the higher moments don't matter to audibility.)


but my question was not about the p.d.f. of the dither (to decouple both 
the mean and the variance of the quantization error, you need triangular 
p.d.f. dither of 2 LSBs width that is independent of the *signal*) but 
about the spectrum of the dither.  and Nigel mentioned this already, but 
you can cheaply make high-pass TPDF dither with a single (decent) 
uniform p.d.f. random number per sample and running that through a 
simple 1st-order FIR which has +1 and -1 coefficients (i.e. subtract the 
previous UPDF from the current UPDF to get the high-pass TPDF).  also, i 
think Bart Locanthi (is he still on this planet?) and someone else did a 
simple paper back in the 90s about the possible benefits of high-pass 
dither.  wasn't a great paper or anything, but it was about the same point.
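
A minimal sketch of that subtraction trick, assuming requantisation to a given bit depth with rounding (the function name and parameters are illustrative):

import numpy as np

def highpass_tpdf_quantize(x, bits, seed=0):
    # one uniform random number per sample; subtracting the previous one from
    # the current one gives triangular-PDF dither, 2 LSB wide, with a
    # first-difference (high-pass) spectrum
    lsb = 2.0 ** -(bits - 1)
    rng = np.random.default_rng(seed)
    u = rng.uniform(-0.5, 0.5, len(x) + 1) * lsb   # UPDF, 1 LSB wide
    d = u[1:] - u[:-1]                             # high-pass TPDF, 2 LSB wide
    return lsb * np.round((x + d) / lsb)

# usage: requantise a quiet 997 Hz sine to 16 bits
fs = 48000
t = np.arange(fs) / fs
y = highpass_tpdf_quantize(1e-3 * np.sin(2 * np.pi * 997 * t), 16)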


i remember mentioning this at an AES in the 90's, and Stanley *did* 
address it.  for straight dither it works okay, but for noise-shaping 
with feedback, to be perfectly legitimate, you want white TPDF dither 
(which requires adding or subtracting two independent UPDF random 
numbers).  and i agree with that.  it's just that if someone wanted to 
make a quick-and-clean high-pass dither with the necessary p.d.f., you 
can do that with the simple subtraction trick.  and the dither is not 
white but perfectly decouples the first two moments of the total 
quantization error.  it's just a simple trick that's not good for too much.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-08 Thread Andrew Simper
Vicki,

If you look at the limits of what is possible in a real world ADC
there is a certain amount of noise in any electrical system due to
gaussian thermal noise:
http://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise

For example if you look at an instrument / measurement grade ADC like
this: 
http://www.prismsound.com/test_measure/products_subs/dscope/dscope_spec.php
They publish figures of a residual noise floor of 1.4 uV, which they
say is -115 dBu. So if you digitise a 1 V peak (2 V peak to peak) sine
wave with a 24-bit ADC then you will have hiss (which includes a large
portion of gaussian noise) at around the 20 bit mark, so you will have
4-bits of hiss to self dither. This has nothing to do with microphones
or noise in air, this is in the near perfect case of transmission via
a well shielded differential cable transferring the voltage directly
to the ADC.
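
A quick back-of-envelope check of those figures (taking 0 dBFS as the 1 V peak sine and roughly 6 dB per bit):

import math

noise_rms = 1.4e-6                   # quoted residual noise floor, volts RMS
sine_rms = 1.0 / math.sqrt(2.0)      # 1 V peak sine
noise_db = 20 * math.log10(noise_rms / sine_rms)
print("noise floor: %.0f dB below the full-scale sine" % abs(noise_db))   # ~114 dB
print("roughly bit %.0f of a 24-bit word (~6 dB per bit)" % (abs(noise_db) / 6.02))

Which lands within a bit or so of the 20-bit mark mentioned above, leaving the bottom few bits of a 24-bit word as hiss.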

All the best,

Andy
-- cytomic -- sound music software --


On 9 February 2015 at 00:09, Vicki Melchior vmelch...@earthlink.net wrote:
 I have no argument at all with the cheap high-pass TPDF dither; whenever it 
 was published the original authors undoubtedly verified that the moment 
 decoupling occurred, as you say.  And that's what is needed for dither 
 effectiveness.   If you're creating noise for dither, you have the option to 
 verify its properties.  But in the situation of an analog signal with added, 
 independent instrument noise, you do need to verify that the composite noise 
 source actually satisfies the criteria for dither.  1/f noise in particular 
 has been questioned, which is why I raised the spectrum issue.

 Beyond that, Nigel raises this issue in the context of self-dither.  In 
 situations where there is a clear external noise source present, whether the 
 situation is analog to digital conversion or digital to digital bit depth 
 change, the external noise may, or may not, be satisfactory as dither but at 
 least it's properties can be measured.  If the 'self-dithering' instead 
 refers to analog noise captured into the digitized signal with the idea that 
 this noise is going to be preserved and available at later truncation steps 
 to 'self dither' it is a very very hazy argument.   I'm aware of the various 
 caveats that are often postulated, i.e. signal is captured at double 
 precision, no truncation, very selected processing.  But even in minimalist 
 recording such as live to two track, it's not clear to me that the signal can 
 get through the digital stages of the A/D and still retain an unaltered noise 
 distribution.  It certainly won't do so after considerable processing.  So 
 the short answer is, dither!  At the 24th bit or at the 16th bit, whatever your output 
 is.  If you (Nigel or RBJ) have references to the contrary, please say so.

 Vicki

 On Feb 8, 2015, at 10:11 AM, robert bristow-johnson wrote:

 On 2/7/15 8:54 AM, Vicki Melchior wrote:
 Well, the point of dither is to reduce correlation between the signal and 
 quantization noise.  Its effectiveness requires that the error signal has 
 given properties; the mean error should be zero and the RMS error should be 
 independent of the signal.  The best known examples satisfying those 
 conditions are white Gaussian noise at ~ 6dB above the RMS quantization 
 level and white TPDF noise  at ~3dB above the same, with Gaussian noise 
 eliminating correlation entirely and TPDF dither eliminating correlation 
 with the first two moments of the error distribution.   That's all textbook 
 stuff.  There are certainly noise shaping algorithms that shape either the 
 sum of white dither and quantization noise or the white dither and 
 quantization noise independently, and even (to my knowledge) a few 
 completely non-white dithers that are known to work, but determining the 
 effectiveness of noise at dithering still requires examining the 
 statistical properties of the error signal and showing that
 the mean is 0 and the second moment is signal independent.  (I think 
 Stanley Lipschitz showed that the higher moments don't matter to 
 audibility.)

 but my question was not about the p.d.f. of the dither (to decouple both the 
 mean and the variance of the quantization error, you need triangular p.d.f. 
 dither of 2 LSBs width that is independent of the *signal*) but about the 
 spectrum of the dither.  and Nigel mentioned this already, but you can 
 cheaply make high-pass TPDF dither with a single (decent) uniform p.d.f. 
 random number per sample and running that through a simple 1st-order FIR 
 which has +1 an -1 coefficients (i.e. subtract the previous UPDF from the 
 current UPDF to get the high-pass TPDF).  also, i think Bart Locanthi (is he 
 still on this planet?) and someone else did a simple paper back in the 90s 
 about the possible benefits of high-pass dither.  wasn't a great paper or 
 anything, but it was about the same point.

 i remember mentioning this at an AES in the 90's, and Stanley *did* address 
 it.  for straight 

Re: [music-dsp] Dither video and articles

2015-02-07 Thread Andrew Simper
32-bit internal floating point is not sufficient for certain DSP tasks,
and the resulting problems are plainly audible. A direct form 1 (DF1)
filter at low frequencies is the classic example of this: it causes large
amounts of low frequency rumble. This is a completely different thing
from the final bit depth of an audio file you listen to.
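
For anyone who wants to see the effect, a sketch (the filter cutoff, Q and white-noise input are arbitrary choices) that runs the same DF1 lowpass in 32-bit and 64-bit floats and reports the difference between the two:

import numpy as np

def lowpass_coeffs(fc, fs, q=0.707):
    # RBJ cookbook lowpass biquad
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b0 = (1 - np.cos(w0)) / 2
    b1 = 1 - np.cos(w0)
    b2 = b0
    a0 = 1 + alpha
    a1 = -2 * np.cos(w0)
    a2 = 1 - alpha
    return np.array([b0, b1, b2]) / a0, np.array([a1, a2]) / a0

def df1(x, b, a, dtype):
    # direct form 1, all states and arithmetic held in the given precision
    b = b.astype(dtype)
    a = a.astype(dtype)
    x = x.astype(dtype)
    y = np.zeros(len(x), dtype=dtype)
    x1 = x2 = y1 = y2 = dtype(0)
    for n in range(len(x)):
        y[n] = b[0] * x[n] + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x[n]
        y2, y1 = y1, y[n]
    return y

fs = 44100
b, a = lowpass_coeffs(20.0, fs)              # 20 Hz lowpass
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)                  # one second of white noise
err = df1(x, b, a, np.float32).astype(np.float64) - df1(x, b, a, np.float64)
print("RMS difference, 32-bit vs 64-bit DF1: %.1f dB"
      % (20 * np.log10(np.sqrt(np.mean(err**2)) + 1e-30)))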

Andy

-- cytomic -- sound music software --

On 7 February 2015 at 02:24, Michael Gogins michael.gog...@gmail.com wrote:

 Do not believe anything that is not confirmed to a high degree of
 statistical significance (say, 5 standard deviations) by a double-blind
 test using an ABX comparator.

 That said, the AES study did use double-blind testing. I did not read
 the article, only the abstract, so cannot say more about the study.

 In my own work, I have verified with a double-blind ABX comparator at
 a high degree of statistical significance that I can hear the
 differences in certain selected portions of the same Csound piece
 rendered with 32 bit floating point samples versus 64 bit floating
 point samples. These are sample words used in internal calculations,
 not for output soundfiles. What I heard was differences in the sound
 of the same filter algorithm. These differences were not at all hard
 to hear, but they occurred in only one or two places in the piece.

 I have not myself been able to hear differences in audio output
 quality between CD audio and high-resolution audio, but when I get the
 time I may try again, now that I have a better idea what to listen
 for.

 Regards,
 Mike



 -
 Michael Gogins
 Irreducible Productions
 http://michaelgogins.tumblr.com
 Michael dot Gogins at gmail dot com


 On Fri, Feb 6, 2015 at 1:13 PM, Nigel Redmon earle...@earlevel.com wrote:
 Mastering engineers can hear truncation error at the 24th bit but say it is 
 subtle and may require experience or training to pick up.
 
  Quick observations:
 
  1) The output step size of the lsb is full-scale / 2^24. If full-scale is 
  1V, then the step is about 0.0000000596 V, or 0.0596 microvolt (millionths 
  of a volt). Hearing capabilities aside, the converter must be able to 
  resolve this, and it must make it through the thermal (and other) noise of 
  their equipment and move a speaker. If you’re not an electrical engineer, 
  it may be difficult to grasp the problem that this poses.
 
  2) I happened on a discussion in an audio forum, where a highly-acclaimed 
  mastering engineer and voice on dither mentioned that he could hear the 
  dither kick in when he pressed a certain button in the GUI of some beta 
  software. The maker of the software had to inform him that he was mistaken 
  on the function of the button, and in fact it didn’t affect the audio 
  whatsoever. (I’ll leave his name out, because it’s immaterial—the guy is a 
  great source of info to people and is clearly excellent at what he does, 
  and everyone who works with audio runs into this at some point.) The 
  mastering engineer graciously accepted his goof.
 
  3) Mastering engineers invariably describe the differences in very 
  subjective term. While this may be a necessity, it sure makes it difficult 
  to pursue any kind of validation. From a mastering engineer to me, 
  yesterday: 'To me the truncated version sounds colder, more glassy, with 
  less richness in the bass and harmonics, and less front to back depth in 
  the stereo field.’
 
  4) 24-bit audio will almost always have a far greater random noise floor 
  than is necessary to dither, so they will be self-dithered. By “almost”, I 
  mean that very near 100% of the time. Sure, you can create exceptions, such 
  as synthetically generated simple tones, but it’s hard to imagine them 
  happening in the course of normal music making. There is nothing magic 
  about dither noise—it’s just mimicking the sort of noise that your 
  electronics generates thermally. And when mastering engineers say they can 
  hear truncation distortion at 24-bit, they don’t say “on this particular 
   brief moment, this particular recording”—they seem to say it in general. 
  It’s extremely unlikely that non-randomized truncation distortion even 
  exists for most material at 24-bit.
 
  My point is simply that I’m not going to accept that mastering engineers 
  can hear the 24th bit truncation just because they say they can.
 
 
  On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
  The following published double blind test contradicts the results of the 
  old Moran/Meyer publication in showing (a) that the differences between CD 
  and higher resolution sources is audible and (b) that failure to dither at 
  the 16th bit is also audible.
 
  http://www.aes.org/e-lib/browse.cfm?elib=17497
 
  The Moran/Meyer tests had numerous technical problems that have long been 
  discussed, some are enumerated in the above.
 
  As far as dithering at the 24th bit, I can't disagree more with a 
  conclusion that 

Re: [music-dsp] Dither video and articles

2015-02-07 Thread Andrew Simper
On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:
 It was just several times the same fading in/out noise at different levels,
 just to see if you hear quieter things than I do, I thought you'd have
 guessed that.
 https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
 (0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

 Here if I make the starting noise annoying, then I hear the first 4 parts,
 until 18:00. Thus, if 0dB is my threshold of annoyance, I can't hear -72dB.

 So you hear it at -78dB? Would be interesting to know how many can, and if
 it's subjective or a matter of testing environment (the variable already
 being the 0dB annoyance starting point)
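
For reference, a sketch of how a test file along those lines could be generated (the segment length, noise type and fades are guesses, not Didier's actual settings):

import numpy as np

fs, seg_seconds = 44100, 5.0
levels_db = [0, -36, -54, -66, -72, -78]
n = int(fs * seg_seconds)
rng = np.random.default_rng(0)
fade = np.sin(np.pi * np.arange(n) / n)       # fade in, then fade out
out = np.concatenate([10.0 ** (level / 20.0) * fade * rng.uniform(-1, 1, n)
                      for level in levels_db])
# write `out` as a 32-bit float wav with whatever tool you prefer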

Yep, I could hear all of them, and the time I couldn't hear the hiss
any more was at the 28.7 second mark, just before the end of the file.
For reference, this noise blast sounded much louder than the bass tone
that Nigel posted when both were normalised; I had my headphone amp
at -18 dB, so the first noise peak was loud but not uncomfortable.

I thought it was an odd test, since the file just stopped before the point
where I could no longer hear the LFO amplitude modulation cycles, so I
wasn't sure what you were trying to prove!

All the best,

Andy




 -Original Message- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:21 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 Sorry, you said until, which is even more confusing. There are
 multiple points when I hear the noise until since it sounds like the
 noise is modulated in amplitude by a sine like LFO for the entire
 file, so the volume of the noise ramps up and down in a cyclic manner.
 The last ramping I hear fades out at around the 28.7 second mark when
 it is hard to tell if it just ramps out at that point or is just on
 the verge of ramping up again and then the file ends at 28.93 seconds.
 I have not tried to measure the LFO wavelength or any other such
 things, this is just going on listening alone.

 All the best,

 Andrew Simper



 On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:

 On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:

 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?


 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing


 I hear noise immediately in that recording, it's hard to tell exactly
 the time I can first hear it since there is some latency from when I
 press play to when the sound starts, but as far as I can tell it is
 straight away. Why do you ask such silly questions?

 All the best,

 Andrew Simper

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp


 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-07 Thread Vicki Melchior
Hi RBJ,

Well, the point of dither is to reduce correlation between the signal and 
quantization noise.  Its effectiveness requires that the error signal have certain 
properties: the mean error should be zero and the RMS error should be 
independent of the signal.  The best known examples satisfying those conditions 
are white Gaussian noise at ~ 6dB above the RMS quantization level and white 
TPDF noise  at ~3dB above the same, with Gaussian noise eliminating correlation 
entirely and TPDF dither eliminating correlation with the first two moments of 
the error distribution.   That's all textbook stuff.  There are certainly noise 
shaping algorithms that shape either the sum of white dither and quantization 
noise or the white dither and quantization noise independently, and even (to my 
knowledge) a few completely non-white dithers that are known to work, but 
determining the effectiveness of noise at dithering still requires examining 
the statistical properties of the error signal and showing that the mean is 0 
and the second moment is signal independent.  (I think 
Stanley Lipshitz showed that the higher moments don't matter to audibility.)
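
As a quick numerical illustration of that textbook behaviour (a toy sketch only, not from any of the references; the unit quantizer step, trial count and RNG choice are arbitrary), quantizing a range of input values with TPDF dither shows the mean error staying at zero and the RMS error staying constant regardless of the signal value:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* uniform random value in [-0.5, 0.5) LSB */
static double frand(void)
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

int main(void)
{
    const int trials = 200000;
    srand(1);
    for (double x = 0.0; x <= 1.0001; x += 0.1) {   /* inputs between two levels */
        double sum = 0.0, sumsq = 0.0;
        for (int i = 0; i < trials; i++) {
            double d = frand() + frand();           /* TPDF dither, 2 LSB peak to peak */
            double e = floor(x + d + 0.5) - x;      /* total error of a rounding quantizer */
            sum += e;
            sumsq += e * e;
        }
        /* with TPDF dither: mean ~ 0 and RMS ~ 0.5 LSB for every x; drop the
           dither term d and the error becomes a deterministic function of x */
        printf("x = %4.1f   mean = % .4f   rms = %.4f\n",
               x, sum / trials, sqrt(sumsq / trials));
    }
    return 0;
}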

Probably there are papers around looking at analog noise in typical music 
signals and how well it works as self dither (because self dither is assumed in 
some A/D conversion) but I don't know them and would be very happy to see them. 
 The one case I know involving some degree of modeling was a tutorial on dither 
given last year in Berlin that advised against depending on self dither in 
signal processing unless the noise source was checked out thoroughly 
beforehand.  Variability of amplitude, PDF and time coherence were discussed if I 
recall.

Best,
Vicki 

On Feb 6, 2015, at 9:27 PM, robert bristow-johnson wrote:

 
 
 
 
 
 
 
  Original Message 
 
 Subject: Re: [music-dsp] Dither video and articles
 
 From: Vicki Melchior vmelch...@earthlink.net
 
 Date: Fri, February 6, 2015 2:23 pm
 
 To: A discussion list for music-related DSP music-dsp@music.columbia.edu
 
 --
 
 
 
 The self dither argument is not as obvious as it may appear. To be effective 
 at dithering, the noise has to be at the right level of course but also 
 should be white and temporally constant.
  
 why does it have to be white?  or why should it?
 
 
 
 
 
 --
  
 r b-j   r...@audioimagination.com
  
 Imagination is more important than knowledge.
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-07 Thread Nigel Redmon
why does it have to be white?  or why should it?

A common and trivial dither signal for non-shaped dither is rectangular PDF 
noise through a simple highpass filter (a first difference). In other words, 
instead of generating two random numbers and adding them together for the 
dither signal at each sample, one random number is generated, and the random 
number for the previous sample is subtracted. The idea is that it biases the 
noise toward the highs, away from the body of the music, and is a little 
faster computationally (which typically doesn’t mean a thing).
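
In code that is just a one-sample delay and a subtract. A minimal sketch (the scaling to half-LSB uniforms and the use of rand() are arbitrary choices here, purely for illustration):

#include <stdio.h>
#include <stdlib.h>

/* uniform random value in [-0.5, 0.5) LSB */
static double frand(void)
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

int main(void)
{
    double prev = frand();
    for (int n = 0; n < 10; n++) {
        double r = frand();
        double dither = r - prev;   /* first difference: spectrum tilted toward the highs */
        prev = r;
        printf("%2d  % f\n", n, dither);
        /* in use: out[n] = quantize(in[n] + dither); */
    }
    return 0;
}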


 On Feb 6, 2015, at 6:27 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:
 
 
  Original Message 
 
 Subject: Re: [music-dsp] Dither video and articles
 
 From: Vicki Melchior vmelch...@earthlink.net
 
 Date: Fri, February 6, 2015 2:23 pm
 
 To: A discussion list for music-related DSP music-dsp@music.columbia.edu
 
 --
 
 
 The self dither argument is not as obvious as it may appear. To be effective 
 at dithering, the noise has to be at the right level of course but also 
 should be white and temporally constant.
  
 why does it have to be white?  or why should it?
 
 
 --
  
 r b-j   r...@audioimagination.com
  
 Imagination is more important than knowledge.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-07 Thread Nigel Redmon
Hi Vicki,

My intuitive view of dither is this (I think you can get this point from my 
video):

After truncation, the error introduced is the truncated signal minus the 
original high resolution signal. We could analyze it statistically, but our 
ears and brain do a real good job of that. And after all, the object here is to 
satisfy our ears and brain.

Listening to the original, high-resolution signal, plus this error signal, is 
equivalent to listening to the truncated signal.
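
(If you want to audition that error signal directly, here is a minimal sketch of the null test; the quiet 60 Hz source tone and the +40 dB monitoring boost are just arbitrary choices for illustration:)

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double fs = 48000.0;
    const double boost = pow(10.0, 40.0 / 20.0);           /* +40 dB to make it audible */
    for (int n = 0; n < 16; n++) {
        double x = 0.001 * sin(2.0 * pi * 60.0 * n / fs);  /* quiet high-res source */
        double q = floor(x * 32768.0) / 32768.0;           /* truncate to 16 bits */
        double e = q - x;                                  /* the error signal */
        printf("%2d  x = % .8f   err = % .8f   boosted = % .5f\n",
               n, x, e, e * boost);
        /* by construction, x + e is exactly the truncated signal q */
    }
    return 0;
}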

So, my question would be, given such an error signal that sounds smooth, 
pleasant, and unmodulated (hiss-like, not grating, whining, or sputtering, for 
instance): Under what circumstances would the result of adding this error 
signal to the original signal result in an unnecessarily distracting or 
unpleasant degradation of the source material? (And of course, we’re talking 
about 16-bit audio, so not an error of overpowering amplitude.)

I’m not asking this rhetorically, I’d like to know. Measurable statistical 
purity aside, if the error doesn’t sound wrong to the ear, can it still sound 
wrong added to the music? I’ve tried a bit, but so far I haven’t been able to 
convince myself that it can, so I’d appreciate it if someone else could.

Nigel


 On Feb 7, 2015, at 5:54 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 Hi RBJ,
 
 Well, the point of dither is to reduce correlation between the signal and 
 quantization noise.  Its effectiveness requires that the error signal has 
 given properties; the mean error should be zero and the RMS error should be 
 independent of the signal.  The best known examples satisfying those 
 conditions are white Gaussian noise at ~ 6dB above the RMS quantization level 
 and white TPDF noise  at ~3dB above the same, with Gaussian noise eliminating 
 correlation entirely and TPDF dither eliminating correlation with the first 
 two moments of the error distribution.   That's all textbook stuff.  There 
 are certainly noise shaping algorithms that shape either the sum of white 
 dither and quantization noise or the white dither and quantization noise 
 independently, and even (to my knowledge) a few completely non-white dithers 
 that are known to work, but determining the effectiveness of noise at 
 dithering still requires examining the statistical properties of the error 
 signal and showing that the mean is 0 and the second moment is signal 
 independent.  (I think 
 Stanley Lipshitz showed that the higher moments don't matter to audibility.)
 
 Probably there are papers around looking at analog noise in typical music 
 signals and how well it works as self dither (because self dither is assumed 
 in some A/D conversion) but I don't know them and would be very happy to see 
 them.  The one case I know involving some degree of modeling was a tutorial 
 on dither given last year in Berlin that advised against depending on self 
 dither in signal processing unless the noise source was checked out 
 thoroughly beforehand.  Variability of amplitude, PDF and time coherence 
 were discussed if I recall.
 
 Best,
 Vicki 
 
 On Feb 6, 2015, at 9:27 PM, robert bristow-johnson wrote:
 
 
 
 
 
 
 
 
  Original Message 
 
 Subject: Re: [music-dsp] Dither video and articles
 
 From: Vicki Melchior vmelch...@earthlink.net
 
 Date: Fri, February 6, 2015 2:23 pm
 
 To: A discussion list for music-related DSP music-dsp@music.columbia.edu
 
 --
 
 
 
 The self dither argument is not as obvious as it may appear. To be 
 effective at dithering, the noise has to be at the right level of course 
 but also should be white and temporally constant.
 
 why does it have to be white?  or why should it?
 
 
 
 
 
 --
 
 r b-j   r...@audioimagination.com
 
 Imagination is more important than knowledge.
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-06 Thread Didier Dambrin

mmh, Affiliation: Meridian Audio Ltd?




-Original Message- 
From: Vicki Melchior

Sent: Friday, February 06, 2015 2:21 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

The following published double blind test contradicts the results of the old 
Moran/Meyer publication in showing (a) that the differences between CD and 
higher resolution sources are audible and (b) that failure to dither at the 
16th bit is also audible.


http://www.aes.org/e-lib/browse.cfm?elib=17497

The Moran/Meyer tests had numerous technical problems that have long been 
discussed, some are enumerated in the above.


As far as dithering at the 24th bit, I can't disagree more with a conclusion 
that says it's unnecessary in data handling.  Mastering engineers can hear 
truncation error at the 24th bit but say it is subtle and may require 
experience or training to pick up.  What they are hearing is not noise or 
peaks sitting at the 24th bit but rather the distortion that goes with 
truncation at 24b, and it is said to have a characteristic coloration effect 
on sound.  I'm aware of an effort to show this with AB/X tests, hopefully it 
will be published.  The problem with failing to dither at 24b is that many 
such truncation steps would be done routinely in mastering, and thus the 
truncation distortion products continue to build up.  Whether you personally 
hear it is likely to depend both on how extensive your data flow pathway is 
and how good your playback equipment is.


Vicki Melchior

On Feb 5, 2015, at 10:01 PM, Ross Bencina wrote:


On 6/02/2015 1:50 PM, Tom Duffy wrote:

The AES report is highly controversial.

Plenty of sources dispute the findings.


Can you name some?

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Didier Dambrin
It was just several times the same fading in/out noise at different levels, 
just to see if you hear quieter things than I do, I thought you'd have 
guessed that.

https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
(0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

Here, if I make the starting noise annoying, then I hear the first 4 parts, 
up to about the 18 second mark. Thus, if 0dB is my threshold of annoyance, I can't hear -72dB.


So you hear it at -78dB? Would be interesting to know how many can, and if 
it's subjective or a matter of testing environment (the variable already 
being the 0dB annoyance starting point)
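
For anyone who wants to roll their own version of this kind of test, here is a rough sketch of how a stepped noise file like that could be generated (the raised-cosine envelope, 4-second burst length and raw float output are my guesses, not how the file above was actually made):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double fs = 44100.0;
    const double levels_db[6] = { 0.0, -36.0, -54.0, -66.0, -72.0, -78.0 };
    const int burst = (int)(4.0 * fs);            /* 4 s per burst, a guess */
    FILE *out = fopen("noise_steps.f32", "wb");   /* raw 32-bit float, mono */
    if (!out) return 1;
    for (int b = 0; b < 6; b++) {
        double gain = pow(10.0, levels_db[b] / 20.0);
        for (int n = 0; n < burst; n++) {
            double env = 0.5 - 0.5 * cos(2.0 * pi * n / burst);   /* fade in/out */
            double noise = 2.0 * rand() / ((double)RAND_MAX + 1.0) - 1.0;
            float s = (float)(gain * env * noise);
            fwrite(&s, sizeof s, 1, out);
        }
    }
    fclose(out);
    return 0;
}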





-Original Message- 
From: Andrew Simper

Sent: Friday, February 06, 2015 3:21 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Sorry, you said until, which is even more confusing. There are
multiple points when I hear the noise until since it sounds like the
noise is modulated in amplitude by a sine like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.

All the best,

Andrew Simper



On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:

On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:

Just out of curiosity, until which point do you hear the noise in this
little test (a 32bit float wav), starting from a bearable first part?

https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing


I hear noise immediately in that recording, it's hard to tell exactly
the time I can first hear it since there is some latency from when I
press play to when the sound starts, but as far as I can tell it is
straight away. Why do you ask such silly questions?

All the best,

Andrew Simper

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Nigel Redmon
Hi Michael,

I know that you already understand this, and comment that this is for internal 
calculations, but for the sake of anyone who might misinterpret your 32-bit vs 
64-bit comment, I’ll point out that this is a situation of error feedback—the 
resulting error is much greater than the sample sizes you’re talking about, and 
can result in differences far above the 24-bit level. A simple example is the 
ubiquitous direct form I biquad, which goes all to hell in lower audio 
frequencies with 24-bit storage (unless you noise shape or increase resolution).
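
A rough way to see this for yourself is to run the same direct form I lowpass twice, once keeping the stored input/output history in doubles and once rounding it to single-precision floats (a 24-bit mantissa) every sample, then compare. This is only a sketch; the 40 Hz cutoff, 50 Hz input, run length, and the use of floats to stand in for 24-bit storage are all arbitrary choices for illustration:

#include <stdio.h>
#include <math.h>

typedef struct { double b0, b1, b2, a1, a2; } biquad_coeffs;

/* standard cookbook-style lowpass, normalized so a0 = 1 */
static biquad_coeffs lowpass(double fc, double q, double fs)
{
    const double pi = 3.14159265358979323846;
    double w = 2.0 * pi * fc / fs;
    double alpha = sin(w) / (2.0 * q);
    double a0 = 1.0 + alpha;
    biquad_coeffs c;
    c.b0 = (1.0 - cos(w)) * 0.5 / a0;
    c.b1 = (1.0 - cos(w)) / a0;
    c.b2 = c.b0;
    c.a1 = -2.0 * cos(w) / a0;
    c.a2 = (1.0 - alpha) / a0;
    return c;
}

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double fs = 48000.0;
    biquad_coeffs c = lowpass(40.0, 0.707, fs);     /* low cutoff: the hard case */
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;          /* full double-precision history */
    float  fx1 = 0, fx2 = 0, fy1 = 0, fy2 = 0;      /* history stored at 24-bit mantissa */
    double maxdiff = 0.0;
    for (int n = 0; n < 10 * 48000; n++) {
        double x  = sin(2.0 * pi * 50.0 * n / fs);
        double yd = c.b0 * x + c.b1 * x1 + c.b2 * x2 - c.a1 * y1 - c.a2 * y2;
        float  yf = (float)(c.b0 * x + c.b1 * fx1 + c.b2 * fx2
                            - c.a1 * fy1 - c.a2 * fy2);
        x2 = x1;   x1 = x;   y2 = y1;   y1 = yd;
        fx2 = fx1; fx1 = (float)x;                  /* the feedback path recirculates */
        fy2 = fy1; fy1 = yf;                        /* the storage rounding error      */
        double d = fabs(yd - (double)yf);
        if (d > maxdiff) maxdiff = d;
    }
    printf("max difference: %g   (a 24-bit step is about %g)\n",
           maxdiff, 1.0 / 16777216.0);
    return 0;
}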

Nigel


 On Feb 6, 2015, at 10:24 AM, Michael Gogins michael.gog...@gmail.com wrote:
 
 Do not believe anything that is not confirmed to a high degree of
  statistical significance (say, 5 standard deviations) by a double-blind
 test using an ABX comparator.
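
(For a rough sense of what five standard deviations means in an ABX run, using the usual normal approximation to the binomial against p = 0.5 guessing - back-of-envelope numbers only, nothing from Michael's actual tests: roughly 75 correct out of 100 trials.)

#include <stdio.h>
#include <math.h>

/* z-score of k correct out of n forced-choice trials vs. p = 0.5 guessing
   (normal approximation to the binomial) */
static double abx_z(int k, int n)
{
    double mean = 0.5 * n;
    double sd   = 0.5 * sqrt((double)n);
    return (k - mean) / sd;
}

int main(void)
{
    const int cases[4][2] = { { 9, 10 }, { 14, 16 }, { 40, 50 }, { 75, 100 } };
    for (int i = 0; i < 4; i++)
        printf("%d/%d correct  ->  z = %.2f\n",
               cases[i][0], cases[i][1], abx_z(cases[i][0], cases[i][1]));
    return 0;
}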
 
 That said, the AES study did use double-blind testing. I did not read
 the article, only the abstract, so cannot say more about the study.
 
 In my own work, I have verified with a double-blind ABX comparator at
 a high degree of statistical significance that I can hear the
 differences in certain selected portions of the same Csound piece
 rendered with 32 bit floating point samples versus 64 bit floating
 point samples. These are sample words used in internal calculations,
 not for output soundfiles. What I heard was differences in the sound
 of the same filter algorithm. These differences were not at all hard
 to hear, but they occurred in only one or two places in the piece.
 
 I have not myself been able to hear differences in audio output
 quality between CD audio and high-resolution audio, but when I get the
 time I may try again, now that I have a better idea what to listen
 for.
 
 Regards,
 Mike
 
 
 
 -
 Michael Gogins
 Irreducible Productions
 http://michaelgogins.tumblr.com
 Michael dot Gogins at gmail dot com
 
 
 On Fri, Feb 6, 2015 at 1:13 PM, Nigel Redmon earle...@earlevel.com wrote:
 Mastering engineers can hear truncation error at the 24th bit but say it is 
 subtle and may require experience or training to pick up.
 
 Quick observations:
 
 1) The output step size of the lsb is full-scale / 2^24. If full-scale is 
  1V, then the step is 0.0000000596 V, or 0.0596 microvolts (millionths 
 of a volt). Hearing capabilities aside, the converter must be able to 
 resolve this, and it must make it through the thermal (and other) noise of 
 their equipment and move a speaker. If you’re not an electrical engineer, it 
 may be difficult to grasp the problem that this poses.
 
 2) I happened on a discussion in an audio forum, where a highly-acclaimed 
 mastering engineer and voice on dither mentioned that he could hear the 
 dither kick in when he pressed a certain button in the GUI of some beta 
 software. The maker of the software had to inform him that he was mistaken 
 on the function of the button, and in fact it didn’t affect the audio 
 whatsoever. (I’ll leave his name out, because it’s immaterial—the guy is a 
 great source of info to people and is clearly excellent at what he does, and 
 everyone who works with audio runs into this at some point.) The mastering 
 engineer graciously accepted his goof.
 
 3) Mastering engineers invariably describe the differences in very 
  subjective terms. While this may be a necessity, it sure makes it difficult 
 to pursue any kind of validation. From a mastering engineer to me, 
 yesterday: 'To me the truncated version sounds colder, more glassy, with 
 less richness in the bass and harmonics, and less front to back depth in 
 the stereo field.’
 
 4) 24-bit audio will almost always have a far greater random noise floor 
  than is necessary to dither, so it will be self-dithered. By “almost”, I 
 mean that very near 100% of the time. Sure, you can create exceptions, such 
 as synthetically generated simple tones, but it’s hard to imagine them 
 happening in the course of normal music making. There is nothing magic about 
 dither noise—it’s just mimicking the sort of noise that your electronics 
 generates thermally. And when mastering engineers say they can hear 
 truncation distortion at 24-bit, they don’t say “on this particular brief 
  moment, this particular recording”—they seem to say it in general. It’s 
 extremely unlikely that non-randomized truncation distortion even exists for 
 most material at 24-bit.
 
 My point is simply that I’m not going to accept that mastering engineers can 
 hear the 24th bit truncation just because they say they can.
 
 
 On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 The following published double blind test contradicts the results of the 
 old Moran/Meyer publication in showing (a) that the differences between CD 
  and higher resolution sources are audible and (b) that failure to dither at 
 the 16th bit is also audible.
 
 http://www.aes.org/e-lib/browse.cfm?elib=17497
 
 The Moran/Meyer tests had numerous technical 

Re: [music-dsp] Dither video and articles

2015-02-06 Thread Didier Dambrin
I SO agree with 4): when it comes to recorded, not synthesized, audio (but 
even synthesized in some cases actually - I've made additive synths and it's 
a big CPU saver to avoid processing inaudible partials), room noise is 
so far above the levels we're debating that it's a bit silly.





-Original Message- 
From: Nigel Redmon

Sent: Friday, February 06, 2015 7:13 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Mastering engineers can hear truncation error at the 24th bit but say it is 
subtle and may require experience or training to pick up.


Quick observations:

1) The output step size of the lsb is full-scale / 2^24. If full-scale is 
1V, then the step is 0.0000000596 V, or 0.0596 microvolts (millionths 
of a volt). Hearing capabilities aside, the converter must be able to 
resolve this, and it must make it through the thermal (and other) noise of 
their equipment and move a speaker. If you’re not an electrical engineer, it 
may be difficult to grasp the problem that this poses.


2) I happened on a discussion in an audio forum, where a highly-acclaimed 
mastering engineer and voice on dither mentioned that he could hear the 
dither kick in when he pressed a certain button in the GUI of some beta 
software. The maker of the software had to inform him that he was mistaken 
on the function of the button, and in fact it didn’t affect the audio 
whatsoever. (I’ll leave his name out, because it’s immaterial—the guy is a 
great source of info to people and is clearly excellent at what he does, and 
everyone who works with audio runs into this at some point.) The mastering 
engineer graciously accepted his goof.


3) Mastering engineers invariably describe the differences in very 
subjective terms. While this may be a necessity, it sure makes it difficult 
to pursue any kind of validation. From a mastering engineer to me, 
yesterday: 'To me the truncated version sounds colder, more glassy, with 
less richness in the bass and harmonics, and less front to back depth in 
the stereo field.’


4) 24-bit audio will almost always have a far greater random noise floor 
than is necessary to dither, so it will be self-dithered. By “almost”, I 
mean that very near 100% of the time. Sure, you can create exceptions, such 
as synthetically generated simple tones, but it’s hard to imagine them 
happening in the course of normal music making. There is nothing magic about 
dither noise—it’s just mimicking the sort of noise that your electronics 
generates thermally. And when mastering engineers say they can hear 
truncation distortion at 24-bit, they don’t say “on this particular brief 
moment, this particular recording”—they seem to say it in general. It’s 
extremely unlikely that non-randomized truncation distortion even exists for 
most material at 24-bit.


My point is simply that I’m not going to accept that mastering engineers can 
hear the 24th bit truncation just because they say they can.



On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net 
wrote:


The following published double blind test contradicts the results of the 
old Moran/Meyer publication in showing (a) that the differences between CD 
and higher resolution sources are audible and (b) that failure to dither at 
the 16th bit is also audible.


http://www.aes.org/e-lib/browse.cfm?elib=17497

The Moran/Meyer tests had numerous technical problems that have long been 
discussed, some are enumerated in the above.


As far as dithering at the 24th bit, I can't disagree more with a 
conclusion that says it's unnecessary in data handling.  Mastering 
engineers can hear truncation error at the 24th bit but say it is subtle 
and may require experience or training to pick up.  What they are hearing 
is not noise or peaks sitting at the 24th bit but rather the distortion 
that goes with truncation at 24b, and it is said to have a characteristic 
coloration effect on sound.  I'm aware of an effort to show this with AB/X 
tests, hopefully it will be published.  The problem with failing to dither 
at 24b is that many such truncation steps would be done routinely in 
mastering, and thus the truncation distortion products continue to build 
up.  Whether you personally hear it is likely to depend both on how 
extensive your data flow pathway is and how good your playback equipment 
is.


Vicki Melchior

On Feb 5, 2015, at 10:01 PM, Ross Bencina wrote:


On 6/02/2015 1:50 PM, Tom Duffy wrote:

The AES report is highly controversial.

Plenty of sources dispute the findings.


Can you name some?

Ross.
--


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Victor Lazzarini
Yes, but note that in the case Michael is reporting, all filters have 
double-precision coeffs and data storage. It is only when passing samples 
between unit generators that the difference lies (either single or
double precision is used). Still, I believe that 
there can be audible differences.

Victor Lazzarini
Dean of Arts, Celtic Studies, and Philosophy
Maynooth University
Ireland

 On 6 Feb 2015, at 18:43, Ethan Duni ethan.d...@gmail.com wrote:
 
 Thanks for the reference Vicki
 
 What they are hearing is not noise or peaks sitting at the 24th
 bit but rather the distortion that goes with truncation at 24b, and
 it is said to have a characteristic coloration effect on sound.  I'm
 aware of an effort to show this with AB/X tests, hopefully it will be
 published.
 
 I'm skeptical, but definitely hope that such a test gets undertaken and
 published. Would be interesting to have some real data either way.
 
 The problem with failing to dither at 24b is that many such truncation
 steps would be done routinely in mastering, and thus the truncation
 distortion products continue to build up.
 
 Hopefully everyone agrees that the questions of what is appropriate for
 intermediate processing and what is appropriate for final distribution are
 quite different, and that substantially higher resolutions (and probably
 including dither) are indicated for intermediate processing. As Michael
  Gogins says:
 
 In my own work, I have verified with a double-blind ABX comparator at
 a high degree of statistical significance that I can hear the
 differences in certain selected portions of the same Csound piece
 rendered with 32 bit floating point samples versus 64 bit floating
 point samples. These are sample words used in internal calculations,
 not for output soundfiles. What I heard was differences in the sound
 of the same filter algorithm. These differences were not at all hard
 to hear, but they occurred in only one or two places in the piece.
 
 Indeed, it is not particularly difficult to cook up filter
 designs/algorithms that will break any given finite internal resolution. At
 some point those filter designs become pathological, but there are plenty
 of reasonable cases where 32 bit float internal precision is insufficient.
 Note that a 32-bit float only has 24 bits of mantissa, which is 8 bits less
 than is typically used in embedded fixed-point implementations (for
 sensitive components like filter guts, I mean). So even very standard stuff
 that has been around for decades in the fixed-point world will break if
 implemented naively in 32 bit float.
 
 E
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Nigel Redmon
Isn't it generally agreed that truncation noise is correlated with the signal?

“Is correlated”? No, but it can be.

First, if there is enough noise in the signal before truncation, then it’s 
dithered by default—no correlation.

Second, if the signal is sufficiently complex, it seems, then there is no 
apparent correlation. See my video (https://www.youtube.com/watch?v=KCyA6LlB3As) 
where I show a 32-bit float mix, 
truncated to 8-bit, nulled, and boosted +24 dB. There is no apparent 
correlation till the very end, even though the noise floor is not sufficient to 
self-dither.


 On Feb 6, 2015, at 10:42 AM, Tom Duffy tdu...@tascam.com wrote:
 
 Isn't it generally agreed that truncation noise is correlated with the
 signal?
 The human ear is excellent at picking up on correlation, so a system
 that introduces multiple correlated (noise) signals may reach a point
 where it is perceptual, even if the starting point is a 24 bit signal.
 
 I would believe this to be an explanation for why ProTools early hardware 
 mixers were regarded as having problems - they used 24-bit 
 fixed-point DSPs, which, coupled with fixed-bit headroom management, may 
 have introduced truncation noise at a level higher than the 24-bit 
 noise floor.
 
 Also, the dither noise source itself needs to be investigated.
 Studies have shown that a fixed repeated buffer of pre-generated white
 noise, even up to several hundred ms long, is immediately obvious (and
 non-pleasing) to the listener - if that kind of source were used as a dither
 signal, the self-correlation becomes even more problematic.
 Calculating a new PRNG value for each sample is expensive, which
 is why a pre-generated buffer is attractive to the implementor.
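
(Worth noting that per-sample generation doesn't have to be expensive; here is a sketch of one cheap option, a small xorshift generator - the shift constants are the standard Marsaglia ones, everything else is just an illustration:)

#include <stdio.h>
#include <stdint.h>

/* Marsaglia xorshift32: three shifts and xors per sample, no noise table */
static uint32_t rng_state = 0x12345678u;

static float white_sample(void)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 17;
    rng_state ^= rng_state << 5;
    /* map to [-0.5, 0.5), i.e. +/- half an LSB of dither before scaling */
    return (float)(rng_state * (1.0 / 4294967296.0) - 0.5);
}

int main(void)
{
    for (int n = 0; n < 8; n++)
        printf("% f\n", white_sample());
    return 0;
}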
 
 ---
 Tom.
 
 On 2/6/2015 10:32 AM, Victor Lazzarini wrote:
 Quite. This conversation is veering down the vintage wine tasting alley.
 
 Victor Lazzarini
 Dean of Arts, Celtic Studies, and Philosophy
 Maynooth University
 Ireland
 
 On 6 Feb 2015, at 18:13, Nigel Redmon earle...@earlevel.com wrote:
 
 Mastering engineers can hear truncation error at the 24th bit but say it is 
 subtle and may require experience or training to pick up.
 
 Quick observations:
 
 1) The output step size of the lsb is full-scale / 2^24. If full-scale is 1V, 
 then the step is 0.0000000596 V, or 0.0596 microvolts (millionths of a 
 volt). Hearing capabilities aside, the converter must be able to resolve 
 this, and it must make it through the thermal (and other) noise of their 
 equipment and move a speaker. If you’re not an electrical engineer, it may be 
 difficult to grasp the problem that this poses.
 
 2) I happened on a discussion in an audio forum, where a highly-acclaimed 
 mastering engineer and voice on dither mentioned that he could hear the 
 dither kick in when he pressed a certain button in the GUI of some beta 
 software. The maker of the software had to inform him that he was mistaken on 
 the function of the button, and in fact it didn’t affect the audio 
 whatsoever. (I’ll leave his name out, because it’s immaterial—the guy is a 
 great source of info to people and is clearly excellent at what he does, and 
 everyone who works with audio runs into this at some point.) The mastering 
 engineer graciously accepted his goof.
 
 3) Mastering engineers invariably describe the differences in very subjective 
 terms. While this may be a necessity, it sure makes it difficult to pursue any 
 kind of validation. From a mastering engineer to me, yesterday: 'To me the 
 truncated version sounds colder, more glassy, with less richness in the bass 
 and harmonics, and less front to back depth in the stereo field.’
 
 4) 24-bit audio will almost always have a far greater random noise floor than 
 is necessary to dither, so it will be self-dithered. By “almost”, I mean 
 that very near 100% of the time. Sure, you can create exceptions, such as 
 synthetically generated simple tones, but it’s hard to imagine them happening 
 in the course of normal music making. There is nothing magic about dither 
 noise—it’s just mimicking the sort of noise that your electronics generates 
 thermally. And when mastering engineers say they can hear truncation 
 distortion at 24-bit, they don’t say “on this particular brief moment, this 
 particular recording”—they seem to say it in general. It’s extremely 
 unlikely that non-randomized truncation distortion even exists for most 
 material at 24-bit.
 
 My point is simply that I’m not going to accept that mastering engineers can 
 hear the 24th bit truncation just because they say they can.
 
 
 On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 The following published double blind test contradicts the results of the old 
 Moran/Meyer publication in showing (a) that the differences between CD and 
 higher resolution sources are audible and (b) that failure to dither at the 
 16th bit is also audible.
 
 

Re: [music-dsp] Dither video and articles

2015-02-06 Thread Michael Gogins
This was done before John ffitch (I believe it was he) changed the
filter samples in even the single-precision version of Csound to use
double-precision. And I think this change may have been made as a
result of my report.

Regards,
Mike

-
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com


On Fri, Feb 6, 2015 at 2:04 PM, Victor Lazzarini
victor.lazzar...@nuim.ie wrote:
 Yes, but note that in the case Michael is reporting, all filters have 
 double-precision coeffs and data storage. It is only when passing samples 
 between unit generators that the difference lies (either single or
 double precision is used). Still, I believe that
 there can be audible differences.

 Victor Lazzarini
 Dean of Arts, Celtic Studies, and Philosophy
 Maynooth University
 Ireland

 On 6 Feb 2015, at 18:43, Ethan Duni ethan.d...@gmail.com wrote:

 Thanks for the reference Vicki

 What they are hearing is not noise or peaks sitting at the 24th
 bit but rather the distortion that goes with truncation at 24b, and
 it is said to have a characteristic coloration effect on sound.  I'm
 aware of an effort to show this with AB/X tests, hopefully it will be
 published.

 I'm skeptical, but definitely hope that such a test gets undertaken and
 published. Would be interesting to have some real data either way.

 The problem with failing to dither at 24b is that many such truncation
 steps would be done routinely in mastering, and thus the truncation
 distortion products continue to build up.

 Hopefully everyone agrees that the questions of what is appropriate for
 intermediate processing and what is appropriate for final distribution are
 quite different, and that substantially higher resolutions (and probably
 including dither) are indicated for intermediate processing. As Michael
 Gogins says:

 In my own work, I have verified with a double-blind ABX comparator at
 a high degree of statistical significance that I can hear the
 differences in certain selected portions of the same Csound piece
 rendered with 32 bit floating point samples versus 64 bit floating
 point samples. These are sample words used in internal calculations,
 not for output soundfiles. What I heard was differences in the sound
 of the same filter algorithm. These differences were not at all hard
 to hear, but they occurred in only one or two places in the piece.

 Indeed, it is not particularly difficult to cook up filter
 designs/algorithms that will break any given finite internal resolution. At
 some point those filter designs become pathological, but there are plenty
 of reasonable cases where 32 bit float internal precision is insufficient.
 Note that a 32-bit float only has 24 bits of mantissa, which is 8 bits less
 than is typically used in embedded fixed-point implementations (for
 sensitive components like filter guts, I mean). So even very standard stuff
 that has been around for decades in the fixed-point world will break if
 implemented naively in 32 bit float.

 E
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Didier Dambrin

So you hear all 6 too?



-Original Message- 
From: Richard Dobson

Sent: Friday, February 06, 2015 4:10 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles

On 06/02/2015 14:21, Andrew Simper wrote:

Sorry, you said until, which is even more confusing. There are
multiple points when I hear the noise until since it sounds like the
noise is modulated in amplitude by a sine like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.




It's a series of six smoothly enveloped noise bursts (slowish rise,
slower decay), the first peaking at max amplitude (so you have to be
ready to hear it as very loud!), then successively softer repeats until
at some point it is (presumably?) too quiet to be heard. Very visible in
Audacity using the Waveform (dB) display mode. So the word "until" is
entirely appropriate. I do recommend visual inspection of waveforms in
such situations to minimise guessing (or at least, to confirm the
guesses or otherwise). In any case, I would expect people to hear all
six, given a suitably quiet listening environment and an appropriately
generous overall playback level etc.

Richard Dobson


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Nigel Redmon
 to choose, between the two 16-bit ones I would prefer the one
 with dither but put through a make mono plugin, as this sounded the
 closest to the float version.
 
 All the best,
 
 Andy
 
 -- cytomic -- sound music software --
 
 
 On 5 February 2015 at 16:46, Nigel Redmon earle...@earlevel.com wrote:
 Hmm, I thought that would let you save the page source (wave file)…Safari 
 creates the file of the appropriate name and type, but it stays at 0 
  bytes…OK, I put up an index page—do the usual right-click to save the 
  file to disk if you need to access the files directly:
 
 http://earlevel.com/temp/music-dsp/
 
 
 On Feb 5, 2015, at 12:13 AM, Nigel Redmon earle...@earlevel.com wrote:
 
 OK, here’s my new piece, I call it Diva bass—to satisfy your request for 
  me to make something with truncation distortion apparent. (If it bothers 
 you that my piece is one note, imagine that this is just the last note of 
 a longer piece.)
 
 I spent maybe 30 seconds getting the sound—opened Diva (default 
  “minimoog” modules), turned the mixer knobs down except for VCO 1, set 
 range to 32’, waveform to triangle, max release on the VCA envelope.
 
 In 32-bit float glory:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav
 
 Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), 
 saved to 16-bit wave file:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav
 
 You’ll have to turn your sound system up, not insanely loud, but loud. (I 
 said that this would be the case before.) I can hear it, and I know 
 engineers who monitor much louder, routinely, than I’m monitoring to hear 
 this. My Equator Q10s are not terribly high powered, and I’m not adding 
 any other gain ahead of them in order to boost the quiet part.
 
 If you want to hear the residual easily (32-bit version inverted, summed 
 with 16-bit truncated, the result with +40 dB gain via Trim plug-in):
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav
 
 I don’t expect the 16-bit truncated version to bother you, but it does 
 bother some audio engineers. Here's 16-bit dithered version, for 
 completeness, so that you can decide if the added noise floor bothers you:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav
 
 
 
 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:
 
  Yes, I disagree with the “always”. “Not always needed” means it's 
  sometimes needed; my point is that it's never needed, until proven 
  otherwise. Your video proves that sometimes it's not needed, but not 
  that sometimes it's needed.
 
 
 
  -Original Message- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 6:51 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.
 
 Sorry, Didier, I’m confused now. I took from your previous message that 
  you feel 16-bit doesn’t need to be dithered (“dithering to 16bit will 
  never make any audible difference”). Here you say that you disagree with 
  “dithering to 16bit isn't always needed”. In fact, you are saying that 
 it’s never needed—you disagree because “isn’t always needed” implies 
 that it is sometimes needed—correct?
 
 
 On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:
 
  Then, it’s a no-win situation, because I could EASILY manufacture a bit 
 of music that had significant truncation distortion at 16-bit.
 
 Please do, I would really like to hear it.
 
 I have never heard truncation noise at 16bit, other than by playing 
 with levels in a such a way that the peaking parts of the rest of the 
 sound would destroy your ears or be very unpleasant at best. (you say 
 12dB, it's already a lot)
 
 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.
 
 
 
  -Original Message- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 10:59 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Hi Didier—You seem to find contradictions in my choices because you are 
 making the wrong assumptions about what I’m showing and saying.
 
 First, I’m not steadfast that 16-bit dither is always needed—and in 
 fact the point of the video was that I was showing you (the viewers) 
 how you can judge it objectively for yourself (and decide whether you 
  want to dither). This is a much better way than the usual approach I hear 
  from people, who often listen to the dithered and non-dithered results, 
  and talk about the soundstage collapsing without dither, “brittle” 
  versus “transparent”, etc.
 
 But if I’m to give you a rule of thumb, a practical bit of advice that 
 you can apply without concern that you might be doing something wrong 
 in a given circumstance, that advice is “always dither 16-bit 
 reductions

Re: [music-dsp] Dither video and articles

2015-02-06 Thread Vicki Melchior
The self dither argument is not as obvious as it may appear.  To be effective 
at dithering, the noise has to be at the right level of course but also should 
be white and temporally constant.  The noise floors present in music data 
normally come from the self noise of the analog components used in recording 
and are composites of a number of noise PDFs.  For example, a graph in a second 
paper by the same group (cited below if wanted) shows spectra of the measured 
noise floors from around a dozen recordings.  The noise spectra are composites 
with the lower frequencies clearly 1/f noise and the upper frequencies summing 
closer to flat.  Whether composite noise of this sort is both temporally 
continuous and white enough to be relied on for dither needs to be shown; it's 
been shown under at least some circumstances (not in these papers) that a 
truncation distortion spectrum can be produced and measured when signals are 
truncated to 24b.  

I'm not saying the self dither argument is necessarily wrong; but it needs 
verification as to when and where it is reliably valid.   If 24b truncation 
turns out to be demonstrably audible in an AB/X, then the self dither idea 
clearly needs to be rethought.

Vicki Melchior

(graph mentioned is fig 8 in this paper:   
http://www.aes.org/e-lib/browse.cfm?elib=17501)

On Feb 6, 2015, at 2:20 PM, Nigel Redmon wrote:

 First, if there is enough noise in the signal before truncation, then it’s 
 dithered by default—no correlation.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Dither video and articles

From: Vicki Melchior vmelch...@earthlink.net

Date: Fri, February 6, 2015 2:23 pm

To: A discussion list for music-related DSP music-dsp@music.columbia.edu

--



 The self dither argument is not as obvious as it may appear. To be effective 
 at dithering, the noise has to be at the right level of course but also 
 should be white and temporally constant.
 
why does it have to be white?  or why should it?




--
 
r b-j   r...@audioimagination.com
 
Imagination is more important than knowledge.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-06 Thread Andrew Simper
Sorry, you said until, which is even more confusing. There are
multiple points when I hear the noise until since it sounds like the
noise is modulated in amplitude by a sine like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.

All the best,

Andrew Simper



On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:
 On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:
 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?

 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing

 I hear noise immediately in that recording, it's hard to tell exactly
 the time I can first hear it since there is some latency from when I
 press play to when the sound starts, but as far as I can tell it is
 straight away. Why do you ask such silly questions?

 All the best,

 Andrew Simper
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Andrew Simper
On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:
 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?

 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing

I hear noise immediately in that recording, it's hard to tell exactly
the time I can first hear it since there is some latency from when I
press play to when the sound starts, but as far as I can tell it is
straight away. Why do you ask such silly questions?

All the best,

Andrew Simper
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Vicki Melchior
The following published double blind test contradicts the results of the old 
Moran/Meyer publication in showing (a) that the differences between CD and 
higher resolution sources are audible and (b) that failure to dither at the 16th 
bit is also audible.  

http://www.aes.org/e-lib/browse.cfm?elib=17497

The Moran/Meyer tests had numerous technical problems that have long been 
discussed, some are enumerated in the above.  

As far as dithering at the 24th bit, I can't disagree more with a conclusion 
that says it's unnecessary in data handling.  Mastering engineers can hear 
truncation error at the 24th bit but say it is subtle and may require 
experience or training to pick up.  What they are hearing is not noise or peaks 
sitting at the 24th bit but rather the distortion that goes with truncation at 
24b, and it is said to have a characteristic coloration effect on sound.  I'm 
aware of an effort to show this with AB/X tests, hopefully it will be 
published.  The problem with failing to dither at 24b is that many such 
truncation steps would be done routinely in mastering, and thus the truncation 
distortion products continue to build up.  Whether you personally hear it is 
likely to depend both on how extensive your data flow pathway is and how good 
your playback equipment is.  

Vicki Melchior

On Feb 5, 2015, at 10:01 PM, Ross Bencina wrote:

 On 6/02/2015 1:50 PM, Tom Duffy wrote:
 The AES report is highly controversial.
 
 Plenty of sources dispute the findings.
 
 Can you name some?
 
 Ross.
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Richard Dobson

On 06/02/2015 14:21, Andrew Simper wrote:

Sorry, you said until, which is even more confusing. There are
multiple points when I hear the noise until since it sounds like the
noise is modulated in amplitude by a sine like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.




It's a series of six smoothly enveloped noise bursts (slowish rise, 
slower decay), the first peaking at max amplitude (so you have to be 
ready to hear it as very loud!), then successively softer repeats until 
at some point it is (presumably?) too quiet to be heard. Very visible in 
Audacity using the Waveform (dB) display mode. So the word "until" is 
entirely appropriate. I do recommend visual inspection of waveforms in 
such situations to minimise guessing (or at least, to confirm the 
guesses or otherwise). In any case, I would expect people to hear all 
six, given a suitably quiet listening environment and an appropriately 
generous overall playback level etc.


Richard Dobson


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Nigel Redmon
Mastering engineers can hear truncation error at the 24th bit but say it is 
subtle and may require experience or training to pick up.

Quick observations:

1) The output step size of the lsb is full-scale / 2^24. If full-scale is 1V, 
then the step is 0.0000000596 V, or 0.0596 microvolts (millionths of a 
volt); there is a quick check of this arithmetic right after this list. 
Hearing capabilities aside, the converter must be able to resolve this, and 
it must make it through the thermal (and other) noise of their equipment and 
move a speaker. If you’re not an electrical engineer, it may be difficult 
to grasp the problem that this poses.

2) I happened on a discussion in an audio forum, where a highly-acclaimed 
mastering engineer and voice on dither mentioned that he could hear the dither 
kick in when he pressed a certain button in the GUI of some beta software. The 
maker of the software had to inform him that he was mistaken on the function of 
the button, and in fact it didn’t affect the audio whatsoever. (I’ll leave his 
name out, because it’s immaterial—the guy is a great source of info to people 
and is clearly excellent at what he does, and everyone who works with audio 
runs into this at some point.) The mastering engineer graciously accepted his 
goof.

3) Mastering engineers invariably describe the differences in very subjective 
terms. While this may be a necessity, it sure makes it difficult to pursue any 
kind of validation. From a mastering engineer to me, yesterday: 'To me the 
truncated version sounds colder, more glassy, with less richness in the bass 
and harmonics, and less front to back depth in the stereo field.’

4) 24-bit audio will almost always have a far greater random noise floor than 
is necessary to dither, so it will be self-dithered. By “almost”, I mean 
very near 100% of the time. Sure, you can create exceptions, such as 
synthetically generated simple tones, but it’s hard to imagine them happening 
in the course of normal music making. There is nothing magic about dither 
noise—it’s just mimicking the sort of noise that your electronics generates 
thermally. And when mastering engineers say they can hear truncation distortion 
at 24-bit, they don’t say “on this particular brief moment, on this particular 
recording”—they seem to say it in general. It’s extremely unlikely that 
non-randomized truncation distortion even exists for most material at 24-bit.
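
(A rough sketch of that "self-dither" argument in Python/NumPy, with illustrative levels of my own choosing: truncate a quiet tone to the 24-bit grid once clean and once with a noise floor of a couple of LSBs. The clean tone's truncation error is tonal; with the noise floor present the error turns into featureless noise.)

import numpy as np

sr = 48000
t = np.arange(sr) / sr
lsb = 1.0 / 2 ** 23                                       # 24-bit LSB with +/-1.0 full scale
sine = 10 ** (-100 / 20) * np.sin(2 * np.pi * 100 * t)    # quiet -100 dBFS tone (assumed level)
noise = 2 * lsb * np.random.default_rng(1).standard_normal(sr)  # ~2 LSB RMS noise floor

def truncate24(x):
    return np.floor(x / lsb) * lsb                        # truncate to the 24-bit grid

def spectral_peak_to_mean(err):
    err = err - err.mean()                                # ignore truncation's DC offset
    mag = np.abs(np.fft.rfft(err * np.hanning(len(err))))
    return mag.max() / mag.mean()                         # large => tonal error, small => noise-like

for name, x in (("clean sine", sine), ("sine + noise floor", sine + noise)):
    print(name, round(spectral_peak_to_mean(truncate24(x) - x), 1))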

My point is simply that I’m not going to accept that mastering engineers can 
hear the 24th bit truncation just because they say they can.


 On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
 The following published double blind test contradicts the results of the old 
 Moran/Meyer publication in showing (a) that the differences between CD and 
 higher resolution sources is audible and (b) that failure to dither at the 
 16th bit is also audible.  
 
 http://www.aes.org/e-lib/browse.cfm?elib=17497
 
 The Moran/Meyer tests had numerous technical problems that have long been 
 discussed, some are enumerated in the above.  
 
 As far as dithering at the 24th bit, I can't disagree more with a conclusion 
 that says it's unnecessary in data handling.  Mastering engineers can hear 
 truncation error at the 24th bit but say it is subtle and may require 
 experience or training to pick up.  What they are hearing is not noise or 
 peaks sitting at the 24th bit but rather the distortion that goes with 
 truncation at 24b, and it is said to have a characteristic coloration effect 
 on sound.  I'm aware of an effort to show this with AB/X tests, hopefully it 
 will be published.  The problem with failing to dither at 24b is that many 
 such truncation steps would be done routinely in mastering, and thus the 
 truncation distortion products continue to build up.  Whether you personally 
 hear it is likely to depend both on how extensive your data flow pathway is 
 and how good your playback equipment is.  
 
 Vicki Melchior
 
 On Feb 5, 2015, at 10:01 PM, Ross Bencina wrote:
 
 On 6/02/2015 1:50 PM, Tom Duffy wrote:
 The AES report is highly controversial.
 
 Plenty of sources dispute the findings.
 
 Can you name some?
 
 Ross.
 --


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Tom Duffy

Isn't it generally agreed that truncation noise is correlated with the
signal? The human ear is excellent at picking up on correlation, so a system
that introduces multiple correlated (noise) signals may reach a point
where the result is perceptible, even if the starting point is a 24-bit signal.

I would believe this to be an explanation for why early ProTools
hardware mixers were regarded as having problems: they used 24-bit
fixed-point DSPs, and coupled with fixed-bit headroom management this
may have introduced truncation noise at a level higher than the 24-bit
noise floor.

Also, the dither noise source itself needs to be investigated.
Studies have shown that a fixed, repeating buffer of pre-generated white
noise is immediately obvious (and unpleasant) to the listener at buffer
lengths up to several hundred ms - if that kind of source is used as a
dither signal, the self-correlation becomes even more problematic.
Calculating a new PRNG value for each sample is expensive, which
is why a pre-generated buffer is attractive to the implementor.
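
(As a rough sketch of how cheap a per-sample generator can be made - an illustration, not any particular product's dither, and the LCG constants are simply assumed: differencing consecutive outputs of a small linear congruential generator gives triangular-PDF, gently high-passed dither from one new random number per sample, so the per-sample cost is roughly a multiply, an add and a subtract.)

import numpy as np

def dither_to_16bit(x):
    """Quantise a float signal in [-1, 1) to 16-bit with per-sample TPDF dither."""
    out = np.empty(len(x), dtype=np.int16)
    state = 22222                                           # arbitrary LCG seed
    prev = 0.0
    for i, s in enumerate(x):
        state = (state * 196314165 + 907633515) & 0xFFFFFFFF  # 32-bit LCG step (assumed constants)
        cur = state / 2 ** 32                               # uniform in [0, 1)
        tpdf = cur - prev                                   # triangular PDF in (-1, 1) LSB, high-passed
        prev = cur
        out[i] = int(np.clip(np.rint(s * 32767.0 + tpdf), -32768, 32767))
    return out

# e.g.: pcm = dither_to_16bit(my_quiet_float_signal)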

---
Tom.

On 2/6/2015 10:32 AM, Victor Lazzarini wrote:
Quite. This conversation is veering down the vintage wine tasting alley.

Victor Lazzarini
Dean of Arts, Celtic Studies, and Philosophy
Maynooth University
Ireland

On 6 Feb 2015, at 18:13, Nigel Redmon earle...@earlevel.com wrote:

Mastering engineers can hear truncation error at the 24th bit but say it 
is subtle and may require experience or training to pick up.


Quick observations:

1) The output step size of the lsb is full-scale / 2^24. If full-scale 
is 1V, then step is 0.000596046447753906V, or 0.0596 microvolt 
(millionths of a volt). Hearing capabilities aside, the converter must 
be able to resolve this, and it must make it through the thermal (and 
other) noise of their equipment and move a speaker. If you’re not an 
electrical engineer, it may be difficult to grasp the problem that this 
poses.


2) I happened on a discussion in an audio forum, where a 
highly-acclaimed mastering engineer and voice on dither mentioned that 
he could hear the dither kick in when he pressed a certain button in the 
GUI of some beta software. The maker of the software had to inform him 
that he was mistaken on the function of the button, and in fact it 
didn’t affect the audio whatsoever. (I’ll leave his name out, because 
it’s immaterial—the guy is a great source of info to people and is 
clearly excellent at what he does, and everyone who works with audio 
runs into this at some point.) The mastering engineer graciously 
accepted his goof.


3) Mastering engineers invariably describe the differences in very 
subjective term. While this may be a necessity, it sure makes it 
difficult to pursue any kind of validation. From a mastering engineer to 
me, yesterday: 'To me the truncated version sounds colder, more glassy, 
with less richness in the bass and harmonics, and less front to back 
depth in the stereo field.’


4) 24-bit audio will almost always have a far greater random noise floor 
than is necessary to dither, so they will be self-dithered. By “almost”, 
I mean that very near 100% of the time. Sure, you can create exceptions, 
such as synthetically generated simple tones, but it’s hard to imagine 
them happening in the course of normal music making. There is nothing 
magic about dither noise—it’s just mimicking the sort of noise that your 
electronics generates thermally. And when mastering engineers say they 
can hear truncation distortion at 24-bit, they don’t say “on this 
particular brief moment, this particular recording”—they seems to say it 
in general. It’s extremely unlikely that non-randomized truncation 
distortion even exists for most material at 24-bit.


My point is simply that I’m not going to accept that mastering engineers 
can hear the 24th bit truncation just because they say they can.



On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net wrote:

The following published double blind test contradicts the results of the 
old Moran/Meyer publication in showing (a) that the differences between 
CD and higher resolution sources is audible and (b) that failure to 
dither at the 16th bit is also audible.


http://www.aes.org/e-lib/browse.cfm?elib=17497

The Moran/Meyer tests had numerous technical problems that have long 
been discussed, some are enumerated in the above.


As far as dithering at the 24th bit, I can't disagree more with a 
conclusion that says it's unnecessary in data handling.  Mastering 
engineers can hear truncation error at the 24th bit but say it is subtle 
and may require experience or training to pick up.  What they are 
hearing is not noise or peaks sitting at the 24th bit but rather the 
distortion that goes with truncation at 24b, and it is said to have a 
characteristic coloration effect on sound.  I'm aware of an effort to 
show this with AB/X tests, hopefully it will be published.  The problem 
with failing to dither at 24b is that many such truncation steps would 
be done 

Re: [music-dsp] Dither video and articles

2015-02-06 Thread Ethan Duni
Thanks for the reference Vicki

What they are hearing is not noise or peaks sitting at the 24th
bit but rather the distortion that goes with truncation at 24b, and
it is said to have a characteristic coloration effect on sound.  I'm
aware of an effort to show this with AB/X tests, hopefully it will be
published.

I'm skeptical, but definitely hope that such a test gets undertaken and
published. Would be interesting to have some real data either way.

The problem with failing to dither at 24b is that many such truncation
steps would be done routinely in mastering, and thus the truncation
distortion products continue to build up.

Hopefully everyone agrees that the questions of what is appropriate for
intermediate processing and what is appropriate for final distribution are
quite different, and that substantially higher resolutions (and probably
including dither) are indicated for intermediate processing. As Michael
Goggins says:

In my own work, I have verified with a double-blind ABX comparator at
a high degree of statistical significance that I can hear the
differences in certain selected portions of the same Csound piece
rendered with 32 bit floating point samples versus 64 bit floating
point samples. These are sample words used in internal calculations,
not for output soundfiles. What I heard was differences in the sound
of the same filter algorithm. These differences were not at all hard
to hear, but they occurred in only one or two places in the piece.

Indeed, it is not particularly difficult to cook up filter
designs/algorithms that will break any given finite internal resolution. At
some point those filter designs become pathological, but there are plenty
of reasonable cases where 32 bit float internal precision is insufficient.
Note that a 32-bit float only has 24 bits of mantissa, which is 8 bits less
than is typically used in embedded fixed-point implementations (for
sensitive components like filter guts, I mean). So even very standard stuff
that has been around for decades in the fixed-point world will break if
implemented naively in 32 bit float.
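
(A quick sketch of that failure mode, with an assumed test case rather than anything from the thread: the same direct form I lowpass biquad - RBJ cookbook coefficients, 10 Hz cutoff at 48 kHz - run once with float32 coefficients and state and once in float64, then the outputs differenced. The exact error level depends on the filter and the material; the point is only that single precision in the filter guts is measurably noisier.)

import numpy as np

fs, f0, q = 48000.0, 10.0, 0.707
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)
b = np.array([(1 - np.cos(w0)) / 2, 1 - np.cos(w0), (1 - np.cos(w0)) / 2])
a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
b, a = b / a[0], a / a[0]                      # normalised lowpass coefficients

def run(x, dtype):
    bb, aa, x = b.astype(dtype), a.astype(dtype), x.astype(dtype)
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = dtype(0)
    for n in range(len(x)):                    # direct form I, state kept in 'dtype'
        y[n] = bb[0] * x[n] + bb[1] * x1 + bb[2] * x2 - aa[1] * y1 - aa[2] * y2
        x2, x1 = x1, x[n]
        y2, y1 = y1, y[n]
    return y

x = np.random.default_rng(0).uniform(-1, 1, 48000)          # one second of noise
err = run(x, np.float32).astype(np.float64) - run(x, np.float64)
print(20 * np.log10(np.sqrt(np.mean(err ** 2))), "dB RMS error (re full scale)")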

E


Re: [music-dsp] Dither video and articles

2015-02-05 Thread Nigel Redmon
OK, here’s my new piece, I call it Diva bass—to satisfy your request for me to 
make something with truncation distortion apparent. (If it bothers you that my 
piece is one note, imagine that this is just the last note of a longer piece.)

I spent maybe 30 seconds getting the sound—opened Diva (default “minimoog” 
modules), turned the mixer knobs down except for VCO 1, set the range to 32’, 
the waveform to triangle, and max release on the VCA envelope.

In 32-bit float glory:

http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), saved to 
16-bit wave file:

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav
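
(For reference, that reduction amounts to something like the following sketch - not the actual plug-in's code:)

import numpy as np

def truncate_to_16bit(x):
    # x: float samples in [-1.0, 1.0). Scale to the 16-bit grid and simply
    # drop the low bits - no rounding, no dither - so the error is a
    # deterministic function of the signal ("truncation distortion").
    return np.floor(x * 32768.0).clip(-32768, 32767).astype(np.int16)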

You’ll have to turn your sound system up, not insanely loud, but loud. (I said 
that this would be the case before.) I can hear it, and I know engineers who 
monitor much louder, routinely, than I’m monitoring to hear this. My Equator 
Q10s are not terribly high powered, and I’m not adding any other gain ahead of 
them in order to boost the quiet part.

If you want to hear the residual easily (32-bit version inverted, summed with 
16-bit truncated, the result with +40 dB gain via Trim plug-in):

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav
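
(If you'd rather build the residual yourself, a sketch using the files linked above; 'soundfile' is an assumed dependency, the output name is arbitrary, and the inversion-and-sum is just done as a subtraction:)

import numpy as np
import soundfile as sf

ref, sr = sf.read("Diva bass 32-bit float.wav", dtype="float64")
trunc, _ = sf.read("Diva bass 16-bit truncated.wav", dtype="float64")
n = min(len(ref), len(trunc))
residual = (trunc[:n] - ref[:n]) * 10 ** (40 / 20)       # difference, +40 dB gain
sf.write("Diva bass residual +40dB.wav", np.clip(residual, -1, 1), sr)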

I don’t expect the 16-bit truncated version to bother you, but it does bother 
some audio engineers. Here's a 16-bit dithered version, for completeness, so that 
you can decide if the added noise floor bothers you:

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav



 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:
 
 Yes, I disagree with the always. Not always needed means it's sometimes 
 needed, my point is that it's never needed, until proven otherwise. Your 
 video proves that sometimes it's not needed, but not that sometimes it's 
 needed.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 6:51 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 I totally understood the point of your video, that dithering to 16bit isn't 
 always needed - but that's what I disagree with.
 
 Sorry, Didier, I’m confused now. I took from your previous message that you 
 feel 16-bit doesn’t need to be dithered (dithering to 16bit will never make 
 any audible difference”). Here you say that you disagree with dithering to 
 16bit isn't always needed”. In fact, you are saying that it’s never 
 needed—you disagree because “isn’t always needed” implies that it is 
 sometimes needed—correct?
 
 
 On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:
 
 Then, it’s no-win situation, because I could EASILY manufacture a bit of 
 music that had significant truncation distortion at 16-bit.
 
 Please do, I would really like to hear it.
 
 I have never heard truncation noise at 16bit, other than by playing with 
 levels in a such a way that the peaking parts of the rest of the sound would 
 destroy your ears or be very unpleasant at best. (you say 12dB, it's already 
 a lot)
 
 I totally understood the point of your video, that dithering to 16bit isn't 
 always needed - but that's what I disagree with.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 10:59 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Hi Didier—You seem to find contradictions in my choices because you are 
 making the wrong assumptions about what I’m showing and saying.
 
 First, I’m not steadfast that 16-bit dither is always needed—and in fact the 
 point of the video was that I was showing you (the viewers) how you can 
 judge it objectively for yourself (and decide whether you want to dither). 
 This is a much better way than the usual that I hear from people, who often 
 listen to the dithered and non-dithered results, and talk about the 
 soundstage collapsing without dither, “brittle” versus “transparent”, etc.
 
 But if I’m to give you a rule of thumb, a practical bit of advice that you 
 can apply without concern that you might be doing something wrong in a given 
 circumstance, that advice is “always dither 16-bit reductions”. First, I 
 suspect that it’s below the existing noise floor of most music (even so, 
 things like slow fades of the master fader might override that, for that 
 point in time). Still, it’s not hard to manufacture something musical that’s 
 subject to bad truncation distortion—a naked, low-frequency, 
 low-harmonic-content sound (a synthetic bass or floor tom perhaps). Anyway, 
 in the worst case, you’ve added white noise that you are unlikely to hear—and 
 if you do, so what? If broadband noise below -90 dB were a deal-breaker in 
 recorded music, there wouldn’t be any recorded music. Yeah, truncation 
 distortion at 16 bits is an edge case, but the cost to remove it is almost 
 nothing.
 
 You say that we can’t perceive

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Nigel Redmon
 
 other gain ahead of them in order to boost the quiet part.
 
 If you want to hear the residual easily (32-bit version inverted, summed 
 with 16-bit truncated, the result with +40 dB gain via Trim plug-in):
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav
 
 I don’t expect the 16-bit truncated version to bother you, but it does 
 bother some audio engineers. Here's 16-bit dithered version, for 
 completeness, so that you can decide if the added noise floor bothers you:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav
 
 
 
 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:
 
 Yes, I disagree with the always. Not always needed means it's 
 sometimes needed, my point is that it's never needed, until proven 
 otherwise. Your video proves that sometimes it's not needed, but not that 
 sometimes it's needed.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 6:51 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.
 
 Sorry, Didier, I’m confused now. I took from your previous message that 
 you feel 16-bit doesn’t need to be dithered (dithering to 16bit will 
 never make any audible difference”). Here you say that you disagree with 
 dithering to 16bit isn't always needed”. In fact, you are saying that 
 it’s never needed—you disagree because “isn’t always needed” implies that 
 it is sometimes needed—correct?
 
 
 On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:
 
 Then, it’s no-win situation, because I could EASILY manufacture a bit 
 of music that had significant truncation distortion at 16-bit.
 
 Please do, I would really like to hear it.
 
 I have never heard truncation noise at 16bit, other than by playing with 
 levels in a such a way that the peaking parts of the rest of the sound 
 would destroy your ears or be very unpleasant at best. (you say 12dB, 
 it's already a lot)
 
 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 10:59 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Hi Didier—You seem to find contradictions in my choices because you are 
 making the wrong assumptions about what I’m showing and saying.
 
 First, I’m not steadfast that 16-bit dither is always needed—and in fact 
 the point of the video was that I was showing you (the viewers) how you 
 can judge it objectively for yourself (and decide whether you want to 
 dither). This is a much better way that the usual that I hear from 
 people, who often listen to the dithered and non-dithered results, and 
 talk about the soundstage collapsing without dither, “brittle” versus 
 “transparent , etc.
 
 But if I’m to give you a rule of thumb, a practical bit of advice that 
 you can apply without concern that you might be doing something wrong in 
 a given circumstance, that advice is “always dither 16-bit reductions”. 
 First, I suspect that it’s below the existing noise floor of most music 
 (even so, things like slow fades of the master fader might override that, 
 for that point in time). Still, it’s not hard to manufacture something 
 musical that subject to bad truncation distortion—a naked, low frequency, 
 low-haromic-content sound (a synthetic bass or floor tom perhaps). 
 Anyway, at worst case, you’ve added white noise that you are unlikely to 
 hear—and if you do, so what? If broadband noise below -90 dB were a 
 deal-breaker in recorded music, there wouldn’t be any recorded music. 
 Yeah, truncation distortion at 16-bits is an edge case, but the cost to 
 remove it is almost nothing.
 
 You say that we can’t perceive quantization above 14-bit, but of course 
 we can. If you can perceive it at 14-bit in a given circumstance, and 
 it’s an extended low-level passage, you can easily raise the volume 
 control another 12 dB and be in the same situation at 16-bit. Granted, 
 it’s most likely that the recording engineer hears it and not the 
 end-listener, but who is this video aimed at if not the recording 
 engineer? He’s the one making the choice of whether to dither.
 
 Specifically:
 ..then why not use a piece of audio that does prove the point, instead? 
 I know why, it's because you can’t...
 
 First, I would have to use my own music (because I don’t own 32-bit float 
 versions of other peoples’ music, even if I thought it was fair use to of 
 copyrighted material). Then, it’s no-win situation, because I could 
 EASILY manufacture a bit of music that had significant truncation 
 distortion at 16-bit. I only need to fire up one of my soft synths, and 
 ring out some dull bell tones and bass sounds

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
On 6 February 2015 at 12:16, Didier Dambrin di...@skynet.be wrote:
 I'm not quite sure I understand what you described here below.
 I think the wavs should have contained a normalized part, so that anyone who
 listens to it, will never crank up his volume above the threshold of pain on
 the first, normalized part, and then everyone is more or less listening to
 the quiet part the same way.

That is exactly what I was doing: I normalised the float wav file to
let you know it wasn't even remotely near the threshold of pain, which
tells me that the -12 dB gain on the headphone amp is a reasonable
listening level.


 Claiming that it's at all audible is one thing, but you go as far as saying
 it's clear to hear... we're probably not testing the same way.
 I have normalized (+23dB) the last 9 seconds of the Diva bass 16-bit
 truncated.wav file to hear what I was supposed to hear. I'm just not hearing
 anything close to that, in the normal test.

I can only say what I hear, which is pretty clear. Nigel's point about
the volume is this: at some point in a song that bass sound would be
normalised up higher, or perhaps sit behind louder drums, but you can
consider this bit as a quieter part of a song, so it is absolutely
reasonable as a test case.


 While I have Sennheiser HD650, I'm listening through bose QC15 because,
 although it's night time, my ambient noise is probably a gazillion times
 above what we're debating here. So I'm in a pretty quiet listening setup
 here (for those who have tried QC15's).

If you can't hear it I believe you, but I can hear it. Not all people's
hearing is equal.

All the best,

Andrew Simper





 -Message d'origine- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:31 AM

 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 I also tried boosting the float version of the bass tone to -1 dB (so
 another 18 dB up from with the same test setup), it was loud, but not
 anywhere near the threshold of pain for me. I then boosted it another
 12 dB on the headphone control (so 0 dB gain), so now 30 dB gain in
 total and my headphones were really shaking, this was a bit silly a
 level, but still definitely not painful to listen to. My point being
 that this is a very reasonable test signal to listen to, and it is
 clear to hear the differences even at low levels of gain.

 If I had to choose, between the two 16-bit ones I would prefer the one
 with dither but put through a make mono plugin, as this sounded the
 closest to the float version.

 All the best,

 Andy

 -- cytomic -- sound music software --


 On 5 February 2015 at 16:46, Nigel Redmon earle...@earlevel.com wrote:

 Hmm, I thought that would let you save the page source (wave file)…Safari
 creates the file of the appropriate name and type, but it stays at 0
 bytes…OK, I put up and index page—do the usual right-click to save the field
 to disk if you need to access the files directly:

 http://earlevel.com/temp/music-dsp/


 On Feb 5, 2015, at 12:13 AM, Nigel Redmon earle...@earlevel.com wrote:

 OK, here’s my new piece, I call it Diva bass—to satisfy your request for
 me to make something with truncation distortion apparent. (If it bother you
 that my piece is one note, imagine that this is just the last note of a
 longer piece.)

 I spent maybe 30 seconds getting the sound—opened Diva (default
 “minimoog” modules), turn the mixer knobs down except for VCO 1, set range
 to 32’, waveform to triangle, max release on the VCA envelope.

 In 32-bit float glory:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

 Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer),
 saved to 16-bit wave file:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

 You’ll have to turn your sound system up, not insanely loud, but loud. (I
 said that this would be the case before.) I can hear it, and I know
 engineers who monitor much louder, routinely, than I’m monitoring to hear
 this. My Equator Q10s are not terribly high powered, and I’m not adding any
 other gain ahead of them in order to boost the quiet part.

 If you want to hear the residual easily (32-bit version inverted, summed
 with 16-bit truncated, the result with +40 dB gain via Trim plug-in):


 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav

 I don’t expect the 16-bit truncated version to bother you, but it does
 bother some audio engineers. Here's 16-bit dithered version, for
 completeness, so that you can decide if the added noise floor bothers you:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav



 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

 Yes, I disagree with the always. Not always needed means it's
 sometimes needed, my point is that it's never needed, until proven
 otherwise. Your video proves that sometimes it's not needed, but not that
 sometimes it's needed

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
 the last note of 
 a longer piece.)

 I spent maybe 30 seconds getting the sound—opened Diva (default “minimoog” 
 modules), turn the mixer knobs down except for VCO 1, set range to 32’, 
 waveform to triangle, max release on the VCA envelope.

 In 32-bit float glory:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

 Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), saved 
 to 16-bit wave file:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

 You’ll have to turn your sound system up, not insanely loud, but loud. (I 
 said that this would be the case before.) I can hear it, and I know 
 engineers who monitor much louder, routinely, than I’m monitoring to hear 
 this. My Equator Q10s are not terribly high powered, and I’m not adding 
 any other gain ahead of them in order to boost the quiet part.

 If you want to hear the residual easily (32-bit version inverted, summed 
 with 16-bit truncated, the result with +40 dB gain via Trim plug-in):

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav

 I don’t expect the 16-bit truncated version to bother you, but it does 
 bother some audio engineers. Here's 16-bit dithered version, for 
 completeness, so that you can decide if the added noise floor bothers you:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav



 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

 Yes, I disagree with the always. Not always needed means it's 
 sometimes needed, my point is that it's never needed, until proven 
 otherwise. Your video proves that sometimes it's not needed, but not that 
 sometimes it's needed.



 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 6:51 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.

 Sorry, Didier, I’m confused now. I took from your previous message that 
 you feel 16-bit doesn’t need to be dithered (dithering to 16bit will 
 never make any audible difference”). Here you say that you disagree with 
 dithering to 16bit isn't always needed”. In fact, you are saying that 
 it’s never needed—you disagree because “isn’t always needed” implies that 
 it is sometimes needed—correct?


 On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:

 Then, it’s no-win situation, because I could EASILY manufacture a bit 
 of music that had significant truncation distortion at 16-bit.

 Please do, I would really like to hear it.

 I have never heard truncation noise at 16bit, other than by playing with 
 levels in a such a way that the peaking parts of the rest of the sound 
 would destroy your ears or be very unpleasant at best. (you say 12dB, 
 it's already a lot)

 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.



 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 10:59 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 Hi Didier—You seem to find contradictions in my choices because you are 
 making the wrong assumptions about what I’m showing and saying.

 First, I’m not steadfast that 16-bit dither is always needed—and in fact 
 the point of the video was that I was showing you (the viewers) how you 
 can judge it objectively for yourself (and decide whether you want to 
 dither). This is a much better way that the usual that I hear from 
 people, who often listen to the dithered and non-dithered results, and 
 talk about the soundstage collapsing without dither, “brittle” versus 
 “transparent , etc.

 But if I’m to give you a rule of thumb, a practical bit of advice that 
 you can apply without concern that you might be doing something wrong in 
 a given circumstance, that advice is “always dither 16-bit reductions”. 
 First, I suspect that it’s below the existing noise floor of most music 
 (even so, things like slow fades of the master fader might override 
 that, for that point in time). Still, it’s not hard to manufacture 
 something musical that subject to bad truncation distortion—a naked, low 
 frequency, low-haromic-content sound (a synthetic bass or floor tom 
 perhaps). Anyway, at worst case, you’ve added white noise that you are 
 unlikely to hear—and if you do, so what? If broadband noise below -90 dB 
 were a deal-breaker in recorded music, there wouldn’t be any recorded 
 music. Yeah, truncation distortion at 16-bits is an edge case, but the 
 cost to remove it is almost nothing.

 You say that we can’t perceive quantization above 14-bit, but of course 
 we can. If you can perceive it at 14-bit in a given circumstance, and 
 it’s an extended low-level passage, you can easily raise the volume 
 control another 12 dB

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Didier Dambrin
I couldn't hear any difference (through headphones), even after an insane 
boost, and even though your 16bit truncated wav was 6dB(?) lower than the 
32bit wav.


But even if I could hear it, IMHO this is 13bit worth of audio inside a 
16bit file.





-Message d'origine- 
From: Nigel Redmon

Sent: Thursday, February 05, 2015 9:13 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

OK, here’s my new piece, I call it Diva bass—to satisfy your request for me 
to make something with truncation distortion apparent. (If it bother you 
that my piece is one note, imagine that this is just the last note of a 
longer piece.)


I spent maybe 30 seconds getting the sound—opened Diva (default “minimoog” 
modules), turn the mixer knobs down except for VCO 1, set range to 32’, 
waveform to triangle, max release on the VCA envelope.


In 32-bit float glory:

http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), saved 
to 16-bit wave file:


http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

You’ll have to turn your sound system up, not insanely loud, but loud. (I 
said that this would be the case before.) I can hear it, and I know 
engineers who monitor much louder, routinely, than I’m monitoring to hear 
this. My Equator Q10s are not terribly high powered, and I’m not adding any 
other gain ahead of them in order to boost the quiet part.


If you want to hear the residual easily (32-bit version inverted, summed 
with 16-bit truncated, the result with +40 dB gain via Trim plug-in):


http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav

I don’t expect the 16-bit truncated version to bother you, but it does 
bother some audio engineers. Here's 16-bit dithered version, for 
completeness, so that you can decide if the added noise floor bothers you:


http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav




On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

Yes, I disagree with the always. Not always needed means it's 
sometimes needed, my point is that it's never needed, until proven 
otherwise. Your video proves that sometimes it's not needed, but not that 
sometimes it's needed.




-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 6:51 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

I totally understood the point of your video, that dithering to 16bit 
isn't always needed - but that's what I disagree with.


Sorry, Didier, I’m confused now. I took from your previous message that 
you feel 16-bit doesn’t need to be dithered (dithering to 16bit will 
never make any audible difference”). Here you say that you disagree with 
dithering to 16bit isn't always needed”. In fact, you are saying that it’s 
never needed—you disagree because “isn’t always needed” implies that it is 
sometimes needed—correct?




On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:

Then, it’s no-win situation, because I could EASILY manufacture a bit 
of music that had significant truncation distortion at 16-bit.


Please do, I would really like to hear it.

I have never heard truncation noise at 16bit, other than by playing with 
levels in a such a way that the peaking parts of the rest of the sound 
would destroy your ears or be very unpleasant at best. (you say 12dB, 
it's already a lot)


I totally understood the point of your video, that dithering to 16bit 
isn't always needed - but that's what I disagree with.




-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 10:59 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Didier—You seem to find contradictions in my choices because you are 
making the wrong assumptions about what I’m showing and saying.


First, I’m not steadfast that 16-bit dither is always needed—and in fact 
the point of the video was that I was showing you (the viewers) how you 
can judge it objectively for yourself (and decide whether you want to 
dither). This is a much better way that the usual that I hear from 
people, who often listen to the dithered and non-dithered results, and 
talk about the soundstage collapsing without dither, “brittle” versus 
“transparent , etc.


But if I’m to give you a rule of thumb, a practical bit of advice that 
you can apply without concern that you might be doing something wrong in 
a given circumstance, that advice is “always dither 16-bit reductions”. 
First, I suspect that it’s below the existing noise floor of most music 
(even so, things like slow fades of the master fader might override that, 
for that point in time). Still, it’s not hard to manufacture something 
musical that subject to bad truncation distortion—a naked, low frequency, 
low-haromic-content sound (a synthetic

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Didier Dambrin

But then I would hear that covering noise..

At the level you listened to, can you listen to a normalized song and bear 
it?




-Message d'origine- 
From: Andreas Beisler

Sent: Thursday, February 05, 2015 4:22 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles

The artifacts are very prominent in the tail end of the truncated file.
I don't understand how you cannot hear it. Must be covered by the noise
floor of your sound card's converters.

Andreas



On 2/5/2015 1:55 PM, Didier Dambrin wrote:

I couldn't hear any difference (through headphones), even after an
insane boost, and even though your 16bit truncated wav was 6dB(?) lower
than the 32bit wav

But even if I could hear it, IMHO this is 13bit worth of audio inside a
16bit file.




-Message d'origine- From: Nigel Redmon
Sent: Thursday, February 05, 2015 9:13 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

OK, here’s my new piece, I call it Diva bass—to satisfy your request for
me to make something with truncation distortion apparent. (If it bother
you that my piece is one note, imagine that this is just the last note
of a longer piece.)

I spent maybe 30 seconds getting the sound—opened Diva (default
“minimoog” modules), turn the mixer knobs down except for VCO 1, set
range to 32’, waveform to triangle, max release on the VCA envelope.

In 32-bit float glory:

http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer),
saved to 16-bit wave file:

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

You’ll have to turn your sound system up, not insanely loud, but loud.
(I said that this would be the case before.) I can hear it, and I know
engineers who monitor much louder, routinely, than I’m monitoring to
hear this. My Equator Q10s are not terribly high powered, and I’m not
adding any other gain ahead of them in order to boost the quiet part.

If you want to hear the residual easily (32-bit version inverted, summed
with 16-bit truncated, the result with +40 dB gain via Trim plug-in):

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav


I don’t expect the 16-bit truncated version to bother you, but it does
bother some audio engineers. Here's 16-bit dithered version, for
completeness, so that you can decide if the added noise floor bothers you:

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav




On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

Yes, I disagree with the always. Not always needed means it's
sometimes needed, my point is that it's never needed, until proven
otherwise. Your video proves that sometimes it's not needed, but not
that sometimes it's needed.



-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 6:51 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles


I totally understood the point of your video, that dithering to 16bit
isn't always needed - but that's what I disagree with.


Sorry, Didier, I’m confused now. I took from your previous message
that you feel 16-bit doesn’t need to be dithered (dithering to 16bit
will never make any audible difference”). Here you say that you
disagree with dithering to 16bit isn't always needed”. In fact, you
are saying that it’s never needed—you disagree because “isn’t always
needed” implies that it is sometimes needed—correct?



On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:


Then, it’s no-win situation, because I could EASILY manufacture a
bit of music that had significant truncation distortion at 16-bit.


Please do, I would really like to hear it.

I have never heard truncation noise at 16bit, other than by playing
with levels in a such a way that the peaking parts of the rest of the
sound would destroy your ears or be very unpleasant at best. (you say
12dB, it's already a lot)

I totally understood the point of your video, that dithering to 16bit
isn't always needed - but that's what I disagree with.



-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 10:59 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Didier—You seem to find contradictions in my choices because you
are making the wrong assumptions about what I’m showing and saying.

First, I’m not steadfast that 16-bit dither is always needed—and in
fact the point of the video was that I was showing you (the viewers)
how you can judge it objectively for yourself (and decide whether you
want to dither). This is a much better way that the usual that I hear
from people, who often listen to the dithered and non-dithered
results, and talk about the soundstage collapsing without dither,
“brittle” versus “transparent , etc.

But if I’m to give you a rule of thumb, a practical bit

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Nigel Redmon
Hmm, I thought that would let you save the page source (wave file)…Safari 
creates the file of the appropriate name and type, but it stays at 0 bytes…OK, 
I put up an index page—do the usual right-click to save the file to disk if 
you need to access the files directly:

http://earlevel.com/temp/music-dsp/


 On Feb 5, 2015, at 12:13 AM, Nigel Redmon earle...@earlevel.com wrote:
 
 OK, here’s my new piece, I call it Diva bass—to satisfy your request for me 
 to make something with truncation distortion apparent. (If it bother you that 
 my piece is one note, imagine that this is just the last note of a longer 
 piece.)
 
 I spent maybe 30 seconds getting the sound—opened Diva (default “minimoog” 
 modules), turn the mixer knobs down except for VCO 1, set range to 32’, 
 waveform to triangle, max release on the VCA envelope.
 
 In 32-bit float glory:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav
 
 Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), saved to 
 16-bit wave file:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav
 
 You’ll have to turn your sound system up, not insanely loud, but loud. (I 
 said that this would be the case before.) I can hear it, and I know engineers 
 who monitor much louder, routinely, than I’m monitoring to hear this. My 
 Equator Q10s are not terribly high powered, and I’m not adding any other gain 
 ahead of them in order to boost the quiet part.
 
 If you want to hear the residual easily (32-bit version inverted, summed with 
 16-bit truncated, the result with +40 dB gain via Trim plug-in):
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav
 
 I don’t expect the 16-bit truncated version to bother you, but it does bother 
 some audio engineers. Here's 16-bit dithered version, for completeness, so 
 that you can decide if the added noise floor bothers you:
 
 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav
 
 
 
 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:
 
 Yes, I disagree with the always. Not always needed means it's sometimes 
 needed, my point is that it's never needed, until proven otherwise. Your 
 video proves that sometimes it's not needed, but not that sometimes it's 
 needed.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 6:51 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 I totally understood the point of your video, that dithering to 16bit isn't 
 always needed - but that's what I disagree with.
 
 Sorry, Didier, I’m confused now. I took from your previous message that you 
 feel 16-bit doesn’t need to be dithered (dithering to 16bit will never make 
 any audible difference”). Here you say that you disagree with dithering to 
 16bit isn't always needed”. In fact, you are saying that it’s never 
 needed—you disagree because “isn’t always needed” implies that it is 
 sometimes needed—correct?
 
 
 On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:
 
 Then, it’s no-win situation, because I could EASILY manufacture a bit of 
 music that had significant truncation distortion at 16-bit.
 
 Please do, I would really like to hear it.
 
 I have never heard truncation noise at 16bit, other than by playing with 
 levels in a such a way that the peaking parts of the rest of the sound 
 would destroy your ears or be very unpleasant at best. (you say 12dB, it's 
 already a lot)
 
 I totally understood the point of your video, that dithering to 16bit isn't 
 always needed - but that's what I disagree with.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 10:59 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Hi Didier—You seem to find contradictions in my choices because you are 
 making the wrong assumptions about what I’m showing and saying.
 
 First, I’m not steadfast that 16-bit dither is always needed—and in fact 
 the point of the video was that I was showing you (the viewers) how you can 
 judge it objectively for yourself (and decide whether you want to dither). 
 This is a much better way that the usual that I hear from people, who often 
 listen to the dithered and non-dithered results, and talk about the 
 soundstage collapsing without dither, “brittle” versus “transparent , 
 etc.
 
 But if I’m to give you a rule of thumb, a practical bit of advice that you 
 can apply without concern that you might be doing something wrong in a 
 given circumstance, that advice is “always dither 16-bit reductions”. 
 First, I suspect that it’s below the existing noise floor of most music 
 (even so, things like slow fades of the master fader might override that, 
 for that point in time). Still, it’s not hard to manufacture something 
 musical that subject to bad truncation distortion—a naked, low frequency, 
 low

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andreas Beisler
The artifacts are very prominent in the tail end of the truncated file. 
I don't understand how you cannot hear it. Must be covered by the noise 
floor of your sound card's converters.


Andreas



On 2/5/2015 1:55 PM, Didier Dambrin wrote:

I couldn't hear any difference (through headphones), even after an
insane boost, and even though your 16bit truncated wav was 6dB(?) lower
than the 32bit wav

But even if I could hear it, IMHO this is 13bit worth of audio inside a
16bit file.




-Message d'origine- From: Nigel Redmon
Sent: Thursday, February 05, 2015 9:13 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

OK, here’s my new piece, I call it Diva bass—to satisfy your request for
me to make something with truncation distortion apparent. (If it bother
you that my piece is one note, imagine that this is just the last note
of a longer piece.)

I spent maybe 30 seconds getting the sound—opened Diva (default
“minimoog” modules), turn the mixer knobs down except for VCO 1, set
range to 32’, waveform to triangle, max release on the VCA envelope.

In 32-bit float glory:

http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer),
saved to 16-bit wave file:

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

You’ll have to turn your sound system up, not insanely loud, but loud.
(I said that this would be the case before.) I can hear it, and I know
engineers who monitor much louder, routinely, than I’m monitoring to
hear this. My Equator Q10s are not terribly high powered, and I’m not
adding any other gain ahead of them in order to boost the quiet part.

If you want to hear the residual easily (32-bit version inverted, summed
with 16-bit truncated, the result with +40 dB gain via Trim plug-in):

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav


I don’t expect the 16-bit truncated version to bother you, but it does
bother some audio engineers. Here's 16-bit dithered version, for
completeness, so that you can decide if the added noise floor bothers you:

http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav




On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

Yes, I disagree with the always. Not always needed means it's
sometimes needed, my point is that it's never needed, until proven
otherwise. Your video proves that sometimes it's not needed, but not
that sometimes it's needed.



-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 6:51 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles


I totally understood the point of your video, that dithering to 16bit
isn't always needed - but that's what I disagree with.


Sorry, Didier, I’m confused now. I took from your previous message
that you feel 16-bit doesn’t need to be dithered (dithering to 16bit
will never make any audible difference”). Here you say that you
disagree with dithering to 16bit isn't always needed”. In fact, you
are saying that it’s never needed—you disagree because “isn’t always
needed” implies that it is sometimes needed—correct?



On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:


Then, it’s no-win situation, because I could EASILY manufacture a
bit of music that had significant truncation distortion at 16-bit.


Please do, I would really like to hear it.

I have never heard truncation noise at 16bit, other than by playing
with levels in a such a way that the peaking parts of the rest of the
sound would destroy your ears or be very unpleasant at best. (you say
12dB, it's already a lot)

I totally understood the point of your video, that dithering to 16bit
isn't always needed - but that's what I disagree with.



-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 10:59 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Didier—You seem to find contradictions in my choices because you
are making the wrong assumptions about what I’m showing and saying.

First, I’m not steadfast that 16-bit dither is always needed—and in
fact the point of the video was that I was showing you (the viewers)
how you can judge it objectively for yourself (and decide whether you
want to dither). This is a much better way that the usual that I hear
from people, who often listen to the dithered and non-dithered
results, and talk about the soundstage collapsing without dither,
“brittle” versus “transparent , etc.

But if I’m to give you a rule of thumb, a practical bit of advice
that you can apply without concern that you might be doing something
wrong in a given circumstance, that advice is “always dither 16-bit
reductions”. First, I suspect that it’s below the existing noise
floor of most music (even so, things like slow fades of the master
fader might override

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Didier Dambrin
But the key here is *bits*. If you're listening at normal levels, those 
parts in music that don't use all 16bits (which is obvious, you can find 
parts of all levels in a song) will be quieter, and thus the noise will be 
less audible.


Put a sine wave in the lowest 1 or 2 bits of a 16bit piece of audio, it 
should be horrible noise, right? If you crank up your volume until you hear 
that sinewave, obviously it will be. But at normal listening level, are you 
really gonna hear that sinewave or, worse, its horrible noise? My bet would 
be *maybe*, in an anechoic room, after a couple of hours of getting used to 
silence.
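
(That experiment is easy to set up, for anyone curious - a sketch with assumed frequency and length; the peak here is about -84 dBFS, i.e. roughly two 16-bit LSBs:)

import numpy as np

sr = 44100
t = np.arange(5 * sr) / sr
lsb = 1.0 / 32768.0
sine = 2 * lsb * np.sin(2 * np.pi * 440.0 * t)           # peak ~ -84 dBFS (two LSBs)
quantised = np.round(sine * 32768.0).astype(np.int16)    # occupies only the lowest bits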




the cost is virtually nothing


I will certainly not disagree with that, it doesn't hurt and costs (almost) 
nothing. But it's still snake oil.




Our biggest difference is that you are looking at this from the 
end-listener point of view.


Yes, because that's the only thing 16bit audio applies to, the end listener. 
Ok, apparently some still need to publish 16bit audio files for pros 
because not every tool out there (I guess) supports 24 (and I would still 
advise against storing in integer format at all) or 32bit formats - this is 
most likely not gonna last very long.
Talking about this, in a world where the end listener almost always listens 
in lossy encoded formats, the 16bit quantization problem isn't even a shrimp 
in the whole universe.








-Message d'origine- 
From: Nigel Redmon

Sent: Thursday, February 05, 2015 7:13 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Music is not typically full scale. My level was arbitrary—where the mixer 
knob happened to be sitting—but the note is relatively loud in a musical 
setting.


You don’t get to use all 16 bits, all the time in music. So, to complain 
that it might as well be 13-bit…well, if we had 13-bit converters and sample 
size, we’d be having this discussion about 10-bit. The bass note is LOUD, 
compared to similar bits in actual music, as I’m playing from iTunes right 
now.


OK, I’m not trying to convince you—it was obvious that we’d have to agree to 
disagree on this. And, as you know, I’m not overstating the importance of 
dithering 16-bit audio, as many others do. I’m simply saying that it’s worth 
it—the cost is virtually nothing (it’s not even done in real time, but just 
for the final bounce to disk), and doing it doesn’t harm the music in any way 
(if you can hear the distortion, I don’t think you’ll hear 16-bit flat 
dither).


Our biggest difference is that you are looking at this from the end-listener 
point of view. But why would I be giving advice to the listener? They aren’t 
the ones making the choice to dither or not. The advice is for people in the 
position of dithering. And these people do hear it. If my advice were “Don’t 
bother—you can’t hear it anyway”, these people would think I’m an idiot—of 
course they can hear it. Their business is to look for junk and grunge and 
get rid of it. I can envision Bob Katz, Bob Olson, and Bruce Swedien 
knocking at my door, wanting to beat me with a microphone stand and pop 
screens for telling them that they can’t hear this stuff. (Just kidding, 
they seem like really nice guys.)


The funny thing is that I’m arguing in favor of 16-bit dither with you, and 
having a similar exchange with a mastering engineer, who is sending me 
examples of why we really must dither at 24-bit ...




On Feb 5, 2015, at 9:49 AM, Didier Dambrin di...@skynet.be wrote:


If you mean that the peak loudness of the synth isn’t hitting full scale


Yeah I mean that, since, to compensate, you crank your volume up, making 
it 13bit worth (from 14bit, after your extra -6dB gain)


I mean it's always the same debate with dithering, one could demonstrate 
exactly the same with 8bit worth of audio in a 16bit file. To me a 16bit 
file is 16bit worth of audio, for the whole project, thus with the loudest 
parts of the project designed to be listened to. If the entire project 
peaks at -18dB, then it's not designed to be listened to at the same level 
as other 16bit files, and thus it's not 16bit worth of audio. One could go 
further and store 1 bit worth of audio in a 16bit file and point out how 
degraded it is.
Quantization and loss is everywhere in a computer (obviously) and magnifying 
it doesn't make a point, because you can always bring the imperceptible 
back to perception. To me it's all about what's perceptible when the 
project is used as intended, otherwise, even 64bit float audio should be 
marked as lossy.



I could have had a louder sound with a similar tail that would have 
produced the same distortion.


yeah, except that louder sound would have killed your ears, so you would 
have cranked your listening level down, and not heard the noise anymore






-Message d'origine- From: Nigel Redmon
Sent: Thursday, February 05, 2015 6:22 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Oh, sorry

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Nigel Redmon
Yes, because that's the only thing 16bit audio applies to, the end listener.

??? They have absolutely no control over it. The decision to dither or not was 
made before they hear it. My advice is not to them. I get asked questions about 
dither from the people who do the reduction to 16-bit, not your average music 
listener. I have another video that explains what dither is and how it works, 
for the curious, but I get asked for my opinion, so I made this video. (Often, 
the people who ask already have their own opinion, and want to see if I’m on 
their side. And often, what follows is a spirited debate about 24-bit dither, 
not 16-bit.)

Talking about this, in a world where the end listener almost always listens in 
lossy encoded formats, the 16bit quantization problem isn't even a shrimp in 
the whole universe.

Sure, or FM radio in a car on a cheap system. But a mastering engineer isn’t 
going to factor in the lowest common denominator, any more than a photographer 
is going to assume that his photo will end up in black and white newsprint, or 
a movie director will assume that his work is going to be cropped to fit an old 
tube set and broadcast for pickup on rabbit ears. :-) If you tell a recording 
or mastering engineer that nobody can hear this stuff, they’ll crank the 
monitors and say, “you can’t hear THAT crap?” End of story.

Of course, they’ll often “hear” it when it isn’t really there too, which is why 
I showed a more objective way of listening for it. Several people have told me 
that they can hear it, consistently, on 24-bit truncations. I don’t think so. I 
read in a forum, where an expert was using some beta software and mentioned the 
audible difference between engaging 24-bit dither and not, via a button on the GUI, 
and the developer had to tell him that he was mistaken on the function of that 
button, and that it did not impact audio at all. (I’m not making fun of the 
guy, and I admire his work, it’s just that anyone who does serious audio work 
fools themselves into thinking they hear something that isn’t there, 
occasionally—fact of life.) But at 16-bit, it’s just not that hard to hear 
it—an edge case, for sure, but it’s there, so they will want to act on it, and 
I don’t think that’s unreasonable.


 On Feb 5, 2015, at 3:15 PM, Didier Dambrin di...@skynet.be wrote:
 
 But the key here is *bits*. If you're listening at normal levels, those parts 
 in music that don't use all 16bits (which is obvious, you can find parts of 
 all levels in a song) will be quieter,  thus the noise will be less audible.
 
 Put a sine wave in the lowest 1 or 2 bits of a 16bit piece of audio, it 
 should be horrible noise, right? If you crank up your volume until you hear 
 that sinewave, obviously it will. But at normal listening level, are you 
 really gonna hear that sinewave or worse, its horrible noise? My bet would be 
 *maybe*, in an anechoic room, after a couple of hours of getting used to 
 silence.
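
(That thought experiment is easy to reproduce; a minimal sketch, assuming a 44.1 kHz mono WAV and a made-up file name lsb_sine.wav, of a sine whose peak spans only 2 LSBs of a 16-bit word, i.e. roughly -84 dBFS:)

    import math, struct, wave

    RATE = 44100
    FREQ = 440.0
    AMP_LSB = 2  # peak of +/-2 least-significant bits, roughly -84 dBFS

    with wave.open("lsb_sine.wav", "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(RATE)
        for n in range(RATE * 2):  # two seconds
            x = AMP_LSB * math.sin(2 * math.pi * FREQ * n / RATE)
            # rounding to whole sample values is itself a brutal quantization,
            # which is exactly the point of the thought experiment
            w.writeframes(struct.pack("<h", int(round(x))))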
 
 
 the cost is virtually nothing
 
 I will certainly not disagree with that, it doesn't hurt & costs (almost) 
 nothing. But it's still snake oil.
 
 
 
 Our biggest difference is that you are looking at this from the 
 end-listener point of view.
 
 Yes, because that's the only thing 16bit audio applies to, the end listener. 
 Ok, apparently some still need to publish 16bit audio files for pro's because 
 not every tool out there (I guess) supports 24 ( I would still advise 
 against storing in integer format at all) or 32bit formats - this is most 
 likely not gonna last very long.
 Talking about this, in a world where the end listener almost always listens 
 in lossy encoded formats, the 16bit quantization problem isn't even a shrimp 
 in the whole universe.
 
 
 
 
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Thursday, February 05, 2015 7:13 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Music is not typically full scale. My level was arbitrary—where the mixer 
 knob happened to be sitting—but the note is relatively loud in a musical 
 setting.
 
 You don’t get to use all 16 bits, all the time in music. So, to complain that 
 it might as well be 13-bit…well, if we had 13-bit converters and sample size, 
 we’d be having this discussion about 10-bit. The bass note is LOUD, compared 
 to similar bits in actual music, as I’m playing from iTunes right now.
 
 OK, I’m not trying to convince you—it was obvious that we’d have to agree to 
 disagree on this. And, as you know, I’m not overstating the importance of 
 dithering 16-bit audio, as many others do. I’m simply saying that it’s worth 
 it—the cost is virtually nothing (it’s not even done in real time, but just 
 for the final bounce to disk), doing it doesn’t harm the music in any way (if 
 you can hear the distortion, I don’t think you’ll hear 16-bit flat dither).
 
 Our biggest difference is that you are looking at this from the end-listener 
 point of view. But why would I be giving

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Didier Dambrin
What I wrote is that 16bit audio only applies to the end listener, that is, 
it's aimed at the end listener, not the professional who will reuse the bit 
of audio. There is just no way A/B testing on a sample of listeners, at 
loud, but still realistic listening levels, would show that dithering to 
16bit makes a difference.



Sure, or FM radio in a car on a cheap system. But a mastering engineer 
isn’t going to factor in the lowest common denominator, any more than a 
photographer is going to assume that his photo will end up in black and 
white newsprint


An engineer has very important work to do on mastering (I would say in 
rather destructive ways, *because* our perception is rather forgiving), but 
that doesn't make dithering to 16bit less snake oil.



Several people have told me that they can hear it, consistently, on 
24-bit truncations.


yeah I hear things like that (or worse) all day long. But to be honest, 
even I have ended up tweaking parameters of a switched-off effect, until I 
was happy with the result.





-Message d'origine- 
From: Nigel Redmon

Sent: Friday, February 06, 2015 2:00 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Yes, because that's the only thing 16bit audio applies to, the end 
listener.


??? They have absolutely no control over it. The decision to dither or not 
was made before they hear it. My advice is not to them. I get asked 
questions about dither from the people who do the reduction to 16-bit, not 
your average music listener. I have another video that explains what dither 
is and how it works, for the curious, but I get asked for my opinion, so I 
made this video. (Often, the people who ask already have their own opinion, 
and want to see if I’m on their side. And often, what follows is a spirited 
debate about 24-bit dither, not 16-bit.)


Talking about this, in a world where the end listener almost always listens 
in lossy encoded formats, the 16bit quantization problem isn't even a 
shrimp in the whole universe.


Sure, or FM radio in a car on a cheap system. But a mastering engineer isn’t 
going to factor in the lowest common denominator, any more than a 
photographer is going to assume that his photo will end up in black and 
white newsprint, or a movie director will assume that his work is going to 
be cropped to fit an old tube set and broadcast for pickup on rabbit ears. 
:-) If you tell a recording or mastering engineer that nobody can hear this 
stuff, they’ll crank the monitors and say, “you can’t hear THAT crap?” End 
of story.


Of course, they’ll often “hear” it when it isn’t really there too, which is 
why I showed a more objective way of listening for it. Several people have 
told me that they can hear it, consistently, on 24-bit truncations. I don’t 
think so. I read in a forum, where an expert was using some beta software 
and mentioned the audible difference between engaging 24-bit dither and not, via 
a button on the GUI, and the developer had to tell him that he was mistaken 
on the function of that button, and that it did not impact audio at all. (I’m 
not making fun of the guy, and I admire his work, it’s just that anyone who 
does serious audio work fools themselves into thinking they hear something 
that isn’t there, occasionally—fact of life.) But at 16-bit, it’s just not that 
hard to hear it—an edge case, for sure, but it’s there, so they will want to 
act on it, and I don’t think that’s unreasonable.




On Feb 5, 2015, at 3:15 PM, Didier Dambrin di...@skynet.be wrote:

But the key here is *bits*. If you're listening at normal levels, those 
parts in music that don't use all 16bits (which is obvious, you can find 
parts of all levels in a song) will be quieter,  thus the noise will be 
less audible.


Put a sine wave in the lowest 1 or 2 bits of a 16bit piece of audio, it 
should be horrible noise, right? If you crank up your volume until you 
hear that sinewave, obviously it will. But at normal listening level, are 
you really gonna hear that sinewave or worse, its horrible noise? My bet 
would be *maybe*, in an anechoic room, after a couple of hours of getting 
used to silence.




the cost is virtually nothing


I will certainly not disagree with that, it doesn't hurt & costs (almost) 
nothing. But it's still snake oil.




Our biggest difference is that you are looking at this from the 
end-listener point of view.


Yes, because that's the only thing 16bit audio applies to, the end 
listener. Ok, apparently some still need to publish 16bit audio files for 
pro's because not every tool out there (I guess) supports 24 ( I would 
still advise against storing in integer format at all) or 32bit formats - 
this is most likely not gonna last very long.
Talking about this, in a world where the end listener almost always 
listens in lossy encoded formats, the 16bit quantization problem isn't 
even a shrimp in the whole universe.








-Message d'origine- From: Nigel Redmon
Sent

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Ethan Duni
There is just no way A/B testing on a sample of listeners,
at loud, but still realistic listening levels, would show that
dithering to 16bit makes a difference.

Well, can you refer us to an A/B test that confirms your assertions?
Personally I take a dim view of people telling me that a test would surely
confirm their assertions, but without actually doing any test.

And again, there are a variety of real-world use cases where the 16 bit
audio from a CD (or whatever) has its dynamic range reduced in the playback
chain. Are we supposed to just ignore those use cases?

E
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
Hi Nigel,

Can I please ask a favour? Can you please add a mono noise button to
your dither plugin? In headphones the sudden onset of stereo hiss of
the dither is pretty obvious and a little distracting in this example.
I had a listen with a "make mono" plugin and the differences were much
less obvious between the 16-bit with dither and the float file. It
would be interesting to hear a stereo source (eg the same Diva sounds
but in unison) put through mono noise dithering.
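
(The mono-versus-stereo noise question comes down to whether both channels share one dither sample per frame or draw independent ones; a rough sketch with made-up helper names, not Nigel's plug-in:)

    import random

    def tpdf():
        # triangular-PDF dither, +/-1 LSB peak, in units of one 16-bit step
        return random.random() - random.random()

    def dither_pair_to_16bit(left, right, mono_noise=False):
        # left/right: floats in [-1, 1); returns a dithered 16-bit sample pair
        if mono_noise:
            nl = nr = tpdf()            # same noise on both channels
        else:
            nl, nr = tpdf(), tpdf()     # independent ("stereo") noise
        clip = lambda v: max(-32768, min(32767, v))
        return (clip(int(round(left  * 32768.0 + nl))),
                clip(int(round(right * 32768.0 + nr))))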

The differences are pretty clear to me, thanks for posting the files! My setup:

(*) Switching randomly between the three files, playing them back with
unity gain (the float file padded -6 dB to have the same volume as the
others)
(*) FireFace UCX with headphone output set to -12 dB, all other gains at unity
(*) Sennheiser Amperior HD25 headphones

My results

(*) the float file is easy to spot, because of the differences when
compared to the other two
(*) the dithered one sounds hissy straight away when I switch to it,
it is obvious that the hiss is stereo, my ears immediately hear that
stereo difference, but otherwise it sounds like the original float
file
(*) the undithered one, right from the start, sounds like a harsher
version of the float one with just a hint of noise as well, an
aggressive subtle edge to the tone which just isn't in the original.
When the fadeout comes then it becomes more obvious aliasing
distortion that everyone is used to hearing.

I also tried boosting the float version of the bass tone to -1 dB (so
another 18 dB up, with the same test setup); it was loud, but not
anywhere near the threshold of pain for me. I then boosted it another
12 dB on the headphone control (so 0 dB gain), so now 30 dB of gain in
total, and my headphones were really shaking. This was a bit of a silly
level, but still definitely not painful to listen to. My point being
that this is a very reasonable test signal to listen to, and it is
easy to hear the differences even at low levels of gain.

If I had to choose between the two 16-bit ones, I would prefer the one
with dither but put through a "make mono" plugin, as this sounded the
closest to the float version.

All the best,

Andy

-- cytomic -- sound music software --


On 5 February 2015 at 16:46, Nigel Redmon earle...@earlevel.com wrote:
 Hmm, I thought that would let you save the page source (wave file)…Safari 
 creates the file of the appropriate name and type, but it stays at 0 
 bytes…OK, I put up an index page—do the usual right-click to save the file 
 to disk if you need to access the files directly:

 http://earlevel.com/temp/music-dsp/


 On Feb 5, 2015, at 12:13 AM, Nigel Redmon earle...@earlevel.com wrote:

 OK, here’s my new piece, I call it Diva bass—to satisfy your request for me 
 to make something with truncation distortion apparent. (If it bothers you 
 that my piece is one note, imagine that this is just the last note of a 
 longer piece.)

 I spent maybe 30 seconds getting the sound—opened Diva (default “minimoog” 
 modules), turned the mixer knobs down except for VCO 1, set range to 32’, 
 waveform to triangle, max release on the VCA envelope.

 In 32-bit float glory:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

 Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), saved 
 to 16-bit wave file:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

 You’ll have to turn your sound system up, not insanely loud, but loud. (I 
 said that this would be the case before.) I can hear it, and I know 
 engineers who monitor much louder, routinely, than I’m monitoring to hear 
 this. My Equator Q10s are not terribly high powered, and I’m not adding any 
 other gain ahead of them in order to boost the quiet part.

 If you want to hear the residual easily (32-bit version inverted, summed 
 with 16-bit truncated, the result with +40 dB gain via Trim plug-in):

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav
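
 (The residual trick is easy to reproduce offline; a sketch assuming numpy and float samples in [-1, 1), with truncate_16 and residual_40db being made-up names rather than the plug-ins used above:)

    import numpy as np

    def truncate_16(x):
        # plain truncation to 16-bit, no dither, back to float for comparison
        return np.floor(x * 32768.0).clip(-32768, 32767) / 32768.0

    def residual_40db(x):
        # 32-bit original minus its truncated version, boosted by +40 dB
        return (x - truncate_16(x)) * 10.0 ** (40.0 / 20.0)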

 I don’t expect the 16-bit truncated version to bother you, but it does 
 bother some audio engineers. Here's the 16-bit dithered version, for 
 completeness, so that you can decide if the added noise floor bothers you:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav



 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

 Yes, I disagree with the always. Not always needed means it's 
 sometimes needed, my point is that it's never needed, until proven 
 otherwise. Your video proves that sometimes it's not needed, but not that 
 sometimes it's needed.



 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 6:51 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 I totally understood the point of your video, that dithering to 16bit 
 isn't always needed - but that's what I disagree with.

 Sorry, Didier, I’m confused now. I took from your previous message

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Ross Bencina

Hi Ethan,

On 6/02/2015 1:17 PM, Ethan Duni wrote:
 There is just no way A/B testing on a sample of listeners,
 at loud, but still realistic listening levels, would show that
 dithering to 16bit makes a difference.

 Well, can you refer us to an A/B test that confirms your assertions?
 Personally I take a dim view of people telling me that a test would 
surely

 confirm their assertions, but without actually doing any test.

Here's a double-blind A/B/X test that indicated no one could hear the 
difference between 16 and 24 bit. 24-bit is better than 16-bit with 
dithering so maybe you can extrapolate.


AES Journal 2007 September, Volume 55 Number 9: Audibility of a 
CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback

E. Brad Meyer and David R. Moran

I found this link with google:
http://drewdaniels.com/audible.pdf

The test results show that the CD-quality A/D/A loop was undetectable 
at normal-to-loud listening levels, by any of the subjects, on any of 
the playback systems. The noise of the CD-quality loop was audible only 
at very elevated levels.

Cheers,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-05 Thread Tom Duffy

The AES report is highly controversial.

Plenty of sources dispute the findings.

---
Tom

On 2/5/2015 6:39 PM, Ross Bencina wrote:
Hi Ethan,

On 6/02/2015 1:17 PM, Ethan Duni wrote:
  There is just no way A/B testing on a sample of listeners,
  at loud, but still realistic listening levels, would show that
  dithering to 16bit makes a difference.
 
  Well, can you refer us to an A/B test that confirms your assertions?
  Personally I take a dim view of people telling me that a test would
surely
  confirm their assertions, but without actually doing any test.

Here's a double-blind A/B/X test that indicated no one could hear the
difference between 16 and 24 bit. 24-bit is better than 16-bit with
dithering so maybe you can extrapolate.

AES Journal 2007 September, Volume 55 Number 9: Audibility of a
CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback
E. Brad Meyer and David R. Moran

I found this link with google:
http://drewdaniels.com/audible.pdf

The test results show that the CD-quality A/D/A loop was undetectable
at normal-to-loud listening levels, by any of the subjects, on any of
the playback systems. The noise of the CD-quality loop was audible only
at very elevated levels.

Cheers,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp





--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-05 Thread Nigel Redmon
Music is not typically full scale. My level was arbitrary—where the mixer knob 
happened to be sitting—but the note is relatively loud in a musical setting.

You don’t get to use all 16 bits, all the time in music. So, to complain that 
it might as well be 13-bit…well, if we had 13-bit converters and sample size, 
we’d be having this discussion about 10-bit. The bass note is LOUD, compared to 
similar bits in actual music, as I’m playing from iTunes right now.

OK, I’m not trying to convince you—it was obvious that we’d have to agree to 
disagree on this. And, as you know, I’m not overstating the importance of 
dithering 16-bit audio, as many others do. I’m simply saying that it’s worth 
it—the cost is virtually nothing (it’s not even done in real time, but just for 
the final bounce to disk), doing it doesn’t harm the music in any way (if you 
can hear the distortion, I don’t think you’ll hear 16-bit flat dither).

Our biggest difference is that you are looking at this from the end-listener 
point of view. But why would I be giving advice to the listener? They aren’t 
the ones making the choice to dither or not. The advice is for people in the 
position of dithering. And these people do hear it. If my advice were “Don’t 
bother—you can’t hear it anyway”, these people would think I’m an idiot—of 
course they can hear it. Their business is to look for junk and grunge and get 
rid of it. I can envision Bob Katz, Bob Olson, and Bruce Swedien knocking at my 
door, wanting to beat me with a microphone stand and pop screens for telling 
them that they can’t hear this stuff. (Just kidding, they seem like really nice 
guys.)

The funny thing is that I’m arguing in favor of 16-bit dither with you, and 
having a similar exchange with a mastering engineer, who is sending me examples 
of why we really must dither at 24-bit ...


 On Feb 5, 2015, at 9:49 AM, Didier Dambrin di...@skynet.be wrote:
 
 If you mean that the peak loudness of the synth isn’t hitting full scale
 
 Yeah I mean that, since, to compensate, you crank your volume up, making it 
 13bit worth (from 14bit, after your extra -6dB gain)
 
 I mean it's always the same debate with dithering, one could demonstrate 
 exactly the same with 8bit worth of audio in a 16bit file. To me a 16bit file 
 is 16bit worth of audio, for the whole project, thus with the loudest parts 
 of the project designed to be listened to. If the entire project peaks at 
 -18dB, then it's not designed to be listened to at the same level as other 
 16bit files, and thus it's not 16bit worth of audio. One could go further & 
 store 1 bit worth of audio in a 16bit file and point out how degraded it is.
 Quantization loss is everywhere in a computer (obviously) and magnifying it 
 doesn't make a point, because you always can bring the imperceptible back to 
 perception. To me it's all about what's perceptible when the project is used 
 as intended, otherwise, even 64bit float audio should be marked as lossy.
 
 
 I could have had a louder sound with a similar tail that would have 
 produced the same distortion.
 
 yeah, except that louder sound would have killed your ears, so you would have 
 cranked your listening level down, and not heard the noise anymore
 
 
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Thursday, February 05, 2015 6:22 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Oh, sorry about the 6 dB. I made the 16- and 32-bit versions, then noticed I 
 had the gain slider on the DP mixer pushed up. I pulled it back to 0 dB and 
 made new bounces, plus the residual and dithered version subsequently, but 
 must have grabbed the wrong 32-bit version for upload.
 
 I have no idea what you’re implying about “IMHO this is 13bit worth of audio 
 inside a 16bit file”. I took care to have no gain after the truncation 
 (except the accidental 6 dB on the 32-bit file). If you mean that the peak 
 loudness of the synth isn’t hitting full scale, then, A) welcome to music, 
 and B) it’s immaterial—I could have had a louder sound with a similar tail 
 that would have produced the same distortion.
 
 I’m not surprised you couldn’t hear it, as I said it required fairly high 
 listening levels and I don’t know what your equipment is. It can be heard on 
 a professional monitoring system. I’m monitoring off my TASCAM DM-3200, and 
 it does not have a loud headphone amp—I can’t hear it there. But it’s right 
 on the edge—if I boost it +6 dB I have no problem hearing it. But my 
 monitoring speakers get louder than the headphones, so I can hear it there. 
 And I know engineers who routinely monitor much louder than my gear can get.
 
 
 On Feb 5, 2015, at 4:55 AM, Didier Dambrin di...@skynet.be wrote:
 
 I couldn't hear any difference (through headphones), even after an insane 
 boost, and even though your 16bit truncated wav was 6dB(?) lower than the 
 32bit wav
 
 But even if I could hear it, IMHO this is 13bit worth of audio

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Didier Dambrin

If you mean that the peak loudness of the synth isn’t hitting full scale


Yeah I mean that, since, to compensate, you crank your volume up, making it 
13bit worth (from 14bit, after your extra -6dB gain)


I mean it's always the same debate with dithering, one could demonstrate 
exactly the same with 8bit worth of audio in a 16bit file. To me a 16bit 
file is 16bit worth of audio, for the whole project, thus with the loudest 
parts of the project designed to be listened to. If the entire project peaks 
at -18dB, then it's not designed to be listened to at the same level as 
other 16bit files, and thus it's not 16bit worth of audio. One could go 
further & store 1 bit worth of audio in a 16bit file and point out how 
degraded it is.
Quantization loss is everywhere in a computer (obviously) and magnifying 
it doesn't make a point, because you always can bring the imperceptible back 
to perception. To me it's all about what's perceptible when the project is 
used as intended, otherwise, even 64bit float audio should be marked as 
lossy.



I could have had a louder sound with a similar tail that would have 
produced the same distortion.


yeah, except that louder sound would have killed your ears, so you would 
have cranked your listening level down, and not heard the noise anymore






-Message d'origine- 
From: Nigel Redmon

Sent: Thursday, February 05, 2015 6:22 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Oh, sorry about the 6 dB. I made the 16- and 32-bit versions, then noticed I 
had the gain slider on the DP mixer pushed up. I pulled it back to 0 dB and 
made new bounces, plus the residual and dithered version subsequently, but 
must have grabbed the wrong 32-bit version for upload.


I have no idea what you’re implying about “IMHO this is 13bit worth of audio 
inside a 16bit file”. I took care to have no gain after the truncation 
(except the accidental 6 dB on the 32-bit file). If you mean that the peak 
loudness of the synth isn’t hitting full scale, then, A) welcome to music, 
and B) it’s immaterial—I could have had a louder sound with a similar tail 
that would have produced the same distortion.


I’m not surprised you couldn’t hear it, as I said it required fairly high 
listening levels and I don’t know what your equipment is. It can be heard on 
a professional monitoring system. I’m monitoring off my TASCAM DM-3200, and 
it does not have a loud headphone amp—I can’t hear it there. But it’s right 
on the edge—if I boost it +6 dB I have no problem hearing it. But my 
monitoring speakers get louder than the headphones, so I can hear it there. 
And I know engineers who routinely monitor much louder than my gear can get.




On Feb 5, 2015, at 4:55 AM, Didier Dambrin di...@skynet.be wrote:

I couldn't hear any difference (through headphones), even after an insane 
boost, and even though your 16bit truncated wav was 6dB(?) lower than the 
32bit wav


But even if I could hear it, IMHO this is 13bit worth of audio inside a 
16bit file.





-Message d'origine- From: Nigel Redmon
Sent: Thursday, February 05, 2015 9:13 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

OK, here’s my new piece, I call it Diva bass—to satisfy your request for 
me to make something with truncation distortion apparent. (If it bothers 
you that my piece is one note, imagine that this is just the last note of 
a longer piece.)


I spent maybe 30 seconds getting the sound—opened Diva (default “minimoog” 
modules), turned the mixer knobs down except for VCO 1, set range to 32’, 
waveform to triangle, max release on the VCA envelope.


In 32-bit float glory:

http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer), saved 
to 16-bit wave file:


http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

You’ll have to turn your sound system up, not insanely loud, but loud. (I 
said that this would be the case before.) I can hear it, and I know 
engineers who monitor much louder, routinely, than I’m monitoring to hear 
this. My Equator Q10s are not terribly high powered, and I’m not adding 
any other gain ahead of them in order to boost the quiet part.


If you want to hear the residual easily (32-bit version inverted, summed 
with 16-bit truncated, the result with +40 dB gain via Trim plug-in):


http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav

I don’t expect the 16-bit truncated version to bother you, but it does 
bother some audio engineers. Here's the 16-bit dithered version, for 
completeness, so that you can decide if the added noise floor bothers you:


http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav




On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

Yes, I disagree with the always. Not always needed means it's 
sometimes needed, my

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Nigel Redmon
Bob Ohlsson—not sure if I really typed it that way or if it got 
autocorrected...

 On Feb 5, 2015, at 10:13 AM, Nigel Redmon earle...@earlevel.com wrote:
 
 Music is not typically full scale. My level was arbitrary—where the mixer 
 knob happened to be sitting—but the note is relatively loud in a musical 
 setting.
 
 You don’t get to use all 16 bits, all the time in music. So, to complain that 
 it might as well be 13-bit…well, if we had 13-bit converters and sample size, 
 we’d be having this discussion about 10-bit. The bass note is LOUD, compared 
 to similar bits in actual music, as I’m playing from iTunes right now.
 
 OK, I’m not trying to convince you—it was obvious that we’d have to agree to 
 disagree on this. And, as you know, I’m not overstating the importance of 
 dithering 16-bit audio, as many others do. I’m simply saying that it’s worth 
 it—the cost is virtually nothing (it’s not even done in real time, but just 
 for the final bounce to disk), doing it doesn’t harm the music in any way (if 
 you can hear the distortion, I don’t think you’ll hear 16-bit flat dither).
 
 Our biggest difference is that you are looking at this from the end-listener 
 point of view. But why would I be giving advice to the listener? They aren’t 
 the ones making the choice to dither or not. The advice is for people in the 
 position of dithering. And these people do hear it. If my advice were “Don’t 
 bother—you can’t hear it anyway”, these people would think I’m an idiot—of 
 course they can hear it. Their business is to look for junk and grunge and 
 get rid of it. I can envision Bob Katz, Bob Olson, and Bruce Swedien knocking 
 at my door, wanting to beat me with a microphone stand and pop screens for 
 telling them that they can’t hear this stuff. (Just kidding, they seem like 
 really nice guys.)
 
 The funny thing is that I’m arguing in favor of 16-bit dither with you, and 
 having a similar exchange with a mastering engineer, who is sending me 
 examples of why we really must dither at 24-bit ...
 
 
 On Feb 5, 2015, at 9:49 AM, Didier Dambrin di...@skynet.be wrote:
 
 If you mean that the peak loudness of the synth isn’t hitting full scale
 
 Yeah I mean that, since, to compensate, you crank your volume up, making it 
 13bit worth (from 14bit, after your extra -6dB gain)
 
 I mean it's always the same debate with dithering, one could demonstrate 
 exactly the same with 8bit worth of audio in a 16bit file. To me a 16bit 
 file is 16bit worth of audio, for the whole project, thus with the loudest 
 parts of the project designed to be listened to. If the entire project peaks 
 at -18dB, then it's not designed to be listened to at the same level as 
 other 16bit files, and thus it's not 16bit worth of audio. One could go 
 further & store 1 bit worth of audio in a 16bit file and point out how 
 degraded it is.
 Quantization loss is everywhere in a computer (obviously) and magnifying 
 it doesn't make a point, because you always can bring the imperceptible back 
 to perception. To me it's all about what's perceptible when the project is 
 used as intended, otherwise, even 64bit float audio should be marked as 
 lossy.
 
 
 I could have had a louder sound with a similar tail that would have 
 produced the same distortion.
 
 yeah, except that louder sound would have killed your ears, so you would 
 have cranked your listening level down, and not heard the noise anymore
 
 
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Thursday, February 05, 2015 6:22 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Oh, sorry about the 6 dB. I made the 16- and 32-bit versions, then noticed I 
 had the gain slider on the DP mixer pushed up. I pulled it back to 0 dB and 
 made new bounces, plus the residual and dithered version subsequently, but 
 must have grabbed the wrong 32-bit version for upload.
 
 I have no idea what you’re implying about “IMHO this is 13bit worth of audio 
 inside a 16bit file”. I took care to have no gain after the truncation 
 (except the accidental 6 dB on the 32-bit file). If you mean that the peak 
 loudness of the synth isn’t hitting full scale, then, A) welcome to music, 
 and B) it’s immaterial—I could have had a louder sound with a similar tail 
 that would have produced the same distortion.
 
 I’m not surprised you couldn’t hear it, as I said it required fairly high 
 listening levels and I don’t know what your equipment is. It can be heard on 
 a professional monitoring system. I’m monitoring off my TASCAM DM-3200, and 
 it does not have a loud headphone amp—I can’t hear it there. But it’s right 
 on the edge—if I boost it +6 dB I have no problem hearing it. But my 
 monitoring speakers get louder than the headphones, so I can hear it there. 
 And I know engineers who routinely monitor much louder than my gear can get.
 
 
 On Feb 5, 2015, at 4:55 AM, Didier Dambrin di...@skynet.be wrote:
 
 I couldn't hear any difference

Re: [music-dsp] Dither video and articles

2015-02-04 Thread Nigel Redmon
Great point, Steffan, and glad to hear that you did some experiments. I have 
not, but made an assumption (by considering the math involved in encoding) that 
encoding from a high resolution source is best. My current music partner is a 
long-time engineer and producer, and he has the habit of mixing 16-bit versions 
and going from there, and I’ve been badgering him to always mix to 32-bit float (or 
24-bit if he must—you know how habits go with engineers; the concept of float 
seems to bother him, and others I know), and make a 16-bit version (*only* for CD) and 
all other versions (AAC, etc.) from that.


 On Feb 4, 2015, at 2:45 AM, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:
 
 Great video!
 
 Great explanation and nice demonstration. On the other hand, I’m tempted to 
 ask whether this discussion is still relevant, given the slight changes in music 
 distribution. CD is still a medium many artists prefer for distribution, 
 mostly for the artwork and booklet that’s delivered to the buyer. As a 
 consequence, in most cases the 16-bit, dithered or noise-shaped master is 
 used for the compressed versions as well. But the question is whether this 
 process is really the best way. I made some experiments and found out that 
 AAC benefits from a 24-bit or floating-point input; dither noise rather 
 disturbs the encoding process. That said, CD final mastering should be done 
 in parallel to the creation of compressed versions. 
 
 
 Steffan   
 
 
 On 24.01.2015|KW4, at 18:49, Nigel Redmon earle...@earlevel.com wrote:
 
 “In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
 actually rethought and changed courses a couple of times)…
 
 Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
 distortion in music:
 
 https://www.youtube.com/watch?v=KCyA6LlB3As 
 https://www.youtube.com/watch?v=KCyA6LlB3As
 
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-04 Thread Nigel Redmon
I totally understood the point of your video, that dithering to 16bit isn't 
always needed - but that's what I disagree with.

Sorry, Didier, I’m confused now. I took from your previous message that you 
feel 16-bit doesn’t need to be dithered (“dithering to 16bit will never make 
any audible difference”). Here you say that you disagree with “dithering to 
16bit isn't always needed”. In fact, you are saying that it’s never needed—you 
disagree because “isn’t always needed” implies that it is sometimes 
needed—correct?


 On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:
 
 Then, it’s a no-win situation, because I could EASILY manufacture a bit of 
 music that had significant truncation distortion at 16-bit.
 
 Please do, I would really like to hear it.
 
 I have never heard truncation noise at 16bit, other than by playing with 
 levels in such a way that the peaking parts of the rest of the sound would 
 destroy your ears or be very unpleasant at best. (you say 12dB, it's already 
 a lot)
 
 I totally understood the point of your video, that dithering to 16bit isn't 
 always needed - but that's what I disagree with.
 
 
 
 -Message d'origine- From: Nigel Redmon
 Sent: Wednesday, February 04, 2015 10:59 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Hi Didier—You seem to find contradictions in my choices because you are 
 making the wrong assumptions about what I’m showing and saying.
 
 First, I’m not steadfast that 16-bit dither is always needed—and in fact the 
 point of the video was that I was showing you (the viewers) how you can judge 
 it objectively for yourself (and decide whether you want to dither). This is 
 a much better way than the usual that I hear from people, who often listen to 
 the dithered and non-dithered results, and talk about the soundstage 
 collapsing without dither, “brittle” versus “transparent”, etc.
 
 But if I’m to give you a rule of thumb, a practical bit of advice that you 
 can apply without concern that you might be doing something wrong in a given 
 circumstance, that advice is “always dither 16-bit reductions”. First, I 
 suspect that it’s below the existing noise floor of most music (even so, 
 things like slow fades of the master fader might override that, for that 
 point in time). Still, it’s not hard to manufacture something musical that is 
 subject to bad truncation distortion—a naked, low frequency, 
 low-harmonic-content sound (a synthetic bass or floor tom perhaps). Anyway, at 
 worst case, you’ve added white noise that you are unlikely to hear—and if you 
 do, so what? If broadband noise below -90 dB were a deal-breaker in recorded 
 music, there wouldn’t be any recorded music. Yeah, truncation distortion at 
 16-bits is an edge case, but the cost to remove it is almost nothing.
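
 (In code, that rule of thumb amounts to something like the sketch below: flat TPDF noise of +/-1 LSB peak added just before the rounding to 16 bits; the helper name and the numpy dependency are assumptions, not from the video:)

    import numpy as np

    def dither_to_16bit(x, rng=None):
        # x: float array in [-1, 1); returns int16 samples with flat TPDF dither
        rng = rng or np.random.default_rng()
        scaled = np.asarray(x) * 32768.0
        tpdf = rng.uniform(-0.5, 0.5, scaled.shape) + rng.uniform(-0.5, 0.5, scaled.shape)
        return np.clip(np.round(scaled + tpdf), -32768, 32767).astype(np.int16)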
 
 You say that we can’t perceive quantization above 14-bit, but of course we 
 can. If you can perceive it at 14-bit in a given circumstance, and it’s an 
 extended low-level passage, you can easily raise the volume control another 
 12 dB and be in the same situation at 16-bit. Granted, it’s most likely that 
 the recording engineer hears it and not the end-listener, but who is this 
 video aimed at if not the recording engineer? He’s the one making the choice 
 of whether to dither.
 
 Specifically:
 ..then why not use a piece of audio that does prove the point, instead? I 
 know why, it's because you can’t...
 
 First, I would have to use my own music (because I don’t own 32-bit float 
 versions of other peoples’ music, even if I thought it was fair use of 
 copyrighted material). Then, it’s a no-win situation, because I could EASILY 
 manufacture a bit of music that had significant truncation distortion at 
 16-bit. I only need to fire up one of my soft synths, and ring out some dull 
 bell tones and bass sounds. Then people would accuse me of fitting the data 
 to the theory, and this isn’t typical music made in a typical high-end studio 
 by a professional engineer. And my video would be 20 minutes long because I’m 
 not looking at a 40-second bit of music any more. Instead, I clearly 
 explained my choice, and it proved to be a pretty good one, and probably 
 fairly typical at 16-bit, wouldn’t you agree? As I mentioned at the end of 
 the video, the plan is to further examine some high-resolution music that a 
 Grammy award-winning engineer and producer friend of mine has said he will 
 provide.
 
 ...and dithering to 16bit will never make any audible difference.
 
 If you mean “never make any audible difference” in the sense that it won’t 
 matter one bit to sales or musical enjoyment, I agree. I imagine 
 photographers make fixes and color tweaks that will never be noticed in the 
 magazine or webpage that the photo will end up in either. But I guarantee 
 you, there are lots of audio engineers that will not let that practically 
 (using the word in the original “practical” sense–don’t read

Re: [music-dsp] Dither video and articles

2015-02-04 Thread Nigel Redmon
 post-edit 
 the sound. Yes it is, totally, but if you're gonna post-edit the sound, you 
 will rather keep it 32 or 24bit anyway - the argument about dithering to 
 16bit is for the final mix.
 
 To me, until proven otherwise, for normal-to-(not abnormally)-high dynamic 
 ranges, we can't perceive quantization above 14bit for audio, and 10bits for 
 images on a screen (debatable here because monitors aren't linear but that's 
 another story). Yet people seem to care less about images, and there's 
 gradient banding all over the place.
 
 
 
 
 
 
 -Message d'origine- From: Andrew Simper
 Sent: Wednesday, February 04, 2015 6:06 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles
 
 Hi Nigel,
 
 Isn't the rule of thumb in IT estimates something like: Double the
 time you estimated, then move it up to the next time unit? So 2 weeks
 actually means 4 months, but since we're in Music IT I think we should
 be allowed 5 times instead of 2, so from my point of view you've
 actually delivered on time ;)
 
 Thanks very much for doing the video! I agree with your recommended
 workflows of 16 bit = always dither, and 24 bit = don't dither. I
 would probably go further and say just use triangular dither, since at
 some time in the future you may want to pitch the sound down (ie for a
 sample library of drums with a tom you want to tune downwards, or
 remixing a song) then any noise shaped dither will cause an issue
 since the noise will become audible.
 
 All the best,
 
 Andrew
 
 -- cytomic -- sound music software --
 
 
 On 25 January 2015 at 01:49, Nigel Redmon earle...@earlevel.com wrote:
 “In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
 actually rethought and changed courses a couple of times)…
 
 Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
 distortion in music:
 
 https://www.youtube.com/watch?v=KCyA6LlB3As
 
 
 On Mar 26, 2014, at 4:45 PM, Nigel Redmon earle...@earlevel.com wrote:
 
 Since it’s been quiet…
 
 Maybe this would be interesting to some list members? A basic and intuitive 
 explanation of audio dither:
 
 https://www.youtube.com/watch?v=zWpWIQw7HWU
 
 The video will be followed by a second part, in the coming weeks, that 
 covers details like when, and when not to use dither and noise shaping. 
 I’ll be putting up some additional test files in an article on ear 
 level.com in the next day or so.
 
 For these and other articles on dither:
 
 http://www.earlevel.com/main/category/digital-audio/dither-digital-audio/
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 
 -
 Aucun virus trouvé dans ce message.
 Analyse effectuée par AVG - www.avg.fr
 Version: 2015.0.5645 / Base de données virale: 4281/9051 - Date: 03/02/2015 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-04 Thread Andrew Simper
On 4 February 2015 at 14:24, Didier Dambrin di...@skynet.be wrote:
 Andrew says he agrees, but then adds that it's important when you post-edit
 the sound. Yes it is, totally, but if you're gonna post-edit the sound, you
 will rather keep it 32 or 24bit anyway - the argument about dithering to
 16bit is for the final mix.

Unless you ship 16-bit samples as a product.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-04 Thread Didier Dambrin
Yes, I disagree with the always. Not always needed means it's sometimes 
needed, my point is that it's never needed, until proven otherwise. Your 
video proves that sometimes it's not needed, but not that sometimes it's 
needed.




-Message d'origine- 
From: Nigel Redmon

Sent: Wednesday, February 04, 2015 6:51 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

I totally understood the point of your video, that dithering to 16bit isn't 
always needed - but that's what I disagree with.


Sorry, Didier, I’m confused now. I took from your previous message that you 
feel 16-bit doesn’t need to be dithered (“dithering to 16bit will never make 
any audible difference”). Here you say that you disagree with “dithering to 
16bit isn't always needed”. In fact, you are saying that it’s never 
needed—you disagree because “isn’t always needed” implies that it is 
sometimes needed—correct?




On Feb 4, 2015, at 5:06 AM, Didier Dambrin di...@skynet.be wrote:

Then, it’s a no-win situation, because I could EASILY manufacture a bit of 
music that had significant truncation distortion at 16-bit.


Please do, I would really like to hear it.

I have never heard truncation noise at 16bit, other than by playing with 
levels in such a way that the peaking parts of the rest of the sound 
would destroy your ears or be very unpleasant at best. (you say 12dB, it's 
already a lot)


I totally understood the point of your video, that dithering to 16bit 
isn't always needed - but that's what I disagree with.




-Message d'origine- From: Nigel Redmon
Sent: Wednesday, February 04, 2015 10:59 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Didier—You seem to find contradictions in my choices because you are 
making the wrong assumptions about what I’m showing and saying.


First, I’m not steadfast that 16-bit dither is always needed—and in fact 
the point of the video was that I was showing you (the viewers) how you 
can judge it objectively for yourself (and decide whether you want to 
dither). This is a much better way than the usual that I hear from people, 
who often listen to the dithered and non-dithered results, and talk about 
the soundstage collapsing without dither, “brittle” versus “transparent”, 
etc.


But if I’m to give you a rule of thumb, a practical bit of advice that you 
can apply without concern that you might be doing something wrong in a 
given circumstance, that advice is “always dither 16-bit reductions”. 
First, I suspect that it’s below the existing noise floor of most music 
(even so, things like slow fades of the master fader might override that, 
for that point in time). Still, it’s not hard to manufacture something 
musical that is subject to bad truncation distortion—a naked, low frequency, 
low-harmonic-content sound (a synthetic bass or floor tom perhaps). Anyway, 
at worst case, you’ve added white noise that you are unlikely to hear—and 
if you do, so what? If broadband noise below -90 dB were a deal-breaker in 
recorded music, there wouldn’t be any recorded music. Yeah, truncation 
distortion at 16-bits is an edge case, but the cost to remove it is almost 
nothing.


You say that we can’t perceive quantization above 14-bit, but of course we 
can. If you can perceive it at 14-bit in a given circumstance, and it’s an 
extended low-level passage, you can easily raise the volume control 
another 12 dB and be in the same situation at 16-bit. Granted, it’s most 
likely that the recording engineer hears it and not the end-listener, but 
who is this video aimed at if not the recording engineer? He’s the one 
making the choice of whether to dither.


Specifically:
..then why not use a piece of audio that does prove the point, instead? I 
know why, it's because you can’t...


First, I would have to use my own music (because I don’t own 32-bit float 
versions of other peoples’ music, even if I thought it was fair use of 
copyrighted material). Then, it’s a no-win situation, because I could EASILY 
manufacture a bit of music that had significant truncation distortion at 
16-bit. I only need to fire up one of my soft synths, and ring out some 
dull bell tones and bass sounds. Then people would accuse me of fitting 
the data to the theory, and this isn’t typical music made in a typical 
high-end study by a professional engineer. And my video would be 20 
minutes long because I’m not looking at a 40-second bit of music any more. 
Instead, I clearly explained my choice, and it proved to be a pretty good 
one, and probably fairly typical at 16-bit, wouldn’t you agree? As I 
mentioned at the end of the video, the plan is to further examine some 
high-resolution music that a Grammy award-winning engineer and producer 
friend of mine has said he will provide.



...and dithering to 16bit will never make any audible difference.


If you mean “never make any audible difference” in the sense that it won’t

Re: [music-dsp] Dither video and articles

2015-02-04 Thread Nigel Redmon
LOL, yes on the time estimates…I headed down one path, and, no that wasn’t 
right, down another…and another…oh, and now I need to write a plug-in…3D 
buttons would be nice…and every time my videos double in length, it takes at 
least four times as long to complete…

I understood that lesson pretty well in software development. I worked for a 
company (not to be named) on an ambitious product (not to be named), and we had 
a big meeting at the long conference table to set the milestones…mid-summer, 
with initial targets in September, beta in December, show at NAMM, ship end of 
Q1…as each of these dates were announced, I’d add “of the FOLLOWING year…”, 
and everyone at the table would turn and glare at me, probably uncertain 
whether I was serious or joking. Well, guess which December it reached beta, 
which NAMM it finally showed at, which end of Q1…yeah, the following year. I 
always did pretty well in consulting estimates when I envisioned a seemingly 
reasonable (but of course unreasonably optimistic) amount of time for each 
step, added a little slop, doubling that, for each major component, then 
summing up those components and doubling the result.

I agree with you on your point about shaped dither. My feeling is that it’s a 
popular thing for companies to come out with their own, perhaps proprietary 
noise shaping…almost like a status symbol. That really doesn’t do anything of 
practical value. OK, it’s of marginal value in a technical sense at 16-bit, but 
in practical terms, where is it ever going to improve the listening experience? 
At that point, I’d just as soon leave the entire added noise floor below -90 
dB, going with TPDF. And as you say, definitely flat if there will ever be post 
processing. At 8-bits, noise shaping sure makes the area of interest in music 
much clearer (it’s debatable whether it makes the overall listening experience 
better, but if you need to focus on musical details in the middle, it’s a win). 
But that’s not a typical use case. And at 24-bits…not worth dithering in the 
first place, but does no harm so I have no gripe with people who suggest to 
dither 24-bit, but why oh why would you use shaped dither in that case? I’m 
not saying shaped dither is worthless at 16-bit, just that it’s not my choice. 
But it’s funny to see very slightly different flavors of noise shaping being 
heavily touted as a remarkable improvement over last year’s shaped dither, even 
though the differences are so far below the music that you have to do 
artificial things like listen to dithered silence with the volume up (or dither 
to a small sample size) to rationalize your choice of shaped dither flavor.
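
(For concreteness, the simplest shaped dither is first-order error feedback; the sketch below is a generic textbook version, not any vendor's curve and not something from the video:)

    import numpy as np

    def shaped_dither_16bit(x):
        # x: float array in [-1, 1); first-order error-feedback ("noise shaped") quantizer
        out = np.empty(len(x), dtype=np.int16)
        err = 0.0
        for n, s in enumerate(np.asarray(x, dtype=float) * 32768.0):
            tpdf = np.random.uniform(-0.5, 0.5) + np.random.uniform(-0.5, 0.5)
            w = s - err                                   # feed back the previous error
            q = int(np.clip(np.round(w + tpdf), -32768, 32767))
            err = q - w                                   # error (incl. dither) pushed to the next sample
            out[n] = q
        return out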


 On Feb 3, 2015, at 9:06 PM, Andrew Simper a...@cytomic.com wrote:
 
 Hi Nigel,
 
 Isn't the rule of thumb in IT estimates something like: Double the
 time you estimated, then move it up to the next time unit? So 2 weeks
 actually means 4 months, but since we're in Music IT I think we should
 be allowed 5 times instead of 2, so from my point of view you've
 actually delivered on time ;)
 
 Thanks very much for doing the video! I agree with your recommended
 workflows of 16 bit = always dither, and 24 bit = don't dither. I
 would probably go further and say just use triangular dither, since at
 some time in the future you may want to pitch the sound down (ie for a
 sample library of drums with a tom you want to tune downwards, or
 remixing a song) then any noise shaped dither will cause an issue
 since the noise will become audible.
 
 All the best,
 
 Andrew
 
 -- cytomic -- sound music software --
 
 
 On 25 January 2015 at 01:49, Nigel Redmon earle...@earlevel.com wrote:
 “In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
 actually rethought and changed courses a couple of times)…
 
 Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
 distortion in music:
 
 https://www.youtube.com/watch?v=KCyA6LlB3As
 
 
 On Mar 26, 2014, at 4:45 PM, Nigel Redmon earle...@earlevel.com wrote:
 
 Since it’s been quiet…
 
 Maybe this would be interesting to some list members? A basic and intuitive 
 explanation of audio dither:
 
 https://www.youtube.com/watch?v=zWpWIQw7HWU
 
 The video will be followed by a second part, in the coming weeks, that 
 covers details like when, and when not to use dither and noise shaping. 
 I’ll be putting up some additional test files in an article on ear 
 level.com in the next day or so.
 
 For these and other articles on dither:
 
 http://www.earlevel.com/main/category/digital-audio/dither-digital-audio/
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 

Re: [music-dsp] Dither video and articles

2015-02-04 Thread Didier Dambrin
Then, it’s a no-win situation, because I could EASILY manufacture a bit of 
music that had significant truncation distortion at 16-bit.


Please do, I would really like to hear it.

I have never heard truncation noise at 16bit, other than by playing with 
levels in such a way that the peaking parts of the rest of the sound would 
destroy your ears or be very unpleasant at best. (you say 12dB, it's already 
a lot)


I totally understood the point of your video, that dithering to 16bit isn't 
always needed - but that's what I disagree with.




-Message d'origine- 
From: Nigel Redmon

Sent: Wednesday, February 04, 2015 10:59 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Didier—You seem to find contradictions in my choices because you are 
making the wrong assumptions about what I’m showing and saying.


First, I’m not steadfast that 16-bit dither is always needed—and in fact the 
point of the video was that I was showing you (the viewers) how you can 
judge it objectively for yourself (and decide whether you want to dither). 
This is a much better way that the usual that I hear from people, who often 
listen to the dithered and non-dithered results, and talk about the 
soundstage collapsing without dither, “brittle” versus “transparent , 
etc.


But if I’m to give you a rule of thumb, a practical bit of advice that you 
can apply without concern that you might be doing something wrong in a given 
circumstance, that advice is “always dither 16-bit reductions”. First, I 
suspect that it’s below the existing noise floor of most music (even so, 
things like slow fades of the master fader might override that, for that 
point in time). Still, it’s not hard to manufacture something musical that is 
subject to bad truncation distortion—a naked, low-frequency, 
low-harmonic-content sound (a synthetic bass or floor tom perhaps). Anyway, 
at worst case, you’ve added white noise that you are unlikely to hear—and if 
you do, so what? If broadband noise below -90 dB were a deal-breaker in 
recorded music, there wouldn’t be any recorded music. Yeah, truncation 
distortion at 16-bits is an edge case, but the cost to remove it is almost 
nothing.


You say that we can’t perceive quantization above 14-bit, but of course we 
can. If you can perceive it at 14-bit in a given circumstance, and it’s an 
extended low-level passage, you can easily raise the volume control another 
12 dB and be in the same situation at 16-bit. Granted, it’s most likely that 
the recording engineer hears it and not the end-listener, but who is this 
video aimed at if not the recording engineer? He’s the one making the choice 
of whether to dither.


Specifically:
..then why not use a piece of audio that does prove the point, instead? I 
know why, it's because you can’t...


First, I would have to use my own music (because I don’t own 32-bit float 
versions of other people’s music, even if I thought it was fair use of 
copyrighted material). Then, it’s a no-win situation, because I could EASILY 
manufacture a bit of music that had significant truncation distortion at 
16-bit. I only need to fire up one of my soft synths, and ring out some dull 
bell tones and bass sounds. Then people would accuse me of fitting the data 
to the theory, and that this isn’t typical music made in a typical high-end studio 
by a professional engineer. And my video would be 20 minutes long because I’m 
not looking at a 40-second bit of music any more. Instead, I clearly 
explained my choice, and it proved to be a pretty good one, and probably 
fairly typical at 16-bit, wouldn’t you agree? As I mentioned at the end of 
the video, the plan is to further examine some high-resolution music that a 
Grammy award-winning engineer and producer friend of mine has said he will 
provide.



...and dithering to 16bit will never make any audible difference.


If you mean “never make any audible difference” in the sense that it won’t 
matter one bit to sales or musical enjoyment, I agree. I imagine 
photographers make fixes and color tweaks that will never be noticed in the 
magazine or webpage that the photo will end up in either. But I guarantee 
you, there are lots of audio engineers that will not let that practically 
(using the word in the original “practical” sense–don’t read it as “almost”) 
un-hearable zipper in the fade go. If they know it’s there, and in some 
cases they CAN actually hear it, with the volume cranked, you can tell them 
all day and all night that they are wasting their time dithering, because 
listeners will never hear it, but they will want to get rid of it. And the 
cost of that rash action to get rid of it? Basically nothing. Hence my 
advice: Dither and don’t worry about it—or listen to the residual up close 
and see if there’s nothing to worry about, if you prefer.




On Feb 3, 2015, at 10:24 PM, Didier Dambrin di...@skynet.be wrote:

Sorry, but if I sum up this video, it goes like this:
you need dithering

Re: [music-dsp] Dither video and articles

2015-02-04 Thread STEFFAN DIEDRICHSEN
Great video!

Great explanation and nice demonstration. On the other hand, I’m tempted to 
ask whether this discussion is still relevant, given the slight changes in music 
distribution. CD is still a medium many artists prefer for distribution, mostly 
for the artwork and booklet that’s delivered to the buyer. As a consequence, 
in most cases the 16-bit, dithered or noise-shaped master is used for the 
compressed versions as well. But the question is whether this process is really the 
best way. I made some experiments and found that AAC benefits from a 24-bit 
or floating-point input; dither noise rather disturbs the encoding 
process. That said, CD final mastering should be done in parallel to the 
creation of compressed versions. 


Steffan   


 On 24.01.2015|KW4, at 18:49, Nigel Redmon earle...@earlevel.com wrote:
 
 “In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
 actually rethought and changed courses a couple of times)…
 
 Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
 distortion in music:
 
 https://www.youtube.com/watch?v=KCyA6LlB3As 
 https://www.youtube.com/watch?v=KCyA6LlB3As
 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-03 Thread Didier Dambrin

Sorry, but if I sum up this video, it goes like this:
"you need dithering to 16bit and I'm going to prove it", then the video 
actually proves that you don't need it starting at 14bit, but adds "it's 
only because of the nature of the sound I used for the demo".


..then why not use a piece of audio that does prove the point, instead?
I know why, it's because you can't, and dithering to 16bit will never make 
any audible difference.
It's ok to tell the world to dither to 16bit, because it's not harmful 
either (it only distracts people from the actual problems that matter in 
mixing). But if there is such a piece of audio that makes dithering to 16bit 
audible at all, without an abnormally massive boost to hear it, I'd like to 
hear it.


Andrew says he agrees, but then adds that it's important when you post-edit 
the sound. Yes it is, totally, but if you're gonna post-edit the sound, you 
will rather keep it 32 or 24bit anyway - the argument about dithering to 
16bit is for the final mix.


To me, until proven otherwise, for normal-to-(not abnormally)-high dynamic 
ranges, we can't perceive quantization above 14bit for audio, and 10bits for 
images on a screen (debatable here because monitors aren't linear but that's 
another story). Yet people seem to care less about images, and there's 
gradient banding all over the place.







-Original Message- 
From: Andrew Simper

Sent: Wednesday, February 04, 2015 6:06 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Hi Nigel,

Isn't the rule of thumb in IT estimates something like: Double the
time you estimated, then move it up to the next time unit? So 2 weeks
actually means 4 months, but since we're in Music IT I think we should
be allowed 5 times instead of 2, so from my point of view you've
actually delivered on time ;)

Thanks very much for doing the video! I agree with your recommended
workflows of 16 bit = always dither, and 24 bit = don't dither. I
would probably go further and say just use triangular dither, since at
some time in the future you may want to pitch the sound down (ie for a
sample library of drums with a tom you want to tune downwards, or
remixing a song) then any noise shaped dither will cause an issue
since the noise will become audible.

All the best,

Andrew

-- cytomic -- sound music software --


On 25 January 2015 at 01:49, Nigel Redmon earle...@earlevel.com wrote:
“In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
actually rethought and changed courses a couple of times)…


Here’s my new “Dither—The Naked Truth” video, looking at isolated 
truncation distortion in music:


https://www.youtube.com/watch?v=KCyA6LlB3As



On Mar 26, 2014, at 4:45 PM, Nigel Redmon earle...@earlevel.com wrote:

Since it’s been quiet…

Maybe this would be interesting to some list members? A basic and 
intuitive explanation of audio dither:


https://www.youtube.com/watch?v=zWpWIQw7HWU

The video will be followed by a second part, in the coming weeks, that 
covers details like when, and when not to use dither and noise shaping. I’ll 
be putting up some additional test files in an article on earlevel.com 
in the next day or so.


For these and other articles on dither:

http://www.earlevel.com/main/category/digital-audio/dither-digital-audio/


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-03 Thread Andrew Simper
Hi Nigel,

Isn't the rule of thumb in IT estimates something like: Double the
time you estimated, then move it up to the next time unit? So 2 weeks
actually means 4 months, but since we're in Music IT I think we should
be allowed 5 times instead of 2, so from my point of view you've
actually delivered on time ;)

Thanks very much for doing the video! I agree with your recommended
workflows of 16 bit = always dither, and 24 bit = don't dither. I
would probably go further and say just use triangular dither, since at
some time in the future you may want to pitch the sound down (ie for a
sample library of drums with a tom you want to tune downwards, or
remixing a song) then any noise shaped dither will cause an issue
since the noise will become audible.
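
(For anyone who wants to try that, here's a minimal sketch of plain TPDF
(triangular) dither down to 16 bits in C++; it's just an illustration of the
general technique, with made-up names, not anyone's shipping code:)

#include <cmath>
#include <cstdint>
#include <random>

// Illustrative TPDF ("triangular") dither to 16 bits. Two independent
// rectangular random values of 1 LSB each sum to a triangular PDF of
// 2 LSB peak-to-peak, which decouples both the mean and the variance of
// the quantization error from the signal.
int16_t dither_to_16(float x, std::mt19937 &rng)        // x nominally in [-1, 1)
{
    std::uniform_real_distribution<float> lsb(-0.5f, 0.5f); // 1 LSB wide, zero mean
    float v = x * 32768.0f + lsb(rng) + lsb(rng);        // scale, add TPDF dither
    long  q = std::lround(v);                            // quantize to nearest step
    if (q >  32767) q =  32767;                          // clip to the 16-bit range
    if (q < -32768) q = -32768;
    return static_cast<int16_t>(q);
}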

All the best,

Andrew

-- cytomic -- sound music software --


On 25 January 2015 at 01:49, Nigel Redmon earle...@earlevel.com wrote:
 “In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
 actually rethought and changed courses a couple of times)…

 Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
 distortion in music:

 https://www.youtube.com/watch?v=KCyA6LlB3As


 On Mar 26, 2014, at 4:45 PM, Nigel Redmon earle...@earlevel.com wrote:

 Since it’s been quiet…

 Maybe this would be interesting to some list members? A basic and intuitive 
 explanation of audio dither:

 https://www.youtube.com/watch?v=zWpWIQw7HWU

 The video will be followed by a second part, in the coming weeks, that 
 covers details like when, and when not to use dither and noise shaping. I’ll 
 be putting up some additional test files in an article on ear level.com in 
 the next day or so.

 For these and other articles on dither:

 http://www.earlevel.com/main/category/digital-audio/dither-digital-audio/

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-01-24 Thread Nigel Redmon
“In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
actually rethought and changed courses a couple of times)…

Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
distortion in music:

https://www.youtube.com/watch?v=KCyA6LlB3As


 On Mar 26, 2014, at 4:45 PM, Nigel Redmon earle...@earlevel.com wrote:
 
 Since it’s been quiet…
 
 Maybe this would be interesting to some list members? A basic and intuitive 
 explanation of audio dither:
 
 https://www.youtube.com/watch?v=zWpWIQw7HWU
 
 The video will be followed by a second part, in the coming weeks, that covers 
 details like when, and when not to use dither and noise shaping. I’ll be 
 putting up some additional test files in an article on earlevel.com in the 
 next day or so.
 
 For these and other articles on dither:
 
 http://www.earlevel.com/main/category/digital-audio/dither-digital-audio/

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-30 Thread Didier Dambrin
no need to deal with denormals on x86's unless you use the FPU, though, as 
SSE does it for you
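
(A minimal sketch of turning on the SSE flush-to-zero / denormals-are-zero
modes this relies on, assuming an x86 build with the usual intrinsics headers
available:)

#include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE
#include <pmmintrin.h>   // _MM_SET_DENORMALS_ZERO_MODE (SSE3)

// Call once at the start of each audio thread: denormal results get flushed
// to zero and denormal inputs are treated as zero, so the slow microcode
// paths are never taken.
void enable_ftz_daz()
{
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}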




-Original Message- 
From: Nigel Redmon

Sent: Saturday, March 29, 2014 10:04 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Ah yes, the hated denormals—still not hard to deal with, but every once in a 
while, you get too comfortable and forget about them and... I meant easy in 
that most people don’t pay attention to the susceptibility of certain 
topologies to quantization error, and with doubles you *mostly* don’t have 
to. (And easy compared to laboring over parallel memory accesses to get 
reasonable performance from a 56k family, and the fact that you’re out of 
luck with high level languages…)


I, too, still like ints for the reasons you stated.

but then for double floats, why would anyone feel the need to bother with 
dithering and noise shaping the quantization?


Exactly. However, I’ve had lengthy exchanges with someone who firmly 
believes that every truncation should be dithered. The output of every 
plug-in on every channel if it’s 24-bit (TDM) or 32-bit float. I tried to 
explain that no one does it, nor is it practical or necessary, and , but I 
know he’d feel better if everything was done in doubles (including the audio 
bus, as is now available in some DAWs). And of course many believe that 
dither to a 24-bit product is a must. Then there are the “32-bit” 
DACs…sigh



On Mar 29, 2014, at 12:55 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

On 3/29/14 12:37 PM, Nigel Redmon wrote:

(Not address to you, Robert, because you know it well...)

One thing people don’t realize is that integer processors like the 56k 
family had a full-precision accumulator for 24-bit multiply results 
(48-bit), plus 8 bits of headroom (56 bit accumulator). Floating point, 
in general, truncates on every operation.


Of course if you’ve got double precision floats, which are just about 
free for native (host based) DSP), life is pretty easy...


except for the hated denormals.  (actually denormals ain't bad at all, 
it's just that they didn't want to carve out much real estate in silicon 
for dealing with denormals, so when you happen to hit one of them, it 
causes an interrupt and makes life painful for real-time operation.)


i still think that, if you toss enough bits into it, fixed-point with 
double-wide accumulators, where you have immediate access to what is 
truncated (for the sake of noise shaping of the quantization error), i 
*still* like that better than just coding it straight with 
single-precision or double-precision float.  but i like cooking 
coefficients (or whatever math that occurs between the knob setting and 
the actual numbers your DSP algorithm uses; coefficients, thresholds, 
offsets, etc.) in floating point.  maybe we have to scale it and cast it 
to an int before passing it to the DSP.
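
(As a concrete illustration of that double-wide-accumulator idea, here is a
generic "fraction saving" first-order error-feedback sketch in C++, with
made-up names rather than anyone's production code, assuming a 64-bit
accumulator quantized down to a 32-bit output:)

#include <cstdint>

// Keep the wide accumulator, quantize to the narrow word, and carry the
// truncated remainder into the next sample. The quantization error at the
// output is then first-difference (highpass) shaped instead of white.
struct ShapedQuantizer {
    int64_t err = 0;                               // remainder saved from last sample

    int32_t quantize(int64_t acc)                  // acc: e.g. a sum of 32x32-bit products
    {
        int64_t shaped = acc + err;                // add back the previous error
        int32_t out = static_cast<int32_t>(shaped >> 32);  // keep the top 32 bits
        err = shaped & 0xFFFFFFFFll;               // low 32 bits = what was truncated
        return out;
    }
};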


for an apropos example, doing dither and/or noise shaping for floating 
point is a royal pain-in-arse.  and modeling it (so you have some idea if 
you're making it better) is even more painful.  but then for double 
floats, why would anyone feel the need to bother with dithering and noise 
shaping the quantization?  but at 16 bits (like mastering a red book CD), 
that's a whole different animal.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-30 Thread Nigel Redmon
I don’t think any C compiler is going to do well for the 56k family. It’s so 
reliant on parallel memory move optimization for reasonable performance. Not 
that it can’t be done, but look at the history. The early ones could barely 
spare a cycle (I spent a while optimizing the first version of Amp Farm so all 
the components could run at the same time on the old 56002 digi card, 
pre-56300. The first release had _one_ spare clock cycle, after allowing for 
pull-up to 50 kHz, as the Digi spec called for.) And in later versions, higher 
clock speeds…there were other choices and not a tremendous motivation for the 
advanced C compiler needed for good performance, not much chance at recouping 
the cost, I think. Not that I minded coding 56k, but C++ just has so many 
advantages for big DSP projects. And you can read it a couple of years later.

Actually, I did macros quite a bit, especially by the time the 56300 cards came 
out, and higher sample rates. When allowing multiple plug-in instances per chip, 
it was much more efficient to hard code certain things than to make everything 
variable, as would be required in spots to support multiple instances. So I’d 
write the various components as macros, then the body of code would call the 
macros, unrolling what needed to be specialized between plug-in instances, 
sample rates, processor versions, and TDM card types. It made it easy to allow 
all of those specializations and still have completely optimized code, while 
writing the code once (so if I needed to add a feature next rev, I wouldn’t 
have a bunch of specialized versions to make changes to). There wasn’t always 
complete independence, like subroutines—sometimes there needed to be a “free” 
parallel memory move on the end of one macro because you knew that the macro 
next in line needed it, but that was OK.

 ever write something like this, Nigel?  7 instructions per 5-coef biquad 
 section!  (and this is pre-56300 code.)

Found this quick in old 56k plug-in code, probably better examples elsewhere:

;--- calculate first filter (with noise shaping on this filter only, at 96k)
move    a,y0  x:(r0)+,x0                ;fetch input ;fetch x:cabBpGain
mpy     x0,y0,b  y:(r4),y1              ;input * gain ;fetch y:cabBpZ1b
move    b,y:(r4)+                       ;;store updated y:cabBpZ1b
sub     y1,b  x:(r0)+,x0  y:(r4)+,y1    ;;fetch x:cabBpA1Div2 ;fetch y:cabBpZ1a
mac     x0,y1,b  y:(r4),x1              ;;fetch y:cabBpZ2a
mac     x0,y1,b  x:(r0)+,x0  y1,y:(r4)- ;;fetch x:cabBpA2 ;store updated y:cabBpZ2a
mac     -x0,x1,b

 ya know, if it's an output (rather than one of many 24-bit internal 
 signals), it's just some instructions, when a float or double are converted 
 to 24-bit fixed, why not dither it?

Yeah, I always add “but go ahead and do it if you want, it’s basically free”. 
But I’m usually responding to someone asking me what situations should be 
dithered. And if I start with “go ahead, might as well”, some people hear “you 
absolutely must dither to avoid sterile, brittle ‘digital’ sound”, and the next 
question is about dithering all the intermediate truncations and talk of 32-bit 
DACs (lol). So I like to make a point of being overly clear that I think it’s 
equivalent to waving a rubber chicken over their hard drive. Still, the 
hardcore ones usually follow with improbable reasons why their audio will be so 
pristine, and how the human ear has possibly evolved specifically to detect this 
particular kind of distortion on a subliminal level that doesn’t respond well 
to blind tests… :-/


On Mar 29, 2014, at 6:33 PM, robert bristow-johnson r...@audioimagination.com 
wrote:
 On 3/29/14 5:04 PM, Nigel Redmon wrote:
 Ah yes, the hated denormals—still not hard to deal with, but every once in a 
 while, you get too comfortable and forget about them and... I meant easy in 
 that most people don’t pay attention to the susceptibility of certain 
 topologies to quantization error, and with doubles you *mostly* don’t have 
 to.
 
 you should use double for accumulators, even if the coefs and signal are both 
 float.
 
 (And easy compared to laboring over parallel memory accesses to get 
 reasonable performance form a 56k family, and the fact that you’re out of 
 luck with high level languages…)
 
 i heard horrible things about the Motorola C compiler (like it wrote shit for 
 code) but did anyone here have any experience with the Tasking C compiler?  ( 
 http://www.tasking.com/products/dsp56xxx/ )  was it any better?
 
 actually, if you got used to the register convention, i found it easy in the 
 56K to do scalable fixed-point arithmetic on it.  then who needs a compiler?
 
 for instance, when you do arithmetic with the A and B accumulators, when you 
 compute an offset in samples:
 
move x:(r0)+,x0   ; get input to table
clr b   #(MAX_OFFSET/8388608.0),y0
mpy y0,x0,a   #table_origin,r2

Re: [music-dsp] Dither video and articles

2014-03-29 Thread Ethan Duni
So this talk of compressors in the playback chain brings up an important
point. The usual results about CD rate/depth being sufficient are referring
to the signal delivered to the final analog audio output. We all know that
higher rates and depths are appropriate/required for intermediate
processing, but historically we've relied on an assumption that the
playback chain downstream from the DAC is either very simple and linear (in
the hi-fi case) or anyway of lower fidelity than the DAC. So no need to
deliver higher rates/depths, since there's no high-fidelity intermediate
processing downstream from the converter.

But that assumption is something of a relic of the days when simply
delivering digital audio was at the cutting edge. These days your average
smartphone can handle multi-channel audio processing and mixing without
breaking a sweat, and in the entertainment space we're seeing the emergence
of quite sophisticated downstream processing chains to accommodate various
playback environments/configurations. And Theo brings up some good points
about using DSP in live audio scenarios.

E


On Fri, Mar 28, 2014 at 10:27 PM, Didier Dambrin di...@skynet.be wrote:

 that doesn't matter a single bit, *unless* you're raising your listening
 volume during quiet parts of a song (are you?), or you're running a
 compressor (most likely not on classical music)

 and if the whole song isn't mixed very loud, it can still be 12dB quieter
 (and your listening level 12dB higher) and still be 14bit worth of audio




 -Original Message- From: Andrew Simper
 Sent: Saturday, March 29, 2014 3:30 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles


  On 29 March 2014 03:31, Sampo Syreeni de...@iki.fi wrote:

 On 2014-03-28, robert bristow-johnson wrote:

 On 3/28/14 12:25 PM, Didier Dambrin wrote:

 my opinion is: above 14bit, dithering is pointless (other than for
 marketing reasons),


 14 bits???  i seriously disagree.  i dunno about you, but i still listen
 to red-book CDs (which are 2-channel, uncompressed 16-bit fixed-point).
 they would sound like excrement if not well dithered when mastered to the
 16-bit medium.


 I'd argue the same. First, it's meaningless to talk about bit depth alone.
 What we can hear is dictated first by absolute amplitude. If the user
 turns
 the knob to eleven, the number of bits doesn't matter: at some point
 you'll
 hear the noise floor, and any distortion products produced by
 quantization.
 That will even happen without user intervention when your work is used in
 a
 sampler, and because of things like broadcast compressors.. Second, at
 that
 point you'll also hear noise modulation, which sounds pretty nasty in
 things
 like reverb tails which always go to zero in the end. And third, people
 can
 hear stuff well below the noise floor. Even if the floor is set so low
 that
 you can hear it but don't really mind it, distortion products can still be
 clearly audible, and coming from hard quantization, rather annoying.


 I think the important thing to remember here is that audio content
 varies in dynamic range, you don't always have a near 0 dBFS signal
 playing all the time (although some modern mastering comes close,
 but not every makes pounding dance music Didier!). Lets take classical
 music for example, there will be sections of the full orchestra
 playing in the recording at near 0 dBFS (around 95 dB SPL), but then
 quieter sections at -45 dBFS (around 45 dB SPL). In loud sections 16
 bits without dither may be fine, but as soon as they stop then you are
 around 7 bits down so you have 9 bits left of your 16 (and this isn't
 even a reverb tail here, just quiet playing!), so dither is very much
 needed. Don't worry about riding the volume knob on your amp since
 your ears (and eyes) already have dynamic range processors built in to
 adjust the gain for you (if the background is quiet enough).

 --Andy
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp


 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-29 Thread Nigel Redmon
Ah yes, the hated denormals—still not hard to deal with, but every once in a 
while, you get too comfortable and forget about them and... I meant easy in 
that most people don’t pay attention to the susceptibility of certain 
topologies to quantization error, and with doubles you *mostly* don’t have to. 
(And easy compared to laboring over parallel memory accesses to get reasonable 
performance from a 56k family, and the fact that you’re out of luck with high 
level languages…)

I, too, still like ints for the reasons you stated.

 but then for double floats, why would anyone feel the need to bother with 
 dithering and noise shaping the quantization?

Exactly. However, I’ve had lengthy exchanges with someone who firmly believes 
that every truncation should be dithered. The output of every plug-in on every 
channel if it’s 24-bit (TDM) or 32-bit float. I tried to explain that no one 
does it, nor is it practical or necessary, and , but I know he’d feel better if 
everything was done in doubles (including the audio bus, as is now available in 
some DAWs). And of course many believe that dither to a 24-bit product is a 
must. Then there are the “32-bit” DACs…sigh


On Mar 29, 2014, at 12:55 PM, robert bristow-johnson 
r...@audioimagination.com wrote:
 On 3/29/14 12:37 PM, Nigel Redmon wrote:
 (Not address to you, Robert, because you know it well...)
 
 One thing people don’t realize is that integer processors like the 56k 
 family had a full-precision accumulator for 24-bit multiply results 
 (48-bit), plus 8 bits of headroom (56 bit accumulator). Floating point, in 
 general, truncates on every operation.
 
 Of course if you’ve got double precision floats, which are just about free 
 for native (host based) DSP), life is pretty easy...
 
 except for the hated denormals.  (actually denormals ain't bad at all, it's 
 just that they didn't want to carve out much real estate in silicon for 
 dealing with denormals, so when you happen to hit one of them, it causes an 
 interrupt and makes life painful for real-time operation.)
 
 i still think that, if you toss enough bits into it, fixed-point with 
 double-wide accumulators, where you have immediate access to what is 
 truncated (for the sake of noise shaping of the quantization error), i 
 *still* like that better than just coding it straight with single-precision 
 or double-precision float.  but i like cooking coefficients (or whatever 
 math that occurs between the knob setting and the actual numbers your DSP 
 algorithm uses; coefficients, thresholds, offsets, etc.) in floating point.  
 maybe we have to scale it and cast it to an int before passing it to the DSP.
 
 for an apropos example, doing dither and/or noise shaping for floating point 
 is a royal pain-in-arse.  and modeling it (so you have some idea if you're 
 making it better) is even more painful.  but then for double floats, why 
 would anyone feel the need to bother with dithering and noise shaping the 
 quantization?  but at 16 bits (like mastering a red book CD), that's a whole 
 different animal.
 
 -- 
 
 r b-j  r...@audioimagination.com
 
 Imagination is more important than knowledge.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Emanuel Landeholm
Dither theory is way cool. The problem with quantization noise is that it's
correlated to the signal. This is the reason it sounds so horrible. When
you're doing 1 bit dsp, dither (and noise shaping) is an absolute
requirement. When rendering to 8 bits you definitely benefit from
dithering. 16 bits and above though... Color me a skeptic. I'm sure it kind
of makes sense to apply some form of dithering when rendering a critically
sampled mix to 16 bits. This way you can turn the volume knob all the way
up and listen to that lovely 1 LSB (-96dB) FS signal without the ugly
correlation noise. But how much of music/sound is really sitting just
above the noise floor?


On Thu, Mar 27, 2014 at 7:08 AM, Nigel Redmon earle...@earlevel.com wrote:

 As far as being interested in subtractive dither, I can't say I'm
 terribly interested in it, mainly because I prefer a larger word size
 (24-bit is convenient, it can be smaller, but more than 16), and no dither
 at all...but I'd be willing to discuss it with you, Sampo ;-)

 On Mar 26, 2014, at 10:53 PM, Sampo Syreeni de...@iki.fi wrote:
  On 2014-03-26, Nigel Redmon wrote:
 
  Maybe this would be interesting to some list members? A basic and
 intuitive explanation of audio dither:
 
  https://www.youtube.com/watch?v=zWpWIQw7HWU
 
  Since it's been quiet and dither was mentioned... Is anybody interested
 in the development of subtractive dither? I have a broad idea in my mind,
 and a little bit of code (for once!) as well. Unfortunately nothing too
 easily adaptable though... Willing to copy and explain all of it, though. :)
 
  The video will be followed by a second part, in the coming weeks, that
 covers details like when, and when not to use dither and noise shaping.
 I'll be putting up some additional test files in an article on earlevel.com in the next day or so.
 
  In any case, thank you kindly. Dithering and noise shaping, both in
 theory and in practice is *still* something far too few people grasp for
 real.
  --
  Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
  +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2--

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread David Olofson
On Fri, Mar 28, 2014 at 9:56 AM, Emanuel Landeholm
emanuel.landeh...@gmail.com wrote:
[...]
 16 bits and above though... Color me a skeptic. I'm sure it kind
 of makes sense to apply some form of dithering when rendering a critically
 sampled mix to 16 bits. This way you can turn the volume knob all the way
 up and listen to that lovely 1 LSB (-96dB) FS signal without the ugly
 correlation noise. But how much of music/sound is really sitting just
 above the noise floor?

Well, dithering may not really matter much these days, as most CDs are
compressed so hard they can't be played loud anyway... :-) However, if
you have something with some dynamics left in, and try to play it
loud, it will be blatantly obvious whether it's dithered or not. You
probably won't notice the shaped dither noise unless you actively look
for it, but there's no way you can miss the GSM-ish feeping in quiet
sections and fades of a non-dithered CD.


-- 
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://consulting.olofson.net  http://olofsonarcade.com   |
'-'
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread robert bristow-johnson

On 3/28/14 12:25 PM, Didier Dambrin wrote:
my opinion is: above 14bit, dithering is pointless (other than for 
marketing reasons),


14 bits???  i seriously disagree.  i dunno about you, but i still listen 
to red-book CDs (which are 2-channel, uncompressed 16-bit fixed-point).  
they would sound like excrement if not well dithered when mastered to 
the 16-bit medium.


in fact, i think that in a very real manner, Stan Lipshitz and John 
Vanderkooy and maybe their grad student, Robert Wannamaker, did no less 
than *save* the red-book CD format in the late 80s, early 90s.  and they 
did it without touching the actual format.  same 44.1 kHz, same 
2-channels, same 16-bit fixed-point PCM words.  they did it with 
optimizing the quantization to 16 bits and they did that with (1) 
dithering the quantization and (2) noise-shaping the quantization.


the idea is to get the very best 16-bit words you can outa audio that 
has been recorded, synthesized, processed, and mixed to a much higher 
precision.  i'm still sorta agnostic about float v. fixed except that i 
had shown that for the standard IEEE 32-bit floating format (which has 8 
exponent bits), that you do better with 32-bit fixed as long as the 
headroom you need is less than 40 dB.  if all you need is 12 dB headroom 
(and why would anyone need more than that?) you will have 28 dB better 
S/N ratio with 32-bit fixed-point.
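
(Rough arithmetic behind those numbers, for anyone who wants to check: 32-bit 
fixed puts the quantization floor about 6.02 x 32 ~= 193 dB below full scale, so 
with H dB of headroom the S/N is roughly 193 - H dB. A 32-bit float's 24-bit 
mantissa keeps the quantization error a fixed distance below the signal, roughly 
153 dB by the break-even figure above, regardless of level, because the exponent 
rescales the step size. So 193 - 40 ~= 153 dB is the break-even point, and at 
12 dB of headroom the fixed-point path is ahead by about 193 - 12 - 153 ~= 28 dB, 
matching the figures quoted.)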


and all of the demonstrations will always make you hear 10bit worth 
of audio in a 16bit file and tell you to crank the volume to death


to *hear* a difference non-subtly, you may have to go down to as few as 
7 bits.  in 2008 i presented a side-by-side comparison between 
floating-point and fixed-point quantization ( 
http://www.aes.org/events/125/tutorials/session.cfm?code=T19 ) trying to 
compare apples-to-apples.  and i wanted people to readily hear 
differences.  in order to do that i had to go down to 7 bits (the floats 
had 3 exponent bits, 1 sign bit, 3 additional mantissa bits and a hidden 
leading 1).


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Theo Verelst

It will depend on your monitoring/listening equipment and situation.

I can easily hear the difference between 192 or 96 kHz at 24 bits (or 22 bits + 
exponent) and a downgrade to 48 or 44.1 kHz at 24 bits, OR to 192 or 96 kHz 
at 16 bits. Let alone both, which is easily audible.


It becomes ridiculous when using either natural (room/hall) or quality 
artificial reverb (I mean reverb that actually works): then the 
differences become funny. Unfortunately, the idea of post-processing 
hasn't become what it should be, so it could well be all kinds of 
ill-(re-)-mastered materials aren't up to the HiFi norms that once ruled.


T.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Didier Dambrin
my opinion is: above 14bit, dithering is pointless (other than for marketing 
reasons), and all of the demonstrations will always make you hear 10bit 
worth of audio in a 16bit file and tell you to crank the volume to death





-Original Message- 
From: Emanuel Landeholm

Sent: Friday, March 28, 2014 9:56 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Dither video and articles

Dither theory is way cool. The problem with quantization noise is that it's
correlated to the signal. This is the reason it sounds so horrible. When
you're doing 1 bit dsp, dither (and noise shaping) is an absolute
requirement. When rendering to 8 bits you definitely benefit from
dithering. 16 bits and above though... Color me a skeptic. I'm sure it kind
of makes sense to apply some form of dithering when rendering a critically
sampled mix to 16 bits. This way you can turn the volume knob all the way
up and listen to that lovely 1 LSB (-96dB) FS signal without the ugly
correlation noise. But how much of music/sound is really sitting just
above the noise floor?


On Thu, Mar 27, 2014 at 7:08 AM, Nigel Redmon earle...@earlevel.com wrote:


As far as being interested in subtractive dither, I can't say I'm
terribly interested in it, mainly because I prefer a larger word size
(24-bit is convenient, it can be smaller, but more than 16), and no dither
at all...but I'd be willing to discuss it with you, Sampo ;-)

On Mar 26, 2014, at 10:53 PM, Sampo Syreeni de...@iki.fi wrote:
 On 2014-03-26, Nigel Redmon wrote:

 Maybe this would be interesting to some list members? A basic and
intuitive explanation of audio dither:

 https://www.youtube.com/watch?v=zWpWIQw7HWU

 Since it's been quiet and dither was mentioned... Is anybody interested
in the development of subtractive dither? I have a broad idea in my mind,
and a little bit of code (for once!) as well. Unfortunately nothing too
easily adaptable though... Willing to copy and explain all of it, though. 
:)


 The video will be followed by a second part, in the coming weeks, that
covers details like when, and when not to use dither and noise shaping.
I'll be putting up some additional test files in an article on earlevel.com in the next day or so.

 In any case, thank you kindly. Dithering and noise shaping, both in
theory and in practice is *still* something far too few people grasp for
real.
 --
 Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
 +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2--

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Didier Dambrin

..but can we hear that?

I'd really like to be convinced by, in the same 32bit float wav file, 
something (anything, as long as it's to be listened to at normal levels) in its 
original form, then 16bit truncated, then 16bit with dithering.


Really, this shouldn't have anything to do with CDs, nor names, all it 
requires is a single, simple audio file that everyone can listen to. This 
should be the proof of all proofs, something I've been asking for since forever, and 
I never got anything I could listen to.




-Original Message- 
From: robert bristow-johnson

Sent: Friday, March 28, 2014 6:04 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Dither video and articles

On 3/28/14 12:25 PM, Didier Dambrin wrote:
my opinion is: above 14bit, dithering is pointless (other than for 
marketing reasons),


14 bits???  i seriously disagree.  i dunno about you, but i still listen
to red-book CDs (which are 2-channel, uncompressed 16-bit fixed-point).
they would sound like excrement if not well dithered when mastered to
the 16-bit medium.

in fact, i think that in a very real manner, Stan Lipshitz and John
Vanderkooy and maybe their grad student, Robert Wannamaker, did no less
than *save* the red-book CD format in the late 80s, early 90s.  and they
did it without touching the actual format.  same 44.1 kHz, same
2-channels, same 16-bit fixed-point PCM words.  they did it with
optimizing the quantization to 16 bits and they did that with (1)
dithering the quantization and (2) noise-shaping the quantization.

the idea is to get the very best 16-bit words you can outa audio that
has been recorded, synthesized, processed, and mixed to a much higher
precision.  i'm still sorta agnostic about float v. fixed except that i
had shown that for the standard IEEE 32-bit floating format (which has 8
exponent bits), that you do better with 32-bit fixed as long as the
headroom you need is less than 40 dB.  if all you need is 12 dB headroom
(and why would anyone need more than that?) you will have 28 dB better
S/N ratio with 32-bit fixed-point.

and all of the demonstrations will always make you hear 10bit worth of 
audio in a 16bit file and tell you to crank the volume to death


to *hear* a difference non-subtly, you may have to go down to as few as
7 bits.  in 2008 i presented a side-by-side comparison between
floating-point and fixed-point quantization (
http://www.aes.org/events/125/tutorials/session.cfm?code=T19 ) trying to
compare apples-to-apples.  and i wanted people to readily hear
differences.  in order to do that i had to go down to 7 bits (the floats
had 3 exponent bits, 1 sign bit, 3 additional mantissa bits and a hidden
leading 1).

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Ethan Duni
Not to be overly antagonistic, but:

 I can easily hear the difference between a 192 or 96kHz 24 (or 22 bits +
exponent) bit and downgrading to 48 or 44.1 / 24 bit OR to 192 or 96 kHz 16
bits. Let alone both, easily audible.

If you are hearing obvious differences between those settings, it's a clear
sign that (at least) one of the signal paths in question has a problem in
it. Some kind of bad dithering or sloppy reconstruction filter, or
intermodulation artifacts in your audio chain, or something like that.
That's supposing you aren't just hearing confirmation bias in the first
place - are you doing double-blind testing?

Multiple different reputable organizations have conducted independent
double-blind tests using carefully calibrated audio/listening set-ups, and
have uniformly failed to find any evidence that anybody can hear any
difference beyond 48kHz/16 bits. Higher rates and depths are important as
an intermediate stage for processing, but are complete overkill for the
final rendered audio stream.

E


On Fri, Mar 28, 2014 at 10:34 AM, Theo Verelst theo...@theover.org wrote:

 It will depend on you monitoring/listening equipment and situation.

 I can easily hear the difference between a 192 or 96kHz 24 (or 22 bits +
 exponent) bit and downgrading to 48 or 44.1 / 24 bit OR to 192 or 96 kHz 16
 bits. Let alone both, easily audible.

 It becomes ridiculous when using either natural (room/hall) or quality
 artificial reverb (I mean reverb that actually works): then the differences
 become funny. Unfortunately, the idea of post-processing hasn't become
 what it should be, so it could well be all kinds of ill-(re-)-mastered
 materials aren't up to the HiFi norms that once ruled.

 T.
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Theo Verelst
You think I'm stupid or something? I can truncate, use a very similar DA 
convertor solution, that isn't difficult.


You could argue, if the reconstruction is good, it shouldn't matter much 
to go from 48 to 44.1 for instance sure. Go try.


You could argue: my music is fine, even 128kbps mp3: fine, but mine 
isn't necessarily. Take a (normally decent) microphone and all kinds of 
decent quality sampling options, and try it out. Hell, it isn't even 
right with CDs to get an equal-loudness-curve mid-frequency range that 
is ok after every producer of CDs lifts those frequencies considerably. 
You want to have some real sampling fun? Set the mic close to feedback 
with a digital path in the signal path, and of course good monitors/PS 
system; if *that* doesn't tell you why you want more bits, I don't know 
anymore...


T.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Sampo Syreeni

On 2014-03-28, robert bristow-johnson wrote:

14 bits???  i seriously disagree.  i dunno about you, but i still 
listen to red-book CDs (which are 2-channel, uncompressed 16-bit 
fixed-point).  they would sound like excrement if not well dithered 
when mastered to the 16-bit medium.


I'd argue the same. First, it's meaningless to talk about bit depth 
alone. What we can hear is dictated first by absolute amplitude. If the 
user turns the knob to eleven, the number of bits doesn't matter: at 
some point you'll hear the noise floor, and any distortion products 
produced by quantization. That will even happen without user 
intervention when your work is used in a sampler, and because of things 
like broadcast compressors.. Second, at that point you'll also hear 
noise modulation, which sounds pretty nasty in things like reverb tails 
which always go to zero in the end. And third, people can hear stuff 
well below the noise floor. Even if the floor is set so low that you can 
hear it but don't really mind it, distortion products can still be 
clearly audible, and coming from hard quantization, rather annoying.


A fourth reason which might not be too important in audio DSP but sure 
can be in measurement and detection processes is how linear your 
circuits actually are. As soon as you apply things like matched filters, 
statistical tests or classification engines to data, those things don't 
have an absolute threshold of hearing at all, and especially with binary 
decisions, can latch onto arbitrarily faint spurs caused by 
quantization. An audio relevant example of that might be given by 
digital watermarking which is *supposed* to be inaudible, or let's say 
audio forensics, where you purposely try to unmask otherwise inaudible 
content in audio, or even things like audio coders, which shouldn't but 
still can be inordinately sensitive to inaudible statistical features of 
sound. (E.g. MP3 is ridiculously sensitive to high quality, uncorrelated 
stereo reverb. That's effectively just colored noise to the ear, and the 
precise time structure doesn't matter at all, but once you compress it, 
it eats so much bandwidth 192kbps often ceases to be transparent.)


All that means that just to be sure, it makes sense to be principled and 
always apply proper dither no matter how many bits you have, or at the latest 
when your bits leave the signal chain you personally control, and whose 
gain structure you can engineer. Certainly with any 16 bit format, 
because we already know it takes something like 21 to 22 bits to cover 
the whole dynamic range of the human ear.


In that vein, I should probably say a couple of words about subtractive 
dither, which is my particular interest. The audio standard is additive 
TPDF at two quantization levels peak to peak. That's because the process 
is onesided, so that it's easy to apply, and that amount and shape are 
in a certain sense optimum. The theory goes so that a rectangular dither 
at one level P2P decouples the first moment of the error signal from the 
utility one, adding a second similar dither signal decouples the second 
moment, and so on to infinity. Two independent 1RPDF signals summed 
means the result is white independent and its PDF is the convolution of 
the two rectangular ones, yielding the standard 2TPDF. That's sufficient 
for audio use because the second moment is just variance, so that 
decoupling it kills noise modulation. You can't hear the difference 
beyond that, but the analysis is nifty in that it shows you which 
precise statistical assumptions you can make about the noise floor, and 
e.g. that a Gaussian dither signal -- which is the limit of an infinite 
number of 1RPDF signals added by the central limit theorem -- is never 
an ideal dither because its amplitude would have to be infinite as well 
if you want to decouple the first, most important moments fully.


The fun thing about subtractive dither is that in that case 1RPDF is 
already perfect wrt all momenta, and if you add anything more, it won't 
hurt because it will be subtracted out just the same, except with 
ridiculous amounts when headroom becomes an issue. Based on that I've 
even been coding a little something which I'm hoping some anal 
audiophile might even find reassuring enough to use. The idea is to do 
subtractive dither but with 2TPDF. The point is, if you can't decode it, 
it still works as a compatibility additive format. If you can decode it, 
it's ideal and perfect, with all of the subtractive benefits such as no 
accumulation in a long signal chain. So much is old news, but then the 
tricky part is to actually make it efficient enough and usable in the 
wild.
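
(To make the round trip concrete, a bare-bones C++ sketch of the subtractive
idea, purely illustrative, with made-up names, and ignoring the keying/sync
problem the real scheme described below has to solve:)

#include <cmath>
#include <cstdint>
#include <random>

// Encoder and decoder share an identically seeded PRNG, so the decoder can
// regenerate and subtract exactly the dither the encoder added. A decoder
// that doesn't know the scheme just sees ordinary additive 2TPDF dither.

int16_t encode(float x, std::mt19937 &rng)             // x nominally in [-1, 1)
{
    std::uniform_real_distribution<float> lsb(-0.5f, 0.5f);
    float d = lsb(rng) + lsb(rng);                      // TPDF, 2 LSB peak to peak
    long  q = std::lround(x * 32768.0f + d);            // dither, then quantize
    if (q >  32767) q =  32767;
    if (q < -32768) q = -32768;
    return static_cast<int16_t>(q);
}

float decode(int16_t s, std::mt19937 &rng)              // rng seeded the same way
{
    std::uniform_real_distribution<float> lsb(-0.5f, 0.5f);
    float d = lsb(rng) + lsb(rng);                      // the very same dither values
    return (static_cast<float>(s) - d) / 32768.0f;      // subtract it back out
}

(Seed both sides the same way, e.g. std::mt19937 enc(42), dec(42), and feed the
samples through in the same order; a player that can't decode simply plays the
16-bit stream as-is.)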


The way I go about it right now is to use an efficient xor-shift RNG 
which is periodically rekeyed from a kind of randomness extractor 
operating in a closed loop over the data stream. That means that if you 
have a signal but aren't sure if it's using the system (blindly 
subtracting the dither would lead to additive 

Re: [music-dsp] Dither video and articles

2014-03-28 Thread Emanuel Landeholm

 First, it's meaningless to talk about bit depth alone


I agree with the points you raise and I'd like to add that you can also
trade bandwidth for bits.


On Fri, Mar 28, 2014 at 8:31 PM, Sampo Syreeni de...@iki.fi wrote:

 On 2014-03-28, robert bristow-johnson wrote:

  14 bits???  i seriously disagree.  i dunno about you, but i still listen
 to red-book CDs (which are 2-channel, uncompressed 16-bit fixed-point).
  they would sound like excrement if not well dithered when mastered to the
 16-bit medium.


 I'd argue the same. First, it's meaningless to talk about bit depth alone.
 What we can hear is dictated first by absolute amplitude. If the user turns
 the knob to eleven, the number of bits doesn't matter: at some point you'll
 hear the noise floor, and any distortion products produced by quantization.
 That will even happen without user intervention when your work is used in a
 sampler, and because of things like broadcast compressors.. Second, at that
 point you'll also hear noise modulation, which sounds pretty nasty in
 things like reverb tails which always go to zero in the end. And third,
 people can hear stuff well below the noise floor. Even if the floor is set
 so low that you can hear it but don't really mind it, distortion products
 can still be clearly audible, and coming from hard quantization, rather
 annoying.

 A fourth reason which might not be too important in audio DSP but sure can
 be in measurement and detection processes is how linear your circuits
 actually are. As soon as you apply things like matched filters, statistical
 tests or classification engines to data, those things don't have an
 absolute threshold of hearing at all, and especially with binary decisions,
 can latch onto arbitrarily faint spurs caused by quantization. An audio
 relevant example of that might be given by digital watermarking which is
 *supposed* to be inaudible, or let's say audio forensics, where you
 purposely try to unmask otherwise inaudible content in audio, or even
 things like audio coders, which shouldn't but still can be inordinately
 sensitive to inaudible statistical features of sound. (E.g. MP3 is
 ridiculously sensitive to high quality, uncorrelated stereo reverb. That's
 effectively just colored noise to the ear, and the precise time structure
 doesn't matter at all, but once you compress it, it eats so much bandwidth
 192kbps often ceases to be transparent.)

 All that means that just to be sure, it makes sense to be principled and
 always apply proper dither no matter how many bits you have, or latest when
 your bits leave the signal chain you personally control, and whose gain
 structure you can engineer. Certainly with any 16 bit format, because we
 already know it takes something like 21 to 22 bits to cover the whole
 dynamic range of the human ear.

 In that vein, I should probably say a couple of words about subtractive
 dither, which is my particular interest. The audio standard is additive
 TPDF at two quantization levels peak to peak. That's because the process is
 one-sided, so that it's easy to apply, and that amount and shape are in a
 certain sense optimum. The theory goes so that a rectangular dither at one
 level P2P decouples the first moment of the error signal from the utility
 one, adding a second similar dither signal decouples the second moment, and
 so on to infinity. Two independent 1RPDF signals summed means the result is
 white and independent, and its PDF is the convolution of the two rectangular
 ones, yielding the standard 2TPDF. That's sufficient for audio use because
 the second moment is just variance, so that decoupling it kills noise
 modulation. You can't hear the difference beyond that, but the analysis is
 nifty in that it shows you which precise statistical assumptions you can
 make about the noise floor, and e.g. that a Gaussian dither signal -- which
 is the limit of an infinite number of 1RPDF signals added by the central
 limit theorem -- is never an ideal dither because its amplitude would have
 to be infinite as well if you want to decouple the first, most important
 moments fully.
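
 A minimal sketch of that standard additive 2TPDF scheme, quantizing to a
 16-bit target (the scaling and the C library rand() are illustrative
 choices, not part of the argument):

     #include <stdint.h>
     #include <stdlib.h>
     #include <math.h>

     /* Additive TPDF dither, 2 LSB peak to peak: the sum of two
      * independent 1-LSB rectangular variables is triangular, which
      * decouples the first two moments of the quantization error. */
     static int16_t quantize_tpdf(double x)        /* x in [-1, 1) */
     {
         double lsb  = 1.0 / 32768.0;
         double r1   = (double)rand() / ((double)RAND_MAX + 1.0);  /* 1RPDF in [0,1) */
         double r2   = (double)rand() / ((double)RAND_MAX + 1.0);
         double tpdf = (r1 + r2 - 1.0) * lsb;      /* triangular, +-1 LSB */
         double y    = floor((x + tpdf) / lsb + 0.5);   /* round to nearest step */
         if (y >  32767.0) y =  32767.0;
         if (y < -32768.0) y = -32768.0;
         return (int16_t)y;
     }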

 The fun thing about subtractive dither is that in that case 1RPDF is
 already perfect wrt all moments, and if you add anything more, it won't
 hurt because it will be subtracted out just the same, except with
 ridiculous amounts when headroom becomes an issue. Based on that I've even
 been coding a little something which I'm hoping some anal audiophile might
 even find reassuring enough to use. The idea is to do subtractive dither
 but with 2TPDF. The point is, if you can't decode it, it still works as a
 compatibility additive format. If you can decode it, it's ideal and
 perfect, with all of the subtractive benefits such as no accumulation in a
 long signal chain. So much is old news, but then the tricky part is to
 actually make it efficient enough and usable in the wild.
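
 A minimal sketch of the subtractive idea, using plain 1RPDF (which, as
 above, already suffices in the subtractive case); the shared-seed xorshift
 stream and the 16-bit target are illustrative assumptions, and
 synchronization and headroom handling are left out:

     #include <stdint.h>
     #include <stdio.h>
     #include <math.h>

     #define LSB (1.0 / 32768.0)

     /* Subtractive dither: the encoder adds a pseudo-random offset
      * before rounding, the decoder subtracts the same offset again,
      * so the residual error is plain 1-LSB rectangular noise,
      * independent of the signal, with no accumulation over passes.
      * Encoder and decoder each run a copy of the same generator. */

     static double next_dither(uint32_t *s)     /* uniform in [-0.5, 0.5) LSB */
     {
         uint32_t x = *s;
         x ^= x << 13; x ^= x >> 17; x ^= x << 5;
         *s = x;
         return (double)x / 4294967296.0 - 0.5;
     }

     static int16_t encode(double x, uint32_t *s)   /* x in [-1, 1) */
     {
         double y = floor(x / LSB + next_dither(s) + 0.5);
         if (y >  32767.0) y =  32767.0;
         if (y < -32768.0) y = -32768.0;
         return (int16_t)y;
     }

     static double decode(int16_t q, uint32_t *s)   /* subtract the same word */
     {
         return ((double)q - next_dither(s)) * LSB;
     }

     int main(void)
     {
         uint32_t enc = 0x1234u, dec = 0x1234u;     /* shared seed */
         for (int n = 0; n < 8; n++) {
             double x = 0.25 * sin(0.1 * n);
             double y = decode(encode(x, &enc), &dec);
             printf("%d  in % .8f  out % .8f  err % .2e\n", n, x, y, y - x);
         }
         return 0;
     }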

 The way I go about it right now is to use an efficient xor-shift RNG which
 is periodically 

Re: [music-dsp] Dither video and articles

2014-03-28 Thread Sampo Syreeni

On 2014-03-28, Emanuel Landeholm wrote:

I agree with the points you raise and I'd like to add that you can 
also trade bandwidth for bits.


Totally, and you don't even need to go as far as to apply noise shaping. 
High sampling rates and linear filtering already raise that question. 
Okay, in audio DSP you'd typically want to do the real, hardcore, noise 
shaping trick, at least in release formats with insufficient bits like 
CD, but e.g. in RF work you immediately bump into these kinds of 
considerations.
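
As a crude numerical illustration of that trade, under the usual
white-quantization-noise assumption (the coarse step, the 16x factor and
the boxcar average are all just illustrative choices): oversample, dither,
quantize, then average each block, and the in-band error power drops by
roughly the oversampling factor, i.e. about two extra bits for 16x.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Sketch: quantize a slow signal with a coarse step at 16x rate,
     * TPDF-dithered, then average each block of 16.  With white
     * quantization noise the averaged error power drops by ~16x,
     * i.e. roughly two extra bits of effective resolution. */

    static double urand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

    int main(void)
    {
        const double step = 1.0 / 128.0;          /* coarse quantizer step   */
        const int    os   = 16;                   /* oversampling factor     */
        const int    blocks = 100000;
        double e_single = 0.0, e_avg = 0.0;

        for (int b = 0; b < blocks; b++) {
            double x = 0.3 * sin(0.001 * b);      /* slow "in-band" signal   */
            double acc = 0.0, q1 = 0.0;
            for (int k = 0; k < os; k++) {
                double tpdf = (urand() + urand() - 1.0) * step;
                double q = step * floor((x + tpdf) / step + 0.5);
                if (k == 0) q1 = q;               /* the non-averaged case   */
                acc += q;
            }
            double err1 = q1 - x, errN = acc / os - x;
            e_single += err1 * err1;
            e_avg    += errN * errN;
        }
        printf("error power, one sample   : %.3e\n", e_single / blocks);
        printf("error power, 16x averaged : %.3e  (%.1f dB lower)\n",
               e_avg / blocks, 10.0 * log10(e_single / e_avg));
        return 0;
    }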


One of the nicest examples is something we bumped into a little while 
ago already, after something Theo said. That's because, as soon as you 
start doing frequency analyses in the presence of noise, high bandwidth 
counter-intuitively means that the same precise noise RMS in your 
signals is spread over a wider bandwidth, so that cutting it out with a 
filter is per se already an instance of that tradeoff.


That then also means that you can't read spectra at all without invoking 
the concept of resolution bandwidth. FFTs are slightly easier compared 
to the analog swept ones because, thought of as filter banks, they're 
critically sampled by definition, but even they lead to nasty surprises 
for the uninitiated, because the length of the transform makes the 
individual bins narrower. When that's so, in a longer block the 
noise is distributed over more bins, but steady-state sinusoids -- with 
their infinitely thin Dirac spectra -- stay within a single bin and 
hence stick out like a sore thumb. Thus, with a long enough transform, 
something that in the time domain looks like nothing but noise 
suddenly has a spike in the frequency domain so high that scaling it to 
range makes the noise floor round off to invisibility.
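
A quick numerical sketch of that effect, using a direct single-bin DFT so
no FFT library is needed (the tone and noise levels are arbitrary): a
bin-centred sinusoid some 18 dB below a white noise floor in the time
domain rises out of the per-bin noise by roughly 3 dB for every doubling
of the block length.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Power in a single DFT bin, |X[k]|^2, by direct correlation. */
    static double bin_power(const double *x, int n, int k)
    {
        const double pi = acos(-1.0);
        double re = 0.0, im = 0.0;
        for (int i = 0; i < n; i++) {
            double w = 2.0 * pi * k * i / n;
            re += x[i] * cos(w);
            im -= x[i] * sin(w);
        }
        return re * re + im * im;
    }

    static double urand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

    int main(void)
    {
        const double pi = acos(-1.0);
        for (int n = 256; n <= 4096; n *= 4) {
            double *x = malloc(n * sizeof *x);
            int k_tone = n / 8;                    /* bin-centred sinusoid     */
            for (int i = 0; i < n; i++)
                x[i] = 0.1 * sin(2.0 * pi * k_tone * i / n)   /* quiet tone   */
                     + 2.0 * (urand() - 0.5);                  /* strong noise */
            double tone = bin_power(x, n, k_tone);
            double noise = 0.0;
            for (int k = 1; k <= 16; k++)          /* bins away from the tone  */
                noise += bin_power(x, n, k);
            noise /= 16.0;
            printf("N = %4d   tone bin over mean noise bin: %5.1f dB\n",
                   n, 10.0 * log10(tone / noise));
            free(x);
        }
        return 0;
    }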


Analog spectra are then even nastier, because they're fundamentally 
overcomplete representations where you have two separate things to worry 
about: the sweep rate, which sets the convolutional spreading of peaks 
due to amplitude modulation, and the resolution bandwidth, which sets 
integration time, and so both temporal responsiveness and the noise 
suppression of the matched filter.


Funnily enough, even though I've been interested in the theoretical 
aspects of DSP for some two decades now, all such woes of matched 
filtering and the like are relatively new to me. That leads me to 
suspect those aspects aren't stressed enough in modern treatments of the 
subject, and that I might not be the only digi-native who has gaps in 
their understanding regarding continuous spectra, matched filtering, 
statistical detection in the presence of noise, reading the relevant 
diagrams, and so on. And in fact, while I'm rather critical of Theo's 
and other audiophile-minded folks' claims about things like ultra-high 
fidelity formats, I must say their understanding of traditional analog 
EE and background in tinkering with it probably make them better armed 
to deal with this side of the field than I'll ever be.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Theo Verelst


A quick idea about the dithering matter, without suggesting I can shed a 
lot of light on such subjects myself: making sure the bit depth is 
properly used is understandable, even though the difference between a 
straight AD-converted 16-bit signal, coming from a natural source or 
mix/production, and a signal dithered down to 16 bits may well not be 
all that much, and there is also the risk, especially with large and 
uncomely dither noise, that the resulting noise adds up to a negative 
improvement.


Audio isn't the same as visual: even though it is of course a nice 
picture in the video, the equivalent in audio would be to dither a 
signal of full (0dBS) amplitude, and surely that can't be the objective!


Also clear from the sine-wave-with-blocky-roundings example (isn't math 
wonderful?) is that there may be confusion over whether vertical 
dithering has time-rounding effects, which would be a wrong 
suggestion, and isn't mentioned as such. But it is true that the example 
dither signal makes clear there's a need to bandlimit the dither 
signal as well, and to hope/make sure there's no correlation with real 
signals of a similar nature, or you might get phasing-like dither 
interference.
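
On the spectral shaping point, one common trick (offered here only as a
sketch of the idea, not as what the video proposes) is high-passed TPDF
dither: differencing successive uniform samples keeps the triangular PDF
but gives the dither a first-difference spectrum that rises toward the
band edge, away from the sensitive midrange.

    #include <stdint.h>
    #include <stdlib.h>
    #include <math.h>

    /* High-passed TPDF dither: the difference of two successive uniform
     * samples still has a triangular PDF, but the first-difference
     * correlation tilts its spectrum up toward the band edge, away from
     * the sensitive midrange.  A common trick, shown only as a sketch. */

    static double prev_u = 0.0;

    static double hp_tpdf(void)            /* triangular in (-1, 1), in LSB units */
    {
        double u = (double)rand() / ((double)RAND_MAX + 1.0);
        double d = u - prev_u;             /* difference of two 1RPDF values */
        prev_u = u;
        return d;
    }

    static int16_t quantize_hp(double x)   /* x in [-1, 1), 16-bit output */
    {
        double lsb = 1.0 / 32768.0;
        double y = floor(x / lsb + hp_tpdf() + 0.5);
        if (y >  32767.0) y =  32767.0;
        if (y < -32768.0) y = -32768.0;
        return (int16_t)y;
    }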


Thinking about the effect of dither on certain signal properties, like 
the usually pre-equalized midrange on CDs: it may be important to 
dither at the right point in your production path, or you will emphasize 
certain predictable signal filter properties, like the impulse response 
of the equalizer used. Furthermore, there are signal correlation 
computations (like an averaged FFT) which may or may not be 
influenced by dither, and which may reveal pretty low-level (-50dBS) 
pulsing sub-bands very important for the perceptual quality of audio 
that could get messed up by wrongly tuned dither; in pro-audio 
studio production, those, as well as (natural or well-done artificial) 
reverb, give natural dither patterns!


T.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

