dear all,

i'm trying to measure the difference between two equivalent but not identical 
processes.
right now i'm feeding some test signals to both algorithms at the same time and 
subtracting one output signal from the other.

now i'm looking for something to quantify the error signal.
from statistics i know there is something like the "mean squared error",
so i'm squaring the error signal and taking a (running) average.
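
in rough python/numpy terms it looks something like this (just a sketch, 
the smoothing coefficient alpha is an arbitrary example value):

import numpy as np

def running_mse(out_a, out_b, alpha=0.001):
    # error signal: difference of the two processes' outputs
    err = out_a - out_b
    mse = np.empty_like(err, dtype=float)
    acc = 0.0
    for n in range(len(err)):
        # leaky integrator as the running average of the squared error
        acc += alpha * (err[n] * err[n] - acc)
        mse[n] = acc
    return mse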

mostly i'm getting numbers very close to zero, and a gut feeling tells me i 
want to see those on a dB scale.
so i'm taking the logarithm and multiplying by 10, since i have already squared 
the values.
(as far as i can see, this is equivalent to an RMS measurement expressed in dB.)
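
again just a sketch (eps is only a guard against log(0), not part of the 
measurement itself):

import numpy as np

def mse_to_db(mse, eps=1e-30):
    # 10*log10 of the mean squared error; since the values are already
    # squared this equals 20*log10 of the RMS of the error signal.
    return 10.0 * np.log10(np.maximum(mse, eps))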

is there a correct/better/preferred way of doing this?

besides a listening test, in the end i want a simple measure of the difference 
between the two processes that comes close to our perception of that 
difference. does that make sense?

thanks for any comments,
volker.



