Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-30 Thread Theo Verelst

Interesting story about the interpolation noise from very highly
oversampled signal approximations. I tend to think that if it doesn't
concern an actual sinc function of significant width and accuracy, then
the up-sampling is wrong unless the signal is prepared for it.

I can imagine that in sample-processing machines and software you could
do effect processing in an oversampled domain which is arrived at by
having knowledge of the sample signals and possibly their detune filters,
and that it is possible to compute sample sets which allow oversampling
by relatively simple or cheap DSP operations to fulfill certain
accuracy criteria, such as low noise, frequency accuracy, or effect
accuracy (like the fractional sampling accuracy in a phaser
effect, for instance).

The linear interpolation I used a decade ago for chorusing on a moderately
strong DSP of the time was OK because of the signal properties and the tunings of
the chorus I programmed; of course there's a lot to be said for using more than
zero-order interpolation in general. I've looked at Taylor expansions for the
sinc function and its possible accuracies, for instance. In mechanical design,
it was one of the early computer math issues to use all kinds of interpolation
schemes for a variety of purposes, with some terminology I suppose from the early
days of the industrial revolution. However, a good understanding of these should
be based on an understanding of what they are for. Some interpolations are for
minimal stress, some for minimal distance given a certain curvature, others
are statistically neutral in some sense, etc. More interesting is to look at
higher-dimensional curves and surfaces, or to try these out in functional analysis
or computations, which is far outside the scope here and not very useful for
normal audio subjects.

Unfortunately the averaging and continuity considerations of the various
interpolation curves and their mathematical properties aren't very well correlated
with audio signals, and certainly not necessarily with sampling issues. So think
about what some have suggested here: what is the filter kernel that you're putting
over your signal when using them, and what does the sampled nature add in terms of
misery on the side?

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-28 Thread robert bristow-johnson

On 8/26/15 9:47 PM, Ethan Duni wrote:

15.6 dB  +  (12.04 dB) * log2( Fs/(2B) )

Oh I see, you're actually taking the details of the sinc^2 into account.


really, just the fact that the sinc^2 has nice deep zeros at every 
integer multiple of Fs (except 0).


What I had in mind was more of a worst-case analysis where we just 
call the sin() component 1 and then look at the 1/n^2 decay (which is 
12dB per octave). Which we see in the second term, but of course the 
sine's contribution also whacks away a certain portion of energy, 
hence the 15.6dB offset.


well, it's more just how the strengths of the images add up.



On the other hand if you're interested in something like the 
spurious-free dynamic range, then the simple 12dB/octave estimate is 
appropriate. The worst-case components aren't going to get attenuated 
at all by the sin(), just the 1/k^2. I tend to favor that in cases 
where we can't be confident that the noise floor in question is (at 
least approximately) flat.



but whether you're assuming a flat spectrum up to B or just a single 
sinusoidal component at the maximum frequency of B, it's the sin() (or 
the sin^4 in the power spectrum of the images resulting from linear 
interpolation) that is the mathematical force in reducing the image 
strength in the oversampled case where 2B << Fs.  so it's *both* the 
sin^4 *and* the 1/k^4 that is used.  the 1/k^4 thing is needed for the power 
of all those images to add up to a decent finite number.  but it's the 
sin() that puts a stake in the heart of the image and that is 
quantitatively quite useful.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-26 Thread robert bristow-johnson

On 8/25/15 7:08 PM, Ethan Duni wrote:
if you can, with optimal coefficients designed with the tool of your 
choice, so i am ignoring any images between B and Nyquist-B, upsample 
by 512x and then do linear interpolation between adjacent samples for 
continuous-time interpolation, you can show that it's something like 
12 dB S/N per octave of oversampling plus another 12 dB.  that's 120 
dB.  that's how i got to 512x.


Wait, where does the extra 12dB come from? Seems like it should just 
be 12dB per octave of oversampling. What am I missing?


okay, this is painful.  in our 2-decade old paper, Duane and i did this 
theoretical approximation analysis for drop-sample interpolation, and i 
did it myself for linear, but we did not put in the math for linear 
interpolation in the paper.


so, to satisfy Nyquist (or Shannon or Whittaker or the Russian guy) the 
sample rate Fs must exceed 2B which is twice the bandwidth.  the 
oversampling ratio is defined to be Fs/(2B).  and in octaves it is 
log2(Fs/(2B)).  all frequencies in your baseband satisfy |f| < B and if 
it's highly oversampled, 2B << Fs.


now, i'm gonna assume that Fs is so much (like 512x) greater than 2B 
that i will assume the attenuation due to the sinc^2 for |f| < B is 
negligible.  i will assume that the spectrum between -B and +B is 
uniformly flat (that's not quite worst case, but it's worser case than 
what music, in the bottom 5 or 6 octaves, is).  so given a unit height 
on that uniform power spectrum, the energy will be 2B.


so, the k-th image (where k is not 0) will have a zero of the sinc^2 
function going right through the heart of it.  that's what's gonna kill 
the son-of-a-bitch.  the energy of that image is:



   k*Fs+B
 integral{ (sinc(f/Fs))^4 df }
   k*Fs-B


since it's power spectrum it's sinc^4 for linear and sinc^2 for 
drop-sample interpolation.


changing the variable of integration


   +B
 integral{ (sinc((k*Fs+f)/Fs))^4 df }
   -B



   +B
 integral{ (sinc(k+f/Fs))^4 df }
   -B



 sinc(k+f/Fs) =  sin(pi*(k+f/Fs))/(pi*(k+f/Fs))

  =  (-1)^k * sin(pi*f/Fs)/(pi*(k+f/Fs))

  =approx  (-1)^k  *  (pi*f/Fs)/(pi*k)

  since  |f| < B << Fs

raising to the 4th power gets rid of the toggling polarity.  so now it's

+B
 1/(k*Fs)^4 * integral{ f^4 df }  =  (2/5)/(k*Fs)^4 * B^5
-B
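that last approximation is easy to sanity-check numerically; a quick sketch (the values of B, Fs and k here are arbitrary, chosen so that B << Fs):

```python
import math

def sinc(x):
    # normalized sinc as in the derivation: sin(pi*x)/(pi*x)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

B = 1.0          # bandwidth (arbitrary units)
Fs = 512.0       # heavily oversampled rate, so B << Fs
k = 1            # first image, centered on k*Fs

# midpoint-rule integral of sinc^4 over the image band [k*Fs - B, k*Fs + B]
N = 20000
df = 2 * B / N
numeric = sum(sinc((k * Fs - B + (i + 0.5) * df) / Fs) ** 4
              for i in range(N)) * df

# closed-form small-f approximation: (2/5) * B^5 / (k*Fs)^4
approx = (2.0 / 5.0) * B ** 5 / (k * Fs) ** 4

print(numeric, approx)   # agree to better than a part in a thousand
```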


now you have to sum up the energies of all of the bad images (we are 
assuming that *all* of those images, *after* they are beaten down, will 
somehow fall into the baseband during resampling and their energies will 
team up).  there are both negative and positive frequency images to add 
up.  (but we don't add up the energy from the image at the baseband, 
that's the good image.)


+inf   +inf
2 * SUM{ (2/5)/(k*Fs)^4 * B^5 }  =  B*(4/5)*(B/Fs)^4 * SUM{1/k^4}
k=1k=1


the summation on the right is (pi^4)/90

so the energy of all of the nasty images (after being beaten down due to 
the application of the sinc^2 that comes from linear interpolation 
between the subsamples) is


   B*(4/5)*(B/Fs)^4 * (pi^4)/90

and the  S/N ratio is 2B divided by that.

   (  (2/450) * (2B/Fs)^4 * (pi/2)^4  )^-1

in dB we use 3.01*log2() because this is an *energy* ratio, not a 
voltage ratio.


   -3.01*log2( (2/450) * (2B/Fs)^4 * (pi/2)^4 )

 =  3.01*log2(225) + 12.04*log2(2/pi)  +  12.04*log2( Fs/(2B) )

 =  15.6 dB  +  (12.04 dB) * log2( Fs/(2B) )


so, it seems to come out a little more than 12 dB.  i think Duane did a 
better empirical analysis and he got it slightly less.


but, using linear interpolation between subsamples, you should get about 
12 dB of S/N for every octave of oversampling plus 15 dB more.
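plugging numbers into that result, as a sketch (only the constants from the derivation above are used):

```python
import math

# partial sum of 1/k^4: converges to pi^4/90 (zeta(4)), as used above
s = sum(1.0 / k ** 4 for k in range(1, 10001))
print(abs(s - math.pi ** 4 / 90))        # tiny: the series is well converged

# S/N in dB:  -3.01*log2( (2/450) * (2B/Fs)^4 * (pi/2)^4 )
# note 3.01*log2(x) is the same as 10*log10(x) for an energy ratio
def snr_db(oversampling):                # oversampling ratio = Fs/(2B)
    ratio = (2.0 / 450.0) * (1.0 / oversampling) ** 4 * (math.pi / 2) ** 4
    return -10.0 * math.log10(ratio)

print(round(snr_db(1), 2))     # 15.68 dB: the constant offset
print(round(snr_db(512), 1))   # about 124 dB at 512x (9 octaves * 12.04 + 15.68)
```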





but the difference in price is in memory only, *not* in computational burden.

Well, you don't get the full cost in computational burden since you 
can skip computing most of the upsamples.


exactly and it's the same whether you upsample by 32x or 512x.  but 
upsampling by 512x will cost 8 times the memory to store coefficients.


But the complexity still goes up with increasing oversampling factor 
since the interpolation filter needs to get longer and longer, no?


no.  that deals with a different issue, in my opinion.  the oversampling 
ratio determines the number of discrete (and uniformly spaced) 
fractional delays.  there is one FIR filter for each fractional delay.  
the number of coefs in the FIR filter is a performance issue regarding 
how well you're gonna beat down them images in between baseband and the 
next *oversampled* image.  in the analysis above, i am assuming all of 
those in-between images are beaten down to zero.  it's a crude analysis 
and i just wanted to see what the linear interpolation (on the upsampled 
signal) does for us.


So there is some balancing of computational 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-26 Thread Ethan Duni
15.6 dB  +  (12.04 dB) * log2( Fs/(2B) )

Oh I see, you're actually taking the details of the sinc^2 into account.
What I had in mind was more of a worst-case analysis where we just call the
sin() component 1 and then look at the 1/n^2 decay (which is 12dB per
octave). Which we see in the second term, but of course the sine's
contribution also whacks away a certain portion of energy, hence the 15.6dB
offset.

On the other hand if you're interested in something like the spurious-free
dynamic range, then the simple 12dB/octave estimate is appropriate. The
worst-case components aren't going to get attenuated at all by the sin(),
just the 1/n^2. I tend to favor that in cases where we can't be confident
that the noise floor in question is (at least approximately) flat.

so, it seems to come out a little more than 12 dB.

I long ago adopted an informal rule that when an engineer says 6dB he
means 20*log10(2), and not exactly 6dB. And likewise for 3dB, 12dB, etc.
Doubly so when talking about the rolloff of linear systems, nobody ever
splits that hair... IIRC the prof in my freshman linear circuits class
instructed us to fudge it this way immediately after introducing the
concept of dB :]
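For the record, the factors that rule of thumb rounds off:

```python
import math

# the engineer's "6 dB" is a factor of 2 in amplitude, "3 dB" a factor of 2
# in power; neither is exactly 6 or 3
print(20 * math.log10(2))   # 6.0206  ("6 dB" per amplitude doubling)
print(10 * math.log10(2))   # 3.0103  ("3 dB" per power doubling)
print(40 * math.log10(2))   # 12.041  (the "12 dB per octave" above)
```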

the number of coefs in the FIR filter is a performance issue regarding how
well you're gonna
beat down them images in between baseband and the next *oversampled* image

Right I see what you mean. I had mixed up my arithmetic on the lengths of
the filters as a function of oversampling ratio.

i think we're on the same page.  ain't we?

Yeah, I was unclear on which scenario(s) the aliasing analysis was supposed
to apply to.

E




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-24 Thread Sampo Syreeni

On 2015-08-19, Ethan Duni wrote:

and it doesn't require a table of coefficients, like doing 
higher-order Lagrange or Hermite would.


Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling?


In my (admittedly limited) experience these sorts of tradeoffs come when 
you need to resample generally, so not just downwards from the original 
sample rate but upwards as well, and you're doing it all on a dedicated 
DSP chip.


In that case, when your interpolator approaches and goes beyond the 
Nyquist frequency of the original sample, you need longer and longer 
approximations of the sinc(x) response, with wonkier and wonkier 
recursion formulas for online calculation of the coefficients of the 
interpolating polynomial. Simply because of aliasing suppression, and 
because you'd like to calculate the coefficients on the fly to save on 
memory bandwidth.


However, if you suitably resample both in the output sampling frequency 
and in the incoming one, you're left with some margin as far as the 
interpolator goes, and it's always working downwards, so that it doesn't 
actually have to do aliasing suppression. An arbitrary low order 
polynomial is easier to calculate on the fly, then.


The crucial part on dedicated DSP chips is that they can generate 
radix-2 FFT coefficients basically for free, with no table lookup and 
severely accelerated inline computation as well. That means that you can 
implement both the input and the output side anti-aliasing/anti-imaging 
filters via polyphase Fourier methods, for much less effort than the 
intervening arbitrary interpolation step would ever require. When you do 
that right, the code is still rather complex since it needs to 
dynamically mipmap the input sample for larger shifts in frequency, 
but when done right, you can also get essentially perfect and perfectly 
flexible results from a signal chain with perhaps 3x the computational 
load of a baseband third order interpolation polynomial, absent hardware 
acceleration.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-24 Thread robert bristow-johnson

On 8/24/15 11:18 AM, Sampo Syreeni wrote:

On 2015-08-19, Ethan Duni wrote:

and it doesn't require a table of coefficients, like doing 
higher-order Lagrange or Hermite would.


Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling?


that was my premise for using linear interpolation *between* adjacent 
oversampled (by 512x) samples.  if you can, with optimal coefficients 
designed with the tool of your choice, so i am ignoring any images 
between B and Nyquist-B, upsample by 512x and then do linear 
interpolation between adjacent samples for continuous-time 
interpolation, you can show that it's something like 12 dB S/N per 
octave of oversampling plus another 12 dB.  that's 120 dB.  that's how i 
got to 512x.  some apps where you might care less about inharmonic 
energy from images folding over (a.k.a. aliasing), you might not need 
to go that high of whatever-x.


but the difference in price in memory only, *not* in computational 
burden.  whether it's 64x or 512x, the computational cost is separating 
the index into integer and fractional parts, using the integer part to 
select the N samples to combine and the fractional part to tell you how 
to combine them.  if it's 512x, the fractional part is broken up into 
the top 9 bits to select your N coefficients (and the neighboring set of 
N coefficients) and the rest of the bits are for the linear 
interpolation.  with only the cost of a few K of words (i remember the 
days when 4K was a lotta memory :-), you can get to arbitrarily good 
with the cost of 2N+1 MAC instructions.
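a sketch of that bookkeeping in Python. the Hann-windowed sinc table below is only a stand-in for "optimal coefficients designed with the tool of your choice", and L, N and the test signal are made-up values:

```python
import math

L = 512   # oversampling ratio: number of tabulated fractional delays
N = 16    # FIR taps per fractional delay

def coeff(p, n):
    # tap n of the coefficient set for fractional delay p/L:
    # a Hann-windowed sinc, standing in for a properly designed prototype
    x = n - N / 2 - p / L
    s = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    w = 0.5 + 0.5 * math.cos(math.pi * x / (N / 2)) if abs(x) < N / 2 else 0.0
    return s * w

# L+1 coefficient sets so the linear interpolation can always look one set ahead
table = [[coeff(p, n) for n in range(N)] for p in range(L + 1)]

def interp(x, t):
    """Evaluate x at fractional index t: the integer part selects the N input
    samples, the top bits of the fraction select a coefficient set, and the
    remaining fraction linearly interpolates between adjacent sets."""
    i = int(t)
    frac = t - i
    p = int(frac * L)          # coefficient-set index (the "top 9 bits")
    a = frac * L - p           # leftover fraction for the linear interpolation
    acc = 0.0
    for n in range(N):         # roughly the 2N+1 MACs mentioned above
        c = table[p][n] + a * (table[p + 1][n] - table[p][n])
        acc += c * x[i + n - N // 2]
    return acc

# usage: resample a slow sinusoid at an arbitrary fractional index
sig = [math.sin(2 * math.pi * 0.02 * j) for j in range(64)]
print(interp(sig, 31.37))   # close to sin(2*pi*0.02*31.37)
```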


with drop-sample interpolation between fractional delays (6 dB per 
octave of oversampling), then you need another 10 octaves of 
oversampling, 512K*N words of memory, but only N MAC instructions per 
output sample.


when it's using Hermite or Lagrange then the S/N is 24 dB per octave of 
oversampling, i don't think it's worth it that you need only 16x or 32x 
oversampling (that saves only memory and the cost of computation becomes 
4 times worse or worser).  maybe in an ASIC or an FPGA, but in DSP code 
or regular-old software, i don't see the advantage of cubic or 
higher-order interpolation unless memory is *really* tight and you gotta 
lotta MIPs to burn.




In my (admittedly limited) experience these sorts of tradeoffs come 
when you need to resample generally, so not just downwards from the 
original sample rate but upwards as well, and you're doing it all on a 
dedicated DSP chip.


In that case, when your interpolator approaches and goes beyond the 
Nyquist frequency of the original sample, you need longer and longer 
approximations of the sinc(x) response,


you need that to get sharper and sharper brick-wall LPFs to whack those 
511 images in between the baseband and 512x.


then the sinc^2 function in the linear interpolation blasts the hell 
outa all them images that are at multiples of 512x (except the 0th 
multiple of course).  drop-sample interpolation would have only a sinc 
function doing it whereas an Mth-order B-spline would have a sinc^(M+1) 
function really blasting the hell outa them images.
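to attach numbers to that, a small sketch evaluating the worst-case point (the edge of the first image band) for each interpolator order, assuming 512x oversampling:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

L = 512                 # oversampling ratio Fs/(2B), so B/Fs = 1/(2*L)
edge = 1 + 1 / (2 * L)  # worst point of the first image band, in units of Fs

for power, name in [(1, "drop-sample    (sinc)"),
                    (2, "linear         (sinc^2)"),
                    (4, "cubic B-spline (sinc^4)")]:
    atten = -20 * math.log10(abs(sinc(edge)) ** power)
    print(f"{name}: {atten:6.1f} dB attenuation at the image edge")
```

with these numbers the drop-sample kernel leaves about 60 dB, linear about 120 dB, and a cubic B-spline about 240 dB of attenuation at the edge of the first image, matching the 6/12/24 dB-per-octave-of-oversampling pattern above.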


with wonkier and wonkier recursion formulas for online calculation of 
the coefficients of the interpolating polynomial. Simply because of 
aliasing suppression, and because you'd like to calculate the 
coefficients on the fly to save on memory bandwidth.


However, if you suitably resample both in the output sampling 
frequency and in the incoming one, you're left with some margin as far 
as the interpolator goes, and it's always working downwards, so that 
it doesn't actually have to do aliasing suppression. An arbitrary low 
order polynomial is easier to calculate on the fly, then.


The crucial part on dedicated DSP chips is that they can generate 
radix-2 FFT coefficients basically for free, with no table lookup


yeah, but you get accumulated errors as you compute the twiddle factors 
on-the-fly.  either in linear or bit-reversed order.


and severely accelerated inline computation as well. That means that 
you can implement both the input and the output side 
anti-aliasing/anti-imaging filters via polyphase Fourier methods,


the way that Bob Adams did it, with a single side (the input side) was 
to step through the sinc-like LPF impulse response at smaller steps (and 
interpolate) to LPF the input further for down-sampling.


admittedly, i haven't worried about the natural foldover from 
downsampling and just assume there wasn't stuff below the original 
Nyquist that you had to worry about.  or maybe my optimal filter kernel 
started to whack stuff at a frequency below 20 kHz.  but i didn't change 
it for downsampling.  i guess i was lazy.


for much less effort than the intervening arbitrary interpolation step 
would ever require. When you do that right, the code is still rather 
complex since it needs to dynamically mipmap the input sample for 
larger 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
On 22/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
 version thereof? My point was that there are no effects of resampling
 visible in the graphs.

And you're wrong - all those 88 alias images are effects of resampling...

 That has nothing to do with exactly how the graphs
 were generated, nor does insisting that the graphs are slightly noisy
 address the point.

Well, it was *you* who insisted that it displays a graphed sinc^2
curve, and not a resampled signal... And you were wrong.

 Indeed, you've already conceded that the resampling effects are not visible
 in the graphs several posts back.

Aren't all those 88 alias images effects of resampling?
What are those, if not effects of resampling?

You claimed no upsampling is involved, yet when I upsample noise, I
get exactly that graph. So it seems you were wrong.

 It seems like you're just casting about
 for some other issue that you can tell yourself you won, and then call me
 names, to feed your fragile ego.

Well, if you do not see that the curve is NOT a graphed sinc^2, but
rather, a noisy curve seemingly from resampled noise, then you have
some underlying problem.

 Honestly, it's a pretty sad spectacle and I'm embarrassed for you.

I'm embarrassed for you.

 It really would be better for everyone - including
 you - if you could interact in a good-faith, mature manner. Please make an
 effort to start doing so, or you're pretty soon going to find that nobody
 here will interact with you any more.

Yet - for some reason - you keep interacting with me for the past 22
mails you wrote. Maybe to feed your fragile ego and prove that you
won... (?)

 By the way, there's no reason for any jaggedness to appear in the plots,
 given the lengths of data you were talking about.

There *is* a reason for jaggedness to appear in the plots. If you don't
believe it, try it yourself - take some white noise sampled at 500 Hz,
and resample it to 44.1 kHz. The shorter the length, the more jagged
the spectrum will look.
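That length dependence is easy to demonstrate with Welch-style averaging of white-noise periodograms (all parameters here are arbitrary choices, not Olli's):

```python
import cmath
import math
import random

random.seed(1)

def avg_spectrum(noise, seg):
    """Average the squared-magnitude DFT over consecutive length-`seg`
    segments (Welch-style, rectangular window, no overlap)."""
    nseg = len(noise) // seg
    bins = [0.0] * seg
    for s in range(nseg):
        x = noise[s * seg:(s + 1) * seg]
        for b in range(seg):
            X = sum(x[n] * cmath.exp(-2j * math.pi * b * n / seg)
                    for n in range(seg))
            bins[b] += abs(X) ** 2 / seg
    return [v / nseg for v in bins]

def spread(bins):
    # relative ripple: standard deviation across bins over the mean
    m = sum(bins) / len(bins)
    return math.sqrt(sum((v - m) ** 2 for v in bins) / len(bins)) / m

noise = [random.gauss(0, 1) for _ in range(4096)]
few = avg_spectrum(noise[:256], 32)    # 8 averages: still jagged
many = avg_spectrum(noise, 32)         # 128 averages: much smoother

print(spread(few), spread(many))   # the ripple shrinks as more data is averaged
```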

Besides, we do not know how much data Olli processed, so you cannot
say there's no reason for jaggedness in his graph - as you do not
know how he derived his graph. So your argument is invalid again.

 Producing a very smooth graph from a long enough segment of data is
 straightforward, if you use appropriate techniques (not just one big FFT of
 the whole thing, that won't ever get rid of the noisiness no matter how
 much data you throw at it).

Exactly. And that's what I used (spectral averaging over a long
segment), yet it is STILL noisy, if the white noise segment is not
very long.

So your argument is wrong again...

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
So you claim that the graph depicts a sinc^2 graph, and it shows the
frequency response of a continuous time linearly interpolated signal,
and involves no resampling.

That is false. That is not how Olli created his graph. First, the
continuous time signal (which, by the way, already contains an
infinite amount of aliases of the original spectrum) exists only in
your imagination - I'm almost 100% certain Olli made his graph by
resampling noise. The telltale signs of this are:

- the curves on the graph are jagged/noisy, typical of averaged white
noise spectrum
- if you watch closely, the same jaggedness repeats at a 2*PI
frequency interval, showing that they are aliases of the original
spectrum, which was noisy.

Therefore, Olli's graph does *not* depict a continuous time signal, but
rather, a noisy signal that was resampled to 44.1 kHz. Therefore, what
you see on the graph is the artifacts from the resampling.

Therefore, all your arguments are invalid.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
So let me get this straight - you have an *imaginary* graph in your
head, depicting the frequency response of a continuous time linearly
interpolated signal, and you keep arguing about this *imaginary* graph
(maybe to feed your fragile ego and to prove that you won).

That is *not* what you see on Olli's graph, as been discussed in
depth. So what you're arguing about, is not Olli's graph that was
presented, but rather, an *imaginary* graph, that exists only in your
head.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
And besides, no one ever said that Olli's graph depicts analytical
frequency responses of continuous time interpolators. The graphs come
from a musicdsp.org code entry:

http://musicdsp.org/archive.php?classid=5#49

There's no comment whatsoever, just the code and the graphs.

If you read his 65 page long paper on interpolators, he doesn't
discuss analytical continuous time interpolator frequency responses
whatsoever. He just shows their graphs, and tells where they have
zeros in the response. No formulas for analytical frequency responses,
at all - seemingly he is not interested in that. I just skimmed
through his paper again, and the closest thing that he has in it, are
polynomial approximations for frequency responses in the passband.
About that's all, other than that, no frequency response formulas.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Sampo Syreeni

On 2015-08-18, Tom Duffy wrote:

In order to reconstruct that sinusoid, you'll need a filter with an 
infinitely steep transition band. You've demonstrated that SR/2 
aliases to 0Hz, i.e. DC. That digital stream of samples is not 
reconstructable.


The conjugate sine to +1, -1, +1, -1, ... is 0, 0, 0, 0... Just phase 
shift the original sine at the Nyquist frequency.


That'll show you that that precise signal cannot be reconstructed 
without resorting to complex continuation of the signal, on the Fourier 
plane.
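A minimal numeric illustration of the two Nyquist-frequency sinusoids (sketch; the sample rate is normalized so that each sample advances the phase by pi):

```python
import math

# sample cos and sin at exactly the Nyquist frequency: f = Fs/2, so the
# phase advances by pi per sample
cos_nyq = [round(math.cos(math.pi * n)) for n in range(8)]
sin_nyq = [round(math.sin(math.pi * n)) for n in range(8)]

print(cos_nyq)   # [1, -1, 1, -1, 1, -1, 1, -1]
print(sin_nyq)   # [0, 0, 0, 0, 0, 0, 0, 0]

# more generally, A*sin(pi*n + phi) samples to A*sin(phi)*(-1)^n:
# amplitude and phase can't be separated from the samples alone,
# which is why the signal can't be reconstructed
```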

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
On 22/08/2015, Sampo Syreeni de...@iki.fi wrote:

 The conjugate sine to +1, -1, +1, -1, ... is 0, 0, 0, 0... Just phase
 shift the original sine at the Nyquist frequency.

Let me ask: what do you mean by "conjugate sine"?

If you mean complex conjugate, and assume the sine to be the real
part of complex phasor rotating around the complex unit circle, then
isn't the conjugate of that phasor also +1, -1, +1, -1,... ? The only
difference is that the phasor is mirrored around the X axis (so the
imaginary part +i becomes -i), so it rotates in the opposite direction
(negative frequency). Since the frequency of that phasor is pi, the
complex conjugate phasor rotating at the other direction is also +1,
-1, +1, -1... Either direction, the phasor toggles between positions
z=1 and z=-1.

Maybe you meant quadrature sine ?

 That'll show you that that precise signal cannot be reconstructed
 without resorting to complex continuation of the signal, on the Fourier
 plane.

Let me ask, what do you mean by "Fourier plane"? I've never heard that
term, and Google only gives me optics-related pages.

Maybe you mean complex plane?


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
Okay, I'll risk exceeding my daily message limit. If the
administrators think it is inappropriate, dealing with that is at
their discretion.

Here is another proof that the alias images in the spectrum are caused
by the sampling/upsampling, not the interpolation:

Let's replace linear interpolation with simply stuffing zeros between
samples. So that means, we upsample the signal without applying
interpolation or filtering. Let's try this on an ~50 Hz sine wave
sampled at 44100/88 ~= 501 Hz, upsampled to 44.1 kHz by stuffing 87
zeros between each sample.

The resulting waveform looks like individual impulses, spaced 88 samples apart:
http://morpheus.spectralhead.com/img/sine_upsampled_waveform.png

Here is the spectrum:
http://morpheus.spectralhead.com/img/sine_upsampled_spectrum.png

We can see the usual alias frequencies at 450 Hz, 550 Hz, 950 Hz, 1050
Hz, 1450 Hz, 1550 Hz, 1950 Hz, 2050 Hz, ... This is because the
upsampling causes the original spectrum to be repeated an infinite number
of times, causing these alias frequencies to appear in the resulting spectrum.

Therefore, it is NOT the interpolation that is causing these alias
images, but rather, the upsampling... More precisely, they're already
present in the original signal sampled at 500 Hz, the upsampling just
makes them visible. I used no interpolation at all, yet all this
aliasing appeared on the spectrum.

All the interpolation does is filter out some of this aliasing...
Since the impulse response of linear interpolation is a triangle,
applying linear interpolation is equivalent to convolving the
resulting upsampled signal with a triangular kernel filter. Since the
Fourier transform of a rectangle is a sinc function, and a triangular
kernel is equivalent to convolving two rectangular kernels, the
Fourier spectrum of a triangular kernel will look like a sinc^2
function.

But that's not what causes the aliasing... it's there already after
the upsampling, before you apply the interpolation/convolution. You
can take a discretized version of a continuous triangular kernel
sampled at the upsampled rate, and convolving the upsampled signal
with that kernel will be equivalent to linear interpolation. You do
not actually need a continuous time signal to be present, and the
aliasing/imaging is there already before doing the triangular
convolution at the upsampled rate.

Several authors discuss the equivalence of linear interpolation and
convolution with a triangular filter, examples:

1) linear interpolation can be expressed as convolving the sampled
function with a triangle function[1]

http://morpheus.spectralhead.com/img/linear_interpolation1.png

2) The first-order hold [= linear interpolation] corresponds to an
impulse response for the reconstruction filter that is a triangle of
duration equal to twice the sampling period.[2]

http://morpheus.spectralhead.com/img/linear_interpolation2.png

3) http://morpheus.spectralhead.com/img/linear_interpolation3.png

[1] Oliver Kreylos, Sampling Theory 101
http://idav.ucdavis.edu/~okreylos/PhDStudies/Winter2000/SamplingTheory.html

[2] Alan V. Oppenheim, Signals and Systems, ch. 17. Interpolation
http://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/lecture-notes/MITRES_6_007S11_lec17.pdf

[3] Ruye Wang, Sampling Theorem, Reconstruction of Signal by Interpolation
http://fourier.eng.hmc.edu/e101/lectures/Sampling_theorem/node3.html

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
*exactly* upsampling

That is not what is shown in that graph. The graph simply shows the
continuous-time frequency response of the interpolation polynomials,
graphed up to 22kHz. No resampling is depicted, or the frequency responses
would show the aliasing associated with that. It's just showing the sinc^2
response of the linear interpolator, and similar for the other polynomials.
This is what you'd get if you used those interpolation polynomials to
convert a 250Hz sampled signal into a continuous time signal, not a
discrete time signal of whatever sampling rate.

E

On Fri, Aug 21, 2015 at 2:09 AM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 In this graph, the signal frequency seems to be 250 Hz, so this graph
 shows the equivalent of about 22000/250 = 88x oversampling.
 
  That graph just shows the frequency responses of various interpolation
  polynomials. It's not related to oversampling.

 Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
 *exactly* upsampling - the sampling rate changes by a factor of 88x.
 It's not bandlimited interpolation (using a windowed sinc
 interpolator), hence there is a lot of aliasing above Nyquist.
 Regardless, it's still oversampling - the resulting signal is
 sampled at an 88x higher rate than the original. It's equivalent
 to creating a 3,880,800 Hz signal from a 44100 Hz signal.

 -P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Also, you even contradict yourself. You claim that:

1) Olli's graph was created by graphing sinc(x), sinc^2(x), and not via FFT.

2) The artifacts from the resampling would be barely visible, because
the oversampling rate is quite high.

So, if - according to 2) - the artifacts are not visible because the
oversampling is high and the graph doesn't focus on that, then how do
you know that 1) is true? You claim that the resampling artifacts
wouldn't be visible anyway.

If that's true, then how would you prove that FFT was not used for
creating Olli's graph?

Also, even you yourself acknowledge that

It shows the aliasing left by linear interpolation into the
continuous time domain.

So, we agree that the graph shows aliasing, right?

I do not know where you get your idea of additional aliasing - it's
the very same aliasing, except the resampling folds it back...


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 So you agree that the effects of resampling are not shown, and all we see
 is the spectrum of the continuous time polynomial interpolators.

I claim that they are aliases of the original spectrum.

Just as you also call them:

It shows the aliasing left by linear interpolation into the
continuous time domain.

I never claimed anything about the folding back of alias frequencies
from the resampling at 44.1k rate.

 If I were you,
 I'd quit haranguing people over irrelevancies and straw men,

It's rather you who argue about irrelevant things and straw man
arguments - for that matter, I never claimed that the folded back
aliases from the resampling at 44.1k are visible on Olli's graph. It's
you who is forcing this irrelevant argument.

So maybe you should listen to your own advice, first.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
A sampled signal contains an infinite number of aliases:
http://morpheus.spectralhead.com/img/sampling_aliases.png

the spectrum is replicated infinitely often in both directions

These are called aliases of the spectrum. You do not need to fold
back the aliasing via resampling for them to become aliases...
They're aliases already - when sampled at the original rate, they
would all alias back to the original signal.

This is because exp(i*x) is periodic, and after 2*PI radians you get
back to the same frequency... hence, frequencies that are 2*PI apart
from each other are all aliases...
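A minimal numerical check of that statement (frequency and length are
arbitrary):

```python
import numpy as np

n = np.arange(64)
w = 0.3                               # arbitrary normalized frequency (rad/sample)
for k in (1, -1, 2):
    # adding any multiple of 2*pi to the frequency leaves the samples unchanged
    assert np.allclose(np.exp(1j * w * n),
                       np.exp(1j * (w + 2 * np.pi * k) * n))
```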

If you fail to understand that, I think you fail to understand even
the basics of sampling theory.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
Since that image is not meant to illustrate the effects of
resampling, but rather, to illustrate the effects of interpolation,
*obviously* it doesn't focus on the aliasing from the resampling.

So you agree that the effects of resampling are not shown, and all we see
is the spectrum of the continuous time polynomial interpolators.

I'm going to accept that concession of my point and move on. If I were you,
I'd quit haranguing people over irrelevancies and straw men, and generally
trying to pretend to superiority. Nobody is buying it, and it just
highlights your insecurity.

E

On Fri, Aug 21, 2015 at 1:24 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 It shows *exactly* the aliasing
 
  It shows the aliasing left by linear interpolation into the continuous
 time
  domain. It doesn't show the additional aliasing produced by then delaying
  and sampling that signal. I.e., the images that would get folded back
 onto
  the new baseband, disturbing the sinc^2 curve.

 This image doesn't involve any fractional delay.

  Those differences would be quite small for resampling to 44.1kHz with no
  delay, since the oversampling ratio is considerable, so you'd have to
 look
  carefully to see them.

 I think they're actually on the image:
 http://morpheus.spectralhead.com/img/resampling_aliasing.png

 They're hard to notice, because the other aliasing masks it.

  This is a big hint that they are not portrayed:
 Olli knows what he is doing, so if he wanted to illustrate the effects
 of
  the resampling, he would have constructed a scenario where they are
 easily
  visible.

 Since that image is not meant to illustrate the effects of
 resampling, but rather, to illustrate the effects of interpolation,
 *obviously* it doesn't focus on the aliasing from the resampling.

 Therefore, it is not a hint at all, and your argument is invalid.

  And probably mentioned a second sample rate, explicitly shown both
  the sinc^2 and its aliased counterpart, etc. The effect would be shown
 in a
  visible, explicit manner, if that was what the graph was supposed to
 show.

 The fact that this graph is not supposed to demonstrate the aliasing
 from the resampling, does not mean that

 1) it's not there on the graph (it's just barely visible)

 2) the images of the continuous time interpolated signal are not
 aliasing. That's also called aliasing!!!

  But all of those things depend on parameters like oversampling ratio and
  delay, so it would be a much more complicated picture.

 Yes, and that's all entirely irrelevant here... Because the images in
 the continuous time signal before the resampling are also called
 aliasing!!! They're all aliases of the original spectrum, and they all
 alias back to the original spectrum when sampled at the original
 sampling rate! They're called aliasing even before you resample them!

  What we're shown
  here is just the effects of polynomial interpolation to get to the
  continuous time domain.

 False. I've shown the FFT frequency spectra of actual upsampled signals.

  The additional effects of delaying and then
  sampling that signal back into the discrete time domain are not visible.

 There was no delaying involved at all.

 The effects of sampling that signal back are not visible, because
 there's 88x oversampling, just as I pointed out. If you want, you can
 repeat the same with less oversampling, and present us your results.

  It seems that you have assumed that some resampling must be happening
  because the graph only goes up to 22kHz. But that's just the range of the
  graph, you don't need to do any resampling of anything to graph sinc^2
 over
  any particular range of frequencies.

 I never said you need to do resampling of the continuous time signal
 to graph sinc^2.

 I said: the images in the frequency spectrum of the continuous time
 signal are aliases of the original spectrum, and they alias back to
 the original spectrum when the continuous time signal is sampled at
 the original rate!

  But that's not quite the exact same graph.

 It's essentially the exact same graph.

  And why are you putting a sound card in the loop?

 That was the most convenient way to record the signal.

  This is all just digital processing in question here. You
  don't even need to process any signals, there are analytic expressions
 for
  all of the quantities involved.

 That's just one way of drawing fancy graphs.
 FFT is another way of drawing fancy graphs.
 Why would I restrict myself to one method?

 That's how Olli generated graphs of them
  without reference to any particular signals.

 How do you know? Prove it! I'm convinced he generated it via numerical
 means and FFT.

  Again, the differences in question are small due to the high oversampling
  ratio, so it's going to be quite difficult to see them in macroscopic
  graphs like this.

 Let me point out again, that all those spectral images in the
 continuous time signal 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Let's repeat the same with a 50 Hz sine wave, sampled at 500 Hz, then
linearly interpolated and resampled at 44.1 kHz:

http://morpheus.spectralhead.com/img/sine_aliasing.png

The resulting alias frequencies are at: 450 Hz, 550 Hz, 950 Hz, 1050
Hz, 1450 Hz, 1550 Hz, 1950 Hz, 2050 Hz, 2450 Hz, 2550 Hz, ...

I think it should be obvious that these are all alias frequencies of
50 Hz, since if you sample any of these sinusoids at 500 Hz rate, they
will all alias to 50 Hz. Hence, they are - by definition - aliases of
the 50 Hz sinusoid.
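A quick check of the claim (a sketch, not taken from the original post;
a cosine is used because the images at k*fs - f0, such as 450 Hz, would
flip the sign of a sine):

```python
import numpy as np

fs, f0 = 500, 50
n = np.arange(2 * fs)                       # two seconds of samples
base = np.cos(2 * np.pi * f0 * n / fs)      # the 50 Hz cosine at 500 Hz rate
for f in (450, 550, 950, 1050, 1450, 1550):
    # each image frequency, sampled at 500 Hz, lands exactly on the 50 Hz samples
    assert np.allclose(np.cos(2 * np.pi * f * n / fs), base)
```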

Welcome to sampling theory 101.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
It shows *exactly* the aliasing

It shows the aliasing left by linear interpolation into the continuous time
domain. It doesn't show the additional aliasing produced by then delaying
and sampling that signal. I.e., the images that would get folded back onto
the new baseband, disturbing the sinc^2 curve. This is how we end up with a
zero at Nyquist when we do half-sample delay, for example. And also how we
end up with a perfectly flat response if we do the trivial resampling
(original rate, no delay).
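That zero at Nyquist is easy to verify from the two-tap half-sample
interpolator itself; a generic sketch (not tied to Olli's graph):

```python
import numpy as np

# Half-sample-delay linear interpolator: y[n] = 0.5*x[n] + 0.5*x[n-1]
h = np.array([0.5, 0.5])

w = np.linspace(0, np.pi, 512)           # 0 .. Nyquist, rad/sample
H = h[0] + h[1] * np.exp(-1j * w)        # frequency response of the two taps
mag = np.abs(H)                          # equals |cos(w/2)|

assert np.isclose(mag[0], 1.0)           # unity gain at DC
assert mag[-1] < 1e-8                    # the zero at Nyquist
```

With zero delay the taps collapse to [1.0] and the response is perfectly
flat, which is the trivial-resampling case mentioned above.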

Those differences would be quite small for resampling to 44.1kHz with no
delay, since the oversampling ratio is considerable, so you'd have to look
carefully to see them. This is a big hint that they are not portrayed:
Olli knows what he is doing, so if he wanted to illustrate the effects of
the resampling, he would have constructed a scenario where they are easily
visible. And probably mentioned a second sample rate, explicitly shown both
the sinc^2 and its aliased counterpart, etc. The effect would be shown in a
visible, explicit manner, if that was what the graph was supposed to show.
But all of those things depend on parameters like oversampling ratio and
delay, so it would be a much more complicated picture. What we're shown
here is just the effects of polynomial interpolation to get to the
continuous time domain. The additional effects of delaying and then
sampling that signal back into the discrete time domain are not visible.

It seems that you have assumed that some resampling must be happening
because the graph only goes up to 22kHz. But that's just the range of the
graph, you don't need to do any resampling of anything to graph sinc^2 over
any particular range of frequencies.

Oh, it's the *exact* same graph! (Minus some
difference above 20 kHz, due to my soundcard's anti-alias filter.)
You get the same graph if you sample that continuous time signal
at a 44.1 kHz sampling rate (with some further aliasing from the
sampling).

But that's not quite the exact same graph. And why are you putting a sound
card in the loop? This is all just digital processing in question here. You
don't even need to process any signals, there are analytic expressions for
all of the quantities involved. That's how Olli generated graphs of them
without reference to any particular signals.

Again, the differences in question are small due to the high oversampling
ratio, so it's going to be quite difficult to see them in macroscopic
graphs like this. If you want to see the differences, just make a plot of
both sinc^2 and its aliased versions (for whatever oversampling ratios
and/or delays), and look at the differences. It won't be interesting for
high oversampling ratios and zero delay - which is exactly why that
scenario is a poor choice for illustrating the effects in question.

The fact that sampling a continuous time signal at a very high rate results
in a spectrum that closely resembles the continuous time spectrum (over the
sampled bandwidth) is beside the point. It just means that you're operating
in a regime where the effects are very hard to spot. It doesn't follow from
that resemblance that resampling must be occurring to get a plot of the
spectrum of the continuous time signal.

E

On Fri, Aug 21, 2015 at 10:51 AM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
 *exactly* upsampling
 
  That is not what is shown in that graph. The graph simply shows the
  continuous-time frequency response of the interpolation polynomials,
  graphed up to 22kHz. No resampling is depicted, or the frequency
 responses
  would show the aliasing associated with that.

 It shows *exactly* the aliasing
 http://morpheus.spectralhead.com/img/interpolation_aliasing.png

 There are about 88 alias images visible on the graph.
 The linear interpolation curve is not smooth, so it contains aliasing.

  It's just showing the sinc^2
  response of the linear interpolator, and similar for the other
 polynomials.

 If the signal you interpolate is white noise, and the spectrum of the
 signal is a flat spectrum rectangle like the one displayed, then after
 resampling, you get *exactly* the spectrum you see on the graph,
 showing 88 alias images.

 Proof:
 I created 60 seconds of white noise sampled at 500 Hz, then resampled
 it to 44.1 kHz using linear interpolation. After the upsampling, it
 sounds like this:

 http://morpheus.spectralhead.com/wav/noise_resampled.wav

 Its spectrum looks like this:
 http://morpheus.spectralhead.com/img/noise_resampled.png

 Looks familiar? Oh, it's the *exact* same graph! (Minus some
 difference above 20 kHz, due to my soundcard's anti-alias filter.) It
 is an FFT graph of the upsampled white noise, and it shows *exactly*
 the aliasing. Good morning!

  This is what you'd get if you used those interpolation polynomials to
  convert a 250Hz sampled signal into a continuous time signal, 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
It shows *exactly* the aliasing

 It shows the aliasing left by linear interpolation into the continuous time
 domain. It doesn't show the additional aliasing produced by then delaying
 and sampling that signal. I.e., the images that would get folded back onto
 the new baseband, disturbing the sinc^2 curve.

This image doesn't involve any fractional delay.

 Those differences would be quite small for resampling to 44.1kHz with no
 delay, since the oversampling ratio is considerable, so you'd have to look
 carefully to see them.

I think they're actually on the image:
http://morpheus.spectralhead.com/img/resampling_aliasing.png

They're hard to notice, because the other aliasing masks it.

 This is a big hint that they are not portrayed:
 Olli knows what he is doing, so if he wanted to illustrate the effects of
 the resampling, he would have constructed a scenario where they are easily
 visible.

Since that image is not meant to illustrate the effects of
resampling, but rather, to illustrate the effects of interpolation,
*obviously* it doesn't focus on the aliasing from the resampling.

Therefore, it is not a hint at all, and your argument is invalid.

 And probably mentioned a second sample rate, explicitly shown both
 the sinc^2 and its aliased counterpart, etc. The effect would be shown in a
 visible, explicit manner, if that was what the graph was supposed to show.

The fact that this graph is not supposed to demonstrate the aliasing
from the resampling, does not mean that

1) it's not there on the graph (it's just barely visible)

2) the images of the continuous time interpolated signal are not
aliasing. That's also called aliasing!!!

 But all of those things depend on parameters like oversampling ratio and
 delay, so it would be a much more complicated picture.

Yes, and that's all entirely irrelevant here... Because the images in
the continuous time signal before the resampling are also called
aliasing!!! They're all aliases of the original spectrum, and they all
alias back to the original spectrum when sampled at the original
sampling rate! They're called aliasing even before you resample them!

 What we're shown
 here is just the effects of polynomial interpolation to get to the
 continuous time domain.

False. I've shown the FFT frequency spectra of actual upsampled signals.

 The additional effects of delaying and then
 sampling that signal back into the discrete time domain are not visible.

There was no delaying involved at all.

The effects of sampling that signal back are not visible, because
there's 88x oversampling, just as I pointed out. If you want, you can
repeat the same with less oversampling, and present us your results.

 It seems that you have assumed that some resampling must be happening
 because the graph only goes up to 22kHz. But that's just the range of the
 graph, you don't need to do any resampling of anything to graph sinc^2 over
 any particular range of frequencies.

I never said you need to do resampling of the continuous time signal
to graph sinc^2.

I said: the images in the frequency spectrum of the continuous time
signal are aliases of the original spectrum, and they alias back to
the original spectrum when the continuous time signal is sampled at
the original rate!

 But that's not quite the exact same graph.

It's essentially the exact same graph.

 And why are you putting a sound card in the loop?

That was the most convenient way to record the signal.

 This is all just digital processing in question here. You
 don't even need to process any signals, there are analytic expressions for
 all of the quantities involved.

That's just one way of drawing fancy graphs.
FFT is another way of drawing fancy graphs.
Why would I restrict myself to one method?

 That's how Olli generated graphs of them
 without reference to any particular signals.

How do you know? Prove it! I'm convinced he generated it via numerical
means and FFT.

 Again, the differences in question are small due to the high oversampling
 ratio, so it's going to be quite difficult to see them in macroscopic
 graphs like this.

Let me point out again, that all those spectral images in the
continuous time signal before the resampling, *are* aliasing, as
they're aliases of the original spectrum, and are *very* visible on
the graph!

 If you want to see the differences, just make a plot of
 both sinc^2 and its aliased versions (for whatever oversampling ratios
 and/or delays), and look at the differences. It won't be interesting for
 high oversampling ratios and zero delay - which is exactly why that
 scenario is a poor choice for illustrating the effects in question.

And you're entirely missing the point what it is supposed to illustrate.

 The fact that sampling a continuous time signal at a very high rate results
 in a spectrum that closely resembles the continuous time spectrum (over the
 sampled bandwidth) is beside the point.

Exactly. It's 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
*exactly* upsampling

 That is not what is shown in that graph. The graph simply shows the
 continuous-time frequency response of the interpolation polynomials,
 graphed up to 22kHz. No resampling is depicted, or the frequency responses
 would show the aliasing associated with that.

It shows *exactly* the aliasing
http://morpheus.spectralhead.com/img/interpolation_aliasing.png

There are about 88 alias images visible on the graph.
The linear interpolation curve is not smooth, so it contains aliasing.

 It's just showing the sinc^2
 response of the linear interpolator, and similar for the other polynomials.

If the signal you interpolate is white noise, and the spectrum of the
signal is a flat spectrum rectangle like the one displayed, then after
resampling, you get *exactly* the spectrum you see on the graph,
showing 88 alias images.

Proof:
I created 60 seconds of white noise sampled at 500 Hz, then resampled
it to 44.1 kHz using linear interpolation. After the upsampling, it
sounds like this:

http://morpheus.spectralhead.com/wav/noise_resampled.wav

Its spectrum looks like this:
http://morpheus.spectralhead.com/img/noise_resampled.png

Looks familiar? Oh, it's the *exact* same graph! (Minus some
difference above 20 kHz, due to my soundcard's anti-alias filter.) It
is an FFT graph of the upsampled white noise, and it shows *exactly*
the aliasing. Good morning!

 This is what you'd get if you used those interpolation polynomials to
 convert a 250Hz sampled signal into a continuous time signal, not a
 discrete time signal of whatever sampling rate.

Nope. You get the same graph if you sample that continuous time signal
at a 44.1 kHz sampling rate (with some further aliasing from the
sampling). Just as I've shown.

Besides, I think the graph was created via numerical means using FFT,
because it has noise at the low amplitudes (marked on the image).
Therefore, it doesn't show a continuous time sinc^2 graph, because
that wouldn't be noisy.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
The details of how the graphs were generated don't really matter. The point
is that the only effect shown is the spectrum of the continuous-time
polynomial interpolator. The additional spectral effects of delaying and
resampling that continuous-time signal (to get fractional delay, for
example) are not shown. There is no resampling to be seen in the graphs.

I claim that they are aliases of the original spectrum.

What we see in the graph is simply the spectra of the continuous-time
interpolators. Since the spectra extend beyond the original nyquist rate,
there will indeed be images of the original signal weighted by the
interpolator spectrum present in the continuous-time interpolated signal.
Whether those are ultimately expressed as aliases depends on what you then
do with that continuous time signal. If you resample to the original rate
(in order to implement a fractional delay, say), then those weighted images
will be folded back to the same place they came from. In that case, there
is no aliasing, you just end up with a modified frequency response of your
fractional interpolator. This is where the zero at Nyquist comes from when
we do a half-sample delay - the linear phase term corresponding to a
half-sample delay causes the signal images to become out of phase with each
other as you approach Nyquist, so they cancel out and you get a zero.

It is only if the interpolated continuous-time signal is resampled at a
different rate, or just used directly, that those signal images end up
expressed as aliases.

The rest of your accusations are your usual misreadings and straw men. I
won't be legitimating them by responding, and I hope you will accept that
and give up on these childish tactics. It would be better for everyone if
you could make a point of engaging in good faith and trying to stick to the
subject rather than attacking the intellects of others.

E

On Fri, Aug 21, 2015 at 2:05 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 Also, you even contradict yourself. You claim that:

 1) Olli's graph was created by graphing sinc(x), sinc^2(x), and not via
 FFT.

 2) The artifacts from the resampling would be barely visible, because
 the oversampling rate is quite high.

 So, if - according to 2) - the artifacts are not visible because the
 oversampling is high and the graph doesn't focus on that, then how do
 you know that 1) is true? You claim that the resampling artifacts
 wouldn't be visible anyway.

 If that's true, then how would you prove that FFT was not used for
 creating Olli's graph?

 Also, even you yourself acknowledge that

 It shows the aliasing left by linear interpolation into the
 continuous time domain.

 So, we agree that the graph shows aliasing, right?

 I do not know where you get your idea of additional aliasing - it's
 the very same aliasing, except the resampling folds it back...

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
Which contains alias images of the original spectrum, which was my point.

There is no original spectrum pictured in that graph. Only the responses
of the interpolators. There is no reference to any input signal at all.

No one claimed there was fractional delay involved.

Fractional delay is a primary topic of this thread, and a major motivation
for interest in polynomial interpolation in dsp in general.

Then how do you explain that taking noise sampled at 500 Hz, and
resampling it to 44.1 kHz gives an identical FFT graph?

We've been over this already. It's because you're resampling the signal at
such a large rate that the effects of the sampling are not visible. And
you've chosen a signal with a flat spectrum, so there are no features of
the signal spectrum visible - only the interpolator response. This goes
exactly to the point that no resampling effects are present in the graphs.
All we see are the interpolator spectra.

The fact that there are various ways to generate a graph of an interpolator
spectrum is entirely beside the point.

 If you resample to the original rate
 (in order to implement a fractional delay, say), then those weighted
images
 will be folded back to the same place they came from.
That's exactly why they're called aliases.

No, if you fold the images back to the same spots they originated, they are
not aliases. All of the frequencies are mapped back to their original
locations, none end up at other frequencies. Aliases are when signal images
end up in new locations corresponding to different frequency bands.

This distinction is crucial to understanding the operation of fractional
delay interpolators: it's why they don't produce aliasing at their output.
We just get a fractional delay filter with an imperfect spectrum. It's only
the frequency response of the interpolator that gets aliased (introducing
the zero at Nyquist for half-sample delay, for example), not the underlying
signal content. That's why it's important to graph the frequency response
of the interpolators directly, without worrying about signal spectra - to
figure out what happens in the final digital interpolator, you take that
continuous time interpolator spectrum, add a linear phase term for whatever
delay you want, and then alias it according to your new sampling rate to
get the final response of the digital interpolation filter. Signal aliasing
only results if that involves a change in sampling rate.

Which is not the case on Olli's graph.

Right, Olli's graph shows only the intermediate stage, the spectrum of the
polynomial interpolator in continuous time. This is an analytical
convenience, we never actually produce any such signal. It's used as an
input to figure out what the final response of a digital interpolator based
on one of these polynomials will be. You can of course sample that at a
very high rate and so neglect the aliasing of the interpolator response,
but what is the point of that? You wouldn't use any of these interpolators
if what you're trying to do is upsample a 500Hz sampled signal to 44.1kHz,
the graphs show that they're crap for that.

I spent (wasted?) a considerable amount of time creating various
demonstrations and FFT graphs showing my point.

Your time would be better spent figuring out a point that is relevant to
what I'm saying in the first place. It is indeed a waste of your time to
invent equivalent ways to generate graphs, since that is not the point.

E



On Fri, Aug 21, 2015 at 2:56 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
  The details of how the graphs were generated don't really matter.

 Then why do you keep insisting that they're generated by plotting
 sinc^2(x) ?

  The point
  is that the only effect shown is the spectrum of the continuous-time
  polynomial interpolator.

 Which contains alias images of the original spectrum, which was my point.

  The additional spectral effects of delaying and
  resampling that continuous-time signal (to get fractional delay, for
  example) are not shown.

 No one claimed there was fractional delay involved.

  There is no resampling to be seen in the graphs.

 I recreated the exact same graph via resampling a signal, proving that
 is one method of generating that graph.

 I claim that they are aliases of the original spectrum.
 
  What we see in the graph is simply the spectra of the continuous-time
  interpolators.

 Then how do you explain that taking noise sampled at 500 Hz, and
 resampling it to 44.1 kHz gives an identical FFT graph?

 How do you explain that a 50 Hz sine wave, resampled to 44.1 kHz,
 contains alias frequencies at 450 Hz, 550 Hz, 950 Hz, 1050 Hz, 1450
 Hz, 1550 Hz, etc. ? What are those, if not aliases ?

  Whether those are ultimately expressed as aliases depends on what you then
  do with that continuous time signal.

 They're already aliases... You may filter them out, or do whatever
 you want with them - that doesn't change the fact that 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
1) Olli Niemiatalo's graph *is* equivalent of the spectrum of
upsampled white noise.

We've been over this repeatedly, including in the very post you are
responding to. The fact that there are many ways to produce a graph of the
interpolation spectrum is not in dispute, nor is it germane to my point.
I'm not sure what you're trying to accomplish by harping on this point,
while ignoring everything I say. Certainly, it is not convincing me that
you have some worthwhile response to my points, or even that you are
understanding them in the first place. It seems like you are trying to
avoid my point entirely, in favor of some imaginary dispute of your own
invention, which you think you can win.

Have you actually looked at Olli Niemitalo's graph closely?
Here is proof that it is NOT a graph of sinc(x)/sinc^2(x):

http://morpheus.spectralhead.com/img/other001-analysis.gif

It is NOT sinc(x)/sinc^2(x), and you're blind as a bat if you do not see
that.

I have no idea what you think you are proving by scrutinizing graph
artifacts like that, but it's a preposterous approach to signal analysis on
its face.

It's also in extremely poor taste to use retard as a term of abuse.
People with mental disabilities have it hard enough already, without others
treating their status as an insult to be thrown around. I'd appreciate it
if you would compose yourself and refrain from these kinds of ugly
outbursts.

Meanwhile, it seems that you are suggesting that the spectrum of white
noise linearly interpolated up to a high oversampling rate is not sinc^2.
Is your whole point here that generating such a plot by FFTing the
interpolation of a finite segment of white noise will produce finite-data
artifacts in the resulting graph? Because that's not relevant to the
subject, and only goes to show that it's better to just graph the sinc^2
curve directly and so avoid all of the excess computation and finite-data
effects. Are you claiming that those wiggles in the graph represent
aliasing of the spectrum from resampling at 44.1kHz? If so, that is
unlikely.

You do agree that the spectrum of a continuous-time linear interpolator is
given by sinc^2, right?

E


On Fri, Aug 21, 2015 at 4:59 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 Since you constantly derail this topic with irrelevant talk, let me
 instead prove that

 1) Olli Niemiatalo's graph *is* equivalent of the spectrum of
 upsampled white noise.
 2) Olli Niemitalo's graph does *not* depict sinc(x)/sinc^2(x).

 First I'll prove 1).

 Using palette modification, I extracted the linear interpolation curve
 from Olli's figure:
 http://morpheus.spectralhead.com/img/other001b.gif

 Then I sampled white noise at 500 Hz, and resampled it to 44.1 kHz
 using linear interpolation. I got this spectrum:

 http://morpheus.spectralhead.com/img/resampled_noise_spectrum.gif

 To do a proper A/B comparison between the two spectra, I tried to
 align and match them as much as possible, and created an animated GIF
 file that blinks between the two graphs at a 500 ms rate:

 http://morpheus.spectralhead.com/img/olli_vs_resampled_noise.gif

 Although the alignment is not 100% exact, to my eyes, they look like
 totally equivalent graphs.

 This proves that upsampled white noise has the same spectrum as the
 graph shown on Olli's graph for linear interpolation.

 Second, I'll prove 2).

 Have you actually looked at Olli Niemitalo's graph closely?
 Here is proof that it is NOT a graph of sinc(x)/sinc^2(x):

 http://morpheus.spectralhead.com/img/other001-analysis.gif

 It is NOT sinc(x)/sinc^2(x), and you're blind as a bat if you do not see
 that.

 Since I proved both 1) and 2), it is totally irrelevant what you say,
 because none of what you could ever say would disprove this.

 Sinc(x) does not have a jagged/noisy look, therefore it is 100%
 certain it is not what you see on Olli's graph. Point proven, end of
 discussion.

 -P
 ___
 music-dsp mailing list
 music-dsp@music.columbia.edu
 https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
Naturally, there's going to be some jaggedness in the spectrum because
of the noise. So, obviously, that is not sinc^2 then.

So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
version thereof? My point was that there are no effects of resampling
visible in the graphs. That has nothing to do with exactly how the graphs
were generated, nor does insisting that the graphs are slightly noisy
address the point.

Indeed, you've already conceded that the resampling effects are not visible
in the graphs several posts back. It seems like you're just casting about
for some other issue that you can tell yourself you won, and then call me
names, to feed your fragile ego. Honestly, it's a pretty sad spectacle and
I'm embarrassed for you. It really would be better for everyone - including
you - if you could interact in a good-faith, mature manner. Please make an
effort to start doing so, or you're pretty soon going to find that nobody
here will interact with you any more.

By the way, there's no reason for any jaggedness to appear in the plots,
given the lengths of data you were talking about. You might want to look
into spectral density estimation methods to trade off frequency resolution
and bin accuracy.  It's pretty standard statistical signal processing 101
stuff. Producing a very smooth graph from a long enough segment of data is
straightforward, if you use appropriate techniques (not just one big FFT of
the whole thing, that won't ever get rid of the noisiness no matter how
much data you throw at it).

E

On Fri, Aug 21, 2015 at 5:47 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 22/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 
  We've been over this repeatedly, including in the very post you are
  responding to. The fact that there are many ways to produce a graph of the
  interpolation spectrum is not in dispute, nor is it germane to my point.

 Earlier you disputed that there's any upsampling involved.
 Apparently you change your mind quite often...

  It seems like you are trying to
  avoid my point entirely, in favor of some imaginary dispute of your own
  invention, which you think you can win.

 I claimed something, and you disputed it. I proved that what I
 claimed, is true. Therefore, all your further arguments are invalid...
 (and are boring)

  I have no idea what you think you are proving by scrutinizing graph
  artifacts like that

 I am proving that what you see on the graph is not sinc(x) /
 sinc^2(x), but rather some noisy curve, like the spectrum of upsampled
 noise. Therefore, my original argument is correct.

  It's also in extremely poor taste to use retard as a term of abuse.

 Well, if you do not see that the graph pictured on Olli's figure is
 not sinc(x), then you're retarded.

  Meanwhile, it seems that you are suggesting that the spectrum of white
  noise linearly interpolated up to a high oversampling rate is not sinc^2.

 Naturally, there's going to be some jaggedness in the spectrum because
 of the noise. So, obviously, that is not sinc^2 then.

  Are you claiming that those wiggles in the graph represent
  aliasing of the spectrum from resampling at 44.1kHz? If so, that is
  unlikely.

 Nope, the wiggles in the graph are from the noise.

 -P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Since you constantly derail this topic with irrelevant talk, let me
instead prove that

1) Olli Niemiatalo's graph *is* equivalent of the spectrum of
upsampled white noise.
2) Olli Niemitalo's graph does *not* depict sinc(x)/sinc^2(x).

First I'll prove 1).

Using palette modification, I extracted the linear interpolation curve
from Olli's figure:
http://morpheus.spectralhead.com/img/other001b.gif

Then I sampled white noise at 500 Hz, and resampled it to 44.1 kHz
using linear interpolation. I got this spectrum:

http://morpheus.spectralhead.com/img/resampled_noise_spectrum.gif

To do a proper A/B comparison between the two spectra, I tried to
align and match them as much as possible, and created an animated GIF
file that blinks between the two graphs at a 500 ms rate:

http://morpheus.spectralhead.com/img/olli_vs_resampled_noise.gif

Although the alignment is not 100% exact, to my eyes, they look like
totally equivalent graphs.

This proves that upsampled white noise has the same spectrum as the
graph shown on Olli's graph for linear interpolation.

Second, I'll prove 2).

Have you actually looked at Olli Niemitalo's graph closely?
Here is proof that it is NOT a graph of sinc(x)/sinc^2(x):

http://morpheus.spectralhead.com/img/other001-analysis.gif

It is NOT sinc(x)/sinc^2(x), and you're blind as a bat if you do not see that.

Since I proved both 1) and 2), it is totally irrelevant what you say,
because none of what you could ever say would disprove this.

Sinc(x) does not have a jagged/noisy look, therefore it is 100%
certain it is not what you see on Olli's graph. Point proven, end of
discussion.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 22/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 We've been over this repeatedly, including in the very post you are
 responding to. The fact that there are many ways to produce a graph of the
 interpolation spectrum is not in dispute, nor is it germane to my point.

Earlier you disputed that there's any upsampling involved.
Apparently you change your mind quite often...

 It seems like you are trying to
 avoid my point entirely, in favor of some imaginary dispute of your own
 invention, which you think you can win.

I claimed something, and you disputed it. I proved that what I
claimed, is true. Therefore, all your further arguments are invalid...
(and are boring)

 I have no idea what you think you are proving by scrutinizing graph
 artifacts like that

I am proving that what you see on the graph is not sinc(x) /
sinc^2(x), but rather some noisy curve, like the spectrum of upsampled
noise. Therefore, my original argument is correct.

 It's also in extremely poor taste to use retard as a term of abuse.

Well, if you do not see that the graph pictured on Olli's figure is
not sinc(x), then you're retarded.

 Meanwhile, it seems that you are suggesting that the spectrum of white
 noise linearly interpolated up to a high oversampling rate is not sinc^2.

Naturally, there's going to be some jaggedness in the spectrum because
of the noise. So, obviously, that is not sinc^2 then.

 Are you claiming that those wiggles in the graph represent
 aliasing of the spectrum from resampling at 44.1kHz? If so, that is
 unlikely.

Nope, the wiggles in the graph are from the noise.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Upsampling means, that the sampling rate increases. So if you have a
250 Hz signal, and create a 22000 Hz signal from it, that is - by
definition - upsampling.

That's *exactly* what upsampling means... You insert new samples
between the original ones, and interpolate between them (using
whatever interpolation filter of your preference).
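That description can be sketched in a few lines (a minimal illustration, not from the thread; it assumes an integer factor L and uses numpy):

```python
import numpy as np

def upsample_linear(x, L):
    """Naive L-times upsampling as described above: place L-1 new
    samples between each pair of originals and fill them in by
    linear interpolation. Illustrative sketch only."""
    n = np.arange((len(x) - 1) * L + 1) / L    # positions in input units
    i = np.minimum(n.astype(int), len(x) - 2)  # left neighbour index
    f = n - i                                  # fractional part
    return (1 - f) * x[i] + f * x[i + 1]
```

np.interp would do the same job; the explicit form just mirrors the description above.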

And that is often used synonymously with 'oversampling', and that's
what happens in an oversampled D/A converter. (Though 'oversampling'
has a different meaning in A/D context.)

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Theo Verelst

Hi,

A suggestion for those working on practical implementations, and an 
attempt to lighten up the tone of the discussion. Some of the people I 
know here worked on all kinds of (semi-)pro implementations when I 
wasn't even into more than basic DSP yet.


The tradeoffs of engineering and implementing on a platform with given 
limitations (or, for advanced people making filters, possibly even 
trading off the computation properties required for a self-designed DSP 
unit), including memory use, required clock speed, and heat build-up (not 
so important nowadays for simple filters), can be met more accurately by 
being specific about the requirements in terms of quality and of 
quantified error bounds: how much high frequency loss can I prevent, at 
which engineering (or research) cost, and for how many extra clock 
cycles of my DSP/CPU?


In some cases, it can pay to do the extra work of separating your 
audio frequency range into a couple of bands: say you make one 
interpolator for low frequencies (e.g. simple zero-order), one for 
mid frequencies (with some attention to artifacts in the oh-so-sensitive 
~3 kHz range), and one for frequencies above 10 kHz, where you can pay 
most attention to how the damping of the higher frequencies comes 
across, more than to the exact accuracy of the short-time convolution 
filter you use. Such a limited multi-band approach costs a few filters 
and a little thinking about how those bands will later add back up 
properly to a decent signal, but it can make audio quality higher 
without requiring extreme resources.


T.V.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 Wasn't the premise that memory
 was cheap, so we can store a big prototype FIR for high quality 512x
 oversampling? So why are we then worried about the table space for the
 fractional interpolator?

For the record, wasn't it you who said memory is often a constraint?
Quote from you:
There are lots of dsp applications where memory is very much the main
constraint.

So apparently your premise is that memory can be expensive and a
constraint, and now you ask why are we worried about using extra
memory.

At least make up your mind whether you consider memory cheap or expensive...

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
Let's analyze your suggestion of using a FIR filter at f = 0.5/512 =
0.0009765625 for an interpolation filter for 512x oversampling.

Here's the frequency response of a FIR filter of length 1000:
http://morpheus.spectralhead.com/img/fir512_1000.png

Closeup of the frequency range between 0-0.01 (cutoff marked with black line):
http://morpheus.spectralhead.com/img/fir512_1000_closeup.png

Apparently that's a pretty crappy anti-alias filter; the transition
band is very wide.

So let's try a FIR filter of length 5000:
http://morpheus.spectralhead.com/img/fir512_5000_closeup.png

Better, but still quite a lot of aliasing above the cutoff freq.

FIR filter of length 20,000:
http://morpheus.spectralhead.com/img/fir512_2_closeup.png

Now this starts to look like a proper-ish anti-alias filter.

The problem is - its length is 20,000 samples, so assuming 32-bit
float representation, the coefficients alone need about 80 KB of
memory... meaning that there's a high chance that it won't even fit
into the L1 cache, causing a lot of cache misses, so this filter will
be extra slow, since your CPU will be constantly waiting for the
coefficients to arrive from the L2 cache and/or RAM. Also consider how
much CPU power you need to do convolution with a 20,000 sample long
kernel at 512x oversampled rate... I bet you're not trying to do this
in realtime, are you?

So, that's not exactly the brightest way to do 512x oversampling,
unless you prefer to waste a lot of resources and spend a week on
upsampling. In that case, it is ideal.
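Peter doesn't say how these FIRs were designed; a plain Hann-windowed-sinc design (a sketch, the function names are mine) shows the same effect, namely that the transition width shrinks roughly in proportion to 1/length, so a cutoff as low as 0.5/512 needs a very long filter:

```python
import numpy as np

def lowpass_sinc(num_taps, fc):
    # Hann-windowed sinc lowpass; fc is the cutoff in cycles/sample.
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2.0 * fc * np.sinc(2.0 * fc * n) * np.hanning(num_taps)
    return h / h.sum()                      # unity gain at DC

def gain_db(h, f):
    # magnitude response at normalized frequency f (cycles/sample)
    w = np.exp(-2j * np.pi * f * np.arange(len(h)))
    return 20.0 * np.log10(abs(np.dot(h, w)))

fc = 0.5 / 512                              # cutoff for 512x oversampling
for taps in (1000, 5000, 20000):            # the three lengths tried above
    print(taps, gain_db(lowpass_sinc(taps, fc), 2 * fc))
```

One octave above the cutoff, the 1000-tap design barely attenuates at all, while the 20,000-tap one is well into its stopband, consistent with the point being made about filter length (and about the 80 KB of float32 coefficients).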

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Ethan Duni
In this graph, the signal frequency seems to be 250 Hz, so this graph
shows the equivalent of about 22000/250 = 88x oversampling.

That graph just shows the frequency responses of various interpolation
polynomials. It's not related to oversampling.

E

On Thu, Aug 20, 2015 at 5:40 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 In the case of variable pitch playback with interpolation, here are
 the frequency responses:

 http://musicdsp.org/files/other001.gif
 (graphs by Olli Niemitalo)

 In this case, there's no zero at the original Nyquist freq, rather
 there are zeros at the original sampling rate and its multiples.

 So it's useful to specify what you mean by high frequency signal loss
 due to interpolation, because that term is ambiguous and can mean
 various things.

 In this graph, the signal frequency seems to be 250 Hz, so this graph
 shows the equivalent of about 22000/250 = 88x oversampling. At that
 oversampling rate, the gain of the alias images of linear interpolation
 is -84 dB. High amounts of oversampling may be necessitated by the slow
 rolloff of aliasing when a high SNR is required. (This was not mentioned
 in the question in this thread, but is relevant.)

 -P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
In the starting post, it was not specified that resampling was also
used - the question was:

Is it possible to use a filter to compensate for high frequency
signal loss due to interpolation? For example linear or hermite
interpolation.

Without specifying that variable rate playback is involved, that could
be understood in various ways - for example, at first I thought the
interpolation was for the purpose of a (static or modulated)
fractional delay line. A third possible situation is using
linear/hermite interpolation as an upsampling filter in a 2^N
oversampler.

It was only specified 18 posts later, that the interpolation is used
for variable pitch playback.

These three situations are all different, and different formulas apply...

And the combination oversampling and linear/hermite interpolation can
also be meant in multiple ways.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
Here's a graph of performance in mflops of varying length FFT
transforms from the fftw.org benchmark page, for Intel Pentium 4:

http://morpheus.spectralhead.com/img/fftw_benchmark_pentium4.png

Afaik Pentium 4 has 16 KB of L1 data cache. If you check the graph,
around 8-16k the performance starts to drop drastically. I believe the
main reason for this is that the data doesn't fit into the L1 data
cache any more, which is 16 KB. You'll see similar graphs for most
other CPU types as well, there's a dropoff near the L1 cache size.

So using more memory is only free(ish) until a certain point - if your
data doesn't fit into the L1 cache any more, it will cause cache
misses and give you a performance penalty, because the CPU needs to
fetch the data from the L2 cache, which is several times slower. In
the graph, you can see ~3-4x performance difference between transforms
that fit into the L1 cache, and transforms that don't. For this
reason, very large filters have a notable performance penalty. The
coefficients for a FIR filter of length 20,000 will certainly not fit
into a 16 KB L1 data cache.

Here's the memory topology of AMD Bulldozer server microarchitecture:
https://upload.wikimedia.org/wikipedia/commons/9/95/Hwloc.png

Each core has a 16 KB L1 data cache. The further away you go from the
CPU core, the slower the memory access gets. L2 cache is 2 MB, and
there's a shared 8 MB L3 cache across cores. There's a 64 KB
instruction cache per two cores.

Similar cache architectures are common among computer processors
(sometimes without L3 cache). There's a document that discusses this
in depth:

Ulrich Drepper, What Every Programmer Should Know About Memory
http://morpheus.spectralhead.com/pdf/cpumemory.pdf

This document gives the following memory access times for Intel Pentium M:

Register: = 1 cycle
L1 data cache: ~3 cycles
L2 cache: ~14 cycles
RAM: ~240 cycles

So this means, on a Pentium M, accessing data in the L1 cache is ~3x
slower, accessing data in the L2 cache is ~14x slower, and
accessing data in the RAM is ~240x slower than accessing data in a
register. (Earlier I wrongly said RAM is about 10x slower, rather
that's about the L2 cache speed.) So if the data doesn't fit into the
L1 cache and needs to be fetched from the L2 cache, that's nearly 5x
slower on the Pentium M. A notable part of the L2 cache penalty is
caused by the physical limits of the universe - data travels in wires
at the speed of light, which is about 1 foot (30 cm) per nanosecond.
The larger the cache, the longer the wires, hence the longer the data
access delay.

For further details and detailed performance analysis, see the above paper.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Ethan Duni
If all you're trying to do is mitigate the rolloff of linear interp

That's one concern, and by itself it implies that you need to oversample by
at least some margin to avoid having a zero at the top of your audio band
(along with a transition band below that).

But the larger concern is the overall accuracy of the interpolator. At low
oversampling ratios, the sinc^2 rolloff of the linear interpolator response
isn't effective at squashing the signal images, so you end up with aliasing
corrupting your results. Hence the need for higher order interpolation at
lower oversampling ratios, as described in Olli's paper. If you want to
get high SNR out of linear interpolation, you need to crank up the
oversampling considerably - far beyond what is needed just to avoid the
attenuation of high frequencies of the in-band component, in order to
sufficiently squash the images.
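A quick sanity check of that tradeoff (a sketch; it assumes the worst in-band component sits at the original fs/2, so its nearest image lands at normalized frequency 1 - 1/(2R) on the R-times-oversampled grid, where the linear interpolator's sinc^2 response applies):

```python
import math

def worst_image_gain_db(R):
    # Linear interpolation between samples taken at R times the original
    # rate has frequency response sinc^2(f), with f in units of the
    # oversampled rate. The worst in-band frequency (the original fs/2)
    # puts its nearest image at f = 1 - 1/(2R).
    f = 1.0 - 1.0 / (2.0 * R)
    s = math.sin(math.pi * f) / (math.pi * f)
    return 20.0 * math.log10(s * s)

for R in (2, 16, 128, 512):
    print(R, worst_image_gain_db(R))
```

At R = 512 this comes out near -120 dB, matching the "120 dB S/N" figure quoted for 512x-plus-linear elsewhere in this thread.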

E

On Thu, Aug 20, 2015 at 12:18 PM, Chris Santoro chris.sant...@gmail.com
wrote:

 As far as the oversampling + linear interpolation approach goes, I have to
 ask... why oversample so much (512x)?

 Purely from a rolloff perspective, it seems you can figure out what your
 returns are going to be by calculating sinc^2 at (1/upsample_ratio) for a
 variety of oversampling ratios. Here's the python code to run the numbers...

 #-
 import numpy as np

 #normalized frequency points (what would be Nyquist in the baseband)
 X = [1.0/2.0, 1.0/4.0, 1.0/8.0, 1.0/16.0, 1.0/32.0, 1.0/64.0,
  1.0/128.0, 1.0/256.0, 1.0/512.0]
 #attenuation at those points due to linear interpolation,
 #worst case (halfway in between samples): sinc^2
 S = np.sinc(X)
 S = 20*np.log10(S*S)

 print(S)
 #---

 and here's what it spits out for various attenuation values at what would
 be Nyquist in the baseband:

 2X:   -7.8 dB
 4X:   -1.8 dB
 8X:   -0.44 dB
 16X: -0.11 dB
 32X: -0.027 dB
 64X: -0.0069 dB
 128X:   -0.0017 dB
 256X:   -0.00043 dB
 512X:   -0.00010 dB

 If all you're trying to do is mitigate the rolloff of linear interp, it
 looks like there's diminishing returns beyond 16X or 32X, where you're
 talking about a tenth of a dB or less at nyquist, which most people can't
 even hear in that range. Your anti-aliasing properties are going to be
 determined by your choice of upsampling/windowed-sync/anti-imaging filter
 and how long you want to let that be. Or am I missing something? It just
 doesn't seem worth it go to that high.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
Let me just add, that in case of having a non-oversampled linearly
interpolated fractional delay line with exactly 0.5 sample delay (most
high-frequency roll-off position), the frequency response formula is
not sinc^2, but rather, sin(2*PI*f)/(2*sin(PI*f)), as I discussed
earlier.

In that case, the results are slightly different:

# Tcl code -
set pi 3.141592653589793238
set freqs {1/4. 1/8. 1/16. 1/32. 1/64. 1/128. 1/256. 1/512. 1/1024.}
set amt 2
foreach freq $freqs {
    set amp [expr sin(2*$pi*$freq)/(2*sin($pi*$freq))]
    set db [expr 20.0 * log($amp)/log(10)]
    puts "[format %-8s ${amt}X:][format %g $db] dB"
    set amt [expr $amt*2]
}
# End of code --

Results:
2X: -3.01 dB
4X: -0.688 dB
8X: -0.169 dB
16X:-0.0419 dB
32X:-0.0105 dB
64X:-0.00262 dB
128X:   -0.00065 dB
256X:   -0.000164 dB
512X:   -0.0000410 dB


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 But I'm on the fence about
 whether it's the tightest use of resources (for whatever constraints).

Then try and measure it yourself - you don't believe my words anyways.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
Comparison of the two formulas from previous post: (1) in blue, sinc^2
(2) in red:
http://morpheus.spectralhead.com/img/sinc.png

   sin(2*pi*x)
   -----------                              (1)
   2*sin(pi*x)

(Formula from Steven W. Smith, absolute value taken on graph)

   / sin(pi*x) \2
   | --------- |    =  sinc^2(x)            (2)
   \   pi*x    /

(Formula from JOS, Nigel R.)

(1) and (2) (blue and red curve) are quite different.

Let's test how equation (1) compares against measured frequency
response of a LTI filter with coeffs [0.5, 0.5]:

http://morpheus.spectralhead.com/img/halfsample_delay_response.png

The maximum error between formula (1) and the measured frequency
response of the filter (a0=0.5, a1=0.5) is 3.3307e-16, or -310 dB,
which about equals the limits of the floating point precision at 64
bits. The frequency response was measured using Octave's freqz()
function, using 512 points.

Conclusion: Steven W. Smith's formula (1) seems correct.
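The same comparison can be reproduced with the standard library alone (a sketch; the agreement is no surprise, since sin(2x) = 2*sin(x)*cos(x) reduces formula (1) to |cos(pi*f)|):

```python
import cmath
import math

def measured(f):
    # frequency response of the two-tap averager [0.5, 0.5],
    # evaluated directly at normalized frequency f (cycles/sample)
    return abs(0.5 + 0.5 * cmath.exp(-2j * math.pi * f))

def formula(f):
    # Steven W. Smith's half-sample-delay formula (1)
    return abs(math.sin(2 * math.pi * f) / (2 * math.sin(math.pi * f)))

# compare at 255 frequencies in (0, 0.5); f = 0 is excluded because
# formula (1) is 0/0 there
err = max(abs(measured(i / 512) - formula(i / 512)) for i in range(1, 256))
print(err)
```

The maximum difference is a few ulps, consistent with the Octave measurement above.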

Frequency response of the same filter in decibel scale:
http://morpheus.spectralhead.com/img/halfsample_delay_response2.png

(this graph is normalized to 0..1 rad, not 0..0.5)

The pole-zero plot was shown earlier, having a zero at z=-1, meaning
-Inf gain at Nyquist.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 But why would you constrain yourself to use first-order linear
 interpolation?

Because it's computationally very cheap?

 The oversampler itself is going to be a much higher order
 linear interpolator. So it seems strange to pour resources into that

Linear interpolation needs very little computation, compared to most
other types of interpolation. So I do not consider the idea of using
linear interpolation for higher stages of oversampling strange at all.
The higher the oversampling, the more optimal it is to use linear in
the higher stages.

 So heavy oversampling seems strange, unless there's some hard
 constraint forcing you to use a first-order interpolator.

The hard constraint is CPU usage, which is higher in all other types
of interpolators.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
i would say way more than 2x if you're using linear in between.  if memory
is cheap, i might oversample by perhaps as much as 512x and then use
linear to get in between the subsamples (this will get you 120 dB S/N).

But why would you constrain yourself to use first-order linear
interpolation? The oversampler itself is going to be a much higher order
linear interpolator. So it seems strange to pour resources into that, just
so you can avoid putting them into the final fractional interpolator. Is
the justification that the oversampler is a fixed interpolator, whereas the
final stage is variable (so we don't want to muck around with anything too
complex there)? I've seen it claimed (by Julius Smith IIRC) that
oversampling by as little as 10% cuts the interpolation filter requirements
by over 50%. So heavy oversampling seems strange, unless there's some hard
constraint forcing you to use a first-order interpolator.

quite familiar with it.

Yeah that was more for the list in general, to keep this discussion
(semi-)grounded.

E

On Wed, Aug 19, 2015 at 9:15 AM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 8/18/15 11:46 PM, Ethan Duni wrote:

  for linear interpolation, if you are a delayed by 3.5 samples and you
 keep that delay constant, the transfer function is
 
H(z)  =  (1/2)*(1 + z^-1)*z^-3
 
 that filter goes to -inf dB as omega gets closer to pi.

 Note that this holds for symmetric fractional delay filter of any odd
 order (i.e., Lagrange interpolation filter, windowed sinc, etc). It's not
 an artifact of the simple linear approach,


 at precisely Nyquist, you're right.  as you approach Nyquist, linear
 interpolation is worser than cubic Hermite but better than cubic B-spline
 (better in terms of less roll-off, worser in terms of killing images).

 it's a feature of the symmetric, finite nature of the fractional
 interpolator. Since there are good reasons for the symmetry constraint, we
 are left to trade off oversampling and filter order/design to get the final
 passband as flat as we need.

 My view is that if you are serious about maintaining fidelity across the
 full bandwidth, you need to oversample by at least 2x.


 i would say way more than 2x if you're using linear in between.  if memory
 is cheap, i might oversample by perhaps as much as 512x and then use linear
 to get in between the subsamples (this will get you 120 dB S/N).

 That way you can fit the transition band of your interpolation filter
 above the signal band. In applications where you are less concerned about
 full bandwidth fidelity, oversampling isn't required. Some argue that 48kHz
 sample rate is already effectively oversampled for lots of natural
 recordings, for example. If it's already at 96kHz or higher I would not
 bother oversampling further.


 i might **if** i want to resample by an arbitrary ratio and i am doing
 linear interpolation between the new over-sampled samples.

 remember, when we oversample for the purpose of resampling, if the
 prototype LPF is FIR (you know, the polyphase thingie), then you need not
 calculate all of the new over-sampled samples.  only the two you need to
 linear interpolate between.  so oversampling by a large factor only costs
 more in terms of memory for the coefficient storage.  not in computational
 effort.
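The "only compute the two subsamples you need" point can be sketched as follows. This is illustrative only: `make_prototype` and `frac_read` are hypothetical names, the Hann-windowed sinc is my stand-in prototype (not rbj's coefficient table), and branch DC gains are normalized to keep the sketch simple.

```python
import math

def make_prototype(L=32, taps_per_phase=8):
    """Hann-windowed-sinc lowpass for L-fold interpolation (illustrative,
    not an optimized design)."""
    N = L * taps_per_phase
    h = []
    for k in range(N):
        t = (k - N / 2) / L
        w = 0.5 - 0.5 * math.cos(2 * math.pi * k / N)  # Hann window
        h.append(w * (1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)))
    return h

def frac_read(x, t, L=32, taps_per_phase=8):
    """Read x at fractional position t. Only the TWO polyphase branches
    that straddle the fraction are evaluated and then linearly blended,
    so a large L costs coefficient memory, not extra multiplies."""
    h = make_prototype(L, taps_per_phase)
    n = math.floor(t)
    frac = t - n
    p = int(frac * L)              # lower polyphase branch index
    a = frac * L - p               # linear blend between branches p, p+1
    def branch(ph):
        taps = [h[ph + L * k] for k in range(taps_per_phase)]
        g = sum(taps) or 1.0       # normalize branch DC gain to unity
        acc = sum(c * x[n + taps_per_phase // 2 - 1 - k]
                  for k, c in enumerate(taps))
        return acc / g
    return (1 - a) * branch(p) + a * branch((p + 1) % L)

# a constant signal must come through unchanged, whatever the fraction
xs = [1.0] * 32
print(frac_read(xs, 10.37))
```

Note that only 2 × taps_per_phase multiplies happen per output sample, regardless of L: that is the memory-for-computation trade rbj describes.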

 Also this is recommended reading for this thread:

 https://ccrma.stanford.edu/~jos/Interpolation/ 
 https://ccrma.stanford.edu/%7Ejos/Interpolation/


 quite familiar with it.

 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



 ___
 music-dsp mailing list
 music-dsp@music.columbia.edu
 https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Theo Verelst
Sometimes I feel the personal integrity about these undergrad-level 
scientific quests is nowhere to be found with some people, and that's a 
shame.


Working on a decent subject like these mathematical approximations in 
digital signal processing should be accompanied by at least some 
self-respect in the treatment of the subjects one involves oneself in, 
obviously apart from chatter and stories and so on, because otherwise 
people might feel hurt to be contributing only, as it were, to feed the 
Man or something of that nature, and that's not cool in my opinion.


T.V.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread robert bristow-johnson

On 8/19/15 1:43 PM, Peter S wrote:

On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

But why would you constrain yourself to use first-order linear
interpolation?

Because it's computationally very cheap?


and it doesn't require a table of coefficients, like doing higher-order 
Lagrange or Hermite would.



The oversampler itself is going to be a much higher order
linear interpolator. So it seems strange to pour resources into that

Linear interpolation needs very little computation, compared to most
other types of interpolation. So I do not consider the idea of using
linear interpolation for higher stages of oversampling strange at all.
The higher the oversampling, the more optimal it is to use linear in
the higher stages.



here, again, is where Peter and i are on the same page.


So heavy oversampling seems strange, unless there's some hard
constraint forcing you to use a first-order interpolator.

The hard constraint is CPU usage, which is higher in all other types
of interpolators.



for plugins or embedded systems with a CPU-like core, computation burden 
is more of a cost issue than memory used.  but there are other embedded 
DSP situations where we are counting every word used.  8 years ago, i 
was working with a chip that offered for each processing block 8 
instructions (there were multiple moves, 1 multiply, and 1 addition that 
could be done in a single instruction), 1 state (or 2 states, if you 
count the output as a state) and 4 scratch registers.  that's all i 
had.  ain't no table of coefficients to look up.  in that case memory is 
way more important than wasting a few instructions recomputing numbers 
that you might otherwise just look up.





--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Well, you can compute those at runtime if you want - and you don't need a
terribly high order Lagrange interpolator if you're already oversampled, so
it's not necessarily a problematic overhead.

Meanwhile, the oversampler itself needs a table of coefficients. Assuming
we're talking about FIR interpolation, to avoid phase distortion. But
that's a single fixed table for supporting a single oversampling ratio, so
I can see how it would add up to a memory savings compared to a bank of
tables for different fractional interpolation points, if you're looking for
really fine/arbitrary granularity. If we're talking about a fixed
fractional delay, I'm not really seeing the advantage.

Obviously it will depend on the details of the application, it just seems
kind of unbalanced on its face to use heavy oversampling and then the
lightest possible fractional interpolator. It's not clear to me that a
moderate oversampling combined with a fractional interpolator of modestly
high order wouldn't be a better use of resources.

So it doesn't make a lot of sense to me to point to the low resource costs
of the first-order linear interpolator, when you're already devoting
resources to heavy oversampling in order to use it. They need to be
considered together and balanced, no? Your point about computing only the
subset of oversamples needed to drive the final fractional interpolator is
well-taken, but I think I need to see a more detailed accounting of that to
be convinced.

E

On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 8/19/15 1:43 PM, Peter S wrote:

 On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 But why would you constrain yourself to use first-order linear
 interpolation?

 Because it's computationally very cheap?


 and it doesn't require a table of coefficients, like doing higher-order
 Lagrange or Hermite would.

 The oversampler itself is going to be a much higher order
 linear interpolator. So it seems strange to pour resources into that

 Linear interpolation needs very little computation, compared to most
 other types of interpolation. So I do not consider the idea of using
 linear interpolation for higher stages of oversampling strange at all.
 The higher the oversampling, the more optimal it is to use linear in
 the higher stages.


 here, again, is where Peter and i are on the same page.

 So heavy oversampling seems strange, unless there's some hard
 constraint forcing you to use a first-order interpolator.

 The hard constraint is CPU usage, which is higher in all other types
 of interpolators.


 for plugins or embedded systems with a CPU-like core, computation burden
 is more of a cost issue than memory used.  but there are other embedded DSP
 situations where we are counting every word used.  8 years ago, i was
 working with a chip that offered for each processing block 8 instructions
 (there were multiple moves, 1 multiply, and 1 addition that could be done
 in a single instruction), 1 state (or 2 states, if you count the output as
 a state) and 4 scratch registers.  that's all i had.  ain't no table of
 coefficients to look up.  in that case memory is way more important than
 wasting a few instructions recomputing numbers that you might otherwise
 just look up.





 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
3.2 Multistage
3.2.1 Can I interpolate in multiple stages?

Yes, so long as the interpolation ratio, L, is not a prime number.
For example, to interpolate by a factor of 15, you could interpolate
by 3 then interpolate by 5. The more factors L has, the more choices
you have. For example you could interpolate by 16 in:

- one stage: 16
- two stages: 4 and 4
- three stages: 2, 2, and 4
- four stages: 2, 2, 2, and 2

3.2.2 Cool. But why bother with all that?

Just as with decimation, the computational and memory requirements
of interpolation filtering can often be reduced by using multiple
stages.

3.2.3 OK, so how do I figure out the optimum number of stages, and the
interpolation ratio at each stage?

There isn't a simple answer to this one: the answer varies depending
on many things. However, here are a couple of rules of thumb:

- Using two or three stages is usually optimal or near-optimal.
- Interpolate in order of the smallest to largest factors. For
example, when interpolating by a factor of 60 in three stages,
interpolate by 3, then by 4, then by 5. (Use the largest ratio on the
highest rate.)

http://dspguru.com/dsp/faqs/multirate/interpolation
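The savings the FAQ describes can be made concrete with the common taps ≈ atten / (22 · Δf) rule of thumb (a sketch: the 100 dB spec and the band edges below are my illustrative assumptions, not numbers from the FAQ):

```python
import math

def fir_taps(atten_db, transition, rate):
    """Rule-of-thumb FIR length: N ~ atten / (22 * normalized transition
    width). `transition` is in units of the ORIGINAL sample rate; the
    filter runs at `rate` times that rate."""
    return math.ceil(atten_db / (22.0 * (transition / rate)))

ATTEN = 100.0  # dB stopband attenuation (assumed spec)

# Single stage, 16x: the narrow 0.45..0.55 transition band must be kept.
single = fir_taps(ATTEN, 0.10, 16)

# Three stages (2, 2, 4): only the first stage needs the narrow band;
# later stages just knock down images, so their transition bands are wide.
n1 = fir_taps(ATTEN, 0.10, 2)          # runs on 1x-rate input
n2 = fir_taps(ATTEN, 1.45 - 0.55, 4)   # runs on 2x-rate input
n3 = fir_taps(ATTEN, 3.45 - 0.55, 16)  # runs on 4x-rate input
multi_macs = n1 * 1 + n2 * 2 + n3 * 4  # polyphase MACs per original sample
print(f"single stage: ~{single} taps (and MACs per input sample)")
print(f"three stages: ~{n1}+{n2}+{n3} taps, ~{multi_macs} MACs per input sample")
```

Under these assumptions the three-stage design needs roughly a third of the multiplies of the single 16x filter, which is the FAQ's point.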


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
Nope. Ever heard of multistage interpolation?

I'm well aware that multistage interpolation gives cost savings relative to
single-stage interpolation, generally. That is beside the point: the costs
of interpolation all still scale with oversampling ratio and quality
requirements, just like in single-stage interpolation. There's no magic to
multi-stage interpolation that avoids that relationship.

that's just plain wrong and stupid, and that's what all advanced multirate
books
will also tell you.

You've been told repeatedly that this kind of abusive, condescending
behavior is not welcome here, and you need to cut it out immediately.

Tell me, you don't have an extra half kilobyte of memory in a typical
computer?

There are lots of dsp applications that don't run on personal computers,
but rather on very lightweight embedded targets. Memory tends to be at a
premium on those platforms.

E


On Wed, Aug 19, 2015 at 3:55 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 
  I don't dispute that linear fractional interpolation is the right choice
 if
  you're going to oversample by a large ratio. The question is what is the
  right balance overall, when considering the combined costs of
  the oversampler and the fractional interpolator.

 It's hard to tell in general. It depends on various factors, including:

 - your desired/available CPU usage
 - your desired/available memory usage and cache size
 - the available instruction set of your CPU
 - your desired antialias filter steepness
 - your desired stopband attenuation

 ...and possibly other factors. Since these may vary largely, I think
 it is impossible to tell in general. What I read in multirate
 literature, and what is also my own experience, is that - when using a
 relatively large oversampling ratio - then it's more cost-effective to
 use linear interpolation at the higher stages (and that's Olli's
 conclusion as well).

  You can leverage any finite interpolator to skip computations in an FIR
  oversampler, not just linear. You get the most skipping in the case of
  high oversampling ratio and linear interpolation, but the same trick
 still
  works any time your oversampling ratio is greater than your interpolator
  order.

 But to a varying degree. An FIR interpolator is still heavier than
 linear interpolation, even if you skip the samples where the
 coefficient is zero (but it is also higher quality).

  The flipside is that the higher the oversampling ratio, the longer the
 FIR
  oversampling filter needs to be in the first place.

 Nope. Ever heard of multistage interpolation? You may do a small FIR
 stage (say, 2x or 4x), and then a linear stage (or another,
 low-complexity FIR stage according to your desired specifications, or
 even further stages). Seems you still don't understand that you can
 oversample in multiple stages, and use a linear interpolator for the
 higher stages of oversampling... which is almost always better than
 using a single costly FIR filter to do the interpolation. You don't
 need to use a 512x FIR at 100 dB stopband attenuation, that's just
 plain wrong and stupid, and that's what all advanced multirate books
 will also tell you.

 Same for IIR case.

 Since memory is usually not an issue,
 
  There are lots of dsp applications where memory is very much the main
  constraint.

 Tell me, you don't have an extra half kilobyte of memory in a typical
 computer? I hear, those have 8-32 GB of RAM nowadays, and CPU cache
 sizes are like 32-128 KiB.

  The performance of your oversampler will be garbage if you do that. And
 so
  there will be no point in worrying about the quality of fractional
  interpolation after that point, since the signal you'll be interpolating
  will be full of aliasing to begin with.

 Exactly. But it won't be heavy! So it's not the oversampling that
 makes the process heavy, but rather the interpolation / anti-aliasing
 filter!!

  And that means it needs lots of resources, especially as the oversampling
  ratio gets large. It's the required quality that drives the oversampler
  costs (and filter design choices).

 Which is exactly what I said. If your specification is low, you can
 have a 128x oversampler that is (relatively) low-cost. It's not the
 oversampling ratio that matters most.

  If you are willing to accept low quality in order to save on CPU (or
 maybe
  there's nothing in the upper frequencies that you're worried about), then
  there's no point in resampling at all. Just use a low order fractional
  interpolator directly on the signal.

 Seems you still miss the whole point of multistage interpolation. I
 recommend you read some books / papers on multirate processing.

 It should also be noted that the linear interpolation can be used for
 the upsampling itself as well, reducing the cost of your oversampling,
 
  Again, that would add up to a very low quality upsampler.

 You're wrong. Read Olli Niemitalo's paper again 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 Obviously it will depend on the details of the application, it just seems
 kind of unbalanced on its face to use heavy oversampling and then the
 lightest possible fractional interpolator.

It should also be noted that the linear interpolation can be used for
the upsampling itself as well, reducing the cost of your oversampling,
not just as your fractional delay. A potential method to do fractional
delay is to upsample by a large factor, then delay by an integer
number of samples, and then downsample, without the use of an actual
fractional delay.

Say, if the fraction is 0.37, then you may upsample by 512x, then delay
the upsampled signal by round(512*0.37) = 189 samples, then downsample
back. So you did a fractional delay without using actual fractional
interpolation for the delay - you delayed by an integer number of
samples. You'll also have a little error - your delay is 0.369140625
instead of the desired 0.37, since it's quantized to 512 steps, so the
error is -0.000859375. I'm not saying this is ideal, I'm just saying
this is one possible way of doing a fractional delay.
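The 0.37 example works out as follows (a sketch using the numbers from the paragraph above):

```python
M = 512                    # oversampling factor
eta = 0.37                 # desired fractional delay, in input samples
L = round(M * eta)         # integer delay at the oversampled rate
realized = L / M           # delay actually achieved after downsampling
print(L, realized, realized - eta)   # 189, 0.369140625, error ~ -0.000859
```

Larger M shrinks the quantization error of the realized delay at the cost of a longer (or more finely tabulated) oversampling filter.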

This is discussed by JOS[1]:

In discrete time processing, the operation Eq.(4.5) can be
approximated arbitrarily closely by digital upsampling by a large
integer factor M, delaying by L samples (an integer), then finally
downsampling by M, as depicted in Fig.4.7 [96]. The integers L and M
are chosen so that eta ~= L/M, where eta is the desired fractional
delay.

[1] Julius O. Smith, Physical Audio Signal Processing
https://ccrma.stanford.edu/~jos/pasp/Convolution_Interpretation.html

Ref. [96] is:
R. Crochiere, L. Rabiner, and R. Shively, ``A novel implementation
of digital phase shifters,'' Bell System Technical Journal, vol. 54,
pp. 1497-1502, Oct. 1975.

Abstract:
A novel technique is presented for implementing a variable digital
phase shifter which is capable of realizing noninteger delays. The
theory behind the technique is based on the idea of first
interpolating the signal to a high sampling rate, then using an
integer delay, and finally decimating the signal back to the original
sampling rate. Efficient methods for performing these processes are
discussed in this paper. In particular, it is shown that the digital
phase shifter can be implemented by means of a simple convolution at
the sampling rate of the original signal.

In short, there are a zillion ways of implementing both oversampling
and fractional delays, and they can be combined arbitrarily.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 Ugh, I suppose this is what I get for attempting to engage with Peter S
 again. Not sure what I was thinking...

Well, you asked, why use linear interpolation at all? We told you
the advantages - fast computation, no coefficient table needed, and
(nearly) optimal for high oversampling ratios, and you were given some
literature.

If you don't believe it - well, not my problem... it's still true. #notmyloss


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Peter S peter.schoffhau...@gmail.com wrote:
 Another way to show that half-sample delay has -Inf gain at Nyquist:
 see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
 will have a zero at z=-1. A zero on the unit circle means -Inf gain,
 and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
 -Inf gain at Nyquist frequency.

It looks like this:
http://morpheus.spectralhead.com/img/halfsample_delay_zplane.png

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
Another way to show that half-sample delay has -Inf gain at Nyquist:
see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
will have a zero at z=-1. A zero on the unit circle means -Inf gain,
and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
-Inf gain at Nyquist frequency.

It would be ill-advised to dismiss the Nyquist frequency on the grounds
that it may alias to DC when sampled. The zero on the unit circle is at
Nyquist (z=-1), not at DC (z=1).

Frequency response graphs of linear interpolation, according to JOS:
https://ccrma.stanford.edu/~jos/Interpolation/Frequency_Responses_Linear_Interpolation.html
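The zero at z = -1 is a one-liner to verify (a sketch; `H` below is the LTI filter a0 = 0.5, a1 = 0.5 from the message):

```python
import cmath
import math

def H(z):
    # half-sample linear interpolator as LTI: y[n] = 0.5*x[n] + 0.5*x[n-1]
    return 0.5 + 0.5 * z ** -1

print(abs(H(cmath.exp(1j * math.pi))))  # at Nyquist (z = -1): gain ~0, -inf dB
print(abs(H(1 + 0j)))                   # at DC (z = +1): gain 1, 0 dB
```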


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
rbj
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the table space for the
fractional interpolator?

I wonder if the salient design concern here is less about balancing
resources, and more about isolating and simplifying the portions of the
system needed to support arbitrary (as opposed to just very-high-but-fixed)
precision. I like the modularity of the high oversampling/linear interp
approach, since that it supports arbitrary precision with a minimum of
fussy variable components or arcane coefficient calculations. It's got a
lot going for it in software engineering terms. But I'm on the fence about
whether it's the tightest use of resources (for whatever constraints).
Typically those are the arcane ones that take a ton of debugging and
optimization :P

E



On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 8/19/15 1:43 PM, Peter S wrote:

  On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 But why would you constrain yourself to use first-order linear
 interpolation?

 Because it's computationally very cheap?


 and it doesn't require a table of coefficients, like doing higher-order
 Lagrange or Hermite would.

 The oversampler itself is going to be a much higher order
 linear interpolator. So it seems strange to pour resources into that

 Linear interpolation needs very little computation, compared to most
 other types of interpolation. So I do not consider the idea of using
 linear interpolation for higher stages of oversampling strange at all.
 The higher the oversampling, the more optimal it is to use linear in
 the higher stages.


 here, again, is where Peter and i are on the same page.

 So heavy oversampling seems strange, unless there's some hard
 constraint forcing you to use a first-order interpolator.

 The hard constraint is CPU usage, which is higher in all other types
 of interpolators.


 for plugins or embedded systems with a CPU-like core, computation burden
 is more of a cost issue than memory used.  but there are other embedded DSP
 situations where we are counting every word used.  8 years ago, i was
 working with a chip that offered for each processing block 8 instructions
 (there were multiple moves, 1 multiply, and 1 addition that could be done
 in a single instruction), 1 state (or 2 states, if you count the output as
 a state) and 4 scratch registers.  that's all i had.  ain't no table of
 coefficients to look up.  in that case memory is way more important than
 wasting a few instructions recomputing numbers that you might otherwise
 just look up.





 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Nigel Redmon earle...@earlevel.com wrote:

 well, if it's linear interpolation and your fractional delay slowly sweeps
 from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
 in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
 dB at Nyquist.  pretty serious LPF to just slide into.

 Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling
 frequency. No?

-Inf at Nyquist when you're halfway between two samples.

Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...
After interpolating with fraction=0.5, it becomes a constant signal
0,0,0,0,0,0,0...
(because (-1+1)/2 = 0)
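Peter's alternating-sequence example can be checked in a couple of lines (a sketch; the sequence and the 0.5 fraction are taken from the message above):

```python
x = [(-1) ** n for n in range(8)]   # 1, -1, 1, -1, ... (tone at Nyquist)
# linear interpolation halfway between adjacent samples:
y = [0.5 * x[n] + 0.5 * x[n + 1] for n in range(len(x) - 1)]
print(y)   # all zeros: a half-sample linear interpolation nulls Nyquist
```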


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
-1...

The sampling theorem requires that all frequencies be *below* the Nyquist
frequency. Sampling signals at exactly the Nyquist frequency is an edge
case that sort-of works in some limited special cases, but there is no
expectation that digital processing of such a signal is going to work
properly in general.

But even given that, the interpolator outputting the zero signal in that
case is exactly correct. That's what you would have gotten if you'd sampled
the same sine wave (*not* square wave - that would imply frequencies above
Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case. The
incorrect behavior arises when you try to go in the other direction (i.e.,
apply a second half-sample delay), and you still get only DC.

But, again, that doesn't really say anything about interpolation. It just
says that you sampled the signal improperly in the first place, and so
digital processing can't be relied upon to work appropriately.

E

On Tue, Aug 18, 2015 at 1:40 AM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 18/08/2015, Nigel Redmon earle...@earlevel.com wrote:
 
  well, if it's linear interpolation and your fractional delay slowly
 sweeps
  from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
  in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's
 -inf
  dB at Nyquist.  pretty serious LPF to just slide into.
 
  Right the first time, -7.8 dB at the Nyquist frequency, -inf at the
 sampling
  frequency. No?

 -Inf at Nyquist when you're halfway between two samples.

 Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
 -1...
 After interpolating with fraction=0.5, it becomes a constant signal
 0,0,0,0,0,0,0...
 (because (-1+1)/2 = 0)

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 4:28 PM, Peter S wrote:


1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing.


well Peter, here again is where you overreach.  assuming, without loss 
of generality, that the sampling period is 1, the continuous-time signals


   x(t)  =  1/cos(theta) * cos(pi*t + theta)

are all aliases for the signal described above (and described, 
incorrectly, as containing no aliasing).
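rbj's family of aliases is easy to verify numerically (a sketch; the theta values are arbitrary, restricted to |theta| < pi/2 so cos(theta) is nonzero):

```python
import math

# x(t) = (1/cos(theta)) * cos(pi*t + theta): for ANY such theta,
# sampling at integer t yields the same sequence 1, -1, 1, -1, ...
for theta in (0.0, 0.3, 1.0, -0.7):
    samples = [math.cos(math.pi * n + theta) / math.cos(theta)
               for n in range(6)]
    print([round(s, 12) for s in samples])
```

Expanding cos(pi*n + theta) = (-1)^n cos(theta) (since sin(pi*n) = 0) shows why the dependence on theta cancels at the sample instants, which is exactly the many-to-one mapping Ethan describes.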


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
What's causing you to be unable to reconstruct the waveform?

There are an infinite number of different nyquist-frequency sinusoids that,
when sampled, will all give the same ...,1, -1, 1, -1, ... sequence of
samples. The sampling is a many-to-one mapping in that case, and so cannot
be inverted.

See here:
https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem#Critical_frequency

Or consider what happens if you shift a nyquist-frequency sinusoid by half
a period before sampling it. You get ..., 0, 0, 0, 0, ... - which is quite
obviously the zero signal. It is not going to reproduce a nyquist frequency
sinusoid when you run it through a DAC.

E

On Tue, Aug 18, 2015 at 1:28 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
  -1...
 
  The sampling theorem requires that all frequencies be *below* the Nyquist
  frequency. Sampling signals at exactly the Nyquist frequency is an edge
  case that sort-of works in some limited special cases, but there is no
  expectation that digital processing of such a signal is going to work
  properly in general.

 Not necessarily, at least in theory.

 In practice, an anti-alias filter will filter out a signal exactly at
 Nyquist freq, both when sampling it (A/D conversion) and when
 reconstructing it (D/A conversion). But that doesn't mean that a
 half-sample delay doesn't have -Inf dB gain at Nyquist frequency. It's
 another thing that the anti-alias filter of a converter will typically
 filter it out anyways when reconstructing - but we weren't talking
 about reconstruction, so that is irrelevant here.

 A Nyquist frequency signal (1, -1, 1, -1, ...) is a perfectly valid
 bandlimited signal.

  But even given that, the interpolator outputting the zero signal in that
  case is exactly correct. That's what you would have gotten if you'd
 sampled
  the same sine wave (*not* square wave - that would imply frequencies
 above
  Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case.

 More precisely: a bandlimited Nyquist frequency square wave *equals* a
 Nyquist frequency sine wave. Or any other harmonic waveform for that
 matter (triangle, saw, etc.) In all cases, only the fundamental
 partial is there (1, -1, 1, -1, ... = Nyquist frequency sine), all the
 other partials are filtered out from the bandlimiting.

 So the signal 1, -1, 1, -1, *is* a Nyquist frequency bandlimited
 square wave, and also a sine-wave as well. They're identical. It *is*
 a bandlimited square wave - that's what you get when you take a
 Nyquist frequency square wave, and bandlimit it by removing all
 partials above Nyquist freq (say, via DFT). You may call it a square,
 a sine, saw, doesn't matter - when bandlimited, they're identical.

  The
  incorrect behavior arises when you try to go in the other direction
 (i.e.,
  apply a second half-sample delay), and you still get only DC.

 What would be incorrect about it? I'm not sure what is your
 assumption. Of course if you apply any kind of filtering to a zero DC
 signal, you'll still have a zero DC signal. -Inf + -Inf = -Inf...  Not
 sure what you're trying to achieve by applying a second half-sample
 delay... That also has -Inf dB gain at Nyquist, so you'll still have
 a zero DC signal after that. Since a half-sample delay has -Inf gain
 at Nyquist, you cannot undo it by applying another half-sample
 delay...

  But, again, that doesn't really say anything about interpolation. It just
  says that you sampled the signal improperly in the first place, and so
  digital processing can't be relied upon to work appropriately.

 That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
 and contains no aliasing. That's the maximal allowed frequency without
 any aliasing. It is a bandlimited Nyquist frequency square wave (which
 is equivalent to a Nyquist frequency sine wave). From that, you can
 reconstruct a perfect alias-free sinusoid of frequency SR/2.

 What's causing you to be unable to reconstruct the waveform?

 -P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 4:50 PM, Nigel Redmon wrote:

I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s, 
hence the “No?” in my reply to him).

The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB 
at 0.5 of the sample rate...


i will try to spell out my point.

there are probably a zillion applications of fractional-sample 
interpolation.  Vesa Valimaki's IEEE article Splitting the Unit Delay 
from the 90s is sorta equivalently seminal as fred harris's classic 
windowing paper about this.


but, within the zillion of applications, i can think of two classes of 
application in which all applications will fall into one or the other:


  1.  slowly-varying (or constant) delay with a fractional component.   
this would be a precision delay we might use to time-align things (like 
speakers) or for effects like flanging or to compensate for the delay of 
some other component like a filter.


  2.  rapidly-varying delay (again with a fractional component).  this 
would be sample-rate-conversion (SRC), resampling sound files, pitch 
shifting (either the splicing thing a Harmonizer might do or the 
sample-playback and looping a sampler might do), and wild-assed delay 
effects.


it's only in the second class of application that i think the sinc^2 
frequency rolloff (assuming linear interpolation) is a valid model (or 
hand-wavy approximation of a model).


in the first class of application, i think the model should be what you 
get if the delay was constant.  for linear interpolation, if you are a 
delayed by 3.5 samples and you keep that delay constant, the transfer 
function is


   H(z)  =  (1/2)*(1 + z^-1)*z^-3

that filter goes to -inf dB as omega gets closer to pi.
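That rolloff is quick to verify numerically; a small sketch (the helper name is mine), evaluating H(z) on the unit circle:

```python
import cmath
import math

def halfsample_gain_db(omega):
    """|H(e^{j omega})| in dB for H(z) = (1/2)(1 + z^-1) z^-3.
    The z^-3 factor is pure delay, so the magnitude is just |cos(omega/2)|."""
    z = cmath.exp(1j * omega)
    H = 0.5 * (1 + z ** -1) * z ** -3
    return 20.0 * math.log10(abs(H))

# Gain falls without bound as omega approaches pi (Nyquist):
for w in (0.5 * math.pi, 0.9 * math.pi, 0.99 * math.pi, 0.999 * math.pi):
    print(round(w / math.pi, 3), "pi ->", round(halfsample_gain_db(w), 1), "dB")
```

Each extra step toward omega = pi costs roughly another 20 dB, consistent with the -inf dB limit at Nyquist.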

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.

 No, even an ideal reconstruction filter won't do it. You've got your
 +Nyquist component sitting right on top of your -Nyquist component. Hence
 the aliasing. The information has been lost in the sampling, there's no way
 to reconstruct without some additional side information.

You cannot calculate 1/x when x=0, can you? That's division by zero.
Yet you know that as x tends to zero from the right, 1/x tends to
+infinity.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
Okay, I get what you mean. But that doesn't change the frequency
response of a half-sample delay, or doesn't mean that a half-sample
delay doesn't have a specific gain at Nyquist.

Never said that it did. In fact, I explicitly said that this issue of
sampling of Nyquist frequency sinusoids has no bearing on the frequency
response of fractional interpolators. I'd suggest dropping this whole
derail, if you are no longer hung up on this point.

E

On Tue, Aug 18, 2015 at 2:08 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 
  That class of signals is band limited to SR/2. The aliasing is in the
  amplitude/phase offset, not the frequency.

 Okay, I get what you mean. But that doesn't change the frequency
 response of a half-sample delay, or doesn't mean that a half-sample
 delay doesn't have a specific gain at Nyquist.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Tom Duffy

In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
That digital stream of samples is not reconstructable.

On 8/18/2015 1:28 PM, Peter S wrote:


That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing. It is a bandlimited Nyquist frequency square wave (which
is equivalent to a Nyquist frequency sine wave). From that, you can
reconstruct a perfect alias-free sinusoid of frequency SR/2.


NOTICE: This electronic mail message and its contents, including any attachments hereto 
(collectively, this e-mail), is hereby designated as confidential and 
proprietary. This e-mail may be viewed and used only by the person to whom it has been sent 
and his/her employer solely for the express purpose for which it has been disclosed and only in 
accordance with any confidentiality or non-disclosure (or similar) agreement between TEAC 
Corporation or its affiliates and said employer, and may not be disclosed to any other person or 
entity.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 5:01 PM, Emily Litella wrote:

... Never mind.



too late.

:-)

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
You cannot calculate 1/x when x=0, can you? Since that's division by zero.
Yet you'll know when x tends to zero from right towards left, then 1/x
will tend to +infinity.

Not sure what that is supposed to have to do with the present subject.

If you want to put it in terms of simple arithmetic, the aliasing issue
works like this: I add two numbers together, and find that the answer is X.
I tell you X, and then ask you to determine what the two numbers were. Can
you do it?

E

On Tue, Aug 18, 2015 at 2:13 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 In order to reconstruct that sinusoid, you'll need a filter with
 an infinitely steep transition band.
 
  No, even an ideal reconstruction filter won't do it. You've got your
  +Nyquist component sitting right on top of your -Nyquist component. Hence
  the aliasing. The information has been lost in the sampling, there's no
 way
  to reconstruct without some additional side information.

 You cannot calculate 1/x when x=0, can you? Since that's division by zero.
 Yet you'll know when x tends to zero from right towards left, then 1/x
 will tend to +infinity.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 3:44 PM, Ethan Duni wrote:
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 
1, -1...


The sampling theorem requires that all frequencies be *below* the 
Nyquist frequency. Sampling signals at exactly the Nyquist frequency 
is an edge case that sort-of works in some limited special cases, but 
there is no expectation that digital processing of such a signal is 
going to work properly in general.


But even given that, the interpolator outputting the zero signal in 
that case is exactly correct. That's what you would have gotten if 
you'd sampled the same sine wave (*not* square wave - that would imply 
frequencies above Nyquist) with a half-sample offset from the 1, -1, 
1, -1, ... case. The incorrect behavior arises when you try to go in 
the other direction (i.e., apply a second half-sample delay), and you 
still get only DC.


But, again, that doesn't really say anything about interpolation. It 
just says that you sampled the signal improperly in the first place, 
and so digital processing can't be relied upon to work appropriately.




as surprising as it may first appear, i think Peter S and i are totally 
on the same page here.


regarding *linear* interpolation, *if* you use linear interpolation in a 
precision delay (an LTI thingie, or at least quasi-time-invariant) and 
you delay by some integer + 1/2 sample, the filter you get has 
coefficients and transfer function


   H(z) =  (1/2)*(1 + z^-1)*z^-N

(where N is the integer part of the delay).

the gain of that filter, as you approach Nyquist, approaches -inf dB.

*my* point is that as the delay slowly slides from an integer number of 
samples, where the transfer function is


   H(z) = z^-N

to the integer + 1/2 sample (with gain above), this linear but 
time-variant system is going to sound like there is a LPF getting segued in.


this, for me, is enough to decide never to use solely linear 
interpolation for a modulateable delay widget.  if i vary delay, i want 
only the delay to change.  and i would prefer if the delay was the same 
for all frequencies, which makes the APF fractional delay thingie 
problematic.


bestest,

r b-j



On Tue, Aug 18, 2015 at 1:40 AM, Peter S peter.schoffhau...@gmail.com 
mailto:peter.schoffhau...@gmail.com wrote:


On 18/08/2015, Nigel Redmon earle...@earlevel.com
mailto:earle...@earlevel.com wrote:

 well, if it's linear interpolation and your fractional delay
slowly sweeps
 from 0 to 1/2 sample, i think you may very well hear a LPF start
to kick
 in.  something like -7.8 dB at Nyquist.  no, that's not right. 
it's -inf

 dB at Nyquist.  pretty serious LPF to just slide into.

 Right the first time, -7.8 dB at the Nyquist frequency, -inf at
the sampling
 frequency. No?

-Inf at Nyquist when you're halfway between two samples.

Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1,
-1, 1, -1...
After interpolating with fraction=0.5, it becomes a constant signal
0,0,0,0,0,0,0...
(because (-1+1)/2 = 0)



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
*my* point is that as the delay slowly slides from an integer number of
samples, where the transfer function is

   H(z) = z^-N

to the integer + 1/2 sample (with gain above), this linear but
time-variant system is going to sound like there is a LPF getting segued in.

this, for me, is enough to decide never to use solely linear interpolation
for a modulateable delay widget.  if i vary delay, i want only the delay
to change.

Yeah, absolutely. The varying suppression of high frequencies as the
fractional delay changes is undesirable, and indicates that better
interpolation schemes should be used there.

But the example of the weird things that can happen when you try to sample
a sine wave right at the nyquist rate and then process it is orthogonal to
that point.

E

On Tue, Aug 18, 2015 at 1:16 PM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 8/18/15 3:44 PM, Ethan Duni wrote:

 Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
 -1...

 The sampling theorem requires that all frequencies be *below* the Nyquist
 frequency. Sampling signals at exactly the Nyquist frequency is an edge
 case that sort-of works in some limited special cases, but there is no
 expectation that digital processing of such a signal is going to work
 properly in general.

 But even given that, the interpolator outputting the zero signal in that
 case is exactly correct. That's what you would have gotten if you'd sampled
 the same sine wave (*not* square wave - that would imply frequencies above
 Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case. The
 incorrect behavior arises when you try to go in the other direction (i.e.,
 apply a second half-sample delay), and you still get only DC.

 But, again, that doesn't really say anything about interpolation. It just
 says that you sampled the signal improperly in the first place, and so
 digital processing can't be relied upon to work appropriately.


 as surprising as it may first appear, i think Peter S and i are totally on
 the same page here.

 regarding *linear* interpolation, *if* you use linear interpolation in a
 precision delay (an LTI thingie, or at least quasi-time-invariant) and you
 delay by some integer + 1/2 sample, the filter you get has coefficients and
 transfer function

H(z) =  (1/2)*(1 + z^-1)*z^-N

 (where N is the integer part of the delay).

 the gain of that filter, as you approach Nyquist, approaches -inf dB.

 *my* point is that as the delay slowly slides from an integer number of
 samples, where the transfer function is

H(z) = z^-N

 to the integer + 1/2 sample (with gain above), this linear but
 time-variant system is going to sound like there is a LPF getting segued in.

 this, for me, is enough to decide never to use solely linear interpolation
 for a modulateable delay widget.  if i vary delay, i want only the delay to
 change.  and i would prefer if the delay was the same for all frequencies,
 which makes the APF fractional delay thingie problematic.

 bestest,

 r b-j


 On Tue, Aug 18, 2015 at 1:40 AM, Peter S peter.schoffhau...@gmail.com
 mailto:peter.schoffhau...@gmail.com wrote:

 On 18/08/2015, Nigel Redmon earle...@earlevel.com
 mailto:earle...@earlevel.com wrote:
 
  well, if it's linear interpolation and your fractional delay
 slowly sweeps
  from 0 to 1/2 sample, i think you may very well hear a LPF start
 to kick
  in.  something like -7.8 dB at Nyquist.  no, that's not right.
  it's -inf
  dB at Nyquist.  pretty serious LPF to just slide into.
 
  Right the first time, -7.8 dB at the Nyquist frequency, -inf at
 the sampling
  frequency. No?

 -Inf at Nyquist when you're halfway between two samples.

 Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1,
 -1, 1, -1...
 After interpolating with fraction=0.5, it becomes a constant signal
 0,0,0,0,0,0,0...
 (because (-1+1)/2 = 0)


 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.






Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 But the example of the weird things that can happen when you try to sample
 a sine wave right at the nyquist rate and then process it is orthogonal to
 that point.

That's not weird, and that's *exactly* what you have in the highest
bin of an FFT.

The signal 1, -1, 1, -1, 1, -1 ... is the highest frequency basis
function of the DFT:
http://www.dspguide.com/graphics/F_8_5.gif

If you think that's weird, then I guess you think that the Fourier
transform is weird.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Tom Duffy tdu...@tascam.com wrote:
 In order to reconstruct that sinusoid, you'll need a filter with
 an infinitely steep transition band.

I can use an arbitrarily long sinc kernel to reconstruct / interpolate
it. Therefore, for any desired precision, you can find an appropriate
sinc kernel length. Where's the problem?

I can also oversample the signal arbitrarily, using an arbitrarily
long sinc kernel.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.

No, even an ideal reconstruction filter won't do it. You've got your
+Nyquist component sitting right on top of your -Nyquist component. Hence
the aliasing. The information has been lost in the sampling, there's no way
to reconstruct without some additional side information.
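The collapse of +Nyquist onto -Nyquist is easy to demonstrate numerically; a sketch (function name and parameters are mine): two different continuous-time sinusoids at the Nyquist frequency produce identical sample streams, and sampling one of them with a half-sample offset yields all zeros.

```python
import math

def sample_nyquist(amp, phase, n, offset=0.0):
    """Sample amp*sin(pi*t + phase) at t = k + offset for k = 0..n-1
    (unit sample period, so pi rad/sample is the Nyquist frequency)."""
    return [amp * math.sin(math.pi * (k + offset) + phase) for k in range(n)]

a = sample_nyquist(1.0, math.pi / 2, 4)  # cos(pi k): 1, -1, 1, -1
b = sample_nyquist(2.0, math.pi / 6, 4)  # different amplitude and phase...
# ...yet 2*sin(pi k + pi/6) = (-1)^k too: a == b up to rounding error.
c = sample_nyquist(1.0, math.pi / 2, 4, offset=0.5)  # half-sample offset: all ~0
```

Since distinct (amplitude, phase) pairs give identical samples, no reconstruction filter, however ideal, can tell them apart.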

E

On Tue, Aug 18, 2015 at 1:45 PM, Tom Duffy tdu...@tascam.com wrote:

 In order to reconstruct that sinusoid, you'll need a filter with
 an infinitely steep transition band.
 You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
 That digital stream of samples is not reconstructable.

 On 8/18/2015 1:28 PM, Peter S wrote:

 That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
 and contains no aliasing. That's the maximal allowed frequency without
 any aliasing. It is a bandlimited Nyquist frequency square wave (which
 is equivalent to a Nyquist frequency sine wave). From that, you can
 reconstruct a perfect alias-free sinusoid of frequency SR/2.








Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
 for linear interpolation, if you are delayed by 3.5 samples and you
keep that delay constant, the transfer function is

   H(z)  =  (1/2)*(1 + z^-1)*z^-3

that filter goes to -inf dB as omega gets closer to pi.

Note that this holds for symmetric fractional delay filters of any odd
order (i.e., Lagrange interpolation filters, windowed sinc, etc.). It's not
an artifact of the simple linear approach; it's a feature of the symmetric,
finite nature of the fractional interpolator. Since there are good reasons
for the symmetry constraint, we are left to trade off oversampling and
filter order/design to get the final passband as flat as we need.
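The Nyquist null at the half-sample point can be checked for any odd order using the standard Lagrange fractional-delay coefficient formula; a sketch (helper names are mine):

```python
def lagrange_fd(order, delay):
    """Lagrange fractional-delay FIR coefficients h[0..order] for a total
    delay of `delay` samples: h[k] = prod_{m != k} (delay - m)/(k - m)."""
    h = []
    for k in range(order + 1):
        c = 1.0
        for m in range(order + 1):
            if m != k:
                c *= (delay - m) / (k - m)
        h.append(c)
    return h

def nyquist_gain(h):
    """|H(e^{j pi})| = |sum_k h[k] (-1)^k| for an FIR filter h."""
    return abs(sum(c * (-1) ** k for k, c in enumerate(h)))

# Odd orders with the delay at the exact half-sample midpoint: the
# symmetric coefficients cancel at Nyquist, so the gain is (numerically) zero.
for order in (1, 3, 5):
    print(order, nyquist_gain(lagrange_fd(order, order / 2)))
```

Order 1 with delay 0.5 reduces to the familiar [0.5, 0.5] linear interpolator; higher odd orders flatten the passband but keep the same null at Nyquist.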

My view is that if you are serious about maintaining fidelity across the
full bandwidth, you need to oversample by at least 2x. That way you can fit
the transition band of your interpolation filter above the signal band. In
applications where you are less concerned about full bandwidth fidelity,
oversampling isn't required. Some argue that 48kHz sample rate is already
effectively oversampled for lots of natural recordings, for example. If
it's already at 96kHz or higher I would not bother oversampling further.

Also this is recommended reading for this thread:

https://ccrma.stanford.edu/~jos/Interpolation/

E

On Tue, Aug 18, 2015 at 1:45 PM, Tom Duffy tdu...@tascam.com wrote:

 In order to reconstruct that sinusoid, you'll need a filter with
 an infinitely steep transition band.
 You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
 That digital stream of samples is not reconstructable.

 On 8/18/2015 1:28 PM, Peter S wrote:

 That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
 and contains no aliasing. That's the maximal allowed frequency without
 any aliasing. It is a bandlimited Nyquist frequency square wave (which
 is equivalent to a Nyquist frequency sine wave). From that, you can
 reconstruct a perfect alias-free sinusoid of frequency SR/2.








Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
 -1...

 The sampling theorem requires that all frequencies be *below* the Nyquist
 frequency. Sampling signals at exactly the Nyquist frequency is an edge
 case that sort-of works in some limited special cases, but there is no
 expectation that digital processing of such a signal is going to work
 properly in general.

Not necessarily, at least in theory.

In practice, an anti-alias filter will filter out a signal exactly at
the Nyquist frequency, both when sampling it (A/D conversion) and when
reconstructing it (D/A conversion). But that doesn't mean that a
half-sample delay doesn't have -Inf dB gain at the Nyquist frequency.
It's a separate matter that a converter's reconstruction filter will
typically filter such a signal out anyway - but we weren't talking
about reconstruction, so that is irrelevant here.

A Nyquist frequency signal (1, -1, 1, -1, ...) is a perfectly valid
bandlimited signal.

 But even given that, the interpolator outputting the zero signal in that
 case is exactly correct. That's what you would have gotten if you'd sampled
 the same sine wave (*not* square wave - that would imply frequencies above
 Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case.

More precisely: a bandlimited Nyquist frequency square wave *equals* a
Nyquist frequency sine wave - or any other harmonic waveform, for that
matter (triangle, saw, etc.). In all cases, only the fundamental
partial remains (1, -1, 1, -1, ... = a Nyquist frequency sine); all the
other partials are removed by the bandlimiting.

So the signal 1, -1, 1, -1, *is* a Nyquist frequency bandlimited
square wave, and also a sine-wave as well. They're identical. It *is*
a bandlimited square wave - that's what you get when you take a
Nyquist frequency square wave, and bandlimit it by removing all
partials above Nyquist freq (say, via DFT). You may call it a square,
a sine, saw, doesn't matter - when bandlimited, they're identical.

 The
 incorrect behavior arises when you try to go in the other direction (i.e.,
 apply a second half-sample delay), and you still get only DC.

What would be incorrect about it? I'm not sure what your assumption
is. Of course if you apply any kind of filtering to a zero DC
signal, you'll still have a zero DC signal. -Inf + -Inf = -Inf...  Not
sure what you're trying to achieve by applying a second half-sample
delay... That also has -Inf dB gain at Nyquist, so you'll still have
a zero DC signal after that. Since a half-sample delay has -Inf gain
at Nyquist, you cannot undo it by applying another half-sample
delay...

 But, again, that doesn't really say anything about interpolation. It just
 says that you sampled the signal improperly in the first place, and so
 digital processing can't be relied upon to work appropriately.

That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing. It is a bandlimited Nyquist frequency square wave (which
is equivalent to a Nyquist frequency sine wave). From that, you can
reconstruct a perfect alias-free sinusoid of frequency SR/2.

What's causing you to be unable to reconstruct the waveform?

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
You cannot calculate 1/x when x=0, can you? Since that's division by zero.
Yet you'll know when x tends to zero from right towards left, then 1/x
will tend to +infinity.

 Not sure what that is supposed to have to do with the present subject.

You cannot calculate 1/x when x=0, because that's division by zero,
yet you can calculate the limit of 1/x as x tends towards zero.
Meaning you can get arbitrarily close to zero from the right, and 1/x
will grow arbitrarily large.

Similarly, even if frequency f=0.5 may be considered ill-specified
(because it's the critical frequency), you can still approach it with
arbitrary precision, and the gain will approach -infinity. So

f=0.4
f=0.49
f=0.499
f=0.4999
f=0.49999
f=0.499999
etc.

The more closely you approach f=0.5, the closer the gain gets to
-infinity, even though f=0.5 is a critical frequency. f=0.499999
isn't, and it's quite close to f=0.5.

That's what I mean.
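For the half-sample linear interpolator h = [1/2, 1/2], the magnitude at normalized frequency f (cycles per sample) is |cos(pi*f)|, so the limit described above can be tabulated directly; a sketch (function name is mine):

```python
import math

def halfway_gain_db(f):
    """Gain in dB of h = [0.5, 0.5] at normalized frequency f:
    |H(f)| = |cos(pi*f)|, which heads to -inf as f -> 0.5."""
    return 20.0 * math.log10(math.cos(math.pi * f))

# Each extra 9 in f costs roughly another 20 dB:
for f in (0.4, 0.49, 0.499, 0.4999):
    print(f, round(halfway_gain_db(f), 1))
```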


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Jerry

On Aug 17, 2015, at 9:38 AM, Esteban Maestre este...@ccrma.stanford.edu wrote:

 No experience with compensation filters here. 
 But if you can afford to use a higher order interpolation scheme, I'd go for 
 that.
 
 Using Newton's Backward Difference Formula, one can construct time-varying, 
 table-free, efficient Lagrange interpolation schemes of arbitrary order (up 
 to 30-th or 40-th order) which stay within linear complexity while allowing 
 for run-time modulation of the interpolation order.
 
 https://ccrma.stanford.edu/~jos/Interpolation/Lagrange_Interpolation.html
 
 Cheers,
 Esteban

I would think that polynomial interpolators of order 30 or 40 would provide no 
end of unpleasant surprises due to the behavior of high-order polynomials. I'm 
thinking of weird spikes, etc. Have you actually used polynomial interpolators 
of this order?

Jerry
 
 
 On 8/17/2015 12:07 PM, STEFFAN DIEDRICHSEN wrote:
 I could write a few lines over the topic as well, since I made such a 
 compensation filter about 17 years ago. 
 So, there are people, that do care about that topic, but there are only 
 some, that do find time to write up something. 
 
 ;-)
 
 Steffan 
 
 
 On 17.08.2015|KW34, at 17:50, Theo Verelst theo...@theover.org wrote:
 
 However, no one here besides RBJ and a few brave souls seems to even care 
 much about real subjects.
 
 
 
 
 -- 
 
 Esteban Maestre
 CIRMMT/CAML - McGill Univ
 MTG - Univ Pompeu Fabra
 http://ccrma.stanford.edu/~esteban 


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Esteban Maestre



On 8/18/2015 6:41 AM, Jerry wrote:
I would think that polynomial interpolators of order 30 or 40 would 
provide no end of unpleasant surprises due to the behavior of 
high-order polynomials. I'm thinking of weird spikes, etc. Have you 
actually used polynomial interpolators of this order?


I remember going even above 40-th order with no problems.
But I also remember having problems with 80-th order interpolation.
I think it's called the /Runge phenomenon/.
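The Runge phenomenon is easy to reproduce with the classic f(t) = 1/(1 + 25*t^2) example on equidistant nodes; a plain-Python sketch (function names and the evaluation point are mine):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for k, yk in enumerate(ys):
        term = yk
        for m, xm in enumerate(xs):
            if m != k:
                term *= (x - xm) / (xs[k] - xm)
        total += term
    return total

def runge_error(order, x=0.96):
    """Interpolation error for 1/(1 + 25 t^2) on order+1 equidistant
    nodes in [-1, 1], evaluated near the edge of the interval."""
    f = lambda t: 1.0 / (1.0 + 25.0 * t * t)
    xs = [-1.0 + 2.0 * k / order for k in range(order + 1)]
    return abs(lagrange_eval(xs, [f(t) for t in xs], x) - f(x))

print(runge_error(10))   # already a visible error near the edge
print(runge_error(40))   # far larger: equidistant high-order interpolation diverges
```

The divergence is a property of equidistant nodes; audio fractional-delay use avoids the worst of it because the evaluation point sits in the central interval of the node set.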

Esteban



--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, robert bristow-johnson r...@audioimagination.com wrote:

 *my* point is that as the delay slowly slides from an integer number of
 samples [...] to the integer + 1/2 sample (with gain above), this linear but
 time-variant system is going to sound like there is an LPF getting segued
 in.

Exactly. As the fractional delay varies between 0..1, it will sound
like a fluttering LP filter that closes and opens as the delay varies,
having the most 'muffled' (LPF'ed) sound when fraction = 1/2.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Nigel Redmon
I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s, 
hence the “No?” in my reply to him).

The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB 
at 0.5 of the sample rate...
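That figure is quick to verify; a sketch (function name is mine) of the (sin(pi*x)/(pi*x))^2 response, with x in fractions of the sample rate:

```python
import math

def linear_interp_gain_db(x):
    """Linear interpolation's (sin(pi x)/(pi x))^2 response in dB,
    with x in fractions of the sample rate (x = 0.5 is Nyquist)."""
    s = math.sin(math.pi * x) / (math.pi * x)
    return 20.0 * math.log10(s * s)

print(round(linear_interp_gain_db(0.5), 1))  # -> -7.8 (dB at Nyquist)
print(linear_interp_gain_db(1.0 - 1e-9))     # heads toward -inf at the sample rate
```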


 On Aug 18, 2015, at 1:40 AM, Peter S peter.schoffhau...@gmail.com wrote:
 
 On 18/08/2015, Nigel Redmon earle...@earlevel.com wrote:
 
 well, if it's linear interpolation and your fractional delay slowly sweeps
 from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
 in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
 dB at Nyquist.  pretty serious LPF to just slide into.
 
 Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling
 frequency. No?
 
 -Inf at Nyquist when you're halfway between two samples.
 
 Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...
 After interpolating with fraction=0.5, it becomes a constant signal
 0,0,0,0,0,0,0...
 (because (-1+1)/2 = 0)

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, robert bristow-johnson r...@audioimagination.com wrote:
 On 8/18/15 4:28 PM, Peter S wrote:

 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
 and contains no aliasing. That's the maximal allowed frequency without
 any aliasing.

 well Peter, here again is where you overreach.  assuming, without loss
 of generality that the sampling period is 1, the continuous-time signals

 x(t)  =  1/cos(theta) * cos(pi*t + theta)

  are all aliases for the signal described above (and incorrectly described
  as contain[ing] no aliasing).

Well, strictly speaking, that is true. But I assumed the signal to be
bandlimited to 0..SR/2. In that case, you can perfectly reconstruct
it, as you have no other alias between 0..SR/2.
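For reference, the quoted family of continuous-time aliases can be verified numerically; a sketch (function name is mine):

```python
import math

def alias_samples(theta, n):
    """Samples of x(t) = (1/cos(theta)) * cos(pi*t + theta) at integer t.
    For any theta in (-pi/2, pi/2) this gives the same 1, -1, 1, -1, ..."""
    return [math.cos(math.pi * k + theta) / math.cos(theta) for k in range(n)]

# Three different theta values, one and the same sample stream:
for theta in (0.0, 0.3, 1.2):
    print([round(x, 6) for x in alias_samples(theta, 4)])  # [1.0, -1.0, 1.0, -1.0]
```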

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Nigel Redmon
OK, I looked back at Robert’s post, and see that the fact his reply was broken 
up into segments (as he replied to segments of Peter’s comment) made me miss 
his point. At first he was talking generally (pitch shifting), but at that 
point he was talking strictly about sliding into halfway between samples in 
the interpolation. Never mind.


 On Aug 18, 2015, at 1:50 PM, Nigel Redmon earle...@earlevel.com wrote:
 
 I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s, 
 hence the “No?” in my reply to him).
 
 The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 
 dB at 0.5 of the sample rate...
 
 
 On Aug 18, 2015, at 1:40 AM, Peter S peter.schoffhau...@gmail.com wrote:
 
 On 18/08/2015, Nigel Redmon earle...@earlevel.com wrote:
 
 well, if it's linear interpolation and your fractional delay slowly sweeps
 from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
 in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
 dB at Nyquist.  pretty serious LPF to just slide into.
 
 Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling
 frequency. No?
 
 -Inf at Nyquist when you're halfway between two samples.
 
 Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, 
 -1...
 After interpolating with fraction=0.5, it becomes a constant signal
 0,0,0,0,0,0,0...
 (because (-1+1)/2 = 0)

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 That class of signals is band limited to SR/2. The aliasing is in the
 amplitude/phase offset, not the frequency.

Okay, I get what you mean. But that doesn't change the frequency
response of a half-sample delay, or doesn't mean that a half-sample
delay doesn't have a specific gain at Nyquist.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon
And to add to what Robert said about “write code and sell it”, sometimes it’s 
more comfortable to make general but helpful comments here, and stop short of 
detailing the code that someone paid you a bunch of money for and might not 
want to be generally known.

And before people assume that I mean strictly “keep the secret sauce secret”, 
there’s also the fact that marketing might not want it known that every detail 
of their expensive plug-in is not 256x oversampled, 128-bit floating point data 
path throughout, dithered every stage. :-D

 On Aug 17, 2015, at 1:46 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:
 
 On 8/17/15 12:07 PM, STEFFAN DIEDRICHSEN wrote:
 I could write a few lines over the topic as well, since I made such a 
 compensation filter about 17 years ago.
 So, there are people, that do care about that topic, but there are only 
 some, that do find time to write up something.
 
 ;-)
 
 Steffan
 
 
 On 17.08.2015|KW34, at 17:50, Theo Verelst theo...@theover.org 
 mailto:theo...@theover.org wrote:
 
 However, no one here besides RBJ and a few brave souls seems to even care 
 much about real subjects.
 
 Theo, there are a lotta heavyweights here (like Steffan).  if you want a 
 3-letter acronym to toss around, try JOS.   i think there are plenty on 
 this list that care deeply about reality because they write code and sell it. 
  my soul is chicken-shit in the context.
 
 -- 
 
 r b-j  r...@audioimagination.com
 
 Imagination is more important than knowledge.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon
OK, Robert, I did consider the slow versus fast issue. But there have been few 
caveats posted in this thread, so I thought it might be misleading to some to 
not be specific about context. The worst case would be a precision delay of an 
arbitrary constant. (For example, at 44.1 kHz SR, there would be a significant 
frequency response difference between 250 ms and 250.01 ms, despite no 
perceptible difference in time. Of course, in some cases, even when using 
interpolated delays, you can quantize the delay time to a sample boundary—say 
if modulation is transient and the steady state is the main concern.)

So, yes, the context means a lot, so we should be clear. (And can you tell I’m 
doing something with delays right now?)

Personally, I’m a fan of upsampling, when needed.


 On Aug 17, 2015, at 1:55 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:
 
 On 8/17/15 2:39 PM, Nigel Redmon wrote:
 Since compensation filtering has been mentioned by a few, can I ask if 
 someone could get specific on an implementation (including a description of 
 constraints under which it operates)? I’d prefer keeping it simple by 
 restricting to linear interpolation, where it’s most needed, and perhaps 
 these comments will make clearer what I’m after:
 
 As I noted in the first reply to this thread, while it’s tempting to look at 
 the sinc^2 rolloff of a linear interpolator, for example, and think that 
 compensation would be to boost the highs to undo the rolloff, that won’t 
 work in the general case. Even in Olli Niemitalo’s most excellent paper on 
 interpolation methods 
 (http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems to 
 suggest doing this with pre-emphasis, which seems to be a mistake, unless I 
 misunderstood his intentions.
 
 In Olli’s case, he specifically recommended pre-emphasis (which I believe 
 will not work except for special cases of resampling at fixed fractions 
 between real samples) over post, as post becomes more complicated. (It seems 
 that you could do it post, taking into account the fractional part of a 
 particular lookup and avoiding the use of recursive filters—personally I’d 
 just upsample to begin with.)
 
 to me, it really depends on if you're doing a slowly-varying precision delay 
 in which the pre-emphasis might also be slowly varying.
 
 but if the application is sample-rate conversion or similar (like pitch 
 shifting) where the fractional delay is varying all over the place, i think a 
 fixed compensation for sinc^2 might be a good idea.  i don't see how it would 
 hurt.  especially for the over-sampled case.
 
 i like Olli's pink-elephant paper, too.  and i think (since he was picking 
 up on Duane's and my old and incomplete paper) it was more about the 
 fast-varying fractional delay.  and i think that the Zölzer/Bolze paper 
 suggested the same thing (since it was even worse than linear interp).
 
 
 -- 
 
 r b-j  r...@audioimagination.com
 
 Imagination is more important than knowledge.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Sham Beam

Thanks for the suggestions and discussion.

In my application I'm playing back 44.1 kHz wave files with variable pitch 
envelopes. I'm currently using Hermite interpolation and the quality 
seems fine for playback. It's only after resampling and running through 
the audio engine multiple times that the high frequency roll-off becomes 
a problem. I'll try adding in some oversampling.



Cheers
Shannon


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Sampo Syreeni

On 2015-08-17, robert bristow-johnson wrote:

As I noted in the first reply to this thread, while it’s tempting to 
look at the sinc^2 rolloff of a linear interpolator, for example, and 
think that compensation would be to boost the highs to undo the 
rolloff, that won’t work in the general case. Even in Olli Niemitalo’s 
most excellent paper on interpolation methods 
(http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems 
to suggest doing this with pre-emphasis, which seems to be a mistake, 
unless I misunderstood his intentions.


Actually it's not that simple. Substandard interpolation methods do lead 
to high frequency rolloff, which can be corrected to a degree with a 
complementary filter. But the trouble is, at the same time they lead to 
aliasing and even nonlinear artifacts, whose high frequency content will 
be amplified by the compensatory filter as well. As such, that approach 
is basically sound...but at the same time only within a very narrowly 
parametrized envelope.


to me, it really depends on if you're doing a slowly-varying precision 
delay in which the pre-emphasis might also be slowly varying.


In slowly varying delay it ought to work no matter what.

but if the application is sample-rate conversion or similar (like 
pitch shifting) where the fractional delay is varying all over the 
place, i think a fixed compensation for sinc^2 might be a good idea. 
i don't see how it would hurt. especially for the over-sampled case.


It doesn't necessarily hurt, but here it isn't guaranteed to do any good 
either. And it's close to doing something bad instead.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon

 On Aug 17, 2015, at 7:23 PM, robert bristow-johnson 
 r...@audioimagination.com wrote:
 
 On 8/17/15 7:29 PM, Sampo Syreeni wrote:
 
 to me, it really depends on if you're doing a slowly-varying precision 
 delay in which the pre-emphasis might also be slowly varying.
 
 In slowly varying delay it ought to work no matter what.
 
 well, if it's linear interpolation and your fractional delay slowly sweeps 
 from 0 to 1/2 sample, i think you may very well hear a LPF start to kick in.  
 something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf dB at 
 Nyquist.  pretty serious LPF to just slide into.

Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling 
frequency. No?




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Esteban Maestre

No experience with compensation filters here.
But if you can afford to use a higher order interpolation scheme, I'd go 
for that.


Using Newton's Backward Difference Formula, one can construct 
time-varying, table-free, efficient Lagrange interpolation schemes of 
arbitrary order (up to 30th or 40th order) which stay within linear 
complexity while allowing for run-time modulation of the interpolation 
order.


https://ccrma.stanford.edu/~jos/Interpolation/Lagrange_Interpolation.html
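To make the Lagrange option concrete: the Newton-backward-difference formulation Esteban points at is what makes high orders cheap at run time, but the resulting taps are just the textbook Lagrange product formula. A small sketch of that direct form (my illustration, not Esteban's code):

```python
def lagrange_fd(order, d):
    """FIR taps h[0..order] for a Lagrange fractional delay of d samples:
    h[n] = product over k != n of (d - k) / (n - k)."""
    h = []
    for n in range(order + 1):
        c = 1.0
        for k in range(order + 1):
            if k != n:
                c *= (d - k) / (n - k)
        h.append(c)
    return h

# Order 3, delay of 1.5 samples (d near the middle of the tap span
# gives the best response):
print(lagrange_fd(3, 1.5))       # [-0.0625, 0.5625, 0.5625, -0.0625]
print(sum(lagrange_fd(3, 1.5)))  # 1.0: unity gain at DC

# An integer delay degenerates to a unit impulse at tap 2, as it should:
print(lagrange_fd(3, 2.0))
```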

Cheers,
Esteban


On 8/17/2015 12:07 PM, STEFFAN DIEDRICHSEN wrote:
I could write a few lines over the topic as well, since I made such a 
compensation filter about 17 years ago.
So, there are people, that do care about that topic, but there are 
only some, that do find time to write up something.


;-)

Steffan


On 17.08.2015|KW34, at 17:50, Theo Verelst theo...@theover.org 
mailto:theo...@theover.org wrote:


However, no one here besides RBJ and a few brave souls seems to even 
care much about real subjects.






--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Theo Verelst


For people with a scientific orientation, it always surprises me how 
little actual science is involved in this talk about tradeoffs.


First, what is it you want to achieve by preserving high frequencies 
(which of course I'm all for)? Second, is it really only at the level of 
first order interpolations? And if so, isn't the compensation 
interpolation much more expensive than a solution that tries to qualify, 
and preferably quantify, the errors involved?


Using least squares and error estimates is a bit too easy for our 
sampling issues, because the mid and high frequencies at least get 
interpreted by the DAC reconstruction filter, by subsequent digital signal 
processing, or, as I prefer, by the perfect-reconstruction interpretation of 
the resulting digital signal streams.


IMO, high frequencies will be most served by leaving them alone as much 
as possible, and honoring the studio and post processing that has 
checked them out and pre-averaged them for normal sound reproduction. 
However, no one here besides RBJ and a few brave souls seems to even 
care much about real subjects.


Now I get it, everyone has a sound card and endless supplies of digital 
materials, and from my student efforts I recall it is fun to understand 
the theoretics of interpolation curves (and (hyper-) surfaces), but 
unfortunately they correlate only very very loosely with useful sampled 
signal theories, unless you want an effort for a particular niche.


T.V.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread STEFFAN DIEDRICHSEN
I could write a few lines over the topic as well, since I made such a 
compensation filter about 17 years ago. 
So, there are people, that do care about that topic, but there are only some, 
that do find time to write up something. 

;-)

Steffan 


 On 17.08.2015|KW34, at 17:50, Theo Verelst theo...@theover.org wrote:
 
 However, no one here besides RBJ and a few brave souls seems to even care 
 much about real subjects.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon
Since compensation filtering has been mentioned by a few, can I ask if someone 
could get specific on an implementation (including a description of constraints 
under which it operates)? I’d prefer keeping it simple by restricting to linear 
interpolation, where it’s most needed, and perhaps these comments will make 
clearer what I’m after:

As I noted in the first reply to this thread, while it’s tempting to look at the 
sinc^2 rolloff of a linear interpolator, for example, and think that 
compensation would be to boost the highs to undo the rolloff, that won’t work 
in the general case. Even in Olli Niemitalo’s most excellent paper on 
interpolation methods 
(http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems to 
suggest doing this with pre-emphasis, which seems to be a mistake, unless I 
misunderstood his intentions.

In Olli’s case, he specifically recommended pre-emphasis (which I believe will 
not work except for special cases of resampling at fixed fractions between real 
samples) over post, as post becomes more complicated. (It seems that you could 
do it post, taking into account the fractional part of a particular lookup and 
avoiding the use of recursive filters—personally I’d just upsample to begin 
with.)

It just occurred to me that perhaps one possible implementation is to 
cross-fade between a pre-emphasized and normal delay line, depending on the 
fractional position (0.5 gets all pre-emph, 0.0 gets all normal). This sort of 
thing didn’t seem to be what Olli was getting at, since he only gave the 
worst-case rolloff curve and didn’t discuss it further.


I’m not asking because I need to do this—I’m asking for the sake of the thread, 
where people are talking about compensation, but not explaining the 
implementation they have in mind, and not necessarily explaining the conditions 
under which it works.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Ethan Duni
Yeah I am also curious. It's not obvious to me where it would make sense to
spend resources compensating for interpolation rather than just juicing up
the interpolation scheme in the first place.

E

On Mon, Aug 17, 2015 at 11:39 AM, Nigel Redmon earle...@earlevel.com
wrote:

 Since compensation filtering has been mentioned by a few, can I ask if
 someone could get specific on an implementation (including a description of
 constraints under which it operates)?

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread robert bristow-johnson

On 8/17/15 12:07 PM, STEFFAN DIEDRICHSEN wrote:
I could write a few lines over the topic as well, since I made such a 
compensation filter about 17 years ago.
So, there are people, that do care about that topic, but there are 
only some, that do find time to write up something.


;-)

Steffan


On 17.08.2015|KW34, at 17:50, Theo Verelst theo...@theover.org 
mailto:theo...@theover.org wrote:


However, no one here besides RBJ and a few brave souls seems to even 
care much about real subjects.


Theo, there are a lotta heavyweights here (like Steffan).  if you want a 
3-letter acronym to toss around, try JOS.   i think there are plenty 
on this list that care deeply about reality because they write code and 
sell it.  my soul is chicken-shit in the context.


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread robert bristow-johnson

On 8/17/15 2:39 PM, Nigel Redmon wrote:

Since compensation filtering has been mentioned by a few, can I ask if someone 
could get specific on an implementation (including a description of constraints 
under which it operates)? I’d prefer keeping it simple by restricting to linear 
interpolation, where it’s most needed, and perhaps these comments will make 
clearer what I’m after:

As I noted in the first reply to this thread, while it’s tempting to look at the 
sinc^2 rolloff of a linear interpolator, for example, and think that 
compensation would be to boost the highs to undo the rolloff, that won’t work 
in the general case. Even in Olli Niemitalo’s most excellent paper on 
interpolation methods 
(http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems to 
suggest doing this with pre-emphasis, which seems to be a mistake, unless I 
misunderstood his intentions.

In Olli’s case, he specifically recommended pre-emphasis (which I believe will 
not work except for special cases of resampling at fixed fractions between real 
samples) over post, as post becomes more complicated. (It seems that you could 
do it post, taking into account the fractional part of a particular lookup and 
avoiding the use of recursive filters—personally I’d just upsample to begin 
with.)


to me, it really depends on if you're doing a slowly-varying precision 
delay in which the pre-emphasis might also be slowly varying.


but if the application is sample-rate conversion or similar (like pitch 
shifting) where the fractional delay is varying all over the place, i 
think a fixed compensation for sinc^2 might be a good idea.  i don't see 
how it would hurt.  especially for the over-sampled case.


i like Olli's pink-elephant paper, too.  and i think (since he was 
picking up on Duane's and my old and incomplete paper) it was more about 
the fast-varying fractional delay.  and i think that the Zölzer/Bolze 
paper suggested the same thing (since it was even worse than linear 
interp).
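To make "fixed compensation for sinc^2" concrete, here is a deliberately crude sketch (my illustration, not the Zölzer/Bolze design): a 3-tap zero-phase FIR whose gain undoes sinc^2 exactly at Nyquist. A real design would least-squares-fit the reciprocal response over the band of interest with more taps, as Robert describes.

```python
import math

def sinc2(f):
    """Linear-interpolation rolloff, f in cycles/sample (0 .. 0.5)."""
    if f == 0.0:
        return 1.0
    x = math.pi * f
    return (math.sin(x) / x) ** 2

# Zero-phase 3-tap compensator h = [-c, 1 + 2c, -c]:
# H(f) = 1 + 2c*(1 - cos(2*pi*f)); pick c so it undoes sinc^2 at Nyquist.
c = (1.0 / sinc2(0.5) - 1.0) / 4.0  # about 0.367

def comp(f):
    return 1.0 + 2.0 * c * (1.0 - math.cos(2.0 * math.pi * f))

# Net response: exact at DC and Nyquist, but a 3-tap shape overshoots
# in the midband by a couple of dB -- hence "more taps" for a real design.
for f in (0.1, 0.25, 0.4, 0.5):
    net_db = 20.0 * math.log10(sinc2(f) * comp(f))
    print(f, round(net_db, 2))
```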



--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Peter S
On 17/08/2015, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:
 I could write a few lines over the topic as well, since I made such a
 compensation filter about 17 years ago.
 So, there are people, that do care about that topic, but there are only
 some, that do find time to write up something.

I also made a compensation filter for linear interpolation. Definitely doable.


[music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Sham Beam

Hi,

Is it possible to use a filter to compensate for high frequency signal 
loss due to interpolation? For example linear or hermite interpolation.


Are there any papers that detail what such a filter might look like?


Thanks
Shannon


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Sampo Syreeni

On 2015-08-16, Sham Beam wrote:

Is it possible to use a filter to compensate for high frequency signal 
loss due to interpolation? For example linear or hermite 
interpolation.


Are there any papers that detail what such a filter might look like?


Look at Vesa Välimäki's work, and his students'. They did fractional 
delay delay lines, which had just this problem in the high end. Also, 
Julius O. Smith's work with waveguides bumped into this very same thing, 
because they're implemented as (fractional) delay lines as well. Beyond 
that, most reverb designers could tell you about this sort of thing, 
only they tend to keep their secret sauce *most* secret. ;)


The usual thing you do is to go for higher order interpolation, with the 
interpolating polynomial being designed for flatter performance over the 
utility band than the linear spline. It's already very much better at 
3rd order, and if you do something like 4th to 5th order with 2x 
oversampling, it's essentially perfect.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Nigel Redmon
As far as compensation: taking linear as an example, we know that the response 
rolls off (sinc^2). Would you compensate by boosting the highs? Consider that 
for a linearly interpolated delay line, a delay of an integer number of 
samples, i, has no high frequency loss at all, but that the error is at its 
maximum if you need a delay of i + 0.5 samples. More difficult to compensate 
for would be such a delay line where the delay time is modulated.

A well-published way of getting around the fractional problem is allpass 
compensation. But a lot of people seem to miss that this method doesn’t lend 
itself to modulation—it’s ideally suited for a fixed fractional delay. Here’s a 
paper that shows one possible solution, crossfading two allpass filters:

http://scandalis.com/jarrah/Documents/DelayLine.pdf
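The first-order case of the allpass approach can be sketched as follows (a standard coefficient choice from the fractional-delay literature, my illustration; see the paper above for the crossfading scheme itself):

```python
import cmath

def allpass_coeff(d):
    """Coefficient for the first-order fractional-delay allpass
    H(z) = (a + z^-1) / (1 + a*z^-1); phase delay -> d as omega -> 0."""
    return (1.0 - d) / (1.0 + d)

def allpass_response(a, w):
    zinv = cmath.exp(-1j * w)
    return (a + zinv) / (1.0 + a * zinv)

a = allpass_coeff(0.5)

# Magnitude is exactly 1 at every frequency -- no high-frequency loss,
# unlike linear interpolation -- at the cost of a frequency-dependent
# phase delay, which is why it suits fixed rather than modulated delays.
for w in (0.1, 1.0, 3.0):
    print(abs(allpass_response(a, w)))  # 1.0 (up to rounding)

# Low-frequency phase delay approximates the requested fraction:
w = 0.01
print(-cmath.phase(allpass_response(a, w)) / w)  # close to 0.5 samples
```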

Obviously, the most straight-forward way to avoid the problem is to convert to 
a higher sample rate going into the delay line (using windowed sinc, etc.), 
then use linear, hermite, etc.


 On Aug 16, 2015, at 1:09 AM, Sham Beam sham.b...@gmail.com wrote:
 
 Hi,
 
 Is it possible to use a filter to compensate for high frequency signal loss 
 due to interpolation? For example linear or hermite interpolation.
 
 Are there any papers that detail what such a filter might look like?
 
 
 Thanks
 Shannon



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Robin Whittle
Hi Shannon,

If the number of reads from the delay line per sample cycle is high
enough, as a less expensive alternative to the most obvious solution
(higher order interpolation based on multiple samples before and
after, with some fancy set of coefficients calculated on the spot, or
looked up from a table of sufficiently high resolution, depending on the
fraction of a sample delay involved) you might like to consider
upsampling the input signal to twice the normal rate, and then doing
simple linear interpolation.

This would not be mathematically perfect, since the high frequency
response would be slightly reduced if the delay fraction was 0.25 or
0.75, whereas it would be flat for 0, and as flat as the upsampling
algorithm for 0.5 (I recall that the upsampling algorithm produces the
odd numbered samples in the final output, with the even ones being the
input samples).  However, assuming the sampling rate is 44.1kHz, 48kHz
or higher, I think this slight variation is unlikely to be perceptible
to human ears.
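To quantify "slightly reduced" under these assumptions (ideal 2x upsampling, then linear interpolation at the doubled rate), a quick sketch (my illustration):

```python
import cmath, math

FS2 = 88200.0  # doubled rate, assuming 44.1 kHz input and ideal 2x upsampling

def linear_interp_gain(f_hz, d, fs):
    """|H| of the fractional-delay filter H(z) = (1 - d) + d*z^-1 at f_hz."""
    w = 2.0 * math.pi * f_hz / fs
    return abs((1.0 - d) + d * cmath.exp(-1j * w))

# Worst case per the post: a delay fraction of 0.25 or 0.75 of an
# *original* sample, i.e. d = 0.5 on the doubled-rate grid.
for f in (5000, 10000, 15000, 20000):
    print(f, round(20.0 * math.log10(linear_interp_gain(f, 0.5, FS2)), 2))
# roughly -0.14, -0.56, -1.30 and -2.42 dB respectively
```

So the worst-case loss stays well under a dB through the midrange and only approaches a couple of dB at the very top of the band.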

  Robin


On 2015-08-16 6:09 PM, Sham Beam wrote:
 Hi,
 
 Is it possible to use a filter to compensate for high frequency signal
 loss due to interpolation? For example linear or hermite interpolation.
 
 Are there any papers that detail what such a filter might look like?
 
 
 Thanks
 Shannon


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Marcus Hobbs

Is this Robin Whittle of Devilfish fame?  I bought a Devilfish from you back in 
the mid-1990s.  Best mod ever!

 On Aug 16, 2015, at 8:07 PM, Robin Whittle r...@firstpr.com.au wrote:
 
 Hi Shannon,
 
 If the number of reads from the delay line per sample cycle is high
 enough, as a less expensive alternative to the most obvious solution
 (higher order interpolation based on multiple samples before and
 after, with some fancy set of coefficients calculated on the spot, or
 looked up from a table of sufficiently high resolution, depending on the
 fraction of a sample delay involved) you might like to consider
 upsampling the input signal to twice the normal rate, and then doing
 simple linear interpolation.
 
 This would not be mathematically perfect, since the high frequency
 response would be slightly reduced if the delay fraction was 0.25 or
 0.75, whereas it would be flat for 0, and as flat as the upsampling
 algorithm for 0.5 (I recall that the upsampling algorithm produces the
 odd numbered samples in the final output, with the even ones being the
 input samples).  However, assuming the sampling rate is 44.1kHz, 48kHz
 or higher, I think this slight variation is unlikely to be perceptible
 to human ears.
 
  Robin
 
 
 On 2015-08-16 6:09 PM, Sham Beam wrote:
 Hi,
 
 Is it possible to use a filter to compensate for high frequency signal
 loss due to interpolation? For example linear or hermite interpolation.
 
 Are there any papers that detail what such a filter might look like?
 
 
 Thanks
 Shannon




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread robert bristow-johnson

On 8/16/15 4:09 AM, Sham Beam wrote:

Hi,

Is it possible to use a filter to compensate for high frequency signal 
loss due to interpolation? For example linear or hermite interpolation.


Are there any papers that detail what such a filter might look like?



besides the well-known sinc^2 rolloff that comes with linear 
interpolation, this paper discussed interpolation using higher-order 
B-splines.


Udo Zölzer and Thomas Bolze, Interpolation Algorithms: Theory and 
Application, http://www.aes.org/e-lib/browse.cfm?elib=6334 .  i thought 
i had a copy of it, Duane Wise and i referenced the paper in our 
2-decade-old paper about different polynomial interpolation effects and 
which were better for what.  (you can have a .pdf of that paper if you 
want.)


but Zölzer and Bolze (they might be hanging out on this list, i was 
pleasantly surprised to see JOS post here recently) *do* discuss 
pre-compensation of high-frequency rolloff due to interpolation 
polynomials that cause such rolloff.  you just design a filter (using 
MATLAB or whatever) that has magnitude response that is, in the 
frequency band of interest, approximately the reciprocal of the rolloff 
effect from the interpolation.


Zölzer and Bolze suggested Nth-order B-spline without really justifying 
why that is better than other polynomial kernels such as Lagrange or 
Hermite.  the Nth-order B-spline (at least how it was shown in their 
paper), is what you get when you convolve N unit-rectangular functions 
with each other (or N zero-order holds).  the frequency response of a 
Nth-order B-spline is sinc^(N+1).  this puts really deep and wide 
notches at integer multiples of the original sampling frequency (other 
than the integer 0) which is where all those images are that you want to 
kill.  linear interpolation is all of a 1st-order Lagrange, a 1st-order 
Hermite, and a 1st-order B-spline (and the ZOH or drop-sample 
interpolation is a 0th-order realization of those three).


an Nth-order polynomial interpolator will have, somewhere in the 
frequency response, a H(f) = (sinc(f/Fs))^(N+1) in there, but if it's 
not the simple B-spline, there will be lower order terms of sinc() that 
will add to (contaminate) the highest order sinc^(N+1) and make those 
notches less wide.  any other polynomial interpolation (at higher order 
than 1), will have at least one sinc() term with lower order than N+1.


so the cool thing about interpolating with B-splines is that it kills 
the images (which become aliases when you resample) the most, but it 
also has wicked LPFing that needs to be compensated unless your sampling 
frequency is *much* higher than twice the bandwidth (oversampled 
big-time).  but if you *are* experiencing that LPFing, as you have 
suspected, you can design a filter to undo that for much of the 
baseband.  not all of it.
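A quick numeric check of the tradeoff Robert describes (my sketch; f is in multiples of the original sample rate, so f = 1 is the first image and f = 0.5 is the top of the original baseband):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def bspline_mag_db(order, f):
    """Magnitude in dB of an Nth-order B-spline interpolator:
    |H(f)| = |sinc(f)|^(order + 1)."""
    return 20.0 * (order + 1) * math.log10(abs(sinc(f)))

# Image rejection near the first image (f just below Fs):
print(round(bspline_mag_db(1, 0.9), 1))  # linear: about -38.5 dB
print(round(bspline_mag_db(3, 0.9), 1))  # cubic B-spline: about -76.9 dB

# ...and the price, baseband rolloff at half the original rate:
print(round(bspline_mag_db(1, 0.5), 1))  # about -7.8 dB
print(round(bspline_mag_db(3, 0.5), 1))  # about -15.7 dB
```

Going from linear to a cubic B-spline roughly doubles the image attenuation in dB, but also doubles the in-band rolloff that the compensation filter must then undo.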


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.


