Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
Comparison of the two formulas from previous post: (1) in blue, sinc^2
(2) in red:
http://morpheus.spectralhead.com/img/sinc.png

   sin(2*pi*x) / (2*sin(pi*x))                    (1)

(Formula from Steven W. Smith, absolute value taken on graph)

   ( sin(pi*x) / (pi*x) )^2  =  sinc^2(x)         (2)

(Formula from JOS, Nigel R.)

(1) and (2) (blue and red curve) are quite different.

Let's test how formula (1) compares against the measured frequency
response of an LTI filter with coeffs [0.5, 0.5]:

http://morpheus.spectralhead.com/img/halfsample_delay_response.png

The maximum error between formula (1) and the measured frequency
response of the filter (a0=0.5, a1=0.5) is 3.3307e-16, or about -310
dB, which is roughly the limit of 64-bit floating-point precision. The
frequency response was measured using Octave's freqz() function with
512 points.

Conclusion: Steven W. Smith's formula (1) seems correct.
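For anyone wanting to reproduce this comparison without Octave, here is a
rough numpy equivalent (freqz() is replaced by direct evaluation of the
filter on the unit circle; the point count and frequency grid are my choice):

```python
import numpy as np

# Frequencies x in cycles/sample (0..0.5 is DC..Nyquist); skip x = 0,
# where formula (1) is 0/0 (its limit there is 1).
x = np.linspace(0.001, 0.499, 512)

# Formula (1), the amplitude response claimed for a half-sample linear
# interpolation:
formula = np.abs(np.sin(2 * np.pi * x) / (2 * np.sin(np.pi * x)))

# Magnitude response of the FIR filter a0 = 0.5, a1 = 0.5, evaluated
# directly on the unit circle (what freqz() measures):
w = 2 * np.pi * x
measured = np.abs(0.5 + 0.5 * np.exp(-1j * w))

print(np.max(np.abs(formula - measured)))  # on the order of 1e-16
```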

Frequency response of the same filter in decibel scale:
http://morpheus.spectralhead.com/img/halfsample_delay_response2.png

(this graph's frequency axis is normalized to 0..1, i.e. fractions of
pi rad/sample, not to 0..0.5 cycles/sample)

The pole-zero plot was shown earlier, having a zero at z=-1, meaning
-Inf gain at Nyquist.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 But why would you constrain yourself to use first-order linear
 interpolation?

Because it's computationally very cheap?

 The oversampler itself is going to be a much higher order
 linear interpolator. So it seems strange to pour resources into that

Linear interpolation needs very little computation, compared to most
other types of interpolation. So I do not consider the idea of using
linear interpolation for higher stages of oversampling strange at all.
The higher the oversampling ratio, the more favorable linear
interpolation becomes for the higher stages.

 So heavy oversampling seems strange, unless there's some hard
 constraint forcing you to use a first-order interpolator.

The hard constraint is CPU usage, which is higher in all other types
of interpolators.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
i would say way more than 2x if you're using linear in between.  if memory
is cheap, i might oversample by perhaps as much as 512x and then use
linear to get in between the subsamples (this will get you 120 dB S/N).

But why would you constrain yourself to use first-order linear
interpolation? The oversampler itself is going to be a much higher order
linear interpolator. So it seems strange to pour resources into that, just
so you can avoid putting them into the final fractional interpolator. Is
the justification that the oversampler is a fixed interpolator, whereas the
final stage is variable (so we don't want to muck around with anything too
complex there)? I've seen it claimed (by Julius Smith IIRC) that
oversampling by as little as 10% cuts the interpolation filter requirements
by over 50%. So heavy oversampling seems strange, unless there's some hard
constraint forcing you to use a first-order interpolator.

quite familiar with it.

Yeah that was more for the list in general, to keep this discussion
(semi-)grounded.

E

On Wed, Aug 19, 2015 at 9:15 AM, robert bristow-johnson 
r...@audioimagination.com wrote:

 On 8/18/15 11:46 PM, Ethan Duni wrote:

  for linear interpolation, if you are a delayed by 3.5 samples and you
 keep that delay constant, the transfer function is
 
H(z)  =  (1/2)*(1 + z^-1)*z^-3
 
 that filter goes to -inf dB as omega gets closer to pi.

 Note that this holds for symmetric fractional delay filter of any odd
 order (i.e., Lagrange interpolation filter, windowed sinc, etc). It's not
 an artifact of the simple linear approach,


 at precisely Nyquist, you're right.  as you approach Nyquist, linear
 interpolation is worser than cubic Hermite but better than cubic B-spline
 (better in terms of less roll-off, worser in terms of killing images).

 it's a feature of the symmetric, finite nature of the fractional
 interpolator. Since there are good reasons for the symmetry constraint, we
 are left to trade off oversampling and filter order/design to get the final
 passband as flat as we need.

 My view is that if you are serious about maintaining fidelity across the
 full bandwidth, you need to oversample by at least 2x.


 i would say way more than 2x if you're using linear in between.  if memory
 is cheap, i might oversample by perhaps as much as 512x and then use linear
 to get in between the subsamples (this will get you 120 dB S/N).

 That way you can fit the transition band of your interpolation filter
 above the signal band. In applications where you are less concerned about
 full bandwidth fidelity, oversampling isn't required. Some argue that 48kHz
 sample rate is already effectively oversampled for lots of natural
 recordings, for example. If it's already at 96kHz or higher I would not
 bother oversampling further.


 i might **if** i want to resample by an arbitrary ratio and i am doing
 linear interpolation between the new over-sampled samples.

 remember, when we oversample for the purpose of resampling, if the
 prototype LPF is FIR (you know, the polyphase thingie), then you need not
 calculate all of the new over-sampled samples.  only the two you need to
 linear interpolate between.  so oversampling by a large factor only costs
 more in terms of memory for the coefficient storage.  not in computational
 effort.
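The bookkeeping behind rbj's point can be sketched in numpy: with a
polyphase FIR oversampler, only the two oversampled points bracketing the
desired fractional position are ever computed, and linear interpolation goes
between them. The filter design, sizes, and indexing conventions below are
this sketch's assumptions, not anyone's production code:

```python
import numpy as np

L = 8                      # oversampling factor (512 in rbj's example)
K = 8                      # prototype taps per polyphase branch
i = np.arange(K * L)
# Hamming-windowed sinc prototype lowpass, cutoff pi/L:
h = np.sinc((i - (K * L - 1) / 2) / L) * np.hamming(K * L)
H = h.reshape(K, L)        # H[:, p] is polyphase branch p
H /= H.sum(axis=0)         # unity DC gain per branch

def frac_read(x, n, frac):
    """Value of x at time n + frac (in input samples), computing only
    two polyphase branch outputs instead of all L oversampled points."""
    pos = frac * L
    p = int(pos)                           # lower bracketing phase
    mu = pos - p                           # linear-interp weight
    seg = x[n - K + 1 : n + 1][::-1]       # newest input sample first
    y0 = H[:, p] @ seg
    if p + 1 < L:
        y1 = H[:, p + 1] @ seg
    else:                                  # upper point is phase 0 of n+1
        y1 = H[:, 0] @ x[n - K + 2 : n + 2][::-1]
    return (1 - mu) * y0 + mu * y1

# usage: read a slow sine 0.37 samples past n = 100; the prototype has
# (K*L - 1)/(2*L) = 3.9375 input samples of group delay, so the result
# approximates sin(0.05 * (100 + 0.37 - 3.9375))
x = np.sin(0.05 * np.arange(200))
print(frac_read(x, 100, 0.37))
```

Per output sample this costs two K-tap dot products plus one linear blend,
regardless of L; growing L only grows the coefficient table, which is
exactly the memory-for-computation trade described above.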

 Also this is recommended reading for this thread:

 https://ccrma.stanford.edu/~jos/Interpolation/ 
 https://ccrma.stanford.edu/%7Ejos/Interpolation/


 quite familiar with it.

 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Theo Verelst
Sometimes I feel the personal integrity about these undergrad-level
scientific quests is nowhere to be found with some people, and that's a
shame.


Working on a decent subject like these mathematical approximations in
digital signal processing should be accompanied by at least some
self-respect in the treatment of the subjects one involves oneself in,
apart from chatter and stories and so on. Otherwise people might feel
hurt, as if they were contributing only to feed da Man or something of
that nature, and that's not cool in my opinion.


T.V.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread robert bristow-johnson

On 8/19/15 1:43 PM, Peter S wrote:

On 19/08/2015, Ethan Duniethan.d...@gmail.com  wrote:

But why would you constrain yourself to use first-order linear
interpolation?

Because it's computationally very cheap?


and it doesn't require a table of coefficients, like doing higher-order 
Lagrange or Hermite would.



The oversampler itself is going to be a much higher order
linear interpolator. So it seems strange to pour resources into that

Linear interpolation needs very little computation, compared to most
other types of interpolation. So I do not consider the idea of using
linear interpolation for higher stages of oversampling strange at all.
The higher the oversampling, the more optimal it is to use linear in
the higher stages.



here, again, is where Peter and i are on the same page.


So heavy oversampling seems strange, unless there's some hard
constraint forcing you to use a first-order interpolator.

The hard constraint is CPU usage, which is higher in all other types
of interpolators.



for plugins or embedded systems with a CPU-like core, computation burden 
is more of a cost issue than memory used.  but there are other embedded 
DSP situations where we are counting every word used.  8 years ago, i 
was working with a chip that offered for each processing block 8 
instructions (there were multiple moves, 1 multiply, and 1 addition that 
could be done in a single instruction), 1 state (or 2 states, if you 
count the output as a state) and 4 scratch registers.  that's all i 
had.  ain't no table of coefficients to look up.  in that case memory is 
way more important than wasting a few instructions recomputing numbers 
that you might otherwise just look up.





--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Well, you can compute those at runtime if you want - and you don't need a
terribly high order Lagrange interpolator if you're already oversampled, so
it's not necessarily a problematic overhead.

Meanwhile, the oversampler itself needs a table of coefficients. Assuming
we're talking about FIR interpolation, to avoid phase distortion. But
that's a single fixed table for supporting a single oversampling ratio, so
I can see how it would add up to a memory savings compared to a bank of
tables for different fractional interpolation points, if you're looking for
really fine/arbitrary granularity. If we're talking about a fixed
fractional delay, I'm not really seeing the advantage.

Obviously it will depend on the details of the application, it just seems
kind of unbalanced on its face to use heavy oversampling and then the
lightest possible fractional interpolator. It's not clear to me that a
moderate oversampling combined with a fractional interpolator of modestly
high order wouldn't be a better use of resources.

So it doesn't make a lot of sense to me to point to the low resource costs
of the first-order linear interpolator, when you're already devoting
resources to heavy oversampling in order to use it. They need to be
considered together and balanced, no? Your point about computing only the
subset of oversamples needed to drive the final fractional interpolator is
well-taken, but I think I need to see a more detailed accounting of that to
be convinced.

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
3.2 Multistage
3.2.1 Can I interpolate in multiple stages?

Yes, so long as the interpolation ratio, L, is not a prime number.
For example, to interpolate by a factor of 15, you could interpolate
by 3 then interpolate by 5. The more factors L has, the more choices
you have. For example you could interpolate by 16 in:

- one stage: 16
- two stages: 4 and 4
- three stages: 2, 2, and 4
- four stages: 2, 2, 2, and 2

3.2.2 Cool. But why bother with all that?

Just as with decimation, the computational and memory requirements
of interpolation filtering can often be reduced by using multiple
stages.

3.2.3 OK, so how do I figure out the optimum number of stages, and the
interpolation ratio at each stage?

There isn't a simple answer to this one: the answer varies depending
on many things. However, here are a couple of rules of thumb:

- Using two or three stages is usually optimal or near-optimal.
- Interpolate in order of the smallest to largest factors. For
example, when interpolating by a factor of 60 in three stages,
interpolate by 3, then by 4, then by 5. (Use the largest ratio on the
highest rate.)

http://dspguru.com/dsp/faqs/multirate/interpolation
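A rough illustration of why multistage interpolation saves computation,
using the harris rule of thumb N ~= A / (22 * df) for FIR length (A =
stopband attenuation in dB, df = transition width normalized to the
filter's sample rate). The band edges and attenuation are made-up example
numbers, not from the FAQ:

```python
A = 60.0          # dB stopband attenuation (example figure)
band = 0.4        # signal band edge, in units of the input sample rate

def stage_cost(rate_in, L):
    """Multiplies per input-rate sample for one polyphase stage that
    upsamples from rate_in (in input-rate units) by factor L."""
    # after zero-stuffing, the first image to stop starts at rate_in - band
    df = (rate_in - 2 * band) / (rate_in * L)   # normalized to output rate
    N = A / (22 * df)                           # harris length estimate
    # polyphase: N/L mults per output, rate_in*L outputs per unit time
    return N * rate_in

single = stage_cost(1, 16)                           # 16x in one stage
multi = stage_cost(1, 2) + stage_cost(2, 2) + stage_cost(4, 4)  # 2,2,4
print(round(single), round(multi))
```

Under these example numbers the three-stage 2,2,4 cascade needs less than
half the multiply rate of the single 16x stage, because the early stages
enjoy a very wide transition band.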


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
Nope. Ever heard of multistage interpolation?

I'm well aware that multistage interpolation gives cost savings relative to
single-stage interpolation, generally. That is beside the point: the costs
of interpolation all still scale with oversampling ratio and quality
requirements, just like in single-stage interpolation. There's no magic to
multi-stage interpolation that avoids that relationship.

that's just plain wrong and stupid, and that's what all advanced multirate
books
will also tell you.

You've been told repeatedly that this kind of abusive, condescending
behavior is not welcome here, and you need to cut it out immediately.

Tell me, you don't have an extra half kilobyte of memory in a typical
computer?

There are lots of dsp applications that don't run on personal computers,
but rather on very lightweight embedded targets. Memory tends to be at a
premium on those platforms.

E










On Wed, Aug 19, 2015 at 3:55 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 
  I don't dispute that linear fractional interpolation is the right choice
 if
  you're going to oversample by a large ratio. The question is what is the
  right balance overall, when considering the combined costs of
  the oversampler and the fractional interpolator.

 It's hard to tell in general. It depends on various factors, including:

 - your desired/available CPU usage
 - your desired/available memory usage and cache size
 - the available instruction set of your CPU
 - your desired antialias filter steepness
 - your desired stopband attenuation

 ...and possibly other factors. Since these may vary largely, I think
 it is impossible to tell in general. What I read in multirate
 literature, and what is also my own experience, is that - when using a
 relatively large oversampling ratio - then it's more cost-effective to
 use linear interpolation at the higher stages (and that's Olli's
 conclusion as well).

  You can leverage any finite interpolator to skip computations in an FIR
  oversampler, not just linear. You get the most skipping in the case of
  high oversampling ratio and linear interpolation, but the same trick
 still
  works any time your oversampling ratio is greater than your interpolator
  order.

 But to a varying degree. A FIR interpolator is still heavy if you
 skip samples where the coefficient is zero, compared to linear
 interpolation (but it is also higher quality).

  The flipside is that the higher the oversampling ratio, the longer the
 FIR
  oversampling filter needs to be in the first place.

 Nope. Ever heard of multistage interpolation? You may do a small FIR
 stage (say, 2x or 4x), and then a linear stage (or another,
 low-complexity FIR stage according to your desired specifications, or
 even further stages). Seems you still don't understand that you can
 oversample in multiple stages, and use a linear interpolator for the
 higher stages of oversampling... which is almost always better than
 using a single costly FIR filter to do the interpolation. You don't
 need to use a 512x FIR at 100 dB stopband attenuation; that's just
 plain wrong and stupid, and that's what all advanced multirate books
 will also tell you.

 Same for IIR case.

 Since memory is usually not an issue,
 
  There are lots of dsp applications where memory is very much the main
  constraint.

 Tell me, you don't have an extra half kilobyte of memory in a typical
 computer? I hear, those have 8-32 GB of RAM nowadays, and CPU cache
 sizes are like 32-128 KiB.

  The performance of your oversampler will be garbage if you do that. And
 so
  there will be no point in worrying about the quality of fractional
  interpolation after that point, since the signal you'll be interpolating
  will be full of aliasing to begin with.

 Exactly. But it won't be heavy! So it's not the oversampling that
 makes the process heavy, but rather the interpolation / anti-aliasing
 filter!

  And that means it needs lots of resources, especially as the oversampling
  ratio gets large. It's the required quality that drives the oversampler
  costs (and filter design choices).

 Which is exactly what I said. If your specification is low, you can
 have a 128x oversampler that is (relatively) low-cost. It's not the
 oversampling ratio that matters most.

  If you are willing to accept low quality in order to save on CPU (or
 maybe
  there's nothing in the upper frequencies that you're worried about), then
  there's no point in resampling at all. Just use a low order fractional
  interpolator directly on the signal.

 Seems you still miss the whole point of multistage interpolation. I
 recommend you read some books / papers on multirate processing.

 It should also be noted that the linear interpolation can be used for
 the upsampling itself as well, reducing the cost of your oversampling,
 
  Again, that would add up to a very low quality upsampler.

 You're wrong. Read Olli Niemitalo's paper again 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:

 Obviously it will depend on the details of the application, it just seems
 kind of unbalanced on its face to use heavy oversampling and then the
 lightest possible fractional interpolator.

It should also be noted that the linear interpolation can be used for
the upsampling itself as well, reducing the cost of your oversampling,
not just as your fractional delay. A potential method to do fractional
delay is to upsample by a large factor, then delay by an integer
number of samples, and then downsample, without the use of an actual
fractional delay.

Say, if the fraction is 0.37, then you may upsample by 512x, then delay
the upsampled signal by round(512*0.37) = 189 samples, then downsample
back. So you did a fractional delay without using actual fractional
interpolation for the delay - you delayed by an integer number of
samples. You'll also have a little error - your delay is 0.369140625
instead of the desired 0.37, since it's quantized to 512 steps, so the
error is -0.000859375. I'm not saying this is ideal, I'm just saying
this is one possible way of doing a fractional delay.
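The arithmetic above is easy to verify, and the worst-case quantization
error of this scheme is 1/(2M) of a sample:

```python
M = 512                      # oversampling factor
eta = 0.37                   # desired fractional delay, in samples
L = round(M * eta)           # nearest integer delay at the high rate
print(L, L / M, L / M - eta) # 189, 0.369140625, -0.000859375
print(1 / (2 * M))           # worst-case delay error: 0.0009765625
```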

This is discussed by JOS[1]:

In discrete time processing, the operation Eq.(4.5) can be
approximated arbitrarily closely by digital upsampling by a large
integer factor M, delaying by L samples (an integer), then finally
downsampling by M, as depicted in Fig.4.7 [96]. The integers L and M
are chosen so that eta ~= L/M, where eta is the desired fractional
delay.

[1] Julius O. Smith, Physical Audio Signal Processing
https://ccrma.stanford.edu/~jos/pasp/Convolution_Interpretation.html

Ref. [96] is:
R. Crochiere, L. Rabiner, and R. Shively, ``A novel implementation
of digital phase shifters,'' Bell System Technical Journal, vol. 54,
pp. 1497-1502, Oct. 1975.

Abstract:
A novel technique is presented for implementing a variable digital
phase shifter which is capable of realizing noninteger delays. The
theory behind the technique is based on the idea of first
interpolating the signal to a high sampling rate, then using an
integer delay, and finally decimating the signal back to the original
sampling rate. Efficient methods for performing these processes are
discussed in this paper. In particular, it is shown that the digital
phase shifter can be implemented by means of a simple convolution at
the sampling rate of the original signal.

In short, there are a zillion ways of implementing both oversampling
and fractional delays, and they can be combined arbitrarily.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
 Ugh, I suppose this is what I get for attempting to engage with Peter S
 again. Not sure what I was thinking...

Well, you asked, why use linear interpolation at all? We told you
the advantages - fast computation, no coefficient table needed, and
(nearly) optimal for high oversampling ratios, and you were given some
literature.

If you don't believe it - well, not my problem... it's still true. #notmyloss


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Peter S peter.schoffhau...@gmail.com wrote:
 Another way to show that half-sample delay has -Inf gain at Nyquist:
 see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
 will have a zero at z=-1. A zero on the unit circle means -Inf gain,
 and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
 -Inf gain at Nyquist frequency.

It looks like this:
http://morpheus.spectralhead.com/img/halfsample_delay_zplane.png

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
Another way to show that half-sample delay has -Inf gain at Nyquist:
see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
will have a zero at z=-1. A zero on the unit circle means -Inf gain,
and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
-Inf gain at Nyquist frequency.

It would be ill-advised to dismiss Nyquist frequency because it may
alias to DC signal when sampling. The zero on the unit circle is at
Nyquist (z=-1), not at DC (z=1).
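This can be checked by evaluating the filter's transfer function directly
at the two points in question (a two-line sketch):

```python
# H(z) = 0.5 + 0.5*z^-1, the half-sample linear-interpolation filter
H = lambda z: 0.5 + 0.5 / z
print(abs(H(-1)))   # 0.0 -> zero at z = -1, i.e. -Inf dB at Nyquist
print(abs(H(1)))    # 1.0 -> unity gain at DC (z = +1)
```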

Frequency response graphs of linear interpolation, according to JOS:
https://ccrma.stanford.edu/~jos/Interpolation/Frequency_Responses_Linear_Interpolation.html


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
rbj
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the table space for the
fractional interpolator?

I wonder if the salient design concern here is less about balancing
resources, and more about isolating and simplifying the portions of the
system needed to support arbitrary (as opposed to just very-high-but-fixed)
precision. I like the modularity of the high oversampling/linear interp
approach, since that it supports arbitrary precision with a minimum of
fussy variable components or arcane coefficient calculations. It's got a
lot going for it in software engineering terms. But I'm on the fence about
whether it's the tightest use of resources (for whatever constraints).
Typically those are the arcane ones that take a ton of debugging and
optimization :P

E


