On 8/19/15 1:43 PM, Peter S wrote:
> On 19/08/2015, Ethan Duni <ethan.d...@gmail.com> wrote:
>> But why would you constrain yourself to use first-order linear
>> interpolation?
>
> Because it's computationally very cheap?

and it doesn't require a table of coefficients, like doing higher-order Lagrange or Hermite would.
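
to put a number on "cheap": the first-order case needs no stored coefficients at all, because the two weights are just frac and (1 - frac), computed on the fly from the fractional position. a minimal sketch (the function name and types are mine, just for illustration):

    /* first-order (linear) interpolation at fractional position frac,
       0 <= frac < 1, between adjacent samples x0 and x1.  the two
       "coefficients" are just (1-frac) and frac, derived on the fly
       from the phase -- no stored table, one multiply, two adds.   */
    static inline float lerp(float x0, float x1, float frac)
    {
        return x0 + frac * (x1 - x0);
    }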

>> The oversampler itself is going to be a much higher order
>> linear interpolator. So it seems strange to pour resources into that
>
> Linear interpolation needs very little computation, compared to most
> other types of interpolation. So I do not consider the idea of using
> linear interpolation for higher stages of oversampling strange at all.
> The higher the oversampling, the more optimal it is to use linear in
> the higher stages.


here, again, is where Peter and i are on the same page.
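
the structure being argued for looks roughly like this. just a sketch under assumed numbers -- the ratio R, the tap count, and the names are made up for illustration, and the FIR coefficients would come from whatever filter design you like:

    /* sketch of the two-stage idea: oversample by R with a real FIR,
       then let cheap first-order interpolation handle the leftover
       fractional position at the higher rate.  the linear stage's
       error shrinks roughly as 1/R^2, so the bigger R is, the better
       it does as the last stage.  R, NTAPS, and h[][] are assumed.  */

    #define R      8      /* assumed oversampling ratio              */
    #define NTAPS  16     /* assumed FIR length per polyphase branch */

    /* stage 1: one polyphase branch p of the FIR oversampler -- a dot
       product of that branch's coefficients with the NTAPS most
       recent input samples (x points at the oldest of them).        */
    static float fir_branch(const float h[R][NTAPS], int p, const float *x)
    {
        float acc = 0.0f;
        for (int k = 0; k < NTAPS; k++)
            acc += h[p][k] * x[k];
        return acc;
    }

    /* stage 2: first-order interpolation between two adjacent
       oversampled points, at fractional position frac.              */
    static float lerp2(float y0, float y1, float frac)
    {
        return y0 + frac * (y1 - y0);
    }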

>> So heavy oversampling seems strange, unless there's some hard
>> constraint forcing you to use a first-order interpolator.
>
> The hard constraint is CPU usage, which is higher in all other types
> of interpolators.


for plugins or embedded systems with a CPU-like core, computation burden is more of a cost issue than memory used. but there are other embedded DSP situations where we are counting every word used. 8 years ago, i was working with a chip that offered for each processing block 8 instructions (there were multiple moves, 1 multiply, and 1 addition that could be done in a single instruction), 1 state (or 2 states, if you count the output as a state) and 4 scratch registers. that's all i had. ain't no table of coefficients to look up. in that case memory is way more important than wasting a few instructions recomputing numbers that you might otherwise just look up.
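
for scale, here is the usual 4-point, 3rd-order Hermite (Catmull-Rom flavor), with the cubic's coefficients computed on the fly from the neighboring samples rather than looked up. it's only a sketch, but it shows the cost being traded against: roughly 9 multiplies and 10 adds per output sample, versus 1 multiply and 2 adds for linear.

    /* 4-point, 3rd-order Hermite (Catmull-Rom) interpolation, with the
       cubic's coefficients computed on the fly from the neighboring
       samples xm1, x0, x1, x2 -- no coefficient table, but roughly
       9 multiplies and 10 adds versus linear's 1 multiply and 2 adds. */
    static inline float hermite4(float xm1, float x0, float x1, float x2,
                                 float frac)
    {
        float c0 = x0;
        float c1 = 0.5f * (x1 - xm1);
        float c2 = xm1 - 2.5f * x0 + 2.0f * x1 - 0.5f * x2;
        float c3 = 0.5f * (x2 - xm1) + 1.5f * (x0 - x1);
        return ((c3 * frac + c2) * frac + c1) * frac + c0;
    }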




--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



