Interesting story about the interpolation noise from very
highly oversampled signal approximations. I tend to think if it doesn't
concern an actual sinc function of significant width and accuracy then
the up-sampling is wrong unless the signal is prepared for it.
I can imagine in sample processing
On 8/26/15 9:47 PM, Ethan Duni wrote:
15.6 dB + (12.04 dB) * log2( Fs/(2B) )
Oh I see, you're actually taking the details of the sinc^2 into account.
really, just the fact that the sinc^2 has nice deep zeros at every
integer multiple of Fs (except 0).
What I had in mind was more of a
On 8/25/15 7:08 PM, Ethan Duni wrote:
if you can, with optimal coefficients designed with the tool of your
choice, so i am ignoring any images between B and Nyquist-B, upsample
by 512x and then do linear interpolation between adjacent samples for
continuous-time interpolation, you can show
15.6 dB + (12.04 dB) * log2( Fs/(2B) )
Oh I see, you're actually taking the details of the sinc^2 into account.
What I had in mind was more of a worst-case analysis where we just call the
sin() component 1 and then look at the 1/n^2 decay (which is 12dB per
octave). Which we see in the second
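The 15.6 dB + 12.04 dB/octave figure quoted above is easy to sanity-check numerically; a minimal sketch (plain Python; the function name is mine):

```python
import math

def linear_interp_snr_db(oversampling_ratio):
    # Approximate S/N of linear interpolation between samples of a signal
    # oversampled by Fs/(2B), per the rule quoted above:
    # 15.6 dB plus 12.04 dB per octave of oversampling.
    return 15.6 + 12.04 * math.log2(oversampling_ratio)

print(round(linear_interp_snr_db(512), 1))   # -> 124.0
print(round(linear_interp_snr_db(2), 1))     # -> 27.6
```

The 512x case lands at roughly 124 dB, consistent with the "120 dB S/N" figure mentioned elsewhere in the thread.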
On 2015-08-19, Ethan Duni wrote:
and it doesn't require a table of coefficients, like doing
higher-order Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling?
In
On 8/24/15 11:18 AM, Sampo Syreeni wrote:
On 2015-08-19, Ethan Duni wrote:
and it doesn't require a table of coefficients, like doing
higher-order Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype
On 22/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
version thereof? My point was that there are no effects of resampling
visible in the graphs.
And you're wrong - all those 88 alias images are effects of
So you claim that the graph depicts a sinc^2 graph, and it shows the
frequency response of a continuous time linearly interpolated signal,
and involves no resampling.
That is false. That is not how Olli created his graph. First, the
continuous time signal (which, by the way, already contains an
So let me get this straight - you have an *imaginary* graph in your
head, depicting the frequency response of a continuous time linearly
interpolated signal, and you keep arguing about this *imaginary* graph
(maybe to feed your fragile ego and to prove that you won).
That is *not* what you see on
And besides, no one ever said that Olli's graph depicts analytical
frequency responses of continuous time interpolators. The graphs come
from a musicdsp.org code entry:
http://musicdsp.org/archive.php?classid=5#49
There's no comment whatsoever, just the code and the graphs.
If you read his 65
On 2015-08-18, Tom Duffy wrote:
In order to reconstruct that sinusoid, you'll need a filter with an
infinitely steep transition band. You've demonstrated that SR/2
aliases to 0Hz, i.e. DC. That digital stream of samples is not
reconstructable.
The conjugate sine to +1, -1, +1, -1, ... is 0,
On 22/08/2015, Sampo Syreeni de...@iki.fi wrote:
The conjugate sine to +1, -1, +1, -1, ... is 0, 0, 0, 0... Just phase
shift the original sine at the Nyquist frequency.
Let me ask: what do you mean by conjugate sine?
If you mean complex conjugate, and assume the sine to be the real
part
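Sampo's claim is easy to check numerically: at the Nyquist frequency (period of exactly 2 samples), a 90-degree phase shift turns the +1, -1, +1, -1 sequence into all zeros. A small sketch (plain Python; the variable names are mine):

```python
import math

n_vals = range(8)                                   # sample indices, period T = 1
square = [math.cos(math.pi * n) for n in n_vals]    # the +1, -1, +1, -1, ... sequence
shifted = [math.sin(math.pi * n) for n in n_vals]   # 90 degrees later: all zeros

print([round(x) for x in square])    # -> [1, -1, 1, -1, 1, -1, 1, -1]
print(max(abs(x) for x in shifted))  # numerically ~0 (below 1e-14)
```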
Okay, I'll risk exceeding my daily message limit. If the
administrators think it is inappropriate, dealing with that is at
their discretion.
Here is another proof that the alias images in the spectrum are caused
by the sampling/upsampling, not the interpolation:
Let's replace linear
Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
*exactly* upsampling
That is not what is shown in that graph. The graph simply shows the
continuous-time frequency response of the interpolation polynomials,
graphed up to 22kHz. No resampling is depicted, or the frequency
Also, you even contradict yourself. You claim that:
1) Olli's graph was created by graphing sinc(x), sinc^2(x), and not via FFT.
2) The artifacts from the resampling would be barely visible, because
the oversampling rate is quite high.
So, if - according to 2) - the artifacts are not visible
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
So you agree that the effects of resampling are not shown, and all we see
is the spectrum of the continuous time polynomial interpolators.
I claim that they are aliases of the original spectrum.
Just as you also call them:
It shows the
A sampled signal contains an infinite number of aliases:
http://morpheus.spectralhead.com/img/sampling_aliases.png
the spectrum is replicated infinitely often in both directions
These are called aliases of the spectrum. You do not need to fold
back the aliasing via resampling for them to become
Since that image is not meant to illustrate the effects of
resampling, but rather, to illustrate the effects of interpolation,
*obviously* it doesn't focus on the aliasing from the resampling.
So you agree that the effects of resampling are not shown, and all we see
is the spectrum of the
Let's repeat the same with a 50 Hz sine wave, sampled at 500 Hz, then
linearly interpolated and resampled at 44.1 kHz:
http://morpheus.spectralhead.com/img/sine_aliasing.png
The resulting alias frequencies are at: 450 Hz, 550 Hz, 950 Hz, 1050
Hz, 1450 Hz, 1550 Hz, 1950 Hz, 2050 Hz, 2450 Hz, 2550
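Those image frequencies can be enumerated directly: they sit at k*500 Hz +/- 50 Hz for each integer k, up to the new Nyquist frequency of 22050 Hz. A sketch (plain Python; names are mine):

```python
f0, fs_old, new_nyquist = 50.0, 500.0, 22050.0

# image frequencies of a 50 Hz tone sampled at 500 Hz, folded into the
# band visible after resampling to 44.1 kHz
aliases = sorted(
    f
    for k in range(1, 50)
    for f in (k * fs_old - f0, k * fs_old + f0)
    if f0 < f <= new_nyquist
)
print(aliases[:10])
# -> [450.0, 550.0, 950.0, 1050.0, 1450.0, 1550.0, 1950.0, 2050.0, 2450.0, 2550.0]
```

This reproduces exactly the list of alias frequencies quoted above.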
It shows *exactly* the aliasing
It shows the aliasing left by linear interpolation into the continuous time
domain. It doesn't show the additional aliasing produced by then delaying
and sampling that signal. I.e., the images that would get folded back onto
the new baseband, disturbing the
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
It shows *exactly* the aliasing
It shows the aliasing left by linear interpolation into the continuous time
domain. It doesn't show the additional aliasing produced by then delaying
and sampling that signal. I.e., the images that would
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
*exactly* upsampling
That is not what is shown in that graph. The graph simply shows the
continuous-time frequency response of the interpolation polynomials,
graphed up to
The details of how the graphs were generated don't really matter. The point
is that the only effect shown is the spectrum of the continuous-time
polynomial interpolator. The additional spectral effects of delaying and
resampling that continuous-time signal (to get fractional delay, for
example)
Which contains alias images of the original spectrum, which was my point.
There is no original spectrum pictured in that graph. Only the responses
of the interpolators. There is no reference to any input signal at all.
No one claimed there was fractional delay involved.
Fractional delay is a
1) Olli Niemitalo's graph *is* equivalent to the spectrum of
upsampled white noise.
We've been over this repeatedly, including in the very post you are
responding to. The fact that there are many ways to produce a graph of the
interpolation spectrum is not in dispute, nor is it germane to my
Naturally, there's going to be some jaggedness in the spectrum because
of the noise. So, obviously, that is not sinc^2 then.
So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
version thereof? My point was that there are no effects of resampling
visible in the graphs.
Since you constantly derail this topic with irrelevant talk, let me
instead prove that
1) Olli Niemitalo's graph *is* equivalent to the spectrum of
upsampled white noise.
2) Olli Niemitalo's graph does *not* depict sinc(x)/sinc^2(x).
First I'll prove 1).
Using palette modification, I extracted
On 22/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
We've been over this repeatedly, including in the very post you are
responding to. The fact that there are many ways to produce a graph of the
interpolation spectrum is not in dispute, nor is it germane to my point.
Earlier you disputed
Upsampling means, that the sampling rate increases. So if you have a
250 Hz signal, and create a 22000 Hz signal from it, that is - by
definition - upsampling.
That's *exactly* what upsampling means... You insert new samples
between the original ones, and interpolate between them (using
whatever
Hi,
A suggestion for those working on practical implementations, to lighten
up the tone of the discussion: some people I know worked on all
kinds of (semi-)pro implementations when I wasn't even into more than
basic DSP yet.
The tradeoffs about engineering and implementing on a
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the table space for the
fractional interpolator?
For the record, wasn't it you who said
Let's analyze your suggestion of using a FIR filter at f = 0.5/512 =
0.0009765625 for an interpolation filter for 512x oversampling.
Here's the frequency response of a FIR filter of length 1000:
http://morpheus.spectralhead.com/img/fir512_1000.png
Closeup of the frequency range between 0-0.01
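For readers who want to reproduce this kind of plot, here is a minimal windowed-sinc sketch of such a prototype filter (numpy; the Hamming window and the DC normalization are my assumptions, not from the post):

```python
import numpy as np

N = 1000            # filter length from the post
fc = 0.5 / 512      # cutoff in cycles/sample, as in the post

n = np.arange(N) - (N - 1) / 2.0
h = 2 * fc * np.sinc(2 * fc * n)       # ideal lowpass impulse response
h *= np.hamming(N)                     # assumed window choice
h /= h.sum()                           # normalize to unity gain at DC

H = np.abs(np.fft.rfft(h, 1 << 16))    # response on a fine frequency grid
print(abs(h.sum() - 1.0) < 1e-12)      # -> True
```

Note that with only 1000 taps at a cutoff near 0.001 cycles/sample the transition band is necessarily very wide, which appears to be the point the closeup graph is making.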
In this graph, the signal frequency seems to be 250 Hz, so this graph
shows the equivalent of about 22000/250 = 88x oversampling.
That graph just shows the frequency responses of various interpolation
polynomials. It's not related to oversampling.
E
On Thu, Aug 20, 2015 at 5:40 PM, Peter S
In the starting post, it was not specified that resampling was also
used - the question was:
Is it possible to use a filter to compensate for high frequency
signal loss due to interpolation? For example linear or hermite
interpolation.
Without specifying that variable rate playback is involved,
Here's a graph of performance in mflops of varying length FFT
transforms from the fftw.org benchmark page, for Intel Pentium 4:
http://morpheus.spectralhead.com/img/fftw_benchmark_pentium4.png
Afaik Pentium 4 has 16 KB of L1 data cache. If you check the graph,
around 8-16k the performance starts
If all you're trying to do is mitigate the rolloff of linear interp
That's one concern, and by itself it implies that you need to oversample by
at least some margin to avoid having a zero at the top of your audio band
(along with a transition band below that).
But the larger concern is the
Let me just add, that in case of having a non-oversampled linearly
interpolated fractional delay line with exactly 0.5 sample delay (most
high-frequency roll-off position), the frequency response formula is
not sinc^2, but rather, sin(2*PI*f)/(2*sin(PI*f)), as I discussed
earlier.
In that case,
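That formula can be cross-checked against the magnitude response of the two-tap average H(z) = 0.5 + 0.5*z^-1 (linear interpolation at exactly 0.5 sample delay); both reduce to cos(pi*f). A quick check (plain Python; function names are mine):

```python
import cmath, math

def two_tap_gain(f):
    # |H(e^{j*2*pi*f})| for H(z) = 0.5 + 0.5*z^-1, f in cycles/sample
    return abs(0.5 + 0.5 * cmath.exp(-2j * math.pi * f))

def quoted_formula(f):
    # the sin(2*PI*f)/(2*sin(PI*f)) expression from the post above
    return math.sin(2 * math.pi * f) / (2 * math.sin(math.pi * f))

for f in (0.1, 0.25, 0.4, 0.49):
    assert abs(two_tap_gain(f) - quoted_formula(f)) < 1e-12

print(two_tap_gain(0.5) < 1e-12)   # -> True: zero gain (-inf dB) at Nyquist
```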
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
But I'm on the fence about
whether it's the tightest use of resources (for whatever constraints).
Then try and measure it yourself - you don't believe my words anyways.
-P
___
music-dsp mailing
Comparison of the two formulas from previous post: (1) in blue, sinc^2
(2) in red:
http://morpheus.spectralhead.com/img/sinc.png

(1)  sin(2*pi*x) / (2*sin(pi*x))
     (formula from Steven W. Smith, absolute value taken on graph)
(2)  sinc^2(x) = (sin(pi*x)/(pi*x))^2
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
But why would you constrain yourself to use first-order linear
interpolation?
Because it's computationally very cheap?
The oversampler itself is going to be a much higher order
linear interpolator. So it seems strange to pour resources
i would say way more than 2x if you're using linear in between. if memory
is cheap, i might oversample by perhaps as much as 512x and then use
linear to get in between the subsamples (this will get you 120 dB S/N).
But why would you constrain yourself to use first-order linear
interpolation? The
Sometimes I feel the personal integrity about these undergrad level
scientific quests is nowhere to be found with some people, and that's a
shame.
Working on a decent subject like these mathematical approximations in
the digital signal processing should be accompanied with at least some
On 8/19/15 1:43 PM, Peter S wrote:
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
But why would you constrain yourself to use first-order linear
interpolation?
Because it's computationally very cheap?
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Well, you can compute those at runtime if you want - and you don't need a
terribly high order Lagrange interpolator if you're already oversampled, so
it's not necessarily a problematic overhead.
3.2 Multistage
3.2.1 Can I interpolate in multiple stages?
Yes, so long as the interpolation ratio, L, is not a prime number.
For example, to interpolate by a factor of 15, you could interpolate
by 3 then interpolate by 5. The more factors L has, the more choices
you have. For example you
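The stage ratios are just the prime factors of L, grouped however you like; a tiny sketch (plain Python; the function name is mine):

```python
def stage_factors(L):
    # Prime factors of the total interpolation ratio L. Each factor can
    # serve as one stage's ratio; a prime L leaves only a single stage,
    # i.e. no multistage decomposition is possible.
    factors, d = [], 2
    while d * d <= L:
        while L % d == 0:
            factors.append(d)
            L //= d
        d += 1
    if L > 1:
        factors.append(L)
    return factors

print(stage_factors(15))    # -> [3, 5]
print(stage_factors(512))   # -> [2, 2, 2, 2, 2, 2, 2, 2, 2]
```

The 512x case can be grouped into stages many ways (e.g. 8 x 8 x 8 or 2 x 16 x 16), which is where the design freedom comes from.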
Nope. Ever heard of multistage interpolation?
I'm well aware that multistage interpolation gives cost savings relative to
single-stage interpolation, generally. That is beside the point: the costs
of interpolation all still scale with oversampling ratio and quality
requirements, just like in
On 19/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Obviously it will depend on the details of the application, it just seems
kind of unbalanced on its face to use heavy oversampling and then the
lightest possible fractional interpolator.
It should also be noted that the linear
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Ugh, I suppose this is what I get for attempting to engage with Peter S
again. Not sure what I was thinking...
Well, you asked, why use linear interpolation at all? We told you
the advantages - fast computation, no coefficient table needed,
On 19/08/2015, Peter S peter.schoffhau...@gmail.com wrote:
Another way to show that half-sample delay has -Inf gain at Nyquist:
see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
will have a zero at z=-1. A zero on the unit circle means -Inf gain,
and z=-1 means Nyquist
Another way to show that half-sample delay has -Inf gain at Nyquist:
see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
will have a zero at z=-1. A zero on the unit circle means -Inf gain,
and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
-Inf gain at Nyquist
rbj
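The pole-zero argument is a one-liner to verify numerically (plain Python; the function name is mine):

```python
def H(z):
    # equivalent LTI filter of a 0.5-sample linear-interpolation delay:
    # H(z) = a0 + a1*z^-1 with a0 = a1 = 0.5
    return 0.5 + 0.5 / z

print(abs(H(-1)))   # -> 0.0   zero on the unit circle at z = -1 (Nyquist)
print(abs(H(1)))    # -> 1.0   unity gain at DC
```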
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the
On 18/08/2015, Nigel Redmon earle...@earlevel.com wrote:
well, if it's linear interpolation and your fractional delay slowly sweeps
from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
in. something like -7.8 dB at Nyquist. no, that's not right. it's -inf
dB at Nyquist.
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
-1...
The sampling theorem requires that all frequencies be *below* the Nyquist
frequency. Sampling signals at exactly the Nyquist frequency is an edge
case that sort-of works in some limited special cases, but there is no
On 8/18/15 4:28 PM, Peter S wrote:
1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing.
well Peter, here again is where you overreach. assuming, without loss
of generality, that the sampling period is 1,
What's causing you to be unable to reconstruct the waveform?
There are an infinite number of different nyquist-frequency sinusoids that,
when sampled, will all give the same ...,1, -1, 1, -1, ... sequence of
samples. The sampling is a many-to-one mapping in that case, and so cannot
be inverted.
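Ethan's many-to-one point can be made concrete: every sinusoid A*sin(pi*n + phi) with A*sin(phi) = 1 samples to the same 1, -1, 1, -1, ... sequence. A sketch (plain Python; names are mine):

```python
import math

def sampled(A, phi, count=8):
    # sample A*sin(pi*n + phi), a sinusoid exactly at Nyquist (period 2)
    return [A * math.sin(math.pi * n + phi) for n in range(count)]

s1 = sampled(1.0, math.pi / 2)   # amplitude 1, 90-degree phase
s2 = sampled(2.0, math.pi / 6)   # amplitude 2, 30-degree phase

# both satisfy A*sin(phi) = 1, so both sample to 1, -1, 1, -1, ...
print(max(abs(a - b) for a, b in zip(s1, s2)) < 1e-9)   # -> True
```

Since distinct continuous-time inputs give identical samples, no reconstruction filter can tell them apart.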
On 8/18/15 4:50 PM, Nigel Redmon wrote:
I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s,
hence the “No?” in my reply to him).
The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB
at 0.5 of the sample rate...
i will try to spell out my
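The -7.8 dB figure follows directly from evaluating sinc^2 at half the sample rate; a quick check (plain Python; the function name is mine):

```python
import math

def linear_interp_gain_db(x):
    # 20*log10 of the linear-interpolation response (sin(pi*x)/(pi*x))^2,
    # where x = frequency as a fraction of the sample rate
    s = math.sin(math.pi * x) / (math.pi * x)
    return 20 * math.log10(s * s)

print(round(linear_interp_gain_db(0.5), 2))   # -> -7.84
```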
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
No, even an ideal reconstruction filter won't do it. You've got your
+Nyquist component sitting right on top of your -Nyquist component.
Okay, I get what you mean. But that doesn't change the frequency
response of a half-sample delay, or doesn't mean that a half-sample
delay doesn't have a specific gain at Nyquist.
Never said that it did. In fact, I explicitly said that this issue of
sampling of Nyquist frequency sinusoids has no
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
That digital stream of samples is not reconstructable.
On 8/18/2015 1:28 PM, Peter S wrote:
That's false. 1, -1, 1, -1, 1, -1 ... is a
On 8/18/15 5:01 PM, Emily Litella wrote:
... Never mind.
too late.
:-)
--
r b-j r...@audioimagination.com
Imagination is more important than knowledge.
You cannot calculate 1/x when x=0, can you? Since that's division by zero.
Yet you know that as x tends to zero from the right, 1/x
tends to +infinity.
Not sure what that is supposed to have to do with the present subject.
If you want to put it in terms of simple arithmetic,
On 8/18/15 3:44 PM, Ethan Duni wrote:
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1,
1, -1...
The sampling theorem requires that all frequencies be *below* the
Nyquist frequency. Sampling signals at exactly the Nyquist frequency
is an edge case that sort-of works in
*my* point is that as the delay slowly slides from an integer number of
samples, where the transfer function is
H(z) = z^-N
to the integer + 1/2 sample (with gain above), this linear but
time-variant system is going to sound like there is a LPF getting segued in.
this, for me, is enough to
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
But the example of the weird things that can happen when you try to sample
a sine wave right at the nyquist rate and then process it is orthogonal to
that point.
That's not weird, and that's *exactly* what you have in the highest
bin of an
On 18/08/2015, Tom Duffy tdu...@tascam.com wrote:
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
I can use an arbitrarily long sinc kernel to reconstruct / interpolate
it. Therefore, for any desired precision, you can find an appropriate
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
No, even an ideal reconstruction filter won't do it. You've got your
+Nyquist component sitting right on top of your -Nyquist component. Hence
the aliasing. The information has been lost in the
for linear interpolation, if you are delayed by 3.5 samples and you
keep that delay constant, the transfer function is
H(z) = (1/2)*(1 + z^-1)*z^-3
that filter goes to -inf dB as omega gets closer to pi.
Note that this holds for a symmetric fractional-delay filter of any odd order
(i.e.,
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
-1...
The sampling theorem requires that all frequencies be *below* the Nyquist
frequency. Sampling signals at exactly the Nyquist frequency is an edge
case that
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
You cannot calculate 1/x when x=0, can you? Since that's division by zero.
Yet you know that as x tends to zero from the right, 1/x
tends to +infinity.
Not sure what that is supposed to have to do with the present subject.
On Aug 17, 2015, at 9:38 AM, Esteban Maestre este...@ccrma.stanford.edu wrote:
No experience with compensation filters here.
But if you can afford to use a higher order interpolation scheme, I'd go for
that.
Using Newton's Backward Difference Formula, one can construct time-varying,
On 8/18/2015 6:41 AM, Jerry wrote:
I would think that polynomial interpolators of order 30 or 40 would
provide no end of unpleasant surprises due to the behavior of
high-order polynomials. I'm thinking of weird spikes, etc. Have you
actually used polynomial interpolators of this order?
I
On 18/08/2015, robert bristow-johnson r...@audioimagination.com wrote:
*my* point is that as the delay slowly slides from an integer number of
samples [...] to the integer + 1/2 sample (with gain above), this linear but
time-variant system is going to sound like there is a LPF getting segued
I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s,
hence the “No?” in my reply to him).
The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB
at 0.5 of the sample rate...
On Aug 18, 2015, at 1:40 AM, Peter S peter.schoffhau...@gmail.com
On 18/08/2015, robert bristow-johnson r...@audioimagination.com wrote:
On 8/18/15 4:28 PM, Peter S wrote:
1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing.
well Peter, here again is where you
OK, I looked back at Robert’s post, and see that the fact his reply was broken
up into segments (as he replied to segments of Peter’s comment) made me miss
his point. At first he was talking general (pitch shifting), but at that point
he was talking about strictly sliding into halfway between
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
That class of signals is band limited to SR/2. The aliasing is in the
amplitude/phase offset, not the frequency.
Okay, I get what you mean. But that doesn't change the frequency
response of a half-sample delay, or doesn't mean that a
And to add to what Robert said about “write code and sell it”, sometimes it’s
more comfortable to make general but helpful comments here, and stop short of
detailing the code that someone paid you a bunch of money for and might not
want to be generally known.
And before people assume that I
OK, Robert, I did consider the slow versus fast issue. But there have been few
caveats posted in this thread, so I thought it might be misleading to some to
not be specific about context. The worst case would be a precision delay of an
arbitrary constant. (For example, at 44.1 kHz SR, there
Thanks for the suggestions and discussion.
In my application I'm playing back 44.1 kHz wave files with variable pitch
envelopes. I'm currently using hermite interpolation and the quality
seems fine for playback. It's only after resampling and running through
the audio engine multiple times does
On 2015-08-17, robert bristow-johnson wrote:
As I noted in the first reply to this thread, while it’s tempting to
look at the sinc^2 rolloff of a linear interpolator, for example, and
think that compensation would be to boost the highs to undo the
rolloff, that won’t work in the general case.
On Aug 17, 2015, at 7:23 PM, robert bristow-johnson
r...@audioimagination.com wrote:
On 8/17/15 7:29 PM, Sampo Syreeni wrote:
to me, it really depends on if you're doing a slowly-varying precision
delay in which the pre-emphasis might also be slowly varying.
In slowly varying delay
No experience with compensation filters here.
But if you can afford to use a higher order interpolation scheme, I'd go
for that.
Using Newton's Backward Difference Formula, one can construct
time-varying, table-free, efficient Lagrange interpolation schemes of
arbitrary order (up to 30-th or
For a group that includes scientifically oriented people, it always
surprises me how little actual science is involved in this talk about tradeoffs.
First, what it is you want to achieve by preserving high frequencies
(which of course I'm all for)? Second, is it really only at the level of
first order
I could write a few lines on the topic as well, since I made such a
compensation filter about 17 years ago.
So, there are people that do care about that topic, but only some
that find time to write something up.
;-)
Steffan
On 17.08.2015|KW34, at 17:50, Theo Verelst
Since compensation filtering has been mentioned by a few, can I ask if someone
could get specific on an implementation (including a description of constraints
under which it operates)? I’d prefer keeping it simple by restricting to linear
interpolation, where it’s most needed, and perhaps these
Yeah I am also curious. It's not obvious to me where it would make sense to
spend resources compensating for interpolation rather than just juicing up
the interpolation scheme in the first place.
E
On Mon, Aug 17, 2015 at 11:39 AM, Nigel Redmon earle...@earlevel.com
wrote:
Since compensation
On 8/17/15 12:07 PM, STEFFAN DIEDRICHSEN wrote:
I could write a few lines on the topic as well, since I made such a
compensation filter about 17 years ago.
So, there are people that do care about that topic, but only some
that find time to write something up.
;-)
Steffan
On 8/17/15 2:39 PM, Nigel Redmon wrote:
Since compensation filtering has been mentioned by a few, can I ask if someone
could get specific on an implementation (including a description of constraints
under which it operates)? I’d prefer keeping it simple by restricting to linear
interpolation,
On 17/08/2015, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:
I could write a few lines on the topic as well, since I made such a
compensation filter about 17 years ago.
So, there are people that do care about that topic, but only some
that find time to write something up.
I
Hi,
Is it possible to use a filter to compensate for high frequency signal
loss due to interpolation? For example linear or hermite interpolation.
Are there any papers that detail what such a filter might look like?
Thanks
Shannon
On 2015-08-16, Sham Beam wrote:
Is it possible to use a filter to compensate for high frequency signal
loss due to interpolation? For example linear or hermite
interpolation.
Are there any papers that detail what such a filter might look like?
Look at Vesa Välimäki's work, and his
As far as compensation: Taking linear as an example, we know that the response
rolls off (sinc^2). Would you compensate by boosting the highs? Consider that
for a linearly interpolated delay line, a delay of an integer number of
samples, i, has no high frequency loss at all. But that the error
Hi Shannon,
If the number of reads from the delay line per sample cycle is high
enough, as a less expensive alternative to the most obvious solution
(higher order interpolation based on multiple samples before and
after, with some fancy set of coefficients calculated on the spot, or
looked up
Is this Robin Whittle of Devilfish fame? I bought a Devilfish from you back in
the mid-1990s. Best mod ever!
On Aug 16, 2015, at 8:07 PM, Robin Whittle r...@firstpr.com.au wrote:
Hi Shannon,
If the number of reads from the delay line per sample cycle is high
enough, as a less
On 8/16/15 4:09 AM, Sham Beam wrote:
Hi,
Is it possible to use a filter to compensate for high frequency signal
loss due to interpolation? For example linear or hermite interpolation.
Are there any papers that detail what such a filter might look like?
besides the well-known sinc^2