Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator

2019-02-20 Thread Ian Esten
The problem you are experiencing is caused by the fact that after you
change the filter coefficients, the filter state is no longer consistent
with the output you were producing, so the next samples jump in phase
and/or amplitude. There are several ways to solve the problem:
- The time varying bilinear transform:
http://www.aes.org/e-lib/browse.cfm?elib=18490
- Every time you modify the filter coefficients, also modify the state of
the filter so that it will produce the output you are expecting. Easy to
do; a sketch follows below.
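
For a direct-form resonator y[n] = 2*cos(w)*y[n-1] - y[n-2], the second
approach boils down to recomputing y[n-2] from the current phase whenever
w changes. A minimal Python sketch (untested, my own illustration -- it
tracks the phase explicitly just to make the state fix obvious):

import numpy as np

def resonator_sweep(freqs_hz, sr=48000.0):
    """Sinusoidal resonator whose frequency may change every sample.
    On each coefficient change, y[n-2] is recomputed so that phase and
    amplitude stay continuous across the update."""
    w = 2.0 * np.pi * freqs_hz[0] / sr
    phase = 0.0                  # phase of the most recent output y[n-1]
    y1 = np.sin(phase)           # y[n-1]
    y2 = np.sin(phase - w)       # y[n-2]
    out = np.empty(len(freqs_hz))
    for n, f in enumerate(freqs_hz):
        w_new = 2.0 * np.pi * f / sr
        if w_new != w:
            # the state fix: make y[n-2] consistent with the new
            # coefficient so the recursion continues the same sinusoid
            y2 = np.sin(phase - w_new)
            w = w_new
        y = 2.0 * np.cos(w) * y1 - y2    # y[n] = sin(phase + w)
        out[n] = y
        y2, y1 = y1, y
        phase += w
    return out

Without the state fix inside the if-branch you get exactly the jumps you
are describing.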

I will also add that some filter structures are less prone to problems
like this. I used a lattice filter structure to allow audio-rate
modulation of a biquad without any amplitude problems. I have no idea how
well it would work when using the filter as an oscillator.

Best,
Ian

On Wed, Feb 20, 2019 at 9:07 AM Dario Sanfilippo wrote:

> Hello, list.
>
> I'm currently working with digital resonators for sinusoidal oscillators
> and I was wondering if you have worked with some design which allows for
> frequency variations without discontinuities in phase or amplitude.
>
> Thank you so much for your help.
>
> Dario
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FFT for realtime synthesis?

2018-10-23 Thread Ian Esten
The Kurzweil K150 is the first product I can think of that did it.
Creating custom sounds for it required software that modeled the sound
as partial amplitudes over time. It's a very powerful technique for
synthesising certain types of sound, such as a piano, where the
frequencies of the partials do not change. The more rapidly partial
frequencies change, the more complicated it becomes to model the
resulting spectrum.

It's fun stuff.
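
To make that concrete, here is a rough Python sketch of the
partial-amplitudes-over-time model (my own toy illustration, not the
K150's actual engine -- the frequencies and envelopes below are made up):

import numpy as np

def additive_partials(freqs_hz, envs, dur_s, sr=48000):
    """Sum fixed-frequency partials whose amplitudes follow per-partial
    breakpoint envelopes spread evenly over dur_s."""
    t = np.arange(int(dur_s * sr)) / sr
    out = np.zeros_like(t)
    for f, env in zip(freqs_hz, envs):
        amp = np.interp(t, np.linspace(0.0, dur_s, len(env)), env)
        out += amp * np.sin(2.0 * np.pi * f * t)
    return out

# a crude piano-ish tone: 8 decaying partials, frequencies fixed in time
k = np.arange(1, 9)
freqs = 220.0 * k
envs = [np.exp(-3.0 * i * np.linspace(0.0, 1.0, 16)) / i for i in k]
tone = additive_partials(freqs, envs, 2.0)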

On Tue, Oct 23, 2018 at 2:51 PM gm  wrote:

> Does anybody know a real-world product that uses FFT for sound synthesis?
> Do you think it's feasible and makes sense?
>
> Totally unrelated to the recent discussion here, I am considering
> replacing (WS)OLA granular "clouds" with a spectral synthesis and was
> wondering if I should use FFT for that.
>
> I want to keep all the musical artefacts of the granular approach when
> desired, and I was thinking that you can get the "grain cloud" sound
> when you add noise to the phases/frequencies, for instance, and do
> similar things.
>
> An advantage of using FFT instead of sinusoids would be that you don't
> have to worry about partial trajectories, residual noise components and
> that sort of thing.
>
> Whether or not it would use much less CPU I am not sure; it depends on
> how much overlap of frames you have.
>
> Disadvantages I see are latency, even more so if you want an even
> workload, and that the implementation is somewhat fuzzy/messy when you
> do a timestretch followed by resampling.
>
> Another disadvantage is that you can't have immediate parameter changes,
> since everything is frame based; and even though some granularity is
> fine for me, the granularity of FFT would be fixed to the overlap/frame
> size, which is another disadvantage.
>
> Another disadvantage I see is the temporal blur you get when you modify
> the sound.
>
> Any thoughts on this? Experiences?

Re: [music-dsp] tracking drum partials

2017-07-30 Thread Ian Esten
That would also be my first choice. It might also be worth looking at
modern algorithms in that family of methods. A lot of effort has gone into
designing exponentially damped sine methods for voice compression and
transmission. They will be more robust to noise than Prony's method. Some
methods can even handle arbitrarily coloured noise, not just white. Some
methods also deal with time-varying damped sinusoids, which would be
interesting if your drums can change their resonant modes over time, as
happens with the tabla, timpani and others.
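
For reference, the core of Prony's method is just a linear-prediction fit
followed by polynomial rooting. A toy Python sketch (my own, and
deliberately naive -- it is quite sensitive to noise, which is exactly
why the more modern variants exist):

import numpy as np

def prony_modes(x, p, sr):
    """Toy Prony fit: model x as p exponentially damped complex
    sinusoids; return per-pole (frequency in Hz, damping in nepers/s)."""
    N = len(x)
    # linear prediction step: x[n] = -(a1*x[n-1] + ... + ap*x[n-p])
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    poles = np.roots(np.r_[1.0, a])
    return np.angle(poles) * sr / (2 * np.pi), np.log(np.abs(poles)) * sr

# quick check: a 440 Hz mode decaying at -5 nepers/second
sr = 48000.0
n = np.arange(4096)
x = np.exp(-5.0 * n / sr) * np.cos(2 * np.pi * 440.0 * n / sr)
print(prony_modes(x, 2, sr))   # poles near +/-440 Hz, damping near -5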

Best,
Ian

On Sun, Jul 30, 2017 at 12:14 PM, Corey K  wrote:

> You might want to look into a parametric method of estimating the partials
> as well, e.g., Prony's method, which could give you much higher resolution
> than the FFT.
>
> Best,
>
> Corey Kereliuk
> www.reverberate.ca
>
>
> On Jul 28, 2017 12:47 PM, "Thomas Rehaag"  wrote:
>
> see below.
>
>
>>  Original Message 
>> Subject: Re: [music-dsp] tracking drum partials
>> From: "Thomas Rehaag" 
>> Date: Thu, July 27, 2017 4:02 pm
>> To: music-dsp@music.columbia.edu
>> 
>> --
>>
>> >
>> > @Robert:
>> > I didn't quite get "for analysis, i would recommend Gaussian windows
>> > because each partial will have its own gaussian bump in the frequency
>> > domain ..."
>> > Looks like you wouldn't just pick up the peaks like I do.
>> >
>>
>> well, i *would* sorta, but it's not always as simple as that.
>>
>> ...
>>
>>
>> > Next the highest peaks are taken from every FFT and then tracks in time
>> > are built.
>> >
>> > And it looks like I've just found the simple key to better results,
>> > after putting the whole thing aside for 3 weeks now: just use samples
>> > from drums that have been hit softly. Otherwise every bin of the first
>> > FFTs will be crossed by 2 or 3 sweeps, which leads to lots of
>> > artifacts.
>>
>> are you using MATLAB/Octave?  you might need to think about  fftshift() .
>>
>> suppose you have a sinusoid that goes on forever and you use a **very**
>> long FFT and transform it.  if you can imagine that very long FFT as
>> approximating the Fourier Transform, you will get better than a "peak", you
>> will get a *spike* at +/- f0, the frequency of that sinusoid (the
>> "negative frequency" spike will be at N-f0).  in the F.T., it's a pair of
>> dirac impulses at +/- f0.  then when you multiply by a window in the time
>> domain, that is convolving by the F.T. of the window in the frequency
>> domain.  i will call that F.T. of the window, the "window spectrum".  a
>> window function is normally a low-pass kinda signal, which means the window
>> spectrum will peak at a frequency of 0.   convolving that window spectrum
>> with a dirac spike at f0 simply moves that window spectrum to f0.  so it's
>> no longer a spike, but a more rounded peak at f0.  i will call that "more
>> rounded peak" the "main lobe" of the window spectrum.  and it is what i
>> meant by the "gaussian bump" in the previous response.
>>
>> now most (actually all, to some degree) window spectra have sidelobes and
>> much of what goes on with designing a good window function is to deal with
>> those sidelobe bumps.  because the sidelobe of one partial will add to the
>> mainlobe of another partial and possibly skew the apparent peak location
>> and peak height.  one of the design goals behind the Hamming window was to
>> beat down the sidelobes a little.  a more systematic approach is the Kaiser
>> window which allows you to trade off sidelobe height and mainlobe width.
>> you would like *both* a skinny mainlobe and small sidelobes, but you can't
>> get both without increasing the window length "L".
>>
>> another property that *some* windows have that is of interest to the
>> music-dsp crowd is that of being (or not) complementary.  that is, adjacent
>> windows add to 1.  this is important in **synthesis** (say in the phase
>> vocoder), but is not important in **analysis**.  for example, the Kaiser
>> window is not complementary, but the Hann window is.  so, if you don't need
>> complementary, then you might wanna choose a window with good sidelobe
>> behavior.
>>
>> the reason i suggested Gaussian over Kaiser was just sorta knee-jerk.
>> perhaps Kaiser would be better, but one good thing about the Gaussian is
>> that its F.T. is also a Gaussian (and there are other cool properties
>> related to chirp functions).  a theoretical Gaussian window has no
>> sidelobes.  so, if you let your Gaussian window get extremely close to zero
>> before it is truncated, then it's pretty close to a theoretical Gaussian
>> and you need not worry about sidelobes and the behavior of the window
>> spectrum is also nearly perfectly Gaussian and you can sometimes take
>> advantage of that.  like in interpolating around an apparent peak (at an
>> integer FFT bin) to 

Re: [music-dsp] Transient shaping - differential envelope?

2016-07-06 Thread Ian Esten
Look up spectral flux.
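
In its simplest form (a bare-bones Python sketch of my own; frame size,
hop and the half-wave rectification are the usual starting choices):

import numpy as np

def spectral_flux(x, frame=1024, hop=256):
    """Per-frame sum of *increases* in FFT bin magnitude. Peaks in this
    novelty function mark transients."""
    win = np.hanning(frame)
    flux, prev = [], None
    for i in range(0, len(x) - frame, hop):
        mag = np.abs(np.fft.rfft(win * x[i:i + frame]))
        if prev is not None:
            flux.append(np.maximum(mag - prev, 0.0).sum())
        prev = mag
    return np.asarray(flux)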

On Wed, Jul 6, 2016 at 7:24 AM, Danijel Domazet <
danijel.doma...@littleendian.com> wrote:

> Hi music-dsp,
> How does one implement an envelope adjustment algorithm that is triggered
> only on transients, rather than on a loudness threshold which is used in
> conventional compressors? Something similar to Voxengo's TransGainer or
> SPL's Transient Designer. Transient Designer uses “differential envelope”.
> There is some info in the tech-talk section of their manual (
> spl.info/fileadmin/user_upload/anleitungen/english/TransientDesigner4_9842_OM_E.pdf).
> Could someone elaborate on this concept of level-independent envelope
> processing which doesn't need conventional “threshold” control?
>
> Thanks,
> Danijel
>
>

Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-06 Thread Ian Esten
This discussion is a refreshing change from some recent topics.
Constructive, respectful, not insulting. This is how it should be.

On Sun, Sep 6, 2015 at 2:41 AM, Ross Bencina  wrote:
> Hello Andrew,
>
> Thanks for your helpful feedback. Just to be clear: I maintain the PortAudio
> core common code and some Windows host API codes. Many of the issues that
> you've raised are for other platforms. In those cases I can only respond
> with general comments. I will forward the specific issues to the PortAudio
> list and make sure that they are ticketed.
>
> Your comments highlight a difference between your project and ours: you're
> one guy, apparently with time and talent to do it all. PortAudio has had 30+
> contributors, all putting in their little piece. As your comments indicate,
> we have not been able to consistently achieve the code quality that you
> expect. There are many reasons for that. Probably it is due to inadequate
> leadership, and for that I am responsible. However, some of these issues can
> be mitigated by more feedback and more code review, and for that I am most
> appreciative of your input.
>
> A few responses...
>
> On 6/09/2015 5:15 PM, Andrew Kelley wrote:
>>
>> PortAudio dumps a bunch of logging information to stdio without
>> explicitly turning logging on. Here's a simple program and the
>> corresponding output:
>> https://github.com/andrewrk/node-groove/issues/13#issuecomment-70757123
>
>
> Those messages are printed by ALSA, not by PortAudio. We considered
> suppressing them, but current opinion seems to be that if ALSA has problems
> it's better to log them than to suppress them. That said, it's an open
> issue:
>
> https://www.assembla.com/spaces/portaudio/tickets/163
>
> Do you have any thoughts on how best to handle ALSA's dumping messages to
> stdio?
>
>> Another example, when I start audacity, here's a bunch of stuff dumped
>> to stdio. Note that this is the *success* case; audacity started up just
>> fine.
>>
>> ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
>> ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM
>> cards.pcm.center_lfe
>> ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
>> ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel
>> map
>> ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel
>> map
>
>
> See above ticket.
>
>> Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1733
>
> 
>
> The "Expression ... failed" looks to me like a two level bug: #1 that it's
> logging like that in a release build, and #2 that those messages are being
> hit. (But as I say, PortAudio on Linux is not my area). I'll report these to
> the PortAudio list.
>
>
>> It would be helpful to me at least, to give a quick example of what a
>> "meaningful error code" is and why PortAudio's error codes are not
>> meaningful.
>>
>>
>> PortAudio error codes are indeed meaningful; I did not intend to accuse
>> PortAudio of this. I was trying to point out that error codes are the
>> only way errors are communicated as opposed to logging.
>>
>> I changed it to "Errors are never dumped to stdio" to avoid the
>> accidental implication that PortAudio has non meaningful error codes.
>
>
> Given the error messages that you posted above, I can see your point. I am
> not sure why the code is written to post those diagnostic errors in a
> release build but I will check with our Linux contributor.
>
>
>> In particular, as far as I know, there are no problems with PortAudio's
>> handling of memory allocation errors. If you know of specific cases of
>> problems with this I would be *very* interested to hear about them.
>>
>>
>> Not memory, but this one is particularly striking:
>>
>>    /* FEEDBACK: I'm not sure what to do when this call fails. There's
>>     * nothing in the PA API to do about failures in the callback system. */
>>    assert( !err );
>
>
>
> It's true, pa_mac_core.c could use some love. There is an issue on Mac if
> the hardware switches sample rates while a stream is open.
>
>
>> As for WMME and DirectSound, I think you need to be careful not to
>> confuse "deprecated" with "bad." Personally I prefer WMME to anything
>> newer when latency isn't an issue -- it just works. WASAPI has been
>> notoriously variable/unreliably on different Windows versions.
>>
>>
>> My understanding is, if you use DirectSound on a Windows Vista or
>> higher, it's an API wrapper and is using WASAPI under the hood.
>
>
> I believe that is true. Microsoft also know all of the version-specific
> WASAPI quirks to make DirectSound work reliably with all the buggy
> iterations of WASAPI.
>
>
>> May I suggest listing support for all of these APIs as a benefit of
>> PortAudio?
>>
>>
>> Fair enough.
>>
>> Would you like to have another look at the wiki page and see if it seems
>> more neutral and factual?
>
>
> I 

Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-04 Thread Ian Esten
I was going to ask the same question, until I looked at the webpage.
The features are listed out nicely.

On Fri, Sep 4, 2015 at 9:58 AM, Brad Fuller  wrote:
> On 09/04/2015 09:42 AM, Andrew Kelley wrote:
>
> libsoundio is a C library providing cross-platform audio input and output
> for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
> CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
>
> It is an alternative to PortAudio, RtAudio, and SDL audio.
>
> http://libsound.io/
>
> Why would I use libsound instead of JUCE, PortAudio, etc.?
>


Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-04 Thread Ian Esten
Thanks for sharing. Looks nice!

A question: I see that the write callback supplies a minimum and maximum
number of frames that the callback is allowed to produce. I would prefer a
callback that instructed me to produce a given number of samples. It is
simpler and more consistent with existing APIs. Is there a reason for the
minimum and maximum arguments?

And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?

Thanks!
Ian

On Fri, Sep 4, 2015 at 9:42 AM, Andrew Kelley  wrote:

> libsoundio is a C library providing cross-platform audio input and output
> for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
> CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
>
> It is an alternative to PortAudio, RtAudio, and SDL audio.
>
> http://libsound.io/
>

Re: [music-dsp] Uses of Fourier Synthesis?

2015-04-05 Thread Ian Esten
On Sun, Apr 5, 2015 at 3:32 PM, robert bristow-johnson
r...@audioimagination.com wrote:
> On 4/5/15 5:21 PM, Theo Verelst wrote:
>
>> In the context of synthesis, or intelligent multi-sampling with
>> complicated signal issues, you could try to make the FFT analysis and
>> filtering a targeted part of the synthesis path, so that the playing
>> back of samples contains variations and sample information that can be
>> picked up and transformed by an FFT-based signal path element. In some
>> form (not exactly as I described in general terms) I believe this is
>> the case with the Kurzweils.
>
> uhm, i worked for Kurz from 2006 to 2008, specifically on the synthesis
> component of the chip they used in the PC3.  while in off-line note
> analysis, there might be all sorts of frequency-domain stuff going on, i
> can tell you for sure there is no FFT-based signal path element in the
> real-time synthesis process.  (and saying so might have been saying too
> much so i won't be any more specific.  but i'll say it might be simpler
> than a particular guess.)  there are real limits to what can be done in
> real time, specifically if you want 128-voice polyphony.



Theo was probably referring to the K150 - the only FFT-based product
Kurzweil made. His statements approximate how it works.


Re: [music-dsp] Linearity of compression algorithms on more than one sound component

2015-02-13 Thread Ian Esten
> Lossy encoding wouldn't necessarily be non-linear in all cases.

Of course it is non-linear. Lossy encoding does not satisfy the two
conditions of linearity (additivity and homogeneity):
f(a + b) = f(a) + f(b)
f(c * a) = c * f(a) for any scalar c
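
A toy demonstration of why the lossy step breaks additivity, using a
coarse quantizer as a stand-in for a codec's quantization stage (my own
example, obviously not mp3 itself):

# in Python: f(x) = round(x); quantization is the heart of any lossy codec
f = lambda x: round(x)
a, b = 0.4, 0.4
print(f(a + b))      # 1
print(f(a) + f(b))   # 0, so f(a + b) != f(a) + f(b): additivity fails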


Re: [music-dsp] Linearity of compression algorithms on more than one sound component

2015-02-12 Thread Ian Esten
It's lossy. Definitely not linear.

On Thu, Feb 12, 2015 at 4:33 PM, robert bristow-johnson
r...@audioimagination.com wrote:
> On 2/12/15 3:02 PM, Theo Verelst wrote:
>
>> Hi all,
>> Just a thought I share, because of associations I won't bother you with,
>> suppose you take some form of audio compression, say Fmp3(wav) which
>> transforms wav to an mp3 form, with some encoding parameters. Now we
>> consider the linearity of the transform, most people will know this:
>>
>>   Fmp3(Lambda * wav) ^= Lambda * Fmp3(wav)
>>
>>   Fmp3(wav1 + wav2) ^= Fmp3(wav1) + Fmp3(wav2)
>
> i don't think mp3 encoding is linear.  i'm almost certain it is not.
>
> --
>
> r b-j  r...@audioimagination.com
>
> Imagination is more important than knowledge.




Re: [music-dsp] Thoughts on DSP books and neural networks

2015-02-05 Thread Ian Esten
Octave or Matlab. Or even Mathematica. It would be very interesting to see
the transfer function of your filter on the same graph as the 'ideal'
analog filter.
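
In the same spirit, a quick Python/scipy sketch (my own; it compares a
bilinear-transformed Butterworth against its analog prototype, so
substitute your own biquad coefficients for the digital half):

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

sr, fc = 48000, 1000.0
ba, aa = signal.butter(2, 2 * np.pi * fc, analog=True)  # analog prototype
bd, ad = signal.butter(2, fc, fs=sr)                    # digital version

f = np.logspace(1, np.log10(sr / 2), 512)               # 10 Hz .. Nyquist
_, ha = signal.freqs(ba, aa, worN=2 * np.pi * f)
_, hd = signal.freqz(bd, ad, worN=f, fs=sr)

plt.semilogx(f, 20 * np.log10(np.abs(ha)), label="analog")
plt.semilogx(f, 20 * np.log10(np.abs(hd)), label="digital")
plt.xlabel("Hz"); plt.ylabel("dB"); plt.legend(); plt.show()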

Ian

On Thursday, February 5, 2015, Peter S peter.schoffhau...@gmail.com wrote:

> What do you guys use to turn your impulse responses into fancy FFT
> diagrams? If you can recommend some software, I'll post some transfer
> curves of the 2 pole 1 zero biquad filter.
>
> - P


Re: [music-dsp] note onset detection

2013-08-08 Thread Ian Esten
Hmm. I think keeping it in the time domain would be difficult. Most
time-domain pitch detection algs are error prone, which would yield
false positives.

You can of course tweak FFT size and your interval between FFTs. I
think it would be reasonable to run your FFTs at a fairly coarse time
resolution and increase the density to get better time resolution when
you find a note on. That would be pretty low cost.
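
Something along these lines (an untested sketch of my own of the
coarse-then-dense idea, with spectral flux as the novelty measure; the
threshold is a placeholder you would tune):

import numpy as np

def frame_flux(x, i, frame, prev_mag):
    """Flux of the frame starting at sample i versus prev_mag."""
    mag = np.abs(np.fft.rfft(np.hanning(frame) * x[i:i + frame]))
    d = 0.0 if prev_mag is None else np.maximum(mag - prev_mag, 0.0).sum()
    return d, mag

def onsets_coarse_fine(x, frame=1024, coarse=2048, fine=256, thresh=1.0):
    """Scan with a coarse hop; wherever the flux crosses thresh, rescan
    that neighbourhood with a fine hop to localise the onset."""
    hits, prev = [], None
    for i in range(0, len(x) - frame, coarse):
        d, prev = frame_flux(x, i, frame, prev)
        if d > thresh:
            best_i, best_d, p = i, d, None
            for j in range(max(0, i - coarse),
                           min(i + coarse, len(x) - frame), fine):
                dj, p = frame_flux(x, j, frame, p)
                if dj > best_d:
                    best_i, best_d = j, dj
            hits.append(best_i)
    return hits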

On Thu, Aug 8, 2013 at 11:23 AM, robert bristow-johnson
r...@audioimagination.com wrote:
> On 8/8/13 11:05 AM, Ian Esten wrote:
>
>> On Mon, Aug 5, 2013 at 1:01 PM, robert bristow-johnson
>> r...@audioimagination.com wrote:
>>
>>> the big problem i am dealing with is people singing or humming and
>>> changing notes.  i really want to encode those pitch changes as new
>>> notes rather than as a continuation of the previous note (perhaps
>>> adjusted with MIDI pitch bend messages).  there is not sufficient
>>> change in amplitude or even in the timbre (brightness or spectral
>>> centroid) or the periodicity measure to get a grip on this.  i *am*
>>> trying to kludge together something that looks at the tracked pitch
>>> output (of my pitch detector), but i need to condition that signal
>>> because little blips (bad pitch measurements) trigger spurious NoteOn
>>> messages.  so just observing pitch changes is not robust enough.
>>
>> Spectral flux is the way to go. It will detect both changes in level
>> and sharp changes in frequency content. It will be a more robust onset
>> detector than trying to repurpose a pitch detector, with the added
>> advantage of being simpler to compute. There are other frequency
>> domain methods to compute a new note likelihood or novelty function,
>> but they end up doing about as well as spectral flux with the downside
>> of being more computationally complex.


> my issue with spectral flux, which is similar to bandsplitting and
> looking for changes in each band, is how to combine the decisions coming
> from the vector of band decisions.  the simplest is if *any* band detects
> a sufficient change to warrant a NoteOn message (a big OR operation), you
> launch the NoteOn, but you must put in some kinda timer so that 10
> different NoteOns are not launched milliseconds apart due to these
> different bands hitting at slightly different times due to a common
> source of change.
>
> i'd sorta like to keep this in the time domain if possible so that it
> might be able to operate real-time, even with a small delay.  but the
> delay in doing an STFT seems too long.
>
> the way i look at this novelty function, whether it includes spectral
> flux or not, is that there are several parameters being tracked
> simultaneously and the novelty function defines how these several
> parameters are combined to yield a single novelty parameter that one can
> make an onset decision with.
>
> and i am also concerned with offset, when the note ends and a MIDI
> NoteOff message is derived.  my case is mono-phonic (which is one reason
> i thought that spectral flux might be more than i need, but i dunno) so,
> i guess, if a new onset is detected when i am still in the NoteOn state,
> i can quickly end the current note with a NoteOff and then launch the
> new NoteOn.
>
>> (I also did not know I was sending html mail... This is take 3 at
>> getting the message through!)
>
> we hear you now, Ian.  thanks.



> --
>
> r b-j  r...@audioimagination.com
>
> Imagination is more important than knowledge.





Re: [music-dsp] who else needs a fractional delay.

2010-11-19 Thread Ian Esten
A Leslie emulation (or an effect similar to that) might well need one,
depending on how you model it. The same statement applies to tape-delay
style effects too. As you say, I bet there are plenty of others. Anyone
else got any other effects to add to the list?

Ian


On Fri, Nov 19, 2010 at 1:07 PM, robert bristow-johnson
r...@audioimagination.com wrote:

> On Nov 19, 2010, at 3:42 PM, Alan Wolfe wrote:
>
>> i fear to post a question being the OP of this huge 100+ message thread
>> but...
>>
>> it was mentioned here and in a previous email that for digital
>> flangers you want to interpolate between samples for best results.
>>
>> Would you want to do this for all sampling digital effects such as
>> delay and reverb too?  Or is flanger special because it's dealing with
>> usually a small offset in the samples, so interpolation becomes more
>> important to fake a higher resolution signal?
>
> the issue is if it's a moving or modulateable delay.
>
> there is a basic audio process called a precision delay.  sometimes it's
> used for time alignment.  flangers, pitch shifters, chorus, sample rate
> converters (upsamplers) would all need to get in between the original
> samples.
>
> a reasonably *simple* reverb would likely not have to because whether
> it's the Schroeder APF/comb or the FDN (feedback delay network) kind,
> the delays can be set to integer values and the exact delay amount is
> not critical.  some reverbs have a delay that moves very slowly to
> prevent it locking down on any particular room mode; that moving delay
> might need interpolation of samples.
>
> i can't think of another effect, offhand, that would definitely need a
> fractional delay filter in it, but i am sure they exist.
>
> --
>
> r b-j                  ...@audioimagination.com
>
> Imagination is more important than knowledge.



