[music-dsp] Antialias question

2018-05-31 Thread Kevin Chi

Dear List,

Long time lurker here, learned a lot from the random posts, so thanks 
for those.


Maybe somebody can help me out with the best practice for realtime 
applications to minimize aliasing when scanning a waveform at changing 
speed, or when constantly modulating the delay time on a delay (it's as 
if the resampling rate were changing at every step)?

I guess the smaller the change, the smaller the aliasing effect, but 
what if the change is fast enough to make it audible?

If you can suggest a paper or site or keywords that I should look for, 
I'd appreciate the help!


Kevin

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question (Kevin Chi)

2018-06-01 Thread Kevin Chi

Thanks Frank,

I use cubic interpolation between samples and it seems a bit better 
than linear to me. I am more worried about the downsampling case, when 
the playhead is moving faster than the original sample rate: that's 
when the high frequencies of the original material start to fold back 
at Nyquist and cause aliasing.


My concern is this: if it were a constant pitching-up (downsampling) 
rate, it would be "easy" to apply an antialiasing filter based on the 
resampling rate. But as the resampling rate can change at every step 
(because the speed of the playhead is being modulated), you would have 
to apply a different antialiasing filter at every step, or at least the 
filter needs different coefficients every time it's applied... That's 
the part I am unsure about: what is the best way to solve it?


--
Kevin



Hi Kevin.

I'm the least-expert guy here I'm sure, but as a fellow newbie I might
have some newbie-level ideas for you.

Just to mention something simple: linear interpolating between samples is
a huge improvement in reducing aliasing over not interpolating. For
instance if your playback math says to get sample 171.25 out of the buffer,
use .75 of sample 171 and .25 of sample 172. I don't know the math, but my
listening tests of tricky test waveforms (eg, a waveform with a fundamental
at 100Hz and a 60th harmonic at 6kHz pumped up to 10, 20, 30x the power)
showed aliasing reduced by about the same amount that quadrupling the
buffer sample rate did.

I looked into other interpolations besides linear, and they didn't seem
effective enough to bother programming. Just to give a feeling for it, if
linear eliminated 90% of aliasing, then more complex interpolation
might eliminate 95%. So they might reduce it by half compared to linear,
but only a tiny bit better compared to none at all. (The percentages are
just meant to be illustrative, and actually everything totally depends on
your input data.)

Another thing is to make sure your input data has been filtered such
that there are no frequencies over the Nyquist frequency. But if you're
dealing with a PC or smart phone I'd imagine the computer hardware handles
that for you. Once the data is in memory you cannot filter it out, as it
will already have morphed into an alias in your input buffer and be "baked
in" no matter how you try to play the data back.

Finally, listen with good headphones; you'll hear things you probably
won't in a room even if you have a good amp and speakers.

Frank
http://moselle-synth.com
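Frank's fractional-read recipe can be sketched in C as follows (a minimal illustration; `read_linear` and its arguments are hypothetical names, not code from the thread):

```c
#include <stddef.h>

/* Linear-interpolating read at a fractional buffer position:
   position 171.25 mixes 0.75 of sample 171 with 0.25 of sample 172. */
static double read_linear(const double *buf, size_t len, double pos)
{
    size_t i0 = (size_t)pos;           /* integer part */
    double frac = pos - (double)i0;    /* fractional part in [0,1) */
    size_t i1 = (i0 + 1) % len;        /* wrap around a circular buffer */
    return (1.0 - frac) * buf[i0] + frac * buf[i1];
}
```

Higher-order interpolators (cubic, windowed sinc) suppress the remaining images further, at a higher cost per sample.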



Re: [music-dsp] Antialias question

2018-06-01 Thread Kevin Chi

Thanks for your ideas, I'll look into those!

It's actually just a digital delay effect or a sample playback system, 
where I have a playhead that reads samples from a buffer, but the 
playhead position can be modulated, so the output will pitch up or down 
depending on the current direction. It's realtime resampling of the 
original material: if the playhead moves faster than the original 
sample rate, the higher frequencies will fold back at Nyquist. So 
before sampling I should apply an antialiasing filter to prevent that, 
but as the playback rate is constantly modulated, there is no single 
frequency at which to place the lowpass filter; it changes constantly.

This is what I meant by comparing to resampling.

--
Kevin



Hello Kevin

I am not convinced that your application fully compares to a
continuously changed sampling rate, but anyway:

The maths stays the same, so you will have to respect Nyquist and take
the artifacts of your AA filter, as well as your signal processing, into
account. This means you might use a sampling rate significantly higher
than the highest frequency to be represented correctly, which is the
edge frequency of the stop band of your AA filter.

For a waveform generator in an industrial device with similar
demands, we use something like DSD internally and perform
continuous downsampling / filtering. Thanks to the fully digital
representation, no further aliasing occurs; there is only the alias from
the primary sampling process, kept low by the high input rate.

What you can / must do is internal upsampling, since I expect you
operate with normal 192kHz/24bit input (?)

Regarding your concerns: it makes a difference whether you play back the
stream at a multiple of the sampling frequency (especially at the same
frequency), performing the modulation mathematically, or whether you
perform a slight variation of the output frequency, such as with an
analog PLL with modulation taking its values from a FIFO. In the first
case, there is a convolution with the filter behaviour of your
processing; in the second case, there is also a spectral spreading,
according to the individual ratio to the new sampling frequency.

From the view of a musical application, case 2 is preferred, because
any harmonics included in the stream, such as the wave table, can be
preprocessed, are easier to control, and are "musical" harmonics. In one
of my synths I operate this way: all primary frequencies come from a
PLL-buffered two-stage DDS accessing the wave table at 100% each, so
there are no gaps and jumps in the wave table as with classical DDS.

j





Re: [music-dsp] Antialiased OSC

2018-08-03 Thread Kevin Chi

Thank you for the quick ideas, the code looks nice.

Currently I am not designing a wavetable OSC, I am just trying to do 
some basic VA waveforms (saw, tri, square), so naively I thought I 
would just set up the 15-20 tables per waveform from Fourier series 
whenever I start the app/plugin, and that could do the job, then 
interpolate between the samples of the closest wavetables. But maybe 
I am too naive? :)

With wavetables I was also wondering what to do when pitch modulation 
of a running note goes up or down an octave... as I understand it, if 
I don't switch wavetables at a certain amount of pitch modulation, it 
will introduce more and more aliasing. But checking this at every 
sample sounds like overhead that maybe shouldn't be there.


--
Kevin @ Fine Cut Bodies
Mailto: ke...@finecutbodies.com
Web: http://finecutbodies.com
LinkedIn: http://lnkd.in/XZiSjF

On 8/3/18 3:11 PM, music-dsp-requ...@music.columbia.edu wrote:

lemme know if this doesn't 
work: https://www.dropbox.com/s/cybcs7tgzgplnwc/wavetable_oscillator.c
remember that this is *synthesis* code. it does not extract wavetables from a 
sampled sound (i wrote a paper a quarter century ago on how to do that, but it's 
not code).
nor does it define bandlimited square, saw, PWM, hard-sync, whatever. that's a 
sorta difficult problem, but one that someone has for sure solved, and we can 
discuss here how to do that (perhaps in MATLAB). extracting wavetables from 
sampled notes requires pitch detection/tracking and
interpolation.



[music-dsp] Antialiased OSC

2018-08-03 Thread Kevin Chi

Hi,

Is there such a thing as today's standard for softSynth antialiased 
oscillators?


I was looking up PolyBLEP oscillators, and was wondering how they 
compare to an oscillator based on 1-2 waveTables per octave, or maybe 
to some other algorithms.
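For context, the usual PolyBLEP sawtooth formulation can be sketched like this (an illustration, not a reference implementation): a naive ramp plus a two-sample polynomial correction around each wrap, where dt = freq / samplerate.

```c
/* Two-sample polynomial band-limited step residual around a wrap at t = 0/1. */
static double poly_blep(double t, double dt)
{
    if (t < dt) {                 /* just after the discontinuity */
        t /= dt;
        return t + t - t * t - 1.0;
    }
    if (t > 1.0 - dt) {           /* just before the discontinuity */
        t = (t - 1.0) / dt;
        return t * t + t + t + 1.0;
    }
    return 0.0;                   /* elsewhere: naive waveform untouched */
}

/* One sample of a PolyBLEP saw; *phase in [0,1), dt = freq / samplerate. */
static double saw_polyblep(double *phase, double dt)
{
    double y = 2.0 * *phase - 1.0;     /* naive saw in [-1,1) */
    y -= poly_blep(*phase, dt);        /* smooth the wrap discontinuity */
    *phase += dt;
    if (*phase >= 1.0) *phase -= 1.0;
    return y;
}
```

Compared to per-octave wavetables, PolyBLEP trades table memory and per-note table selection for a small per-sample branch; its residual aliasing is low but nonzero, which is why many softsynths still prefer wavetables for the lowest octaves.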


thanks for any ideas and recommendations in advance,
Kevin



Re: [music-dsp] Antialiased OSC

2018-08-07 Thread Kevin Chi
I just want to thank you guys for the amount of experience and knowledge 
you are sharing here! This list is a gem!


I started to replace my polyBLEP oscillators with waveTables to see how 
they compare!



While experimenting with PolyBLEP I ran into something I don't get, 
and you will probably know the answer.

I read in a couple of places that if you use a leaky integrator on a 
Square you can get a Triangle. But as a leaky integrator is a 
first-order lowpass filter, you don't get a Triangle waveform, but this:


https://www.dropbox.com/s/1xq321xqcb7ir3a/PolyBLEPTri.png?dl=0

Is it me doing something wrong or misunderstanding the concept, or what 
is the best way to make a triangle with PolyBLEPs?
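For what it's worth, the usual trick (an illustrative sketch, not code from the thread) is to keep the integrator's leak very close to 1 and scale the input by 4*dt (dt = freq/samplerate), so each half period of the square ramps across the full -1..+1 range. If the pole sits too far inside the unit circle, the ramps bend into exactly the exponential shape in the screenshot.

```c
typedef struct { double y; } LeakyInt;

/* Leaky integrator turning a +/-1 (polyBLEP) square into a triangle.
   Gain 4*dt makes a half period of the square traverse -1..+1; the leak
   only has to bleed off DC, so the pole sits just below 1. */
static double leaky_triangle(LeakyInt *s, double square, double dt)
{
    double leak = 1.0 - 0.01 * dt;   /* pole very close to DC */
    s->y = leak * s->y + 4.0 * dt * square;
    return s->y;
}
```

With the leak this close to 1, the filter's corner is far below the oscillator's fundamental, so the ramps stay straight; a "generic" one-pole lowpass with a higher corner produces the rounded quasi-triangle instead.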



thanks again for the great discussion on wavetables!

--
Kevin @ Fine Cut Bodies
