Re: [music-dsp] Antialias question

2018-06-01 Thread Sound of L.A. Music and Audio

Hello Kevin

I am not convinced that your application fully compares to a 
continuously changing sampling rate, but anyway:


The maths stays the same, so you will have to respect Nyquist and take 
the artifacts of your AA filter as well as those of your signal processing 
into account. This means you might use a sampling rate significantly higher 
than needed for the highest frequency that must be represented correctly, 
which is the edge frequency of the stop band of your AA filter.


For a waveform generator in an industrial device with similar demands, 
we use something like DSD internally and perform a continuous 
downsampling / filtering. Because the representation is fully digital, 
no further aliasing occurs; there is only the alias from the primary 
sampling process, kept low by the high input rate.


What you can / must do is an internal upsampling, since I expect you 
operate with normal 192 kHz / 24 bit input (?)


Regarding your concerns: it makes a difference whether you play back the 
stream at a multiple of the sampling frequency (especially at the same 
frequency) and perform the modulation mathematically, or whether you apply 
a slight variation to the output frequency, such as with an analog PLL 
whose modulation takes its values from a FIFO. In the first case there is 
a convolution with the filter behaviour of your processing; in the second 
case there is also a spectral spreading, according to the individual ratio 
to the new sampling frequency.


From the point of view of a musical application, case 2 is preferred, 
because any harmonics included in the stream, such as those of the wave 
table, can be preprocessed, are easier to control and remain "musical" 
harmonics. In one of my synths I operate this way: all primary frequencies 
come from a PLL-buffered two-stage DDS accessing the wave table with 100% 
coverage each, so there are no gaps or jumps in the wave table as with 
classical DDS.
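
For contrast, "classical DDS" here means a single free-running phase 
accumulator reading the wave table. A minimal C++ sketch, where the 
ClassicalDDS name, the 32-bit accumulator width and the linear 
interpolation are my own illustrative choices, not part of j's scheme:

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Classical phase-accumulator DDS: a 32-bit phase wraps once per waveform
  // cycle; the fractional table position is handled by linear interpolation.
  struct ClassicalDDS {
      std::vector<float> table;   // one cycle of the waveform, filled elsewhere
      uint32_t phase = 0;
      uint32_t increment = 0;

      void setFrequency(double freqHz, double sampleRateHz) {
          // the full 2^32 range of the accumulator is one table cycle
          increment = (uint32_t)(freqHz / sampleRateHz * 4294967296.0);
      }

      float next() {
          const std::size_t n  = table.size();
          const double pos     = (double)phase / 4294967296.0 * (double)n;
          const std::size_t i0 = (std::size_t)pos;       // integer part
          const double frac    = pos - (double)i0;       // fractional part
          const float s0       = table[i0 % n];
          const float s1       = table[(i0 + 1) % n];
          phase += increment;                            // wraps by itself
          return (float)(s0 + frac * (s1 - s0));
      }
  };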


j


On 01.06.2018 at 04:03, Kevin Chi wrote:

Dear List,

Long time lurker here, learned a lot from the random posts, so thanks 
for those.


Maybe somebody can help me out with the best practice for realtime 
applications to minimize aliasing when scanning a waveform at varying 
speed, or when constantly modulating the delay time on a delay (it's like 
the resampling rate is changing at every step)?

I guess the smaller the change, the smaller the aliasing effect, but what 
if the change can be fast enough to make it audible?

If you can suggest a paper or site or keywords that I should look for, 
I'd appreciate the help!


Kevin



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp





Re: [music-dsp] Antialias question (Kevin Chi)

2018-06-01 Thread Kevin Chi

Thanks Frank,

I use cubic interpolation to interpolate between samples and it seems a 
bit better than linear to me. I am more worried about the downsampling 
case, when the playhead is moving faster than the original sample rate and 
the high frequencies of the original material start to fold back at 
Nyquist and cause aliasing.

My concern is: if it were a constant pitching-up (downsampling) rate, it 
would be "easy" to apply an antialias filter based on the resampling rate. 
But as the resampling rate can change at every step (because the speed of 
the playhead is modulated), you would have to apply a different antialias 
filter for every step, or at least the filter would need different 
coefficients every time it is applied... That's what I am unsure how best 
to solve.


--
Kevin



> Hi Kevin.
>
> I'm the least-expert guy here I'm sure, but as a fellow newbie I might
> have some newbie-level ideas for you.
>
> Just to mention something simple: linear interpolating between samples is
> a huge improvement in reducing aliasing over not interpolating.  For
> instance, if your playback math says to get sample 171.25 out of the buffer,
> use .75 of sample 171 and .25 of sample 172.  I don't know the math, but my
> listening tests of tricky test waveforms (eg, a waveform with a fundamental
> at 100Hz and a 60th harmonic at 6kHz pumped up to 10, 20, 30x the power)
> showed aliasing reduced by about the same amount that quadrupling the
> buffer sample rate did.
>
> I looked into other interpolations besides linear, and they didn't seem
> effective enough to bother programming.  Just to give a feeling for it, if
> linear eliminated 90% of aliasing, then more complex interpolation
> might eliminate 95%.  So they might reduce it by half compared to linear,
> but only a tiny bit better compared to none at all.  (The percentages are
> just meant to be illustrative and actually everything totally depends on
> your input data.)
>
> Another thing is to make sure your input data has been filtered so that
> there are no frequencies over the Nyquist frequency.  But if you're
> dealing with a PC or smart phone I'd imagine the computer hardware handles
> that for you.  Once the data is in memory you cannot filter it out, as it
> will already have morphed into an alias in your input buffer and be "baked
> in" no matter how you try to play the data back.
>
> Finally, listen with good headphones; you'll hear things you probably
> won't in a room even if you have a good amp and speakers.
>
> Frank
> http://moselle-synth.com/


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question (Kevin Chi)

2018-06-01 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialias question (Kevin Chi)

From: "Kevin Chi" 

Date: Fri, June 1, 2018 2:50 pm

To: music-dsp@music.columbia.edu

--



> Thanks Frank,

>

> I use cubic interpolation to interpolate between samples and it seems a

> bit better than linear for me. I am more worried

> about the downsampling, when the playhead is going faster than the

> original samplerate
or the output pointer ("playhead") is moving faster than the input pointer 
("recordhead").  the latter is always moving at a rate of one sample 
displacement per sample period.  the former, the output pointer, has both 
integer and fractional parts.  the integer part of the output pointer tells you 
which samples you are going to combine, and the fractional part tells you how 
you will be combining them.

> and that's

> when the high frequencies of the original material start to fold back at

> Nyquist to cause aliasing.

>

> My concern is if it would be a constant pitching up (downsample) rate

> then it would be "easy" to apply an

> antialias filter based on the resampling rate. But as the resampling

> rate can change by every step (as the speed of the

> playhead is modulating), you should apply different antialias filters

> for every step. Or at least the filter needs to use

> different coefficients every time it's applied... That's what I am

> unsure of what is the best solution to solve.

what you can do, but it's a little expensive, is when your output pointer 
stepsize is larger than one, cut that step size in half and compute two 
samples.  lowpass filter that stream of samples and pick out every other 
sample, discarding the samples in between.
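
In code, the integer/fractional split described above might look roughly 
like this (a sketch with a made-up function name, linear interpolation 
standing in for the "combining", and bounds checking omitted):

  #include <cstddef>
  #include <vector>

  // Read one output sample at the fractional position `playhead`, then
  // advance by `step` (step > 1.0 means the playhead outruns the recordhead).
  float readAndAdvance(const std::vector<float>& input,
                       double& playhead, double step)
  {
      const std::size_t i = (std::size_t)playhead;    // which samples to combine
      const double frac   = playhead - (double)i;     // how to combine them
      const float s0 = input[i];
      const float s1 = input[i + 1];
      playhead += step;                               // one output sample period
      return (float)((1.0 - frac) * s0 + frac * s1);
  }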
--

r b-j                          r...@audioimagination.com



"Imagination is more important than knowledge."

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question

2018-06-01 Thread Kevin Chi

Thanks for your ideas, I'll look into those!

It's actually just a digital delay effect or a sample playback system, 
where I have a playhead that has to read samples from a buffer, but the 
playhead position can be modulated, so the output will be pitching up/down 
depending on the actual direction. It's realtime resampling of the original 
material: when the playhead is moving faster than the original sample rate, 
the higher frequencies will fold back at Nyquist. So before sampling I 
should apply an antialias filter to prevent it, but as the playback rate is 
always being modulated, there is no single frequency at which I should 
apply the lowpass filter; it changes constantly.

This is what I meant by comparing to resampling.
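
Just to make the moving target concrete: the cutoff an ideal antialias 
lowpass would need is tied to the instantaneous playhead speed. A tiny C++ 
sketch of only that bookkeeping, where the function name and the 0.45 
safety margin are my own illustrative choices, not anything from this 
thread:

  // Safe lowpass cutoff (Hz) for the current playhead step size.
  // step = 1.0 is normal speed, step = 2.0 reads twice as fast, etc.
  double antialiasCutoffHz(double step, double sampleRateHz)
  {
      const double nyquist = 0.5 * sampleRateHz;
      if (step <= 1.0)
          return nyquist;                 // slowing down: nothing folds back
      return 0.45 * sampleRateHz / step;  // speeding up: shrink the passband
  }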

--
Kevin





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Antialias question

2018-06-01 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialias question

From: "Sound of L.A. Music and Audio" 

Date: Fri, June 1, 2018 4:48 am

To: music-dsp@music.columbia.edu

--



>

> What you can / must do is an internal upsampling, since I expect to

> operate with normal 192kHz/24Bit input (?)

>
...

>

> j

>

the anonymous "j" is right.  that's better (simpler) than what i suggested.  as 
long as you know that you'll never be pitching up more than an octave, then 
whatever your intended output pointer stepsize is, whether it's more than one 
(but less than two) or less than one, always cut it in half and generate two 
output samples per sample period and put those two output samples in another 
stream that doesn't need to be all that long.  that other stream is at twice 
your original sample rate and you will be tossing every odd sample (keeping 
just the even samples), but don't do that until *after* you low-pass filter 
that upsampled stream to half of that stream's Nyquist frequency (which is a 
nice fixed filter, could be an IIR, maybe 4th-order Butterworth).  after 
LPFing, throw away every odd sample and output the even samples at your 
original sample rate.
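
A minimal C++ sketch of that scheme, under a few assumptions: read() stands 
in for whatever interpolating fetch you already use, the Biquad / 
Butterworth4 / resampleOneSample names are made up for illustration, and 
the two Q values are the standard ones for a 4th-order Butterworth built 
from two biquad sections (RBJ cookbook lowpass formulas):

  #include <cmath>

  // One lowpass biquad section (RBJ cookbook formulas), Direct Form I.
  struct Biquad {
      double b0 = 0, b1 = 0, b2 = 0, a1 = 0, a2 = 0;
      double x1 = 0, x2 = 0, y1 = 0, y2 = 0;

      void setLowpass(double cutoffHz, double sampleRateHz, double Q) {
          const double pi    = 3.14159265358979323846;
          const double w0    = 2.0 * pi * cutoffHz / sampleRateHz;
          const double alpha = std::sin(w0) / (2.0 * Q);
          const double cosw0 = std::cos(w0);
          const double a0    = 1.0 + alpha;
          b0 = (1.0 - cosw0) / (2.0 * a0);
          b1 = (1.0 - cosw0) / a0;
          b2 = b0;
          a1 = (-2.0 * cosw0) / a0;
          a2 = (1.0 - alpha) / a0;
      }

      double process(double x) {
          const double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
          x2 = x1; x1 = x; y2 = y1; y1 = y;
          return y;
      }
  };

  // 4th-order Butterworth lowpass = two cascaded biquads with these Qs.
  struct Butterworth4 {
      Biquad s1, s2;
      void setLowpass(double cutoffHz, double sampleRateHz) {
          s1.setLowpass(cutoffHz, sampleRateHz, 0.54119610);  // 1/(2*cos(pi/8))
          s2.setLowpass(cutoffHz, sampleRateHz, 1.30656296);  // 1/(2*cos(3*pi/8))
      }
      double process(double x) { return s2.process(s1.process(x)); }
  };

  // One output sample: take two half-steps (so the intermediate stream runs
  // at twice the original rate), filter both intermediate samples, and keep
  // only the second one, discarding the other.
  template <typename ReadFn>
  float resampleOneSample(ReadFn read, double& playhead, double step,
                          Butterworth4& lpf)
  {
      float kept = 0.0f;
      for (int k = 0; k < 2; ++k) {
          const float s = read(playhead);       // interpolating fetch
          playhead += 0.5 * step;               // half the intended step size
          kept = (float)lpf.process(s);         // filter state sees both samples
      }
      return kept;                              // every other filtered sample
  }

The filter is configured once, at the doubled rate, with the cutoff at the 
original Nyquist as described, e.g. lpf.setLowpass(0.5 * originalRate, 
2.0 * originalRate); in practice you might pull the cutoff a little lower 
to leave a transition band.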
--


r b-j                          r...@audioimagination.com



"Imagination is more important than knowledge."

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] [AES LAC 2018] Final Deadline

2018-06-01 Thread Martín Rocamora
AES LAC 2018: FINAL DEADLINE

Dear Colleagues,

The 2018 AES Latin American Congress of Audio Engineering (AES LAC 2018)
organizing committee has decided to extend the paper submission deadline
until Tuesday, June 5, 2018. There will be no further extensions.

IMPORTANT DATES
- Submission Deadline: June 5, 2018
- Notification of Acceptance: July 2, 2018
- Camera Ready: July 23, 2018

Please click here  to submit your paper (in English, Portuguese or Spanish)
through the JEMS/SBC system. For more information please visit the
AES LAC 2018  webpage or address the organizing committee:
congresoacademicolac2...@aesuruguay.org

We look forward to seeing you at AES LAC 2018.

Sincerely,
AES LAC 2018 organizing committee
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question (Kevin Chi)

2018-06-01 Thread Frank Sheeran
Hi Kevin.

I'm the least-expert guy here I'm sure, but as a fellow newbie I might have
some newbie-level ideas for you.

Just to mention something simple: linear interpolating between samples is a
huge improvement in reducing aliasing over not interpolating.  For instance,
if your playback math says to get sample 171.25 out of the buffer, use .75
of sample 171 and .25 of sample 172.  I don't know the math, but my
listening tests of tricky test waveforms (eg, a waveform with a fundamental
at 100Hz and a 60th harmonic at 6kHz pumped up to 10, 20, 30x the power)
showed aliasing reduced by about the same amount that quadrupling the
buffer sample rate did.
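
That paragraph translates almost line for line into code; a tiny C++ sketch
of the same .75 / .25 weighting (the function name is mine, bounds checking
left out):

  #include <cstddef>
  #include <vector>

  // Position 171.25 -> 0.75 * buffer[171] + 0.25 * buffer[172]
  float readLinear(const std::vector<float>& buffer, double position)
  {
      const std::size_t index = (std::size_t)position;    // 171
      const double frac       = position - (double)index; // 0.25
      return (float)((1.0 - frac) * buffer[index] + frac * buffer[index + 1]);
  }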

I looked into other interpolations besides linear, and they didn't seem
effective enough to bother programming.  Just to give a feeling for it, if
linear eliminated 90% of aliasing, then more complex interpolation
might eliminate 95%.  So they might reduce it by half compared to linear,
but only a tiny bit better compared to none at all.  (The percentages are
just meant to be illustrative and actually everything totally depends on
your input data.)

Another thing is to make sure your input data has been filtered so that
there are no frequencies over the Nyquist frequency.  But if you're
dealing with a PC or smart phone I'd imagine the computer hardware handles
that for you.  Once the data is in memory you cannot filter it out, as it
will already have morphed into an alias in your input buffer and be "baked
in" no matter how you try to play the data back.

Finally, listen with good headphones; you'll hear things you probably won't
in a room even if you have a good amp and speakers.

Frank
http://moselle-synth.com/
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question (Kevin Chi)

2018-06-01 Thread Evan Balster
Hey, Frank —

I use four-point, third-order hermite resampling everywhere.  It's fast,
relatively simple, and as good as will be necessary for most applications.
Linear resampling can introduce some perceptible harmonic distortion into
higher frequencies.  This will be especially noticeable when slowing down
or upsampling audio with high-frequency content.
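
For reference, the usual 4-point, 3rd-order Hermite form (the same family
covered in the deip.pdf paper linked below) is only a few lines of C++; the
hermite4 name is mine:

  // 4-point, 3rd-order Hermite interpolation.  ym1, y0, y1, y2 are four
  // consecutive samples; frac in [0,1) is the position between y0 and y1.
  float hermite4(float frac, float ym1, float y0, float y1, float y2)
  {
      const float c0 = y0;
      const float c1 = 0.5f * (y1 - ym1);
      const float c2 = ym1 - 2.5f * y0 + 2.0f * y1 - 0.5f * y2;
      const float c3 = 0.5f * (y2 - ym1) + 1.5f * (y0 - y1);
      return ((c3 * frac + c2) * frac + c1) * frac + c0;
  }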

Aliasing in audio occurs when frequencies outside the representable range
are generated, by resampling, synthesis or other sources of harmonic
distortion.  Aliased frequencies "wrap around"; generating a 1700 Hz
frequency in a signal with a 1000 Hz Nyquist will result in a 300 Hz
frequency.  As a rule of thumb, when speeding up audio, you first filter
out those frequencies which will be pushed over Nyquist and wrap around;
when slowing it down, you afterward filter out those frequencies which
should not exist (but come into existence because of harmonic distortion).
I made a great improvement in a resampling-heavy soundscape application by
implementing fourth-order butterworth filters in my pitch-shifters, which
had formerly implemented no antialiasing; a great deal of "harshness" was
eliminated in one sweep.
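
The wrap-around arithmetic is easy to check numerically; a small C++ helper
(the name is mine) that reflects any frequency back into [0, Nyquist], so
1700 Hz against a 1000 Hz Nyquist comes out as 300 Hz:

  #include <cmath>

  // Reflect a frequency into the representable range [0, nyquistHz].
  // Example: aliasedFrequency(1700.0, 1000.0) == 300.0
  double aliasedFrequency(double freqHz, double nyquistHz)
  {
      const double f = std::fmod(std::fabs(freqHz), 2.0 * nyquistHz);
      return (f <= nyquistHz) ? f : 2.0 * nyquistHz - f;
  }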

Here is a wonderful, very skimmable paper that tells you everything you
could ever want to know about resampling and provides example code for many
techniques:  http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf

– Evan Balster
creator of imitone 

On Fri, Jun 1, 2018 at 1:03 PM, Frank Sheeran  wrote:

> Hi Kevin.
>
> I'm the least-expert guy here I'm sure, but as a fellow newbie I might
> have some newbie-level ideas for you.
>
> Just to mention something simple: linear interpolating between samples is
> a huge improvement in reducing aliasing over not interpolating.  For
> instance if your playback math says to get sample 171.25 out of the buffer,
> use .75 of sample 171 and .25 of sample 172.  I don't know the math but my
> listening tests of tricky test waveforms (eg, a waveform with a fundamental
> at 100Hz and a 60th harmonic at 6kHz pumped up to 10, 20,30x the power,
> showed aliasing reduced by about the same amount that quadrupling the
> buffer sample rate did.
>
> I looked into other interpolations besides linear, and they didn't seem
> effective enough to bother programming.  Just to give a feeling for it, if
> linear eliminated 90% of aliasing, then then more complex interpolation
> might eliminate 95%.  So they might reduce it by half compared to linear,
> but only a tiny bit better compared to none at all.  (The percentages are
> just meant to be illustrative and actually everything totally depends on
> your input data.)
>
> Another thing is to make sure your input data has been filtered out such
> that there's no frequencies over the Nyquist frequency.  But if you're
> dealing with a PC or smart phone I'd imagine the computer hardware handles
> that for you.  Once the data is in memory you cannot filter it out, as it
> will already have morphed into an alias in your input buffer and be "baked
> in" no matter how you try to play the data back.
>
> Finally, listen with good headphones; you'll hear things you probably
> won't in a room even if you have a good amp and speakers.
>
> Frank
> http://moselle-synth.com/
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp