Re: [music-dsp] Practical filter programming questions

2020-01-12 Thread Stefan Sullivan
My 2 cents on 1: yes, we always pre-compute filter coefficients, especially
since they often involve trig functions, which are expensive. I've rarely
seen them stored as a pre-built table of filters, but if your application
has many filters operating in parallel it's a good idea; it does, however,
require you to use FIR filters rather than IIR filters. I'm used to seeing
implementations where you instantiate objects that represent filters and set
their coefficients in some sort of initialization phase, which doesn't
necessarily benefit from storing the coefficients together in memory. If you
_are_ using FIR filters, and you are computing many outputs from one or more
inputs, you can benefit from matrix math libraries, but again that's under
pretty specific conditions (see the sketch below).
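To make that last point concrete, here's a rough NumPy sketch (untested, all
names made up) of the "many FIR filters in parallel" case: stack the
pre-computed coefficient sets as rows of a matrix and get every filter's
output with a single matrix product.

import numpy as np

num_filters, num_taps = 16, 64
fir_bank = np.random.randn(num_filters, num_taps)  # placeholder coefficient sets
x = np.random.randn(48000)                          # one shared input signal

# Each row of `frames` is one reversed window of the input, so a single
# matrix product evaluates every FIR filter at every output sample.
frames = np.lib.stride_tricks.sliding_window_view(x, num_taps)[:, ::-1]
y = frames @ fir_bank.T   # shape: (len(x) - num_taps + 1, num_filters)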

To contribute to the conversation on 2:
In theory, when you change the filter coefficients the old states belong to
a different filter, and computing the correct new states is expensive and
non-trivial. I've never come across anything in the literature that explains
it well. In my experience some topologies are worse than others: if you have
very large state variables, which happens when you compute the poles before
the zeros, you end up with a higher probability of audible clicks/pops. See
this question I asked on Stack Exchange about changing filter parameters:
https://dsp.stackexchange.com/questions/11242/how-can-i-prove-stability-of-a-biquad-filter-with-non-zero-initial-conditions/11267#11267
(Hilmar's answer in particular gives a lot of practical advice.)
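For what it's worth, here's the kind of thing I mean as a plain-Python
sketch (illustrative names, not production code): a transposed direct form
II biquad whose coefficients can be swapped per sample while the two states
are carried over, which in my experience clicks less than direct form II.

def biquad_tdf2(x, coeff_fn, state=(0.0, 0.0)):
    # coeff_fn(n) -> (b0, b1, b2, a1, a2); called every sample so the
    # coefficients can change on the fly while s1/s2 carry over.
    s1, s2 = state
    y = []
    for n, xn in enumerate(x):
        b0, b1, b2, a1, a2 = coeff_fn(n)
        yn = b0 * xn + s1
        s1 = b1 * xn - a1 * yn + s2
        s2 = b2 * xn - a2 * yn
        y.append(yn)
    return y, (s1, s2)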

Regarding SIMD and GPUs: the crux of what makes a DSP processor different
from a CPU is highly optimized SIMD, so yes, DSP engineers are typically
optimizing for SIMD, but not by using the GPU. Take this part with a grain
of salt, but if I understand correctly GPUs have much higher parallelism
than typical CPUs (something like 128 vs. 4 operations). GPUs do, however,
have their own memory, and transferring between the CPU and GPU has always
been described as expensive. When you want low-latency operation (which you
always do for audio), that memory transfer is at least something to think
about. Those are the typical arguments against using GPUs for audio DSP,
but I'm not aware of anybody who's actually tried it. If you have the
skills, I highly encourage trying it and sharing your results with the list.
Something tells me those arguments are probably outweighed by the
computational efficiency of a GPU, especially if you're not changing filter
coefficients very often.

Anecdotally, I've started to hear whispers from some audio DSP folks who
are starting to prefer MISD operations over SIMD operations, reducing
buffer sizes as low as a single sample to get high performance out of their
CPUs with very low latency. I haven't heard yet whether this is easier or
not. It might make implementing algorithms and optimizing them much more
decoupled for audio DSP, but I haven't really tried it out on anything I've
made yet. Again, if you have the skills for that type of optimization, I'd
highly encourage trying it and sharing your results with the list. Has
anybody else tested MISD vs. SIMD optimization techniques on DSP?

-Stefan


On Sun, Jan 12, 2020, 16:33 Ross Bencina  wrote:

> On 12/01/2020 5:06 PM, Frank Sheeran wrote:
> > I have a couple audio programming books (Zolzer DAFX and Pirkle
> > Designing Audio Effect Plugins in C++).  All the filters they describe
> > were easy enough to program.
> >
> > However, they don't discuss having the frequency and resonance (or
> > whatever inputs a given filter has--parametric EQ etc.) CHANGE.
> >
> > I am doing the expensive thing of recalculating all the coefficients
> > every sample, but that uses a lot of CPU.
> >
> > My questions are:
> >
> > 1. Is there a cheaper way to do this?  For instance can one
> > pre-calculate a big matrix of filter coefficients, say 128 cutoffs
> > (about enough for each semitone of human hearing) and maybe 10
> > resonances, and simply interpolating between them?  Does that even work?
>
> It depends on the filter topology. Coefficient space is not the same as
> linear frequency or resonance space. Interpolating in coefficient space
> may or may not produce the desired results -- but like any other
> interpolation situation, the more pre-computed points that you have, the
> closer you get to the original function. One question that you need to
> resolve is whether all of the interpolated coefficient sets produce
> stable filters (e.g., keep all the poles inside the unit circle).
>
>
> > 2. when filter coefficients change, are the t-1 and t-2 values in the
> > pipeline still good to use?
>
> Really there are two questions:
>
> - Are the filter states still valid after coefficient change (Not in
> general)
> - Is the filter unconditionally stable if you change the components at
> audio rate (maybe)
>
> To some extent it depends how frequently you intend to update the
> coefficients. Jean Laroche's paper is the one to read for an
> 

Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-05 Thread Stefan Sullivan
I'm definitely not the most mathy person on the list, but I think there's
something about the complex exponentials, real transforms and the 2-point
case. For any DFT of a real signal you should get a real-valued sample at DC
and Nyquist, which indeed you do get with your matrix. However, there should
be some complex numbers in the matrix for a 4-point DFT, which you won't get
no matter how many matrices of that form you multiply together. My guess is
that yours is a special case of a DFT matrix for 2 bins. I suspect if you
took a 4-point DFT matrix and tried the same thing it might work out better.

https://en.wikipedia.org/wiki/DFT_matrix
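A quick NumPy check of that hunch (untested sketch): the 2-point DFT matrix
is purely real, while the 4-point one has genuinely complex entries, which
is why products of the real 2x2 form can't reproduce a longer DFT.

import numpy as np

def dft_matrix(n):
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

print(np.abs(dft_matrix(2).imag).max())  # ~0: the 2-point matrix is real
print(np.abs(dft_matrix(4).imag).max())  # 1.0: the 4-point matrix is not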

Stefan

On Mon, Nov 5, 2018, 12:40 Ethan Duni wrote:

> You can combine consecutive DFTs. Intuitively, the basis functions are
> periodic on the transform length. But it won't be as efficient as having
> done the big FFT (as you say, the decimation in time approach interleaves
> the inputs, so you gotta pay the piper to unwind that). Note that this is
> for naked transforms of successive blocks of inputs, not a WOLA filter
> bank.
>
> There are Dolby codecs that do similar with a suitable flavor of DCT (type
> II I think?) - you have your encoder going along at the usual frame rate,
> but if it detects a string of stationary inputs it can fold them together
> into one big high-res DCT and code that instead.
>
> On Mon, Nov 5, 2018 at 11:34 AM Ethan Fenn  wrote:
>
>> I don't think that's correct -- DIF involves first doing a single stage
>> of butterfly operations over the input, and then doing two smaller DFTs on
>> that preprocessed data. I don't think there is any reasonable way to take
>> two "consecutive" DFTs of the raw input data and combine them into a longer
>> DFT.
>>
>> (And I don't know anything about the historical question!)
>>
>> -Ethan
>>
>>
>>
>> On Mon, Nov 5, 2018 at 2:18 PM, robert bristow-johnson <
>> r...@audioimagination.com> wrote:
>>
>>>
>>>
>>> Ethan, that's just the difference between Decimation-in-Frequency FFT
>>> and Decimation-in-Time FFT.
>>>
>>> i guess i am not entirely certainly of the history, but i credited both
>>> the DIT and DIF FFT to Cooley and Tukey.  that might be an incorrect
>>> historical impression.
>>>
>>>
>>>
>>>  Original Message
>>> 
>>> Subject: Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for
>>> realtime synthesis?
>>> From: "Ethan Fenn" 
>>> Date: Mon, November 5, 2018 10:17 am
>>> To: music-dsp@music.columbia.edu
>>>
>>> --
>>>
>>> > It's not exactly Cooley-Tukey. In Cooley-Tukey you take two
>>> _interleaved_
>>> > DFT's (that is, the DFT of the even-numbered samples and the DFT of the
>>> > odd-numbered samples) and combine them into one longer DFT. But here
>>> you're
>>> > talking about taking two _consecutive_ DFT's. I don't think there's any
>>> > cheap way to combine these to exactly recover an individual bin of the
>>> > longer DFT.
>>> >
>>> > Of course it's possible you'll be able to come up with a clever
>>> frequency
>>> > estimator using this information. I'm just saying it won't be exact in
>>> the
>>> > way Cooley-Tukey is.
>>> >
>>> > -Ethan
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>>
>>> r b-j r...@audioimagination.com
>>>
>>> "Imagination is more important than knowledge."
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] WSOLA

2018-10-13 Thread Stefan Sullivan
Because the O stands for overlap:
https://ieeexplore.ieee.org/document/319366. I'm not specifically familiar
with WSOLA, but if it's like the other overlap-add techniques, then starting
with some overlap and modifying that relationship is the fundamental way to
change the sound.

Wikipedia has a decent article on pitch timescale modifications.
https://en.m.wikipedia.org/wiki/Audio_time_stretching_and_pitch_scaling

Stefan


On Sat, Oct 13, 2018, 07:41 Alex Dashevski  wrote:

> Hi,
>
> Could you explain why we need an overlap parameter in the WSOLA ? How can
> I find optimal values ? What effect will I get if I use bigger or smaller
> value ?
>
> Thanks,
> Alex
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] What is resonance?

2018-07-22 Thread Stefan Sullivan
Oh, interesting challenge of defining resonance. The typical textbook
diagram shows a little local bump at some frequency in a Bode diagram, but
if you consider DC the resonant frequency, then the only way to show a bump
is to be higher than all the other frequencies (i.e. monotonically
decreasing).

Hmm, I wonder. One common definition of resonance bandwidth is also the
"3 dB" point. But I guess resonating at DC is probably not a common use case.

Stefan


On Sun, Jul 22, 2018, 18:11 robert bristow-johnson 
wrote:

>
> I've been wondering about the connection that resonance and filter orders
> at least 2.  That's 2 delays (and feedback).
>
> But if you're limiting the resonant frequencies to DC and Nyquist, then
> with a one-sample delay digital filter, you can have something like
> "resonance".
>
>  Even if the single delay is two samples long (but no tap in the middle),
> that allows only DC, Nyquist, or half-Nyquist as tunable frequencies.
>
> So just sayin, in another sense of the word, that "resonance" can be had
> with a single *long* delay and one feedback path or with two arbitrarily
> short delays and two feedback paths.
>
>
>
> --
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
>  Original message 
> From: Stefan Sullivan 
> Date: 7/22/2018 2:20 PM (GMT-08:00)
> To: A discussion list for music-related DSP 
>
> Subject: Re: [music-dsp] What is resonance?
>
> Yes. The term Helmholtz resonator should be a hint ;) Basically, when a
> sound gets added to itself after a delay, you end up adding energy to the
> frequency that corresponds to that delay amount. For very long echoes we
> don't hear it as a resonance, but for shorter delays it will boost higher
> and higher frequencies into the audible range.
>
> Stefan
>
> On Sun, Jul 22, 2018, 08:10  wrote:
>
>> Hello all
>>
>> Is "feedback with delay" really resonance? I recognize many people
>> describe the effects of "room resonanes this way", but to my understanding
>> these are no resonances in the basic meaning but reflections. A resonance
>> is a self standing oscillating system like a guitar string or an air mass
>> in a Helmholtz resonator.
>>
>>  Rolf
>>
>> *Gesendet:* Samstag, 21. Juli 2018 um 04:33 Uhr
>> *Von:* "Andrew Simper" 
>> *An:* "A discussion list for music-related DSP" <
>> music-dsp@music.columbia.edu>
>> *Cc:* audit...@lists.mcgill.ca, local-us...@ccrma.stanford.edu,
>> surso...@music.vt.edu
>> *Betreff:* Re: [music-dsp] What is resonance?
>> Resonance is just delay with feedback. Resonance occurs when you delay a
>> signal and then feed it back with some gain to the input of the delay "in
>> phase"
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] What is resonance?

2018-07-22 Thread Stefan Sullivan
Yes. The term Helmholtz resonator should be a hint ;) Basically, when a
sound gets added to itself after a delay, you end up adding energy to the
frequency that corresponds to that delay amount. For very long echoes we
don't hear it as a resonance, but for shorter delays it will boost higher
and higher frequencies into the audible range.
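In code terms that's just a feedback comb filter; a minimal NumPy sketch
(all numbers picked arbitrarily) that resonates near fs/delay:

import numpy as np

fs, delay, g = 48000, 100, 0.9   # feedback gain g < 1; resonates near fs/delay = 480 Hz
x = np.zeros(fs); x[0] = 1.0     # impulse in
y = np.zeros(fs)
for n in range(fs):
    fb = g * y[n - delay] if n >= delay else 0.0   # delayed output fed back "in phase"
    y[n] = x[n] + fb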

Stefan

On Sun, Jul 22, 2018, 08:10  wrote:

> Hello all
>
> Is "feedback with delay" really resonance? I recognize many people
> describe the effects of "room resonanes this way", but to my understanding
> these are no resonances in the basic meaning but reflections. A resonance
> is a self standing oscillating system like a guitar string or an air mass
> in a Helmholtz resonator.
>
>  Rolf
>
> *Gesendet:* Samstag, 21. Juli 2018 um 04:33 Uhr
> *Von:* "Andrew Simper" 
> *An:* "A discussion list for music-related DSP" <
> music-dsp@music.columbia.edu>
> *Cc:* audit...@lists.mcgill.ca, local-us...@ccrma.stanford.edu,
> surso...@music.vt.edu
> *Betreff:* Re: [music-dsp] What is resonance?
> Resonance is just delay with feedback. Resonance occurs when you delay a
> signal and then feed it back with some gain to the input of the delay "in
> phase"
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialias question

2018-06-03 Thread Stefan Sullivan
Edits:
Paragraph 1
...assuming you're going to end up *masking* the aliased components
Masking ≠ making

Paragraph 2
...or otherwise *generic* samples
Generic ≠ genetic

Stefan



On Sun, Jun 3, 2018, 19:41 Stefan Sullivan 
wrote:

> You can still take a heuristic approach. You probably have some idea of
> the max modulation rate, right? I would just indiscriminately apply a
> low-pass filter at Nyquist/fastest rate of change (forgive my fast and
> loose math here). You can also relax that a little bit by taking a
> perceptual criteria (say -60dB or so assuming you're going to end up making
> the aliased components). You can further relax your constraints if you know
> anything about the frequency content of your samples. For example, if
> they're constructed to have a certain number of partials with a known
> rolloff than your highest frequency content may already be at, say -20dB.
> If your highest partial were my completely arbitrarily chosen -20dB and it
> didn't come very close to Nyquist and it shifted into a range where you
> know it's going to be completely masked at some level, you might get away
> with a pretty gentle (relatively) roll-off instead of the usual brick-wall
> criteria (cue comments from oversampling-philes).
>
> Knowing the frequency content may be a generous assumption depending on
> your application. Wavetables can be pretty well categorized, but if there's
> any user-provided or otherwise genetic samples it may not be very useful to
> you. Still, it's probably safe to assume that you don't have a 0dBFS
> partial right below Nyquist, and I'm guessing that your modulated delay
> line is probably not being applied to percussive samples.
>
> Actually now that I say that, I wouldn't be surprised if there's some
> papers about perceptual alias-free resampling of wavetables by Dattoro or
> Pirkle, or one of the other authors who like synthesis. Anybody on the list
> know some papers on the subject?
>
> Stefan
>
>
> On Fri, Jun 1, 2018, 12:04 Kevin Chi  wrote:
>
>> Thanks for your ideas, I'll look into those!
>>
>> It's actually just a digital delay effect or a sample playback system,
>> where I have a playhead that have to read samples from a buffer, but the
>> playhead
>> position can be modulated, so the output will be pitching up/down
>> depending on the
>> actual direction. It's realtime resampling of the original material
>> where if the playhead is
>> moving faster than the original sample rate, then the higher frequencies
>> will be folding back
>> at Nyquist. So before sampling I should apply an antialias filter to
>> prevent it, but as the rate of
>> the playback is always modulated, there is not an exact frequency where
>> I should apply the
>> lowpass filter, it's changing constantly.
>>
>> This is what I meant by comparing to resampling.
>>
>> --
>> Kevin
>>
>>
>> > Hello Kevin
>> >
>> > I am not convinced that your application totally compares to a
>> > continously changed sampling rate, but anyway:
>> >
>> > The maths stays the same, so you will have to respect Nyquist and take
>> > the artifacts of your AA filter as well as your signal processing into
>> > account. This means you might use a sampling rate significantly higher
>> > than the highest frequency to be represented correctly and this is the
>> > edge frequency of the stop band of your AA-filter.
>> >
>> > For a wave form generator in an industrial device, having similar
>> > demands, we are using something like DSD internally and perform a
>> > continous downsampling / filtering. According to the fully digital
>> > representation no further aliasing occurs. There is only the alias from
>> > the primary sampling process, held low because of the high input rate.
>> >
>> > What you can / must do is an internal upsampling, since I expect to
>> > operate with normal 192kHz/24Bit input (?)
>> >
>> > Regarding your concerns: It is a difference if you playback the stream
>> > with a multiple of the sampling frequency, especially with the same
>> > frequency, performing modulation mathematically or if you perform a
>> > slight variation of the output frequency, such as with an analog PLL
>> > with modulation taking the values from a FIFO. In the first case, there
>> > is a convolution with the filter behaviour of you processing, in the
>> > second case, there is also a spreading, according to the individual
>> > ratio to the new sampling frequency.
>> >
>> >   From the view of a musical application, case 2 is preferre

Re: [music-dsp] Antialias question

2018-06-03 Thread Stefan Sullivan
You can still take a heuristic approach. You probably have some idea of the
max modulation rate, right? I would just indiscriminately apply a low-pass
filter at Nyquist/fastest rate of change (forgive my fast and loose math
here). You can also relax that a little bit by taking a perceptual criteria
(say -60dB or so assuming you're going to end up making the aliased
components). You can further relax your constraints if you know anything
about the frequency content of your samples. For example, if they're
constructed to have a certain number of partials with a known rolloff than
your highest frequency content may already be at, say -20dB. If your
highest partial were my completely arbitrarily chosen -20dB and it didn't
come very close to Nyquist and it shifted into a range where you know it's
going to be completely masked at some level, you might get away with a
pretty gentle (relatively) roll-off instead of the usual brick-wall
criteria (cue comments from oversampling-philes).
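As a very loose sketch of that heuristic (SciPy assumed; the maximum
playback rate and the filter order are made-up numbers), you would pick the
cutoff from the fastest expected rate and filter before the modulated read:

import numpy as np
from scipy import signal

fs = 48000.0
max_rate = 2.0                   # assumed fastest playback speed (one octave up)
cutoff = 0.5 * fs / max_rate     # content above this folds back at the fastest rate
b, a = signal.butter(4, cutoff / (0.5 * fs))  # low-pass, cutoff normalized to Nyquist
x = np.random.randn(int(fs))
y = signal.lfilter(b, a, x)      # pre-filtered signal to feed the modulated playhead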

Knowing the frequency content may be a generous assumption depending on
your application. Wavetables can be pretty well categorized, but if there's
any user-provided or otherwise genetic samples it may not be very useful to
you. Still, it's probably safe to assume that you don't have a 0dBFS
partial right below Nyquist, and I'm guessing that your modulated delay
line is probably not being applied to percussive samples.

Actually now that I say that, I wouldn't be surprised if there's some
papers about perceptual alias-free resampling of wavetables by Dattoro or
Pirkle, or one of the other authors who like synthesis. Anybody on the list
know some papers on the subject?

Stefan


On Fri, Jun 1, 2018, 12:04 Kevin Chi  wrote:

> Thanks for your ideas, I'll look into those!
>
> It's actually just a digital delay effect or a sample playback system,
> where I have a playhead that have to read samples from a buffer, but the
> playhead
> position can be modulated, so the output will be pitching up/down
> depending on the
> actual direction. It's realtime resampling of the original material
> where if the playhead is
> moving faster than the original sample rate, then the higher frequencies
> will be folding back
> at Nyquist. So before sampling I should apply an antialias filter to
> prevent it, but as the rate of
> the playback is always modulated, there is not an exact frequency where
> I should apply the
> lowpass filter, it's changing constantly.
>
> This is what I meant by comparing to resampling.
>
> --
> Kevin
>
>
> > Hello Kevin
> >
> > I am not convinced that your application totally compares to a
> > continously changed sampling rate, but anyway:
> >
> > The maths stays the same, so you will have to respect Nyquist and take
> > the artifacts of your AA filter as well as your signal processing into
> > account. This means you might use a sampling rate significantly higher
> > than the highest frequency to be represented correctly and this is the
> > edge frequency of the stop band of your AA-filter.
> >
> > For a wave form generator in an industrial device, having similar
> > demands, we are using something like DSD internally and perform a
> > continous downsampling / filtering. According to the fully digital
> > representation no further aliasing occurs. There is only the alias from
> > the primary sampling process, held low because of the high input rate.
> >
> > What you can / must do is an internal upsampling, since I expect to
> > operate with normal 192kHz/24Bit input (?)
> >
> > Regarding your concerns: It is a difference if you playback the stream
> > with a multiple of the sampling frequency, especially with the same
> > frequency, performing modulation mathematically or if you perform a
> > slight variation of the output frequency, such as with an analog PLL
> > with modulation taking the values from a FIFO. In the first case, there
> > is a convolution with the filter behaviour of you processing, in the
> > second case, there is also a spreading, according to the individual
> > ratio to the new sampling frequency.
> >
> >   From the view of a musical application, case 2 is preferred, because
> > any harmonics included in the stream , such as the wave table, can be
> > preprocess, easier controlled and are a "musical" harmonic. In one of my
> > synths I operate this way, that all primary frequencies come from a PLL
> > buffered 2 stage DDS accesssing the wave table with 100% each so there
> > are no gaps and jumps in the wave table as with classical DDS.
> >
> > j
> >
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>

Re: [music-dsp] Real-time pitch shifting?

2018-05-19 Thread Stefan Sullivan
Surprisingly, googling for MQ synthesis produces fewer results than I
expected. It's not very recent, but my understanding is that it was the
relatively modern approach to a phase vocoder. I didn't find any library
implementing it, but I found a couple of older papers about it.

https://www.ll.mit.edu/publications/journal/pdf/vol01_no2/1.2.3.speechprocessing.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.21.8075=rep1=pdf

Does anybody else know more about whether MQ analysis/synthesis has been
left behind by the industry? I'm also not familiar with Serra's work on
sinusoids + noise. Maybe it's the better algorithm.

I'm also curious whether anybody knows if the Melodyne/Antares class of
algorithms has a speech model in it. I'm unclear whether the OP wanted pitch
shifting for instrumental music or not, but I'm definitely curious about
the state of the art for musical frequency/scale modification of
instrumental signals.

-Stefan S

On Sat, May 19, 2018, 13:35 RJ Skerry-Ryan  wrote:

> It may not be the state of the art, but RubberBand
>  is, I believe, the best open
> source pitch shift / time stretch library out there at the moment, and can
> run in realtime on modern CPUs. SoundTouch
>  is another good option that is
> cheaper to compute (and therefore easier to run in realtime on e.g. a
> mobile CPU).
>
> Mixxx  uses both (giving the user the choice, since
> they may want SoundTouch on older CPUs) for realtime pitch shifting and
> tempo adjustment.
>
> On Sat, May 19, 2018 at 1:29 PM gm  wrote:
>
>>
>>
>> Am 19.05.2018 um 20:19 schrieb Nigel Redmon:
>> > Again, my knowledge of Melodyne is limited (to seeing a demo years
>> > ago), but I assume it’s based on somewhat similar techniques to those
>> > taught by Xavier Serra (https://youtu.be/M4GRBJJMecY)—anyone know for
>> > sure?
>>
>> I always thought the seperation of notes was based on cepstrum?
>> My idea is that a harmonic tone, comb like in the spectrum, is a peak in
>> the cepstrum. (isn't it?)
>> Probably then you can also track pitch by following a peak in the
>> cepstrum.
>> Not sure if this makes sense?
>> I never tried Melodyne in person so I am not sure what it is capable of.
>>
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reading a buffer at variable speed

2018-02-06 Thread Stefan Sullivan
Can you explain your notation a little bit? Is x[t] the sample index into
your signal? And t is time in samples?

I might formulate it as a delta of indices, where a delta of 1 is normal
playback speed and you have some exponential rate. Would something like
this work?

delta *= rate
t += delta
y[n] = x[n - t + N]

My notation probably means something different from yours, but the idea is
that the time-varying index t accelerates or decelerates at a given rate.
I've kind of written a hybrid of pseudocode and DSP notation, sorry.

You would probably actually want some interpolation, and for an application
like this one I would stick with linear interpolation (even though most
people on this list will probably disagree with me on that). Keep in mind,
though, that skipping samples means aliasing, which means low-pass filtering
your signal (unless you know there's no frequency content in offensive
ranges), and since you're essentially doing a variable skip rate, your
low-pass filter might need to be either aggressive or variable.
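Something like this plain-Python sketch (hypothetical names, no anti-alias
filter included) is what I have in mind: an exponentially ramped increment
plus linear interpolation on the read position.

def read_varispeed(buf, start_delta, rate, num_out):
    out, pos, delta = [], 0.0, start_delta
    for _ in range(num_out):
        i = int(pos)
        if i + 1 >= len(buf):
            break                 # ran off the end of the buffer
        frac = pos - i
        out.append((1.0 - frac) * buf[i] + frac * buf[i + 1])  # linear interpolation
        delta *= rate             # exponential speed change, as above
        pos += delta
    return out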

Something about this algorithm scares me in its seemingly unbounded need
for memory. Seems like a lot of heuristic constraints...

Stefan


On Feb 6, 2018 06:45, "Maximiliano Estudies"  wrote:

I am having trouble with this concept for quite some time now, I hope that
I can explain it well enough so you can understand what I mean.
I have signal stored in a buffer of known length. The buffer must be read
at a variable speed, and the variations in speed have to be exponential, so
that the resulting glissandi are (aurally) linear. In order to do that I
came up with the following formula:

x[t] = t * sample_rate * end_speed^(x[t] / T) where T is the total
length of the buffer in samples.

This doesn’t seem to work and I can’t understand why.

And my second question is, how can I get the resulting length in
milliseconds? (how long will it take to play the whole buffer)

I hope I managed to be clear enough!

Maxi

-- 
Maximiliano Estudies
+49 176 36784771 <+49%20176%2036784771>
maxiestudies.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-01 Thread Stefan Sullivan
Forgive me if you said this already, but did you try negative feedback
values? I wonder what that does to the aesthetics of the reverb.

Stefan


On Oct 1, 2017 16:24, "gm"  wrote:

> and here's the impulse response, large 4APs Early- > 3AP Loop
>
> its pretty smooth without tweaking anything manually
>
> https://soundcloud.com/traumlos_kalt/whd-ln2-impresponse/s-d1ArU
>
> the autocorrelation and autoconvolution are also very good
>
> Am 02.10.2017 um 00:45 schrieb gm:
>
> So...
> Heres my "paper", a very sloppy very first draft, several figures and
> images missing and too long.
>
> http://www.voxangelica.net/transfer/magic%20numbers%
> 20for%20reverb%20design%203b.pdf
>
> questions, comments, improvements, critique are very welcome.
> But is it even worth to write a paper about that?, its just plain simpel:
>
> The perfect allpass and echo comes at *1/(N+1 -ln(2)).*
>
> Formal proof outstanding.
>
> And if you hack & crack why it's 1/(N+1 ln(2)) exactly you'll get 76.52 %
> of the fame.
> Or 99.% even.
>
> Imagine that this may lead to perfect accoustic rooms as well...
> Everywhere in the world they will build rooms that bare your name, for
> millenia to come!
> So, yes, participate please. ;)
>
> I assume it has to do with fractional expansion but that paragraph is
> still missing in the paper.
> I have no idea about math tbh.  but I' d love to understand that.
>
>
>
> ___
> dupswapdrop: music-dsp mailing 
> listmusic-dsp@music.columbia.eduhttps://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-30 Thread Stefan Sullivan
Sometimes the simplest approach is the best approach. Sounds like a good
reverb paper to me. Some user evaluation and references to standard papers
and 

On Sep 29, 2017 8:51 AM, "gm"  wrote:

> It's a totally naive laymans approach
> I hope the formatting stays in place.
>
> The feedback delay in the loop folds the signal back
> so we have periods of a comb filter.
> |  |  |  |
> |__|__|__|___
>
> Now we want to fill the period densly with impulses:
>
> First bad idea is to place a first impulse exactly in the middle
>
> that would be a ratio for the allpass delay of 0.5 in respect to the comb
> filter.
> It means that the second next impulse falls on the period.
>
> | |
> |||___
>
>
> The next idea is to place the impulse so that after the second cycle
> it exactly fills the free space between the first pulse and the period
> like this,
> exactly in the middle between the first impulse and the period:
>
> |   |   |
> | | |  ||
> |_|_|__|__|_|___
>
> this means we need a ratio "a" for the allpass delay in respect to the
> combfilter loop that fulfills:
>
> 2a - 1 = a/2
>
> Where 1 is the period of the combfilter.
> Alternativly, to place it on the other side, we need
>
> 2a - 1 = 1 - a/2;
>
>
> |   |   |
> |   |   | | |
> |___|___|___|_|_|___
>
> This gives ratios of 0.5. 0.7 and 0.8
>
> These are bad ratios since they have very small common multiples with the
> loop period.
> So we detune them slightly so they are never in synch with the loop period
> or each other.
> That was my very naive approach, and surprisingly it worked.
>
>
> The next idea is to place the second impulse not in the middle of the free
> space
> but in a golden ratio in respect to the first impulse
>
> |||
> |   |||   |
> |___|||__||
>
> 2a - 1 = a*0.618...
>
> or
>
> N*a mod 1 = a*0.618..
>
> or if you prefer the exact solution:
>
> a = (1 + SQRT(5)) / ( SQRT(5)*N + N - 2)
>
> wich is ~ 0.723607  and the same as 1/ (1+ 0.382...) or 1/ (N + 0.382)
>
> where N is the number of impulses, that means instead of placing the 2nd
> impulse on a*0.618
> we can also place the 3rd, 4th etc for shorter AP diffusors.
>
> (And again we can also fill the other side of the first impulse with
> 0.839643
> And the solution for N = 1 is 2.618.. and we can use the reciprocal 0.381
> to place a first impusle)
>
> The pattern this gives for 0.72.. is both regular but evenly distributed
> so that each pulse
> falls an a free space, just like on a Fibonaccy flower pattern each petal
> falls an a free space,
> forever.
> (I have only estimated the first few periods manually, and it appeared
> like that
> Its hard to identify in the impulse response since I test a loop with 3
> APs )
>
> The regularity is a bad thing, but the even distribution seems like a good
> thing (?).
> I assume it doesn't even make a huge difference to using 0.618.. for a
> ratio though it seemed to sound better.
> (And if you use 0.618, what do you use for the other APs?)
>
> So it's not the solution I am looking for but interesting never the less.
>
> I believe that instant and well distributed echo density is a desired
> property
> and I assume that the more noise like the response is as a time series
> the better it works also in the frequency/phase domain.
>
> For instance you can make noise loops with randomizing all phases by FFT
> in circular convolution
> that sound very reverberated.
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] PCM audio amplitudes represent pressure or displacement?

2017-09-30 Thread Stefan Sullivan
> so there might be a phase offset between the recorded and the reproduced
> sound.


Ah, I think I might be understanding your question more intuitively now. Is
your question about positive voltages from microphones being represented as
one direction of displacement, whereas positive voltages to speakers are
represented as the opposite displacement? To be honest I'm not sure what the
convention is here, but there must be an industry-wide one; otherwise one
speaker manufacturer to the next might be phase-incoherent. I actually don't
know the answer here, but maybe somebody else on the list does?

It is worth pointing out that Nigel is right about phase being frequency
dependent. Even the mechanical system has dynamic components that have a
frequency response, which means their phase response could be nonlinear,
which transducer engineers would either need to compensate for with other
reactive mechanical components, or with the electrical components, or DSP.

Interestingly, the acoustical and mechanical systems of transducers can be
modeled as electrical circuit equivalents themselves. I assume that all
speaker/microphone manufacturers model their systems this way, but again
it's not actually my industry, so I can't speak to what actually happens.
Marshall Leach has a really good book on the subject:
https://he.kendallhunt.com/product/introduction-electroacoustics-and-audio-amplifier-design

Stefan

Re: [music-dsp] PCM audio amplitudes represent pressure or displacement?

2017-09-30 Thread Stefan Sullivan
Acoustic transducers (aka microphones and speakers) would be a good keyword
for finding more technical information. They convert pressure differentials
(not pressure per se) to +/- voltage. The pressure change is relative to a
baseline, which is usually right around 1 atmosphere (although the exact
baseline doesn't matter to the functioning of the transducer). The means of
doing so depends
on the type of microphone/speaker. The ADC/DAC converts between whatever
digital representation (usually LPCM iiuc) and analog voltages.

I guess it's a pedantic distinction between pressure and pressure
differentials, but I think understanding the flow of changes in
representation is intuitively meaningful. Each component is responsible for
understanding both representations.

Acoustic pressure waves <=> mechanical motion <=> electrical signal <=>
digitally sampled signal

The microphone diaphragm or speaker cone converts between mechanical motion
and acoustic pressure waves. The electrical components, usually either
capacitive or inductive components, convert to an electrical signal. The
ADC/DAC converts between a sampled digital signal and an electrical signal.

Hopefully that's actually helpful to your question.

Stefan


On Sep 30, 2017 11:12, "Renato Fabbri"  wrote:

> I thinks I get it.
> But, as the samples are converted to analog signal
> and then sent to speakers,
> it seems reasonable to assume that there
> is a convention.
>
> Probably displacement?
>
> At the same time,
> I think mics usually converts
> pressure into voltage,
> which would lead to PCM samples
> that represent pressure.
>
> One is proportional to the other,
> with a pi/4 phase difference,
> so there might be a phase
> offset between the recorded
> and the reproduced sound. (?? yes?)
>
> My motivation is to know the fundamentals
> as I often read and write about audio and music
> (as an academic).
>
> tx.
> R.
>
>
>
> On Sat, Sep 30, 2017 at 2:24 PM, Nigel Redmon 
> wrote:
>
>> PCM audio samples usually represent an analog voltage, so, whatever the
>> analog voltage represents.
>>
>> A mic usually converts pressure into displacement, and the displacement
>> into an analog voltage.
>>
>> Sorry if I’m missing something, due to not understanding your motivation
>> for the questions.
>>
>> The PCM samples themselves are a Pulse Code Modulated representation of
>> the analog voltage over time. If you want to understand that part better,
>>
>> http://www.earlevel.com/main/tag/sampling-theory-series/?order=ASC
>>
>>
>> On Sep 30, 2017, at 9:03 AM, Renato Fabbri 
>> wrote:
>>
>> I am not finding this information clearly.
>>
>> BTW. microphones capture amplitude or displacement
>> or it depends?
>>
>> --
>> Renato Fabbri
>> GNU/Linux User #479299
>> labmacambira.sourceforge.net
>>
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
>
> --
> Renato Fabbri
> GNU/Linux User #479299
> labmacambira.sourceforge.net
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] basic in-store speaker evaluation

2017-07-04 Thread Stefan Sullivan
It would be difficult to control for things like the time-varying behavior
of the audio processing on the phone, as well as the nonlinearities of the
same audio processing on the phone, not to mention the environmental noise
in the store, which would confound both of these behaviors. The audio
engineering approach would be to just play back pink noise, but that
involves training your ears to know what to expect. I don't know any
resources to train your ears for this kind of thing off the top of my head,
but I definitely recall learning about some apps/websites that could help
you at AES a few years ago. Maybe some others on this list know of
something?

Stefan


On Jul 4, 2017 8:54 AM, "Sampo Syreeni"  wrote:

> On 2017-07-04, Andy Farnell wrote:
>
> Something in the vein of "just put in a test DVD-A, and let your Android
>>> app run"?
>>>
>>
>> DVD? Maybe in 2000. Not knowing the playback capabilities at the store
>> you'd be better to put the test files online, in a spotify or soundcloud
>> channel.
>>
>
> Granted...if you could just drop something like eight channel FLAC's in
> and have them work out of the box on any and all of your setups.
>
> I too thoroughly hate the standardization that DVD-A never was. But you'll
> have to agree there's one thing which speaks for it: choosing the proper
> rates and channel counts, every little bit of variance has been certified
> and test out of the system. It *does* work out of the box, unlike so many
> newer file based systems.
>
> The room and listening position will be variables, needs factoring out
>> clientside, so quite a bit of DSP on the mobile.
>>
>
> Not perhaps on the mobile; let the mobile just work as the pickup, and
> stream the result back to something more workstation-like.
>
> Obviously you couldn't help your phone's pickup being uneven, that
>>>
>>
>> You need to know the phone, the app must do a client detect and
>> look up a database because there are large variances between
>> devices.
>>
>
> If you're only doing comparisons, you don't need that.
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>

[music-dsp] Smule Is Hiring

2017-04-27 Thread Stefan Sullivan
Hey all,

Smule is hiring 2 audio/DSP-related positions (and several others).

We are looking to hire one Audio/DSP systems engineer:
https://www.smule.com/jobs?gh_jid=597566

and one audio effects engineer:
https://www.smule.com/jobs?gh_jid=660357

First and foremost, we hope you can help us make our community sound
stellar! We hope you have good ears and creativity in problem-solving.
LGBTQIA folks, people of color, and people of all genders, including those
who are gender non-conforming, are encouraged to apply.

Smule is located in the SoMa district of San Francisco. It's an
exciting fast-paced company building social music products. Smule
boasts competitive perks including an open collaborative office
environment, great healthcare/dental/vision benefits, catered food,
weekly boba, a yearly hackathon and more. To learn more check out our
jobs website: https://www.smule.com/jobs

To get a small sense for the company, you can check out this video
from a past hackathon: https://youtu.be/WhDt_gh9tx8

-Stefan



Re: [music-dsp] Fractional delay filters in Python?

2017-03-06 Thread Stefan Sullivan
https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.lagrange.html

https://docs.scipy.org/doc/scipy/reference/interpolate.html

I guess I should have thought of "interpolators" when I suggested
interpolation :D
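If it helps, the Lagrange route boils down to a handful of FIR taps; a small
NumPy sketch (untested) that computes them for a given fractional delay d:

import numpy as np

def lagrange_fd_taps(d, order=3):
    # taps[m] = product over i != m of (d - i) / (m - i); convolving the
    # signal with these taps delays it by d samples (keep d near order/2).
    taps = np.ones(order + 1)
    for m in range(order + 1):
        for i in range(order + 1):
            if i != m:
                taps[m] *= (d - i) / (m - i)
    return taps

print(lagrange_fd_taps(1.5))  # [-0.0625, 0.5625, 0.5625, -0.0625]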

Stefan


On Mar 6, 2017 02:29, "Leonardo Gabrielli"  wrote:

> Thank you all for the support on my question.
>
> At first I implemented linear interpolation to go on with the coding
> quickly and skipping this issue, but I'm aware of the spectral cutoff that
> this implies. When refining the model I will rather have a Lagrange design
> for the interpolator filter as I done in Matlab in the past. What I was
> looking for was a quick python design method that would already implement
> this without having to code it myself. @Olivier: thank you very much, I
> will adopt your waveguide class, that's what I was looking for.
>
> Best regards,
> Leonardo
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] Fractional delay filters in Python?

2017-03-02 Thread Stefan Sullivan
Fractional sample delays are simply integer sample delays with
interpolators at the back of them. It's common to implement it as a
delay followed by an allpass filter. Take a look at Julius Smith's
book: 
https://ccrma.stanford.edu/~jos/Interpolation/Simple_Interpolators_suitable_Real.html.

For the delay part, it really depends on your application. You could
just prepend a bunch of zeros to a signal. For the interpolator, you
can implement it as a filter using scipy.signal.lfilter()
https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.lfilter.html#scipy.signal.lfilter
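A minimal sketch of that (SciPy assumed, function name made up): split the
delay into an integer part done by prepending zeros and a fractional part
done with a first-order allpass, using the usual (1 - d)/(1 + d) coefficient
from the pages above.

import numpy as np
from scipy.signal import lfilter

def fractional_delay(x, delay):
    n = int(np.floor(delay))              # integer part: prepend zeros
    d = delay - n                         # fractional part: first-order allpass
    eta = (1.0 - d) / (1.0 + d)
    x_delayed = np.concatenate([np.zeros(n), x])
    return lfilter([eta, 1.0], [1.0, eta], x_delayed)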

-stefan


On Thu, Mar 2, 2017 at 8:50 AM, Olivier Bélanger  wrote:
> Hi Leonardo,
>
> Don't know if this can be of any help, but I've implemented one in the
> pyo module:
>
> http://ajaxsoundstudio.com/pyodoc/api/classes/effects.html#waveguide
>
> It's not in pure python (would be too slow to compute) but you can
> take a look at the C implementation (line 975+):
>
> https://github.com/belangeo/pyo/blob/master/src/objects/delaymodule.c
>
> Olivier
>
> http://ajaxsoundstudio.com/software/pyo/
>
>
> On Thu, Mar 2, 2017 at 9:51 AM, Leonardo Gabrielli
>  wrote:
>> Dear all,
>> I'm looking into a fractional delay filter design implemented in python, if
>> it exists... If not, well I will need to implement one, and since I'm new to
>> numeric stuff in python pointers and tips will be highly appreciated!
>> BTW: My goal is to implement a simple digital waveguide string model.
>>
>> Best regards,
>> Leonardo
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] Allpass filter

2016-12-07 Thread Stefan Sullivan
A linear phase all-pass filter is a delay.

Stefan

On Dec 7, 2016 4:30 AM, "STEFFAN DIEDRICHSEN"  wrote:

>
> On 07.12.2016|KW49, at 13:10, Uli Brueggemann 
> wrote:
>
> Is there a solution to elegantly calculate the pulse response ap ? The
> calculation of p^-1 may be difficult or numerically unstable.
>
>
>
> A spectral inversion can be a challenging at times. However, lp/p should
> be fine to calculate, since the amplitudes of lp and p are identical,
> otherwise the result won’t be an allpass. (Which might be a strategy to
> “condition” the alpass filter, if the amplitude is not 1. at all bins ….)
>
> Best,
>
> Steffan
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] BW limited peak computation?

2016-09-12 Thread Stefan Sullivan
TL;DR

A high-pass filter? The first and second derivatives could easily enough be
described with first- and second-order difference filters, respectively, but
once you start fitting that stuff into DSP terminology, you might as well
make a low-order high-pass filter that has the characteristics you desire.

If you're looking for the time of the peak, I've had a lot of luck taking
the time (or index) of the samples weighted by their values. This produces
surprisingly high accuracy. I'm sure a very mathy person will tell me why
that is mathematically inaccurate, and some other mathy person will probably
tell them that it is approximately correct to some precision criteria...
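Concretely, the trick I mean looks something like this NumPy sketch (the
window size and the squared-magnitude weighting are arbitrary choices): find
the coarse maximum, then average the neighbouring indices weighted by their
values to get a fractional peak position.

import numpy as np

def peak_time(x, half_width=3):
    i = int(np.argmax(np.abs(x)))                    # coarse peak location
    lo, hi = max(i - half_width, 0), min(i + half_width + 1, len(x))
    w = np.abs(x[lo:hi]) ** 2                        # non-negative weights
    return float(np.sum(np.arange(lo, hi) * w) / np.sum(w))  # fractional index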



On Jul 25, 2016 2:01 PM, "Paul Stoffregen"  wrote:

> Does anyone have any suggestions or references for an efficient algorithm
> to find the peak of a bandwidth limited signal?
>
> If I just look only at the numerical values of the samples (yeah, that's
> what I've been doing), when a signal is close to an integer division of Fs,
> even collecting data over many cycles tends to miss the phases of the
> waveform containing the peaks.  For example:
>
> Image also available here: https://forum.pjrc.com/threads
> /35478-Problems-Plotting-Filter-Response?p=110442=1#post110442
>
> The only solution I'm imagining would involve expensive upsampling and
> filtering.  Even then, if I multiply the sample rate by 16 or more and the
> filter is good enough, I still might not get a sample right at the peak.
>
> Is there a better way?
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-03 Thread Stefan Sullivan
I looked into this exact issue a little while ago. I found that my filters
sounded better/worse depending on the biquad topology. Basically if your
gaining your input going into states, then those states are more likely to
be very far off from where they should be when you change the parameters.
But if you're not doing that, it's pretty hard to notice any clicks that
don't sound totally reasonable (e.g. instantaneously changing huge
changes). Like everybody else has indicated, zeroing your states is
definitely _not_ the best approach.

Other recommendations I heard (but didn't try) were: run two biquads in
parallel, and interpolate between their outputs. Or, use other topologies
altogether, although there's something to be said for how many tightly
optimized biquad implementations there are in the world.
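For the "two biquads in parallel" idea, a rough SciPy sketch (untested,
helper names invented): keep the old filter's state, start the new one, and
crossfade their outputs over one block.

import numpy as np
from scipy.signal import lfilter, lfilter_zi

def crossfade_biquads(x, old_ba, new_ba, old_zi):
    b_old, a_old = old_ba
    b_new, a_new = new_ba
    y_old, _ = lfilter(b_old, a_old, x, zi=old_zi)
    # start the new filter from a steady-state guess to suppress its own transient
    y_new, zf_new = lfilter(b_new, a_new, x, zi=lfilter_zi(b_new, a_new) * x[0])
    fade = np.linspace(0.0, 1.0, len(x))             # linear crossfade over the block
    return (1.0 - fade) * y_old + fade * y_new, zf_new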

-stefan

On Thu, Mar 3, 2016 at 4:40 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> On 3/3/16 7:23 PM, robert bristow-johnson wrote:
>
>>
>> 6.  another form to consider is the Lattice or, if you're doing it in
>> fixed-point, the Normalized Ladder form.  these are all second-order so
>> they all have the same transfer function and you can calculate coefficients
>> as a function of Cookbook coefs or from emulating an analog circuit with
>> trapezoid-integration or Euler's forward method or Euler's backward method
>> or predictor-corrector or whatever source of your heart's desire.  so you
>> can have Andrew's or Hal's or Andy Moorer's or Stanley Whites or
>> harris-Brooking's or Massie's or Regalia-Mitra's or whatever definition of
>> a second-order IIR filter you like.
>>
>
> forgot to mention Orfanidis's and Knud Christenson's coefficient
> definitions...
>
>   that is an orthogonal issue.  but, if you really wanna crank on the
>> knobs (perhaps with an LFO for tremolo/vibrato), i would do this with a
>> Lattice filter because it decouples the Q from one of the pole coefficients
>> (k2) which means it depends only on the resonant frequency.  there is an
>> LPF output tap and an APF output tap and a tap in between (dunno what i
>> would call it) and you can combine those three taps in any arbitrary way
>> and get any arbitrary numerator to your biquad transfer function.  but i
>> think that the slewing/stabilty behavior of a Lattice is better than that
>> of any other filter topology that i have tried.
>>
>>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>

Re: [music-dsp] [admin] list etiquette

2015-08-24 Thread Stefan Sullivan
Well that didn't take long

On Mon, Aug 24, 2015 at 2:08 PM Peter S peter.schoffhau...@gmail.com
wrote:

 On 24/08/2015, Theo Verelst theo...@theover.org wrote:
  I'm not going to confuse etiquette with thinking straight, it's clear if
  people can be respected and have some things to learn or teach, or go on
  about emotionally, that a lot of reasonable communication can take place.

 Note: when I tell someone that he is wrong, in case when he is
 provably wrong, that has nothing to do with emotions. So do not
 confuse emotion with logic.

 -P
 ___
 music-dsp mailing list
 music-dsp@music.columbia.edu
 https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] [admin] list etiquette

2015-08-22 Thread Stefan Sullivan
Perhaps the knowledge that you might risk exceeding your limit (which I'm
sure would not be pedantically enforced) would make you consider for
yourself how much a given message is contributing to the discussion.

Thank you Douglas, for clarifying the etiquette and audience. It was needed
and I fully agree with the policy.

Stefan

On Sat, Aug 22, 2015, 18:35 Richard Dobson richarddob...@blueyonder.co.uk
wrote:

 I think this might be a bit too restrictive; there have been many highly
 informative exchanges here over the years, all well-considered, that
 have exceeded this limit. The key is surely well-considered - and the
 absence of egoic chest-beating!

 Richard Dobson

 On 22/08/2015 16:21, Douglas Repetto wrote:

  * Please limit yourself to two well-considered posts per day. Take it
  off list if you need more than that.

 ___
 music-dsp mailing list
 music-dsp@music.columbia.edu
 https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] MSVC 2012/2013 upgrade with audio differences

2015-02-25 Thread Stefan Sullivan
Laurent,

Thanks for the pointers. It turned out that SSE2 code generation became a
default in Visual Studio 2012/2013, where it was not in VS2008/2010. For
those who experience this problem in the future, you can fix it in Visual
Studio by going to the project properties page (right-click on the project
and go to Properties), then Configuration Properties -> C/C++ -> Code
Generation -> Enable Enhanced Instruction Set. In older versions of Visual
Studio the default value is No Enhanced Instructions (/arch:IA32), and in
the newer versions the default becomes Streaming SIMD Extensions 2
(/arch:SSE2). The reason it was hard to find is that the property is
entirely missing from both project files, so doing a diff on the text will
not help: with no option specified at all, the compiler decides which
instruction set to use.

To address your comment, you're right that bit-exactness is not strictly
necessary. In fact, we were just trying to do a sort of
sanity check that the output of the same system compiled under two
different versions of the same compiler didn't produce a different
result. For a while I thought I was insane. It would be terrible if
the actual behavior changed because of a compiler change.

-stefan

On Wed, Feb 25, 2015 at 11:25 AM, Laurent de Soras
laurent.de.so...@free.fr wrote:
 Stefan Sullivan wrote:


 similar problems: the output is not bit-exact.


 Have you a simple project showing the difference?

 I’m not sure why you need this, but there are a lot of cases
 where bit-exactness is not guaranteed when using floating
 point data, especially if you compare FPU- and SSE2-generated
 codes. Make sure that precision, rounding and behaviour with
 denormals are set exactly the same in all the code paths.
 Also, check the strictness of the Floating Point Model in
 the code generation options.

 --
 Laurent de Soras  |   Ohm Force
 DSP developer  Software designer |  Digital Audio Software
 http://ldesoras.free.fr   | http://www.ohmforce.com

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] MSVC 2012/2013 upgrade with audio differences

2015-02-25 Thread Stefan Sullivan
Hey music-dsp folks,

I know that it is not exactly everybody's favorite compiler, in part
because of questions like the one I have right now. But suffice it to
say, there are situations in which I'm required to use Visual Studio.
A couple of months ago I was working on a project in which I attempted
to upgrade from Visual Studio 2008 to Visual Studio 2012. During the
upgrade, I noticed that comparing the audio output of the old version
to the new version would produce small errors. At the time I didn't
worry too much about it because I could continue to use the new GUI
(which really is worlds better) with the 2008 toolchain. Now I'm
working on another project and a colleague of mine is trying to
upgrade from Visual Studio 2010 to Visual Studio 2013 and he's having
similar problems: the output is not bit-exact.

Has anybody else experienced this problem with the newer MSVC
compilers? Has anybody found a solution?

I'm guessing it has to do with either different optimization settings,
or different versions of the math libraries, or something like that. Any
help would be very much appreciated.

Thanks,
Stefan
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] LLVM or GCC for DSP Architectures

2014-12-08 Thread Stefan Sullivan
Hey music DSP folks,

I'm wondering if anybody knows much about using these open source compilers
to compile to various DSP architectures (e.g. SHARC, ARM, TI, etc). To be
honest I don't know so much about the compilers/toolchains for these
architectures (they are mostly proprietary compilers, right?). I'm just
wondering if anybody has hooked up a back-end for any of these
architectures to a more widely used compiler.

The reason I ask is because I've done quite a bit of development lately
with C++ and template programming. I'm always struggling to balance
writing more advanced, widely applicable audio code against being able
to target lower-level DSP architectures. I am assuming that the more
advanced feature set of C++11 (and eventually C++14) will be slower to
appear in these proprietary compilers.

Thanks all,
Stefan
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Nyquist-Shannon sampling theorem

2014-03-27 Thread Stefan Sullivan
 On Mar 26, 2014, at 10:07 PM, Doug Houghton doug_hough...@sympatico.ca
wrote:
  so is there a requirement for the signal to be periodic? Or can any
series of numbers be considered periodic if it is bandlimited, or infinite?
 Periodic is the best word I can come up with.
  --

 Well, no--you can decompose any portion of a waveform that you want...I'm not
sure at this point if you're talking about the discrete Fourier Transform
or continuous, but I assume discrete in this context...but it's not that
generally useful to, say, do a single transform of an entire song. Sorry,
I'm not sure where you're going here...


Actually, yes, there IS a requirement that it be periodic. The Fourier
theorem says that any periodic sequence can be represented as a sum of
sinusoids. Sampling theory says that any band-limited _periodic_ signal can
be properly sampled at (or above) the Nyquist rate. The trick is that any
finite-duration signal can be thought of as one period of a periodic
signal. This is part of the reason you get infinite repetitions in the
frequency domain after you sample. ...sort of.

As for frequencies jumping in and out, I think you were on the right track
when you said that it's a Fourier theorem thing. Imagine you had a signal
with one sinusoid that slowly fades in and out for the duration of the
signal. Imagine that the envelope of this sinusoid is the first half of
a sinusoid. The envelope can be described as a sinusoid whose period is
twice the signal duration. If you were to simply take these two stationary
sinusoids (the envelope and the audible tone) and multiply them, you end up
with a spectrum that contains their sum and difference tones. In that way
it can already be thought of as a tone that (slowly) pops in and out, but
which is represented as a sum of two stationary sinusoids.
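
Concretely, by the product-to-sum identity (f_env and f_tone here are just
illustrative names for the envelope and tone frequencies):

  cos(2*pi*f_env*t) * cos(2*pi*f_tone*t)
      = 1/2 * [ cos(2*pi*(f_tone - f_env)*t) + cos(2*pi*(f_tone + f_env)*t) ]

so the slowly-enveloped tone is exactly two stationary sinusoids, one at
f_tone - f_env and one at f_tone + f_env.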

If you wanted to have the tone come in and out more quickly you could add
the first harmonic of a square wave (or several) to the envelope. For each
additional harmonic you add to the envelope, you get an additional two
sinusoids in the spectrum of the whole signal. You can keep adding harmonics up
to the Nyquist frequency. This means that your frequencies can pop in and
out very quickly, but only as fast as your sampling rate allows.
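
Spelled out the same way: if the envelope is e(t) = sum over k of
c_k * cos(2*pi*k*f_env*t), then e(t) * cos(2*pi*f_tone*t) is a sum of pairs
at f_tone +/- k*f_env, each with amplitude c_k / 2; one pair per envelope
harmonic.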

Rinse and repeat for additional e.g. harmonics of your audible tone,
additional tones, etc.

Note that you can construct any envelope you could imagine, including
apparently asymmetrical ones, or ones that are approximately zero for most
of their duration and pop in and out for only a small portion of it. That all
comes from the Fourier theorem. The important part is that each of these envelopes
would be periodic in the duration of your sampled signal, as far as the
sampling theorem is concerned.

I hope that helps a bit
-Stefan
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-03 Thread Stefan Sullivan
For matching just the magnitude response, MATLAB has a built-in function for it:
http://www.mathworks.com/help/signal/ref/yulewalk.html

And perhaps some of the parametric modelling techniques will be useful for you:
http://www.mathworks.com/help/signal/ug/parametric-modeling.html
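
If you'd rather prototype the greedy version of your steps a)-d) directly,
here is a rough sketch of the idea (entirely my own illustration, assuming
RBJ-cookbook peaking sections with a fixed, untuned Q; a serious fit would
also search over Q and over the other filter types you mention):

    // Greedy decomposition of a target magnitude response (in dB) into
    // RBJ-cookbook peaking biquads. Illustration only: fixed Q, magnitude
    // handled on a log frequency grid, no phase, no least-squares refinement.
    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    struct Biquad { double b0, b1, b2, a0, a1, a2; };

    static const double kPi = 3.14159265358979323846;

    // RBJ cookbook peaking EQ: centre frequency f0 (Hz), gain in dB, quality Q.
    Biquad peakingEQ(double f0, double gainDb, double Q, double fs)
    {
        double A     = std::pow(10.0, gainDb / 40.0);
        double w0    = 2.0 * kPi * f0 / fs;
        double alpha = std::sin(w0) / (2.0 * Q);
        Biquad s = { 1.0 + alpha * A, -2.0 * std::cos(w0), 1.0 - alpha * A,
                     1.0 + alpha / A, -2.0 * std::cos(w0), 1.0 - alpha / A };
        return s;
    }

    // Magnitude response of one biquad at frequency f (Hz), in dB.
    double biquadGainDb(const Biquad& s, double f, double fs)
    {
        std::complex<double> z(std::cos(-2.0 * kPi * f / fs),
                               std::sin(-2.0 * kPi * f / fs));
        std::complex<double> num = s.b0 + s.b1 * z + s.b2 * z * z;
        std::complex<double> den = s.a0 + s.a1 * z + s.a2 * z * z;
        return 20.0 * std::log10(std::abs(num / den));
    }

    int main()
    {
        const double fs = 48000.0;
        const double Q  = 2.0;           // assumed, not tuned
        const int    maxSections = 8;

        // Toy target: a broad 6 dB bump around 1 kHz on a log frequency grid.
        std::vector<double> freqs, residualDb;
        for (double f = 20.0; f < fs / 2.0; f *= 1.05) {
            freqs.push_back(f);
            double lf = std::log(f / 1000.0);
            residualDb.push_back(6.0 * std::exp(-lf * lf));
        }

        std::vector<Biquad> sections;
        for (int k = 0; k < maxSections; ++k) {
            // a) pick the frequency where the remaining error is largest...
            std::size_t worst = 0;
            for (std::size_t i = 1; i < residualDb.size(); ++i)
                if (std::fabs(residualDb[i]) > std::fabs(residualDb[worst]))
                    worst = i;
            if (std::fabs(residualDb[worst]) < 0.1) break;  // d) stop criterion

            // ...and fit one peaking section there with exactly that gain.
            Biquad s = peakingEQ(freqs[worst], residualDb[worst], Q, fs);
            sections.push_back(s);

            // b)+c) remove the section's dB contribution from the residual,
            // which is the same as applying the inverse-gain filter to the target.
            for (std::size_t i = 0; i < freqs.size(); ++i)
                residualDb[i] -= biquadGainDb(s, freqs[i], fs);
        }

        std::printf("fitted %lu sections\n", (unsigned long)sections.size());
        return 0;
    }

It just keeps placing one peaking section at the frequency where the
remaining error is largest, subtracts that section's response from the
residual, and stops once the residual is small or the section budget runs
out.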

-Stefan

On Mon, Mar 3, 2014 at 12:00 PM, Uli Brueggemann
uli.brueggem...@gmail.com wrote:
 Hello music-dsp,

 I'd like to decompose an arbitrary frequency response into biquads. So I'm
 searching for an algorithm or paper on how to run an iterative
 decomposition. In my imagination it should be possible to
 a) find a first set of biquad parameters with a best-fit frequency response
 in comparison to the given response
 b) create an IIR filter with inverse gain
 c) apply that filter to the given response to get a new one
 d) repeat a)-c) until some end criterion is reached

 a) should include the different filter types like peaking filter, lowpass,
 highpass, lowshelf, highshelf...

 Is there any good information for such an approach around? Is there a
 downside to such an approach?

 Uli
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] R: Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-03 Thread Stefan Sullivan
Marco,

No, I've never actually had much luck with those kinds of filter
design approaches. MATLAB is not my favorite tool in the world. I
usually find it a bit annoying that MATLAB seems more interested in
matching some sort of brick-wall filter than in designing a good-sounding
audio filter. Although this one doesn't seem to have weights, so it
may be a little easier to use. That said, for the OP's question, I think
the paper referenced on that page is at least a step towards a good
(better than MATLAB's?) solution.

-Stefan

On Mon, Mar 3, 2014 at 4:24 PM, Marco Lo Monaco
marco.lomon...@teletu.it wrote:
 Stefan/Uli
 I use Scilab, and there is not as much there as in MATLAB for filter estimation.
 My experience with Scilab's invfreqz (which is based on the Levi paper I
 think) was always disappointing for practical analog filter identification, but I
 bet I was unlucky or didn't get the point of how to use it effectively.
 What I don't personally like in these methods is that they also need a
 weight vector on the data set, which adds a new degree of freedom that you
 must guess (not only the order).

 Let me know how your experience is/was, then...

 Marco

 -----Original Message-----
 From: music-dsp-boun...@music.columbia.edu
 [mailto:music-dsp-boun...@music.columbia.edu] On behalf of Stefan Sullivan
 Sent: Monday, 3 March 2014 12:17
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Iterative decomposition of an arbitrary frequency
 response by biquad IIR

 For matching just the magnitude response, MATLAB has a built-in function for
 it:
 http://www.mathworks.com/help/signal/ref/yulewalk.html

 And perhaps some of the parametric modelling techniques will be useful for
 you http://www.mathworks.com/help/signal/ug/parametric-modeling.html

 -Stefan

 On Mon, Mar 3, 2014 at 12:00 PM, Uli Brueggemann uli.brueggem...@gmail.com
 wrote:
 Hello music-dsp,

 I'd like to decompose an arbitrary frequency response into biquads. So I'm
 searching for an algorithm or paper on how to run an iterative
 decomposition. In my imagination it should be possible to
 a) find a first set of biquad parameters with a best fit frequency
 response in comparison to the given response
 b) create an IIR filter with inverse gain
 c) apply that filter to the given response to get a new one
 d) repeat a)-c) until some end criterion is reached

 a) should include the different filter types like peaking filter,
 lowpass, highpass, lowshelf, highshelf...

 Is there any good information for such an approach around? Is there a
 downside for such an approach?

 Uli
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Moselle Alpha 0.1.3 Released

2014-01-13 Thread Stefan Sullivan
On Mon, Jan 13, 2014 at 2:24 PM, Thomas Strathmann tho...@pdp7.org wrote:

 On 13.01.14 09:46, Frank Sheeran wrote:
  At this point, the #1 goal is to evaluate the language itself.  Is a
  functional, textual, programming language the best way to design a
  patch?  Better than Csound, better than visual environments like Buzz
  or Cycling '74's Max?  Can I write a patch in Moselle that someone
  with no Moselle experience can understand at a glance?  Can I write
  one that even a Max expert understands better than the same sound
  written in Max?  I THINK so, but I haven't heard any feedback on this
  yet.

 This question has probably come up before: How does your language
 compare to FAUST? FAUST is quite elegant because it has an underlying
 formal semantics in terms of an algebra of data flow graphs, but the
 resulting programs (even simple examples) are hard to read because of
 the inherent impedance mismatch you run into when trying to describe
 arbitrary directed graphs (especially those with cycles) in a linear
 format. I guess the question I'm really interested in here is: How does
  your language attack this basic problem?

Actually, I have basically the same question. Have you
evaluated/investigated Common Lisp Music
(https://ccrma.stanford.edu/software/clm/clm/clm.html)? A couple of
years ago I took a course in Nyquist
(http://www.cs.cmu.edu/~music/nyquist/), which I believe is based on
Lisp. I found Nyquist to be quite powerful, although I always found
myself writing very complicated structures instead of composing
scores, but maybe that's just because I'm such a nerd ;)

Stefan
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp