Re: [music-dsp] Virtual Analog Models of Audio Circuitry

2020-03-12 Thread Ross Bencina

I am not familiar with the workshop, but maybe these:

https://ccrma.stanford.edu/~stilti/papers/Welcome.html
https://ccrma.stanford.edu/~dtyeh/papers/pubs.html

I always thought this was a good place to start:

"Simulation of the diode limiter in guitar distortion circuits by 
numerical solution of ordinary differential equations."

https://ccrma.stanford.edu/~dtyeh/papers/yeh07_dafx_clipode.pdf

I'm sure there are many more in the user pages linked off:
https://ccrma.stanford.edu/people

Not specific to the CCRMA workshop, but there are plenty of papers on 
this topic in DAFx proceedings:


https://www.dafx.de/paper-archive/

And maybe even some in ICMC proceedings since the mid-90s:

https://quod.lib.umich.edu/i/icmc

Ross.


On 11/03/2020 10:27 PM, Jerry Evans wrote:

In 2017 CCRMA ran a short workshop:
https://ccrma.stanford.edu/workshops/virtualanalogmodeling-2017.

Are there any papers or examples etc. that are generally available?

TIA

Jerry.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Practical filter programming questions

2020-01-12 Thread Ross Bencina

On 12/01/2020 5:06 PM, Frank Sheeran wrote:
I have a couple audio programming books (Zolzer DAFX and Pirkle 
Designing Audio Effect Plugins in C++).  All the filters they describe 
were easy enough to program.


However, they don't discuss having the frequency and resonance (or 
whatever inputs a given filter has--parametric EQ etc.) CHANGE.


I am doing the expensive thing of recalculating all the coefficients 
every sample, but that uses a lot of CPU.


My questions are:

1. Is there a cheaper way to do this?  For instance can one 
pre-calculate a big matrix of filter coefficients, say 128 cutoffs 
(about enough for each semitone of human hearing) and maybe 10 
resonances, and simply interpolating between them?  Does that even work?


It depends on the filter topology. Coefficient space is not the same as 
linear frequency or resonance space. Interpolating in coefficient space 
may or may not produce the desired results -- but like any other 
interpolation situation, the more pre-computed points that you have, the 
closer you get to the original function. One question that you need to 
resolve is whether all of the interpolated coefficient sets produce 
stable filters (e.g. keep all the poles inside the unit circle).
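
For illustration, something like this (a rough, untested sketch; the grid 
granularity, the RBJ-cookbook lowpass design and the function names are 
just example choices, not anything from those books):

#include <cmath>

struct BiquadCoeffs { float b0, b1, b2, a1, a2; };   // a0 normalised to 1

// one exact design point: RBJ cookbook lowpass
BiquadCoeffs designLowpass(float cutoffHz, float q, float sampleRate)
{
    const float pi    = 3.14159265f;
    const float w0    = 2.0f * pi * cutoffHz / sampleRate;
    const float alpha = std::sin(w0) / (2.0f * q);
    const float cosw0 = std::cos(w0);
    const float a0    = 1.0f + alpha;
    return { (1.0f - cosw0) * 0.5f / a0, (1.0f - cosw0) / a0,
             (1.0f - cosw0) * 0.5f / a0, -2.0f * cosw0 / a0,
             (1.0f - alpha) / a0 };
}

// interpolate between two precomputed grid points in coefficient space
BiquadCoeffs lerpCoeffs(const BiquadCoeffs& x, const BiquadCoeffs& y, float t)
{
    return { x.b0 + t * (y.b0 - x.b0), x.b1 + t * (y.b1 - x.b1),
             x.b2 + t * (y.b2 - x.b2), x.a1 + t * (y.a1 - x.a1),
             x.a2 + t * (y.a2 - x.a2) };
}

// the caveat above: check that the poles of 1 + a1 z^-1 + a2 z^-2 stay
// inside the unit circle (the biquad "stability triangle")
bool isStable(const BiquadCoeffs& c)
{
    return std::abs(c.a2) < 1.0f && std::abs(c.a1) < 1.0f + c.a2;
}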



2. when filter coefficients change, are the t-1 and t-2 values in the 
pipeline still good to use?


Really there are two questions:

- Are the filter states still valid after coefficient change (Not in 
general)
- Is the filter unconditionally stable if you change the components at 
audio rate (maybe)


To some extent it depends how frequently you intend to update the 
coefficients. Jean Laroche's paper is the one to read for an 
introduction "On the stability of time-varying recursive filters".


There is a more recent DAFx paper that addresses the stability of the 
trapezoidally integrated SVF. See the references linked here:


http://www.rossbencina.com/code/time-varying-bibo-stability-analysis-of-trapezoidal-integrated-optimised-svf-v2


3. Would you guess that most commercial software is using SIMD or GPU 
for this nowadays?  Can anyone confirm at least some implementations use 
SIMD or GPU?


I don't have an answer for this, but my guesses are that most commercial 
audio software doesn't use GPU, and that data parallelism (GPU and/or 
SIMD) is not very helpful to evaluate single IIR filters since there are 
tight data dependencies between each iteration of the filter. Multiple 
independent channels could be evaluated in parallel of course.
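
To sketch what I mean by that last point (my own toy example, not from any 
particular product): lay the filters out so that the inner loop runs across 
channels, where there are no data dependencies, rather than across time, 
where there are.

// one transposed-direct-form-II biquad per channel; coefficient and state
// arrays are indexed by channel, buffers are channel-major (ch*numFrames + n)
void processBiquads(int numChannels, int numFrames,
                    const float* b0, const float* b1, const float* b2,
                    const float* a1, const float* a2,
                    float* z1, float* z2,
                    const float* in, float* out)
{
    for (int n = 0; n < numFrames; ++n) {
        for (int c = 0; c < numChannels; ++c) {    // no cross-channel dependency:
            const float x = in[c * numFrames + n]; // this loop can be vectorised
            const float y = b0[c] * x + z1[c];
            z1[c] = b1[c] * x - a1[c] * y + z2[c];
            z2[c] = b2[c] * x - a2[c] * y;
            out[c * numFrames + n] = y;
        }
    }
}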


Ross.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] SOLA, PSOLA and WSOLA.

2019-02-26 Thread Ross Bencina

Hi Alex,

> I can't understand the difference between SOLA, PSOLA and WSOLA.

I'll attempt a partial answer:

I think PSOLA and WSOLA are clearly distinct.

PSOLA involves identifying a time varying pitch (fundamental frequency) 
track for the input, segmenting the input signal into (possibly 
overlapping) windowed grains which are synchronous to this fundamental 
frequency (e.g. grains that are centered on glottal pulses) and then 
altering the rate at which the grains are assembled in the output stream.


WSOLA involves breaking the signal into grains using some method (e.g. 
constant duration grains), then concatenating input grains to the output 
stream with relative phase adjusted according to two criteria: (1) on 
average, the input must be consumed at a rate that maintains the 
timescaling factor; (2) the source material should be mixed (with 
windowing) into the output stream in a way that minimizes local error 
over the crossfade region (i.e. to minimize phase cancellation) -- if 
the signal is strongly periodic, and the parameters are just right, this 
will fairly nicely keep the period of the source waveform, but it lacks 
sub-sample-accurate phase alignment I think. You can add enhancements 
such as trying to avoid mixing the same transient into the output stream 
more than once.
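
If it helps, here is roughly what the inner loop looks like (a toy mono 
sketch of the two criteria, written from memory rather than from any 
reference implementation; the grain/search sizes, the plain unnormalised 
correlation and the 50%-overlap Hann OLA are all arbitrary choices):

#include <vector>
#include <cmath>
#include <algorithm>

std::vector<float> wsolaStretch(const std::vector<float>& in, double stretch)
{
    const int grain  = 1024;
    const int hopOut = grain / 2;                          // 50% overlap, Hann sums to 1
    const int hopIn  = std::max(1, (int)std::lround(hopOut / stretch)); // criterion (1)
    const int search = 256;
    const double pi  = 3.14159265358979323846;

    std::vector<float> win(grain);
    for (int i = 0; i < grain; ++i)
        win[i] = 0.5f - 0.5f * (float)std::cos(2.0 * pi * i / grain);

    std::vector<float> out;
    int prevStart = 0;
    for (int m = 0; ; ++m) {
        const int nominal = m * hopIn;
        if (nominal + search + grain >= (int)in.size() ||
            prevStart + hopOut + grain >= (int)in.size())
            break;
        // criterion (2): pick the candidate grain that best continues the
        // previously copied grain advanced by one output hop
        int best = nominal;
        float bestCorr = -1e30f;
        for (int d = -search; d <= search; ++d) {
            const int s = nominal + d;
            if (s < 0) continue;
            float corr = 0.0f;
            for (int i = 0; i < grain; ++i)
                corr += in[s + i] * in[prevStart + hopOut + i];
            if (corr > bestCorr) { bestCorr = corr; best = s; }
        }
        const int outPos = m * hopOut;
        if ((int)out.size() < outPos + grain) out.resize(outPos + grain, 0.0f);
        for (int i = 0; i < grain; ++i)
            out[outPos + i] += win[i] * in[best + i];       // windowed overlap-add
        prevStart = best;
    }
    return out;
}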


Not sure what SOLA is.

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-06 Thread Ross Bencina

On 7/11/2018 12:03 AM, gm wrote:
A similar idea would be to do some basic wavelet transform in octaves 
for instance and then
do smaller FFTs on the bands to stretch and shift them, but I have no idea
if you can do that - if you shift them you exceed their bandlimit I assume?
and if you stretch them I am not sure what happens, you shift their 
frequency content down I assume?

It's a little bit fuzzy to me what the waveform in such a band represents
and what happens when you manipulate it, or how you do that.


Look into constant-Q and bounded-Q transforms.

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] two fundamental questions Re: FFT for realtime synthesis?

2018-11-03 Thread Ross Bencina

[resending, I think I accidentally replied off-list]

On 1/11/2018 5:00 AM, gm wrote:
> My question rephrased:
> Let's assume a spectrum of size N, can you create a meaningful spectrum
> of size N/2 by simply adding every other bin together?
>
> Neglecting the artefacts of the forward transform, let's say an
> artificial spectrum (or a spectrum after peak picking that discards the
> region around the peaks)
>
> Let's say two sinusoids in two adjacent bins, will summing them into a
> single bin of a half sized spectrum make sense and represent them
> adequately?
> In my limited understanding, yes, but I am not sure, and would like to
> know why not if that is not the case.

You can analyze this by looking at the definition of the short-time 
discrete Fourier transform. (Or the corresponding C code for a DFT).


Each spectral bin is the sum of samples in the windowed signal 
multiplied by a complex exponential.


Off the top of my head, assuming a rectangular window, I think you'll 
find that dropping every second bin in the length N spectrum gives you 
the equivalent of the bin-wise sum of two length N/2 DFTs computed with 
hop size N/2.


Summing adjacent bins would do something different. You could work it 
out by taking the definition of the DFT and doing some algebra. I think 
you'd get a spectrum with double the amplitude, frequency shifted by 
half the bin-spacing (i.e. the average of the two bins' center 
frequencies).
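
Sketching the algebra for the adjacent-bin case (rectangular window, 
unnormalised DFT -- my working, so check it):

X[k] = sum_n x[n] e^(-i 2 pi k n / N)

X[k] + X[k+1]
  = sum_n x[n] (1 + e^(-i 2 pi n / N)) e^(-i 2 pi k n / N)
  = sum_n x[n] 2 cos(pi n / N) e^(-i 2 pi (k + 1/2) n / N)

i.e. the pair-sum behaves like an analysis at the half-bin frequency 
(k + 1/2), with an extra cos(pi n / N) taper applied across the frame -- 
which is roughly the "double amplitude, shifted by half a bin" guess above.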


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] two fundamental questions Re: FFT for realtime synthesis?

2018-11-02 Thread Ross Bencina

On 3/11/2018 3:41 AM, Ethan Fenn wrote:
No length of FFT will distinguish between a mixture of these sine waves 
and a single amplitude-modulated one, because they're mathematically 
identical! Specifically:


sin(440t) + sin(441t) = 2*cos(0.5t)*sin(440.5t)

So the question isn't whether an algorithm can distinguish between them 
but rather which one of these two interpretations it should pick. And I 
would say in most audio applications the best answer is that it should 
pick the same interpretation that the human hearing system would. In 
this example it's clearly the right-hand side. In the case of a large 
separation (e.g. 440Hz and 550Hz, a major third) it's clearly the 
left-hand side. And somewhere in between I guess it must be a toss-up.


I guess you could model both simultaneously, with some kind of 
probability weighting.
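
(For reference, the identity is just the standard sum-to-product formula,

sin(a) + sin(b) = 2 sin((a + b)/2) cos((a - b)/2)

with a = 440t and b = 441t.)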


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] two fundamental questions Re: FFT for realtime synthesis?

2018-10-31 Thread Ross Bencina

Hi,

Sorry, late to the party and unable to read the backlog, but:

The "FFT^-1" technique that Robert mentions is from a paper by Rodet and 
Depalle that I can't find right now. It's widely cited in the literature 
as "FFT^-1"


That paper only deals with steady-state sinusoids however. It won't 
accurately deal with transients or glides.


There has been more recent work on spectral-domain synthesis and I'm 
fairly sure that some techniques have found their way into some quite 
famous commercial products.


Bonada, J.; Loscos, A.; Cano, P.; Serra, X.; Kenmochi, H. (2001). 
"Spectral Approach to the Modeling of the Singing Voice". In Proc. of 
the 111th AES Convention.




> My goal is to resynthesize arbitrary noises.

In that case you need to think about how an FFT represents "arbitrary 
noises".


One approach is to split the signal into sinusoids + noise (a.k.a. 
spectral modeling synthesis).

https://en.wikipedia.org/wiki/Spectral_modeling_synthesis

It is worth reviewing Xavier Serra's PhD thesis for the basics (what was 
already established in the late 1980s.)


http://mtg.upf.edu/content/serra-PhD-thesis

Here's the PDF:
https://repositori.upf.edu/bitstream/handle/10230/34072/Serra_PhDthesis.pdf?sequence=1=y

There was a bunch of work in the early 90's on real-time additive synthesis 
at CNMAT, e.g.


https://quod.lib.umich.edu/i/icmc/bbp2372.1995.091/1/--bring-your-own-control-to-additive-synthesis?page=root;size=150;view=text

Of course there is a ton of more recent work. You could do worse than 
looking at the papers of Xavier Serra and Jordi Bonada:

http://mtg.upf.edu/research/publications



On 31/10/2018 1:35 PM, gm wrote:
But back to my question, I am serious, could you compress a spectrum by 
just adding the bins that fall together? 


I'm not sure what "compress" means in this context, nor am I sure what 
"fall together" means. But here's some points to note:


A steady state sine wave in the time domain will be transformed by a 
short-time fourier transform into a spectral peak, convolved (in the 
frequency domain) by the spectrum of the analysis envelope. If you know 
that all of your inputs are sine waves, then you can perform "spectral 
peak picking" (AKA MQ analysis) and reduce your signal to a list of sine 
waves and their frequencies and phases -- this is the sinusoidal 
component of Serra's SMS (explained in the pdf linked above).


Note that since a sinusoid ends up placing non-zero values in every FFT 
bin, you'd need to account for that in your spectral estimation, which 
basic MQ does not -- hence it does not perfectly estimate the sinusoids.
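
A rough sketch of the peak-picking step (my own illustration of the idea, 
not Serra's or MQ's code; the dB threshold and the parabolic refinement are 
common choices rather than anything canonical):

#include <vector>

struct Peak { float bin; float magDb; };   // fractional bin index, refined height

std::vector<Peak> pickPeaks(const std::vector<float>& magnitudeDb,
                            float thresholdDb)
{
    std::vector<Peak> peaks;
    for (size_t k = 1; k + 1 < magnitudeDb.size(); ++k) {
        const float a = magnitudeDb[k - 1];
        const float b = magnitudeDb[k];
        const float c = magnitudeDb[k + 1];
        if (b > thresholdDb && b > a && b >= c) {
            // parabolic refinement of the peak location and height
            const float denom = a - 2*b + c;
            const float p = (denom != 0.0f) ? 0.5f * (a - c) / denom : 0.0f;
            peaks.push_back({ (float)k + p, b - 0.25f * (a - c) * p });
        }
    }
    return peaks;
}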


In any case, most signals are not sums of stationary sinusoids, and 
signals are typically buried in noise or superimposed on top of each 
other, so the problem is not well posed. Two very simple examples: 
consider two stable sine waves at 440Hz and 441Hz -- you will need a 
very long FFT to distinguish this from a single amplitude-modulated 
sine wave; or consider a sine wave plus white noise -- the accuracy of 
frequency and phase recovery will depend on how much input you have to 
work with.


I think by "compression" you mean "represent sparsely" (i.e. with some 
reduced representation.) The spectral modeling approach is to "model" 
the signal by assuming it has some particular structure (e.g. 
sinusoids+noise, or sinusoids+transients+noise) and then work out how to 
extract this structure from the signal (or to reassemble it for synthesis).


An alternative (more mathematical) approach is to simply assume that the 
signal is sparse in some (unknown) domain. It turns out that if your 
signal is sparse, you can apply a constrained random dimensionality 
reduction to the signal and not lose any information. This is the field 
of compressed sensing. Note that in this case, you haven't recovered any 
structure.


Ross

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Ross Bencina

Hi Robert,

On 5/08/2018 8:17 AM, robert bristow-johnson wrote:
In a software 
synth that runs on a modern computer, the waste of memory does not 
seem to be salient.  4096 × 4 × 64 = 1 meg.  That's 64 wavetables for 
some instrument.


The salient metric is amortized number of L1/L2/L3 cache misses per 
sample lookup.
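
To put rough numbers on Robert's example (typical figures, not 
measurements): 4096 samples x 4 bytes x 64 tables = 1 MiB, versus a 
32-64 KiB L1 data cache and a few hundred KiB to 1 MiB of per-core L2 on 
current desktop CPUs -- so scattered-phase lookups across many voices 
will mostly be served from L2/L3, not L1.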


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Ross Bencina

Hi Kevin,

Wavetables are for synthesizing ANY band-limited *periodic* signal. On 
the other hand, the BLEP family of methods are for synthesizing 
band-limited *discontinuities* (first order, and/or higher order 
discontinuities).


It is true that BLEP can be used to synthesize SOME bandlimited periodic 
signals (typically those involving a few low-order discontinuities per 
cycle such as square, saw and triangle waveforms). In this case, BLEP can 
be thought of as adding "corrective grains" that cancel out the aliasing 
present in a naive (aliased) waveform that has sample-aligned 
discontinuities. The various __BL__ methods tend to vary in how they 
synthesize the corrective grains.


If all you need to do is synthesize bandlimited periodic signals, I 
don't see many benefits to BLEP methods over wavetable synthesis.


Where BLEP comes into its own (in theory at least) is when the signal 
contains discontinuities that are not synchronous with the fundamental 
period. The obvious application is hard-sync, where discontinuities vary 
relative to the phase of the primary waveform.


The original BLEP method used wavetables for the corrective grains, so 
has no obvious performance benefit over periodic wavetable synthesis. 
Other derivative methods (maybe polyBLEP?) don't use wavetables for the 
corrective grains, so they might potentially have benefits in settings 
where there is limited RAM or where the cost of memory access and/or 
cache pollution is large (e.g. modern memory hierarchies) -- but you'd 
need to measure!


You mention frequency modulation. A couple of thoughts on that:

(1) With wavetable switching, frequency modulation will cause high 
frequency harmonics to fade in and out as the wavetables are crossfaded 
-- a kind of amplitude modulation on the high-order harmonics. The 
strength of the effect will depend on the amount of high-frequency 
content in the waveform, and the number of wavetables (per octave, say): 
Fewer wavetables per octave will cause lower-frequency harmonics to be 
affected; more wavetables per octave will lessen the effect on low-order 
harmonics, but will cause the rate of amplitude modulation to increase. 
To some extent you can push this AM effect to higher frequencies by 
allowing some aliasing (say above 18kHz). You could eliminate the AM 
effect entirely with 2x oversampling.
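
For concreteness, the kind of lookup I have in mind looks something like 
this (my own sketch; the one-table-per-octave layout, where tables[j] is 
assumed to hold harmonics 1..2^j and so be alias-free for fundamentals up 
to (0.5 * sampleRate) / 2^j, is just one possible mapping):

#include <vector>
#include <cmath>
#include <algorithm>

float readCrossfaded(const std::vector<std::vector<float>>& tables,
                     double phase01, double freqHz, double sampleRate)
{
    const double octaves = std::log2(0.5 * sampleRate / std::max(freqHz, 1.0));
    const double pos = std::clamp(octaves, 0.0, (double)tables.size() - 1.0);
    const int lo = (int)pos;                          // safe table (fewer harmonics)
    const int hi = std::min(lo + 1, (int)tables.size() - 1);
    const double xf = pos - lo;                       // this crossfade is the AM
                                                      // on the top harmonics
    auto lookup = [&](const std::vector<float>& t) {  // linear-interp table read
        const double idx = phase01 * t.size();
        const size_t i0 = (size_t)idx % t.size();
        const size_t i1 = (i0 + 1) % t.size();
        const double fr = idx - std::floor(idx);
        return t[i0] + fr * (t[i1] - t[i0]);
    };
    return (float)((1.0 - xf) * lookup(tables[lo]) + xf * lookup(tables[hi]));
}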


(2) With BLEP-type methods, full alias suppression is dependent on 
generating corrective bandlimited pulses for all non-zero higher-order 
derivatives of the signal. Unless your original signal is a 
square/rectangle waveform, any frequency modulation will introduce 
additional derivative terms (product rule) that need to be compensated. 
For sufficiently low frequency, low amplitude modulation you may be able 
to ignore these terms, but beyond some threshold they will become 
significant and would need to be dealt with. I don't recall how PolyBLEP 
deals with higher-order corrective terms.
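
To make that point concrete (my notation, not from any of the BLEP 
papers): if the output is y(t) = f(phi(t)) for waveform f and modulated 
phase phi, then

y'(t)  = phi'(t) f'(phi(t))
y''(t) = phi''(t) f'(phi(t)) + (phi'(t))^2 f''(phi(t))

so a step of size D in f' produces a step of size phi'(t) D in y' (the 
corrective grain amplitude has to track the instantaneous frequency), and 
the phi'' term is an extra contribution that only appears under modulation.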


In any case, my main point here is that BLEP methods don't magically 
support artifact-free frequency modulation (except maybe for square waves).


In the end I don't think there's one single standard, because there are 
mutually exclusive trade-offs to be made. The design space includes:


Features:

- Supported frequency modulation? (just pitch bend? low frequency LFOs? 
audio-rate modulation?).


- Support for hard sync?

- Support for arbitrary waveforms?

Audio quality:

- Allowable aliasing specification (e.g. below 120dB over whole audio 
spectrum, or below 70dB below 10kHz, etc.)


- High-frequency harmonic modulation margin under FM?

Compute performance:

- RAM usage

- CPU usage per voice

And of course:

- Development time/cost (initial and cost of adding features)


Today, desktop CPUs are fast enough to support the most 
difficult-to-achieve synthesis capabilities with no measurable audio 
artifacts. There are plugins that aim for that, and use a whole CPU core 
to synthesize a single voice. There is a market for that. But if you 
goal is a 128-voice polysynth that uses a maximum of 2% CPU on a 
smartphone then you may not want to aim for say, completely-alias-free 
hard-sync of audio-rate frequency modulated arbitrary waveforms.


Cheers,

Ross.


On 4/08/2018 7:23 AM, Kevin Chi wrote:

Hi,

Is there such a thing as today's standard for softSynth antialiased 
oscillators?


I was looking up PolyBLEP oscillators, and was wondering how it would 
relate
to a 1-2 waveTables per octave based oscillator or maybe to some other 
algos.


thanks for any ideas and recommendations in advance,
Kevin



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] EQ-building with fine adjustable steepness

2018-07-02 Thread Ross Bencina

Hello Rolf,

On 27/06/2018 11:31 PM, rolfsassin...@web.de wrote:
Now, I like to have an EQ with most probable flat response which is 
adjustable in steepness and frequency.


[snip]


Is there an analytic function describing this?


Check this one out:

Thomas Hélie, "Simulation of Fractional-Order Low-Pass Filters."

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Job: Audio and Video Project Manager at Google

2017-03-02 Thread Ross Bencina

Hi Don,

> I work as an engineer in the Pro Audio team here at Google.
[snip]
> Mobile is the future of how people consume media

There seems to be a contradiction here. Typically, "Pro Audio" includes 
the production and broadcast side of things. The job ad mentions only 
consumption and consumer media applications.


What exactly does "Pro Audio" mean in the context of Android's mission, 
and your team in particular?


Thanks,

Ross.


On 3/03/2017 2:55 AM, Don Turner wrote:

I work as an engineer in the Pro Audio team here at Google. We are
looking for a Project Manager in the Android audio and video framework
engineering team. I thought someone on this list might be interested.
The role is not yet advertised publicly.

*Job description*

Do you absolutely love music and video? Come join the team building the
world’s most popular OS.


We are looking for a PM to own Android Media, including all our OS
features around audio and video. Mobile is the future of how people
consume media - depending on which benchmark you read, we are about to
cross over (mobile greater than TV for media consumption) or we already
have. And yet, this space remains incredibly nascent. Audio and video
experiences are nowhere near the quality and richness they need to
reach; further, mobile unlocks a richness of new potential use cases
that we've barely begun to tap. And if that isn't enough, Android's
Media stack powers Chromecast - now the world's most popular set-top
connectors to TVs - as well as AndroidTV. This is a great career
opportunity if you are passionate about music and video and also have
the technical chops to envision moonshots about what this platform looks
like in the long run.


Ideal candidates are strongly user focused and great at cross-team
collaboration and vision. Requirements:

  * BA/BS in Computer Science (or equivalent)
  * Passionate about music and video

Bonus points for:

  * Domain knowledge / expertise in Media frameworks
  * Exceptional analytical skills
  * Experience working as a developer, and/or as PM on technical products

Note: the role is based in Mountain View, US (Google HQ) and only
available to applicants who have permission to work in the US.

Please email dontur...@google.com with your CV if interested.

Many thanks,

Don


Don Turner | Developer Advocate | dontur...@google.com | +44 7939 287199




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-23 Thread Ross Bencina

On 24/09/2016 3:01 PM, Andrew Simper wrote:

> "Hard Sync Without Aliasing," Eli Brandt
> http://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf
>

>

But stick to linear phase as you can correct more easily for dc offsets.


What's your reasoning for saying that?

I'm guessing it depends on whether you have an analytic method for 
generating the minBLEP.


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-23 Thread Ross Bencina

On 24/09/2016 1:28 PM, Andrew Simper wrote:

Corrective grains are also called BLEP / BLAMP etc, so have a read about those.


Original reference:

"Hard Sync Without Aliasing," Eli Brandt
http://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] efficient running max algorithm

2016-09-03 Thread Ross Bencina

On 4/09/2016 1:42 PM, robert bristow-johnson wrote:

i think the worst case ain't gonna be too much better than
O(log2(windowSize)) per sample even with amortization over the block.


You can think that if you like, but I don't think the worst case is that 
bad. I have given examples. If you would like to provide a 
counter-example, feel free. But first you probably need to understand 
the algorithm well enough to implement it.


I'm exiting this discussion now. Talking in vague terms isn't getting us 
anywhere. Much as I would like to, I don't have time to write a new 
implementation and a paper to convince you.


Sorry,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] efficient running max algorithm

2016-09-03 Thread Ross Bencina

On 4/09/2016 6:27 AM, robert bristow-johnson wrote:

if i were to test this out myself, i need to understand it enough to
write C code (or asm code) to do it.


The paper provides all of the information that you need for the basic 
implementation (which I recommend to start with). If there is something 
in the paper that is unclear, we can try to explain it.


Use a queue of (sample, index) pairs. The input (LIFO) end of the queue 
is where you do trimming based on new input samples. The output (FIFO) 
end of the queue is where you read off the max value and trim values 
that are older than the window size.
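
In outline, something like this (my paraphrase of the idea, not Lemire's 
reference code):

#include <deque>
#include <cstddef>

class RunningMax {
public:
    explicit RunningMax(std::size_t windowSize) : windowSize_(windowSize) {}

    float process(float x)
    {
        // LIFO end: trim entries that the new sample dominates, so the
        // stored values always form a decreasing sequence
        while (!q_.empty() && q_.back().value <= x)
            q_.pop_back();
        q_.push_back({ x, index_ });

        // FIFO end: drop entries older than the window
        while (q_.front().index + windowSize_ <= index_)
            q_.pop_front();

        ++index_;
        return q_.front().value;   // oldest retained entry is the running max
    }

private:
    struct Entry { float value; std::size_t index; };
    std::deque<Entry> q_;
    std::size_t windowSize_;
    std::size_t index_ = 0;
};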




so far, i am still unimpressed with the Lemire thing.  i don't, so far,
see the improvement over the binary tree, which is O(log2(R)) per sample
for worst case and a stream or sound file of infinite size.


If your processing block size is 64, and your max window size is 64, the 
original Lemire algorithm is worst-case O(1) per sample. There is a 
constant time component associated with the FIFO process, and a maximum 
of two worst-case O(windowSize) trimming events per processing buffer. 
Since blockSize == windowSize, these amortize to give _guaranteed_ 
worst-case O(1) per block. If the block size is larger, the constant 
term goes down slightly. If the block size is smaller, then the constant 
term goes up.


For sample-by sample processing, the naive algorithm is worst-case 
O(windowSize) per sample. For Ethan's version, the worst case is 
O(log2(windowSize)). I think the benefits for sample-by-sample 
processing are unclear.


Ross.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] efficient running max algorithm

2016-09-03 Thread Ross Bencina

On 4/09/2016 2:49 AM, robert bristow-johnson wrote:

sorry to have to get to the basics, but there are no *two* length
parameters to this alg.  there is only one.

   define the streaming real-time input sequence as x[n].  the length of
this signal is infinite.

   output of running max alg is y[n].  the length of this signal is
infinite.

which is it?:

   y[n] = max{ x[n], x[n-1], x[n-2], ... x[n-N+1] }

or

   y[n] = max{ x[n], x[n-1], x[n-2], ... x[n-R+1] }



i've been under the impression it's the first one. using "N".  earlier i
had been under the impression that you're processing R samples at a
time, like processing samples in blocks of R samples. now i have no idea.


I agree that Evan's setup is unusual and not what you'd use for 
streaming real-time processing.


For what it's worth, in my descriptions of my code:

y[n] = max{ x[n], x[n-1], x[n-2], ... x[n-windowSize+1] }

The history may contain a maximum of windowSize elements (or 
windowSize-1 depending on the implementation details).


I don't think I mentioned processing samples in blocks, but I don't 
think we can usefully analyse the practical complexity of this algorithm 
without discussing block-wise processing.


A worst-case windowSize trimming event (LIFO end of queue) can only 
possibly happen every windowSize samples(*). This reduces the worst-case 
complexity for most samples in a block. Hence block-based processing 
will always (not just on average) yield amortization benefits. If the 
block size is larger than the windowSize, the original algorithm will 
run in O(1) worst-case time per sample, otherwise it will be O(k) where 
k is some function of windowSize and blockSize that I am not prepared to 
calculate before coffee.


(*) Because once windowSize samples have been trimmed, it will take 
another windowSize input samples before the queue is at maximum capacity 
again (and this, only in the decreasing ramp case).


Ross.







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] efficient running max algorithm

2016-09-03 Thread Ross Bencina

On 3/09/2016 2:14 PM, robert bristow-johnson wrote:

and in discussing iterators, says nothing about push_back()
and the like.


push_back(), push_front(), pop_back(), pop_front() are methods generally 
available on queue-like containers.




from online i can get an idea, but it seems to me to be a
LIFO stack, not a FIFO buffer.  so a sample value gets pushed onto this
thing and pop_back() pops it off, but how does the oldest sample get
pushed off the front?  what makes this vector a circular FIFO thingie?


What you're missing is that the code to drop the oldest samples is 
omitted from my example entirely, and is in a separate file in Evan's code.


You're kinda right though, there's a LIFO process that deals with the 
incoming data, and old samples drop off the far end of the queue (FIFO) 
when they age beyond the window size. The running maximum is always at 
the far end of the queue (since the sequence in the queue is decreasing).


In my code, /front/ is the oldest value and /back/ is the newest value. 
The queue only contains the decresing segments, so it's a discontiguous 
history -- the oldest value in the queue is not usually windowSize 
samples old, it's probably newer than that.


Each queue entry is a pair (value, index). The index is some integer 
that counts input samples.


During decreasing segments, (value, index) are pushed on the back of the 
queue. During increasing segments, samples are trimmed off the back. 
(This is the LIFO part)


Samples are dropped off the front when index is older than the current 
input index (This is the FIFO part).




that said, the "10 lines of code" is deceptive.  it's 10 lines of code
with function calls.  you gotta count the cost of the function calls.


I agree that it's opaque. But a modern C++ compiler will inline 
everything and optimize away most of the overhead. I know this sounds 
like fanboy talk, but the C++ optimizers are surprisingly good these days.


Personally I steer clear of STL for dsp code.



now, Ross, this is Evan's diagram (from earlier today), but maybe you
can help me:


http://interactopia.com/archive/images/lemire_algorithm.png


I read that with time running from left to right.

The newest samples are added on the right, the oldest samples are 
dropped from the left.


The red segments are the portions stored in the running max history buffer.



The algorithm can safely forget anything in grey because it has been
"shadowed" by newer maximum or minimum values.

>

what is not shown on the diagram is what happens when the current
running max value expires (or falls off the edge of the delay buffer)?
 when the current max value falls offa the edge, what must you do to
find the new maximum over the past N samples?


Let's call the edge "front". That's the leftmost sample in Evan's 
picture. So we expire that one, it falls off the edge, it's no longer 
there. Notice how the stored (red) samples are all in a decreasing 
sequence? So the new maximum over N samples is just the new "front".




you would have to be riding the slope down on the left side of the peak
that just fell offa the edge and then how do you compare that value to
any peaks (that are lower for the moment) that have not yet expired?


If they're all guaranteed lower, you don't need to compare them.

It would be impossible for them to be higher, because the LIFO process 
at the input end of the queue ensures that all samples in the history 
form a decreasing sequence.



 they have to be other wedges, right?  with their own record of height
and location, right?  the only thing shown on that diagram is what
happens when new maxima come into the delay line on the left and the
running max value increases.  it does not show how it decreases and the
running max must decrease eventually.


If the history is non-empty, the running max decreases every time you 
pop a sample off the front. As I said above, that wouldn't necessarily 
happen at every step. For example, if a new global max arrives, the 
queue would be completely flushed, and that global max would remain the 
"front" value in the queue for windowSize samples.




i cannot see how this strategy works without keeping a record of *every*
local maxima.  both its value and its location.


It keeps a record of a decreasing monotonic sequence of inputs, 
including their locations.




that record would also
be FIFO and the size of that record would be large if there were a lotta
peaks (like a high frequency signal), but each entry would be
chronologically placed.


Correct.



any local maximum between the current running max and the "runner up"
(which is more current) can be ignored.  perhaps this means that you
only need to note where the current running max is (both location and
value) and the runner-up that is more current.  when the current max
falls offa the edge on the right, you have to ride the left slope of
that peak (which is at the maximum delay location) down until it's the
value 

Re: [music-dsp] efficient running max algorithm

2016-09-02 Thread Ross Bencina

On 3/09/2016 3:12 AM, Evan Balster wrote:

Just a few clarifications:

- Local maxima and first difference don't really matter.  The maximum
wedge describes global maxima for all intervals [t-x, t], where x=[R-1..0].


I think it's interesting to look at the algorithm from different 
perspectives. It's clear from your diagram that the backwards trimming 
happens when the signal is at a new local maximum (value is increasing)


As for first difference: it's a shorthand for "increasing" or 
"decreasing". My implementation compares to the previous sample to 
switch between two different behaviors on the input:


if (current > previous_) { // increasing
 (trim loop goes here)
 if (U_.empty())
  return current
 else
  return U_.front().value
} else { // decreasing
 U_.push_back({current, index})
 // (*)
 return U_.front().value // front being oldest sample
}

previous_ = current

I also add the test to maintain the window size into this loop, 
which saves a couple of comparisons since e.g. at (*) we know that the 
history buffer is non-empty.

Anyhow, just another perspective. I prefer my way for implementation 
because it makes the branches explicit. Your STL version makes the 
algorithm clearer.




- If two equal samples are encountered, the algorithm can forget about
the older of them.


I guess so. So the first comparison above should be >=.

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] efficient running max algorithm

2016-09-02 Thread Ross Bencina

On 2/09/2016 4:37 PM, Ross Bencina wrote:

When the first difference is positive, the history is trimmed. This is
the only time any kind of O(N) or O(log2(N)) operation is performed.
First difference positive implies that a new local maximum is achieved:
in this case, all of the most recent history samples that are less than
the new maximum are trimmed.


Correction:


[The new local maximum dominates all
history up to the most recent *history sample* that exceeds it, which is
retained.]


I had said "most recent local maximum" but the history contains a 
monotonic non-increasing sequence.


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] efficient running max algorithm

2016-09-02 Thread Ross Bencina

Hello Robert,

> i think i understand the algorithm.

Your description seems quite distant from the algorithm as I understand it.

Considering only running max:

In effect, the running max keeps a history of the decreasing 
sub-sequences of the input.


When the first difference of the input is non-positive, every input 
sample is added to the history. e.g. for a strictly decreasing input, 
the algorithm behaves as a FIFO queue.


When the first difference is positive, the history is trimmed. This is 
the only time any kind of O(N) or O(log2(N)) operation is performed.
First difference positive implies that a new local maximum is achieved: 
in this case, all of the most recent history samples that are less than 
the new maximum are trimmed. [The new local maximum dominates all 
history up to the most recent local maximum that exceeds it, which is 
retained.]


The trimming can only occur once for a given history sample, so in the 
case of a strictly increasing input, only the most recent input would be 
present in the history queue. A similar situation would arise if the 
input was a high-frequency increasing amplitude signal. In this case the 
algorithm is O(1) because there is only one history sample to compare 
to/trim.


If a high frequency decreasing amplitude signal is input, then every 
second input would be retained in the history. For this type of input 
signal, a new local maximum would never cause old history to be trimmed 
[new local max would be less than all previous local maxes] -- therefore 
the trimming is a single comparison, i.e. O(1). In this case, you 
correctly observe that the memory requirements are similar to a history 
keeping tree.


The current maximum is always the oldest value that has been retained in 
the history. It does not require any search in the queue. For example, 
in a strictly increasing sequence there is only one value in the history 
(the most recent). In a strictly decreasing sequence, the whole input 
history (up to the window size) would be in the queue.


I believe that the worst case input would be a descending sawtooth with 
period windowSize-1. During the ramp segment, the algorithm would run in 
constant time, but at the jump, the whole history would have to be 
trimmed: that's where the worst-case O(N) or O(log2(N)) comes in: each 
time that a whole sequence of descending history has to be purged.


There can only ever be N trim operations per N input samples, so the 
amortized performance depends on how big your processing block size is. 
The O(N) or O(log2(N)) worst case spike gets spread over the processing 
block size -- there can never be more than one such spike per N input 
samples. This is presumably better than O(log2(N)) per sample for a tree 
based algorithm.


Ross.


On 2/09/2016 2:52 PM, robert bristow-johnson wrote:



i think i understand the algorithm.  let's assume running max.  you
will, for each wedge, keep a record of the height and location of the
maximum.

and for only two wedges, the wedge that is falling of the edge of the
delay buffer and the new wedge being formed from the new incoming
samples, only on those two wedges you need to monitor and update on a
sample-by-sample basis. as a wedge is created on the front end and as it
falls offa the back end, you will have to update the value and location
of the maxima of each of those two wedges.

each time the first derivative (the difference between the most current
two samples) crosses zero from a positive first difference to a negative
first difference (which is when you have a maximum), you have to record
a new wedge, right?

and then you do a binary max tree for the entire set of wedge maxima?
 is that how it works?

the latter part is O(log2(N)) worst case.  are you expecting a
computation savings over the straight binary search tree because the
number of wedges are expected to be a lot less than the buffer length?

if it's a really high frequency signal, you'll have lotsa wedges.  then
i think there won't be that much difference between the O(log2(N)) worst
case of this Lemire thing and the O(log2(N)) worst case of a straight
binary tree.  and the latter has **very** simple self-contained
code (which was previously posted).

i just wanna know, most specifically, what is the expectation of gain of
speed (or reduction of execution time) of this Lemire alg over the
simpler binary tree alg.  it doesn't seem to me to be a lot better for
memory requirements.


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



 Original Message 
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster" 
Date: Fri, September 2, 2016 12:12 am
To: music-dsp@music.columbia.edu
--


Hello, all ---

Reviving this topic to mention I've created an STL-compatible header
implementing what 

Re: [music-dsp] Choosing the right DSP, what things to look out for?

2016-08-24 Thread Ross Bencina

On 25/08/2016 8:44 AM, Max K wrote:

How important do you reckon FFT hardware acceleration [is]
when choosing the DSP?


Most (all?) DSPs will be somewhat optimised for performing FFTs. They 
may not have special FFT hardware, but the vendor will most likely 
provide an optimised FFT library.


Fourier domain spectral processing is not a particularly mainstream 
technique for digital audio effects, but it's certainly a fruitful area 
if you want to go in that direction. (Michael Norris' plugins come to 
mind as an example of nice things that you can do in the spectral domain 
http://www.michaelnorris.info/software/soundmagic-spectral).


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Faster Fourier transform from 2012?

2016-08-22 Thread Ross Bencina

Igor Carron's blog is also worth a look:

http://nuit-blanche.blogspot.com.au/

On 23/08/2016 12:27 AM, Bjorn Roche wrote:

In case you can't access that link, he doesn't give much info about
how System Compression works

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Faster Fourier transform from 2012?

2016-08-21 Thread Ross Bencina

[Sorry about my previous truncated message, Thunderbird is buggy.]

I wonder what the practical musical applications of sFFT are, and 
whether any work has been published in this area since 2012?



> http://groups.csail.mit.edu/netmit/sFFT/hikp12.pdf

Last time I looked at this paper, it seemed to me that sFFT would 
correctly return the highest magnitude FFT bins irrespective of the 
sparsity of the signal. That could be useful for spectral peak-picking 
based algorithms such as SMS sinusoid/noise decomposition and related 
pitch-tracking techniques. I'm not sure how efficient sFFT is for 
"dense" audio vectors however.



More generally, Compressive Sensing was a hot topic a few years back. 
There is at least one EU-funded research project looking at audio-visual 
applications:

http://www.spartan-itn.eu/#2|

And Mark Plumbley has a couple of recent co-publications:
http://www.surrey.ac.uk/cvssp/people/mark_plumbley/

No doubt there is other work in the field.

Cheers,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Faster Fourier transform from 2012?

2016-08-21 Thread Ross Bencina

On 22/08/2016 3:08 AM, Max Little wrote:

indeed there are
faster algorithms than the FFT if the signal is 'sparse' (or
approximately sparse) in the Fourier domain. This is essentially the
same idea as in compressed sensing, where you can 'beat' the Nyquist
criterion for sparse signals.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-28 Thread Ross Bencina

On 28/07/2016 3:00 AM, gm wrote:

I want to create a signal that's similar to a reverberant knocking or
impact sound,
basically decaying white noise, but with a more compact onset similar to
a minimum phase signal
and spectrally completely flat.


Maybe consider mixing multiple signals together: e.g. a bandlimited 
impulse for the attack, plus something else for the tail.


You might consider the variant of the Karplus-Strong algorithm for drum 
synthesis. Not the original source, but I found a description here:


https://ccrma.stanford.edu/~sdill/220A-project/drums.html#ks
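
From memory it is something like the following (my own sketch, so treat 
the details as approximate: the original paper's blend factor b controls 
a random sign flip on the averaged feedback, with b = 1 giving the plucked 
string and b around 0.5 giving a drum-like burst):

#include <vector>
#include <random>

std::vector<float> ksDrum(int delayLength, int numSamples, float b,
                          unsigned seed = 0)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::uniform_real_distribution<float> noise(-1.0f, 1.0f);

    std::vector<float> delay(delayLength);
    for (float& s : delay) s = noise(rng);              // white-noise excitation

    std::vector<float> out(numSamples);
    int idx = 0;
    float prev = 0.0f;
    for (int n = 0; n < numSamples; ++n) {
        const float avg = 0.5f * (delay[idx] + prev);   // two-point average (lowpass)
        const float y = (uni(rng) < b) ? avg : -avg;    // random sign flip
        prev = delay[idx];
        out[n] = y;
        delay[idx] = y;                                 // write back into the loop
        idx = (idx + 1) % delayLength;
    }
    return out;
}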

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Supervised DSP architectures (vs. push/pull)

2016-07-27 Thread Ross Bencina

Hi Evan,

Greetings from my little cave deep in the multi-core scheduling rabbit 
hole! If multi-core is part of the plan, you may find that multicore 
scheduling issues dominate the architecture. Here are a couple of 
starting points:


Letz, Stephane; Fober, Dominique; Orlarey, Yann; Davis, Paul,
"Jack Audio Server: MacOSX port and multi-processor version"
Proceedings of the first Sound and Music Computing conference – SMC’04, 
pp. 177–183, 2004.

http://www.grame.fr/ressources/publications/SMC-2004-033.pdf

CppCon 2015: Pablo Halpern “Work Stealing"
https://www.youtube.com/watch?v=iLHNF7SgVN4

Re: prioritization. Whether the goal is lowest latency or highest 
throughput, the solutions come under the category of Job Shop Scheduling 
Problems. Large classes of multi-worker multi-job-cost scheduling 
problems are NP-complete. I don't know where your particular problem 
sits. The Work Stealing schedulers seem to be a popular procedure, but 
I'm not sure about optimal heuristics for selection of work when there 
are multiple possible tasks to select -- it's further complicated by 
imperfect information about task cost (maybe the tasks have 
unpredictable run time), inter-core communication costs etc.


Re: scratch storage allocation. For a single-core single-graph scenario 
you can use graph coloring (same as a compiler register allocator). For 
multi-core I guess you can do the same, but you might want to do 
something more dynamic. E.g. reuse a scratch buffer that is likely in 
the local CPUs cache.


Cheers,

Ross.



On 28/07/2016 5:38 AM, Evan Balster wrote:

Hello ---

Some months ago on this list, Ross Bencina remarked about three
prevailing "structures" for DSP systems:  Push, pull and *supervised
architectures*.  This got some wheels turning, and lately I've been
confronted by the need to squeeze more performance by adding multi-core
support to my audio framework.

I'm looking for wisdom or reference material on how to implement a
supervised DSP architecture.

While I have a fairly solid idea as to how I might go about it, there
are a few functions (such as prioritization and scratch-space
management) which I think are going to require some additional thought.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] BW limited peak computation?

2016-07-27 Thread Ross Bencina

On 28/07/2016 12:04 AM, Ethan Fenn wrote:

Because I don't think there can be more than one between any two
adjacent sampling times.


This really got the gears turning. It seems true, but is it a theorem?
If not, can anyone give a counterexample?


I don't know whether it's a classical theorem, but I think it is true.

Define the normalized sinc function as:

sinc(t) := sin( pi t ) / (pi t)

sinc(0) = 1; the function is analytic everywhere.

A bandlimited, periodically sampled discrete-time signal {x_n} can be 
interpolated by a series of time-shifted normalized sinc functions, each 
centered at time n and scaled by amplitude x_n. This procedure can be 
used to produce the continuous-time analytic signal x(t) induced by 
{x_n}. We want to know how many peaks (direction changes) there can be 
in x(t) between x(n) and x(n+1).


Sinc is bandlimited and has no frequencies above the Nyquist rate 
(fs/2). A sum of time shifted sincs is also bandlimited and therefore 
has no frequencies above the Nyquist rate.


Now all you need to do is prove that a band-limited analytic signal 
whose highest frequency is fs/2 has no more than one direction change 
per sample period. I can't think how to do that formally right now, but 
intuitively it seems plausible that a signal with no frequencies above 
the nyquist rate would not have time-domain peaks spaced closer than the 
sampling period.


Ross.



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] BW limited peak computation?

2016-07-26 Thread Ross Bencina

On 27/07/2016 7:09 AM, Sampo Syreeni wrote:

Now, what I wonder is, could you still somehow pinpoint the temporal
location of an extremum between sampling instants, by baseband logic?
Because I don't think there can be more than one between any two
adjacent sampling times.


Presumably the certainty of such an estimate would depend on how many 
baseband time samples you considered. Sinc decays as 1/x so that gives 
you some idea of the potential influence of distant values -- not sure 
exactly how that maps into distant samples' influence on peak location 
though.


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] highly optimised variable rms and more

2016-07-18 Thread Ross Bencina

Hi Robert,

I'm not going to defend the algorithm (it's a while since I implemented 
it). But it does work. It keeps a history of certain peaks, which makes 
it efficient to drop them off the end.


Ross.


On 19/07/2016 7:24 AM, robert bristow-johnson wrote:



hi Ross,

i just now saw this reference.  never knew of it before.  i downloaded
it and took my first cursory look at it.

i have to admit that i am skeptical of O(1), even though i haven't
gotten the complete gist of the article.  it says that there are 3
comparisons made per element.  i cannot possibly see how it can be
anything that isn't an increasing function of the window length, w.
 every element in the array has to be verified as not being the max.
 otherwise you have to track the 2nd-place max and the 3rd-place max, ad
arraylengthum.  the best you can keep track of w elements is something
O(log2(w)).  below is a code snippet that does this.  it comes from some
"maxtree" algorithm that i can't remember from who.

now, if the window length w is about 2^30 or a billion, there could be
millions of isolated peaks to keep track of once the maximum falls offa
the edge in the buffer.  even if all the peaks are in a sorted list  i
just cannot fathom how an algorithm can guarantee it has the max without
a search through that list.  and the shortest way to do that is with a
binary tree, but when each new element comes in, you only need to
compare one element per "generation" (an element and its adjacent
neighbor are siblings and everybody has a parent). and there are
O(log2(w)) generations in a binary tree.

this sorta reminds me of an earlier time i was skeptical of this
"zero-delay feedback" notion.

i'll try to understand this paper, but i have this biased skepticism.  i
just can't see how it can possibly be better than O(log2(w)).

:-\

r b-j







 Original Message 
Subject: Re: [music-dsp] highly optimised variable rms and more
From: "Ross Bencina" <rossb-li...@audiomulch.com>
Date: Mon, July 18, 2016 10:52 am
To: music-dsp@music.columbia.edu
--


On 19/07/2016 12:29 AM, Ethan Fenn wrote:

a $ b = max(|a|, |b|)

which I think is what you mean when you describe the peak hold meter.
Certainly an interesting application! And one where I don't think
anything analogous to the Tito method will work.


I've posted here before that there is an O(1) algorithm for running min
and max:

Daniel Lemire, Streaming Maximum-Minimum Filter Using No More than Three
Comparisons per Element, Nordic Journal of Computing, Volume 13, Number
4, pages 328-339, 2006

http://arxiv.org/abs/cs/0610046

Cheers,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp





--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



#define A_REALLY_LARGE_NUMBER 1.0e20

typedef struct
   {
   unsigned long filter_length;  // array_size/2 < filter_length <= array_size
   unsigned long array_size;     // must be power of 2 for this simple implementation
   unsigned long input_index;    // the actual sample placement is at (array_size + input_index);
   float* big_array_base;        // the big array is malloc() separately and is actually twice array_size;
   } search_tree_array_data;


void initSearchArray(unsigned long filter_length, search_tree_array_data* array_data)
   {
   array_data->filter_length = filter_length;

   array_data->array_size = 1;
   filter_length--;
   while (filter_length > 0)
      {
      array_data->array_size <<= 1;
      filter_length >>= 1;
      }
   // array_size is a power of 2 such that
   // filter_length <= array_size < 2*filter_length
   // array_size = 2^ceil(log2(filter_length)) = 2^(1+floor(filter_length-1))

   array_data->input_index = 0;

   array_data->big_array_base = (float*)malloc(sizeof(float)*2*array_data->array_size);   // dunno what to do if malloc() fails.

   for (unsigned long n=0; n<2*array_data->array_size; n++)
      {
      array_data->big_array_base[n] = -A_REALLY_LARGE_NUMBER;   // init array.
      }                                                         // array_base[0] is never used.
   }



/*
 *   findMaxSample(new_sample, &array_data) will place new_sample into the circular
 *   buffer in the latter half of the array pointed to by array_data.big_array_base .
 *   it will then compare the value in new_sample to its "sibling" value, takes the
 *   greater of the two and then pops up one generation to the parent node where
 *   this parent also has a sibling and repeats the process.  since the other parent
 *   nodes already have the max value of the t

Re: [music-dsp] looking for tutorials

2016-06-13 Thread Ross Bencina

>>Do everything in the recording studio

Here's my first attempt at a tutorial on seekable lock-free audio 
record/playback:


http://www.rossbencina.com/code/interfacing-real-time-audio-and-file-io



Passion is a good thing


Ty seems to be planning to re-implement just about everything:

https://www.google.com.au/?q=ty+armour+looking+for+tutorials+write

They were asking about how to re-implement PortAudio a while back...

It would be great if every piece of software came with a tutorial about 
how to re-write it. I remember reading about NASA having a documentation 
folder for every source code file for the space shuttle software.


Cheers,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-13 Thread Ross Bencina

On 13/06/2016 3:01 PM, robert bristow-johnson wrote:

many hours of integration by parts


there's gotta be easier ways of doing it (like Euler's with binomial).


I made a Python script for James' polynomial (binomial, Euler's) (sample 
output is at the bottom of the script). It did take a few hours though, 
and it's pretty much unreadable. I should revisit SymPy.


https://gist.github.com/RossBencina/d8e3a3b1c54218711da7d47949bf354a



anyway, i haven't quite groked Charles Z's thing quite yet.


Me neither -- I need to think about how this maps to the interpolation 
coefficients.


Ross.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-12 Thread Ross Bencina

On 12/06/2016 8:04 PM, Andy Farnell wrote:

Great to follow this Ross, even with my weak powers of math
its informative.


My powers of math are still pretty weak, but I've been spending time at 
the gym lately ;)




I did some experiments with Bezier after being hugely inspired by
the sounds Jagannathan Sampath got with his DIN synth.
(http://dinisnoise.org/)
Jag told me that he had a cute method for matching the endpoints
of the segment (you can see in the code), and listening, sounds
seem to be alias free, but we never could arrive at a proof of
that.

Now I am revisiting that territory for another reason and wondering
about the properties of easily computed polynomials again.



My less-than-stellar understanding is that at the breakpoints, 
higher-order continuity determines the bandwidth of the harmonics 
induced by the discontinuity (This is related to the BLIT, BLEP, etc 
story discussed here many times). Each additional matched derivative 
gives you an extra 6 dB/octave of roll-off, which would stand to reason, since 
in the limit (i.e. infinite matched derivatives in a periodic waveform) 
you'd get a sinusoid. Or you could bandlimit the breakpoints using some 
other scheme (e.g. oversampling).
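
(The usual statement of that rule of thumb, from memory: a jump in the 
k-th derivative of the waveform contributes spectral components that fall 
off asymptotically like 1/f^(k+1), i.e. 6(k+1) dB per octave, so matching 
one more derivative at the breakpoints buys roughly another 6 dB/octave of 
asymptotic roll-off.)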


As to the exact impact the order of the polynomial has on bandwidth, 
aside from at the breakpoints, I'm not sure. But taking a stab at it: a 
polynomial of order n will have at most n zero-crossings -- that might 
allow for a rough estimate of the maximum bandwidth. As for the minimum 
bandwidth: it doesn't take that many terms to get a sine wave with 
100db-accuracy (consider the error term in the Taylor series).


It may well be that the harmonic content in the DIN synth comes mainly 
from controlling (dis)continuity at the breakpoints rather than the 
spectrum of the polynomial curve per-se.


Cheers,

Ross.





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-11 Thread Ross Bencina

Hi Andy,

On 11/06/2016 9:16 PM, Andy Farnell wrote:

Is there something general for the spectrum of all polynomials?


I think Robert was referring to the waveshaping spectrum with a 
sinusoidal input.


If the input is a (complex) sinusoid it follows from the index laws:

(e^(iw))^2 = e^(i2w)

In excruciating detail*:

Consider the expansion of (1-z)^b (use the binomial theorem). The 
highest power of z in the expansion will be z^b.


e.g. for b = 3:

(1-z)^3 = -z^3 + 3z^2 - 3z + 1

Similarly, for f(z) = (1-z^a)^b, the highest power of z will be ab. (Not 
sure where Robert got a|b| from though).


e.g. for a = 2, b = 3:

(1-z^2)^3 = -z^6 + 3z^4 - 3z^2 + 1


Now, assume that the input is sinusoidal:

Let z = e^(iw), with w being oscillator phase.

Then z^(ab) = (e^(iw))^(ab) = e^(iabw).

So, e.g. for a = 2, b = 3:

(1-(e^(iw))^2)^3
   = -(e^(iw))^6 + 3(e^(iw))^4 - 3(e^(iw))^2 + 1
   = -e^(i6w) + 3e^(i4w) - 3e^(i2w) + 1

Hence the highest harmonic will be at ab times the base frequency.
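
As a quick numerical cross-check (a throwaway numpy sketch, not part 
of the argument above): apply f(x) = (1-x^2)^3 to a real sinusoid and 
look at which FFT bins light up -- only DC and harmonics 2, 4 and 6 of 
the input frequency:

import numpy as np

# Waveshape one real sinusoid with f(x) = (1 - x^2)^3 and confirm that
# the output contains nothing above a*b = 6 times the input frequency.
N, k = 1024, 16                 # FFT length, input frequency in bins
x = np.cos(2*np.pi*k*np.arange(N)/N)
y = (1.0 - x**2)**3

mags = np.abs(np.fft.rfft(y)) / N
print(np.nonzero(mags > 1e-9)[0])   # [0 32 64 96] = DC, 2k, 4k, 6k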




I don't know whether there is a closed-form expression for the spectrum 
of James' window functions, windowed over [-1, 1].



Greetings from Down Under,

Ross.

[*] As always, I could be wrong about this.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-10 Thread Ross Bencina

Nice!

On 11/06/2016 11:31 AM, James McCartney wrote:

f(x) = (1-x^a)^b


Also potentially interesting for applying waveshaping to quadrature 
oscillators:


https://www.desmos.com/calculator/vlmynkrlbs

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] High quality really broad bandwidth pinknoise (ideally more than 32 octaves)

2016-04-11 Thread Ross Bencina

On 12/04/2016 10:26 AM, Evan Balster wrote:

I haven't yet come across an automated process for designing
high-quality pinking filters, so if someone can offer one up I'd also
love to hear about it!


Last time that I checked (about a year and a half ago) the following 
was the best reference that I could find. Unfortunately I'm not yet 
sufficiently initiated to follow the Hardy-space methods.


"Simulation of Fractional-Order Low-Pass Filters"
Thomas Hélie (IRCAM)
IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 
22, NO. 11, NOVEMBER 2014


Abstract:

"""
The attenuation of standard analog low-pass filters
corresponds to a multiple value of decibels per octave. This
quantified value is related to the order of the filter. The issue
addressed here is concerned with the extension of integer orders
to non integer orders, such that the attenuation of a low-pass filter
can be continuously adjusted. Fractional differential systems are
known to provide such asymptotic behaviors and many results
about their simulation are available. But even for a fixed cutoff
frequency, their combination does not generate an additive group
with respect to the order and they involve stability problems. In
this paper, a class of low-pass filters with orders between 0 (the
filter is a unit gain) and 1 (standard one-pole filter) is defined to
restore these properties. These infinite dimensional filters are not
fractional differential but admit some well-posed representations
into weighted integrals of standard one-pole filters. Based on this,
finite dimensional approximations are proposed and recast into the
framework of state-space representations. A special care is given
to reduce the computational complexity, through the dimension
of the state. In practice, this objective is reached for the complete
family, without damaging the perceptive quality, with dimension
13. Then, an accurate low-cost digital version of this family is
built in the time-domain. The accuracy of the digital filters is
verified on the complete range of parameters (cutoff frequencies
and fractional orders). Moreover, the stability is guaranteed, even
for time-varying parameters. As an application, a plugin has been
implemented which provides a new audio tool for tuning the cutoff
frequency and the asymptotic slope in a continuous way. As a
very special application, choosing a one-half order combined with
a low cutoff frequency (20 Hz or less), the filter fed with a white
noise provides a pink noise generator.
"""


There is an AES paper by the same author:
http://dl.acm.org/citation.cfm?id=2693064

HTH,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Tip for an audio resampler library doing cubic interpolation

2016-02-24 Thread Ross Bencina

On 23/02/2016 7:42 PM, Kjetil Matheussen wrote:

But that's why I ask, so I don't have to do the implementation. It
seems like a common task that someone, probably many, have already
done.


Just because many people have already done it does not mean you should 
not also do it from scratch. Fine-grained external dependencies bring 
many hazards. It's better to hone the skills at designing and 
implementing small things yourself, as you have ended up doing.


Sure there's a cut over point where you won't do it all yourself (e.g. 
for most people, writing a whole operating system and driver set.) But a 
cubic interpolator in C/C++ is way below the threshold of do-it-yourself.


imho ymmv etc.


I'm surprised it's apparently so uncommon to implement a
callback interface for providing samples when resampling. It's the
really the natural thing to do.


It is natural in your application. You would have to do some research to 
work out when it is not natural. (As I said earlier, there are at least 
3 structures: push, pull, and supervised).


Ross.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Tip for an audio resampler library doing cubic interpolation

2016-02-22 Thread Ross Bencina

On 23/02/2016 1:24 AM, Kjetil Matheussen wrote:

On Mon, Feb 22, 2016 at 2:59 PM, Ross Bencina
<rossb-li...@audiomulch.com <mailto:rossb-li...@audiomulch.com>> wrote:

Hi Kjetil,

On 22/02/2016 11:52 PM, Kjetil Matheussen wrote:

I wonder if anyone has a tip for a C or C++ of an implementation
of a
Cubic interpolating resampler. I'm not asking about the algorithm
itself, that is all covered (currently I'm using a Catmull-Rom
spline
algorithm, but that is not so important here). What I'm asking about
is a framework for using the algorithm.


It's hard to suggest a framework out of thin air. One thing: it
probably doesn't matter what kind of interpolation you use. The
interface is concerned with samples in, samples out, current
interpolation phase, conversion ratio. Here's a few questions that
come to mind:


Thanks Ross. I'm sorry I didn't provide more information. I did actually
answer all your questions,
through code, but I guess it was quite arrogant of me not to explain the
somewhat hairy interface.


Hi Kjetil,

Thanks for the clarification. Sorry, I misunderstood your question. When 
you asked for "a framework for using the algorithm", I thought you meant 
the best way to structure the interface to the code. But it sounds like 
you have already decided on the interface.


The interpolation is a function of the lookup phase and a few input 
samples. What else do you need to resolve?


If I was going to implement that interface, I'd write some tests and the 
simplest possible implementation (probably, pulling single samples from 
the source as needed, maybe just using linear or 0-order interpolation 
to start with). Then I'd profile, refactor and optimize as needed.
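
Something along these lines, say -- a Python sketch rather than C++, 
and the function names and the shape of the pull() callback are mine, 
not taken from your interface:

import numpy as np

def catmull_rom(y0, y1, y2, y3, frac):
    # Catmull-Rom cubic between y1 and y2, frac in [0, 1).
    a = -0.5*y0 + 1.5*y1 - 1.5*y2 + 0.5*y3
    b =      y0 - 2.5*y1 + 2.0*y2 - 0.5*y3
    c = -0.5*y0           + 0.5*y2
    return ((a*frac + b)*frac + c)*frac + y1

def resample(pull, ratio, num_out):
    # "Pull" structure: pull(n) must return the next n source samples.
    history = list(pull(4))      # the 4-sample window y0..y3
    phase = 0.0
    out = np.empty(num_out)
    for i in range(num_out):
        out[i] = catmull_rom(*history, phase)
        phase += ratio           # source samples consumed per output sample
        while phase >= 1.0:
            phase -= 1.0
            history = history[1:] + list(pull(1))
    return out

# toy usage: a 1 kHz sine at 44.1 kHz, resampled to 48 kHz
src = np.sin(2*np.pi*1000.0*np.arange(8000)/44100.0)
pos = 0
def pull(n):                     # stateful source, kept crude for brevity
    global pos
    chunk = src[pos:pos+n]
    pos += n
    return chunk

out = resample(pull, 44100.0/48000.0, 4000)

The same structure ports directly to C++; the point is that the whole 
thing is small enough to test exhaustively.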


Ross.

P.S. Using a C function pointer in C++ is not very flexible. You could 
either define an abstract interface for the source, or use std::function.


P.P.S. How do you communicate the amount of source data needed? the 
callback has no "count" parameter.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-21 Thread Ross Bencina

On 2/02/2016 2:10 AM, Scott Gravenhorst wrote:

Advice regarding this endeavor would be appreciated


In case you haven't found it, you should research the disable_pvt config
file flag. It can reduce system jitter a little.


Hi again Scott,

The following Linux system jitter optimisation blog posts by Mark Price 
were referenced on the mechanical-sympathy mailing list today. They may 
be of interest to your project:


http://epickrram.blogspot.co.uk/2015/09/reducing-system-jitter.html
http://epickrram.blogspot.co.uk/2015/11/reducing-system-jitter-part-2.html

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-06 Thread Ross Bencina

Hi Scott,

Interesting project!

On 2/02/2016 2:10 AM, Scott Gravenhorst wrote:

Advice regarding this endeavor would be appreciated


In case you haven't found it, you should research the disable_pvt config 
file flag. It can reduce system jitter a little.


Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-20 Thread Ross Bencina

On 21/01/2016 2:36 PM, robert bristow-johnson wrote:
> i thought i understood Tchebyshev polynomials well.  including their
> trig definitions (for |x|<1), but if what you're trying to do is
> generate a sinusoid from polynomials, i don't understand where the
> "Tchebyshev" (with or without the "T") comes in.
>
> is it min/max error (a.k.a. Tchebyshev norm)?

Here's the relevant passage from p. 119:

"""
An article about sines and cosines wouldn’t be complete without some 
mention of the use of Chebyshev polynomials. Basically, the theory of 
Chebyshev polynomials allows the programmer to tweak the coefficients a 
bit for a lower error bound overall. When I truncate a polynomial, I 
typically get very small errors when x is small, and the errors increase 
dramatically and exponentially outside a certain range near x = 0. The 
Chebyshev polynomials, on the other hand, oscillate about zero with peak 
deviations that are bounded and equal. Expressing the power series in 
terms of Chebyshev polynomials allows you to trade off the small errors 
near zero for far less error near the extremes of the argument range. I 
will not present a treatise on Chebyshev polynomials here; for now, I’ll 
only give the results of the process.


You don’t need to know how this is done for the purposes of this 
discussion, but the general idea is to substitute every power of x by 
its equivalent in terms of Chebyshev polynomials, collect the terms, 
truncate the series in that form, and substitute back again. When all 
the terms have been collected, you’ll find that you are back to a power 
series in x again, but the coefficients have been slightly altered in an 
optimal way. Because this process results in a lower maximum error, 
you’ll normally find you can drop one term or so in the series expansion 
while still retaining the same accuracy

"""

R.



if you want a quick 'n clean polynomial for sin(x), there is a very
low-order optimized polynomial implementation for restricted |x| here:
http://dsp.stackexchange.com/questions/20444/books-resources-for-implementing-various-mathematical-functions-in-fixed-point-a/20482#20482

is the purpose to generate a sinusoidal waveform or perhaps to do math
with (like in the generation of filter coefficients)?

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-19 Thread Ross Bencina

Sorry, my previous message got truncated for some reason.

On 20/01/2016 5:56 AM, Alan Wolfe wrote:

Did you know that rational quadratic Bezier curves can exactly represent
conic sections, and thus give you exact trig values?


As Andrew said, the curve lies on a conic section, but the 
parameterization is not in terms of radian angle.


Taking equation 106 from here:

http://www.cl.cam.ac.uk/teaching/2000/AGraphHCI/SMEG/node5.html#SECTION00051000

You'll see that the graphs of P(t) and sin(pi*t/2) do not align:

https://www.desmos.com/calculator/qn7yn1jxcp
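
A quick numerical way to see both halves of that claim (my own sketch; 
the control points and weights below are the usual ones for a unit 
quarter circle, assumed here rather than taken from equation 106):

import numpy as np

# Unit quarter circle as a rational quadratic Bezier: control points
# P0=(1,0), P1=(1,1), P2=(0,1), weights 1, sqrt(2)/2, 1.
t = np.linspace(0.0, 1.0, 101)
w1 = np.sqrt(2.0) / 2.0
b0, b1, b2 = (1 - t)**2, 2*t*(1 - t), t**2
den = b0 + w1*b1 + b2
x = (b0*1.0 + w1*b1*1.0 + b2*0.0) / den
y = (b0*0.0 + w1*b1*1.0 + b2*1.0) / den

print(np.max(np.abs(x*x + y*y - 1.0)))        # ~1e-16: every point is on the circle
print(np.max(np.abs(y - np.sin(np.pi*t/2))))  # ~0.015: but y(t) is not sin(pi*t/2)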

Cheers,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Introducing: Axoloti

2015-12-14 Thread Ross Bencina

Hi Johannes,

Nice to see a board with 1/4 inch jacks :)

Does the board run an OS, or does the patcher compile bare-metal images? 
I assume there's some kind of OS if you support class-compliant MIDI 
over USB.


Thanks,

Ross.

On 15/12/2015 7:05 AM, Johannes Taelman wrote:

Hi,

I'm pleased to announce public availability of Axoloti Core.

Axoloti is an open source platform for sketching music-DSP algorithms
running on standalone hardware.

Axoloti Core is a circuit board containing a 168MHz Cortex-M4F
microcontroller, audio ADC/DAC, DIN MIDI, USB host port, USB device
port, switching power supply, 8MB SDRam, a micro-SDCard slot, and a set
of general purpose inputs and outputs.

Axoloti allows you to build custom synths, FX units and new instruments
using a graphical patcher that generates C++ code, and also manages
compilation and upload to the microcontroller. The object library offers
oscillators, filters, envelopes, and more. The patcher runs on Windows,
OSX and Linux.

www.axoloti.com 

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ross Bencina

Thanks Ethan(s),

I was able to follow your derivation. A few questions:

On 4/11/2015 7:07 PM, Ethan Duni wrote:

It's pretty straightforward to derive the autocorrelation and psd for
this one. Let me restate it with some convenient notation. Let's say
there are a parameter P in (0,1) and 3 random processes:
r[n] i.i.d. ~U(0,1)
y[n] i.i.d. ~(some distribution with at least first and second moments
finite)
x[n] = (r[n] <= P) ? y[n] : x[n-1]

ac[k] = P^abs(k)

>

And so the psd is given by:

psd[w] = (1 - P^2)/(1 - 2Pcos(w) + P^2)


What is the method that you used to go from ac[k] to psd[w]? Robert 
mentioned that psd was the Fourier transform of ac. Is this particular 
case a standard transform that you knew off the top of your head?
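
(My guess at the route, for completeness -- a sketch, so treat with 
suspicion: with ac[k] = P^|k|, the DTFT is a two-sided geometric series:

sum_{k=-inf..+inf} P^|k| * e^(-i*w*k)
  = 1 + 2*Re( sum_{k>=1} (P*e^(-i*w))^k )
  = 1 + 2*Re( P*e^(-i*w) / (1 - P*e^(-i*w)) )
  = (1 - P^2) / (1 - 2*P*cos(w) + P^2)

which matches the psd[w] above.)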


And is psd[w] in exactly the same units as the magnitude squared 
spectrum of x[n] (i.e. |ft(x)|^2)?




Unless I've screwed up somewhere?


A quick simulation suggests that it might be okay:

https://www.dropbox.com/home/Public?preview=SH1_1.png


But I don't seem to have the scale factors correct. The psd has 
significantly smaller magnitude than the fft.


Here's the numpy code I used (also pasted below).

https://gist.github.com/RossBencina/a15a696adf0232c73a55

The FFT output is scaled by (2.0/N) prior to computing the magnitude 
squared spectrum.


I have also scaled the PSD by (2.0/N). That doesn't seem quite right to 
me for two reasons: (1) the scale factor is applied to the linear FFT in 
one case but to the mag-squared PSD in the other, and (2) I don't have 
the 1/3 factor anywhere.


Any thoughts on what I'm doing wrong?

Thanks,

Ross.


P.S. Pasting the numpy code below:

# ---8<
# see 
https://lists.columbia.edu/pipermail/music-dsp/2015-November/000424.html

# psd derivation due to Ethan Duni
import numpy as np
from numpy.fft import fft, fftfreq
import matplotlib.pyplot as plt

N = 16384*2*2*2*2*2 # FFT size

y = (np.random.random(N) * 2) - 1 # ~U(-1,1)
r = np.random.random(N) # ~U(0,1)
x = np.empty(N) # (r[n] <= P) ? y[n] : x[n-1]

[music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ross Bencina

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

>>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the value b 
if a is true, and c otherwise.

<<<

What would be a good way to derive a closed-form expression for the 
spectrum of x? (Assuming that the series is infinite.)



I'm guessing that the answer is an integral over the spectra of shifted 
step functions, but I don't know how to deal with the random magnitude 
of each step, or the random onsets. Please assume that I barely know how 
to take the Fourier transform of a step function.


Maybe the spectrum of a train of randomly spaced, random amplitude 
pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any 
hints would be appreciated.


Thanks in advance,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ross Bencina

On 4/11/2015 9:39 AM, robert bristow-johnson wrote:

i have to confess that this is hard and i don't have a concrete solution
for you.


Knowing that this isn't well known helps. I have an idea (see below). It 
might be wrong.




it seems to me that, by this description:

r[n] = uniform_random(0, 1)

if (r[n] <= P)
    x[n] = uniform_random(-1, 1);
else
    x[n] = x[n-1];

from that, and from the assumption of ergodicity (where all time
averages can be replaced with probabilistic averages), then it should be
possible to derive an autocorrelation function from this.

but i haven't done it.


Using AMDF instead of autocorrelation:

let n be an arbitrary time index
let t be the AMDF lag time of interest

AMDF[t] = fabs(x[n] - x[n-t])

there are two cases:

case 1 (holding): x[n-t] == x[n]
case 2 (not holding): x[n-t] == uniform_random(-1, 1), independent of x[n]

In case 1, AMDF[t] = 0
In case 2, AMDF[t] = 2/3 (the expected absolute difference of two 
independent uniform(-1, 1) values)

To get the limit of AMDF[t], weight the values of the two cases by the 
probability of each case (which seems like a textbook waiting-time 
problem, but will require me to return to my textbook).
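
As a quick numerical sanity check of the 2/3 figure (a throwaway numpy 
sketch):

import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(-1.0, 1.0, 1_000_000)
b = rng.uniform(-1.0, 1.0, 1_000_000)
print(np.mean(np.abs(a - b)))   # ~0.667, i.e. 2/3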


Then I just need to convert the AMDF to PSD somehow.

Does that seem like a reasonable approach?

Ross.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ross Bencina

On 4/11/2015 5:26 AM, Ethan Duni wrote:

Do you mean the literal Fourier spectrum of some realization of this
process, or the power spectral density? I don't think you're going to
get a closed-form expression for the former (it has a random component).


I am interested in the long-term magnitude spectrum. I had assumed 
(wrongly?) that in the limit (over an infinite-length series) the 
Fourier integral would converge, and modeling it that way would be 
(slightly) more familiar to me. However, if autocorrelation or psd is 
the better way to characterize the spectra of random signals then I 
should learn about that.




For the latter what you need to do is work out an expression for the
autocorrelation function of the process.

>

As far as the autocorrelation function goes you can get some hints by
thinking about what happens for different values of P. For P=1 you get
an IID uniform noise process, which will have autocorrelation equal to a
kronecker delta, and so psd equal to 1. For P=0 you get a constant
signal. If that's the zero signal, then the autocorrelation and psd are
both zero. If it's a non-zero signal (depends on your initial condition
at n=-inf) then the autocorrelation is a constant and the psd is a dirac
delta.Those are the extreme cases. For P in the middle, you have a
piecewise-constant signal where the length of each segment is given by a
stopping time criterion on the uniform process (and P). If you grind
through the math, you should end up with an autocorrelation that decays
down to zero, with a rate of decay related to P (the larger P, the
longer the decay). The FFT of that will have a similar shape, but with
the rate of decay inversely proportional to P (ala Heisenberg
Uncertainty principle).

So in broad strokes, what you should see is a lowpass spectrum
parameterized by P - for P very small, you approach a flat spectrum, and
for P close to 1 you approach a spectrum that's all DC.

Deriving the exact expression for the autocorrelation/spectrum is left
as an exercise for the reader :]


Ok, thanks. That gives me a place to start looking.

Ross.




E

On Tue, Nov 3, 2015 at 9:42 AM, Ross Bencina <rossb-li...@audiomulch.com
<mailto:rossb-li...@audiomulch.com>> wrote:

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

 >>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the
value b if a is true, and c otherwise.
<<<

What would be a good way to derive a closed-form expression for the
spectrum of x? (Assuming that the series is infinite.)


I'm guessing that the answer is an integral over the spectra of
shifted step functions, but I don't know how to deal with the random
magnitude of each step, or the random onsets. Please assume that I
barely know how to take the Fourier transform of a step function.

Maybe the spectrum of a train of randomly spaced, random amplitude
pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way,
any hints would be appreciated.

Thanks in advance,

Ross.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-20 Thread Ross Bencina

On 21/09/2015 10:34 AM, Bjorn Roche wrote:

I noticed that PortAudio's API allows one to open a duplex stream
with different stream parameters for each device. Does it actually
make sense to open an input device and an output device with...

  * ...different sample rates?


PA certainly doesn't support this. You might have two devices open at
one time (one for input and one for output) and they might be running at
separate sample rates, but the stream itself will only have one sample
rate -- at least one device will be SR converted if necessary.


A full duplex PA stream has a single sample rate. There is only one 
sample rate parameter to Pa_OpenStream().




  * ...different latency / hardware buffer values?


Some host APIs support separate values for input and output. And yes, in 
my experience you can get lowest full-duplex latency by tuning the 
parameters separately for input and output.




PA probably only uses one of the two values in at least some situations
like this. In fact, on OS X (and possibly on other APIs),


It depends on the host API. e.g. an ASIO full duplex stream only has one 
buffer size parameter.




the latency
parameter is often ignored completely anyway. (or at least it was when I
last looked at the code)


That is false. Phil and I did a lot of work a couple of years back to 
fix the interpretation of latency parameters.




  * ...different sample formats?


I don't think this is of much use to many people (anybody?). If it is, I
don't think the person who needs it would complain too much about a few
extra lines of conversion code, but maybe I'm wrong.


Agree. There is no particular benefit.

Ross.







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] sinc interp, higher orders

2015-09-11 Thread Ross Bencina

> sinc(x) := sin(x)/x

On 12/09/2015 2:20 AM, Nigel Redmon wrote:

I’m also aware that some people would look at me like I’m a nut to even bring 
up that distinction.


I considered making the distinction, but it is discussed at the first 
link that I provided:


> https://en.wikipedia.org/wiki/Sinc_function

Mathworld also says: "There are two definitions in common use."

http://mathworld.wolfram.com/SincFunction.html


With hindsight, given the audience and intended use, I should have 
quoted the "DSP and information theory" definition.
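
For the record, that normalized ("DSP") definition is 
sinc(x) := sin(pi*x)/(pi*x), which is 1 at x = 0 and zero at every 
other integer.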


R.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] sinc interp, higher orders

2015-09-11 Thread Ross Bencina

On 12/09/2015 1:13 AM, Nuno Santos wrote:

Curiosity, by sinc do you mean sin function?


sinc(x) := sin(x)/x

https://en.wikipedia.org/wiki/Sinc_function

https://ccrma.stanford.edu/~jos/pasp/Windowed_Sinc_Interpolation.html

Cheers,

Ross.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-06 Thread Ross Bencina

Hello Andrew,

Congratulations on libsoundio. I know what's involved.

I have some feedback about the libsoundio-vs-PortAudio comparison. Most 
of my comments relate to improving the accuracy and clarity of the 
comparison page, but forgive me for providing a bit of commentary for 
other readers of music-dsp too.


On 5/09/2015 3:13 AM, Andrew Kelley wrote:

https://github.com/andrewrk/libsoundio/wiki/libsoundio-vs-PortAudio


Many of the points listed at the above URL are accurate. Many have been 
considered as feature requests for future versions of PortAudio, or 
could easily be accommodated as enhancements. All would be welcome 
improvements to PortAudio, either as core API improvements or as 
host-API extensions.


Some points fall into the category of bugs in PortAudio. PortAudio has 
139 open tickets. For completeness, here's the full list:


https://www.assembla.com/spaces/portaudio/tickets

That said, I humbly request clarification or correction of a few of your 
points:


> * Ability to connect to multiple backends at once. For example you
> could have an ALSA device open and a JACK device open at the same
> time.

PortAudio can do this just fine. Very old versions of PortAudio only 
allowed one host API to be active at a time. But for at least 5 years 
"V19" has supported simultaneous access to multiple host APIs.


If this is something specific to ALSA vs. JACK it would be nice to learn 
more. But as far as I understand it, this point is inaccurate.



> * Exposes extra API that is only available on some backends. For
> example you can provide application name and stream names which is
> used by JACK and PulseAudio.

PortAudio does have per-host-API extensions. For example it exposes 
channel maps (another feature listed) for host APIs that support them. 
Another example: on Mac OS X it provides an extension to control 
exclusive-mode access to the device.


That said, afaik, PortAudio doesn't support JACK "stream names", so 
may I suggest changing this point to:


* Provide application name and stream names used by JACK and PulseAudio.

(Btw, that would make a good host API extension for PortAudio too.)


> * Errors are communicated via meaningful return codes, not logging to 
stdio.


PortAudio has a rich set of error codes, mechanisms for converting them 
to text strings, and also provides access to underlying native error 
codes and error text.


I am not clear what your claim of "not logging to stdio" is about. The 
only thing PortAudio prints to stdio is diagnostic debugging 
information. And only when debug logging is turned on. Usually it's used 
to diagnose bugs in a particular PortAudio host-api back end.


It would be helpful to me at least, to give a quick example of what a 
"meaningful error code" is and why PortAudio's error codes are not 
meaningful.



> * Meticulously checks all return codes and memory allocations and uses
> meaningful error codes. Meanwhile, PortAudio is a mess.

PortAudio is meticulous enough to mark where further code review is 
needed. For example, many of the FIXMEs that you indicate in the link 
were added by me during code review:

https://gist.github.com/andrewrk/7b7207f9c8efefbdbcbd

But note that not all of these FIXMEs relate to the listed criticisms.

In particular, as far as I know, there are no problems with PortAudio's 
handling of memory allocation errors. If you know of specific cases of 
problems with this I would be *very* interested to hear about them.



> * Ability to monitor devices and get an event when available devices
> change.

For anyone reading, there is PortAudio code for doing this under Windows 
on the hot-plug development branch. If someone would like to work on 
finishing it for other platforms that would be great.



> * Does not have code for deprecated backends such as OSS, DirectSound,
> asihpi, wdmks, wmme.

Not all of these are deprecated. I'm pretty sure OSS is still the 
preferred API on some BSD systems. ASIHPI is not deprecated; 
AudioScience HPI drivers are newer than their ALSA drivers 
(http://www.audioscience.com/internet/download/linux_drivers.htm). 
WDM/KS is still the user-space direct access path to WDM drivers.


As for WMME and DirectSound, I think you need to be careful not to 
confuse "deprecated" with "bad." Personally I prefer WMME to anything 
newer when latency isn't an issue -- it just works. WASAPI has been 
notoriously variable/unreliable on different Windows versions.


May I suggest listing support for all of these APIs as a benefit of 
PortAudio?


Best wishes,

Ross.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-06 Thread Ross Bencina

Hello Andrew,

Thanks for your helpful feedback. Just to be clear: I maintain the 
PortAudio core common code and some Windows host API codes. Many of the 
issues that you've raised are for other platforms. In those cases I can 
only respond with general comments. I will forward the specific issues 
to the PortAudio list and make sure that they are ticketed.


Your comments highlight a difference between your project and ours: 
you're one guy, apparently with time and talent to do it all. PortAudio 
has had 30+ contributors, all putting in their little piece. As your 
comments indicate, we have not been able to consistently achieve the 
code quality that you expect. There are many reasons for that. Probably 
it is due to inadequate leadership, and for that I am responsible. 
However, some of these issues can be mitigated by more feedback and more 
code review, and for that I am most appreciative of your input.


A few responses...

On 6/09/2015 5:15 PM, Andrew Kelley wrote:

PortAudio dumps a bunch of logging information to stdio without
explicitly turning logging on. Here's a simple program and the
corresponding output:
https://github.com/andrewrk/node-groove/issues/13#issuecomment-70757123


Those messages are printed by ALSA, not by PortAudio. We considered 
suppressing them, but current opinion seems to be that if ALSA has 
problems it's better to log them than to suppress them. That said, it's 
an open issue:


https://www.assembla.com/spaces/portaudio/tickets/163

Do you have any thoughts on how best to handle ALSA's dumping messages 
to stdio?



Another example, when I start audacity, here's a bunch of stuff dumped
to stdio. Note that this is the *success* case; audacity started up just
fine.

ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map


See above ticket.


Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1733



The "Expression ... failed" looks to me like a two level bug: #1 that 
it's logging like that in a release build, and #2 that those messages 
are being hit. (But as I say, PortAudio on Linux is not my area). I'll 
report these to the PortAudio list.




It would be helpful to me at least, to give a quick example of what a
"meaningful error code" is and why PortAudio's error codes are not
meaningful.


PortAudio error codes are indeed meaningful; I did not intend to accuse
PortAudio of this. I was trying to point out that error codes are the
only way errors are communicated as opposed to logging.

I changed it to "Errors are never dumped to stdio" to avoid the
accidental implication that PortAudio has non meaningful error codes.


Given the error messages that you posted above, I can see your point. I 
am not sure why the code is written to post those diagnostic errors in a 
release build but I will check with our Linux contributor.




In particular, as far as I know, there are no problems with PortAudio's
handling of memory allocation errors. If you know of specific cases of
problems with this I would be *very* interested to hear about them.


Not memory, but this one is particularly striking:

   /* FEEDBACK: I'm not sure what to do when this call fails.
There's nothing in the PA API to
* do about failures in the callback system. */
   assert( !err );



It's true, pa_mac_core.c could use some love. There is an issue on Mac 
if the hardware switches sample rates while a stream is open.




As for WMME and DirectSound, I think you need to be careful not to
confuse "deprecated" with "bad." Personally I prefer WMME to anything
newer when latency isn't an issue -- it just works. WASAPI has been
notoriously variable/unreliably on different Windows versions.


My understanding is, if you use DirectSound on a Windows Vista or
higher, it's an API wrapper and is using WASAPI under the hood.


I believe that is true. Microsoft also know all of the version-specific 
WASAPI quirks to make DirectSound work reliably with all the buggy 
iterations of WASAPI.




May I suggest listing support for all of these APIs as a benefit of
PortAudio?


Fair enough.

Would you like to have another look at the wiki page and see if it seems
more neutral and factual?


I think it looks good. The only things I'd change:

> *Supports channel layouts (also known as channel maps), important for
> surround sound applications.

PortAudio has channel maps, but only for some host APIs as a per-API 
extension. It's not part of the portable public API. You could say 
"Support for channel layouts with every API" or something like that.


> *Ability to open an output stream simultaneously for input and output.

Just a 

Re: [music-dsp] warts in JUCE

2015-09-05 Thread Ross Bencina

On 6/09/2015 8:37 AM, Daniel Varela wrote:

sample rate is part of the audio information so any related message
  ( AudioSampleBuffer ) should provide it, no need to extend the discursion.


There's more than one concept at play here:

(1) If you consider the AudioSampleBuffer as a stand-alone entity, then 
it should carry its own sample rate. Further, if the object is fully in 
control of its own buffer allocations then you can argue for all kinds 
of extensions (e.g. zero-padding, power-of-two size, etc.).


(2) On the other hand, if the system has a fixed sample rate, then the 
sample rate is a (global) property of the system, not of the buffers. By 
definition, all buffers have the same sample rate -- the system sample 
rate. Further, if the buffers are allocated externally (e.g. by a 
plug-in host) then the role of AudioSampleBuffer is purely a wrapper 
(smart pointer) and there is no way for it to provide a mechanism for 
zero-padding or any other buffer allocation related features.


This discussion seems to be about whether AudioSampleBuffer is (1), (2), 
or both.


There is no one-true-answer, but if the object is modeling (2), adding a 
field for sample rate not only violates the zero-overhead principle but 
also opens the door to violating a system invariant (i.e. that all 
buffers have the same sample rate). As far as I know, case (2) addresses 
the main use-case for JUCE.


Personally, I think AudioSampleBuffer is (a) trying to do too much 
(there should be two objects: a Buffer and a BufferRef); and (b) it's 
abstraction overkill for the plug-in use-case.


Cheers,

Ross.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] kPlugCategShell

2015-06-18 Thread Ross Bencina

Hello Ralph,

On 19/06/2015 9:18 AM, Ralph Glasgal wrote:

I used to have AudioMulch 1.0 working fine with Waves IR-1 VST hall
impulse responses.  But after a computer crash I can't seem to get
Waves working with either AudioMulch 1.0 or 2.2 due to a lack of
kPlugCategShell support.  How do I get this back into 1.0 or 2.2?  Is
kPlug a file available somewhere?


Music-dsp is a technical mailing list. It is not an appropriate forum to 
post end-user questions regarding specific products.


FYI: There are two official support channels for AudioMulch:

1. The community forum at AudioMulch.com

2. supp...@audiomulch.com (or emailing me directly).

But to answer your question:

AudioMulch doesn't support WaveShell directly. If you can get this 
working at all it will involve using a 3rd party adapter that converts 
wave-shell VSTs into individual plugins. One such adapter is called 
Shell2VST. You probably need to re-install and reconfigure it.


Please continue the conversation at one of the channels listed above.

Kind regards,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FFTW Help in C

2015-06-12 Thread Ross Bencina

Hey Bjorn, Connor,

On 12/06/2015 1:27 AM, Bjorn Roche wrote:

The important thing is to do anything that might take an unbounded
amount of time outside your callback. For a simple FFT, the rule of
thumb might be that all setup takes place outside the callback. For
example, as long as you do all your malloc stuff outside the
callback, processing and so on can usually be done in the callback.


All true, but Connor may need to be more careful than that. I.e. make 
sure that the amount of time taken is guaranteed to be less than the 
time available in each callback.


An FFT is not exactly a free operation. On a modern 8-core desktop
machine it's probably trivial to perform a 2048-point FFT in the audio
callback. But on a low-powered device, a single FFT of large enough size
may exceed the available time in an audio callback. (Connor mentioned
Raspberry Pi on another list).

The only way to be sure that the FFT is OK to run in the callback is to:

- work out the callback period

- work out how long the FFT takes to compute on your device and how many
you need to compute per callback.

- make sure time-to-execute-FFT < callback-period (I'd aim for below
75% of one callback period to execute the entire FFT). This is not
something that can be easily amortized across multiple callbacks.


The above also assumes that your audio API lets you use 100% of the
available CPU time within each callback period. A safer default
assumption might be 50%.

Remember that that your callback period will be short (64 samples) but
your FFT may be large, e.g. 2048 bins. In such cases you have to perform
a large FFT in the time of a small audio buffer period.
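
To make that concrete, here's a back-of-envelope timing sketch (my own 
illustration; numpy's FFT stands in for FFTW, and the numbers will 
obviously vary by machine and FFT size):

import timeit
import numpy as np

sample_rate = 44100.0
callback_frames = 64
fft_size = 2048

callback_period = callback_frames / sample_rate       # ~1.45 ms
x = np.random.rand(fft_size).astype(np.float32)

n_runs = 200
fft_time = timeit.timeit(lambda: np.fft.rfft(x), number=n_runs) / n_runs

print("callback period: %.3f ms" % (callback_period * 1e3))
print("one %d-point FFT: %.3f ms (%.0f%% of one period)"
      % (fft_size, fft_time * 1e3, 100.0 * fft_time / callback_period))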

If the goal is to display the results I'd just shovel the audio data
into a buffer and FFT it in a different thread. That way if the FFT
thread falls behind it can drop frames without upsetting the audio callback.

The best discussion I've seen about doing FFTs synchronously is in this
paper:

Implementing Real-Time Partitioned Convolution Algorithms on
Conventional Operating Systems
Eric Battenberg, Rimas Avizienis
DAFx2011

Google says it's available here:

http://cnmat.berkeley.edu/system/files/attachments/main.pdf

If anyone has other references for real-time FFT scheduling I'd be
interested to read them.

Cheers,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-05 Thread Ross Bencina

Hi Ethan,

On 6/02/2015 1:17 PM, Ethan Duni wrote:
 There is just no way A/B testing on a sample of listeners,
 at loud, but still realistic listening levels, would show that
 dithering to 16bit makes a difference.

 Well, can you refer us to an A/B test that confirms your assertions?
 Personally I take a dim view of people telling me that a test would
 surely confirm their assertions, but without actually doing any test.

Here's a double-blind A/B/X test that indicated no one could hear the 
difference between 16 and 24 bit. 24-bit is better than 16-bit with 
dithering so maybe you can extrapolate.


AES Journal 2007 September, Volume 55 Number 9: Audibility of a 
CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback

E. Brad Meyer and David R. Moran

I found this link with google:
http://drewdaniels.com/audible.pdf

The test results show that the CD-quality A/D/A loop was undetectable 
at normal-to-loud listening levels, by any of the subjects, on any of 
the playback systems. The noise of the CD-quality loop was audible only 
at very elevated levels.

Cheers,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-02 Thread Ross Bencina

On 2/02/2015 9:45 PM, Vadim Zavalishin wrote:
 One should be careful not to mix up two different requirements:

 - time-varying stability of the filter
 - the minimization of modulation artifacts

True.

My logic was thus: One way to minimise artifacts is to band-limit the 
coefficient changes. This more-or-less requires audio-rate coefficient 
update. Therefore a time-varying stable filter is required.


Side note: band limiting the coefficient changes doesn't necessarily 
require recomputing them. Some structures will be stable for ramped 
changes of the raw coefficients.





a filter based on the 2nd order resonating Jordan normal cell, which is
effectively just implementing a decaying complex exponential:

x[n+1] = A*(x[n]*cos(a)-y[n]*sin(a))
y[n+1] = A*(x[n]*sin(a)+y[n]*cos(a))


I believe that's also called coupled form, see e.g. 
http://www.dsprelated.com/showarticle/183.php
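
For what it's worth, a minimal numpy sketch of that recursion with the 
angle updated every sample (my own illustration, not from the original 
post; with A = 1 each step is a pure rotation, so the state magnitude 
survives arbitrarily fast modulation):

import numpy as np

def coupled_form_sweep(freqs_hz, sample_rate=48000.0, A=1.0):
    # The recursion quoted above, with the angle 'a' recomputed per sample.
    x, y = 1.0, 0.0
    out = np.empty(len(freqs_hz))
    for n, f in enumerate(freqs_hz):
        a = 2.0 * np.pi * f / sample_rate
        c, s = np.cos(a), np.sin(a)
        x, y = A * (x * c - y * s), A * (x * s + y * c)
        out[n] = y
    return out

sweep = np.linspace(200.0, 2000.0, 48000)   # one-second 200 Hz -> 2 kHz sweep
sig = coupled_form_sweep(sweep)
print(np.max(np.abs(sig)))                  # stays <= 1 despite per-sample changes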


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-01 Thread Ross Bencina

Hello Alan,

On 1/02/2015 4:51 AM, Alan O Cinneide wrote:
 Dear List,

 While filtering an audio stream, I'd like to change the filter's
 characteristics.

You didn't say what kind of filter, so I'll assume a bi-quad section.


 In order to do this without audible artifacts, I've been filtering a
 concurrent audio buffer (long enough so that the initial transient
 behaviour peeters out) and then crossfading.

 I can't believe that this is the most efficient design.  Can someone
 explain to me a better implementation or give me a reference which
 discusses such?

Cross-fading is not entirely unreasonable. Another option is to 
band-limit (smooth) the parameter change. For that you need a filter 
that is stable for audio-rate time-varying parameter change (not many are).


Giulio's suggestions are good ones. Here's a recent paper that surveys a 
range of approaches:


Wishnick, A. (2014) “Time-Varying Filters for Musical Applications” 
Proc. of the 17th Int. Conference on Digital Audio Effects (DAFx-14), 
Erlangen, Germany, September 1-5, 2014.


Available here:
http://www.dafx14.fau.de/papers/dafx14_aaron_wishnick_time_varying_filters_for_.pdf

Here is a practical implementation of a time-variant stable filter:

http://www.cytomic.com/files/dsp/SvfLinearTrapezoidalSin.pdf
see also:
http://www.cytomic.com/technical-papers

Hope that helps,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sallen Key with sin only coefficient computation

2014-12-21 Thread Ross Bencina

On 21/12/2014 5:12 PM, Andrew Simper wrote:

and all the other papers (including the SVF version of the same thing I did
a while back) are always available here:

www.cytomic.com/techincal-papers


Actually:

http://www.cytomic.com/technical-papers
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] magic formulae

2014-11-27 Thread Ross Bencina

On 28/11/2014 12:54 AM, Victor Lazzarini wrote:

Thanks everyone for the links. Apart from an article in arXiv written by 
viznut, I had no
further luck finding papers on the subject (the article was from 2011, so I 
thought that by
now there would have been something somewhere, beyond the code examples and
overviews etc.).


What exactly are you looking for Victor?

Perhaps this stuff had its peak in the 80s in video games (maybe there 
is an article in one of the Audio Anecdotes books, if I remember correctly).


There was a discussion on ACMA-L a while back about it being done in 
the 70s too (with discrete digital circuits: counters, gates, etc.).


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Ross Bencina

Hi Rich,

On 22/06/2014 1:09 AM, Rich Breen wrote:

Just as a data point; Been measuring and dealing with converter and
DSP throughput latency in the studio since the first digital machines
in the early '80's;


Out of interest, what is your latency measurement method of choice?



my own experience is that anything above 2 or 3
msec of throughput latency starts to become an issue for professional
musicians; 5 msec becomes very noticable on headphones, and above
6msec is not usable.


Do you think they notice below 2ms?

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Ross Bencina

On 19/06/2014 4:52 PM, Rohit Agarwal wrote:

In terms of computational complexity, most of the complexity is in
modelling, tuning the parameters to fit data. However, once you're done
with this offline task, running the result should not be that heavy. That
process should be real-time on new CPUs. Your latency should then be just
the buffering which should get you down to 25 ms.


Sure, but the point Sampo was making is that 25ms is one or two orders 
of magnitude too large for musicians. Especially musicians who have the 
option of using analog gear with zero latency.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-19 Thread Ross Bencina

On 19/06/2014 7:09 PM, Rohit Agarwal wrote:

Enlighten me, does that mean faster tempo or is 10% too much delay for
that?


I think that this conversation is at risk of going off the rails. Make 
sure that you're asking the right question.


There are a number of different ways that delays can impact a musical 
performance, including:


- perceived impact of time alignment on groove as heard by external 
listeners


- perceived impact of time jitter (Nigel's D50 example) for performers 
and listeners.


- effect of self-delay and inter-musician delay on the musical performers.

I think the last point is the one that's relevant here.

Talking about time alignment in techno is not really relevant if we're 
talking about valve amps for guitarists.


Clearly a 25ms delay post-applied to the rhythm guitar is going to mess 
with the groove in almost any musical setting. But the real question is 
whether it's going to mess with the guitarist's performance of the 
groove if they can hear the delay while they are playing (i.e. can they 
compensate to recreate the same groove). And then, when comparing to 
zero latency hardware, at what point does it become trivial to 
compensate (i.e. doesn't introduce any additional cognitive burden).


I believe that various studies have put the playable latency threshold 
below 8ms. (Sorry, don't have time to look this up but I think that one 
example is in one of Alex Carot's network music papers).


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Nyquist–Shannon sampling theorem

2014-03-27 Thread Ross Bencina

On 27/03/2014 3:23 PM, Doug Houghton wrote:

Is that making any sense? I'm struggling with the fine points.  I bet
this is obvious if you understand the math in the proof.


I'm following along, vaguely.

My take is that this conversation is not making enough sense to give you 
the certainty you seem to be looking for.


Your question seems to be very particular regarding specifics of the 
definitions used in a theorem, but you have not quoted the theorem or 
the definitions.


Most of the answers so far seem to be talking about interpretations and 
consequences of the theorem.


May I suggest: quote the version of the theorem that you're working from 
and the definitions of terms used that you're assuming, and we can go 
from there. It may also be helpful to see the proof that you are working 
from, perhaps someone can help unpack that.


Here's a version of the theorem that you may or may not be happy with:

SHANNON-NYQUIST SAMPLING THEOREM [1]

If a function x(t) contains no frequencies higher than B hertz,
it is completely determined by giving its ordinates at a series
of points spaced 1/(2B) seconds apart.

---


I'm loath to contribute my limited interpretation, but let me try (feel 
free to ignore or ideally, correct me):


x(t) is an infinite duration continuous-time function.

a frequency is defined to be an infinite duration complex 
sinusoid with a particular period.


The theorem is saying something about an infinite duration continuous 
time signal x(t), and expressing a constraint on that signal in terms of 
the signal's frequency components.


To be able to talk about the frequency components of x(t) we can use a 
continuous Fourier representation of the signal, i.e. the Fourier 
transform [2], say x'(w), a complex valued function, w is a (continuous) 
real-valued frequency parameter:


x'(w) = integral from t = -inf to +inf of x(t)*e^(-2*pi*i*t*w) dt

The Fourier transform can represent any signal that is integrable and 
continuous (I deduce this from the invertibility of the Fourier 
transform [3]). One consequence of this is that any practical analog 
signal x(t) may be represented by its Fourier transform.


The theorem expresses a constraint on the frequencies for which the 
Fourier transform may be non-zero. Specifically, it requires that 
x'(w) = 0 for all w < -N and all w > N, where N is the Nyquist frequency.


Note specifically that we are dealing with the continuous Fourier 
transform, therefore there is no requirement for x(t) to be periodic or 
of finite temporal extent.


The theorem also does not say anything about the time extent of the 
discrete time signal (it is assumed to be infinite too).


That's my take on it anyway.

Ross.

[1] 
http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Nyquist%E2%80%93Shannon_sampling_theorem.html


[2] http://en.wikipedia.org/wiki/Fourier_transform

[3] http://en.wikipedia.org/wiki/Fourier_inversion_formula
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Inquiry: new systems for Live DSP

2014-03-23 Thread Ross Bencina

On 15/03/2014 1:46 AM, Richard Dobson wrote:

But portaudio only states the software i/o buffer latency, it knows
nothing directly of internal codec latencies. You would need to subtract
the (two-way?) buffer latency portaudio reports, and then measure or
compute how much of the remainder is down to the dac filtering. I am not
entirely sure how accurate that portaudio estimate is anyway - it's an
area of portaudio that may not be entirely bug-free. It has been
discussed on the portaudio list fairly recently IIRC.


Possible bugs notwithstanding, there are indeed limits to what 
PortAudio can report with respect to latency.


However the following is not strictly correct: "portaudio only states 
the software i/o buffer latency"


In summary, PortAudio reports the sum of:

- any buffering latency introduced by PortAudio (often this is zero)

AND

- any buffering latency that can be inferred to be introduced by the 
native API.


AND/OR

- any latency (of any kind, not just buffering) that is reported by the 
native API and/or the driver.


Some native APIs (CoreAudio, ASIO) provide mechanisms for the audio 
driver to report detailed latency information for components below the 
client software interface. In general PortAudio aims to surface that 
information to the PortAudio client.


Many of the bugs in PA were fixed (hence the discussions), and those 
that remain I think relate to imperfect inference with native APIs that 
don't explicitly report latency information, or to obscure edge cases 
(multiple-device full duplex I/O on OS X comes to mind).


As has been pointed out elsewhere, if an audio interface has a digital 
interface, its driver can't report the latency introduced by arbitrary 
ADC/DACs connected to the digital interface. As far as I know, it is 
largely undocumented/undefined whether the latency reported by an ASIO 
driver includes the hardware ADC/DAC delays for hardware that has analog 
IO. CoreAudio reports multiple latency components, and may well go all 
the way to reporting converter latency (I don't remember). I'm not sure 
about ALSA.


Ross.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-04 Thread Ross Bencina

On 5/03/2014 7:56 AM, Ethan Duni wrote:

Seems like somebody somewhere should have already thought
through the problem of matching a single biquad stage to an arbitrary
frequency response - anybody?


Pretty sure that the oft-cited Knud Bank Christensen paper does LMS fit 
of a biquad over an arbitrary sampled frequency response. In the paper 
they match a low-order analog filter response curve, but from memory the 
technique breaks the target response into frequency bands, as such it 
should be usable for an arbitrary response. Maybe that's not the most 
efficient approach for a real-time situation, but it will give you an 
optimal fit.


Knud Bank Christensen, A Generalization of the Biquadratic Parametric 
Equalizer, AES 115, 2003, New York.


Ross.


Re: [music-dsp] Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-04 Thread Ross Bencina

On 5/03/2014 2:27 PM, Sampo Syreeni wrote:

Pretty sure that literature has to contain the relevant algorithms if
used with just a single resonance.


I never looked at rational function fitting, but this would be easy 
enough to try:


http://www.mathworks.com.au/help/rf/rationalfit.html

The link cites:

B. Gustavsen and A. Semlyen, Rational approximation of frequency domain 
responses by vector fitting, IEEE Trans. Power Delivery, Vol. 14, No. 
3, pp. 1052–1061, July 1999.


Ross.


Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Ross Bencina

On 28/02/2014 12:16 AM, Michael Gogins wrote:

For straight sample playback, the C library FluidSynth, you can use it via
PInvoke. FluidSynth plays SoundFonts, which are widely available, and there
are tools for making your own SoundFonts from sample recordings.

For more sophisticated synthesis, the C library Csound, you can use it via
PInvoke. Csound is basically as powerful as it gets in sound synthesis.
Csound can use FluidSynth. Csound also has its own basic toolkit for simple
sample plaback, or you can build your own more complex samplers using
Csound's orchestra language.


If I understand correctly the OP wants a way to host Kontakt and other 
commercial sample players within a C# application, not to code his own 
sample player or use something open source.


The question is the quickest path to hosting pre-existing VSTis in C# 
and sending them MIDI events.


Ross.


Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Ross Bencina

On 28/02/2014 2:06 PM, Michael Gogins wrote:

I think the VSTHost code could be adapted. It is possible to mix managed
C++/CLI and unmanaged standard C++ code in a single binary. I think this
could be used to provide a .NET wrapper for the VSTHost classes that C#
could use.


I agree.

Maybe I missed something, but which VSTHost classes are you referring to?

Ross.


Re: [music-dsp] Hosting playback module for samples

2014-02-26 Thread Ross Bencina

Hi Mark,

I'm not really sure that I understand the problem. Can you be more 
specific about the problems that you're facing?


Personally I would avoid managed code for anything real-time (ducks).

You'll need to build a simple audio engine (consider PortAudio or the 
ASIO SDK). And write some VSTi hosting code using the VST SDK. It's this 
last bit that will require some work. But if you limit yourself to a 
small number of supported plugins to begin with it should not be too 
hard. MIDI scheduling in a VSTi is not particularly challenging -- the 
plugins do the sub-buffer scheduling, you just need to put together a 
frame of MIDI events for each audio frame.


If there's any kind of synchronisation with the outside world things 
will get trickier, but if you can clock the MIDI time off the 
accumulated sample position it's not hard.


Other details:

You're going to need some kind of lock-free communication mechanism with 
your audio callback (e.g. some kind of FIFO). I guess the main 
approaches would be to either (A) schedule MIDI events ahead of time 
from your C# code and use a priority queue (Knuth Heap is easy and 
relatively safe for real-time) in the audio thread to work out when to 
schedule them; or (B) maintain the whole MIDI sequence in a vector and 
just play through it from the audio thread. Then you need a mechanism to 
update the sequence when it changes (just swap in a new one?).
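
For what it's worth, a bare-bones sketch of option (A) might look something 
like this (the event type and the 64-bit sample clock are my own invention; 
std::priority_queue is used here only for brevity -- in practice you would 
pre-allocate, or roll a fixed-capacity heap, so the audio thread never 
allocates):

#include <cstdint>
#include <queue>
#include <vector>

// A MIDI message stamped with an absolute sample time (hypothetical type).
struct TimedMidiEvent {
    uint64_t sampleTime;
    unsigned char status, data1, data2;
};

struct LaterFirst {
    bool operator()(const TimedMidiEvent &a, const TimedMidiEvent &b) const {
        return a.sampleTime > b.sampleTime;   // min-heap keyed on sampleTime
    }
};

// Owned by the audio thread; events arrive via a lock-free FIFO (not shown)
// and are pushed into this heap at the top of each callback.
std::priority_queue<TimedMidiEvent, std::vector<TimedMidiEvent>, LaterFirst> pending;

// Called once per audio callback to assemble the MIDI frame for this buffer.
void gatherEventsForBuffer(uint64_t bufferStart, int frames,
                           std::vector<TimedMidiEvent> &frameEvents)
{
    frameEvents.clear();
    while (!pending.empty() && pending.top().sampleTime < bufferStart + frames) {
        frameEvents.push_back(pending.top()); // in-buffer offset = sampleTime - bufferStart
        pending.pop();
    }
}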


Cheers,

Ross



On 27/02/2014 3:56 AM, Mark Garvin wrote:

I realize that this is slightly off the beaten path for this group,
but it's a problem that I've been trying to solve for a few years:

I had written software for notation-based composition and playback of
orchestral scores. That was done via MIDI. I was working on porting
the original C++ to C#, and everything went well...except for playback.
The world has changed from MIDI-based rack-mount samplers to computer-
based samples played back via hosted VSTi's.

And unfortunately, hosting a VSTi is another world of involved software
development, even with unmanaged C++ code. Hosting with managed code
(C#) should be possible, but I don't think it has been done yet. So
I'm stuck. I've spoken to Marc Jacobi, who has a managed wrapper for
VST C++ code, but VSTi hosting is still not that simple. Marc is very
helpful and generous, and I pester him once a year, but it remains an
elusive problem.

It occurred to me that one of the resourceful people here may have
ideas for working around this. What I'm looking for, short term, is
simply a way to play back orchestral samples or even guitar/bass/drums
as a way of testing my ported C# code. Ideally send note-on, velocity,
note-off, similar to primitive MIDI. Continuous controller for volume
would be icing.

Any ideas, however abstract, would be greatly appreciated.

MG
NYC





Re: [music-dsp] Hosting playback module for samples

2014-02-26 Thread Ross Bencina

Hello Mark,

On 27/02/2014 3:52 PM, Mark Garvin wrote:

Most sample banks these days seem to be in NKI format (Native
Instruments). They have the ability to map ranges of a keyboard into
different samples so the timbres don't become munchkin-ized or
Vader-ized. IOW, natural sound within each register.

A playback engine is typically something like Native Instruments'
Kontakt, which is 'hosted' by the main program (my composition
software, for ex). then NI Kontakt can load up NKI files and
deliver sound when it receives events.

The whole process of linking events, etc. is usually what stymies
programmers who are new to VST-based programming. And even many who are
familiar.


Yes the VST SDK is not the best documented in the world.



Personally I would avoid managed code for anything real-time (ducks).


Actually, C# can be faster than pre-compiled code!


Speed has nothing to do with real-timeness.

Real-time is all about deterministic timing. Runtime-JIT and garbage 
collection both mess with timing. It may be that CLR always JITs at load 
time. That doesn't save you from GC (of course there are ways to avoid 
GC stalls in C#, but if you just used a deterministic language this 
wouldn't be necessary).




You're  need to build a simple audio engine (consider PortAudio or the
ASIO SDK). And write some VSTi hosting code using the VST SDK. It's this
last bit that will require some work. But if you limit yourself to a
small number of supported plugins to begin with it should not be too
hard. MIDI scheduling in a VSTi is not particularly challenging -- the
plugins do the sub-buffer scheduling, you just need to put together a
frame of MIDI events for each audio frame.


That's inspiring. I'm not sure that this is done in the same way as a
regular plugin though.


I'm not sure what you mean by a regular plugin.

I have a commercial VST host on the market so I do know what I'm talking 
about.




And I believe it's pretty difficult to host a VSTi in managed code. That
is pretty much the crux of the problem right there. I've heard of a lot
of people who started the project but were never able to get it off the
ground.


So you're insisting on using C# for real-time audio? As noted above I 
think this is a bad idea. There is no rational reason to use C# in this 
situation.


Just use unmanaged C++ for this part of your program. Things will go 
much better for you. Not the least because both real-time audio APIs and 
the VST SDK are unmanaged components.




If there's any kind of synchronisation with the outside world things
will get trickier, but if you can clock the MIDI time off the
accumulated sample position it's not hard.


I could do without sync to external for now.


... I guess the main
approaches would be to either (A) schedule MIDI events ahead of time
from your C# code and use a priority queue (Knuth Heap is easy and
relatively safe for real-time) in the audio thread to work out when to
schedule them; or (B) maintain the whole MIDI sequence in a vector and
just play through it from the audio thread. Then you need a mechanism to
update the sequence when it changes (just swap in a new one?).


The internals of a VSTi host are beyond me at present. I was hoping
for some simple thing that could be accessed by sending MIDI-like events
to a single queue.


I'm sure there are people who will licence you something but I don't 
know of an open source solution. JUCE might have something maybe?


Ross.



Re: [music-dsp] Are natural sounds of minimum phase?

2013-12-10 Thread Ross Bencina

On 11/12/2013 4:29 PM, Sol Friedman wrote:

minimum phase would be a likely candidate


Is minimum-phase a well defined property of non-linear time-varying systems?

Ross.


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Ross Bencina

Hi Max,

Another data point which would seem to support your take on things. This 
is the first monograph in Sound Synthesis that I'm aware of that 
specifically addresses finite difference methods (other examples welcome):


Stefan Bilbao (2009), Numerical Sound Synthesis: Finite Difference 
Schemes and Simulation in Musical Acoustics


http://www.amazon.com/Numerical-Sound-Synthesis-Difference-Simulation/dp/0470510463

Not that there are that many books on sound synthesis...

Ross.


On 15/11/2013 1:14 AM, Max Little wrote:

Thanks Ross.

Good point about the practical utility of implicit FD and increasing
computational power. There's also all the issues about uniqueness of
implicit FDs arising from nonlinear IVPs, and then there's stability,
convergence, whether the resulting method is essentially
non-oscillatory etc. I suppose there are additional issues to do with
frequency response which may be what matters most in audio DSP.

Max


On 14 November 2013 14:06, Ross Bencina rossb-li...@audiomulch.com wrote:

On 14/11/2013 11:41 PM, Max Little wrote:


I may have misread, but the discussion seems to suggest that this
discipline is just discovering implicit finite differencing! Is that
really the case? If so, that would be odd, because implicit methods
have been around for a very long time in numerical analysis.



Hi Max,

I think you would be extrapolating too far to say that a few people tossing
around ideas on a mailing list are representative of the trends of an entire
discipline. On this mailing list I would struggle to guess which "this
discipline" you are referring to. Suffice to say that a lot of the people
discussing things in this thread are developers not research scientists.

Some practitioners are just discovering new practicable applications of
implicit finite differencing in the last 10 years or so. One good reason for
this is that in the past these techniques were completely irrelevant because
they were too expensive to apply in real time at the required scale (100+
synthesizer voices, 100+ DAW channels). It also seems that the market has
changed such that people will pay for a monophonic synth that burns a whole
i7 CPU core.

Cheers,

Ross.








Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Ross Bencina

On 12/11/2013 7:40 PM, Tim Blechmann wrote:

some real-world benchmarks from the csound community imply a performance
difference of roughly 10% [1].


Csound doesn't have a facility for running multiple filters in parallel 
though, does it? Not even 2 in parallel for stereo.


4 biquads in parallel can be useful if you have a higher-order filter 
you can factor (granted not so useful for non-linear stuff).
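
To make that concrete: here's a rough sketch (mine, nothing to do with 
Csound's internals) of four independent biquads running one per SSE lane, 
in transposed direct form II:

#include <xmmintrin.h>  // SSE

struct Biquad4 {
    __m128 b0, b1, b2, a1, a2;   // coefficients, one filter per lane
    __m128 z1, z2;               // transposed direct form II state
};

// Process one sample through 4 independent biquads (one per lane).
static inline __m128 biquad4_tick(Biquad4 &f, __m128 x)
{
    __m128 y = _mm_add_ps(_mm_mul_ps(f.b0, x), f.z1);
    f.z1 = _mm_sub_ps(_mm_add_ps(_mm_mul_ps(f.b1, x), f.z2),
                      _mm_mul_ps(f.a1, y));
    f.z2 = _mm_sub_ps(_mm_mul_ps(f.b2, x), _mm_mul_ps(f.a2, y));
    return y;
}

If you need a serial cascade instead, one known trick is to rotate each 
lane's output into the next lane's input, at the cost of one sample of 
extra delay per section -- or just keep the lanes for four unrelated 
channels.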


Ross.


Re: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-10 Thread Ross Bencina

With reference to my previous message:

It looks like there is a change of basis matrix T that can be used to 
satisfy Laroche's Criterion 2 (time varying BIBO stability at full audio 
rate), at least for k > 0.


T:

[ 0, 1]
[ 1, -1/1 ]

This matrix requires k > 1/1, but it seems that the lower bound on k 
can approach zero as the (2,2) entry approaches zero from below.


Hopefully I'm not imagining things.
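
For anyone following along who would rather eyeball this numerically than 
symbolically, here's a quick-and-dirty scan (my own sketch, no substitute 
for the analysis in the worksheet) of the induced 2-norm of P over a grid 
of g and k values, using the P given below:

#include <cmath>
#include <cstdio>

// Induced 2-norm (largest singular value) of a 2x2 matrix [m00 m01; m10 m11]:
// sqrt of the largest eigenvalue of M^T M.
static double norm2x2(double m00, double m01, double m10, double m11)
{
    double a = m00*m00 + m10*m10;     // (M^T M)(1,1)
    double b = m00*m01 + m10*m11;     // (M^T M)(1,2) == (2,1)
    double c = m01*m01 + m11*m11;     // (M^T M)(2,2)
    double mean  = 0.5 * (a + c);
    double delta = std::sqrt(0.25 * (a - c) * (a - c) + b * b);
    return std::sqrt(mean + delta);
}

int main()
{
    double worst = 0.0;
    for (double g = 0.001; g < 10.0; g *= 1.1) {
        for (double k = 0.01; k <= 2.0; k += 0.01) {
            double d = g*k + g*g + 1.0;
            double n = norm2x2(-(g*k + g*g - 1.0) / d, -(2.0*g) / d,
                                (2.0*g) / d,            (g*k - g*g + 1.0) / d);
            if (n > worst) worst = n;
        }
    }
    std::printf("max ||P|| over grid = %.12f\n", worst);
}

To test a candidate T you would scan the norm of T*P*inv(T) instead.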

Ross.



On 11/11/2013 2:58 AM, Ross Bencina wrote:

Hi Everyone,

I took a stab at converting Andrew's SVF derivation [1] to a state space
representation and followed Laroche's paper to perform a time varying
BIBO stability analysis [2]. Please feel free to review and give
feedback. I only started learning Linear Algebra recently.

Here's a slightly formatted html file:

http://www.rossbencina.com/static/junk/SimperSVF_BIBO_Analysis.html

And the corresponding Maxima worksheet:

http://www.rossbencina.com/static/junk/SimperSVF_BIBO_Analysis.wxm

I had to prove a number of the inequalities by cut and paste to Wolfram
Alpha, if anyone knows how to coax Maxima into proving the inequalities
I'm all ears. Perhaps there are some shortcuts to inequalities on
rational functions that I'm not aware of. Anyway...

The state matrix X:

[ic1eq]
[ic2eq]

The state transition matrix P:

[-(g*k+g^2-1)/(g*k+g^2+1), -(2*g)/(g*k+g^2+1) ]
[(2*g)/(g*k+g^2+1),(g*k-g^2+1)/(g*k+g^2+1)]

(g > 0, 0 < k <= 2)

Laroche's method proposes two time varying stability criteria both using
the induced Euclidian (p2?) norm of the state transition matrix:

Either:

Criterion 1: norm(P) < 1 for all possible state transition matrices.

Or:

Criterion 2: norm(TPT^-1) < 1 for all possible state transition
matrices, for some fixed constant change of basis matrix T.

norm(P) can be computed as the maximum singular value or the positive
square root of the maximum eigenvalue of P.transpose(P). I've taken a
shortcut and not taken square roots since we're testing for norm(P)
strictly less than 1 and the square root doesn't change that.

 From what I can tell norm(P) is 1, so the trapezoidal SVF filter fails
to meet Criterion 1.

The problem with Criterion 2 is that Laroche doesn't tell you how to
find the change of basis matrix T. I don't know enough about SVD,
induced p2 norm or eigenvalues of P.P' to know whether it would even be
possible to cook up a T that will reduce norm(P) for all possible
transition matrices. Is it even possible to reduce the norm of a
unit-norm matrix by changing basis?

 From reading Laroche's paper it's not really clear whether there is any
way to prove Criterion 2 for a norm-1 matrix. He kind-of side steps the
issue with the norm=1 Normalized Ladder and ends up proving that
norm(P^2) < 1. This means that the Normalized Ladder is time-varying BIBO
stable for parameter update every second sample.

Using Laroche's method I was able to show that Andrew's trapezoidal SVF
(state transition matrix P above) is also BIBO stable for parameter
update every second sample. This is the final second of the linked file
above.

If anyone has any further insights on Criterion 2 (is it possible that T
could exist?) I'd be really interested to hear about it.

Constructive feedback welcome :)

Thanks,

Ross


[1] Andrew Simper trapazoidal integrated SVF v2
http://www.cytomic.com/files/dsp/SvfLinearTrapOptimised2.pdf

[2] On the Stability of Time-Varying Recursive Filters
http://www.aes.org/e-lib/browse.cfm?elib=14168




Re: [music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-10 Thread Ross Bencina

Hi Ezra,

A few comments:

On 11/11/2013 3:19 PM, Ezra Buchla wrote:

there seems to be some concern about distortion introduced by the
trapezoidal integration. i've tried the algo in both fixed 32 and
float, and it seems to sound and look ok to me but i have not done a
proper analysis either numerically or physically. has anyone else?


All single-step digital integration procedures will necessarily diverge 
from the analog integration.


Here's a graph of the amplitude response of various integrators:

https://docs.google.com/document/d/1O_38tHxkMIrSScLXULuzJAmNxYj01PxEKw50hhY4kkU/edit?usp=sharing

Trapezoidal is good because it is stable, but as you can see it has 
high-frequency roll-off compared to some others. As noted earlier, 
Simpson's rule is not usable if you want your cutoff to go near Nyquist, 
but it is more accurate up to pi/4, which is fine if you're using 2x 
oversampling.


Trapezoidal is equivalent to the warping introduced by the BLT (bilinear 
transform) I believe. One way to think about it is that a lowpass filter 
in the analog domain has gain 0 at infinite frequency, and infinite 
frequency maps to Nyquist after BLT/Trapezoidal -- so you will get some 
warping.



theo vereslt seems to think the distortion would be unacceptable, and
so i tend to believe that in a really high-fidelity environment it
could be an issue.


The usual suggestion is to oversample by 2x. Many implementations do 
this and it has been discussed here many times over the years. It is 
possible to do other tricks to match the amplitude response at 1x but 
the phase response can get screwed up and you really want something 
close to analog for both amplitude and phase responses.


My take is that for synthesis it's all about your aesthetic -- if you 
want it to sound like an analog synth that's different from if you want 
it to sound good. There are whole music genres based on Commodore 64 SID 
chips or distorted noise... so anything goes really.




additionally, he seems to imply (maybe? i'm not
sure) that not only is the optimization not worth it, but you can't
adequately suppress artifacts during coefficient change at all (which
i've always understood as the main raison for the digital SVF) without
analog components (referenced switched caps as variable R's, an
interesting trick), or more bandlimiting somewhere, or something.


I think the main point in the current discussion is that if you want to 
do *audio rate* modulation (think filter FM) then most tricks won't 
save you. You need a filter that can be modulated at audio rate and 
doesn't introduce spurious artifacts. Of course audio-rate FM will 
alias, but that's a separate issue that can be addressed by oversampling 
etc.



the KeepTopology paper by vadim zavalishin (http://t.co/SVJp7iAgqb)
proposes modelling the SVF with digital integrators. it seems like
these would follow the behaviors of the caps pretty closely and be
amenable to parameter change at the cost of some expense of course...'


All filters use digital integrators. There are just a lot of different 
types of integrator.




wouldn't that satisfy the purists? this last exchange seems to
indicate not... but i'm not sure why. again i should just try this
myself but it's a bit more work! ha.


I can recommend the following book as a gentle and no-hype low-math 
introduction to numerical integration. Written by a guy who cut his 
teeth computing trajectories on the Apollo project:


Math Toolkit for real-time programming
Jack W. Crenshaw
CMP Books.

Cheers,

Ross.






thank you all for the discussion and sorry for my noise

ezra b




Re: [music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-10 Thread Ross Bencina

On 11/11/2013 12:21 PM, robert bristow-johnson wrote:

but you cannot define your current output sample in terms of the
current output sample.

But that, with all due respect, is what has been done for quite a while.


it's been reported or *reputed* to be done for quite a while.

but when the smoke and dust clear, logic still prevails.


I presume Urs (hi Urs!) is talking about using implicit solvers. Which 
some people have been using for a while.


I guess it depends how you define current output sample. Is it the 
trial output sample that you're refining, or the sample you actually output?


Seems logic could go either way.

Ross.


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-07 Thread Ross Bencina

On 8/11/2013 4:29 AM, Theo Verelst wrote:

Fine. He insulted run of the mill academic EE insights from decades ago,
i merely stated facts, which should be respected, but here are still not.

The theory is quite right, and I've taken the effort of correcting a lot
of misinterpretations. I suppose that isn't popular.


Hi Theo,

I read your initial response to Andrew and it seems to me that it is 
*you* who is ignoring what he posted.


Of course Andrew is making assumptions and of course this is all old 
news from an EE methods standpoint. But I think Andrew has been very 
clear on his sources.


What you seem to be missing are the benefits of what Andrew is doing. To 
me these include:


1. Novelty in the world of open-access music-dsp algorithms that 
people can actually read about, use and learn from. Many people here 
don't have EE degrees -- and they certainly don't need one to follow the 
maths in Andrew's paper. Wasn't it you who just the other day criticised 
someone else for not providing a theoretical basis for their work?


2. Numerical performance even when using 32 bit floats (did you look at 
Andy's graphs of numerical performance vs DF1, DF2?).


3. Topology preservation: if you want to emulate a non-linear analog SVF 
without moving up to numerical integration techniques Andy's filter 
allows simple introduction of *approximate* static non-linearities. This 
is also generally useful for efficient implementation of musical 
filters, that often include static nonlinearities.


4. Stability under audio-rate time-varying coefficients. We recently 
discussed that you don't get this with "Dattorro approved" DF-1 or DF-2; 
see Laroche's JAES paper on BIBO stability for details. Sure you get it 
with Lattice but that doesn't give you topology preservation if your 
source model is an SVF.


Each of these points alone is interesting. When taken together I think 
that what Andy has posted is a really a useful contribution. It has got 
a better result than the status-quo. Personally I don't think this is 
really about the coefficient calculations, which I agree can be unified 
with higher-end s-plane/z-plane theory, it's about the combination of 
benefits above.


In light of all this I really fail to see how your criticisms are even 
valid, let alone useful.


Now, I do have one thing I would like to see: and that is a mathematical 
proof that point (4) above is actually true for this topology. Ever 
since I read the Laroche BIBO paper it scared the crap out of me to be 
modulating any IIR filter at audio rate without a trusted analysis.


My 2 cents,

Ross.


Re: [music-dsp] IIR Coefficient Switching Glitches

2013-11-03 Thread Ross Bencina



On 3/11/2013 3:22 PM, Laurent de Soras wrote:

Chris Townsend wrote:


Any ideas?  Recommendations?


Probably this:
http://cytomic.com/files/dsp/SvfLinearTrapOptimised.pdf


Consider ramping interpolated coefficients at audio rate to smooth out 
parameter changes. I'm pretty sure that Andy's SVF is stable with audio 
rate coefficient modulation.


Consider interpolating something closer to cf and q rather than 
interpolating the raw coefficients.
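
From memory, the per-sample core of Andy's linear trap SVF looks roughly 
like the sketch below, with the "musical" parameters (here g and k, but it 
could equally be cutoff and Q) ramped each sample and a1/a2/a3 derived from 
them. Treat the cytomic PDF as authoritative; this is a paraphrase and the 
details may be off:

// Rough paraphrase of the cytomic linear trap SVF -- see the PDF for the real thing.
struct TrapSvf {
    double ic1eq = 0.0, ic2eq = 0.0;   // integrator states
    double g = 0.0, k = 2.0;           // g = tan(pi*fc/fs), k ~ 1/Q (or 2 - 2*res)

    void tick(double v0, double gTarget, double kTarget,
              double &low, double &band, double &high)
    {
        const double smooth = 0.001;   // arbitrary per-sample ramp factor
        g += smooth * (gTarget - g);
        k += smooth * (kTarget - k);

        double a1 = 1.0 / (1.0 + g * (g + k));
        double a2 = g * a1;
        double a3 = g * a2;

        double v3 = v0 - ic2eq;
        double v1 = a1 * ic1eq + a2 * v3;
        double v2 = ic2eq + a2 * ic1eq + a3 * v3;
        ic1eq = 2.0 * v1 - ic1eq;
        ic2eq = 2.0 * v2 - ic2eq;

        low = v2;
        band = v1;
        high = v0 - k * v1 - v2;
    }
};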


Ross.


Re: [music-dsp] note onset detection

2013-08-09 Thread Ross Bencina

Hi Robert,

I have a question: are you trying to output the pitch and note on/off 
information in a real-time streaming scenario with minimum delay? or is 
this an off-line process? My impression is that the MIR folk worry less 
about minimum-delay/causal processing than us real-time people.


Another comment:

On 9/08/2013 7:51 AM, robert bristow-johnson wrote:

On 8/8/13 2:23 PM, Ian Esten wrote:

Hmm. I think keeping it in the time domain would be difficult. Most
time domain pitch detection algs are error prone, which would yield
false positives.

You can of course tweak FFT size and your interval between FFTs. I
think it would be pretty reasonable to run your FFTs at a pretty
coarse time resolution and increase the density to get better time
resolution when you find a note on. That would be pretty low cost.


well, pitch detection is one thing, note onsets are another.  my
experience with pitch detectors came to an opposite conclusion.  unless
you're using the FFT to quick compute the autocorrelation (ACF), other
frequency-domain pitch detectors have (at least where i have seen them)
relied on energy at the fundamental.


I don't think that's true.

As far as I know the usual frequency-domain pitch tracking approach is 
based on performing MQ partial tracking combined with Maher and 
Beauchamp's two-way mismatch procedure:


http://ems.music.uiuc.edu/beaucham/papers/JASA.04.94.pdf

Ross.


Re: [music-dsp] note onset detection

2013-08-07 Thread Ross Bencina



On 7/08/2013 12:23 PM, charles morrow wrote:

Please explain your reference Roberts transcription notes for me.


Robert expressed the following requirement:

On 6/08/2013 6:01 AM, robert bristow-johnson wrote:
 the big problem i am dealing with is people singing or humming and
 changing notes.  i really want to encode those pitch changes as new
 notes rather than as a continuation of the previous note (perhaps
 adjusted with MIDI pitch bend messages).


My thought was that this kind of top down parsing requires some kind of 
musical knowledge that is not intrinsic to the signal. I.e. I don't 
think any kind of signal novelty function is going to tell you how to 
segment features at the note level of abstraction. I rather suspect 
that humans do it with reference to their knowledge of previously heard 
melodic fragments.


Ross.


Re: [music-dsp] note onset detection

2013-08-06 Thread Ross Bencina

On 7/08/2013 2:38 AM, Theo Verelst wrote:

I suppose in EE terms, if you know something about the waves you're
trying to detect


Strikes me that we are talking about perceptual note onset, not 
something you could define /easily/ in EE terms.


You would need a definition of note onset that somehow includes 
segmenting a hummed glide transition from one pitch to another according 
to Robert's transcription rules.


Ross.


Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread Ross Bencina

On 26/03/2013 4:55 PM, Alan Wolfe wrote:

I just wanted to chime in real quick to say that unless you need to go
multithreaded for some reason, you are far better off doing things
single threaded.

Introducing more threads does give you more processing power, but the
communication between threads isn't free, so it comes at a cost of
possible latency etc.  When you do it all on a single thread, that
disappears and you get a lot more bang for your buck.


Well that's true. But if you want to send MIDI events with millisecond 
resolution and your audio callback is running at a 5ms period with 90% 
processor load the only way you're going to get your 1ms MIDI 
granularity is with a separate thread that is either (1) running on 
another core or (2) is pre-empting the audio callback compute.


Ross.


Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread Ross Bencina

On 26/03/2013 5:28 PM, ChordWizard Software wrote:

Hi Ross,

Thanks, couple more questions then:


- There can be significant jitter in the time at which an audio callback
is called.


Can you define jitter?  Callbacks with different frame counts, or dropped 
frames?


If you call QueryPerformanceCounter() at the start of each callback you 
may notice significant deviation from the expected callback time if your 
expectation is that they period is constant.


Ideally you don't want this jitter to be added to other sources of jitter.




If the former, it would seem my proposed mechanism could adapt,

 as long as the callback is flexible about using each new frame
 count as the midi event horizon.

Callbacks with different framecounts is a separate but related matter.



The way I do it is to recover a time base for the audio callback using
some variant of a delay-locked-loop. Then use this time base to map
between sample time, midi beat time and system time (QPC time). Then I
schedule the MIDI events to be output at a future QPC time in another
thread (where the future is adjusted by the audio latency). In that
other thread I run a loop that polls every millisecond. With work you
can make it poll less often when there are no events.


Are these well-known techniques?  Don't suppose you could point me to any

 articles that might help me get my head around them?

I am not aware of a good clear overview. It's well known in the lore. 
CoreAudio does a lot of this under the hood I think. If you're on Mac 
you get relatively stable timestamps for free.



This is a good introduction to DLLs for buffer time smoothing. Although 
I have found the result of that filter to be numerically poor without 
extra tweaks (since time is always increasing you lose precision):


http://kokkinizita.linuxaudio.org/papers/usingdll.pdf
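
Paraphrasing the filter from that paper from memory (do check the PDF; my 
constants or update order may be slightly off), the per-callback update is 
only a few lines:

#include <cmath>

// Second-order delay-locked loop for smoothing audio callback timestamps,
// roughly after Fons Adriaensen's "Using a DLL to filter time".
struct TimeFilter {
    double b, c;      // loop coefficients
    double t0, t1;    // smoothed time of this callback, predicted next
    double nper;      // smoothed period estimate (seconds)

    void init(double now, double period, double bandwidthHz) {
        const double pi = 3.141592653589793;
        double omega = 2.0 * pi * bandwidthHz * period;
        b = std::sqrt(2.0) * omega;
        c = omega * omega;
        nper = period;
        t0 = now;
        t1 = now + period;
    }

    // Call once per callback with the raw (jittery) system time in seconds.
    // Returns the smoothed time for the start of this buffer.
    double update(double now) {
        double e = now - t1;      // prediction error
        t0 = t1;
        t1 += b * e + nper;
        nper += c * e;
        return t0;
    }
};

One of the "extra tweaks" alluded to above: keep the times in seconds 
relative to stream start, in doubles, rather than feeding in raw QPC 
counts; otherwise the ever-growing magnitudes eat your precision.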


Ross.



Regards,

Stephen Clarke
Managing Director
ChordWizard Software Pty Ltd
corpor...@chordwizard.com
http://www.chordwizard.com
ph: (+61) 2 4960 9520
fax: (+61) 2 4960 9580





Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Ross Bencina

On 15/03/2013 6:02 AM, jpff wrote:

Ross == Ross Bencinarossb-li...@audiomulch.com  writes:

  Ross I am suspicious about whether the mask is faster than the conditional for
  Ross a couple of reasons:
  Ross - branch prediction works well if the branch usually falls one way
  Ross - cmove (conditional move instructions) can avoid an explicit branch
  Ross Once again, you would want to benchmark.

I did the comparison for Csound a few months ago. The loss in using
modulus over mask was more than I could contemplate my users
accepting.  We provide both versions for those who want non-power-of-2
tables and can take the considerable hit (gcc 4, x86_64)


Hi John,

I just want to clarify whether we're talking about the same thing:

You wrote:

John The loss in using modulus over mask

Do you mean :

x = x % 255 // modulus
x = x & 0xFF // mask

?

Because I wrote:

Ross whether the mask is faster than the conditional

Ie:

x = x & 0xFF // mask
if( x == 256 ) x = 0; // conditional


Note that I am referring to the case where the instruction set has CMOVE 
(On IA32 it was added with Pentium Pro I think).


Ross.


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Ross Bencina

On 15/03/2013 7:27 AM, Sampo Syreeni wrote:

Quite a number of processors have/used to have explicit support for
counted for loops. Has anybody tried masking against doing the inner
loop as a buffer-sized counted for and only worrying about the
wrap-around in an outer, second loop, the way we do it with unaligned
copies, SIMD and other forms of unrolling?


Yes. I usually do that when I can. I posted code earlier in the thread.

Doesn't work so well if your phase increment varies in non-simple ways 
(ie FM).
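
For a fixed increment it's simple enough; a sketch (integer phase, no 
interpolation, my own illustration):

// Split each block into runs that cannot wrap, so the inner loop needs
// no wrap test. Assumes 0 <= phase < tableSize and 1 <= increment < tableSize.
static void renderOsc(float *out, int n, const float *table,
                      int tableSize, int &phase, int increment)
{
    while (n > 0) {
        int toWrap = (tableSize - phase + increment - 1) / increment; // ceil
        int run = toWrap < n ? toWrap : n;
        for (int i = 0; i < run; ++i) {       // no wrap check in here
            out[i] = table[phase];
            phase += increment;
        }
        if (phase >= tableSize)
            phase -= tableSize;               // wrap handled once per run
        out += run;
        n -= run;
    }
}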


Ross.


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-09 Thread Ross Bencina

On 10/03/2013 7:01 AM, Tim Goetze wrote:

[robert bristow-johnson]

On 3/9/13 1:31 PM, Wen Xue wrote:

I think one can trust the compiler to handle a/3.14 as a multiplication. If it
doesn't it'd probably be worse to write a*(1/3.14), for this would be a
division AND a multiplication.


there are some awful crappy compilers out there.  even ones that start from gnu
and somehow become a product sold for use with some DSP.

Though recent gcc versions will replace the above a/3.14 with a
multiplication, I remember a case where the denominator was constant
as well but not quite as explicitly stated, where gcc 4.x produced a
division instruction.


I don't think this has anything to do with crappy compilers

Unless multiplication by reciprocal gives exactly the same result -- 
with the same precision and the same rounding behavior and the same 
denormal behavior etc then it would be *incorrect* to automatically 
replace division with multiplication by the reciprocal.


So I think it's more a case of conformant compilers, not crappy compilers.

I have always assumed that it is not (in general) valid for the compiler 
to automatically perform the replacement; and the only reason we can get 
away with it is because we make certain simplifying assumptions.


Ross.


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-08 Thread Ross Bencina

On 9/03/2013 9:53 AM, ChordWizard Software wrote:

Maybe you can advise me on a related question - what's the best
approach to implementing attenuation?   I'm guessing it is not
linear, since perceived sound loudness has a logarithmic profile - or
am I confusing amplifier wattage with signal amplitude?


What I do is use a linear scaling value internally -- that's the number 
that multiplies the signal. Let's call it linearGain. linearGain has the 
value 1.0 for unity gain and 0.0 for infinite attenuation.


There is usually some mapping from userGain:

linearGain = f( userGain );

If userGain  is expressed in decibels you can use the standard decibel 
to amplitude mapping:


linearGain = 10 ^ (gainDb / 20.)


If your input is MIDI master volume you have to map from the MIDI value 
range to linear gain (perhaps via decibels). Maybe there is a standard 
curve for this?
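
In code terms, something like the following (the MIDI curve shown is just 
one common choice, roughly 40*log10(v/127) dB, i.e. a squared-amplitude 
curve; I'm not claiming it as the standard):

#include <cmath>

// Decibels to linear amplitude scaling factor.
inline float dbToLinear(float gainDb)
{
    return std::pow(10.0f, gainDb / 20.0f);
}

// One common mapping for MIDI volume/expression (0..127).
inline float midiVolumeToLinear(int value)
{
    if (value <= 0) return 0.0f;
    float x = value / 127.0f;
    return x * x;   // equivalent to dbToLinear(40 * log10(x))
}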



Note that audio faders are not linear in decibels either, e.g.:
http://iub.edu/~emusic/etext/studio/studio_images/mixer9.jpg

Ross.


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-08 Thread Ross Bencina



On 9/03/2013 2:55 PM, Ross Bencina wrote:

Note that audio faders are not linear in decibels either, e.g.:
http://iub.edu/~emusic/etext/studio/studio_images/mixer9.jpg


There is some discussion here:

http://www.kvraudio.com/forum/viewtopic.php?t=348751


Ross.


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-07 Thread Ross Bencina

Stephen,

On 8/03/2013 9:29 AM, ChordWizard Software wrote:

a) additive mixing of audio buffers b) clearing to zero before
additive processing


You could also consider writing (rather than adding) the first signal to 
the buffer. That way you don't have to zero it first. It requires having 
a write and an add version of your generators. Depending on your 
code this may or may not be worth the trouble vs zeroing first.


In the past I've sometimes used C++ templates to parameterise by the 
output operation (write/add) so you only have to write the code that 
generates the signals once.
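
Something along these lines (a toy illustration, not from any particular 
codebase):

// Output policies: overwrite the buffer, or mix into it.
struct WriteOp { static void apply(float *dst, float x) { *dst  = x; } };
struct AddOp   { static void apply(float *dst, float x) { *dst += x; } };

// The generator is written once; the output operation is a template parameter.
template <typename Op>
void renderSaw(float *out, int n, float &phase, float increment)
{
    for (int i = 0; i < n; ++i) {
        Op::apply(out + i, 2.0f * phase - 1.0f);   // naive (aliasing) sawtooth
        phase += increment;
        if (phase >= 1.0f) phase -= 1.0f;
    }
}

// renderSaw<WriteOp>(buf, n, phase, inc);  // first signal: no need to zero the buffer
// renderSaw<AddOp>(buf, n, phase, inc);    // later signals: mix in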


c) copying from one buffer to another

Of course you should avoid this wherever possible. Consider using 
(reference counted) buffer objects so you can share them instead of 
copying data. You could use reference counting, or just reclaim 
everything at the end of every cycle.



d) converting between short and float formats


No surprises to any of you there I'm sure.  My question is, can you
give me a few pointers about making them as efficient as possible
within that critical realtime loop?

For example, how does the efficiency of memset, or ZeroMemory,
compare to a simple for loop?


Usually memset has a special case for writing zeros, so you shouldn't 
see too much difference between memset and ZeroMemory.


memset vs simple loop will depend on your compiler.

The usual wisdom is:

1) use memset rather than writing your own. The library implementation 
will use SSE/whatever and will be fast. Of course this depends on the runtime.


2) always profile and compare if you care.



Or using HeapAlloc with the
HEAP_ZERO_MEMORY flag when the buffer is created (I know buffers
shouldn’t be allocated in a realtime callback, but just out of
interest, I assume an initial zeroing must come at a cost compared to
not using that flag)?


It could happen in a few ways, but I'm not sure how it *does* happen on 
Windows and OS X.


For example the MMU could map all the pages to a single zero page and 
then allocate+zero only when there is a write to the page.




I'm using Win32 but intend to port to OSX as well, so comments on the
merits of cross-platform options like the C RTL would be particularly
helpful.  I realise some of those I mention above are Win-specific.

Also for converting sample formats, are there more efficient options
than simply using

nFloat = (float)nShort / 32768.0


Unless you have a good reason not to, you should prefer multiplication by 
the reciprocal for the first one:


const float scale = (float)(1. / 32768.0);
nFloat = (float)nShort * scale;

You can do 4 at once if you use SSE/intrinsics.

 nShort = (short)(nFloat * 32768.0)

Float to int conversion can be expensive depending on your compiler 
settings and supported processor architectures. There are various ways 
around this.


Take a look at pa_converters.c and the pa_x86_plain_converters.c in 
PortAudio. But you can do better with SSE.
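
Purely as an illustration of the kind of thing I mean (SSE2, four samples 
at a time; this is not the PortAudio code):

#include <emmintrin.h>  // SSE2

// 4 x int16 -> float in [-1.0, 1.0)
inline void shortsToFloats4(const short *in, float *out)
{
    const __m128 scale = _mm_set1_ps(1.0f / 32768.0f);
    __m128i s16 = _mm_loadl_epi64((const __m128i *)in);             // load 4 shorts
    __m128i s32 = _mm_srai_epi32(_mm_unpacklo_epi16(s16, s16), 16); // sign-extend
    _mm_storeu_ps(out, _mm_mul_ps(_mm_cvtepi32_ps(s32), scale));
}

// 4 x float -> int16 with saturation
inline void floatsToShorts4(const float *in, short *out)
{
    const __m128 scale = _mm_set1_ps(32767.0f);
    __m128i s32 = _mm_cvtps_epi32(_mm_mul_ps(_mm_loadu_ps(in), scale)); // round
    _mm_storel_epi64((__m128i *)out, _mm_packs_epi32(s32, s32));        // saturating pack
}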




for every sample?

Are there any articles on this type of optimisation that can give me
some insight into what is happening behind the various memory
management calls?


Probably. I would make sure you allocate aligned memory, maybe lock it 
in physical memory, and then use it -- and generally avoid OS-level 
memory calls from then on.


I would use memset() and memcpy(). These are optimised and the compiler may 
even inline an even more optimal version.


The alternative is to go low-level and benchmark everything and write 
your own code in SSE (and learn how to optimise it).


If you really care you need a good profiler.

That's my 2c.

HTH

Ross.






Regards,

Stephen Clarke Managing Director ChordWizard Software Pty Ltd
corpor...@chordwizard.com http://www.chordwizard.com ph: (+61) 2 4960
9520 fax: (+61) 2 4960 9580







Re: [music-dsp] M4 Music Mood Recommendation Survey

2013-02-24 Thread Ross Bencina

Hi Andy,

On 22/02/2013 10:54 AM, Andy Farnell wrote:

I have noticed Ross, that I tend to seek out music that reflects
an already emerging emotion, such that the music then precipitates
a physiological emotion. If I am in the mood to be excited by
Bizet or the Furious Five MCs, then Radiohead or Gorecki
cannot sadden me. And conversely.


I often find myself looking for music that is congruent with my mood -- 
meaning that it somehow resonates with or supports my mental state.

The wrong music doesn't change my mood/emotions, it just doesn't work.

But I'm not sure whether this applies if I'm put in a neutral situation 
and asked to interpret the mood of a piece of music.




The longer I have
studied music the more it seems plausible; music does not
drive emotion, emotion drives music. At such times as we
encounter cultural supposition, as in a film score, the music
may resonate more strongly with expectation and work its
magic on us, but the emotion is not extempore in the music,
it lives in the listener.  Production music, as a choice of score

 to complement activity is therefore a question of good fit.
 Such a gifted composer or music supervisor chooses carefully,
 informed by understanding of narrative context.

Film is interesting because there is an intentional suspension of 
disbelief, a constant process of reading or interpreting the stimulus. 
Film composers make use of whatever tricks they can to elicit a response 
(use of the alien/familiar/culturally-charged etc).


I don't think there's any question that if you feed someone a stimulus 
and require them to interpret it that they will select the 'most 
appropriate' response. The question is what factors lead to the choice.




The disagreement
with some MIR projects, if indeed it is their mistake, is that
they presume music to be the driver and suppose a strict causality.
This makes many industry investors, mainly advertisers, very excited
where they assume a manipulative (a la Bernays/Lippmann) application.


I think there is confusion here on what music is and who the subjects 
are. Is it any organised sound played to any human? Or a very restricted 
set of popular music played to modern Westerners inculcated in consumer 
and media culture. Perhaps as an effectiveness metric to measure how 
well people have been trained into certain emotional responses (by 
media, film, etc) this kind of study is valid. The problem is when such 
a relativistic context-dependent evaluation is put forward as 
empirical science. It reeks of cultural imperialism masquerading as 
empirical science.




I find many sensitive artists are repelled by this idea, not
from a sense of instrumental reason displacing the artist, but from
an understanding that the listener is not a passive subject
amenable to a behaviourist interpretation.


There is that. Personally I'm repelled by the reductionism, lack of 
nuance, misuse of the English language, and gross experimental bias.


Somehow this whole discussion reminds me of that Laurie Anderson song 
"Smoke Rings":


Bienvenidos. La primera pregunta es: Que es mas macho, pineapple o 
knife? Well, let's see. My guess is that a pineapple is more macho than 
a knife. Si! Correcto! Pineapple es mas macho que knife. La segunda 
pregunta es: Que es mas macho, lightbulb o schoolbus? Uh, lightbulb? No! 
Lo siento, Schoolbus es mas macho que lightbulb.



Best to you,

Ross.



best @ all
Andy



On Fri, Feb 22, 2013 at 10:19:02AM +1100, Ross Bencina wrote:



On 22/02/2013 9:54 AM, Richard Dobson wrote:

Listen to each track at least once and then select which track is the
best match with the seed. If you think that none of them match, just
select an answer at random.


Now I am no statistician, but with only four possible answers offered
per test, and with none of the above excluded as an answer (which
rather begs the question...),


You mean the one about adding to the large number of studies
offering empirical evidence in support of the assumption?


However, despite a recent upswing of research on musical emotions
(for an extensive review, see Juslin & Sloboda 2001), the literature
presents a confusing picture with conflicting views on almost every
topic in the field. A few examples may suffice to illustrate this
point: Becker (2001, p. 137) notes that “emotional responses to
music do not occur spontaneously, nor ‘naturally’,” yet Peretz
(2001, p. 126) claims that “this is what emotions are: spontaneous
responses that are difficult to disguise.” Noy (1993, p. 137)
concludes that “the emotions evoked by music are not identical with
the emotions aroused by everyday, interpersonal activity,” but
Peretz (2001, p. 122) argues that “there is as yet no theoretical or
empirical reason for assuming such specificity.” Koelsch (2005, p.
412) observes that emotions to music may be induced “quite
consistently across subjects,” yet Sloboda (1996, p. 387) regards
individual differences as an “acute problem.” Scherer (2003, p. 25)
claims that “music

Re: [music-dsp] RE : TR : Production Music Mood Annotation Survey 2013

2013-02-21 Thread Ross Bencina

Hello Mathieu,

Thanks for responding. You answered my questions and your examples were 
interesting. I have made a few brief comments:


The description of the survey begins:

 We are conducting an experiment to determine the moods or emotions
 *expressed* by music.
(my emphasis)

As Richard has noted there is some question as to whether music 
expresses anything at all. But far more importantly, given that the 
survey uses an existing corpus of recorded music, and engages only with 
listeners (not creators, composers, improvisers or performers) it is 
difficult to understand how musical *expression* can be considered the 
subject of the experiment.


Your examples below (babies reacting, music lovers brought to tears), 
suggest that the research is concerned with evoked or induced emotional 
response.


Elsewhere [1] you cite Sloboda and Juslin [2] as making reference to 
expressed emotions (perceived emotions). I only have access to the 
abstract of [2] right now; it uses the phrase "how and why we experience 
music as expressive of emotion". Experiencing music as expressive of 
emotion is quite different from music expressing emotion. I would be 
interested to know where the transposition in your survey title 
originated since it does not appear in the introduction to [1]:


music can either (i) elicit/induce/evoke emotions in listeners (felt
emotions), or (ii) express/suggest emotions to listeners (perceived 
emotions)


Perhaps a more accurate title for your survey is: "an experiment to 
determine the moods or emotions perceived to be expressed by music"?




Does the study below control for cultural bias?

-- Yes, up to a certain extent. During registration

 (http://musiclab.cc.jyu.fi/register.html), we collect demographic
 information about participants such as background (listener, musician,
 trained professional), country, gender, age (voluntary).

However the survey appears to require internet access and is presented 
only in English.



 I fully agree that the way we perceive music is influenced e.g. by
 our culture, past experiences, tastes and the listening context (e.g.
 at work, at home).
 I do also believe that there are some strong invariants across human
 listeners, given a specific culture.

So the research is focused on discovering strong invariants within a 
specific culture? In that case perhaps we are in agreement and the 
experiment is intended purely to document culturally conditioned 
emotional responses to music -- an anthropological study if you will.


My concern is that by framing the experiment as determining moods or 
emotions expressed by music, without drawing attention to the culturally 
relativistic nature of the investigation (your: "strong invariants 
across human listeners, given a specific culture"), the work 
perpetuates a myth and obscures far deeper issues. Juslin and Västfjäll, 
for example, propose 7 potential underlying mechanisms.[1] (For anyone 
reading, the paper provides a nice overview of the current controversial 
state of play.)




Please explain why an otherwise reputable institution (QMUL) is
wasting resources by engaging in this kind of pseudoscience.

-- I don't think that the funding body for this research survey is

 wasting money, nor that QMUL is wasting resources; commenting further
 on that one would certainly waste my time ;-)

My concern is with intellectual rather than financial waste.

Ross.

[1] Multidisciplinary Perspectives on Music Emotion Recognition: 
Implications for Content and Context-Based Models

http://cmmr2012.eecs.qmul.ac.uk/sites/cmmr2012.eecs.qmul.ac.uk/files/pdf/papers/cmmr2012_submission_101.pdf

[2] Psychological perspectives on music and emotion. Sloboda, John A.; 
Juslin, Patrik N. Juslin, Patrik N. (Ed); Sloboda, John A. (Ed), (2001). 
Music and emotion: Theory and research. Series in affective science., 
(pp. 71-104). New York, NY, US: Oxford University Press, viii, 487 pp.

http://psycnet.apa.org/psycinfo/2001-05534-001

[3] Emotional responses to music: the need to consider underlying 
mechanisms. Juslin PN, Västfjäll D. Behav Brain Sci. 2008 Oct;31(5):559-75;

http://www.psyk.uu.se/digitalAssets/31/31194_BBS_article.pdf


On 20/02/2013 2:20 PM, mathieu barthet wrote:

Dear Ross,

Please see comments below.

Best wishes,

Mathieu Barthet
Postdoctoral Research Assistant
Centre for Digital Music (Room 109)
School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road, London E1 4NS
Tel: +44 (0)20 7882 7986 - Fax: +44 (0)20 7882 7997

E-mail: mathieu.bart...@eecs.qmul.ac.uk
http://www.elec.qmul.ac.uk/digitalmusic/

From: music-dsp-boun...@music.columbia.edu 
[music-dsp-boun...@music.columbia.edu] on behalf of Ross Bencina 
[rossb-li...@audiomulch.com]
Sent: Tuesday 19 February 2013 11:57
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] TR : Production Music Mood Annotation Survey 2013

Can someone please explain

Re: [music-dsp] M4 Music Mood Recommendation Survey

2013-02-21 Thread Ross Bencina



On 22/02/2013 9:54 AM, Richard Dobson wrote:

Listen to each track at least once and then select which track is the
best match with the seed. If you think that none of them match, just
select an answer at random.


Now I am no statistician, but with only four possible answers offered
per test, and with none of the above excluded as an answer (which
rather begs the question...),


You mean the one about adding to the large number of studies offering 
empirical evidence in support of the assumption?



However, despite a recent upswing of research on musical emotions 
(for an extensive review, see Juslin & Sloboda 2001), the literature 
presents a confusing picture with conflicting views on almost every topic 
in the field. A few examples may suffice to illustrate this point: 
Becker (2001, p. 137) notes that “emotional responses to music do not 
occur spontaneously, nor ‘naturally’,” yet Peretz (2001, p. 126) claims 
that “this is what emotions are: spontaneous responses that are difficult 
to disguise.” Noy (1993, p. 137) concludes that “the emotions evoked by 
music are not identical with the emotions aroused by everyday, 
interpersonal activity,” but Peretz (2001, p. 122) argues that “there is 
as yet no theoretical or empirical reason for assuming such specificity.” 
Koelsch (2005, p. 412) observes that emotions to music may be induced 
“quite consistently across subjects,” yet Sloboda (1996, p. 387) regards 
individual differences as an “acute problem.” Scherer (2003, p. 25) 
claims that “music does not induce basic emotions,” but Panksepp and 
Bernatzky (2002, p. 134) consider it “remarkable that any medium could so 
readily evoke all the basic emotions.” Researchers do not even agree 
about whether music induces emotions: Sloboda (1992, p. 33) claims that 
“there is a general consensus that music is capable of arousing deep and 
significant emotions,” yet Konečni (2003, p. 332) writes that 
“instrumental music cannot directly induce genuine emotions in 
listeners.” 


http://www.psyk.uu.se/digitalAssets/31/31194_BBS_article.pdf


Ross


Re: [music-dsp] Sound effects and Auditory illusions

2013-02-20 Thread Ross Bencina

Hi Marcelo,

Just came across this; maybe it is helpful:

Rorschach Audio – Art & Illusion for Sound On The Art
http://rorschachaudio.wordpress.com/about/

Ross.


On 19/02/2013 9:26 PM, Marcelo Caetano wrote:

Dear list,

I'll teach a couple of introductory lectures on audio and music processing, and 
I'm looking for some interesting examples of cool stuff to motivate the 
students, like sound transformations, auditory illusions, etc. I'd really 
appreciate suggestions, preferably with sound files.

Thanks in advance,
Marcelo
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] TR : Production Music Mood Annotation Survey 2013

2013-02-19 Thread Ross Bencina

Can someone please explain the scientific basis for this kind of study?

Surely by now it is widely accepted that correlations between music and 
mood and emotion are culturally biased and socially acquired?


Does the study below control for cultural bias?

Please explain why an otherwise reputable institution (QMUL) is wasting 
resources by engaging in this kind of pseudoscience.


Thanks,

Ross


On 19/02/2013 10:43 PM, mathieu barthet wrote:

--
2nd Call for Participation
Apologies for potential cross-postings
--

Dear all,

We are conducting an experiment to determine the moods or emotions expressed by 
music. The tracks used in the experiment come from various production music 
catalogues and are typically used in film, television, radio and other media.

This work is done in collaboration with the BBC and I Like Music as part of the TSB 
project Making Musical Mood Metadata (TS/J002283/1).

The experiment consists of rating music track excerpts along six scales 
characterising the emotions they express or suggest. These scales are Arousal 
(or Activity), Valence (or Pleasantness), Tension, Dominance (or Power), Love 
(or Romance), and Fun (or Humor).

The link to the survey can be found below:

http://musiclab.cc.jyu.fi/login.html

The experiment will run until Saturday 23rd February 2013.

Please note that you will have to rate the emotions *expressed or suggested* 
by the music (perceived emotions) and not the emotions elicited or induced by 
the music (felt emotions).

The annotations can be done at your own pace, on any computer with web 
access, using headphones or good quality speakers. Please rate at least 50 
excerpts, if possible. If you have the time, thanks for completing the 
experiment with all 205 excerpts. (The experiment requires approximately one 
hour for ~60-100 excerpts.)

In order to participate, you will first have to register and fill in a brief 
form. Your participation doesn't tie you to anything else. It is fine if you 
register but choose not to participate. You can do the experiment in several 
steps by logging in at different times. Don't hesitate to take breaks after a 
certain number of ratings.

If you have questions, comments or other enquiries, please contact Pasi Saari 
(email: pasi.sa...@jyu.fi).

Please forward this call to interested parties.

Many thanks for your participation,
Pasi Saari, Mathieu Barthet, George Fazekas.


Mathieu Barthet
Postdoctoral Research Assistant
Centre for Digital Music (Room 109)
School of Electronic Engineering and Computer Science
Queen Mary University of London
Mile End Road, London E1 4NS
Tel: +44 (0)20 7882 7986 - Fax: +44 (0)20 7882 7997

E-mail: mathieu.bart...@eecs.qmul.ac.uk
http://www.elec.qmul.ac.uk/digitalmusic/
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] filter smoothly changeable from LP-BP-HP?

2013-02-10 Thread Ross Bencina

Hi Bram,

A Generalization of the Biquadratic Parametric Equalizer
Christensen, Knud Bank
AES 115 (October 2003)
https://secure.aes.org/forum/pubs/conventions/?elib=12429

Defines equations with a symmetry parameter for smoothly moving 
between the states you mention. There are graphs so you can check it out.


The paper is excellent.

There is a related patent. I haven't looked at the patent so I can't 
comment on that.


Another approach might be to crossfade between the taps of an SVF. I'm 
not sure if that would work.
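
To make the crossfade idea concrete, here is a minimal C sketch, assuming a 
plain Chamberlin-style SVF (not the trapezoidal variant) and a simple 
piecewise-linear morph between the three taps; the names and the morph 
mapping are only illustrative, and since the taps differ in phase the 
transition is not guaranteed to be smooth in level:

#include <math.h>

typedef struct {
    float low, band;   /* filter state */
    float f, q;        /* tuning coefficients */
} MorphSvf;

void morphsvf_set(MorphSvf *s, float cutoffHz, float resonance, float sampleRate)
{
    /* Chamberlin tuning; only well behaved well below Nyquist. */
    s->f = 2.0f * sinf(3.14159265f * cutoffHz / sampleRate);
    s->q = 1.0f / resonance;
}

/* morph runs 0..1: 0 = lowpass, 0.5 = bandpass, 1 = highpass. */
float morphsvf_tick(MorphSvf *s, float in, float morph)
{
    float high, wLow, wBand, wHigh;

    s->low  += s->f * s->band;               /* standard Chamberlin update */
    high     = in - s->low - s->q * s->band;
    s->band += s->f * high;

    if (morph < 0.5f) {                      /* LP -> BP */
        wLow  = 1.0f - 2.0f * morph;
        wBand = 2.0f * morph;
        wHigh = 0.0f;
    } else {                                 /* BP -> HP */
        wLow  = 0.0f;
        wBand = 2.0f - 2.0f * morph;
        wHigh = 2.0f * morph - 1.0f;
    }
    return wLow * s->low + wBand * s->band + wHigh * high;
}

At morph = 0.5 this is just the bandpass tap, so some gain compensation on 
the bandpass weight may be needed to keep the sweep perceptually level.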


Ross



On 10/02/2013 10:23 PM, Bram de Jong wrote:

Hello everyone,

does anyone know of a filter design that can smoothly be changed from
LP to BP to HP with a parameter? IIRC LP/AP/HP could be done simply by
perfect reconstruction LP/HP filter pairs, but never seen something
similar for BP in the middle...

The filter doesn't need to be perfect, it's for something
musical/creative rather than a purely scientific goal...

Any help very welcome! :-)

  - Bram


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] filter smoothly changeable from LP-BP-HP?

2013-02-10 Thread Ross Bencina

On 11/02/2013 1:37 AM, robert bristow-johnson wrote:

maybe i shouldn't say this, but someone here likely has a pdf copy of
the paper in case it breaks your bank to buy it from AES.


Unfortunately not me. I lost the pdf in a data loss incident and only 
have a printout and don't have an AES digital library sub at the moment.


But this does raise an issue that I've been thinking about for a while:

Does anyone know whether the AES has any intentions of moving to open 
access for their publication archive? It seems overdue.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] 24dB/oct splitter

2013-02-07 Thread Ross Bencina

Hi Russell,

So to be clear, you're creating a Linkwitz-Riley crossover?

http://en.wikipedia.org/wiki/Linkwitz%E2%80%93Riley_filter

On 8/02/2013 6:05 PM, Russell Borogove wrote:
 I have two digital 12dB/octave state-variable filters, each with
 lowpass/highpass/bandpass/notch outputs; I'd like to use them as a
 24db/octave low/high band splitter.

You didn't specify which state-variable filter you're using. There are at 
least two linear SVFs floating around now (the Hal Chamberlin one and 
Andy Simper's [1]).




Will I be happy if I use the lowpass of the first filter as input to
the second, then take the lowpass and highpass outputs of the second
as my bands


The lowpass output of the first filter presumably has a zero at Nyquist, 
so I don't think this is going to work out well if you highpass it... you 
could try, though.




or do I need to put the low and high outputs of the
first filter into two different second stage filters?


That's my impression.
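
For what it's worth, here is a rough C sketch of that structure, assuming a 
plain Chamberlin SVF per 12 dB/oct section (the trapezoidal SVF in [1] would 
behave better near Nyquist) and Butterworth damping per stage, which is the 
usual Linkwitz-Riley alignment; all of the names are illustrative only:

#include <math.h>

typedef struct {
    float low, band;   /* state */
    float f, q;        /* coefficients */
} Svf12;

void svf12_init(Svf12 *s, float crossoverHz, float sampleRate)
{
    s->low = s->band = 0.0f;
    s->f = 2.0f * sinf(3.14159265f * crossoverHz / sampleRate);
    s->q = 1.41421356f;   /* damping = 1/Q with Q = 1/sqrt(2) (Butterworth) */
}

/* One sample of a 12 dB/oct section; writes both the LP and HP taps. */
void svf12_tick(Svf12 *s, float in, float *lp, float *hp)
{
    float high;
    s->low  += s->f * s->band;
    high     = in - s->low - s->q * s->band;
    s->band += s->f * high;
    *lp = s->low;
    *hp = high;
}

typedef struct {
    Svf12 stage1, stage2lo, stage2hi;   /* all tuned to the crossover frequency */
} Splitter24;

/* 24 dB/oct split: stage1's LP feeds one second section (keep its LP tap)
   and stage1's HP feeds another second section (keep its HP tap). */
void splitter24_tick(Splitter24 *x, float in, float *lowBand, float *highBand)
{
    float lp1, hp1, unused;
    svf12_tick(&x->stage1,   in,  &lp1,    &hp1);
    svf12_tick(&x->stage2lo, lp1, lowBand, &unused);
    svf12_tick(&x->stage2hi, hp1, &unused, highBand);
}

The sum of lowBand and highBand should come back roughly flat in magnitude 
with an allpass phase response, but that is worth verifying against the 
ideal Linkwitz-Riley behaviour before relying on it, given the Chamberlin 
tuning error near Nyquist.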

Ross


[1] http://www.cytomic.com/files/dsp/SvfLinearTrapOptimised.pdf
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Ross Bencina

Hello Jeff,

Before I attempt an answer, can I ask: what programming languages do you 
know (if any) and how proficient are you at programming?


Ross.


On 21/01/2013 9:49 PM, Jeffrey Small wrote:

Hello,

I'm a relatively new computer programmer who is interested in getting into the 
world of Audio Plug Ins. I have a degree in Recording/Music, as well as a 
degree in Applied Mathematics. How would you recommend that I start learning 
how to program for audio from the ground up? I bought a handful of textbooks 
that all have to do with audio programming, but I was wondering what your 
recommendations are?

Thanks,
Jeff
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Ross Bencina

Hi Jeff,

At your stage of learning with C, the advice to "just write some code" 
seems most pertinent, but I guess it depends on your learning style. 
Coming up with an achievable project and seeing it through to completion 
is a good way to learn programming.



"Read lots of code" applies, and is important. What you will find is 
that there are many different coding styles. Many of the open source 
music-dsp codebases arose in different eras -- so you will be navigating 
a varied stylistic terrain at a time when you are just getting to grips 
with programming in C. That might be confusing, but it's probably 
unavoidable.


For source code I would recommend looking at (at least) the following 
open source projects:


Pd, CSound, SuperCollider, STK, CMix and/or RTCmix

Perhaps you're best off choosing one that you like best and learning how 
to use the system as a user, and also studying how it works from the 
inside. They are all quite different.


Back in the day I found CMix the most approachable since you actually 
write the whole DSP instrument routine in C by calling CMix DSP 
functions (also written in C). Most of the other environments I 
mentioned are virtual machines where the DSP code is buried a few layers 
deep. STK is maybe an exception.



You might want to check out Adrian Freed's
"Guidelines for signal processing applications in C" article -- at the 
very least to give you things to think about:


http://cnmat.berkeley.edu/publication/guidelines_signal_processing_applications_c


I'm not sure whether anyone mentioned it already, but there is the 
musicdsp.org source code snippet archive:


http://musicdsp.org/


Two chapters on SuperCollider internals (from the SuperCollider Book) 
are available for free download here: http://supercolliderbook.net/



Keep in mind that music dsp is, in some ways, just another form of 
numerical programming and you can learn a lot by reading more broadly in 
that area (e.g. get a copy of Numerical Recipes in C). Similarly, a lot 
of modern analog modelling techniques come from the SPICE domain rather 
than music-dsp.



---

I don't know how much discrete-time signal processing theory you studied 
in your math degree but you should at least read one or two solid DSP 
texts (ask on comp.dsp or read reviews on Amazon). There are also a few 
books available on line.


DSP and music-dsp are not exactly the same thing. There are a lot of 
music-dsp books aimed more at programming musicians and people with much 
less mathematical training than you. You will find these useful to 
bridge into the realm of music, but you can probably handle the hardcore 
math.


The JoS online books that were linked earlier are probably 
mathematically appropriate. They are written for readers with a solid 
engineering maths background.

https://ccrma.stanford.edu/~jos/

Further book links and suggestions are available at "What is the best 
way to learn DSP?":

http://www.redcedar.com/learndsp.htm


I'm going to mention this one just because I found it on line recently: 
"Signal Processing First" by James H. McClellan, Ronald W. Schafer and 
Mark A. Yoder is introductory, but since it is now available for free on 
archive.org it might be a good way to refresh on DSP basics:


http://archive.org/details/SignalProcessingFirst


In a different direction, I'm not sure whether you've seen the recently 
released Will Pirkle Plugin Programming book. I haven't read it but my 
impression is that it's at the introductory level:


http://www.amazon.com/Designing-Audio-Effect-Plug-Ins-Processing/dp/0240825152

---

The DAFX (Digital Audio Effects) conference has all of its proceedings on 
line. There is a bunch of interesting algorithm knowledge there:


http://www.dafx.de/

The DAFX book isn't a bad introduction to some topics either but it 
won't help you with C coding.


Other conferences that have online materials you can search:

International Computer Music Conference proceedings archive:
http://quod.lib.umich.edu/i/icmc/

Linux Audio conference: http://lac.linuxaudio.org/

All the major research groups and many researchers have publication 
archives that you can find on line if you're looking for information 
about specific techniques.


At some stage you may want to browse the AES digital library: 
http://www.aes.org/e-lib/


---

Here are a few papers that I think everyone starting out needs to know 
about (maybe not the first step on the path, but an early step):


John Dattorro's Digital Signal Processing papers, including "Effect 
Design" parts 1, 2, and 3:

https://ccrma.stanford.edu/~dattorro/research.html

Splitting the Unit Delay
http://signal.hut.fi/spit/publications/1996j5.pdf

---

If you're writing plugins then the host and plugin framework will take 
care of a lot of the non-dsp type stuff (scheduling, parameter handling 
etc etc) but be aware that for more complex projects you may need to 
move into realms of real-time programming that go beyond music-dsp.


---

If you're just looking to 

Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Ross Bencina

On 21/01/2013 9:49 PM, Jeffrey Small wrote:

I'm a relatively new computer programmer who is interested in getting
into the world of Audio Plug Ins. I have a degree in Recording/Music,
as well as a degree in Applied Mathematics. How would you recommend
that I start learning how to program for audio from the ground up? I
bought a handful of textbooks that all have to do with audio
programming, but I was wondering what your recommendations are?


Another angle that I didn't cover is that of learning to program. You 
should get hold of at least one good C programming book. I program in 
C++ so I don't have any straight C examples to recommend but even 
something like Kernighan and Ritchie might be OK.


---

Reading a style and practice book might not be a bad idea; I'm 
thinking of books like "Code Complete" and "The Pragmatic Programmer".


When I was starting out I read a bunch of coding style guidelines.
If I remember correctly I started out with the Indian Hill one:

http://www.cs.arizona.edu/~mccann/cstyle.html

But you will find others if you search for C programming style guides. 
Things like this:

Best practices for programming in C
http://www.ibm.com/developerworks/aix/library/au-hook_duttaC.html

---

Reading an introductory algorithms and data structures textbook would be 
a good idea.



To give an idea: I have over 75 general programming and software 
engineering books on my bookshelf and only about 50 (if that) 
DSP/music-dsp/computer-music books. I don't think a 2:1 split between 
general programming study and music-dsp study is unreasonable -- a lot 
of the programming you do will be more general than simply dsp.


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

