Re: [music-dsp] Errata in The Art of VA Filter Design 2.1.0

2019-01-28 Thread Vadim Zavalishin



On 26-Jan-19 18:25, Giulio Moro wrote:

Hi there, really appreciate your VA book. I am reading version 2.1.0
and I think I spotted an error: on page 93, the text goes:

"In this respect consider that Fig. 3.12 is trying to explicitly
emulate the analog integration behavior, preserving the topology of
the original analog structure, while Fig. 3.34 is concerned solely
with implementing a correct transfer function. Since Fig. 3.34
implements a classical approach to the bilinear transform application
for digital filter design (which ignores the filter topology) we’ll
refer to the trapezoidal integration replacement technique as the
topology-preserving bilinear transform (or, shortly, TPBLT)."

I *think* that it should be "Since Fig. 3.12 implements  ..." instead
of 3.34.

Am I right? If I am not, then I guess I did not understand much about
TPT :)


Hi,

no, the text is correct at this point. Fig. 3.34 implements a classical
version of the filter, i.e. the bilinear transform applied without preserving
the topology, while Fig. 3.12 preserves it, which is why we refer to 3.12 as
the topology-preserving bilinear transform. (And for the reference of
others who might be reading this: the page number is actually 81, while
93 is the "raw" page number; depending on your PDF viewer you might need
to look for one or the other.)




PS: any chance there could be a permalink to the most recent version
of the book? It seems that I have to go back through the archives of
music-dsp to find the most recent link (or manually attempt a URL for
version 2.1.1).


https://www.kvraudio.com/forum/viewtopic.php?t=350246
(if you google for "the art of va filter design" this is one of the 
first links)



Also, is the book linked to from anywhere on the NI website?


It is linked from the Reaktor area of the website. The current URL is
https://www.native-instruments.com/en/products/komplete/synths/reaktor-6/dsp-articles/

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] The art of VA filter design - video

2018-11-28 Thread Vadim Zavalishin




On 28-Nov-18 11:07, Jean-Baptiste Thiebaut wrote:

I'm proud to share this video of Vadim Zavalishin, who came to ADC in
London last week to share his DSP knowledge. I came across Vadim's work on
this list and invited him to ADC a few months ago, and thought I'd share
this with you. (I hope you don't mind, Vadim).


No, of course I don't ;) Thank you for sharing this and thanks again for 
inviting me to the ADC. It has been lots of fun watching the talks and 
meeting other people from the industry (and even some of my former NI 
colleagues going as far back as 2001).


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] zero delay feedback for phase modulation synthesis?

2018-11-16 Thread Vadim Zavalishin
I think people have thought about this (IIRC I heard from at least one 
of the U-He guys that he tried zero-delay feedback FM). I'm not sure 
what the origin of your equation is, but then I'm not into phase 
modulation synthesis. Anyway, I suspect that in certain excessively nonlinear 
situations there is no solution whatsoever. E.g. consider the 
zero-delay feedback equation of the form


x = a*x + b*y
y = b*x - a*y
where x,y are signals and a and b are coefficients such that a^2+b^2=1 
(actually this equation is linear, but I think you get the idea).
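For illustration, a minimal numerical check of this point (my sketch, assuming
a^2+b^2=1): the system matrix is singular, so there is no unique instantaneous
solution.

import numpy as np

# x = a*x + b*y  and  y = b*x - a*y  rearrange to
#   [[1-a, -b], [-b, 1+a]] @ [x, y]^T = 0
a, b = 0.6, 0.8                      # a^2 + b^2 = 1
M = np.array([[1.0 - a, -b],
              [-b, 1.0 + a]])
print(np.linalg.det(M))              # prints ~0: the matrix is singular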


Some basic thoughts on the arising issues (and on how to solve your 
equation) can be found in Section 3.13 of this text:


https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.1.0.pdf

Plus the usual numerical nonlinear solution techniques of course (where 
Sections 6.4-6.8 and 6.10 may be of interest).
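As a concrete illustration of the latter, here is a minimal Newton-Raphson
sketch for an implicit equation of the kind that shows up in nonlinear
zero-delay feedback, here y = tanh(a*y + b) (my hypothetical example, not
taken from the book):

import math

def solve_zdf(a, b, y0=0.0, iters=20, tol=1e-12):
    # Solve f(y) = y - tanh(a*y + b) = 0 by Newton-Raphson.
    y = y0
    for _ in range(iters):
        t = math.tanh(a * y + b)
        f = y - t
        fp = 1.0 - a * (1.0 - t * t)   # f'(y)
        if abs(fp) < 1e-9:
            break                      # derivative too small, give up
        y_new = y - f / fp
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y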


Regards,
Vadim


On 15-Nov-18 20:55, gm wrote:


I wonder if anyone has thought about this?

I am aware that it may have little practical use and may actually worsen
the "fractal noise" behaviour at higher feedback levels.
(Long ago I tested this with a tuned delay in the feedback path and
that's what I recall.)

But still I am interested.

If the recurrence for the oscillator with feedback is:

y[n] = ( 1/(2Pi) * sine(2*Pi* y[n-1]) + k) mod 1

there are two nonlinearities and it's all above my head...

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-02 Thread Vadim Zavalishin



On 01-Nov-18 16:16, Fabian-Robert Stöter wrote:
I appreciate that but it would still be nice if your book could be cited 
appropriately. Have you thought about putting it on arxiv or zenodo.org 
<http://zenodo.org>?
This would give you the possibility to version the book and make folks 
from academia happy with a proper DOI reference.


Nobody complained so far, but I will consider your suggestion, thank you!

Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-02 Thread Vadim Zavalishin




On 02-Nov-18 03:06, Andrew Simper wrote:
If you prize symmetry then you can use a cascade with 2 x one pole HP 
and 2 x one pole LP to make a 4 pole BP (band pass) then you can use the 
same old FIR based output tap mixing to generate all the different 
responses. It may not be so easy to do in a real circuit, but in 
software we're not bound by what is easy to build :)


https://cytomic.com/files/dsp/cascade-all-to-all-responses.pdf


Symmetry is one of the things. The other is the shape of the amplitude 
response. I'm personally not convinced by the -4dB dip prior to the 
resonance, although YMMV. At any rate it doesn't qualify as a "bread and 
butter" LP IMHO ;) With BP8 it's getting way worse.


Incidentally, another way to arrive at more or less the same structure is 
raising the orders of the LP and HP filters (by stacking identical 1-poles 
in series) in the transposed Sallen-Key (Fig. 5.23 of the book). Since 
the TSK is essentially a bandpass ladder with a special output mode, it's 
actually the same thing. Further ways (originating at the lowpass ladder) to 
look at this can be found here:

https://www.kvraudio.com/forum/viewtopic.php?p=6844369#p6844369
and here
https://www.kvraudio.com/forum/viewtopic.php?p=6844470#p6844470
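For readers who want to try the cascade idea quoted above, here is a minimal
sketch (my code, not from the linked threads, no resonance feedback or output
tap mixing) of a 4-pole BP built as 2 x one-pole HP followed by 2 x one-pole
LP, each stage being a trapezoidal (TPT) one-pole with g = tan(pi*fc/fs):

import numpy as np

class OnePole:
    def __init__(self, fc, fs):
        g = np.tan(np.pi * fc / fs)   # prewarped cutoff gain
        self.G = g / (1.0 + g)
        self.s = 0.0                  # integrator state
    def tick(self, x):
        v = (x - self.s) * self.G
        lp = v + self.s
        self.s = lp + v               # trapezoidal state update
        return lp, x - lp             # (lowpass, highpass) outputs

def bp4(x, fc, fs):
    stages = [OnePole(fc, fs) for _ in range(4)]
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        _, u = stages[0].tick(xn)     # HP
        _, u = stages[1].tick(u)      # HP
        u, _ = stages[2].tick(u)      # LP
        u, _ = stages[3].tick(u)      # LP
        y[n] = u
    return y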


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-01 Thread Vadim Zavalishin

On 01-Nov-18 15:18, pa...@synth.net wrote:



Hmmm, 500 A4 pages would be rather heavy ;)


I'd willingly pay for a copy.


Quite pleased to hear that, thank you ;) Still, you could ask a copy 
shop to print and bind a copy for yourself (the book license allows it).


I have been considering selling this book for money, but so far I don't 
really want to do that. One of the reasons is that it'd be prohibitively 
difficult to release small updates such as this one.


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-01 Thread Vadim Zavalishin

On 31-Oct-18 18:19, Stefan Stenzel wrote:

Vadim,

I was more referring to the analog multimode filter based on the Moog cascade I 
did some years ago, and found it amusing to find a warning against it.


Ah, you mean the one at the beginning of Section 5.5? Well, that's an 
artifact of the older revision 1, where the ladder filter was introduced 
before the SVF (I still believe that order is better didactically; 
unfortunately new material dependencies made me switch it). The modal 
mixtures of the transistor ladder are asymmetric (the HP is not symmetric 
to the LP and has its resonance peak kind of "in the middle of its slope", 
and the BP is not symmetric on its own). I felt that it might be confusing 
for a beginner if their first encounter with resonating HP and BP filters 
is with this kind of special-looking filter, hence the warning. With 
revision 2 this warning becomes less important, since the 2-pole LP and BP 
have already been discussed before, but I still believe it's informative. 
After all, it doesn't say that these filters are bad, it says that they are 
special ;)




Anyway, excellent writeup,


Thank you! I'm glad my book is appreciated not only by newbies, but also 
by the industry experts.




I wish I could have it printed as a proper book for more relaxed reading.


Hmmm, 500 A4 pages would be rather heavy ;)

Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-10-31 Thread Vadim Zavalishin

On 31-Oct-18 15:58, Stefan Stenzel wrote:

Thank you very much, Sir!


You're highly welcome, Sir!



But why the warning about multimode lattice filters?
In my case, this comes way too late!


I'm not sure I'm fully following you... Or are you referring to this:


New additions:
- Generalized ladder filters


You mean, why isn't this discussed in Chapter 5? Well, good question. 
But in the same sense, one could ask why generalized SVF doesn't come in 
Chapter 4. Chapter 8 is specifically concerned with building filters 
with arbitrary transfer functions of arbitrary orders, whereas Chapters 
4 and 5 rather deal with structures commonly used in synths (more or 
less), and from this POV it belongs there. Also, had I discussed it in 
Chapter 5, it would have been difficult to derive it from the 
generalized SVF idea, which in my opinion is highly educative.


Actually, are you by any chance aware of this structure being discussed 
elsewhere? I haven't encountered anything like that so far (not that it's 
difficult to derive ;) ); I just wanted to build a nonlinear 4th-order 
Butterworth filter of the 2nd kind, so I needed something like that ;)


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-10-31 Thread Vadim Zavalishin

Announcing a small update to the book

https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.1.0.pdf

New additions:
- Generalized ladder filters
- Elliptic filters of order 2^N
- Steepness estimation of elliptic shelving filters

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] variations on exponential curves

2018-10-01 Thread Vadim Zavalishin



On 01-Oct-18 13:58, Frank Sheeran wrote:
For curves other than 0 and 1, I iteratively discover a delta that will work 
for the exact number of samples, because I am too stupid to figure 
out an equation for delta.  Werner is much better at math than I am!


This is quite a smart way to work around the precision loss issues! 
Although I'm somewhat concerned about its reliability. I guess it can be 
proven that the final value is a monotonic function of delta, but 
possibly with large multipliers the output value step (for the smallest 
change of delta) will be quite large, so the search will not fully converge.
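To make the discussion concrete, here is a rough sketch of such an iterative
search (my code, merely illustrating the idea, not Frank's implementation),
assuming the final value increases monotonically with delta:

def final_value(start, mult, delta, num_samples):
    # run y[n] = y[n-1]*mult + delta for num_samples steps
    y = start
    for _ in range(num_samples):
        y = y * mult + delta
    return y

def find_delta(start, target, mult, num_samples, lo=-1e6, hi=1e6, iters=100):
    # bisection on delta; may stall when mult is large, as noted above
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if final_value(start, mult, mid, num_samples) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)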


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] variations on exponential curves

2018-10-01 Thread Vadim Zavalishin




On 01-Oct-18 14:12, Vadim Zavalishin wrote:
In principle IIRC the same rule applies for 
multiplier < 1, but there the losses are not too large. This also 
manifests at multiplier = 1 by having the "best offset" so that the 
curve's middle is at zero.


Sorry, I meant to say that for multiplier < 1 you need the opposite 
rule, the curve should end at zero. Therefore at mult = 1 you position 
the curve halfway ;)


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] variations on exponential curves

2018-10-01 Thread Vadim Zavalishin



On 01-Oct-18 13:52, Frank Sheeran wrote:
Indeed, that's a simple parametric, but for generating envelopes we have 
the freedom to depend on the previous sample's output.  So, while an 
exponential curve parametric-style requires a pow(), the iterative 
solution is simply current = previous * multiplier.


Adding a range of curves can be done by adding a delta, as Andre/Werner 
and I have independently discovered (along with probably many others).


And "current = previous * multiplier + delta" is going to be fewer 
calculations than a parametric formula such as yours.  So I suggest the 
Andre/Werner/my method seems faster for the specific application of 
envelopes, as well as offering true exponential curves.


With multiplier values >> 1 there can be noticeable incremental 
precision losses. They can be reduced by offsetting the whole curve so 
that its starting position is zero (which means that the "output" 
value is current + offset). In principle, IIRC, the same rule applies for 
multiplier < 1, but there the losses are not too large. This also 
manifests at multiplier = 1 by having the "best offset" be such that the 
curve's middle is at zero.
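A minimal sketch of that offsetting idea (my interpretation of the above, for
multiplier >> 1): run the recurrence on a state that starts at zero and add the
curve's start value back on output, so the early, small state values keep full
floating-point precision:

def env_segment(start, mult, delta, num_samples):
    # y[n] = y[n-1]*mult + delta rewritten via the state z[n] = y[n] - start
    state = 0.0
    out = []
    for _ in range(num_samples):
        out.append(state + start)                        # output = current + offset
        state = state * mult + (delta + (mult - 1.0) * start)
    return out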


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] What is resonance?

2018-07-23 Thread Vadim Zavalishin

On 20-Jul-18 18:13, Mehdi Touzani wrote:

So... how do you do a resonance in a lowpass circuit?   :-)   not the
math, not the code, just the architecture.


There are many different ways to create resonance in a lowpass circuit
(esp. if the order is larger than 2). The higher the order of the
filter, the more different answers there are.

Making a feedback loop around a lowpass chain is one way, but AFAIK it
works perfectly (or close to that) only for the 4th-order filter (the
so-called Moog ladder). I'm not aware of any standard generic structure (or
generic Nth order resonating filter. Recently I tried to propose one way
of generalizing the 2nd order resonance to an arbitrary order by what I
called "Butterworth filters of the 2nd kind", but this involves just the
transfer function, whereas you still have lots of freedom in the
implementation structure. You could look into the latest revision of my
book for more details (where I also explain the problems with the
lowpass feedback).
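As a point of reference, here is the "feedback loop around a lowpass chain"
(Moog-style ladder) in a deliberately naive digital form, with a unit delay in
the feedback path; my sketch, not from the book, and a proper implementation
would resolve the zero-delay feedback instead:

import numpy as np

def naive_ladder(x, fc, fs, k):
    # four trapezoidal one-pole lowpasses with global feedback amount k,
    # feedback taken from the previous output sample (naive, not ZDF)
    g = np.tan(np.pi * fc / fs)
    G = g / (1.0 + g)
    s = [0.0] * 4
    y_prev = 0.0
    out = np.empty(len(x))
    for n, xn in enumerate(x):
        u = xn - k * y_prev
        for i in range(4):
            v = (u - s[i]) * G
            u = v + s[i]
            s[i] = u + v
        y_prev = u
        out[n] = u
    return out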

Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.0.0alpha

2018-06-21 Thread Vadim Zavalishin


On 21-Jun-18 00:55, list_em...@icloud.com wrote:

Thank you for this work!


You're welcome!


May I make a suggestion—do not use an acronym in the title. This
might not be obscure to you but it might be obscure to someone who
would otherwise benefit from your book.


This might be a good idea, although, on the other hand, changing the 
title under which the book is already known might not necessarily be 
the best thing to do. I'll give it a thought though, thanks!


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Book: The Art of VA Filter Design 2.0.0alpha

2018-06-11 Thread Vadim Zavalishin

Hi everyone!

As usual, I'm duplicating here the announcement on KVR, since (I assume) 
not everyone from this list is also present there.


The Art of VA Filter Design has been updated to 2.0.0alpha. Freely 
available at this link:

https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.0.0a.pdf

Major highlights compared to the previous release:

- different presentation of Sallen-Key filters
- 8-pole ladders
- detailed discussion of nonlinearities
- "Butterworth filters of the 2nd kind"
- different presentation of shelving filters, high-order generalized 
Butterworth and elliptic shelving

- generalized Linkwitz-Riley crossovers
- lots of theoretical stuff
- last but not least: a neat formula for the frequency shifter's poles

The book is a mixture of new research, common knowledge (presented from 
the POV of the author) and reinventing the wheel. I would be thankful 
for pointers to previous research, which I might not be aware of.


Some of the new material is used in the soon to be released Reaktor Core 
macro library update, so it can be tried out (and looked into) directly 
there.


The book is in an "alpha" state: some of the material has had only surface 
checking and is, to an extent, a bit of a "work in progress".


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Elliptic filters coefficients

2018-02-02 Thread Vadim Zavalishin
I don't know if it is possible to make the math behind elliptic filters 
simpler. They really stand out by using somewhat exotic functions.


As for the online resource about filter basics unfortunately I can't 
recommend any. Also IMHO the resource choice may be strongly affected by 
your goals.


Regards,
Vadim

PS. If your goal is to design synth filters, I would probably recommend 
my book, but it requires some math background; also, since you seem to be 
interested in elliptic filters, I'm not sure if this is the area you're 
looking for.


On 02-Feb-18 12:37, Dario Sanfilippo wrote:

Thanks, Vadim.

I don't have a math background so it might take me longer than I wished 
to obtain the coefficients that way, but it's probably time to learn it. 
With that regard, would you have a particularly good online resource 
that you'd suggest for pole-zero analysis and filter design?


Thanks to you too, Shannon.

Best,
Dario

On 1 February 2018 at 11:16, Vadim Zavalishin wrote:


Hmm, the Wikipedia article on elliptic filters has a formula to
calculate the poles and further references the Wikipedia article on
elliptic rational functions which effectively contains the formula
for the zeros. Obtaining the coefficients from poles and zeros
should be straightforward.

    Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Elliptic filters coefficients

2018-02-01 Thread Vadim Zavalishin
Hmm, the Wikipedia article on elliptic filters has a formula to 
calculate the poles and further references the Wikipedia article on 
elliptic rational functions which effectively contains the formula for 
the zeros. Obtaining the coefficients from poles and zeros should be 
straightforward.
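If the goal is just to obtain working coefficients rather than to follow the
math, scipy has this built in; a shortcut rather than the formula-based route
described above:

import scipy.signal as sig

# e.g. a 5th-order digital elliptic lowpass: 1 dB passband ripple,
# 40 dB stopband attenuation, cutoff at 0.2 of Nyquist
b, a = sig.ellip(5, 1, 40, 0.2)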


Regards,
Vadim

On 01-Feb-18 12:00, Dario Sanfilippo wrote:

Hello, everybody.

I was wondering if you could please help me with elliptic filters. I had 
a look online and I couldn't find the equations to calculate the 
coefficients.


Has any of you worked on that?

Thanks,
Dario


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer using pair of IIR APFs

2017-02-06 Thread Vadim Zavalishin

Funny that no one mentioned this

https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf

Particularly, formula 7.43

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Delays: sampling rate modulation vs. buffer size modulation

2016-03-23 Thread Vadim Zavalishin

On 23-Mar-16 00:45, Matthias Puech wrote:

Does this mean for instance that if I provide a control over the
integral of D in 1/ I will get the exact same effect as in 2/?


I thought about this question a while ago (in terms of tape 
rather than BBD delay, but that's more or less the same). It seems that 
you need to solve an integral equation, something like (LaTeX notation)


\int_{t-T}^t v(\tau)d\tau = L

where

t = the current time moment
T = the delay time (the unknown to be found)
v(\tau) = tape speed
L = distance between the read and write heads

For an arbitrary v(t) this can be solved only numerically. For a 
predefined v(t) this can be done analytically.
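A minimal numerical sketch of the arbitrary-v(t) case (my code, assuming the
speed is available as samples v[n] at rate fs): accumulate the travelled
distance backwards from the current sample until it reaches L, which gives T
to the nearest sample.

def delay_from_speed(v, n, L, fs):
    # find T such that the integral of v over [t-T, t] equals L,
    # using a crude rectangular approximation of the integral
    dist = 0.0
    k = n
    while k > 0 and dist < L:
        dist += v[k] / fs
        k -= 1
    return (n - k) / fs   # delay time T in seconds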


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin
Okay, an updated idea. Represent the signal as a sum of time-shifted box 
functions of random amplitudes and durations. We assume that the sum is 
finite and then we can take the limit (if the values approach infinity 
as a result, we can normalize them according to the length of the signal).


Correspondingly, the (complex) spectrum of such a sum will be the sum of box-
function spectra which are randomly phase-rotated and randomly 
scaled/stretched in the frequency domain (according to their stretching 
in the time domain). The phase rotation can be assumed uniformly 
distributed. So we need to determine the distribution of the amplitudes 
and of the stretching of the box-function spectra. Both of the latter 
can be found from the distribution of the box amplitudes and box lengths 
(under the assumption of uniform phase rotation distribution). The box 
amplitudes are uniformly distributed according to your specs. The 
distribution of box lengths must be IIRC one of the commonly known 
distributions, don't remember which one.


On 06-Nov-15 11:06, Vadim Zavalishin wrote:

On 06-Nov-15 11:03, Vadim Zavalishin wrote:

Apologies if this question has already been answered, I didn't read the
entire thread, just wanted to share the following idea off the top of my
head FWIW.


Oops, nevermind, I didn't realize that the SnH period is also random in
the original question.




--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin

On 06-Nov-15 11:03, Vadim Zavalishin wrote:

Apologies if this question has already been answered, I didn't read the
entire thread, just wanted to share the following idea off the top of my
head FWIW.


Oops, nevermind, I didn't realize that the SnH period is also random in 
the original question.


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin
Apologies if this question has already been answered, I didn't read the 
entire thread, just wanted to share the following idea off the top of my 
head FWIW.


Consider a random PCM signal whose period is equal to the SnH period. 
This is the same as discrete-time noise and, if I'm not mistaken, given 
the random generator is ideal (uncorrelated) its PSD is simply flat 
everywhere. Now, in order to get the random SnH signal we need to 
convolve it with a box function. So, the remaining question is what's 
the effect of convolution on the PSD. Is it simply multiplication in the 
frequency domain or not?


OTOH, if we are not talking of PSD, but rather of normal spectra, then 
we need to assume some time-domain windowing of the noise. The phases 
are going to be uniformly random (except for frequencies which are low, 
"compared to window length"). The amplitudes will be following the 
Gaussian distribution according to CLT. Assuming the window is large 
enough (and the signal is uncorrelated), I guess the standard deviations 
of the amplitudes for each frequency will be equal. Now, if we apply a 
convolution to this windowed signal, it's going to simply multiply the 
spectra of the signal and the window, and so will be the standard 
deviations. What has to be kept in mind though is that convolving a 
windowed signal is not exactly the same as windowing a convolved signal 
and this is going to distort the picture at lower frequencies (I guess).
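For whoever wants to sanity-check the above numerically, here is a small
sketch (my code): generate the random SnH noise as specified in the original
question (a new uniform value with probability P per sample) and estimate its
PSD by averaging periodograms:

import numpy as np

rng = np.random.default_rng(0)
P, N, nseg = 0.1, 1 << 14, 64
psd = np.zeros(N // 2 + 1)
for _ in range(nseg):
    x = np.empty(N)
    cur = rng.uniform(-1.0, 1.0)
    for n in range(N):
        if rng.uniform(0.0, 1.0) <= P:
            cur = rng.uniform(-1.0, 1.0)   # new held value
        x[n] = cur
    psd += np.abs(np.fft.rfft(x)) ** 2 / N
psd /= nseg   # averaged periodogram estimate of the PSD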


Regards,
Vadim

On 03-Nov-15 18:42, Ross Bencina wrote:

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

 >>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the value b
if a is true, and c otherwise.
<<<

What would be a good way to derive a closed-form expression for the
spectrum of x? (Assuming that the series is infinite.)


I'm guessing that the answer is an integral over the spectra of shifted
step functions, but I don't know how to deal with the random magnitude
of each step, or the random onsets. Please assume that I barely know how
to take the Fourier transform of a step function.

Maybe the spectrum of a train of randomly spaced, random amplitude
pulses is easier to model (i.e. w[n] = x[n] - x[n-1]). Either way, any
hints would be appreciated.

Thanks in advance,

Ross.


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] List settings after the switch

2015-08-10 Thread Vadim Zavalishin
- So, it seems both addresses (music-dsp@music and 
music-dsp@lists...) do actually work (so, apologies for a double mail) 
but with a 30 minute latency. Used to be less than 1 minute before (when 
the old list server was active). Maybe it's a local glitch on my 
mailserver, but that happened to 2 mails in a row (actually 3, counting 
the double mail).


- The Reply button seems to work for the messages sent by myself (they 
go to the list, not to me)


- The list web page that I meant is the old list page (the one found by 
Google when searching for the music-dsp list). Conversely, the new 
list page does not point to the old archives.


Regards,
Vadim

On 10-Aug-15 10:46, Vadim Zavalishin wrote:

Hi Douglas and all,

it seems that after the switching of the list server there are a few
issues, which I just noticed:

- the "reply-to" field in the list mails is configured to the sender. So
hitting "reply" no longer sends the answer to the list. I'm not sure
whether this is the new standard, but I'm not sure whether it's so
commonly used either. At least I'll need to learn to use the "Reply
List" button.

- the list web page is still pointing to the old archives only. It seems
not possible to find the new archives except by following the
footer link in the list mails. I guess the addresses listed there for
subscribing to the list do not work either.

- the "reply list" button sends to "music-dsp@music.columbia.edu", which
doesn't seem to work. the same email address is listed in the footer. I
just have found the other address in the Trash folder in the test mail
from Douglas, trying it now (I wonder why does it work for other people,
who manage to write to the list?)

Regards,
Vadim




--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] List settings after the switch

2015-08-10 Thread Vadim Zavalishin

Hi Douglas and all,

it seems that after the switching of the list server there are a few 
issues, which I just noticed:


- the "reply-to" field in the list mails is configured to the sender. So 
hitting "reply" no longer sends the answer to the list. I'm not sure 
whether this is the new standard, but I'm not sure whether it's so 
commonly used either. At least I'll need to learn to use the "Reply 
List" button.


- the list web page is still pointing to the old archives only. It seems 
not possible to find the new archives except by following the 
footer link in the list mails. I guess the addresses listed there for 
subscribing to the list do not work either.


- the "reply list" button sends to "music-dsp@music.columbia.edu", which 
doesn't seem to work. the same email address is listed in the footer. I 
just have found the other address in the Trash folder in the test mail 
from Douglas, trying it now (I wonder why does it work for other people, 
who manage to write to the list?)


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-08-10 Thread Vadim Zavalishin
Just realized that the following answer has never made its way through 
to the list.


@Robert: I believe it has to do with your mail client settings, which
override the "reply-to" field. So it's quite possible that more answers
to your mails do not get to the list. Or is that on purpose?


Hi Robert

On 24-Jul-15 21:48, robert bristow-johnson wrote:

in the 2nd-order analog filters, i might suggest replacing "2R" with
1/Q in all of your equations, text, and figures because Q is a
notation and parameter much more commonly used and referred to in
either the EE or audio/music-dsp contexts.


I'm not such a big friend of the Q notation. My guess is that the Q
parameter was introduced originally for something like radio-tuning
LC circuits, where it makes perfect sense. For musical 2-pole filters the
problem is that Q jumps from +inf to -inf during the transition into
the self-oscillation region. OTOH, the R parameter simply crosses
zero. Also see the footnote on page 84 of rev. 1.1.1. The R parameter
also nicely maps to the pole position, being simply the cosine of the
pole's polar angle, while the cutoff is the radius. So the pole
coordinates are simply -w*R and +-w*sqrt(1-R^2) (for |R|<=1).
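(Spelling this out in the book's unit-cutoff notation, my restatement in LaTeX:

H_{LP}(s) = \frac{1}{s^2 + 2Rs + 1}, \qquad
p_{1,2} = -R \pm j\sqrt{1-R^2}, \qquad
Q = \frac{1}{2R}

and with the cutoff restored, p_{1,2} = \omega(-R \pm j\sqrt{1-R^2}).)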



in section 3.2, i would replace n0-1 with n0 (which means replacing
n0 with n0+1 in the bottom limit of the summation).  let t0
correspond directly with n0.


On one hand, it makes sense. OTOH, using zero-based array indexing, like in
C, n0 is intuitively understood as the first output sample. I agree,
this is less conventional mathematically, but from a software
developer's point of view this might be more intuitive. So, to an
extent, I believe this is a matter of taste and intention.



now even though it is ostensibly obvious on page 40, somewhere (and
maybe i just missed it) you should be explicit in identifying the
"trapezoidal integrator" with the "BLT integrator".  you intimate
that such is the case, but i can't see where you say so directly.


p.40, directly under (3.5)
"The substitution (3.5) is referred to as the bilinear transform, or
shortly BLT.
For that reason we can also refer to trapezoidal integrators as BLT
integrators."

Not good enough?



section 3.9 is about pre-warping the cutoff frequency, which is of
course oft treated in textbooks regarding the BLT.  it turns out
that any *single* frequency (for each degree of freedom or "knob")
can be prewarped, not only or specifically the cutoff.



Bottom of p.43
"Notice that it's possible to choose any other point for the prewarping,
not necessarily the cutoff point." etc


in 2nd-order system, you have two independent degrees of freedom that
can, in a BPF, be expressed as two frequencies (both left and right
bandedges).  you might want to consider pre-warping both, or
alternatively, pre-warping the bandwidth defined by both bandedges.


That's a good point. This approach is used in 7.9 but you're right, it
should have been introduced in chapter 5.



lastly, i know this was a little bit of a sore point before (i can't
remember if it was you also that was involved with the little tiff i
had with Andrew Simper), but as depicted on Fig. 3-18, any purported
"zero-delay" feedback using this trapezoidal or BLT integrator does
get "resolved" (as you put it) into a form where there truly is no
zero-delay feedback.  a "resolved" zero-delay feedback really isn't
a zero-delay feedback at all.  the paths that actually feedback come
from the output end of a delay element.  the structure in Fig 3-18
can be transposed into a simple 1st-order direct form that would be
clear *not* having zero-delay feedback (but there is some zero-delay
feedforward, which has never been a problem).


The structure in 3.18 clearly doesn't have ZDF (although I don't think
it can be made equivalent to any of direct forms without changing its
topology and hence the time-varying behavior). That's the whole point of
the illustration. However, once you get used to the ZDF, I'd say that
it's probably much easier and more intuitive to stick to ZDF structures
like 3.12 and understand the resolution implicitly (or actually even
directly 2.2, you can notice that afterwards the book hardly uses any
discrete-time diagrams). Particularly, when nonlinearities are
introduced into the structure, thereby leaving you the freedom of
choosing the numerical approach to treat them (post-resolution
application, Newton-Raphson, analytical solution, "mystran's method" etc).



i'll be looking this over more closely, but these are my first
impressions.  i hope you don't mind the review (that was not
explicitly asked for).


Would be highly appreciated. And thanks for the comments which you
already made.

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-07-24 Thread Vadim Zavalishin

Released the promised bugfix
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf

On 22-Jun-15 10:51, Vadim Zavalishin wrote:

Didn't realize I was answering a personal rather than a list email, so
I'm forwarding here the piece of information which was supposed to go to
the list:

While we are on the topic of the book, I have to mention that I found
a bug in the Hilbert transformer cutoff formulas 7.42 and 7.43. I tried
to merge odd and even orders into a simpler formula and introduced
several mistakes. The necessary corrections are (if I didn't make another
mistake again ;) )
- the sign in front of each occurrence of sn must be flipped
- x=(4n+2+(-1)^N)*K(k)/N
- the stable poles are given by n

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-13 Thread Vadim Zavalishin

On 10-Jul-15 19:50, Charles Z Henry wrote:

The more general conjecture for the math heads :
If u is the solution of a differential equation with forcing function g
and y = conv(u, v)
Then, y is the solution of the same differential equation with forcing function
h=conv(g,v)

I haven't got a solid proof for that yet, but it looks pretty easy.


How about the equation

u''=-w*u+g

where v is sinc and w is above the sampling frequency?


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-07 Thread Vadim Zavalishin

On 06-Jul-15 04:03, Sampo Syreeni wrote:

On 2015-06-30, Vadim Zavalishin wrote:

I would say the whole thread has been started mostly because of the
exponential segments. How are they out of the picture?


They are for *now* out, because I don't yet see how they could be
bandlimited systematically within the BLEP framework.


Didn't I describe this in my previous posts?



But then evidently they can be bandlimited in all: just take a segment
and bandlimit it. It's not going to be an easy math exercise, but it
*is* going to be possible even within the distributional framework.


I'd say even without one. A time-limited segment is in L2, isn't it?



I don't think I'm good enough with integration to do that one myself.
But you, Ethan and many others on this list probably are. Once you then
have the analytic solution to that problem, I'm pretty sure you can tell
from its manifest form whether the BLEP framework cut it.


That would be a nice check, but I'm not sure I'd be able to derive an 
analytic closed-form expression for the related sum of the BLEPs which 
is what we need to compare against. But could you spot a mistake in my 
argument otherwise?



Consider a piecewise-exponential signal being bandlimited by BLEP.


That sort of implies an infinite sum of equal amplitude BLEPs, which
probably can't converge.


I think I have addressed exactly this convergence issue in my previous 
posts and in my paper. Furthermore, the convergence seems to be directly 
related to the bandlimitedness of the sine (see the paper). The same 
conditions hold for an exponential, hence my idea to define the 
"extended bandlimitedness" based on the BLEP convergence (or rather, the 
rolloff of the derivatives, which defines the BLEP convergence).



Each of these exponentials can be represented as a sum of
rectangular-windowed monomials (by windowing each term of the Taylor
series separately).


They can't: they are not finite sums, but infinite series, and I don't
think we know how to handle such series right now.


I meant "inifinite series of rectangular-windowed monomials". I'm not 
sure what specifically you are referring to by "we don't know how to 
handle them". We are just talking about pointwise convergence of this 
series.





We can apply the BLEP method to bandlimit each of these monomials and
then sum them up.


We can handle each (actually sum of them) monomial. To finite order. But
handling the whole series towards the exponential...not so much.


Again, pointwise convergence is meant.




If the sum converges then the obtained signal is bandlimited, right?


If it does, yes. But I don't think it does.


According to my paper, it does. Unless I made a mistake, the BLEP 
amplitudes roll off as 1/n, so if the derivatives (which are the BLEP 
gains) decay exponentially (which they do for a sine 
bandlimited to 1), the sum converges. Notice that this sufficient 
condition for the BLEP convergence is fulfilled if and only if the sine 
is bandlimited.



I'm pretty sure you shouldn't be thinking about the bandlimited forms,
now. The whole BLIT/BLEP theory hangs on the idea that you think about
the continuous time, unlimited form first, and only then substitute --
in the very final step -- the corresponding bandlimited primitives.


So it did, but what's wrong with doing the same for the monomial series?


The sufficient convergence condition for the latter is that the
derivatives of the exponent roll off sufficiently fast.


But they don't, do they?


Of course they do:

d^n exp(a*t)/dt^n = a^n * exp(a*t)

so they roll off as a^n. The same holds for the sine. For a sine bandlimited 
to 1 we have a < 1 and thus the BLEP sum converges.
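(Spelling out the implicit step in LaTeX notation: with the n-th BLEP
amplitude bounded by C/n and the n-th derivative, i.e. the BLEP gain, bounded
by a^n,

\sum_{n\ge 1} \frac{C}{n}\,a^n = -C\ln(1-a) < \infty \qquad \text{for } 0 < a < 1,

so the total BLEP correction converges.)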


> When you snap an exponential back to zero, you necessarily snap back all of
its derivatives.


I'm not sure what you are referring to as "snapping".



I mean, when you introduce a hard phase shift to a sine, you don't just
modulate the waveform AM-wise.


I do, if we consider the same in complex numbers. This is more or less 
what my paper is doing in the "ring modulation" approach.



Especially when it gets bandlimited, the way you interpolate the
waveform ain't gonna have just Diracs there, but Hilberts as well, and
both of all orders...


You lost me here a little bit. What's a Hilbert? 1/t? I thought it's the 
Fourier transform of a Heaviside. How is it a derivative of a Dirac? 
Furthermore, if we are talking in the spectral domain, we are going to 
have issues arising from the convergence of the infinite series in the 
time domain (you mentioned that the set of tempered distributions is not 
closed), that's why I specifically tried to stay in the time domain. 
That's the whole point: the bandlimitedness can be checked in the time 
domain, without even knowing the spectrum. Maybe th

Re: [music-dsp] Sampling theorem extension

2015-06-30 Thread Vadim Zavalishin

On 30-Jun-15 00:43, Sampo Syreeni wrote:

And even if what we've been talking about above does go as far as I
(following Vadim) suggested, exponential segments are still out of
the picture for now.


I would say the whole thread has been started mostly because of the
exponential segments. How are they out of the picture? And if they are
out of the picture, then so must be the BLEP sines, because we don't
have a sufficiently rigorous framework to mathematically substantiate
the application of the BLEP method to the sine sync.

But I wonder if you can spot a mistake in the following (and if you do,
it probably should equally apply to the sine).

Consider a piecewise-exponential signal being bandlimited by BLEP. We
wish to know if we obtain a bandlimited signal in the result. Represent
the signal as a sum of rectangular-windowed exponentials. Each of these
exponentials can be represented as a sum of rectangular-windowed
monomials (by windowing each term of the Taylor series separately). We
can apply the BLEP method to bandlimit each of these monomials and then
sum them up. If the sum converges then the obtained signal is
bandlimited, right?

Now the sum of bandlimited rectangular-windowed monomials converges if
and only if the sum of their BLEP residuals converges. The sufficient
convergence condition for the latter is that the derivatives of the
exponent roll off sufficiently fast. We have seen exactly this for the
sine (in my paper), where the sufficient conditions for the BLEP
convergence is that the sine frequency is below Nyquist. Notice that the
Taylor series for exp is essentially the same as for the sine, therefore
they have the same rolloff speed. So, the exponent is "bandlimited" (in
the sense that the BLEP sum converges) under the same conditions as the
sine (the "frequency" must be below Nyquist).

On 29-Jun-15 21:15, Sampo Syreeni wrote:

I'm still going to have to let this one lie for awhile, because I
don't think it's what Schwarz's theorem actually says. Or that of
Hörmander.


Well, that's what Wikipedia and a number of other sources say. A signal
is bandlimited to A if and only if it is entire, of exponential type A in
the imaginary direction (grows as exp(A*|Im x|)), and otherwise has a
polynomial growth speed (or so I understood the statement, please correct
me if I'm wrong).

This is where the definition of "bandlimited" can be adjusted. The
signals which do not have a spectrum are considered non-bandlimited,
whereas we could instead say that they are neither the one nor the other
("we don't know"). And I was proposing an extended definition, which more
or less seems to agree with the usual one for all the signals which have
spectra.


But in any case tell me, does what I was talking about seem like
your intuition? What do you think is the part which needs the most
work still?


It's pretty close in the intuitive reasoning to what I have been
thinking. Except that I would like to take this further to
non-polynomial entire functions (like the sine), and those which cannot
be handled by the tempered distribution framework (exponential), but
still are of practical importance.

--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-25 Thread Vadim Zavalishin

On 24-Jun-15 21:30, Ethan Duni wrote:

Could you expand a bit on exactly what it means to apply the BLEP method to
the discontinuities? I have a general grasp of the basic idea but I'm a bit
fuzzy on exactly what this means in practice. If you're getting a
truly band limited signal, then isn't the result infinite in time extent?
In which case how do you use this for dynamic synthesis in practice? If the
idea is to stitch together these BLEP'd segments to represent, say, a
hard-synced sawtooth or something, then don't you end up needing huge
latency?


By integrating the Dirac delta we obtain a Heaviside step function (a 
discontinuity of the 0th order). Integrating again we obtain a 
discontinuity of the 1st order and so on. The bandlimited version of the 
0th order discontinuity is the integral sine Si(x). The bandlimited 
versions of discontinuities of higher orders are integrals of Si(x), 
which have analytical expressions via Si(x). The BLEP method replaces a 
discontinuity of each derivative by its respective bandlimited version.

More detail can be found e.g. here
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/SineSync.pdf

In practice the BLEPs need to be time-limited by windowing. But neither 
can we compute an infinite sum of those at a single point anyway (except 
in special cases), which is another tradeoff. As to the latency, 
practical window sizes are on the order of a few samples, which is 
quite acceptable.
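For reference, a minimal sketch (my code) of the 0th-order case: the
Nyquist-limited step is 0.5 + Si(pi*t)/pi with t in samples, and the BLEP
residual is what gets added around each discontinuity (before windowing):

import numpy as np
from scipy.special import sici

def bandlimited_step(t):
    # 0th-order bandlimited step: 0.5 + Si(pi*t)/pi, t in samples
    si, _ = sici(np.pi * t)
    return 0.5 + si / np.pi

t = np.arange(-8, 8, dtype=float)          # sample offsets around the step
residual = bandlimited_step(t) - (t >= 0)  # BLEP residual (to be windowed)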


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-24 Thread Vadim Zavalishin

On 24-Jun-15 15:31, Sampo Syreeni wrote:

Certainly any signal with compact support isn't bandlimited. That's the
simplest form of the uncertainty principle. But even if you take a
strictly bandlimited "window" function with rapid falloff (a bandlimited
square/flattop convolved with itself a couple of times comes to mind),
you can easily fit stuff under it which behaves nice as peaches with
regard to derivative conditions, but manages to have arbitrarily high
frequency content.


I don't want to take bandlimited windows. I want to take a rectangular 
window. But then I can apply the BLEP method to the discontinuities in 
the function and the derivatives arising from this window. Now, do I get 
a bandlimited version in the result? If I apply the window to a 
polynomial (particularly a straight line, occurring in the sawtooth) or 
to a bandlimited sine and then "BLEP" the result, then I would expect to 
get a bandlimited signal in the result, right? Now, how about doing the 
same to an exponential?




Say, the simplest form of phase modulation at low index,
f(x)=sin(x+sin(x)/10). When you "window" that, the result has all of the
properties you're asking for, including no discontinuities in any
derivative and nice enough rolloff of them in modulus. So it obeys the
Paley-Wiener-Schwartz conditions, and its Fourier transform extends into
an entire function. But the resulting function still contains infinitely
high frequencies, and so can't be sampled losslessly at any finite rate.


Upon a first attempt I failed to estimate the growth of derivatives of 
an FM sine analytically. But numerical checks of the first few 
derivatives were suggesting that the derivatives are not rolling off 
fast enough. So I would expect that this function doesn't satisfy the 
PWS conditions (exponential growth along the imaginary axis). 
Furthermore, if it did satisfy the PWS conditions, this would have meant 
that the signal is bandlimited (according to the PWS theorem itself).




Don't get me wrong, I'm not trying to rain on your parade. I'm pretty
impressed that someone is willing to take the time to go back into the
fundamentals, and I too would like to see stuff such as PM properly
bandwidth limited.


I don't think that the BLEP method is capable of bandlimiting an FM 
sine. But it might be able to bandlimit an FM sawtooth, even if FM is 
exponential.



But I still don't see how you're going to get there
starting with derivative conditions. To me they seem incommeasurate with
what you're trying to achieve.


The derivative rolloff is exactly what defines the convergence of 
the sum of the BLEPs (one BLEP per derivative). More specifically, a 
sufficiently fast rolloff is a sufficient condition for the convergence 
of the BLEPs. So, there seems to be a strong correspondence between the 
convergence of the BLEPs and the bandlimitedness of the "base" signal.


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-23 Thread Vadim Zavalishin

On 22-Jun-15 21:59, Sampo Syreeni wrote:

On 2015-06-22, Vadim Zavalishin wrote:


After some googling I rediscovered (I think I already found out it one
year ago and forgot) the Paley-Wiener-Schwartz theorem for tempered
distributions, which is closely related to what I was aiming at.


It'll you land right back at the extended sampling theorem I told about,
above.


Exactly (if by "extended sampling theorem" you mean the sampling theorem 
for tempered distributions). And now, by dropping the restriction of 
polynomial growth on the real axis in PWS, I can handle any analytic signal. 
And those which are not analytic are not bandlimited anyway.



So why fret about the complex extensions?


I'm not sure which specific meaning of the word "complex" you imply 
here. But the main motivation for the whole stuff is applying the BLEP 
method to frequency-modulated sawtooth and triangle, where the FM is 
either done in the exponential scale and/or the oscillator is 
self-modulated. In this case you get exponential segments (and more 
complex shapes if self-modulation is done in the exp scale I believe). 
This also should cover the question of applicability of BLEP to 
arbitrary signal shapes more or less.



It's just that you don't need any of that machinery in order to deal
with that mode of synthesis, and you can easily see from the
distributional theory that you can't do any better.


It seems I can do better. Because my question is not whether an 
infinitely-long signal, which doesn't even have Fourier transform, is 
bandlimited. My question is whether a time-limited version of that 
signal is "bandlimited except exactly for the discontinuities arising 
from the time-limiting".



--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-06-22 Thread Vadim Zavalishin
Didn't realize I was answering a personal rather than a list email, so 
I'm forwarding here the piece of information which was supposed to go to 
the list:


While we are on the topic of the book, I have to mention that I found
a bug in the Hilbert transformer cutoff formulas 7.42 and 7.43. I tried
to merge odd and even orders into a simpler formula and introduced
several mistakes. The necessary corrections are (if I didn't make another
mistake again ;) )
- the sign in front of each occurrence of sn must be flipped
- x=(4n+2+(-1)^N)*K(k)/N
- the stable poles are given by n
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-22 Thread Vadim Zavalishin
 BLEP bandlimiting to the discontinuities of y and its
derivatives, obtaining (if the infinite sum of BLEPs converges) some
other signal y'. The signal x is called bandlimited if for any
rectangular window w(t), the signal y' exists (the BLEPs converge) and
y'=BL[y].

This definition is well-specified and directly maps to the goals of
the BLEP approach. The conjectures are

- for the signals which are in L_2 the definition is equivalent to the
usual definition of bandlimitedness.
- if y' exists (BLEPs converge), then y'=BL[y]

If the BLEP convergence is only given within some interval of the time
axis (don't know if such cases can exist), then we can speak of
signals "bandlimited on an interval".







--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Vadim Zavalishin

On 12-Jun-15 12:54, Andreas Tell wrote:

I think it’s not hard to prove that there is no consistent
generalisation of the Fourier transform or regularisation method that
would allow plain exponentials. Take a look at the representation of
the time derivative operator in both time domain, d/dt, and frequency
domain, i*omega. The one-dimensional eigensubspaces  of i*omega are
spanned by the eigenvectors delta(omega-omega0) with the associated
eigenvalues i*omega0. That means all eigenvalues are necessarily
imaginary, with exception of omega0=0. On the other hand, exp(t) is
an eigenvector of d/dt with eigenvalue 1, which is not part of the
spectrum of the frequency domain representation.

This means, there is no analytic continuation from other transforms,
no regularisation or transform in a weaker distributional sense.


On one hand cos(omega0*t) is delta(omega-omega0)+delta(omega+omega0) in 
the frequency domain (some constant coefficients possibly omitted). On 
the other hand, its Taylor series expansion in time domain corresponds 
to an infinite sum of derivatives of delta(omega). So an infinite sum 
delta^(n)(omega) (which are zero everywhere except at the origin) must 
converge to delta(omega-omega0)+delta(omega+omega0), correct? ;)
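
(For the record, the formal bookkeeping behind that remark, sketched with all 
convergence questions ignored, which is precisely the point being made, and 
using the pair FT[t^(2n)] = 2*pi*(-1)^n*delta^(2n)(omega), 2*pi conventions aside:

\[
\cos\omega_0 t=\sum_{n=0}^{\infty}\frac{(-1)^n(\omega_0 t)^{2n}}{(2n)!}
\;\xrightarrow{\ \mathcal{F}\ }\;
2\pi\sum_{n=0}^{\infty}\frac{\omega_0^{2n}}{(2n)!}\,\delta^{(2n)}(\omega),
\]

which, coefficient by coefficient, is the formal Taylor expansion in omega_0 of 
\pi[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)], so the two formal 
expressions at least agree term-wise.)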


This is just to illustrate that the eigenspace reasoning might not work 
for infinite sums. And I don't know the sufficient condition for it to 
work (and this condition wouldn't hold here anyway, probably). Or is 
there any mistake in the above?


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Vadim Zavalishin

On 11-Jun-15 19:58, Sampo Syreeni wrote:

On 2015-06-11, vadim.zavalishin wrote:


Not really, if the windowing is done right. The DC offsets have more
to do with the following integration step.


I'm not sure which integration step you are referring to.


The typical framework starts with BLITs, implemented as interpolated
wavetable lookup, and then goes via a discrete time summation to derive
BLEPs. Right?


I prefer analytical expressions for BLEPs of 0 and higher orders :)


So the main problem tends to be with the summation,
because it's a (borderline) unstable operation.


I don't think so. The analytical expressions give beautiful answers 
without any ill-conditioning.
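
For readers who haven't seen such closed forms: one widely used example of an 
analytic BLEP correction is the 2-sample "polyBLEP" residual. The sketch below 
is that textbook form (not necessarily the exact expressions I'm referring to), 
with purely illustrative names:

    def poly_blep(t, dt):
        # t  -- oscillator phase in [0, 1)
        # dt -- phase increment per sample (frequency / sample rate)
        # returns the residual to subtract from a naive -1..+1 sawtooth
        # in order to band-limit its wrap-around discontinuity
        if t < dt:                     # sample just after the wrap
            x = t / dt
            return x + x - x * x - 1.0
        elif t > 1.0 - dt:             # sample just before the wrap
            x = (t - 1.0) / dt
            return x * x + x + x + 1.0
        return 0.0                     # far from any discontinuity

    # usage sketch: saw = (2.0 * t - 1.0) - poly_blep(t, dt)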




So we don't know, if exp is bandlimited or not. This brings us back to
my idea to try to extend the definition of "bandlimitedness", by
replacing the usage of Fourier transform by the usage of a sequence of
windowed sinc convolutions.


The trouble is that once you go with such a local description, you start
to introduce elements of shift-variance.


How's that? This condition (transparency of the convolution of the 
original signal with sinc in continuous time domain) is equivalent to 
the normal definition of bandlimitedness via the Fourier transform, as 
long as Fourier transform exists. The thing is, by understanding the 
convolution integral in the generalized Cesaro sense (or just ignoring 
the 0th and higher-order "DC offsets" arising in this convolution) we 
might attempt to extend the class of applicable signals. This seems 
relatively straightforward for polynomials. As the next step we can 
attempt to use polynomials of infinite order, particularly the Taylor 
expansions, where the BLEP conversion question will arise. The answer to 
the latter might be given by the rolloff speed of the Taylor series 
terms (derivative rolloff).
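
As a crude numerical illustration of the polynomial case (a sketch only, with 
arbitrary grid and window choices): convolving a sampled ramp x(t)=t with a 
windowed sinc whose DC gain has been renormalized to exactly 1 (which plays 
roughly the role of ignoring the 0th-order "DC offset") reproduces the ramp 
away from the edges:

    import numpy as np

    dt = 0.125                                  # fine grid, ~8x "oversampled"
    tk = np.arange(-64.0, 64.0 + dt, dt)
    kernel = np.sinc(tk) * np.hanning(len(tk))  # windowed sinc
    kernel /= kernel.sum()                      # force unit DC gain
    t = np.arange(0.0, 512.0, dt)
    ramp = t.copy()                             # x(t) = t
    out = np.convolve(ramp, kernel, mode="same")
    edge = len(tk)                              # skip the window-truncated ends
    print(np.abs(out - ramp)[edge:-edge].max()) # tiny (rounding level)

For the ramp the exactness (up to rounding) follows simply from the kernel 
having unit DC gain and, by symmetry, zero first moment.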


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Vadim Zavalishin

On 11-Jun-15 11:00, Sampo Syreeni wrote:

I don't know how useful the resulting Fourier transforms would be to the
original poster, though: their structure is weird to say the least.
Under the Fourier transform polynomials map to linear combinations of
the derivatives of various orders of the delta distribution, and their
spectrum has as its support the single point x=0.


So they can be considered "kind of" bandlimited, although as I noted in 
my other post, it seems to result in DC offsets in their restored 
versions, if sinc is windowed. It probably can be shown that in the 
context of BLEP these DC offsets will cancel each other (possibly under 
some additional restrictions). So, this seems to agree with my previous 
guesses and ideas.


You also mentioned (or I understood you so) that the exp(at) (a - real, 
t - from -infty to +infty) is not bandlimited (whereas my conjecture, 
based on the derivative rolloff speed, was that it's bandlimited if a is 
below the Nyquist). Could you tell us what its spectrum looks like?


Thanks,

Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Vadim Zavalishin
or the BLEP sum to converge is one and the same 
and has to do with the rolloff speed of the function's derivatives as 
the derivative order increases.



--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-10 Thread Vadim Zavalishin

On 09-Jun-15 22:08, robert bristow-johnson wrote:

a Nth order polynomial, f(x), driven by an x(t) that is bandlimited to B
will be bandlimited to N*B.  if you oversample by a ratio of at least
(N+1)/2, none of the folded images (which we call "aliases") will reach
the original passband and can be filtered out with an LPF (at the high
sampling rate) before downsampling to the original rate.  with 4x
oversampling, you can do a 7th-order polynomial and avoid non-harmonic
aliased components.


We are not talking about signals being fed through the polynomials. We 
are talking about the polynomials as the signals.



I'm failing to see how Euler equation can relate exponentials of a real
argument to sinusoids of a real argument? Any hints here?



let f(x) be defined to be  f(x) = e^(j*x)/(cos(x) + j*sin(x)).

I'm failing to see how Euler equation can relate exponentials of a *real 
argument* to sinusoids of a *real argument*




I hope we are talking about
one and the same real exponential exp(at) on (-infty,+infty) and not
about exp(-at) on [0,+infty) or exp(|a|t).



oh, (assuming you meant e^(-|a*t|)), them's are in the textbooks.


Did I correctly understand you? The Fourier transform of exp(at) where a 
and t are real and t is from -infty to +infty is in the textbooks? Any 
hints on what it looks like?



Yes, this is what I was referring to. Currently I'm interested in the
class of functions which are representable as a sum of a real function,
which, if analytically extended to the complex plane, is entire and
isolated derivative discontinuity functions (non-bandlimited versions of
BLEPs BLAMPs etc).



i think, if you allow for dirac impulses (or an immeasureably
indistinguishable approximation of width 10^(-44) second), any finite
and virtually bandlimited function will do.  if you insist on being
strict with your mathematics, i can't help you anymore (it's been more
than 3 decades since i cracked open any Real Analysis or Complex
Variables or Functional Analysis textbook)


The problem currently is not the impulses, but the entire (complex 
analytical) part of the signal.



BTW, i am no longer much enamoured with BLIT and the descendents of
BLIT.


I'm not sure how BLEP is a descendant of BLIT


Because, if the continuous part is bandlimited, then we have "just" to
replace the discontinuities by their bandlimited versions (the essence
of the BLEP approach) and the remaining question is only: if there are
infinitely many discontinuities at a given point, whether the sum of
their bandlimited versions will converge.



they don't.  imagine a perfect brickwall filter with sinc(t) as its
impulse response.  now drive the sonuvabitch with

           { (-1)^n    n >= 0
   x[n] =  {
           { -(-1)^n   n < 0


which is   x[n] = -1, +1, -1, +1, +1, -1, +1, -1, ...
                                  ^
                                  |
                                x[0]

and see what you get.  it ain't BIBO.


Interesting observation. I might need to think a little bit more about 
this :)
However I'm not sure how this is related to the convergence of the BLEPs 
in the context which we are talking about.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-10 Thread Vadim Zavalishin

On 09-Jun-15 19:23, Ethan Duni wrote:

Could you give a little bit more of a clarification here? So the
finite-order polynomials are not bandlimited, except the DC? Any hints
to what their spectra look like? What would a bandlimited polynomial look
like?



Any hints on what the spectrum of an exponential function looks like? What
does a bandlimited exponential look like? I hope we are talking about
one and the same real exponential exp(at) on (-infty,+infty) and not
about exp(-at) on [0,+infty) or exp(|a|t).


The Fourier transform does not exist for functions that blow up to +-
infinity like that.


I understood from Sampo Syreeni's answer, that Fourier transform does 
exist for those functions. And that's exactly the reason for me asking 
the above question.




To do frequency domain analysis of those kinds of
signals, you need to use the Laplace and/or Z transforms. Equivalently, you
can think of doing a regular Fourier transform after applying a suitable
exponential damping to the signal of interest. This will handle signals
that blow up in one direction (like the exponential), but signals that blow
up in both directions (like polynomials) remain problematic.


Not good enough. If we're talking about unilateral Laplace transform, 
then it introduces a discontinuity at t=0, which immediately introduces 
further non-bandlimited partials into the spectrum. I'm not sure how you 
propose to answer the question of the original signal being bandlimited 
in this case. With the bilateral Laplace transform it's also complicated, 
because the damping doesn't work there, except possibly at one specific 
damping setting (for an exponential; for polynomials it doesn't work 
at all), yielding a DC. I'm not fully sure how to analytically extend 
this result to the entire complex plane and whether this will make sense 
in regards to the bandlimiting question.
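
To spell out why the damping fails in the bilateral case (a sketch, for real a):

\[
\int_{-\infty}^{+\infty} e^{at}\,e^{-st}\,dt
\]

needs Re s > a for convergence at t -> +infty and Re s < a for convergence at 
t -> -infty, so the region of convergence is empty; the only candidate left is 
the single line Re s = a, where the damped signal degenerates into a pure 
oscillation (or a constant), which is the "one specific damping setting 
yielding a DC" mentioned above.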




That said, I'm not sure why this is relevant? Seems like you aren't so much
interested in complete exponential/polynomial functions over their entire
domain, but rather windowed versions that are restricted to some small time
region?


I am specifically interested in the functions on the entire real axis. 
Further in my original email there is an explanation of the reasons.


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Sampling theorem extension

2015-06-09 Thread Vadim Zavalishin
functions, and integrate down to something
which fulfils your condition yet will fail any normal sampling
theory under the assumption of shift-invariance. If you don't have
such a limit, then you're essentially already back to the theory of
Schwartz spaces, which are used in the construction of the topology
of the space of tempered distributions I mentioned above.


See the above statement about the entire function. I just didn't mention
this in the context of the other thread, trying to be brief.

Anyway, here is my conjecture.

The signals in our class of interest are a sum of two parts:
- the "continuous part" a real function of real argument which is
simultaneously entire in the complex plane.
- the "discontinuous part" which is a sum of "non-bandlimited BLEPS (0th
order), BLAMPS (1st order bleps) and higher-order bleps", where the
discontinuity points are isolated but are allowed to have infinite
"multiplicity" (that is at a single isolated point there may be
discontinuities in all the derivatives).

We are interested in:
- whether the continuous part of our signal is bandlimited?
- what is the rolloff speed of the derivative discontinuity amplitudes
(as the order of the discontinuity grows) at each multiple-discontinuity
point?

Because, if the continuous part is bandlimited, then we have "just" to
replace the discontinuities by their bandlimited versions (the essence
of the BLEP approach) and the remaining question is only: if there are
infinitely many discontinuities at a given point, whether the sum of
their bandlimited versions will converge.

The convergence of BLEPs is defined by the rolloff speed of the
derivatives. My conjecture is that so is the bandlimitedness of the
entire function, and that the rolloff speed requirement is exactly the
same for both conditions!

If this conjecture is true, this should automatically imply that all
polynomials are bandlimited, since all their higher-order derivatives
are zeros.

Here is the older thread where I was asking the same question:
http://music.columbia.edu/pipermail/music-dsp/2014-June/072679.html

--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Did anybody here think about signal integrity

2015-06-08 Thread Vadim Zavalishin

On 08-Jun-15 15:06, Theo Verelst wrote:

Clearly, there's very little knowledge around the basic mathematical
proofs underpinning a decent undergrad engineering course. Prisms
understand fine what the Fourier transform is, and isn't. Maybe there's
an interest in this:
http://mathworld.wolfram.com/FourierTransformExponentialFunction.html .


Clearly this is not the exponential signal which I was referring to. 
This is also a clear indication of the limitation of the sampling 
theorem I was referring to: since it's not possible to take Fourier 
transform of an exponential function, people refer to some other 
function as the "exponential function" in the context of the Fourier 
transform :)


Anyway, since Theo expressed the wish not to change the direction of the 
thread, let's not dwell on the question that I brought up. Although, if 
there is a renewed interest to discuss this aspect, I guess, creating a 
new thread could be an appropriate way to go.


Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Did anybody here think about signal integrity

2015-06-08 Thread Vadim Zavalishin
The direct impact on the application can be e.g. using BLEPs (of 0th and 
higher orders) to antialias a sine hard sync or an exp-scale-modulated 
or self-modulated sawtooth (which produces exp segments), which seems to 
work but is lacking a firm theoretical foundation. In fact so does even 
the basic BLEP approach, since the function x(t)=at cannot be analyzed 
by the sampling theorem either.


But I'm afraid I'm hijacking Theo's thread. Just wanted to point out 
another topic leading to the idea of the incompleteness of the sampling 
theorem. Although, if Theo doesn't mind, maybe we could further discuss 
this topic (or both topics simultaneously, since they seem to be related).


Regards,
Vadim

On 08-Jun-15 10:54, STEFFAN DIEDRICHSEN wrote:

IIRC, the discussion back then covered some topics like distortions created 
with polynomial functions, etc.
Although DC isn’t a real problem in practical applications, there are many 
cases where it is hard to predict whether they cause aliasing. A good example is FM, 
whose spectra can be predicted using Bessel functions, but who wants to do that?
Or wave shaping using atan() or other transcendental functions.
I think there was a conclusion that there are cases which aren’t covered so 
well by theory, but the impact on the application can be overlooked.

Steffan




On 08.06.2015|KW24, at 10:35, Victor Lazzarini  wrote:

Not sure I understand this sentence. As far as I know the FT is defined as an 
integral between -inf and +inf, so I am not quite
sure how it cannot capture infinite-length sinusoidal signals. Maybe you meant 
something else? (I am not being difficult, just
trying to understand what you are trying to say).

Dr Victor Lazzarini
Dean of Arts, Celtic Studies and Philosophy,
Maynooth University,
Maynooth, Co Kildare, Ireland
Tel: 00 353 7086936
Fax: 00 353 1 7086952


On 8 Jun 2015, at 08:19, vadim.zavalishin <vadim.zavalis...@native-instruments.de> wrote:

It might seem that such signals are unimportant, however even the infinite 
sinusoidal signals, including DC, cannot be treated by the sampling theorem, 
since the Dirac delta (which is considered as their Fourier transform) is not a 
function in a normal sense and strictly speaking Fourier transform doesn't 
exist for these signals.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Did anybody here think about signal integrity

2015-06-08 Thread Vadim Zavalishin
If you try to take the Fourier transform integral of exp(j*omega_0*t), 
it will not converge in the sense in which an improper integral's 
convergence is usually understood. You will need to employ something 
like Cauchy principal value or Cesaro convergence to make it converge to 
zero at omega!=omega_0. At omega=omega_0 the integral diverges no matter 
in which sense you take it. So, strictly speaking, Fourier transform of 
a sine doesn't exist.
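
In other words, for the truncated integral (a sketch of the computation):

\[
\int_{-T}^{T} e^{j\omega_0 t}\,e^{-j\omega t}\,dt
 = \frac{2\sin\bigl((\omega_0-\omega)T\bigr)}{\omega_0-\omega},
\]

which, as T grows, keeps oscillating without settling for omega != omega_0 
(it only averages out to zero in a Cesaro-like sense) and grows as 2T at 
omega = omega_0.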


An equivalent look to this from the inverse transform's side is that the 
spectrum of the sine is a Dirac delta function, which is not a function 
in the normal sense.


So, none of the statements of the Fourier transform theory (including 
the sampling theorem, which assumes the existence of the Fourier 
transform), taken rigorously, seem to apply to the sinusoidal signals.


Regards,
Vadim

On 08-Jun-15 10:35, Victor Lazzarini wrote:

Not sure I understand this sentence. As far as I know the FT is defined as an 
integral between -inf and +inf, so I am not quite
sure how it cannot capture infinite-length sinusoidal signals. Maybe you meant 
something else? (I am not being difficult, just
trying to understand what you are trying to say).

Dr Victor Lazzarini
Dean of Arts, Celtic Studies and Philosophy,
Maynooth University,
Maynooth, Co Kildare, Ireland
Tel: 00 353 7086936
Fax: 00 353 1 7086952


On 8 Jun 2015, at 08:19, vadim.zavalishin 
 wrote:

It might seem that such signals are unimportant, however even the infinite 
sinusoidal signals, including DC, cannot be treated by the sampling theorem, 
since the Dirac delta (which is considered as their Fourier transform) is not a 
function in a normal sense and strictly speaking Fourier transform doesn't 
exist for these signals.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-04 Thread Vadim Zavalishin
After writing the previous mail, I realized, that what I described in 
regards to mass-spring vs SVF was exactly the thing which I mentioned 
earlier: placing the (identical) cutoff gains before the integrators is 
equivalent to the time axis distortion. This might be seen as some kind 
of "proof" that such structures have the best time-varying performance 
in cases of cutoff modulation. Although, whether this is equivalent to 
the choice of the energy-based state variables in real-world physics 
systems is not fully clear to me yet.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-04 Thread Vadim Zavalishin

On 03-Feb-15 03:39, Andrew Simper wrote:

I completely agree! I find it mentally easier to think of "energy
stored" in each component rather than "state variables" even though
they are the same. So for musical applications it is important that a
change in the cutoff and resonance doesn't change (until you process
the next sample) the energy stored in each capacitor / inductor /
other energy storage component in your model. Direct form structures
do not have this energy conservation property, they are only
equivalent in the LTI case (linear time invariant - ie don't change
your cutoff or resonance ever). Any method that tries to jiggle the
states to preserve the energy would only be trying to do what already
happens automatically with some of state space model, so I feel it is
best to leave such forms for static filtering applications.


I'm not sure whether the choice of the energy-based state variables is 
indeed the best one (would be nice to try to have some kind of formal 
proof of that), but at least it seems to me that it might be that the 
SVF has the optimal (in a way) choice of those. My consideration is the 
following. Imagine a generic 2nd order mechanical system. Something like 
a mass on a spring. The external excitation force is the input signal. 
The natural choice of state variables is the position and the velocity 
of the mass. We can look at it as a multimode LP/BP/HP filter. Up to 
some scaling, the position is the lowpass output, the velocity is the 
bandpass output and the highpass can be obtained as a linear combination 
of LP, BP and the input. Since there are not that many possibilities to 
construct a 2nd order linear differential system, our system is 
equivalent to an SVF in the LTI sense. The time-varying behavior will 
depend on which specific coefficients of the 2nd order differential 
equations are modulated and on the choice of state variables.


So, our state variables are the position and the velocity. Imagine our 
system has a high cutoff and we are feeding in a sufficiently high 
sinusoidal signal with the frequency below the cutoff. So the lowpass 
output (the position) is a similar sinusoid. At the zero-crossing time 
the velocity will be maximal. Suppose we suddenly lower the cutoff at 
this moment. Intuitively (I admit that this requires a more rigorous 
check), at a lower cutoff the velocity will not be changing so quickly 
any more. This means, the output signal of the system will significantly 
overshoot the previous output amplitude.


In comparison, the state variables of the SVF are using the velocity 
divided by the cutoff. Which means a sudden reduction of the cutoff will 
proportionally reduce the velocity and the overshoot will not be that 
big anymore.


In order to test your conjecture about the energy-based state variables, 
one would need to explicitly write down the mass-spring equation and 
compare the respective choices of state variables to those of SVF. 
Possibly, the energy-based state variables of the mass-spring system 
will be equivalent to the state variables of the SVF, which would then 
be a sign that your idea might be correct.
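
A quick back-of-the-envelope version of that check (a sketch, not a proof): 
for the mass-spring system

\[
m\ddot x + c\dot x + kx = F,\qquad \omega_c=\sqrt{k/m},
\]

the stored energy is

\[
E=\tfrac12 kx^2+\tfrac12 m\dot x^2
 =\tfrac12 k\left(x^2+(\dot x/\omega_c)^2\right),
\]

i.e. up to the common factor k the energy is the sum of squares of exactly the 
pair (x, xdot/omega_c), which, if the SVF states are indeed the position and 
the velocity divided by the cutoff as described above, are (up to scaling) the 
SVF's LP and BP states rather than (x, xdot).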


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-04 Thread Vadim Zavalishin

On 04-Feb-15 00:25, robert bristow-johnson wrote:


On 2/2/15 6:21 PM, Stefan Sullivan wrote:

I actually found by playing around with a particular biquad problem
that changing the topology of the filter had a greater impact on
reducing artifacts than proving bibo stability. In fact, any
linearly interpolated biquad coefficients between stable filters
results in a stable filter, including with nonzero initial
conditions ( http://dsp.stackexchange.com/a/11267/5321).


i think so, too.  because the stability depends on the value of a2
and ramping from one stable value of a2 to another stable value
shouldn't be romping through unstable territory...


I'm not sure what exactly you are referring to here, but your statement
could evoke a common misconception. The "stable and unstable
territories" are defined only for LTI systems (if you look at the 
corresponding proofs, they all assume the time-invariance property of 
the filter). That's why the works dealing with proving the 
time-varying stability have to resort to criteria other than mere 
pole positions. Rigorously speaking, there is no reason to believe that 
the same thing should apply to time-varying systems. Intuitively 
speaking, the change of parameters can act as an additional 
"destabilizing factor", which is not taken into account by the 
LTI-specific "poles inside the unit circle" rule.


As I mentioned earlier, the proof linked above is not convincing either,
because it doesn't consider the coefficient modulation continuing indefinitely.
Sure, you can always treat the system as BIBO up to any given
finite point in time, but from that point of view ANY linear system is 
BIBO. The point of the BIBO definition is to put a uniform bound on 
the filter state growth, not to state that the filter state is finite at 
any finite time.



but if there is vibrato connected to a filter's control parameters,
i can see how a filter can go unstable and never settle down.


This is exactly the point of the time-varying stability analysis.

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-03 Thread Vadim Zavalishin

On 03-Feb-15 00:21, Stefan Sullivan wrote:

... In fact, any linearly interpolated
biquad coefficients between stable filters results in a stable filter,
including with nonzero initial conditions (
http://dsp.stackexchange.com/a/11267/5321). The trick is that the bounds
may be different with nonzero initial conditions than with zero initial
conditions.


I took a glance at the proof and it doesn't look very convincing. If 
there's interest, we could further discuss the details.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-02 Thread Vadim Zavalishin

One should be careful not to mix up two different requirements:

- time-varying stability of the filter
- the minimization of modulation artifacts

While both are probably closely related, there is no reason to believe 
that they are equivalent. My intuitive guess would be that the 
"absolutely best" (whatever that means) stability would occur (for 
2-pole filters) for a filter based on the 2nd order resonating Jordan 
normal cell, which is effectively just implementing a decaying complex 
exponential:


x[n+1] = A*(x[n]*cos(a)-y[n]*sin(a))
y[n+1] = A*(x[n]*sin(a)+y[n]*cos(a))

(the filter itself will be a state-space representation built around the 
above transition matrix). However, it's not even intuitively obvious if 
that structure would give you the minimal artifacts.
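
(The time-varying stability of that cell, at least, is easy to see; sketching 
the argument: the transition matrix is A times a plane rotation R(a), and a 
rotation preserves the Euclidean norm, so

\[
\left\|\begin{pmatrix}x[n{+}1]\\ y[n{+}1]\end{pmatrix}\right\|
 = A\left\|\begin{pmatrix}x[n]\\ y[n]\end{pmatrix}\right\|,
\]

i.e. each step multiplies the state norm by exactly A, whatever a is. So as 
long as A <= 1 at every step, the zero-input state cannot grow, no matter how 
A and a are modulated from sample to sample; for this particular structure the 
LTI pole-radius condition coincides with a genuine time-varying bound.)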


In regards to the artifact minimization, I have only an intuitive 
suggestion. Let's look at the SVF structure in continuous time (e.g. 
Fig.5.1 on p.77 of
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3.pdf) 
and at the structure of the continuous-time integrator (the two untitled 
pictures on p.49 in the same text). It's intuitively clear, that the 
integrator structure, where the cutoff gain precedes the integration 
generates "less" artifacts, since the integrator is "smoothing out" the 
coefficient changes. This leads to the idea that in this case the 
lowpass output of the SVF would be quite reasonable in regards to the 
artifact minimization, since each of the cutoff coefficients is smoothed 
by an integrator and the resonance coefficient is smoothed by both of 
them. Similar considerations can be applied to the other modes, where 
it's clear that the HP output gets the unsmoothed artifacts from the 
resonance changes. If we want to build a mixture of LP/BP/HP modes 
rather than picking the outputs one by one, then, maybe it's possible to 
smooth the artifacts by using the transposed (MISO) form of the SVF, but 
I'm not sure.


The thing with placing the cutoff before the integrator is pretty 
generic. It can be easily shown that in this case the cutoff 
modulations can be equivalently represented as time dilation/compression 
(provided all integrators share the same cutoff), thus they don't affect 
the filter stability and their artifacts are exactly those of time axis 
warping. It would be reasonable to expect that if we then apply any 
state-variable-preserving analog-to-digital transformation technique 
(such as trapezoidal integration/TPT), the amount of artifacts will be more 
or less the same. Similar reasoning can be applied to a continuous-time 
Jordan normal cell, which can then be converted to discrete time.
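
(For completeness, the warping argument in one line, sketched: if every 
integrator computes dy/dt = omega_c(t)*u, introduce the warped time

\[
\tau(t)=\int_0^t \omega_c(\sigma)\,d\sigma,
\]

then dy/dtau = u, i.e. in the variable tau the structure is an ordinary LTI 
filter with unit cutoff, and the entire effect of the cutoff modulation is the 
monotonic reparametrization t -> tau(t) of the time axis.)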


One would then generally expect other discretization approaches, which 
do not preserve the topology and state variables, such as direct forms 
to have a way poorer performance in regards to the artifacts, unless, of 
course, it's an approach which specifically targets the artifact 
minimization in one or the other way.


Regards,
Vadim

On 02-Feb-15 11:18, Ross Bencina wrote:

Hello Robert,

On 2/02/2015 10:10 AM, robert bristow-johnson wrote:

also, i might add to the list, the good old-fashioned lattice (or
ladder) filters.


In the Laroche paper mentioned earlier [1] he shows that Coupled Form is
BIBO stable and Normalized Ladder is stable only if coefficients are
varied at most every other sample (which, if adhered to, should also be
fine for the current discussion).

The Lattice filter is *not* time-varying stable according to Laroche's
analysis. I'd be curious to hear alternative views/discussion on that.

[1] Laroche, J. (2007) “On the Stability of Time-Varying Recursive
Filters,” J. Audio Eng. Soc., vol. 55, no. 6, pp. 460-471, June 2007.

Cheers,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] SVF and SKF with input mixing

2015-01-06 Thread Vadim Zavalishin
I was referring specifically to the transposition of the linear 
continuous-time 1-pole lowpass filter (like a buffered RC circuit), 
where (off the top of my head) it seems that the only difference between 
the transposed and the original versions is the placement of the cutoff 
gain before or after the integrator, which can even be completely 
ignored, once we consider integrators-with-cutoffs as atomic elements.


Along the same lines, the transposition of the transistor ladder, when 
treating the 1-poles as elementary blocks, seems to preserve the filter 
structure. Which, in case of the famous Antti's model means that we can 
preserve the nonlinearities as they are.


Why I say "seems" is because I didn't try to verify this on paper, so 
I'm just reserving a possibility of a mistake ;)


For the SVF you're right, the effect of the undamping nonlinearity is 
not obvious in the transposed version.


Regards,
Vadim

On 06-Jan-15 14:57, STEFFAN DIEDRICHSEN wrote:

Transposed filters have identical transfer functions, but differ in terms of 
rounding noise and coefficient quantization.

In case of nonlinearities, it’s difficult. A typical non-linearity is the Diode 
circuitry to “un-damp” the filter, which can be seen as a voltage dependent 
voltage divider. In the original arrangement, like the SEM filter, it depends 
on the BP out, but if you transpose it, this output is lost. I’m not sure, if 
you get the same properties distortion wise after transposition. However, the 
analog input mixing filters will sound differently from  a standard SVF, too.


Steffan



On 06.01.2015|KW2, at 14:40, Vadim Zavalishin 
 wrote:

(although I'm not sure, whether their transposed version is not identical to 
the original ;-) ).

Things can get less straightforward, once nonlinearities are involved.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] SVF and SKF with input mixing

2015-01-06 Thread Vadim Zavalishin
The SVF transposition can be done in continuous-time domain (where the 
filter is basically two integrators in series, both feeding back to the 
input). Then, applying the trapezoidal integration/ZDF techniques we 
obtain a multi-input 2-pole SVF.
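
For reference, the plain single-input version of that trapezoidally-integrated 
SVF comes out roughly as in the sketch below; this is not Andrew's input-mixing 
variant from the linked papers, just an illustration of the discretization 
route, with illustrative variable names:

    import math

    class TptSvf:
        def __init__(self, sample_rate, cutoff, q):
            self.fs = sample_rate
            self.s1 = 0.0                    # first integrator state
            self.s2 = 0.0                    # second integrator state
            self.set(cutoff, q)

        def set(self, cutoff, q):
            self.g = math.tan(math.pi * cutoff / self.fs)  # prewarped cutoff gain
            self.two_r = 1.0 / q                           # damping 2R

        def tick(self, x):
            g, two_r, s1, s2 = self.g, self.two_r, self.s1, self.s2
            # solve the zero-delay feedback loop for the highpass output
            hp = (x - (two_r + g) * s1 - s2) / (1.0 + two_r * g + g * g)
            bp = g * hp + s1                 # first trapezoidal integrator
            lp = g * bp + s2                 # second trapezoidal integrator
            self.s1 = g * hp + bp            # state update, equals 2*bp - s1
            self.s2 = g * bp + lp            # state update, equals 2*lp - s2
            return lp, bp, hp

The input-mixing variants feed signals into more than one point of this same 
two-integrator loop; see the linked PDFs for the exact forms.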


Obviously, the same technique can be applied to any other linear 
multi-output filter, where in principle the transposition doesn't need 
to be applied on all levels of the filter. E.g. for a multimode 
transistor ladder we could handle the underlying 1-pole lowpasses as 
atomic blocks and not transpose them internally (although I'm not sure, 
whether their transposed version is not identical to the original ;-) ).


Things can get less straightforward, once nonlinearities are involved.

Regards,
Vadim

On 06-Jan-15 14:30, STEFFAN DIEDRICHSEN wrote:

Actually, it’s an interesting filter.
BTW, if you transpose Chamberlin’s SVF, you get a similar filter with HP/BP/LP 
inputs and a common output:

Out = HP + f * (BP + Z1)
Z1 +=  BP  - Out * q + f * (Z2 + LP - Out)
Z2 +=  LP - Out

f: frequency  coefficient
q: Q factor coefficient
Out: output
Z1, Z2: state variables
HP, BP, LP: filter inputs

HNY!

Steffan


On 05.01.2015|KW2, at 14:19, Andrew Simper  wrote:

Thanks to the ARP engineers for the original circuit and Sam
HOSHUYAMA's great work for outlining the theory and designing a
schematic for an input mixing SVF.

Sam's articles-
Theory: http://houshu.at.webry.info/200602/article_1.html
Schematic: http://houshu.at.webry.info/201202/article_2.html

I have taken Sam's design and written a technical paper on
discretizing this form of the SVF. I also took the chance to update
the SKF (Sallen Key Filter) paper to more explicitly deal with input
mixing of different signals, here they are in as similar form as
possible:

http://cytomic.com/files/dsp/SvfInputMixing.pdf
http://cytomic.com/files/dsp/SkfInputMixing.pdf

As always all the technical papers I've done can be accessed from this page:



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Instant frequency recognition

2014-08-04 Thread Vadim Zavalishin
I think it can be done more simply. Just extend the inverse Fourier 
transform in the same way that the bilateral Laplace transform extends 
the direct Fourier transform. Any mistake in that reasoning?


Regards,
Vadim

On 02-Aug-14 20:10, colonel_h...@yahoo.com wrote:

On Fri, 1 Aug 2014, Vadim Zavalishin wrote:


My quick guess is that bandlimited does imply analytic in the complex
analysis sense.


1st off, I am fairly sure it is true that a BL signal cannot be zero
over an interval, so two non-zero BL signals cannot differ by zero over
an interval, so a function with certain values over any interval is
unique, so the rest of this may be cruft...

However, an audio signal is most often a real valued function of a real
value or a complex valued function of a real value whos imaginary part
happens to be zero (often almost interchangably to little ill effect.)

So to get an analytic complex function you'd have to extend the
function. A non-zero analytic can't have a zero imaginary part, so we'd
need a ``new'' imaginary part and to extent the real and imaginary parts
to a neighborhood of the real line.

Off the cuff I think you might use the real values of f on the real axis
as boundary conditions for the Cauchy-Riemann equations in a
neighborhood of the real axis to solve for a non-zero imaginary part for
f(z) which would then be analytic. This is /if/ BL is enough to show
such a solution exists then you're done (which I do not claim is false.
I just can't see a way to get there.)

Ron


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-08-01 Thread Vadim Zavalishin

Sorry, I meant Laplace transform of a timelimited signal.

On 01-Aug-14 10:06, Vadim Zavalishin wrote:


On 01-Aug-14 05:22, colonel_h...@yahoo.com wrote:

On Fri, 18 Jul 2014, Sampo Syreeni wrote:


Well, theoretically, all you have to know is that the signal is
bandlimited. When that is the case, it's also analytic, which means
that an arbitrarily short piece of it (the analog signal) will be
enough to reconstruct all of it as a simple power series.


I believe it is true that band limited implies C^infinity, but the
function is not complex, so it's a different use of the term analytic
than in complex analysis,


My quick guess is that bandlimited does imply analytic in the complex
analysis sense. This must be a dual of Laplace transform of a
bandlimited signal being analytic (entire). Although I could be missing
something here.

Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-08-01 Thread Vadim Zavalishin


On 01-Aug-14 05:22, colonel_h...@yahoo.com wrote:

On Fri, 18 Jul 2014, Sampo Syreeni wrote:


Well, theoretically, all you have to know is that the signal is
bandlimited. When that is the case, it's also analytic, which means
that an arbitrarily short piece of it (the analog signal) will be
enough to reconstruct all of it as a simple power series.


I believe it is true that band limited implies C^infinity, but the
function is not complex, so it's a different use of the term analytic
than in complex analysis,


My quick guess is that bandlimited does imply analytic in the complex 
analysis sense. This must be a dual of Laplace transform of a 
bandlimited signal being analytic (entire). Although I could be missing 
something here.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-07-17 Thread Vadim Zavalishin

On 16-Jul-14 15:29, Olli Niemitalo wrote:

Not sure if this is related, but there appears to be something called
"chromatic derivatives":

  http://www.cse.unsw.edu.au/~ignjat/diff/


Seems pretty much related and going further in the same direction 
(alright, I just briefly glanced at chromatic derivatives). Anyway, it 
seems that for the discrete time signals the situation is somewhat 
different from what I described for continuous time in that there are no 
derivative discontinuities for discrete time signals. At the same time 
it's not possible to locally compute the derivatives of the discrete 
time signals, so the local Taylor expansion idea is not applicable 
anyway (the same applies to the the chromatic derivatives, I'd guess). 
However, instead, we could simply apply the inter-/extra-polation to the 
obtained sample points. The most intuitive would be applying Lagrange 
interpolation, which as we know, converges to the sinc interpolation. 
However (again, remember the BLEP discussion), any finite order 
polynomial contains only the generalized DC component. Not very useful 
for the frequency estimation. So, the question is, what kind of 
interpolation should we use? Sinc interpolation would be theoretically 
correct, but, remember, that this thread is not about "strictily 
theoretically correct" frequency recognition, but rather about some 
"more intuitive" version with the concept of "instant frequency". Maybe 
we could attempt exactly fitting a set of samples into a sum of sines of 
different frequencies? Each sine corresponding to 3 degrees of freedom.
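
(One classical way to do such an exact fit, mentioned here only as a pointer: a 
sum of K real sinusoids is annihilated by a linear recurrence of order 2K,

\[
x[n]=\sum_{k=1}^{K}a_k\cos(\omega_k n+\varphi_k)
\;\Longrightarrow\;
\Bigl(\prod_{k=1}^{K}\bigl(1-2\cos\omega_k\,z^{-1}+z^{-2}\bigr)\Bigr)x[n]=0,
\]

so the omega_k can be solved for from roughly 4K consecutive samples, which is 
essentially Prony's method; the amplitudes and phases then follow by linear 
least squares.)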


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-07-16 Thread Vadim Zavalishin

On 16-Jul-14 12:31, Olli Niemitalo wrote:

What does "O(B^N)" mean?

-olli


This is the so called "big O" notation.
f^(N)(t)=O(B^N) means (for a fixed t) that there is K such that
|f^(N)(t)| <= K*B^N for all N, where f^(N) is the Nth derivative.
Intuitively, "f^(N)(t) doesn't grow faster than B^N".


Regards,
Vadim





On Thu, Jul 10, 2014 at 4:02 PM, Vadim Zavalishin
 wrote:

Hi all,

a recent question to the list regarding the frequency analysis and my recent
posts concerning the BLEP led me to an idea, concerning the theoretical
possibility of instant recognition of the signal spectrum.

The idea is very raw, and possibly not new (if so, I'd appreciate any
pointers). Just publishing it here for the sake of
discussion/brainstorming/etc.

For simplicity I'm considering only continuous time signals. Even here the
idea is far from being ripe. In discrete time further complications will
arise.

According to the Fourier theory we need to know the entire signal from
t=-inf to t=+inf in order to reconstruct its spectrum (even if we talk
Fourier series rather than Fourier transform, by stating the periodicity of
the signal we make it known at any t). OTOH, intuitively thinking, if I'm
having just a windowed sine tone, the intuitive idea of its spectrum would
be just the frequency of the underlying sine rather than the smeared peak
arising from the Fourier transform of the windowed sine. This has been
commonly the source of beginner's misconception in the frequency analysis,
but I hope you can agree, that that misconception has reasonable
foundations.

Now, recall that in the recent BLEP discussion I conjectured the following
alternative "definition" of bandlimited signals: an entire complex function
is bandlimited (as a function of purely real argument t) if its derivatives
at any chosen point are O(B^N) for some B, where B is the band limit.

Thinking along the same lines, an entire function is fully defined by its
derivatives at any given point and (therefore) so is its spectrum. So, we
could reconstruct the signal just from its derivatives at one chosen point
and apply Fourier transform to the reconstructed signal.

In a more practical setting of a realtime input (the time is still
continuous, though), we could work under an assumption of the signal being
entire *until* proven otherwise. Particularly, if we get a mixture of
several static sinusoidal signals, they all will be properly restored from
an arbitrarily short fragment of the signal.

Now suppose that instead of sinusoidal signals we get a sawtooth. In the
beginning we detect just a linear segment. This is an entire function, but
of a special class: its derivatives do not fall off smoothly as O(B^N), but
stop immediately at the 2nd derivative. From the BLEP discussion we know,
that so far this signal is just a generalized version of the DC offset, thus
containing only a zero frequency partial. As the sawtooth transition comes
we can detect the discontinuity in the signal, therefore dropping the
assumption of an entire signal and use some other (yet undeveloped) approach
for the short-time frequency detection.

Any further thoughts?

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Instant frequency recognition

2014-07-10 Thread Vadim Zavalishin

Hi all,

a recent question to the list regarding the frequency analysis and my 
recent posts concerning the BLEP led me to an idea, concerning the 
theoretical possibility of instant recognition of the signal spectrum.


The idea is very raw, and possibly not new (if so, I'd appreciate any 
pointers). Just publishing it here for the sake of 
discussion/brainstorming/etc.


For simplicity I'm considering only continuous time signals. Even here 
the idea is far from being ripe. In discrete time further complications 
will arise.


According to the Fourier theory we need to know the entire signal from 
t=-inf to t=+inf in order to reconstruct its spectrum (even if we talk 
Fourier series rather than Fourier transform, by stating the periodicity 
of the signal we make it known at any t). OTOH, intuitively thinking, if 
I'm having just a windowed sine tone, the intuitive idea of its spectrum 
would be just the frequency of the underlying sine rather than the 
smeared peak arising from the Fourier transform of the windowed sine. 
This has been commonly the source of beginner's misconception in the 
frequency analysis, but I hope you can agree, that that misconception 
has reasonable foundations.


Now, recall that in the recent BLEP discussion I conjectured the 
following alternative "definition" of bandlimited signals: an entire 
complex function is bandlimited (as a function of purely real argument 
t) if its derivatives at any chosen point are O(B^N) for some B, where B 
is the band limit.


Thinking along the same lines, an entire function is fully defined by 
its derivatives at any given point and (therefore) so is its spectrum. 
So, we could reconstruct the signal just from its derivatives at one 
chosen point and apply Fourier transform to the reconstructed signal.


In a more practical setting of a realtime input (the time is still 
continuous, though), we could work under an assumption of the signal 
being entire *until* proven otherwise. Particularly, if we get a mixture 
of several static sinusoidal signals, they all will be properly restored 
from an arbitrarily short fragment of the signal.


Now suppose that instead of sinusoidal signals we get a sawtooth. In the 
beginning we detect just a linear segment. This is an entire function, 
but of a special class: its derivatives do not fall off smoothly as 
O(B^N), but stop immediately at the 2nd derivative. From the BLEP 
discussion we know, that so far this signal is just a generalized 
version of the DC offset, thus containing only a zero frequency partial. 
As the sawtooth transition comes we can detect the discontinuity in the 
signal, therefore dropping the assumption of an entire signal and use 
some other (yet undeveloped) approach for the short-time frequency 
detection.


Any further thoughts?

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-03 Thread Vadim Zavalishin

On 03-Jul-14 15:29, Theo Verelst wrote:

* The "locality" of the proper resampling/reconstruction (sinc) funtion
is very limited: the amplitude of a single sinc function, regarding a
single sample diminishes only with 1/t


Yes, but we can window it (and then use the Fourier theory to analyse 
the artifacts brought up by that windowing). I think you can get very 
decent quality with quite moderate window lengths.
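
As an illustration of what the windowing amounts to in a BLEP context, here is 
a small sketch that tabulates a BLEP residual from a Blackman-windowed sinc; 
the window choice and lengths are illustrative only:

    import numpy as np

    def make_blep_table(zero_crossings=16, oversample=64):
        # windowed-sinc impulse spanning +-zero_crossings sinc zero crossings
        n = 2 * zero_crossings * oversample + 1
        t = np.linspace(-zero_crossings, zero_crossings, n)
        kernel = np.sinc(t) * np.blackman(n)
        kernel *= oversample / kernel.sum()    # unit DC gain: sum(kernel)/oversample == 1
        step = np.cumsum(kernel) / oversample  # bandlimited step (integral of the impulse)
        residual = step - (t >= 0.0)           # subtract the ideal step -> BLEP residual
        return t, residual                     # to be read back at fractional offsets

The table is then sampled at the fractional position of each discontinuity and 
added, scaled by the size of the step, on top of the trivially generated waveform.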



* FM isn't bandwidth limited AT ALL taken at the continuous/analog face
value like in a modular synth.


Talking DX7-style FM, yes. However, talking analog-style FM, while still 
being non-bandlimited, we can wonder if the entire non-bandlimited part 
of the spectrum is "contained in the discontinuities". This is one of 
the main motivations for the thread.



* Ring modulation as a standard 4 quadrant multiplication is perfectly
frequency limited with the notion that the input signals are.


But its band limit is the sum of the band limits of the sources, which 
makes it not Nyquist-limited "by default". An implementation which wants 
to deal with arbitrary waveforms could just use 2x 
upsampling/downsampling to produce properly bandlimited ring modulation 
output. With specific knowledge about analog-style waveform inputs we 
could instead directly apply the BLEPs.
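
(The "sum of the band limits" statement is just the convolution theorem; 
sketched:

\[
\mathcal{F}[x_1 x_2]=\tfrac{1}{2\pi}\,X_1 * X_2,
\]

and the support of a convolution lies within the sum of the supports, so if X_1 
vanishes outside [-B_1, B_1] and X_2 outside [-B_2, B_2], the product is 
bandlimited to B_1+B_2; with both sources limited to Nyquist, a 2x oversampled 
processing rate is thus exactly enough.)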



* Natural E powers are NOT frequency limited according to the normal
Fourier transform.


The normal Fourier transform is not capable of making any statement regarding 
the bandlimitedness of exp(a*t), since it doesn't exist. However, for a 
certain practical goal (generating bandlimited piecewise-exponential 
signals) we could try to come up with a different definition of 
"bandlimitedness" (you can argue the correctness of continuing to use 
the term "bandlimited" here, but that's a slightly different topic).


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-03 Thread Vadim Zavalishin

On 03-Jul-14 08:00, Nigel Redmon wrote:

On Jul 2, 2014, at 1:12 AM, Vadim Zavalishin
 wrote:

As for using the wavetables, BLIT, etc, they might provide superior
performance (wavetables), total absence of inharmonic aliasing
(BLIT) etc., but, AFAIK they tend to fail in extreme situations
like heavy audio-rate FM, ring modulation etc. BLEP, OTOH, should
still perform equally good.



Ring modulation shouldn’t be part of that, because given an
equivalent output of an oscillator by any method (say, sawtooth from
a wavetable oscillator, and a saw from another method—in general, the
same harmonic properties and output sample rate), if the ring
modulation (which is “balanced”, or four-quadrant amplitude
modulation) results in aliasing, it will result in aliasing for any
of the oscillator types, because it only depends on the frequency
content of the oscillator output, not how it got there.


Ring modulation is clearly a part of that. If the input waveforms of a 
ring mod can be specified analytically (which is normally the case in a 
fixed-layout synth) you can consider the ring mod as an oscillator 
itself. That is, you analytically compute the non-bandlimited output of 
the ring mod in the continuous-time domain and then apply BLEPs to the 
discontinuities in the output waveform.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-03 Thread Vadim Zavalishin

On 03-Jul-14 04:37, robert bristow-johnson wrote:

i think i understand what you're saying.  BLIT is a form of granular
synthesis where the BLI is the grain.  to frequency-modulate a waveform
generated from launching these BLI grains, we are modulating the spacing
of the onset time for each grain.  the grains themselves are not
scrunched or stretched like the waveform that comes out of a wavetable
by modulating the phase-increment.


Actually, it's not sufficient to just modulate the starting times of the 
BLI grains. You have to ensure the proper shape of the oscillator 
waveform in between the transitions (like the ones of the sawtooth) and 
that requires modulation of the DC offset of the BLIT. Which may produce 
aliasing on its own (including but theoretically not limited to aliasing 
from the discontinuities of the DC).


Anyway, instead of addressing the problem in the BLI domain, which 
requires subsequent integration and runtime re-expression of the 
modulation in terms of the BLI DC, I think it's way more straightforward 
to address it in the BLEP domain, where you explicitly generate the 
necessary waveform and *only* need to take care of the derivative 
discontinuities. You can do this as long as you can consider the 
waveform in between the transitions as bandlimited. This was the main 
reason I started this topic.


Oh, and as a footnote. I stated that BLIT doesn't generate inharmonic 
aliasing. This clearly applies only to the case where you are not using 
grains to generate the BLIT, but generate the BLIT directly. If you use 
grains, then I see no difference to the BLEP case (where you need to 
window the BLEPs), except the (unnecessary IMHO) runtime digital 
integration step.


Yet another thing. From my previous explanation regarding the 
bandlimited exponent, I was really tempted to make the following 
generalized conjecture: a signal is bandlimited if and only if its 
derivatives at some chosen point are O(B^N), where B is the band limit. 
Obviously, this statement works in one direction (derivatives of a 
bandlimited signal are O(B^N)). As for the opposite direction this should 
be true only for some class of signals (as it clearly fails for signals 
with discontinuities of some order). I wonder if there is an intuitive 
definition of this class, such as "analytic" or "with infinite 
convergence radius of Taylor series" etc.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-02 Thread Vadim Zavalishin

A further interesting observation regarding the bandlimitedness of
exp(a*t), which kind of confirms my previous conjecture. We are
considering a "periodic exp", which is a sawtooth, whose segments are
exponential rather than linear.

Consider the amplitudes of "unit" aliasing residuals (residuals for the
discontinuity functions obtained by successive integration of the Dirac
delta). Bandlimited Dirac delta is sinc(pi t/T), where sinc(t)=sin(t)/t.
Integrating successively with respect to t we notice that the amplitudes
of the unit residuals fall off as (T/pi)^N/N. Where (T/pi)^N is just
obtained from a standard property of integration of a horizontally
stretched function and 1/N is obtained from the biggest term of the
formula for the Nth antiderivative of Si(t).

The derivatives of exp(a*t) fall off as a^N.

Thus, the amplitudes of the residuals needed for exp(a*t) discontinuity
bandlimiting fall off as (a*T/pi)^N/N. Therefore, if a*T&lt;pi, the residuals
will converge and exp(a*t) can be considered bandlimited, otherwise not.
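
(A quick numerical illustration of that boundary:)

import numpy as np

T = 1.0
N = np.arange(1, 80)
for a in (0.9 * np.pi / T, 1.1 * np.pi / T):   # just below / above a*T = pi
    terms = (a * T / np.pi) ** N / N           # unit residual amplitudes
    print("a*T/pi =", a * T / np.pi, "  term 79:", terms[-1])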


Notice that the same considerations can be applied to sin(a*t), which
fully coincides with the known condition for sin(a*t) being bandlimited:

sin(2*pi*f*t) is bandlimited iff f<1/2T
a=2*pi*f
a*T&lt;pi

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-02 Thread Vadim Zavalishin

Hi Theo

On 01-Jul-14 16:36, Theo Verelst wrote:

The example with the e-power still cannot serve as a "perfect" example,
no matter if we sweep the proper boundary conditions for going from the
Fourier integral to the decent s-integral with the network response
being thought to start at t=0 under the rug, because, like I clearly
stated, the e-power sequence, as perfect sample-set of a decaying
a*E^(b-c*t) function IS NOT BANDWIDTH LIMITED.


Failing to grasp (as usual) some of the terminology used in your post 
(such as "network response"), I would still like to ask, whether you are 
talking about unilateral Laplace transform here (put differently, 
whether your exp(t) is zero for t<0)? Because, if that is the case, then 
(a) the signal is obviously non-bandlimited and (b) one could conjecture 
that all aliasing is coming from the transition at t=0, where you have 
discontinuities in all derivatives.


In my original post I was referring to infinitely long exp(t). Although, 
using the "DC plus integration" approach, suggested in this thread, I'm 
wondering, if a periodic "sawtooth-like" exp(a*t) contains all its 
aliasing in the discontinuities (that is, if we bandlimit them one by 
one, whether we are getting a bandlimited signal in the limit). Maybe if 
a is small enough, the derivative discontinuities will decay 
sufficiently fast for that?


Oh, and could you also give the link to the other thread you mentioned?

Thanks

Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-02 Thread Vadim Zavalishin

On 01-Jul-14 19:08, Ethan Duni wrote:

This means that in principle "any" piecewise polynomial signal
with bandlimited discontinuities of the signal and its derivatives
is also band limited.


Sorry if I'm missing something obvious, but what is a "bandlimited
discontinuity"?


Despite the elaborate answer already given to that question, I'd like to 
get an answer from a different point of view, where we stick to 
the continuous-time domain. By considering bandlimiting in the 
continuous-time domain, prior to sampling (kind of virtual ADC), we have 
a simpler (IMHO) framework: if the continuous-time signals are 
bandlimited, we can simply sample them to produce non-aliasing digital 
signal, without *explicitly* considering "fractional delay filters" etc. 
The result should be pretty much the same, just with using more 
intuitive and simpler concepts. YMMV.


We can define a 0th order discontinuity as a value discontinuity in the 
signal. Like the sawtooth level jumping from +1 to -1 at the transition. 
We could also define the discontinuity function as being 0 for t<0 and 1 
for t>0 (you can set it to 0.5 at t=0). Then e.g. a sawtooth signal can 
be represented as a sum of an infinitely long x(t)=t plus the sum of 
discontinuity functions scaled to the necessary amplitudes and 
positioned at each transition of the sawtooth.


For a triangle we have a discontinuity in the 1st derivative, which we 
can refer to as the 1st order discontinuity. The respective 
discontinuity function can be defined as 0 for t<0 and t for t>=0. 
Respectively the triangle will be a sum of x(t)=t and 1st order 
discontinuity functions. Etc.


Now, the statement which I'm aiming at is something like "all 
non-bandlimited part of the spectrum of a signal is contained in its 
discontinuities". Obviously, this is not completely right, so the 
question was, to which extent is it right. Now, in the cases, where the 
statement holds, in order to bandlimit the signal it is sufficient to 
bandlimit the discontinuities.


As for the construction of bandlimited discontinuities, we can use the 
fact that the integration changes the amplitudes and phases of the 
signal's partials but doesn't generate new ones (if the ill-conditioning 
at DC is properly respected). In order to obtain a bandlimited 0th order 
discontinuity (bandlimited step, or BLEP), we notice that it is an 
integral of the Dirac delta function. So we take the bandlimited version 
of the Dirac delta, which is known to be the sinc function and integrate 
it, obtaining a so-called integral sine, or sine integral function 
Si(t). The DC of the result needs to be corrected, though, but it's 
obvious. To obtain the 1st order discontinuity we integrate Si(t) again 
(for which there is an analytical expression). And so on. The 
discontinuity functions can be pregenerated and stored in tables, 
approximated by polynomials, etc. Of course, being infinitely long, they 
require some windowing.
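
For illustration, an "offline" numpy sketch of this construction (the
oversampling factor, the window and the length are arbitrary choices here):

import numpy as np

os = 64                                   # table oversampling
half = 8                                  # kernel half-length in sampling periods
t = np.arange(-half * os, half * os + 1) / os   # time axis in units of T
dt = 1.0 / os

sinc = np.sinc(t) * np.hanning(len(t))    # windowed bandlimited impulse
sinc /= sinc.sum() * dt                   # unit area, like the Dirac delta

blep = np.cumsum(sinc) * dt               # bandlimited step (0th order)
blamp = np.cumsum(blep) * dt              # bandlimited one-sided ramp (1st order)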


In the generation of the waveforms it might be more practical instead of 
adding bandlimited discontinuities to the "continuous" signal, add the 
residuals (the differences between bandlimited and nonbandlimited 
versions of the same discontinuities) to non-bandlimited signal, but 
that's just a math trick to slightly simplify the processing, not an 
essential part of the entire idea.
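
Continuing the sketch above (it reuses blep, t, os and half from there),
the residual trick for a naive sawtooth could look like this; the subsample
placement is simplified to a nearest-entry table lookup:

blep_residual = blep - (t >= 0)           # bandlimited step minus ideal step

def blep_saw(freq, sr, n):
    inc = freq / sr
    phase = 0.0
    out = np.zeros(n + 2 * half)          # headroom for the residual tails
    for i in range(n):
        out[i + half] += 2.0 * phase - 1.0        # naive saw sample in [-1,1)
        phase += inc
        if phase >= 1.0:                  # wrap: a -2 step occurred before sample i+1
            phase -= 1.0
            frac = phase / inc            # time from the wrap to sample i+1
            idx = np.round((np.arange(2 * half) + frac) * os).astype(int)
            out[i + 1:i + 1 + 2 * half] += -2.0 * blep_residual[idx]
    return out[half:half + n]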


So, everything is very simple, provided the underlying "continuous" 
signal can be considered bandlimited (which was the original question in 
the thread). One slight complication arises from the discontinuities 
being non-causal, which adds the latency to the oscillator. Eli Brandt, 
who (I believe) introduced the BLEP method was suggesting to use 
minimum-phase versions of those (which I believe exist only in 
discrete-time domain), but personally I'm not a big fan of those, and 
would rather have the latency in the oscillator.


As for using the wavetables, BLIT, etc, they might provide superior 
performance (wavetables), total absence of inharmonic aliasing (BLIT) 
etc., but, AFAIK they tend to fail in extreme situations like heavy 
audio-rate FM, ring modulation etc. BLEP, OTOH, should still perform 
equally well.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

An interesting observation. Since

(t+a)^2=t^2+2at+a^2

the lowpass filtering of t^2 with a symmetrically windowed sinc gives

(sincw * (.+a)^2)(t)=
   (sincw * (.^2+2a.+a^2))(t)=
   (sincw * .^2)(t)+(sincw * 2a.)(t)+(sincw * a^2)(t)=
   (sincw * .^2)(t)+0+a^2 =
   (sincw * .^2)(t)+a^2

where * is the convolution operator and . is the function's argument 
placeholder. This means that the lowpass filtering of t^2 may add a DC 
offset of (sincw * .^2)(t) but otherwise doesn't modify the function. 
Thus, t^2 is still kind-of bandlimited. The same argument probably 
generalizes to higher order terms.


For a non-symmetrical window you just get another DC term from the 
convolution with the second term of the sum.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

On 01-Jul-14 12:00, robert bristow-johnson wrote:

On 7/1/14 4:17 AM, Vadim Zavalishin wrote:

I believe that, contrary to the BLIT method, BLxx typically
integrates in the continuous time domain. I fail to see any reason why
we should prefer discrete-time domain integration.


i dunno.  maybe to reprogram a capacitor requires desoldering it from
the circuit.  but reprogramming a discrete-time integrator is writing
and compiling a different line of code.


I'm not sure why we need to integrate at runtime at all (if I get 
your point correctly). I thought, one of the elements of BLEP etc is 
that you integrate "offline" as a part of preparation of your algorithm.





The continuous-time domain integration has the benefit of avoiding any
spectral distortion in the hi freq area. Also it can be performed
analytically.


we might ask "how"?  internal upsampling (it's still discrete time, but
closer to the continuous-time domain)?


Either internal upsampling (again, in the "offline" mode, so you can 
upsample 1000 times or more), or use the analytical expression for the 
integral of sinc.





This means that in principle "any" piecewise polynomial signal with
bandlimited discontinuities of the signal and its derivatives is also
bandlimited.


okay, so you upsample a bit and integrate analytically and you've
captured your "x(t)=t" more perfectly.  now what are you gonna do?
output to a high-speed D/A converter?  downsample and output to the D/A
at the original sample rate?  (the latter would be equivalent to BLxx.)




Again, the intention is to be able to modulate the oscillator frequency, 
incl. audio-rate modulations (osc sync, pwm etc included as well). 
AFAIK, BLIT approach doesn't work too well here. So, I'm just trying to 
understand the boundaries of applicability of BLEP and its generalized 
flavors from a sufficiently rigorous theoretical standpoint, nothing more.



Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

On 01-Jul-14 10:17, Vadim Zavalishin wrote:

This addresses my original motivation to a large extent: applying BLXX
not only to waveforms of stable frequencies, but also to their modulated
versions. Although, not completely. Particularly, self-FM sawtooth
produces an exponential signal. I wonder, whether exponents (at least
those which are slow enough) are bandlimited. After all a good part of
the sine signals (which are kind of versions of exponentials) are
bandlimited.


As the exponents go, I believe, I have some kind of answer. As we are 
going to get an infinite number of discontinuities with self-FM saw, the 
question of bandlimitedness of the exponent is somewhat academic. 
However, we could consider low-order polynomial approximations of the 
exponential signals and bandlimit the respective discontinuities.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

On 30-Jun-14 18:44, Stefan Stenzel wrote:

The tools of BLXX are DC, digital integrators/filters and your BLXX signals that
are bandlimited by definition, no sampling and no nonlinear operation involved.

So as there is no possible source for aliasing, there is no aliasing.


Okay. So in principle, we could construct a BLXX-ed signal by simply 
integrating DC and bandlimited impulses a sufficient number of times, if 
necessary correcting the DC to zero along the way. Convincing enough. (A 
small remark: I believe that, contrary to the BLIT method, BLXX 
typically integrates in the continuous time domain. I fail to see any 
reason why we should prefer discrete-time domain integration. The 
continuous-time domain integration has the benefit of avoiding any 
spectral distortion in the hi freq area. Also it can be performed 
analytically.) This means that in principle "any" piecewise polynomial 
signal with bandlimited discontinuities of the signal and its 
derivatives is also bandlimited.


This addresses my original motivation to a large extent: applying BLXX 
not only to waveforms of stable frequencies, but also to their modulated 
versions. Although, not completely. Particularly, self-FM sawtooth 
produces an exponential signal. I wonder, whether exponents (at least 
those which are slow enough) are bandlimited. After all a good part of 
the sine signals (which are kind of versions of exponentials) are 
bandlimited.


So, can we apply the above reasoning to Taylor series expansions 
(constructing Taylor series by repeated integration of signals)? 
Especially, if we consider only exponential segments, rather than 
infinitely long exponentials, then we could apply the above integration 
scheme an infinite number of times to arrive at the result. But (!) so 
we could do for the sine signals. This would mean that *any* sine is 
bandlimited. So, there must be some flaw in that reasoning.


Besides, Stefan provided an almost convincing justification of the 
BLXX by integration of DC and impulses ("almost" because there is 
this unanswered question of infinite integration in the construction of 
the Taylor series, which is somewhat bothering). However, it would still 
be interesting to get a consistent look at the same problem from the 
virtual ADC point of view, if only for the sake of understanding.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-06-30 Thread Vadim Zavalishin

On 27-Jun-14 13:45, STEFFAN DIEDRICHSEN wrote:

Can we consider x(t)=t^2 bandlimited?


No, that’s a nonlinear operation, unlike the integration. The
difference between both operations is what happens with the step.


Actually not, as pointed out by Stefan. In fact, you don't need the
condition t>=0. Just notice that t^2/2 and t are in the
differentiation/integration relationship.


Not really. If you differentiate a constant, the result is zero.
Since all differentiation and integraton is linear, only the
spectrum is modified, no new content is generated, so band limits
are preserved. At least on paper. ;-)


Actually, even not on paper. The problem is that the transformation of
the signal spectra by the integration is ill-conditioned at omega=0.
Therefore you should be careful when integrating signals whose spectra
are nonzero around DC. Particularly, the spectrum of x=const is already
infinite at DC. This means we have to be careful with any
transformations or conclusions regarding that spectrum, let alone
applying the integration.

It is exactly this issue which leads to the divergence of the ideal
lowpass filtering (sinc-convolution) for x(t)=t^2, and limited
convergence (only in Cauchy principal value sense) for x(t)=t.

Furthermore, if the transformation of spectra by the integration could
have been applied to x(t)=const, x(t)=t, x(t)=t^2 etc without any
further thought, then *any* sinusoidal signal would have been
bandlimited (or at least I believe so). Indeed, sin(a*t) could be
expanded into Taylor series around t=0. The convergence radius of this
series would be infinte, I believe (since sin(z) is analytical). The
series also converges absolutely (follows from the convergence of
exp(z)). It doesn't converge uniformly on (-inf,+inf), but I believe
this is unnecessary in order to conclude that the infinte sum of
bandlimited signals is also bandlimited.

OTOH, if, rather than integrating x(t)=t and bandlimited steps
separately, we integrate a bandlimited sawtooth signal, then the
resulting parabolic waveform should be bandlimited (here the
ill-conditioning of the integration doesn't apply, since the DC of the
sawtooth is 0).

Thus, the original question of the theoretical justification of BLEP
antialiasing remains open. At least for the waveforms with nonlinear
segments, such as x(t)=t^2.


Not sure. From which domain do you look at the problem?
Time-discrete or time continuous? In the time-discrete domain, there’s
no way to distinguish non-bandlimited from limited. And in the other
domain, you have no band limit.


I was considering bandlimited signals in the continuous time domain. The
bandlimiting in this domain is the first step of theoretical ADC.

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-06-27 Thread Vadim Zavalishin

A related conjecture:

a signal is bandlimited, if and only if the result of 
sampling-and-restoring this signal is invariant with respect to the 
subsample time-shifts.


Wonder if this is true or not.

Regards,
Vadim

On 27-Jun-14 11:18, Vadim Zavalishin wrote:

Hi all,

this is a question, which has been bothering me for a while: what are
the (more-or-less) rigorous theoretical foundations of the BLEP
approach. Didn't happen to see any (maybe I'm just missing some
references). I mean that on a formal look, the statement that "all
nonbandlimited signal is contained in the signal discontinuities" sounds
intuitively reasonable, but is missing a formal explanation (and is also
clearly false for arbitrary waveforms, just consider FM modulation of
one sine by another).

I have been trying to build one. We consider a sawtooth as a basic
example. The sawtooth can be represented as an infinite signal
x(t)=a*t
plus the steps. If we bandlimit the steps then the resulting
BLEP-antialiased sawtooth will be bandlimited if and only if the signal
x(t)=a*t is bandlimited.

So how can we check that x(t)=t is bandlimited. We cannot really take
the Fourier transform, as the integral diverges. So we need to find some
other way for the reasoning.

One approach would be to notice that x(t)=const is bandlimited and that
x(t)=t is an integral of that, thus it is also bandlimited. Not very
convincing. Especially since the bandlimitedness of x(t)=const is also
"somewhere on the edge", because its spectrum involves Dirac delta.

A probably more practical approach would be to remember the reason for
requiring the bandlimitedness: we want the signal to be exactly restored
by the ADC/DAC transformation. So, how about the following statement: we
consider a signal bandlimited, if the convolution with the (properly
scaled and stretched) sinc function doesn't change the signal.

For x(t)=const such convolution obviously doesn't change the signal, so
it can be considered bandlimited.

For x(t)=t the convolution integral already converges *only* in the
Cauchy principal value sense (notice that we need to take the
convolution only at t=0, as at all other points we can reduce the
problem to the convolution at t=0 plus the convolution with x(t)=const
and we already know that the latter is unchanged by the convolution).

So, what does this principal value convergence mean in practice. In
practice we are not going to use an infinite sinc in the ADC, rather a
windowed version thereof. Apparently, for any *symmetrical* window, the
convolution with such windowed sinc will also not change x(t)=t.
However, what if the window is not symmetric (unlikely, but
theoretically possible)? I'm not sure, how to answer that.

However, we are still relatively good for signals such as sawtooth,
pulse and triangle. But what about parabolic signal (integral of the
sawtooth). This signal can be introduced into the family of "classic
analog waveforms" because it can be considered as a "even and odd
harmonics" version of the triangle. Can we consider x(t)=t^2
bandlimited? The convolution with sinc converges only in some
generalized Cesaro sense here and the symmetric windowing doesn't help.
Does this mean that BLEP is not really applicable to this signal? (In
the sense that it will not fully suppress the aliasing)

A similar question can be raised for analog FM. E.g. a sawtooth
modulating a sawtooth, where you also get x(t)=t^2. Or a self-modulated
sawtooth where you get exp(t). Can exp(t) be considered bandlimited?

The above also raises the question of whether the convolution of sinc
with Si(t) leaves Si(t) unchanged? (Which would be a check that Si(t) is
indeed bandlimited, since the Fourier transform of Si(t) also diverges,
if I'm not mistaken). In principle we could take the residual approach
in our formalization and rather than considering Si(t) consider the
difference between Si(t) and the nonbandlimited step. Fourier transform
will clearly converge for this difference, but then how to show that
this difference contains exactly the entire nonbandlimited part of the
signal?


Regards,
Vadim




--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-06-27 Thread Vadim Zavalishin

Hi all,

this is a question, which has been bothering me for a while: what are 
the (more-or-less) rigorous theoretical foundations of the BLEP 
approach. Didn't happen to see any (maybe I'm just missing some 
references). I mean that on a formal look, the statement that "all 
nonbandlimited signal is contained in the signal discontinuities" sounds 
intuitively reasonable, but is missing a formal explanation (and is also 
clearly false for arbitrary waveforms, just consider FM modulation of 
one sine by another).


I have been trying to build one. We consider a sawtooth as a basic 
example. The sawtooth can be represented as an infinite signal

x(t)=a*t
plus the steps. If we bandlimit the steps then the resulting 
BLEP-antialiased sawtooth will be bandlimited if and only if the signal 
x(t)=a*t is bandlimited.


So how can we check that x(t)=t is bandlimited. We cannot really take 
the Fourier transform, as the integral diverges. So we need to find some 
other way for the reasoning.


One approach would be to notice that x(t)=const is bandlimited and that 
x(t)=t is an integral of that, thus it is also bandlimited. Not very 
convincing. Especially since the bandlimitedness of x(t)=const is also 
"somewhere on the edge", because its spectrum involves Dirac delta.


A probably more practical approach would be to remember the reason for 
requiring the bandlimitedness: we want the signal to be exactly restored 
by the ADC/DAC transformation. So, how about the following statement: we 
consider a signal bandlimited, if the convolution with the (properly 
scaled and stretched) sinc function doesn't change the signal.


For x(t)=const such convolution obviously doesn't change the signal, so 
it can be considered bandlimited.


For x(t)=t the convolution integral already converges *only* in the 
Cauchy principal value sense (notice that we need to take the 
convolution only at t=0, as at all other points we can reduce the 
problem to the convolution at t=0 plus the convolution with x(t)=const 
and we already know that the latter is unchanged by the convolution).


So, what does this principal value convergence mean in practice. In 
practice we are not going to use an infinite sinc in the ADC, rather a 
windowed version thereof. Apparently, for any *symmetrical* window, the 
convolution with such windowed sinc will also not change x(t)=t. 
However, what if the window is not symmetric (unlikely, but 
theoretically possible)? I'm not sure, how to answer that.
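
(A crude numerical check of the symmetric-window case, on a fine grid
approximating continuous time; the grid and window choices are arbitrary:)

import numpy as np

dt = 0.01
t = np.arange(-60.0, 60.0, dt)
x = t.copy()                                 # x(t) = t

T = 1.0                                      # band limit ~ pi/T
k = np.linspace(-30.0, 30.0, 6001)           # symmetric kernel support
h = np.sinc(k / T) * np.hanning(len(k))      # symmetrically windowed sinc
h /= h.sum()                                 # unit DC gain (absorbs 1/T and dt)

y = np.convolve(x, h, mode='same')
mid = slice(len(t) // 2 - 1000, len(t) // 2 + 1000)   # away from the edges
print(np.abs(y[mid] - x[mid]).max())         # ~0: the ramp passes unchanged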


However, we are still relatively good for signals such as sawtooth, 
pulse and triangle. But what about parabolic signal (integral of the 
sawtooth). This signal can be introduced into the family of "classic 
analog waveforms" because it can be considered as a "even and odd 
harmonics" version of the triangle. Can we consider x(t)=t^2 
bandlimited? The convolution with sinc converges only in some 
generalized Cesaro sense here and the symmetric windowing doesn't help. 
Does this mean that BLEP is not really applicable to this signal? (In 
the sense that it will not fully suppress the aliasing)


A similar question can be raised for analog FM. E.g. a sawtooth 
modulating a sawtooth, where you also get x(t)=t^2. Or a self-modulated 
sawtooth where you get exp(t). Can exp(t) be considered bandlimited?


The above also raises the question of whether the convolution of sinc 
with Si(t) leaves Si(t) unchanged? (Which would be a check that Si(t) is 
indeed bandlimited, since the Fourier transform of Si(t) also diverges, 
if I'm not mistaken). In principle we could take the residual approach 
in our formalization and rather than considering Si(t) consider the 
difference between Si(t) and the nonbandlimited step. Fourier transform 
will clearly converge for this difference, but then how to show that 
this difference contains exactly the entire nonbandlimited part of the 
signal?



Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Frequency bounded DSP

2014-01-03 Thread Vadim Zavalishin
I'm also not sure I understand the question, although from a slightly 
different angle. As long as the sum of the bandwidths of the modulator 
and the carrier is below Nyquist, you're good, right? So, in principle, 
you can do arbitrary amounts of AM/RM by simply keeping the Nyquist 
equal to double of your desired bandwidth and bandlimiting again after 
each AM/RM. So, this way you can implement more or less arbitrary 
envelopes (and, in particular, oscillator sync for arbitrary waveforms), 
if you bandlimit them in advance (unless I'm missing something).


Regards,
Vadim

On 03-Jan-14 16:09, Wen Xue wrote:

Why not just inverse-transform any band-limited spectrum? (or have I got
the question wrong?)

Now, can we do better, can we make, say, some form of "other" envelope
that is still frequency limited ?

T.V.



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] RMS calculation with IIR filter?

2014-01-03 Thread Vadim Zavalishin
This works only in integer arithmetic. In floating point you will 
potentially have a drift, due to precision losses (although, off the top 
of my head I could think of a few workarounds to handle those).
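
One of them, just as a sketch: keep the one-add-one-subtract running sum,
but periodically re-sum the history buffer to cancel the accumulated error
(this is only one of the possible options):

import numpy as np

def running_rms(x, window, refresh=4096):
    hist = np.zeros(window)               # circular buffer of squared samples
    acc = 0.0                             # running sum of squares
    out = np.empty(len(x))
    for i, v in enumerate(x):
        sq = v * v
        acc += sq - hist[i % window]      # add the newest, drop the oldest
        hist[i % window] = sq
        if (i + 1) % refresh == 0:
            acc = hist.sum()              # exact re-summation kills the drift
        out[i] = np.sqrt(max(acc, 0.0) / window)
    return out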


Regards,
Vadim

PS. I guess this is exactly what "IIR" was supposed to mean in the 
context of the question. RMS, being a nonlinear function, cannot be 
computed by a linear IIR.


On 03-Jan-14 14:03, Robert Bielik wrote:

If you have a history buffer you only need one addition + subtraction +
multiply per sample, regardless of length of RMS, so that method will
always be faster than an IIR.

Regards,
/Rob

Tebjan Halm skrev 2014-01-03 14:01:

hello music-dsp,

is there a simple IIR filter which calculates the standard 300ms RMS
value for audio signals dependent on the sample rate?
if so, is it faster than a moving sum?

best,
tebjan


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-13 Thread Vadim Zavalishin

On 13-Nov-13 11:56, Marco Lo Monaco wrote:

I personally don’t think that automatic systems (DK) will be the
panacea of nonlinear modeling (even if everybody here is dreaming of
a realtime spice). Very often only a human can see patterns in
circuits and find shortcuts to simplify things.


+1

Besides the shortcuts, only a human can judge the critical aspects of 
the analog model being discretized. Such as


- how precise should the component models be (e.g. if Ebers-Moll 
transistor model is sufficient or not), where in principle this question 
should be answered for each component separately


- whether the difference between parameter values of identically marked 
components is having any critical effect


- whether the effect caused by a certain element of the model (e.g. a 
nonlinearity) is musically insignificant (so that the element may be 
dropped)


- to which extent we can assume independence of different parts of the 
device (ignore the current leakage and other crosstalk)


and so on.

Perhaps, if in the future we have computational power several orders of 
magnitude higher than what is currently available, such an automatic 
system would be more 
realistic, as we will be able to afford ridiculously precise and 
detailed analog component models as the basis of our discretization. But 
from my feeling it's still a long long way. And then, how important is 
being able to automatically convert from analog schematics to digital? I 
mean there has been some amount of brilliant engineering work to design 
those analog devices, but it's not happening much more. So, after we 
have modelled them all, we are not gonna need any further modelling.


OTOH, the lessons we learned from attempting to model those things (and 
you learn more, if you do this "by hand" rather than by some automated 
toolkit) should form an invaluable basis for the development of future 
software. We can design *new* filters, effects, etc, which all are gonna 
have "that analog sound". For that purpose of new designs (rather than 
modelling the old stuff), I believe the *continuous-time* block-diagram 
based approaches are more useful than the differential equations, as 
they are offering a more intuitive view of the signal processing (YMMV). 
The discrete-time block-diagrams are not that intuitive, in my opinion, 
but then again, you don't need to use them, if you implicitly understand 
the discretized version of the same analog block-diagram.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 10:10, Dave Gamble wrote:

So let me go out on a limb here: if you take some single precision code and
up it to double, and things get WORSE then there is something very strange
about your original code that merits investigation.


It's very easy. As I mentioned in my other email, switching from float 
to double halves the number of available SIMD channels, which means you 
need to run your code twice as many times. On the other hand, in my 
experience, most of the DSP algorithms are quite tolerant to using 
32-bit floats (DF filters being one exception).


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 10:01, Dave Gamble wrote:

Because switching from double to float will bring extremely small
performance gains in CPU cost, and potentially sizeable problems with
numerical issues.


I'd be very careful with statements like that. There are people with 
exactly the opposite experience. YMMV ;-)


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 09:53, Dave Gamble wrote:

Absolutely! SSE2, which by all the stats I've seen is entirely ubiquitous

now, is double precision, and has been since 2004, which is close enough to
a decade for my taste. http://en.wikipedia.org/wiki/SSE2


Well, you get only half the number of available channels with SSE if you 
use doubles. If 2 channels are sufficient for you, then you might not 
care, of course.


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 09:53, Dave Gamble wrote:

PS. "Time-varying performance" is another word. "Nonlinearities" is the
third one.


Not criticisms I'm at all familiar with, I'm afraid. Can you expand?


As we are talking about inferiority of DF compared to ZDF, I just 
mentioned the other two, which are even way more prominent than the 
quantization issues, as they can't be addressed by switching to 64 bit 
floats (but they already have been mentioned multiple times in the scope 
of the present discussion). DF has very poor time-varying (modulated 
parameters) performance and is absolutely incapable of properly hosting 
the nonlinearities present in the original analog prototype.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 09:40, Tim Blechmann wrote:

One word: SIMD


well, when benchmarking my performance code, about 2% of the CPU time is
spent in vector code, while about 60% is spent in scalar filter code.


Hi Tim

I can't believe vector code is running 30 times faster than the scalar 
code :-D :-D :-D


Seriously speaking, this ratio very much depends on the details of the 
software. It might as well be that you have no scalar filter code at all.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 09:05, Dave Gamble wrote:

Actually, I'll go one further: In 2013, single precision is just time
wasting. It's a pathological case for analysis, but it shouldn't represent
real-world usage.

I'm reminded of a conversation I had with my PhD supervisor 12 years ago,
when showing him some source which caused him to remark "single precision?
  What is this?  1980?".


One word: SIMD

Regards,
Vadim

PS. "Time-varying performance" is another word. "Nonlinearities" is the 
third one.


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 17:33, Dave Gamble wrote:

At some point, the process of using algebraic rearrangements [...]
got dubbed the "delay-free" or "zero delay filters" movement.


Hi Dave

I think this is exactly the source of the confusion. As the distinctive
feature of those filters were zero-delay feedback loops, the filters
were called "delay-free feedback filters" or "zero delay feedback
filters" (which was further shortened to "zdf filters" or "0df
filters"). Then someone thought that "0df" stands for "zero-delay
filter" rather than "zero-delay feedback" and there you are :D

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 15:32, Ross Bencina wrote:

That's why I was somewhat surprised that you simply managed to
restrict the eigenvalues of the system matrix in some coordinates.


To be clear: the eigenvalues of the transition matrix only cover
time-invariant stability.

The constraint for time-varying BIBO stability is that all transition
matrices P satisfy ||TPT^-1|| < 1 where ||.|| is the spectral norm and T
is some constant non-singular change of base matrix.


Okay, for the time-varying case it seems to be the eigenvalues of 
(P^T)*P in the discrete-time case, which according to Wikipedia define 
the spectral norm (while in the continuous time we have the eigenvalues 
of P^T+P). I'm currently a little short of time to take a precise look 
at all the details. Still, in both cases we are talking about some 
uniform property of eigenvalues of a symmetric matrix. In the discrete 
time they need to be smaller than one to kind of make sure the next 
state vector is smaller than the previous one (I think). In the 
continuous time case they need to be negative to kind of make sure that 
the time derivative of the state vector's length is negative.




The main reason I am suspicious is that Laroche does not even try to
cook up a change of basis matrix, or to show when it might be achieved.
It's kind of an orphan result in that paper that goes unused for showing
BIBO stability.


IIRC from briefly reading his paper, his sufficient criterion turned out 
to be not applicable for the DF filters (which by themselves also didn't 
seem to be time-variant BIBO-stable, IIRC), therefore he resorted to 
some other approaches. That (and my own SVF investigation) led me to 
consider this kind of criterion as a more or less useless one at that 
time. But I may be wrong, it was a while ago and only a brief look.





Particularly suspicious is that your coordinate transformation matrix is
"built for the smallest damping", while the more problematic case seems
to occur "at the larger damping".


I'm not sure I follow you here. Smallest damping means most resonance,
where the system decays most slowly. Don't you think this would be where
the greatest problems would arise?


Because of the "shooting" effect I described earlier. I discovered it by 
a numerical simulation of the system. For the low resonance the state 
vector moves in an ellipse (for the "worst-case" signal I described). 
The orientation and the amount of stretching of the ellipse depends on 
the resonance (the lower the resonance the more the stretch). You can 
suddenly switch to a lower resonance while your state vector is pointing 
at such angle, that the respective position on the new ellipse is within 
the "increasing radius" area. This will cause the state vector to grow.


Disclaimer: this all goes under the notice that I didn't double-check my 
results or may even have forgotten some important details or simply 
remember them wrong. I was posting them mostly because I thought they 
might be interesting and/or usable to some extent for you (and maybe 
others).




In short, using change of basis matrix T:
[1 f]
[0 1]

We have the time-varying BIBO stability constraint:

0 < f < 2, g > 0, f < k <= 2

f provides the bound on k from below.


This is also the kind of the result which I would intuitively expect at 
the first thought. It's just contradicting the *unverified* results of 
my earlier research, that's why I expressed my suspiciousness. 
Hopefully, I'll find time to check your research in more detail.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 13:04, Theo Verelst wrote:

Alright, simply put: the paradigm used to work with digital filters is
at stake


Funnily enough, it's a mathematically trivial fact that in the LTI case 
the 0df filters are mathematically equivalent to the DF BLT filters. So, 
the only "non-scientific" part there is about estimating the errors of 
time-varying and nonlinear effects. But then, I'm not so much aware 
(please correct me, if I'm wrong), if there are any psychoacoustically 
usable measurables developed until now, which help with those 
estimations. OTOH, the psychoacoustically perceivable error of the 
linear model is clearly estimatable by using the frequency response 
paradigm, but as I just said, in this respect 0df is fully equivalent to 
the DF BLT. So, it seems to me that it's not the paradigm, which is 
being put at stake, but rather only the methodology of digital filter 
design.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] R: Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

Hi Marco

On 11-Nov-13 11:26, Marco Lo Monaco wrote:

I basically demonstrate what I already said in my previous posts.
The standard state-space approach leads to identical results to your
algorithm, I would say even without the trick of the TPT, because of course
we are talking about an instantaneous _linear_ feedback.


Of course they are all equivalent, except for some small detail, as e.g. 
the usage of canonical integrators (although maybe even that was known 
for long time for trapezoidal integration).



Of course the main purpose of my analysis was to keep in mind that you will
_always_ have to deal with an "implicit"/hidden inversion of a matrix A of
the analog system (actually (I-A*h/2))


With the filters used in practice, the matrices typically turn out to be 
either simple or have quite regular structures. This is often taken for 
granted in the 0df (TPT) approach, as you are just solving one linear 
feedback equation, instead of trying to invert a 5x5 matrix etc. Of 
course, it's mathematically equivalent, but is much simpler. Also, the 
division (the most expensive operation) which you have to perform while 
solving that equation is more or less the same division by a0 which you 
need to perform in the computation of the BLT-based discrete-time 
coefficients. So, computationally I think the 0df approach is comparable 
(even if slightly more expensive at times) to the transfer-function based 
DF filters, especially at audio-rate modulations.
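
For reference, this is what the whole thing boils down to for a trapezoidal
one-pole lowpass in the zero-delay feedback form (a minimal sketch, with the
division folded into the precomputed coefficient):

import numpy as np

def tpt_onepole_lp(x, cutoff, sr):
    # solving the instantaneous feedback equation y = s + g*(x - y) for y
    # leaves a single division, folded here into G
    g = np.tan(np.pi * cutoff / sr)       # prewarped integrator gain
    G = g / (1.0 + g)
    s = 0.0                               # integrator state
    out = np.empty(len(x))
    for i, xi in enumerate(x):
        v = G * (xi - s)
        y = v + s                         # lowpass output
        s = y + v                         # trapezoidal state update
        out[i] = y
    return out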


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

Hi Ross,

since you opened this topic, I thought I'd try to share the intermediate 
results of my findings, as much as I can remember them (that was a few 
years back). Most of them concern the continuous time case.


First note regarding the continuous time case is that cutoff modulations 
do not affect the BIBO stability at all. More rigorously:
- if the cutoff modulation is done by varying the gains *in front* 
(rather than behind) of *all* integrators in the system

- if the cutoff function w(t) is always positive
- if the system is BIBO stable for some cutoff function w(t)
then the system is also BIBO stable for any other positive cutoff function

Particularly, if a linear system is BIBO-stable in time-invariant case 
(for the constant cutoff function), then it's also stable for varying 
cutoff.


This is very easy to obtain from the state-space equation:
du/dt=w*F(u,x)
where u(t) is the state vector, x(t) is the input vector, w(t) is the 
cutoff scalar function and F(u,x,t) is the nonlinear time-varying 
version of A*u+B*x. Without loss of generality we can assume w(t)=1 
for the given stable case. Then, we simply rewrite the equation as

du/(w*dt)=F(u,x)
and substitute the time parameter:
d tau = w*dt
Now in "tau" time coordinates the modulated system is exactly the same 
as the unmodulated one in the "t" coordinates.
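
(A crude numerical illustration with a one-pole on a fine Euler grid
approximating continuous time; the particular modulation below is arbitrary,
it only needs to stay positive:)

import numpy as np

dt = 1e-4
t = np.arange(0.0, 20.0, dt)
w = 1.0 + 100.0 * (0.5 + 0.5 * np.sin(50.0 * t)) ** 4   # positive, wildly varying cutoff
x = np.sign(np.sin(3.0 * t))                            # bounded input, |x| <= 1
u, peak = 0.0, 0.0
for wi, xi in zip(w, x):
    u += dt * wi * (xi - u)          # Euler step of du/dt = w*(x - u)
    peak = max(peak, abs(u))
print(peak)                          # stays <= max|x|, i.e. <= 1 here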


The same doesn't seem to hold for the TPT discrete-time version, though.


In a more general case for *linear* continuous time, IIRC, we have a 
sufficient (but it seems, not necessary) time-varying stability 
criterion: all eigenvalues of the matrix A+A^T must be "uniformly 
negative", that is they must be bounded by some negative number from 
above. It is essential to require this uniform negativity, otherwise the 
eigenvalues can get arbitrarily close to the self-oscillation case. This 
condition is simply obtained from the fact that in the absence of the 
input signal you want the absolute value of the state to decay with a 
relative speed, which is uniformly less than 1. This will make sure, 
that, whatever the bound of the input signal is, a large enough state 
will decay sufficiently fast, to win over the input vector B(t)*x(t). 
Indeed, ignoring the B*x term, we have

(d/dt) |u|^2=(d/dt)(u^T*u)=u'^T*u+u^T*u'=
(A*u)^T*u+u^T*(A*u)=u^T*A^T*u+u^T*A*u=
u^T*(A+A^T)*u<=|u|^2*max{lambda_i}
where lambda_i are the eigenvalues of A+A^T.
Now on the other hand
(d/dt) |u|^2=2*|u|*(d/dt)|u|
So
2*|u|*(d/dt)|u|<=|u|^2*max{lambda_i}
and
2*(d/dt)|u|<=|u|*max{lambda_i}

Obviously, you don't have to satisfy the condition in the original 
state-space coordinates. Instead, you can satisfy it in any other 
coordinates, which corresponds to using P^T*A*P instead of A for some 
nonsingular matrix P.
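
As a concrete example, the check in the original coordinates for one common
continuous-time SVF convention (the state-space form below is my assumption
and may differ from the exact SVF variant in question):

import numpy as np

# s1' = w*(x - k*s1 - s2), s2' = w*s1
def eig_sym_part(w, k):
    A = w * np.array([[-k, -1.0],
                      [1.0,  0.0]])
    return np.linalg.eigvalsh(A + A.T)

print(eig_sym_part(1.0, 0.5))   # [-1.  0.]: one eigenvalue sits at zero, so in
                                # these coordinates the condition is not satisfied
                                # and a change of basis P would be needed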


Now I didn't manage to get this condition satisfied for the 
continuous-time SVF. Reading your post, I admit, that I could have made 
a mistake there, but FWIW... First, I discarded the consideration of 
varying cutoff, as explained above and concentrated on the varying 
damping. Not managing to find a matrix P, I constructed an input signal, 
requiring the maximum possible growth of the state vector. The signal, 
IIRC was either sgn(s_1) or -sgn(s_1), where s_1 is the first of the 
state components (or it could have been s_2). Then I noticed that for 
low damping the state vector is moving in almost a circle, while for 
higher damping (but still with complex poles) it turns into an ellipse. 
This was exactly the problem: "in principle" the circle is having a 
bigger size, than the ellipse, but by switching the damping from low to 
high you could "shoot" the state point into a much "higher orbit". Much 
worse, in certain cases the system state can increase even in the full 
absence of the input signal!!! However, IIRC, I managed to show, that 
for a sufficiently large "elliptic" orbit (with high damping), 
(d/dt)|u|^2<=0 regardless of the current damping. Since we are already 
considering the "worst possible" input signal, the system state can't 
cross this boundary "orbit" to the outside.


For the discrete-time case the situation is more complicated, because we 
can't use the continuity of the state vector function. IIRC, I also 
didn't manage to build the "worst-case" signal, but there was the same 
problem of the state vector becoming larger in the absence of the input 
signal. That's why I was somewhat surprised that you simply managed to 
restrict the eigenvalues of the system matrix in some coordinates. 
Particularly suspicious is that your coordinate transformation matrix is 
"built for the smallest damping", while the more problematic case seems 
to occur "at the larger damping". But, as I said, I didn't finish that 
research and I could have been wrong. So just take my input FWIW.


Regards,
Vadim


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 01:09, robert bristow-johnson wrote:

On 11/8/13 6:47 PM, Andrew Simper wrote:

It depends if you value numerical performance, cutoff accuracy, dc
performance etc etc, DF1 scores badly on all these fronts,


nope.


  and this is even in the case where you keep your cutoff and q
unchanged.


Hi Robert,

from your reply I understood that you're referring to the parameter 
quantization, while I think Andy was referring to the state 
quantization. Furthermore, also parameter quantization seems much less 
of an issue with the 0df approach, since cos(...) doesn't occur there 
(although I don't have a rigorous proof).


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Vadim Zavalishin

On 08-Nov-13 12:13, Urs Heckmann wrote:

No offense meant, I wasn't aware that your book was considered a
standard dsp lecture. If you know of any university that uses it in
their curriculum, please let me know and I'll recommend that
university.


Damn, you got me there ;-)

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Vadim Zavalishin

On 08-Nov-13 10:34, STEFFAN DIEDRICHSEN wrote:

If you look at Figure 3.18 of said book, there’s a delay in the feedback path.
But it’s done in an elegant way, so no insult here.

;-)


Damn, I probably should remove the said Figure (and Figs. 3.19-3.20 for 
that matter) in the next revision. Thanks for the proofreading!


;-) ;-) ;-)

Vadim


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Vadim Zavalishin

On 08-Nov-13 03:13, Ross Bencina wrote:

Now, I do have one thing I would like to see: and that is a mathematical
proof that point (4) above is actually true for this topology. Ever
since I read the Laroche BIBO paper it scared the crap out of me to be
modulating any IIR filter at audio rate without a trusted analysis.


Hi Ross,

I once tried to do the analysis you mentioned. IIRC, I managed to 
successfully show the time-varying BIBO stability of analog 1-pole (this 
is very simple) and 2-pole (somewhat more tricky) analog SVFs under the 
condition that the real parts of all poles are "uniformly negative" 
(that is, bounded from above by some negative constant). IIRC, it is also 
quite straightforward to show the time-varying stability of a 2nd-order 
real Jordan cell (which probably has the best time-varying stability 
anyway), but this one is not so convenient as the LP/BP/HP multimode 
SVF. I have no idea if there are papers addressing this.
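
(For reference, one way to spell out the "very simple" 1-pole case, not 
necessarily the exact argument meant here: for y' = wc(t)*(x - y) with 
wc(t) > 0 we have (d/dt)y^2 = 2*wc*y*(x - y) <= 2*wc*(|y|*|x| - y^2) < 0 
whenever |y| > sup|x|, so |y(t)| <= max(|y(0)|, sup|x|) no matter how the 
cutoff is modulated.)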


For the discrete-time versions it seemed way more complicated. I 
couldn't prove it or build a disproving example for the TPT BLT version 
of the same 2-pole SVF (and I don't remember whether I managed to build 
a proof for the 1-pole).


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Vadim Zavalishin

On 08-Nov-13 10:00, Urs Heckmann wrote:

I have however not followed this list in a while, so my apologies if
"0df" has been discussed in a wider scope than Andy's papers before -
I have yet to see a standard dsp book that covers it. However,
products on the market seem sparse thus I think it may not have...?


Hi Urs! I don't believe this. So, you think that "The art of VA filter 
design" book doesn't cover it?


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Fwd: 24dB/oct splitter

2013-11-05 Thread Vadim Zavalishin

(the quotation is from Andy's mail)

On 2/8/13 2:15 AM, Ross Bencina wrote:
i've analyzed Hal's SVF to death, and i was exposed to Andy's
design some time ago, but at first glance, it looks like the
"Trapezoidal SVF" looks like it doubles the order of the filter.
if it was a second-order analog, it becomes a 4th-order digital.
but his final equations do not show that.  do those "trapezoidal"
integrators, become a single-delay element block (if one were to
simplify)?  even though they ostensibly have two delays?


You can use canonical (DF2/TDF2) trapezoidal integration, in which case 
the order of the filter doesn't formally grow. This is quite intuitively 
representable in the TPT papers and the book I mentioned earlier. If you 
use DF1 integrators, the order formally grows by a factor of 2, but I 
believe half of the poles will be cancelled by the zeroes.
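
For reference, the two integrator realizations in question, both implementing 
the bilinear-transform integrator y/x = g*(1 + z^-1)/(1 - z^-1) (a sketch in 
Python; function and variable names are illustrative only):

def integrator_df1(x, state, g):
    # DF1: remembers the previous input and the previous output (two state values)
    x1, y1 = state
    y = y1 + g * (x + x1)
    return y, (x, y)

def integrator_canonical(x, s, g):
    # canonical / transposed form: a single state value
    v = g * x
    y = v + s
    return y, y + v          # output, updated state

In the time-invariant case the two produce the same output in exact 
arithmetic; the differences are in the state count, the quantization behavior 
and, in principle, the behavior under modulation of g.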


BTW, IIRC, as for the optimization from 4 z^-1 to 3 z^-1 in Andy's SVF, 
I believe this optimization implicitly assumed the time-invariance of 
the filter. So, while keeping the transfer function intact, this 
optimization changes the time-varying behavior of the filter (not sure, 
how much and whether it's for the worse or for the better).


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] IIR Coefficient Switching Issues

2013-11-04 Thread Vadim Zavalishin

Oh, completely forgot. Here's a step-by-step description of the TPT method:

http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3.pdf 
 (A4 format)
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3_A5.pdf 
 (A5 format)


On 04-Nov-13 11:07, Vadim Zavalishin wrote:

Hi Chris

Direct forms are not good for coefficient modulation, plus IIRC they
tend to have precision issues at low cutoffs. I guess, the TPT (ZDF)
approach can solve your problem completely:
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopology.pdf

(For the 2nd order filters use the modes of the SVF from the same paper)

If you can read Reaktor Core, then here's some additional info:
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopologyRC.pdf


Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] IIR Coefficient Switching Issues

2013-11-04 Thread Vadim Zavalishin

Hi Chris

Direct forms are not good for coefficient modulation, plus IIRC they 
tend to have precision issues at low cutoffs. I guess, the TPT (ZDF) 
approach can solve your problem completely:

http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopology.pdf
(For the 2nd order filters use the modes of the SVF from the same paper)
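
(A per-sample sketch of that SVF, as far as I can tell from the paper; the 
damping is written as 2R and the names are illustrative only:)

import math

def tpt_svf_tick(x, s1, s2, g, R):
    # g = tan(pi * fc / fs); recompute only when the cutoff changes
    hp = (x - (2.0 * R + g) * s1 - s2) / (1.0 + 2.0 * R * g + g * g)
    bp = g * hp + s1
    lp = g * bp + s2
    s1 = bp + g * hp         # trapezoidal integrator state updates
    s2 = lp + g * bp
    return hp, bp, lp, s1, s2

g = math.tan(math.pi * 1000.0 / 44100.0)
s1 = s2 = 0.0
hp, bp, lp, s1, s2 = tpt_svf_tick(1.0, s1, s2, g, 0.5)   # one sample of an impulse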

If you can read Reaktor Core, then here's some additional info:
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopologyRC.pdf

Regards,
Vadim

On 03-Nov-13 04:27, Chris Townsend wrote:

I'm working on an algorithm with some user controlled "presets" that
adjust various IIR filters under the hood.  This generally works fine,
but I get pops and glitches when switching between certain settings.
The filters that are causing trouble are typically second order hipass
filters with a sub 100Hz cutoff, but with some settings the filters
reconfigure to peaking, shelf and first order hipass types.  Generally
the problem is most noticeable when changing between types.

This appears to be a simple matter of the internal states of the
filter being un-normalized and so large gain changes of the state
variables can occur when coefficients are adjusted.  I read through
some old Music-DSP posts on this topic, but I didn't find a clear
solution that fit my needs.

I'm using 1 pole coefficient smoothing, which helps reduce the
glitches but definitely doesn't get rid of them.  Currently I'm using
DF2 transpose filter topology.  I also tried lattice and a couple
others topologies, but overall that didn't improve things and in some
cases was worse.

If I only needed a second order hipass then I would think a Chamberlin
State Variable Filter would be my best bet, since I found it to be
very adept at handling coefficient changes.  But I'm not sure it will
work for me, since it's not a fully general filter topology.  I've
looked at using the Kingsbury topology which is very similar in form
to Chamberlin, but has poles that are generalized.  Apparently
Kingsbury's filter is all-pole (no zeros), so would need to tack on
some zeros to make it fully general, but then I'm not sure it would
maintain the nice properties of the Chamberlin filter.

I've also looked at using a ladder filter, which seems like it would
totally solve my problem, since all of the internal states are
normalized.  The only downside is that it's about double the
computational cost of most other filter topologies, but that's not a
huge deal in this case.

There's also the possibility of renormalizing the filter states every
time the coefficients are updated, but that seems complicated and
costly in terms of CPU, since the smoothing updates the coefficients
at a fairly high rate.

I'm also seeing some coefficient quantization issues at high sample
rates, when using DF2T, because I'm dealing with low cutoff
frequencies and using single precision floats.  It looks like
Chamberlin, Kingsbury or Ladder would also perform much better in that
respect.

Any ideas?  Recommendations?

Thanks,
Chris
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] ANN: Book: The Art of VA Filter Design

2012-05-25 Thread Vadim Zavalishin

Hi all

This is kind of a cross-announcement from KVRAudio, but since there are 
probably a number of different people on this list, I thought I'd 
announce it here as well. Get it here:


http://ay-kedi.narod2.ru/VAFilterDesign.pdf
http://images-l3.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign.pdf
http://www.discodsp.net/VAFilterDesign.pdf (thanks to "george" for 
mirroring)


There is a discussion thread at
http://www.kvraudio.com/forum/viewtopic.php?t=350246

Regards,
Vadim

--
Vadim Zavalishin
Software Integration Architect | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747

Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Noise performance of f32 iir filters

2011-10-04 Thread Vadim Zavalishin
I'm still waiting to hear the delicious details of Peter's Bacon Lettuce 
and Tomato Transform


As mentioned by the previous answers:
BLT = bilinear transform (I think this abbreviation is rather common in DSP)
TPT = topology-preserving transform
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopology.pdf
(there was a discussion on this list about how exactly to name this thing, but it 
seems TPT is slowly getting accepted as a term; at least I've already seen it 
mentioned on KVR).


Regards,
Vadim

- Original Message - 
From: "Thomas Young" 

To: "A discussion list for music-related DSP" 
Sent: Monday, October 03, 2011 18:48
Subject: Re: [music-dsp] Noise performance of f32 iir filters


I'm still waiting to hear the delicious details of Peter's Bacon Lettuce 
and Tomato Transform


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Dave Hoskins

Sent: 03 October 2011 17:46
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Noise performance of f32 iir filters


Nigel Redmon wrote:
> (not to mention ATM--geez how many meanings can that one attain?).

thefreedictionary.com lists 106 meanings...
http://acronyms.thefreedictionary.com/ATM


At The Moment, According To Me, the Advanced Testing Method for Automated
Theorem Provers is to read the Acceptance Test Manual from the Association
of Teachers of Mathematics at the Annual Technical Meeting where they all
Ate Too Much!

I'll get my coat... : )
D.




--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747

Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Noise performance of f32 iir filters

2011-09-30 Thread Vadim Zavalishin
Just for the reference, I tried to measure the precision of the TPT version 
of the SVF using DF2 BLT integrators, as described in

http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopology.pdf
If I didn't make any mistakes in the measurements, the 32-bit float 
precision is better than -100dB for cutoffs below 20kHz at 44.1kHz SR. The 
error seems to be larger at large cutoffs.
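
(Not necessarily the measurement setup used above, but one way to obtain such 
a figure: run the same tick once in float32 and once in float64 on identical 
input and express the RMS difference relative to the float64 reference in dB. 
Sketch only, illustrative names.)

import numpy as np

def run_svf(x, fc, fs, R, dtype):
    g = dtype(np.tan(np.pi * fc / fs))
    k = dtype(2.0 * R)
    one = dtype(1.0)
    s1 = s2 = dtype(0.0)
    y = np.zeros(len(x), dtype=dtype)
    for n, xn in enumerate(x.astype(dtype)):
        hp = (xn - (k + g) * s1 - s2) / (one + k * g + g * g)
        bp = g * hp + s1
        lp = g * bp + s2
        s1, s2 = bp + g * hp, lp + g * bp
        y[n] = lp
    return y

x = np.random.default_rng(0).standard_normal(44100)
lo = run_svf(x, 1000.0, 44100.0, 0.5, np.float32)
hi = run_svf(x, 1000.0, 44100.0, 0.5, np.float64)
err = np.sqrt(np.mean((lo - hi) ** 2) / np.mean(hi ** 2))
print(20.0 * np.log10(err))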


As already mentioned, the trapezoidal integration is pretty much equivalent 
to TPT with DF1 integrators. There was an observation made by Dominique 
Wurtz on KVRAudio DSP forum, that DF1 and DF2 integrators should behave 
fully identically (mathematically, ignoring the precision issues) even in 
the time-varying TPT case. Which pretty much eliminates (mathematically) the 
need for DF1 integrators in TPT (except in special cases).


Regards,
Vadim

- Original Message - 
From: "Andrew Simper" 

To: "A discussion list for music-related DSP" 
Sent: Tuesday, September 27, 2011 16:19
Subject: Re: [music-dsp] Noise performance of f32 iir filters



Hi Earl,

Since the analog SVF is one of the lowest noise topologies in analog
filters I suspect that a fixed point implementation will do well, but
I have not tested it yet.

Andy
--
cytomic - sound music software




On 27 September 2011 21:35, Earl Vickers  wrote:

Hi Andrew,

This looks most impressive. I look forward to seeing your article. Any
thoughts on how suitable this topology would be for fixed-point
implementation?

Earl

Andrew Simper  wrote:


I finally got around to following up on my hunch that a slightly
modified version of the trapezoidal integrated svf (ie a modified
version of the one I previously posted) should have excellent
numerical properties. My initial tests confirm this in spectacular
fashion. I used all sorts of tests, but the one to show up most
problems was a bell filter with q=2, gain=12 dB, and look at the
cutoffs 20, 200, 2k, 20k. I compare the modified state variable
filter, normalised ladder, normalised direct wave form, direct form 1,
and direct form 2 transposed. The only filter to match the low
quantization error of the modified svf is the normalized ladder
filter, but none of the filters can match the coefficient rounding
error, as is shown in the time domain error of the 20 Hz example:

http://www.cytomic.com/files/dsp/SVF-vs-DF1.pdf

The modified SVF works fine down to very low frequencies with all
single precision computation, which makes it ideal for use even at 192
kHz sample rates. I'll get around to writing it up some time, but I've
got a few plugins and other work to get on with for the moment.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp

links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747

Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Noise performance of f32 iir filters

2011-09-27 Thread Vadim Zavalishin

Hi Andrew


I finally got around to following up on my hunch that a slightly
modified version of the trapezoidal integrated svf (ie a modified
version of the one I previously posted) should have excellent
numerical properties.


Care to reveal what is the "modification"? Did you measure the precision of 
the "unmodified" version?


Regards,
Vadim

- Original Message - 
From: "Andrew Simper" 

To: "A discussion list for music-related DSP" 
Sent: Tuesday, September 27, 2011 10:09
Subject: [music-dsp] Noise performance of f32 iir filters



I finally got around to following up on my hunch that a slightly
modified version of the trapezoidal integrated svf (ie a modified
version of the one I previously posted) should have excellent
numerical properties. My initial tests confirm this in spectacular
fashion. I used all sorts of tests, but the one to show up most
problems was a bell filter with q=2, gain=12 dB, and look at the
cutoffs 20, 200, 2k, 20k. I compare the modified state variable
filter, normalised ladder, normalised direct wave form, direct form 1,
and direct form 2 transposed. The only filter to match the low
quantization error of the modified svf is the normalized ladder
filter, but none of the filters can match the coefficient rounding
error, as is shown in the time domain error of the 20 Hz example:

http://www.cytomic.com/files/dsp/SVF-vs-DF1.pdf

The modified SVF works fine down to very low frequencies with all
single precision computation, which makes it ideal for use even at 192
kHz sample rates. I'll get around to writing it up some time, but I've
got a few plugins and other work to get on with for the moment.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747

Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-23 Thread Vadim Zavalishin
With regard to stability under time varying modulation, Jean Laroche has 
published some criteria:


Very interesting!

As to analysing the effect of parameter modulation on the output I guess 
this is a job for state-space models?


Yes, I guess this is easiest to analyse in the state-space form.

Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-23 Thread Vadim Zavalishin
sound to an extent. When comparing the models, one also shouldn't forget the
time-variant (modulated parameters) behavior of the structures (which I
guess is the primary reason to use SVF instead of DF), but this is much more
difficult to theoretically analyse.



With regards time varying aspects it is easy to solve for them as
well.


What I meant is not the time-varying implementation, which of course is not 
difficult in the cases you describe. I was referring to the question of 
analysing how close the digital model is to the analog one in the 
time-varying case. For the time-invariant case we have the transfer function 
as our analysis tool, giving us full information about both versions of the 
system (so that we can e.g. say that bilinear-transformed implementations 
have amplitude/phase responses pretty close to their continuous time 
prototypes). However, the same thing doesn't work in the time-variant case. 
Thus, we typically just perform experimental analysis of the time-variant 
behavior (which, for music DSP purposes, includes the stability and the 
effect of parameter modulation on the output).


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-23 Thread Vadim Zavalishin
Just one other issue I wanted to mention in respect to using trapezoidal 
integration / bilinear TPT vs. trying to "manually" fix "simpler" Euler-like 
models. Besides giving a nice frequency response of a digital model, the BLT 
also results in an "equally nice" phase response, which also affects the 
sound to an extent. When comparing the models, one also shouldn't forget the 
time-variant (modulated parameters) behavior of the structures (which I 
guess is the primary reason to use SVF instead of DF), but this is much more 
difficult to theoretically analyse.


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters

2011-05-23 Thread Vadim Zavalishin
okay, so that's an analog resonant LPF filter.  there is a method, called 
"impulse invariant" (an alternative to BLT) to transform this analog 
filter to a digital filter.  as its name suggests, we have a digital 
impulse response that looks the same (again, leaving out the unit step 
gating function):


Just for the record. The digital impulse response indeed looks exactly the 
same, but (!) it is not bandlimited, which results in a distorted (aliased) 
frequency response of the resulting filter. Bandlimiting the response is not a 
solution either, because the resulting response then is not a response of a 
system consisting of a finite number of unit delays.
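
(As a concrete 1-pole illustration, mine, not from the post: for 
H(s) = a/(s+a) the impulse-invariant filter is 
H_II(z) = a*T/(1 - e^(-a*T)*z^-1), whose impulse response a*T*e^(-a*T*n) 
matches the sampled analog one exactly; its frequency response, however, is 
the aliased sum over k of H(j*(w + 2*pi*k/T)), so the behavior near Nyquist 
no longer follows the analog roll-off, while the BLT version warps the 
frequency axis but introduces no aliasing.)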



Vadim, is this the article you meant?



Fontana, "Preserving the structure of the Moog VCF in the digital domain"

Proc. Int. Computer Music Conf., Copenhagen, Denmark, 27-31 Aug. 2007
http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.2007.062



Yes, thanks Martin. BTW, if I'm correct, this paper (I only briefly looked 
through it) doesn't address fixing the problem in the 1-pole components. 
It's correct that the bigger problem is in the outer feedback loop, but 
performing the same fix in the 1-poles improves the behavior further, IIRC.
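
(A sketch of that per-stage fix, i.e. a 1-pole lowpass with a trapezoidal 
integrator and the local zero-delay feedback resolved analytically; names 
are illustrative only:)

def onepole_lp_tick(x, s, g):
    # g = tan(pi * fc / fs)
    v = g * (x - s) / (1.0 + g)   # resolve the instantaneous feedback
    y = v + s
    s = y + v                     # integrator state update
    return y, s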


-- it does beg the question: why didn't we think of this earlier? and why 
did Chamberlin do it the way he did?


A number of people *did* think of this earlier, also in the application to 
the analog simulation. E.g. the simulanalog.org article I mentioned earlier 
uses trapezoidal integration, and I would guess that there is even some 
earlier work. The right question is: why does almost nobody care about this 
issue?


Yet another issue is the subjectivity of the judgement. Not all DSP 
engineers, and not even all musicians, have the same high requirements for the 
details of the filter response. Many people would be fully happy with the 
Chamberlin SVF as it is. Also, BLT filters require a division (once the 
parameters change), which was quite expensive, especially in those days, I 
think.


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

