Re: [music-dsp] The art of VA filter design - video

2018-11-28 Thread Vadim Zavalishin




On 28-Nov-18 11:07, Jean-Baptiste Thiebaut wrote:

I'm proud to share this video of Vadim Zavalishin, who came to ADC in
London last week to share his DSP knowledge. I came across Vadim's work on
this list and invited him to ADC a few months ago, and thought I'd share
this with you. (I hope you don't mind, Vadim).


No, of course I don't ;) Thank you for sharing this and thanks again for 
inviting me to the ADC. It has been lots of fun watching the talks and 
meeting other people from the industry (and even some of my former NI 
colleagues going as far back as 2001).


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-02 Thread Vadim Zavalishin



On 01-Nov-18 16:16, Fabian-Robert Stöter wrote:
I appreciate that but it would still be nice if your book could be cited 
appropriately. Have you thought about putting it on arxiv or zenodo.org 
<http://zenodo.org>?
This would give you the possibility to version the book and make folks 
from academia happy with a proper DOI reference.


Nobody has complained so far, but I will consider your suggestion, thank you!

Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-02 Thread Vadim Zavalishin




On 02-Nov-18 03:06, Andrew Simper wrote:
If you prize symmetry, then you can use a cascade of 2 x one-pole HP
and 2 x one-pole LP to make a 4-pole BP (band-pass), and then use the
same old FIR-based output tap mixing to generate all the different
responses. It may not be so easy to do in a real circuit, but in
software we're not bound by what is easy to build :)


https://cytomic.com/files/dsp/cascade-all-to-all-responses.pdf


Symmetry is one of the things. The other is the shape of the amplitude 
response. I'm personally not convinced by the -4dB dip prior to the 
resonance, although YMMV. At any rate it doesn't qualify as a "bread and 
butter" LP IMHO ;) With BP8 it's getting way worse.


Incidentally, another way to arrive at more or less the same structure is
to raise the orders of the LP and HP filters (by stacking identical 1-poles
in series) in the transposed Sallen-Key (Fig. 5.23 of the book). Since the
TSK is essentially a bandpass ladder with a special output mode, it's
actually the same thing. Further ways (originating from the lowpass ladder)
to look at this can be found here:

https://www.kvraudio.com/forum/viewtopic.php?p=6844369#p6844369
and here
https://www.kvraudio.com/forum/viewtopic.php?p=6844470#p6844470


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-01 Thread Vadim Zavalishin

On 01-Nov-18 15:18, pa...@synth.net wrote:



Hmmm, 500 A4 pages would be rather heavy ;)


I'd willingly pay for a copy.


Quite pleased to hear that, thank you ;) Still, you could ask a copy 
shop to print and bind a copy for yourself (the book license allows it).


I have been considering selling this book for money, but so far I don't
really want to do that. One of the reasons is that it'd be prohibitively
difficult to release small updates such as this one.


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-01 Thread Vadim Zavalishin

On 31-Oct-18 18:19, Stefan Stenzel wrote:

Vadim,

I was more referring to the analog multimode filter based on the Moog cascade I
did some years ago, and found it amusing to find a warning against it.


Ah, you mean the one at the beginning of Section 5.5? Well, that's an
artifact of the older revision 1, where the ladder filter was introduced
before the SVF (I still believe that's better didactically, but unfortunately
new material dependencies made me switch the order). The modal mixtures
of the transistor ladder are asymmetric (the HP is not symmetric to the LP and
has the resonance peak kind of "in the middle of its slope", and the BP is
not symmetric on its own). I felt that it might be confusing for a
beginner if their first encounter with resonating HP and BP is with this
kind of special-looking filter, hence the warning. With revision 2 this
warning becomes less important, since the 2-pole LP and BP have already
been discussed before, but I still believe it's informative. After
all, it doesn't say that these filters are bad, it says that they are
special ;)




Anyway, excellent writeup,


Thank you! I'm glad my book is appreciated not only by newbies, but also
by industry experts.




I wish I could have it printed as a proper book for more relaxed reading.


Hmmm, 500 A4 pages would be rather heavy ;)

Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-10-31 Thread Vadim Zavalishin

On 31-Oct-18 15:58, Stefan Stenzel wrote:

Thank you very much, Sir!


You're highly welcome, Sir!



But why the warning about multimode lattice filters?
In my case, this comes way too late!


I'm not sure I'm fully following you... Or are you referring to this:


New additions:
- Generalized ladder filters


You mean, why isn't this discussed in Chapter 5? Well, good question.
But in the same sense, one could ask why the generalized SVF doesn't come in
Chapter 4. Chapter 8 is specifically concerned with building filters
with arbitrary transfer functions of arbitrary orders, whereas Chapters
4 and 5 rather deal with structures commonly used in synths (more or
less), and from this POV it belongs in Chapter 8. Also, had I discussed it in
Chapter 5, it would have been difficult to derive it from the
generalized SVF idea, which in my opinion is highly educative.


Actually, are you by any chance aware of this structure being discussed
elsewhere? I haven't encountered anything like that so far (not that it's
difficult to derive ;) ); I just wanted to build a nonlinear 2nd-kind
Butterworth filter of 4th order, so I needed something like that ;)


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-10-31 Thread Vadim Zavalishin

Announcing a small update to the book

https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.1.0.pdf

New additions:
- Generalized ladder filters
- Elliptic filters of order 2^N
- Steepness estimation of elliptic shelving filters

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] variations on exponential curves

2018-10-01 Thread Vadim Zavalishin



On 01-Oct-18 13:58, Frank Sheeran wrote:
For curves other than 0 and 1, I discover a delta that will work to the 
exact number of samples iteratively because I am too stupid to figure 
out an equation for delta.  Werner is much better at math than I am!


This is quite a smart way to work around the precision loss issues,
although I'm somewhat concerned about its reliability. I guess it can be
proven that the final value is a monotonic function of delta, but possibly
with large multipliers the output value step (for the smallest
change of delta) will be quite large, so the search will not fully converge.
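
A minimal sketch of the kind of search being discussed (hypothetical helper
names, and bisection assumed as the search strategy; it relies on exactly the
monotonicity of the final value in delta that is in question above):

def final_value(start, mult, delta, n):
    x = start
    for _ in range(n):
        x = x * mult + delta
    return x

def find_delta(start, end, mult, n, lo=-1e6, hi=1e6, iters=200):
    # bisection; with large mult the final value becomes extremely sensitive
    # to delta, which is where the convergence concern comes from
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if final_value(start, mult, mid, n) < end:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(find_delta(1.0, 1e-4, 0.9, 100))   # e.g. a 100-sample segment from 1.0 towards 1e-4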


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] variations on exponential curves

2018-10-01 Thread Vadim Zavalishin




On 01-Oct-18 14:12, Vadim Zavalishin wrote:
In principle IIRC the same rule applies for 
multiplier < 1, but there the losses are not too large. This also 
manifests at multiplier = 1 by having the "best offset" so that the 
curve's middle is at zero.


Sorry, I meant to say that for multiplier < 1 you need the opposite 
rule, the curve should end at zero. Therefore at mult = 1 you position 
the curve halfway ;)


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] variations on exponential curves

2018-10-01 Thread Vadim Zavalishin



On 01-Oct-18 13:52, Frank Sheeran wrote:
Indeed, that's a simple parametric, but for generating envelopes we have 
the freedom to depend on the previous sample's output.  So, while an 
exponential curve parametric-style requires a pow(), the iterative 
solution is simply current = previous * multiplier.


Adding a range of curves can be done by adding a delta, as Andre/Werner 
and I have independently discovered (along with probably many others).


And "current = previous * multiplier + delta" is going to be fewer 
calculations than a parametric formula such as yours.  So I suggest the 
Andre/Werner/my method seems faster for the specific application of 
envelopes, as well as offering true exponential curves.


With multiplier values >> 1 there can be noticeable incremental
precision losses. They can be reduced by offsetting the whole curve so
that its starting position is zero (which means that the "output"
value is current + offset). In principle IIRC the same rule applies for
multiplier < 1, but there the losses are not too large. This also
manifests at multiplier = 1 by having the "best offset" so that the
curve's middle is at zero.
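
One way to read the offset suggestion, as a rough sketch (single-precision
state assumed, and the helper names are made up): run the recurrence on a
curve shifted so that it starts at zero, and add the offset back on output.

import numpy as np

def segment_naive(start, mult, delta, n, ftype=np.float32):
    # straight iteration: current = previous * multiplier + delta
    x = ftype(start)
    out = []
    for _ in range(n):
        out.append(float(x))
        x = ftype(x * ftype(mult) + ftype(delta))
    return out

def segment_offset(start, mult, delta, n, ftype=np.float32):
    # the shifted curve y = x - offset obeys y' = y*mult + (delta + offset*(mult - 1));
    # the state stays near zero at the start, where incremental rounding hurts most
    offset = ftype(start)
    d = ftype(ftype(delta) + offset * (ftype(mult) - ftype(1)))
    y = ftype(0)
    out = []
    for _ in range(n):
        out.append(float(y + offset))
        y = ftype(y * ftype(mult) + d)
    return out

print(segment_naive(1.0, 1.02, 0.001, 5))
print(segment_offset(1.0, 1.02, 0.001, 5))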


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] What is resonance?

2018-07-23 Thread Vadim Zavalishin

On 20-Jul-18 18:13, Mehdi Touzani wrote:

So... how do you do a resonance in a lowpass circuit?   :-)   not the
math, not the code, just the architecture.


There are many different ways to create resonance in a lowpass circuit
(esp. if the order is larger than 2). The higher the order of the
filter, the more different answers there are.

Making a feedback loop around a lowpass chain is one way, but AFAIK it
works perfectly (or close to that) only for the 4th-order filter (the
so-called Moog ladder). I'm not aware of any standard generic structure (or
even a transfer function to begin with) which could be referred to as a
generic Nth-order resonating filter. Recently I tried to propose one way
of generalizing the 2nd-order resonance to an arbitrary order by what I
called "Butterworth filters of the 2nd kind", but this involves just the
transfer function, whereas you still have lots of freedom in the
implementation structure. You could look into the latest revision of my
book for more details (where I also explain the problems with the
lowpass feedback).
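
For readers who want to see the lowpass-feedback case concretely, here is a
tiny sketch (a generic textbook form, not taken from the book) of the transfer
function of negative feedback around four identical one-pole lowpasses:

import numpy as np

def ladder_mag_db(w, wc=1.0, k=3.5):
    # H(s) = G^4 / (1 + k*G^4),  G(s) = 1 / (1 + s/wc)
    G = 1.0 / (1.0 + 1j * w / wc)
    H = G ** 4 / (1.0 + k * G ** 4)
    return 20 * np.log10(np.abs(H))

w = np.logspace(-1, 1, 7)
print(np.round(ladder_mag_db(w), 2))
# k = 0 gives the plain 4-pole LP; the resonance peak grows with k, and the
# structure self-oscillates at k = 4 (where 1 + k*G^4 = 0 at w = wc).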

Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Book: The Art of VA Filter Design 2.0.0alpha

2018-06-11 Thread Vadim Zavalishin

Hi everyone!

As usual, I'm duplicating here the announcement on KVR, since (I assume) 
not everyone from this list is also present there.


The Art of VA Filter Design has been updated to 2.0.0alpha. Freely 
available at this link:

https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.0.0a.pdf

Major highlights compared to the previous release:

- different presentation of Sallen-Key filters
- 8-pole ladders
- detailed discussion of nonlinearities
- "Butterworth filters of the 2nd kind"
- different presentation of shelving filters, high-order generalized 
Butterworth and elliptic shelving

- generalized Linkwitz-Riley crossovers
- lots of theoretical stuff
- last but not least: a neat formula for the frequency shifter's poles

The book is a mixture of new research, common knowledge (presented from 
the POV of the author) and reinventing the wheel. I would be thankful 
for pointers to previous research, which I might not be aware of.


Some of the new material is used in the soon to be released Reaktor Core 
macro library update, so it can be tried out (and looked into) directly 
there.


The book is in an "alpha" state; some of the material has had only surface
checking and is, to an extent, still a bit of a "work in progress".


Best regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Elliptic filters coefficients

2018-02-02 Thread Vadim Zavalishin
I don't know if it is possible to make the math behind elliptic filters 
simpler. They really stand out by using somewhat exotic functions.


As for an online resource about filter basics, unfortunately I can't
recommend any. Also, IMHO the choice of resource may strongly depend on
your goals.


Regards,
Vadim

PS. If your goal is to design synth filters, I would probably
recommend my book, but it requires some math background; also,
since you seem to be interested in elliptic filters, I'm not sure if
this is the area you're looking for.


On 02-Feb-18 12:37, Dario Sanfilippo wrote:

Thanks, Vadim.

I don't have a math background, so it might take me longer than I wished
to obtain the coefficients that way, but it's probably time to learn it.
In that regard, would you have a particularly good online resource
that you'd suggest for pole-zero analysis and filter design?


Thanks to you too, Shannon.

Best,
Dario

On 1 February 2018 at 11:16, Vadim Zavalishin 
<vadim.zavalis...@native-instruments.de 
<mailto:vadim.zavalis...@native-instruments.de>> wrote:


Hmm, the Wikipedia article on elliptic filters has a formula to
calculate the poles and further references the Wikipedia article on
elliptic rational functions which effectively contains the formula
for the zeros. Obtaining the coefficients from poles and zeros
should be straightforward.

Regards,
Vadim
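
As a small illustration of the quoted remark about getting the coefficients
from the poles and zeros (the pole/zero values below are placeholders, not an
actual elliptic design):

import numpy as np

zeros = [2.0j, -2.0j]                # placeholder zeros
poles = [-0.3 + 0.9j, -0.3 - 0.9j]   # placeholder poles
b = np.real(np.poly(zeros))   # numerator coefficients, highest power first
a = np.real(np.poly(poles))   # denominator coefficients
print(b, a)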



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer using pair of IIR APFs

2017-02-06 Thread Vadim Zavalishin

Funny that no one mentioned this

https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf

Particularly, formula 7.43

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Delays: sampling rate modulation vs. buffer size modulation

2016-03-23 Thread Vadim Zavalishin

On 23-Mar-16 00:45, Matthias Puech wrote:

Does this mean for instance that if I provide a control over the
integral of D in 1/ I will get the exact same effect as in 2/?


I was thinking about this question a while ago (in terms of tape
rather than BBD delay, but that's more or less the same). It seems that
you need to solve an integral equation, something like (LaTeX notation)


\int_{t-T}^t v(\tau)d\tau = L

where

t = the current time moment
T = the delay time (the unknown to be found)
v(\tau) = tape speed
L = distance between the read and write heads

For an arbitrary v(t) this can be solved only numerically. For a 
predefined v(t) this can be done analytically.
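
A sketch of the numerical route (editorial illustration with made-up parameter
values): integrate the tape speed backwards from the current moment until the
accumulated travel equals the head distance L.

import numpy as np

def delay_time(v, L, dt):
    # v: speed samples, most recent last; dt: sample period; L: head distance
    travel = 0.0
    for n, speed in enumerate(reversed(v)):
        step = speed * dt
        if travel + step >= L:
            # interpolate within the sample where the distance L is reached
            return (n + (L - travel) / step) * dt
        travel += step
    raise ValueError("speed buffer too short for this head distance")

fs = 48000.0
v = [0.5 + 0.4 * np.sin(2 * np.pi * 3.0 * n / fs) for n in range(4000)]
print(delay_time(v, L=0.002, dt=1.0 / fs))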


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin

On 06-Nov-15 11:03, Vadim Zavalishin wrote:

Apologies if this question has already been answered, I didn't read the
entire thread, just wanted to share the following idea off the top of my
head FWIW.


Oops, nevermind, I didn't realize that the SnH period is also random in 
the original question.


--
Vadim Zavalishin
Reaktor Application Architect | R
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-06 Thread Vadim Zavalishin
Okay, an updated idea. Represent the signal as a sum of time-shifted box
functions of random amplitudes and durations. We assume that the sum is
finite and then take the limit (if the values approach
infinity as a result, we can normalize them according to the length of
the signal).


Respectively, the (complex) spectrum of such a sum will be the sum of the box
functions' spectra, which are randomly phase-rotated and randomly
scaled/stretched in the frequency domain (according to their stretching
in the time domain). The phase rotation can be assumed uniformly
distributed. So we need to determine the distribution of the amplitudes
and of the stretching of the box function spectra. Both of the latter
can be found from the distribution of the box amplitudes and box lengths
(under the assumption of a uniform phase rotation distribution). The box
amplitudes are uniformly distributed according to your specs. The
distribution of box lengths must IIRC be one of the commonly known
distributions; I don't remember which one.
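
As a purely numerical companion to this argument (with the assumption, made
only for this sketch, that the hold times are exponentially distributed, which
may differ from the original question's setup), the spectrum can be estimated
by averaging periodograms of random realizations:

import numpy as np

def sh_noise(n, fs, rate_hz, rng):
    # sample-and-hold noise: uniform random levels held for random durations
    out = np.empty(n)
    i = 0
    while i < n:
        hold = max(1, int(rng.exponential(fs / rate_hz)))
        out[i:i + hold] = rng.uniform(-1.0, 1.0)
        i += hold
    return out

rng = np.random.default_rng(0)
fs, n, runs = 48000, 1 << 15, 200
psd = np.zeros(n // 2 + 1)
for _ in range(runs):
    psd += np.abs(np.fft.rfft(sh_noise(n, fs, 100.0, rng))) ** 2
psd /= runs
print(psd[:8])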


On 06-Nov-15 11:06, Vadim Zavalishin wrote:

On 06-Nov-15 11:03, Vadim Zavalishin wrote:

Apologies if this question has already been answered, I didn't read the
entire thread, just wanted to share the following idea off the top of my
head FWIW.


Oops, nevermind, I didn't realize that the SnH period is also random in
the original question.




--
Vadim Zavalishin
Reaktor Application Architect | R
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-08-10 Thread Vadim Zavalishin
Just realized that the following answer has never made its way through 
to the list.


@Robert: I believe it has to do with your mail client settings, which
override the reply-to field. So it's quite possible that more answers
to your mails do not get to the list. Or is that on purpose?


Hi Robert

On 24-Jul-15 21:48, robert bristow-johnson wrote:

in the 2nd-order analog filters, i might suggest replacing 2R with
1/Q in all of your equations, text, and figures because Q is a
notation and parameter much more commonly used and referred to in
either the EE or audio/music-dsp contexts.


I'm not such a big friend of the Q notation. My guess is that the Q
parameter was originally introduced for something like radio-tuning
LC circuits, where it makes perfect sense. For musical 2-pole filters the
problem is that Q jumps from +inf to -inf during the transition into the
self-oscillation region, whereas the R parameter simply crosses zero. Also
see the footnote on page 84 of rev. 1.1.1. The R parameter
also nicely maps to the pole position, being simply the cosine of the
polar angle, while the cutoff is the radius. So, writing R = cos(phi), the
pole's coordinates are simply -w*cos(phi) +- j*w*sin(phi), i.e.
-w*R +- j*w*sqrt(1-R^2) (for |R| <= 1).
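
A two-line numeric check of the pole geometry (using the s^2 + 2*R*w*s + w^2
denominator that the 2R notation refers to; values chosen arbitrarily):

import numpy as np

def poles(w, R):
    # roots of s^2 + 2*R*w*s + w^2: radius w, R = cosine of the polar angle
    return np.roots([1.0, 2.0 * R * w, w * w])

print(poles(1.0, 0.5))    # -0.5 +/- 0.866j
print(poles(1.0, 0.0))    # on the imaginary axis: the self-oscillation border
print(poles(1.0, -0.2))   # R < 0: poles in the right half-plane (unstable)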



in section 3.2, i would replace n0-1 with n0 (which means replacing
n0 with n0+1 in the bottom limit of the summation).  let t0
correspond directly with n0.


On one hand this makes sense. OTOH, using zero-based array indexing, like in
C, n0 is intuitively understood as the first output sample. I agree,
this is less conventional mathematically, but from a software
developer's point of view this might be more intuitive. So, to an
extent, I believe this is a matter of taste and intention.



now even though it is ostensibly obvious on page 40, somewhere (and
maybe i just missed it) you should be explicit in identifying the
trapezoidal integrator with the BLT integrator.  you intimate
that such is the case, but i can't see where you say so directly.


p.40, directly under (3.5):
"The substitution (3.5) is referred to as the bilinear transform, or
shortly BLT. For that reason we can also refer to trapezoidal integrators
as BLT integrators."

Not good enough?



section 3.9 is about pre-warping the cutoff frequency, which is of
course oft treated in textbooks regarding the BLT.  it turns out
that any *single* frequency (for each degree of freedom or knob)
can be prewarped, not only or specifically the cutoff.



Bottom of p.43:
"Notice that it's possible to choose any other point for the prewarping,
not necessarily the cutoff point." etc.


in 2nd-order system, you have two independent degrees of freedom that
can, in a BPF, be expressed as two frequencies (both left and right
bandedges).  you might want to consider pre-warping both, or
alternatively, pre-warping the bandwidth defined by both bandedges.


That's a good point. This approach is used in 7.9 but you're right, it
should have been introduced in chapter 5.



lastly, i know this was a little bit of a sore point before (i can't
remember if it was you also that was involved with the little tiff i
had with Andrew Simper), but as depicted on Fig. 3-18, any purported
zero-delay feedback using this trapezoidal or BLT integrator does
get resolved (as you put it) into a form where there truly is no
zero-delay feedback.  a resolved zero-delay feedback really isn't
a zero-delay feedback at all.  the paths that actually feedback come
from the output end of a delay element.  the structure in Fig 3-18
can be transposed into a simple 1st-order direct form that would be
clear *not* having zero-delay feedback (but there is some zero-delay
feedforward, which has never been a problem).


The structure in 3.18 clearly doesn't have ZDF (although I don't think
it can be made equivalent to any of the direct forms without changing its
topology and hence its time-varying behavior). That's the whole point of
the illustration. However, once you get used to the ZDF, I'd say that
it's probably much easier and more intuitive to stick to ZDF structures
like 3.12 and understand the resolution implicitly (or actually even
directly 2.2; you can notice that afterwards the book hardly uses any
discrete-time diagrams). Particularly when nonlinearities are
introduced into the structure, thereby leaving you the freedom of
choosing the numerical approach to treat them (post-resolution
application, Newton-Raphson, analytical solution, mystran's method etc.).
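
For readers who have not seen a resolved ZDF structure, a minimal one-pole
TPT-style lowpass sketch in the spirit of the structures being discussed
(the resolution step is just the algebraic solution of the instantaneous
feedback equation; parameter values are arbitrary):

import math

class OnePoleLP:
    def __init__(self, cutoff_hz, fs):
        self.g = math.tan(math.pi * cutoff_hz / fs)   # prewarped integrator gain
        self.s = 0.0                                  # integrator state

    def process(self, x):
        # zero-delay loop y = s + g*(x - y), resolved algebraically:
        v = (x - self.s) * self.g / (1.0 + self.g)
        y = v + self.s
        self.s = y + v                                # trapezoidal state update
        return y

lp = OnePoleLP(1000.0, 48000.0)
print([round(lp.process(1.0), 4) for _ in range(5)])  # step response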



i'll be looking this over more closely, but these are my first
impressions.  i hope you don't mind the review (that was not
explicitly asked for).


Would be highly appreciated. And thanks for the comments which you
already made.

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] List settings after the switch

2015-08-10 Thread Vadim Zavalishin
- So, it seems both addresses (music-dsp@music and
music-dsp@lists...) do actually work (so, apologies for a double mail),
but with a 30-minute latency. It used to be less than 1 minute before (when
the old list server was active). Maybe it's a local glitch on my
mail server, but that happened to 2 mails in a row (actually 3, counting
the double mail).


- The Reply button seems to work for the messages sent by myself (they 
go to the list, not to me)


- The list web page that I meant is the old list page (the one found by
Google when searching for the music-dsp list). Conversely, the new
list page does not point to the old archives.


Regards,
Vadim

On 10-Aug-15 10:46, Vadim Zavalishin wrote:

Hi Douglas and all,

it seems that after the switching of the list server there are a few
issues, which I just noticed:

- the reply-to field in the list mails is configured to the sender. So
hitting reply no longer sends the answer to the list. I'm not sure
whether this is the new standard, but I'm not sure whether it's so
commonly used either. At least I'll need to learn to use the Reply
List button.

- the list web page is still pointing to the old archives only. it seems
not possible to find the new archives except by following the
footer link in the list mails. I guess the addresses listed there for
subscribing to the list do not work either.

- the reply list button sends to music-dsp@music.columbia.edu, which
doesn't seem to work. the same email address is listed in the footer. I
have just found the other address in the Trash folder in the test mail
from Douglas, trying it now (I wonder why it works for other people
who manage to write to the list?)

Regards,
Vadim




--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-07-24 Thread Vadim Zavalishin

Released the promised bugfix
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf

On 22-Jun-15 10:51, Vadim Zavalishin wrote:

Didn't realize I was answering a personal rather than a list email, so
I'm forwarding here the piece of information which was supposed to go to
the list:

While we are on the topic of the book, I have to mention that I found
a bug in the Hilbert transformer cutoff formulas 7.42 and 7.43. I tried
to merge odd and even orders into a simpler formula and introduced
several mistakes. The necessary corrections are (if I didn't make another
mistake again ;) )
- the sign in front of each occurrence of sn must be flipped
- x=(4n+2+(-1)^N)*K(k)/N
- the stable poles are given by nN/2 for N even and n(N+1)/2 for N odd.

I plan to release a bugfix update, but want to wait for possibly more
bugs being discovered.

Regards,
Vadim




--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-13 Thread Vadim Zavalishin

On 10-Jul-15 19:50, Charles Z Henry wrote:

The more general conjecture for the math heads :
If u is the solution of a differential equation with forcing function g
and y = conv(u, v)
Then, y is the solution of the same differential equation with forcing function
h=conv(g,v)

I haven't got a solid proof for that yet, but it looks pretty easy.


How about the equation

u''=-w*u+g

where v is sinc and w is above the sampling frequency?


--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-07 Thread Vadim Zavalishin

On 06-Jul-15 04:03, Sampo Syreeni wrote:

On 2015-06-30, Vadim Zavalishin wrote:

I would say the whole thread has been started mostly because of the
exponential segments. How are they out of the picture?


They are for *now* out, because I don't yet see how they could be
bandlimited systematically within the BLEP framework.


Didn't I describe this in my previous posts?



But then evidently they can be bandlimited in all: just take a segment
and bandlimit it. It's not going to be an easy math exercise, but it
*is* going to be possible even within the distributional framework.


I'd say even without one. A time-limited segment is in L2, isn't it?



I don't think I'm good enough with integration to do that one myself.
But you, Ethan and many others on this list probably are. Once you then
have the analytic solution to that problem, I'm pretty sure you can tell
from its manifest form whether the BLEP framework cut it.


That would be a nice check, but I'm not sure I'd be able to derive an 
analytic closed-form expression for the related sum of the BLEPs which 
is what we need to compare against. But could you spot a mistake in my 
argument otherwise?



Consider a piecewise-exponential signal being bandlimited by BLEP.


That sort of implies an infinite sum of equal amplitude BLEPs, which
probably can't converge.


I think I have addressed exactly this convergence issue in my previous 
posts and in my paper. Furthermore, the convergence seems to be directly 
related to the bandlimitedness of the sine (see the paper). The same 
conditions hold for an exponential, hence my idea to define the 
extended bandlimitedness based on the BLEP convergence (or rather, the 
rolloff of the derivatives, which defines the BLEP convergence).



Each of these exponentials can be represented as a sum of
rectangular-windowed monomials (by windowing each term of the Taylor
series separately).


They can't: they are not finite sums, but infinite series, and I don't
think we know how to handle such series right now.


I meant infinite series of rectangular-windowed monomials. I'm not
sure what specifically you are referring to by "we don't know how to
handle them". We are just talking about pointwise convergence of this
series.





We can apply the BLEP method to bandlimit each of these monomials and
then sum them up.


We can handle each (actually sum of them) monomial. To finite order. But
handling the whole series towards the exponential...not so much.


Again, pointwise convergence is meant.




If the sum converges then the obtained signal is bandlimited, right?


If it does, yes. But I don't think it does.


According to my paper, it does. Unless I made a mistake, the BLEP
amplitudes roll off as 1/n, so if the derivatives (which are the BLEP
gains) roll off exponentially (which they do for a sine
bandlimited to 1), the sum converges. Notice that this sufficient
condition for the BLEP convergence is fulfilled if and only if the sine
is bandlimited.



I'm pretty sure you shouldn't be thinking about the bandlimited forms,
now. The whole BLIT/BLEP theory hangs on the idea that you think about
the continuous time, unlimited form first, and only then substitute --
in the very final step -- the corresponding bandlimited primitives.


So it did, but what's wrong with doing the same for the monomial series?


The sufficient convergence condition for the latter is that the
derivatives of the exponent roll off sufficiently fast.


But they don't, do they?


Of course they do:

d^n exp(a*t) / dt^n = a^n * exp(a*t)

so they roll off as a^n. The same for the sine. For a sine bandlimited
to 1 we have a < 1 and thus the BLEP sum converges.


When you snap an exponential back to zero, you necessarily snap back
all of its derivatives.


I'm not sure what you are referring to as "snapping".



I mean, when you introduce a hard phase shift to a sine, you don't just
modulate the waveform AM-wise.


I do, if we consider the same in complex numbers. This is more or less 
what my paper is doing in the ring modulation approach.



Especially when it gets bandlimited, the way you interpolate the
waveform ain't gonna have just Diracs there, but Hilberts as well, and
both of all orders...


You lost me here a little bit. What's a Hilbert? 1/t? I thought it was the
Fourier transform of a Heaviside. How is it a derivative of a Dirac?
Furthermore, if we are talking in the spectral domain, we are going to
have issues arising from the convergence of the infinite series in the
time domain (you mentioned that the set of tempered distributions is not
closed), which is why I specifically tried to stay in the time domain.
That's the whole point: the bandlimitedness can be checked in the time
domain, without even knowing the spectrum. Maybe the spectrum doesn't
even exist for the full signal (like for an exponential), but we
don't care, since our definition works only with time-limited segments
of the signal.



So to return to the discussion

Re: [music-dsp] Sampling theorem extension

2015-06-23 Thread Vadim Zavalishin

On 22-Jun-15 21:59, Sampo Syreeni wrote:

On 2015-06-22, Vadim Zavalishin wrote:


After some googling I rediscovered (I think I already found out it one
year ago and forgot) the Paley-Wiener-Schwartz theorem for tempered
distributions, which is closely related to what I was aiming at.


It'll land you right back at the extended sampling theorem I told about,
above.


Exactly (if by "extended sampling theorem" you mean the sampling theorem
for tempered distributions). And now, by dropping the "polynomial growth
on the real axis" restriction in PWS, I can handle any analytic signal.
And those which are not analytic are not bandlimited anyway.



So why fret about the complex extensions?


I'm not sure which specific meaning of the word "complex" you imply
here. But the main motivation for the whole stuff is applying the BLEP
method to frequency-modulated sawtooth and triangle, where the FM is
either done in the exponential scale and/or the oscillator is
self-modulated. In this case you get exponential segments (and more
complex shapes if the self-modulation is done in the exp scale, I believe).
This should also cover, more or less, the question of the applicability of
BLEP to arbitrary signal shapes.



It's just that you don't need any of that machinery in order to deal
with that mode of synthesis, and you can easily see from the
distributional theory that you can't do any better.


It seems I can do better, because my question is not whether an
infinitely long signal, which doesn't even have a Fourier transform, is
bandlimited. My question is whether a time-limited version of that
signal is bandlimited except exactly for the discontinuities arising
from the time-limiting.



--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-22 Thread Vadim Zavalishin
 (if the infinite sum of BLEPs converges) some
other signal y'. The signal x is called bandlimited if for any
rectangular window w(t), the signal y' exists (the BLEPs converge) and
y'=BL[y].

This definition is well-specified and directly maps to the goals of
the BLEP approach. The conjectures are

- for the signals which are in L_2 the definition is equivalent to the
usual definition of bandlimitedness.
- if y' exists (BLEPs converge), then y'=BL[y]

If the BLEP convergence is only given within some interval of the time
axis (don't know if such cases can exist), then we can speak of
signals bandlimited on an interval.







--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-06-22 Thread Vadim Zavalishin
Didn't realize I was answering a personal rather than a list email, so 
I'm forwarding here the piece of information which was supposed to go to 
the list:


While we are on the topic of the book, I have to mention that I found
a bug in the Hilbert transformer cutoff formulas 7.42 and 7.43. I tried
to merge odd and even orders into a simpler formula and introduced
several mistakes. The necessary corrections are (if I didn't make another
mistake again ;) )
- the sign in front of each occurrence of sn must be flipped
- x=(4n+2+(-1)^N)*K(k)/N
- the stable poles are given by nN/2 for N even and n(N+1)/2 for N odd.

I plan to release a bugfix update, but want to wait for possibly more
bugs being discovered.

Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Vadim Zavalishin

On 11-Jun-15 19:58, Sampo Syreeni wrote:

On 2015-06-11, vadim.zavalishin wrote:


Not really, if the windowing is done right. The DC offsets have more
to do with the following integration step.


I'm not sure which integration step you are referring to.


The typical framework starts with BLITs, implemented as interpolated
wavetable lookup, and then goes via a discrete time summation to derive
BLEPs. Right?


I prefer analytical expressions for BLEPs of 0 and higher orders :)


So the main problem tends to be with the summation,
because it's a (borderline) unstable operation.


I don't think so. The analytical expressions give beautiful answers 
without any ill-conditioning.




So we don't know, if exp is bandlimited or not. This brings us back to
my idea to try to extend the definition of bandlimitedness, by
replacing the usage of Fourier transform by the usage of a sequence of
windowed sinc convolutions.


The trouble is that once you go with such a local description, you start
to introduce elements of shift-variance.


How's that? This condition (transparency of the convolution of the
original signal with sinc in the continuous-time domain) is equivalent to
the normal definition of bandlimitedness via the Fourier transform, as
long as the Fourier transform exists. The thing is, by understanding the
convolution integral in the generalized Cesaro sense (or just ignoring
the 0th- and higher-order DC offsets arising in this convolution) we
might attempt to extend the class of applicable signals. This seems
relatively straightforward for polynomials. As the next step we can
attempt to use polynomials of infinite order, particularly the Taylor
expansions, where the BLEP convergence question will arise. The answer to
the latter might be given by the rolloff speed of the Taylor series
terms (derivative rolloff).


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Vadim Zavalishin

On 12-Jun-15 12:54, Andreas Tell wrote:

I think it’s not hard to prove that there is no consistent
generalisation of the Fourier transform or regularisation method that
would allow plain exponentials. Take a look at the representation of
the time derivative operator in both time domain, d/dt, and frequency
domain, i*omega. The one-dimensional eigensubspaces  of i*omega are
spanned by the eigenvectors delta(omega-omega0) with the associated
eigenvalues i*omega0. That means all eigenvalues are necessarily
imaginary, with exception of omega0=0. On the other hand, exp(t) is
an eigenvector of d/dt with eigenvalue 1, which is not part of the
spectrum of the frequency domain representation.

This means, there is no analytic continuation from other transforms,
no regularisation or transform in a weaker distributional sense.


On one hand, cos(omega0*t) is delta(omega-omega0)+delta(omega+omega0) in
the frequency domain (some constant coefficients possibly omitted). On
the other hand, its Taylor series expansion in the time domain corresponds
to an infinite sum of derivatives of delta(omega). So an infinite sum of
delta^(n)(omega) terms (which are zero everywhere except at the origin) must
converge to delta(omega-omega0)+delta(omega+omega0), correct? ;)


This is just to illustrate that the eigenspace reasoning might not work 
for infinite sums. And I don't know the sufficient condition for it to 
work (and this condition wouldn't hold here anyway, probably). Or is 
there any mistake in the above?


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Vadim Zavalishin

On 11-Jun-15 11:00, Sampo Syreeni wrote:

I don't know how useful the resulting Fourier transforms would be to the
original poster, though: their structure is weird to say the least.
Under the Fourier transform polynomials map to linear combinations of
the derivatives of various orders of the delta distribution, and their
spectrum has as its support the single point x=0.


So they can be considered kind of bandlimited, although as I noted in 
my other post, it seems to result in DC offsets in their restored 
versions, if sinc is windowed. It probably can be shown that in the 
context of BLEP these DC offsets will cancel each other (possibly under 
some additional restrictions). So, this seems to agree with my previous 
guesses and ideas.


You also mentioned (or so I understood you) that exp(at) (a real,
t from -infty to +infty) is not bandlimited (whereas my conjecture,
based on the derivative rolloff speed, was that it's bandlimited if a is
below the Nyquist). Could you tell us what its spectrum looks like?


Thanks,

Vadim


--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Vadim Zavalishin

On 10-Jun-15 21:26, Ethan Duni wrote:

With bilateral Laplace transform it's also complicated, because the
damping doesn't work there, except possibly at one specific damping
setting (for an exponent, where for polynomials it doesn't work at
all), yielding a DC


Why isn't that sufficient? Do you need a bigger region of convergence for
something? Note that the region of convergence for a DC signal is also
limited to the real line/unit circle (for Laplace/Z respectively). I'm
unclear on exactly what you're trying to do with these quantities.


I'm interested in the bandlimitedness of the signal. I'm not aware of
how I can judge the bandlimitedness if I don't know the Laplace transform
on the imaginary axis.





I'm not fully sure, how to analytically extend this result to the entire
complex plane and whether this will make sense in regards to the
bandlimiting question.


I'm not sure why you want to do that extension? But, again, note that you
have the same issue extending the transform of a regular DC signal to the
entire complex plane - maybe it would be enlightening to walk through what
you do in that case?


See above and below ;)

Alright, I'll try to reiterate last year's ideas here.

I'm interested in a firm (well, reasonably firm, whatever that means)
foundation of the BLEP approach. The intuitive description of the BLEP
approach is: the discontinuities of the signal and its derivatives are
the only source of nonbandlimitedness, so if we replace these
discontinuities with their bandlimited versions, we effectively
bandlimit the signal.
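
To make that mechanism concrete before asking how far it can be taken, here is
a tiny sketch that corrects a naive sawtooth around its step discontinuity;
the 2-sample polynomial residual (polyBLEP) used here is only a crude stand-in
for a properly bandlimited step, purely for illustration and not the
construction analyzed in this thread:

def poly_blep(t, dt):
    # crude 2-sample polynomial approximation of (bandlimited step - naive step)
    if t < dt:                     # just after the wrap
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:               # just before the wrap
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def saw(freq, fs, n):
    phase, dt, out = 0.0, freq / fs, []
    for _ in range(n):
        out.append(2.0 * phase - 1.0 - poly_blep(phase, dt))
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

print(saw(1000.0, 48000.0, 8))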


Now, how far can this statement be taken? This depends on the following 
two issues:


- Which infinitely differentiable signals are bandlimited and which
aren't? Here come the polynomials and the exponentials, among other
practically interesting signals. One could be tempted to intuitively
think that polynomials, being integrals of a bandlimited DC, are also
bandlimited and, taking the limit, so should be infinite polynomials,
particularly Taylor series, so any signal representable by its Taylor
series (particularly, any analytic signal) should be bandlimited.
However, this clearly contradicts the knowledge that an FM'd sine is not
bandlimited. This leads to the second question:


- Given a point where a signal and its derivatives are discontinuous, 
will the sum of the respective BLEPs converge?


In regard to the first question, we can notice that any bandlimited (in the
Fourier transform sense) signal is necessarily entire in the complex
plane (if I'm not mistaken, this can be derived from the Laplace
transform properties). Also, pretty much any practically interesting
infinitely differentiable signal is also entire. So we can replace the
infinite real differentiability requirement here with the stronger
requirement of the signal being entire in the complex domain.


However, we also need to extend the definition of bandlimitedness. Since
polynomials and exponentials have no Fourier transform (let's believe
so, until Sampo Syreeni or someone else gives further clarifications
otherwise), we can't say whether they are bandlimited or not. More
precisely, the sampling theorem cannot give any answer for these signals.
But any answer to what exactly?


The real question (why we are talking about bandlimitedness and the
sampling theorem in the first place) is not the bandlimitedness itself.
Rather, we want to know what is going to be the result of a restoration
of the discrete-time signal by the DAC. So the extended (more practical)
definition of bandlimitedness should be something like the following:


Suppose we are given a continuous-time signal, which is then naively 
sampled and further restored by a high-quality DAC. If the restored 
signal is reasonably identical to the original signal, the signal is 
called bandlimited.


It is probably reasonable to simplify the above, and replace the high 
quality DAC with a sequence of windowed sinc restoration filters, where 
the window length is approaching infinity.


"Reasonably identical" means the following:
- the higher the quality of the DAC, the closer the restored signal is to the
original one
- we can probably allow a discrepancy at DC. At least this
discrepancy seems to appear in windowed sinc filters for polynomials.



Then, the BLEP approach applicability condition is the following. Given
a bounded signal representable as a sum of an entire function and the
derivative discontinuity functions, can we bandlimit it by simply
bandlimiting the discontinuities? Apparently, we can do so if the entire
function is bandlimited and the sum of the bandlimited derivatives
converges. The conjecture is (please refer to my previous year's posts)
that the condition (at least a sufficient one) for the entire function
being bandlimited and for the BLEP sum to converge is one and the same,
and has to do with the rolloff speed of the function's derivatives as
the derivative order increases.



--
Vadim

Re: [music-dsp] Sampling theorem extension

2015-06-10 Thread Vadim Zavalishin

On 09-Jun-15 19:23, Ethan Duni wrote:

Could you give a little bit more of a clarification here? So the
finite-order polynomials are not bandlimited, except the DC? Any hints
as to what their spectra look like? What would a bandlimited polynomial
look like?



Any hints as to what the spectrum of an exponential function looks like? What
does a bandlimited exponential look like? I hope we are talking about
one and the same real exponential exp(at) on (-infty,+infty) and not
about exp(-at) on [0,+infty) or exp(|a|t).


The Fourier transform does not exist for functions that blow up to +-
infinity like that.


I understood from Sampo Syreeni's answer that the Fourier transform does
exist for those functions. And that's exactly the reason I am asking
the above question.




To do frequency domain analysis of those kinds of
signals, you need to use the Laplace and/or Z transforms. Equivalently, you
can think of doing a regular Fourier transform after applying a suitable
exponential damping to the signal of interest. This will handle signals
that blow up in one direction (like the exponential), but signals that blow
up in both directions (like polynomials) remain problematic.


Not good enough. If we're talking about the unilateral Laplace transform,
then it introduces a discontinuity at t=0, which immediately introduces
further non-bandlimited partials into the spectrum. I'm not sure how you
propose to answer the question of the original signal being bandlimited
in this case. With the bilateral Laplace transform it's also complicated,
because the damping doesn't work there, except possibly at one specific
damping setting (for an exponential; for polynomials it doesn't work
at all), yielding a DC. I'm not fully sure how to analytically extend
this result to the entire complex plane and whether this will make sense
in regard to the bandlimiting question.




That said, I'm not sure why this is relevant? Seems like you aren't so much
interested in complete exponential/polynomial functions over their entire
domain, but rather windowed versions that are restricted to some small time
region?


I am specifically interested in the functions on the entire real axis. 
Further in my original email there is an explanation of the reasons.


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-10 Thread Vadim Zavalishin

On 09-Jun-15 22:08, robert bristow-johnson wrote:

a Nth order polynomial, f(x), driven by an x(t) that is bandlimited to B
will be bandlimited to N*B.  if you oversample by a ratio of at least
(N+1)/2, none of the folded images (which we call aliases) will reach
the original passband and can be filtered out with an LPF (at the high
sampling rate) before downsampling to the original rate.  with 4x
oversampling, you can do a 7th-order polynomial and avoid non-harmonic
aliased components.


We are not talking about signals being fed through the polynomials. We 
are talking about the polynomials as the signals.



I'm failing to see how Euler equation can relate exponentials of a real
argument to sinusoids of a real argument? Any hints here?



let f(x) be defined to be  f(x) = e^(j*x)/(cos(x) + j*sin(x))

 .

I'm failing to see how Euler equation can relate exponentials of a *real 
argument* to sinusoids of a *real argument*




I hope we are talking about
one and the same real exponential exp(at) on (-infty,+infty) and not
about exp(-at) on [0,+infty) or exp(|a|t).



oh, (assuming you meant e^(-|a*t|)), them's are in the textbooks.


Did I correctly understand you? The Fourier transform of exp(at), where a
and t are real and t is from -infty to +infty, is in the textbooks? Any
hints as to what it looks like?



Yes, this is what I was referring to. Currently I'm interested in the
class of functions which are representable as a sum of a real function
(which, if analytically extended to the complex plane, is entire) and
isolated derivative discontinuity functions (non-bandlimited versions of
BLEPs, BLAMPs etc.).



i think, if you allow for dirac impulses (or an immeasurably
indistinguishable approximation of width 10^(-44) second), any finite
and virtually bandlimited function will do.  if you insist on being
strict with your mathematics, i can't help you anymore (it's been more
than 3 decades since i cracked open any Real Analysis or Complex
Variables or Functional Analysis textbook)


The problem currently is not the impulses, but the entire (complex 
analytical) part of the signal.



BTW, i am no longer much enamoured with BLIT and the descendants of
BLIT.


I'm not sure how BLEP is a descendant of BLIT


Because, if the continuous part is bandlimited, then we just have to
replace the discontinuities by their bandlimited versions (the essence
of the BLEP approach) and the remaining question is only: if there are
infinitely many discontinuities at a given point, whether the sum of
their bandlimited versions will converge.



they don't.  imagine a perfect brickwall filter with sinc(t) as its
impulse response.  now drive the sonuvabitch with

       { (-1)^n,   n >= 0
x[n] = {
       { -(-1)^n,  n < 0

which is   x[n] = ..., -1, +1, -1, +1, [+1], -1, +1, -1, ...   (x[0] in brackets)

and see what you get.  it ain't BIBO.


Interesting observation. I might need to think a little bit more about
this :)
However, I'm not sure how this is related to the convergence of the BLEPs
in the context we are talking about.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect | RD
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Did anybody here think about signal integrity

2015-06-08 Thread Vadim Zavalishin
If you try to take the Fourier transform integral of exp(j*omega_0*t),
it will not converge in the sense in which an improper integral's
convergence is usually understood. You will need to employ something
like the Cauchy principal value or Cesaro convergence to make it converge to
zero at omega != omega_0. At omega = omega_0 the integral diverges no matter
in which sense you take it. So, strictly speaking, the Fourier transform of
a sine doesn't exist.


An equivalent view of this from the inverse transform's side is that the
spectrum of the sine is a Dirac delta function, which is not a function
in the normal sense.


So, none of the statements of the Fourier transform theory (including 
the sampling theorem, which assumes the existence of the Fourier 
transform), taken rigorously, seem to apply to the sinusoidal signals.


Regards,
Vadim

On 08-Jun-15 10:35, Victor Lazzarini wrote:

Not sure I understand this sentence. As far as I know the FT is defined as an
integral between -inf and +inf, so I am not quite
sure how it cannot capture infinite-length sinusoidal signals. Maybe you meant
something else? (I am not being difficult, just
trying to understand what you are trying to say).

Dr Victor Lazzarini
Dean of Arts, Celtic Studies and Philosophy,
Maynooth University,
Maynooth, Co Kildare, Ireland
Tel: 00 353 7086936
Fax: 00 353 1 7086952


On 8 Jun 2015, at 08:19, vadim.zavalishin 
vadim.zavalis...@native-instruments.de wrote:

It might seem that such signals are unimportant, however even the infinite 
sinusoidal signals, including DC, cannot be treated by the sampling theorem, 
since the Dirac delta (which is considered as their Fourier transform) is not a 
function in a normal sense and strictly speaking Fourier transform doesn't 
exist for these signals.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Did anybody here think about signal integrity

2015-06-08 Thread Vadim Zavalishin

On 08-Jun-15 15:06, Theo Verelst wrote:

Clearly, there's very little knowledge around the basic mathematical
proofs underpinning a decent undergrad engineering course. Prisms
understand fine what the Fourier transform is, and isn't. Maybe there's
an interest in this:
http://mathworld.wolfram.com/FourierTransformExponentialFunction.html .


Clearly this is not the exponential signal which I was referring to. 
This is also a clear indication of the limitation of the sampling 
theorem I was referring to: since it's not possible to take the Fourier 
transform of an exponential function, people refer to some other 
function as "the exponential function" in the context of the Fourier 
transform :)


Anyway, since Theo expressed the wish not to change the direction of the 
thread, let's not stick to the question that I brought up. Although, if 
there is a renewed interest to discuss this aspect, I guess, creating a 
new thread could be an appropriate way to go.


Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-04 Thread Vadim Zavalishin

On 04-Feb-15 00:25, robert bristow-johnson wrote:


On 2/2/15 6:21 PM, Stefan Sullivan wrote:

I actually found by playing around with a particular biquad problem
that changing the topology of the filter had a greater impact on
reducing artifacts than proving bibo stability. In fact, any
linearly interpolated biquad coefficients between stable filters
results in a stable filter, including with nonzero initial
conditions ( http://dsp.stackexchange.com/a/11267/5321).


i think so, too.  because the stability depends on the value of a2
and ramping from one stable value of a2 to another stable value
shouldn't be romping through unstable territory...


I'm not sure what exactly you are referring to here, but your statement
could evoke a common misconception. The "stable" and "unstable"
territories are defined only for LTI systems (if you look at the 
corresponding proofs, they all assume the time-invariance property of 
the filter). That's why the works dealing with proving the 
time-varying stability have to resort to other criteria than mere 
pole positions. Rigorously speaking, there is no reason to believe that 
the same thing should apply to time-varying systems. Intuitively 
speaking, the change of parameters can work as an additional 
destabilizing factor, which is not taken into account by the 
LTI-specific "poles inside the unit circle" rule.


As I mentioned earlier, the proof linked above is not convincing either,
because it doesn't consider infinitely ongoing modulation of the
coefficients. Sure, you can always treat the system as BIBO up to any
given finite point in time, but from that point of view ANY linear system
is BIBO. The point of the BIBO definition is to put a horizontal bound on
the filter state growth, not to state that the filter state is finite at
any finite time.



but if there is vibrato connected to a filter's control parameters,
i can see how a filter can go unstable and never settle down.


This is exactly the point of the time-varying stability analysis.

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-04 Thread Vadim Zavalishin
After writing the previous mail, I realized that what I described in 
regard to mass-spring vs SVF was exactly the thing which I mentioned 
earlier: placing the (identical) cutoff gains before the integrators is 
equivalent to the time axis distortion. This might be seen as some kind 
of "proof" that such structures have the "best" time-varying performance 
in cases of cutoff modulation. Although, whether this is equivalent to 
the choice of the energy-based state variables in real-world physics 
systems is not fully clear to me yet.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-04 Thread Vadim Zavalishin

On 03-Feb-15 03:39, Andrew Simper wrote:

I completely agree! I find it mentally easier to think of energy
stored in each component rather than state variables even though
they are the same. So for musical applications it is important that a
change in the cutoff and resonance doesn't change (until you process
the next sample) the energy stored in each capacitor / inductor /
other energy storage component in your model. Direct form structures
do not have this energy conservation property, they are only
equivalent in the LTI case (linear time invariant - ie don't change
your cutoff or resonance ever). Any method that tries to jiggle the
states to preserve the energy would only be trying to do what already
happens automatically with some of state space model, so I feel it is
best to leave such forms for static filtering applications.


I'm not sure whether the choice of the energy-based state variables is 
indeed the best one (would be nice to try to have some kind of formal 
proof of that), but at least it seems to me that it might be that the 
SVF has the optimal (in a way) choice of those. My consideration is the 
following. Imagine a generic 2nd order mechanical system. Something like 
a mass on a spring. The external excitation force is the input signal. 
The natural choice of state variables is the position and the velocity 
of the mass. We can look at it as a multimode LP/BP/HP filter. Up to 
some scaling, the position is the lowpass output, the velocity is the 
bandpass output and the highpass can be obtained as a linear combination 
of LP, BP and the input. Since there are not that many possibilities to 
construct a 2nd order linear differential system, our system is 
equivalent to an SVF in the LTI sense. The time-varying behavior will 
depend on which specific coefficients of the 2nd order differential 
equations are modulated and on the choice of state variables.


So, our state variables are the position and the velocity. Imagine our 
system has a high cutoff and we are feeding in a sinusoidal signal of a 
sufficiently high frequency, still below the cutoff. So the lowpass 
output (the position) is a similar sinusoid. At the zero-crossing time 
the velocity will be maximal. Suppose we suddenly lower the cutoff at 
this moment. Intuitively (I admit that this requires a more rigorous 
check), at a lower cutoff the velocity will not be changing so quickly 
any more. This means the output signal of the system will significantly 
overshoot the previous output amplitude.


In comparison, the SVF uses the velocity divided by the cutoff as its 
state variable, which means a sudden reduction of the cutoff will 
proportionally reduce the implied velocity, and the overshoot will not 
be that big anymore.
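
A toy numeric check of this argument (a sketch only: the concrete numbers 
are arbitrary illustration values and the crude semi-implicit Euler 
integration is not meant as a usable filter implementation):

    import numpy as np

    fs, dt, R = 96000.0, 1.0/96000.0, 0.5
    def run(svf_states):
        w = 2*np.pi*5000.0                 # high cutoff at first
        s1, s2 = 0.0, 0.0                  # s2 = position (LP out); s1 = velocity
        out = []                           #  (raw) or velocity/cutoff (SVF-style)
        for n in range(4096):
            if n == 2016:                  # drop the cutoff near a velocity maximum
                w = 2*np.pi*500.0
            x = np.sin(2*np.pi*1000.0*n*dt)
            if svf_states:
                s1 += dt*w*(x - 2*R*s1 - s2)        # s1' = w*(x - 2R*s1 - s2)
                s2 += dt*w*s1                       # s2' = w*s1
            else:
                s1 += dt*(w*w*(x - s2) - 2*R*w*s1)  # v' = w^2*(x - p) - 2Rw*v
                s2 += dt*s1                         # p' = v
            out.append(s2)
        return np.array(out)
    # the raw-velocity version overshoots much more after the cutoff drop:
    print(np.max(np.abs(run(False)[2016:])), np.max(np.abs(run(True)[2016:])))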


In order to test your conjecture about the energy-based state variables, 
one would need to explicitly write down the mass-spring equation and 
compare the respective choices of state variables to those of SVF. 
Possibly, the energy-based state variables of the mass-spring system 
will be equivalent to the state variables of the SVF, which would then 
be a sign that your idea might be correct.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-03 Thread Vadim Zavalishin

On 03-Feb-15 00:21, Stefan Sullivan wrote:

... In fact, any linearly interpolated
biquad coefficients between stable filters results in a stable filter,
including with nonzero initial conditions (
http://dsp.stackexchange.com/a/11267/5321). The trick is that the bounds
may be different with nonzero initial conditions than with zero initial
conditions.


I took a glance at the proof and it doesn't look very convincing. If 
there's interest, we could further discuss the details.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-02 Thread Vadim Zavalishin

One should be careful not to mix up two different requirements:

- time-varying stability of the filter
- the minimization of modulation artifacts

While both are probably closely related, there is no reason to believe 
that they are equivalent. My intuitive guess would be that the 
absolutely best (whatever that means) stability would occur (for 
2-pole filters) for a filter based on the 2nd order resonating Jordan 
normal cell, which is effectively just implementing a decaying complex 
exponential:


x[n+1] = A*(x[n]*cos(a)-y[n]*sin(a))
y[n+1] = A*(x[n]*sin(a)+y[n]*cos(a))

(the filter itself will be a state-space representation built around the 
above transition matrix). However, it's not even intuitively obvious if 
that structure would give you the minimal artifacts.
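
Just to make the above concrete, here is a minimal numeric sketch of a 
filter built around this transition matrix. The way the input enters and 
the output is taken (input added to the first state, first state read as 
the output) is an arbitrary illustration choice, not a claim about the 
optimal structure:

    import numpy as np

    def jordan_cell(x, f0, r, fs):
        # 2nd order resonating cell: per-sample rotation by a = 2*pi*f0/fs
        # with decay factor r (0 < r < 1); its impulse response is the real
        # part of a decaying complex exponential
        a = 2.0*np.pi*f0/fs
        c, s = np.cos(a), np.sin(a)
        u = v = 0.0
        y = np.empty(len(x))
        for n, xn in enumerate(x):
            u, v = r*(u*c - v*s) + xn, r*(u*s + v*c)
            y[n] = u
        return y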


In regards to the artifact minimization, I have only an intuitive 
suggestion. Let's look at the SVF structure in continuous time (e.g. 
Fig.5.1 on p.77 of
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3.pdf) 
and at the structure of the continuous-time integrator (the two untitled 
pictures on p.49 in the same text). It's intuitively clear that the 
integrator structure where the cutoff gain precedes the integration 
generates fewer artifacts, since the integrator is smoothing out the 
coefficient changes. This leads to the idea that in this case the 
lowpass output of the SVF would be quite reasonable in regards to the 
artifact minimization, since each of the cutoff coefficients is smoothed 
by an integrator and the resonance coefficient is smoothed by both of 
them. Similar considerations can be applied to the other modes, where 
it's clear that the HP output gets the unsmoothed artifacts from the 
resonance changes. If we want to build a mixture of LP/BP/HP modes 
rather than picking the outputs one by one, then maybe it's possible to 
smooth the artifacts by using the transposed (MISO) form of the SVF, but 
I'm not sure.


The thing with placing the cutoff before the integrator is pretty 
generic. It can be easily shown, that in this case the cutoff 
modulations can be equivalently represented as time dilation/compression 
(provided all integrators share the same cutoff), thus they don't affect 
the filter stability and their artifacts are exactly those of time axis 
warping. It would be reasonable to expect that if we then apply any 
state-variable preserving analog to digital transformation techniques 
(such as trapezoidal integration/TPT), the artifacts amount will be more 
or less the same. Similar reasoning can be applied to a continuous-time 
Jordan normal cell, which then can be converted to discrete time.


One would then generally expect other discretization approaches which 
do not preserve the topology and state variables, such as direct forms, 
to have way poorer performance in regards to the artifacts, unless, of 
course, it's an approach which specifically targets the artifact 
minimization in one or the other way.


Regards,
Vadim

On 02-Feb-15 11:18, Ross Bencina wrote:

Hello Robert,

On 2/02/2015 10:10 AM, robert bristow-johnson wrote:

also, i might add to the list, the good old-fashioned lattice (or
ladder) filters.


In the Laroche paper mentioned earlier [1] he shows that Coupled Form is
BIBO stable and Normalized Ladder is stable only if coefficients are
varied at most every other sample (which, if adhered to, should also be
fine for the current discussion).

The Lattice filter is *not* time-varying stable according to Laroche's
analysis. I'd be curious to hear alternative views/discussion on that.

[1] Laroche, J. (2007) “On the Stability of Time-Varying Recursive
Filters,” J. Audio Eng. Soc., vol. 55, no. 6, pp. 460-471, June 2007.

Cheers,

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] SVF and SKF with input mixing

2015-01-06 Thread Vadim Zavalishin
The SVF transposition can be done in continuous-time domain (where the 
filter is basically two integrators in series, both feeding back to the 
input). Then, applying the trapezoidal integration/ZDF techniques we 
obtain a multi-input 2-pole SVF.
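
Just for reference, a single-input trapezoidal/ZDF SVF tick in the above 
spirit could look roughly like the following sketch (written from memory, 
not a verbatim copy of any of the linked papers; variable names are mine). 
The input-mixing variant injects the additional inputs at the 
corresponding summation nodes, for which see the linked papers:

    import numpy as np

    def svf_tick(x, state, g, k):
        # one sample of a trapezoidally integrated ("ZDF") SVF;
        # g = tan(pi*cutoff/fs), k = 1/Q = 2R, state = [ic1, ic2]
        ic1, ic2 = state
        a1 = 1.0/(1.0 + g*(g + k))
        v3 = x - ic2
        v1 = a1*(ic1 + g*v3)          # bandpass
        v2 = ic2 + g*v1               # lowpass
        state[0] = 2.0*v1 - ic1       # trapezoidal state updates
        state[1] = 2.0*v2 - ic2
        hp = x - k*v1 - v2            # highpass
        return v2, v1, hp             # LP, BP, HP

    # usage sketch:
    # state = [0.0, 0.0]
    # g = np.tan(np.pi*1000.0/48000.0)
    # lp, bp, hp = svf_tick(0.5, state, g, 1.0/0.707)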


Obviously, the same technique can be applied to any other linear 
multi-output filter, where in principle the transposition doesn't need 
to be applied on all levels of the filter. E.g. for a multimode 
transistor ladder we could handle the underlying 1-pole lowpasses as 
atomic blocks and not transpose them internally (although I'm not sure, 
whether their transposed version is not identical to the original ;-) ).


Things can get less straightforward, once nonlinearities are involved.

Regards,
Vadim

On 06-Jan-15 14:30, STEFFAN DIEDRICHSEN wrote:

Actually, it’s an interesting filter.
BTW, if you transpose Chamberlin’s SVF, you get a similar filter with HP/BP/LP 
inputs and a common output:

Out = HP + f * (BP + Z1)
Z1 +=  BP  - Out * q + f * (Z2 + LP - Out)
Z2 +=  LP - Out

f: frequency  coefficient
q: Q factor coefficient
Out: output
Z1, Z2: state variables
HP, BP, LP: filter inputs

HNY!

Steffan


On 05.01.2015|KW2, at 14:19, Andrew Simper a...@cytomic.com wrote:

Thanks to the ARP engineers for the original circuit and Sam
HOSHUYAMA's great work for outlining the theory and designing a
schematic for an input mixing SVF.

Sam's articles-
Theory: http://houshu.at.webry.info/200602/article_1.html
Schematic: http://houshu.at.webry.info/201202/article_2.html

I have taken Sam's design and written a technical paper on
discretizing this form of the SVF. I also took the chance to update
the SKF (Sallen Key Filter) paper to more explicitly deal with input
mixing of different signals, here they are in as similar form as
possible:

http://cytomic.com/files/dsp/SvfInputMixing.pdf
http://cytomic.com/files/dsp/SkfInputMixing.pdf

As always all the technical papers I've done can be accessed from this page:



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Instant frequency recognition

2014-08-04 Thread Vadim Zavalishin
I think it can be done more simply. Just extend the inverse Fourier 
transform in the same way that the bilateral Laplace transform extends 
the direct Fourier transform. Any mistake in that reasoning?


Regards,
Vadim

On 02-Aug-14 20:10, colonel_h...@yahoo.com wrote:

On Fri, 1 Aug 2014, Vadim Zavalishin wrote:


My quick guess is that bandlimited does imply analytic in the complex
analysis sense.


1st off, I am fairly sure it is true that a BL signal cannot be zero
over an interval, so two non-zero BL signals cannot differ by zero over
an interval, so a function with certain values over any interval is
unique, so the rest of this may be cruft...

However, an audio signal is most often a real valued function of a real
value or a complex valued function of a real value whose imaginary part
happens to be zero (often almost interchangeably to little ill effect.)

So to get an analytic complex function you'd have to extend the
function. A non-zero analytic can't have a zero imaginary part, so we'd
need a ``new'' imaginary part and to extend the real and imaginary parts
to a neighborhood of the real line.

Off the cuff I think you might use the real values of f on the real axis
as boundary conditions for the Cauchy-Riemann equations in a
neighborhood of the real axis to solve for a non-zero imaginary part for
f(z) which would then be analytic. This is /if/ BL is enough to show
such a solution exists, then you're done (which I do not claim is false.
I just can't see a way to get there.)

Ron


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-08-01 Thread Vadim Zavalishin


On 01-Aug-14 05:22, colonel_h...@yahoo.com wrote:

On Fri, 18 Jul 2014, Sampo Syreeni wrote:


Well, theoretically, all you have to know is that the signal is
bandlimited. When that is the case, it's also analytic, which means
that an arbitrarily short piece of it (the analog signal) will be
enough to reconstruct all of it as a simple power series.


I believe it is true that band limited implies C^infinity, but the
function is not complex, so it's a different use of the term analytic
than in complex analysis,


My quick guess is that bandlimited does imply analytic in the complex 
analysis sense. This must be a dual of Laplace transform of a 
bandlimited signal being analytic (entire). Although I could be missing 
something here.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-08-01 Thread Vadim Zavalishin

Sorry, I meant Laplace transform of a timelimited signal.

On 01-Aug-14 10:06, Vadim Zavalishin wrote:


On 01-Aug-14 05:22, colonel_h...@yahoo.com wrote:

On Fri, 18 Jul 2014, Sampo Syreeni wrote:


Well, theoretically, all you have to know is that the signal is
bandlimited. When that is the case, it's also analytic, which means
that an arbitrarily short piece of it (the analog signal) will be
enough to reconstruct all of it as a simple power series.


I believe it is true that band limited implies C^infinity, but the
function is not complex, so it's a different use of the term analytic
than in complex analysis,


My quick guess is that bandlimited does imply analytic in the complex
analysis sense. This must be a dual of Laplace transform of a
bandlimited signal being analytic (entire). Although I could be missing
something here.

Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-07-17 Thread Vadim Zavalishin

On 16-Jul-14 15:29, Olli Niemitalo wrote:

Not sure if this is related, but there appears to be something called
chromatic derivatives:

  http://www.cse.unsw.edu.au/~ignjat/diff/


Seems pretty much related and going further in the same direction 
(alright, I just briefly glanced at chromatic derivatives). Anyway, it 
seems that for the discrete time signals the situation is somewhat 
different from what I described for continuous time in that there are no 
derivative discontinuities for discrete time signals. At the same time 
it's not possible to locally compute the derivatives of the discrete 
time signals, so the local Taylor expansion idea is not applicable 
anyway (the same applies to the chromatic derivatives, I'd guess). 
However, instead, we could simply apply the inter-/extra-polation to the 
obtained sample points. The most intuitive would be applying Lagrange 
interpolation, which, as we know, converges to the sinc interpolation. 
However (again, remember the BLEP discussion), any finite order 
polynomial contains only the generalized DC component. Not very useful 
for the frequency estimation. So, the question is, what kind of 
interpolation should we use? Sinc interpolation would be theoretically 
correct, but remember that this thread is not about strictly 
theoretically correct frequency recognition, but rather about some 
more intuitive version with the concept of "instant frequency". Maybe 
we could attempt exactly fitting a set of samples to a sum of sines of 
different frequencies? Each sine corresponds to 3 degrees of freedom.
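
For what it's worth, for a single clean sine this can even be done in 
closed form from a handful of samples via the linear recurrence 
y[n+1] + y[n-1] = 2*cos(omega)*y[n]; a mixture of several sines would 
need a higher-order linear recurrence (Prony-style fitting). A minimal 
sketch, with arbitrary example numbers:

    import numpy as np

    fs, f0 = 48000.0, 2500.0
    n = np.arange(32)
    y = 0.7*np.sin(2*np.pi*f0*n/fs + 0.3)          # a short clean fragment
    # least-squares estimate of cos(omega) from y[n+1]+y[n-1] = 2*cos(omega)*y[n]
    num = np.dot(y[1:-1], y[2:] + y[:-2])
    den = 2.0*np.dot(y[1:-1], y[1:-1])
    f_est = fs*np.arccos(num/den)/(2*np.pi)        # recovers ~2500 Hz
    print(f_est)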


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Instant frequency recognition

2014-07-16 Thread Vadim Zavalishin

On 16-Jul-14 12:31, Olli Niemitalo wrote:

What does O(B^N) mean?

-olli


This is the so-called big O notation.
f^(N)(t)=O(B^N) means (for a fixed t) that there is K such that
|f^(N)(t)| <= K*B^N
where f^(N) is the Nth derivative. Intuitively, f^(N)(t) doesn't grow 
faster than B^N. (For example, for f(t)=sin(B*t) we have 
f^(N)(t)=B^N*sin(B*t+N*pi/2), so |f^(N)(t)| <= B^N.)


Regards,
Vadim





On Thu, Jul 10, 2014 at 4:02 PM, Vadim Zavalishin
vadim.zavalis...@native-instruments.de wrote:

Hi all,

a recent question to the list regarding the frequency analysis and my recent
posts concerning the BLEP led me to an idea, concerning the theoretical
possibility of instant recognition of the signal spectrum.

The idea is very raw, and possibly not new (if so, I'd appreciate any
pointers). Just publishing it here for the sake of
discussion/brainstorming/etc.

For simplicity I'm considering only continuous time signals. Even here the
idea is far from being ripe. In discrete time further complications will
arise.

According to the Fourier theory we need to know the entire signal from
t=-inf to t=+inf in order to reconstruct its spectrum (even if we talk
Fourier series rather than Fourier transform, by stating the periodicity of
the signal we make it known at any t). OTOH, intuitively thinking, if I'm
having just a windowed sine tone, the intuitive idea of its spectrum would
be just the frequency of the underlying sine rather than the smeared peak
arising from the Fourier transform of the windowed sine. This has been
commonly the source of beginner's misconception in the frequency analysis,
but I hope you can agree, that that misconception has reasonable
foundations.

Now, recall that in the recent BLEP discussion I conjectured the following
alternative definition of bandlimited signals: an entire complex function
is bandlimited (as a function of purely real argument t) if its derivatives
at any chosen point are O(B^N) for some B, where B is the band limit.

Thinking along the same lines, an entire function is fully defined by its
derivatives at any given point and (therefore) so is its spectrum. So, we
could reconstruct the signal just from its derivatives at one chosen point
and apply Fourier transform to the reconstructed signal.

In a more practical setting of a realtime input (the time is still
continuous, though), we could work under an assumption of the signal being
entire *until* proven otherwise. Particularly, if we get a mixture of
several static sinusoidal signals, they all will be properly restored from
an arbitrarily short fragment of the signal.

Now suppose that instead of sinusoidal signals we get a sawtooth. In the
beginning we detect just a linear segment. This is an entire function, but
of a special class: its derivatives do not fall off smoothly as O(B^N), but
stop immediately at the 2nd derivative. From the BLEP discussion we know,
that so far this signal is just a generalized version of the DC offset, thus
containing only a zero frequency partial. As the sawtooth transition comes
we can detect the discontinuity in the signal, therefore dropping the
assumption of an entire signal and use some other (yet undeveloped) approach
for the short-time frequency detection.

Any further thoughts?

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Instant frequency recognition

2014-07-10 Thread Vadim Zavalishin

Hi all,

a recent question to the list regarding the frequency analysis and my 
recent posts concerning the BLEP led me to an idea, concerning the 
theoretical possibility of instant recognition of the signal spectrum.


The idea is very raw, and possibly not new (if so, I'd appreciate any 
pointers). Just publishing it here for the sake of 
discussion/brainstorming/etc.


For simplicity I'm considering only continuous time signals. Even here 
the idea is far from being ripe. In discrete time further complications 
will arise.


According to the Fourier theory we need to know the entire signal from 
t=-inf to t=+inf in order to reconstruct its spectrum (even if we talk 
Fourier series rather than Fourier transform, by stating the periodicity 
of the signal we make it known at any t). OTOH, intuitively thinking, if 
I'm having just a windowed sine tone, the intuitive idea of its spectrum 
would be just the frequency of the underlying sine rather than the 
smeared peak arising from the Fourier transform of the windowed sine. 
This has been commonly the source of beginner's misconception in the 
frequency analysis, but I hope you can agree, that that misconception 
has reasonable foundations.


Now, recall that in the recent BLEP discussion I conjectured the 
following alternative definition of bandlimited signals: an entire 
complex function is bandlimited (as a function of purely real argument 
t) if its derivatives at any chosen point are O(B^N) for some B, where B 
is the band limit.


Thinking along the same lines, an entire function is fully defined by 
its derivatives at any given point and (therefore) so is its spectrum. 
So, we could reconstruct the signal just from its derivatives at one 
chosen point and apply Fourier transform to the reconstructed signal.


In a more practical setting of a realtime input (the time is still 
continuous, though), we could work under an assumption of the signal 
being entire *until* proven otherwise. Particularly, if we get a mixture 
of several static sinusoidal signals, they all will be properly restored 
from an arbitrarily short fragment of the signal.


Now suppose that instead of sinusoidal signals we get a sawtooth. In the 
beginning we detect just a linear segment. This is an entire function, 
but of a special class: its derivatives do not fall off smoothly as 
O(B^N), but stop immediately at the 2nd derivative. From the BLEP 
discussion we know, that so far this signal is just a generalized 
version of the DC offset, thus containing only a zero frequency partial. 
As the sawtooth transition comes we can detect the discontinuity in the 
signal, therefore dropping the assumption of an entire signal and use 
some other (yet undeveloped) approach for the short-time frequency 
detection.


Any further thoughts?

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-03 Thread Vadim Zavalishin

On 03-Jul-14 08:00, Nigel Redmon wrote:

On Jul 2, 2014, at 1:12 AM, Vadim Zavalishin
vadim.zavalis...@native-instruments.de wrote:

As for using the wavetables, BLIT, etc, they might provide superior
performance (wavetables), total absence of inharmonic aliasing
(BLIT) etc., but, AFAIK they tend to fail in extreme situations
like heavy audio-rate FM, ring modulation etc. BLEP, OTOH, should
still perform equally good.



Ring modulation shouldn’t be part of that, because given an
equivalent output of an oscillator by any method (say, sawtooth from
a wavetable oscillator, and a saw from another method—in general, the
same harmonic properties and output sample rate), if the ring
modulation (which is “balanced”, or four-quadrant amplitude
modulation) results in aliasing, it will result in aliasing for any
of the oscillator types, because it only depends on the frequency
content of the oscillator output, not how it got there.


Ring modulation is clearly a part of that. If the input waveforms of a 
ring mod can be specified analytically (which is normally the case in a 
fixed-layout synth) you can consider the ring mod as an oscillator 
itself. That is, you analytically compute the non-bandlimited output of 
the ring mod in the continuous-time domain and then apply BLEPs to the 
discontinuities in the output waveform.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-03 Thread Vadim Zavalishin

On 03-Jul-14 15:29, Theo Verelst wrote:

* The locality of the proper resampling/reconstruction (sinc) function
is very limited: the amplitude of a single sinc function, regarding a
single sample diminishes only with 1/t


Yes, but we can window it (and then use the Fourier theory to analyse 
the artifacts brought up by that windowing). I think you can get very 
decent quality with quite moderate window lengths.



* FM isn't bandwidth limited AT ALL taken at the continuous/analog face
value like in a modular synth.


Talking DX7-style FM, yes. However, talking analog-style FM, while still 
being non-bandlimited, we can wonder if the entire non-bandlimited part 
of the spectrum is contained in the discontinuities. This is one of 
the main motivations for the thread.



* Ring modulation as a standard 4 quadrant multiplication is perfectly
frequency limited with the notion that the input signals are.


But its band limit is the sum of the band limits of the sources, which 
makes it not Nyquist-limited by default. An implementation which wants 
to deal with arbitrary waveforms could just use 2x 
upsampling/downsampling to produce properly bandlimited ring modulation 
output. With specific knowledge about analog-style waveform inputs we 
could instead directly apply the BLEPs.
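
A minimal sketch of the 2x oversampled ring mod (the frequencies are 
chosen so that the sum partial at 33 kHz would alias at a 48 kHz rate, 
but is instead removed by the downsampling filter; scipy's polyphase 
resampler is used purely for illustration):

    import numpy as np
    from scipy.signal import resample_poly

    fs = 48000
    t = np.arange(fs)/fs
    a = np.sin(2*np.pi*15000*t)
    b = np.sin(2*np.pi*18000*t)       # product partials at 3 kHz and 33 kHz
    a2 = resample_poly(a, 2, 1)       # go to 96 kHz
    b2 = resample_poly(b, 2, 1)
    y = resample_poly(a2*b2, 1, 2)    # multiply at 96 kHz, band-limit back to 48 kHz
    y_naive = a*b                     # naive product aliases 33 kHz down to 15 kHz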



* Natural E powers are NOT frequency limited according to the normal
Fourier transform.


The normal Fourier transform cannot make any statement regarding the 
bandlimitedness of exp(a*t), since it simply doesn't exist for that 
signal. However, for a certain practical goal (generating bandlimited 
piecewise-exponential signals) we could try to come up with a different 
definition of bandlimitedness (you can argue the correctness of 
continuing to use the term "bandlimited" here, but that's a slightly 
different topic).


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-02 Thread Vadim Zavalishin

On 01-Jul-14 19:08, Ethan Duni wrote:

This means that in principle any piecewise polynomial signal
with bandlimited discontinuities of the signal and its derivatives
is also band limited.


Sorry if I'm missing something obvious, but what is a bandlimited
discontinuity?


Despite the elaborate answer already given to that question, I'd like to 
give an answer from a different point of view, where we stick to the 
continuous-time domain. By considering bandlimiting in the 
continuous-time domain, prior to sampling (a kind of virtual ADC), we 
have a simpler (IMHO) framework: if the continuous-time signals are 
bandlimited, we can simply sample them to produce a non-aliasing digital 
signal, without *explicitly* considering fractional delay filters etc. 
The result should be pretty much the same, just using more 
intuitive and simpler concepts. YMMV.


We can define a 0th order discontinuity as a value discontinuity in the 
signal, like the sawtooth level jumping from +1 to -1 at the transition. 
We could also define the discontinuity function as being 0 for t<0 and 1 
for t>0 (you can set it to 0.5 at t=0). Then e.g. a sawtooth signal can 
be represented as the infinitely long x(t)=t plus the sum of 
discontinuity functions scaled to the necessary amplitudes and 
positioned at each transition of the sawtooth.


For a triangle we have a discontinuity in the 1st derivative, which we 
can refer to as the 1st order discontinuity. The respective 
discontinuity function can be defined as 0 for t<0 and t for t>=0. 
Correspondingly, the triangle will be a sum of x(t)=t and 1st order 
discontinuity functions. Etc.


Now, the statement which I'm aiming at is something like "all of the 
non-bandlimited part of the spectrum of a signal is contained in its 
discontinuities". Obviously, this is not completely right, so the 
question was to which extent it is right. Now, in the cases where the 
statement holds, in order to bandlimit the signal it is sufficient to 
bandlimit the discontinuities.


As for the construction of bandlimited discontinuities, we can use the 
fact that the integration changes the amplitudes and phases of the 
signal's partials but doesn't generate new ones (if the ill-conditioning 
at DC is properly respected). In order to obtain a bandlimited 0th order 
discontinuity (bandlimited step, or BLEP), we notice that it is an 
integral of the Dirac delta function. So we take the bandlimited version 
of the Dirac delta, which is known to be the sinc function and integrate 
it, obtaining a so-called integral sine, or sine integral function 
Si(t). The DC of the result needs to be corrected, though, but it's 
obvious. To obtain the 1st order discontinuity we integrate Si(t) again 
(for which there is an analytical expression). And so on. The 
discontinuity functions can be pregenerated and stored in tables, 
approximated by polynomials, etc. Of course, being infinitely long, they 
require some windowing.


In the generation of the waveforms it might be more practical, instead 
of adding bandlimited discontinuities to the continuous signal, to add 
the residuals (the differences between the bandlimited and 
non-bandlimited versions of the same discontinuities) to the 
non-bandlimited signal, but that's just a math trick to slightly 
simplify the processing, not an essential part of the entire idea.
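
For concreteness, a minimal numeric sketch of the 0th order case (the 
window type, oversampling factor and table length are arbitrary 
illustration choices):

    import numpy as np

    os, half = 64, 8                          # table oversampling, half-length in samples
    t = np.arange(-half*os, half*os + 1)/os   # time in units of the sampling period
    h = np.sinc(t)*np.blackman(len(t))        # windowed bandlimited impulse
    h /= h.sum()/os                           # unit area, so the step settles at 1
    blep = np.cumsum(h)/os                    # bandlimited step (windowed Si table)
    step = (t >= 0).astype(float)             # naive (non-bandlimited) step
    residual = blep - step                    # what gets added at each discontinuity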


So, everything is very simple, provided the underlying continuous 
signal can be considered bandlimited (which was the original question in 
the thread). One slight complication arises from the bandlimited 
discontinuity functions being non-causal, which adds latency to the 
oscillator. Eli Brandt, who (I believe) introduced the BLEP method, 
suggested using minimum-phase versions of those (which I believe exist 
only in the discrete-time domain), but personally I'm not a big fan of 
those, and would rather have the latency in the oscillator.


As for using the wavetables, BLIT, etc, they might provide superior 
performance (wavetables), total absence of inharmonic aliasing (BLIT) 
etc., but AFAIK they tend to fail in extreme situations like heavy 
audio-rate FM, ring modulation etc. BLEP, OTOH, should still perform 
equally well.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-02 Thread Vadim Zavalishin

Hi Theo

On 01-Jul-14 16:36, Theo Verelst wrote:

The example with the e-power still cannot serve as a perfect example,
no matter if we sweep the proper boundary conditions for going from the
Fourier integral to the decent s-integral with the network response
being thought to start at t=0 under the rug, because, like I clearly
stated, the e-power sequence, as perfect sample-set of a decaying
a*E^(b-c*t) function IS NOT BANDWIDTH LIMITED.


Failing to grasp (as usual) some of the terminology used in your post 
(such as "network response"), I would still like to ask whether you are 
talking about the unilateral Laplace transform here (put differently, 
whether your exp(t) is zero for t<0)? Because, if that is the case, then 
(a) the signal is obviously non-bandlimited and (b) one could conjecture 
that all aliasing is coming from the transition at t=0, where you have 
discontinuities in all derivatives.


In my original post I was referring to the infinitely long exp(t). 
Although, using the "DC plus integration" approach suggested in this 
thread, I'm wondering if a periodic sawtooth-like exp(a*t) contains all 
its aliasing in the discontinuities (that is, if we bandlimit them one 
by one, whether we are getting a bandlimited signal in the limit). Maybe 
if a is small enough, the derivative discontinuities will decay 
sufficiently fast for that?


Oh, and could you also give the link to the other thread you mentioned?

Thanks

Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-02 Thread Vadim Zavalishin

A further interesting observation regarding the bandlimitedness of
exp(a*t), which kind of confirms my previous conjecture. We are
considering a periodic exp, which is a sawtooth, whose segments are
exponential rather than linear.

Consider the amplitudes of unit aliasing residuals (residuals for the
discontinuity functions obtained by successive integration of the Dirac
delta). The bandlimited Dirac delta is sinc(pi t/T), where sinc(t)=sin(t)/t.
Integrating successively with respect to t, we notice that the amplitudes
of the unit residuals fall off as (T/pi)^N/N, where (T/pi)^N is
obtained from a standard property of integration of a horizontally
stretched function and 1/N is obtained from the biggest term of the
formula for the Nth antiderivative of Si(t).

The derivatives of exp(a*t) scale as a^N.

Thus, the amplitudes of the residuals needed for exp(a*t) discontinuity
bandlimiting fall off as (a*T/pi)^N/N. Therefore, if a*T < pi the sum of 
the residuals will converge and exp(a*t) can be considered bandlimited, 
otherwise not.


Notice that the same considerations can be applied to sin(a*t), which
fully coincides with the known condition for sin(a*t) being bandlimited:

sin(2*pi*f*t) is bandlimited iff f < 1/(2T)
a = 2*pi*f
a*T < pi iff 2*pi*f*T < pi iff 2*f*T < 1 iff f < 1/(2T)

Thus, it seems exp(a*t) is bandlimited exactly under the same condition.
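
Taking the stated (a*T/pi)^N/N fall-off at face value, a quick numeric 
sanity check of the partial sums (only an illustration, not part of the 
derivation):

    import numpy as np

    def partial_sums(aT, N=200):
        n = np.arange(1, N + 1)
        return np.cumsum((aT/np.pi)**n / n)

    # converges (to -ln(1 - aT/pi)) for a*T < pi, diverges otherwise
    print(partial_sums(0.5*np.pi)[-1], partial_sums(1.5*np.pi)[-1])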

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

On 30-Jun-14 18:44, Stefan Stenzel wrote:

The tools of BLXX are DC, digital integrators/filters and your BLXX signals that
are bandlimited by definition, no sampling and no nonlinear operation involved.

So as there is no possible source for aliasing, there is no aliasing.


Okay. So in principle, we could construct a BLXX-ed signal by simply 
integrating DC and bandlimited impulses a sufficient number of times, if 
necessary correcting the DC to zero along the way. Convincing enough. (A 
small remark: I belive that, contrarily to the BLIT method, BLXX 
typically integrates in the continuous time domain. I fail to see any 
reason why should we prefer discrete-time domain integration. The 
continuous-time domain integration has the benefit of avoiding any 
spectral distortion in the hi freq area. Also it can be performed 
analytically.) This means that in principle any piecewise polynomial 
signal with bandlimited discontinuities of the signal and its 
derivatives is also bandlimited.


This addresses my original motivation to a large extent: applying BLXX 
not only to waveforms of stable frequencies, but also to their modulated 
versions. Although, not completely. Particularly, self-FM sawtooth 
produces an exponential signal. I wonder, whether exponents (at least 
those which are slow enough) are bandlimited. After all a good part of 
the sine signals (which are kind of versions of exponentials) are 
bandlimited.


So, can we apply the above reasoning to Taylor series expansions 
(constructing Taylor series by repeated integration of signals)? 
Especially, if we consider only exponential segments, rather than 
infinitely long exponentials, then we could apply the above integration 
scheme an infinite number of times to arrive at the result. But (!) so 
we could do for the sine signals. This would mean that *any* sine is 
bandlimited. So, there must be some flaw in that reasoning.


Besides, while Stefan provided an almost convincing justification of the 
BLXX by integration of DC and impulses ("almost" because there is 
this unanswered question of infinite integration in the construction of 
the Taylor series, which is somewhat bothering), it would still 
be interesting to get a consistent look at the same problem from the 
"virtual ADC" point of view, if only for the sake of understanding.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

On 01-Jul-14 10:17, Vadim Zavalishin wrote:

This addresses my original motivation to a large extent: applying BLXX
not only to waveforms of stable frequencies, but also to their modulated
versions. Although, not completely. Particularly, self-FM sawtooth
produces an exponential signal. I wonder, whether exponents (at least
those which are slow enough) are bandlimited. After all a good part of
the sine signals (which are kind of versions of exponentials) are
bandlimited.


As far as the exponents go, I believe I have some kind of answer. As we are 
going to get an infinite number of discontinuities with self-FM saw, the 
question of bandlimitedness of the exponent is somewhat academic. 
However, we could consider low-order polynomial approximations of the 
exponential signals and bandlimit the respective discontinuities.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-07-01 Thread Vadim Zavalishin

An interesting observation. Since

(t+a)^2=t^2+2at+a^2

the lowpass filtering of t^2 with a symmetrically windowed sinc gives

(sincw * (.+a)^2)(t) =
   (sincw * (.^2 + 2a. + a^2))(t) =
   (sincw * .^2)(t) + (sincw * 2a.)(t) + (sincw * a^2)(t) =
   (sincw * .^2)(t) + 2a*t + a^2

where * is the convolution operator and . is the function's argument 
placeholder (the middle term uses the fact that a symmetric kernel with 
unit DC gain passes a linear function unchanged, and the last term that 
it passes a constant unchanged). On the other hand, shifting the input 
simply shifts the output, i.e. (sincw * (.+a)^2)(t) = (sincw * .^2)(t+a). 
Comparing the two expressions gives (sincw * .^2)(t+a) - (t+a)^2 = 
(sincw * .^2)(t) - t^2 for all t and a. This means that the lowpass 
filtering of t^2 may add a constant DC offset but otherwise doesn't 
modify the function. Thus, t^2 is still kind-of bandlimited. The same 
argument probably generalizes to higher order terms.


For a non-symmetrical window you just get another DC term from the 
convolution with the second term of the sum.
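
A quick numeric check of the above (grid spacing, window and lengths are 
arbitrary; the kernel is symmetric and normalized to unit DC gain):

    import numpy as np

    dstep = 1.0/16
    tk = np.arange(-32, 32 + dstep/2, dstep)      # kernel support
    h = np.sinc(tk)*np.blackman(len(tk))
    h /= h.sum()                                  # unit DC gain, symmetric kernel
    t = np.arange(-128, 128 + dstep/2, dstep)
    y = np.convolve(t**2, h, mode='same')
    mid = slice(len(t)//4, 3*len(t)//4)           # stay away from edge effects
    c = np.mean(y[mid] - t[mid]**2)               # the predicted constant offset
    print(np.max(np.abs(y[mid] - t[mid]**2 - c))) # ~0 up to numerical error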


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] On the theoretical foundations of BLEP, BLAMP etc

2014-06-30 Thread Vadim Zavalishin

On 27-Jun-14 13:45, STEFFAN DIEDRICHSEN wrote:

Can we consider x(t)=t^2 bandlimited?


No, that’s a nonlinear operation , unlike the integration. The
difference betwenn both operations is what happens with the step.


Actually not, as pointed out by Stefan. In fact, you don't need the
condition t >= 0. Just notice that t^2/2 and t are in the
differentiation/integration relationship.


Not really. If you differentiate a constant, the result is zero.
Since all differentiation and integration is linear, only the
spectrum is modified, no new content is generated, so band limits
are preserved. At least on paper. ;-)


Actually, even not on paper. The problem is that the transformation of
the signal spectra by the integration is ill-conditioned at omega=0.
Therefore you should be careful when integrating signals whose spectra
are nonzero around DC. Particularly, the spectrum of x=const is already
infinite at DC. This means we have to be careful with any
transformations or conclusions regarding that spectrum, let alone
applying the integration.

It is exactly this issue which leads to the divergence of the ideal
lowpass filtering (sinc-convolution) for x(t)=t^2, and limited
convergence (only in Cauchy principal value sense) for x(t)=t.

Furthermore, if the transformation of spectra by the integration could
have been applied to x(t)=const, x(t)=t, x(t)=t^2 etc without any
further thought, then *any* sinusoidal signal would have been
bandlimited (or at least I believe so). Indeed, sin(a*t) could be
expanded into a Taylor series around t=0. The convergence radius of this
series would be infinite, I believe (since sin(z) is analytic). The
series also converges absolutely (follows from the convergence of
exp(z)). It doesn't converge uniformly on (-inf,+inf), but I believe
this is unnecessary in order to conclude that the infinite sum of
bandlimited signals is also bandlimited.

OTOH, if, rather than integrating x(t)=t and bandlimited steps
separately, we integrate a bandlimited sawtooth signal, then the
resulting parabolic waveform should be bandlimited (here the
ill-conditioning of the integration doesn't apply, since the DC of the
sawtooth is 0).

Thus, the original question of the theoretical justification of BLEP
antialiasing remains open. At least for the waveforms with nonlinear
segments, such as x(t)=t^2.


Not sure. From which domain do you look at the problem?
Time-discrete or time-continuous? In the time-discrete domain, there's
no way to distinguish non-bandlimited from limited. And in the other
domain, you have no band limit.


I was considering bandlimited signals in the continuous time domain. The
bandlimiting in this domain is the first step of a theoretical ADC.

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Frequency bounded DSP

2014-01-03 Thread Vadim Zavalishin
I'm also not sure I understand the question, although from a slightly 
different angle. As long as the sum of the bandwidths of the modulator 
and the carrier is below Nyquist, you're good, right? So, in principle, 
you can do arbitrary amounts of AM/RM by simply keeping the Nyquist 
equal to double of your desired bandwidth and bandlimiting again after 
each AM/RM. So, this way you can implement more or less arbitrary 
envelopes (and, in particular, oscillator sync for arbitrary waveforms), 
if you bandlimit them in advance (unless I'm missing something).


Regards,
Vadim

On 03-Jan-14 16:09, Wen Xue wrote:

Why not just inverse-transform any band-limited spectrum? (or have I got
the question wrong?)

Now, can we do better, can we make, say, some form of other envelope
that is still frequency limited ?

T.V.



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 09:53, Dave Gamble wrote:

PS. Time-varying performance is another word. Nonlinearities is the
third one.


Not criticisms I'm at all familiar with, I'm afraid. Can you expand?


As we are talking about inferiority of DF compared to ZDF, I just 
mentioned the other two, which are even way more prominent than the 
quantization issues, as they can't be addressed by switching to 64 bit 
floats (but they already have been mentioned multiple times in the scope 
of the present discussion). DF has very poor time-varying (modulated 
parameters) performance and is absolutely incapable of properly hosting 
the nonlinearities present in the original analog prototype.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 10:01, Dave Gamble wrote:

Because switching from double to float will bring extremely small
performance gains in CPU cost, and potentially sizeable problems with
numerical issues.


I'd be very careful with statements like that. There are people with 
exactly the opposite experience. YMMV ;-)


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Vadim Zavalishin

On 12-Nov-13 10:10, Dave Gamble wrote:

So let me go out on a limb here: if you take some single precision code and
up it to double, and things get WORSE then there is something very strange
about your original code that merits investigation.


It's very easy. As I mentioned in my other email, switching from float 
to double halves the number of available SIMD channels, which means you 
need to run your code twice as many times. On the other hand, in my 
experience, most of the DSP algorithms are quite tolerant to using 
32-bit floats (DF filters being one exception).


--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 01:09, robert bristow-johnson wrote:

On 11/8/13 6:47 PM, Andrew Simper wrote:

It depends if you value numerical performance, cutoff accuracy, dc
performance etc etc, DF1 scores badly on all these fronts,


nope.


  and this is even in the case where you keep your cutoff and q
unchanged.


Hi Robert,

from your reply I understood that you're referring to the parameter 
quantization, while I think Andy was referring to the state 
quantization. Furthermore, parameter quantization also seems much less 
of an issue with the 0df approach, since cos(...) doesn't occur there 
(although I don't have a rigorous proof).


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

Hi Ross,

since you opened this topic, I thought I'd try to share the intermediate 
results of my findings, as much as I can remember them (that was a few 
years back). Most of them concern the continuous-time case.


The first thing to note regarding the continuous-time case is that cutoff 
modulation does not affect the BIBO stability at all. More rigorously:
- if the cutoff modulation is done by varying the gains *in front* 
(rather than behind) of *all* integrators in the system

- if the cutoff function w(t) is always positive
- if the system is BIBO stable for some cutoff function w(t)
then the system is also BIBO stable for any other positive cutoff function

In particular, if a linear system is BIBO-stable in the time-invariant case 
(i.e. for a constant cutoff function), then it's also stable for varying 
cutoff.


This is very easy to obtain from the state-space equation:
du/dt=w*F(u,x)
where u(t) is the state vector, x(t) is the input vector, w(t) is the 
cutoff scalar function and F(u,x,t) is the nonlinear time-varying 
version of A*u+B*x. Without loss of generality we can assume w(t)=1 
for the given stable case. Then we simply rewrite the equation as

du/(w*dt)=F(u,x)
and substitute the time parameter:
d tau = w*dt
Now in tau time coordinates the modulated system is exactly the same 
as the unmodulated one in the t coordinates.
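
Writing the substitution out explicitly:

\[
\frac{du}{dt} = w(t)\,F(u,x), \qquad
\tau(t) = \int_0^t w(t')\,dt'
\;\Longrightarrow\;
\frac{du}{d\tau} = F(u,x),
\]

so a bounded input produces a bounded state in the tau coordinates, and since 
the map from t to tau is monotonic (w > 0), the state is bounded in the t 
coordinates as well.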


The same doesn't seem to hold for the TPT discrete-time version, though.


In a more general case for *linear* continuous time, IIRC, we have a 
sufficient (but it seems, not necessary) time-varying stability 
criterion: all eigenvalues of the matrix A+A^T must be uniformly 
negative, that is they must be bounded by some negative number from 
above. It is essential to require this uniform negativity, otherwise the 
eigenvalues can get arbitrarily close to the self-oscillation case. This 
condition is simply obtained from the fact that in the absence of the 
input signal you want the absolute value of the state to decay with a 
relative speed, which is uniformly less than 1. This will make sure that, 
whatever the bound of the input signal is, a large enough state will decay 
sufficiently fast to win over the input vector B(t)*x(t). 
Indeed, ignoring the B*x term, we have

(d/dt)|u|^2 = (d/dt)(u^T*u) = u'^T*u + u^T*u' =
(A*u)^T*u + u^T*(A*u) = u^T*A^T*u + u^T*A*u =
u^T*(A+A^T)*u <= |u|^2*max{lambda_i}
where lambda_i are the eigenvalues of A+A^T.
Now on the other hand
(d/dt)|u|^2 = 2*|u|*(d/dt)|u|
So
2*|u|*(d/dt)|u| <= |u|^2*max{lambda_i}
and
2*(d/dt)|u| <= |u|*max{lambda_i}

Obviously, you don't have to satisfy the condition in the original 
state-space coordinates. Instead, you can satisfy it in any other 
coordinates, which corresponds to using P^-1*A*P instead of A for some 
nonsingular matrix P.
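
For concreteness, assuming the usual continuous-time SVF state-space form 
bp' = w*(x - k*bp - lp), lp' = w*bp (my choice of convention, not necessarily 
the exact one used in the original analysis), the criterion indeed fails in 
the natural coordinates:

\[
A = w\begin{pmatrix} -k & -1 \\ 1 & 0 \end{pmatrix}, \qquad
A + A^T = w\begin{pmatrix} -2k & 0 \\ 0 & 0 \end{pmatrix},
\]

whose eigenvalues are -2kw and 0, i.e. not uniformly negative, so a change of 
coordinates P is required before the criterion can even be attempted.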


Now I didn't manage to get this condition satisfied for the 
continuous-time SVF. Reading your post, I admit that I could have made 
a mistake there, but FWIW... First, I discarded the consideration of 
varying cutoff, as explained above, and concentrated on the varying 
damping. Not managing to find a matrix P, I constructed an input signal 
requiring the maximum possible growth of the state vector. The signal, 
IIRC, was either sgn(s_1) or -sgn(s_1), where s_1 is the first of the 
state components (or it could have been s_2). Then I noticed that for 
low damping the state vector is moving in almost a circle, while for 
higher damping (but still with complex poles) it turns into an ellipse. 
This was exactly the problem: in principle the circle has a bigger size 
than the ellipse, but by switching the damping from low to high you 
could shoot the state point into a much higher orbit. Much worse, in 
certain cases the system state can increase even in the full absence of 
the input signal!!! However, IIRC, I managed to show that for a 
sufficiently large elliptic orbit (with high damping), 
(d/dt)|u|^2 <= 0 regardless of the current damping. Since we are already 
considering the worst possible input signal, the system state can't 
cross this boundary orbit to the outside.
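
A rough exploratory sketch (C++) of the kind of simulation described above; 
the Euler step size, the switching point and the damping values are arbitrary 
choices of mine, and x = sgn(bp) is only one guess at the worst-case input:

#include <cmath>
#include <cstdio>

int main()
{
    double bp = 1.0, lp = 0.0;          // state vector u = (bp, lp)
    const double w  = 1.0;              // cutoff (rad/s), held constant
    const double dt = 1e-4;             // naive Euler step, small enough here
    double k = 0.05;                    // start with low damping (high resonance)

    for (long n = 0; n <= 4000000; ++n)
    {
        if (n == 2000000) k = 1.8;      // suddenly switch to high damping

        const double x = (bp >= 0.0) ? 1.0 : -1.0;   // bounded "worst case" input
        const double dbp = w * (x - k * bp - lp);
        const double dlp = w * bp;
        bp += dt * dbp;
        lp += dt * dlp;

        if (n % 400000 == 0)
            std::printf("t = %7.1f   |u| = %g\n",
                        n * dt, std::sqrt(bp * bp + lp * lp));
    }
    return 0;
}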


For the discrete-time case the situation is more complicated, because we 
can't use the continuity of the state vector function. IIRC, I also 
didn't manage to build the worst-case signal, but there was the same 
problem of the state vector becoming larger in the absence of the input 
signal. That's why I was somewhat surprised that you simply managed to 
restrict the eigenvalues of the system matrix in some coordinates. 
Particularly suspicious is that your coordinate transformation matrix is 
built for the smallest damping, while the more problematic case seems 
to occur at the larger damping. But, as I said, I didn't finish that 
research and I could have been wrong. So just take my input FWIW.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] R: Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

Hi Marco

On 11-Nov-13 11:26, Marco Lo Monaco wrote:

I basically demonstrate what I already said in my previous posts.
The standard state-space approach leads to identical results to your
algorithm, I would say even without the trick of the TPT, because of course
we are talking about an instantaneous _linear_ feedback.


Of course they are all equivalent, except for some small details, such as 
the usage of canonical integrators (although maybe even that has been known 
for a long time for trapezoidal integration).



Of course the main purpose of my analysis was to keep in mind that you will
_always_ have to deal with an implicit/hidden inversion of a matrix A of
the analog system (actually (I-A*h/2))


With the filters used in practice, the matrices typically turn out to be 
either simple or quite regular in structure. This is often taken for 
granted in the 0df (TPT) approach, as you are just solving one linear 
feedback equation, instead of trying to invert a 5x5 matrix etc. Of 
course, it's mathematically equivalent, but it is much simpler. Also, the 
division (the most expensive operation) which you have to perform while 
solving that equation is more or less the same division by a0 which you 
need to perform in the computation of the BLT-based discrete-time 
coefficients. So, computationally I think the 0df approach is comparable 
(even if slightly more expensive at times) to the transfer-function based 
DF filters, especially at audio-rate modulations.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 13:04, Theo Verelst wrote:

Alright, simply put: the paradigm used to work with digital filters is
at stake


Funnily enough, it's a mathematically trivial fact that in the LTI case 
the 0df filters are mathematically equivalent to the DF BLT filters. So, 
the only non-scientific part there concerns estimating the errors of 
time-varying and nonlinear effects. But then, I'm not really aware 
(please correct me if I'm wrong) of any psychoacoustically usable 
measurable quantities developed so far which would help with those 
estimations. OTOH, the psychoacoustically perceivable error of the 
linear model is clearly estimable using the frequency response 
paradigm, but, as I just said, in this respect 0df is fully equivalent to 
the DF BLT. So, it seems to me that it's not the paradigm which is 
being put at stake, but rather only the methodology of digital filter 
design.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 15:32, Ross Bencina wrote:

That's why I was somewhat surprised that you simply managed to
restrict the eigenvalues of the system matrix in some coordinates.


To be clear: the eigenvalues of the transition matrix only cover
time-invariant stability.

The constraint for time-varying BIBO stability is that all transition
matrices P satisfy ||T P T^-1|| < 1 where ||.|| is the spectral norm and T
is some constant non-singular change of base matrix.


Okay, for the time-varying case it seems to be the eigenvalues of 
(P^T)*P in the discrete-time case, which according to Wikipedia define 
the spectral norm (while in the continuous time we have the eigenvalues 
of P^T+P). I'm currently a little short of time to take a precise look 
at all the details. Still, in both cases we are talking about some 
uniform property of eigenvalues of a symmetric matrix. In the discrete 
time they need to be smaller than one to kind of make sure the next 
state vector is smaller than the previous one (I think). In the 
continuous time case they need to be negative to kind of make sure that 
the time derivative of the state vector's length is negative.
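
For reference, the spectral norm itself is the square root of that largest 
eigenvalue, and the discrete-time decay argument is simply:

\[
\|P\|_2 = \sqrt{\lambda_{\max}(P^T P)}, \qquad
\|u_{n+1}\| = \|P_n u_n\| \le \|P_n\|_2\,\|u_n\|,
\]

so requiring ||P_n||_2 <= 1 - epsilon uniformly (possibly after a fixed 
change of basis) makes the state contract against any bounded input.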




The main reason I am suspicious is that Laroche does not even try to
cook up a change of basis matrix, or to show when it might be achieved.
It's kind of an orphan result in that paper that goes unused for showing
BIBO stability.


IIRC from briefly reading his paper, his sufficient criterion turned out 
to be not applicable for the DF filters (which by themselves also didn't 
seem to be time-variant BIBO-stable, IIRC), therefore he resorted to 
some other approaches. That (and my own SVF investigation) led me to 
consider this kind of criterion as a more or less useless one at that 
time. But I may be wrong, it was a while ago and only a brief look.





Particularly suspicious is that your coordinate transformation matrix is
built for the smallest damping, while the more problematic case seems
to occur at the larger damping.


I'm not sure I follow you here. Smallest damping means most resonance,
where the system decays most slowly. Don't you think this would be where
the greatest problems would arise?


Because of the shooting effect I described earlier. I discovered it by 
a numerical simulation of the system. For the low resonance the state 
vector moves in an ellipse (for the worst-case signal I described). 
The orientation and the amount of stretching of the ellipse depend on 
the resonance (the lower the resonance, the more the stretch). You can 
suddenly switch to a lower resonance while your state vector is pointing 
at such an angle that the respective position on the new ellipse is within 
the increasing-radius area. This will cause the state vector to grow.


Disclaimer: all of this comes with the caveat that I didn't double-check my 
results and may even have forgotten some important details or simply 
remember them wrong. I was posting them mostly because I thought they 
might be interesting and/or usable to some extent for you (and maybe 
others).




In short, using change of basis matrix T:
[1 f]
[0 1]

We have the time-varying BIBO stability constraint:

0 < f < 2, g > 0, f < k <= 2

f provides the bound on k from below.


This is also the kind of result which I would intuitively expect at 
first thought. It just contradicts the *unverified* results of 
my earlier research, which is why I expressed my suspicion. 
Hopefully I'll find time to check your research in more detail.


Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Vadim Zavalishin

On 11-Nov-13 17:33, Dave Gamble wrote:

At some point, the process of using algebraic rearrangements [...]
got dubbed the delay-free or zero delay filters movement.


Hi Dave

I think this is exactly the source of the confusion. As the distinctive
feature of those filters was the zero-delay feedback loops, the filters
were called "delay-free feedback filters" or "zero-delay feedback
filters" (which was further shortened to "zdf filters" or "0df
filters"). Then someone thought that "0df" stands for "zero-delay
filter" rather than "zero-delay feedback" and there you are :D

Regards,
Vadim

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Vadim Zavalishin

On 08-Nov-13 12:13, Urs Heckmann wrote:

No offense meant, I wasn't aware that your book was considered a
standard dsp lecture. If you know of any university that uses it in
their curriculum, please let me know and I'll recommend that
university.


Damn, you got me there ;-)

--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] IIR Coefficient Switching Issues

2013-11-04 Thread Vadim Zavalishin

Hi Chris

Direct forms are not good for coefficient modulation, plus IIRC they 
tend to have precision issues at low cutoffs. I guess, the TPT (ZDF) 
approach can solve your problem completely:

http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopology.pdf
(For the 2nd order filters use the modes of the SVF from the same paper)
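
To give a concrete flavour of what such an SVF tick looks like per sample, 
here is a minimal C++ sketch written from the usual trapezoidal/TPT 
derivation (variable names and the exact arrangement are mine, not a 
verbatim excerpt from the papers above):

// g = tan(pi * fc / fs) (prewarped cutoff gain), k = 1/Q (damping).
struct SvfTPT
{
    float s1 = 0.0f, s2 = 0.0f;          // the two integrator states

    // Returns the lowpass output; hp and bp are computed on the way and can
    // be exposed as additional output modes.
    float process(float x, float g, float k)
    {
        const float hp = (x - (k + g) * s1 - s2) / (1.0f + k * g + g * g);
        const float bp = g * hp + s1;
        const float lp = g * bp + s2;
        s1 = 2.0f * bp - s1;             // trapezoidal state updates
        s2 = 2.0f * lp - s2;
        return lp;
    }
};

The division by (1 + k*g + g*g) is the explicit solution of the zero-delay 
feedback loop; it plays much the same role as the division by a0 when 
computing BLT-based direct-form coefficients, while the states stay well 
behaved when g and k are modulated per sample.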

If you can read Reaktor Core, then here's some additional info:
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopologyRC.pdf

Regards,
Vadim

On 03-Nov-13 04:27, Chris Townsend wrote:

I'm working on an algorithm with some user controlled presets that
adjust various IIR filters under the hood.  This generally works fine,
but I get pops and glitches when switching between certain settings.
The filters that are causing trouble are typically second order hipass
filters with a sub 100Hz cutoff, but with some settings the filters
reconfigure to peaking, shelf and first order hipass types.  Generally
the problem is most noticeable when changing between types.

This appears to be a simple matter of the internal states of the
filter being un-normalized and so large gain changes of the state 
variables can occur when coefficients are adjusted.  I read through
some old Music-DSP posts on this topic, but I didn't find a clear
solution that fit my needs.

I'm using 1 pole coefficient smoothing, which helps reduce the
glitches but definitely doesn't get rid of them.  Currently I'm using
DF2 transpose filter topology.  I also tried lattice and a couple
others topologies, but overall that didn't improve things and in some
cases was worse.

If I only needed a second order hipass then I would think a Chamberlin
State Variable Filter would be my best bet, since I found it to be
very adept at handling coefficient changes.  But I'm not sure it will
work for me, since it's not a fully general filter topology.  I've 
looked at using the Kingsbury topology which is very similar in form
to Chamberlin, but has poles that are generalized.  Apparently
Kingsbury's filter is all-pole (no zeros), so would need to tack on
some zeros to make it fully general, but then I'm not sure it would
maintain the nice properties of the Chamberlin filter.

I've also looked at using a ladder filter, which seems like it would
totally solve my problem, since all of the internal states are
normalized.  The only downside is that it's about double the
computational cost of most other filter topologies, but that's not a
huge deal in this case.

There's also the possibility of renormalizing the filter states every
time the coefficients are updated, but that seems complicated and
costly in terms of CPU, since the smoothing updates the coefficients
at a fairly high rate.

I'm also seeing some coefficient quantization issues at high sample
rates, when using DF2T, because I'm dealing with low cutoff
frequencies and using single precision floats.  It looks like
Chamberlin, Kingsbury or Ladder would also perform much better in that
respect.

Any ideas?  Recommendations?

Thanks,
Chris
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] IIR Coefficient Switching Issues

2013-11-04 Thread Vadim Zavalishin

Oh, completely forgot. Here's a step-by-step description of the TPT method:

http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3.pdf 
 (A4 format)
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3_A5.pdf 
 (A5 format)


On 04-Nov-13 11:07, Vadim Zavalishin wrote:

Hi Chris

Direct forms are not good for coefficient modulation, plus IIRC they
tend to have precision issues at low cutoffs. I guess, the TPT (ZDF)
approach can solve your problem completely:
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopology.pdf

(For the 2nd order filters use the modes of the SVF from the same paper)

If you can read Reaktor Core, then here's some additional info:
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/KeepTopologyRC.pdf


Regards,
Vadim



--
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] ANN: Book: The Art of VA Filter Design

2012-05-25 Thread Vadim Zavalishin

Hi all

This is kind of a cross-announcement from KVRAudio, but since there are 
probably a number of different people on this list, I thought I'd 
announce it here as well. Get it here:


http://ay-kedi.narod2.ru/VAFilterDesign.pdf
http://images-l3.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign.pdf
http://www.discodsp.net/VAFilterDesign.pdf (thanks to george for 
mirroring)


There is a discussion thread at
http://www.kvraudio.com/forum/viewtopic.php?t=350246

Regards,
Vadim

--
Vadim Zavalishin
Software Integration Architect | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747

Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Noise performance of f32 iir filters

2011-09-27 Thread Vadim Zavalishin

Hi Andrew


I finally got around to following up on my hunch that a slightly
modified version of the trapezoidal integrated svf (ie a modified
version of the one I previously posted) should have excellent
numerical properties.


Care to reveal what is the modification? Did you measure the precision of 
the unmodified version?


Regards,
Vadim

- Original Message - 
From: Andrew Simper a...@cytomic.com

To: A discussion list for music-related DSP music-dsp@music.columbia.edu
Sent: Tuesday, September 27, 2011 10:09
Subject: [music-dsp] Noise performance of f32 iir filters



I finally got around to following up on my hunch that a slightly
modified version of the trapezoidal integrated svf (ie a modified
version of the one I previously posted) should have excellent
numerical properties. My initial tests confirm this in spectacular
fashion. I used all sorts of tests, but the one to show up most
problems was a bell filter with q=2, gain=12 dB, and look at the
cutoffs 20, 200, 2k, 20k. I compare the modified state variable
filter, normalised ladder, normalised direct wave form, direct form 1,
and direct form 2 transposed. The only filter to match the low
quantization error of the modified svf is the normalized ladder
filter, but none of the filters can match the coefficient rounding
error, as is shown in the time domain error of the 20 Hz example:

http://www.cytomic.com/files/dsp/SVF-vs-DF1.pdf

The modified SVF works fine down to very low frequencies with all
single precision computation, which makes it ideal for use even at 192
kHz sample rates. I'll get around to writing it up some time, but I've
got a few plugins and other work to get on with for the moment.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 29-30
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747

Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and other integration methodsappliedtomusical resonant filters

2011-05-23 Thread Vadim Zavalishin
okay, so that's an analog resonant LPF filter.  there is a method, called 
"impulse invariant" (an alternative to BLT), to transform this analog 
filter to a digital filter.  as its name suggests, we have a digital 
impulse response that looks the same (again, leaving out the unit step 
gating function):


Just for the record. The digital impulse response indeed looks exactly the 
same, but (!) it is not bandlimited, which results in a distorted frequency 
response of the resulting filter. Bandlimiting the response is not a 
solution either, because the bandlimited response is then no longer a 
response of a system consisting of a finite number of unit delays.
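
To spell out why the sampled response gets distorted: up to the conventional 
1/T scaling, the impulse-invariant frequency response is the aliased sum

\[
H_d\!\left(e^{j\omega T}\right)
= \frac{1}{T}\sum_{k=-\infty}^{\infty}
  H_a\!\left(j\omega + j\,\frac{2\pi k}{T}\right),
\]

so unless H_a is bandlimited below Nyquist, the shifted images overlap and 
the digital response differs from the analog one.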



Vadim, is this the article you meant?



Fontana, Preserving the structure of the Moog VCF in the digital domain

Proc. Int. Computer Music Conf., Copenhagen, Denmark, 27-31 Aug. 2007
http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.2007.062



Yes, thanks Martin. BTW, if I'm correct, this paper (I only briefly looked 
through it) doesn't address fixing the problem in the 1-pole components. 
It's correct that the bigger problem is in the outer feedback loop, but 
performing the same fix in the 1-poles improves the behavior further, IIRC.


-- it does beg the question: why didn't we think of this earlier? and why 
did Chamberlin do it the way he did?


A number of people *did* think of this earlier, also in application to 
analog simulation. E.g. the simulanalog.org article I mentioned earlier 
uses trapezoidal integration, and I would guess that there is even some 
earlier work. The right question is: why does almost nobody care about this 
issue?


Yet another issue is the subjectivity of the judgement. Not all DSP 
engineers, and not even all musicians, have the same high requirements for 
the details of the filter response. Many people would be fully happy with 
the Chamberlin SVF as it is. Also, BLT filters require a division (once the 
parameters change), which was quite expensive, especially in those days, I 
think.


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and other integrationmethodsappliedtomusical resonant filters

2011-05-23 Thread Vadim Zavalishin
Just one other issue I wanted to mention with respect to using trapezoidal 
integration / bilinear TPT vs. trying to manually fix simpler Euler-like 
models. Besides giving a nice frequency response of the digital model, the 
BLT also results in an equally nice phase response, which also affects the 
sound to an extent. When comparing the models, one also shouldn't forget the 
time-variant (modulated parameters) behavior of the structures (which I 
guess is the primary reason to use the SVF instead of DF), but this is much 
more difficult to analyse theoretically.


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal andotherintegrationmethodsappliedtomusical resonant filters

2011-05-23 Thread Vadim Zavalishin
With regard to stability under time varying modulation, Jean Laroche has 
published some criteria:


Very interesting!

As to analysing the effect of parameter modulation on the output I guess 
this is a job for state-space models?


Yes, I guess this is easiest to analyse in the state-space form.

Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and otherintegrationmethodsappliedtomusical resonant filters

2011-05-18 Thread Vadim Zavalishin

Vadim, did you possibly get the lp and hp gains swapped in your equations?
With my working, it seems the s^2 numerator term should be for the low
pass output, and the cutoff^2 gain for the high pass output.


H(s) = (gainLP*cutoff^2 + gainBP*cutoff*s +
gainHP*s^2)/(cutoff^2+k*cutoff*s+s^2)


No. s^2 is the HP numerator while cutoff^2 is the LP numerator. Consider the 
limit of H(s) as s -> inf or s -> 0.
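
Spelling the limits out for the H(s) quoted above:

\[
\lim_{s\to\infty} H(s) = \mathrm{gainHP}, \qquad H(0) = \mathrm{gainLP},
\]

since the s^2 terms dominate as s goes to infinity, while only the constant 
(cutoff^2) terms survive at s = 0.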


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and otherintegrationmethodsappliedtomusical resonant filters

2011-05-18 Thread Vadim Zavalishin

low = v2/v0 = (g^2)/(g^2 + g*k*s + s^2)
band = v1/v0 = (g*s)/(g^2 + g*k*s + s^2)
high = (v0 - k*v1 - v2)/v0 = (s^2)/(g^2 + g*k*s + s^2)
notch = (v0 - k*v1)/v0 = 1 - (g*k*s)/(g^2 + g*k*s + s^2)
peak = (v0 - k*v1 - 2*v2)/v0 = (-g^2 + s^2)/(g^2 + g*k*s + s^2)


allpass = 1 - 2*k*bandpass = (g^2 - g*k*s + s^2)/(g^2 + g*k*s + s^2)

:-)
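
For the record, with the band = (g*s)/(g^2 + g*k*s + s^2) output quoted 
above, the identity works out as

\[
1 - 2k\,\mathrm{BP}(s)
= \frac{g^2 + gks + s^2 - 2gks}{g^2 + gks + s^2}
= \frac{g^2 - gks + s^2}{g^2 + gks + s^2},
\]

whose numerator and denominator have equal magnitude on the s = j*omega 
axis, as an allpass requires.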

I'm not sure about your peak filter. How did you build it? I would think

peak = 1 + boost*k*bandpass

(boost = -1)

Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal and otherintegrationmethodsappliedtomusical resonant filters

2011-05-18 Thread Vadim Zavalishin

I was recently made aware by a friend of mine Urs Heckmann that the
KHN / SVF as we know it is just a special case of a leapfrog filter,


The 2-pole SVF as a block diagram is actually a particular case of the 
controllable canonical form of a continuous-time LTI system, which is 
essentially the continuous-time counterpart of the familiar DF2 (canonical) 
filter structure. Just try replacing the unit delays with integrators in the 
DF2. As such, in principle, it can be generalized to an arbitrary-order 
multimode filter, capable of realizing any s-domain transfer function.
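
In other words, with n integrators in such a DF2-like chain (sketching only 
the general form, not any specific filter), the structure realizes

\[
H(s) = \frac{b_n s^n + b_{n-1} s^{n-1} + \dots + b_1 s + b_0}
            {s^n + a_{n-1} s^{n-1} + \dots + a_1 s + a_0},
\]

where the a_i are set by the feedback taps and the b_i by the output 
(mode-mixing) taps, the 2-pole SVF modes being the n = 2 special case.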


Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | RD

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp