Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator

2019-02-23 Thread Andrew Simper
When evaluating polynomials, although Horner's method is shorter to code
and has the fewest actual operations, on modern architectures with deep
pipelines I would recommend giving Estrin's scheme a go and letting the
profiler / accurate cpu meter logging tell you which one is best:

https://en.wikipedia.org/wiki/Horner%27s_method
vs
https://en.wikipedia.org/wiki/Estrin%27s_scheme
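
A quick sketch of the difference (untested, using the 7th-order
coefficients from Olli's post below; both compute the same polynomial):

float sine7_horner(float x)
{
   float x2 = x*x;
   // one long dependency chain: each multiply waits on the last
   return x*(1.570781972f - x2*(0.6458482979f
         - x2*(0.07935067784f - x2*0.004284352588f)));
}

float sine7_estrin(float x)
{
   float x2 = x*x;
   float x4 = x2*x2;
   // lo and hi are independent, so a pipelined cpu can overlap them
   float lo = 1.570781972f - 0.6458482979f*x2;
   float hi = 0.07935067784f - 0.004284352588f*x2;
   return x*(lo + x4*hi);
}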

Andy





On Sat, 23 Feb 2019 at 07:31, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
> this is just a guess at what the C code might look like to calculate one
> sample.  this uses Olli's 7th-order with continuous derivative.  f0 is the
> oscillator frequency which can vary but it must be non-negative.  1/T is
> the sample rate.
>
>
>
> float sine_osc(float f0, float T)
> {
>    static float phase = 1.0;   // this is the only state.  -2.0 <= phase < +2.0
>    float triangle, x2;
>
>    phase += 4.0*f0*T;
>    if (phase >= 2.0)
>    {
>       phase -= 4.0;
>    }
>
>    if (phase < 0.0)
>    {
>       triangle = -phase - 1.0;
>    }
>    else
>    {
>       triangle = phase - 1.0;
>    }
>
>    x2 = triangle*triangle;
>    return triangle*(1.570781972 - x2*(0.6458482979
>                     - x2*(0.07935067784 - x2*0.004284352588)));
> }
>
> i haven't run this code nor checked it for syntax.  but it's conceptually
> so simple that i'll bet it works.
>
> r b-j
>
>  Original Message 
> Subject: Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator
> From: "Olli Niemitalo" 
> Date: Thu, February 21, 2019 11:58 pm
> To: "A discussion list for music-related DSP" <
> music-dsp@music.columbia.edu>
> --
>
> > On Fri, Feb 22, 2019 at 9:08 AM robert bristow-johnson <
> > r...@audioimagination.com> wrote:
> >
> >> i just got in touch with Olli, and this "triangle wave to sine wave"
> >> shaper polynomial is discussed at this Stack Exchange:
> >>
> >>
> >>
> >>
> https://dsp.stackexchange.com/questions/46629/finding-polynomial-approximations-of-a-sine-wave/46761#46761
> >>
> > I'll just summarize the results here. The polynomials f(x) approximate
> > sin(pi/2*x) for x=-1..1 and are solutions with minimum peak harmonic
> > distortion compared to the fundamental frequency. Both solutions with
> > continuous and discontinuous derivative are given. In summary:
> >
> > Shared polynomial approximation properties and constraints:
> > x = -1..1, f(-1) = -1, f(0) = 0, f(1) = 1, and f(-x) = -f(x).
> >
> > If continuous derivative:
> > f'(-1) = 0 and f'(1) = 0 for the anti-periodic extension f(x + 2) =
> -f(x).
> >
> > 5th order, continuous derivative, -78.99 dB peak harmonic distortion:
> > f(x) = 1.569778813*x - 0.6395576276*x^3 + 0.06977881382*x^5
> >
> > 5th order, discontinuous derivative, -91.52 dB peak harmonic distortion:
> > f(x) = 1.570034357*x - 0.6425216143*x^3 + 0.07248725712*x^5
> >
> > 7th order, continuous derivative, -123.8368 dB peak harmonic distortion:
> > f(x) = 1.570781972*x - 0.6458482979*x^3 + 0.07935067784*x^5
> > - 0.004284352588*x^7
> >
> > 7th order, discontinuous derivative, -133.627 dB peak harmonic
> distortion:
> > f(x) = 1.5707953785726114835*x -
> > 0.64590724797262922190*x^3 + 0.079473610232926783079*x^5
> > - 0.0043617408329090447344*x^7
> >
> > Also the exact coefficients that are rational functions of pi are given
> in
> > the answer, in case anyone needs more precision.
> >
> > -olli
> > ___
> > dupswapdrop: music-dsp mailing list
> > music-dsp@music.columbia.edu
> > https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
>
>
>
>
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator

2019-02-21 Thread Andrew Simper
Thanks Robert, your addition to my code is much appreciated :)

With double precision I've noticed that even when undergoing drastic
frequency modulation it all still works pretty accurately, so any
correction term could probably be applied with a very long time constant,
using linear interpolation to reduce the cpu load, something along the
lines of this (not compiled or tested, so it could contain errors):

pre loop:
new_gain = 1 + adaptationspeed*(1 - c*c - s*s)
gain_inc = (new_gain - gain)/blocklength

loop blocklength:
t0 = g0*c - g1*s
t1 = g1*c + g0*s
c = gain*t0
s = gain*t1
gain += gain_inc




On Thu, 21 Feb 2019 at 16:06, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message 
> Subject: Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator
> From: "Andrew Simper" 
> Date: Wed, February 20, 2019 9:20 pm
> To: "Robert Bristow-Johnson" 
> "A discussion list for music-related DSP" 
> --
>
> > This looks pretty good to me, and I like the amplitude adjustment g[n]
> term
> > :)
>
> well, then let's integrate that into your code.
>
>
> >
> > Depending on the situation you may want to modulate the frequency of the
> > oscillator pretty fast, so it can help to use a tan approximation
> function
> > and then a division and a few other operations to get your cos (w) and
> sin
> > (w) rotation terms from that single approximation. I've called the
> rotation
> > terms g0 and g1, and c and s are the output cos and sin quadrature
> > oscillator values
> >
> > init:
> > c = cos(2*pi*startphase)
> > s = sin(2*pi*startphase)
>gain = 1
>adaptationspeed = 0.5 // this could be made smaller, but must be
> positive
> >
> > set frequency:
> > g0 = cos(2*pi*frequency/samplerate)
> > g1 = sin(2*pi*frequency/samplerate)
> >
> > or
> >
> > g = tan(pi*frequency/samplerate);
> > gg = 2/(1 + g*g)
> > g0 = gg-1
> > g1 = g*gg
> >
> > tick:
> > t0 = g0*c - g1*s
> > t1 = g1*c + g0*s
>   c = gain*t0
>   s = gain*t1
>   gain = 1 + adaptationspeed*(1 - c*c - s*s)
>
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator

2019-02-20 Thread Andrew Simper
This looks pretty good to me, and I like the amplitude adjustment g[n] term
:)

Depending on the situation you may want to modulate the frequency of the
oscillator pretty fast, so it can help to use a tan approximation function
and then a division and a few other operations to get your cos (w) and sin
(w) rotation terms from that single approximation. I've called the rotation
terms g0 and g1, and c and s are the output cos and sin quadrature
oscillator values

init:
c = cos(2*pi*startphase)
s = sin(2*pi*startphase)

set frequency:
g0 = cos(2*pi*frequency/samplerate)
g1 = sin(2*pi*frequency/samplerate)

or, using the tangent half-angle identities (with w = 2*pi*frequency/samplerate
and g = tan(w/2), cos(w) = (1-g*g)/(1+g*g) and sin(w) = 2*g/(1+g*g)):

g = tan(pi*frequency/samplerate);
gg = 2/(1 + g*g)
g0 = gg-1
g1 = g*gg

tick:
t0 = g0*c - g1*s
t1 = g1*c + g0*s
c = t0
s = t1


On Thu, 21 Feb 2019 at 06:28, robert bristow-johnson <
r...@audioimagination.com> wrote:

> i did that wrong.  i meant to say:
>
>x[n] = a[n] + j*b[n] = g[n-1]*exp(j*w[n]) * x[n-1]
>
> this is the same as
>
>   a[n] = g[n-1]*cos(w[n])*a[n-1] - g[n-1]*sin(w[n])*b[n-1]
>
>   b[n] = g[n-1]*sin(w[n])*a[n-1] + g[n-1]*cos(w[n])*b[n-1]
>
> and adjust the gain g[n] so that |x[n]|^2 = a[n]^2 + b[n]^2 converges to
> the desired amplitude squared (let's say that we want that to be 1).  you
> can adjust the gain slowly with:
>
>   g[n] = 3/2 - (a[n]^2 + b[n]^2)/2
>
> that will keep the amplitude sqrt(a[n]^2 + b[n]^2) stably at 1 even as
> w[n] varies.
>
> i've done this before (when i wanted a quadrature sinusoid) for doing
> frequency offset shifting (not pitch shifting).  worked pretty good.
>
> --
>
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
> > A very simple oscillator recipe is:
> >
> > a(t+1) = C*a(t) - S*b(t)
> > b(t+1) = S*a(t) + C*b(t)
> >
> > Where C=cos(w), S=sin(w), w being the angular frequency. a and b are your
> > two state variables that are updated every sample clock, either of which
> > you can use as your output.
> >
> > There won't be any phase or amplitude discontinuity when you change C and
> > S. However, it's not stable as is, so you periodically have to make an
> > adjustment to make sure that a^2 + b^2 = 1.
> >
> > -Ethan
> >
> >
> > On Wed, Feb 20, 2019 at 12:26 PM Ian Esten  wrote:
> >
> >> The problem you are experiencing is caused by the fact that after
> changing
> >> the filter coefficients, the state of the filter produces something
> >> different to the current output. There are several ways to solve the
> >> problem:
> >> - The time varying bilinear transform:
> >> http://www.aes.org/e-lib/browse.cfm?elib=18490
> >> - Every time you modify the filter coefficients, modify the state of the
> >> filter so that it will produce the output you are expecting. Easy to do.
> >>
> >> I will also add that some filter structures are less prone to problems
> >> like this. I used a lattice filter structure to allow audio rate
> modulation
> >> of a biquad without any amplitude problems. I have no idea how it would
> >> work for using the filter as an oscillator.
> >>
> >> Best,
> >> Ian
> >>
> >> On Wed, Feb 20, 2019 at 9:07 AM Dario Sanfilippo <
> >> sanfilippo.da...@gmail.com> wrote:
> >>
> >>> Hello, list.
> >>>
> >>> I'm currently working with digital resonators for sinusoidal
> oscillators
> >>> and I was wondering if you have worked with some design which allows
> for
> >>> frequency variations without discontinuities in phase or amplitude.
> >>>
> >>> Thank you so much for your help.
> >>>
> >>> Dario
>
>
>
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-01 Thread Andrew Simper
On Thu, 1 Nov 2018 at 16:24, Vadim Zavalishin <
vadim.zavalis...@native-instruments.de> wrote:

> On 31-Oct-18 18:19, Stefan Stenzel wrote:
> > Vadim,
> >
> > I was more refering to the analog multimode filter based on the moog
> cascade I did some years ago, and found it amusing to find a warning
> against it.
>
> Ah, you mean the one at the beginning of Section 5.5? Well, that's an
> artifact of the older revision 1, where the ladder filter was introduced
> before the SVF (I still believe it's better didactically, unfortunately
> new material dependencies made me switch the order). The modal mixtures
> of the transistor ladder are asymmetric (HP is not symmetric to LP and
> has the resonance peak kind of "in the middle of its slope" and BP is
> not symmetric on its own). I felt that it might be confusing for a
> beginner if their first encounter with resonating HP and BP is with this
> kind of special-looking filters, hence the warning. With revision 2 this
> warning becomes less important, since the 2-pole LP and BP were
> discussed already before, but I still believe it's informative. After
> all, it doesn't say that these filters are bad, it says that they are
> special ;)
>
>
If you prize symmetry then you can use a cascade of 2 x one pole HP and 2
x one pole LP to make a 4 pole BP (band pass), and then use the same old
FIR based output tap mixing to generate all the different responses. It may
not be so easy to do in a real circuit, but in software we're not bound by
what is easy to build :)

https://cytomic.com/files/dsp/cascade-all-to-all-responses.pdf


>
> > Anyway, excellent writeup,
>
> Thank you! I'm glad my book is appreciated not only by newbies, but also
> by the industry experts.
>
>
> > I wish I could have it printed as a proper book for more relaxed reading.
>
> Hmmm, 500 A4 pages would be rather heavy ;)
>
> Vadim
>
>
Stefan: This sounds like the perfect excuse to get a HiDPI tablet to store
and read all your PDFs, much lighter than actual books and easier to search :)

I've gone paper free now and write notes in PDF format and annotate
directly on published papers in PDF format, which is great since it's
easier to find things and it's all backed up. I lost a paper notebook and
was always losing conference papers I printed out and annotated, which is
quite frustrating. Digital has its own challenges, but overall I'm happy
with the move and love that everything is backed up.

Cheers,

Andy
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-22 Thread Andrew Simper
To bandlimit a discontinuity in the n-th derivative of any function you add
a corrective grain formed from a band-limited step approximation integrated
n times. For saw and sqr, which have C0 discontinuities, you add in
band-limited corrective step functions directly. For a non-synced triangle,
where you only have C1 discontinuities, you add in band-limited corrective
ramp functions (integrated step). A corrective function is just the
band-limited one with the trivial one subtracted, so you can generate the
trivial waveform and then add the correction to get the band-limited one.
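
As a sketch of the C0 case (untested; this uses the common 2-sample
polynomial BLEP residual rather than a longer integrated sinc, but the
mechanism is the same):

// residual = band-limited step minus trivial step, non-zero only within
// one sample either side of the wrap; t is phase in [0,1), dt = f0/fs
double polyblep(double t, double dt)
{
    if (t < dt) {                 // just after the discontinuity
        t /= dt;
        return t + t - t*t - 1.0;
    }
    if (t > 1.0 - dt) {           // just before the discontinuity
        t = (t - 1.0)/dt;
        return t*t + t + t + 1.0;
    }
    return 0.0;
}

double saw_tick(double *phase, double dt)
{
    double t = *phase;
    double y = 2.0*t - 1.0;       // trivial (aliasing) sawtooth
    y -= polyblep(t, dt);         // add the corrective grain at the jump
    t += dt;
    if (t >= 1.0) t -= 1.0;
    *phase = t;
    return y;
}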

Andy

On Wed, 8 Aug 2018 at 06:02, Kevin Chi  wrote:

> I just want to thank you guys for the amount of experience and knowledge
> you are sharing here! This list is a gem!
>
> I started to replace my polyBLEP oscillators with waveTables to see how
> it compares!
>
>
> Although while experimenting with PolyBLEP I just ran into something I
> don't get, and probably you will know the answer to this.
>
> I read at a couple of places if you use a leaky integrator on a Square
> then you can get a Triangle. But as a leaky integrator
> is a first order lowpass filter, you won't get a Triangle waveform, but
> this:
>
> https://www.dropbox.com/s/1xq321xqcb7ir3a/PolyBLEPTri.png?dl=0
>
> Is it me doing something wrong misunderstanding the concept, or what is
> the best way to make a triangle with PolyBLEPs?
>
>
> thanks again for the great discussion on wavetables!
>
> --
> Kevin @ Fine Cut Bodies
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Andrew Simper
I definitely agree here, start with the easy approach, then put in more
effort when it's needed - but keep in mind you won't be able to get decent
feedback from non-dsp people until the final quality version is done.

If the code is not a key part of your product then you can even take
another step back: if you can find someone else's code that does what you
want then just use that! If that code has detailed information on how it
was derived then if you do need to make a change in the future you can put
in the work later on to understand how it was done and make changes as
needed.

I really appreciate RBJ's Audio EQ cookbook for this sort of approach, and
hopefully I have helped people with the technical papers I've done too for
the non-LTI (modulation) case of linear 2 pole resonant filters / EQ.

Cheers,

Andy


On Tue, 7 Aug 2018 at 08:06, Scott Gravenhorst  wrote:

>
> Nigel Redmon via music-dsp@music.columbia.edu wrote:
> >
> >Arg, no more lengthy replies while needing to catch a plane. Of
>course, I didn't mean to say Alesis synths (and most others) were
>drop sample... I meant linear interpolation. The point was that stuff
> >that seems to be substandard can be fine under musical
> >circumstances...
>
> This is an important point.  I see so often developer approaches that start
> at the most complex and most computationally burdensome when simpler
> approaches work well enough to pass the "good enough" test.  I start at the
> low end and if I can hear stuff I don't like, then I'll try more "advanced"
> techniques and that has served me very well.
>
> You may now start heaving rotten eggs and expired vegetables at me  :)
>
> -- ScottG
> 
> -- Scott Gravenhorst
> -- http://scott.joviansynth.com/
> -- When the going gets tough, the tough use the command line.
> -- Matt 21:22
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] What is resonance?

2018-07-20 Thread Andrew Simper
Resonance is just delay with feedback. Resonance occurs when you delay a
signal and then feed it back with some gain to the input of the delay "in
phase" with the original input, which means the delayed signal adds
together and boosts the input level to the delay. If you use a normal
digital delay line you get what is called a linear phase delay, so each
frequency is delayed by the same amount. If you use an IIR filter to delay
the signal you get what is called a non-linear phase delay, so each
frequency is delayed by a different amount. The particular arrangement of
multiple stages of delays and how the feedback is arranged both around
individual stages and globally all determine the structure of the filter,
and the type of filtering achieved.
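
As a concrete (if crude) illustration of "delay with feedback", a plain
digital delay line fed back into itself forms a comb resonator; it boosts
every frequency whose period divides the delay length, because those come
back in phase (a sketch only, names my own):

#define DELAY_LEN 100           // resonates at fs/100 Hz and its harmonics
static float buf[DELAY_LEN];    // the delay line
static int   idx = 0;

float comb_tick(float in, float fb)  // 0 <= fb < 1 for stability
{
    float out = in + fb*buf[idx];    // delayed signal re-enters in phase
    buf[idx] = out;                  // write back into the delay
    idx = (idx + 1) % DELAY_LEN;
    return out;
}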

Most synth filters use IIR filters. The amount of non-linear phase delay is
typically referred to in terms of degrees, as this is usually more useful
for filter design. When the signal has been delayed 360 degrees at a
particular frequency, if you add this to the input signal you get a boost
in amplitude of that frequency since it is back in phase with the input,
which is called constructive interference. With enough feedback gain a
resonant peak will form as the constructive interference is in a feedback
loop with itself. Most resonant filter designs use the fact that taking the
negative of a signal actually changes the phase by 180 degrees at all
frequencies. So most filters delay the phase of the signal by 180 degrees
somehow, then subtract this from the input with some gain (negative
feedback) and so form a resonant peak.

Many people have spent a lot of time forming many types of low pass
resonant filter structures. In synths it is mostly the non-linear
properties that vary between different filters; there are really only a
few structures typically used to form low pass resonant filters: multiple
IIR one pole integrators with feedback (SVF), or multiple IIR one pole low
pass filters with feedback (Sallen-Key, ladder cascade).

Andy


On Sat, 21 Jul 2018 at 01:29, Spencer Jackson  wrote:

> Resonance is the characteristic of some systems to store and release
> energy at particular frequencies. It's not limited to filters, mechanical
> systems like springs or pendulums have resonance (get on a swing at the
> park and try to change your frequency and you'll feel the effects of
> resonance).
>
> In an electrical system, a minimal passive resonant circuit would be one
> with 2 capacitors and 2 resistors. Selecting the values of the components
> determines the frequency, but what happens is that at a certain frequency
> the energy gets stored and passed back and forth between the capacitors,
> like the swing going back and forth. This storage and energy swapping
> emphasizes that frequency. Depending on the "Quality factor" or amount of
> resonance that frequency can become much more apparent than the other
> frequencies in the signal even if the input is wide-band.
>
> When you are talking about delay and feedback, you are creating a digital
> filter, but I think it is worthwhile to spend some time understanding the
> theoretical concept and think in terms of energy and frequency. Your
> feedback delay becomes the storage and certain frequencies will resonate
> with that system.
>
> > But it is having the resonance in parralel to a dry sound that bothers
> me; but may be that's the only way to do ?
>
> I'm not sure what you mean but I think you need some study in filter
> design, because a single feedback delay causes comb filtering but it's not a
> classical lowpass. Is that what you are trying to achieve? Digital filters
> are almost universally combinations of short delays (typically 1 sample)
> placed in different patterns and fed back in different amounts (e.g.
> https://en.wikipedia.org/wiki/Digital_filter#Filter_realization).
>
> Here are some resources that may help you:
> http://www.dspguide.com/ch14.htm
>
> https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.0.0a.pdf
>
> That first one is a book that can help you more with the fundamentals in
> the early chapters as well. I hope this is somewhat helpful, if not perhaps
> I need to understand better specifically what you are trying to achieve.
> _Spencer
>
>
> On Fri, Jul 20, 2018 at 10:13 AM, Mehdi Touzani 
> wrote:
>
>> Hi all,
>> I follow the llist for a while, but I am not a DSP programmer, I do DSP
>> audio apps for about 20 years now, for sonic core plateform "Scope" and
>> Xite".   I begin with other things like juce or flowstone, but so far,
>> scope is still far superior in terms of sound results. Too bad there is no
>> scripting tool for it (well there is but it is not available to me).
>>
>> My question is probably weird for you - like a super-noob question - because
>> I am NOT looking for math or code, but hints about a general
>> design/architecture.
>>
>> So... how do you do a resonance in a lowpass circuit?   :-)   not the
>> math, not the code, just the architecture.
>>
>> 

Re: [music-dsp] advice regarding USB oscilloscope

2017-03-08 Thread Andrew Simper
Hi Remy,

I use the signal generator all the time to calibrate the pot on the
probes when in x10 mode using the square wave output. Note that the
scope runs off USB power so you can't generate very hot signals, it's
+- 2V (USB is 5V), you'll need to make your own external booster
circuit for general use. The 5000 has a proper analog signal generator
from what I can tell, and the 5000B adds a 14-bit sample based
arbitrary waveform generator that runs at 200MHz, so absolutely fine
for any audio applications, but for us audio guys we have soundcards
to play back waveforms, so it's not that much use.

I wish they had made this scope when I bought my first one. I bought the
12-bit 4226 model, which still works great, but I would love this new
one!

Cheers,

Andy

On 9 March 2017 at 07:19, Remy Muller <muller.r...@gmail.com> wrote:
> hi,
>
> AudioPrecision looks nice but it's way over my budget considering that it
> won't be used on a daily basis.
>
> Looking at the specs, the QuantAsylum audio card only seems to have AC
> coupling (down to 1.6Hz) and their oscilloscope page is a bit short on
> details.
>
> Hacking a soundcard as an oscilloscope could be very convenient since it
> benefits from all the standard audio software and can easily get beyond the
> 2/4 channels, but it's limited to AC coupling, unless there are soundcards
> that have DC coupled inputs? AFAIK most only provide DC outputs.
> Furthermore having to do homemade matched probes and attenuators is not very
> 'plug and play'.
>
> Since bitscope seems to only provide 8-bit ADC, Picoscope is thus very high
> on my list, in particular the 5000 series. I'm wondering whether their
> Arbitrary Waveform Generator option is really worth it though.
>
> @Andrew I just found a python wrapper based on ctypes
> https://github.com/colinoflynn/pico-python
>
> Thanks for all the feedback!
>
>
> On 08/03/17 12:16, Roshan Wijetunge wrote:
>
> Depending on how cheap and improvised you want to go, and how handy you are
> with basic electronics, you can easily adapt your soundcard to work as an
> oscilloscope. There are a number of guides on the internet on how to do
> this, such as:
>
> http://makezine.com/projects/sound-card-oscilloscope/
>
> I have used the following variation with good results:
>
> - Probe via resistor to mic input of mixer
> - Mixer line out to line of USB soundcard
> - Schwa Schope plugin running in any DAW host (e.g. Reaper)
>
> I used this setup as it utilised components I already had available, and it
> has proved very useful for debugging audio hardware, being able to trace
> signals through a circuit as well as biasing amplifier stages in pre-amps.
> Using the mixer gave me control over input signal range though clearly you
> have to be careful with gain staging so as not to introduce distortion to
> the signal.
>
> I also improvised a signal generator using a Electro Harmonix Tube Zipper
> guitar effects pedal. It's an auto-wah type pedal, but you can set the
> resonance to maximum, sensitivity to zero and it generates a nice clean
> stable sine wave.
>
> Best Regards
> Roshan
>
>
>
> On 8 March 2017 at 09:57, Andrew Simper <a...@cytomic.com> wrote:
>>
>> Picoscope make the cheapest 16-bit scopes around (USD 1000), the
>> 16-bit stuff from Tektronix is a lot more expensive (USD 31000 -
>> that's right I didn't accidentally add an extra zero, it's x30 the
>> price). I would recommend using the Picoscope and use Python's easy c
>> bindings to call the Picoscope library functions to do what you want.
>>
>> Cheers,
>>
>> Andy
>>
>> On 7 March 2017 at 22:59, Remy Muller <muller.r...@gmail.com> wrote:
>> > Hi,
>> >
>> > I'd like to invest into an USB oscilloscope.
>> >
>> > The main purpose is in analog data acquisition and instrumentation.
>> > Since
>> > the main purpose is audio, bandwidth is not really an issue, most models
>> > seem to provide 20MHz or much more and I'm mostly interested in analog
>> > inputs, not logical ones.
>> >
>> > Ideally I'd like to have
>> >
>> >  - Mac, Windows and Linux support
>> >
>> > - 4 channels or more
>> >
>> > - 16-bit ADC
>> >
>> > - up to 20V
>> >
>> > - general purpose output generator*
>> >
>> > - a scripting API (python preferred)
>> >
>> > * I have been told that most oscilloscopes have either no or limited
>> > output,
>> > and that I'd rather use a soundcard for generating dedicated test audio
>> > signals, synchronizing the oscilloscope acquisition using the
>> > soundcard

Re: [music-dsp] advice regarding USB oscilloscope

2017-03-08 Thread Andrew Simper
Picoscope make the cheapest 16-bit scopes around (USD 1000), the
16-bit stuff from Tektronix is a lot more expensive (USD 31000 -
that's right I didn't accidentally add an extra zero, it's x30 the
price). I would recommend using the Picoscope and use Python's easy c
bindings to call the Picoscope library functions to do what you want.

Cheers,

Andy

On 7 March 2017 at 22:59, Remy Muller  wrote:
> Hi,
>
> I'd like to invest into an USB oscilloscope.
>
> The main purpose is in analog data acquisition and instrumentation. Since
> the main purpose is audio, bandwidth is not really an issue, most models
> seem to provide 20MHz or much more and I'm mostly interested in analog
> inputs, not logical ones.
>
> Ideally I'd like to have
>
>  - Mac, Windows and Linux support
>
> - 4 channels or more
>
> - 16-bit ADC
>
> - up to 20V
>
> - general purpose output generator*
>
> - a scripting API (python preferred)
>
> * I have been told that most oscilloscopes have either no or limited output,
> and that I'd rather use a soundcard for generating dedicated test audio
> signals, synchronizing the oscilloscope acquisition using the soundcard's
> word-clock. However not having to deal with multiple drivers and clock
> synchronization would be more than welcome.
>
> A friend of mine recommended using Picoscope which seems well supported, has
> a strong user community but no official support for python AFAIK.
>
> https://www.picotech.com/oscilloscope/5000/flexible-resolution-oscilloscope
>
> I also found out about bitscope http://www.bitscope.com which looks more
> oriented toward the casual hacker/maker, seems more open-ended and has
> python support, much cheaper too.
>
> What about the traditional oscilloscope companies like Tektronix, Rigol ?
>
> Has anyone experience with any of those? or any other reference to
> recommend?
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Recognizing Frequency Components

2017-01-28 Thread Andrew Simper
I thought the common way to do it was to take two FFTs really close to each
other, one or more samples apart depending on which frequencies you want the
best resolution for, and do phase differencing to work out the frequency. It
seems to work pretty well in the little test I just did, and is robust in
the presence of additive gaussian noise. As long as you have at least four
cycles of your sine wave in the FFT block you can get to around a cent of
accuracy, more if you have more cycles.
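
A sketch of the idea (my code, untested; a real implementation would use an
FFT and pick the peak bin, this just evaluates a single DFT bin k for two
frames offset by one sample, so x must hold N+1 samples):

#include <math.h>

double freq_by_phase_diff(const float *x, int N, int k, double fs)
{
    double re0 = 0, im0 = 0, re1 = 0, im1 = 0;
    for (int n = 0; n < N; n++) {
        double w = 2.0*M_PI*k*n/N;
        re0 += x[n]*cos(w);    im0 -= x[n]*sin(w);     // frame at offset 0
        re1 += x[n+1]*cos(w);  im1 -= x[n+1]*sin(w);   // frame at offset 1
    }
    // phase advance per sample, unwrapped near the bin centre advance
    double dphi = atan2(im1, re1) - atan2(im0, re0);
    double expected = 2.0*M_PI*k/N;
    while (dphi - expected >  M_PI) dphi -= 2.0*M_PI;
    while (dphi - expected < -M_PI) dphi += 2.0*M_PI;
    return dphi*fs/(2.0*M_PI);   // phase advance per sample -> Hz
}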

Cheers,

Andy

On 27 January 2017 at 19:32, STEFFAN DIEDRICHSEN 
wrote:

> Here it is from our nuclear friends at CERN:
>
> https://mgasior.web.cern.ch/mgasior/pap/FFT_resol_note.pdf
>
>
> Steffan
>
>
>
> On 26.01.2017|KW4, at 20:01, robert bristow-johnson <
> r...@audioimagination.com> wrote:
>
> i thought Steffan mentioned something about using a Gaussian window.  he
> mentioned a paper he found but did not identify it.  i am a little curious.
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dynamic smoothing algorithm

2016-12-08 Thread Andrew Simper
On 8 December 2016 at 06:13, Lubomir I. Ivanov  wrote:
>
> a couple of typos:
> - meaure -> measure
> - continous - > continuous
> - acheived -> achieved
>
> this is very cool!
> did you observe any increment in the THD when applying the routine;
> abs() tends to contribute to that?
>
> thanks
> lubomir

Glad you like it lubomir! Thanks for the corrections, I'll update the
document to fix the typos.

The abs(band) is a sudden non-linearity, but it is only applied to the
signal modulating the cutoff frequency and not to the input signal
directly. It will introduce some distortion, but not too much more than a
smoother transition like a function starting with band^2 then flattening
out to linear. I've found that for control signals linear frequency
modulation works most musically, but please feel free to try any
non-linearity you like on the bandpass signal that may suit your
application better.

Cheers,

Andy
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Dynamic smoothing algorithm

2016-12-06 Thread Andrew Simper
On 6 December 2016 at 15:35, robert bristow-johnson
<r...@audioimagination.com> wrote:
>
>
>  Original Message 
> Subject: [music-dsp] Dynamic smoothing algorithm
> From: "Andrew Simper" <a...@cytomic.com>
> Date: Tue, December 6, 2016 1:26 am
> To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> --
>
>>
>> Another year has almost passed so I thought it was time to release
>> another technical paper!
>>
>> It's a dynamic smoothing algorithm that can do things like this:
>>
>> http://cytomic.com/files/dsp/dynamic-smoothing.png
>
> that (bottom graph) looks nice.  can't be totally linear.

It's a linear filter with self fm, so the fm could be thought of as a
non-linear element.

>> The basic idea is to use a filter's bandpass output to
>> modulate its own cutoff frequency.
>
> so, the more BPF output (abs and LPF'd), the lower the LPF cutoff frequency?

The abs of the BPF is taken, and then this is used to increase the
same filter's cutoff frequency. It's a 2 pole multi-mode filter, and
the final output is a 2 pole low pass.


> why a BPF and not an HPF?   because the HP component is already well
> filtered out, so you need not measure that energy?

You want to reject small high frequency changes, and an HPF would pass
those straight through, so it wouldn't reject high frequency noise.


>>The basic version takes 6+-, 3* and
>> one absolute value, details here:
>
> that's code at "tick", right?

Yes, it is the op count for the per sample processing (be that control
or audio rate).

>
>> http://cytomic.com/files/dsp/DynamicSmoothing.pdf
>>
>> Enjoy,
>
> thanks, Andrew.  it looks like a valuable little slewing alg on the
> controls.

You're welcome! I fine tuned the initial cutoff and sensitivity to
make the plots look nice. They will need adjusting depending on the
target platform's implementation details, but it hopefully won't
take long to get it sounding how you want.

Cheers,

Andy
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Dynamic smoothing algorithm

2016-12-05 Thread Andrew Simper
Hi Guys,

Another year has almost passed so I thought it was time to release
another technical paper!

It's a dynamic smoothing algorithm that can do things like this:

http://cytomic.com/files/dsp/dynamic-smoothing.png

I came up with the idea a few years ago when I needed a way to
de-noise and de-step the cutoff of a self oscillating filter when
mapped to a midi controller. It uses a dynamic 2 pole multimode
filter. The basic idea is to use a filter's bandpass output to
modulate its own cutoff frequency. The basic version takes 6+-, 3* and
one absolute value, details here:

http://cytomic.com/files/dsp/DynamicSmoothing.pdf
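
For reference, here is my reconstruction of the basic version from the
description above (a sketch only, not the paper's exact code; the paper has
the proper constants and clamping): two cascaded one pole low passes whose
shared coefficient is pushed up by the absolute band pass, which counts out
to the stated 6 add/sub, 3 multiplies and one abs:

#include <math.h>

typedef struct { float low1, low2; } DynSmooth;  // init both to 0 (or x)

float dynsmooth_tick(DynSmooth *s, float x, float g0, float sensitivity)
{
    float band = s->low1 - s->low2;          // band pass of the filter
    float g = g0 + sensitivity*fabsf(band);  // self modulate the cutoff
    if (g > 1.0f) g = 1.0f;                  // keep the update stable
    s->low1 += g*(x - s->low1);
    s->low2 += g*(s->low1 - s->low2);
    return s->low2;                          // 2 pole low pass output
}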

Enjoy, and have a happy holidays everyone :)

Cheers,

Andrew
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-24 Thread Andrew Simper
I've tried both ways, and with linear phase it was easier to correct for
the dc offsets that change as the frequency of the oscillator gets
higher. For low frequency stuff it doesn't really matter.

On 24 September 2016 at 13:29, Ross Bencina <rossb-li...@audiomulch.com> wrote:
> On 24/09/2016 3:01 PM, Andrew Simper wrote:
>>>
>>> > "Hard Sync Without Aliasing," Eli Brandt
>>> > http://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf
>>> >
>
>>
>>
>> But stick to linear phase as you can correct more easily for dc offsets.
>
>
> What's your reasoning for saying that?
>
> I'm guessing it depends on whether you have an analytic method for
> generating the minBLEP.
>
> Ross.
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-23 Thread Andrew Simper
On 24 September 2016 at 12:06, Ross Bencina <rossb-li...@audiomulch.com> wrote:
> On 24/09/2016 1:28 PM, Andrew Simper wrote:
>>
>> Corrective grains are also called BLEP / BLAMP etc, so have a read about
>> those.
>
>
> Original reference:
>
> "Hard Sync Without Aliasing," Eli Brandt
> http://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf
>

But stick to linear phase as you can correct more easily for dc offsets.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-23 Thread Andrew Simper
Corrective grains are also called BLEP / BLAMP etc, so have a read about those.

If f(x) is your function then I'm defining:

C(0) = f(x) doesn't suddenly jump anywhere, i.e. is smooth in the 0th derivative
C(1) = f'(x) doesn't jump anywhere, i.e. is smooth in the 1st derivative
...
C(n) = f^n(x) doesn't jump anywhere

Now the C^n (where n is a superscript) in papers normally means you
have C(0), C(1), all the way up to C(n)

For example:

saw and sqr are not C(0), but are C(1) onwards (zero)
tri is C(0), but not C(1), but then C(2) onwards (zero)
sin is C(n) for all n, which is C^inf

If your function is not C(n) for a particular n then you need to band
limit that transition, which will normally occur at a fraction of a
sample.

Cheers,

Andy

On 22 September 2016 at 18:18, André Michelle  wrote:
> Hi Andrew,
>
>
> I am having a hard time understanding what you are suggesting.
>
> Don't use wavetables!
>
>
> I would be pleased not to.
>
> As you have constructed your desired waveform as a continuous function
> all you have to do is work out where any discontinuities in C(n) occur
> and you can band limit those using corrective grains for each C(n)
> discontinuity at fractions of a sample where the discontinuity occurs.
> Adding sync to this is trivial as you just do the same thing, in fact
> you can jump between any two points in your waveform or waveform shape
> instantly if you want to create even more interesting waveforms.
>
>
> How do I detect discontinuities? It is easy to see when printed visually but
> I do not see how I can approach this with code. Do I need the ‘complete’
> function at once to check, or can I do it at runtime for each sample? I
> think so, since you suggest that I can jump around within the function
> without aliasing? Because that would sound like a solution I wanted to have
> from the very beginning.
>
> For example a sawtooth is C(1) continuous all the time, it just has a
> jump in C(0) every now and again, so you just band limit those jumps
> with a C(0) corrective grain - which is an integrated sinc function to
> give you a bandlimited step, then subtract the trivial step from this,
> and add in this corrective grain at a fraction of a sample to
> re-construct your fraction of a sample band limited step.
>
>
> I do not quite get this: C(1). Does it mean I have C(n) values of the
> function where C(1) is the second value?
> What frequency does the integrated sinc function have?
> What is a 'fraction of a sample'?
>
> Similarly you can bandlimit C(1) and C(2) discontinuities, after that
> the amplitude of the discontinuities is so small that it rarely
> matters if you are running at 88.2 / 96 kHz.
>
>
> I am missing to many aspects of your suggestion. Any hints where to learn
> about this would be appreciated.
>
> ~
> André Michelle
> https://www.audiotool.com
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-21 Thread Andrew Simper
Hi André,

Don't use wavetables!

As you have constructed your desired waveform as a continuous function
all you have to do is work out where any discontinuities in C(n) occur
and you can band limit those using corrective grains for each C(n)
discontinuity at fractions of a sample where the discontinuity occurs.
Adding sync to this is trivial as you just do the same thing, in fact
you can jump between any two points in your waveform or waveform shape
instantly if you want to create even more interesting waveforms.

For example a sawtooth is C(1) continuous all the time, it just has a
jump in C(0) every now and again, so you just band limit those jumps
with a C(0) corrective grain - which is an integrated sinc function to
give you a bandlimited step, then subtract the trivial step from this,
and add in this corrective grain at a fraction of a sample to
re-construct your fraction of a sample band limited step.
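
One way to build that corrective grain, as a sketch (my code, untested;
table length, oversampling and window are arbitrary choices, and the
oversampling is what lets you place the grain at a fraction of a sample):

#include <math.h>

#define ZC 8                       // sinc zero crossings per side
#define OS 64                      // table oversampling
#define BLEP_LEN (2*ZC*OS + 1)
static double blep_residual[BLEP_LEN];

void build_blep_table(void)
{
    double sum = 0.0;
    for (int i = 0; i < BLEP_LEN; i++) {
        double t = ((double)i/OS) - ZC;            // -ZC .. +ZC samples
        double sinc = (t == 0.0) ? 1.0 : sin(M_PI*t)/(M_PI*t);
        double win  = 0.5 + 0.5*cos(M_PI*t/ZC);    // Hann window
        sum += sinc*win/OS;                        // running integral
        // band limited step minus the trivial step (0 before, 1 after);
        // scale by the step height when adding it in at a discontinuity
        blep_residual[i] = sum - (t >= 0.0 ? 1.0 : 0.0);
    }
}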

Similarly you can bandlimit C(1) and C(2) discontinuities, after that
the amplitude of the discontinuities is so small that it rarely
matters if you are running at 88.2 / 96 kHz.

Cheers,

Andrew

On 15 September 2016 at 23:49, André Michelle  wrote:
> Hi all,
>
>
> many articles have been written about bandlimited waveform generation, but
> for various reasons none of the solutions I have found are feasible for my
> synthesiser. The synth allows blending smoothly between different shapes
> (http://codepen.io/andremichelle/full/8341731a1ff2bdc90be3cb88e6509358/). It
> also provides phase modulation (by LFO), frequency gliding, hard sync and
> parameter automation. The following I already understand: functions other
> than a sine have overtones that may cross the Nyquist frequency, reflecting
> back into the audible spectrum. I tried the following to reduce the aliasing:
> oversample (32x) and apply multiple BiQuad[4] filters (cutoff at Nyquist or
> less); oversample and down-sample with a Finite Impulse Response filter; use
> a sinc function window applied to each sample (sinc Fc/Fs); apply an FFT
> and sum up sin(x) up to the Nyquist. All those techniques seem to be either
> static (FFT), or very costly, or do not perfectly reduce the aliasing. The
> synthesiser runs online inside your browser
> (https://www.audiotool.com/product/device/pulverisateur/), so CPU time is
> crucial. Most articles explain how to create the usual suspects such as
> sawtooth, square and triangle; the other articles are filled with complex
> math. I am not a complete dummy but most articles are really hard to follow
> and do not point out the key ideas in plain English.
>
> A simple question remains:
> Is it possible to sample an arbitrary dynamic(changing over time) waveform 
> function f(x) excluding frequencies over Nyquist?
>
> Any suggestions are highly appreciated!
>
> ~
> André Michelle
> https://www.audiotool.com
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] SVF matching of forward Euler to trapezoidal response

2016-02-04 Thread Andrew Simper
You guys may be interested in a technical paper I've just made public. It
matches various forward Euler type SVF difference equations to the LTI
response of the trapezoidal one. It was until recently just an internal
document, but I've added some comments and neatened it up for public
consumption, please let me know if there are any typos:

http://cytomic.com/files/dsp/SvfMatchingFeToTr.pdf

There are three different versions presented; each one mainly differs
in what the low pass integrator gets as its input, either the current
bandpass, an average of the current and previous bandpass, or just the
previous bandpass - each version is possible, and leads to a slightly
different warping of the coefficients, and slightly different numerical
behaviour.
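
For context, the trapezoidal SVF tick that serves as the LTI reference
looks roughly like this (a sketch of the usual trapezoidal SVF
recurrences; see the paper for the actual derivation and notation):

g  = tan(pi*cutoff/samplerate)   // prewarped cutoff
k  = 1/Q                         // damping
a1 = 1/(1 + g*(g + k))
a2 = g*a1
a3 = g*a2

tick:
v3 = v0 - ic2eq                  // v0 is the input sample
v1 = a1*ic1eq + a2*v3            // band pass
v2 = ic2eq + a2*ic1eq + a3*v3    // low pass
ic1eq = 2*v1 - ic1eq             // update integrator states
ic2eq = 2*v2 - ic2eq
// outputs: low = v2, band = v1, high = v0 - k*v1 - v2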

Enjoy!

Andy
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Approximating convolution reverb with multitap?

2015-03-24 Thread Andrew Simper
On 19 March 2015 at 02:35, Alan Wolfe alan.wo...@gmail.com wrote:

 Thanks a bunch you guys.  It seems like the problem is more complex
 than I expected and so the solutions are a bit over my head.

 I'll start researching though, thanks!!


This could be applied to most areas of music-dsp when starting out,
but stick with it. It is very rewarding to dig deeper and find
interesting solutions to difficult problems.

All the best,

Andy
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Andrew Simper
On 11 February 2015 at 05:52, gwenhwyfaer gwenhwyf...@gmail.com wrote:
 On 10/02/2015, Didier Dambrin di...@skynet.be wrote:
 Pretty easy to check the obvious difference between a pure low sawtooth, and

 the same sawtooth with all partials starting at random phases.

 Ah, this again? Good times. I remember playing. I made 7 sawtooth
 waves with random (static) phases and one straightforward sawtooth
 wave, with all partials in phase. I just listened to it again, to
 check my memory. On a half-decent pair of headphones, the difference
 between the all-partials-in-phase sawtooth and the random-phase ones
 is readily audible, but it was rather harder to tell the difference
 between the various random-phase waves; they all kind of sounded
 pulse-wavey. On a pair of speakers through the same amp and soundcard,
 though, I can still *just about* pick out the in-phase sawtooth -
 but I couldn't confidently tell the difference between the 7 other
 waves. Which I'm guessing has something to do with the difference
 between the fairly one-dimensional travel of sound from headphone to
 ear, vs the bouncing-in-from-all-kinds-of-directions speaker-ear
 journey.

Have you considered that headphones don't have crossovers?

All the best,

Andrew
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-10 Thread Andrew Simper
Didier,

I can hear hiss down at -72 dBFS while a 0 dBFS 440 Hz sine wave is
playing. There is no compressor in my signal chain anywhere; I use an
RME FireFace UCX, have all gains at 0 dBFS, and only adjust the
headphone out gain. The FX % cpu on the soundcard is at 0 %, and I
even double checked through all the power buttons for the EQ / Comps
on each channel, nothing is on.

I will not reply to you any further on this topic, I have made my
statements very clear, posted examples, and been very patient with
you, but you still don't want to believe me so it is best to not
discuss it any further as it is just wasting everyone's time.

All the best,

Andrew


-- cytomic -- sound music software --

On 10 February 2015 at 21:35, Didier Dambrin di...@skynet.be wrote:

 Interestingly, I wasn't gonna suggest that a possible cause could have been a 
 compressor built into the soundcard, because.. why would a soundcard even do 
 that..

 However.. I've polled some people in our forum with this same test, and one 
 guy could hear it. But it turns out that he owns an X-Fi, and it does feature 
 automatic gain compensation, which was on for him. Owning the same soundcard, 
 I turned it on, and yes, that made the noise at -80dB rather clear.

 I'm not saying it's what's happening for you, but are you 100% sure of 
 everything the signal goes through in your system?


 This said, the existence of a built-in compressor in a soundcard.. that alone 
 might be a point for dithering, if the common end listener leaves that kind 
 of thing on.




 -Message d'origine- From: Andrew Simper
 Sent: Tuesday, February 10, 2015 6:52 AM

 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 Hi Didier,

 I count myself as having good hearing, I always wear ear protection at
 any gigs / loud events and have always done so. My hearing is very
 important to me since it is essential for my livelihood.

 I made a new test, a 440 Hz sine wave with three 0.25 second white
 noise bursts at -66 dB, -72 dB and -75 dB below the sine (which is at -6
 dBFS). I can hear the first one very clearly, then just hear the
 second one. I can't actually hear the hiss of the third one but I can
 hear the amplitude of the sine wave fractionally lowering when the
 actual amplitude of the test sine remains constant, I don't know why
 this is but that's how I hear it.

 You will clearly see where the white noise bursts are if you use some
 sort of FFT display, but please just have a listen first and try and
 pick where each (3 total) are in the file:

 www.cytomic.com/files/dsp/border-of-hearing.wav

 For the other way around, a constant noise file with bursts of 440
 Hz sine waves, the sine has to be very loud before I can hear it, up
 around -28 dB from memory. Noise added to a sine wave is much easier
 to pick, which is why I think low pass filtered tones that are largely
 sine like in nature are the border case for dither.

 All the best,

 Andy


 -- cytomic -- sound music software --


 On 10 February 2015 at 10:56, Didier Dambrin di...@skynet.be wrote:

 I'm having a hard time finding anyone who could hear past the -72dB noise,
 around here.

 Really, either you have super-ears, or the cause is (technically) somewhere
 else. But it matters, because the whole point of dithering to 16bit depends
 on how common that ability is.




 -Message d'origine- From: Andrew Simper
 Sent: Saturday, February 07, 2015 2:08 PM

 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:


 It was just several times the same fading in/out noise at different
 levels,
 just to see if you hear quieter things than I do, I thought you'd have
 guessed that.

 https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
 (0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

 Here if I make the starting noise annoying, then I hear the first 4 parts,
 until 18:00. Thus, if 0dB is my threshold of annoyance, I can't hear
 -72dB.

 So you hear it at -78dB? Would be interesting to know how many can, and if
 it's subjective or a matter of testing environment (the variable already
 being the 0dB annoyance starting point)



 Yep, I could hear all of them, and the time I couldn't hear the hiss
 any more was at the 28.7 second mark, just before the end of the file.
 For reference this noise blast sounded much louder than the bass tone
 that Nigel posted when both were normalised, I had my headphones amp
 at -18 dB so the first noise peak was loud but not uncomfortable.

 I thought it was an odd test since the test file stopped just before the
 point where I could no longer hear the LFO amplitude modulation cycles,
 so I wasn't sure what you were trying to prove!

 All the best,

 Andy




 -Message d'origine- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:21 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music

Re: [music-dsp] Dither video and articles

2015-02-08 Thread Andrew Simper
Vicki,

If you look at the limits of what is possible in a real world ADC
there is a certain amount of noise in any electrical system due to
gaussian thermal noise:
http://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise

For example if you look at an instrument / measurement grade ADC like
this: 
http://www.prismsound.com/test_measure/products_subs/dscope/dscope_spec.php
They publish figures of a residual noise floor of 1.4 uV, which they
say is -115 dBu. So if you digitise a 1 V peak (2 V peak to peak) sine
wave with a 24-bit ADC then you will have hiss (which includes a large
portion of gaussian noise) at around the 20 bit mark, so you will have
4-bits of hiss to self dither. This has nothing to do with microphones
or noise in air, this is in the near perfect case of transmission via
a well shielded differential cable transferring the voltage directly
to the ADC.
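
To spell out the arithmetic behind "around the 20 bit mark" (my numbers,
relative to the 1 V peak full scale):

  noise floor:  20*log10(1.4e-6 / 1.0)  ~  -117 dB
  24-bit LSB:   20*log10(2^-24)         ~  -144.5 dB

so the thermal noise sits about (144.5 - 117)/6.02 ~ 4.6 bits above the
bottom bit, i.e. near bit 20, leaving roughly 4 bits of noise to self
dither.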

All the best,

Andy
-- cytomic -- sound music software --


On 9 February 2015 at 00:09, Vicki Melchior vmelch...@earthlink.net wrote:
 I have no argument at all with the cheap high-pass TPDF dither; whenever it 
 was published the original authors undoubtedly verified that the moment 
 decoupling occurred, as you say.  And that's what is needed for dither 
 effectiveness.   If you're creating noise for dither, you have the option to 
 verify its properties.  But in the situation of an analog signal with added, 
 independent instrument noise, you do need to verify that the composite noise 
 source actually satisfies the criteria for dither.  1/f noise in particular 
 has been questioned, which is why I raised the spectrum issue.

 Beyond that, Nigel raises this issue in the context of self-dither.  In 
 situations where there is a clear external noise source present, whether the 
 situation is analog to digital conversion or digital to digital bit depth 
 change, the external noise may, or may not, be satisfactory as dither but at 
 least its properties can be measured.  If the 'self-dithering' instead 
 refers to analog noise captured into the digitized signal with the idea that 
 this noise is going to be preserved and available at later truncation steps 
 to 'self dither' it is a very very hazy argument.   I'm aware of the various 
 caveats that are often postulated, i.e. signal is captured at double 
 precision, no truncation, very selected processing.  But even in minimalist 
 recording such as live to two track, it's not clear to me that the signal can 
 get through the digital stages of the A/D and still retain an unaltered noise 
 distribution.  It certainly won't do so after considerable processing.  So 
 the short answer is, dither!  At the 24th bit or at the 16th bit, whatever your output 
 is.  If you (Nigel or RBJ) have references to the contrary, please say so.

 Vicki

 On Feb 8, 2015, at 10:11 AM, robert bristow-johnson wrote:

 On 2/7/15 8:54 AM, Vicki Melchior wrote:
 Well, the point of dither is to reduce correlation between the signal and 
 quantization noise.  Its effectiveness requires that the error signal has 
 given properties; the mean error should be zero and the RMS error should be 
 independent of the signal.  The best known examples satisfying those 
 conditions are white Gaussian noise at ~ 6dB above the RMS quantization 
 level and white TPDF noise  at ~3dB above the same, with Gaussian noise 
 eliminating correlation entirely and TPDF dither eliminating correlation 
 with the first two moments of the error distribution.   That's all textbook 
 stuff.  There are certainly noise shaping algorithms that shape either the 
 sum of white dither and quantization noise or the white dither and 
 quantization noise independently, and even (to my knowledge) a few 
 completely non-white dithers that are known to work, but determining the 
 effectiveness of noise at dithering still requires examining the 
 statistical properties of the error signal and showin
 g

 th
  at the mean is 0 and the second moment is signal independent.  (I think 
 Stanley Lipschitz showed that the higher moments don't matter to 
 audibility.)

 but my question was not about the p.d.f. of the dither (to decouple both the 
 mean and the variance of the quantization error, you need triangular p.d.f. 
 dither of 2 LSBs width that is independent of the *signal*) but about the 
 spectrum of the dither.  and Nigel mentioned this already, but you can 
 cheaply make high-pass TPDF dither with a single (decent) uniform p.d.f. 
 random number per sample and running that through a simple 1st-order FIR 
 which has +1 an -1 coefficients (i.e. subtract the previous UPDF from the 
 current UPDF to get the high-pass TPDF).  also, i think Bart Locanthi (is he 
 still on this planet?) and someone else did a simple paper back in the 90s 
 about the possible benefits of high-pass dither.  wasn't a great paper or 
 anything, but it was about the same point.

 i remember mentioning this at an AES in the 90's, and Stanley *did* address 
 it.  for straight 

Re: [music-dsp] Dither video and articles

2015-02-07 Thread Andrew Simper
32-bit internal floating point is not sufficient for certain DSP tasks,
and the problems it causes can be plainly audible; a DF1 biquad at low
frequencies is the classic example of this, as it causes large amounts
of low frequency rumble. This is a completely different thing to the
final bit depth of an audio file you listen to.
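
If you want to measure the divergence yourself, here is a quick sketch
(my code, untested; it runs the same low cutoff RBJ cookbook low pass as a
DF1 in float and in double and prints the worst-case difference):

#include <math.h>
#include <stdio.h>

static double df1(const double c[5], double s[4], double x)
{   /* y = b0*x + b1*x1 + b2*x2 - a1*y1 - a2*y2 */
    double y = c[0]*x + c[1]*s[0] + c[2]*s[1] - c[3]*s[2] - c[4]*s[3];
    s[1] = s[0]; s[0] = x; s[3] = s[2]; s[2] = y;
    return y;
}

static float df1f(const float c[5], float s[4], float x)
{
    float y = c[0]*x + c[1]*s[0] + c[2]*s[1] - c[3]*s[2] - c[4]*s[3];
    s[1] = s[0]; s[0] = x; s[3] = s[2]; s[2] = y;
    return y;
}

int main(void)
{
    /* RBJ cookbook low pass: 20 Hz at 48 kHz, Q = 0.7071 */
    double fs = 48000.0, f0 = 20.0, Q = 0.7071;
    double w = 2.0*M_PI*f0/fs, al = sin(w)/(2.0*Q), a0 = 1.0 + al;
    double c[5] = { (1.0-cos(w))/2.0/a0, (1.0-cos(w))/a0,
                    (1.0-cos(w))/2.0/a0, -2.0*cos(w)/a0, (1.0-al)/a0 };
    float cf[5]; for (int i = 0; i < 5; i++) cf[i] = (float)c[i];

    double sd[4] = {0}; float sf[4] = {0}; double maxerr = 0.0;
    for (long n = 0; n < 10*48000; n++) {
        double x = sin(2.0*M_PI*1000.0*n/fs);    /* 1 kHz test tone */
        double e = fabs(df1(c, sd, x) - df1f(cf, sf, (float)x));
        if (e > maxerr) maxerr = e;
    }
    printf("worst float vs double difference: %g (%.1f dB)\n",
           maxerr, 20.0*log10(maxerr));
    return 0;
}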

Andy

-- cytomic -- sound music software --

On 7 February 2015 at 02:24, Michael Gogins michael.gog...@gmail.com wrote:

 Do not believe anything that is not confirmed to a high degree of
 statistical signifance (say, 5 standard deviations) by a double-blind
 test using an ABX comparator.

 That said, the AES study did use double-blind testing. I did not read
 the article, only the abstract, so cannot say more about the study.

 In my own work, I have verified with a double-blind ABX comparator at
 a high degree of statistical significance that I can hear the
 differences in certain selected portions of the same Csound piece
 rendered with 32 bit floating point samples versus 64 bit floating
 point samples. These are sample words used in internal calculations,
 not for output soundfiles. What I heard was differences in the sound
 of the same filter algorithm. These differences were not at all hard
 to hear, but they occurred in only one or two places in the piece.

 I have not myself been able to hear differences in audio output
 quality between CD audio and high-resolution audio, but when I get the
 time I may try again, now that I have a better idea what to listen
 for.

 Regards,
 Mike



 -
 Michael Gogins
 Irreducible Productions
 http://michaelgogins.tumblr.com
 Michael dot Gogins at gmail dot com


 On Fri, Feb 6, 2015 at 1:13 PM, Nigel Redmon earle...@earlevel.com wrote:
 Mastering engineers can hear truncation error at the 24th bit but say it is 
 subtle and may require experience or training to pick up.
 
  Quick observations:
 
  1) The output step size of the lsb is full-scale / 2^24. If full-scale is 
  1V, then the step is 0.0000000596046447753906V, or 0.0596 microvolts 
  (millionths of a volt); see the quick check after this list. Hearing 
  capabilities aside, the converter must be able to resolve this, and it must 
  make it through the thermal (and other) noise of their equipment and move a 
  speaker. If you’re not an electrical engineer, it may be difficult to grasp 
  the problem that this poses.
 
  2) I happened on a discussion in an audio forum, where a highly-acclaimed 
  mastering engineer and voice on dither mentioned that he could hear the 
  dither kick in when he pressed a certain button in the GUI of some beta 
  software. The maker of the software had to inform him that he was mistaken 
  on the function of the button, and in fact it didn’t affect the audio 
  whatsoever. (I’ll leave his name out, because it’s immaterial—the guy is a 
  great source of info to people and is clearly excellent at what he does, 
  and everyone who works with audio runs into this at some point.) The 
  mastering engineer graciously accepted his goof.
 
  3) Mastering engineers invariably describe the differences in very 
  subjective terms. While this may be a necessity, it sure makes it difficult 
  to pursue any kind of validation. From a mastering engineer to me, 
  yesterday: 'To me the truncated version sounds colder, more glassy, with 
  less richness in the bass and harmonics, and less front to back depth in 
  the stereo field.’
 
  4) 24-bit audio will almost always have a far greater random noise floor 
  than is necessary to dither, so it will be self-dithered. By “almost”, I 
  mean that very near 100% of the time. Sure, you can create exceptions, such 
  as synthetically generated simple tones, but it’s hard to imagine them 
  happening in the course of normal music making. There is nothing magic 
  about dither noise—it’s just mimicking the sort of noise that your 
  electronics generates thermally. And when mastering engineers say they can 
  hear truncation distortion at 24-bit, they don’t say “on this particular 
  brief moment, this particular recording”—they seem to say it in general. 
  It’s extremely unlikely that non-randomized truncation distortion even 
  exists for most material at 24-bit.
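
 (The quick check promised in item 1, a couple of lines of C for concreteness:)

 #include <stdio.h>

 int main(void)
 {
     double full_scale = 1.0;                /* volts */
     double step = full_scale / (1 << 24);   /* 24-bit LSB size */
     printf("%.6e V = %.4f microvolts\n", step, step * 1.0e6);
     /* prints 5.960464e-08 V = 0.0596 microvolts */
     return 0;
 }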
 
  My point is simply that I’m not going to accept that mastering engineers 
  can hear the 24th bit truncation just because they say they can.
 
 
  On Feb 6, 2015, at 5:21 AM, Vicki Melchior vmelch...@earthlink.net wrote:
 
  The following published double blind test contradicts the results of the 
  old Moran/Meyer publication in showing (a) that the differences between CD 
  and higher resolution sources is audible and (b) that failure to dither at 
  the 16th bit is also audible.
 
  http://www.aes.org/e-lib/browse.cfm?elib=17497
 
  The Moran/Meyer tests had numerous technical problems that have long been 
  discussed, some are enumerated in the above.
 
  As far as dithering at the 24th bit, I can't disagree more with a 
  conclusion that 

Re: [music-dsp] Dither video and articles

2015-02-07 Thread Andrew Simper
On 7 February 2015 at 03:52, Didier Dambrin di...@skynet.be wrote:
 It was just several times the same fading in/out noise at different levels,
 just to see if you hear quieter things than I do, I thought you'd have
 guessed that.
 https://drive.google.com/file/d/0B6Cr7wjQ2EPub2I1aGExVmJCNzA/view?usp=sharing
 (0dB, -36dB, -54dB, -66dB, -72dB, -78dB)

 Here if I make the starting noise annoying, then I hear the first 4 parts,
 until 18.00 seconds. Thus, if 0dB is my threshold of annoyance, I can't hear -72dB.

 So you hear it at -78dB? Would be interesting to know how many can, and if
 it's subjective or a matter of testing environment (the variable already
 being the 0dB annoyance starting point)

Yep, I could hear all of them, and the time I couldn't hear the hiss
any more was at the 28.7 second mark, just before the end of the file.
For reference this noise blast sounded much louder than the bass tone
that Nigel posted when both were normalised, I had my headphone amp
at -18 dB so the first noise peak was loud but not uncomfortable.

I thought it was an odd test since the test file just stopped before I
couldn't hear the LFO amplitude modulation cycles, so I wasn't sure
what you were trying to prove!

All the best,

Andy




 -Original Message- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:21 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 Sorry, you said "until", which is even more confusing. There are
 multiple points when I hear the noise "until", since it sounds like the
 noise is modulated in amplitude by a sine like LFO for the entire
 file, so the volume of the noise ramps up and down in a cyclic manner.
 The last ramping I hear fades out at around the 28.7 second mark when
 it is hard to tell if it just ramps out at that point or is just on
 the verge of ramping up again and then the file ends at 28.93 seconds.
 I have not tried to measure the LFO wavelength or any other such
 things, this is just going on listening alone.

 All the best,

 Andrew Simper



 On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:

 On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:

 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?


 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing


 I hear noise immediately in that recording, it's hard to tell exactly
 the time I can first hear it since there is some latency from when I
 press play to when the sound starts, but as far as I can tell it is
 straight away. Why do you ask such silly questions?

 All the best,

 Andrew Simper

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp


 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Andrew Simper
Sorry, you said "until", which is even more confusing. There are
multiple points when I hear the noise "until", since it sounds like the
noise is modulated in amplitude by a sine like LFO for the entire
file, so the volume of the noise ramps up and down in a cyclic manner.
The last ramping I hear fades out at around the 28.7 second mark when
it is hard to tell if it just ramps out at that point or is just on
the verge of ramping up again and then the file ends at 28.93 seconds.
I have not tried to measure the LFO wavelength or any other such
things, this is just going on listening alone.

All the best,

Andrew Simper



On 6 February 2015 at 22:01, Andrew Simper a...@cytomic.com wrote:
 On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:
 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?

 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing

 I hear noise immediately in that recording, it's hard to tell exactly
 the time I can first hear it since there is some latency from when I
 press play to when the sound starts, but as far as I can tell it is
 straight away. Why do you ask such silly questions?

 All the best,

 Andrew Simper
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-06 Thread Andrew Simper
On 6 February 2015 at 17:32, Didier Dambrin di...@skynet.be wrote:
 Just out of curiosity, until which point do you hear the noise in this
 little test (a 32bit float wav), starting from a bearable first part?

 https://drive.google.com/file/d/0B6Cr7wjQ2EPucjFCSUhGNkVRaUE/view?usp=sharing

I hear noise immediately in that recording, it's hard to tell exactly
the time I can first hear it since there is some latency from when I
press play to when the sound starts, but as far as I can tell it is
straight away. Why do you ask such silly questions?

All the best,

Andrew Simper
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
On 6 February 2015 at 12:16, Didier Dambrin di...@skynet.be wrote:
 I'm not quite sure I understand what you described here below.
 I think the wavs should have contained a normalized part, so that anyone who
 listens to it, will never crank up his volume above the threshold of pain on
 the first, normalized part, and then everyone is more or less listening to
 the quiet part the same way.

That is exactly what I was doing: I normalised the float wav file and
let you know it wasn't even remotely near the level of pain, which
tells me the gain of -12 dB on the headphone amp is a reasonable
listening level.


 Claiming that it's at all audible is one thing, but you go as far as saying
 it's clear to hear.. we're probably not testing the same way.
 I have normalized (+23dB) the last 9 seconds of the Diva bass 16-bit
 truncated.wav file to hear what I was supposed to hear. I'm just not hearing
 anything close to that, in the normal test.

I can only say what I hear, which is pretty clear. Nigel's point about
the volume is this: at one point in the song that bass sound would be
normalised up higher, or perhaps behind drums which were louder, but
you can consider this bit as being in a quieter bit of a song, so
absolutely reasonable as a test case.


 While I have Sennheiser HD650, I'm listening through bose QC15 because,
 although it's night time, my ambient noise is probably a gazillion times
 above what we're debating here. So I'm in a pretty quiet listening setup
 here (for those who have tried QC15's).

If you can't hear it I believe you, but I can hear it. Not all people's
hearing is equal.

All the best,

Andrew Simper





 -Original Message- From: Andrew Simper
 Sent: Friday, February 06, 2015 3:31 AM

 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 I also tried boosting the float version of the bass tone to -1 dB (so
 another 18 dB up, with the same test setup), it was loud, but not
 anywhere near the threshold of pain for me. I then boosted it another
 12 dB on the headphone control (so 0 dB gain), so now 30 dB gain in
 total and my headphones were really shaking, this was a bit silly a
 level, but still definitely not painful to listen to. My point being
 that this is a very reasonable test signal to listen to, and it is
 clear to hear the differences even at low levels of gain.

 If I had to choose between the two 16-bit ones, I would prefer the one
 with dither but put through a make mono plugin, as this sounded the
 closest to the float version.

 All the best,

 Andy

 -- cytomic -- sound music software --


 On 5 February 2015 at 16:46, Nigel Redmon earle...@earlevel.com wrote:

 Hmm, I thought that would let you save the page source (wave file)…Safari
 creates the file of the appropriate name and type, but it stays at 0
 bytes…OK, I put up an index page—do the usual right-click to save the file
 to disk if you need to access the files directly:

 http://earlevel.com/temp/music-dsp/


 On Feb 5, 2015, at 12:13 AM, Nigel Redmon earle...@earlevel.com wrote:

 OK, here’s my new piece, I call it Diva bass—to satisfy your request for
 me to make something with truncation distortion apparent. (If it bothers you
 that my piece is one note, imagine that this is just the last note of a
 longer piece.)

 I spent maybe 30 seconds getting the sound—opened Diva (default
 “minimoog” modules), turn the mixer knobs down except for VCO 1, set range
 to 32’, waveform to triangle, max release on the VCA envelope.

 In 32-bit float glory:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2032-bit%20float.wav

 Truncated to 16-bit, no dither (Quan Jr plug-in, Digital Performer),
 saved to 16-bit wave file:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated.wav

 You’ll have to turn your sound system up, not insanely loud, but loud. (I
 said that this would be the case before.) I can hear it, and I know
 engineers who monitor much louder, routinely, than I’m monitoring to hear
 this. My Equator Q10s are not terribly high powered, and I’m not adding any
 other gain ahead of them in order to boost the quiet part.

 If you want to hear the residual easily (32-bit version inverted, summed
 with 16-bit truncated, the result with +40 dB gain via Trim plug-in):


 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20truncated%20residual%20+40dB.wav

 I don’t expect the 16-bit truncated version to bother you, but it does
 bother some audio engineers. Here's the 16-bit dithered version, for
 completeness, so that you can decide if the added noise floor bothers you:

 http://earlevel.com/temp/music-dsp/Diva%20bass%2016-bit%20dithered.wav



 On Feb 4, 2015, at 1:10 PM, Didier Dambrin di...@skynet.be wrote:

 Yes, I disagree with the always. Not always needed means it's
 sometimes needed, my point is that it's never needed, until proven
 otherwise. Your video proves that sometimes it's not needed, but not that
 sometimes it's needed

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
Hi Nigel,

You're welcome! Thanks for spending the time and effort preparing
examples so I could make some observations on. Yeah, with headphones
my ears easily picked up the stereo-ness of the hiss as soon as I
switched sources. If I was listening to an entire CD and all tracks
had the same hiss I would have just assumed it would be part of the
recording chain in making the CD, which I suppose in a sense it is,
but the hiss definitely sounded quieter in headphones when it was
mono.

Now I'm just being lazy with the plugin, I can do it myself as a
command line thing / plugin, but I just figured if you had recently
compiled the plugin it would be an interesting addition to have!

All the best,

Andy

-- cytomic -- sound music software --


On 6 February 2015 at 14:47, Nigel Redmon earle...@earlevel.com wrote:
 Funny, Andy, I was thinking about the merits of mono versus stereo dither a 
 couple of nights ago while having dinner…while independent dither makes 
 sense, in that your equipment’s background noise should be uncorrelated, 
 there is the issue with headphones (maybe making it more obvious, more 
 spacious?)…I didn’t think it through very far, just a thought to try out, but 
 it’s interesting that you should bring it up...

 But actually, those files aren’t using my plug-in. Since the test didn’t 
 require a constant residual level at various truncation levels (which is the 
 best part of the plug-in—nothing like juggling a couple of gain plug-ins to 
 manually compensate the gain in a null test, and blasting your ears off when 
 a stray index finger mouse-scrolls bit-depth down to one or two bits with a 
 high gain setting in place), I went with the off-the-shelf stuff, so there's 
 not a chance that someone would question whether my plug-in was doing 
 something misleading. DP’s Quan Jr plug-in is supplying the dither.

 I can mod my plug-in for mono dither, though, and supply a version of that. 
 You make an interesting observation, thanks.


 On Feb 5, 2015, at 6:31 PM, Andrew Simper a...@cytomic.com wrote:

 Hi Nigel,

 Can I please ask a favour? Can you please add a mono noise button to
 your dither plugin? In headphones the sudden onset of stereo hiss of
 the dither is pretty obvious and a little distracting in this example.
 I had a listen with a make mono plugin and the differences were much
 less obvious between the 16-bit with dither and the float file.  It
 would be interesting to hear a stereo source (eg the same Diva sounds
 but in unison) put through mono noise dithering.

 The differences are pretty clear to me, thanks for posting the files! My 
 setup:

 (*) Switching randomly between the three files, playing
 them back with unity gain (the float file padded -6 dB to have the
 same volume as the others)
 (*) FireFace UCX with headphone output set to -12 dB, all other gains at 
 unity
 (*) Sennheiser Amperior HD25 headphones

 My results

 (*) the float file is easy to spot, because of the differences when
 compared to the other two
 (*) the dithered one sounds hissy straight away when I switch to it,
 it is obvious that the hiss is stereo, my ears immediately hear that
 stereo difference, but otherwise it sounds like the original float
 file
 (*) the undithered one, right from the start, sounds like a harsher
 version of the float one with just a hint of noise as well, an
 aggressive subtle edge to the tone which just isn't in the original.
 When the fadeout comes it turns into the more obvious aliasing
 distortion that everyone is used to hearing.

 I also tried boosting the float version of the bass tone to -1 dB (so
 another 18 dB up, with the same test setup), it was loud, but not
 anywhere near the threshold of pain for me. I then boosted it another
 12 dB on the headphone control (so 0 dB gain), so now 30 dB gain in
 total and my headphones were really shaking, this was a bit of a silly
 level, but still definitely not painful to listen to. My point being
 that this is a very reasonable test signal to listen to, and it is
 clear to hear the differences even at low levels of gain.

 If I had to choose between the two 16-bit ones, I would prefer the one
 with dither but put through a make mono plugin, as this sounded the
 closest to the float version.

 All the best,

 Andy

 -- cytomic -- sound music software --


 On 5 February 2015 at 16:46, Nigel Redmon earle...@earlevel.com wrote:
 Hmm, I thought that would let you save the page source (wave file)…Safari 
 creates the file of the appropriate name and type, but it stays at 0 
 bytes…OK, I put up an index page—do the usual right-click to save the 
 file to disk if you need to access the files directly:

 http://earlevel.com/temp/music-dsp/


 On Feb 5, 2015, at 12:13 AM, Nigel Redmon earle...@earlevel.com wrote:

 OK, here’s my new piece, I call it Diva bass—to satisfy your request for 
 me to make something with truncation distortion apparent. (If it bothers 
 you that my piece is one note, imagine that this is just

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
 their time 
 dithering, because listeners will never hear it, but they will want to get 
 rid of it. And the cost of that rash action to get rid of it? Basically 
 nothing. Hence my advice: Dither and don’t worry about it—or listen to the 
 residual up close and see if there’s nothing to worry about, if you prefer.


 On Feb 3, 2015, at 10:24 PM, Didier Dambrin di...@skynet.be wrote:

 Sorry, but if I sum up this video, it goes like this:
 "you need dithering to 16bit and I'm going to prove it", then the video 
 actually proves that you don't need it starting at 14bit, but adds that it's 
 only because of the nature of the sound used for the demo.

 ..then why not use a piece of audio that does prove the point, instead?
 I know why, it's because you can't, and dithering to 16bit will never 
 make any audible difference.
 It's ok to tell the world to dither to 16bit, because it's nothing 
 harmful either (it only distracts people from the actual problems that 
 matter in mixing). But if there is such a piece of audio that makes 
 dithering to 16bit at all audible, without an abnormally massive boost to 
 hear it, I'd like to hear it.

 Andrew says he agrees, but then adds that it's important when you 
 post-edit the sound. Yes it is, totally, but if you're gonna post-edit 
 the sound, you will rather keep it 32 or 24bit anyway - the argument 
 about dithering to 16bit is for the final mix.

 To me, until proven otherwise, for normal-to-(not abnormally)-high 
 dynamic ranges, we can't perceive quantization above 14bit for audio, and 
 10bits for images on a screen (debatable here because monitors aren't 
 linear but that's another story). Yet people seem to care less about 
 images, and there's gradient banding all over the place.






 -Original Message- From: Andrew Simper
 Sent: Wednesday, February 04, 2015 6:06 AM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Dither video and articles

 Hi Nigel,

 Isn't the rule of thumb in IT estimates something like: Double the
 time you estimated, then move it up to the next time unit? So 2 weeks
 actually means 4 months, but since we're in Music IT I think we should
 be allowed 5 times instead of 2, so from my point of view you've
 actually delivered on time ;)

 Thanks very much for doing the video! I agree with your recommended
 workflows of 16 bit = always dither, and 24 bit = don't dither. I
 would probably go further and say just use triangular dither, since at
 some time in the future you may want to pitch the sound down (e.g. for a
 sample library of drums with a tom you want to tune downwards, or when
 remixing a song), and then any noise shaped dither will cause an issue
 since the shaped noise will become audible.

 All the best,

 Andrew

 -- cytomic -- sound music software --
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, 
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, 
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, 
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-05 Thread Andrew Simper
On 6 February 2015 at 09:00, Nigel Redmon earle...@earlevel.com wrote:
...
 Several people have told me that they can hear it, consistently, on 24-bit 
 truncations. I don’t think so. I read in a forum, where an expert was using 
 some beta software and mentioned the audible difference with engaging 24-bit 
 dither and not via a button on the GUI, and the developer had to tell him 
 that he was mistaken on the function of that button, and that it did not 
 impact audio at all.

I've done tests and 18-bits is pretty much my limit to hear any
differences dither can make, and that is at loud listening levels. At
20-bits I can't tell any difference unless I do something pathological
like only listen to the tail and then boost the crap out of it.

 ...But at 16-bit, it’s just not that hard to hear it—an edge case, for sure, 
 but it’s there, so they will want to act on it, and I don’t think that’s 
 unreasonable.

I agree.

All the best,

Andy
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Dither video and articles

2015-02-04 Thread Andrew Simper
On 4 February 2015 at 14:24, Didier Dambrin di...@skynet.be wrote:
 Andrew says he agrees, but then adds that it's important when you post-edit
 the sound. Yes it is, totally, but if you're gonna post-edit the sound, you
 will rather keep it 32 or 24bit anyway - the argument about dithering to
 16bit is for the final mix.

Unless you ship 16-bit samples as a product.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Dither video and articles

2015-02-03 Thread Andrew Simper
Hi Nigel,

Isn't the rule of thumb in IT estimates something like: Double the
time you estimated, then move it up to the next time unit? So 2 weeks
actually means 4 months, but since we're in Music IT I think we should
be allowed 5 times instead of 2, so from my point of view you've
actually delivered on time ;)

Thanks very much for doing the video! I agree with your recommended
workflows of 16 bit = always dither, and 24 bit = don't dither. I
would probably go further and say just use triangular dither, since at
some time in the future you may want to pitch the sound down (e.g. for a
sample library of drums with a tom you want to tune downwards, or when
remixing a song), and then any noise shaped dither will cause an issue
since the shaped noise will become audible.
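
As a concrete picture of that recommendation, here is a sketch of plain
(flat) triangular dither at the 16-bit truncation step, assuming float
samples in [-1, 1] and using rand() purely for illustration:

#include <stdlib.h>
#include <math.h>

short dither_to_16bit(float x)
{
    /* two independent uniforms in [-0.5, 0.5) LSB sum to a triangular
       p.d.f. 2 LSBs wide -- flat TPDF, no noise shaping */
    float u1 = (float)rand() / ((float)RAND_MAX + 1.0f) - 0.5f;
    float u2 = (float)rand() / ((float)RAND_MAX + 1.0f) - 0.5f;
    float y  = floorf(x * 32768.0f + u1 + u2 + 0.5f);

    if (y >  32767.0f) y =  32767.0f;   /* clip to the 16-bit range */
    if (y < -32768.0f) y = -32768.0f;
    return (short)y;
}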

All the best,

Andrew

-- cytomic -- sound music software --


On 25 January 2015 at 01:49, Nigel Redmon earle...@earlevel.com wrote:
 “In the coming weeks”, I said…OK, maybe 10 months…(I wasn’t *just* slow, 
 actually rethought and changed courses a couple of times)…

 Here’s my new “Dither—The Naked Truth” video, looking at isolated truncation 
 distortion in music:

 https://www.youtube.com/watch?v=KCyA6LlB3As


 On Mar 26, 2014, at 4:45 PM, Nigel Redmon earle...@earlevel.com wrote:

 Since it’s been quiet…

 Maybe this would be interesting to some list members? A basic and intuitive 
 explanation of audio dither:

 https://www.youtube.com/watch?v=zWpWIQw7HWU

 The video will be followed by a second part, in the coming weeks, that 
 covers details like when, and when not to use dither and noise shaping. I’ll 
 be putting up some additional test files in an article on ear level.com in 
 the next day or so.

 For these and other articles on dither:

 http://www.earlevel.com/main/category/digital-audio/dither-digital-audio/

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Efficiently modulate filter coefficients without artifacts?

2015-02-02 Thread Andrew Simper
On 2 February 2015 at 18:45, Vadim Zavalishin
vadim.zavalis...@native-instruments.de wrote:
...
 In regards to the artifact minimization, I have only an intuitive
 suggestion. Let's look at the SVF structure in continuous time (e.g. Fig.5.1
 on p.77 of
 http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.0.3.pdf)
 and at the structure of the continuous-time integrator (the two untitled
 pictures on p.49 in the same text). It's intuitively clear, that the
 integrator structure, where the cutoff gain precedes the integration
 generates less artifacts, since the integrator is smoothing out the
 coefficient changes. This leads to the idea that in this case the lowpass
 output of the SVF would be quite reasonable in regards to the artifact
 minimization, since each of the cutoff coefficients is smoothed by an
 integrator and the resonance coefficient is smoothed by both of them.
 Similar considerations can be applied to the other modes, where it's clear
 that the HP output gets the unsmoothed artifacts from the resonance changes.
 If we want to build a mixture of LP/BP/HP modes rather than picking the
 outputs one by one, then, maybe it's possible to smooth the artifacts by
 using the transposed (MISO) form of the SVF, but I'm not sure.
...

Thanks for your interesting observation of smoothing of cutoff changes
and changes in resonance gain. In this light the MISO form does look
superior, and in my tests has a way more interesting non-linear tone
than a standard SVF, especially the LP; no wonder ARP used this form
in their synths! I have so much to learn still from all the classic
designs.


 One would then generally expect other discretization approaches, which do
 not preserve the topology and state variables, such as direct forms to have
 a way poorer performance in regards to the artifacts, unless, of course,
 it's an approach which specifically targets the artifact minimization in one
 or the other way.

 Regards,
 Vadim

I completely agree! I find it mentally easier to think of energy
stored in each component rather than state variables even though
they are the same. So for musical applications it is important that a
change in the cutoff and resonance doesn't change (until you process
the next sample) the energy stored in each capacitor / inductor /
other energy storage component in your model. Direct form structures
do not have this energy conservation property, they are only
equivalent in the LTI case (linear time invariant - ie don't change
your cutoff or resonance ever). Any method that tries to jiggle the
states to preserve the energy would only be trying to do what already
happens automatically with a state space model, so I feel it is
best to leave such forms for static filtering applications.
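
As a concrete sketch of this kind of state-preserving structure, here is a
linear trapezoidal SVF tick along the lines of the cytomic papers (my own
paraphrase; check the papers for the definitive version):

#include <math.h>

typedef struct {
    float a1, a2, a3;     /* coefficients derived from g and k */
    float k;              /* damping, k = 1/Q */
    float ic1eq, ic2eq;   /* capacitor states, kept across parameter changes */
} Svf;

void svf_set(Svf *f, float cutoff, float q, float fs)
{
    float g = tanf(3.14159265f * cutoff / fs);   /* prewarped cutoff gain */
    f->k  = 1.0f / q;
    f->a1 = 1.0f / (1.0f + g * (g + f->k));
    f->a2 = g * f->a1;
    f->a3 = g * f->a2;
}

/* one sample: only the capacitor states persist, so changing cutoff or
   resonance between samples does not disturb the stored "energy" */
float svf_lowpass(Svf *f, float v0)
{
    float v3 = v0 - f->ic2eq;
    float v1 = f->a1 * f->ic1eq + f->a2 * v3;
    float v2 = f->ic2eq + f->a2 * f->ic1eq + f->a3 * v3;
    f->ic1eq = 2.0f * v1 - f->ic1eq;
    f->ic2eq = 2.0f * v2 - f->ic2eq;
    return v2;   /* low = v2, band = v1, high = v0 - k*v1 - v2 */
}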

All the best,

Andy




 On 02-Feb-15 11:18, Ross Bencina wrote:

 Hello Robert,

 On 2/02/2015 10:10 AM, robert bristow-johnson wrote:

 also, i might add to the list, the good old-fashioned lattice (or
 ladder) filters.


 In the Laroche paper mentioned earlier [1] he shows that Coupled Form is
 BIBO stable and Normalized Ladder is stable only if coefficients are
 varied at most every other sample (which, if adhered to, should also be
 fine for the current discussion).

 The Lattice filter is *not* time-varying stable according to Laroche's
 analysis. I'd be curious to hear alternative views/discussion on that.

 [1] Laroche, J. (2007) “On the Stability of Time-Varying Recursive
 Filters,” J. Audio Eng. Soc., vol. 55, no. 6, pp. 460-471, June 2007.

 Cheers,

 Ross.
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp



 --
 Vadim Zavalishin
 Reaktor Application Architect
 Native Instruments GmbH
 +49-30-611035-0

 www.native-instruments.com

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] SVF and SKF with input mixing

2015-01-05 Thread Andrew Simper
Thanks to the ARP engineers for the original circuit and Sam
HOSHUYAMA's great work for outlining the theory and designing a
schematic for an input mixing SVF.

Sam's articles-
Theory: http://houshu.at.webry.info/200602/article_1.html
Schematic: http://houshu.at.webry.info/201202/article_2.html

I have taken Sam's design and written a technical paper on
discretizing this form of the SVF. I also took the chance to update
the SKF (Sallen Key Filter) paper to more explicitly deal with input
mixing of different signals, here they are in as similar form as
possible:

http://cytomic.com/files/dsp/SvfInputMixing.pdf
http://cytomic.com/files/dsp/SkfInputMixing.pdf

As always all the technical papers I've done can be accessed from this page:

http://cytomic.com/technical-papers

All the best,

Andrew Simper

-- cytomic -- sound music software --
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sallen Key with sin only coefficient computation

2014-12-23 Thread Andrew Simper
Hi Robert,

Everyone on the synth diy list didn't even bat an eyelid at that
diagram, they just said stuff like "Thanks" and "That's very
interesting, I have not seen it done that way before."

It is the way that you can mix the inputs into a Sallen Key filter
that is new. I have checked David Dixon's schematic that he emailed me
and he actually does it a different way to me. The benefit is that
this is a circuit design that can be built, and when modelled properly
leads to a design that supports audio rate modulation. You can also
send completely different signals into the structure and get
simultaneous low, band, and high passing of those signals, so you can
do some cool things with this.

But, let's rewind a little, it seems you are still having issues with
the idealised circuit diagram. For the record I have only shown the
solution to the linear case, but you can add non-linearities anywhere
you see a triangle if you want, as that is what happens in a real
circuit and is what makes it sound good; here I have solved for
idealised components.

Anyway, please forget about the diagram if it confuses you. I have
listed the equations in full in the pdf, but if you are also having
trouble reading that then here are the equations again, but this time
instead of solving the implicit integration version you can solve for
a frequency response by sticking an s in front of the vc1, vc2 terms (I
know you love your Laplace!)

va1 == (m0 v0) - (m1 v0 + v1)
va2 == (m1 v0 + v1) - (v2)
v1 == (k v2 - m2 v0) + vc1  -- vc1 == (v1) - (k v2 - m2 v0)
v2 == m2 v0 + vc2  --  vc2 == (v2) - (m2 v0)

solve the simultaneous nodal circuit equations for hs:
0 == g va1 - s vc1
0 == g va2 - s vc2
hs == v2/v0

and the solution is:
H(s) = (g^2 m0 + g m1 s + m2 s^2)/(g^2 - g (k - 2) s + s^2)


for example you can paste this command into the web page:
http://maxima-online.org/
solve ([va1 = (m0*v0 - m1*v0 - v1), va2 = (m1*v0 + v1 - v2), v1 =
(k*v2) - m2*v0 + vc1, v2 = m2*v0 + vc2, 0=g*va1-s*vc1, 0=g*va2-s*vc2,
hs=v2/v0], [hs, v2, v1, vc1, vc2, va1, va2])[1];

Again, note that the solution contains weights for low, band, and high
pass processing, but the important thing here is that these can be
completely different inputs, this is not summing three different
output signals.
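
If anyone wants a numerical sanity check of that H(s), something like this
evaluates its magnitude response (arbitrary example values, lowpass mix):

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* lowpass mix: m0 = 1, m1 = m2 = 0; g is the cutoff gain */
    double g = 1.0, k = 0.5, m0 = 1.0, m1 = 0.0, m2 = 0.0;
    for (double w = 0.125; w <= 8.0; w *= 2.0) {
        double complex s = I * w;
        double complex h = (g*g*m0 + g*m1*s + m2*s*s)
                         / (g*g - g*(k - 2.0)*s + s*s);
        printf("w = %6.3f   |H| = %.4f\n", w, cabs(h));
    }
    return 0;
}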

Andy

-- cytomic -- sound music software --


On 23 December 2014 at 07:10, robert bristow-johnson
r...@audioimagination.com wrote:
 On 12/22/14 12:27 AM, Andrew Simper wrote:

 I've seen in many Sallen Key circuits people stick the input signal into
 various points to generate some different responses, but always the high
 pass is only 1 pole.

 i haven't seen that with the SK.  for HPF, i've only seen it with the the
 R's and C's swapped.  like with
 http://sim.okawa-denshi.jp/en/OPseikiHikeisan.htm .

 I'm referring to the MS20 and Steiner Parker filters,



 http://machines.hyperreal.org/manufacturers/Korg/MS-synths/schematics/KLM-307.GIF

 ...

 http://www.cgs.synth.net/modules/cgs35_syntha_vcf.html .


 okay, neither of those seem to be what you have at
 http://cytomic.com/files/dsp/SkfInputMixing.jpg .

 these LM13600's are more than a simple transconductance amplifier. they are
 current-controlled transconductance amplifiers and that current going into
 pins 1 or 16 of the chip determine the transconductance gain g_m.  in your
 analysis in your paper, are you assuming g is constant (or set
 independently) and then using time-domain analysis (modeling with
 trapezoidal integration) for the rest of the circuit?  or are you modeling a
 varying g, because i don't see that explicitly in SkfInputMixing.jpg?



 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sallen Key with sin only coefficient computation

2014-12-23 Thread Andrew Simper
 completely different inputs, this is not summing three different
 output signals.

to clarify I meant to say: this is not summing three output signals
(low, band, high) from the same input signal like you can do with an
SVF
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sallen Key with sin only coefficient computation

2014-12-23 Thread Andrew Simper
PS:

 Anyway, please forget about it diagram if it confuses you.

 legit circuit diagrams ain't confusing.  signal flow diagrams ain't
 confusing.  mixed metaphors can be confusing.  wires are sorta physical
 things that you can do Kirchoff's laws on, signal paths are more like
 information pipes in which numbers flow.  when i see a line go into a
 capacitor, i think it's a wire.  when i see a line go into an adder or a
 gain block, i think it's a signal path.

They are wires. Using a summation mixer type symbol has been pretty standard
in circuit papers for a long time, e.g.:

http://www.ka-electronics.com/images/pdf/Steiner_Filter.pdf
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sallen Key with sin only coefficient computation

2014-12-21 Thread Andrew Simper
Thanks for spotting the typo Ross!

I've updated the diagram of the filter to be a little prettier in the full
pdf, and I've also uploaded it as a jpg here:

http://cytomic.com/files/dsp/SkfInputMixing.jpg

Andy

-- cytomic -- sound music software --

On 21 December 2014 at 20:56, Ross Bencina rossb-li...@audiomulch.com
wrote:

 On 21/12/2014 5:12 PM, Andrew Simper wrote:

 and all the other papers (including the SVF version of the same thing I
 did
 a while back) are always available here:

 www.cytomic.com/techincal-papers


 Actually:

 http://www.cytomic.com/technical-papers
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] R: Sallen Key with sin only coefficient computation

2014-12-21 Thread Andrew Simper
Hi Marco,

Thanks for explaining the diagram to Robert, I am so used to seeing
and drawing idealised circuit diagrams like this that I forget people
in the DSP world aren't as comfortable with them as I am.

Another feature I forgot to mention, this form is quite fun since your
input signals can actually be completely different. So for example you
can high pass filter one signal, and low pass filter another with the
same cutoff and resonance and the filter mixes them for you at the
output. So you could use this as a DJ type filter with two songs as
the different inputs and have a crossover type mix between them at the
output. This isn't such a big deal in the linear case, you could
equally use two filters with the same settings and the cpu difference
will be minimal, but for the non-linear case things are more
interesting (always the way!). You get both signals contributing to
core and resonance drive, so it keeps everything wonderfully balanced
and sounds brilliant.

David Dixon has confirmed on the Synth DIY email list that this is the
same method he uses (although I came up with it independently) in his
Intellijel Korgasmatron II filter, well worth checking out:
http://www.intellijel.com/eurorack-modules/913-2/ . He actually has
two of these input summing sallen key filters in his module. Those
analog boys are always one step ahead ;) He released his module in Jan
2013.

If anyone is interested in having a listen to the non-linear version
please let me know and I'll do some audio demos, the low pass version
sounds the same as the MS2 filter in The Drop.

Andy

-- cytomic -- sound music software --


On 22 December 2014 at 05:46, Marco Lo Monaco marco.lomon...@teletu.it wrote:
 Hello Robert,
 I did a similar analysis months ago on the SVF topology Andrew posted at
 that time.
 The implicit (and most logical) convention is to consider the OTAs as output
 current generator (as they should, so that a current flows into the cap),
 the unity/k/mo0/m1/m2 gains as having infinite input impedance and zero
 output impedance, the summing nodes as having infinite input impedence on
 each addend input and zero output on the summed voltage node.
 By doing this the analog counterpart of Andrews scheme works.

 Ciao

 Marco

 -Original Message-
 From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
 boun...@music.columbia.edu] On behalf of robert bristow-johnson
 Sent: Sunday, 21 December 2014 20:25
 To: music-dsp@music.columbia.edu
 Subject: Re: [music-dsp] Sallen Key with sin only coefficient computation

 On 12/21/14 1:01 PM, Andrew Simper wrote:
  I've updated the diagram of the filter to be a little prettier in the
  full pdf, and I've also uploaded it as a jpg here:
 
  http://cytomic.com/files/dsp/SkfInputMixing.jpg
 

 i don't see how one analyzes that circuit since c1 and c2 are not
 connected to
 any other impedances.  there is no way to determine what the two
 capacitors do.  it's really a signal flow diagram (like we do with DSP)
 but with
 two mysterious elements added.

 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp
 links http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sallen Key with sin only coefficient computation

2014-12-21 Thread Andrew Simper
 I've seen in many Sallen Key circuits people stick the input signal into
 various points to generate some different responses, but always the high
 pass is only 1 pole.

 i haven't seen that with the SK.  for HPF, i've only seen it with the the
 R's and C's swapped.  like with
 http://sim.okawa-denshi.jp/en/OPseikiHikeisan.htm .

I'm referring to the MS20 and Steiner Parker filters, and also copies
of those (eg Doepfer A106), they don't do a 2 pole high pass, so can't
generate a proper notch or peaking response.

MS20 v2: You can see here in the first filter they stick the input
into the base of the second cap (C2):
http://machines.hyperreal.org/manufacturers/Korg/MS-synths/schematics/KLM-307.GIF

Steiner Parker: Also here, although this is a partially differential
design, they do the same thing, a bandpass is generated by putting the
input to the base of the first cap, the high pass into the second cap
(middle one) - which gives a correct bandpass response, but again only
a 1 pole high pass:
http://www.cgs.synth.net/modules/cgs35_syntha_vcf.html . The last cap
is a differential input of the first one, so they both join to get
filtered by the middle cap non-differentially.

Andy
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Sallen Key with sin only coefficient computation

2014-12-20 Thread Andrew Simper
Hi Guys,

Something I've had on the backburner for a while, but now that I've finished
my new product I've had time to complete it.

I've seen in many Sallen Key circuits people stick the input signal into
various points to generate some different responses, but always the high
pass is only 1 pole. A while back I came up with a version that can also
generate a 2 pole high pass, in fact it can generate all the shapes you can
by mixing the outputs of an SVF, including notch and peaking or any other
summation.

Like I did with the SVF some time ago I've also worked out the coefficients
for the SKF using Sin only and put it in state increment form. Thanks again
to Teemu for pointing out the sin(w) and sin(2*w) sine / cosine generator
form as being very low noise and having high accuracy on a KVR thread on
efficient sine generation.
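
For anyone who hasn't seen that style of oscillator, the closely related
"magic circle" coupled form has the same flavour (a sin-only coefficient and
state += coeff * state updates); this is my sketch, not necessarily the exact
form from the KVR thread:

#include <math.h>

typedef struct { float k, u, v; } SinOsc;

void sinosc_init(SinOsc *o, float freq, float fs)
{
    float w = 2.0f * 3.14159265f * freq / fs;
    o->k = 2.0f * sinf(0.5f * w);   /* coefficient goes to 0 as freq does */
    o->u = 1.0f;                    /* ~cosine state */
    o->v = 0.0f;                    /* ~sine state */
}

float sinosc_tick(SinOsc *o)
{
    /* staggered update: using the freshly updated u keeps the state pair
       on a closed ellipse, so the amplitude never drifts */
    o->u -= o->k * o->v;
    o->v += o->k * o->u;
    return o->v;
}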

Like I did a while back with the SVF I've also calculated the DF1
coefficient to SKF mapping with mix parameters, which turns out to be
identical to the SVF one anyway.

https://cytomic.com/files/dsp/SkfLinearTrapezoidalSin.pdf

and all the other papers (including the SVF version of the same thing I did
a while back) are always available here:

www.cytomic.com/techincal-papers

All the best,

Andy

-- cytomic -- sound music software --
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Fast exp2() approximation?

2014-09-03 Thread Andrew Simper
On 4 September 2014 02:53, robert bristow-johnson r...@audioimagination.com
wrote:

 On 9/3/14 2:25 PM, Stefan Stenzel wrote:

 On 03 Sep 2014, at 18:00, robert bristow-johnson rbj@audioimagination.com wrote:

 […]

 Feeding this into my approximator gives me these equations for some
 orders:
 ...
 1.0 + 0.6930089686*x + 0.2415203576*x^2 + 0.0517931450*x^3 +
 0.0136775288*x^4

 this one *should* come out the same as mine.  but doesn't exactly.

 Stefan, is your error weighting function inversely proportional to 2^x
 (so that the Tchebyshev or maximum error is proportional to 2^x)?  this
 minimizes the maximum error in *pitch* (octaves) or in loudness (dB).

 I looked at both functions’ error twice, once with approximation(x)-f(x)
 and approximation(x)/f(x)
 Seems that my function is better at minimising the difference, while
 yours minimises the
 relative error.


 yup.  so yours is the *non*-relative difference.


For an exponential tuning approximation you want to minimise the relative
error like RBJ has done, but I'm not sure how much audible difference this
will make within a single octave - the order of the poly will most likely
dominate. Also if you use this function in circuit modelling /
waveshaping then you also want to make sure as many derivatives as possible
are continuous at the endpoints, since this lowers aliasing.
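
As a rough sketch of how such a polynomial gets used in a fast exp2(),
taking Stefan's 4th-order coefficients above as the fit of 2^x on the
fractional interval [0, 1] (the range split and ldexpf() scaling are my
own framing, not from the thread):

#include <math.h>

float fast_exp2(float x)
{
    float xi = floorf(x);   /* integer part */
    float xf = x - xi;      /* fractional part, in [0, 1) */

    /* 4th-order fit of 2^xf on [0, 1], evaluated with Horner's method */
    float p = 1.0f + xf * (0.6930089686f
            + xf * (0.2415203576f
            + xf * (0.0517931450f
            + xf *  0.0136775288f)));

    return ldexpf(p, (int)xi);   /* exact scaling by 2^integer-part */
}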

Andy
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Linear Trap SVF Sin

2014-06-29 Thread Andrew Simper
In a recent KVR thread about sin oscillators I posted a method based
on a tan coefficient trapezoidal integrated SVF. Teemu Voipio
(mystran) pointed out a version that was calculated directly using
complex numbers and only used sin for the coefficients and a state +=
coeff * val type update, so at low frequencies the coefficients are all near
zero, which gave this form better numerical properties.

Also a guy called Adriano Mitre did some research into various biquads
with fixed point computation and has found the old
form I posted ages ago to be very good. I'll leave it to him to
publish exact details, but I wanted to give him the very best I could
offer:

So here it is using sin only for the coefficients:
http://www.cytomic.com/files/dsp/SvfLinearTrapezoidalSin.pdf

Which is linked from this page:
http://www.cytomic.com/technical-papers

These are all just numerical manipulations of the same basic nodal
equations / state space equations to try and squeeze a little more
precision out of things, so thanks to Teemu for the idea of using sin
only for the coefficient calculation.

Also at the bottom I have shown the (horrible) relation between
regular DF1 coefficients and the svf form. I recommend not using this
and instead deriving the correct mix terms from analog prototypes, but
I did this to help out Adriano.

All the best,

Andrew Simper

-- cytomic -- sound music software --
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Linear Trap SVF Sin- generalized and simple analog appromation works well.

2014-06-29 Thread Andrew Simper
On 29 June 2014 16:05, socialmedia soc...@monotheo.biz wrote:
 My general comment on this, and several discussions on KvR and similar
 discussion elsewhere, is this. First of all they accept the term State
 variable filter. And then apply advanced mathematics to solve it. And then
 realize a highly inefficient filter, usually with oversampling.

 State variable filter, was a name for a particular way of doing a 12dB
 filter in the analog domain, AFAIK.

No. An SVF is a very particular low noise analog structure from which
you can get any generic biquad response, and with a trapezoidal SVF
you can generate the same responses as you can with a DF1 biquad, so
this is useful for exact parametric eq shapes, as well as scientific
grade notch filtering and a bunch of other applications where a very
accurate linear filter is desirable.


 That can be done with extremely simple math.

 in = in - (buf2 * res);               // subtract output for negative feedback (resonance)
 buf1 = buf1 + ((in - buf1) * cut);    // one pole (normalized positive feedback)
 buf2 = buf2 + ((buf1 - buf2) * cut);  // second pole (normalized positive feedback)

Here is a forward Euler type 4 pole cascade with negative feedback (I'm
writing out the z terms just to make it clearer what is going on):

v1z = v1
v2z = v2
v3z = v3
v4z = v4
v1 += cut * (in - v1z - res*v4z)
v2 += cut * (v1 - v2z)
v3 += cut * (v2 - v3z)
v4 += cut * (v3 - v4z)

Now you can cut this down by one section, but you get more passband
cut with increased resonance:

v1z = v1
v2z = v2
v3z = v3
v1 += cut * (in - v1z - res*v3z)
v2 += cut * (v1 - v2z)
v3 += cut * (v2 - v3z)

You can drop another section, but the results are terrible as you get
loads of passband cut with resonance:

v1z = v1
v2z = v2
v1 += cut * (in - v1z - res*v2z)
v2 += cut * (v1 - v2z)

This is what you have done, and I don't recommend it, there are much
better low cpu filter structures than this (if you are interested in
that sort of thing). They actually used a 3 pole version in the
Xpander, but mostly people use 4 pole since the passband cut is too
great otherwise.

It is possible to construct a very low cpu Sallen Key as well as an SVF
that has exact trapezoidal shapes without oversampling and that you
can apply trivial non-linearities to. The paper I have presented has
very little to do with any of this, I'm trying to generate the
lowest possible noise filter structure that has exact amplitude and
phase matching a continuous biquad response at DC, cutoff, and infinity
(infinity being mapped to nyquist for the digital filter).

Andy
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] [admin] Re: Simulating Valve Amps

2014-06-25 Thread Andrew Simper
On 25 June 2014 07:27, Ethan Duni ethan.d...@gmail.com wrote:
 Ethan: This seems kind of pedantic. It's still an iterative solution to the
 underlying model. You've just offloaded the iterations to happen before
 runtime, and then added another layer of approximation at runtime to
 interpolate the table

Thanks Ethan for your contribution, I couldn't agree more with
everything in your entire post.  I hope you will have more luck
communicating these concepts than I did. I think a lot of the problem
is lack of familiarity for some people about the basic machinery we
are talking about. To become familiar requires not just time and
effort, but most of all willingness to learn methods that may be
helpful.

But it is impossible to teach someone that doesn't actually want to
learn. Saying you want to learn and you are open to new ideas doesn't
count as actually wanting to learn. A willingness to learn is
demonstrated by getting down to it and actually working through
examples and trying out the methods on basic problems and building
your way up as you go.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread Andrew Simper
On 26 June 2014 03:11, robert bristow-johnson r...@audioimagination.com wrote:
 well, in the year 2014, let's consider that relative cost.  how expensive is
 a 1/2 MB in a computer with 8 or more GB?  unlike MIPS, which increase
 linearly with the number of simultaneous voices and such, a large lookup
 table can be used in common with every voice

 both speed and memory have been increasing according to, roughly, Moore's
 Law.  but, because of the speed of propagation of electrons, i am seeing a
 wall approaching on speed (not considering parallel or multiprocessing which
 is a complication that hurts my old geezer brain to think about - just the
 scheduling is a fright), but with 64-bit words (and 64-bit addresses), i am
 not seeing a similar wall on memory.

 but i am fully aware of the tradeoff.  ...

 if it's a Freescale 56K chip in a stompbox, your point is well-taken.


I think the only way to tell is profiling on the target platform. The
point here is that huge table lookups are likely not going to be the
best approach on a modern computer. Everyone knows to keep as much as
possible in the cache and to avoid large LUTs where they can, but here
is a quick reminder video of where things are right now, and where they
are headed in the future:

http://channel9.msdn.com/Events/Build/2014/4-587
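If anyone wants to see this on their own machine, here is a crude,
illustrative C sketch of the kind of test I mean (a real comparison
should of course use your actual processing loop and access pattern):

#include <math.h>
#include <stdio.h>
#include <time.h>

#define TABLE_BITS 22                /* 4M floats = 16 MB, bigger than cache */
#define TABLE_SIZE (1u << TABLE_BITS)
static float table[TABLE_SIZE];

int main(void)
{
    for (unsigned i = 0; i < TABLE_SIZE; i++)
        table[i] = tanhf(8.0f * ((float)i / TABLE_SIZE - 0.5f));

    unsigned rng = 1;
    float acc = 0.0f;
    clock_t t0 = clock();
    for (int n = 0; n < 20000000; n++) {     /* scattered table lookups */
        rng = rng * 1664525u + 1013904223u;  /* LCG makes the index jump */
        acc += table[rng >> (32 - TABLE_BITS)];
    }
    clock_t t1 = clock();
    rng = 1;
    for (int n = 0; n < 20000000; n++) {     /* compute tanhf directly */
        rng = rng * 1664525u + 1013904223u;
        acc += tanhf(8.0f * ((float)(rng >> 9) / (1 << 23) - 0.5f));
    }
    clock_t t2 = clock();
    printf("lookup %.3fs  direct %.3fs  (%g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, acc);
    return 0;
}

On a machine where the table misses cache on most lookups, the direct
computation can win even though it does far more arithmetic.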


Re: [music-dsp] Simulating Valve Amps

2014-06-25 Thread Andrew Simper
PS: the keyword I left out here is "memory bound".


On 26 June 2014 12:31, Andrew Simper a...@cytomic.com wrote:
 On 26 June 2014 03:11, robert bristow-johnson r...@audioimagination.com 
 wrote:
 well, in the year 2014, let's consider that relative cost.  how expensive is
 a 1/2 MB in a computer with 8 or more GB?  unlike MIPS, which increase
 linearly with the number of simultaneous voices and such, a large lookup
 table can be used in common with every voice

 both speed and memory have been increasing according to, roughly, Moore's
 Law.  but, because of the speed of propagation of electrons, i am seeing a
 wall approaching on speed (not considering parallel or multiprocessing which
 is a complication that hurts my old geezer brain to think about - just the
 scheduling is a fright), but with 64-bit words (and 64-bit addresses), i am
 not seeing a similar wall on memory.

 but i am fully aware of the tradeoff.  ...

 if it's a Freescale 56K chip in a stompbox, your point is well-taken.


 I think the only way to tell is profiling on the target platform. The
 point here is that huge table lookups are likely not going to be the
 best approach on a modern computer. Although everyone is aware of
 keeping as much as possible in the cache and trying not to use large
 LUTs if you can help it, here is a quick reminder video of where
 things are right now, and where they are headed in the future:

 http://channel9.msdn.com/Events/Build/2014/4-587


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
On 23 June 2014 17:11, Ivan Cohen ivan.co...@orosys.fr wrote:
 Hello everybody !

 I may be able to clarify a little the confusion here...

Thanks Ivan for your great email contribution. I will only reply to
the one and only correction / clarification to what I have posted
previously.


 The bilinear transform is a tool allowing people to get from a Laplace
 transfer function H(s) an equivalent Z transfer function H(z). It is based
 on the fact that z is precisely equivalent to exp(sT) with T the sampling
 period. So, s equals 1/T * ln(z), which may be approximated with its first
 Taylor series term, 2/T * (z - 1) / (z + 1).

Thanks for pointing this out clearly! The derivations I've seen of the
bi-linear transform use trapezoidal integration to get there; taking the
first Taylor series term of 1/T ln(z) makes more sense as a way to
distinguish it from trapezoidal integration, even though it results in
the same thing. I would love to see the original derivation by Tustin,
in particular where the idea came from; I've searched but can't find it.
Wikipedia / Oppenheim doesn't help here either:
...where T is the numerical integration step size of the trapezoidal
rule used in the bilinear transform derivation.[1]
[1] Oppenheim, Alan (2010). Discrete Time Signal Processing Third
Edition. Upper Saddle River, NJ: Pearson Higher Education, Inc. p.
504. ISBN 978-0-13-198842-2.

Again thanks for your post, I just hope people not only read it, but
take the time to understand it.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
On 23 June 2014 19:43, Andrew Simper a...@cytomic.com wrote:
 On 23 June 2014 17:11, Ivan Cohen ivan.co...@orosys.fr wrote:
 Hello everybody !

 I may be able to clarify a little the confusion here...

 Thanks Ivan for your great email contribution. I will only reply to
 the one and only correction / clarification to what I have posted
 previously.


 The bilinear transform is a tool allowing people to get from a Laplace
 transfer function H(s) an equivalent Z transfer function H(z). It is based
 on the fact that z is precisely equivalent to exp(sT) with T the sampling
 period. So, s equals 1/T * ln(z), which may be approximated with its first
 Taylor series term, 2/T * (z - 1) / (z + 1).

 Thanks for pointing this out clearly! The derivations I've seen of the
 bi-linear transform use trapezoidal integration to get there; taking the
 first Taylor series term of 1/T ln(z) makes more sense as a way to
 distinguish it from trapezoidal integration, even though it results in
 the same thing. I would love to see the original derivation by Tustin,
 in particular where the idea came from; I've searched but can't find it.
 Wikipedia / Oppenheim doesn't help here either:
 ...where T is the numerical integration step size of the trapezoidal
 rule used in the bilinear transform derivation.[1]
 [1] Oppenheim, Alan (2010). Discrete Time Signal Processing Third
 Edition. Upper Saddle River, NJ: Pearson Higher Education, Inc. p.
 504. ISBN 978-0-13-198842-2.

 Again thanks for your post, I just hope people not only read it, but
 take the time to understand it.


Ok, I'm still stumped here. Can someone please show me a reference to
how the bi-linear transform is created without using trapezoidal
integration?

I found this: http://www.josiahland.com/archives/1178 , but trapezoidal
integration is used in the middle section: (b - a) * (f(a) + f(b))/2

Likewise this:
http://www.d-filter.ece.uvic.ca/SupMaterials/Slides/DSP-Ch11-S6,7.pdf
also uses trapezoidal integration, on frame 6 slide 7.

And again here: http://donalprice.com/dsp/bilinear-transform/ , where
trapezoidal integration is used in the derivation,

and so on.
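In each of them the key step is the same one: apply the trapezoidal rule
to the integrator y' = x and identify the result with 1/s. Sketching it
in LaTeX:

y[n] = y[n-1] + \frac{T}{2}\left(x[n] + x[n-1]\right)
\;\Rightarrow\;
Y(z)\,(1 - z^{-1}) = \frac{T}{2}\,X(z)\,(1 + z^{-1})
\;\Rightarrow\;
\frac{Y(z)}{X(z)} = \frac{T}{2}\,\frac{z+1}{z-1}
\;\Rightarrow\;
s \leftrightarrow \frac{2}{T}\,\frac{z-1}{z+1}.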

Can anyone help out here?


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
Here is a quote from one of my first replies to you Robert:

--
 of course a VCF driven by a constantly changing LFO waveform (or its digital
 model) is a different thing.  i was responding to the case where there is an
 otherwise-stable filter connected to a knob.  sometimes the knob gets
 adjusted before the song or the set or gig starts and never gets moved after
 that for the evening.  for filter applications like that, i am not as
 worried myself about the right time-varying behavior (whatever right
 is).

I agree that if the knobs aren't touched, and there is either double
precision or some error correction feedback to make a fixed point
implementation work properly, then a DF1 or other abstract filter
realisation works well and has a low cpu overhead.
--

I have time and time again stated that in the LTI case, with enough
precision, _any_ implementation form of an abstract filter derived from
the bi-linear transform will be the same as direct trapezoidal
integration. The differences come with finite precision and time varying
behaviour.


 In the LTI case, with enough precision, then any implementation
 structure (if done properly) will result in the same output.


 no it won't!  digital filters implemented from analog prototypes using, say,
 impulse invariant will *not* result in the same output as those implemented
 from the same analog prototype using bilinear transform.

yes it will, you are taking this out of context: we are talking about the
bi-linear transformed filter versus direct trapezoidal integration. The
two will match perfectly in the LTI case (given infinite processing
resolution).

I will reply to the rest once you can agree with this short snippet,
since you seem unable to read what I actually write, and instead make
things up.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
Ok, but where does
On 23 June 2014 22:59, robert bristow-johnson r...@audioimagination.com wrote:
 On 6/23/14 10:50 AM, Andrew Simper wrote:

 Ok, I'm still stumped here. Can someone please show me a reference to
 how the bi-linear transform is created without using trapezoidal
 integration?


 not that you wanna hear from me, but the usual derivation in textbooks
 goes something like this:


z  =  e^(sT)

   =  e^(sT/2) / e^(-sT/2)

   =approx   (1 + sT/2)/(1 - sT/2)   (hence the bilinear approximation)


 solving for s gets


 s  =approx  2/T * (z-1)/(z+1)


Ok, but what are the origins of that approximation? Where did it
actually come from? Looking at it in hindsight is fine, but doesn't
tell me anything.


[music-dsp] Derivation of the Tustins method (was Re: Simulating Valve Amps)

2014-06-23 Thread Andrew Simper
Ok, so what I'm really asking is why did someone (Tustin?) decide to
make this substitution?

exp (sT) = exp (sT/2)  / exp (-sT/2)

which can be written:

exp (sT/2 - (-sT/2))



On 23 June 2014 23:58, Andrew Simper a...@cytomic.com wrote:
 Ok, but where does
 On 23 June 2014 22:59, robert bristow-johnson r...@audioimagination.com 
 wrote:
 On 6/23/14 10:50 AM, Andrew Simper wrote:

 Ok, I'm still stumped here. Can someone please show me a reference to
 how the bi-linear transform is created without using trapezoidal
 integration?


 not that you wanna hear from me, but the usual derivation in textbooks
 goes something like this:


z  =  e^(sT)

   =  e^(sT/2) / e^(-sT/2)

   =approx   (1 + sT/2)/(1 - sT/2)   (hence the bilinear approximation)


 solving for s gets


 s  =approx  2/T * (z-1)/(z+1)


 Ok, but what are the origins of that approximation? Where did it
 actually come from. Looking at it in hindsight is fine, but doesn't
 tell me anything.


Re: [music-dsp] Derivation of the Tustins method (was Re: Simulating Valve Amps)

2014-06-23 Thread Andrew Simper
Here is a reply from Ivan to the old thread, that I am including here
in this new thread:

On 24 June 2014 00:25, Ivan Cohen ivan.co...@orosys.fr wrote:
 Not sure about what you mean here, but to get these approximations, you use
 the Taylor series of exp(x) and the series expansion of ln(x):

 exp(x) = sum_(k=0 to N) x^k / k!
 exp(x) = 1 + x + x^2/2! + x^3/3! + ...

 ln(x) = 2 * sum_(k=0 to N) 1/(2k+1) * ((x - 1)/(x + 1))^(2k+1)
 ln(x) = 2 ( (x - 1)/(x+1) + 1/3 ((x - 1)/(x + 1))^3 + ... )

 So, if we do the normal substitution, we can write this :

 z = exp (sT)
 z = exp (sT/2) * exp (sT/2) = exp (sT/2) / exp (-sT/2)
 z = (1 + sT/2 + (sT/2)² / 2... ) / (1 - sT/2 + (-sT/2)² / 2 + ...)

 With the inverse mapping :

 s = 1/T * ln(z)
 s = 2/T * ( (z - 1)/(z+1) + 1/3 (z-1)^3 /(z+1)^3 + ...)

 At the first order you get the approximations z = (1 + sT/2) / (1 - sT/2) or
 s = 2/T (z -1)/(z+1)


 Ivan COHEN
 http://musicalentropy.wordpress.com


I'm after the reason behind the normal substitution: what prompted
someone (Tustin?) to come up with it?

Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
-- cytomic -- sound music software --


On 23 June 2014 21:58, robert bristow-johnson r...@audioimagination.com wrote:
 On 6/23/14 12:43 AM, Andrew Simper wrote:

 On 23 June 2014 11:25, robert bristow-johnsonr...@audioimagination.com
 wrote:

 On 6/22/14 10:48 PM, Andrew Simper wrote:

 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.


 Nyquist?  are you sure about that?

 Yes,

 PS: I was agreeing with you by saying Yes, and thanking you for
 spotting the mistake. It's called English,


 it *is* English.  i responded serially.  Yes to the question asked meant
 that you *were* sure about it.  later i see that we agreed that the
 discretized filter using trapezoidal integration behaves at Nyquist the same
 way the original continuous-time filter behaves at infinity.  funny thing
 is, so does the discretized filter using bilinear transform.  might one
 suspect they have something in common?


   and it is a bit vague at
 times, but please try and read what I'm saying not just take random
 word snippets and construct your own interpretation of them so you can
 vent.


 it wasn't random.  i asked a clear and challenging question and you answered
 it with the opposite binary word than the answer you evidently intended.


ok, let's try again:

 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.

 Nyquist?  are you sure about that?

Ahh yes, thanks for spotting that, I am so used to having Nyquist warped
to infinity that I use them interchangeably in my mind. What I meant
was: at DC, cutoff, and infinity (which is Nyquist in the warped
digital case).

Do you get it now? This is a fairly standard turn of phrase that you
decided to jump on and interpret out of context.


Re: [music-dsp] Simulating Valve Amps

2014-06-23 Thread Andrew Simper
On 24 June 2014 06:37, Urs Heckmann u...@u-he.com wrote:

 On 23.06.2014, at 19:18, robert bristow-johnson r...@audioimagination.com 
 wrote:

 it *is* precisely equivalent to the example you were describing with one 
 more iteration than you were saying was necessary.

 Now I'm really angry I wasted so much time. An example is just that, an 
 example. I deliberately kept it very simple to show the principle. In 
 Germany we have a saying that roughly translates to pass someone a finger 
 and he'll grab the whole hand. That's what you did. Good luck storing a one 
 pole filter in a 3 dimensional look up table. You think we haven't thought of 
 that in five years of researching that field? Even if you succeed for a 
 measly non-linear one pole filter, try look up tables for a two pole filter. 
 Or one for a six pole filter, which is the minimum if you want to sound close 
 to that transistor ladder filter and which makes, uhm, something like 18 
 dimensions? THAT is a scary concept.

Please keep in mind Urs that RBJ is being a grumpy old engineer who is
resistant to change just for the sake of it. He has a proven track record
of taking anything you say and doing his grumpy best to misinterpret what
you have written just to be difficult, since stalling means he doesn't
have to learn anything new.


 My one most important question was never answered though: What advantage does 
 a DF-anything filter have over a direct implementation when modeling a 
 circuit in software? My vague feeling: There is no answer.

 I had to get this off my chest. I'm off for a drink.

 - Urs

ATTENTION RBJ: Before you reply to this, I think what Urs left out is that
HE IS NOT TALKING ABOUT THE LTI CASE. In the LTI case (i.e. a static
filter that never changes from now to the end of time), and with infinite
precision, there are loads of equivalent methods that will produce
identical results, e.g. if you use H(s) and the bilinear transform with an
implementation structure of DF1/2/transposed/digital wave/lattice/... or
you use direct trapezoidal integration.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 um, it's a semantic thing that i just wrote about in response to Urs.  i
 don't use the term myself, but i am defining nodal analysis the way i see
 virtually all other lit doing it.  when spice is modeling non-linear
 circuits, it is using Kirchoff's current law on every node, Kirchoff's
 voltage law on every loop, and the explicit volt-amp characteristic on every
 device connected between those nodes.  those are the physical axioms.  if
 you wanna call that nodal analysis, fine by me (but i don't use that term,
 myself).

 but if you're using Spice to give you a quickie frequency response to an
 ideal op-amp circuit with resistors and capacitors, it's doing what we call
 the node-voltage method (in the s-domain) which **is** limited to
 independent and dependent sources and impedances.  and it solves a system of
 linear equations.  you need to count each of these parts between nodes as
 something that obeys the s-domain version of Ohm's law.  that's the only way
 you can write the linear equations and solve them to get a meaningful and
 consistent frequency response which is a function only of the circuit with
 respect to its input and output terminals.  that frequency response is not a
 function of the applied signal or anything outside of the circuit.


Phew! And I thought I would have to pull all my products from sale
because they didn't work! ;)

Another slight possible confusion is that most circuit simulation
isn't plain nodal analysis but modified nodal analysis, which is
just done so everything can be stuck into a matrix form.



 i'm no more an expert on DF-anything.  i am just saying that, when it's
 LTI (and i am referring to some hand-written papers i've seen of yours a
 couple years ago) it's gonna boil down to an H(z).  then there is, as far as
 the input-output behavior and frequency response goes, an equivalence of the
 various topologies that get the same H(z).  the different topologies have
 different behaviors regarding clipping and quantization noise and
 time-varying behavior (less worried about the occasional knob twist than for
 the filter modulated with a tremelo signal).  it is *for* *those* *reasons*
 i have an interest in the different topologies, but not for calculating
 coefficients to get a desired frequency response.

LTI = no changes in parameters, so sure, then it comes down to the
numerical properties of the realisation.


 You just store a single state per capacitor actually, please read these
 for
 some examples: www.cytomic.com/technical-papers



 i've been there a while ago.  are there new papers now?

I have updated a few of them, and added a breakdown showing how a one
pole active / passive low pass is solved, which leads to an identical
implementation when using nodal analysis (a method that has been around
for ages), and compared this to some newly invented methods.



 I would guess that for guitar circuits if the cutoff frequencies are
 around
 80 Hz at the lowest, then after two seconds of not moving any knobs
 any linear filter that hasn't blown up would settle to sound the same
 given
 infinite processing resolution, but I have not done research into this so
 I
 would not bank on it, especially when direct circuit simulation is so
 straight forward, and then the non-linear case can also be handled with
 identical machinery at the cost of more cpu.


 1/(2 seconds) is about 1/2 Hz.  two orders of magnitude between 0.5 Hz and
 80.  i would be surprised, outside of a chaotic system (which is
 non-linear on steroids), if there was any transient left after even a half
 second.

If time varying behaviour is important then even half a second is too
long. It depends on the specific project as to what is an appropriate
tradeoff of accuracy and cpu use. For a non-modulatable linear tone stack
I would say a double precision 3 pole DF1 is a good low cpu solution, but
there should be decent bandwidth limiting of automation to prevent
problems. The thing with music is that fast automation can sound really
cool too, so if it's only a slight cpu overhead then I think it is worth
supporting that.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 It is different for a circuit that isn't a 1 pole RC.


 no, it's whenever an integrator (1/s in the s universe) is implemented
 numerically with the trapezoid rule.  doesn't matter whether it's a C or
 anything else.

RBJ: please show me the derivation for a 2 pole Sallen Key using the
bi-linear transform, and then I'll show you the difference between using
trapezoidal integration (which preserves each capacitor's state directly
from the time domain structure of the circuit) and the bi-linear
transform as used by engineers (which operates on the s-domain generic
2 pole biquad Laplace representation, so throws away the original
structure and state of the circuit).

In the one pole case they come down to the same thing (there is only
one capacitor), but the bi-linear transform is actually derived from
the time domain trapezoidal rule.
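To make the one pole equivalence concrete, a sketch for the RC low pass
H(s) = 1/(1 + sRC):

% bilinear transform: substitute s -> (2/T)(z-1)/(z+1)
H(z) = \frac{1}{1 + \frac{2RC}{T}\,\frac{z-1}{z+1}}
     = \frac{T(z+1)}{T(z+1) + 2RC(z-1)}

% direct trapezoidal integration of RC v' = v_in - v
v[n] = v[n-1] + \frac{T}{2RC}\Big((v_{in}[n] - v[n]) + (v_{in}[n-1] - v[n-1])\Big)

Collecting the v[n] terms of the second line and taking the z transform
gives exactly the same H(z) as the first, which is the point: with one
capacitor there is nothing for the transfer function view to throw away.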


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.



 Nyquist?  are you sure about that?

Yes, thanks for spotting that, I am so used to having Nyquist warped
to infinity that I use them interchangeably in my mind. What I meant
was: at DC, cutoff, and infinity (which is Nyquist in the warped
digital case).


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
RBJ: direct integration, which I am proposing is a good idea, can be
solved in many ways; what results is a set of linearised equations to be
solved. These can be for nodal voltages, or for differences in voltages,
the latter being called state space. Have a read of this:

DISCRETIZATION OF PARAMETRIC ANALOG CIRCUITS FOR REAL-TIME SIMULATIONS
Kristjan Dempwolf, Martin Holters and Udo Zölzer
http://dafx10.iem.at/papers/DempwolfHoltersZoelzer_DAFx10_P7.pdf

They cover it all much better than I can via email, but everything shown
is a useful variation of MNA-type (possibly non-linear) nodal analysis.
What you end up with is a trapezoidal integrated model that preserves
the state of each capacitor.
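For anyone who hasn't seen it, here is a sketch of the capacitor part of
that machinery in C: the standard trapezoidal companion model, with the
struct and function names purely illustrative:

/* Trapezoidal companion model of a capacitor, as used in (M)NA:
     i[n] = (2C/T) * (v[n] - v[n-1]) - i[n-1]
   so each step the capacitor is replaced by a conductance geq = 2C/T
   in parallel with a source ieq that carries the state -- this is
   exactly what "preserves the state of each capacitor" means. */
typedef struct { double c, geq, v, i; } cap;

static void cap_set_rate(cap *x, double T) { x->geq = 2.0 * x->c / T; }

/* equivalent source current to stamp into the nodal equations:
   the capacitor branch obeys i = geq*v - ieq */
static double cap_ieq(const cap *x) { return x->geq * x->v + x->i; }

/* after the (possibly non-linear) nodal solve returns the new voltage */
static void cap_update(cap *x, double vn)
{
    double in = x->geq * vn - cap_ieq(x);
    x->v = vn;
    x->i = in;
}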

Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 you
 have a function of two variables that you can explicitly evaluate
 using your favourite root finding mechanism, and then use an
 approximation to avoid evaluating this at run time. This 2D
 approximation is pretty efficient and will be enough to solve this
 very basic case. But each non-linearity that is added increases the
 space by at least one dimension, so your function gets big very
 quickly and you have to start using a non-linear mapping into the
 space to keep things under control.


 i haven't been able to decode what you just wrote.

This is important to understand if you want to use massive tables to
model this stuff. Perhaps read Yeh's thesis paper? I have already
posted this to another thread, but here it is again:

... but missed David Yeh's dissertation
https://ccrma.stanford.edu/~dtyeh/papers/DavidYehThesissinglesided.pdf
which contains a great description of MNA and how it relates to the
DK-method. I highly recommend everyone read it, thanks David!!

I really hope an improved DK-method emerges that handles multiple
nonlinearities more elegantly than it currently does. A couple of things
to note here: in general, this method uses multi-dimensional tables to
pre-calculate the difficult implicit equations that solve the
non-linearities, but as the number of non-linearities increases so does
the size of your table, as noted in 6.2.2:

The dimension of the table lookup for the stored nonlinearity in
K-method grows with the number of nonlinear devices in the circuit. A
straightforward table lookup is thus impractical for circuits with
more than two transistors or vacuum tubes. However, function
approximation approaches such as neural networks or nonlinear
regression may hold promise for efficiently providing means to
implement these high-dimensional lookup functions.


As you add more non-linearities to the circuit it becomes less
practical to pre-calculate tables to handle the situation. Also
changes caused by things like potentiometers add extra dimensions, and
you will spend all your time doing very high dimensional table
interpolation and completely blow the cache.
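(For scale: a grid of $m$ points per axis needs $m^d$ table entries, so
even a coarse $m = 100$ at $d = 6$ is already $100^6 = 10^{12}$ entries,
before you pay for the $2^d$-corner interpolation at run time.)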

 *sigh*  it's a function.  given his parameters, g and s, then x[n] goes in,
 iterate the thing 50 times, and an unambiguous y[n] comes out.  doesn't
 matter what the initial guess is (start with 0, what the hell).  i am saying
 that *this* net function is just as deserving a candidate for modeling as is
 the original tanh() or whatever.  just run an offline program using MATLAB
 or python or C or the language of your delight.  get the points of that
 function defined with a dense look-up-table.  then consider ways of modeling
 *that* directly.  maybe leave it as a table lookup.  whatever.  but at least
 you can see what you're dealing with and use that net function to help you
 decide how much you need to upsample.

*sigh* *sigh* *sigh* please at least try and understand what I wrote
before sighing at me! Yes, I agree that for low dimensional cases this
is a good approach, but for any realistic circuit things get
complicated and inefficient really quickly and you are better off with
other methods.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
 *sigh* *sigh* *sigh* please at least try and understand what I wrote
 before sighing at me! Yes, I agree that for low dimensional cases this
 is a good approach, but for any realistic circuit things get
 complicated and inefficient really quickly and you are better off with
 other methods.

What I mean by other methods is that you can use low dimensional
tables/approximations for local non-linear chunks of circuit, but then
use a global root finder for the inter-connection of these chunks.
Here is an example of how to do this on a realistic circuit:

GUITAR PREAMP SIMULATION USING CONNECTION CURRENTS
Jaromir Macak
http://dafx13.nuim.ie/papers/27.dafx2013_submission_10.pdf

The drawback of this method is that it still requires usage of
numerical algorithm to solve the equations but if the inner blocks are
approximated, then the number of unknown variables to be solved
numerically is equal to number of connection currents, which is much
lower than original unknown variables.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
On 23 June 2014 11:25, robert bristow-johnson r...@audioimagination.com wrote:
 On 6/22/14 10:48 PM, Andrew Simper wrote:

 I think the important thing to note here as well is the phase.
 Trapezoidal keeps the phase and amplitude correct at dc, cutoff, and
 nyquist.


 Nyquist?  are you sure about that?

 Yes,

PS: I was agreeing with you by saying Yes, and thanking you for
spotting the mistake. It's called English, and it is a bit vague at
times, but please try and read what I'm saying, not just take random
word snippets and construct your own interpretation of them so you can
vent.


Re: [music-dsp] Simulating Valve Amps

2014-06-22 Thread Andrew Simper
On 23 June 2014 12:37, robert bristow-johnson r...@audioimagination.com wrote:
 Andy and Urs, i have been making consistent and clear points and challenges
 and the response is not addressing these squarely.

 let's do the Sallen-Key challenge, Andy.  that's pretty concrete.

With respect Robert, I have really tried to address your points;
please go and read all the links I've posted so that you get a better
picture of what I am saying.


 you pick the circuit (i suggested the one at wikipedia) so we have a common
 reference.  then you pick an R1, R2, C1, C2 (or an f0 and Q, i don't care).
 let's leave the DC gain at 0 dB.  then we'll have a common and unambiguous
 H(s).

In the LTI case, with enough precision, any implementation structure
(if done properly) will result in the same output. There is no question
here; I have already stated this any number of times. If we want to now
look at how things differ in the time varying case, which is what I was
talking about, then we can continue. Otherwise I have a feeling that
people reading this thread will be getting bored with the repetition of
these posts (I know I am).


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Andrew Simper
On 20 June 2014 23:37, robert bristow-johnson r...@audioimagination.com
wrote:

 well, Kirchoff's laws apply to either linear or non-linear.  but the
 methods we know as node-voltage (what i prefer) or loop-current do
 *not* work with non-linear.  these circuits (that we apply the node-voltage
 method to) have dependent or independent voltage or current sources and
 impedances between the nodes.


If this is the case you should tell everyone using Spice to stick to
linear components only, since the non-linear ones won't work!



   I was trying to point out that the linear analysis done by Yeh
 starts with the circuit but then throws it away and instead uses a DF1, and a
 DF1 does not store the state of each capacitor individually, so when you
 turn the knob you don't get the right time varying behaviour.


 in the steady-state (say, a second after the knob is turned) is there the
 right behavior with the DF1?


If you start the filters with silence, don't change any settings, and have
infinite processing resolution, they will be identical. I would guess that
the time it takes for artificial energy injected into a DF1 by parameter
changes to become unnoticeable depends on the particular circuit; guessing
again, I would say low frequencies with high resonance are most likely to
be problematic, and from what I have noticed sweeping downwards in
frequency is the worst case. But you are the expert on DF1s: what are the
results of the research you have conducted into these issues?



I am saying
 you don't have to throw the circuit away, you can still get an efficient
 implementation since in the linear case everything reduces to a bunch of
 adds and multiplies.


 and delay states.


You just store a single state per capacitor actually, please read these for
some examples: www.cytomic.com/technical-papers



 For non-linear modelling you need additional steps, and depending on the
 circuit there are many different methods that can be tried to find the
 best
 fit for the particular requirements


 if, the sample rate is high enough (and it *should* be pretty fast because
 of the aliasing issue) the deltaT used in forward differences or backward
 differences (or predictor-corrector) or whatever should be pretty small.
  in my opinion, if you have a bunch of memoryless non-linear elements
 connected in a circuit with linear elements (with or without memory), it
 seems to me that the simple Euler's forward method (like we learned in
 undergraduate school) suffices to model it.


In tests I've done (where I've had to simplify some parts just to get
the explicit version to be stable) it takes around 8x oversampling of a
base rate of 44.1 kHz before the single step explicit version sounds
almost identical to an implicit version, so



Andrew, i realize that you had been using something like that to emulate
 linear circuits with capacitors and resistors and op-amps.  it does make a
 difference in time-variant situations, but for the steady state (a second
 or two after the knob is twisted), i'm a little dubious of what difference
 it makes.


I would guess that for guitar circuits, if the cutoff frequencies are around
80 Hz at the lowest, then after two seconds of not moving any knobs any
linear filter that hasn't blown up would settle to sound the same given
infinite processing resolution. But I have not done research into this so I
would not bank on it, especially when direct circuit simulation is so
straightforward, and the non-linear case can then also be handled with
identical machinery at the cost of more cpu.


Re: [music-dsp] Simulating Valve Amps

2014-06-20 Thread Andrew Simper
On 20 June 2014 17:11, Tim Goetze t...@quitte.de wrote:

 [Andrew Simper]
 On 18 June 2014 21:01, Tim Goetze t...@quitte.de wrote:
  I absolutely agree that this looks to be the most promising approach
  in terms of realism.  However, the last time I looked into this, the
  computational cost seemed a good deal too high for a realtime
  implementation sharing a CPU with other tasks.  But perhaps I'll need
  to evaluate it again?
 
 The computational costs of processing the filters isn't high at all, just
 like with DF1 you can compute some simplified coefficients and then call
 process using those. Since everything is linear you end up with a bunch of
 additions and multiplies just like you do in a DF1, but the energy in your
 capacitors is preserved when you change coefficients just like it is when
 you change the knobs on a circuit.

 Yeh's work on the Fender tonestack is just that: symbolic nodal
 analysis leading to an equivalent linear digital filter.   I
 mistakenly thought you were proposing nodal analysis including also
 the nonlinear aspects of the circuit including valves and output
 transformer (which without being too familiar with the method I
 believe to lead to a system of equations that's a lot more complicated
 to solve).


Nodal analysis can refer to linear or non-linear, so sorry for the
confusion. I was trying to point out that the linear analysis done by Yeh
starts with the circuit but then throws it away and instead uses a DF1,
and a DF1 does not store the state of each capacitor individually, so
when you turn the knob you don't get the right time varying behaviour. I
am saying you don't have to throw the circuit away; you can still get an
efficient implementation, since in the linear case everything reduces to
a bunch of adds and multiplies.

For non-linear modelling you need additional steps, and depending on the
circuit there are many different methods that can be tried to find the best
fit for the particular requirements.


Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Andrew Simper
On 18 June 2014 16:15, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:

 Actually, it’s not rocket science to model a baxandall or those
 Treble/Mid/bass networks. A straight forward approach is modified nodal
 analysis, which gives you a model, that preserves the passivity of the
 filter network.

 Steffan


Indeed Steffan! I have a feeling some people are allergic to solving
basic sets of linear equations? The math behind it is about as basic as
it gets: it is literally a set of simultaneous linear equations, the kind
of stuff that is covered in high school! Things get more interesting when
you need to solve non-linear components.

As for tubes, I think more research is needed to accurately characterise
the time varying behaviour of tubes, but even existing empirical models
like the one proposed by Koren would probably be enough for audio work.
The more challenging part, I feel, is the electro-mechanical power amp +
speaker stage, where you have more complicated interactions between
non-linear semi-rigid bodies coupled to the circuit through coils and
magnets.

Re: [music-dsp] Simulating Valve Amps

2014-06-18 Thread Andrew Simper
On 18 June 2014 18:26, Tim Goetze t...@quitte.de wrote:

 ...  Thanks to
 the work of Yeh, I personally consider the tonestack a solved problem,
 or at least one of least concern for the time being.

 Cheers,
 Tim


A linear tonestack was a solved problem well before Yeh wrote any
papers. Also, I would not consider mapping component values to direct
form 1 biquad coefficients a good way to simulate a tone stack when you
can easily preserve the time varying behaviour as well by using standard
circuit simulation techniques like nodal analysis.


Re: [music-dsp] Dither video and articles

2014-03-28 Thread Andrew Simper
 On 29 March 2014 03:31, Sampo Syreeni de...@iki.fi wrote:
 On 2014-03-28, robert bristow-johnson wrote:
 On 3/28/14 12:25 PM, Didier Dambrin wrote:

 my opinion is: above 14bit, dithering is pointless (other than for 
 marketing reasons),

 14 bits???  i seriously disagree.  i dunno about you, but i still listen
 to red-book CDs (which are 2-channel, uncompressed 16-bit fixed-point).
 they would sound like excrement if not well dithered when mastered to the
 16-bit medium.

 I'd argue the same. First, it's meaningless to talk about bit depth alone.
 What we can hear is dictated first by absolute amplitude. If the user turns
 the knob to eleven, the number of bits doesn't matter: at some point you'll
 hear the noise floor, and any distortion products produced by quantization.
 That will even happen without user intervention when your work is used in a
 sampler, and because of things like broadcast compressors.. Second, at that
 point you'll also hear noise modulation, which sounds pretty nasty in things
 like reverb tails which always go to zero in the end. And third, people can
 hear stuff well below the noise floor. Even if the floor is set so low that
 you can hear it but don't really mind it, distortion products can still be
 clearly audible, and coming from hard quantization, rather annoying.

I think the important thing to remember here is that audio content varies
in dynamic range; you don't always have a near 0 dBFS signal playing all
the time (although some modern mastering comes close, but not everyone
makes pounding dance music, Didier!). Let's take classical music for
example: there will be sections of the full orchestra playing in the
recording at near 0 dBFS (around 95 dB SPL), but then quieter sections at
-45 dBFS (around 45 dB SPL). In the loud sections 16 bits without dither
may be fine, but as soon as they stop you are around 7 bits down, so you
have 9 bits left of your 16 (and this isn't even a reverb tail here, just
quiet playing!), so dither is very much needed. Don't worry about riding
the volume knob on your amp, since your ears (and eyes) already have
dynamic range processors built in to adjust the gain for you (if the
background is quiet enough).
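(As a quick check of those numbers: one bit is $20\log_{10}2 \approx 6.02$ dB, so

\frac{45\,\mathrm{dB}}{6.02\,\mathrm{dB/bit}} \approx 7.5 \text{ bits down},
\qquad 16 - 7.5 \approx 8.5 \text{ bits left}

for the quiet passages.)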

--Andy


Re: [music-dsp] Oversampling and CPU + Bandlimited Distortion Effects?

2013-11-29 Thread Andrew Simper
My approach to this sort of thing is pretty basic:

1) lower the aliasing as much as possible at the algorithm level.
there are several tricks that can be used here, not just making the
curve have smooth derivatives at the end points, although that helps.

2) have a decent baseline level of quality, but allow more
oversampling to be used on render. this way you can play live or jam
ideas with a reasonable cpu / sound quality trade-off, and then render
pristine audio for the final mix.

All the best,

Andy
--
cytomic - sound music software


On 30 November 2013 03:54, Nigel Redmon earle...@earlevel.com wrote:
 Not to Robert so much, but for anyone who hasn't thought too deeply about 
 guitar amps, maybe it's helpful to look at what you're up against.

 It's the extremely wide useful range of the distortion that's fundamental to 
 the issue. You want things to warm up a little with some mild overdrive. For 
 a round number, let's say 6 dB—and maybe it's helpful to point out that's 
 equivalent to shifting the input left one bit—to warm up the strummin'.

 Now let's talk about making the ears bleed—a modern high gain lead: shift 
 left *another* 14-15 bits…

 So…look at your curve, where you want to start warming up the tone, and where 
 it will be if you want a Soldano SLO…

 Now, think about what oversampling ratio you'll need to avoid aliasing in the 
 audio band entirely, and you'll see that's not the target you need to aim 
 for.

 (just hiked the Grand Canyon yesterday; Bright Angle Trial, from rim down to 
 the bottom (Colo river) and back to the rim.  very sore muscles today.)

 Wow, that is one nice Thanksgiving. Every once in a while, I shake my head at 
 how I've never managed to make it to the Grand Canyon this far in life, and 
 vow to make it a trip...



 On Nov 29, 2013, at 11:07 AM, robert bristow-johnson 
 r...@audioimagination.com wrote:

 well, i dunno how many real-world implementation[s] use the integral of 
 (1-x^2)^N or (1-x^N)^2 (the former was my proposal and the latter is 
 Stephan's idea).  Nigel says it doesn't apply because his premise is that 
 he'll be clipping the polynomial anyway, so i presume the case for doesn't 
 apply is that higher order overtones are generated anyway.  and that's 
 correct, given the premises.

 i can say that, in the case of integral{(1-x^2)^N dx}, for N equal to, say, 
 3, which results in a 7th-order polynomial, there is gain (around x=0) for 
 that polynomial, and you might be scaling the input so that it only soft 
 clips.  then the formula (which Julius has also used in one of his on-line 
 CCRMA classes) still applies precisely.

 but, in a case where *so* many of the derivatives are continuous at |x|=1, 
 for a small amount of overdrive (so |x| exceeds 1 a little at occasional 
 samples), there is virtually no difference between the polynomial extended 
 beyond |x| > 1 and that same polynomial spliced to the rails at |x|=1.  so 
 the formula *still* applies in that case.  and for N=3 which makes the 
 polynomial order to be 2N+1 = 7, the 4x oversampling suffices, it need not 
 be 8x.  it all depends on the polynomial, how seamlessly it splices to the 
 rails, and how much you overdrive it.  i continue to stand by that.

 this is less so the case for Stephan's model of integral{(1-x^N)^2 dx}, 
 which looks nice and linear for x < 1, but the splice to the rails is not as 
 smooth in the 2nd-derivative.

 so i am not expecting to clip the polynomial to the same extent that Nigel 
 might be.

 (just hiked the Grand Canyon yesterday; Bright Angle Trial, from rim down to 
 the bottom (Colo river) and back to the rim.  very sore muscles today.)

 L8r,

 r b-j

 On 11/29/13 10:25 AM, Nigel Redmon wrote:
 Hi Stephan,

 I don't disagree with Robert's formula at all. I'm simply saying it doesn't 
 apply. In a real implementation, you clip the signal as soon as you get 
 outside of the portion of the polynomial curve you're using. And that 
 happens very quickly. (Sure, you could say that you'll use a much higher 
 polynomial instead, but that simply wastes cycles and is not of practical 
 importance.) Modern high gain amp simulations need 90 dB or so of gain. 
 You're either clipping explicitly, or effectively. On the implementation, 
 you pick the oversampling ratio that you can live with, not the 
 oversampling ratio that is calculated to give no aliasing in the audio band.

 Nigel

 On Nov 29, 2013, at 5:06 AM, STEFFAN DIEDRICHSEN sdiedrich...@me.com  
 wrote:

 Nigel,

 Robert's formula does work, as long as you just use that polynomial. But in
 a real-world implementation, like the soft-limiting based on integrals of
 (1-x^2)^N or (1-x^N)^2, you have concatenated functions, which leads to
 very high order polynomials. Then you need to apply common sense, like the
 one mentioned below.

 Steffan
 On 29 Nov 2013, at 08:48, Nigel Redmon earle...@earlevel.com  wrote:

 Robert…If you're talking about distortion, of significance, you're 
 *always* talking about 

Re: [music-dsp] [admin] music-dsp FAQ

2013-11-17 Thread Andrew Simper
Sorry Douglas, I meant to say thanks to you.

All the best,

Andy
--
cytomic - sound music software


On 16 November 2013 09:54, Andrew Simper a...@cytomic.com wrote:

 Hi Robert,

 Thanks for your hard work in updating the list!

 Perhaps you could update the message to remind people that all html /rich
 text formatting will be converted to plain text?

 All the best,

 Andy
 --
 cytomic - sound music software


 On 16 November 2013 00:10, douglas repetto doug...@music.columbia.eduwrote:


 Heh, I guess we don't need that WARNING anymore!


 On 11/15/13 7:00 AM, douglas repetto wrote:

 ***

 OBNOXIOUS WARNING THAT YOU CAN'T MISS:
 BEWARE: Messages containing HTML, rich text, or any type of attachment
 will be silently rejected from the list! This may change soon. But for now,
 BEWARE!!!

 ***



 Hi,

 Just a reminder that if you are new to the list you should read the
 music-dsp FAQ. It contains answers to both technical _and_
 adminstrative questions that often come up on the list. If your question
 appears in the FAQ it is safe to assume that it has been discussed on the
 list many times in the past, and you should probably have a look through
 the list archives before posting your question to the list.

 http://music.columbia.edu/cmc/music-dsp/musicdspFAQ.html


 Also of interest to new and not-so-new list members:

 The music-dsp list archives
 http://music.columbia.edu/cmc/music-dsp/musicdsparchives.html

 The music-dsp source code archive
 http://www.musicdsp.org

 music-dsp books and reviews
 http://music.columbia.edu/cmc/music-dsp/dspbooks.html


 All this and more at:
 http://music.columbia.edu/cmc/music-dsp


 Hasta la pasta,
 douglas

 (this is an automated message sent out on the 1st and 15th of each month)


 --
 ... http://artbots.org
 .douglas.irving http://dorkbot.org
 .. http://music.columbia.edu/cmc/music-dsp
 ...repetto. http://music.columbia.edu/organism
 ... http://music.columbia.edu/~douglas








Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-15 Thread Andrew Simper
 i think that the delay word in particular should be *gone for good*,
 these are IIR filters with state variables which are empty on start,
 there is also the group delay and so on.
 however, ZDF is stuck as a marketing term and i think it can hardly be
 changed at this point.

 if one builds a bridge and paints it green, people may call it the
 green bridge, but once he's out of green paint and paints it red,
 they may continue calling it with the old name because it sounds cool.

 lubomir


The point here is that almost everyone has a green bridge, even before they
knew that it was green, so it is completely reasonable for them all to call
theirs the green bridge.

Naming things is difficult, but if people want to continue using the ZDF
terminology, please accept that it is then completely reasonable for
anyone who applies Laplace analysis to a circuit and then realises it via
the bi-linear transform to claim a ZDF filter, since the Laplace
transform solution is in the continuous domain, so has instant feedback,
and the bi-linear transform results from the implicit trapezoidal
numerical integration scheme.

So anyone using the RBJ cookbook stuff, realised with whatever structure
you want, qualifies as having a ZDF filter.

All the best,

Andy


Re: [music-dsp] [admin] music-dsp FAQ

2013-11-15 Thread Andrew Simper
Hi Robert,

Thanks for your hard work in updating the list!

Perhaps you could update the message to remind people that all html /rich
text formatting will be converted to plain text?

All the best,

Andy
--
cytomic - sound music software


On 16 November 2013 00:10, douglas repetto doug...@music.columbia.eduwrote:


 Heh, I guess we don't need that WARNING anymore!


 On 11/15/13 7:00 AM, douglas repetto wrote:

 ***

 OBNOXIOUS WARNING THAT YOU CAN'T MISS:
 BEWARE: Messages containing HTML, rich text, or any type of attachment
 will be silently rejected from the list! This may change soon. But for now,
 BEWARE!!!

 ***



 Hi,

 Just a reminder that if you are new to the list you should read the
 music-dsp FAQ. It contains answers to both technical _and_
 adminstrative questions that often come up on the list. If your question
 appears in the FAQ it is safe to assume that it has been discussed on the
 list many times in the past, and you should probably have a look through
 the list archives before posting your question to the list.

 http://music.columbia.edu/cmc/music-dsp/musicdspFAQ.html


 Also of interest to new and not-so-new list members:

 The music-dsp list archives
 http://music.columbia.edu/cmc/music-dsp/musicdsparchives.html

 The music-dsp source code archive
 http://www.musicdsp.org

 music-dsp books and reviews
 http://music.columbia.edu/cmc/music-dsp/dspbooks.html


 All this and more at:
 http://music.columbia.edu/cmc/music-dsp


 Hasta la pasta,
 douglas

 (this is an automated message sent out on the 1st and 15th of each month)


 --
 ... http://artbots.org
 .douglas.irving http://dorkbot.org
 .. http://music.columbia.edu/cmc/music-dsp
 ...repetto. http://music.columbia.edu/organism
 ... http://music.columbia.edu/~douglas






Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Andrew Simper

 Every time I see a valve circuit with R|C to ground off the cathode, that's

universally agreed to be feedback... But implicit, not explicit.

 I'm open to changing my definition of feedback, but I can't go with one
 that requires me to assign the direction of a wire, cause that's not how
 I imagine things. Why not go with feedback in both cases?

 As Wittgenstein said: We are engaged in a struggle with language.
 Although, for fairness, he did also say: When we can't think for
 ourselves, we can always quote.

 Dave.


Implicit is, I think, a more useful term all up. But note that you also
get an implicit equation from a basic diode clipper circuit with just one
diode in series with a resistor to ground, without a capacitor anywhere
and without feedback. This is because the voltage appears both as a
linear term (the resistor) and as an exponential term (the diode): that
is an implicit equation, and separately you have implicit numerical
integration, so really those are the terms that I think have meaning. To
be more descriptive you can also add linear or non-linear on top.
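As a concrete sketch of that clipper (assuming the resistor feeds the
node and the diode hangs to ground), the node voltage satisfies
(vin - v)/R = Is*(exp(v/vt) - 1), which you can solve per sample with a
few Newton-Raphson steps. Component values and names here are
illustrative only:

#include <math.h>

/* Solve (vin - v)/R = Is*(exp(v/vt) - 1) for the node voltage v.
   There is no capacitor and no state: the equation is implicit purely
   because v appears both linearly and inside the exponential. */
static double diode_clipper(double vin)
{
    const double R = 2200.0, Is = 1e-15, vt = 0.026;
    double v = 0.0;  /* initial guess; the previous sample's v is better */
    for (int k = 0; k < 20; k++) {
        double ed = Is * exp(v / vt);
        double f  = (vin - v) / R - (ed - Is);  /* residual        */
        double df = -1.0 / R - ed / vt;         /* d(residual)/dv  */
        double dv = f / df;
        v -= dv;                /* production code would limit |dv| so
                                   exp() stays well behaved for hot inputs */
        if (fabs(dv) < 1e-12) break;
    }
    return v;
}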

But I still don't think the word implicit is useful for marketing, although
I use it to describe things I understand my customers may not.

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Andrew Simper
 I may have misread, but the discussion seems to suggest that this
 discipline is just discovering implicit finite differencing! Is that
 really the case? If so, that would be odd, because implicit methods
 have been around for a very long time in numerical analysis.

 Max


Max, can I please give you a hug? I am beating my head against the wall
with these music-dsp people who think this stuff is somehow novel...

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Andrew Simper

 The question seems to be arriving at: if we oughtn't keep the
 potentially misleading phrase that's in common usage, and instead use
 existing, more common EE parlance, what phrasing should be used?


Direct implicit integration covers it perfectly. Direct meaning not through
the Laplace space or other intermediate LTI analysis, and implicit
integration covering that the current state is used to solve for the
output. You could specify the integrator being used if you don't like
implicit and say direct trapezoidal integration, or even drop the
integration and specify the structure directly, as in direct trapezoidal
SVF. Since you specify the structure you could possibly even shorten this
further to trapezoidal SVF.




 Urs is right that mentioning implicit integration is so obscure as to be
 useless. The word implicit is also technical and has too simple a
 common-language connotation.


I'm sorry that you and your customers are not familiar with the correct
terminology to describe this. I feel it's best not to even bother to
describe it in a technical manner to customers. The problem is you can come
up with an explicit solution that takes great care to come out with a
trapezoidal frequency response, and then you can oversample this to push
any inaccuracies out of the audible area, so then what? Should this be
orphaned as not being implicit when it will sound identical and offer lower
CPU to boot? Can't people just be happy that the developer has taken care
to solve things as properly as they see fit with whatever methods are
appropriate for the situation at hand?



 Andy, can you comment on the following observation?
 - the Stilson and Smith model, with the artificial delay in the feedback,
 which is resolved by your implicit methods is, in fact, not an explicit
 method, but simply a fudged approximation; an explicit method solution
 would have the correct transfer function, while the Stilson and Smith does
 not.

 Is that correct?


Not really.

And slow down there skipper! These are not my methods, implicit integration
has been around for a very long time.

You are confusing implicit with trapezoidal here. The trapezoidal response
is a very specific and useful one for audio. Not all implicit methods
preserve resonance, e.g. Backward Euler and implicit Gear. Qucs to the rescue!

http://qucs.sourceforge.net/tech/node24.html

In summary, an explicit method depends only on the previous state of the
system to solve the system. So in that regard most music dsp people are
using semi-explicit systems. Here ya go (please note that the following
isn't strictly accurate, since I am trying to show dependencies rather than
specific implementation details of implicit and explicit numerical
integration schemes, so just keep an eye on the z terms):

 v1z = v1 z^-1
 v2z = v2 z^-1
 v3z = v3 z^-1
 v4z = v4 z^-1

CASCADE

explicit (uses only previous outputs):
v1 = v1z + g (v0 - v1z - k v4z)
v2 = v2z + g (v1z - v2z)
v3 = v3z + g (v2z - v3z)
v4 = v4z + g (v3z - v4z)

explicit (partly forward euler: the resonance and low pass filters are
still explicit, but you take the current estimated output of the previous
stage as the input to the next stage) - note that in the Stilson and Smith
version they use (a)*vi + (1-a)*viz as the input to the (i+1)th stage,
which is an adjustment towards trapezoidal integration so as to not cause
quite as much phase delay at cutoff:
v1 = v1z + g (v0 - v1z - k v4z)
v2 = v2z + g (v1 - v2z)
v3 = v3z + g (v2 - v3z)
v4 = v4z + g (v3 - v4z)

implicit:
v1 = v1z + g (v0 - v1 - k v4)
v2 = v2z + g (v1 - v2)
v3 = v3z + g (v2 - v3)
v4 = v4z + g (v3 - v4)
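
To make the implicit case concrete, the whole cascade can be solved in one
pass by substitution. Here is a minimal C sketch in the same
dependency-only notation (so, per the caveat above, this is illustrative
rather than an exact trapezoidal ladder, and the function name and
pointer-based state layout are just mine):

float ladder_implicit(float v0, float g, float k,
                      float *v1z, float *v2z, float *v3z, float *v4z)
{
    /* each stage alone gives vi = (viz + g*v(i-1))/(1 + g), so v4 is an
       affine function of v1: v4 = A + B*v1 */
    float G = g/(1.0f + g);
    float A = (*v4z + G*(*v3z) + G*G*(*v2z))/(1.0f + g);
    float B = G*G*G;
    /* substitute v4 = A + B*v1 into v1 = v1z + g*(v0 - v1 - k*v4) */
    float v1 = (*v1z + g*(v0 - k*A))/(1.0f + g + g*k*B);
    float v2 = (*v2z + g*v1)/(1.0f + g);
    float v3 = (*v3z + g*v2)/(1.0f + g);
    float v4 = (*v4z + g*v3)/(1.0f + g);
    *v1z = v1; *v2z = v2; *v3z = v3; *v4z = v4;  /* previous outputs */
    return v4;
}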



SVF

Similarly for the SVF, the fully explicit filter is:
v1 = v1z + g (v0 - k v1z - v2z)
v2 = v2z + g (v1z)

the Chamberlin is:
v1 = v1z + g (v0 - k v1z - v2z)
v2 = v2z + g (v1)

and the implicit:
v1 = v1z + g (v0 - k v1 - v2)
v2 = v2z + g (v1)


I hope that helps!

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Andrew Simper
 There you go. It's sad that some blurtards have caused confusion with
 stupid terminology - I'm talking about the zero delay filter misnomer. That
 however doesn't make it seem less arrogant to make fun of people who
 practice eliminating the unit delay as their method. Because for those
 and their point of view, zero delay feedback is relevant.

 - Urs


Hi Urs,

Please don't take this thread as a personal attack, you're a friend and a
peer who makes excellent music software, software that I am happy to use in
my own productions. I understand that from your perspective it makes
perfect sense to use the phrase zero delay feedback filter; I am not
making fun of you, I am pointing out the problems with the terminology.

There is a difference between being arrogant and being deliberately
confrontational to stir the pot, provoke a reaction, and hopefully help
people in the long run. I don't for one second think I'm better than
anyone, and I'm doing my best to use references, back up my reasoning
with examples, and show the substantial existing body of knowledge that
lots of people have ignored. I am not pretending I invented this stuff
(that would be arrogant). I have never insulted anyone, even if they insult
me, but if people take my technical comments personally then perhaps it is
just because I am pointing out something they need to come to terms with.

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Andrew Simper
 If you are not familiar with what finite difference methods can do then...


This reads badly. I don't mean you Max, I meant for anyone not familiar...



Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-14 Thread Andrew Simper
 but aren't all the direct forms (even the transposed

 ones), so called delay free


 i don't consider them delay free in their feedback path.  not at all.


Hi Robert, I think here we are not talking about whether the implementation
uses delays in its own feedback path (clearly a DF1 does, as does any
digital filter, since they have to store some state). The question is: does
a DF1 model the LTI continuous biquad using an implicit integration scheme?
Clearly the answer is yes, it uses the bi-linear transform, which is
implicit, so it relies on the current and previous state to solve things.



- gathering this from the top of my
 head, as i haven't look at the flow graphs much since the last time i
 tried making them into code?


 the semantic i used for zero delay feedback is that there is no delay in
 the feedback loop.  using the language similar to
  http://www.nireaktor.com/reaktor-tutorials/zero-delay-feedback-filters-in-reaktor/ ,

 a causal discrete-time operation:

y[n]  =  g(y[n-1], y[n-2], ... x[n], x[n-1],...)

 so the output sample is a function of the present input and past input
 samples and of past output samples.  no logical problem here.  all of these
 arguments of g() are known at time n.

 the zero-delay feedback filter purports to implement

y[n]  =  g(y[n], y[n-1], y[n-2], ... x[n], x[n-1],...)

 and, of course, the logical problem is how does the algorithm g() know
 what the argument y[n] is when that is precisely the value that g() is
 returning?  not yet determined.

 i know (especially for LTI systems), that you can plug all those arguments
 into g() and solve for y[n] and get an answer.  for more complicated algs,
 you might have to iteratively solve for y[n].  but that does not change the
 fact that, in reality, y[n] is being determined from knowledge of present
 and past input samples x[n], and *past* samples of y[n].


Yes, this is the definition of implicit, and if you read the title this is
the clarification I'm trying to make, so thank you for backing me up that
this is the case.



 that is my only bone to pick with 0df.  i just see it as another causal
 algorithm and each and every *causal* discrete-time algorithm, linear or
 not, time-invariant or not, *must* define its output sample in terms of
 present and past input samples and *past* output samples.


This is part of the problem that causes confusion (but not even one that I
have brought up!). What Urs and others are talking about is not that you
need some previous state, you obviously do for a filter to work at all, but
rather whether you use only the previous state, or both the current and
previous state, to work out the current answer. This is why I am suggesting
using the standard terminology of explicit and implicit integration, since
it is a well defined and widely used term in mathematics and so won't cause
confusion. Thanks for being confused about the terminology and so showing
that even an expert in the field finds ZDF a confusing term!




  like Euler's backward method (i think we called it predictor-corrector)
 for numerically solving differential equations, it's just fine to jump
 through some hoops to predict the present output sample before you actually
 compute it.  but when you finally commit to your output sample y[n], that
 value is a function of the present and past x[n] and only the *past* y[n].

 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



Backwards Euler is an implicit method; predictor-corrector is another thing
entirely. If anyone wants to discuss such methods can you please do so in
another thread, people are confused enough as it is!

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
On 13 November 2013 20:31, Lubomir I. Ivanov neolit...@gmail.com wrote:

 On 13 November 2013 08:50, Andrew Simper a...@cytomic.com wrote:
  Now for all those people scratching their heads over the whole Zero Delay
  Feedback thing, here is the deal:
 
  Any implicit integration method applied to numerically integrate something
  is by its very definition using Zero Delay Feedback; linear or non-linear,
  this is the case. You can completely ignore that you need some previous
  state to add things to; this is always the case for numerical integration.
 

 out of curiosity, have you encountered a case where trapezoidal is
 non-optimal or have you tried comparing it to simpson's?

 lubomir


Simpson's method isn't practical for audio in any use I've seen, since it
is less stable than trapezoidal. Trapezoidal is the most useful; if I had
to pick just one that would be it, but I use several others depending on
the situation.

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
On 13 November 2013 20:00, Urs Heckmann u...@u-he.com wrote:

 On 13.11.2013, at 07:50, Andrew Simper a...@cytomic.com wrote:

  I hope this clears things up and exposes ZDF as a confusing and pointless
  marketing catch phrase.

 It's not pointless for marketing in the sense that instantaneous feedback
 is much easier to explain than implicit integration. Users usually don't
 have profound knowledge of maths, whereas a delayless feedback loop is
 easily illustrated. In other words, we would lose customers if we
 advertised uses implicit integration, yay ;-)


Well, it is time for all people using DF1 to ratchet up their marketing and
start touting [*] featuring Zero Delay Feedback technology!! (go on Dave
Gamble, I know you are just itching to do this ;)

Yes, that's right people, you have my word on this - any method using the
bilinear z transform, which is just another name for trapezoidal
integration, solves without any delay in the feedback loops, so it
qualifies completely to be called a ZDF filter. I'll post this to KVR so
that people can start updating their web pages immediately, so everyone can
benefit from an increase in customers and cash in on the ZDF bonanza!

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
Hi Clemens and Urs!

Time for a backflip from me, I completely agree with all the points you
have both made in that describing to customers that there are no delays in
feedback paths is much easier than describing implicit integration schemes.
The title I gave this thread is clearly wrong and attention grabbing for
which I apologize (but it did grab your attention didn't it ;)

So how about I start again? The point I actually meant to make but failed
completely to is this:

Since all DF1 (including RBJ cookbook) and other trapezoidal integrated
filters solve things without delays in feedback loops, simply stating you
use Zero Delay Feedback filters does not do much to inform customers that
anything special is going on, as most of the time this will be the case.

What is useful is stating you use Non-linear Zero Delay Filters, but how
much difference do you think that is going to make with customers? I fear
they will mostly just see the words Zero Delay Filters and not
differentiate much past that.

All the best,

Andy





cytomic - sound music software


On 13 November 2013 22:32, Andrew Simper a...@cytomic.com wrote:

 On 13 November 2013 20:51, Didier Dambrin di...@skynet.be wrote:

 Trapezoidal.. Simpsons.. that reminds me of this quote (from The Simpsons):
 "First, let me assure you that this is not one of those shady pyramid
 schemes you’ve been hearing about. No sir. Our model is the trapezoid!"


 I hope this made science progress


 :D



Re: [music-dsp] R: R: R: Trapezoidal integrated optimised SVF v2

2013-11-13 Thread Andrew Simper
Thanks to Clemens for spotting an error in the implementation of the SKF;
it was a copy and paste error from the SVF version where I didn't update
the denominator in the code to be the correct one solved for. I've updated
it now:

http://cytomic.com/files/dsp/SkfLinearTrapOptimised2.pdf

All the best,

Andy


Re: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-13 Thread Andrew Simper
Thanks very much Ross for taking the time to look at this! There is a lot
of reading and theory, so until I get some more time I can't really take
any of it on board to provide you with useful comments, but I appreciate
your time.

All the best,

Andy
--
cytomic - sound music software


On 10 November 2013 23:58, Ross Bencina rossb-li...@audiomulch.com wrote:

 Hi Everyone,

 I took a stab at converting Andrew's SVF derivation [1] to a state space
 representation and followed Laroche's paper to perform a time varying BIBO
 stability analysis [2]. Please feel free to review and give feedback. I
 only started learning Linear Algebra recently.

 Here's a slightly formatted html file:

 http://www.rossbencina.com/static/junk/SimperSVF_BIBO_Analysis.html

 And the corresponding Maxima worksheet:

 http://www.rossbencina.com/static/junk/SimperSVF_BIBO_Analysis.wxm

 I had to prove a number of the inequalities by cut and paste to Wolfram
 Alpha, if anyone knows how to coax Maxima into proving the inequalities I'm
 all ears. Perhaps there are some shortcuts to inequalities on rational
 functions that I'm not aware of. Anyway...

 The state matrix X:

 [ic1eq]
 [ic2eq]

 The state transition matrix P:

 [-(g*k+g^2-1)/(g*k+g^2+1), -(2*g)/(g*k+g^2+1) ]
 [(2*g)/(g*k+g^2+1),(g*k-g^2+1)/(g*k+g^2+1)]

 (g > 0, 0 < k <= 2)

 Laroche's method proposes two time varying stability criteria both using
 the induced Euclidean (p2?) norm of the state transition matrix:

 Either:

 Criterion 1: norm(P) < 1 for all possible state transition matrices.

 Or:

 Criterion 2: norm(TPT^-1) < 1 for all possible state transition matrices,
 for some fixed constant change of basis matrix T.

 norm(P) can be computed as the maximum singular value or the positive
 square root of the maximum eigenvalue of P.transpose(P). I've taken a
 shortcut and not taken square roots since we're testing for norm(P)
 strictly less than 1 and the square root doesn't change that.

 From what I can tell norm(P) is 1, so the trapezoidal SVF filter fails to
 meet Criterion 1.

 The problem with Criterion 2 is that Laroche doesn't tell you how to find
 the change of basis matrix T. I don't know enough about SVD, induced p2
 norm or eigenvalues of P.P' to know whether it would even be possible to
 cook up a T that will reduce norm(P) for all possible transition matrices.
 Is it even possible to reduce the norm of a unit-norm matrix by changing
 basis?

 From reading Laroche's paper it's not really clear whether there is any
 way to prove Criterion 2 for a norm-1 matrix. He kind-of side steps the
 issue with the norm=1 Normalized Ladder and ends up proving that
 norm(P^2) < 1. This means that the Normalized Ladder is time-varying BIBO
 stable for parameter update every second sample.

 Using Laroche's method I was able to show that Andrew's trapezoidal SVF
 (state transition matrix P above) is also BIBO stable for parameter update
 every second sample. This is the final second of the linked file above.

 If anyone has any further insights on Criterion 2 (is it possible that T
 could exist?) I'd be really interested to hear about it.

 Constructive feedback welcome :)

 Thanks,

 Ross


 [1] Andrew Simper trapezoidal integrated SVF v2
 http://www.cytomic.com/files/dsp/SvfLinearTrapOptimised2.pdf

 [2] On the Stability of Time-Varying Recursive Filters
 http://www.aes.org/e-lib/browse.cfm?elib=14168
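
(As an aside for anyone who wants to poke at Criterion 1 numerically rather
than symbolically: for a 2x2 state transition matrix P = [a b; c d] the
induced 2-norm has a closed form via the largest eigenvalue of P'P. A
minimal C sketch, purely illustrative and not from Ross's worksheet:)

#include <math.h>

double norm2_2x2(double a, double b, double c, double d)
{
    /* M = P'P is symmetric positive semi-definite */
    double m11 = a*a + c*c, m12 = a*b + c*d, m22 = b*b + d*d;
    double tr = m11 + m22, det = m11*m22 - m12*m12;
    /* largest eigenvalue of a symmetric 2x2 matrix */
    double lmax = 0.5*(tr + sqrt(tr*tr - 4.0*det));
    return sqrt(lmax);  /* norm(P) */
}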


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
On 14 November 2013 01:28, Dave Gamble davegam...@gmail.com wrote:

 On Wed, Nov 13, 2013 at 2:29 PM, Andrew Simper a...@cytomic.com wrote:

  On 13 November 2013 20:00, Urs Heckmann u...@u-he.com wrote:
 
   On 13.11.2013, at 07:50, Andrew Simper a...@cytomic.com wrote:
  
I hope this clears things up and exposes ZDF as a confusing and
  pointless
marketing catch phrase.
  
   It's not pointless for marketing in the sense that instantaneous
 feedback
   is much easier to explain than implicit integration. Users usually
 don't
   have profound knowledge of maths, whereas a delayless feedback loop is
   easily illustrated. In other words, we would lose customers if we
   advertised uses implicit integration, yay ;-)
  
 
  Well it is time for all people using DF1 to ratchet up their marketing
 and
  start touting [*] featuring Zero Delay Feedback technology!! (go on
 Dave
  Gamble I know you are just itching to do this ;)
 
  In opposites world, sure. You may have me backwards.

 Despite my fairly inane sense of humour, I do strive (not always
 successfully, of course) to avoid any kind of sensationalism that the
 customer can see.

 I'm strongly anti-bullshit, because I'm just not very good at it.

 Dave.


You have expressed quite clearly your distaste for this particular
terminology, I'm just having fun :)


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
On 13 November 2013 23:31, Andrew Simper a...@cytomic.com wrote:

 Hi Clemens and Urs!

 Time for a backflip from me, I completely agree with all the points you
 have both made in that describing to customers that there are no delays in
 feedback paths is much easier than describing implicit integration schemes.
 The title I gave this thread is clearly wrong and attention grabbing for
 which I apologize (but it did grab your attention didn't it ;)

 So how about I start again? The point I actually meant to make but failed
 completely to is this:

 Since all DF1 (including RBJ cookbook) and other trapezoidal integrated
 filters solve things without delays in feedback loops, simply stating
 you use Zero Delay Feedback filters does not do much to inform customers
 that anything special is going on, as most of the time this will be the case.

 What is useful is stating you use Non-linear Zero Delay Filters, but how
 much difference do you think that is going to make with customers? I fear
 they will mostly just see the words Zero Delay Filters and not differentiate
 much past that.

 All the best,

 Andy


Should be non-linear zero delay feedback filters in the above (see how
easy this stuff is to get mixed up?)

But here is another backflip: how about this one? Take a basic one pole
active low pass filter that uses feedback; it has the idealised nodal
equations:

0 == geqamp (v0 - v1) - gceq v2 + iceq

now take the same thing but in a passive ideal form with a variable
resistor (i.e. without feedback at all):

0 == gr (v0 - v1) - gceq v2 + iceq

note that gr = 1/R

So here we have the solution of the equations without feedback being
identical to those with feedback, and the implementation will sound
identical if the same integrator is used, so the important point is whether
the integrator is implicit, not whether there was zero delay feedback or not.

And furthermore, if we consider the slight misuse of the terminology like
this:

featuring zero delay feedback technology

Sounds good huh? Well have a look at this circuit:

http://www.eecs.tufts.edu/~dsculley/tutorial/opamps/opAmpBuffer.jpg

If you take an idealised op amp with negative feedback then model this with
zero delay feedback technology then you get the following analog model:

output = input

so every single time an assignment is done in dsp you could easily argue
that you are using zero delay feedback technology.

Oh so much fun to be had!

All the best,

Andy


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
But here is another backflip: how about this one? Take a basic one pole
active low pass filter that uses feedback; it has the idealised nodal
equations:


 0 == geqamp (v0 - v1) - gceq v2 + iceq

 now take the same thing but in a passive ideal form with variable resistor
 (ie without feedback at all):

 0 == gr (v0 - v1) - gceq v2 + iceq

 note that gr = 1/R



I didn't draw a diagram and so got those equations wrong: it should be
-gceq v1 + iceq in both cases for the capacitor, so the full equations are:

0 == geqamp (v0 - v1) - gceq v1 + iceq

now take the same thing but in a passive ideal form with variable resistor
(ie without feedback at all):

0 == gr (v0 - v1) - gceq v1 + iceq

note that gr = 1/R
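
In code the corrected one pole looks something like this (a minimal sketch,
assuming a fixed time step so gceq is a constant; the function name and
argument layout are just illustrative):

float onepole_lp(float v0, float g_in, float gceq, float *iceq)
{
    /* solve 0 == g_in*(v0 - v1) - gceq*v1 + iceq for v1 */
    float v1 = (g_in*v0 + *iceq)/(g_in + gceq);
    /* trapezoidal equivalent current update: iceq = 2*gc*vc - iceq */
    *iceq = 2.0f*gceq*v1 - *iceq;
    return v1;
}

The same function covers both forms above: pass geqamp or gr as the input
conductance and the solution is identical, which is exactly the point.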
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Implicit integration is an important term, ZDF is not

2013-11-13 Thread Andrew Simper
on topic i guess,


 i think that ZDF is a horrible term targeting real EE's. for the DSP
 crowd it may sound sane, but even then it could be expanded to
 something like zero unit delay feedback and i'm not even sure that
 will work any better.


Yeah I agree




 BTW Andy, you mention DFI a lot, perhaps bacause RBJ has them in the
 popular filters, but aren't all the direct forms (even the transposed
 ones), so called delay free - gathering this from the top of my
 head, as i haven't look at the flow graphs much since the last time i
 tried making them into code?


Yes, all bi-linear filters derived from the Laplace space representation of
an active filter (i.e. one with feedback) are Delay Free Feedback Filters.




 actually delay free is even crazier because this is IIR and there
 is the group delay factor, so...how is it delay free given there is
 distortion and the frequency decomposition is variadic due to
 magnitude envelopes...i know that implies differently, but still some
 of this product terminology is just nonsense IMO.


Yes, it's a stupendously vague term all around, but I think care needs to
be taken to always state delay free feedback; and even then it is easy
enough to break it down and make it a pointless distinction.



 i completely and absolutely disregard the marketing of DSP software
 product in terms of terminology that users will never understand, and
 i'm sure that even if you put uses calculus equations in there, it
 will still sell much better than the average, cleverly unadvertised
 product.

 lubomir


Re: [music-dsp] R: Trapezoidal integrated optimised SVF v2

2013-11-12 Thread Andrew Simper
On 10 November 2013 18:12, Dominique Würtz dwue...@gmx.net wrote:
 Am Freitag, den 08.11.2013, 11:03 +0100 schrieb Marco Lo Monaco:
 I think a crucial point is that besides replicating steady state
 response of your analog system, you also want to preserve the
 time-varying behavior (modulating cutoff frequency) in the digital domain.
 To achieve the latter, your digital system must use a state space
 representation equivalent to the original circuit, or, as Vadim puts
 it, preserve the topology. By starting from an s-TF, however, all this
 information is lost. This is in particular visible from the fact that
 implementing different direct forms yields different modulation
 behavior.

Yes, modulation behaviour is a very important point to me.


 BTW, in case you all aren't aware: a work probably relevant to this
 discussion is the thesis of David Yeh found here:

 https://ccrma.stanford.edu/~dtyeh/papers/pubs.html

 When digging through it, in particular the so-called DK method, you
 will find many familiar concepts incorporated in a more systematic and
 general way of discretizing circuits, including nonlinear ones. Can't
 say how novel all this really is, still it's an interesting read anyway.

 Dominique

Thanks very much for this link! I have read most of these papers in
isolation previously, but missed David Yeh's dissertation
https://ccrma.stanford.edu/~dtyeh/papers/DavidYehThesissinglesided.pdf
which contains a great description of MNA and how it relates to the
DK-method. I highly recommend everyone read it, thanks David!!

I really hope an improved DK-method comes along that handles multiple
nonlinearities more elegantly than it currently does. A couple of
things to note here: in general, this method uses multi-dimensional
tables to pre-calculate the difficult implicit equations that solve the
non-linearities, but as the number of non-linearities increases so
does the size of your table, as noted in 6.2.2:

The dimension of the table lookup for the stored nonlinearity in
K-method grows with the number of nonlinear devices in the circuit. A
straightforward table lookup is thus impractical for circuits with
more than two transistors or vacuum tubes. However, function
approximation approaches such as neural networks or nonlinear
regression may hold promise for efficiently providing means to
implement these high-dimensional lookup functions.

Also note that in section 2.2 some basic tone stack circuits are
discussed, which contain 3 capacitors, 2 pots, and a resistor, and which
are trivial enough to solve using direct integration methods. Yeh
notes that WDF can only handle series or parallel connections of
components, not arbitrary ones like in the tonestack, and says
specifically (page 26):

Passive filter circuits are typically suited to implementation as a
wave digital filter (WDF) (Fettweis, 1986). This approach can easily
model standard components such as inductors, capacitors, and resistors
that are connected in series and in parallel. However, the tone stack
is a bridge circuit, which falls into a category of connections that
are neither parallel nor series (Fränken et al., 2005). A bridge
adapter with 6 ports can be derived (Fränken et al., 2005; Sarti and
De Sanctis, 2009), but in general, for a 6-port linear system, there
are 6 × 6 input/output relationships that must be computed. Efficient,
parametric implementations for these circuits are not currently
obvious.

He then goes ahead and solves the circuit using regular symbolic
circuit math and implements it with a DF2T 3-pole filter, not WDF nor
the (D)K-method.

So all up this shows real promise, but it's not quite there yet.

All the best,

Andy

[music-dsp] Implicit integration is an important term, ZDF is not

2013-11-12 Thread Andrew Simper
Now for all those people scratching their heads over the whole Zero Delay
Feedback thing, here is the deal:

Any implicit integration method applied to numerically integrate something
is by its very definition using Zero Delay Feedback; linear or non-linear,
this is the case. You can completely ignore that you need some previous
state to add things to; this is always the case for numerical integration.

So trapezoidal integration as well as the bi-linear transform solve the
implicit equations with different realisations, but the results are both
ZDF because they both used the implicit trapezoidal integration method to
solve things, even if this is done indirectly via the Laplace space.

The key thing here is whether the integrator is implicit or explicit: ZDF
is unavoidable if you use implicit methods, while explicit methods will
contain delay:

http://en.wikipedia.org/wiki/Explicit_and_implicit_methods

*Explicit methods* calculate the state of a system at a later time from
the state of the system at the current time, while *implicit methods* find
a solution by solving an equation involving both the
current state of the system and the later one.
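
A one line example makes the distinction concrete: for dv/dt = -a*v with
time step h,

explicit (forward) Euler:   v[n+1] = v[n] - h*a*v[n]   (only the current state on the right)
implicit (backward) Euler:  v[n+1] = v[n] - h*a*v[n+1],  which solves to  v[n+1] = v[n]/(1 + h*a)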

I hope this clears things up and exposes ZDF as a confusing and pointless
marketing catch phrase.

All the best,

Andy


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Andrew Simper
On 11 November 2013 08:09, robert bristow-johnson
r...@audioimagination.com wrote:
 On 11/8/13 6:47 PM, Andrew Simper wrote:

 On 9 November 2013 08:57, Tom Duffytdu...@tascam.com  wrote:

 Having worked with Direct-Form I filters for half of my
 career, I've been glossing over this discussion as
 not relevant to me.

 It depends if you value numerical performance, cutoff accuracy, DC
 performance etc; DF1 scores badly on all these fronts,


 nope.

Actually yep. Whenever I mentioned DF1 I meant DF1, not DF1 with added
noise shaping. I'm using the algorithms you published in your RBJ
audio eq cookbook, and I could see no mention of noise shaping there, nor
of using a mapping from cos to sin to help out, not even as a one line
disclaimer saying: oh, and watch out when using this, because if you don't
use double precision you will most likely need noise shaping to get
reasonable audio performance.


   and this is even in the case where you keep your cutoff and q unchanged.


 you can, for a lot fewer instructions than a lattice or a ladder or an SVF,
 do a DF1 with noise shaping with a zero in the quantization noise TF at z=1
 that obliterates any DC error.  infinite S/N at f=0.  in a fixed-point
 context, this DF1 with noise shaping (sometimes called fraction saving),
 has *one* quantization point, not two, not three.

I don't believe you, so how about posting some code to show it? Actually
I do believe you, but I'm too lazy and completely un-motivated to work
through and double check a correct implementation of DF1 with noise
shaping, though I would love to add the results of this to all the plots
I've done so people can make a more informed decision when comparing
the different methods.
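
For anyone else following along, my understanding of the fraction saving
idea is something like the following sketch (this is an assumption on my
part, not Robert's actual code; a double accumulator stands in for the wide
fixed-point accumulator, and the coefficient signs follow the DF1 bell just
below):

static double err = 0.0;   /* saved fraction from the previous sample */

double acc = b0*x0 + b1*x1 + b2*x2 + a1*y1 + a2*y2 + err;
float  y0  = (float)acc;   /* the single quantisation point */
err = acc - (double)y0;    /* feed the error back next sample, which puts
                              a zero in the quantisation noise TF at z=1 */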

Now let's have a look at a bell using DF1 or SVF trap:

DF1 bell (5 multiplies, 4 adds, 4 state assignments):
const x0 = input
const y0 = b0*x0 + b1*x1 + b2*x2 + a1*y1 + a2*y2
x2 = x1
x1 = x0
y2 = y1
y1 = y0
const output = y0

SVF Trap bell (6 multiplies, 6 adds, 2 state assignments):
const v0 = input
const v1 = a0*v0 + a1*ic1eq + a2*ic2eq
const v2 = ic2eq + g*v1
ic1eq = 2*v1 - ic1eq
ic2eq = 2*v2 - ic2eq
const output = v0 - v1

So with the DF1 without noise shaping you have 9 ops and you need to add
noise shaping to get reasonable performance, while with the SVF you have
12 ops and already very good performance. And there is one more
important thing to note: if your bell gain is 0 dB then the a0 term in
the SVF trap will be 0, so all the terms are 0 including v1, so at the
end you have v0 - 0 = v0 and you get your input back bit for bit.

But... we aren't even considering time varying behaviour here.


 you can also rewrite equations to get rid of the cosine problem, which is
 at the root of problems regarding cutoff accuracy.  you do it by
 replacing, in your equations, every occurrence of cos() with this:


  cos(2*pi*f0/Fs)  -->  1 - 2*( sin(pi*f0/Fs) )^2

 as you can see, even if you have floating point, all of the information
 concerning f0 is in the difference that cos() is from 1.  so, assuming
 f0 << Fs, even floating point doesn't help.  all of the information concerning
 f0 is in the mantissa bits that are falling offa the edge as f0 gets lower
 and lower.  double precision floats *do* help out here, but it's a numerical
 problem.  and all of the designs using tan(pi*f0/Fs) have their own
 numerical problems regarding ranging, which is why i would suggest to move
 away from using tan() as soon as possible in your coefficient math.

Thanks for pointing this out, I've not explicitly mentioned this
before so it's great you've brought it up. This is a part of keeping
coefficients bounded, and tan is definitely not bounded as your
cutoff approaches Nyquist. It is very easy to work around this by
substituting sin/cos and multiplying through, so yes there are changes
here that can help a particular implementation, e.g.:

1/(1 + g (g + k)) = 1/(1 + tan(pi wc) (tan(pi wc) + k)) = cos(pi wc)^2 / (1 + k sin(2 pi wc)/2)

in which all terms remain bounded all the time. Showing the
coefficients as sin and cos terms doesn't really help the clarity of
the situation, and deriving the sin and cos version is basic algebra,
so I've chosen to leave things as tan - I'll add a note to remind
people they can use a sin and cos version if they prefer.
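
As a concrete illustration, here are the two algebraically identical ways
of forming that term (a sketch; the function names are illustrative and PI
is defined locally):

#include <math.h>
#define PI 3.14159265358979323846

double coef_tan(double wc, double k)      /* g blows up as wc -> 0.5 */
{
    double g = tan(PI*wc);
    return 1.0/(1.0 + g*(g + k));
}

double coef_sincos(double wc, double k)   /* every term stays bounded */
{
    double c = cos(PI*wc);
    return c*c/(1.0 + 0.5*k*sin(2.0*PI*wc));
}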

I've found for audio rate modulation you can be better off keeping
things as tan, since then if you use an approximation of tan that
contains an error, the error will only be in what the cutoff
actually is, and since you are modulating the cutoff an exact result
isn't required. If you use a sine and cosine approximation then the
maths falls down and the coefficients you generate may not be stable
(or have the exact cutoff). Once the filter is static again you can
use a precise version and get an exact cutoff.

 you'll get something a little different, but pretty strongly related to the
 standard DF1.


Thanks Robert for your continued input on this topic, it's great to
have such an expert on DF1 stuff to discuss things with and bring up
important points.

Re: [music-dsp] Fwd: [admin] another HTML test

2013-11-11 Thread Andrew Simper
(sent in html)

Thanks Douglas! This is a huge help.

All the best,

Andy
--
cytomic - sound music software


On 12 November 2013 01:26, STEFFAN DIEDRICHSEN sdiedrich...@me.com wrote:

 Rich text test. Should be red. If not, all is well.

 Steffan
 On 11.11.2013, at 18:23, douglas repetto doug...@music.columbia.edu
 wrote:

 
  Yes, it should convert rich text to plain text. Can you send a test
 message? At the least it should send you back a bounce message.
 
  best,
  douglas
 
 
 
  On 11/11/13 12:22 PM, STEFFAN DIEDRICHSEN wrote:
 
 
  Begin forwarded message:
 
  From: STEFFAN DIEDRICHSEN sdiedrich...@mac.com
  Subject: Re: [music-dsp] [admin] another HTML test
  Date: 11. November 2013 18:20:14 MEZ
  To: A discussion list for music-related DSP 
 music-dsp@music.columbia.edu
 
  Does this apply to rich text mails as well? That was the problem with
 Mail here.
 
  Steffan
 
 
 
  On 11.11.2013, at 18:19, douglas repetto doug...@music.columbia.edu
 wrote:
 
 
  Okay, it seems to be working. You can now post to the list in HTML
 but your message will be converted to plain text. This seems like a good
 first step to accommodate people who need plain text and people for whom
 sending plain text is a pain. I tried to respond to some message from my
 phone while I was out of town and found that it's impossible to send plain
 text email that way!
 
  best,
  douglas
 
 
 
 
  On 11/11/13 12:16 PM, douglas repetto wrote:
 
  Hi Dave,
 
  I have the list set to convert HTML mail to plain text. So it's a
 good sign that the email went through. I'm composing this in red text and
 changing fonts around. That should all disappear when this message
 arrives...
 
  Here's a list:
 
  1. * taco
  2. * truck
 
 
 
  Wheee!
 
 
 
 
  --
  ... http://artbots.org
  .douglas.irving http://dorkbot.org
  .. http://music.columbia.edu/cmc/music-dsp
  ...repetto. http://music.columbia.edu/organism
  ... http://music.columbia.edu/~douglas
 
 
 
 
  --
  ... http://artbots.org
  .douglas.irving http://dorkbot.org
  .. http://music.columbia.edu/cmc/music-dsp
  ...repetto. http://music.columbia.edu/organism
  ... http://music.columbia.edu/~douglas
 
 



Re: [music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-09 Thread Andrew Simper
On 9 November 2013 22:21, Marco Lo Monaco marco.lomon...@teletu.it wrote:

Hi Marco,

First up I want to thank you for your considered and useful
observations Marco, I appreciate where you are coming from and how you
can clearly communicate your ideas. This makes it possible for me to
reply to your points and offer observations in return which are
hopefully helpful to not only you, but anyone reading this thread.


 Hi Andrew,

 I think it's useful for everyone, and especially those wanting to handle
 non-linearities or other music behaviour.

 Yes, but people who are working in this field and doing virtual analog have
 known these tricks for at least 10 years. :)

Here is a direct quote from my first email; similar quotes will be
found in all initial emails from me, along with references to reinforce
just how not new this stuff is:

Please note there is absolutely nothing new here, this is all standard
circuit maths that has been around for ages, and all the maths behind
it was invented by people like Newton, Leibniz, and Euler; they
deserve all the credit and came up with ways of solving not just this
linear case but also the non-linear case.

Even if this is old news, there are plenty of people struggling along
with algorithms that don't sound as good because they aren't aware of
alternatives. I'm trying to raise the level of dsp here by pointing
out what to me is obvious and helpful; I'm not pretending to have
invented anything.

 You don't need the laplace space to apply all these different
 discretization (numerical integration)
 methods, they all come from the time domain and can be derived pretty
 quickly from first principles.

 Well, of course the s = (2/T)(z-1)/(z+1) conversion comes from discretizing
 a differential equation. Using Laplace is simply more handy IMHO, mainly
 because once you have your state space model (or TF) representation you can
 choose (via a CAS tool) to discretize it at best. Moreover, modeling an
 analog network with s-domain impedances is sometimes quicker/easier. But I
 understand it's only a routine behavior and it's a matter of taste. I
 remember I once modeled a 50-equation linear system via KVL/KCL, always
 remaining in the continuous domain, and because I wanted to see different
 behavior I chose the Laplace representation. If I had to solve those linear
 systems several times with different integration rules I would have gone
 nuts :)

Please note you only have to solve the numerical integration equations
once to generate a linear equivalent model of the form ic = gc vc -
iceq, then you put that linear equivalent model into your Computer
Algebra System (CAS) to solve things.

So although it may be more handy for you at the moment to do things
via the Laplace domain, there are compelling reasons not to, even in
the case where you don't modulate the filter at all, like an eq. In
noise performance tests I've done (I don't know how to prove this
analytically, but if you do then please go ahead, work it out and
share it with us!) using direct integration on an SVF delivers better
results (rms error) than any other filter realisation I've tried which
operates via the Laplace transform (DF1, DF2, DF2T, Ladder (not Moog),
Direct Wave, etc etc). Here are some plots of the previous method I
posted, which is almost identical to the one in the newer pdf; the
ladder structure is the only one to come close, but it falls down
on some of the tests. I suggest if you are not convinced you repeat the
tests yourself (or even come up with new ones) and share the results:

http://www.cytomic.com/files/dsp/SVF-vs-DF1.pdf


  What I would say more about this method is that, since it is
 intrinsically a biquad, you not only have to prewarp the cutoff fc but
 also the Q. In such

 Are you talking about bell filters here? For a low pass resonant filter it
 is hard to warp the Q since there is no
  extra degree of freedom to keep its gain down, so I'm not sure how
 prewarping the Q is possible in this case, but I'd love to hear if it can be
 done.

 Not only, but wait I could be wrong on this. I always took RBJ cookbook as a
 bible and he doesn't really say that the Q can't be prewarped for LPF/HPF
 starting from the analog Q. Maybe RBJ can correct me :)

I would love it if you are right. If RBJ can pull it off I would love
to see how as it would be ultra ultra useful!


  You can do all the same warping no matter if you go through the laplace
 space or
  directly integrate the circuits, and it doesn't matter what realisation
 you are using, in
 particular you can have a look at my previous workbook
  where I matched the SVF to Roberts RBJ shapes:

 I would love to see it, if you have the chance.

I missed adding the link here, sorry about that. Here is the old
version with the bell shape mapped; you can use the same method on the
new version, I just didn't bother adding it to the new pdf (you match
all the RBJ shapes exactly with algebra):


Re: [music-dsp] R: Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Andrew Simper
Hi Marco,

On 8 November 2013 18:03, Marco Lo Monaco marco.lomon...@teletu.it wrote:
 Hi guys,
 the work that Andrew did is of course a classic way to implement
 discretization of any analog filter by not considering any s-domain
 analysis, but discretizing directly from time domain differential eqs. I
 think it is useful for people here not having a Master's degree education to
 review from time to time those concepts and that's why I think it should be
 well accepted.

I think it's useful for everyone, and especially those wanting to
handle non-linearities or other music behaviour.


 Being in the linear modeling field, I would rather have analyzed the filter
 in the classic virtual analog way, reaching an s-domain transfer function,
 which has the main advantage that it is ready for many discretization
 techniques: bilinear (trapezoidal), Euler back/fwd, but also multi-step ones
 like Adams-Moulton etc. Once you have the s-domain TF you just need to push
 into s the correct formula involving z and simplify the new H(z), which is
 ready to be implemented in DF1/2.

You don't need the Laplace space to apply all these different
discretization (numerical integration) methods; they all come from the
time domain and can be derived pretty quickly from first principles.


 What I would say more about this method is that, since it is intrinsically a
 biquad, you not only have to prewarp the cutoff fc but also the Q. In such

Are you talking about bell filters here? For a low pass resonant
filter it is hard to warp the Q since there is no extra degree of
freedom to keep its gain down, so I'm not sure how prewarping the Q is
possible in this case, but I'd love to hear if it can be done.


 cases I typically use the analog s-domain TF and then also compensate the Q
 via the very famous RBJ cookbook (compute the analog Q and redesign the
 digital biquad with the fc, Q and gain params). Compensating the Q is
 important not only because you prevent the stretching as your cutoff reaches
 Nyquist but also because it minimizes the same stretch at different sampling
 frequencies.

You can do all the same warping no matter whether you go through the
Laplace space or directly integrate the circuits, and it doesn't
matter what realisation you are using; in particular you can have a
look at my previous workbook where I matched the SVF to Robert's RBJ
shapes:


 Nonetheless I would like to ask Andrew if he has time to show how he deals
 with a tanh-like nonlinearity with his approach: I think that it would be
 very interesting also and raise the level of the discussion to a higher one.

 Ciaoo

 Marco

Please note this is not my approach; I didn't come up with anything
here, I'm just pointing people in the right direction and providing a
few worked examples. To handle the non-linear case usually people use
Newton-Raphson, but any root finding method is possible. Here are some
links to show you how:

http://en.wikipedia.org/wiki/Root-finding_algorithm
http://en.wikipedia.org/wiki/Newton's_method
http://qucs.sourceforge.net/tech/node16.html (example of how to solve
exponential V to I non-linearity)
http://qucs.sourceforge.net/tech/node68.html (general pn-junction
equivalent model)
http://www.ecircuitcenter.com/SpiceTopics/Non-Linear%20Analysis/Non-Linear%20Analysis.htm
(example of how to solve worked exponential V to I non-linearity)

The basic idea in most circuit simulators is to linearise the
non-linear bits, then iterate to converge on a solution (find the
zero) of the equations, which also include handling your integration
method (but a final single step after convergence is needed to update
the states). This is all done by turning everything into y = m x + b
form, since then if y is on both sides, like y = m (x - y) + b, you
can easily solve it: y + m y = m x + b, so y (1 + m) = m x + b, so
y = (m x + b)/(1 + m), and that is about as hard as it gets. For each
implicit dependency you have a division to eliminate it, and sometimes
you can group the divisions. Note that the implicit dependency could
be either a linear one (which means you can solve it in one step) or a
non-linear one (which means you need to iterate).
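
To make that concrete, here is a minimal Newton-Raphson sketch for the
single diode clipper mentioned at the start of this thread (a series
resistor of conductance gr into one diode to ground; Is and vt are
illustrative device constants, and the fixed iteration cap stands in for
proper convergence handling):

#include <math.h>

float diode_clipper(float v0, float gr)
{
    const float Is = 1e-12f, vt = 0.026f;
    float v = 0.0f;  /* the previous solution makes a better initial guess */
    for (int i = 0; i < 50; i++) {
        float ex = expf(v/vt);
        float f  = gr*(v0 - v) - Is*(ex - 1.0f);  /* KCL residual to zero */
        float df = -gr - (Is/vt)*ex;              /* d(residual)/dv */
        float dv = f/df;
        v -= dv;
        if (fabsf(dv) < 1e-6f) break;             /* converged */
    }
    return v;
}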

I'll post another workbook showing how straightforward this is when I
get a chance.

All the best,

Andy


 -Messaggio originale-
 Da: music-dsp-boun...@music.columbia.edu
 [mailto:music-dsp-boun...@music.columbia.edu] Per conto di Andrew Simper
 Inviato: mercoledì 6 novembre 2013 10:46
 A: A discussion list for music-related DSP
 Oggetto: [music-dsp] Trapezoidal integrated optimised SVF v2

 Here is an updated version of the optimised trapezoidal integrated svf which
 bundles up all previous state into equivalent currents for the capacitors,
 which is how I solve non-linear circuits (although this solution is just the
 linear one that I'm posting here). The only thing to note that with
 trapezoidal integration you have gain terms of g =
 tan(pi*cutoff/samplerate) which become very large with high cutoff, so care
 needs

Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Andrew Simper
On 9 November 2013 08:57, Tom Duffy tdu...@tascam.com wrote:
 Having worked with Direct-Form I filters for half of my
 career, I've been glossing over this discussion as
 not relevant to me.

It depends if you value numerical performance, cutoff accuracy, DC
performance etc; DF1 scores badly on all these fronts, and this is
even in the case where you keep your cutoff and Q unchanged.


 I went back and re-read it, and if you can get past
 the scribbled diagrams and a few hand-waving / bypassed
 steps, I can appreciate that Andrew has derived a
 useful digital model of an analog circuit that is often
 used in synthesizers (The state variable filter).

I find it much quicker and easier to hand draw a diagram and take a
photo than to use vector drawing packages or tools like tikz or
circuitikz (although the latter two do make nice diagrams). If I give a
presentation I would also rather use a whiteboard or draw on overhead
slides by hand than use powerpoint or similar tools.


 What Theo has also noted is that Andrew has in
 the process come up with some new naming conventions
 that confuse, conflate or otherwise seem contrary to
 academic use. This makes it difficult to see how
 the derivation differs from, or is identical to,
 classical approaches.

I haven't come up with any new naming conventions, nor any acronyms to
describe things (I haven't really come up with much, actually). I'm
using the naming conventions of the technical papers of qucs (which
are also widely used elsewhere), which I have provided links to
as part of the references. If you are not familiar with these naming
conventions then that's fine, but please don't think I came up with
new naming conventions just to confuse people; the good thing about
conventions is that there are so many of them!

I used the letter g for conductance
(http://en.wikipedia.org/wiki/Electrical_conductance), which is
standard, and then the closest letter to g that isn't used for
much else, k, for the gain of the resonance path (which is
voltage gain in this case); these are clearly marked (ok, scribbled!)
in the diagram, and they don't really matter much and don't change any of
the working. I also use v for voltages, i for currents, and the
letter c for capacitors; they all seem pretty standard to me. Then
there is eq for equivalent, and n and n+1 for the nth and (n+1)th time
steps (all from the qucs link), and finally wc for the cutoff
frequency (ok, so this perhaps should be w0, but I don't want to confuse
it with gain coefficients or voltages).

All the best,

Andy


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-06 Thread Andrew Simper
On 6 November 2013 22:13, Theo Verelst theo...@theover.org wrote:

 That's a lot of approximations and (to me !) unclear definitions on a row.

Ok, please let me know the first one you don't understand and I'll
break it down for you! The only approximation made is the numerical
integration scheme used (and that is always going to be an
approximation by its very nature); the rest is basic circuit math on
ideal components, so it does not contain any approximations at all.

All the best,

Andy


Re: [music-dsp] Fwd: 24dB/oct splitter

2013-11-05 Thread Andrew Simper
Hi Ross, I never actually use the form of the equations I posted in
the pdf. I wrote all those horrible z^-1 type state diagrams
specifically because Vadim requested them, but they confuse the crap
out of me and I even had difficulty writing them myself; they are
really of no practical use for me at all. I actually use the
equivalent current form as shown here:

http://qucs.sourceforge.net/tech/node26.html

You can see that all previous terms, i.e. terms that are in n or n-1
... n-m, can be summed to form a single constant. In the specific
case of trapezoidal integration you have (in this case n+1 is the
current time step, n is equivalent to z^-1, n-1 is z^-2, etc):

ic(n+1) = 2*C/h(n+1)*vc(n+1) - 2*C/h(n)*vc(n) - ic(n)
ic(n+1) = gc(n+1)*vc(n+1) - iceq

where gc(n) = 2C/h(n) and gc(n+1) = 2C/h(n+1) and iceq = gc(n)*vc(n) + ic(n)

You can drop all the n stuff now and just use: ic = gc*vc - iceq

And if you update iceq after the solution then all your z terms are
taken care of. For the specific case of trapezoidal integration you
have:

iceq = 2*gc*vc - iceq

so your iceq holds your z^-1 and z^-2 etc terms for you and becomes
a single state. This is all completely standard circuit mathematics
that has been around for a very long time and was probably written in
Fortran code before I was born.
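
In code the bookkeeping is tiny (a sketch, assuming a fixed step h so
gc = 2*C/h is a constant; the struct and function names are illustrative):

typedef struct { float gc, iceq; } cap_t;

/* linear equivalent model of the capacitor: ic = gc*vc - iceq */
float cap_current(const cap_t *c, float vc) { return c->gc*vc - c->iceq; }

/* after the solve, fold the new solution into the single state */
void cap_update(cap_t *c, float vc) { c->iceq = 2.0f*c->gc*vc - c->iceq; }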

I'll update the document to get rid of all those horrible z terms soon.

All the best,

Andy



On 5 November 2013 18:52, Vadim Zavalishin
vadim.zavalis...@native-instruments.de wrote:
 (the quotation is from Andy's mail)

 On 2/8/13 2:15 AM, Ross Bencina wrote:
 i've analyzed Hal's SVF to death, and i was exposed to Andy's
 design some time ago, but at first glance, it looks like the
 trapezoidal SVF doubles the order of the filter.
 if it was a second-order analog, it becomes a 4th-order digital.
 but his final equations do not show that.  do those trapezoidal
 integrators become a single-delay element block (if one were to
 simplify)?  even though they ostensibly have two delays?


 You can use canonical (DF2/TDF2) trapezoidal integrators, in which case the
 order of the filter doesn't formally grow. This is shown quite intuitively
 in the TPT papers and the book I mentioned earlier. If you use
 DF1 integrators, the order formally grows by a factor of 2, but I believe
 half of the poles will be cancelled by the zeroes.
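
As an aside, here is a minimal C sketch of the one-state trapezoidal
integrator Vadim describes (transposed direct form 2), so an SVF built
from two of them stays 2nd order. The prewarped coefficient
g = tan(pi*fc/fs) is an assumption taken from the standard TPT
formulation, not something stated in this thread:

typedef struct { double s; } TrapInt;

/* one tick of a TDF2 trapezoidal integrator: a single state s */
double trap_int_tick(TrapInt *ti, double g, double in)
{
    double v   = g * in;      /* scaled input           */
    double out = v + ti->s;   /* y[n] = g*x[n] + s[n-1] */
    ti->s = out + v;          /* s[n] = y[n] + g*x[n]   */
    return out;
}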

 BTW, IIRC, as for the optimization from 4 z^-1 to 3 z^-1 in Andy's SVF, I
 believe this optimization implicitly assumed the time-invariance of the
 filter. So, while keeping the transfer function intact, this optimization
 changes the time-varying behavior of the filter (not sure how much, and
 whether it's for the worse or for the better).

 Regards,
 Vadim


 --
 Vadim Zavalishin
 Reaktor Application Architect
 Native Instruments GmbH
 +49-30-611035-0

 www.native-instruments.com



Re: [music-dsp] Missing replies for the past year or possibly more

2013-11-05 Thread Andrew Simper
Hi Douglas,

Thanks for your time in maintaining the list for the benefit of everyone!

I had a quick look at the mailman help and couldn't find anything
easily, but if you can decipher a setting that sends a reply telling
someone their email has not been accepted when they post in the
incorrect format, that would be ultra useful.

All the best,

Andy
--
cytomic - sound music software


On 6 November 2013 03:22, douglas repetto doug...@music.columbia.edu wrote:

 I've just added a new obnoxious warning to the reminder that goes out to the
 list 2x a month. We should obviously change this policy in some way, but I
 can't deal with it at the moment. I'll figure something out soon.

 Sorry for the missed messages and disrupted conversations.

 best,
 douglas





 On 11/5/13 2:32 AM, Andrew Simper wrote:

 Sorry to anyone who has tried to get feedback from me in the past
 year or more: I have been posting, but in html format, and the email
 list daemon failed silently, so I never knew they weren't making it
 through. This is really frustrating since some of my posts took some
 time to put together. I'll search through my sent mail and re-send a
 bunch of emails.

 All the best,

 Andy


 --
 ... http://artbots.org
 .douglas.irving http://dorkbot.org
 .. http://music.columbia.edu/cmc/music-dsp
 ...repetto. http://music.columbia.edu/organism
 ... http://music.columbia.edu/~douglas




Re: [music-dsp] family of soft clipping functions.

2013-11-04 Thread Andrew Simper
I think I've been caught out on the html email thing as well; I wonder
how many of the posts I've sent have gone completely missing? Here is
one I sent 5 days ago. Sorry if this is a double up; I checked the
archives but couldn't find anything:

Hi Robert,

Thanks very much for the post! I plotted the shapes and noticed that
as the order increased the maximum output value decreased. I find for
audio use it's nice to have the maximum output remain constant, for
example at +-1 like a tanh. This can easily be added to your equations
by scaling the squared term by the appropriate amount, let's call it
a, so the final formula is:

f[x, a, n] = Integrate[(1 - a v^2)^n, {v, 0, x}]

So the input bounds will change from -1..1 to some other range,
say -x1 to x1, and then there are two variables, a and x1, which can
be solved for using the following two equations:

f'[x1, a, n] == 0 and f[x1, a, n] == 1
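
As a quick sanity check (my addition, not from the original post):
for n = 1 the integral gives f[x, a, 1] = x - a x^3/3, so
f'[x1] = 1 - a x1^2 == 0 gives a = 1/x1^2, and then
f[x1] = x1 - x1/3 = (2/3) x1 == 1 gives x1 = 3/2 and a = 4/9,
which matches the 3rd order solution below.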

The first few solutions to these equations are:

3rd order with x1 = 3/2:
f3(x) = x - (4 x^3)/27

5th order with x1 = 15/8:
f5(x) = x - (128 x^3)/675 + (4096 x^5)/253125

7th order with x1 = 35/16:
f7(x) = x - (256 x^3)/1225 + (196608 x^5)/7503125 - (16777216 x^7)/12867859375

9th order with x1 = 315/128:
f9(x) = x - (65536 x^3)/297675 + (536870912 x^5)/16409334375
  - (17592186044416 x^7)/6838508054109375
  + (72057594037927936 x^9)/872422665003003515625

By comparing the results for lots of values of n I noticed that these
are the same solutions that you get if you take f[x] = x + a(3) x^3 +
... + a(2n+1) x^(2n+1) and solve for the a coefficients and x1 using
the system of equations f[x1] == 1, f'[x1] == 0, ..., f'n[x1] == 0
(all derivatives up to the nth vanishing at x1).
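
For anyone who wants to drop one of these straight into code, here is
a minimal C sketch of the 5th order curve above, with the input
clamped to +-x1 = +-15/8 so the output saturates cleanly at +-1 (the
clamping is my addition; everything else follows the formula):

static double softclip5(double x)
{
    const double x1 = 15.0 / 8.0;      /* input saturation threshold */
    if (x >  x1) return  1.0;
    if (x < -x1) return -1.0;
    double x2 = x * x;
    /* f5(x) = x - (128 x^3)/675 + (4096 x^5)/253125, in Horner form */
    return x * (1.0 - x2 * (128.0 / 675.0 - x2 * (4096.0 / 253125.0)));
}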

Enjoy!

Andy
--
cytomic - sound music software


[music-dsp] Missing replies for the past year or possibly more

2013-11-04 Thread Andrew Simper
Sorry to anyone who has tried to get feedback from me in the past
year or more: I have been posting, but in html format, and the email
list daemon failed silently, so I never knew they weren't making it
through. This is really frustrating since some of my posts took some
time to put together. I'll search through my sent mail and re-send a
bunch of emails.

All the best,

Andy


[music-dsp] Fwd: R: Sweeping tones via alias-free BLIT synthesis and TRI gain adjust formula

2013-11-04 Thread Andrew Simper
Hi Marco,

Use linear phase BLEP / BLAMP; 16 taps should be plenty for very clean
results. You need linear phase so you won't accrue DC when the taps
overlap while generating high frequency waveforms.

Andy
--
cytomic - sound music software
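
As a concrete starting point, here is a minimal polyBLEP sawtooth
sketch in C. Note this is the cheap 2-sample polynomial BLEP, not the
16-tap linear-phase FIR BLEP Andy recommends above, but it shows where
the band-limiting residual gets added:

/* phase in [0,1), dt = f0/fs */
static double poly_blep(double t, double dt)
{
    if (t < dt) {                    /* just after the step  */
        t /= dt;
        return t + t - t * t - 1.0;
    }
    if (t > 1.0 - dt) {              /* just before the step */
        t = (t - 1.0) / dt;
        return t * t + t + t + 1.0;
    }
    return 0.0;
}

static double saw_tick(double *phase, double dt)
{
    double y = 2.0 * *phase - 1.0;   /* naive saw in [-1,1)    */
    y -= poly_blep(*phase, dt);      /* subtract BLEP residual */
    *phase += dt;
    if (*phase >= 1.0) *phase -= 1.0;
    return y;
}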



On 17 May 2013 00:05, Marco Lo Monaco marco.lomon...@teletu.it wrote:

 Hi guys, here is a repost of a conversation between me and RBJ, with his
 permission, since he couldn't send to the list via plain text from his
 browser. Please, if some of you guys have any suggestions, they would be
 very much appreciated.
 Marco

  Original Message 
 Subject: [music-dsp] Sweeping tones via alias-free BLIT synthesis and TRI
 gain adjust formula
 From: Marco Lo Monaco marco.lomon...@teletu.it
 Date: Tue, May 14, 2013 6:34 am
 To: music-dsp@music.columbia.edu
 --
 I am asking what the best practice is for dealing with frequency
 modulation via BLIT generation.

 hi Marco,

 i've fiddled with BLIT long ago. as i recall i generated a string of
 sinc() functions instead of a string of impulses and i integrated them along
 with a little DC bias to get a saw. for square it was two little BLITs,
 alternating in sign, per cycle, and integrated. and triangle was an
 integrated square. i found generating the BLITs to be difficult, so i
 eventually just used wavetables for the BLITs at different ranges. and then
 i thought why not just use wavetables to generate the waveforms directly.
 i know with the sawtooth, the bias going into the integrator had to change
 as the pitch changes. i dunno what to do about leftover charge left in the
 integrator other than maybe make the integrator a little bit leaky. so that
 is my only suggestion, if you have the rest of the BLIT already
 licked.

 the net advice i can give you is to consider some other method than BLIT
 (like wavetable), and if you're using BLIT with a digital integrator, you
 might have to make the integrator a little leaky so that any DC component
 inside can leak out.
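
For illustration, a leaky integrator is just a one-pole with the
feedback coefficient pulled slightly inside the unit circle; a
hypothetical minimal C version (the coefficient value is only an
example, not from this thread):

/* leak slightly below 1.0 (e.g. 0.9995) so accumulated DC decays */
static double leaky_int_tick(double *state, double in, double leak)
{
    *state = leak * (*state) + in;
    return *state;
}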

 bestest,

 r b-j

  Original Message 
 Subject: R: [music-dsp] Sweeping tones via alias-free BLIT synthesis and TRI
 gain adjust formula
 From: Marco Lo Monaco marco.lomon...@teletu.it
 Date: Wed, May 15, 2013 4:46 am
 To: r...@audioimagination.com
 --

 Hi Robert,
 I tried with leaky: the thing is that it seems you need to compensate for
 amplitudes if you are using a fixed-cutoff leaky integrator (if you filter a
 12kHz blit, its amplitude with a 5Hz LPF will be much lower than a 100Hz
 blit, and compensating the amplitude could generate roundoff noise at high
 freqs). As a hack one could use a varying cutoff depending on the f0 tone to
 be synthesized, but that could have problems again with sweeping tones (an LPF
 changing cutoff at audio rate is not generally artifact free). In both cases
 the result is transient rather than steady textbook waveforms, which could be
 a problem if there is a distortion stage following (like a moog ladder with
 its non linearities).

 I tried also with wavetables, and the clean solution is memory hungry:
 starting at MIDI note 0 (8Hz) you need thousands of harmonics up to Nyquist,
 and with PWL interpolation with at least 2001 points you can easily reach
 300 MB for SQR/TRI/SAW!!! Probably a tradeoff, accepting a bit of aliasing,
 is the only solution. The open problem there is the click (also happening
 with sincM) that you get when you simply add/cut a harmonic in a sweeping
 context. Maybe the SWS BLIT method is the only solution to avoid this and
 I must investigate.

 I also tried HardSync a la Eli Brandt and its consequent method of generating
 alias-free waveforms, but I get too much aliasing with his implementation of
 minBLEP, and with a 32-zero-crossing impulse it seems that you can't treat
 waveforms that have a period shorter than 32 samples (because the OLA method
 would add up, creating subsequent DC steps).

 I thought it was much simpler to do an alias-free synth, honestly!!!

 Thank you for your time

 Marco

  Original Message 
 From: r...@audioimagination.com [mailto:r...@audioimagination.com]
 Date: mercoledì 15 maggio 2013 17:08
 To: Marco Lo Monaco
 Subject: Re: [music-dsp] Sweeping tones via alias-free BLIT synthesis and
 TRI gain adjust formula
 --

 i don't quite see the memory issues of wavetable as being as bad as you do,
 from a back-of-envelope calculation.  how many ranges do you need?  maybe 2
 per octave?  there are 10 octaves of MIDI notes.  maybe 4K per wavetable
 (unless you do some tricky stuff so that the high-pitch wavetables may have
 fewer points), so that's 80K for a single waveform going up and down the
 whole MIDI range.  HardSync will have a large collection of 
