Re: [music-dsp] FIR blog post & interactive demo

2020-03-03 Thread Alan Wolfe
Man that's neat. I've been wondering how a vocoder worked. I'm looking
forward to reading through your work.

BTW, there is also an IIR demo and blog post now.
http://demofox.org/DSPIIR/IIR.html


On Tue, Mar 3, 2020 at 1:04 PM Zhiguang Eric Zhang  wrote:

> this is cool, i can't believe I actually worked on FFT filtering (via
> phase vocoder) before learning FIR/IIR filters ... ?
>
> if anyone's interested in that source code it's here:
> https://www.github.com/kardashevian
>
> On Wed, Jan 15, 2020 at 11:20 PM Alan Wolfe  wrote:
>
>> probably pretty basic stuff for most people here but wanted to share a
>> writeup and demo i made about FIRs.
>>
>> Post: https://blog.demofox.org/2020/01/14/fir-audio-data-filters/
>> Demo: http://demofox.org/DSPFIR/FIR.html
>> A simple ~175-line C++ implementation:
>> https://github.com/Atrix256/DSPFIR/blob/master/Source.cpp
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>>
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] FIR blog post & interactive demo

2020-01-15 Thread Alan Wolfe
probably pretty basic stuff for most people here but wanted to share a
writeup and demo i made about FIRs.

Post: https://blog.demofox.org/2020/01/14/fir-audio-data-filters/
Demo: http://demofox.org/DSPFIR/FIR.html
A simple ~175-line C++ implementation:
https://github.com/Atrix256/DSPFIR/blob/master/Source.cpp
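
For readers skimming the archive, here is a minimal direct-form FIR sketch
(an illustration of the general technique, not code from the post; the
function name and the example kernel are mine):

#include <vector>

// Direct-form FIR: y[n] = sum over k of h[k] * x[n - k].
// Samples before the start of the input are treated as zero.
std::vector<float> ApplyFIR(const std::vector<float>& x, const std::vector<float>& h)
{
    std::vector<float> y(x.size(), 0.0f);
    for (size_t n = 0; n < x.size(); ++n)
        for (size_t k = 0; k < h.size() && k <= n; ++k)
            y[n] += h[k] * x[n - k];
    return y;
}

// Example: a 2-tap box filter (a gentle low pass): ApplyFIR(input, {0.5f, 0.5f});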
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] high & low pass correlated dither noise question

2019-07-12 Thread Alan Wolfe
Late response but thanks a bunch you guys :)

On Thu, Jun 27, 2019 at 1:55 PM Ethan Duni  wrote:

> So as Nigel and Robert have already explained, in general you need to
> separately handle the spectral shaping and pdf shaping. This dither
> algorithm works by limiting to the particular case of triangular pdf with a
> single pole at z=+/-1. For that case, the state of the spectral shaping
> filter can be combined with the state of the pdf shaper, and so a single
> process (with no multiplies!) handles both pdf shaping and spectral shaping.
>
> For arbitrary order M, you would roll one die at each step and then sum it
> with M previous rolls (possibly with some set of signs inverted). So the OP
> example is M=1. You have your choice of 2^M spectral shapes, depending on
> which (if any) of the previous rolls you invert. For the "highness" output,
> you will want to invert every other previous roll. As M increases, the
> output gets more Gaussian.
>
> However for higher orders this multiplier-free algorithm does not produce
> attractive spectral shapes. For even orders, the highpass does not have a
> zero at z=1. For odd orders, the frequency response has large notches in
> the middle of the bandwidth.
>
> For most applications, a triangular pdf with single zero at z=1 is a
> perfectly good dither configuration, and there is no need to go any
> further. If you are looking for a higher-order dither algorithm without
> multiplies, I think the way to extend this would be to include bit shifts
> in the summation. Then you can get some reasonable spectral shapes. The
> simple summation approach is too constrained for orders>1.
>
> Ethan
>
> On Thu, Jun 27, 2019 at 7:43 AM Alan Wolfe  wrote:
>
>> I read a pretty cool article the other day:
>> https://www.digido.com/ufaqs/dither-noise-probability-density-explained/
>>
>> It says that if you have two dice (A and B) that you can roll both dice
>> and then...
>> 1) Re-roll die A and sum A and B
>> 2) Re-roll die B and sum A and B
>> 3) Re-roll die A and sum A and B
>> 4) repeat to get a low pass filtered triangular noise distribution.
>>
>> It says that you can modify it for high pass filtered triangle noise by
>> rolling both dice and then...
>> 1) Re-roll die A and take A - B
>> 2) Re-roll die B and take B - A
>> 3) Re-roll die A and take A - B
>> 4) repeat to get a high pass filtered triangular noise distribution.
>>
>> What i'm wondering is, what is the right thing to do if you want to do
>> this with more than 2 dice? (going higher order)
>>
>> For low pass filtered noise with 3 or more dice (which would be more
>> Gaussian distributed than triangular), would you only re-roll one die each
>> time, or would you re-roll all BUT one die each time?
>>
>> I have the same question about the high pass filtered noise with 3 or
>> more dice, but in that case I think I know what to do about the subtraction
>> order... I think the right thing to do if you have N dice is to sum them
>> all up, but after each "roll" you flip the sign of every die.
>>
>> What do you guys think?
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] high & low pass correlated dither noise question

2019-06-27 Thread Alan Wolfe
I read a pretty cool article the other day:
https://www.digido.com/ufaqs/dither-noise-probability-density-explained/

It says that if you have two dice (A and B) that you can roll both dice and
then...
1) Re-roll die A and sum A and B
2) Re-roll die B and sum A and B
3) Re-roll die A and sum A and B
4) repeat to get a low pass filtered triangular noise distribution.

It says that you can modify it for high pass filtered triangle noise by
rolling both dice and then...
1) Re-roll die A and take A - B
2) Re-roll die B and take B - A
3) Re-roll die A and take A - B
4) repeat to get a high pass filtered triangular noise distribution.
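
A minimal C++ sketch of both procedures as I read them (my own
illustration of the article's algorithm; the struct and function names are
made up):

#include <cstdlib>

// One "die": a uniform white noise roll in [0, 1].
float Roll() { return rand() / float(RAND_MAX); }

struct TriangularDither
{
    float a = Roll();
    float b = Roll();
    bool  rerollA = true;

    // Low pass: re-roll one die per sample and output A + B. Each output
    // shares one die with the previous output, giving positive correlation.
    float NextLowPass()
    {
        if (rerollA) a = Roll(); else b = Roll();
        rerollA = !rerollA;
        return a + b;
    }

    // High pass: alternate A - B and B - A, re-rolling the leading die.
    // Consecutive outputs share one die with opposite sign, giving
    // negative correlation.
    float NextHighPass()
    {
        if (rerollA) { a = Roll(); rerollA = false; return a - b; }
        else         { b = Roll(); rerollA = true;  return b - a; }
    }
};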

What i'm wondering is, what is the right thing to do if you want to do this
with more than 2 dice? (going higher order)

For low pass filtered noise with 3 or more dice (which would be more
Gaussian distributed than triangular), would you only re-roll one die each
time, or would you re-roll all BUT one die each time?

I have the same question about the high pass filtered noise with 3 or more
dice, but in that case I think I know what to do about the subtraction
order... I think the right thing to do if you have N dice is to sum them
all up, but after each "roll" you flip the sign of every die.

What do you guys think?
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Real-time pitch shifting?

2018-05-17 Thread Alan Wolfe
I wrote something in time domain using granular synthesis that doesn't
sound too awful to me. There's explanation and samples on the page, as well
as source code.

https://blog.demofox.org/2018/03/05/granular-audio-synthesis/


On Thu, May 17, 2018 at 1:24 PM, Matt Ingalls  wrote:

> I tried porting Stephan Bernsee’s code from his old DSP Dimension blog:
>  http://blogs.zynaptiq.com/bernsee/pitch-shifting-using-the-ft/
>
> But it sounds pretty crappy, even compared to simple time-domain linear
> interpolation.
>
> And now wondering what's the state of the art for real-time pitch
> manipulation?
> (for my purposes, ideally in the frequency domain)
>
> Is it still just phase vocoder with peak-detection ala Laroche/Dolson?
> https://www.ee.columbia.edu/~dpwe/papers/LaroD99-pvoc.pdf
>
> Thanks!
> Matt
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] granular synth write up / samples / c++

2018-03-05 Thread Alan Wolfe
Hello rbj!

My techniques are definitely not that sophisticated. It's really neat to
hear what the deeper layers of sophistication are.

I'm particularly surprised to hear it is in the neighborhood of vocoders.
That is another technique I'd like to learn sometime, but sounds "scary"
(granular synthesis sounded scary too before I did this post hehe).

Anyway, all I'm doing is placing grains one after another, repeating or
omitting them as needed so the output buffer reaches the target length for
whatever percentage of the input buffer has been consumed. I only place
whole grains into the output buffer.

There is a parameter that specifies a playback-rate multiplier for the
grains (to make them slower or faster, i.e. affecting pitch without really
affecting length).

Whenever a grain is placed down, say grain index N, if the previous grain
placed down isn't grain index N-1, but is grain index M, it does a cross
fade from grain index M+1 to N to keep things continuous.

In my setup, there is no overlapping of grains except for this cross
fading, and no discontinuities.

I use cubic Hermite interpolation to get fractional samples; my grain size
is 20 milliseconds and the cross-fade time is 2 milliseconds.
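
For reference, a minimal sketch of the kind of 4-point cubic Hermite
(Catmull-Rom) interpolation described, for reading fractional sample
positions out of a grain (a generic textbook form, not necessarily the
exact code used):

// Interpolate at fractional position t in [0, 1] between y1 and y2,
// using neighbors y0 and y3 to estimate the slopes at the endpoints.
float CubicHermite(float y0, float y1, float y2, float y3, float t)
{
    float c0 = y1;
    float c1 = 0.5f * (y2 - y0);
    float c2 = y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    float c3 = 1.5f * (y1 - y2) + 0.5f * (y3 - y0);
    return ((c3 * t + c2) * t + c1) * t + c0;
}

// To play a grain at rate r: advance a float position by r per output
// sample, split it into integer index i and fraction t, then evaluate
// CubicHermite(x[i-1], x[i], x[i+1], x[i+2], t).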

Would you consider this enough in the family of granular synthesis to call
it GS for a layman / introduction?

Thanks so much for the info!

PS do you happen to know any gentle / short introductions to formants or
vocoders?

On Mar 5, 2018 3:58 PM, "robert bristow-johnson" <r...@audioimagination.com>
wrote:

>
>
> this is very cool.  i had not read through everything, but i listened to
> all of the sound examples.
>
> so there are two things i want to ask about.  the first is about this
> "granular" semantic:
>
> Thing #1:  so the pitch shifting is apparently *not* "formant-corrected"
> or "formant-preserving".  when you shift up, the voice becomes a little
> "munchkinized" and when you shift down, Darth Vader (or Satan) comes
> through.  that's okay (unless one does not want it), but i thought that
> with granular synthesis (or resynthesis), that the grains that are windowed
> off and overlap-added were not stretched (for downshifting) nor scrunched
> (for up-shifting).  i.e. i thought that in granular synthesis, the amount
> of overlap increases in up shifting and decreases during downshifting.
> this kind of pitch shifting is what Keith Lent writes about in Computer
> Music Journal in 1989 (boy that's a long time ago) and i did a paper in the
> JAES in, i think, 1995.
>
> without this formant-preserving operation, i think i would call this
> either "TDHS" (time-domain harmonic scaling), "OLA" (overlap-add), or
> "WOLA" (windowed overlap-add), or if pitch detection is done "SOLA"
> (synchronous overlap-add) or "PSOLA" (pitch synchronous overlap-add).
> however i have read somewhere the usage of the term PSOLA to mean this
> formant-preserving pitch shifting a la Lent (or also a French dude named
> Hamon).  BTW, IVL Technologies (they did the pitch-shifting products for
> Digitech) was heavy into this and had a few patents, some i believe are now
> expired.
>
> Thing #2: are you doing any pitch detection or some attempt to keep
> waveforms coherent in either the time-scaling or pitch-shifting
> applications?  they sound pretty good (the windowing smoothes things out)
> but might sound more transparent if you could space the input grains by an
> integer number of periods.
>
> with pitch-detection and careful cross-fading (and windowing can be
> thought of as a fade-up function concatenated to a fade-down function) you
> can make time-scaling or pitch-shifting a monophonic voice or harmonic
> instrument glitch free.  it can sound *very* good and companies like
> Eventide have been doing something like that since the early-to-mid 80s.
> (ever since the H949.)  and i imagine any modern DAW does this (and some
> might do frequency-domain pitch-shifting and/or time-scaling using
> something we usually call a "phase vocoder").
>
>
>
> but your examples sound pretty good.
>
> r b-j
>
>
>  Original Message 
> Subject: [music-dsp] granular synth write up / samples / c++
> From: "Alan Wolfe" <alan.wo...@gmail.com>
> Date: Mon, March 5, 2018 5:14 pm
> To: "A discussion list for music-related DSP" <
> music-dsp@music.columbia.edu>
> --
>
> > Hey Guys,
> >
> > Figured I'd share this here.
> >
> > An explanation of basic granular synth stuff, and some simple standalone
> > C++ i wrote that implements it.
> >

[music-dsp] granular synth write up / samples / c++

2018-03-05 Thread Alan Wolfe
Hey Guys,

Figured I'd share this here.

An explanation of basic granular synth stuff, and some simple standalone
C++ i wrote that implements it.

https://blog.demofox.org/2018/03/05/granular-audio-synthesis/

Kind of amazed at how well it works (:

Thanks for the answer to my question BTW Jeff.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Is this granular synthesis?

2018-03-02 Thread Alan Wolfe
Someone was explaining some algorithms to me that I thought were
interesting.  I was curious, is this granular synthesis?

It seems to be, but after reading this link
(https://granularsynthesis.com/guide.php) I'm unsure if it is, or is just
close...

--Adjust Audio Length Without Affecting Frequency--

If you wanted to stretch a section of sound, you'd find a small section of
sound (say 20 ms worth) in the region.

If you wanted to double the sound length, you could double this section of
sound. Similar for any integer multiple.

For fractional multiples > 1, you overlap the sections and mix them (just
addition as usual, but he also mentioned averaging as a possibility).

Presumably you would do this for every small section of sound in the region
you want to adjust.

He said he would cut these buffers at zero crossings to get rid of clicks,
but I imagine using an envelope, or even doing a cubic hermite
interpolation could help here even more.

--Adjust Audio Frequency Without Adjusting Length--

AKA for stuff like autotune.

The idea would be similar.

You'd get a small buffer, adjust it to be shorter or longer (doing some
form of resampling), letting the pitch change by whatever factor you are
interested in.

Then, you'd use the previously mentioned technique to adjust the length
back to what it was.
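
For concreteness, a minimal sketch of the "adjust it to be shorter or
longer" resampling step using linear interpolation (the names and the
pitch-ratio convention are my own assumptions):

#include <vector>

// Resample a buffer by pitchRatio: > 1 shortens the buffer and raises
// pitch, < 1 lengthens it and lowers pitch.
std::vector<float> Resample(const std::vector<float>& in, float pitchRatio)
{
    size_t outLen = (size_t)(in.size() / pitchRatio);
    std::vector<float> out(outLen);
    for (size_t n = 0; n < outLen; ++n)
    {
        float  pos = n * pitchRatio;
        size_t i   = (size_t)pos;
        float  t   = pos - i;
        float  a   = in[i];
        float  b   = (i + 1 < in.size()) ? in[i + 1] : in[i];
        out[n] = a + t * (b - a);   // linear interpolation between neighbors
    }
    return out;
}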


Besides confirming or denying that this is granular synthesis, does anyone
have any observations or tips for approaching these techniques, or the
desired end results?

Thanks!!
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators

2017-10-16 Thread Alan Wolfe
I was just about to suggest that maybe something like a low discrepancy
sequence could be interesting to explore - such as the golden ratio (which
strongly relates to fib of course!).

On Mon, Oct 16, 2017 at 10:22 AM, Andy Farnell 
wrote:

>
> Bit late to the thread, but if you look around Pd archives you will
> find a patch called Fiboverb that I made about 2006/7. As you surmise,
> the relative co-primality of fib(n) sequence has great properties
> for diffuse reverbs.
>
> Just reading about the proposed Go spacing idea, seems very interesting.
>
> best
> Andy
>
> On Wed, Sep 27, 2017 at 05:00:13PM +0200, gm wrote:
> >
> > I have this idée fixe that a reverb bears some resemblance to some
> > types of random number generators, especially the lagged Fibonacci
> > generator.
> >
> > Consider the simplified model reverb block
> >
> >
> >  +-> [AP Diffusor AP1] -> [AP Diffusor AP2] -> [Delay D] --+
> >  |                                                         |
> >  +<--------------------------------------------------------+
> >
> >
> > and the (lagged) fibonacci generator
> >
> > x[n] = x[n-j] + x[n-k]  (mod m)
> >
> > The delay and feedback is similar to a modulus operation (wrapping)
> > in that the signal is "folded", and creates similar kinds of patterns
> > if you regard the delay length as a period.
> > (convolution is called "folding" in German btw)
> >
> > For instance, if the allpass diffusor's delay length is set to 0.6
> > times the delay length, you will get an impulse pattern in the period
> > that is related to the pattern of the operation
> > x[n] = x[n-1] + 0.6 (mod 1) if you graph that on a tile.
> >
> > And the quest in reverb design is to find relationships for the AP
> > Delays that result in smooth, even, quasirandom impulse responses.
> > A good test is the autocorrelation function, which should ideally be
> > an impulse on a uniform noise floor.
> >
> > So my idea was to relate the delay time D to m and set the AP Delays
> > to D*(Number/m),
> > where Number is one of the suggested numbers j and k for the Fibonacci
> > generator.
> >
> > The results however were mixed, and I can't say they were better than
> > setting the times to the arbitrary values I have been using before
> > (which were based on some crude assumptions about distributing the
> > initial impulse as fast as possible, fine-tuning by ear, and
> > rational coprime approximations for voodoo).
> > The results were not too bad either, so they are different from random,
> > because the numbers Number/m have certain values, and those values are
> > actually somewhat similar to the values I was using.
> >
> > Any ideas on that?
> > Does any of this make sense?
> > Suggestions?
> > Improvements?
> > How do you determine your diffusion delay times?
> > What would be ideal AP delay time ratios for the simplified model
> > reverb above?
> >
> > ___
> > dupswapdrop: music-dsp mailing list
> > music-dsp@music.columbia.edu
> > https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sampling theory "best" explanation

2017-08-26 Thread Alan Wolfe
This is neat, thanks for sharing Nigel

On Aug 25, 2017 6:22 PM, "Nigel Redmon"  wrote:

> Well, it’s quiet here, why not…
>
> Please check out my new series on sampling theory, and feel free to
> comment here or there. The goal was to be brief, but thorough, and
> avoid abstract mathematical explanations. In other words, accurate enough
> that you can deduce correct calculations from it, but intuitive enough for
> the math-shy.
>
> http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
>
> I’m not trying to be presumptuous with the series title, “the best
> explanation you’ve ever heard”, but I think it’s unique in that
> it separates sampling origins from the digital aspects, making the
> mathematical basis more obvious. I’ve had several arguments over the years
> about what lies between samples in the digital domain, an epic argument
> about why and how zero-stuffing works in sample rate conversion here more
> than a decade ago, etc. I think if people understand exactly what sampling
> means, and what PCM means, it would be a benefit. And, basically, I
> couldn’t think of a way to title it that didn’t sound like “yet another
> introduction to digital sampling”.
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Recognizing Frequency Components

2017-01-26 Thread Alan Wolfe
As things change hands, or go through multiple layers of translation and
sanitization, sometimes &lt; becomes < and then gets stripped by the next
thing.

Might try something like this hehe: &amp;lt;

Let's be honest, the web is a mess :P

On Thu, Jan 26, 2017 at 2:38 PM, Bjorn Roche <bj...@shimmeo.com> wrote:

>
>
> On Thu, Jan 26, 2017 at 2:40 PM, Martin Klang <m...@pingdynasty.com>
> wrote:
>
>> try putting &lt; instead of less-than.
>>
> That's exactly what I did last time. My impression is that Blogger doesn't
> have a canonical data representation.
>
>
>> On 26/01/17 19:28, Bjorn Roche wrote:
>>
>> On Thu, Jan 26, 2017 at 2:09 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:
>>
>>> It's some HTML filtering happening somewhere between (or including) his
>>> machine and yours.
>>>
>>>
>> It's Blogger. I've fixed this before and apparently it comes back :(. I
>> think Google's more or less abandoned Blogger. I'd switch to something else
>> if I ever blogged anymore.
>>
>> bjorn
>>
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
>
> --
> Bjorn Roche
> @shimmeoapp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Recognizing Frequency Components

2017-01-26 Thread Alan Wolfe
It's some HTML filtering happening somewhere between (or including) his
machine and yours.

The less than of the for loop is being seen as the start of an HTML tag, or
just possibly part of the start of an HTML tag and being stripped away.

A common problem when providing code snippets on the web via email or in
forums etc. Generally due to software not doing proper sanitizing at some
step along the way :P

On Thu, Jan 26, 2017 at 11:06 AM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message 
> Subject: Re: [music-dsp] Recognizing Frequency Components
> From: "Bjorn Roche" 
> Date: Thu, January 26, 2017 10:57 am
> To: "A discussion list for music-related DSP" <
> music-dsp@music.columbia.edu>
> --
>
> > I wrote a blog post a while ago about how to use FFT to find the pitch of
> > an instrument. As I mention in the post, this is hardly the best way,
> but I
> > think it's suitable for many applications. For example, you could write a
> > perfectly serviceable guitar tuner with this.
> >
> > The post links to code and includes some discussion of specific issues of
> > time/frequency resolution and so on.
> >
> > I've been wanting to write about other methods, but... maybe when I
> retire
> > :)
> >
> > http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html
> >
>
> Bjorn, you *must* be aware of this because it appears 3 times, but why are
> your for() statements truncated?
>
> e.g.
>
> void applyWindow( float *window, float *data, int size )
> {
>for( int i=0; i
>   data[i] *= window[i] ;
> }
>
>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Faster Fourier transform from 2012?

2016-08-22 Thread Alan Wolfe
Thanks for the info, very interesting! (:

On Sun, Aug 21, 2016 at 8:34 PM, Ross Bencina 
wrote:

> [Sorry about my previous truncated message, Thunderbird is buggy.]
>
> I wonder what the practical musical applications of sFFT are, and whether
> any work has been published in this area since 2012?
>
>
> > http://groups.csail.mit.edu/netmit/sFFT/hikp12.pdf
>
> Last time I looked at this paper, it seemed to me that sFFT would
> correctly return the highest magnitude FFT bins irrespective of the
> sparsity of the signal. That could be useful for spectral peak-picking
> based algorithms such as SMS sinusoid/noise decomposition and related
> pitch-tracking techniques. I'm not sure how efficient sFFT is for "dense"
> audio vectors however.
>
>
> More generally, Compressive Sensing was a hot topic a few years back.
> There is at least one EU-funded research project looking at audio-visual
> applications:
> http://www.spartan-itn.eu/#2|
>
> And Mark Plumbley has a couple of recent co-publications:
> http://www.surrey.ac.uk/cvssp/people/mark_plumbley/
>
> No doubt there is other work in the field.
>
> Cheers,
>
> Ross.
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Faster Fourier transform from 2012?

2016-08-21 Thread Alan Wolfe
This article has been getting shared and reshared by some graphics
professionals / researchers I know on twitter.

The article itself and arxiv paper are from 2012 though, which makes me
wonder why we haven't heard more about this?

Does anyone know if this is real?

http://m.phys.org/news/2012-01-faster-than-fast-fourier.html
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Are kalman filters used often in music or audio DSP?

2016-07-23 Thread Alan Wolfe
I've read about Kalman filters being used in DSP for things like flight
controls.

I was wondering though, do they have much use in audio and/or music
applications?

Thanks!!
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Trouble Implementing Huggins Binaural Pitch

2016-06-26 Thread Alan Wolfe
Someone on the dsp stack exchange mentioned that you could also get this
effect through additive synthesis:
http://dsp.stackexchange.com/questions/31725/trouble-implementing-huggins-binaural-pitch


On Sun, Jun 26, 2016 at 8:25 AM, Ethan Fenn <et...@polyspectral.com> wrote:

> Keep in mind that windowing (or just using a finite-length sample, which
> is the same as using a rectangular window) is going to smear the FFT
> spectra out, making it hard to draw any conclusion from looking at
> individual bins. I'm guessing you also don't have the tail of the allpass
> filter -- without that it's not strictly true that the magnitudes will be
> preserved.
>
> Another way you could check what's going on is to subtract the two
> channels. If the allpass filter is the only thing going on, you'd expect
> the result to have a very narrow spectrum centered around the target
> frequency. This is because the channels should be in phase at low and high
> frequencies, so you should have cancellation; at the target frequency they
> should be 180 degrees out of phase, so subtraction should yield
> constructive interference.
>
> -Ethan
>
>
>
> On Sun, Jun 26, 2016 at 6:15 AM, Uli Brueggemann <
> uli.brueggem...@gmail.com> wrote:
>
>> I listened to the example at http://www.srmathias.com/huggins-pitch/ and
>> I hear the tones.
>> But a deeper inspection shows that taking the differences of the
>> magnitude responses after FFT results in quite big deviations, even > 10 dB.
>> So it seems that the allpass delays are not really allpass; they do
>> influence the magnitude response.
>>
>>
>> 2016-06-26 3:14 GMT+02:00 Alan Wolfe <alan.wo...@gmail.com>:
>>
>>> Oh nuts. I guess my understanding of the effect was incomplete.
>>>
>>> I'm not using an all pass filter no, I'm just delaying the entire signal.
>>>
>>> Thanks you guys, I'll try with an all pass.
>>> On Jun 25, 2016 4:43 PM, "Jon Boley" <j...@jboley.com> wrote:
>>>
>>> Alan,
>>>
>>> I'm on a phone and don't have headphones on me, so I haven't listened to
>>> your examples yet. However, it sounds like you are applying a broadband
>>> delay.
>>>
>>> Huggins pitch typically works when you apply a narrowband delay (i.e.,
>>> with an allpass filter). The pitch corresponds to the frequency that is
>>> delayed.
>>>
>>> So, can you clarify - are you using an allpass filter to delay specific
>>> frequencies?
>>>
>>> - Jon
>>>
>>>
>>>
>>>
>>> On Jun 25, 2016, at 4:15 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:
>>>
>>> Hey Guys,
>>>
>>> I'm trying to make an implementation of the Huggins Binaural Pitch
>>> illusion: if you play white noise into each ear, but offset one ear by
>>> a period T, it will create the illusion of a tone of frequency 1/T.
>>>
>>> Unfortunately when I try this, I don't hear any tone.
>>>
>>> I've found a python implementation at
>>> http://www.srmathias.com/huggins-pitch/, but unfortunately I don't know
>>> python (I'm a C++ guy) and while I see that this person is doing some extra
>>> filtering work and other things, it's hard to pick apart which extra work
>>> may be required versus just dressing.
>>>
>>> Here is a 3 second wav file that I've made:
>>>
>>> http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise.wav
>>>
>>> The first 1.5 seconds is white noise. The second half of the sound has
>>> the right ear shifted forward 220 samples. The sound file has a sample rate
>>> of 44100, so that 220 sample offset corresponds to a period of 0.005
>>> seconds aka 5 milliseconds aka 200hz.
>>>
>>> I don't hear a 200hz tone though.
>>>
>>> Can anyone tell me where I'm going wrong?
>>>
>>> The 160 line single file standalone (no libs/non standard headers etc)
>>> c++ code is here:
>>> http://pastebin.com/ZCd0wjW1
>>>
>>> Thanks for any insight anyone can provide!
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>>
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Trouble Implementing Huggins Binaural Pitch

2016-06-25 Thread Alan Wolfe
Oh nuts. I guess my understanding of the effect was incomplete.

I'm not using an all pass filter no, I'm just delaying the entire signal.

Thanks you guys, I'll try with an all pass.
On Jun 25, 2016 4:43 PM, "Jon Boley" <j...@jboley.com> wrote:

Alan,

I'm on a phone and don't have headphones on me, so I haven't listened to
your examples yet. However, it sounds like you are applying a broadband
delay.

Huggins pitch typically works when you apply a narrowband delay (i.e., with
an allpass filter). The pitch corresponds to the frequency that is delayed.
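
For illustration, a minimal sketch of such a narrowband delay: a standard
second-order biquad allpass (coefficients per the Audio EQ Cookbook)
centered on the target pitch, applied to one channel only. The parameter
choices are assumptions, not values from any of the posts:

#include <cmath>

struct BiquadAllpass
{
    float b0, b1, b2, a1, a2;
    float x1 = 0, x2 = 0, y1 = 0, y2 = 0;

    BiquadAllpass(float f0, float Q, float sampleRate)
    {
        float w0    = 2.0f * 3.14159265f * f0 / sampleRate;
        float alpha = std::sin(w0) / (2.0f * Q);
        float a0    = 1.0f + alpha;
        b0 = (1.0f - alpha) / a0;
        b1 = (-2.0f * std::cos(w0)) / a0;
        b2 = (1.0f + alpha) / a0;
        a1 = b1;   // allpass: numerator mirrors the denominator
        a2 = b0;
    }

    float Process(float x)
    {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};

// Usage sketch, e.g. f0 = 600 Hz, Q around 1:
//   left  = noiseSample;
//   right = allpass.Process(noiseSample);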

So, can you clarify - are you using an allpass filter to delay specific
frequencies?

- Jon




On Jun 25, 2016, at 4:15 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:

Hey Guys,

I'm trying to make an implementation of the Huggins Binaural Pitch
illusion: if you play white noise into each ear, but offset one ear by a
period T, it will create the illusion of a tone of frequency 1/T.

Unfortunately when I try this, I don't hear any tone.

I've found a python implementation at
http://www.srmathias.com/huggins-pitch/, but unfortunately I don't know
python (I'm a C++ guy) and while I see that this person is doing some extra
filtering work and other things, it's hard to pick apart which extra work
may be required versus just dressing.

Here is a 3 second wav file that I've made:

http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise.wav

The first 1.5 seconds is white noise. The second half of the sound has the
right ear shifted forward 220 samples. The sound file has a sample rate of
44100, so that 220 sample offset corresponds to a period of 0.005 seconds
aka 5 milliseconds aka 200hz.

I don't hear a 200hz tone though.

Can anyone tell me where I'm going wrong?

The 160 line single file standalone (no libs/non standard headers etc) c++
code is here:
http://pastebin.com/ZCd0wjW1

Thanks for any insight anyone can provide!
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Trouble Implementing Huggins Binaural Pitch

2016-06-25 Thread Alan Wolfe
I tried another experiment.  I can "kinda" hear a tone, in that the white
noise sounds a bit more tonal.

Is this what the effect sounds like?  As you point out, it does act like a
filter, but it still seems like I'm not getting the actual effect.

What do you think?

http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise2.wav

I did 16 notes.  In hertz below:
200
0
400
0
300
0
800
0
800
0
300
0
400
0
200
0


On Sat, Jun 25, 2016 at 2:43 PM, Phil Burk <philb...@mobileer.com> wrote:

> Hello Alan,
>
> Your WAV file looks like it has the 220 sample offset. But I do not hear
> a 200 Hz tone.
>
> I can hear tones faintly in the example here:
> http://www.srmathias.com/huggins-pitch/
>
> 200 Hz seems like a low frequency. You might have better luck with
> frequencies around 600 like in the Python example.
>
> The Python also uses a bandwidth filter to make the sound less harsh.
>
> Also I did not hear the tone until I noticed the melody. Try playing a
> simple melody or scale.
>
> Phil Burk
>
>
> On Sat, Jun 25, 2016 at 2:08 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:
>
>> Hey Guys,
>>
>> I'm trying to make an implementation of the Huggins Binaural Pitch
>> illusion, which is where if you play whitenoise into each ear, but offset
>> one ear by a period T that it will create the illusion of a tone of 1/T.
>>
>> Unfortunately when I try this, I don't hear any tone.
>>
>> I've found a python implementation at
>> http://www.srmathias.com/huggins-pitch/, but unfortunately I don't know
>> python (I'm a C++ guy) and while I see that this person is doing some extra
>> filtering work and other things, it's hard to pick apart which extra work
>> may be required versus just dressing.
>>
>> Here is a 3 second wav file that I've made:
>>
>> http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise.wav
>>
>> The first 1.5 seconds is white noise. The second half of the sound has
>> the right ear shifted forward 220 samples. The sound file has a sample rate
>> of 44100, so that 220 sample offset corresponds to a period of 0.005
>> seconds aka 5 milliseconds aka 200hz.
>>
>> I don't hear a 200hz tone though.
>>
>> Can anyone tell me where I'm going wrong?
>>
>> The 160 line single file standalone (no libs/non standard headers etc)
>> c++ code is here:
>> http://pastebin.com/ZCd0wjW1
>>
>> Thanks for any insight anyone can provide!
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Trouble Implementing Huggins Binaural Pitch

2016-06-25 Thread Alan Wolfe
Hey Guys,

I'm trying to make an implementation of the Huggins Binaural Pitch
illusion: if you play white noise into each ear, but offset one ear by a
period T, it will create the illusion of a tone of frequency 1/T.

Unfortunately when I try this, I don't hear any tone.

I've found a python implementation at
http://www.srmathias.com/huggins-pitch/, but unfortunately I don't know
python (I'm a C++ guy) and while I see that this person is doing some extra
filtering work and other things, it's hard to pick apart which extra work
may be required versus just dressing.

Here is a 3 second wav file that I've made:

http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise.wav

The first 1.5 seconds is white noise. The second half of the sound has the
right ear shifted forward 220 samples. The sound file has a sample rate of
44100, so that 220 sample offset corresponds to a period of 0.005 seconds
aka 5 milliseconds aka 200hz.

I don't hear a 200hz tone though.

Can anyone tell me where I'm going wrong?

The 160 line single file standalone (no libs/non standard headers etc) c++
code is here:
http://pastebin.com/ZCd0wjW1

Thanks for any insight anyone can provide!
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] up to 11

2016-06-21 Thread Alan Wolfe
In case you don't get any other responses, in Andy Farnell's book
"Designing Sound" there is a section on psycho-acoustics that I'm reading
right now that I think may be able to answer this question.

I don't understand enough of it to answer it for you, but it is talking in
great detail about this sort of thing.

It also mentions for instance how you can increase the perceived loudness
of a sound (It might only work for sounds which are already short?) by
making it slightly longer.  Apparently that works for up to about 200ms of
stretch time.

A really good read!
https://mitpress.mit.edu/books/designing-sound

On Tue, Jun 21, 2016 at 8:51 AM, Ethan Fenn  wrote:

> Purely for amusement and edification:
>
> Let's say I wanted to make a one second, 44.1/16 mono wav file which was
> as loud as it could possibly be.
>
> The only real results I know about loudness are the equal loudness
> contours, which suggest I should put as much energy as possible in the
> region around 3-4kHz, although the whole mid/high-mid band is pretty
> effective.
>
> But, those contours are really about pure sine tones... I don't know
> anything about the relative loudness of tones vs. noise. There's also the
> question of how to pack as much energy as possible into the sensitive bands
> without exceeding the limited signal range.
>
> What would your approach be?
>
> -Ethan
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] a family of simple polynomial windows and waveforms - DINisnoise

2016-06-14 Thread Alan Wolfe
Speaking of Bezier, the graphs shown earlier look a lot like gain
(http://blog.demofox.org/2012/09/24/bias-and-gain-are-your-friend/) and also
smoothstep, which is y = 3x^2 - 2x^3.

Interestingly (to me anyway, before I learned more math), smoothstep is
equivalent to a cubic Bezier curve where the first two control points are 0
and the second two control points are 1
(http://blog.demofox.org/2014/08/28/one-dimensional-bezier-curves/)
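
A quick numeric check of that equivalence (a minimal sketch; the function
names are mine):

#include <cstdio>

// smoothstep: y = 3x^2 - 2x^3 for x in [0, 1]
float SmoothStep(float x) { return x * x * (3.0f - 2.0f * x); }

// 1D cubic Bezier with control points P0 = 0, P1 = 0, P2 = 1, P3 = 1:
// B(t) = 3*(1-t)*t^2 + t^3, which expands to 3t^2 - 2t^3.
float Bezier0011(float t)
{
    float s = 1.0f - t;
    return 3.0f * s * t * t + t * t * t;
}

int main()
{
    for (int i = 0; i <= 4; ++i)
    {
        float x = i / 4.0f;
        printf("x = %.2f  smoothstep = %f  bezier = %f\n",
               x, SmoothStep(x), Bezier0011(x));
    }
    return 0;
}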



On Tue, Jun 14, 2016 at 10:18 AM, David Lowenfels  wrote:

> > On Jun 12, 2016, at 3:04 AM, Andy Farnell 
> wrote:
> >
> > I did some experiments with Bezier after being hugely inspired by
> > the sounds Jagannathan Sampath got with his DIN synth.
> > (http://dinisnoise.org/)
>
> DIN is not just an additive synth?
> It appears to be so, looking at the prominent and low-res FFT display
> everywhere.
>
> > Jag told me that he had a cute method for matching the endpoints
> > of the segment (you can see in the code), and listening, sounds
> > seem to be alias free, but we never could arrive at a proof of
> > that.
>
> there’s no way those naive (linear) saw and pulse waveforms could be alias
> free.
> arriving at proof would be as easy as an FFT with more resolution?
>
> -David
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Will Pirkle's "Designing Software Synthesizer Plug-Ins in C++"

2016-06-14 Thread Alan Wolfe
Agreed, i like this book a lot and I used the information within to write a
custom compressor and limiter for a PC game.  Really great info.

while on the topic of good books, I want to add two more that I've found
very useful.

Andy Farnell's "Designing Sound".  It talks about the physics, math, and
psychology behind sound to empower people to create sounds from scratch,
based on knowing about how they are actually created and percieved.  Andy
is a regular on this list, interestingly!
https://www.amazon.com/Designing-Sound-Press-Andy-Farnell/dp/0262014416/

Also this online / free book on dsp:
http://www.dspguide.com/ch1.htm


On Tue, Jun 14, 2016 at 10:29 AM, David Lowenfels  wrote:

> Hi, I just purchased Will Pirkle’s textbook "Designing Software
> Synthesizer Plug-Ins in C++”
> and wanted to give a huge thumbs up. It demystifies so many
> state-of-the-art things about virtual analog, including filters (delay-free
> loops!), band-limited oscillators, envelope generators, modulation
> matrices, etc. And also goes into heavy detail on the ins and outs of AU
> and VST (and his own platform RAFX).
>
> As a fledgling music-dsp coder, I really wish I’d had a practical manual
> such as this!
> I was frustrated in university that I could write algorithms and DSP code
> but didn’t know how to package it into a plugin, similarly to how I could
> design crazy hardware/software on paper and breadboards but didn’t know how
> to make a PCB or surface-mount solder to put it in a box.
> I also look forward to perusing his other book on Digital Audio effects.
>
> -David
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] looking for tutorials

2016-06-13 Thread Alan Wolfe
You bet!  And apologies if i came off too harsh on your ideas.

Passion is a good thing, and if you want to code all this stuff in assembly
you'd get a lot of good experience working in both assembly and dsp stuff (:

On Mon, Jun 13, 2016 at 9:17 AM, ty armour  wrote:

> Cool, ill take a look at this stuff
> Thanks
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] looking for tutorials

2016-06-13 Thread Alan Wolfe
It would be ridiculous to code it all in assembly.

The performance critical parts could be written in assembly, but only after
profiling and finding that micro optimization would help.

Assembly code is hard to write, hard to maintain, not portable, and you
don't need it in situations where performance is not critical - like
processing and dispatching UI messages.

Also, macro optimization (changing algorithms) should be tried before micro
optimization.  It has the potential for much bigger wins, while still
leaving your code in a good state.

If you are interested in this sort of stuff, in my opinion you should learn
the techniques themselves first, and then implement them in assembly if
that is what you are really dead set on doing.

You won't find much out there that is fully in assembly.

Remember, programming languages are just a means to an end.  It's the
techniques and ideas that matter, not the specific language they are
programmed in!

Anyhow, here is some information that might help you start out:
http://blog.demofox.org/diy-synthesizer/



On Mon, Jun 13, 2016 at 7:52 AM, ty armour  wrote:

> I am looking for tutorials on coding complete recording studios in
> assembly under linux or bsd. I can write the frameworks myself if you
> introduce me to writing a framework like alsa or portaudio and I will do it
> under linux and bsd and macintosh. seriously someone desperately needs to
> make a hackintosh freeware recording studio.
>
> but if you are interested, make the most complete tutorials ever and make
> the recording studio that you design the best recording studio you can
> think of with compressors and mixers and software like JAMIn etcetera
>
> but yeah post detailed tutorials on it, you can even try to write an
> alternative to cygwin or compile the software under cygwin.
>
> but make it complete. Do everything in the recording studio from DSP to
> synths and drums to recording etcetera.
>
> and find all of the instruments you can
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] R: Anyone using unums?

2016-04-15 Thread Alan Wolfe
Your ideas are interesting, Evan, and it's notable that you mention lattices.

An off topic usage case I was thinking about for these is actually for
lattice based homomorphic encryption. (like gentry's or homomorphic
encryption over the integers)

When it takes 10 minutes per logic gate, a simpler math method would be nicer.

At first glance, it might seem like lookup tables could be more
efficient yet (like, with CADET as someone mentioned), but in this
case of homomorphic encryption the encoding of the reciprocals on the
numberline is still useful since now you can have fractions. The
operations of unums themselves could all boil down to lookup tables
(homomorphic encryption uses XOR and AND, so going from lookup table
to ANF would be a good start, and then you could simplify the ANF),
but the unum encoding is still nice for having better accuracy at
lower bit counts it seems.

But hey, for all i know there's a better solution for that than unums,
for the H.E. case :P


On Fri, Apr 15, 2016 at 9:30 AM, Evan Balster <e...@imitone.com> wrote:
> Ethan:  Take care not to overlook the solution proposed for SORN size:  Yes,
> a comprehensive SORN for the set of N-bit unums takes 2^N bits.  But in most
> practical applications we'll use SORNS representing continuous ranges of
> unums on the "circle" (which may cross infinity).  As the author notes,
> common operations (+-*/^) on continuous ranges produces another continuous
> range.  We can represent these ranges with 2N bits which are homologous to
> two unums, where the range is the clockwise path from the first to the
> second.*  (I'll confess this took me a while to understand as the
> presentation doesn't explain it clearly.)
>
> To operate on practical (range) SORNs, we simply need to apply the operation
> to each range, such that the new range is the union of all possible results.
> We can define this in terms of operator implementations which yield the
> "most clockwise" and "least clockwise" results given the four argument
> values.  So a SORN addition (a+b) would yield (a.min+b.min, a.max+b.max).
>
> All this said, I won't necessarily disagree with the "snake oil and
> conspiracy theories" argument yet -- but if this system can be made into
> something practical it would certainly yield some interesting benefits.
> Even if it can't, perhaps it can inspire something better.
>
>
> * In the case of the complete set and the zero set, the two halves of the
> range equal; in this case we would want to use an odd or even MSB, or some
> other pattern, to distinguish between the special cases.
>
> – Evan Balster
> creator of imitone
>
> On Fri, Apr 15, 2016 at 9:38 AM, Ethan Fenn <et...@polyspectral.com> wrote:
>>
>> Sorry, you don't need 2^256 bits, my brain was just getting warmed up and
>> I got ahead of myself there. There are 2^256 different SORNs in this
>> scenario and you need 256 bits to represent them all. But the point stands
>> that if you actually want good precision (2^32 different values, for
>> instance), the SORN concept quickly becomes untenable.
>>
>> -Ethan
>>
>>
>>
>> On Fri, Apr 15, 2016 at 9:03 AM, Ethan Fenn <et...@polyspectral.com>
>> wrote:
>>>
>>> I really don't think there's a serious idea here. Pure snake oil and
>>> conspiracy theory.
>>>
>>> Notice how he never really pins down one precise encoding of unums...
>>> doing so would make it too easy to poke holes in the idea.
>>>
>>> For example, this idea of SORNs is presented, wherein one bit represents
>>> the presence or absence of a particular value or interval. Which is fine if
>>> you're dealing with 8 possible values. But if you want a number system that
>>> represents 256 different values -- seems like a reasonable requirement to
>>> me! -- you need 2^256 bits to represent a general SORN. Whoops! But of
>>> course he bounces on to a different topic before the obvious problem comes
>>> up.
>>>
>>> -Ethan
>>>
>>>
>>>
>>> On Fri, Apr 15, 2016 at 4:38 AM, Marco Lo Monaco
>>> <marco.lomon...@teletu.it> wrote:
>>>>
>>>> I read his slides. Great ideas, but the best part is when he challenges
>>>> Dr. Kahan with the Star Trek teasing/kidding. That made my day.
>>>> Thanks for sharing Alan
>>>>
>>>>
>>>>
>>>> Inviato dal mio dispositivo Samsung
>>>>
>>>>
>>>>  Messaggio originale 
>>>> Da: Alan Wolfe <alan.wo...@gmail.com>
>>>> Data: 14/04/2016 23:30 (GMT+01:00)
>>>> A: A dis

Re: [music-dsp] R: Anyone using unums?

2016-04-15 Thread Alan Wolfe
They aren't full-sized lookup tables but smaller tables. There are multiple
lookups OR'd together to get the final result.

I don't understand them fully yet, but I ordered his book and am going to
start trying to understand them and make some blog posts with working
example C code.  I'll share with the list (:
On Apr 15, 2016 8:06 AM, "Bjorn Roche" <bj...@shimmeo.com> wrote:

> I can see this being applicable to:
>
> - GPUs, especially embedded GPUs on mobile, where low precision floats are
> super useful, and exact conformity to IEEE isn't (at least, I don't think
> conformity is part of any of the usual specs, but it may be)
> - Storage (I have an application now that could benefit from some sane
> low-precision floats. We are considering IEEE half floats -- yuck!)
>
> However, I am confused by the arithmetic. Is the author seriously
> proposing that all arithmetic be done by LUTs, or am I misunderstanding
> something? Seems like a joke since he literally compares it to "Cant add,
> doesn't even try". For smaller precision, LUTs seem workable, but if you
> have to fetch a number from a large LUT for every operation you can't
> really do that in one clock tick, since, in practice, you have to go off
> die. Anyway, if LUTs made sense for, say, 32-bit floating point math,
> couldn't we also use LUTs for current IEEE floats?
>
> Still, even if this is utter nonsense, I'm glad to see someone rethinking
> floats on a fundamental level.
>
> On Fri, Apr 15, 2016 at 10:38 AM, Ethan Fenn <et...@polyspectral.com>
> wrote:
>
>> Sorry, you don't need 2^256 bits, my brain was just getting warmed up and
>> I got ahead of myself there. There are 2^256 different SORNs in this
>> scenario and you need 256 bits to represent them all. But the point stands
>> that if you actually want good precision (2^32 different values, for
>> instance), the SORN concept quickly becomes untenable.
>>
>> -Ethan
>>
>>
>>
>> On Fri, Apr 15, 2016 at 9:03 AM, Ethan Fenn <et...@polyspectral.com>
>> wrote:
>>
>>> I really don't think there's a serious idea here. Pure snake oil and
>>> conspiracy theory.
>>>
>>> Notice how he never really pins down one precise encoding of unums...
>>> doing so would make it too easy to poke holes in the idea.
>>>
>>> For example, this idea of SORNs is presented, wherein one bit represents
>>> the presence or absence of a particular value or interval. Which is fine if
>>> you're dealing with 8 possible values. But if you want a number system that
>>> represents 256 different values -- seems like a reasonable requirement to
>>> me! -- you need 2^256 bits to represent a general SORN. Whoops! But of
>>> course he bounces on to a different topic before the obvious problem comes
>>> up.
>>>
>>> -Ethan
>>>
>>>
>>>
>>> On Fri, Apr 15, 2016 at 4:38 AM, Marco Lo Monaco <
>>> marco.lomon...@teletu.it> wrote:
>>>
>>>> I read his slides. Great ideas, but the best part is when he challenges
>>>> Dr. Kahan with the Star Trek teasing/kidding. That made my day.
>>>> Thanks for sharing Alan
>>>>
>>>>
>>>>
>>>> Inviato dal mio dispositivo Samsung
>>>>
>>>>
>>>>  Messaggio originale 
>>>> Da: Alan Wolfe <alan.wo...@gmail.com>
>>>> Data: 14/04/2016 23:30 (GMT+01:00)
>>>> A: A discussion list for music-related DSP <
>>>> music-dsp@music.columbia.edu>
>>>> Oggetto: [music-dsp] Anyone using unums?
>>>>
>>>> Apologies if this is a double post.  I believe my last email was in
>>>> HTML format so was likely rejected.  I checked the list archives but
>>>> they seem to have stopped updating as of last year, so posting again
>>>> in plain text mode!
>>>>
>>>> I came across unums a couple weeks back, which seem to be a plausible
>>>> replacement for floating point (pros and cons to it vs floating
>>>> point).
>>>>
>>>> One interesting thing is that addition, subtraction, multiplication
>>>> and division are all single-flop operations and are on "equal footing".
>>>>
>>>> To get a glimpse: to do a division, you do a 1s complement type
>>>> operation (flip all bits but the first 1, then add 1) and you now have
>>>> the inverse that you can do a multiplication with.
>>>>

Re: [music-dsp] list archives not updating?

2016-04-14 Thread Alan Wolfe
ah ok thanks.

This list has been around a long while hehe.

http://music.columbia.edu/cmc/music-dsp/musicdsparchives.html  points
to http://music.columbia.edu/pipermail/music-dsp/

but apparently the real one is here in the email.

maybe we should update that first link or something?  not a biggie
though obviously :P

On Thu, Apr 14, 2016 at 2:38 PM, Douglas Repetto
<doug...@music.columbia.edu> wrote:
> We switched to a new server a year ago. The footer at the bottom of each
> email has the correct address for the archives:
>
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
> best,
> douglas
>
>
>
> On Thu, Apr 14, 2016 at 5:29 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:
>>
>> It looks like it stopped archiving messages last july:
>>
>> http://music.columbia.edu/pipermail/music-dsp/
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Anyone using unums?

2016-04-14 Thread Alan Wolfe
Apologies if this is a double post.  I believe my last email was in
HTML format so was likely rejected.  I checked the list archives but
they seem to have stopped updating as of last year, so posting again
in plain text mode!

I came across unums a couple weeks back; they seem to be a plausible
replacement for floating point (with pros and cons vs. floating point).

One interesting thing is that addition, subtraction, multiplication and
division are all single-flop operations and are on "equal footing".

To get a glimpse: to do a division, you do a 1s complement type
operation (flip all bits but the first 1, then add 1) and you now have
the inverse that you can do a multiplication with.

Another interesting thing is that you have different accuracy
concerns.  You basically can have knowledge that you are either on an
exact answer, or between two exact answers.  Depending on how you set
it up, you could have the exact answers be integral multiples of some
fraction of pi, or whatever else you want.

Interesting stuff, so i was curious if anyone here on the list has
heard of them, has used them for dsp, etc?

Fast division and the lack of denormals seem pretty attractive.

http://www.johngustafson.net/presentations/Multicore2016-JLG.pdf
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] list archives not updating?

2016-04-14 Thread Alan Wolfe
It looks like it stopped archiving messages last july:

http://music.columbia.edu/pipermail/music-dsp/
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-19 Thread Alan Wolfe
Chebyshev is indeed a decent way to approximate trig from what I've read. (
http://www.embeddedrelated.com/showarticle/152.php)

Did you know that rational quadratic Bezier curves can exactly represent
conic sections, and thus give you exact trig values?  You essentially
divide one quadratic Bezier curve by another, with specifically calculated
weights.  Fairly simple and straightforward stuff.  Not sure if the
division is a problem for you mapping it to circuitry.
http://demofox.org/bezquadrational.html
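
For illustration, here's a minimal sketch of that quarter-circle evaluation
(my own toy code, not from the linked page; the control points and weight
below are the standard ones for a 90 degree arc, and note that t does not
map linearly to angle):

    #include <math.h>

    /* Rational quadratic Bezier for the unit quarter circle from (1,0) to (0,1):
       P0=(1,0), P1=(1,1), P2=(0,1), weights w0=w2=1, w1=sqrt(2)/2.
       Every returned (x,y) lies exactly on x^2+y^2=1 (up to rounding), so
       x=cos(theta) and y=sin(theta) for some theta in [0, pi/2]. */
    void quarter_circle(double t, double *x, double *y)
    {
        double b0 = (1.0 - t) * (1.0 - t);  /* Bernstein basis */
        double b1 = 2.0 * (1.0 - t) * t;
        double b2 = t * t;
        double w1 = sqrt(2.0) / 2.0;
        double denom = b0 + b1 * w1 + b2;   /* the "divide one curve by another" */
        *x = (b0 + b1 * w1) / denom;        /* x coords of P0,P1,P2: 1, 1, 0 */
        *y = (b1 * w1 + b2) / denom;        /* y coords of P0,P1,P2: 0, 1, 1 */
    }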

Video cards use a handful of terms of a Taylor series, so that might be a
decent approach as well since it's used in high end production circuitry.


On Tue, Jan 19, 2016 at 10:05 AM, Theo Verelst  wrote:

> Hi all,
>
> Maybe a bit forward, but hey, there are PhDs here, too, so here it goes:
> I've played a little with the latest Vivado HLx design tools fro Xilinx
> FPGAs and the cheap Zynq implementation I use (a Parallella board), and I
> was looking for interesting examples to put in C-to_chip compiler that I
> can connected over AXI bus to a Linux program running on the ARM cores in
> the Zynq chip.
>
> In other words, computations and manipulations with additions, multiplies
> and other logical operations (say of 32 bits) that compile nicely to for
> instance the computation of y=sin(t) in such a form that the Silicon
> Compiler can have a go at it, and produce a nice relative low-latency FPGA
> block to connect up with other blocks to do nice (and very low latency) DSP
> with.
>
> Regards,
>
>  Theo V.
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Inlined functions

2015-10-12 Thread Alan Wolfe
Here's the standard response that you are likely to get a lot more of:

You should profile before optimizing.

If you are having a performance problem (including just wanting it to run
faster in general), you should find out where the time is going and address
the biggest time sinks specifically.

If you aren't having a performance problem, no reason to optimize.  If you
do optimize without having a performance problem, you may create a
performance problem!

Also, many times macro optimization is a far bigger win than micro
optimization.  A pure assembly bubble sort, which uses all sorts of
hardware specific trickery, is likely going to be outperformed by a quick
sort that is not written nearly as well or as well tuned.  The micro
optimized bubble sort is also going to be harder to maintain, harder to
port to other platforms, and is going to be harder to debug if there are
problems, which is another reason to prefer macro to micro optimization.

Summary: profile, then react to profile measurements, rinse and repeat.  If
a function call is slow, try marking it as inline, and see if it helps.
Think about the bigger picture first though, before micro optimizing.

HTH (:

On Mon, Oct 12, 2015 at 8:47 AM, Nigel Redmon  wrote:

> This is a topic discussed ad nauseam on stackoverflow.com, and is better
> suited to that venue. Google ;-)
>
> "Inline" is more a suggestion, compiler dependent, etc., so other than
> saying the obvious—that small, often called functions benefit (where the
> overhead of the call is significant relative to the function body)—the discussion
> quickly drills down to minutiae (versus #define macro, blah blah).
>
> > On Oct 12, 2015, at 8:27 AM, Aran Mulholland 
> wrote:
> >
> > I'm just wondering how people decide which functions should be inlined?
> What are the rules that you follow when choosing to inline a function? Are
> you concerned with function size or the number of inlined functions through
> the code base? What other considerations do you factor in whilst making the
> decision to inline?
> >
> > Thanks
> >
> > Aran
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Inlined functions

2015-10-12 Thread Alan Wolfe
optimization is the only reason to worry about inlining - whether it's
optimizing execution speed, or executable size.

Small functions that only perform very small operations and are called a lot
are usually good candidates for inlining, because something as simple as
"getting the next sample" could happen 44100 times a second or more, and if you
add the cost of a function call to the cost of a memory fetch, it may
double in execution time to do its work.

In that sort of a case, making it inline is nice, because it doesn't add
that function call overhead to execution time.
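
To make that concrete, a hypothetical sketch (the names are made up):

    // Called once per output sample, i.e. 44100+ times a second, so the
    // call overhead matters; inline removes it.
    inline float GetNextSample(const float* table, int tableSize, int& index)
    {
        float sample = table[index];
        if (++index >= tableSize)   // wrap around the wavetable
            index = 0;
        return sample;
    }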

but again, if you don't care about optimization at this point, no need to
worry about it too much!

On Mon, Oct 12, 2015 at 4:38 PM, Aran Mulholland <aranmulholl...@gmail.com>
wrote:

> I'm not optimising anything. I guess the reason I am asking is that I see
> a lot of DSP code that has inlined methods. Usually it will be the render
> callback, to reduce the function calls in that process.
>
> In some wave table oscillator code I have been looking at the main process
> method that gets the next sample is inlined.
>
> Those were the reasons I was asking the questions
>
>
>
>
> On Tue, Oct 13, 2015 at 3:09 AM, Alan Wolfe <alan.wo...@gmail.com> wrote:
>
>> Here's the standard response that you are likely to get a lot more of:
>>
>> You should profile before optimizing.
>>
>> If you are having a performance problem (including just wanting it to run
>> faster in general), you should find out where the time is going and address
>> the biggest time sinks specifically.
>>
>> If you aren't having a performance problem, no reason to optimize.  If
>> you do optimize without having a performance problem, you may create a
>> performance problem!
>>
>> Also, many times macro optimization is a far bigger win than micro
>> optimization.  A pure assembly bubble sort, which uses all sorts of
>> hardware specific trickery, is likely going to be outperformed by a quick
>> sort that is not written nearly as well or as well tuned.  The micro
>> optimized bubble sort is also going to be harder to maintain, harder to
>> port to other platforms, and is going to be harder to debug if there are
>> problems, which is another reason to prefer macro to micro optimization.
>>
>> Summary: profile, then react to profile measurements, rinse and repeat.
>> If a function call is slow, try marking it as inline, and see if it helps.
>> Think about the bigger picture first though, before micro optimizing.
>>
>> HTH (:
>>
>> On Mon, Oct 12, 2015 at 8:47 AM, Nigel Redmon <earle...@earlevel.com>
>> wrote:
>>
>>> This is a topic discussed ad nauseam on stackoverflow.com, and is better
>>> suited to that venue. Google ;-)
>>>
>>> "Inline" is more a suggestion, compiler dependent, etc., so other than
>>> saying the obvious—that small, often called functions benefit (where the
>>> overhead of the call is significant relative to the function body)—the discussion
>>> quickly drills down to minutiae (versus #define macro, blah blah).
>>>
>>> > On Oct 12, 2015, at 8:27 AM, Aran Mulholland <aranmulholl...@gmail.com>
>>> wrote:
>>> >
>>> > I'm just wondering how people decide which functions should be
>>> inlined? What are the rules that you follow when choosing to inline a
>>> function? Are you concerned with function size or the number of inlined
>>> functions through the code base? What other considerations do you factor in
>>> whilst making the decision to inline?
>>> >
>>> > Thanks
>>> >
>>> > Aran
>>>
>>>
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] sinc interp, higher orders

2015-09-11 Thread Alan Wolfe
As far as the artifacts, it sounds like the information you are lacking is
knowledge of bandlimiting and the Nyquist frequency (:

Check these out, I think they will help you, especially the second one, but
the first one might have some info for you as well!

http://blog.demofox.org/2012/05/19/diy-synthesizer-chapter-2-common-wave-forms/

http://blog.demofox.org/2012/06/18/diy-synth-3-sampling-mixing-and-band-limited-wave-forms/

Also, are you interpolating between the samples in your table?  Hopefully
you are at least using linear interpolation!
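
For reference, a minimal linearly interpolated table read might look like
the following (the names are made up):

    /* Read a wavetable at fractional position pos, 0 <= pos < tableSize,
       linearly interpolating between the two neighbouring samples. */
    float ReadLinear(const float *table, int tableSize, float pos)
    {
        int   i0   = (int)pos;
        int   i1   = (i0 + 1 == tableSize) ? 0 : i0 + 1; /* wrap */
        float frac = pos - (float)i0;
        return table[i0] + (table[i1] - table[i0]) * frac;
    }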

Sinc interpolation is a way of interpolating between samples in a way that
makes the result band limited.

Just for fun, here's another way to interpolate data that is decent and
higher quality than linear, but not as high quality as sinc.

http://blog.demofox.org/2015/08/08/cubic-hermite-interpolation/


On Fri, Sep 11, 2015 at 8:43 AM, Ross Bencina 
wrote:

> On 12/09/2015 1:13 AM, Nuno Santos wrote:
>
>> Out of curiosity, by sinc do you mean the sin function?
>>
>
> sinc(x) := sin(x)/x
>
> https://en.wikipedia.org/wiki/Sinc_function
>
> https://ccrma.stanford.edu/~jos/pasp/Windowed_Sinc_Interpolation.html
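>
> (in code, guarding the x = 0 singularity; a quick sketch:)
>
>     double sinc(double x) { return x == 0.0 ? 1.0 : sin(x) / x; }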
>
> Cheers,
>
> Ross.
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] what is GL_TEXTURE_2D_MULTISAMPLE??

2015-06-06 Thread Alan Wolfe

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] recursive SIMD?

2015-04-14 Thread Alan Wolfe
Right, with SIMD you buy in bulk, so naive implementations are
problematic for work where each sample needs processing before the
next sample.
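
The problem case in a nutshell: a one-pole lowpass, where each output
depends on the previous one (a sketch, not from any particular library):

    /* y[i] can't be computed until y[i-1] is known, which is what defeats
       a naive 4-samples-at-a-time SIMD loop. */
    float one_pole(float *y, const float *x, int n, float a, float state)
    {
        for (int i = 0; i < n; ++i)
        {
            state += a * (x[i] - state);
            y[i] = state;
        }
        return state; /* carry over to the next block */
    }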

Intuitively it seems like if you get creative, work out some math, and
do some overlapping SIMD math, you might be able to do a factory
line type of setup, where even though you are doing a multiply (for
instance) of 4 samples against 4 scalar values, the multiply
accounts for different steps in the filtering process for each sample.
Or other fancy things like that.

In writing a limiter and a compressor and thinking about SIMD, i've
felt like the limiter probably wouldn't be able to gain anything from
using SIMD since it has to react to each and every peak, which affects
the next values.

But, maybe in the case of a compressor, which has some fudge due to a
non-zero attack time, maybe you could approximate the correct behavior and
gain the benefit of simd ::shrug::

I don't have a lot of working knowledge of SIMD either unfortunately,
maybe someone else will chime in with a more useful response.

On Tue, Apr 14, 2015 at 10:13 AM, Eric Christiansen
eric8939...@gmail.com wrote:
 No one is able to confirm that a SIMD operation can't use its own output,
 or provide any insight on how it might be accomplished?


 On Sat, Apr 11, 2015 at 10:13 AM, Eric Christiansen eric8939...@gmail.com
 wrote:

 Hi there. (Long time reader, first time poster. Yay!)

 I haven't done much with SIMD in the past, so my experience is pretty low,
 but my understanding is that each data piece must be defined prior to the
 operation, correct? Meaning that you can't use the result of the operation of
 one piece of data as the source data for the next operation, right?

 This came up in thinking about how to optimize an anti-aliasing routine.
 If, for example, the process is oversampling by 4 and running each through
 a low pass filter and then averaging the results, I was wondering if
 there's some way of using some SIMD process to speed this up, specifically
 the part sending each sample through the filter. Since each piece has to go
 through sequentially, I would need to use the result of the first filter
 tick as the input for the second filter tick.

 But that's not possible, right?

 Thanks!

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Uses of Fourier Synthesis?

2015-04-05 Thread Alan Wolfe
Hey Guys,

I was wondering, does anyone know of any practical or interesting use
cases of Fourier synthesis for audio?

I can already make bandlimited square, saw and triangle waves but was
hoping for something like guitar strings or voice, or something along
those lines.

Someone shared photosounder with me, which treats pictures as a
spectrogram and lets you hear the images.
https://www.youtube.com/watch?v=W8MCAXhEsy4

That's pretty interesting, but anyone else know of any other practical
or interesting audio use cases?

Thanks!
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] oversampled Fourier Transform

2015-04-01 Thread Alan Wolfe
and of course, great discoveries often come from where people least
expect them, and often-trodden ground where others have walked before
without noticing something major.

Nothing wrong w/ fresh looks at old things, even if all it amounts to
is someone getting a deeper understanding of things other people
already know.

But yeah... seems like if you are looking for low hanging fruit for
breaking new ground, going to undiscovered areas seems like a good
idea hehe.

On Wed, Apr 1, 2015 at 2:02 PM, Theo Verelst theo...@theover.org wrote:
 Some of this all is amusing, like it's also an Aprils' fool
 thing to mess up frequency and time domain, use symbols almost
 interchangeably, etc. I hope especially the serious EEs will return to the
 essence of the engineering profession and commit to a decent error analysis
 in this old and almost boring field to really contribute to (I mean
 seriously, Fourier transforms have been around a long time). I like some
 theoretical competition, but the faster boys (and girls) and the interesting
 workers in for instance audio applications appear to be way too interested
 in claiming small stakes and trying out an alternative to the normal long
 existing coverage of the subjects.

 First year practicum, people, proper error analysis. For the others: the
 holy grail of DSP isn't so much there where a lot of people are searching,
 probably it's worth knowing that playing around with cepstrums and so on
 isn't going to be more (or less) interesting than it has been, it's ok, but
 extrapolating upward to some theory and corpus of superior work is really
 not going to yield many worth while results, besides some obvious and normal
 possibilities.

 T V.

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] R: Glitch/Alias free modulated delay

2015-03-20 Thread Alan Wolfe
One thing to watch out for is to make sure you are not looking
backwards AND forwards in time, but only looking BACK in time.

When you say you have an LFO going from -1 to 1 that makes me think
you might be going FORWARD in the buffer as well as backwards, which
would definitely cause audible problems.

your LFO really should go between -1 and 0, you then multiply that
value by the number of samples in your buffer (minus 1 if needed,
depending on your design and timing in your code), and then subtract
that value from your write index into the buffer, making sure to
handle the case of going negative, where your subtracted offset is
greater than your current write index.
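
One way to read that recipe in code (hypothetical names):

    /* lfo is in [-1, 0]; bufferSize is the delay line length in samples. */
    int offset    = (int)(lfo * (float)(bufferSize - 1)); /* 0 .. -(bufferSize-1) */
    int readIndex = writeIndex + offset;                  /* look back in time */
    if (readIndex < 0)              /* wrapped past the start of the buffer */
        readIndex += bufferSize;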

On Fri, Mar 20, 2015 at 11:51 AM, Marco Lo Monaco
marco.lomon...@teletu.it wrote:
 How often do you update the LFO? Every buffersize (32/64 samples)?

 M.

 -Messaggio originale-
 Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
 boun...@music.columbia.edu] Per conto di Nuno Santos
 Inviato: venerdì 20 marzo 2015 19:06
 A: A discussion list for music-related DSP
 Oggetto: Re: [music-dsp] Glitch/Alias free modulated delay

 Hi,

 Today I have used a piece of code which is on musicdsp for testing this out
 again.

 http://musicdsp.org/archive.php?classid=4#98

 I was able to have a delay changing in time without any kind of artefact or
 glitch. However I have only managed to get this results by changing the
 parameter by myself.

 When I say manually moving the parameter by myself, I mean that I update a
 property which will be linearly interpolated in time (500ms).

 When I try to apply the modulation which is a value between -1 and 1, that
 comes from an LFO, I always end up with artefacts and noise

 I don’t understand why it works so well when I move the parameter value
 (which is also changing constantly due to interpolation), and it doesn’t work
 when I apply the modulation with the lo…

 Any ideas?

 This is my current code

 void IDelay::read(IAudioSample *output)
 {
     // works perfectly moving the handle manually with the value being
     // interpolated before getting the variable _time;
     double t=double(_writeIndex)-_time;
     //double t=double(_writeIndex)-_time+_modulation*_modulationRange;

     // clip lookback buffer-bound
     if(t<0.0)
         t=_size+t;

     // compute interpolation left-floor
     int const index0=int(t);

     // compute interpolation right-floor
     int index_1=index0-1;
     int index1=index0+1;
     int index2=index0+2;

     // clip interp. buffer-bound
     if(index_1<0)index_1=_size-1;
     if(index1>=_size)index1=0;
     if(index2>=_size)index2=0;

     // get neighbour samples
     float const y_1= _buffer[index_1];
     float const y0 = _buffer[index0];
     float const y1 = _buffer[index1];
     float const y2 = _buffer[index2];

     // compute interpolation x
     float const x=(float)t-(float)index0;

     // calculate
     float const c0 = y0;
     float const c1 = 0.5f*(y1-y_1);
     float const c2 = y_1 - 2.5f*y0 + 2.0f*y1 - 0.5f*y2;
     float const c3 = 0.5f*(y2-y_1) + 1.5f*(y0-y1);

     *output=((c3*x+c2)*x+c1)*x+c0;
 }
  On 20 Mar 2015, at 14:20, Bjorn Roche bj...@shimmeo.com wrote:
 
  Interpolating the sample value is not sufficient to eliminate artifacts.
  You also need to eliminate glitches that occur when jumping from one
  time value to another. In other words: no matter how good your
  sample-value interpolation is, you will still introduce artifacts when
  changing the delay time. A steep low-pass filter going into the delay
  line would be one way to solve this. (this is the idea of
  bandlimiting alluded to earlier in this discussion.)
 
  I can say from experience that you absolutely must take this into
  account, but, if memory serves (which it may not), the quality of
  interpolation and filtering is not that important. I am pretty sure
  I've written code to handle both cases using something super simple
  and efficient like linear interpolation and it sounded surprisingly
  good, which is to say everyone else on the project thought it sounded
  great, and that was enough to consider it done on that particular project.
 
  HTH
 
 
 
  On Fri, Mar 20, 2015 at 6:43 AM, Steven Cook
 stevenpaulc...@tiscali.co.uk
  wrote:
 
 
   Let's suppose that I fix the errors in the algorithm. Is this
   sufficient
   for a quality delay time
   modulation? Or will I need more advanced techniques?
 
 
   That's a matter of opinion :-) My opinion is that the Hermite
  interpolation you're using here (I didn't check to see if it's
  implemented
  correctly!) is more than adequate for modulated delay effects like
  chorus - I suspect a lot of commercial effects have used linear
 interpolation.
 
  Steven Cook.
 
 
 
  -Original Message- From: Nuno Santos
  Sent: Thursday, March 19, 2015 6:28 PM
  To: A discussion list for music-related DSP
  Subject: Re: [music-dsp] 

Re: [music-dsp] Glitch/Alias free modulated delay

2015-03-19 Thread Alan Wolfe
In case it helps, it isn't the delay buffer size that you need to
modify, but rather just your read index into that delay buffer.

If you have a flange for instance that can go from 0 to 500ms in the
past, and is controlled by a sine wave, you always have a 500ms buffer
that you put your output samples into (while also outputting to the
actual output of course), and just let the sine wave control the read
index into that 500ms buffer.

Here's a blog post of mine about the flange effect with simple working
c++ code that only includes standard headers.  Should be pretty easy
to follow / see what i mean hopefully.

http://blog.demofox.org/2015/03/16/diy-synth-flange-effect/

On Thu, Mar 19, 2015 at 11:12 AM, David Olofson da...@olofson.net wrote:
 On Thu, Mar 19, 2015 at 6:15 PM, Nuno Santos nunosan...@imaginando.pt wrote:
 [...]
 If I use interpolation for buffer access I experience less glitch and more 
 alias.

 What type of interpolation are you using? I would think you need
 something better than linear interpolation for this. I'd try Hermite.
 That should be sufficient for slow modulation, although
 theoretically, you *should* bandlimit the signal as soon as you play
 it back faster than the original sample rate.

 For more extreme effects (which effectively means you're sometimes
 playing back audio at a substantially higher sample rate than that of
 your audio stream), you may need a proper bandlimited resampler.
 (Apply a brickwall filter before the interpolation, and/or oversample
 the interpolator.)


 --
 //David Olofson - Consultant, Developer, Artist, Open Source Advocate

 .--- Games, examples, libraries, scripting, sound, music, graphics ---.
 |   http://consulting.olofson.net  http://olofsonarcade.com   |
 '-'
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Approximating convolution reverb with multitap?

2015-03-18 Thread Alan Wolfe
Thanks a bunch you guys.  It seems like the problem is more complex
than I expected and so the solutions are a bit over my head.

I'll start researching though, thanks!!

On Wed, Mar 18, 2015 at 11:21 AM, Ethan Duni ethan.d...@gmail.com wrote:
 Yeah if you simply pick out a few peaks you can get the general shape of
 the reverb decay, but you miss all of the dense reflections. Multi-tap
 delay on its own is fine for the early reflections, but the rest of the
 reverb response is more dense as Steffan says.

 Keun Sup Lee did some work along these lines at CCRMA, where he uses a
 cascade of a comb filter (to get the general shape of the reverb response)
 and switched noise convolution (to get the density). It's a cool approach
 in that you can easily control the big-picture aspects of the reverb by
 tweaking the comb filter parameters. Here are some links that talk about it:

 https://ccrma.stanford.edu/~keunsup/projects.html
 https://ccrma.stanford.edu/~keunsup/earlypart_control.html

 E


 On Wed, Mar 18, 2015 at 10:39 AM, STEFFAN DIEDRICHSEN sdiedrich...@me.com
 wrote:

 Hi Alan,


 With most IRs, you don’t see discrete peaks, it’s a continuous signal.
 This is due to the response of the speaker and microphone being used. This
 causes some smear.
 You might “segment” the IR by doing an integration of the energy from the
 tail to the start (IIRC, that’s a backward energy decay curve). This will
 show some distinctive steps, which are the strongest reflections. I’m not
 sure if this can be used to identify tap positions, but my intuition tells
 me it’d be a starting point.
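
 (For reference, that backward integration is just a reverse cumulative sum
 of squared samples; a minimal sketch, for illustration:)

     /* Backward (Schroeder-style) energy decay curve of an impulse response h:
        edc[i] = sum of h[j]^2 for j >= i; usually plotted in dB. */
     void energy_decay_curve(const float *h, float *edc, int n)
     {
         float sum = 0.0f;
         for (int i = n - 1; i >= 0; --i)
         {
             sum += h[i] * h[i];
             edc[i] = sum;
         }
     }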

 Best,

 Steffan



  On 18.03.2015|KW12, at 18:11, Alan Wolfe alan.wo...@gmail.com wrote:
 
  Hey Guys,
 
  Let's say you have an impulse response recording of your favorite
  reverb location.
 
  Are there any known algorithms for taking that impulse response and
  convert it to N taps for use in a multitap reverb implementation?
 
  I was trying to think about this, and on one hand it seems like maybe you
  could keep the top N peaks, but on the other hand, it seems like some
  of the smaller peaks may be important too, and also, some peaks are
  wide, and you might want to treat that wide peak as a single peak?
 
  Not really sure... any info or known algorithms on this sort of thing?
 
  Thanks!!
  --
  dupswapdrop -- the music-dsp mailing list and website:
  subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
  http://music.columbia.edu/cmc/music-dsp
  http://music.columbia.edu/mailman/listinfo/music-dsp

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Approximating convolution reverb with multitap?

2015-03-18 Thread Alan Wolfe
Hey Guys,

Let's say you have an impulse response recording of your favorite
reverb location.

Are there any known algorithms for taking that impulse response and
convert it to N taps for use in a multitap reverb implementation?

I was trying to think about this, and on one hand it seems like maybe you
could keep the top N peaks, but on the other hand, it seems like some
of the smaller peaks may be important too, and also, some peaks are
wide, and you might want to treat that wide peak as a single peak?

Not really sure... any info or known algorithms on this sort of thing?

Thanks!!
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Musical pitch detection by counting bitflips

2015-02-04 Thread Alan Wolfe
Do you have a write-up of this anywhere? I'd love to read more and have a
place to point people to for more info.

Also it would be neat to see how you extend this to higher dimensions, and
also your log2 calculation is quite intriguing (:

On Wed, Feb 4, 2015 at 7:52 AM, Peter S peter.schoffhau...@gmail.com
wrote:

 Hello All,

 Some time ago it was said on this mailing list by someone that the
 information contained in a periodic signal can be expressed by its
 frequency, as that's enough to reconstruct the signal. I'll refer to
 this claim as (1).

 Later I proposed a very simple algorithm that effectively counts 'bit
 flips' in a fixed length window:

 entropy = 0
 state = bit[0]
 for i = 2 to N
     if bit[i] != bit[i-1]
         entropy = entropy + 1
         state = bit[i]
 end

 I originally invented this algorithm to separate graphics images into
 high-entropy and low-entropy parts for noise reduction and other uses
 (which I know from experience it does exceptionally well).

 Some people argued that it is a mistaken algorithm, and it has no
 practical use. So I decided to spend an entire month of my life
 examining solely this algorithm, its derivatives, and its potential
 uses and applications in audio and graphics processing. It's probably
 been the most inspiring month of my life, gaining a tremendous amount
 of insight from the experiments, also inventing and further refining
 more algorithms and methods as I went along, including new types of
 musical effects.

 After a whole month of testing, let me add some further notes:

 1) I propose to call this Schoffhauzer entropy S_0. This is to avoid
 name collisions in the highly overused 'entropy' namespace and to
 differentiate it from the works of others. So you can't say: No,
 you're an idiot, that's not what entropy is! Since I defined
 Schoffhauzer entropy S_0 to be this, you cannot argue a definition.

 2) Why stop analysis of information on whole messages? I go one step
 further and decompose the message to individual bits, and do analysis
 on bits. Why stop at the level of whole messages? (Sure, that's not
 the whole picture, but now I don't have time to write a book to
 explain this fully.)

 3) This algorithm is only for an 1-bit signal, which automatically
 assumes fixed point representation. For signals with higher bit depth,
 you need to either convert or decompose into 1 bit signal(s).

 4) If you prefer normalized values in a regular real-valued 0-1 range,
 simply divide the result by window_length-1. Another way of
 normalizing it is to further divide that by two, to have a result
 normalized to the 0-0.5 range. (In practice, binary floating point
 representation is always just an approximation, so doing this may
 yield less accurate results. For this reason, I prefer to work with
 unnormalized values.) For now, let's call these Schoffhauzer entropy
 S_1 and S_2, being normalized to 0-1 and 0-1/2, respectively.

 5) In case the signal is a regular periodic 1-bit square wave, and the
 result is normalized to 0-0.5 range, the Schoffhauzer entropy S_2
 approximates the frequency of the periodic 1 bit waveform. After
 normalizing to 0-0.5, this is in fact trivial to see, and it is also
 trivial to compute the approximation error for a given frequency. I
 won't go into further detail, you can confirm it yourself if you're
 skeptical.

 6) Coincidentally, this is in line with claim (1) made by a
 well-respected member of this mailing list, claiming that the
 information contained in a periodic waveform can be represented by
 its frequency. Once you normalize my algorithm to the 0-0.5 range, it
 approximates exactly the _frequency_ of a periodic 1 bit square wave,
 with a well-defined error for any given frequency (which is due to
 windowing).

 At minimum, I would consider this an interesting coincidence that
 for periodic 1 bit square waves, my algorithm - when normalized -
 approximates what (1) claimed to be the information in a periodic
 signal (minus the error from the windowing). Especially that I
 originally invented this algorithm for graphics processing, without
 any consideration for periodic square waves whatsoever.

 7) To use this to practically approximate the normalized musical
 frequency of an arbitrary periodic waveform with arbitrary bit depth,
 a simple method is to run my algorithm on the highest bit of each
 sample, discarding all the other bits. The highest bit is effectively
 the sign bit in both fixed and floating point representation, so when
 used this way on periodic waveforms, in that case it effectively
 becomes the same as 'counting zero crossings' to determine the pitch
 of a periodic musical signal. To my knowledge, that is a
 well-established algorithm that you can also find in books. This is
 one of the simplest possible ways to use my algorithm.
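
 (A rough sketch of that simplest use on float samples, for illustration:)

     /* Estimate normalized frequency by counting sign flips over a window:
        a clean periodic input flips sign about twice per cycle, so
        flips / (2*(n-1)) approximates frequency / samplerate. */
     float sign_flip_pitch(const float *x, int n)
     {
         int flips = 0;
         for (int i = 1; i < n; ++i)
             if ((x[i] >= 0.0f) != (x[i - 1] >= 0.0f))
                 ++flips;
         return (float)flips / (float)(2 * (n - 1));
     }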

 8) This shows that you can approximate the musical pitch of a periodic
 signal by merely using bitwise operations, regardless of
 fixed/floating point 

Re: [music-dsp] magic formulae

2014-11-27 Thread Alan Wolfe
You might check this out. An interesting tune made on Shadertoy.com where
the audio is made with glsl

https://www.shadertoy.com/view/ldfSW2
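
If anyone wants to try the classic one-liner locally, something like this
(the usual 8 kHz / 8-bit raw-audio convention) should work:

    #include <stdio.h>

    /* Classic "bytebeat": pipe stdout into a raw 8-bit 8 kHz player,
       e.g. `aplay -r 8000 -f U8` on Linux. */
    int main(void)
    {
        for (unsigned t = 0;; ++t)
            putchar(t * ((t >> 12 | t >> 8) & 63 & t >> 4));
    }
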
On Nov 27, 2014 8:28 AM, Michael Gogins michael.gog...@gmail.com wrote:

 I've experimented with this using LuaJIT, which has bitwise operations. I
 used a LuaJIT binding to PortAudio for real time audio output. Ivan send
 you my stuff  if you like.

 Regards,
 Mike
 On Nov 27, 2014 8:54 AM, Victor Lazzarini victor.lazzar...@nuim.ie
 wrote:

  Thanks everyone for the links. Apart from an article in arXiv written by
  viznut, I had no
  further luck finding papers on the subject (the article was from 2011, so
  I thought that by
  now there would have been something somewhere, beyond the code examples
 and
  overviews etc.).
  
  Dr Victor Lazzarini
  Dean of Arts, Celtic Studies and Philosophy,
  Maynooth University,
  Maynooth, Co Kildare, Ireland
  Tel: 00 353 7086936
  Fax: 00 353 1 7086952
 
   On 27 Nov 2014, at 13:38, Tito Latini tito.01b...@gmail.com wrote:
  
   On Thu, Nov 27, 2014 at 09:46:13AM -0200, a...@ime.usp.br wrote:
   Another post from him, with more analysis stuff.
  
  
 
 http://countercomplex.blogspot.com.br/2011/10/some-deep-analysis-of-one-line-music.html
  
   Cheers,
   Antonio.
  
   Quoting Ross Bencina rossb-li...@audiomulch.com:
  
   On 27/11/2014 8:35 PM, Victor Lazzarini wrote:
   Does anyone have any references for magic formulae for synthesis (I
   am not sure that this is the usual term)?
   What I mean is the type of bit manipulation that generates
   rhythmic/pitch patterns etc., built (as far as I can see)
   a little bit on an ad hoc basis, like
  kt*((kt>>12|kt>>8)&63&kt>>4)
   etc.
  
    If anyone has a suggestion of papers etc on the subject, I'd be
   grateful.
  
   Viznut's stuff was going on a couple of years ago:
  
  
 
 http://countercomplex.blogspot.com.au/2011/10/algorithmic-symphonies-from-one-line-of.html
  
   Cheers,
  
   Ross.
  
   other links here
  
   http://canonical.org/%7Ekragen/bytebeat/
  
   --
   dupswapdrop -- the music-dsp mailing list and website:
   subscription info, FAQ, source code archive, list archive, book
 reviews,
  dsp links
   http://music.columbia.edu/cmc/music-dsp
   http://music.columbia.edu/mailman/listinfo/music-dsp
 
  --
  dupswapdrop -- the music-dsp mailing list and website:
  subscription info, FAQ, source code archive, list archive, book reviews,
  dsp links
  http://music.columbia.edu/cmc/music-dsp
  http://music.columbia.edu/mailman/listinfo/music-dsp
 
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] entropy

2014-10-15 Thread Alan Wolfe
For some reason, all I'm seeing are your emails, Peter.  Not sure who you
are chatting to or what they are saying in response :P

On Wed, Oct 15, 2014 at 2:18 PM, Peter S peter.schoffhau...@gmail.com
wrote:

 Academic person: There is no way you could do it! Impossibru!!

 Practical person: Hmm... what if I used a simple upper-bound
 approximation instead?
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] ANN: CDP is now a social enterprise

2013-11-04 Thread Alan Wolfe
That's awesome!

On Mon, Nov 4, 2013 at 2:58 PM, Richard Dobson
richarddob...@blueyonder.co.uk wrote:
 [with apologies for any multiple posts]

 This is to announce that the Composers Desktop Project is now a UK social
 enterprise - a non profit-making limited company with (in the required legal
 sense) charitable aims, i.e. education outreach. Our updated home page
 explains it all in more detail:

 http://www.composersdesktop.com/

 People will find that the noteworthy acronym GPL now appears there, along
 with a date.

 This is all to further our goals to get Sound and Music Computing into UK
 schools.

 Richard Dobson
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Functional Programming C code generation

2013-05-14 Thread Alan Wolfe
fwiw, i have a DAW I work on, and on my todo list is the ability to
export your creations to C++.

One option would be to generate generic C++ so you could drop it into whatever
program you wanted (like an FMOD callback, or custom code etc).

The other option would be generating code for a VST plugin.

Just wanted to mention it because as you probably know... a lot of
people have the same good ideas, but fewer people actually bring them
to fruition (:

On Tue, May 14, 2013 at 4:42 PM, Tom Schouten t...@zwizwa.be wrote:
 On 05/14/2013 04:30 PM, pdowling wrote:

 i'm presuming everyone in this thread knows Grame's FAUST and Cycling 74's
 GEN ? or maybe i'm missing something about what you want to do? if so
 apologies. i'm actually just very interested in this subject myself. surely
 the first step would be to appraise two of the excellent solutions already
 out there?


 I'm not familiar with GEN, though I am aware there are already a load of
 code generators available.
 Some open source, some not so open.




 Faust is amazing. it can compile to many different end targets and even
  has its own IDE in FaustWorks. Also, Albert Graf has embedded it (of sorts)
 into Pd already (via Pure). very powerful combo. the language itself is a
 little mathematical, finicky and technically minded though.


 I really like the idea behind Faust, and I made sure that this idea can be
 used on top of what I am writing.  Faust is already a stateless
 operator/combinator language, so very close to what I have in mind.

 That combinator approach used in Faust can work well for high level
 composition, though after working a lot with such style of languages (though
 more Forth-like) I no longer think combinators are a good basic substrate
 for low-level work.  They can be very powerful and succinct, but are indeed
 finicky to work with, and produce hard to read code.  Overall I prefer a
 more direct applicative style ie. named variables for input and possibly
 output.

 The point of the project is not so much to make a graphical design language.
 It is to go a bit deeper and find a simple basic substrate for coding DSP
 algorithms (and other arithmetic-intensive code).  To take the central idea
 of different interpretations of the code and make it accessible to a
 broader public of designers/implementers of DSP code.  It is definitely a
 bit technically minded.  Maybe a bit too technical.  What I like to find
 out is in how far my exposure to these ideas made my idea of common sense
 shift (I have a DSP background originally but got exposed to FP at a later
 time).  I.e. can this approach benefit others?   Can the specific
 implementation I'm using be somehow shifted more into the main stream?  What
 am I (needlessly) re-inventing and what is new?  Are there points of
 integration with other systems?  Lot's of questions still.

 The basic substrate here is the lambda-calculus (a fancy name for the
 process of building everything out of stateless operations).  Everything
 that can go on top (graphical interface, other combination languages, ...)
 is not essential from my current pov, but it should definitely not be
 hindered.






 Gen (and GenExpr) allows for graphically patching and/or using a clean
 scripting language (based on top of lua but with a syntax more like C /
 javascript) to jit compile to machine code in real time whilst the audio
 stream is running. click a button and you get optimised C++ code.

 both much like what jeremy shaw mentioned wishing in his post. by the way,
 Grame have also produced faustgen~, an LLVM object implementation of Faust
 for working in Max also.

 Gen also has the advantage of being a generic language, not just a DSP
  one. can compile to GPU pixel shader GLSL code equally easily, for example.

 i know many DSP/VST coders who now use these environments as their main
 prototyping jumping off point (not me, i'm just a vaguely creative hacker).

 just thought i'd mention it all for the sake of the discussion.


 Looks like I need to give GEN a try.  Probably a good source of inspiration.





 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] basic trouble with signed and unsigned types

2013-05-02 Thread Alan Wolfe
Just noticed in the archives that my response never went to the
mailing list.  People have covered most of it already but here ya go
anyways... (:

Well, you could certainly make a set of functions for the basic lego
pieces of bit twiddling, and overload them for each type you want to
support. You'd want to mark them as inline to make sure and not pay
the overhead of having them be a function though of course.

Like this...

inline int RotateLeft(int value) { /* do it */ return value; }
inline char RotateLeft(char value) { /* do it */ return value; }
inline long int RotateLeft(long int value) { /* do it */ return value; }
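
If you want an actual body for one of these, a portable sketch for a
rotate-by-one of an unsigned int might be:

    inline unsigned int RotateLeft(unsigned int value)
    {
        const unsigned int bits = sizeof(value) * 8; // assumes 8-bit chars
        return (value << 1) | (value >> (bits - 1)); // top bit wraps around
    }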

Then, in your code, you don't care what type it is, you just call
RotateLeft() and it does the rotation, calling the correct version of
the function.  (watch out for implicit conversion though - which could
happen if you start using a type that you didn't implement the
functions for!)

If you further want to divorce your code from specific types, template
functions and template classes can help you get there too.  For
maximum performance, just make sure and keep functions marked as
inlined, and stay away from virtual functions.

Another potential tool in your toolbox could be stdint.h, which
defines specifically sized data types, e.g. uint8_t and int32_t, so you can
work in known-size data types instead of the loosely sized types
built into C++.

In a somewhat similar vein, I wrote an article yesterday about a
technique that is really useful for writing lightning fast code that
also needs to be configurable (dsp is one such type of software)

Sharing in case anyone finds it interesting or useful, it's a bit on topic :P
http://blog.demofox.org/2013/05/01/permutation-programming-without-maintenance-nightmares/

On Wed, May 1, 2013 at 1:35 PM, Sampo Syreeni de...@iki.fi wrote:
 For the longest time I took out a compiler and started cranking out an old
 idea. In that vein, I'm using libsndfile and its (highly reasonable)
 processing model: you just keep everything to zero padded ints (preferably
 signed) and go from there.

 The trouble is that my code is of the kind that also requires lots of bit
 twiddling. My current problem comes from trying to make the code more or
 less adaptive to any bit width, while I also have to do stuff like computed
 shifts.

 So, how do you go about systematically and portably implementing what you
 would expect from your logical operations, using standard C operations,
 without knowing the basic width of your types? (Logical, not arithmetic)
 right shifts of signed quantities, efficient parity, and computed shifts
 with negative offsets are proving particularly nasty at the moment. (It has
 to do with dithering at arbitrary word length which also has to be
 reasonably efficient if ever set in silicon.)
 --
 Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
 +358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Alan Wolfe
I'm sure it varies from hardware to hardware too, so always good to
know your options

On Thu, Mar 14, 2013 at 12:02 PM, jpff j...@cs.bath.ac.uk wrote:
 Ross == Ross Bencina rossb-li...@audiomulch.com writes:

  Ross I am suspicious about whether the mask is fast than the conditional for
  Ross a couple of reasons:

  Ross - branch prediction works well if the branch usually falls one way

  Ross - cmove (conditional move instructions) can avoid an explicit branch

  Ross Once again, you would want to benchmark.

 I did the comparison for Csound a few months ago. The loss in using
 modulus over mask was more than I could contemplate my users
 accepting.  We provide both versions for those who want non-power-of-2
 tables and can take the considerable hit (gcc 4, x86_64)

 ==John ffitch
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-14 Thread Alan Wolfe
RBJ's response would fit into that category I think Sampo (:

On Thu, Mar 14, 2013 at 1:27 PM, Sampo Syreeni de...@iki.fi wrote:
 On 2013-03-14, jpff wrote:

 I did the comparison for Csound a few months ago. The loss in using
 modulus over mask was more than I could contemplate my users accepting.


 Quite a number of processors have/used to have explicit support for counted
 for loops. Has anybody tried masking against doing the inner loop as a
 buffer-sized counted for and only worrying about the wrap-around in an
 outer, second loop, the way we do it with unaligned copies, SIMD and other
 forms of unrolling?
 --
 Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
 +358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-11 Thread Alan Wolfe
interesting idea about rounding up and letting multiple buffers use
the memory.  Very nice.

I just wanted to add, on the front of enforcing power-of-2 sizes: the
way you have it, where you pass in an integer and it understands that as
a power of 2, is nice, but of course a little less intuitive to the user
than saying "i want 1024 samples".  They pass a 10 in and whenever
they see that number in the code, they have to spend time thinking or
remembering what it means.

another way that could be nice is to use an enum to get the
best of both worlds

enum EBufferSizes
{
  //... etc
  kBufferSize_512 = 9,
  kBufferSize_1024 = 10,
  //.. etc
}

Sure they could just put in ints instead of using your enum (some
compilers might make warnings for that at least, or allow you to tell
them to make warnings for that), but it could be a nice step toward
making the interface friendlier while still having the safety / ease of
use in your example.

On Mon, Mar 11, 2013 at 11:06 AM, robert bristow-johnson
r...@audioimagination.com wrote:
 On 3/11/13 11:19 AM, Phil Burk wrote:

 Regarding power-of-2 sized circular buffers, here is a handy way to
 verify that a bufferSize parameter is actually a power-of-2:

 int init_circular_buffer( int bufferSize, ... )
 {
 assert( (bufferSize & (bufferSize-1)) == 0 );
 ...



 might be silly, but a way to force the caller to constrain it to a power of
 2 is:

 int init_circular_buffer( int logBufferSize, ... )
 {
     unsigned long bufferSize = 1L << logBufferSize;
     unsigned long indexMask = bufferSize - 1;
 ...



 On 3/11/13 2:59 AM, Nigel Redmon wrote:

 Also a note that the modulo-by-AND indexing is built into some
 processors—the 56K family, at least, as Robert knows well…buffers are the
 next power of two higher than the space needed, and the masking happens for
 free…


 actually the 56K and other DSPs (like the SHArC) can do buffers of any size
 below 32K. the 56K has a restriction that the base address of the buffer
 must be an integer multiple of a power of 2 that is at least as big as the
 bufferSize. the modulo arithmetic doesn't really happen for free. choosing
 to use a DSP over a cheap ARM chip or something similar has both advantages
 and disadvantages. and they have to put a bunch of logic on the chip for the
 modulo. even the 563xx chip has that 32K restriction, even though the
 address space increased to 16M. such a shame. you have minutes of addressing
 space, but your modulo delay lines are still limited to less than a second
 at any decent sampling rate.

 but what you can do with C where you might have a bunch of different delay
 lines (like in a Shroeder/Jot reverb), all running at the same sampling
 rate, is create a *single* circular buffer that has length that is a power
 of 2. then each little delay line can have a piece of that buffer allocated,
 but all of the allocations move at the same rate. the various delay line
 allocations are stationary relative to each other.

 it can be compared to an analog tape delay like this. you have a fixed
 amount of tape media but as many record and playback heads as your heart
 desires. so instead of cutting a separate loop of tape (which has to be of
 length equal to a power of two) and connect that up to a record and playback
 head, you create one big loop of tape and put a record/playback head pair
 for each delay on the tape loop at different locations.

 that way you can efficiently allocate a delay line of 129 or 257 or 4097
 samples long along with a bunch of others. only the whole big buffer need be
 of length 2^p .


 --

 r b-j  r...@audioimagination.com

 Imagination is more important than knowledge.



 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-09 Thread Alan Wolfe
Hey while we are on the topic of efficiency and the OP not knowing
that division was slower...

Often times in DSP you'll use circular buffers (like for delay buffers
for instance).

Those are often implemented by having an array, and an index into the
array for where the next sample should go.  When you put a sample into
the buffer, you increment the index and then make sure that if the
index into the array is out of bounds, it gets set back to zero
so that it continually goes through the array in a circular fashion
(thus the name circular buffer!).

Incrementing the index could be implemented like this:

index = index + 1;
if (index >= count)
  index = 0;

Another, more compact way could be to do it this way:
index = (index + 1) % count;

In that last one, it uses the modulo operator to get the remainder of
a division to make sure the index is within range.  The modulo
operator has to pay the full cost of the divide though to figure out
the remainder so it is the same cost as a division (talked about
earlier!).

There's a neat technique to do this faster that I have to admit i got
from Ross's code a few years ago in his audio library PortAudio.  That
technique requires that your circular buffer is a power of 2, but so
long as that is true, you can do an AND to get the remainder of the
division.  AND is super fast (even faster than the if / set) so it's a
great improvement.

How you do that looks like the below, assuming that your circular
buffer is 1024 samples large:
index = ((index + 1) & 1023);   // 1023 is just 1024-1

if your buffer was 256 samples large it would look like this:
index = ((index + 1) & 255); // 255 is just 256 - 1

Super useful trick so wanted to share it with ya (:

On Sat, Mar 9, 2013 at 12:14 PM, Tim Goetze t...@quitte.de wrote:
 [Tim Blechmann]
 Though recent gcc versions will replace the above a/3.14 with a
 multiplication, I remember a case where the denominator was constant
 as well but not quite as explicitly stated, where gcc 4.x produced a
 division instruction.

not necessarily: in floating point math a/b and a * (1/b) do not yield
the same result. therefore the compiler should not optimize this, unless
explicitly asked to do so (-freciprocal-math)

 I should have added, when employing the usual suspects, -ffast-math
 -O6 etc, as you usually would when compiling DSP code.  Sorry!

 Tim
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews, dsp 
 links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-07 Thread Alan Wolfe
Quick 2 cents of my own to re-emphasize a point that Ross made -
profile to find out which is fastest if you aren't sure (although it's
good to ask too in case different systems have different oddities you
don't know about)

Also, if in the future you have performance issues, profile before
acting for maximum efficiency... often times what we suspect to be the
bottleneck of our application is in fact not the bottleneck at all.
Happens to everyone :P

lastly, copying buffers is an important thing to get right, but in
case you haven't heard this enough, when hitting performance problems
it's often better to do MACRO optimization instead of MICRO
optimization.

Macro optimization means changing your algorithm, being smarter with
the resources you have etc.

Micro optimization means turning multiplications into bitshifts,
breaking out the assembly and things like that.

Often times macro optimizations will get you a bigger win (don't
optimize a crappy sorting algorithm, just use a better sorting
algorithm and it'll be way better) and also will result in more
maintainable, portable code, so you should prefer going that route
first.

Hope this helps!

On Thu, Mar 7, 2013 at 2:48 PM, Ross Bencina rossb-li...@audiomulch.com wrote:
 Stephen,


 On 8/03/2013 9:29 AM, ChordWizard Software wrote:

 a) additive mixing of audio buffers b) clearing to zero before
 additive processing


 You could also consider writing (rather than adding) the first signal to the
 buffer. That way you don't have to zero it first. It requires having a
 write and an add version of your generators. Depending on your code this
 may or may not be worth the trouble vs zeroing first.

 In the past I've sometimes used C++ templates to parameterise by the output
 operation (write/add) so you only have to write the code that generates the
 signals once.
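
 A minimal sketch of that (the names here are mine, just to show the
 idea):

 struct WriteOp { void operator()(float& dst, float src) const { dst = src; } };
 struct AddOp   { void operator()(float& dst, float src) const { dst += src; } };

 template <typename OutputOp>
 void generateRamp(float* buffer, int count, OutputOp op)
 {
     for (int i = 0; i < count; ++i)
         op(buffer[i], (float)i / (float)count);  // some signal
 }

 // generateRamp(buf, n, WriteOp{});  // overwrite: no need to zero first
 // generateRamp(buf, n, AddOp{});    // mix into whatever is already there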


 c) copying from one buffer to another

 Of course you should avoid this whereever possible. Consider using
 (reference counted) buffer objects so you can share them instead of copying
 data. You could use reference counting, or just reclaim everything at the
 end of every cycle.



 d) converting between short and float formats


 No surprises to any of you there I'm sure.  My question is, can you
 give me a few pointers about making them as efficient as possible
 within that critical realtime loop?

 For example, how does the efficiency of memset, or ZeroMemory,
 compare to a simple for loop?


 Usually memset has a special case for writing zeros, so you shouldn't see
 too much difference between memset and ZeroMemory.

 memset vs simple loop will depend on your compiler.

 The usual wisdom is:

 1) use memset vs writing your own. the library implementation will use
 SSE/whatever and will be fast. Of course this depends on the runtime

 2) always profile and compare if you care.



 Or using HeapAlloc with the
 HEAP_ZERO_MEMORY flag when the buffer is created (I know buffers
 shouldn’t be allocated in a realtime callback, but just out of
 interest, I assume an initial zeroing must come at a cost compared to
 not using that flag)?


 It could happen in a few ways, but I'm not sure how it *does* happen on
 Windows and OS X.

 For example the MMU could map all the pages to a single zero page and then
 allocate+zero only when there is a write to the page.



 I'm using Win32 but intend to port to OSX as well, so comments on the
 merits of cross-platform options like the C RTL would be particularly
 helpful.  I realise some of those I mention above are Win-specific.

 Also for converting sample formats, are there more efficient options
 than simply using

 nFloat = (float)nShort / 32768.0


 Unless you have a good reason not to you should prefer multiplication by
 reciprocal for the first one

 const float scale = (float)(1. / 32768.0);
 nFloat = (float)nShort * scale;

 You can do 4 at once if you use SSE/intrinsics.
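
 For example, a 4-at-once sketch with SSE2 intrinsics (the helper name
 is mine; this is just one way to do it):

 #include <emmintrin.h>  // SSE2

 void shorts_to_floats4(const short* in, float* out)
 {
     const __m128 scale = _mm_set1_ps(1.0f / 32768.0f);
     __m128i s = _mm_loadl_epi64((const __m128i*)in);   // load 4 shorts
     s = _mm_srai_epi32(_mm_unpacklo_epi16(s, s), 16);  // sign-extend to int32
     _mm_storeu_ps(out, _mm_mul_ps(_mm_cvtepi32_ps(s), scale));
 }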


 nShort = (short)(nFloat * 32768.0)

 Float <=> int conversion can be expensive depending on your compiler settings
 and supported processor architectures. There are various ways around this.

 Take a look at pa_converters.c and the pa_x86_plain_converters.c in
 PortAudio. But you can do better with SSE.



 for every sample?

 Are there any articles on this type of optimisation that can give me
 some insight into what is happening behind the various memory
 management calls?


 Probably. I would make sure you allocate aligned memory, maybe lock it in
 physical memory, and then use it -- and generally avoid OS-level memory
 calls from then on.

 I would use memset() & memcpy(). These are optimised and the compiler may even
 inline an even more optimal version.

 The alternative is to go low-level and benchmark everything and write your
 own code in SSE (and learn how to optimise it).

 If you really care you need a good profiler.

 That's my 2c.

 HTH

 Ross.





 Regards,

 Stephen Clarke, Managing Director, ChordWizard Software Pty Ltd
 corpor...@chordwizard.com 

Re: [music-dsp] Thesis topic on procedural-audio in video games?

2013-03-05 Thread Alan Wolfe
Howdy!

I think kkrieger, the 96KB first person shooter uses procedural audio:
http://www.youtube.com/watch?v=KgNfqYf_C_Q

i work in games myself and was recently talking to an audio engineer
(the non programming type of engineer) who has a couple decades of
experience about procedural sound effects.  He was saying that he
hasn't seen anything that great in this respect other than footstep
sounds and sometimes explosions.  If you think about it, that makes a
lot of sense because even in synthesized music (or MIDI let's say),
the only stuff that really sounds that realistic is percussion.

That being said, FM synthesis is kind of magical and can make some
interesting and even realistic sounds :P

my 2 cents!

On Tue, Mar 5, 2013 at 1:35 AM, David Olofson da...@olofson.net wrote:
 Well, I've been doing a bit of that (pretty basic stuff so far, aiming
 at a few kB per song) - but all I have is code; two Free/Open Source
 engines; that are used in two games (all music and sound effects) I'm
 working on. No papers, and unfortunately, not much documentation yet
 either.


 Audiality
 (The latest release is not currently online, though the unnamed
 official version is part of Kobo Deluxe: http://kobodeluxe.com/)

 The old Audiality is all off-line modular synthesis and a simple
 realtime sampleplayer driven by a MIDI sequencer. No samples - it's
 all rendered at load time from a few kB of scripts.

 Gameplay video of Kobo Deluxe (sound effects + music):
 http://youtu.be/C9wO_T_fOvc

 Some ancient Audiality examples (the latter two from Kobo Deluxe):
 http://olofson.net/music/a1-atrance2.mp3
 http://olofson.net/music/a1-trance1.mp3
 http://olofson.net/music/a1-ballad1.mp3


 ChipSound/Audiality 2
 http://audiality.org/

 Audiality 2 is a full realtime synth with subsample accurate
 scripting. The current version has modular voice structures (resonant
 filters, effects etc), but the Kobo II songs so far are essentially
 50-100 voice chip music, using only basic geometric waveforms and
 noise - mono, no filters, no effects, no samples, nothing
 pre-rendered.

 ChipSound/Audiality 2 (the Kobo II tracks and the A2 jingle):
 https://soundcloud.com/david-olofson


 David

 On Tue, Mar 5, 2013 at 9:08 AM, Danijel Domazet
 danijel.doma...@littleendian.com wrote:
 Hi mdsp,
 We need a masters thesis topic on procedural audio in video games. Does
 anyone have any good ideas? It would be great if we could afterwards
 continue developing this towards a commercial products.

 Any advice most welcome.

 Thanks!

 Danijel Domazet
 LittleEndian.com






 --
 //David Olofson - Consultant, Developer, Artist, Open Source Advocate

 .--- Games, examples, libraries, scripting, sound, music, graphics ---.
 |   http://consulting.olofson.net  http://olofsonarcade.com   |
 '-'


Re: [music-dsp] Sound effects and Auditory illusions

2013-02-19 Thread Alan Wolfe
I think it would be neat to show how wildly different wave forms can
sound the same (like... you can make a square wave with sine or
cosine.  Using one looks like a square, using the other doesn't, but
they both sound the same).
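
As a concrete sketch of that (sum the first few odd harmonics of a
square with sine phases vs. cosine phases; same magnitude spectrum,
very different looking waveform, same sound):

#include <cmath>
#include <vector>

std::vector<float> harmonicSum(int numSamples, double freq, double sampleRate,
                               bool useCosine)
{
    const double kPi = 3.14159265358979323846;
    std::vector<float> out(numSamples, 0.0f);
    for (int n = 0; n < numSamples; ++n)
    {
        for (int k = 1; k <= 9; k += 2)   // harmonics 1, 3, 5, 7, 9
        {
            double arg = 2.0 * kPi * k * freq * n / sampleRate;
            out[n] += (float)((useCosine ? std::cos(arg) : std::sin(arg)) / k);
        }
    }
    return out;
}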

Also it would probably be neat to budding audio folk / audio
programmers to see how a low pass filter can make things sound like
you would hear them through a wall, or underwater, or far away.  It
might also be neat to show them visually how an LPF or HPF affects a
wave form (like how LPF smooths it out).

You could also possibly talk about how graphics ties to this stuff,
how when someone takes a big image and makes it smaller, they will
essentially put it through a 2 dimensional low pass filter to make it
look better (non aliased etc) at the lower resolution.

Also of course, showing graphically how aliasing happens in every day
life (car wheels spinning backwards in commercials) is kind of neat.

It might also be neat to get them to play around with something like
pure data where they can build their own audio effects or music, as
well as make a poor man's echo or reverb or flange (:

On Tue, Feb 19, 2013 at 10:58 AM, Nils Pipenbrinck n...@planetarc.de wrote:
 On 19.02.2013 11:26, Marcelo Caetano wrote:
 Dear list,

 I'll teach a couple of introductory lectures on audio and music processing, 
 and I'm looking for some interesting examples of cool stuff to motivate the 
 students, like sound transformations, auditory illusions, etc. I'd really 
 appreciate suggestions, preferably with sound files.

 I think you should present:

 The Shepard tones:  http://en.wikipedia.org/wiki/Shepard_tone
 And the Haas effect:   http://en.wikipedia.org/wiki/Haas_effect

 Best,
   Nils




Re: [music-dsp] Starting From The Ground Up

2013-01-21 Thread Alan Wolfe
Heya,

I'm a game programmer by trade who dabbles in DSP and audio
programming.  I have a handful of books on the subject but recently
was turned onto one that was aimed at programmers.  Reading it has
been really enlightening and seeing things in code which previously i
had only seen as complex equations or strange looking diagrams has
allowed me to understand some things i have been struggling to
understand for a while now :P

I highly recommend this book:  Designing Audio Effect Pluggins in C++
http://www.amazon.com/Designing-Audio-Effect-Plug-Ins-Processing/dp/0240825152/ref=sr_1_1?ie=UTF8qid=1358787258sr=8-1keywords=designing+audio+effect+plug-ins+in+c%2B%2B

On Mon, Jan 21, 2013 at 8:27 AM, douglas repetto
doug...@music.columbia.edu wrote:

 And lots of semi-outdated DSP book reviews here:

 http://music.columbia.edu/cmc/music-dsp/dspbooks.html




 On 1/21/13 10:28 AM, Russell McClellan wrote:

  From a more theoretical perspective, you can't go wrong with the free
 online books at https://ccrma.stanford.edu/~jos/

 These intro DSP books require some basic college math but always keep
 their focus on musical and audio applications.

 Thanks,
 -Russell


 --
 ... http://artbots.org
 .douglas.irving http://dorkbot.org
 .. http://music.columbia.edu/cmc/music-dsp
 ...repetto. http://music.columbia.edu/organism
 ... http://music.columbia.edu/~douglas




Re: [music-dsp] Calculating the gains for an XY-pad mixer

2013-01-17 Thread Alan Wolfe
What you are trying to calculate is called barycentric coordinates;
you might give them a google (:

As far as them all adding to one (which barycentric coordinates do),
I'm not sure if that's appropriate or not, because you have to
remember that volume is linear, but the perception of that linear
scale is not linear.  Normally when you are trying to work with the
perception of volume (in this case, trying to keep it the same
loudness), you work in decibels, which are a nonlinear scale, but are
linear to the ear.

Hope this helps.  Someone will surely chime in if i've misled you on
the second part :P
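
For reference, here's a quick sketch of the rectangle-area scheme from
the quoted message below (names are mine; x and y are the cursor
position in [0,1]):

#include <cmath>

void xyPadGains(float x, float y, bool correlated, float gains[4])
{
    gains[0] = (1 - x) * (1 - y);  // bottom-left: area of the opposite rect
    gains[1] = x       * (1 - y);  // bottom-right
    gains[2] = (1 - x) * y;        // top-left
    gains[3] = x       * y;        // top-right
    if (!correlated)               // make the *squared* gains sum to 1
        for (int i = 0; i < 4; ++i)
            gains[i] = std::sqrt(gains[i]);
}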

On Thu, Jan 17, 2013 at 8:59 PM, Aengus Martin aen...@am-process.org wrote:
 Hi Everyone,

 This may be a fairly idiosyncratic issue, but I think someone here
 might be able to comment on the correctness of what I've done.

 I am implementing a mixer in which the gains of four sounds are
 controlled using a single XY-pad. There is one sound associated with
 each corner of the XY-pad and placing the cursor at a corner sets gain
 of the corresponding sound to 1 and all others to 0; whereas placing
 it in the middle mixes all four sounds equally; and placing it on an
 edge mixes the sounds at the two nearest corners only. I have two
 schemes for calculating the four gains from the cursor position, one
 for correlated sounds and the other for uncorrelated sounds. I'm
 hoping that someone might flag any problems--theoretical or
 otherwise--with either of them (though they seem to me to sound ok)

 For correlated sounds (such as four waveforms in a subtractive
 synthesiser), I understand that a linear crossfade is appropriate and
 the four gains should sum to 1. The scheme I came up with for doing
 this is to divide the XY-pad rectangle into four smaller rectangles by
 drawing a horizontal line and a vertical line through the cursor
 position, and then to use the areas of the rectangles as the gains.
 The gain for a given corner is the area of the 'opposite' rectangle,
 e.g. the gain for the sound associated with the bottom left corner is
 given by the area of the top right rectangle, etc.; I can supply a
 figure if necessary. Of course the areas need to be normalised so that
 they sum to 1.

 Now for uncorrelated sounds, I understand that the squares of the
 gains should sum to 1. My solution is to use the previous scheme but
 with the gain being given by the square root of the area of the
 corresponding rectangle. This somehow seems a bit too simple, but
 maybe it's correct.

 Do these seem like reasonable ways to get the gains for the two cases?

 Thanks,

 Aengus.

 
 www.am-process.org


Re: [music-dsp] Calculating the gains for an XY-pad mixer

2013-01-17 Thread Alan Wolfe
Barycentric coordinates don't just apply to triangles.  Trust me, I'm
a (video game) engineer (tm) hehe

In geometry, the barycentric coordinate system is a coordinate system
in which the location of a point is specified as the center of mass,
or barycenter, of masses placed at the vertices of a simplex(a
triangle, tetrahedron, etc.) - from wikipedia

On Thu, Jan 17, 2013 at 9:22 PM, Ross Bencina
rossb-li...@audiomulch.com wrote:
 On 18/01/2013 4:06 PM, Alan Wolfe wrote:

 What you are trying to calculate is called barycentric coordinates,


 Actually I don't think so.

 Barycentric coordinates apply to triangles (or simplices), not squares (XY).

 http://en.wikipedia.org/wiki/Barycentric_coordinate_system_(mathematics)
 http://en.wikipedia.org/wiki/Simplex


 Aengus wrote:
 Do these seem like reasonable ways to get the gains for the two cases?

 They seem reasonable to me. Do you have a reason for doubting?

 Ross



[music-dsp] Fwd: Programming ARC on a compressor

2013-01-10 Thread Alan Wolfe
Hey Guys,

I have a compressor that works by having an envelope follower (full
rectifier with attack and release settings) following the uncompressed
input, then applying the compression ratio to my input where the
envelope follower samples are above the threshold (working in db for
that part).

I am trying to get ARC working and have 2 thoughts on how it might work,
but it's difficult to find info online about the internals of any ARC
techniques.

Do either of these sound like reasonable approaches?

#1 - watch the envelope follower, and if it's been spending too much
time releasing to get back to normal, decrease the release time.
Else, if it hasn't been spending enough time releasing and it often
quickly hits the input, increase the release time.   What I mean from
an implementation standpoint is something like keep track of what % of
the time over the last X milliseconds (100ms perhaps? I don't know...)
that it's been releasing and use that to drive the release time up or
down.  A few numbers need tuning though which makes me feel like maybe
this is not the correct answer since ARC seems to just work without
parameters.

#2 - watch the input data within a time window (again, say the last
100ms perhaps?) and see if the heights of the peaks and the depths of
the valleys are pretty much consistent  or if they are wildly all over
the place.  If they are pretty consistent, the compressor should use a
longer release time, but if they are very different from each other, i
should use a faster release time.

What do you guys think / have i missed another, better way to do it?

Thanks!

PS I've also read that some ARC techniques may look at the (loudest?)
frequency of the audio data and use that as the basis for the release
times.  Is that true?  Is there a magic formula for calculating a
release time from frequency? :P  (im thinking they must use a windowed
FFT to get the frequency...)


Re: [music-dsp] Fwd: Programming ARC on a compressor

2013-01-10 Thread Alan Wolfe
That sounds like a really interesting and not too perf intensive implementation.

I can dig around in the archives, you already did all the hard work of
creating it and sharing it hehe.

Thanks a ton James!

On Thu, Jan 10, 2013 at 4:16 PM, James C Chandler Jr
jchan...@bellsouth.net wrote:

 On Jan 10, 2013, at 7:10 PM, Andrew Jerrim wrote:

 Here's the link to James' old post:
 http://music.columbia.edu/pipermail/music-dsp/2004-January/059028.html


 Thanks, Andrew.

 Maybe remembering wrong, but think I posted later than that, what I consider 
 an improved two-stage variable release scheme, including some code, but maybe 
 memory is failing. I'll try to find it later tonight if I get time.

 James Chandler Jr.



Re: [music-dsp] Getting Started with PortAudio

2011-05-26 Thread Alan Wolfe
Heya,

You might want to ask the port audio mailing list, you're more likely
to find better answers there :P

http://www.portaudio.com/contacts.html

On Thu, May 26, 2011 at 1:46 PM, resea...@ottomaneng.com
resea...@ottomaneng.com wrote:
 Hello,

 I am trying to get started using PortAudio on Windows 7. I got the latest
 stable release and the ASIO SDK. Compiled on Visual Studio 2008 without a
 problem.


 I am now following this tutorial for setting up a basic PortAudio project

 http://www.portaudio.com/trac/wiki/TutorialDir/Compile/WindowsASIOMSVC

 I've got two issues:

 1) The file: pa_skeleton.c (portaudio\src\common), does not exist.

 2) I tried compiling without the file and I got the following error:

 portaudio\src\common\pa_hostapi.h(71) : fatal error C1189: #error :
  Portaudio: PA_NO_APINAME is no longer supported, please remove
 definition and use PA_USE_APINAME instead


 Only references to PA_NO_* that I can find are in pa_converters.c and those
 are #ifdef PA_NO_STANDARD_CONVERTERS. I think this is a recent change in the
 API. Tips or pointers?



 Thanks,

 Omer Osman



Re: [music-dsp] Sinewave generation - strange spectrum

2011-04-27 Thread Alan Wolfe
Might want to ask this one on the PA list (:

On Wed, Apr 27, 2011 at 2:47 PM,  eu...@lavabit.com wrote:
 Hello,

 Today I tried compiling on Windows with MinGW & MSYS, and everything
 works, the spectrum is clean at SR=44.1 kHz and even the LUT version is
 acceptable.

 Probably on linux I was mixing different versions of the header and linked
 library. I also tried on linux copying the input buffer directly to the
 output buffer, but the signal is chopped, more often when frames per buffer
 is low.

 Now I have to compile Portaudio on linux.
 ./configure and make run without errors, but in the portaudio/lib folder I
 only have libportaudio.la, and no other files or .libs.
 How can I obtain libportaudio.a?

 Thank you very much, and sorry for this misleading problem - I should be
 more careful with linking.


 Hello,

 I want to generate two different frequency sinewaves on LineOut -
 LeftRight. For audio IO I'm using Portaudio(Linux, PortAudio V19-devel
 (built Apr 17 2011 22:00:29)), and the callback code is:

 static int paCallback( const void* inBuff, void* outBuff,
                        unsigned long frpBuff,
                        const PaStreamCallbackTimeInfo* tInf,
                        PaStreamCallbackFlags flags,
                        void* userData )
 {
       int16_t i;
       audioData* data = (audioData*) userData;
       float* out = (float*) outBuff;

       /* Prevent warnings */
       (void) tInf;
       (void) flags;

       for( i = 0; i < frpBuff; i++ )
       {
               *out++ = data->amplitude[0] * sinf( (2.0f * M_PI) * data->phase[0] );
               *out++ = data->amplitude[1] * sinf( (2.0f * M_PI) * data->phase[1] );

               /* Update phase, rollover at 1.0 */
               data->phase[0] += (data->frequency[0] / SAMPLE_RATE);
               if( data->phase[0] > 1.0f ) data->phase[0] -= 2.0f;
               data->phase[1] += (data->frequency[1] / SAMPLE_RATE);
               if( data->phase[1] > 1.0f ) data->phase[1] -= 2.0f;
       }

       return paContinue;
 }

 When I checked the output spectrum for a 10kHz frequency using baudline
 (running on another PC), I got this http://images.cjb.net/80af2.png . The
 spectrum is clean only for output frequencies below 2-3 kHz.

 The tone generator inside baudline gives a clean spectrum at 10 kHz:
 http://images.cjb.net/b943b.png .

 What method would you recommend for generating a clean sinewave at 5-12
 kHz?
 I think there is a bug somewhere, because the sine is computed in float
 for each sample and should be precise enough...

 Thanks





Re: [music-dsp] Sinewave generation - strange spectrum

2011-04-26 Thread Alan Wolfe
just stabbing in the dark in case nobody else gives a more useful
response but...

#1 - what is the format of your output?  If it's low in bitcount that
could make the signal more dirty i believe (less resolution to make a
more perfect sine wave)

#2 - have you tried calculating via doubles?

#3 - what is data->amplitude... does that ever change or is it just a
one time set volume adjustment for the left and right channels?

On Tue, Apr 26, 2011 at 12:57 PM,  eu...@lavabit.com wrote:
 Hello,

 I want to generate two different frequency sinewaves on LineOut -
 LeftRight. For audio IO I'm using Portaudio(Linux, PortAudio V19-devel
 (built Apr 17 2011 22:00:29)), and the callback code is:

 static int paCallback( const void* inBuff, void* outBuff,
                        unsigned long frpBuff,
                        const PaStreamCallbackTimeInfo* tInf,
                        PaStreamCallbackFlags flags,
                        void* userData )
 {
       int16_t i;
       audioData* data = (audioData*) userData;
       float* out = (float*) outBuff;

       /* Prevent warnings */
       (void) tInf;
       (void) flags;

       for( i = 0; i < frpBuff; i++ )
       {
               *out++ = data->amplitude[0] * sinf( (2.0f * M_PI) * data->phase[0] );
               *out++ = data->amplitude[1] * sinf( (2.0f * M_PI) * data->phase[1] );

               /* Update phase, rollover at 1.0 */
               data->phase[0] += (data->frequency[0] / SAMPLE_RATE);
               if( data->phase[0] > 1.0f ) data->phase[0] -= 2.0f;
               data->phase[1] += (data->frequency[1] / SAMPLE_RATE);
               if( data->phase[1] > 1.0f ) data->phase[1] -= 2.0f;
       }

       return paContinue;
 }

 When I checked the output spectrum for a 10kHz frequency using baudline
 (running on another PC), I got this http://images.cjb.net/80af2.png . The
 spectrum is clean only for output frequencies below 2-3 kHz.

 The tone generator inside baudline gives a clean spectrum at 10 kHz:
 http://images.cjb.net/b943b.png .

 What method would you recommend for generating a clean sinewave at 5-12 kHz?
 I think there is a bug somewhere, because the sine is computed in float
 for each sample and should be precise enough...

 Thanks





Re: [music-dsp] Sinewave generation - strange spectrum

2011-04-26 Thread Alan Wolfe
oh also, it *might* matter what device you are using with port audio.

for instance, when i use ASIO i seem to get a much louder, more raw
signal than if i use, say, the directsound interface.

i think the DS device must do some kind of dsp on it like maybe
there's a compressor or something.

To rule this out, what you can do is write your output to a file
instead of (or in addition to) spitting it out to the speaker, then
run your analysis on the sound file.

Libsndfile is a nice, simple library for reading/writing sound files:

http://www.mega-nerd.com/libsndfile/

On Tue, Apr 26, 2011 at 1:14 PM, Alan Wolfe alan.wo...@gmail.com wrote:
 just stabbing in the dark in case nobody else gives a more useful
 response but...

 #1 - what is the format of your output?  If it's low in bitcount that
 could make the signal more dirty i believe (less resolution to make a
 more perfect sine wave)

 #2 - have you tried calculating via doubles?

 #3 - what is data->amplitude... does that ever change or is it just a
 one time set volume adjustment for the left and right channels?

 On Tue, Apr 26, 2011 at 12:57 PM,  eu...@lavabit.com wrote:
 Hello,

 I want to generate two different frequency sinewaves on LineOut -
 LeftRight. For audio IO I'm using Portaudio(Linux, PortAudio V19-devel
 (built Apr 17 2011 22:00:29)), and the callback code is:

 static int paCallback( const void* inBuff, void* outBuff,
                        unsigned long frpBuff,
                        const PaStreamCallbackTimeInfo* tInf,
                        PaStreamCallbackFlags flags,
                        void* userData )
 {
       int16_t i;
       audioData* data = (audioData*) userData;
       float* out = (float*) outBuff;

       /* Prevent warnings */
       (void) tInf;
       (void) flags;

       for( i = 0; i < frpBuff; i++ )
       {
               *out++ = data->amplitude[0] * sinf( (2.0f * M_PI) * data->phase[0] );
               *out++ = data->amplitude[1] * sinf( (2.0f * M_PI) * data->phase[1] );

               /* Update phase, rollover at 1.0 */
               data->phase[0] += (data->frequency[0] / SAMPLE_RATE);
               if( data->phase[0] > 1.0f ) data->phase[0] -= 2.0f;
               data->phase[1] += (data->frequency[1] / SAMPLE_RATE);
               if( data->phase[1] > 1.0f ) data->phase[1] -= 2.0f;
       }

       return paContinue;
 }

 When I checked the output spectrum for a 10kHz frequency using baudline
 (running on another PC), I got this http://images.cjb.net/80af2.png . The
 spectrum is clean only for output frequencies below 2-3 kHz.

 The tone generator inside baudline gives a clean spectrum at 10 kHz:
 http://images.cjb.net/b943b.png .

 What method would you recommend for generating a clean sinewave at 5-12 kHz?
 I think there is a bug somewhere, because the sine is computed in float
 for each sample and should be precise enough...

 Thanks





Re: [music-dsp] Sinewave generation - strange spectrum

2011-04-26 Thread Alan Wolfe
don't forget (if you can do this) to try just writing your output to a
sound file and analyzing that.

that will pull out all unknowns about further dsp work done by your
devices or drivers and isolate the issue (if it still exists) to being
for sure something inside your code.

On Tue, Apr 26, 2011 at 3:56 PM,  eu...@lavabit.com wrote:
 Thanks for the tips

 The amplitude is 0.1 and the format is paFloat32.
 I only needed output so now I'm not using the input buffer, but I will try
 to copy directly input to output.

 If it's possible to record the time-domain audio output & plot it, that
 might help in trying to figure out where the problem is.

 Is the audio clipping perhaps, what is the amplitude set to?

 Have you tried copying the input buffer directly to the output buffer so
 you can rule out everything except your algorithm?

 Regards
 Rob

 -Original Message-
 From: Alan Wolfe
 Sent: Tuesday, April 26, 2011 9:14 PM
 To: A discussion list for music-related DSP
 Subject: Re: [music-dsp] Sinewave generation - strange spectrum

 just stabbing in the dark in case nobody else gives a more useful
 response but...

 #1 - what is the format of your output?  If it's low in bitcount that
 could make the signal more dirty i believe (less resolution to make a
 more perfect sine wave)

 #2 - have you tried calculating via doubles?

 #3 - what is data->amplitude... does that ever change or is it just a
 one time set volume adjustment for the left and right channels?

 On Tue, Apr 26, 2011 at 12:57 PM,  eu...@lavabit.com wrote:
 Hello,

 I want to generate two different frequency sinewaves on LineOut -
 LeftRight. For audio IO I'm using Portaudio(Linux, PortAudio V19-devel
 (built Apr 17 2011 22:00:29)), and the callback code is:

 static int paCallback( const void* inBuff, void* outBuff,
                        unsigned long frpBuff,
                        const PaStreamCallbackTimeInfo* tInf,
                        PaStreamCallbackFlags flags,
                        void* userData )
 {
       int16_t i;
       audioData* data = (audioData*) userData;
       float* out = (float*) outBuff;

       /* Prevent warnings */
       (void) tInf;
       (void) flags;

       for( i = 0; i < frpBuff; i++ )
       {
               *out++ = data->amplitude[0] * sinf( (2.0f * M_PI) * data->phase[0] );
               *out++ = data->amplitude[1] * sinf( (2.0f * M_PI) * data->phase[1] );

               /* Update phase, rollover at 1.0 */
               data->phase[0] += (data->frequency[0] / SAMPLE_RATE);
               if( data->phase[0] > 1.0f ) data->phase[0] -= 2.0f;
               data->phase[1] += (data->frequency[1] / SAMPLE_RATE);
               if( data->phase[1] > 1.0f ) data->phase[1] -= 2.0f;
       }

       return paContinue;
 }

 When I checked the output spectrum for a 10kHz frequency using baudline
 (running on another PC), I got this http://images.cjb.net/80af2.png .
 The
 spectrum is clean only for output frequencies below 2-3 kHz.

 The tone generator inside baudline gives a clean spectrum at 10 kHz:
 http://images.cjb.net/b943b.png .

 What method would you recommend for generating a clean sinewave at 5-12
 kHz?
 I think there is a bug somewhere, because the sine is computed in float
 for each sample and should be precise enough...

 Thanks





Re: [music-dsp] Floating Point Division

2011-04-26 Thread Alan Wolfe
i don't know that chip but have you thought about re-arranging your math?

with limited precision, order of operations can matter.

This is a bigger problem with fixed point (and integers) than floating
point but it can still be a problem even in floating point.
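
A tiny example of the kind of thing that can bite you:

#include <cstdio>

int main()
{
    float big = 1.0e8f, small = 1.0f;
    printf("%f\n", (big + small) - big);  // prints 0.000000
    printf("%f\n", (big - big) + small);  // prints 1.000000
    return 0;
}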

what's the specific error message?

On Tue, Apr 26, 2011 at 8:51 PM,  resea...@ottomaneng.com wrote:
 Hello everyone,

 I have been going back to working on a DSP processor and recently hit a 
 precision problem with division on an ADI SHARC 21369. I get something to the 
 effect of overflow or fold over. Anyone have any suggestions or references if 
 I want e-7 precision division?


 Thanks,

 Omer Osman


Re: [music-dsp] resonance

2011-01-01 Thread Alan Wolfe
I found this link by reading the link you sent me and wandering around
related wikipedia pages

http://www.falstad.com/dfilter/

kinda neat applet!  Playing around with the resonating filters i can
see more of what you were talking about and see how they work.
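
For anyone else playing along, here's a tiny two-pole resonator sketch
(not the 303 circuit, just the general idea; the pole radius r sets the
resonance, and r close to 1 gives a sharp peak at freqHz):

#include <cmath>

struct Resonator
{
    float a1, a2, y1 = 0, y2 = 0;

    Resonator(float freqHz, float r, float sampleRate)  // 0 < r < 1
    {
        const float kPi = 3.14159265f;
        a1 = 2.0f * r * std::cos(2.0f * kPi * freqHz / sampleRate);
        a2 = -r * r;
    }

    float process(float in)
    {
        float y = in + a1 * y1 + a2 * y2;  // the feedback makes the peak
        y2 = y1; y1 = y;
        return y;
    }
};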

On Sat, Jan 1, 2011 at 3:23 PM, Alan Wolfe alan.wo...@gmail.com wrote:
 Thanks a bunch Ross (:

 On Sat, Jan 1, 2011 at 1:54 AM, Ross Bencina rossb-li...@audiomulch.com 
 wrote:
 Alan Wolfe wrote:

 I have a future retro revolution (303 clone) and one of the knobs it
 has is resonance.

 Does anyone know what resonance is in that context or how it's
 implemented?

 I was reading some online and it seems like it might be some kind of
 feedback but I'm not really sure...

 In general, analog synth filters have a Resonance control. It creates a
 pronounced resonant peak around the cutoff frequency. Some filters will
 self-oscillate at that frequency.

 I'm not familiar with the exact filter circuit used by the 303 (links
 welcome :) but I do think of it as feedback, at least that's how it's
 implemented in the Stilson Moog ladder filter, maybe not in all filters
 though. Ignoring feedback for a moment, the filter will have a 180 or 360
 degree phase shift in the vicinity of the cutoff frequency. If you feed back
 the output to the input (in the 360 degree shift case, or the inverted
 output in the 180 degree shift case) then you'll end up with a resonant peak
 at the frequency where the phases match and reinforce each other -- and so
 the amount of feedback controls the amount of resonance at that frequency.

 Sometimes the control is labeled Q instead of Resonance, this is a
 reference to Q factor (see http://en.wikipedia.org/wiki/Q_factor) which I
 guess is a more general parameter for underdamped systems and may be
 relevant to other topologies than just a big feedback loop around the whole
 filter -- hopefully someone else can explain that since I still don't fully
 understand the concept of Q factor as it relates to filters (and digital
 filters in particular).

 Ross.


Re: [music-dsp] Merry Christmas - and some music

2010-12-24 Thread Alan Wolfe
Very nice song (:

Also thanks for bringing up the impulse train topic, i hadn't heard of
that before.

Im reading the link you sent but am i right in thinking that an
impulse train is just a really narrow rectangle wave?

::continues to read::

On Fri, Dec 24, 2010 at 10:19 PM, Thor Harald Johansen t...@thj.no wrote:
 Hey, Music-DSP!

 Newcomer and lurker here. 27 year old computer programmer and music hobbyist
 from Norway.

 Been following the discussions this December with great interest. I have no
 background in CS and DSP but have never the less managed to stubbornly learn
 at least some of the tricks of the trade.

 The tutorials on earlevel.com were a nice refresher, and the whole idea of
 seeing sampled audio as impulse trains instead of stairsteps prompted new
 curiosity in the subject of BLIT (Band Limited Impulse Train) synthesis of
 waveforms, a form of synthesis I for the love of god could not comprehend
 because of its magical ability to synthesize perfect waveforms with only 2
 sines per sample.

 Some quick Google searching produced this:

 http://www.music.mcgill.ca/~gary/307/week5/bandlimited.html

 A couple of hours later, I had a BLIT synthesizer implemented in Java (I
 prefer Java as a portable RAD tool, C/C++ for performance). Very
 enlightening experience!
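
 The core of it turns out to be tiny. A sketch (here in C++; the names
 are mine):

 #include <cmath>
 #include <vector>

 std::vector<float> makeBlit(double period, int numSamples)
 {
     const double kPi = 3.14159265358979323846;
     int M = 2 * (int)std::floor(period / 2.0) + 1;  // odd harmonic count,
                                                     // just under Nyquist
     std::vector<float> out(numSamples);
     for (int n = 0; n < numSamples; ++n)
     {
         double x = kPi * n / period;
         double denom = M * std::sin(x);
         // near x = k*pi the kernel hits its peak value of 1
         out[n] = (std::fabs(denom) < 1e-9) ? 1.0f
                                            : (float)(std::sin(M * x) / denom);
     }
     return out;
 }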

 Anyway, what I really wanted to post here is a version of Silent Night that
 I arranged/produced a few days ago, in a fervor of Yule spirit:

 http://www.artgrounds.com/submission-data/103145/

 It's all sampled software synthesis, with many of the patches coming from
 SampleTank and Roland Personal Orchestra, since I was aiming for something
 acoustic sounding.

 Felt that since we're posting Christmas songs, I might as well join in, and
 announce my presence on the list as well. =)

 Cheers,
 Thor



 Scott Gravenhorst wrote:

 Here is my arrangement of Silent Night for synths.
 http://home1.gte.net/res0658s/Silent_Night.mp3

 3 FPGA synths are used:

 Xarp-56 for harp sounds
 PolyGateMan for flute sounds
 PolyGateMan_FM for bell sounds

 Merry Christmas!

 -- ScottG
 
 -- Scott Gravenhorst
 -- FPGA MIDI Synthesizer Information: home1.gte.net/res0658s/FPGA_synth/
 -- FatMan: home1.gte.net/res0658s/fatman/
 -- NonFatMan: home1.gte.net/res0658s/electronics/
 -- When the going gets tough, the tough use the command line.



Re: [music-dsp] achieving chiptunes sounds

2010-12-22 Thread Alan Wolfe
i tried the variable rectangle wave and that sounds A TON more like
old early nes style game music.  Even without sound degradation it
really sounds a lot like a chiptune.

im going to try the quick arpeggios next (:
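
For reference, the variable rectangle (pulse) wave is just a phase
accumulator plus a duty-cycle compare. A naive sketch (names mine; the
aliasing a naive oscillator adds is arguably part of the chiptune
character):

struct PulseOsc
{
    double phase = 0.0, phaseInc;
    PulseOsc(double freq, double sampleRate) : phaseInc(freq / sampleRate) {}

    float process(double duty)  // duty in (0,1), e.g. 0.125 for an NES-ish timbre
    {
        float out = phase < duty ? 1.0f : -1.0f;
        phase += phaseInc;
        if (phase >= 1.0) phase -= 1.0;
        return out;
    }
};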

On Tue, Dec 21, 2010 at 2:49 AM, Laurent de Soras
laurent.de.so...@free.fr wrote:

 What is it that old systems (and chiptunes) have that makes them sound
 so distinctly recognizable like they are?

 As Didier implied, most of the 8-bit sound character comes from
 the voice control system. The chip parameters (pitches, volumes,
 waveforms, etc) are generally refreshed by the software in
 a vertical blank related timer, every 1/50 or 1/60 s.
 Chiptunes make an extensive use of arpeggios and complex,
 fast pitch envelopes, making them sound very different from
 classic synthesizers.

 Anyway, the produced sound should be quite clean. There is
 no need to introduce aliasing or quantification on the
 final signal.


 --
 Laurent de Soras                  |               Ohm Force
 DSP developer & Software designer |  Digital Audio Software
 http://ldesoras.free.fr           | http://www.ohmforce.com



Re: [music-dsp] Gate on/off clicks in analogue synths?

2010-12-20 Thread Alan Wolfe
Someone else will surely chime in but i asked a similar question a
couple months back and i remember one person suggested that
internally, such devices probably have a capacitor (if my memory
serves correctly) that acts as a basic envelope by ramping the volume
up and down at the beginning and end.

ill see if i can find that old email and fwd it to you off list or
post some quotes from it or something (:
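
The software version of that capacitor is just a one-pole smoother on
the gate (the coefficient below is made up, tune to taste):

struct GateSmoother
{
    float g = 0.0f, coeff = 0.001f;        // bigger coeff = faster ramp

    float process(bool gateOn)
    {
        float target = gateOn ? 1.0f : 0.0f;
        g += coeff * (target - g);         // RC-style charge/discharge
        return g;                          // multiply the VCA input by this
    }
};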

2010/12/20 Dominique Würtz dwue...@gmx.net:
 Hi all,

 I'm developing a virtual analogue synth plugin where one can choose the
 VCA to be controlled either by an envelope generator or the gate signal.
 This architecture is also found in classic synths like the Roland SHx
 series. Now, obviously, the gate signal can cause annoying clicks at
 note on/off events due to the sharp transients that occur if the VCO
 signal is cut off in the middle of a waveform cycle. The clicks are
 especially pronounced with a low frequency signal with few harmonics
 (i.e. filter freq turned down). My question is: do these clicks also
 occur with real analog synths, or do these hardware units take special
 measures to smooth the on-off transitions of the gate signal?

 Thank you!

 Dominique




Re: [music-dsp] Gate on/off clicks in analogue synths?

2010-12-20 Thread Alan Wolfe
Found it!

http://music.columbia.edu/pipermail/music-dsp/2010-November/subject.html

Check that page out and look at the messages with the subject

[music-dsp] how does simple synth hardware not pop?

hopefully there's some good info in there for you to make sense of

On Mon, Dec 20, 2010 at 10:46 PM, Alan Wolfe alan.wo...@gmail.com wrote:
 Someone else will surely chime in but i asked a similar question a
 couple months back and i remember one person suggested that
 internally, such devices probably have a capacitor (if my memory
 recalls correctly) that acts as a basic envelope by ramping up and
 down the volume at the begin and end.

 ill see if i can find that old email and fwd it to you off list or
 post some quotes from it or something (:

 2010/12/20 Dominique Würtz dwue...@gmx.net:
 Hi all,

 I'm developing a virtual analogue synth plugin where one can choose the
 VCA to be controlled either by an envelope generator or the gate signal.
 This architecture is also found in classic synths like the Roland SHx
 series. Now, obviously, the gate signal can cause annoying clicks at
 note on/off events due to the sharp transients that occur if the VCO
 signal is cut off in the middle of a waveform cycle. The clicks are
 especially pronounced with a low frequency signal with few harmonics
 (i.e. filter freq turned down). My question is: do these clicks also
 occur with real analog synths, or do these hardware units take special
 measures to smooth the on-off transitions of the gate signal?

 Thank you!

 Dominique




Re: [music-dsp] Algorithms for finding seamless loops in audio

2010-11-24 Thread Alan Wolfe
Agreed here (:

in 2d graphics and skeletal animation, making tileable 2d art and
seamless blends are basically the same problems.

in both areas they MAKE the things seamless instead of trying to find
how they could be seamless.

in 2d graphics this comes up via texturing (probably obvious), and in
skeletal animation, it is literally in the form that Didier talks
about; they literally cross fade animation weights from an old
animation to a new animation to make a seamless transition.

Of course, even with these seamless techniques, you can still notice
issues like in 2d textures there might be specific features that
really stand out in a texture and you can easily see it repeating.  In
3d animation, even though a blend may be seamless it can still look
wrong.

I only bring these parallels up because there is something in 2d
graphics called wang tiling which can make some really organic
looking tileable textures.

i think the same technique could apply to audio (and even skeletal
animation) but how you would apply the idea, i'm not sure 100% hehe.

my 2 monopoly dollars! :P

On Wed, Nov 24, 2010 at 9:51 PM, Didier Dambrin di...@skynet.be wrote:
 IMHO finding loop points is the wrong problem to solve, it's better to
 make something loop instead, as (ideally) you're only gonna find the least
 bad loop points, and nothing guarantees that there's anything loopable.
 I would use crossfading, and possibly autocorrelation to auto-select the
 part to repeat & crossfade (to avoid a volume dip (& timbre change) due to
 phasing).
 I would also reject too-small looping sections, as a click-free loop is
 one thing but a loop that doesn't sound repeating is another thing.
 Even if you wanna find loop points, IMHO it's still better to find them not
 caring about noticeable clicks, and then do a little crossfade. Unless you
 really can't touch your source sample.




 Hello music-dsp list,

 I'm the author of a SoundFont instrument editing application called
 Swami (http://swami.sourceforge.net).  A while back an interested
 developer added a loop finding algorithm which I integrated into the
 application.  This feature is supposed to generate a list of start/end
 loop points which are optimal for seamless loops.

 The original algorithm was based on autocorrelation.  There were many
 bugs in the implementation and I was having trouble understanding how
 it functioned, so I wrote a new algorithm which currently does not use
 autocorrelation.  The new algorithm seems to come up with good
 candidates for seamless loops, but is slower than the old algorithm
 by a factor of 5; at least it does not suffer from the bugs, which
 often resulted in unpredictable results.

 I have limited knowledge in the area of DSP, so I thought I would seek
 advice on this list to determine if the new algorithm makes sense, if
 anyone has any ideas on ways to optimize it or if there are better
 ways of tackling this task.

 First off, is autocorrelation even a solution for this?  That is what
 the old algorithm used and it seemed to me that sample points closer
 to the loop start or end should be given higher priority in the
 quality calculation, which was not done in that case.

 Inputs to the algorithm:
 float sample_data[]: Audio data array of floating point samples
 normalized to -1.0 to 1.0
 int analysis_size: Size of analysis window (number of points compared
 around loop points, the loop point is in the center of the window).
 int half_analysis_size = analysis_size / 2 which is the center point
 of the window (only really center on odd analysis_size values)
 int win1start: Offset in sample_data[] of 1st search window (for loop
 start point).
 int win1size: Size of 1st search window (for loop start point).
 int win2start: Offset in sample_data[] of 2nd search window (for loop
 end point).
 int win2size: Size of 2nd search window (for loop end point).


 Description of the new algorithm:
 A multiplication window array of floats (lets call it
 analysis_window[]) is created which is analysis_size in length and
 contains a peak value in the center of the window, each point away
 from the center is half the value of its closer neighbor and all
 values in the window add up to 0.5.  0.5 was chosen because the
 maximum difference between two sample points is 2 (1 - -1 = 2), so
 this results in a maximum quality value of 1.0 (worst quality).

 The two search windows are exhaustively compared with two loops, one
 embedded in the other.  For each loop start/end candidate a quality
 factor is calculated.  The quality value is calculated from the sum of
 the absolute differences of the sample points surrounding the loop
 points (analysis window size) multiplied individually by values in the
 analysis_window[] array.

 C code:

  /* Calculate fraction divisor */
  for (i = 0, fract = 0, pow2 = 1; i <= half_window; i++, pow2 *= 2)
  {
   fract += pow2;
   if (i < half_window) fract += pow2;
  }

  /* Even windows are asymmetrical, subtract 1 */
  if 

Re: [music-dsp] Is beating the same thing as flanging?

2010-11-19 Thread Alan Wolfe
i fear to post a question being the OP of this huge 100+ message thread but...

it was mentioned here and in a previous email that for digital
flangers you want to interpolate between samples for best results.

Would you want to do this for all sampling digital effects such as
delay and reverb too?  Or is flanger special because it's dealing with
usually a small offset in the samples, so interpolation becomes more
important to fake a higher resolution signal?

Thanks!
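
For context, the fractional (interpolating) read being discussed looks
something like this with linear interpolation (names are mine):

#include <vector>

float readFractional(const std::vector<float>& buf, int writeIndex,
                     float delaySamples)  // delaySamples < buf.size()
{
    int   whole = (int)delaySamples;
    float frac  = delaySamples - whole;
    int   size  = (int)buf.size();
    int   i0 = (writeIndex - whole + size) % size;  // newer sample
    int   i1 = (i0 - 1 + size) % size;              // one sample older
    return buf[i0] + frac * (buf[i1] - buf[i0]);    // blend between them
}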

On Fri, Nov 19, 2010 at 12:36 PM, Peter Schoffhauzer sco...@inf.elte.hu wrote:
 Nigel Redmon wrote:

 Sure, there is a minimum delay when resampling, but there's no reason for
 any delay to be quantized to 1/Fs multiples, nor for it to be burdensome
 computationally.

 I suppose delays that are not quantized to 1/Fs are what we call
 'interpolating' or 'fractional' delays, which are easily realizable in any
 digital system, and are usually used in flangers.

 So I guess the cause for differences between analog/digital sound lie
 somewhere else.

 Peter
