Re: [music-dsp] Time-variant 2nd-order sinusoidal resonator

2019-02-21 Thread Phil Burk
Another approach is to use a Taylor Expansion. It's pretty accurate in the
first quadrant. One advantage over the resonator is that it does not drift.
Another advantage is that you can do FM without paying the penalty of
recalculating the coefficients.

Here is some free Java source.

https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SineOscillator.java

Phil Burk


On Wed, Feb 20, 2019, 4:12 PM robert bristow-johnson <
r...@audioimagination.com> wrote:

> personally, i think that phase accumulator and wavetable lookup and
> intersample interpolation is the best way to do a time-varying sinusoidal
> oscillator,
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] OT List Reply To

2018-10-25 Thread Phil Burk
Hmm. For me the "reply-to" in the original email header is set to
"music-dsp@music.columbia.edu".
If I hit Reply it goes to the list.

I recall being annoyed by Reply going to the person and not the list. Why
am I seeing something different?  I am using GMail web client.

And BTW, I set my default response in GMail to "Reply-All" instead of
"Reply" in preferences. That is what I usually want to do.

Phil Burk

On Wed, Oct 24, 2018 at 8:54 PM Vladimir Pantelic 
wrote:

> 1) http://www.unicom.com/pw/reply-to-harmful.html
>
> 2) http://marc.merlins.org/netrants/reply-to-useful.html
>
> 3) http://marc.merlins.org/netrants/reply-to-still-harmful.html
>
> 4) tbd :)
>
> personally I'm in the 2) camp :)
>
> On 24.10.2018 02:50, gm wrote:
> > It's quite a nuisance that the lists reply to is set to the person who
> > wrote the mail
> > and not to the list adress

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Phil Burk
On Sun, Aug 5, 2018 at 4:27 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

i, personally, would rather see a consistent method used throughout the
> MIDI keyboard range; high notes or low.  it's hard to gracefully transition
> from one method to a totally different method while the note sweeps.  like
> what if portamento is turned on?  the only way to clicklessly jump from
> wavetable to a "naive" sawtooth would be to crossfade.  but crossfading to
> a wavetable richer in harmonics is already built in.


Yes. I crossfade between two adjacent wavetables. It is just at the bottom
that I switch to the "naive sawtooth". I want to be able to sweep the
frequency through zero to negative frequency. So I need a signal near zero.
But as I get closer to zero I need an infinite number of octaves. So the
region near zero has to be handled differently anyway.


> and what if the "classic" waveform wasn't a saw but something else?  more
> general?


I only use the MultiTable for the Sawtooth. Then I generate Square and
Pulse from two Sawteeth.
Also note that for the octave between Nyquist and Nyquist/2 that I use a
table with a pure sine wave. If I added a harmonic in that range then it
would be above the Nyquist.
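As a sketch of the two-sawtooth trick (illustrative C, not the JSyn code): with a sawtooth phasor saw(p) = p on [-1, 1), subtracting a copy offset by twice the pulse width yields a two-level pulse wave, and width 0.5 gives a square. The same identity applies when both sawteeth come from the band-limited tables.

```c
/* naive sawtooth, phase in [-1, 1) */
double saw(double phase)
{
    return phase;
}

/* wrap a phase back into [-1, 1) */
double wrap_phase(double p)
{
    if (p >= 1.0) p -= 2.0;
    return p;
}

/* pulse wave from two sawteeth; width in (0, 1) is the fraction of the
 * period spent at the high level, width = 0.5 gives a square wave */
double pulse(double phase, double width)
{
    return saw(phase) - saw(wrap_phase(phase + 2.0 * width));
}
```

The two output levels are -2*width and 2 - 2*width, so the waveform has zero mean for any width.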

Phil Burk

Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Phil Burk
On Sat, Aug 4, 2018 at 10:53 AM, Nigel Redmon 
wrote:
>
> With a full-bandwidth saw, though, the brightness is constant. That takes
> more like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048
> or 4096 are good choices (for both noise and harmonics).
>
As I change frequencies above 86 Hz, I interpolate between wavetables
with 1024 samples. For lower frequencies I interpolate between a bright
wavetable and a pure sawtooth phasor that is not band limited. That way I
can use the same oscillator as an LFO.

https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java#L167
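A sketch of that blend (hypothetical names; the linked MultiTable code is the real implementation): below the crossover frequency the output fades from the brightest band-limited table toward the raw phasor, reaching the pure naive sawtooth at 0 Hz; taking the absolute value of the frequency lets the sweep pass through zero into negative rates.

```c
#include <math.h>

/* Blend between the brightest band-limited wavetable output and a naive
 * sawtooth phasor at low frequencies. The table lookup is abstracted
 * as the tableValue parameter. */
double blend_low_freq(double phase,       /* naive phasor in [-1, 1) */
                      double freqHz,      /* may be negative */
                      double crossoverHz, /* e.g. 86.0, per the message */
                      double tableValue)  /* band-limited table output */
{
    double f = fabs(freqHz);              /* through-zero sweeps allowed */
    if (f >= crossoverHz) return tableValue;
    double mix = f / crossoverHz;         /* 0 at DC -> pure naive saw */
    return mix * tableValue + (1.0 - mix) * phase;
}
```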

Phil Burk

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread Phil Burk
Hello Kevin,

There are some antialiased oscillators in JSyn (Java) that might interest
you.

Harmonic table approach
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorBL.java

Square generated from Sawtooth
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SquareOscillatorBL.java

Differentiated Parabolic Waveform - Simpler, faster and almost as clean.
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorDPW.java

Phil Burk

On Fri, Aug 3, 2018 at 2:23 PM, Kevin Chi  wrote:

> Hi,
>
> Is there such a thing as today's standard for softSynth antialiased
> oscillators?
>
> I was looking up PolyBLEP oscillators, and was wondering how it would
> relate
> to a 1-2 waveTables per octave based oscillator or maybe to some other
> algos.
>
> thanks for any ideas and recommendations in advance,
> Kevin

Re: [music-dsp] WSOLA

2018-05-27 Thread Phil Burk
Hello Alex,

The period is 1 / frequency. I think what was confusing is you said that
the pitch was 5 milliseconds. Pitch is normally described either in Hertz
or in semitones.

Also the buffer size that you are referring to is a single buffer used for
reading or writing the audio data. That is not the total size of the
sample. The buffer size used for reading and writing should not affect the
signal processing algorithm because you are basically processing one sample
at a time anyway. You can collect the samples into blocks of data if you
need to, but that is independent of the input/output buffer size.

Most smartphones, including Android devices, should be fast enough to
implement this algorithm. You might want to start with just reading
a WAV file in, processing the data, then writing a WAV file out. Separate
the reading and writing of the file from the processing algorithm. Then
when you have it working you can just port it to Android.
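A minimal C sketch of that decoupling (illustrative, with an assumed block size): incoming I/O chunks of any length are accumulated into a fixed-size analysis block, and the algorithm runs once per full block.

```c
#include <string.h>

#define BLOCK_SIZE 1024   /* assumed internal analysis block size */

typedef struct {
    float block[BLOCK_SIZE];
    int   filled;
    /* the WSOLA-style algorithm; may be NULL when only accumulating */
    void (*process)(const float *block, int n);
} BlockCollector;

/* Push an I/O chunk of any size (e.g. the 240-sample Android buffer);
 * the processing callback fires each time a full block is ready. */
void collector_push(BlockCollector *c, const float *in, int numIn)
{
    while (numIn > 0) {
        int space = BLOCK_SIZE - c->filled;
        int n = numIn < space ? numIn : space;
        memcpy(c->block + c->filled, in, n * sizeof(float));
        c->filled += n;
        in += n;
        numIn -= n;
        if (c->filled == BLOCK_SIZE) {     /* full block ready */
            if (c->process) c->process(c->block, BLOCK_SIZE);
            c->filled = 0;
        }
    }
}
```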

If you have Android specific questions about the Android APIs then please
use the Android mailing list. If you have mathematical questions about DSP
then this is a better mailing list.

Phil Burk

On Sun, May 27, 2018, 6:27 AM Alex Dashevski <alexd...@gmail.com> wrote:

> Hi,
> I mean that the fundamental frequency is between 50Hz and 450Hz. Right?
> Why isn't the period of pitch equal to 1/fundamental frequency?
>
> What about subsampling? That means that processing will be done
> at 8 kHz.
>
> What about pitch shifting?
>
> How can I prove to my instructor that I can't implement WSOLA?
>
> I have already asked this question on the Android NDK group but they
> referred me to this forum.
>
> Thanks,
> Alex
>
>
> On Sun, May 27, 2018, 02:51 robert bristow-johnson <
> r...@audioimagination.com> wrote:
>
>> On 5/25/18 2:06 PM, Alex Dashevski wrote:
>> >
>> > I want to implement WSOLA on Real Time.
>> > The pitch is between 5ms and 20ms.
>> do you mean the *period* is between 5 ms and 20 ms?  or that the
>> fundamental frequency is between 50 Hz and 200 Hz?  this appears to be a
>> bass instrument
>>
>> > Frequency samples of the system is 48Khz
>> > Buffer size has 240 sample.
>>
>> that's not long enough.  you will never be able to even do the necessary
>> pitch detection with a buffer that small.  (unless you mean the
>> input/output buffer of the android, then that is plenty long.)
>>
>> > I want to implement it on android.
>>
>> then you should have no problem securing a megabyte of memory.
>>
>> > My issue is that my buffer is smaller than pitch,
>>
>> it's the *period*.  pitch is not measured in ms.
>>
>> > I can't understand how I can implement WSOLA.
>>
>> you can't unless you can allocate more memory.  that's a programming
>> issue with the android.
>>
>>
>>
>> --
>>
>> r b-j  r...@audioimagination.com
>>
>> "Imagination is more important than knowledge."
>>
>>
>>

Re: [music-dsp] Build waveform sample array from array of harmonic strengths?

2018-04-15 Thread Phil Burk
If you are looking for a way to generate band-limited oscillators using
octave based tables then here is an implementation in Java for JSyn:

https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorBL.java
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SquareOscillatorBL.java

It windows the higher order partials to reduce the Gibbs Effect.

I can smoothly ramp the frequency and do not hear any abrupt transitions.

Phil Burk


On Sun, Apr 15, 2018 at 11:55 AM, Frank Sheeran <fshee...@gmail.com> wrote:

> I'm currently just looping and calling sin() a lot.  I use trivial 4-way
> symmetry of sin() and build a "mipmap" of progressively octave-higher
> versions of a wave, to play for higher notes, by copying samples off the
> lowest-frequency waveform.  That still is only 8x faster than the naive way
> to do it.
>
> I know in theory that an FFT or DFT will turn a CONTINUOUS graph of
> frequency into a graph of time, and vice versa, but if I don't have a
> continuous graph of frequency but rather an array of strengths, can I still
> use it?
>
> I thought of making a continuous graph of frequency from my harmonics, but
> 1) sounds quite imprecise and 2) I note real FFT graphs have smooth "hills"
> where harmonics are, rather than point peaks, and am wondering whether I'd
> get expected output if I didn't generate those hills.
>

Re: [music-dsp] band-limited website

2017-12-23 Thread Phil Burk
I was not looking for the mail list archives.
We used to have a music-dsp code archive with lots of example DSP code.
It was under this website:

http://www.musicdsp.org/

That website is now bandwidth limited, which for a website is a bad
thing. The site will not open.

Is anyone else seeing that error?

> You need to sample it more frequently.

Thanks James. That's sound advice.


On Sat, Dec 23, 2017 at 2:49 PM, James McCartney <asy...@gmail.com> wrote:
> You need to sample it more frequently.
>
> On Sat, Dec 23, 2017 at 1:06 PM, Phil Burk <philb...@mobileer.com> wrote:
>>
>> I tried to access the Archive at http://musicdsp.org/archive.php
>> and got this message:
>>
>> Bandwidth Limit Exceeded
>>
>> The server is temporarily unable to service your request due to the
>> site owner reaching his/her bandwidth limit. Please try again later.
>
>
>
> --
> --- james mccartney



[music-dsp] band-limited website

2017-12-23 Thread Phil Burk
I tried to access the Archive at http://musicdsp.org/archive.php
and got this message:

Bandwidth Limit Exceeded

The server is temporarily unable to service your request due to the
site owner reaching his/her bandwidth limit. Please try again later.



Re: [music-dsp] Music software interface design

2017-04-16 Thread Phil Burk
Hello Arthur,

I really like the ideas on your website. I agree that the area where we
need to work is in the UI. We have powerful CPUs and a big toolkit of
synthesis techniques. Now we need a better way of interacting with these
tools.  I will rethink my knob arrays.

I use breakpoint envelopes in JSyn. I may add the voice input to my
envelope editor. I find Java is a good place for prototyping UI and has
plenty of performance.

I notice that your examples did not show any sharp attacks. Probably
because the voice takes a while to build. I think that combining a clap
with a vocalization might be fun to try.

I will contact you off list.

Thanks,
Phil Burk

On Fri, Apr 14, 2017 at 8:48 AM, Arthur Carabott <arth...@gmail.com> wrote:

> Hello all,
>
> I've been doing some work on re-designing the interactions / interfaces
> for music software. The focus isn't on the DSP, more on how we can better
> interact with it. That said, there are some engineering implications
> (particularly with the first prototype).
>
> Hope you enjoy! http://arthurcarabott.com/mui/
>
> If the work interests you feel free to mail me off list as well.
>
> Best,
>
> Arthur
>
> www.arthurcarabott.com
>

[music-dsp] musicdsp.org site down

2017-03-31 Thread Phil Burk
Can't load http://musicdsp.org/

It says "Bandwidth Limit Exceeded".
DoS?

Phil Burk

Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-16 Thread Phil Burk
Hello Andre,

If you are interested in source code, here is a multi-wavetable
implementation of a band-limited sawtooth oscillator. It windows the
partials to avoid the Gibbs Effect.


https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorBL.java

You can combine two sawteeth waveforms to get square or rectangular
waveforms.


https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SquareOscillatorBL.java

Another technique is the Differentiated Parabolic Waveform, DPW. It is much
easier but sounds almost as good as the above.


https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorDPW.java
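The DPW idea fits in a few lines; here is a hedged C sketch (not the JSyn code): square the naive phasor to get a parabolic wave, then take the first difference and rescale by fs/(4f) so the slope matches a sawtooth.

```c
typedef struct {
    double phase;   /* naive sawtooth phasor in [-1, 1) */
    double z1;      /* previous squared phase */
} DpwSaw;

/* One sample of a DPW sawtooth; normFreq = frequency / sampleRate. */
double dpw_saw_next(DpwSaw *s, double normFreq)
{
    s->phase += 2.0 * normFreq;
    if (s->phase >= 1.0) s->phase -= 2.0;
    double squared = s->phase * s->phase;       /* parabolic waveform */
    /* differentiate, then scale by fs/(4f) = 1/(4 * normFreq) */
    double out = (squared - s->z1) / (4.0 * normFreq);
    s->z1 = squared;
    return out;
}
```

The division by normFreq means the scheme needs special handling as the frequency approaches zero.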

Phil Burk

Re: [music-dsp] Choosing the right DSP, what things to look out for?

2016-08-24 Thread Phil Burk
Another consideration is whether the DSP supports real hardware
floating-point vs fixed-point arithmetic.

If you are doing a lot of low level stuff like filters then fixed-point may
be fine. But if you are more comfortable with higher level full range
floating point then you may prefer a DSP that has real hardware floats.

Development time may be shorter with real floats. But sometimes product cost
may be lower with a fixed-point DSP.

SHARC supports both float and fixed-point.
Blackfin only has fixed-point hardware and does floats in software.

This article discusses some tradeoffs float vs fixed:

  http://www.eetimes.com/document.asp?doc_id=1275364

Phil Burk

Re: [music-dsp] is our favorite mailing list still happenin'?

2016-08-24 Thread Phil Burk
rb-j wrote:
> mailing list signup page (lacerating gossip lids) is not to be found
either.

I found this:
   https://lists.columbia.edu/mailman/listinfo/music-dsp

gm wrote:
> the archive is here https://lists.columbia.edu/pipermail/music-dsp/

That archive only goes back to July 2015. Is the old archive still
accessible? Lots of great stuff in there, going back to the 90's I believe.

Archives pre-August 2015 are supposed to be here but are missing.
http://music.columbia.edu/pipermail/music-dsp

It would be great to have an alternate backup for the list archive.

Phil Burk

Re: [music-dsp] highly optimised variable rms and more

2016-07-17 Thread Phil Burk
Hello Tito,

I may be misinterpreting your code. But it looks like there is a potential
buffer overflow.

I assume your buffer indices can go from 0 to max_size-1
If size == max_size then i should be between 0 and size-1

But in the code below, if i is size-1 then it can be incremented to size.
That might be one past the end of the array.

Maybe this code:
i = (i >= size ? 0 : i + 1);
should be:
   i++;
   if (i >= size) i = 0;
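Putting the corrected wrap into a complete running-sum sketch (illustrative, assuming indices run from 0 to size-1 as discussed above):

```c
#define MAX_SIZE 8   /* assumed maximum window length */

typedef struct {
    double buffer[MAX_SIZE];
    double sum;
    int    i;
    int    size;     /* window length, size <= MAX_SIZE */
} MovingSum;

/* Subtract the oldest sample, add the new one, then wrap the index
 * before it can ever reach size. */
double moving_sum(MovingSum *m, double input)
{
    m->sum += input - m->buffer[m->i];
    m->buffer[m->i] = input;
    m->i++;
    if (m->i >= m->size) m->i = 0;
    return m->sum;
}
```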

Phil Burk

On Fri, Jul 15, 2016 at 4:20 AM, Tito Latini <tito.01b...@gmail.com> wrote:
>>
>> There is a buffer with an index (if you prefer, a delay line with length
>> max_size). "size" represents the number of the values to sum and it is
>> less than or equal to max_size. The core of the algo is:
>>
>> /* Subtract the oldest, add the new and update the index. */
>> sum = sum - buffer[i] + input;
>> buffer[i] = input;
>> i = (i >= size ? 0 : i + 1);
>>
>

Re: [music-dsp] Trouble Implementing Huggins Binaural Pitch

2016-06-25 Thread Phil Burk
It sounds noisier in bursts that probably correspond to the delayed
sections. But I do not hear any tones.

I tried trimming the bass and treble using an EQ, to mimic the other online
example but no luck.

I'm out of ideas.


On Sat, Jun 25, 2016 at 3:13 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:

> I tried another experiment.  I can "kinda" hear a tone, in that the white
> noise sounds a bit more tonal.
>
> Is this what the effect sounds like?  As you point out, it does a filter,
> but it still seems like I'm not getting the actual effect.
>
> What do you think?
>
> http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise2.wav
>
> I did 16 notes.  In hertz below:
> 200
> 0
> 400
> 0
> 300
> 0
> 800
> 0
> 800
> 0
> 300
> 0
> 400
> 0
> 200
> 0
>
>
> On Sat, Jun 25, 2016 at 2:43 PM, Phil Burk <philb...@mobileer.com> wrote:
>
>> Hello Alan,
>>
>> Your WAV file looks like it has the 220 sample offset. But I do not hear
>> a 200 Hz tone.
>>
>> I can hear tones faintly in the example here:
>> http://www.srmathias.com/huggins-pitch/
>>
>> 200 Hz seems like a low frequency. You might have better luck with
>> frequencies around 600 like in the Python example.
>>
>> The Python also uses a bandpass filter to make the sound less harsh.
>>
>> Also I did not hear the tone until I noticed the melody. Try playing a
>> simple melody or scale.
>>
>> Phil Burk
>>
>>
>> On Sat, Jun 25, 2016 at 2:08 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:
>>
>>> Hey Guys,
>>>
>>> I'm trying to make an implementation of the Huggins Binaural Pitch
>>> illusion, which is where if you play whitenoise into each ear, but offset
>>> one ear by a period T that it will create the illusion of a tone of 1/T.
>>>
>>> Unfortunately when I try this, I don't hear any tone.
>>>
>>> I've found a python implementation at
>>> http://www.srmathias.com/huggins-pitch/, but unfortunately I don't know
>>> python (I'm a C++ guy) and while I see that this person is doing some extra
>>> filtering work and other things, it's hard to pick apart which extra work
>>> may be required versus just dressing.
>>>
>>> Here is a 3 second wav file that I've made:
>>>
>>> http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise.wav
>>>
>>> The first 1.5 seconds is white noise. The second half of the sound has
>>> the right ear shifted forward 220 samples. The sound file has a sample rate
>>> of 44100, so that 220 sample offset corresponds to a period of 0.005
>>> seconds aka 5 milliseconds aka 200hz.
>>>
>>> I don't hear a 200hz tone though.
>>>
>>> Can anyone tell me where I'm going wrong?
>>>
>>> The 160 line single file standalone (no libs/non standard headers etc)
>>> c++ code is here:
>>> http://pastebin.com/ZCd0wjW1
>>>
>>> Thanks for any insight anyone can provide!
>>>

Re: [music-dsp] Trouble Implementing Huggins Binaural Pitch

2016-06-25 Thread Phil Burk
Hello Alan,

Your WAV file looks like it has the 220 sample offset. But I do not hear a
200 Hz tone.

I can hear tones faintly in the example here:
http://www.srmathias.com/huggins-pitch/

200 Hz seems like a low frequency. You might have better luck with
frequencies around 600 like in the Python example.

The Python also uses a bandpass filter to make the sound less harsh.

Also I did not hear the tone until I noticed the melody. Try playing a
simple melody or scale.
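A minimal C sketch of the stimulus being discussed (assumed parameters; the right-channel delay equals one period of the target tone, which is the Huggins construction):

```c
#include <stdlib.h>

/* Fill a stereo buffer with Huggins-pitch noise: identical white noise
 * in both ears except the right channel is delayed by one period
 * (sampleRate / toneHz samples). Frequencies around 600 Hz are
 * reportedly easier to hear, per the discussion above. */
void huggins_noise(float *left, float *right, int numFrames,
                   int sampleRate, double toneHz)
{
    int delay = (int)(sampleRate / toneHz + 0.5); /* period in samples */
    for (int n = 0; n < numFrames; n++) {
        left[n] = (float)(2.0 * rand() / RAND_MAX - 1.0); /* white noise */
        /* right ear: same noise delayed; the first `delay` frames fall
         * back to fresh noise so the buffer is fully filled */
        right[n] = (n >= delay) ? left[n - delay]
                                : (float)(2.0 * rand() / RAND_MAX - 1.0);
    }
}
```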

Phil Burk


On Sat, Jun 25, 2016 at 2:08 PM, Alan Wolfe <alan.wo...@gmail.com> wrote:

> Hey Guys,
>
> I'm trying to make an implementation of the Huggins Binaural Pitch
> illusion, which is where if you play whitenoise into each ear, but offset
> one ear by a period T that it will create the illusion of a tone of 1/T.
>
> Unfortunately when I try this, I don't hear any tone.
>
> I've found a python implementation at
> http://www.srmathias.com/huggins-pitch/, but unfortunately I don't know
> python (I'm a C++ guy) and while I see that this person is doing some extra
> filtering work and other things, it's hard to pick apart which extra work
> may be required versus just dressing.
>
> Here is a 3 second wav file that I've made:
>
> http://blog.demofox.org/wp-content/uploads/2016/06/stereonoise.wav
>
> The first 1.5 seconds is white noise. The second half of the sound has the
> right ear shifted forward 220 samples. The sound file has a sample rate of
> 44100, so that 220 sample offset corresponds to a period of 0.005 seconds
> aka 5 milliseconds aka 200hz.
>
> I don't hear a 200hz tone though.
>
> Can anyone tell me where I'm going wrong?
>
> The 160 line single file standalone (no libs/non standard headers etc) c++
> code is here:
> http://pastebin.com/ZCd0wjW1
>
> Thanks for any insight anyone can provide!
>

Re: [music-dsp] up to 11

2016-06-23 Thread Phil Burk
Hello Ethan,

People seem to be interpreting your question in two ways.

Are you trying to:
1) generate an arbitrary sound that has maximum perceived loudness OR
2) maximize the loudness of spoken voice?

I assumed (1). It would be interesting to generate a bunch of candidate
files then A/B compare them to see which is loudest.

My approach would be:
1) generate white noise
2) bandpass filter at 3.4KHz to match region of max perceived loudness
3) squeeze peaks using an atan limiter so we don't have any big peaks
sticking out
4) normalize to maximum peak amplitude

Phil Burk



On Thu, Jun 23, 2016 at 2:19 AM, Andy Farnell <padawa...@obiwannabe.co.uk>
wrote:

> On Wed, Jun 22, 2016 at 01:40:45PM -0700, Duino wrote:
> >
> > This is an old problem, since the 70s, in SSB transmission.
> > Specifically driven by 'hams' that want to be heard around the world.
> > There have been excellent analog solutions since the late 80s.
> > In order to be heard, you need to be loud in the receiving end, and in
>
> Presumably optimised for speech. When I worked in broadcast
> audio we had a rack with "optimod" boxes at the BBC, but IIRC
> they were full of clever stuff to be adaptive to programme
> material.
>
> The whole loudness war thing really came alive for me after
> seeing an AES lecture by mastering engineer Darcy Proper.
> She showed how multiband dynamic maximisers do not "add"
> when chained at different stages in a process, but rather
> they lead to really counter-intuitive non-linear behaviour
> that defeats and even reverses the aim of making the signal
> louder and the intentions of the artist.
>
> best,
> Andy
>

Re: [music-dsp] Android related audio group / mailing list?

2016-06-05 Thread Phil Burk
You may be seeing variations in CPU clock speed. That can lead to some
puzzling benchmark results. I gave more detail in my answer on the Android
list.

By the way, one way to prevent overly aggressive optimization is to
generate a checksum of the results of your benchmark and then print that
number. This will prevent a smart compiler from optimizing away all the
work of your benchmark.
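The pattern looks like this (illustrative C; the kernel body stands in for the real benchmark work):

```c
#include <stdio.h>

/* Fold every result into a checksum so the compiler cannot prove the
 * work is unused and delete the whole loop. */
double benchmark_kernel(int iterations)
{
    double checksum = 0.0;
    for (int i = 0; i < iterations; i++) {
        double result = i * 0.5 + 1.0;   /* stand-in for real DSP work */
        checksum += result;              /* keep every result observable */
    }
    return checksum;
}

/* caller: printf("checksum = %f\n", benchmark_kernel(1000000)); */
```

Printing the checksum at the end makes the result an observable side effect, which is what blocks the dead-code elimination.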

Re: [music-dsp] looking for iPhone, Android app developer

2016-05-29 Thread Phil Burk
Hi Robert,

> developing audio in and audio out apps for *both* iPhone and Android
> platforms (and if you know how to make apps for both, have as much of a
> common code base as possible),

I'm not available to join the team. But I suggest that they look at using
JUCE, which supports writing audio apps that run on iOS, and Android, and
as a VST plugin. (I don't work for JUCE.)

Phil Burk

Re: [music-dsp] Android related audio group / mailing list?

2016-05-25 Thread Phil Burk
Hello Nuno,
I just joined https://groups.google.com/forum/#!forum/andraudio
I think that is a good place for discussion.
If you repost your questions there then I will try to answer them.
Phil Burk


On Wed, May 25, 2016 at 6:15 AM, Nuno Santos <nunosan...@imaginando.pt>
wrote:

> I have been testing DRC on 2 different Android devices: a Nexus 9
> (48000 Hz, 128-frame buffer) and a Bq Aquaris M5 phone (48000 Hz,
> 912-frame buffer).
>
> Nexus 9 is a beast. I was able to run DRC with full polyphony (8 voices)
> without any kind of glitch. The fake touches hack was essential to make
> this happen. Without the fake touches hack I would hear glitches.
> Bq Aquarius M5 has a very similar processor to Nexus 6P and I couldn’t
> have more than 2 voices running without having some glitches.
>
> All DSP code is C++. Reverb, Delay and Chorus represent half of the
> processing effort the rest is for active voices.
> From my experiences I couldn’t see any kind of effect in performance by
> compiling the code with NEON enabled. For example, on my Bq phone for a 912
> buffer size at 48000 my processing code would take the following time with
> the following flags enabled:
>
> -Os - ~5ms
> -O3 - ~5ms
> -Ofast - ~5ms
> -Ofast -mfpu=neon - ~5ms
>
> No significant changes in performance with different flags. What kind of
> flags are you guys using for your Android apps?
>
> I don’t need to worry with this on iOS though. It simply works!
>
> Regards,
>
> Nuno Santos
> Founder / CEO / CTO
> www.imaginando.pt
> +351 91 621 69 62
>
> On 25 May 2016, at 12:24, Jean-Baptiste Thiebaut <jean-bapti...@roli.com>
> wrote:
>
> At JUCE / ROLI we've been working with Google for over a year to optimize
> audio latency, throttle, etc for cross platform apps. Our app is featured
> also in the Youtube video from Google IO, and it runs on some devices with
> performances comparable to iOS.
>
> Whether you are using JUCE or not, you're welcome to post on our forum (
> forum.juce.com).
>
> Sent from my mobile
>
> On 25 May 2016, at 11:57, grh <g...@mur.at> wrote:
>
> Hallo!
>
> Thanks Nuno, it was a great demo ;)
>
> LG
> Georg
>
> On 2016-05-25 12:46, Nuno Santos wrote:
> Hi George,
>
> I would be interested in such a community as well. Especially regarding
> audio performance. We have recently released DRC (one of the apps that
> has been featured on Google I/O Android High Performance Audio) and we
> are mostly interested in squeezing performance out of it. It is
> incredible the performance differences between iOS and Android. The DSP
> code is shared among both and I still have glitch problems in Android
> powerful devices.
>
> One option could be creating a slack channel.
>
> Regards,
>
> Nuno Santos
> Founder / CEO / CTO
> www.imaginando.pt
> +351 91 621 69 62
>
> On 25 May 2016, at 11:36, grh <g...@mur.at> wrote:
>
> Hallo music-dsp list!
>
> Sorry for being off topic, but does someone know an active discussion
> group / mailing list about android audio?
> (There is quite a lot of progress lately, see for example [1])
> 5 years ago a list was announced here [2], which does not seem to be
> active anymore ...
>
> We just created a simple android audio editor [3] and would be very much
> interested in a discussion of common infrastructure like audio
> plugins/effects (like SAPA from Samsung) or copy/paste between audio apps.
> I think that would be important for the audio ecosystem on Android.
>
> Thanks for any hints,
> LG
> Georg
>
> [1]: https://www.youtube.com/watch?v=F2ZDp-eNrh4
> [2]:
>
> http://music-dsp.music.columbia.narkive.com/zbYgicxy/new-android-audio-developers-mailing-list
> [3]:
> https://play.google.com/store/apps/details?id=com.auphonic.auphonicrecorder
>
> --
> auphonic - audio post production software and web service
> http://auphonic.com
>
>
>
>
>
>
> --
> auphonic - audio post production software and web service
> http://auphonic.com
>
>

Re: [music-dsp] confirm a2ab2276c83b0f9c59752d823250447ab4b666

2016-03-29 Thread Phil Burk
I suspect the mailing list is generating these based on bounces from mail
servers. If a human troublemaker was doing it then they could have used the
confirmation code in the emails that get posted to finalize removal. I
don't think that has happened.


On Tue, Mar 29, 2016 at 10:40 AM, Ethan Duni  wrote:

> Supposing this is some griefer it seems reasonable to ignore them - but is
> there a possibility that this is a symptom of some kind of server attack or
> attempt to profile/track list members?
>
> I've never received any unsub notices myself but it is a little
> disconcerting that somebody persists at doing this. I'd think that a
> griefer would give up after a while.
>
> E
>
>
> On Tue, Mar 29, 2016 at 7:13 AM, Douglas Repetto <
> doug...@music.columbia.edu> wrote:
>
>> I get reports about this every couple weeks. Because it's a double
>> opt-out no one is actually being unsubscribed from the list unless they
>> want to be. So please ignore these bogus unsub messages. It's not worth
>> spending time worrying about it.
>>
>> douglas
>>
>>
>> On Mon, Mar 28, 2016 at 6:37 PM, Evan Balster  wrote:
>>
>>> This happened to me also, but I didn't give it much thought.
>>>
>>>
>>> On Mon, Mar 28, 2016 at 4:31 PM, robert bristow-johnson <
>>> r...@audioimagination.com> wrote:
>>>


 hmmm.  i wonder if someone is trying to tell me something



  Original Message
 
 Subject: confirm a2ab2276c83b0f9c59752d823250447ab4b666
 From: music-dsp-requ...@music.columbia.edu
 Date: Mon, March 28, 2016 2:31 pm
 To: r...@audioimagination.com

 --

 > Mailing list removal confirmation notice for mailing list music-dsp
 >
 > We have received a request for the removal of your email address,
 > "r...@audioimagination.com" from the music-dsp@music.columbia.edu
 > mailing list. To confirm that you want to be removed from this
 > mailing list, simply reply to this message, keeping the Subject:
 > header intact. Or visit this web page:
 >
 >
 https://lists.columbia.edu/mailman/confirm/music-dsp/a2ab2276c83b0f9c59752d823250447ab4b666
 >
 >
 > Or include the following line -- and only the following line -- in a
 > message to music-dsp-requ...@music.columbia.edu:
 >
 > confirm a2ab2276c83b0f9c59752d823250447ab4b666
 >
 > Note that simply sending a `reply' to this message should work from
 > most mail readers, since that usually leaves the Subject: line in the
 > right form (additional "Re:" text in the Subject: is okay).
 >
 > If you do not wish to be removed from this list, please simply
 > disregard this message. If you think you are being maliciously
 > removed from the list, or have any other questions, send them to
 > music-dsp-ow...@music.columbia.edu.
 >
 >

 i think *someone* is being a wee bit malicious.  or at least a bit
 mischievous.

 (BTW, i changed the number enough that i doubt it will work for anyone.
  but try it, if you want.)





  Original Message
 
 Subject: Re: [music-dsp] Changing Biquad filter coefficients
 on-the-fly, how to handle filter state?
 From: "vadim.zavalishin" 
 Date: Mon, March 28, 2016 2:20 pm
 To: r...@audioimagination.com
 music-dsp@music.columbia.edu

 --

 > robert bristow-johnson писал 2016-03-28 17:57:
 >> using the trapezoid rule to model/approximate the integrator of an
 >> analog filter is no different than applying bilinear transform
 >> (without compensation for frequency warping) to the same integrator.
 >>
 >> s^(-1) <--- T/2 * (1 + z^(-1)) / (1 - z^(-1))
 >
 > This statement implies the LTI case, where the concept of the transfer
 > function exists.

 i didn't say that.  i said "applying ... to the same integrator."
  about each individual "transfer function" that looks like "s^(-1)"



 > In the topic of this thread we are talking about
 > time-varying case, this means that the transfer function concept
 doesn't
 > apply anymore.



 well, there's slow time and there's fast time.  and the space between
 the two depends on how wildly one twists the knob.  while the filter
 properties are varying, we want the thing to sound like a filter (with
 properties that vary).  there *is* a concept of frequency response (which
 may vary).



 for each individual integrator you are replacing the
 continuous-time-domain equivalent of s^(-1) with the discrete-time-domain
 

Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-01 Thread Phil Burk
I use biquads in JSyn. The coefficients are calculated using RBJ's
excellent biquad cookbook from the music-dsp archives.

I have found that I can recalculate and update the filter coefficients on
the fly without unpleasant artifacts. I do NOT zero out or modify the
internal state variables. I should think that setting them to zero would
sound pretty bad.

Generally the filter settings are changed gradually using a knob or driven
by an LFO or envelope.
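[Editor's sketch, not JSyn's actual implementation; class and method names are made up.] A minimal Python Direct Form 1 biquad showing the point above: set_coefficients() deliberately leaves the four state variables alone, so coefficients can change between samples without resetting the filter.

```python
class BiquadDF1:
    """Direct Form 1 biquad with hot-swappable coefficients.

    a1 and a2 are the normalized feedback coefficients (a0 divided out),
    as in the RBJ cookbook.
    """

    def __init__(self, b0, b1, b2, a1, a2):
        self.set_coefficients(b0, b1, b2, a1, a2)
        self.x1 = self.x2 = 0.0   # previous inputs
        self.y1 = self.y2 = 0.0   # previous outputs

    def set_coefficients(self, b0, b1, b2, a1, a2):
        # Deliberately does NOT touch x1, x2, y1, y2.
        self.b0, self.b1, self.b2, self.a1, self.a2 = b0, b1, b2, a1, a2

    def process(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x1, self.x2 = x, self.x1
        self.y1, self.y2 = y, self.y1
        return y
```

Because the state carries over across a coefficient change, a gradual sweep of the coefficients produces a gradual change in the response rather than a click.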

Phil Burk


On Tue, Mar 1, 2016 at 6:56 AM, Paul Stoffregen <p...@pjrc.com> wrote:

> Does anyone have any suggestions or publications or references to best
> practices for what to do with the state variables of a biquad filter when
> changing the coefficients?
>
> For a bit of background, I implement a Biquad Direct Form 1 filter in this
> audio library.  It works well.
>
> https://github.com/PaulStoffregen/Audio/blob/master/filter_biquad.cpp#L94
>
> There's a function which allows the user to change the 5 coefficients.
> Lines 94 & 95 set the 4 filter state variables (which are 16 bits, packed
> into two 32 bit integers) to zero.  I did this clear-to-zero out of an
> abundance of caution, for concern (maybe paranoia) that a stable filter
> might do something unexpected or unstable if the 4 state variables are
> initialized with non-zero values.
>
> The problem is people wish to change the coefficients in real time with as
> little audible artifact as possible between the old and new filter
> response.  Clearing the state to zero usually results in a very noticeable
> click or pop sound.
>
> https://github.com/PaulStoffregen/Audio/issues/171
>
> Am I just being overly paranoid by setting all 4 state variables to zero?
> If "bad things" could happen, are there any guidelines about how to manage
> the filter state safely, but with with as graceful a transition as possible?
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] NAMM Meetup?

2016-01-18 Thread Phil Burk
I will be at NAMM. I will be at the Android table in the Roli room off and
on. And at the MMA meetings on Sunday.

Phil Burk


On Mon, Jan 18, 2016 at 2:42 AM, Christian Luther <c...@kempermusic.com>
wrote:

> Hey everyone!
>
> who’ll be there and who’s in for a little music-dsp meetup?
>
> Cheers
> Christian
>
> P.S.: I just started a new blog, might be interesting for you guys. Have a
> look:
> http://science-of-sound.net
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] automation of parametric EQ .

2015-12-22 Thread Phil Burk
Hello,

On Mon, Dec 21, 2015 at 3:46 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:
>
> regarding MIDI 1.0 (which is what goes into MIDI files), i had noticed
> that there were some "predefined controls", like MIDI control 7 (and 39 for
> the lower-order bits) for Volume.  i just thought there might have evolved
> a common practice of some of the unassigned MIDI controls having a loose
> assignment to tone or EQ parameters.


One problem is that controller address space for MIDI is a bit limited.
There are Registered Parameters but only a couple have been defined. For
General MIDI, they ended up using SysEx for some extra controllers like
Chorus Mod Rate or Reverb Type.

  http://www.midi.org/techspecs/gm.php

> i just would have thought that by now, 30+ years later, a common
> practice would have evolved and something would have been published (and i
> could not find anything).
>

The MMA is working on a new extended protocol for music that has higher
resolution, larger address space, bidirectional query/response, etc. It is
currently called "HD Protocol" but the name will change. In HD one can
address thousands of controllers. Many of the common controllers have been
defined. Others can be defined later.

I can't go into too much detail because it is MMA confidential. But I am
trying to get the MMA to release a 10 page technical overview. If you work
for a company that is an MMA member you can get a copy of the spec and
participate in the work group. The MMA will be holding a meeting on Sunday
January 24th at NAMM and may have a public session on "HD Protocol".

  http://www.midi.org/aboutus/news/hd.php

Phil Burk
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] confirm 2692e89dd013da35bd113d6f644fdcfa865054c3

2015-11-13 Thread Phil Burk
Could we log the IP address of whoever is entering these unsub requests?
Then we can do a reverse lookup using ip2location.com
Maybe the info is in the logs at Columbia.


On Wed, Nov 11, 2015 at 8:17 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:
>
> > We've had a couple of these in the last week. The list requires a
> > confirmation email for unsub requests, so I don't think anyone has
> > actually been removed. I'm not sure there's much to be done about this
> > sort of thing.
>
> i'd like to know who's doing it.  it might be unanswerable.
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] proper scaling of an FFT

2015-11-05 Thread Phil Burk
Hello Chuck,

Thanks for the explanation.

> The correct scaling factor depends on what you want to do with it.

OK. Then I should probably allow the caller to pass an optional scaling
factor. Luckily that is easy to do in Java.

I'll keep my current defaults because they seem reasonable for some
applications.

Phil Burk
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] proper scaling of an FFT

2015-11-05 Thread Phil Burk
Hello Tito and Chris,

Thanks also for your explanations. Sorry I didn't see them before I
responded earlier.

I am surprised at the variety of approaches to scaling. It looks like not
doing any scaling is an important option.

In the latest unreleased code, callers could pass in 1.0 as scale factors.
But then I am still doing an unnecessary multiply by 1.0. So I think I will
provide an "unscaled" option for speed.

Phil

On Thu, Nov 5, 2015 at 6:07 AM, Chris Cannam <can...@all-day-breakfast.com>
wrote:

>
> On Wed, Nov 4, 2015, at 04:56 PM, Phil Burk wrote:
> > Is there a "right" way or a "wrong" way or is there just "my" way?
>
> I think in the context of a library there are two questions here -- what
> scaling should an FFT implementation use (behind the scenes in the
> functions it provides), and what scaling is appropriate for a particular
> application that makes use of an FFT implementation.
>
> Without getting into the second question, most libraries I've used
> either scale by 1 in both directions (FFTW, KissFFT) or scale the
> forward transform by 1 and the inverse by 1/M (MATLAB, NumPy).
>
> The exceptions that come to mind are:
>
>  * Apple's Accelerate framework scales the complex-complex forward
>  transform by 1, but the real-complex forward transform by 2 (that's 2,
>  not 2/M). In both cases it scales the inverse by 1/M
>
>  * Fixed-point implementations, e.g. OpenMAX, which has you specify the
>  scale factor when you call the function
>
> My preference is for scaling by 1 in both directions and leaving it up
> to the caller.
>
>
> Chris
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] proper scaling of an FFT

2015-11-04 Thread Phil Burk
What is the "right" way to scale the inputs of an FFT.

I have implemented some FFT functions in JSyn. The goal is to support
spectral analysis, processing and synthesis for music applications.

I would like to be able to measure the amplitude of various sinewave
partials in the original signal. With my current scaling, if I do an FFT of
a sine wave with amplitude 1.0 aligned with a bin then the magnitude comes
out 1.0.

magnitude = sqrt(real*real + imag*imag);

Also my FFT and IFFT are inverse functions:   x==IFFT(FFT(x))

My current scale factors are 2.0/M for FFT and 0.5 for IFFT. I am happy
with this. But I see many conflicting recommendations in the literature
that suggest I am doing it wrong.
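[Editor's sketch, not JSyn's FourierMath code.] A pure-Python check of the two claims above: with a 2/M forward scale, a bin-aligned unit-amplitude sinusoid reads back with magnitude 1.0, and a 0.5 inverse scale (applied to the raw, unnormalized inverse sum) makes the round trip exact.

```python
import cmath
import math

def dft(x, scale):
    """Plain O(M^2) DFT with an explicit scale factor."""
    M = len(x)
    return [scale * sum(x[n] * cmath.exp(-2j * math.pi * m * n / M)
                        for n in range(M)) for m in range(M)]

def idft(X, scale):
    """Plain inverse DFT (no implicit 1/M) with an explicit scale factor."""
    M = len(X)
    return [scale * sum(X[m] * cmath.exp(2j * math.pi * m * n / M)
                        for m in range(M)) for n in range(M)]

M, k = 16, 2
x = [math.cos(2 * math.pi * k * n / M) for n in range(M)]

F = dft(x, 2.0 / M)    # forward scaled by 2/M
mag = abs(F[k])        # 1.0 for a unit-amplitude, bin-aligned sinusoid
y = idft(F, 0.5)       # inverse scaled by 0.5; y matches x
```

The product of the factors works out because the raw inverse sum itself contributes a factor of M: (2/M) x (0.5) x M = 1.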

In this MatLab forum,
http://www.mathworks.com/matlabcentral/answers/15770-scaling-the-fft-and-the-ifft
they recommend  1/M and M, or 1/sqrt(M) and sqrt(M).  They say it is
important that the product of the scale factors is 1.0. But then the IFFT
is not the exact inverse of the FFT.

Is there a "right" way or a "wrong" way or is there just "my" way?

BTW, the scaling source code is on line 126 at

https://github.com/philburk/jsyn/blob/master/src/com/softsynth/math/FourierMath.java

Thanks,
Phil Burk
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Change in reply-to?

2015-08-16 Thread Phil Burk
Oddly enough, replying to the sender is the default setting for Mailman.
The music-dsp list recently moved to a new server, from music.columbia.edu
to lists.columbia.edu.
At that time, most settings reverted to the default.
Someone with admin privileges could change this setting.
Phil Burk


On Sun, Aug 16, 2015 at 1:53 PM, Nigel Redmon earle...@earlevel.com wrote:

 I noticed that, as of the past three weeks, the reply-to for messages to
 the list has change from the list to the sender. Intentional? It seems to
 make it easy to reply to the sender and miss the list.
 ___
 music-dsp mailing list
 music-dsp@music.columbia.edu
 https://lists.columbia.edu/mailman/listinfo/music-dsp

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Phil Burk
Hello Connor,

If you just want to do a quick FFT and then use the spectrum to control
synthesis, then I would recommend staying in the callback. If you are doing
overlap-add then set framesPerBuffer to half your window size and combine
the current buffer with the previous buffer to feed into the FFT.
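[Editor's sketch; a real PortAudio callback would hand over raw buffers or numpy arrays.] The half-window recombination described above, in toy Python form: each incoming block is half a window, and each full analysis window is the previous block plus the current one (50% overlap).

```python
def overlapped_windows(blocks):
    """Yield full analysis windows from half-window blocks (50% overlap)."""
    previous = None
    for current in blocks:
        if previous is not None:
            yield previous + current    # this is what gets fed to the FFT
        previous = current

halves = [[0, 1], [2, 3], [4, 5]]       # toy half-window "buffers"
wins = list(overlapped_windows(halves))
# wins == [[0, 1, 2, 3], [2, 3, 4, 5]]
```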

But if you are using the FFT to do complex analysis, or to drive a graphics
display, then that is probably too much for the callback. In that case just
set the callback pointer to NULL and use Pa_ReadStream() with a large
buffer size.

http://portaudio.com/docs/v19-doxydocs/portaudio_8h.html#a0b62d4b74b5d3d88368e9e4c0b8b2dc7

This decouples your code from the main audio processing. Then you can do
almost anything, including writing files or generating graphics displays.
You will probably need to create a separate thread that does the read and
the FFT.

Phil Burk


On Thu, Jun 11, 2015 at 7:20 AM, Connor Gettel connorget...@me.com wrote:

 Hello Everyone,

 My name’s Connor and I’m new to this mailing list. I was hoping somebody
 might be able to help me out with some FFT code.

 I want to do a spectral analysis of the mic input of my sound card. So far
 in my program i’ve got my main function initialising portaudio,
 inputParameters, outputParameters etc, and a callback function above
 passing audio through. It all runs smoothly.

 What I don’t understand at all is how to structure the FFT code in and
 around the callback as i’m fairly new to C. I understand all the steps of
 the FFT mostly in terms of memory allocation, setting up a plan, and
 executing the plan, but I’m still really unclear as how to structure these
 pieces of code into the program. What exactly can and can’t go inside the
 callback? I know it’s a tricky place because of timing etc…

 Could anybody please explain to me how i could achieve a real to complex 1
 dimensional DFT on my audio input using a callback?

 I cannot even begin to explain how grateful I would be if somebody could
 walk me through this process.

 I have attached my callback function code so far with the FFT code
 unincorporated at the very bottom below the main function (should anyone
 wish to have a look)

 I hope this is all clear enough, if more information is required please
 let me know.

 Thanks very much in advance!

 All the best,

 Connor.



 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Fwd: Array indexing in Matlab finally corrected after 30 years!

2015-04-02 Thread Phil Burk
Speaking of zero-based indexing, my neighbor's street address is 0
Meadowood Drive. There was a 4 Meadowood Drive already existing. They
left room to build one more house at the end of the street. But instead of
building a house they built two cottages. So they had to number them 0 and
2.

My wife doesn't understand why I am so jealous of that street address. My
neighbor says he likes it but he keeps getting forms returned asking him to
correct his street address.

Phil Burk
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] I am working on a new DSP textbook using Python. Comments are welcome!

2015-01-14 Thread Phil Burk
Hello Allen,

Your book and the DSP library will be very helpful. I like how you
integrate with iPython.

Your discussion of Brown Noise taught me something new and inspired me to
add that as a unit generator to JSyn.

One suggestion: in some cases the operations involved are hidden by high
level calls to your Python functions. It would help to have some
pseudo-code that showed people how to write the code if they were using
another language.

For example, the Brownian noise code was:

dys = numpy.random.uniform(-1, 1, len(ts))
ys = numpy.cumsum(dys)
ys = normalize(unbias(ys), self.amp)

Here is some pseudo code for the inner loop that is similar to what I
implemented in JSyn.

r = (random() * 2.0) - 1.0; // ranges from -1.0 to +1.0

output = (previous * 0.999) + (r * amplitude);

previous = output;

The code is very different because I am generating a continuous stream. So
I cannot, for example, normalize an array of output.

Phil Burk

On Wed, Jan 14, 2015 at 3:40 PM, Allen Downey dow...@allendowney.com
wrote:

 I am developing a textbook for a computational (as opposed to mathematical)
 approach to DSP, with emphasis on applications -- especially sound/music
 processing.  People on this list might like this example from Chapter 9:


 http://nbviewer.ipython.org/github/AllenDowney/ThinkDSP/blob/master/code/chap09preview.ipynb

 I have a draft of the first 9 chapters, working on one or two more. I am
 publishing excepts and the supporting IPython notebooks in my blog, here:

 http://thinkdsp.blogspot.com

 Of if you want to go straight to the book, it is here:

 http://think-dsp.com

 Comments (and corrections) are welcome!
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] entropy

2014-10-16 Thread Phil Burk



On 10/16/14, 3:43 AM, Peter S wrote:

Quantifying information is not something that can be discussed in
depth in only a dozen messages in a single weekend


Very true. Have you considered writing a book on entropy? You clearly 
can generate a lot of content on a daily basis and could easily fill a 
book. I think that might be more productive than sending it, a few 
paragraphs at a time, to a *music DSP* mailing list.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] entropy

2014-10-15 Thread Phil Burk

Hello Peter,

I'm trying to understand this entropy discussion.

On 10/15/14, 2:08 AM, Peter S wrote:
 Let's imagine that your message is 4 bits long,
 If we take the minimal number of 'yes/no' questions I need to guess
 your message with a 100% probability, and take the base 2 logarithm of
 that, that is _precisely_ the Shannon entropy of your message.
Is your password 0000 ?
 Is your password 0001 ?

That would take 16 questions. But instead of asking those 16 questions, 
why not ask:


Is the 1st bit a 1?
Is the 2nd bit a 1?
Is the 3rd bit a 1?
Is the 4th bit a 1?

Then you can guess the message using only 4 yes/no questions. So the 
Shannon entropy of a 4 bit message would _precisely_ be log2(4) = 2. Is 
that right?
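[Editor's note, not part of the original exchange.] For reference, the textbook Shannon entropy of a uniformly random 4-bit message comes out to 4 bits, matching the four bit-wise questions directly:

```python
import math

# Uniform distribution over all 2**4 = 16 possible 4-bit messages.
probs = [1.0 / 16.0] * 16

# Shannon entropy H = -sum(p * log2(p)) = log2(16) = 4 bits.
H = -sum(p * math.log2(p) for p in probs)
```

So the entropy equals the number of yes/no questions in the bit-by-bit strategy; no further logarithm of the question count appears in the definition.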


Phil

On 10/15/14, 2:08 AM, Peter S wrote:

On 14/10/2014, Peter S peter.schoffhau...@gmail.com wrote:

Again, the minimal number of 'yes/no' questions needed to guess your
message with 100% probability is _precisely_ the Shannon entropy of
the message:


Let me demonstrate this using a simple real-world example.

Let's imagine that your message is 4 bits long, and I want to guess
it. In that case, my guesses for your message would be:


0000
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
1011
1100
1101
1110
1111


As you see, using these 16 guesses, I can guess your message with a
100% probability, since there is no other possible combination of
bits, so one of them must be your message. So the probability of
success for each guess is 1/16 (= 1/2^4). In other words, the maximum
entropy of your message is log2(16) = 4 bits, because I can guess it
in a maximum of 2^4 = 16 guesses.

It's hard to tell the entropy precisely - if I'm smart and I find out
that your favourite string of bits is '0101', and I try that as a
first guess, I can guess your message instantly and therefore the
entropy of your message is effectively zero. So it's hard to tell
exactly, how many guesses I minimally need.

But there is one thing that is absolutely certain: if your bits come
from a fully decorrelated source, like sampled bits of thermal white
noise, then - since I have no a-priori knowledge - there is no other
way for me to guess your message, other than to try out each possible
combination, giving a probability of success per each guess as 1/16,
and an entropy of log2(16) = 4 bits. In this case, the entropy of your
message is precisely defined (assuming I know its length), because the
probability of success per guess is precisely defined, and comes from
combinatorics.

Again, _all_ I'm doing is asking simple 'yes/no' questions:

Is your password 0000 ?
Is your password 0001 ?
Is your password 0010 ?
Is your password 0011 ?
...and so on.

If we take the minimal number of 'yes/no' questions I need to guess
your message with a 100% probability, and take the base 2 logarithm of
that, that is _precisely_ the Shannon entropy of your message.

Welcome to the real world. This is what we could call the
'combinatorial' approach to entropy, as opposed to the 'probabilistic'
approach to entropy. In this case, the probability of a successful
guess comes from a combinatiorial answer. If your approach is
'ill-formed' for a given situation, then you will not get any
meaningful answer. In these situations, you need to change your
approach.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Microphones for measuring stuff, your opinions

2014-08-26 Thread Phil Burk
On 8/26/14, 6:23 PM, Rohit Agarwal wrote:
> Do they come with software for post-processing
> or are they just inputs for recording tools?


I believe there are commercial analysis programs that can import these 
standard format calibration files. But no software came with the mic.


Some software is listed here, eg. Spectrafoo

http://www.earthworksaudio.com/microphones/m-series/m30/

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Simulating Valve Amps

2014-06-21 Thread Phil Burk



On 6/21/14, 8:09 AM, Rich Breen wrote:

5 msec becomes very noticable on headphones, and above
6 msec is not usable.


Note that the speed of sound in air is roughly 1125 feet/second. So if a 
guitar player is more than 7 feet from their amp then they will have 
more than 6 msec of latency.


For acoustic instrument players the speed of sound is not an issue. But 
if an electronic musician is looking for ultra-low latency then they 
must also consider their distance from the loudspeaker.
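The arithmetic, as a one-liner (speed of sound taken as roughly 1125 ft/s, per the post above):

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate speed of sound in air

def acoustic_latency_ms(distance_ft):
    """Time for sound to travel distance_ft, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

latency = acoustic_latency_ms(7.0)  # about 6.2 ms, past the 6 ms threshold
```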


Phil Burk
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] new shade of pink

2014-05-07 Thread Phil Burk

Hello Stefan,

 http://stenzel.waldorfmusic.de/post/pink/

This is really interesting. I love being able to turn on or off various 
octaves and hear the effect using your JavaScript implementation.


Adding the FIR extends the high end nicely. I love your trick of 
precomputing the FIR by taking advantage of single bit inputs. And you 
combine the shifting of the bits in the LFSR with the delay of the input 
values to the FIR. Nice.


I notice that the error curve for the Voss algorithm has lots of peaks 
in the midrange compared to your algorithm. Is that mostly because you 
do linear interpolation instead of zero-order hold?


I also notice that the pk3 algorithm has low error and is almost as good 
as yours. It is the sum of 6 first order filters, using 14 multiplies. 
Given the speed of multiplies on modern processors, is it possible that 
the pk3 algorithm is faster than yours? The filters are independent so 
it would work well on a SIMD architecture.


Phil Burk

On 5/7/14, 1:20 AM, Stefan Stenzel wrote:

Quick and quite accurate pink noise generator, maybe useful for someone:
http://stenzel.waldorfmusic.de/post/pink/

Stefan
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Trapezoidal integrated optimised SVF v2

2013-11-07 Thread Phil Burk

Dear Theo,

I found Andrew's postings to be very interesting and helpful.

Respectful disagreement is welcome. Insults are not. Please stop.

Thank you,
Phil Burk

On 11/7/13 8:22 AM, Theo Verelst wrote:

most of what you're oresenting is boring old crap, that isn't worth
working on unless you'd actually understand some of the theory and
relevant tunings involved. Clearly you don't.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Incudine support to forge Virtual UGens

2013-06-24 Thread Phil Burk

Hello Tito,

I added a link to Incudine here:

http://www.portaudio.com/apps.html

Good luck with the project. Looks great.

Phil Burk

On 6/24/13 7:21 AM, Tito Latini wrote:

Hi all,

the last year I started to write Incudine, a heavy Music/DSP
programming environment for Common Lisp on which sounds can be hammered
and shaped. :-)

Incudine provides a mechanism for defining primitive unit generators
on-the-fly and scheduling sample accurate callbacks.

A work in progress but it is starting to be interesting.

The page of the project is

   http://incudine.sourceforge.net


Tito Latini


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-11 Thread Phil Burk
Regarding power-of-2 sized circular buffers, here is a handy way to 
verify that a bufferSize parameter is actually a power-of-2:


int init_circular_buffer( int bufferSize, ... )
{
  assert( (bufferSize & (bufferSize-1)) == 0 );
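[Editor's sketch; hypothetical class, not from any of the libraries discussed here.] The same power-of-two property lets a circular buffer wrap its indices with a mask instead of a modulo:

```python
class RingBuffer:
    """Circular buffer that relies on a power-of-two size for cheap wrapping."""

    def __init__(self, size):
        # Same check as the C assert above: a power of two has exactly
        # one bit set, so size & (size - 1) must be zero.
        if size <= 0 or (size & (size - 1)) != 0:
            raise ValueError("bufferSize must be a power of two")
        self.data = [0.0] * size
        self.mask = size - 1
        self.write_index = 0

    def write(self, sample):
        self.data[self.write_index & self.mask] = sample
        self.write_index += 1

    def read(self, delay):
        """Return the sample written `delay` writes ago (delay >= 1)."""
        return self.data[(self.write_index - delay) & self.mask]
```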

Phil Burk
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-11 Thread Phil Burk

On 6/11/12 10:36 AM, douglas repetto wrote:

That's really why I mostly lost interest in that domain -- the
realization that in order to efficiently generate audible output I'd
have to build a lot of musical/perceptual cheats into the process.


If you just start with something that generates a sequence of numbers 
then it is not likely to generate audible tones very quickly. But nature 
did not start out making sound using non-oscillating number sequences. 
Wind was howling and rain was generating grains of sound before life 
even evolved.


Animal sounds arose from the beating of insect wings, or from breath 
causing flabby bits of flesh to oscillate in the throat. Matter 
resonates naturally. So it seems fair to start a GA with some resonating 
elements. Maybe you could evolve a network of mass+spring units 
connected together randomly. They would quickly make some audible sound.


Add in some higher level fitness functions for communication or 
species recognition and you could evolve some nice soundscapes.


Phil Burk
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] what is computer music, was a little about myself

2012-02-27 Thread Phil Burk


On 2/27/12 5:13 PM, Michael Gogins wrote:
 There's computer music -- you play it on a computer.

That may be too broad. It used to mean music composed by writing 
software. It often involved novel algorithmic composition and/or custom 
synthesis techniques. Computer music sounded fundamentally different 
than other types of music.


But it is complicated now because the people who write computer music 
software started sharing their tools with the public. That is a good 
thing. But then non-programmers starting calling their creations 
computer music just because they used a computer with a MIDI sequencer.


Perhaps we need some additional terms. How about:

Software music is music created by writing software in languages such 
as C++, Java, DSP assembly.


Programmed music is music created using specialized music languages 
such as SuperCollider, Max, Pd, CSound, Chuck.


Computer music refers to either software music or programmed music.

Music created using computer productivity tools such as sequencers, 
sample editors, etc. can just be called music.


Phil Burk


Re: [music-dsp] google's non-sine

2012-02-23 Thread Phil Burk

Hello Theo,

On 2/23/12 5:18 AM, Theo Verelst wrote:
> What's the challenge being met by Google with their wavy lines?

They were celebrating Heinrich Hertz' 155th birthday.


It clearly isn't a graphics problem, nor a particularly good synthesis
engine being promoted


I'm sorry you don't like JSyn. Is there anything in particular that I 
can improve? Have you tried developing a program using JSyn?


My goal in developing JSyn was to provide a synthesis API for Java 
programmers that could run in a web browser. There are other synthesis 
engines, e.g. SuperCollider and ChucK, that are more powerful than JSyn. 
But they have their own language and are not easily used from Java.


Also please note that there is no connection between Google and JSyn. I 
was just responding to their doodle.


> (the page with the application is fun and maybe
> sound fun, but isn't put forward as the next big thing in audio).

I'm puzzled. Does it have to be the next big thing? I obviously just did 
it for fun because we were having fun talking about the Google doodle. 
Some folks enjoyed it. That's enough for me.


Phil Burk
www.softsynth.com


Re: [music-dsp] google's non-sine

2012-02-22 Thread Phil Burk

On 2/22/12 5:29 PM, Adam Puckett wrote:

> Is jSyn dependent on any other Java libraries?


No. JSyn works with just the standard JDK. There are no dependencies 
except that JSyn uses JavaSound for audio output. JavaSound is available 
on Windows, Mac and Linux but not on Android.


There is a single JSyn jar file, jsyn-beta-16.4.6.jar, that can be 
downloaded from here:


http://www.softsynth.com/jsyn/developers/download.php

Just add that JAR file to your Java CLASSPATH. You can then build JSyn 
apps using a text editor and the JDK command line tools, or Eclipse.


A programmers guide is here:

http://www.softsynth.com/jsyn/docs/usersguide.php

A list of the most important unit generators is here:

http://www.softsynth.com/jsyn/docs/unitlist.php

I have found that Java code runs slightly slower than native 'C'. It's 
about 80% as fast. But I can program much faster in Java than 'C' and my 
time is more important than the computer's time.


Enjoy,
Phil Burk





[music-dsp] Java for audio processing

2011-09-13 Thread Phil Burk


In a thread about FM synthesis, Tom Wiltshire t...@electricdruid.net wrote:


> If there's any heresy, it's probably using Java for audio processing! ;)


There are plenty of reasons *not* to use Java for audio. But in some 
circumstances it can be quite delightful.  I recently converted the 
synthesis engine in JSyn from native 'C' to pure Java and I'm glad I did.


I'm not interested in a flame war. Every language is great and has 
pluses and minuses. I program mostly in 'C' to pay the rent. But I love 
Java and thought I would share some of my experience with audio 
processing in Java.


First, why *not* use Java:

1) Garbage collection: When the JVM does garbage collection it can cause 
threads to pause. This means you have to use bigger output buffers and 
suffer higher latency as a result.  Luckily one can tune the garbage 
collector so this is not too bad. Also there are real-time JVMs that 
effectively eliminate this problem but they are very expensive and some 
run only on Linux.


2) Performance: Java code generally runs slower than equivalent 'C' 
code. It used to be much slower. But the HotSpot just-in-time compiler 
has improved performance significantly. I have seen reports that HotSpot 
code can be faster in some cases than code compiled for generic x86 
because HotSpot can optimize the code for the actual processor model it 
is running on. I found that my Java code runs about 70-80% as fast as my 
old 'C' code.  I often use no more than 10-20% of the CPU anyway so it 
really makes no difference to me.
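A rough way to see HotSpot at work is to time a DSP-style inner loop after warming it up so the JIT has compiled it. This micro-benchmark is an illustrative sketch only; absolute timings vary by JVM and machine, and serious benchmarking needs a harness such as JMH.

```java
public class MixBench {
    // A typical audio inner loop: mix two buffers at half gain.
    static void mix(float[] a, float[] b, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = 0.5f * (a[i] + b[i]);
        }
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        float[] a = new float[n], b = new float[n], out = new float[n];
        for (int i = 0; i < n; i++) {
            a[i] = i * 1e-6f;
            b[i] = (n - i) * 1e-6f;
        }
        // Warm up so HotSpot compiles the loop before we time it.
        for (int w = 0; w < 10; w++) mix(a, b, out);
        long t0 = System.nanoTime();
        mix(a, b, out);
        long t1 = System.nanoTime();
        System.out.printf("mixed %d samples in %.3f ms%n", n, (t1 - t0) / 1e6);
    }
}
```

Timing the loop before the warm-up passes would mostly measure the interpreter, which is where the "Java is slow" reputation came from.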


3) JavaSound: JavaSound was optimized for streaming so it tends to have 
high latency.  Also JavaSound on Mac is a bit broken and pops every few 
seconds. JavaSound also does not support multi-channel (N > 2) devices 
very well.  I am working on a Java wrapper for PortAudio that will 
hopefully address these issues.


So given these problems, why use Java:

A) Tools: I love the Eclipse IDE for Java. It has very powerful 
refactoring tools. The code practically writes itself.


B) Safety: If I over-index an array or miscast an object then Java tells 
me immediately and gives me a stack trace. I don't crash five minutes 
later wondering how I scribbled memory. So I don't waste a lot of time 
debugging obscure pointer bugs. I am more confident that the code I ship 
is stable.
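The point about immediate bounds checking can be shown in a few lines. This is a contrived sketch, not JSyn code: the same one-past-the-end write that silently scribbles memory in 'C' raises an exception in Java at the exact faulting statement.

```java
public class BoundsDemo {
    // Write one element past the end of a buffer and report what happens.
    // Returns the JVM's exception message; in 'C' this write could corrupt
    // adjacent memory and crash much later, far from the real bug.
    static String writePastEnd() {
        float[] buffer = new float[64];
        try {
            buffer[64] = 0.5f; // index 64 is out of range for length 64
            return null;       // unreachable: the JVM always checks bounds
        } catch (ArrayIndexOutOfBoundsException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println("caught: " + writePastEnd());
    }
}
```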


C) Cross-platform: I used to spend most of my development time for JSyn 
trying to maintain Mac, Windows and Linux versions of the native code. I 
had to deal with 32 vs 64-bit OSes, browser plugins, installers, etc. 
Yuck. Now I just build a JSyn JAR that is pure Java and it works 
everywhere. I can even write large GUI apps in Swing using Threads and 
networking and then drag them from Mac to PC or Linux and they just 
work. I can even write Applets that run in a browser. There are a few 
gotchas related to file paths etcetera but they are minor and easy to 
avoid once you learn how.  Now I can concentrate on writing synthesis 
and music code.


I am happy to trade off latency and performance issues for the luxury of 
writing in pure Java.


Phil Burk
http://www.softsynth.com/jsyn/



[music-dsp] resonance

2011-01-02 Thread Phil Burk



From: Nigel Redmon earle...@earlevel.com
> Very cool applet -- thanks. Too bad you can't mess with resonance on the
> lowpass filters.


Here is an Applet with a resonant lowpass biquad.

http://www.softsynth.com/jsyn/examples/filterfun.php

Try an impulse source with a cutoff=4000 and Q=8.
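For readers who cannot run the applet, here is a plausible offline sketch of the same experiment in plain Java: a resonant lowpass biquad with coefficients from Robert Bristow-Johnson's Audio EQ Cookbook, fed a unit impulse. This is an assumption for illustration; the applet's actual filter code may differ.

```java
public class FilterFun {
    // Impulse response of an RBJ-cookbook resonant lowpass biquad.
    static double[] impulseResponse(double fc, double q, double fs, int n) {
        double w0 = 2.0 * Math.PI * fc / fs;
        double alpha = Math.sin(w0) / (2.0 * q);
        double cosw0 = Math.cos(w0);
        double a0 = 1.0 + alpha;
        // Lowpass coefficients, normalized by a0 up front.
        double b0 = (1.0 - cosw0) / 2.0 / a0;
        double b1 = (1.0 - cosw0) / a0;
        double b2 = b0;
        double a1 = -2.0 * cosw0 / a0;
        double a2 = (1.0 - alpha) / a0;
        double x1 = 0, x2 = 0, y1 = 0, y2 = 0;
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double x = (i == 0) ? 1.0 : 0.0; // unit impulse input
            double out = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;                 // direct form I state update
            y2 = y1; y1 = out;
            y[i] = out;
        }
        return y;
    }

    public static void main(String[] args) {
        double[] h = impulseResponse(4000.0, 8.0, 44100.0, 64);
        for (int i = 0; i < 16; i++) System.out.printf("%2d: %+.5f%n", i, h[i]);
    }
}
```

With cutoff = 4000 Hz and Q = 8 the impulse response rings near the cutoff frequency, decaying over a few milliseconds, which is the behavior the applet lets you hear.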

Phil Burk