Re: [music-dsp] Music applications/DSP online masters

2016-07-20 Thread Robert Marsanyi
Stanford has a well-regarded summer course at CCRMA that covers the 
basics rigorously.


--rbt


On 07/20/2016 03:11 PM, Liam Sargent wrote:

Hello all,

Been subscribed to this list for a while and have found the 
conversation fascinating. I recently graduated with a B.S. in Computer 
Science and have a strong interest in continuing my education in DSP 
programming for audio applications. I have recently started a full 
time job in the SF Bay Area as a software engineer - will likely have 
to complete course material online.


Wondering if anyone on this list has recommendations for a solid online 
M.S. program focused on audio signal processing/music applications, or 
just resources for continuing my learning in general.


Liam


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp




Re: [music-dsp] efficient running max algorithm

2016-07-20 Thread Evan Balster
Ethan ---

If we use only a binary search in the discard step, isn't the amortized
complexity still O(N)?  I suppose not...  We'd be doing a log2(w) search
every sample in the worst case where a monotonic decrease occurs.

I'll have to look over the paper to get a better understanding, but would
be very interested to hear your thought process with the partial linear
search.

– Evan Balster
creator of imitone 

On Wed, Jul 20, 2016 at 1:23 PM, Dan Gillespie 
wrote:

> Regarding the Lemire paper his code is provided here:
> https://github.com/lemire/runningmaxmin
>
> It's been a while since I've read the paper, but iirc it reports fast
> average O(.), not worst case.  Specifically it's good if the signal has few
> changes in direction, but the worst case is not better than other
> algorithms.
>
> Dan Gillespie
>

Re: [music-dsp] Music applications/DSP online masters

2016-07-20 Thread Esteban Maestre

Hi Liam,

I believe

https://ccrma.stanford.edu/academics/masters

is an excellent program. It's not an online program, but at least it 
happens close to where you live.


At Kadenze.com you'll find online (related) courses.

Esteban



--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban


Re: [music-dsp] Music applications/DSP online masters

2016-07-20 Thread Ralph Glasgal
You might take a look at www.ambiophonics.org, which discusses a whole bunch of 
DSP applications for both recording and reproduction.  If you look at the NYU 
thesis and the related AES paper, you will find ideas for further M.S. theses.  
One possibility is to codify Envelophonics and prove why it works.

 

Ralph Glasgal

 


Re: [music-dsp] efficient running max algorithm

2016-07-20 Thread Ethan Fenn
Good idea and very clear summary.

One correction: in the discard step the paper prescribes a linear search
through the queue starting from the end, not a binary search. And the
modification I propose is to stick with this linear search for the first
log2(w) samples, then switch to a binary search. I believe this makes the
overall performance worse, but that it will still be O(n), and now the
worst case for an individual sample is O(log(w)) rather than O(w), a big
improvement.

As to how the O(n) bound works, I'd suggest just reading that part of the
paper until it clicks. Informally, the gist is this: Robert rightly worries
about the case where we have to search through this whole list of w
elements to figure out how many to discard. This will indeed take some time
for large w. However, the list only grows in size when we're lucky --
specifically when we encounter a sample which is less than everything in
the list. In this case we just add it to the end and move on, and the
discard step is nearly free. So, in order for the list to have gotten this
long in the first place we must have gotten lucky w times in the past. Even
though we have to pay the piper now, the time is amortized over all of
these samples. The average time is thus bounded by a constant, and really
quite a small constant.

(And sorry, Robert, I don't have any code to share).

-Ethan




On Wed, Jul 20, 2016 at 12:39 PM, Evan Balster  wrote:

> OK, here's what I gather...  I'm also going to summarize for the benefit
> of people who haven't read the other thread.
>
> There exists an algorithm for computing the minimum and/or maximum sample
> value in a rolling range.  It runs in amortized linear time in proportion
> to the number of samples processed---that is, we can process N samples in
> O(N).  Individual samples have a worst-case time of O(w) or O(log2(w)) if
> we use a binary search.
>
> Daniel Lemire, Streaming Maximum-Minimum Filter Using No More than Three
>> Comparisons per Element, Nordic Journal of Computing, Volume 13, Number 4,
>> pages 328-339, 2006
>
>
>> http://arxiv.org/abs/cs/0610046
>
>
> From what I've read here, the maximum algorithm works thus:
>
>- Maintain a delay line containing the last w elements.
>
>- Maintain a "monotonic wedge" queue describing all potential
>maximums...
>   - The queue contains each sample from the last *w* samples which is
>   larger than all the samples after it.
>   - By definition, the front element in the queue is the maximum in
>   the range.
>   - The queue's maximum size is *w*; this happens when the samples
>   are monotonically descending.
>
>   - For each new input sample:
>- If the front element in the queue is now *w* or more samples old,
>   discard it.
>   - *Discard* all samples in the queue which are not larger than the
>   latest sample.
>   - Add the latest sample to the back of the queue.
>
>   - To implement the *Discard* step:
>   - Perform a binary search on the queue (which is already sorted) to
>   find the smallest sample larger than the new input sample.
>   - Discard all samples following the found sample.
>
>   - To query the maximum:
>   - Return the front-most element of the queue.
>
> We can derive the min or min-and-max algorithm trivially from these
> steps.  The queue can be efficiently implemented with a ring-buffer of size
> w, which could optionally store delay-line indices instead of samples.  The
> discard step would be as simple as revising the end pointer.
>
> Because the only sample which can be both a minimum and maximum candidate
> is the latest sample, we might be able to implement the algorithm with
> queue storage of size w.
>
> As Robert observes, this could get very spiky for w larger than processing
> block size.  I suspect that it would be possible to do the same kind of
> magic used in realtime hash-table migration and make that discard step run
> in constant time per sample---but that's a little more tango than I feel
> like dancing today!
>
> – Evan Balster
> creator of imitone 
>
> On Wed, Jul 20, 2016 at 11:04 AM, robert bristow-johnson <
> r...@audioimagination.com> wrote:
>
>>
>>
>>  Original Message 
>> Subject: Re: [music-dsp] efficient running max algorithm
>> From: "Ethan Fenn" 
>> Date: Wed, July 20, 2016 10:27 am
>> To: music-dsp@music.columbia.edu
>> --
>>
>> > Of course, for processing n samples, something that is O(n) is going to
>> > eventually beat something that's O(n*log(w)), for big enough w.
>>
>> i am skeptical of the claim.  at least for worst case.  if w = 2^30, i
>> would think the computational cost O(.) will have an increasing function of
>> w in the expression.
>>
>> > 

Re: [music-dsp] efficient running max algorithm

2016-07-20 Thread robert bristow-johnson







 Original Message 
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ethan Fenn" 
Date: Wed, July 20, 2016 10:27 am
To: music-dsp@music.columbia.edu
--

> Of course, for processing n samples, something that is O(n) is going to

> eventually beat something that's O(n*log(w)), for big enough w.
i am skeptical of the claim.  at least for worst case.  if w = 2^30, i would 
think the computational cost O(.) will have an increasing function of w in the 
expression.
> FWIW if it's important to have O(log(w)) worst case per sample, I think you
> can adapt the method of the paper to achieve this while keeping the O(1)
> average.
>
for real-time processing (that's how i think of a "running" max) the worst case 
timing is what one has to worry about.  at least to prevent an unexpected 
hiccup.

now, if the worst case *average* can be contained, then processing samples in 
blocks can take advantage of that averaging.  so if we process samples in, say, 
32 or 64 sample blocks, and the *worst case* timing of this can be guaranteed 
to beat the O(log2(w)), then the alg has value.  but if it cannot be guaranteed 
to do that, it does not for a real-time context.
> Instead of a linear search back through the maxima, do a linear search for
> log2(w) samples and, failing that, switch to binary search. This is
> O(log(w)) worst case, but I think it would keep the property of O(1)
> comparisons per sample. I believe the upper bound would now be 4
> comparisons per sample rather than 3. I'd have to think more about it to be
> sure.

because i am having trouble understanding *exactly* the alg, i would still 
appreciate a snippet of code.

i will bet, that given a snippet of code, i can think of a signal that will 
push the computational cost into the worst case which will be worse than 
O(log2(w)).
--
r b-j                      r...@audioimagination.com
"Imagination is more important than knowledge."___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] efficient running max algorithm

2016-07-20 Thread Ethan Fenn
Of course, for processing n samples, something that is O(n) is going to
eventually beat something that's O(n*log(w)), for big enough w.

FWIW if it's important to have O(log(w)) worst case per sample, I think you
can adapt the method of the paper to achieve this while keeping the O(1)
average.

Instead of a linear search back through the maxima, do a linear search for
log2(w) samples and, failing that, switch to binary search. This is
O(log(w)) worst case, but I think it would keep the property of O(1)
comparisons per sample. I believe the upper bound would now be 4
comparisons per sample rather than 3. I'd have to think more about it to be
sure.

-Ethan



On Tue, Jul 19, 2016 at 1:49 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
> changing the title as per Evan's request
>
>
>  Original Message 
> Subject: Re: [music-dsp] highly optimised variable rms and more
> From: "Ethan Fenn" 
> Date: Tue, July 19, 2016 9:45 am
> To: music-dsp@music.columbia.edu
> Cc: "robert bristow-johnson" 
> --
>
> > So you keep a list of all samples in your window (length w) which are
> > potential maxima -- that is, those which are >= all of the following
> > samples. By definition this is a monotonically decreasing list (not
> > strictly, there can be duplicates). For each input sample, you compare it
> > against the end of the list. If it's <= the last sample, just add it to
> the
> > list. Otherwise scan backwards until you find a sample >= it,
>
> how much does *that* cost?
>
> > chop off the
> > list after that point and add your sample. See if the first sample in
> your
> > list is expiring (if it is == the earliest sample in the window), and
> > discard it if so. Then output the first sample in the list and move on.
> >
> > It's O(1) in an amortized sense -- it's O(n) to process n samples,
> > regardless of w. The worst case time for any individual sample is O(w).
>
> that is as i expected.  O(w).  not O(1).
>
> the code i posted is, worst case, O(log2(w)).
>
>
>
> > Still, it seems very practical, the operations are simple and few.
>
> i'd like to see the code.  and then i'll cook up a signal that pushes it
> into the worst case.
>
> i thought the operations of the posted code are also few and simple and it
> guarantees worst case O(log2(w)).
>
>
>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
> restated for convenience:
>
>
>
>
>
> #define A_REALLY_LARGE_NUMBER 3.40e38
>
> typedef struct
>{
>unsigned long filter_length;  // array_size/2 < filter_length
> <= array_size
>unsigned long array_size; // must be power of 2 for this
> simple implementation
>unsigned long input_index;// the actual sample placement is
> at (array_size + input_index);
>float* big_array_base;// the big array is malloc()
> separately and is actually twice array_size;
>} search_tree_array_data;
>
>
> void initSearchArray(unsigned long filter_length, search_tree_array_data*
> array_data)
>{
>array_data->filter_length = filter_length;
>
>array_data->array_size = 1;
>filter_length--;
>while (filter_length > 0)
> {
> array_data->array_size <<= 1;
> filter_length >>= 1;
> }
>// array_size is a power of 2 such that
>// filter_length <= array_size < 2*filter_length
>// array_size = 2^ceil(log2(filter_length)) =
> 2^(1+floor(log2(filter_length-1)))
>
>array_data->input_index = 0;
>
>array_data->big_array_base =
> (float*)malloc(sizeof(float)*2*array_data->array_size);// dunno
> what to do if malloc() fails.
>
>for (unsigned long n=0; n<2*array_data->array_size; n++)
> {
> array_data->big_array_base[n] = -A_REALLY_LARGE_NUMBER;//
> init array.
> } //
> array_base[0] is never used.
> }
>
>
>
> /*
>  *   findMaxSample(new_sample, _data) will place new_sample into the
> circular
>  *   buffer in the latter half of the array pointed to by
> array_data.big_array_base .
>  *   it will then compare the value in new_sample to its "sibling" value,
> takes the
>  *   greater of the two and then pops up one generation to the parent node
> where
>  *   this parent also has a sibling and repeats the process.  since the
> other parent
>  *   nodes already have the max value of the two child nodes, when getting
> to the
>  *   top-level parent node, this node will have the maximum value of all
> the samples
>  *   in the big_array.  the number of iterations of this loop is
> ceil(log2(filter_length)).
>  */
>
> float findMaxSample(float value, search_tree_array_data* array_data)
>{
>register float* big_array = array_data->big_array_base;
>
>