Wow, my very own hyphenated algorithm. :o)
Thanks for sharing this, Evan; it's impressive how concise the code ends up
being.
I'd echo the general feeling (which you also express in your Future Work
section) that for DSP work it really wants to be implemented using a ring
buffer. Neither
On 4/09/2016 1:42 PM, robert bristow-johnson wrote:
i think the worst case ain't gonna be too much better than
O(log2(windowSize)) per sample even with amortization over the block.
You can think that if you like, but I don't think the worst case is that
bad. I have given examples. If you
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ross Bencina" <rossb-li...@audiomulch.com>
Date: Sat, September 3, 2016 10:16 pm
To: r...@audioimagination.com
music-dsp@mu
On 4/09/2016 6:27 AM, robert bristow-johnson wrote:
if i were to test this out myself, i need to understand it enough to
write C code (or asm code) to do it.
The paper provides all of the information that you need for the basic
implementation (which I recommend to start with). If there is
On 4/09/2016 2:49 AM, robert bristow-johnson wrote:
sorry to have to get to the basics, but there are no *two* length
parameters to this alg. there is only one.
define the streaming real-time input sequence as x[n]. the length of
this signal is infinite.
output of running max alg is
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster" <e...@imitone.com>
Date: Sat, September 3, 2016 3:24 pm
To: "robert bristow-johnson" <r...@aud
l of this time, that i have mixed up the roles of N
> and R.
>
>
>
> Original Message ----------------
> Subject: Re: [music-dsp] efficient running max algorithm
> From: "Evan Balster" <e...@imitone.com>
;N". earlier i
> had been under the impression that you're processing R samples at a time,
> like processing samples in blocks of R samples. now i have no idea.
>
>
> r b-j
>
>
> ------------ Original Message
> Subject: Re: [music
Ross' explanation is solid.
Robert: R refers to range ("delay line" size, one could say) and N refers
to signal length.
Clarifying about the weird linear/binary search: It's necessary to
maintain O(N) complexity for processing N samples. This took me a while to
grasp:
If we do a backward
On 3/09/2016 2:14 PM, robert bristow-johnson wrote:
and in discussing iterators, says nothing about push_back()
and the like.
push_back(), push_front(), pop_back(), pop_front() are methods generally
available on queue-like containers.
from online i can get an idea, but it seems to me to
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ross Bencina" <rossb-li...@audiomulch.com>
Date: Fri, September 2, 2016 10:19 pm
To: music-dsp@mu
On 3/09/2016 3:12 AM, Evan Balster wrote:
Just a few clarifications:
- Local maxima and first difference don't really matter. The maximum
wedge describes global maxima for all intervals [t-x, t], where x=[R-1..0].
I think it's interesting to look at the algorithm from different
For one wedge, the worst case is a monotonic signal, which will saturate
the wedge with R samples.
For two wedges, the worst case is the situation where the absolute distance
from previous samples to the latest sample is monotonically decreasing. In
this case the wedges will contain R+1 samples
Right, aren't monotonic signals the worst case here? Or maybe not, since
they're worst for one wedge but best for the other?
Ethan D
On Fri, Sep 2, 2016 at 10:12 AM, Evan Balster wrote:
> Just a few clarifications:
>
> - Local maxima and first difference don't really matter.
Just a few clarifications:
- Local maxima and first difference don't really matter. The maximum wedge
describes global maxima for all intervals [t-x, t], where x=[R-1..0].
- If two equal samples are encountered, the algorithm can forget about the
older of them.
It has been very helpful for my
On 2/09/2016 4:37 PM, Ross Bencina wrote:
When the first difference is positive, the history is trimmed. This is
the only time any kind of O(N) or O(log2(N)) operation is performed.
First difference positive implies that a new local maximum is achieved:
in this case, all of the most recent
r...@audioimagination.com
"Imagination is more important than knowledge."
Original Message ----
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster" <e...@imitone.com>
Date: Fri, September 2
of this Lemire alg over the simpler
> binary tree alg. it doesn't seem to me to be a lot better for memory
> requirements.
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster" <e...@imitone.com>
Date: Fri, September 2, 2016 12:12 am
To: music-dsp@music.columbia.edu
Hello, all ---
Reviving this topic to mention I've created an STL-compatible header
implementing what I'm calling the "Lemire-Fenn" algorithm for rolling
minimum and maximum:
https://github.com/EvanBalster/STL_mono_wedge
This was a little pet project I undertook today to familiarize myself with
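For readers skimming the archive, the core of the windowed algorithm can be sketched in a few lines of C++ (an illustrative reconstruction, not the actual STL_mono_wedge code; all names are made up):

```cpp
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Sketch of a Lemire-style running maximum over a window of R samples.
// The deque holds (index, value) pairs and stays strictly decreasing in
// value from front to back, so the front is always the window maximum.
std::vector<float> running_max(const std::vector<float> &x, size_t R)
{
    std::vector<float> out;
    std::deque<std::pair<size_t, float>> wedge;
    for (size_t n = 0; n < x.size(); ++n)
    {
        while (!wedge.empty() && x[n] >= wedge.back().second)
            wedge.pop_back();                  // discard dominated samples
        wedge.emplace_back(n, x[n]);
        if (wedge.front().first + R <= n)
            wedge.pop_front();                 // front fell out of the window
        out.push_back(wedge.front().second);   // max over [n-R+1, n]
    }
    return out;
}
```

For example, `running_max({1,3,2,5,4,0}, 3)` yields `{1,3,3,5,5,5}`. Swapping the comparison to `<=` gives the running minimum.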
> - IIUC, none of the other algos mentioned can cheaply vary the window size
> at run-time, right?
I don't think what I've been calling "the Tito method" can cheaply vary the
window size, unless I've missed something clever. Or unless you're willing
to accept a "crossover period" during which the
> Hi, OP here.
>
>
> Unfortunately I don't understand all you are saying, due to my lack of C
> and Z-transform skills.
>
> This is what I understood, please correct me if I'm wrong:
> - My algo is called a binary max-tree. (When used with max as an
> operator)
> - The other algo discussed is
Ahh, smart! Thanks for the insight; I understand now.
It occurs to me that you could slightly tighten the bound on per-sample
work, because it's not necessary to include the elements of the linear
search in the binary one. The worst-case would then be O(J) per sample,
where J=log2(w-J). But
The trick is to count the total number of operations, not for each incoming
sample as the window moves.
The algorithm maintains a buffer that pushes at the back and pops at both front
and back. Each sample is pushed onto the buffer and popped out of it exactly
once. If R samples are popped
Yeah, with a binary search, you're doing O(log(w)) work, but you might not
end up discarding any samples. With the paper's linear search, you might do
O(w) work, but only if you end up discarding O(w) samples. So, it turns out
to be worth it, as long as you can tolerate the spike.
The thinking
Ethan ---
If we use only a binary search in the discard step, isn't the amortized
complexity still O(N)? I suppose not... We'd be doing a log2(w) search
every sample in the worst case where a monotonic decrease occurs.
I'll have to look over the paper to get a better understanding, but would
ic used in realtime hash-table migration and make that discard step run
> in constant time per sample---but that's a little more tango than I feel
> like dancing today!
>
> – Evan Balster
> creator of imitone <http://imitone.com>
>
> On Wed, Jul 20, 2016 at 11:04 A
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ethan Fenn" <et...@polyspectral.com>
Date: Wed, July 20, 2016 10:27 am
To: music-dsp@mu
Of course, for processing n samples, something that is O(n) is going to
eventually beat something that's O(n*log(w)), for big enough w.
FWIW if it's important to have O(log(w)) worst case per sample, I think you
can adapt the method of the paper to achieve this while keeping the O(1)
average.
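One way to get that (a sketch of the idea, not necessarily Ethan's exact proposal): run the linear scan for at most ~log2(w) steps, and if the new sample still dominates, finish the discard with a binary search. The worst case per sample becomes O(log w), while short scans keep the O(1) amortized average:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <deque>

// Hypothetical hybrid discard step for a max-wedge (decreasing from
// front to back): scan linearly for at most ~log2(size) pops, then
// fall back to a binary search if the new sample still dominates.
void hybrid_push(std::deque<float> &wedge, float x)
{
    size_t limit = 1;
    while ((size_t(1) << limit) < wedge.size() + 2) ++limit;  // ~log2(size)

    size_t steps = 0;
    while (!wedge.empty() && x >= wedge.back() && steps < limit)
    {
        wedge.pop_back();
        ++steps;
    }
    if (!wedge.empty() && x >= wedge.back())
    {
        // The wedge is decreasing, so with comparator (a > b) it is
        // partitioned for lower_bound: find the first element <= x and
        // erase from there to the end in O(log w) comparisons.
        auto it = std::lower_bound(wedge.begin(), wedge.end(), x,
                                   [](float a, float b) { return a > b; });
        wedge.erase(it, wedge.end());
    }
    wedge.push_back(x);
}
```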