Re: [music-dsp] Supervised DSP architectures (vs. push/pull)

2016-07-27 Thread Ross Bencina

Hi Evan,

Greetings from my little cave deep in the multi-core scheduling rabbit 
hole! If multi-core is part of the plan, you may find that scheduling 
issues come to dominate the architecture. Here are a couple of 
starting points:


Letz, Stephane; Fober, Dominique; Orlarey, Yann; Davis, Paul,
"Jack Audio Server: MacOSX port and multi-processor version",
Proceedings of the First Sound and Music Computing Conference (SMC'04), 
pp. 177-183, 2004.

http://www.grame.fr/ressources/publications/SMC-2004-033.pdf

CppCon 2015: Pablo Halpern, "Work Stealing"
https://www.youtube.com/watch?v=iLHNF7SgVN4

Re: prioritization. Whether the goal is lowest latency or highest 
throughput, these problems fall under the category of job-shop 
scheduling. Large classes of multi-worker, multi-job-cost scheduling 
problems are NP-complete; I don't know where your particular problem 
sits. Work-stealing schedulers seem to be a popular approach, but I'm 
not sure about optimal heuristics for selecting work when there are 
multiple candidate tasks -- it's further complicated by imperfect 
information about task cost (maybe the tasks have unpredictable run 
times), inter-core communication costs, etc.
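For intuition, here is a minimal sketch (my own, not from the thread; job 
costs are made up) of the classic LPT ("Longest Processing Time first") 
greedy heuristic, one of the simplest approximations for the NP-hard 
identical-workers case -- its makespan is within 4/3 of optimal:

```python
import heapq

def lpt_schedule(costs, n_workers):
    """Assign jobs with known costs to identical workers using LPT:
    take jobs longest-first, always giving the next job to the
    currently least-loaded worker. Returns (assignment, makespan)."""
    # Min-heap of (current_load, worker_index).
    heap = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(heap)
    assignment = {}
    for job, cost in sorted(enumerate(costs), key=lambda jc: -jc[1]):
        load, w = heapq.heappop(heap)
        assignment[job] = w
        heapq.heappush(heap, (load + cost, w))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

With costs [5, 4, 3, 3, 3] on 2 workers this yields a makespan of 10 
(the optimum is 9: {5,4} vs {3,3,3}), illustrating the approximation gap.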


Re: scratch storage allocation. For a single-core, single-graph 
scenario you can use graph coloring (the same approach as a compiler's 
register allocator). For multi-core I guess you can do the same, but 
you might want to do something more dynamic, e.g. reuse a scratch 
buffer that is likely still in the local CPU's cache.
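A sketch of the single-core case, under an assumption of my own: if each 
intermediate signal's lifetime is an interval over the topologically 
ordered execution sequence, the interference graph is an interval graph, 
and greedy allocation in order of first use is optimal (this is the same 
structure linear-scan register allocators exploit):

```python
def assign_scratch(lifetimes):
    """lifetimes: list of (first_use, last_use) step indices, one per
    intermediate signal. Returns (slot_per_signal, n_buffers); signals
    whose lifetimes don't overlap share a scratch buffer."""
    order = sorted(range(len(lifetimes)), key=lambda i: lifetimes[i][0])
    free = []                      # buffer indices available for reuse
    active = []                    # (last_use, buffer) still live
    slot = [None] * len(lifetimes)
    next_buf = 0
    for i in order:
        start, end = lifetimes[i]
        # Retire signals whose last use precedes this signal's first use.
        still = []
        for last, b in active:
            (free if last < start else still).append(b if last < start else (last, b))
        active = [x for x in still]
        b = free.pop() if free else next_buf
        if b == next_buf:
            next_buf += 1
        slot[i] = b
        active.append((end, b))
    return slot, next_buf
```

For lifetimes [(0,2), (1,3), (3,4)], the first and third signals can 
share a buffer, so two buffers suffice for three signals.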


Cheers,

Ross.



On 28/07/2016 5:38 AM, Evan Balster wrote:

Hello ---

Some months ago on this list, Ross Bencina remarked about three
prevailing "structures" for DSP systems:  Push, pull and *supervised
architectures*.  This got some wheels turning, and lately I've been
confronted by the need to squeeze more performance by adding multi-core
support to my audio framework.

I'm looking for wisdom or reference material on how to implement a
supervised DSP architecture.

While I have a fairly solid idea as to how I might go about it, there
are a few functions (such as prioritization and scratch-space
management) which I think are going to require some additional thought.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] BW limited peak computation?

2016-07-27 Thread Ross Bencina

On 28/07/2016 12:04 AM, Ethan Fenn wrote:

Because I don't think there can be more than one between any two
adjacent sampling times.


This really got the gears turning. It seems true, but is it a theorem?
If not, can anyone give a counterexample?


I don't know whether it's a classical theorem, but I think it is true.

Define the normalized sinc function as:

sinc(t) := sin(pi t) / (pi t), with sinc(0) := 1.

This function is analytic everywhere.

A bandlimited, periodically sampled discrete-time signal {x_n} can be 
interpolated by a series of time-shifted normalized sinc functions, each 
centered at time n and scaled by amplitude x_n. This procedure can be 
used to produce the continuous-time analytic signal x(t) induced by 
{x_n}. We want to know how many peaks (direction changes) there can be 
in x(t) between x(n) and x(n+1).


Sinc is bandlimited, with no frequencies above the Nyquist frequency 
(fs/2). A sum of time-shifted sincs is also bandlimited, and therefore 
has no frequencies above the Nyquist frequency.


Now all you need to do is prove that a bandlimited signal whose highest 
frequency is fs/2 has no more than one direction change per sample 
period. I can't think how to do that formally right now, but intuitively 
it seems plausible that a signal with no frequencies above the Nyquist 
frequency would not have time-domain peaks spaced closer than the 
sampling period.
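A numerical spot-check (emphatically not a proof) of my own: reconstruct 
a random discrete signal with a finite sinc sum -- any finite sum of 
sincs is itself exactly bandlimited -- and count direction changes per 
interior sample interval:

```python
import numpy as np

def sinc_reconstruct(x, t):
    """Evaluate the sinc-interpolated continuous-time signal induced by
    samples x[0..N-1] at (possibly fractional) times t."""
    n = np.arange(len(x))
    # np.sinc is the normalized sinc: sin(pi t)/(pi t).
    return np.dot(np.sinc(t[:, None] - n[None, :]), x)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# Direction changes of x(t) inside each interior sample interval
# (staying away from the edges, where truncation effects dominate).
changes = []
for n0 in range(20, 40):
    t = np.linspace(n0, n0 + 1, 400)
    y = sinc_reconstruct(x, t)
    changes.append(int(np.count_nonzero(np.diff(np.sign(np.diff(y))))))
```

Inspecting max(changes) over many random draws probes the conjecture 
empirically; the reconstruction passing exactly through the samples at 
integer times is a built-in sanity check.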


Ross.






Re: [music-dsp] confirm 29f9d07aca460a7584879c1831b9e3298c4

2016-07-27 Thread Bruno Afonso
Could you please stop spamming the list? Much appreciated.

On Wed, Jul 27, 2016 at 16:09 robert bristow-johnson <
r...@audioimagination.com> wrote:

> sorry, i just ain't getting the hint.
>
> i'm sorta dense that way.
>
>
>
>  Original Message 
> Subject: confirm 29f9d07aca460a7584879c1831b9e3298c4
> From: music-dsp-requ...@music.columbia.edu
> Date: Wed, July 27, 2016 10:37 am
> To: r...@audioimagination.com
> --
>
> > Mailing list removal confirmation notice for mailing list music-dsp
> >
> > We have received a request for the removal of your email address,
> > "r...@audioimagination.com" from the music-dsp@music.columbia.edu
> > mailing list. To confirm that you want to be removed from this
> > mailing list, simply reply to this message, keeping the Subject:
> > header intact. Or visit this web page:
> >
> >
> https://lists.columbia.edu/mailman/confirm/music-dsp/29f9d07aca460a7584879c1831b9e3298c4
> >
> >
> > Or include the following line -- and only the following line -- in a
> > message to music-dsp-requ...@music.columbia.edu:
> >
> > confirm 29f9d07aca460a7584879c1831b9e3298c4
> >
> > Note that simply sending a `reply' to this message should work from
> > most mail readers, since that usually leaves the Subject: line in the
> > right form (additional "Re:" text in the Subject: is okay).
> >
> > If you do not wish to be removed from this list, please simply
> > disregard this message. If you think you are being maliciously
> > removed from the list, or have any other questions, send them to
> > music-dsp-ow...@music.columbia.edu.
> >
> >
>
>
> --
>
>
>
>
> r b-j  r...@audioimagination.com
>
>
>
>
> "Imagination is more important than knowledge."

Re: [music-dsp] idealized flat impact like sound

2016-07-27 Thread gm

(Hi Matt, we've met before at NI btw, briefly)

Thanks, I'll look these up. I think I browsed through some of this a 
few years ago; Rocchesso, I think.


I already had a hammer model for the same thing (piano synthesis) and a 
soundboard model.

For the moment I am more interested in spectral flatness. I would like 
to synthesize decaying white noise that's completely flat: basically 
the ideal reverb response, in a way.

I want to figure out what details make a piano sound like a piano, and 
how to exaggerate or idealize these; that's one reason why I replaced 
the hammer model and soundboard model with white noise for now.


(It turns out that the fluctuations in the spectrum matter, but they 
can also give an interesting touch when the noise varies with time...)

Now I want to replace it with something really flat, to figure out what 
role some of the modes of different real soundboards have, if any, or 
whether the impact sound is more important, and what makes that impact, 
perceptually, in the case of the piano (where immediate collision 
sounds don't matter).


I also experimented with random-phase noise of the soundboard spectrum 
vs. the recorded impact. It makes a difference, but I am not sure what 
it is at the moment, perceptually and phase-wise. A minimum-phase 
version of the same impact seems to sound worse to me, for instance.

The reverberation seems quite important.

Another thing that seems important is the reflections between the 
hammer and where the string is fixed, but you don't need an impact 
model for those; they can be modeled with a truncated comb-filter 
response... that's already separated out.


So my question for now is: how can we synthesize completely flat 
decaying noise? (Is it even possible?)
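One candidate answer (my own suggestion, not from the thread): an 
allpass filter has an exactly flat magnitude response by construction, 
and a cascade of Schroeder allpass sections with mutually prime delays 
smears an impulse into a decaying, noise-like response. The delays and 
gains below are arbitrary illustrative choices:

```python
import numpy as np

def schroeder_allpass(x, delay, g):
    """One Schroeder allpass section:
    y[n] = -g*x[n] + x[n-delay] + g*y[n-delay].
    Its magnitude response is exactly 1 at every frequency."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

N = 32768
h = np.zeros(N)
h[0] = 1.0  # unit impulse in
# Mutually prime delays decorrelate the echoes; the cascade stays
# exactly allpass, so the result is spectrally flat yet decays.
for d, g in [(107, 0.7), (142, 0.7), (277, 0.7), (379, 0.7)]:
    h = schroeder_allpass(h, d, g)

mags = np.abs(np.fft.rfft(h))
```

Since an allpass preserves energy, the impulse response has total energy 
1, and |FFT(h)| is 1 at every bin up to truncation error; the decay of 
each section is g per `delay` samples, so decay time and "noisiness" can 
be traded off via the gains and delay lengths.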


On 27.07.2016 at 21:33, Matt Jackson wrote:

There might also be something by Max Mathews or Curtis Roads.
I think I recall a chapter in The Computer Music Tutorial.

Sent from a phone.


On 27.07.2016, at 20:47, Andy Farnell  wrote:

For impact/contact exciters you will find plenty
of empirical studies and theoretical models in the
literature by;

Davide Rocchesso
Bruno Giordano
Perry Cook

These are good initial paper authors to search

all best
Andy Farnell




On Wed, Jul 27, 2016 at 07:00:02PM +0200, gm wrote:

Hi

I want to create a signal thats similar to a reverberant knocking or
impact sound,
basically decaying white noise, but with a more compact onset
similar to a minimum phase signal
and spectrally completely flat.

I am aware thats a contradiction.

Both, minimum phase impulse and fading random phase white noise are
unsatisfactory.
The minimum phase impulse does not sound reverberant.

The random phase noise isn't strictly flat anymore when you window
it with an exponentially decaying envelope
and also lacks a knocking impression.

I am also aware that a knocking impression comes from formants and
pronounced modes
related to shapes and material and not flat, which is another
contradiction..

I am not sure what the signal or phase alignment is I am looking for.

Also it's not a chirp cause a chirp sounds like a chirp.

What happens in a knock/impact besides pronounced modes or formants?
Somehow the phases are aligned it seems, similar to minimum phase
but then its
also random and reverberant.


Any ideas?






[music-dsp] Supervised DSP architectures (vs. push/pull)

2016-07-27 Thread Evan Balster
Hello ---

Some months ago on this list, Ross Bencina remarked about three prevailing
"structures" for DSP systems:  Push, pull and *supervised architectures*.
This got some wheels turning, and lately I've been confronted by the need
to squeeze more performance by adding multi-core support to my audio
framework.

I'm looking for wisdom or reference material on how to implement a
supervised DSP architecture.

While I have a fairly solid idea as to how I might go about it, there are a
few functions (such as prioritization and scratch-space management) which I
think are going to require some additional thought.


*Background on my use case*:  (optional reading)

I have a mature audio processing framework that I use in a number of
applications.  (I may open-source it in the future.)

   - *imitone*, which performs low-latency, CPU-intensive audio analysis.
   I plan on adding support for signaled multi-core processing for users who
   want to process many voices from a single device.

   - *SoundSelf*, a VR application which places excessive demands on audio
   rendering with close to a thousand DSPs routinely operating in the output
   stream.  (The sound designer has free rein and uses it!)  The application
   also uses one or more input streams which may be ring-buffered to output.
   I plan on adding worker threads for less latency-sensitive audio.

My current architecture is pull-based:  The DSP graph assumes a tree
structure, where each unit encapsulates both DSP and state-synchronization,
and "owns" its inputs, propagating sync events and pulling audio as needed.

This scheme has numerous inelegances and limitations.  To name a few:
 Analysis DSP requires that audio be routed through to a sink, a "splitter"
mechanism must be used to share a source between multiple consumers, and
all units' audio formats must be fixed throughout their lifetimes.  It is
not directly possible to insert DSPs into a chain at run-time, or to
migrate the graph to a different stream.

Thinking about a supervised architecture, I can see how I might be able to
solve all these problems and more.  By building a rendering and state
manager, I can reduce the implementation burden involved in building new
DSPs, prioritize time-sensitive processing and skip unnecessary work.
Lastly, it could help me to build much more elegant and robust multi-stream
and multi-core processing mechanisms.
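As I understand the idea, the core of such a supervisor is 
dependency-ordered execution over the DSP graph: the manager knows every 
node and edge, so it can compute a processing order up front instead of 
each unit pulling its inputs. A hypothetical sketch (node names 
invented), using Kahn's algorithm:

```python
from collections import deque

def render_order(graph):
    """graph: dict mapping each DSP node to its downstream consumers.
    Returns a topological processing order, so each node runs only
    after all of its inputs have been rendered."""
    indeg = {n: 0 for n in graph}
    for outs in graph.values():
        for m in outs:
            indeg[m] += 1
    ready = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    if len(order) != len(graph):
        raise ValueError("cycle in DSP graph")
    return order
```

The same order computation also exposes which nodes are mutually 
independent at each step, which is exactly what a multi-core worker pool 
would consume.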

I would be very interested to hear from others who have used this type of
architecture:  Strengths, weaknesses, gotchas, et cetera.

– Evan Balster
creator of imitone 

Re: [music-dsp] idealized flat impact like sound

2016-07-27 Thread Matt Jackson
There might also be something by Max Mathews or Curtis Roads.
I think I recall a chapter in The Computer Music Tutorial.

Sent from a phone. 

> On 27.07.2016, at 20:47, Andy Farnell  wrote:
> 
> For impact/contact exciters you will find plenty 
> of empirical studies and theoretical models in the 
> literature by;
> 
> Davide Rocchesso
> Bruno Giordano
> Perry Cook
> 
> These are good initial paper authors to search 
> 
> all best
> Andy Farnell
> 
> 
> 
>> On Wed, Jul 27, 2016 at 07:00:02PM +0200, gm wrote:
>> 
>> Hi
>> 
>> I want to create a signal thats similar to a reverberant knocking or
>> impact sound,
>> basically decaying white noise, but with a more compact onset
>> similar to a minimum phase signal
>> and spectrally completely flat.
>> 
>> I am aware thats a contradiction.
>> 
>> Both, minimum phase impulse and fading random phase white noise are
>> unsatisfactory.
>> The minimum phase impulse does not sound reverberant.
>> 
>> The random phase noise isn't strictly flat anymore when you window
>> it with an exponentially decaying envelope
>> and also lacks a knocking impression.
>> 
>> I am also aware that a knocking impression comes from formants and
>> pronounced modes
>> related to shapes and material and not flat, which is another
>> contradiction..
>> 
>> I am not sure what the signal or phase alignment is I am looking for.
>> 
>> Also it's not a chirp cause a chirp sounds like a chirp.
>> 
>> What happens in a knock/impact besides pronounced modes or formants?
>> Somehow the phases are aligned it seems, similar to minimum phase
>> but then its
>> also random and reverberant.
>> 
>> 
>> Any ideas?
>> 
>> 
>> 



Re: [music-dsp] idealized flat impact like sound

2016-07-27 Thread Andy Farnell
For impact/contact exciters you will find plenty 
of empirical studies and theoretical models in the 
literature by;

Davide Rocchesso
Bruno Giordano
Perry Cook

These are good initial paper authors to search 

all best
Andy Farnell



On Wed, Jul 27, 2016 at 07:00:02PM +0200, gm wrote:
> 
> Hi
> 
> I want to create a signal thats similar to a reverberant knocking or
> impact sound,
> basically decaying white noise, but with a more compact onset
> similar to a minimum phase signal
> and spectrally completely flat.
> 
> I am aware thats a contradiction.
> 
> Both, minimum phase impulse and fading random phase white noise are
> unsatisfactory.
> The minimum phase impulse does not sound reverberant.
> 
> The random phase noise isn't strictly flat anymore when you window
> it with an exponentially decaying envelope
> and also lacks a knocking impression.
> 
> I am also aware that a knocking impression comes from formants and
> pronounced modes
> related to shapes and material and not flat, which is another
> contradiction..
> 
> I am not sure what the signal or phase alignment is I am looking for.
> 
> Also it's not a chirp cause a chirp sounds like a chirp.
> 
> What happens in a knock/impact besides pronounced modes or formants?
> Somehow the phases are aligned it seems, similar to minimum phase
> but then its
> also random and reverberant.
> 
> 
> Any ideas?
> 
> 
> 

[music-dsp] idealized flat impact like sound

2016-07-27 Thread gm


Hi

I want to create a signal that's similar to a reverberant knocking or 
impact sound: basically decaying white noise, but with a more compact 
onset, similar to a minimum-phase signal, and spectrally completely flat.

I am aware that's a contradiction.

Both the minimum-phase impulse and fading random-phase white noise are 
unsatisfactory. The minimum-phase impulse does not sound reverberant. 
The random-phase noise isn't strictly flat anymore when you window it 
with an exponentially decaying envelope, and it also lacks a knocking 
impression.

I am also aware that a knocking impression comes from formants and 
pronounced modes related to shapes and material, and so is not flat, 
which is another contradiction...

I am not sure what the signal or phase alignment is that I am looking for.

Also, it's not a chirp, because a chirp sounds like a chirp.

What happens in a knock/impact besides pronounced modes or formants? 
Somehow the phases are aligned, it seems, similar to minimum phase, but 
then it's also random and reverberant.

Any ideas?






Re: [music-dsp] confirm 29f9d07aca460a7584879c1831b9e3298c4

2016-07-27 Thread robert bristow-johnson



sorry, i just ain't getting the hint.
i'm sorta dense that way.




 Original Message 
Subject: confirm 29f9d07aca460a7584879c1831b9e3298c4
From: music-dsp-requ...@music.columbia.edu
Date: Wed, July 27, 2016 10:37 am
To: r...@audioimagination.com
--

> Mailing list removal confirmation notice for mailing list music-dsp
>
> We have received a request for the removal of your email address,
> "r...@audioimagination.com" from the music-dsp@music.columbia.edu
> mailing list. To confirm that you want to be removed from this
> mailing list, simply reply to this message, keeping the Subject:
> header intact. Or visit this web page:
>
> https://lists.columbia.edu/mailman/confirm/music-dsp/29f9d07aca460a7584879c1831b9e3298c4
>
> Or include the following line -- and only the following line -- in a
> message to music-dsp-requ...@music.columbia.edu:
>
> confirm 29f9d07aca460a7584879c1831b9e3298c4
>
> Note that simply sending a `reply' to this message should work from
> most mail readers, since that usually leaves the Subject: line in the
> right form (additional "Re:" text in the Subject: is okay).
>
> If you do not wish to be removed from this list, please simply
> disregard this message. If you think you are being maliciously
> removed from the list, or have any other questions, send them to
> music-dsp-ow...@music.columbia.edu.

--

r b-j                          r...@audioimagination.com

"Imagination is more important than knowledge."

Re: [music-dsp] BW limited peak computation?

2016-07-27 Thread Ethan Fenn
> Because I don't think there can be more than one between any two adjacent
> sampling times.


This really got the gears turning. It seems true, but is it a theorem? If
not, can anyone give a counterexample?

Back to the main question... I think you're really going to need to
oversample by at least 2x. As Robert's example shows, it's hard to even
know where to look for extrema without oversampling. If you want to further
refine the extrema after doing this you could try the methods proposed by
Stefan or Xue, or try something like a golden section search using the
interpolator of your choice.
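The quadratic-refinement step Robert suggests (quoted below) can be 
sketched as standard parabolic peak interpolation through the 
oversampled maximum and its two neighbours; it is exact for a true 
parabola and a good approximation once you're already oversampled:

```python
def parabolic_peak(ym1, y0, yp1):
    """Fit a parabola through three equally spaced samples around a
    local maximum at index 0. Returns (offset, peak_value), with the
    offset in samples, in [-0.5, 0.5]."""
    denom = ym1 - 2.0 * y0 + yp1
    if denom == 0.0:          # degenerate: the three points are collinear
        return 0.0, y0
    d = 0.5 * (ym1 - yp1) / denom
    return d, y0 - 0.25 * (ym1 - yp1) * d
```

For example, sampling y(x) = 1 - (x - 0.2)^2 at x = -1, 0, 1 recovers 
the true peak location 0.2 and peak value 1.0 exactly.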

-Ethan


On Wed, Jul 27, 2016 at 1:29 AM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message 
> Subject: Re: [music-dsp] BW limited peak computation?
> From: "Ross Bencina" 
> Date: Tue, July 26, 2016 6:21 pm
> To: music-dsp@music.columbia.edu
> --
>
> > On 27/07/2016 7:09 AM, Sampo Syreeni wrote:
> >> Now, what I wonder is, could you still somehow pinpoint the temporal
> >> location of an extremum between sampling instants, by baseband logic?
> >> Because I don't think there can be more than one between any two
> >> adjacent sampling times.
> >
> > Presumably the certainty of such an estimate would depend on how many
> > baseband time samples you considered. Sinc decays as 1/x so that gives
> > you some idea of the potential influence of distant values -- not sure
> > exactly how that maps into distant sample's influence on peak location
> > though.
> >
>
> for normal bandlimited interpolation, i don't think you need to go more
> than +16 and -15 samples from the interpolated region (which is between 0
> and 1) and i don't think you'll need to have more than 16 or 32 phases.
>  and i think you can decently apply quadratic interpolation between those
> 1/16th or 1/32nd sample values and you'll have, for all quantitative
> purposes, a very good interpolation of the peak.
>
> dunno what exactly is meant by "baseband logic".
>
>
> --
>
>
>
>
> r b-j  r...@audioimagination.com
>
>
>
>
> "Imagination is more important than knowledge."
>