Re: [music-dsp] idealized flat impact like sound

2016-08-02 Thread gm



On 02.08.2016 at 10:55, Uli Brueggemann wrote:
Maybe I'm missing the real question of the topic, but I have played around 
with creating a FIR filter:

1. generate white noise of a desired length
2. window it with an exponentially decaying envelope
3. apply some gain, e.g. 0.5
4. add a Dirac pulse at the first sample
The result is spectrally not flat, but:
5. compute the excess phase of the sum = allpass = spectrally flat, and 
use it


I can't get it to work; some questions:

when I convolve the original with the excess phase signal, shouldn't I 
get a minimum phase signal again?

(I don't)

what is the expected wave shape for the excess phase signal then?
(I get an arbitrary mixed-phase signal, not a one-sided decaying signal
- though that is also what I would have expected?)

or do I need the unwrapped phase to calculate the excess phase?
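
For reference, the usual cepstrum-based split into a minimum-phase and an
excess-phase (allpass) part needs no explicit phase unwrapping. A minimal
numpy sketch (the function name and the padding factor are my own choices,
not from the thread):

import numpy as np

def min_and_excess_phase(h, pad=8):
    """Split h so that H = H_min * H_ap, with |H_ap| = 1 (allpass)."""
    n = pad * len(h)                        # generous zero-padding
    H = np.fft.fft(h, n)
    # real cepstrum of the log magnitude
    c = np.fft.ifft(np.log(np.maximum(np.abs(H), 1e-12))).real
    # fold the cepstrum onto the causal side -> minimum phase
    w = np.zeros(n)
    w[0] = 1.0
    w[1:n // 2] = 2.0
    w[n // 2] = 1.0                         # n is even here
    H_min = np.exp(np.fft.fft(w * c))
    H_ap = H / H_min                        # unit magnitude: the excess phase
    return np.fft.ifft(H_min).real[:len(h)], np.fft.ifft(H_ap).real

Convolving h_min with h_ap reconstructs the original (up to the padding
approximation). Convolving the *original* with h_ap gives H_min * H_ap^2,
not the minimum-phase part; to recover h_min you would have to deconvolve
by the allpass (equivalently, convolve with its time reverse).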






Re: [music-dsp] Supervised DSP architectures (vs. push/pull)

2016-08-02 Thread Andy Farnell
Dreaming about novel real-time DSP architectures... bottom up? 

I find this discussion, and the general problem of DSP architectures
suited to parallel computation, exciting.
It's something I've pondered while considering a problem in
the implementation layer of procedural audio: 'level of audio
detail', simply the sound-design principle that not every sonic
detail needs computing perfectly all the time, and that
good-enough models can be 'computationally elastic'.

Indeed in games, as Ethan F indicates in the above post,
material is often wide rather than deep, with lots of 
contributory signals, and some papers (search SIGGRAPH) have 
been written on perceptual prioritisation in games.

Of course a good solution is also one that allows dynamic
reconfiguration of DSP graphs, but also one that seems to need
all the trappings of operating-system principles: prioritisation,
critical-path/dependency solving, cache prediction, scheduling,
cost estimation, a priori metrics, etc.

Although I kind of abandoned that line of thought, in honesty
due to the lazy assumption that raw CPU capability would overtake
my ambitions, there are indeed certain sound models that are
really rather hard to express within traditional DSP frameworks.
An example is fragmentation, as a recursive (forking) 'particle
system' of ever smaller pieces, each with a decreasing level of
detail. I imagine this elegantly expressed in LISP and easily
executed on multiple processors. And I can see other
applications, perhaps for new audio effects that adapt to
the changing complexity of incoming material.

But the fear I felt when thinking about "supervision" is twofold:

1) We need reliable knowledge about DSP processes
   i) Order of growth in time and space
  ii) Anomalies, discontinuities, instabilities
 iii) Significance (perhaps a perceptual model)
 
... and that knowledge might not be so reliable and consistent.
As Ross said, some of it is not easily computable, and many of these
issues in the Jack paper (Letz, Fober, Orlarey, Davis) just get
worse the more cores (and ICC paths) you add.
Again, this gets worse when the material is interactive, as in games,
and where you may want to adapt the level of audio detail
on the fly.

2) That deep, synchronous audio systems are always 'brittle': one
thing fails and everything fails, and at some point the complexity and
explicit rules at the supervisor level just become too much to create
effects and instruments that are certain not to glitch during
performance.

It's like 'real-time' and 'massively concurrent' don't mix well.

So I got to wondering whether _super_ vision is the wrong way of
looking at this for audio. Please humour a fool for a moment.
Instead of thinking like a 'kernel', what can we learn from Hurd
and Minix? What can we learn from networking and massively
concurrent asynchronous systems that have failure built in as an
assumption?

1) DSP nodes that can advertise capability
2) Processes that can solicit work with constraints
3) Opportunistic routing through available resources
4) Time to live for low priority contributory signals
5) Soft limits, cybernetics (correction and negative feedback)
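
To make 1)-4) a bit more concrete, here is a toy Python sketch (every name
in it is hypothetical, none of this exists as a library): nodes advertise
spare capacity, jobs carry a cost estimate, a priority and a time-to-live,
and a naive broker opportunistically routes each job to any node that can
still take it, silently dropping expired low-priority work.

import heapq, time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    name: str
    mflops_free: float                      # advertised spare capacity (1)
    def can_take(self, job):                # constraint check (2)
        return self.mflops_free >= job.cost_estimate

@dataclass(order=True)
class Job:
    priority: int                           # lower number = more important
    cost_estimate: float = field(compare=False)
    deadline: float = field(compare=False)  # time-to-live (4)
    render: Callable[[], None] = field(compare=False)

def dispatch(nodes, jobs):
    heapq.heapify(jobs)                     # most important work first
    while jobs:
        job = heapq.heappop(jobs)
        if time.monotonic() > job.deadline: # expired contributory signal (4)
            continue                        # ...just drop it
        for node in nodes:                  # opportunistic routing (3)
            if node.can_take(job):
                node.mflops_free -= job.cost_estimate
                job.render()                # stand-in for handing the block over
                break

Idea 5) would then live in whatever feeds mflops_free and the cost
estimates back from measured run times.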

So, if you were to think like 1960s DARPA and say "I want to
construct a DSP processor based on nodes where many could be
taken out by 'enemy action', and still get 'good enough' signal
throughput and latency" - what would that look like?

Approached this way, what you get probably looks horribly inefficient
for small systems where the inter-process bureaucracy dominates,
but it is also very scalable, doing better rather than worse as the
complexity increases.

cheers,
Andy

 

On Mon, Aug 01, 2016 at 12:16:38PM -0500, Evan Balster wrote:
> Here's my current thinking.  Based on my current and foreseeable future
> use-cases, I see just a few conditions that would play into automatic
> prioritization:
> 
>- (A) Does the DSP depend on a real-time input?
>- (B) Does the DSP factor into a real-time output?
>- (C) Does the DSP produce side-effects?  (EG. observers, sends to
>application thread)
> 
> Any chain of effects with exactly one input and one output could be grouped
> into a single task with the same priority.  Junction points whose sole
> input or sole output is such a chain could also be part of it.
> 
> This would yield a selection of DSP jobs which would be, by default,
> prioritized thus:
> 
>1. A+B+C
>2. A+B
>3. A+C
>4. B+C
>5. B
>6. C
> 
> Any DSPs which do not factor into real-time output or side-effects could
> potentially be skipped (though it's worth considering that DSPs will
> usually have state which we may want updating).
> 
> It is possible that certain use-cases may favor quick completion of
> real-time processing over latency of observer data.  In that case, the
> following scheme could be used instead:
> 
>1. A+B (and A+B+C)
>2. B+C
>3. B
>4. A+C
>5. C
> 
> (Where steps 4 and 5 may occur after the 
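
A minimal sketch of the flag-based ordering quoted above (class and
function names are mine; only the default priority table is taken from
Evan's list):

from dataclasses import dataclass

@dataclass
class DspJob:
    name: str
    a: bool   # (A) depends on a real-time input
    b: bool   # (B) factors into a real-time output
    c: bool   # (C) produces side-effects (observers, sends to app thread)

# Default ordering from the post: A+B+C, A+B, A+C, B+C, B, C
DEFAULT_ORDER = [(True, True, True), (True, True, False), (True, False, True),
                 (False, True, True), (False, True, False), (False, False, True)]

def priority(job, order=DEFAULT_ORDER):
    flags = (job.a, job.b, job.c)
    # anything not in the table (no real-time output, no side-effects)
    # sorts last and is a candidate for skipping entirely
    return order.index(flags) if flags in order else len(order)

jobs = [DspJob("meter tap", a=True, b=False, c=True),
        DspJob("synth -> master", a=False, b=True, c=False),
        DspJob("live input FX", a=True, b=True, c=False)]
jobs.sort(key=priority)   # -> live input FX, meter tap, synth -> master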

Re: [music-dsp] idealized flat impact like sound

2016-08-02 Thread Uli Brueggemann
Maybe I'm missing the real question of the topic, but I have played around with
creating a FIR filter:
1. generate white noise of a desired length
2. window it with an exponentially decaying envelope
3. apply some gain, e.g. 0.5
4. add a Dirac pulse at the first sample
The result is spectrally not flat, but:
5. compute the excess phase of the sum = allpass = spectrally flat, and use it
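
In numpy the five steps might look roughly like this (a sketch; the
parameter values and the cepstrum-based way of taking the excess phase are
illustrative choices, not prescriptions):

import numpy as np

def flat_impact_fir(n=4096, decay=0.002, gain=0.5, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)                   # 1. white noise
    env = np.exp(-decay * np.arange(n))              # 2. exponentially decaying envelope
    h = gain * noise * env                           # 3. gain, e.g. 0.5
    h[0] += 1.0                                      # 4. Dirac pulse at the first sample
    # 5. keep only the excess-phase (allpass) part -> spectrally flat
    nfft = 8 * n
    H = np.fft.fft(h, nfft)
    c = np.fft.ifft(np.log(np.maximum(np.abs(H), 1e-12))).real   # real cepstrum
    w = np.zeros(nfft)
    w[0] = 1.0
    w[1:nfft // 2] = 2.0
    w[nfft // 2] = 1.0
    H_min = np.exp(np.fft.fft(w * c))                # minimum-phase part
    H_ap = H / H_min                                 # |H_ap| = 1: the allpass / excess phase
    return np.fft.ifft(H_ap).real[:n]                # truncation to n samples is itself an approximation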

- Uli


2016-08-02 2:31 GMT+02:00 gm :

>
>
> On 01.08.2016 at 22:55, Evan Balster wrote:
>
>> The most essentially flat signal is a delta function or impulse, which is
>> also phase-aligned.  Apply any all-pass filter or series thereof to the
>> impulse, and the fourier transform over infinite time will remain flat.  I
>> recommend investigating Schroeder filters.
>>
>
> I already played with them as well as FDNs.
> Though Schroeder allpass filters in series (or reverbs in general) are not
> strictly flat, it's better than random.
>
> And it's a trade-off to have an impact-like onset.
> You get that Gaussian-smear-like smear unless you set your diffusion
> coefficient high,
> which also makes the responses longer. And the onset is a little bit
> unnatural.
> (I know you can "nest" them and change that a little bit.)
>
> Either way, this comes down to reverb design... which is quite a trap
> to waste time on...
> it's never finished in a way, at least for me.
>
> And related to reverbs, the question
> - how do I create a spectrally flat, short, decaying, noise-like and
> impact-like sequence? -
> becomes interesting again, I think.
>
> But maybe there is nothing better than a Schroeder allpass?
> I started to use random sequences for early reflections but found that
> these color the sound too much, so I basically came to the same question.
> Though for reverb a sparser noise would be better...
>
> And for allpass delays the question remains
> - how to design optimal length ratios?
> That's why I made the slightly nonsensical remark about RNGs and reverbs the
> other day.
>
> So far I just use my ears, assumptions and numerology.
>
>
>
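
On the length-ratio question above: one common rule of thumb (my own
suggestion, not from this thread) is to spread the delays roughly
geometrically and make the lengths mutually coprime in samples, so the
echo patterns never line up. A small sketch:

import math

def coprime_delays(shortest, longest, count):
    """Roughly geometric spread, nudged so all lengths are pairwise coprime."""
    lengths = []
    for i in range(count):
        target = round(shortest * (longest / shortest) ** (i / (count - 1)))
        d = target
        while any(math.gcd(d, other) != 1 for other in lengths):
            d += 1                      # nudge up until coprime with the rest
        lengths.append(d)
    return lengths

print(coprime_delays(441, 1764, 4))     # -> [441, 701, 1111, 1765] at 44.1 kHz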
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp