Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Ethan Duni
>The fact that the constant maps to a delta and the successive higher
>derivatives to monomials of equally higher order sort of correspond to
>the fact that in order to approximate something with such fiendishly
>local structure as a delta (corresponding in convolution to taking the
>value) and its derivatives (convolution with which is the derivative of
>corresponding order) calls for polynomially increasing amounts of high
>frequency energy. That is, something you can only handle in the
>distributional setting, with its functionals and only a local sense of
>integration. Trying to interpret something like that the way we do in
>conventional L_2 theory sounds likely to lead to pain.

Thanks for expanding on that, this is quite interesting stuff. However, if
I'm following this correctly, it seems to me that the problem of
multiplication of distributions means that the whole basic set-up of the
sampling theorem needs to be reworked to make sense in this context. I.e.,
not much point worrying about whether the exotic combinations of
derivatives of delta functions that result from polynomials should be
called "band limited" or not, when we don't have a way to relate such a
property back to sampling/reconstruction of tempered distributions in the
first place.
No?
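
For concreteness, the correspondence Sampo describes below can be sketched as follows (a standard tempered-distribution identity, stated here from memory, with the convention F{x}(omega) = \int x(t) e^{-i omega t} dt):

```latex
\mathcal{F}\{t^n\}(\omega) = 2\pi\, i^n\, \delta^{(n)}(\omega)
\quad\Longrightarrow\quad
\mathcal{F}\Big\{\sum_{n=0}^{N} a_n t^n\Big\}(\omega)
  = 2\pi \sum_{n=0}^{N} a_n\, i^n\, \delta^{(n)}(\omega)
```

i.e. any polynomial transforms to a finite linear combination of delta derivatives, supported entirely at omega = 0.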

E

On Thu, Jun 11, 2015 at 2:00 AM, Sampo Syreeni  wrote:

> On 2015-06-09, Ethan Duni wrote:
>
>  The Fourier transform does not exist for functions that blow up to +-
>> infinity like that. To do frequency domain analysis of those kinds of
>> signals, you need to use the Laplace and/or Z transforms.
>>
>
> Actually in the distributional setting polynomials do have Fourier
> transforms. Naked exponentials, no, but those are evil to begin with. The
> reason that works is the duality between the Schwartz space and that of
> tempered distributions themselves. The test functions are required to be
> rapidly decreasing which means that integrals between them and any function
> of at most polynomial growth converge, and so polynomials induce perfectly
> well behaved distributions. In essence the regularization which the Laplace
> transform gets from its exponential term and varying area of convergence is
> taken care of by the structure of the Schwartz space, and the whole
> machinery implements not a global theory of integration, but a local one.
>
> I don't know how useful the resulting Fourier transforms would be to the
> original poster, though: their structure is weird to say the least. Under
> the Fourier transform polynomials map to linear combinations of the
> derivatives of various orders of the delta distribution, and their spectrum
> has as its support the single point x=0. The same goes the other way:
> derivatives map to monomials of corresponding order. In a vague sense that
> functional structure at a certain frequency corresponds to the asymptotic
> behavior of the distribution, while the tamer function-like part
> corresponds to the shift-invariant structure.
>
> The fact that the constant maps to a delta and the successive higher
> derivatives to monomials of equally higher order sort of correspond to the
> fact that in order to approximate something with such fiendishly local
> structure as a delta (corresponding in convolution to taking the value) and
> its derivatives (convolution with which is the derivative of corresponding
> order) calls for polynomially increasing amounts of high frequency energy.
> That is, something you can only handle in the distributional setting, with
> its functionals and only a local sense of integration. Trying to interpret
> something like that the way we do in conventional L_2 theory sounds likely
> to lead to pain.
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>


Re: [music-dsp] FFTW Help in C

2015-06-12 Thread Bjorn Roche
Ross and Conner,

It's absolutely true that each callback must respond in a certain amount of
time. The maximum execution time of each callback is something you need to
consider. Ross' language is more precise, but maybe some arm-wavy examples
will help:

- If each callback does the same amount of work (e.g., each call does an
FFT on all the incoming data), then each callback likely takes about the
same amount of time. In this case, moving to another thread *probably*
won't help (although, depending on the system, it might, especially if you
are anywhere near the allotted time you have for each callback).
- If some callbacks do different amounts of work (e.g., filling a buffer
and doing one FFT every three or four callbacks), then the maximum time is
much larger than the average. In this case, passing the audio to another
thread and doing the work there will likely help, because the additional
buffering helps to "spread the work around".

bjorn

On Fri, Jun 12, 2015 at 4:25 AM, Ross Bencina 
wrote:

> Hey Bjorn, Connor,
>
> On 12/06/2015 1:27 AM, Bjorn Roche wrote:
>
>> The important thing is to do anything that might take an unbounded
>> amount of time outside your callback. For a simple FFT, the rule of
>> thumb might be that all setup takes place outside the callback. For
>> example, as long as you do all your malloc stuff outside the
>> callback, processing and so on can usually be done in the callback.
>>
>
> All true, but Connor may need to be more careful than that. I.e. make sure
> that the amount of time taken is guaranteed to be less than the time
> available in each callback.
>
> An FFT is not exactly a free operation. On a modern 8-core desktop
> machine it's probably trivial to perform a 2048-point FFT in the audio
> callback. But on a low-powered device, a single FFT of large enough size
> may exceed the available time in an audio callback. (Connor mentioned
> Raspberry Pi on another list).
>
> The only way to be sure that the FFT is OK to run in the callback is to:
>
> - work out the callback period
>
> - work out how long the FFT takes to compute on your device and how many
> you need to compute per callback.
>
> - make sure time-to-execute-FFT << callback-period (I'd aim for below
> 75% of one callback period to execute the entire FFT). This is not
> something that can be easily amortized across multiple callbacks.
>
>
> The above also assumes that your audio API lets you use 100% of the
> available CPU time within each callback period. A safer default
> assumption might be 50%.
>
> Remember that your callback period will be short (64 samples) but
> your FFT may be large, e.g. 2048 points. In such cases you have to perform
> a large FFT in the time of a small audio buffer period.
>
> If the goal is to display the results I'd just shovel the audio data
> into a buffer and FFT it in a different thread. That way if the FFT
> thread falls behind it can drop frames without upsetting the audio
> callback.
>
> The best discussion I've seen about doing FFTs synchronously is in this
> paper:
>
> "Implementing Real-Time Partitioned Convolution Algorithms on
> Conventional Operating Systems"
> Eric Battenberg, Rimas Avizienis
> DAFx2011
>
> Google says it's available here:
>
> http://cnmat.berkeley.edu/system/files/attachments/main.pdf
>
> If anyone has other references for real-time FFT scheduling I'd be
> interested to read them.
>
> Cheers,
>
> Ross.
>



-- 
Bjorn Roche
@shimmeoapp


Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Andreas Tell

On 12 Jun 2015, at 14:31, Vadim Zavalishin 
 wrote:

> On one hand cos(omega0*t) is delta(omega-omega0)+delta(omega+omega0) in the 
> frequency domain (some constant coefficients possibly omitted). On the other 
> hand, its Taylor series expansion in time domain corresponds to an infinite 
> sum of derivatives of delta(omega). So an infinite sum delta^(n)(omega) 
> (which are zero everywhere except at the origin) must converge to 
> delta(omega-omega0)+delta(omega+omega0), correct? ;)

I’m not using any convergence properties in my argument. The space you’re 
arguing about is not complete, so convergence is not necessarily given.
My argument is simply that i*omega as an operator in frequency domain cannot 
have real eigenvalues. 

If the core of your argument is that we could take \delta^{(n)} as a possible 
frame for constructing distributions that are eigenvectors of i*\omega, then 
that won’t work: i*\omega*\delta^{(n)}(\omega-\omega_0) is again a Dirac 
distribution because it integrates as follows:

\int_{\omega_0-\eps}^{\omega_0+\eps} i \omega \delta^{(n)}(\omega-\omega_0) 
d\omega = (-1)^n i (d/d\omega)^n \omega |_{\omega=\omega_0} = 
\int_{\omega_0-\eps}^{\omega_0+\eps} (-1)^n i \delta(\omega-\omega_0) 
(d/d\omega)^n \omega |_{\omega=\omega_0} d\omega

therefore 

i*\omega*\delta^{(n)}(\omega-\omega_0) = 
(-1)^n i (d/d\omega)^n \omega |_{\omega=\omega_0} \delta(\omega-\omega_0)

Note that the right hand side vanishes for n>1, and only n=0 gives an 
eigenvector. So you cannot get an infinite series of Dirac derivatives, and 
even if you did, higher orders could not be combined to form eigenvectors, 
because they would all map to n=0. This argument holds for any distribution 
with single-point support. And as soon as you have non-single point support, 
you cannot get an eigenvector because of the non-degeneracy of the operator 
i*\omega.

I can’t really go into the details of the spectral theory of infinite 
dimensional representations of the Heisenberg algebra (which this is), as it’s 
one of the more complicated topics in mathematics, but as far as I understand 
it’s quite well understood. Questions about the existence of Fourier transforms 
map quite well onto it. 

Andreas


Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Vadim Zavalishin

On 12-Jun-15 12:54, Andreas Tell wrote:

I think it’s not hard to prove that there is no consistent
generalisation of the Fourier transform or regularisation method that
would allow plain exponentials. Take a look at the representation of
the time derivative operator in both time domain, d/dt, and frequency
domain, i*omega. The one-dimensional eigensubspaces of i*omega are
spanned by the eigenvectors delta(omega-omega0) with the associated
eigenvalues i*omega0. That means all eigenvalues are necessarily
imaginary, with the exception of omega0=0. On the other hand, exp(t) is
an eigenvector of d/dt with eigenvalue 1, which is not part of the
spectrum of the frequency domain representation.

This means, there is no analytic continuation from other transforms,
no regularisation or transform in a weaker distributional sense.


On one hand cos(omega0*t) is delta(omega-omega0)+delta(omega+omega0) in 
the frequency domain (some constant coefficients possibly omitted). On 
the other hand, its Taylor series expansion in time domain corresponds 
to an infinite sum of derivatives of delta(omega). So an infinite sum 
delta^(n)(omega) (which are zero everywhere except at the origin) must 
converge to delta(omega-omega0)+delta(omega+omega0), correct? ;)


This is just to illustrate that the eigenspace reasoning might not work 
for infinite sums. And I don't know the sufficient condition for it to 
work (and this condition wouldn't hold here anyway, probably). Or is 
there any mistake in the above?
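
Carrying out the termwise transform makes the puzzle concrete (a sketch, using the identity F{t^n}(omega) = 2 pi i^n delta^{(n)}(omega)):

```latex
\cos(\omega_0 t) = \sum_{k\ge 0} \frac{(-1)^k \omega_0^{2k}}{(2k)!}\, t^{2k}
\;\longmapsto\;
2\pi \sum_{k\ge 0} \frac{\omega_0^{2k}}{(2k)!}\, \delta^{(2k)}(\omega)
```

Paired with a test function \varphi that is analytic around the origin, the right-hand side formally gives 2\pi \sum_k \varphi^{(2k)}(0)\,\omega_0^{2k}/(2k)! = \pi[\varphi(\omega_0)+\varphi(-\omega_0)], which matches the pairing of \pi[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)]. But ordinary Schwartz test functions need not be analytic, so the series does not converge in the space of tempered distributions; this seems to be exactly the convergence issue in question.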


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com

Re: [music-dsp] FFTW Help in C

2015-06-12 Thread Connor Gettel
Dear Phil,

Just wanted to say thanks for your thorough response. It was very helpful and 
raised a few questions.

When you mention driving graphics displays, did you mean software or hardware? 
The project I’m working on at the moment is based on a Raspberry Pi: the FFT is 
all conducted there, the output is sent to an Arduino over a serial bus or 
Ethernet, and then displayed on a 32x32 LED RGB matrix. For this case would you 
recommend not using the callback? I’ve seen Pa_ReadStream as opposed to the 
callback in a few source code examples, so it’s nice to have an explanation for 
why that may be the case.

--

Dear Athos,

That was a great article. I’ve read a lot of Ross’s stuff lately but I hadn’t 
found anything nearly that insightful. Thanks very much for sharing. It explained 
a lot of uncertainties that I previously had.

-

Dear Bjorn,

Your blog post was also extremely helpful! I actually came across it a couple 
of months ago and had been using it as a reference. You’ve made the process seem 
much more straightforward.



Cheers,

Connor. 

Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Andreas Tell

On 11 Jun 2015, at 19:58, Sampo Syreeni  wrote:

> Now, I don't know whether there is a framework out there which can handle 
> plain exponentials, as well as tempered distributions handle at most 
> polynomial growth. I suspect not, because that would call for the test 
> functions to be faster decaying than any exponential, and such functions are 
> measure-theoretically tricky at best. I suspect what you'd at best arrive at 
> would seem very much like the L_p theory or the Laplace transform: various 
> exponential growth rates being quantified by various upper limits of 
> regularization, and so not one single theory where the Fourier transform 
> exists for all your functions at the same time, and the whole thing 
> restricting to the nice L_2 isometry where both functions belong to that 
> space.

I think it’s not hard to prove that there is no consistent generalisation of 
the Fourier transform or regularisation method that would allow plain 
exponentials. Take a look at the representation of the time derivative operator 
in both time domain, d/dt, and frequency domain, i*omega. The one-dimensional 
eigensubspaces of i*omega are spanned by the eigenvectors delta(omega-omega0) 
with the associated eigenvalues i*omega0. That means all eigenvalues are 
necessarily imaginary, with the exception of omega0=0. On the other hand, exp(t) is 
an eigenvector of d/dt with eigenvalue 1, which is not part of the spectrum of 
the frequency domain representation.

This means, there is no analytic continuation from other transforms, no 
regularisation or transform in a weaker distributional sense.
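
The two eigen-relations in the argument, side by side (a sketch):

```latex
i\omega\,\delta(\omega-\omega_0) = i\omega_0\,\delta(\omega-\omega_0)
\qquad\text{vs.}\qquad
\frac{d}{dt}\, e^{t} = 1 \cdot e^{t}
```

so every eigenvalue of the frequency-domain operator is purely imaginary, while exp(t) would require the real eigenvalue 1 to appear in that spectrum.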

Hope this helps,

 Andreas



Re: [music-dsp] Sampling theorem extension

2015-06-12 Thread Vadim Zavalishin

On 11-Jun-15 19:58, Sampo Syreeni wrote:

On 2015-06-11, vadim.zavalishin wrote:


Not really, if the windowing is done right. The DC offsets have more
to do with the following integration step.


I'm not sure which integration step you are referring to.


The typical framework starts with BLITs, implemented as interpolated
wavetable lookup, and then goes via a discrete time summation to derive
BLEPs. Right?


I prefer analytical expressions for BLEPs of 0 and higher orders :)


So the main problem tends to be with the summation,
because it's a (borderline) unstable operation.


I don't think so. The analytical expressions give beautiful answers 
without any ill-conditioning.




So we don't know if exp is bandlimited or not. This brings us back to
my idea to try to extend the definition of "bandlimitedness", by
replacing the usage of Fourier transform by the usage of a sequence of
windowed sinc convolutions.


The trouble is that once you go with such a local description, you start
to introduce elements of shift-variance.


How's that? This condition (transparency of the convolution of the 
original signal with sinc in continuous time domain) is equivalent to 
the normal definition of bandlimitedness via the Fourier transform, as 
long as Fourier transform exists. The thing is, by understanding the 
convolution integral in the generalized Cesaro sense (or just ignoring 
the 0th and higher-order "DC offsets" arising in this convolution) we 
might attempt to extend the class of applicable signals. This seems 
relatively straightforward for polynomials. As the next step we can 
attempt to use polynomials of infinite order, particularly the Taylor 
expansions, where the BLEP conversion question will arise. The answer to 
the latter might be given by the rolloff speed of the Taylor series 
terms (derivative rolloff).
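
Written out, the proposed condition (my paraphrase, a sketch) is that convolution with the ideal lowpass kernel of cutoff \omega_c is transparent:

```latex
(x * h_c)(t) = x(t), \qquad h_c(t) = \frac{\sin(\omega_c t)}{\pi t}
```

with the integral taken in a regularized (e.g. symmetric-limit or Cesaro) sense. For example, for x(t) = t the symmetric limit gives \int (t-\tau)\, h_c(\tau)\, d\tau = t \cdot 1 - 0 = t, since \int h_c = 1 and \tau h_c(\tau) is odd.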


Regards,
Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com


Re: [music-dsp] FFTW Help in C

2015-06-12 Thread Ross Bencina


On 12/06/2015 6:51 AM, Richard Dobson wrote:

If it is purely for graphic display, the interesting aspect coding-wise
will be timing, so that the display coincides closely enough with the
audio it represents.


The following paper might give some idea about doing that with PortAudio 
timestamps:


http://www.rossbencina.com/static/writings/portaudio_sync_acmc2003.pdf

Not my best work though.


On the perceptual side, it might get "interesting." If the display 
trails the audio too much you're in trouble. Late vision is much easier 
to spot than late sound.


Cheers,

Ross.


Re: [music-dsp] FFTW Help in C

2015-06-12 Thread Ross Bencina

Hey Bjorn, Connor,

On 12/06/2015 1:27 AM, Bjorn Roche wrote:

The important thing is to do anything that might take an unbounded
amount of time outside your callback. For a simple FFT, the rule of
thumb might be that all setup takes place outside the callback. For
example, as long as you do all your malloc stuff outside the
callback, processing and so on can usually be done in the callback.


All true, but Connor may need to be more careful than that. I.e. make 
sure that the amount of time taken is guaranteed to be less than the 
time available in each callback.


An FFT is not exactly a free operation. On a modern 8-core desktop
machine it's probably trivial to perform a 2048-point FFT in the audio
callback. But on a low-powered device, a single FFT of large enough size
may exceed the available time in an audio callback. (Connor mentioned
Raspberry Pi on another list).

The only way to be sure that the FFT is OK to run in the callback is to:

- work out the callback period

- work out how long the FFT takes to compute on your device and how many
you need to compute per callback.

- make sure time-to-execute-FFT << callback-period (I'd aim for below
75% of one callback period to execute the entire FFT). This is not
something that can be easily amortized across multiple callbacks.


The above also assumes that your audio API lets you use 100% of the
available CPU time within each callback period. A safer default
assumption might be 50%.

Remember that your callback period will be short (64 samples) but
your FFT may be large, e.g. 2048 points. In such cases you have to perform
a large FFT in the time of a small audio buffer period.

If the goal is to display the results I'd just shovel the audio data
into a buffer and FFT it in a different thread. That way if the FFT
thread falls behind it can drop frames without upsetting the audio callback.

The best discussion I've seen about doing FFTs synchronously is in this
paper:

"Implementing Real-Time Partitioned Convolution Algorithms on
Conventional Operating Systems"
Eric Battenberg, Rimas Avizienis
DAFx2011

Google says it's available here:

http://cnmat.berkeley.edu/system/files/attachments/main.pdf

If anyone has other references for real-time FFT scheduling I'd be
interested to read them.

Cheers,

Ross.