Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Sampo Syreeni

On 2015-06-09, Ethan Duni wrote:

The Fourier transform does not exist for functions that blow up to +- 
infinity like that. To do frequency domain analysis of those kinds of 
signals, you need to use the Laplace and/or Z transforms.


Actually, in the distributional setting polynomials do have Fourier 
transforms. Naked exponentials, no, but those are evil to begin with. 
The reason this works is the duality between the Schwartz space and the 
space of tempered distributions itself. The test functions are required 
to be rapidly decreasing, which means that integrals between them and 
any function of at most polynomial growth converge, and so polynomials 
induce perfectly well-behaved distributions. In essence, the 
regularization which the Laplace transform gets from its exponential 
term and varying region of convergence is taken care of by the 
structure of the Schwartz space, and the whole machinery implements not 
a global theory of integration but a local one.
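
Concretely, the transform of a tempered distribution T is defined by 
duality with the test functions:

    \langle \hat{T}, \varphi \rangle = \langle T, \hat{\varphi} \rangle,
    \qquad \varphi \in \mathcal{S}(\mathbb{R})

and since the Fourier transform maps the Schwartz space onto itself, 
the right-hand side converges for any T of at most polynomial growth.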


I don't know how useful the resulting Fourier transforms would be to the 
original poster, though: their structure is weird, to say the least. 
Under the Fourier transform, polynomials map to linear combinations of 
derivatives of the delta distribution of various orders, and their 
spectrum has as its support the single point x=0. The same goes the 
other way: derivatives map to monomials of the corresponding order. In a 
vague sense, the functional structure at a given frequency corresponds 
to the asymptotic behavior of the distribution, while the tamer, 
function-like part corresponds to the shift-invariant structure.
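
In formulas, with the convention \hat{f}(\omega) = \int f(t) 
e^{-i\omega t}\, dt, the two directions read

    \mathcal{F}\{t^n\}(\omega) = 2\pi\, i^n\, \delta^{(n)}(\omega),
    \qquad
    \mathcal{F}\{f^{(n)}\}(\omega) = (i\omega)^n \hat{f}(\omega).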


The fact that the constant maps to a delta, and successively higher 
monomials to successively higher derivatives of it, sort of corresponds 
to the fact that approximating something with such fiendishly local 
structure as a delta (convolution with which takes the value at a 
point) and its derivatives (convolution with which takes the derivative 
of the corresponding order) calls for polynomially increasing amounts 
of high frequency energy. That is, something you can only handle in the 
distributional setting, with its functionals and only a local sense of 
integration. Trying to interpret something like that the way we do in 
conventional L_2 theory sounds likely to lead to pain.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Vadim Zavalishin

On 11-Jun-15 11:00, Sampo Syreeni wrote:

I don't know how useful the resulting Fourier transforms would be to the
original poster, though: their structure is weird, to say the least.
Under the Fourier transform, polynomials map to linear combinations of
derivatives of the delta distribution of various orders, and their
spectrum has as its support the single point x=0.


So they can be considered kind of bandlimited, although, as I noted in 
my other post, it seems to result in DC offsets in their restored 
versions if the sinc is windowed. It probably can be shown that in the 
context of BLEP these DC offsets will cancel each other (possibly under 
some additional restrictions). So this seems to agree with my previous 
guesses and ideas.


You also mentioned (or so I understood you) that exp(at) (a real, t 
from -infty to +infty) is not bandlimited (whereas my conjecture, based 
on the derivative rolloff speed, was that it's bandlimited if a is 
below the Nyquist frequency). Could you tell us what its spectrum looks 
like?


Thanks,

Vadim


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com


Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread vadim.zavalishin

Sampo Syreeni wrote on 2015-06-11 15:55:

On 2015-06-11, Vadim Zavalishin wrote:

So they can be considered kind of bandlimited, although, as I noted 
in my other post, it seems to result in DC offsets in their restored 
versions if the sinc is windowed.


Not really, if the windowing is done right. The DC offsets have more
to do with the following integration step.


I'm not sure which integration step you are referring to. As for the 
correct window length, this is what bothers me a little: it depends on 
how generic the correct-window-length condition can be.




You also mentioned (or so I understood you) that exp(at) (a real, t 
from -infty to +infty) is not bandlimited (whereas my conjecture, 
based on the derivative rolloff speed, was that it's bandlimited if a 
is below the Nyquist frequency). Could you tell us what its spectrum 
looks like?


It doesn't. Exponentials grow too fast for the transform to be well
defined. Or at least in the context of tempered distributions, which
call for (very roughly) at most polynomial growth.


OK, so the sampling theorem is not applicable to the exponential 
function even if we use tempered distributions for the Fourier 
transform maths. So we don't know whether exp is bandlimited or not. 
This brings us back to my idea of trying to extend the definition of 
bandlimitedness by replacing the Fourier transform with a sequence of 
windowed sinc convolutions.
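
One way I could imagine formalizing this (just a sketch of the idea, 
not a worked-out definition): with h the ideal lowpass kernel at the 
Nyquist frequency and w_L a window of length L, call a signal x 
bandlimited if

    \lim_{L \to \infty} \big( (w_L \cdot h) * x \big)(t) = x(t)

for all t, possibly up to the DC discrepancy mentioned earlier.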


--
Vadim Zavalishin
Reaktor Application Architect | R&D
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com


[music-dsp] [ot] other than sampling theorem, Theo

2015-06-11 Thread Sampo Syreeni

On 2015-06-11, Theo Verelst wrote:

[...] I don't recommend any of the guys I've read from here to presume 
they'll make it high up the mathematical pecking order by assuming all 
kinds of previous century generalities, while being even more 
imprecise about Hilbert Space related math than already at the 
undergrad EE sampling theory and standard signal analysis and 
differential equations solving.


Could you be any more condescending?

For your information, at least distribution spaces do not admit an inner 
product, much less a complete topology induced by it. Hell, in general 
they aren't even metrizable.


And as for last-century mathematics: yes, anything prior to 2000 
technically is just that. But let me point out that math doesn't exactly 
age, and that 21st-century math tends to be beyond most PhDs in its 
rigor, generality and methods. An EE could well stop at 1800 AD and make 
do. What we're talking about here is a theory which even now isn't 
really complete, what with Lojasiewicz and Hörmander doing their seminal 
work only in '58-'59, and many of the problems of distributional 
division continuing to generate papers to this day. That being the 
stuff that lets you operate on rational system functions in this 
setting, ferfucksake...


There are three main operations involved in the relevant sampling 
theory at hand: the digitization, where the bandwidth limitation 
should be effective, [...]


Yes.

[...] the digital processing, which, whether you like it or not, is 
very far from perfect (really, it often is, no matter how much you 
insist on fantasies of owning a perfect mathematical filter, in the 
sense of the parallel with idealized analog filters), [...]


But the problems can be regularized/mollified out of practical 
significance using 1800s maths.


[...] and the reconstruction, which in the absence of processing can 
be guaranteed to yield back the input signal when it's properly 
bandwidth limited.


Yes, and with deep delta-sigma converters we know how to achieve that as 
well, over any useful audio bandwidth. Exactly enough for the resulting 
error to *always* fall under thermionic noise, in practical circuits.


[...] without going through the proper motions of understanding the 
academic EE basics, it's a free world, at least in the West, so fine.


In fact we do our homework, often without it being in any way connected 
to a lucrative endeavour. When we don't get it, we ask for help on fora
such as these. Which are then supposed to be easy to enter, *precisely* 
because there's strength in community.


I don't believe it is us who are climbing some imaginary ladder of power 
and prestige. We've been talking math, pure and simple. What you're 
doing here is pooping on a perfectly vibrant party of others. Please 
don't do that.


I repeat the main error at hand here: it's important to have bandwidth 
limited synthesis forms, but it is equally important to *quantify* 
[...]


We know, and we have.

[...] THEN you could try to get EEs'/musicians' opinions about 
inverting and partially preventing the errors in common DAC 
reconstruction filtering.


There is no error, as shown by tens of double-blind empirical tests over 
the years. If that's your persuasion.


Start with the basics, and goof off into some strange faith in 
miraculous mathematics to solve complexities that are inherent in the 
problem.


Oh, and you just happen to hold the magic key? Math bedamned? Seriously, 
man.


An even worse idea is to mash such ideas up with the signal generators 
and filters, without concern for sample shifting, filtering errors, 
generator waveform reconstruction issues, and so on. That's not going 
to be my dream virtual anything. Just saying.


That's sheer gobbledygook, and as an EE you ought to know so.

[...] but it may also kill information that is subsequently lost for 
later processing, and it will impose a character that probably is 
boring.


Show me the information theoretical argument to that effect. I can 
follow that kind as well, as I'd surmise quite a number of people here 
can.


But I don't need to worry about accusing the teacher of humiliating 
me, because I'd be good at it.


I am reasonably sure it is you who is humiliating himself. But of course 
I'm open to being proved wrong. Why not do the test?

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Sampo Syreeni

On 2015-06-11, vadim.zavalishin wrote:

Not really, if the windowing is done right. The DC offsets have more 
to do with the following integration step.


I'm not sure which integration step you are referring to.


The typical framework starts with BLITs, implemented as interpolated 
wavetable lookup, and then goes via a discrete-time summation to derive 
BLEPs. Right? So the main problem tends to be with the summation, 
because it's a (borderline) unstable operation. We often build in 
leakiness there in order to counter the effects of limited numerical 
precision, leading to long-term average cancellation of DC. If we just 
did everything with bandlimited impulses, the DC error could be 
controlled exactly; after all, the sinc interpolation formula (was it 
Whittaker's?) really is interpolating, so any effect it has on the DC 
is at most local, even after windowing. No systematic DC offset ought 
to develop.
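
A sketch of what I mean by the leaky summation; the leak constant here 
is just an assumption, in practice you'd tune it against the numerical 
precision at hand:

    /* leaky discrete-time summation turning BLIT samples into a
       BLEP-style output; the leak keeps finite-precision DC drift
       bounded instead of letting it accumulate forever */
    float blep_from_blit(float blit_sample, float *state)
    {
        const float leak = 0.9995f;            /* slightly below 1 */
        *state = leak * *state + blit_sample;  /* leaky running sum */
        return *state;
    }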


As for the correct window length, this is what bothers me a little: 
it depends on how generic the correct-window-length condition can be.


There is no such thing as a correct window length. It's a matter of 
minimizing the interpolation artifacts, usually by pushing their maximum 
amplitude under some upper bound over the Fourier domain. The only DC 
error arising from that is the one the window function itself causes, 
and it can then be compensated by just adding it back. That's easy, 
because window functions have compact support. You just have to mind 
the effect.
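
For instance, one common way to zero out the window's contribution at 
DC, instead of adding the offset back, is to renormalize the 
windowed-sinc taps so that they sum to one; a sketch:

    /* renormalize an n-tap windowed-sinc filter to unity gain at DC */
    void normalize_dc(double *h, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += h[i];  /* gain at DC */
        for (int i = 0; i < n; i++) h[i] /= sum;  /* force it to 1 */
    }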


OK, so the sampling theorem is not applicable to the exponential 
function even if we use tempered distributions for the Fourier 
transform maths.


Correct.

Now, I don't know whether there is a framework out there which handles 
plain exponentials as well as tempered distributions handle at most 
polynomial growth. I suspect not, because that would call for the test 
functions to decay faster than any exponential, and such functions are 
measure-theoretically tricky at best. I suspect that what you'd at best 
arrive at would look very much like the L_p theory or the Laplace 
transform: various exponential growth rates quantified by various upper 
limits of regularization, and so not one single theory where the 
Fourier transform exists for all your functions at the same time, with 
the whole thing restricting to the nice L_2 isometry when both 
functions belong to that space.


But as I said, I'm not sure. That sort of stuff goes above my head 
already.


So we don't know whether exp is bandlimited or not. This brings us 
back to my idea of trying to extend the definition of bandlimitedness 
by replacing the Fourier transform with a sequence of windowed sinc 
convolutions.


The trouble is that once you go with such a local description, you start 
to introduce elements of shift-variance. That sort of thing 
automatically breaks down most of the nice structure we have with more 
conventional Fourier transforms.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] [ot] other than sampling theorem, Theo

2015-06-11 Thread robert bristow-johnson

On 6/11/15 1:20 PM, Sampo Syreeni wrote:

On 2015-06-11, Theo Verelst wrote:

[...] I don't recommend any of the guys I've read from here to 
presume they'll make it high up the mathematical pecking order by 
assuming all kinds of previous century generalities, while being even 
more imprecise about Hilbert Space related math than already at the 
undergrad EE sampling theory and standard signal analysis and 
differential equations solving.


Could you be any more condescending?



Theo, might i recommend that (using Google Groups or eternal september 
or the NNTP server of your choice) you mosey on over to comp.dsp and 
see how dumb those guys are.


glmrboy (a.k.a. Douglas Repetto) runs a pretty nice and clean and high 
S/N ratio forum here.  we're grateful for that.  because of that we try 
to be very nice to each other even when there is controversy (e.g. when 
i had a little run-in with Andrew Simper over the necessity of recursive 
solving of non-linear equations in real time and over the salient 
differences between trapezoidal integration vs. the bilinear transform); 
we try (and succeed) at not underestimating the knowledge, both 
practical and theoretical, of the other folks on the list.


but check out comp.dsp and see if they're as polite as Sampo.

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Danny van Swieten
When setting up the audio callback for PortAudio you can give it a void* to 
some data. Set up the FFT plan and pass the FFT object in as that void*. 
In the callback you can cast the void* back to get the FFT object.
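
Something like this, say; the struct layout, names and FFT size are 
just placeholders, and error handling is omitted:

    #include <portaudio.h>
    #include <fftw3.h>

    #define FRAMES 512

    typedef struct {                 /* everything the callback needs */
        float         *fftIn;
        fftwf_complex *fftOut;
        fftwf_plan     plan;
    } FftState;

    static int audioCallback(const void *input, void *output,
                             unsigned long frameCount,
                             const PaStreamCallbackTimeInfo *timeInfo,
                             PaStreamCallbackFlags statusFlags,
                             void *userData)
    {
        FftState *s = (FftState *)userData;  /* cast back from void* */
        const float *in = (const float *)input;
        for (unsigned long i = 0; i < frameCount; i++)
            s->fftIn[i] = in[i];             /* mono input assumed */
        fftwf_execute(s->plan);              /* plan was made outside */
        return paContinue;
    }

    /* setup, before starting the stream:
         FftState s;
         s.fftIn  = fftwf_alloc_real(FRAMES);
         s.fftOut = fftwf_alloc_complex(FRAMES / 2 + 1);
         s.plan   = fftwf_plan_dft_r2c_1d(FRAMES, s.fftIn, s.fftOut,
                                          FFTW_MEASURE);
         Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100.0,
                              FRAMES, audioCallback, &s);             */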

Good luck

Sent from my iPhone

 On 11 Jun 2015, at 16:20, Connor Gettel connorget...@me.com wrote:
 
 Hello Everyone,
 
 My name’s Connor and I’m new to this mailing list. I was hoping somebody 
 might be able to help me out with some FFT code. 
 
 I want to do a spectral analysis of the mic input of my sound card. So far in 
 my program I’ve got my main function initialising portaudio, inputParameters, 
 outputParameters etc., and a callback function above it passing audio through. 
 It all runs smoothly. 
 
 What I don’t understand at all is how to structure the FFT code in and around 
 the callback, as I’m fairly new to C. I understand all the steps of the FFT, 
 mostly in terms of memory allocation, setting up a plan, and executing the 
 plan, but I’m still really unclear as to how to structure these pieces of code 
 into the program. What exactly can and can’t go inside the callback? I know 
 it’s a tricky place because of timing etc… 
 
 Could anybody please explain to me how I could achieve a real-to-complex 
 one-dimensional DFT on my audio input using a callback? 
 
 I cannot even begin to explain how grateful I would be if somebody could walk 
 me through this process. 
 
 I have attached my callback function code so far with the FFT code 
 unincorporated at the very bottom below the main function (should anyone wish 
 to have a look)
 
 I hope this is all clear enough, if more information is required please let 
 me know.
 
 Thanks very much in advance!
 
 All the best,
 
 Connor.
 
 
 Callback_FFT.c

Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Theo Verelst

Hi,

While it's cute you all followed my lead to think about simple 
continuous signals that are bandwidth limited, such that they can be 
used as proper examples for a digitization/synthesis/reconstruction 
discipline, I don't recommend any of the guys I've read from here to 
presume they'll make it high up the mathematical pecking order by 
assuming all kinds of previous century generalities, while being even 
more imprecise about Hilbert Space related math than already at the 
undergrad EE sampling theory and standard signal analysis and 
differential equations solving. I don't even think there's much chance 
you'll get lucky enough to score a solution with an empty domain or 
something funny like that, and all the terminology I've heard is 
material that has been known to many scientists for a very long time.


So, let's talk about the engineering type of issue at hand afresh, from 
the correct starting point, shall we?


There are three main operations involved in the relevant sampling 
theory at hand: the digitization, where the bandwidth limitation should 
be effective, the digital processing, which, whether you like it or 
not, is very far from perfect (really, it often is, no matter how much 
you insist on fantasies of owning a perfect mathematical filter, in the 
sense of the parallel with idealized analog filters), and the 
reconstruction, which in the absence of processing can be guaranteed to 
yield back the input signal when it's properly bandwidth limited.


Of course you want to work or hobby around the subject of software 
synthesis (it's a mystery to me how little or how imperfect leadership 
or professoring has taken place concerning these subjects; I suppose 
many people want very much to prove themselves in the popular subject 
and don't mind moonlighting), or desire to have some pietistic 
occupation in software synthesis without going through the proper 
motions of understanding the academic EE basics, it's a free world, at 
least in the West, so fine.


I repeat the main error at hand here: it's important to have bandwidth 
limited synthesis forms, but it is equally important to *quantify* 
(which is harder than just qualifying some of them) the errors taking 
place in digital filtering (like the shifting versus reshaped e-powers 
problem I mentioned), the errors in the processing, and to understand 
that using a digital simulation has its own range of errors, which 
won't be solved by reversing the problems.


Finally, *IF* you have a perfect mathematical signal, say as a function 
of time, *AND* you can make, or reasonably speaking guarantee, its 
frequency content limited to half the sampling rate you're going to 
apply (remember when *I* pointed that problem out a while ago I wasn't 
exactly welcome to with some...) THEN you can start to do a perfect 
reconstruction. AND IF you somehow can make a perfect signal, or a 
perfectly reconstructed signal to sufficient accuracy, THEN you could 
try to get EEs'/musicians' opinions about inverting and partially 
preventing the errors in common DAC reconstruction filtering. Why is a 
discussion needed about that? Well, some of the modernistic boys and 
girls like to be loud, and don't think about the fact that these 
subjects can turn OK audio into audio that is dangerous for the 
hearing. An important subject.


So, all the math and methodology I've seen from this little club of 
people who seem to think they can single-handedly deal with this 
important part of the EE history, ignoring the apparent decisions made 
about these subjects possibly before they (and possibly I) were born, 
suggests to me not a single workable mathematical line or truthful 
solution strategy. Start with the basics, and goof off into some 
strange faith in miraculous mathematics to solve complexities that are 
inherent in the problem.


Now a little note about the aliasing and the creation of synthesized 
signals: I can understand the desire of some persons to want to run 
some virtual oscillators, connect some digital envelopes+VCAs, do some 
good sounding filtering, and then, within a limited latency, to arrive 
at a decent signal to send to a normal or extra quality (but standard) 
Digital to Analog Converter. It's tempting to choose a few shape 
enforcing operations, squash all signals in some of the ways that can 
be imagined, and call it a day. That's not the quality of accurate 
virtual samples I'm talking about, and it will probably sound tedious 
and repetitive soon. An even worse idea is to mash such ideas up with 
the signal generators and filters, without concern for sample shifting, 
filtering errors, generator waveform reconstruction issues, and so on. 
That's not going to be my dream virtual anything. Just saying.


So it's tempting to do tricks in digital audio processing; some of the 
aliasing guessing and output signal mangling (to stay inside a small 
range of possible signals that do not sound all too bad, and, 
important: may well be OK for human loud consumption) may improve the 
impression of

Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Richard Dobson
If it is purely for graphic display, the interesting aspect coding-wise 
will be timing, so that the display coincides closely enough with the 
audio it represents. In this regard, the update rate for a running 
display rarely needs to be more than 60 fps, and can often be slower - 
so you would only need to compute FFTs at those intervals (e.g. 
triggered by a timer and reconciled with your main audio buffer), not 
at the higher analysis frame rates associated with FFT overlapping.
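
A sketch of the sort of decoupling I mean; the sample rate, FFT size 
and names here are just assumptions:

    #include <stdatomic.h>
    #include <stddef.h>

    #define SAMPLE_RATE 44100
    #define FFT_SIZE    1024

    static float      ring[FFT_SIZE];  /* last FFT_SIZE input samples */
    static size_t     wr;              /* ring write index */
    static long       sinceTick;       /* samples since last display tick */
    static atomic_int fftDue;          /* display thread polls this flag */

    /* call from the audio callback with each input block */
    void feed_display(const float *in, unsigned long n)
    {
        for (unsigned long i = 0; i < n; i++) {
            ring[wr] = in[i];
            wr = (wr + 1) % FFT_SIZE;
        }
        sinceTick += (long)n;
        if (sinceTick >= SAMPLE_RATE / 60) {  /* ~60 updates a second */
            sinceTick = 0;
            atomic_store(&fftDue, 1);  /* display thread runs the FFT */
        }
    }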


Richard Dobson


On 11/06/2015 16:18, Phil Burk wrote:

Hello Connor,

If you just wanted to do a quick FFT and then use the spectrum to control
synthesis, then I would recommend staying in the callback. [...]

Phil Burk





Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Vadim Zavalishin

On 10-Jun-15 21:26, Ethan Duni wrote:

With the bilateral Laplace transform it's also complicated, because the
damping doesn't work there, except possibly at one specific damping
setting (for an exponential; for polynomials it doesn't work at
all), yielding a DC


Why isn't that sufficient? Do you need a bigger region of convergence for
something? Note that the region of convergence for a DC signal is also
limited to the real line/unit circle (for Laplace/Z respectively). I'm
unclear on exactly what you're trying to do with these quantities.


I'm interested in the bandlimitedness of the signal. I'm not aware of 
how I can judge bandlimitedness if I don't know the Laplace transform 
on the imaginary axis.





I'm not fully sure how to analytically extend this result to the entire
complex plane, and whether this will make sense in regard to the
bandlimiting question.


I'm not sure why you want to do that extension? But, again, note that you
have the same issue extending the transform of a regular DC signal to the
entire complex plane - maybe it would be enlightening to walk through what
you do in that case?


See above and below ;)

Alright, I'll try to reiterate last year's ideas here.

I'm interested in a firm (well, reasonably firm, whatever that means) 
foundation for the BLEP approach. The intuitive description of the BLEP 
approach is: the discontinuities of the signal and its derivatives are 
the only source of non-bandlimitedness, so if we replace these 
discontinuities with their bandlimited versions, we effectively 
bandlimit the signal.
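
As an aside, the crudest rendition of that idea is the polyBLEP trick, 
where the bandlimited step residual is approximated by a two-sample 
polynomial rather than a windowed sinc; a sketch (not the windowed-sinc 
BLEP itself, just an illustration of correcting the discontinuity):

    /* polyBLEP residual: polynomial stand-in for
       (bandlimited step) - (naive step), nonzero only within one
       sample of the phase wrap */
    static double poly_blep(double t, double dt)  /* t, dt in [0,1) */
    {
        if (t < dt) {                    /* just after the wrap */
            t /= dt;
            return t + t - t * t - 1.0;
        }
        if (t > 1.0 - dt) {              /* just before the wrap */
            t = (t - 1.0) / dt;
            return t * t + t + t + 1.0;
        }
        return 0.0;
    }

    double saw_tick(double *phase, double dt)       /* dt = freq / fs */
    {
        double naive = 2.0 * *phase - 1.0;          /* aliasing sawtooth */
        double out = naive - poly_blep(*phase, dt); /* fix the jump */
        *phase += dt;
        if (*phase >= 1.0) *phase -= 1.0;
        return out;
    }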


Now, how far can this statement be taken? That depends on the following 
two issues:


- Which infinitely differentiable signals are bandlimited and which 
aren't? Here come the polynomials and the exponentials, among other 
practically interesting signals. One could be tempted to intuitively 
think that polynomials, being integrals of a bandlimited DC, are also 
bandlimited and, taking the limit, so should be infinite polynomials, 
particularly Taylor series; so any signal representable by its Taylor 
series (particularly, any analytic signal) should be bandlimited. 
However, this clearly contradicts the knowledge that an FM'd sine is 
not bandlimited. This leads to the second question:


- Given a point where a signal and its derivatives are discontinuous, 
will the sum of the respective BLEPs converge?


In regard to the first question, we can notice that any bandlimited (in 
the Fourier transform sense) signal is necessarily entire in the 
complex plane (if I'm not mistaken, this can be derived from the 
Laplace transform properties). Also, pretty much any practically 
interesting infinitely differentiable signal is entire as well. So we 
can replace the infinite real differentiability requirement here with 
the stronger requirement of the signal being entire in the complex 
domain.
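
For reference, this is essentially the Paley-Wiener theorem: if \hat{x} 
is supported in [-\Omega, \Omega] (and x is, say, square integrable), 
then x extends to an entire function of exponential type,

    |x(z)| \le C\, e^{\Omega |\operatorname{Im} z|},

and conversely.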


However, we also need to extend the definition of bandlimitedness. 
Since polynomials and exponentials have no Fourier transform (let's 
believe so, until Sampo Syreeni or someone else gives further 
clarifications otherwise), we can't say whether they are bandlimited or 
not. More precisely, the sampling theorem cannot give any answer for 
these signals. But any answer to what, exactly?


The real question (why we are talking about bandlimitedness and the 
sampling theorem in the first place) is not the bandlimitedness itself. 
Rather, we want to know what the result of a restoration of the 
discrete-time signal by the DAC is going to be. So the extended (more 
practical) definition of bandlimitedness should be something like the 
following:


Suppose we are given a continuous-time signal, which is then naively 
sampled and further restored by a high-quality DAC. If the restored 
signal is reasonably identical to the original signal, the signal is 
called bandlimited.


It is probably reasonable to simplify the above and replace the 
high-quality DAC with a sequence of windowed sinc restoration filters, 
where the window length approaches infinity.


Reasonably identical means the following:
- the higher the quality of the DAC, the closer the restored signal is 
to the original one
- we probably can allow a discrepancy at DC. At least this discrepancy 
seems to appear in windowed sinc filters for polynomials.



Then the BLEP approach applicability condition is the following. Given 
a bounded signal representable as a sum of an entire function and the 
derivative discontinuity functions, can we bandlimit it by simply 
bandlimiting the discontinuities? Apparently, we can do so if the 
entire function is bandlimited and the sum of the bandlimited 
derivatives converges. The conjecture (please refer to my previous 
year's posts) is that the condition (at least a sufficient one) for the 
entire function being bandlimited and for the BLEP sum to converge is 
one and the same, and has to do with the rolloff speed of the 
function's derivatives as the derivative order increases.



--
Vadim 

Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread robert bristow-johnson

On 6/11/15 5:39 PM, Sampo Syreeni wrote:

On 2015-06-09, robert bristow-johnson wrote:


BTW, i am no longer much enamoured with BLIT and the descendants of 
BLIT. eventually it gets to an integrated (or twice or 3 times 
integrated) wavetable synthesis, and at that point, i'll just do 
bandlimited wavetable synthesis (perhaps interpolating between 
wavetables as we move up the keyboard).


How do you handle free-form PWM, sync and such in that case? Inquiring 
minds want to know.


if memory is cheap, lotsa wavetables (in 2 dimensions, sorta like the 
Prophet VS having 2 dimensions, but with a constellation of many more 
than 4 wavetables) with incremental differences between them and 
point-by-point interpolation between adjacent wavetables.


one dimension might be the pitch and, if your sampling rate is 48 kHz 
and you don't care what happens above 19 kHz (because you'll filter it 
all out), you can get away with 2 wavetables per octave (and you're only 
interpolating between two adjacent wavetables).  for these analog 
waveform emulations, whatever harmonics adjacent wavetables have in 
common would be phase aligned.  as you get higher in pitch, some of the 
harmonics will drop off in the wavetables used for those higher pitches; 
the other harmonics would remain at exactly the same amplitude and same 
phase, so for just those active harmonics you would be crossfading 
(with a complementary constant-voltage crossfade) from one sinusoid to 
another sinusoid that is exactly the same.  but, as you get higher up 
the keyboard, in the higher harmonics you will be crossfading from an 
active harmonic to no harmonic.


along the other dimension, that would be your sync-osc frequency ratio 
(or the PWM duty cycle).  i am not sure how many wavetables you would 
need, but i think a couple dozen (where you're always interpolating 
between just two adjacent wavetables) should be way more than enough.  
the waveshapes won't look exactly correct in the interpolated and 
bandlimited wavetables, but you can define the wavetables so that all of 
the harmonics below, say, 19 kHz are exactly what you want them to be 
(the same as the analog sync-osc).  the harmonics above 19 kHz would 
either be zero, on their way to zero (as the pitch moves up), or 
possibly folded back, but not folded back below 19 kHz.


interpolating between samples within the wavetable is another issue that 
depends on the interpolation method.  again, if memory is cheap, perhaps 
you might wanna put something like 2^12 samples per cycle with harmonics 
only going up to, say, the 300th or so (for very low notes).  then you 
can get away with linear interpolation between samples.
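
a sketch of the inner loop i have in mind; the table length, layout and 
names are just assumptions:

    /* one output sample: linear interpolation within each of two
       adjacent wavetables, then a constant-voltage crossfade */
    #define WT_LEN 4096                       /* 2^12 samples per cycle */

    float wt_tick(const float *tblA, const float *tblB, float mix,
                  double *phase, double inc)  /* inc = freq / fs */
    {
        double p = *phase * WT_LEN;
        int i0   = (int)p;
        int i1   = (i0 + 1) & (WT_LEN - 1);   /* wrap; power-of-2 len */
        float fr = (float)(p - i0);
        float a  = tblA[i0] + fr * (tblA[i1] - tblA[i0]);
        float b  = tblB[i0] + fr * (tblB[i1] - tblB[i0]);

        *phase += inc;
        if (*phase >= 1.0) *phase -= 1.0;

        return (1.0f - mix) * a + mix * b;    /* crossfade tables */
    }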


--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Phil Burk
Hello Connor,

If you just wanted to do a quick FFT and then use the spectrum to control
synthesis, then I would recommend staying in the callback. If you are doing
overlap-add then set framesPerBuffer to half your window size and combine
the current buffer with the previous buffer to feed into the FFT.

But if you are using the FFT to do complex analysis, or to drive a graphics
display, then that is probably too much for the callback. In that case just
set the callback pointer to NULL and use Pa_ReadStream() with a large
buffer size.

http://portaudio.com/docs/v19-doxydocs/portaudio_8h.html#a0b62d4b74b5d3d88368e9e4c0b8b2dc7

This decouples your code from the main audio processing. Then you can do
almost anything, including writing files or generating graphics displays.
You will probably need to create a separate thread that does the read and
the FFT.
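
A minimal sketch of the blocking-read variant; the FFT size, sample 
rate and endless loop are assumptions, and error handling is omitted:

    #include <portaudio.h>
    #include <fftw3.h>

    #define N 1024

    int main(void)
    {
        PaStream *stream;
        float buf[N];
        fftwf_complex out[N / 2 + 1];

        Pa_Initialize();
        /* NULL callback selects the blocking read/write API */
        Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100.0, N,
                             NULL, NULL);
        fftwf_plan plan = fftwf_plan_dft_r2c_1d(N, buf, out,
                                                FFTW_ESTIMATE);
        Pa_StartStream(stream);

        for (;;) {
            Pa_ReadStream(stream, buf, N); /* blocks for N frames */
            fftwf_execute(plan);           /* spectrum now in out[] */
            /* ... hand out[] to analysis / display code ... */
        }
    }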

Phil Burk


On Thu, Jun 11, 2015 at 7:20 AM, Connor Gettel connorget...@me.com wrote:

 Hello Everyone,

 My name’s Connor and I’m new to this mailing list. I was hoping somebody
 might be able to help me out with some FFT code. [...]

 All the best,

 Connor.




Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Athos Bacchiocchi
You may find this article useful:

http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

It deals with the things to do and not to do when processing audio in
realtime using callbacks.

Athos

On 11 June 2015 at 16:20, Connor Gettel connorget...@me.com wrote:

 Hello Everyone,

 My name’s Connor and I’m new to this mailing list. I was hoping somebody
 might be able to help me out with some FFT code. [...]

 All the best,

 Connor.


