Re: [music-dsp] Dither video and articles
As far as being interested in subtractive dither, I can’t say I’m terribly interested in it, mainly because I prefer a larger word size (24-bit is convenient, it can be smaller, but more than 16), and no dither at all…but I’d be willing to discuss it with you, Sampo ;-) On Mar 26, 2014, at 10:53 PM, Sampo Syreeni de...@iki.fi wrote: On 2014-03-26, Nigel Redmon wrote: Maybe this would be interesting to some list members? A basic and intuitive explanation of audio dither: https://www.youtube.com/watch?v=zWpWIQw7HWU Since it's been quiet and dither was mentioned... Is anybody interested in the development of subtractive dither? I have a broad idea in my mind, and a little bit of code (for once!) as well. Unfortunately nothing too easily adaptable though... Willing to copy and explain all of it, though. :) The video will be followed by a second part, in the coming weeks, that covers details like when, and when not, to use dither and noise shaping. I’ll be putting up some additional test files in an article on earlevel.com in the next day or so. In any case, thank you kindly. Dithering and noise shaping, both in theory and in practice, is *still* something far too few people grasp for real. -- Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2 -- dupswapdrop -- the music-dsp mailing list and website: subscription info, FAQ, source code archive, list archive, book reviews, dsp links http://music.columbia.edu/cmc/music-dsp http://music.columbia.edu/mailman/listinfo/music-dsp
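For anyone curious what subtractive dither looks like in code, here is a minimal Python sketch (an illustration of the general idea, not Sampo's actual code; the 16-bit step size and test signal are arbitrary choices). The same RPDF dither sequence is added before the quantizer and subtracted afterwards:

```python
import random

def quantize(x, step):
    # uniform mid-tread quantizer: round to the nearest multiple of step
    return step * round(x / step)

random.seed(1)
step = 1.0 / (1 << 15)       # one LSB at a hypothetical 16-bit word size
n = 10000
signal = [0.4 * ((i % 100) / 100.0 - 0.5) for i in range(n)]  # arbitrary test signal

# Subtractive dither: add RPDF dither before the quantizer, subtract the
# very same dither sample afterwards.
dither = [random.uniform(-step / 2, step / 2) for _ in range(n)]
out = [quantize(s + d, step) - d for s, d in zip(signal, dither)]

# The residual error is just the quantization error of (signal + dither):
# uniformly distributed and never larger than half an LSB.
max_err = max(abs(o - s) for o, s in zip(out, signal))
```

The practical catch, of course, is that the subtracting side needs access to exactly the same pseudo-random sequence, which is a big part of why subtractive dither is rarely seen outside research systems.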
Re: [music-dsp] Nyquist–Shannon sampling theorem
On Mar 26, 2014, at 8:42 PM, Doug Houghton doug_hough...@sympatico.ca wrote: I'm guessing this somehow scratches at the surface of what I've read about no signal being properly band limited unless it's infinite. Sure, in the same sense, we don’t properly sample to digital or properly convert back to analog anything—if “properly” means perfectly. But if it means “adequately,” then we’re good. Perfectly is a brick wall lowpass filter. Adequately is a steep filter that gives us flat response out as far as we can hear, and results in aliasing of an amplitude that is below our ability to hear. BTW, there’s always something to learn or think about, it seems. Having dinner by myself tonight, I started thinking about those zeros between samples when up-sampling. What if the sampled signal had a significant DC offset to start with? What's the difference between inserting zeros, and inserting the DC offset instead? Well, I did figure it out, and confirmed it when I got home, but it was amusing to give some thought to something I hadn’t thought much about, considering I’ve been writing oversampling code for 25+ years...
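A tiny numerical illustration of the dinner puzzle above (an editor-style sketch, not Nigel's own answer): zero-stuffing scales every spectral component, DC included, by 1/L, while stuffing with the DC value instead keeps the offset at full strength (though any AC content would still be imaged either way):

```python
def zero_stuff(x, L, fill=0.0):
    # classic first step of upsampling: insert L-1 fill values per sample
    y = []
    for s in x:
        y.append(s)
        y.extend([fill] * (L - 1))
    return y

L = 2
x = [1.0] * 8                      # a pure DC signal with offset 1.0

y = zero_stuff(x, L)               # insert zeros
dc_in = sum(x) / len(x)            # 1.0
dc_out = sum(y) / len(y)           # 0.5: every component, DC included, scales by 1/L;
                                   # the interpolation filter's passband gain of L restores it

y2 = zero_stuff(x, L, fill=dc_in)  # insert the DC offset instead
dc_out2 = sum(y2) / len(y2)        # 1.0: the offset survives at full strength
```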
Re: [music-dsp] Nyquist–Shannon sampling theorem
On 2014-03-27, Doug Houghton wrote: I understand the basics, my question is in the constraints that might be imposed on the signal or function as referenced by the theory. The basic theory presupposes that the signal is square integrable and bandlimited. That's pretty much it. If you want to make it hard and nasty, you can go well beyond that, but for the most part it suffices that you can ordinarily Riemann integrate the square of your signal, you get a finite total power figure that way, and then, for any fixed sinusoid above some fixed frequency cutoff, the product of your signal with that sinusoid integrates identically to zero. When you have that, you can derive the sampling theorem. It says that taking an equidistant train of instantaneous values of your signal, at any rate at or above twice the bandlimit mentioned above, is enough to fully reconstruct the original waveform. Point for point, exactly, no ifs, ors or buts. So the only real limitation is the upper bandlimit. Is it understood to be repeating? No, it doesn't have to be. Yes, there are four separate forms of Fourier analysis which are commonly used, and which have their own analogues of the sampling theorem. Or perhaps rather, the sampling theorem itself is a reflection of the same Fourier stuff which all of those forms of analysis rely upon. Two of the forms are periodic in time, which is why you might be thrown off here. But the basic form under which the Shannon-Nyquist sampling theorem is proved is not one of them; it covers all of the real square integrable functions, R to R. I'm thinking the math must consider it this way, or rather the difference is abstracted since the signal is assumed to be band limited, which means infinite, which means you can create any random signal by injecting the required frequencies at the required amplitudes and phases from start to finish, even a 20k 2ms blip in the middle of endless silence. 
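The reconstruction half of the theorem can be checked numerically. A rough Python sketch (my own illustration; the ideally infinite sinc sum is truncated, so the result is only approximate, and the tone frequency and sample rate are arbitrary):

```python
import math

def sinc(t):
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

fs = 100.0   # sample rate, comfortably above twice the bandlimit
f = 7.0      # bandlimited test tone
N = 4000     # truncation length; the ideal sum is infinite

def x(t):
    return math.sin(2 * math.pi * f * t)

def reconstruct(t):
    # Whittaker-Shannon interpolation: one shifted sinc per sample
    return sum(x(n / fs) * sinc(fs * t - n) for n in range(-N, N + 1))

t0 = 0.01234   # an arbitrary instant between sample points
err = abs(reconstruct(t0) - x(t0))   # small, limited only by the truncation
```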
If you inject something with time structure, the Fourier transform will decompose it as an integral of a continuum of separate frequencies. This is part of the deeper structure of Fourier analysis, and what in the quantum physics circles is called the uncertainty principle. What we in the DSP circles think of as the tradeoff between time and frequency structure, and operationalize via the idea that time domain convolution becomes a multiplication in the frequency one, and vice versa, is thought of in the physics circles as the duality between any two conjugate variables, lower-bounded by the uncertainty principle. What they call a physical law, us math freaks always called just a basic eigenproperty of any linear operator on R, lower bounded by the eigenfunctions of the class of linear, shift-invariant operators, those being the exponential class, containing complex sinusoids and, in the proper limit, Gaussians, impulses, infinite sinusoids, and all of their shifted linear combinations. That is a deep, and rich, and beautiful theory in harmonic analysis if you choose to go that way. At its most beautiful it exhibits itself in the class of tempered distributions, which you most easily get via metric completion of the intersection of real functions in square and absolute value norm, and then going to the natural topological dual of the function space which results. If you go there, you can suddenly do things like derivatives of a delta function, and bandlimitation on top of it, in your head. And whatnot. But for the most part you don't want to go there, because there's no return and no end to what follows, and it hardly helps you with anything practicable. The better way for a digital bloke is to just assume a bandlimit, and to see what comes out of that. Apply the sampling theorem as it was proven, and then get acquainted with discrete time math as fully as one can. For there lies salvation, and the app which pays your bills. 
-- Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
Re: [music-dsp] Nyquist–Shannon sampling theorem
On 27/03/2014 3:23 PM, Doug Houghton wrote: Is that making any sense? I'm struggling with the fine points. I bet this is obvious if you understand the math in the proof. I'm following along, vaguely. My take is that this conversation is not making enough sense to give you the certainty you seem to be looking for. Your question seems to be very particular regarding specifics of the definitions used in a theorem, but you have not quoted the theorem or the definitions. Most of the answers so far seem to be talking about interpretations and consequences of the theorem. May I suggest: quote the version of the theorem that you're working from and the definitions of terms used that you're assuming, and we can go from there. It may also be helpful to see the proof that you are working from, perhaps someone can help unpack that. Here's a version of the theorem that you may or may not be happy with: SHANNON-NYQUIST SAMPLING THEOREM [1] If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart. --- I'm loath to contribute my limited interpretation, but let me try (feel free to ignore or ideally, correct me): x(t) is an infinite duration continuous-time function. a frequency is defined to be an infinite duration complex sinusoid with a particular period. The theorem is saying something about an infinite duration continuous time signal x(t), and expressing a constraint on that signal in terms of the signal's frequency components. To be able to talk about the frequency components of x(t) we can use a continuous Fourier representation of the signal, i.e. 
the Fourier transform [2], say x'(w), a complex valued function, where w is a (continuous) real-valued frequency parameter:

x'(w) = integral from t = -inf to +inf of x(t)*e^(-2*pi*i*t*w) dt

The Fourier transform can represent any signal that is integrable and continuous (I deduce this from the invertibility of the Fourier transform [3]). One consequence of this is that any practical analog signal x(t) may be represented by its Fourier transform. The theorem expresses a constraint on the frequencies for which the Fourier transform may be non-zero. Specifically, it requires that x'(w) = 0 for all w < -N and all w > N, where N is the Nyquist frequency. Note specifically that we are dealing with the continuous Fourier transform, therefore there is no requirement for x(t) to be periodic or of finite temporal extent. The theorem also does not say anything about the time extent of the discrete time signal (it is assumed to be infinite too). That's my take on it anyway. Ross. [1] http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Nyquist%E2%80%93Shannon_sampling_theorem.html [2] http://en.wikipedia.org/wiki/Fourier_transform [3] http://en.wikipedia.org/wiki/Fourier_inversion_formula
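That integral can be sanity-checked numerically. A small sketch (my own, not Ross's; it uses the known self-transform pair, that the Gaussian e^(-pi*t^2) is its own Fourier transform, and a midpoint-rule approximation over a finite interval):

```python
import cmath
import math

def ft(x, w, t_lo=-8.0, t_hi=8.0, steps=4000):
    # midpoint-rule approximation of x'(w) = integral of x(t)*e^(-2*pi*i*t*w) dt
    dt = (t_hi - t_lo) / steps
    acc = 0j
    for k in range(steps):
        t = t_lo + (k + 0.5) * dt
        acc += x(t) * cmath.exp(-2j * math.pi * t * w) * dt
    return acc

gauss = lambda t: math.exp(-math.pi * t * t)
approx = ft(gauss, 0.5)
exact = math.exp(-math.pi * 0.5 ** 2)   # e^(-pi*t^2) is its own transform
err = abs(approx - exact)
```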
Re: [music-dsp] Nyquist-Shannon sampling theorem
Hi Doug- To address some of your general questions about Fourier analysis and its relationship to sampling theory: Broadly speaking, any reasonably well-behaved signal can be decomposed into a sum of sinusoids (actually complex exponentials but don't worry about that detail for now). There are several flavors of Fourier analysis corresponding to different classes of signals. I.e., there are variants for continuous-time signals and discrete-time signals, and also for periodic signals and general signals. For the case of periodic signals you use what's called the Fourier Series (there you add up harmonic components), for general signals you use the Fourier Transform (this uses both harmonic and inharmonic components). The key difference between discrete time and continuous time from a Fourier analysis perspective (either series or transform) is that continuous time signals can have arbitrarily high frequencies, but discrete-time signals can only admit a finite bandwidth (related to the sampling rate). So in all cases of Fourier analysis, we're decomposing the signal into sinusoids. As you have noted, sinusoids all extend off to +/- infinity. You are correct to note that this corresponds to steady-state analysis, when used, for example, in a circuit analysis context. One consequence of this is that any perfectly bandlimited signal, like in the Sampling Theorem, also has to extend to +/- infinity. The other way around is also true: any signal that only lasts a finite length of time must contain frequencies all the way up to infinity. So, strictly speaking, it is true that the conditions of the Sampling Theorem cannot ever be truly fulfilled in practice, since all practical signals are necessarily time-limited. There is *always* theoretically some level of aliasing in practice because of the time-limiting constraint. But it turns out that this doesn't present much of a difficulty in practice. 
To see why, consider another question you raised: if arbitrary signals can be decomposed into sums of sinusoids that all repeat off to infinity, how do we represent signals that are concentrated in a particular time, or have frequency components that turn off midway through, etc.? It's easy enough to prove that the Fourier transform is invertible - i.e., that all of the information in the signal is contained in the sinusoidal parameters - but where does all that time information go? It turns out that it gets folded into the relationship between the amplitudes and phases of the various sinusoids in a complicated way - the information is still there and recoverable, but it's not usually easy to discern from looking at the output of the Fourier transform. For a signal where a given frequency turns on at some particular time, the transform is only going to give you a single parameter for that frequency, corresponding to the average energy over all time. The information about it turning on and off shows up as sidelobes around the frequency in question. Likewise, the time signal already contains all the information about the sinusoid parameters, folded up in a way that's hard to see. Why does that help us with the Sampling Theorem? Well, it turns out that while perfectly bandlimited signals can't also be perfectly time-limited, they *can* still have their energy concentrated in one place in time. They don't ever go to zero and stay there, but they can die off, even quite strongly. The pertinent example here is the sinc signal, which is the ideal reconstruction filter used in the Sampling Theorem. This signal has energy concentrated at time 0, and falls off as 1/t from there. So if we truncate such a signal at some reasonable length, we can capture nearly all of its energy. And then if we take the Fourier Transform of the truncated signal, we will see that it is no longer perfectly bandlimited, but the energy in the signal outside the desired bandwidth will be very low. 
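To put a rough number on "nearly all of its energy": the total energy of the sinc is exactly 1 (by Parseval), and a quick midpoint-rule estimate (my own sketch; the +/-50-sample truncation length is an arbitrary choice) shows a truncated sinc still captures well over 99% of it:

```python
import math

def sinc(t):
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def energy(lo, hi, steps=200000):
    # midpoint-rule estimate of the integral of sinc^2 over [lo, hi]
    dt = (hi - lo) / steps
    return sum(sinc(lo + (k + 0.5) * dt) ** 2 for k in range(steps)) * dt

total = 1.0                      # integral of sinc^2 over the whole line (Parseval)
captured = energy(-50.0, 50.0)   # energy left after truncating at +/-50 samples
fraction = captured / total      # roughly 0.998
```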
We can get further control over this by using a window instead of simply truncating (or even fancier ideas), but you get the general idea. In engineering terms, it's possible to build reconstruction filters with reasonable delay and very good stopband rejection - 100dB and beyond, pretty much the useful range of human hearing. In practice it is not the finite-time constraint on stopband rejection that limits sampling performance, but rather other more arcane circuits and systems considerations. E On Wed, Mar 26, 2014 at 10:07 PM, Doug Houghton doug_hough...@sympatico.ca wrote: so is there a requirement for the signal to be periodic? or can any series of numbers be considered periodic if it is bandlimited, or infinite? Periodic is the best word I can come up with.
Re: [music-dsp] Nyquist-Shannon sampling theorem
On Mar 26, 2014, at 10:07 PM, Doug Houghton doug_hough...@sympatico.ca wrote: so is there a requirement for the signal to be periodic? or can any series of numbers be considered periodic if it is bandlimited, or infinite? Periodic is the best word I can come up with. -- Well, no--you can decompose any portion of waveform that you want...I'm not sure at this point if you're talking about the discrete Fourier Transform or continuous, but I assume discrete in this context...but it's not that generally useful to, say, do a single transform of an entire song. Sorry, I'm not sure where you're going here... Actually, yes there IS a requirement that it be periodic. Fourier theorem says that any periodic sequence can be represented as a sum of sinusoids. Sampling theory says that any band-limited _periodic_ signal can be properly sampled at the Nyquist rate. The trick is that any finite-duration signal can be thought of as one period of a periodic signal. This is part of the reason you get infinite repetitions in the frequency domain after you sample. ...sort of. As for frequencies jumping in and out, I think you were on the right track when you said that it's a Fourier theorem thing. Imagine you had a signal with one sinusoid that slowly fades in and out for the duration of the signal. Imagine that the envelope of this sinusoid is the first half of a sinusoid. The envelope can be described as a sinusoid whose period is twice the signal duration. If you were to simply take these two stationary sinusoids (the envelope and the audible tone) and multiply them, you end up with a spectrum that contains their sum and difference tones. In that way it can already be thought of as a tone that (slowly) pops in and out, but which is represented as a sum of two stationary sinusoids. If you wanted to have the tone come in and out more quickly you could add the first harmonic of a square wave (or several) to the envelope. 
For each additional harmonic you add to the envelope you get an additional two sinusoids in the spectrum of the whole signal. You can keep adding harmonics up to the Nyquist frequency. This means that your frequencies can pop in and out very quickly, but only as fast as your sampling rate allows. Rinse and repeat for additional e.g. harmonics of your audible tone, additional tones, etc. Note that you can construct any envelope you could imagine, including apparently asymmetrical ones, or ones which are approximately zero for most of the duration and pop in and out for only a small portion of it. That all comes from Fourier theorem. The important part is that each of these envelopes would be periodic in the duration of your sampled signal, as far as the sampling theorem is concerned. I hope that helps a bit -Stefan
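The envelope-times-tone decomposition Stefan describes is just the product-to-sum identity, cos(a)cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b), which is easy to confirm numerically (a sketch with arbitrary example frequencies):

```python
import math

f_env, f_tone, fs = 2.0, 440.0, 8000.0   # arbitrary example frequencies

def deviation(k):
    t = k / fs
    product = math.cos(2 * math.pi * f_env * t) * math.cos(2 * math.pi * f_tone * t)
    # the same signal as a sum of two stationary sinusoids at the
    # difference and sum frequencies
    pair = 0.5 * math.cos(2 * math.pi * (f_tone - f_env) * t) \
         + 0.5 * math.cos(2 * math.pi * (f_tone + f_env) * t)
    return abs(product - pair)

max_dev = max(deviation(k) for k in range(256))   # zero, up to rounding
```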
[music-dsp] music-dsp mailing list digest settings
Hi Douglas and others, I wonder if it would be possible to increase the number of emails contained in the digest mails we receive? We (well, at least I) only get 2 or 3 emails, which is so little that my mail box would be less messy if I hadn't subscribed to the digest version, since then I could have read the subject of each mail in the mailing list overview. I've looked into the configuration, and couldn't find a way to set this number myself. I think other digest users agree with me about this, since on other mailing lists, it's more common to get something like 10-20 mails (plus at least one mail per 24 hours if there had been any posts in that period). Best, Kjetil
Re: [music-dsp] Nyquist–Shannon sampling theorem
On 27/03/2014, Doug Houghton doug_hough...@sympatico.ca wrote: consider this from a wiki page A bandlimited signal can be fully reconstructed from its samples, provided that the sampling rate exceeds twice the maximum frequency in the bandlimited signal. Actually twice the *bandwidth*. In music the distinction isn't terribly important, because the lower limit of the bandwidth is about 20Hz; other applications may find it more useful.
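The bandwidth-versus-maximum-frequency distinction is exactly what bandpass (under)sampling exploits. A small sketch (arbitrary example numbers; a real design must also place the band cleanly between fold points): a 20.5 kHz tone sampled at only 5 kHz produces samples indistinguishable from a 500 Hz tone, so a narrowband signal up there can ride on a sample rate set by its bandwidth rather than its top frequency:

```python
import math

fs = 5000.0     # sample rate chosen from the bandwidth, far below 2*f
f = 20500.0     # a narrowband tone well above fs/2
alias = f % fs  # 500.0 Hz: where the tone lands after sampling

# The sample streams of the 20.5 kHz tone and the 500 Hz tone are identical.
max_dev = max(
    abs(math.cos(2 * math.pi * f * n / fs) - math.cos(2 * math.pi * alias * n / fs))
    for n in range(200)
)
```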
Re: [music-dsp] Nyquist-Shannon sampling theorem
On 2014-03-27, Stefan Sullivan wrote: Actually, yes there IS a requirement that it be periodic. No, there is not. And the Shannon-Nyquist theorem isn't typically proven under any such assumption. Furthermore, it generalizes to settings where periodicity isn't even an option. Granted, it *is* possible to prove the sampling theorem starting with a square integrable, continuous, periodic function. That would in fact be the most classical treatment of the subject, starting with Fourier himself, and the way he utilized his series in the context of the heat equation. But if you try to treat any general, real signals that way, you'll have to pass the period to the limit at infinity, and that then makes some of the attendant calculus unwieldy. Nowadays pretty much nobody bothers with those details, except perhaps to show the connection between the periodic-in-time, original series, and the continuous time, far easier and more generalizable transform. (The difference between the four signal-theoretically meaningful modern forms of Fourier analysis is that the Fourier transform (FT) is continuous both in time and in frequency, the original Fourier series (FS) is periodic in time and discrete in frequency, the discrete time Fourier transform (DTFT) is discrete in time and periodic in frequency, and the discrete Fourier transform (DFT) is discrete and periodic in both time and frequency. Obviously in DSP we can only compute with the all-discrete thingy in there, that being the DFT; the rest of the lot are relevant only if you have to analyze mixed mode systems, like delta-sigma converters, radar pulse compression schemes and whatnot. All of the domains can be related to each other in a regular fashion (a lattice of linear homomorphisms), using certain intuitively sensible limit arguments, but only the DTFT and FS admit a full isomorphism between them. 
However the topological properties of the function spaces FT and DFT work on, and in the case of FT in particular the measure theoretic properties, are rather different from the middle pair.) Fourier theorem says that any periodic sequence can be represented as a sum of sinusoids. Which Fourier theorem? There are tons and tons of different Fourier theorems in the modern mathematical literature, all of them subtly and sometimes not so subtly different. The original one had to do with decomposing (many but by no means all) periodic (dis?)continuous real functions into a certain kind of an (in?)finite series of sinusoids, under a certain kind of a convergence criterion. But then pretty much every part of that sentence has since been mauled, reinterpreted, twisted, übergeneralized, and thoroughly fucked over, and usually in more than one direction at the same time. That's why harmonic analysis is to this date a lively -- and exceedingly finicky -- subspecialty of the mathematical science. One does not simply go into the Fourier domain. Sampling theory says that any band-limited _periodic_ signal can be properly sampled at the Nyquist rate. No. Absolutely not. It says any bandlimited signal can be fully represented by its equidistant samples, at a rate twice that of the highest sinusoid present. That in fact is the essence and the full shock of the theorem as originally presented and proven: bandlimitation in fully continuous R-R function terms can be translated into discreteness in time. That's the very thing which makes DSP possible in the first place, so everybody here still ought to understand how revolutionary, shocking, counterintuitive and far-reaching that idea really is. 
What it says is that under no realistic, physical constraint at all, any and every continuous time signal, be it periodic or not, can be losslessly, linearly represented by its equidistant samples, and since that is so, processed by discrete time circuitry to any given accuracy simply given the sampled representation. That you can go to such discreteness was no surprise with periodic signals. Not for a hundred years or so. The precise reason why we invoke the theorem, and laud its inventors by name as Shannon and Nyquist, *is* that the *continuous* time, FT-derived, *aperiodic* version used to be so counter-intuitive. Yet it went through, obviously carried shocking implications from the start, and now we're suddenly here as just one casual minded such implication. Never forget. The trick is that any finite-duration signal can be thought of as one period of a periodic signal. No, it can not. If you stretch a single cycle of cosine to eternity, it won't be square integrable. There's no real way to really handle that basic case in a systematic fashion, either. This is part of the reason you get infinite repetitions in the frequency domain after you sample. ...sort of. That's the DTFT, which you can sorta relate to the FT, using modulation by a train of delta functionals. But in order to do that for real, you
Re: [music-dsp] music-dsp mailing list digest settings
The digest doesn't go by number of messages, it goes by size. It was set at 10k to trigger a new digest, I just upped it to 10k. Let me know how that feels. best, douglas On Thu, Mar 27, 2014 at 5:12 AM, Kjetil Matheussen k.s.matheus...@gmail.com wrote: Hi Douglas and others, I wonder if it would be possible to increase the number of emails contained in the digest mails we receive? We (well, at least I) only get 2 or 3 emails, which is so little that my mail box would be less messy if I hadn't subscribed to the digest version, since then I could have read the subject of each mail in the mailing list overview. I've looked into the configuration, and couldn't find a way to set this number myself. I think other digest users agree with me about this, since on other mailing lists, it's more common to get something like 10-20 mails (plus at least one mail per 24 hours if there had been any posts in that period). Best, Kjetil
Re: [music-dsp] Best way to do sine hard sync?
From my perhaps less-than-perfect reading of this method, it sounds much like the Casio CZ synthesizer's resonance waveforms. If you're among the select few who've actually downloaded the alpha of my functional synthesis patching language, Moselle, you will find a module called Cazanova that does a superset of CZ waveforms (albeit with FM, not PM, making it, to my reading of their patent, not an infringement). See illustration: http://moselle.invisionzone.com/index.php?/gallery/image/16-untitled-4/ The beauty of the CZ waveforms is that they made an interesting tradeoff: accepting bucket-loads of DC in exchange for some really simple but powerful waveforms. They had the option (which I emulate) of having a single oscillator switch between two waveforms. In the illustration here, the first half is a sine wave FM'd to be morphing into a sawtooth. That's not actually pertinent to the discussion, but the right half is the same sinewave, suddenly switched to a much higher frequency and windowed (I think that's the term--I mean multiplied mathematically) by a triangle wave. Both CZ and Cazanova give three windowing functions: this triangle, a trapezoid, and (I think an exact match for what OP is describing) a sawtooth. If I'm correct that this is what you're doing, then I'd say the sound is quite different from hard sync, but that's not to say it's bad at all. In fact, in Moselle, there are several demo patches that use Cazanova with no further processing. I'd say it's a bit like a resonant filter sweep so clean you know it's digital.
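For the curious, here's a rough sketch of the windowed-sine idea described above (a toy illustration only, not the actual CZ or Moselle/Cazanova algorithm; the 8x frequency multiple and the sawtooth window are arbitrary choices):

```python
import math

def windowed_sine(phase, mult=8.0):
    # One period of a resonance-style waveform: a sine running at `mult`
    # times the base frequency, amplitude-windowed by a falling sawtooth.
    window = 1.0 - phase                       # sawtooth window, 1 -> 0 over the period
    return window * math.sin(2 * math.pi * mult * phase)

n = 512
wave = [windowed_sine(i / n) for i in range(n)]   # one period, phase in [0, 1)
peak = max(abs(v) for v in wave)
```

Because the window restarts every period, the waveform stays exactly periodic at the base frequency no matter what `mult` is, which is what gives the clean, sync-like "resonance" character.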
Re: [music-dsp] music-dsp mailing list digest settings
10k 10k ? :-) On 3/27/14 10:51 AM, Douglas Repetto wrote: The digest doesn't go by number of messages, it goes by size. It was set at 10k to trigger a new digest, I just upped it to 10k. Let me know how that feels. best, douglas On Thu, Mar 27, 2014 at 5:12 AM, Kjetil Matheussen k.s.matheus...@gmail.com wrote: Hi Douglas and others, I wonder if it would be possible to increase the number of emails contained in the digest mails we receive? We (well, at least I) only get 2 or 3 emails, which is so little that my mail box would be less messy if I hadn't subscribed to the digest version, since then I could have read the subject of each mail in the mailing list overview. I've looked into the configuration, and couldn't find a way to set this number myself. I think other digest users agree with me about this, since on other mailing lists, it's more common to get something like 10-20 mails (plus at least one mail per 24 hours if there had been any posts in that period). Best, Kjetil -- r b-j r...@audioimagination.com Imagination is more important than knowledge.
Re: [music-dsp] Nyquist–Shannon sampling theorem
On 2014-03-27, gwenhwyfaer wrote: In music the distinction isn't terribly important, because the lower limit of the bandwidth is about 20Hz; other applications may find it more useful. Except for the 1812 Overture. That sinks rather near DC at substantial amplitude, given the live cannon in the percussion section. Of course some bright fella then also went ahead and invented a speaker coupled right down to static pressure, presumably just to piss the rest of us off: https://en.wikipedia.org/wiki/Rotary_woofer . I should also go kill myself just about now. -- Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
Re: [music-dsp] music-dsp mailing list digest settings
That setting is a float, so who knows what == will return... (I meant 100k.) .d On Thu, Mar 27, 2014 at 1:08 PM, robert bristow-johnson r...@audioimagination.com wrote: 10k 10k ? :-) On 3/27/14 10:51 AM, Douglas Repetto wrote: The digest doesn't go by number of messages, it goes by size. It was set at 10k to trigger a new digest, I just upped it to 10k. Let me know how that feels. best, douglas On Thu, Mar 27, 2014 at 5:12 AM, Kjetil Matheussen k.s.matheus...@gmail.com wrote: Hi Douglas and others, I wonder if it would be possible to increase the number of emails contained in the digest mails we receive? We (well, at least I) only get 2 or 3 emails, which is so little that my mail box would be less messy if I didn't had subscribed to the digest version, since then I could have read the subject of each mail in the mail list overview. I've looked into the configuration, and couldn't find a way to set this number myself. I think other digest users agree with me about this, since on other mailing lists, it's more common to get something like 10-20 mails (plus at least one mail per 24 hours if there had been any posts in that period). Best, Kjetil -- dupswapdrop -- the music-dsp mailing list and website: subscription info, FAQ, source code archive, list archive, book reviews, dsp links http://music.columbia.edu/cmc/music-dsp http://music.columbia.edu/mailman/listinfo/music-dsp -- dupswapdrop -- the music-dsp mailing list and website: subscription info, FAQ, source code archive, list archive, book reviews, dsp links http://music.columbia.edu/cmc/music-dsp http://music.columbia.edu/mailman/listinfo/music-dsp -- r b-j r...@audioimagination.com Imagination is more important than knowledge. 
Re: [music-dsp] Nyquist-Shannon sampling theorem
Hi Doug- Regarding this: Terms like well behaved when applied to the function make me wonder what stipulations might be implied by the language that you'd have to be a formal mathematician to interpret. As an example, I don't even know what the intrinsic properties of a function may be in this context. It turns out to be mostly math details that don't really come up in practice, as somebody (Sampo? Robert?) already mentioned. You have to avoid stuff where the signal blows up to infinity, or has badly-behaved discontinuities or things like that. But in practice, there are no realistic signals that display these issues. The only part where practical signals are problematic is that they are necessarily time-limited, and so cannot be perfectly band-limited. So the Sampling Theorem conditions can't be exactly fulfilled, and we have to live with some (hopefully extremely small) aliasing as a result. But, again, this turns out to be a quite minor issue compared to various other practical concerns that come up in designing A/D/A converters. And this one: If it was just a bunch of random numbers that started somewhere and stopped somewhere, I doubt anyone would be writing equations that mean anything. I'd guess we would turn to statistics at that point to supply some context. Fourier analysis also works on random signals. But usually in that case we are less interested in the Fourier Transform of the random signals directly, and look more at the Fourier Transform of their correlation functions (this is called the power spectrum). That quantity is generally more useful for your usual engineering stuff like filter design, system analysis, etc. If you go to get a graduate degree in signal processing, the core first-year courses typically include what's called statistical signal processing, which as the name suggests covers signal processing issues in the context of random signals. 
This is an important, interesting, and worthwhile subject, but also maybe getting a bit afield from the Sampling Theorem issues you are most immediately interested in? E On Thu, Mar 27, 2014 at 11:20 AM, Doug Houghton doug_hough...@sympatico.ca wrote: Some great replies, gives me a lot to think about Terms like well behaved when applied to the function make me wonder what stipulations might be implied by the language that you'd have to be a formal mathematician to interpret. As an example, I don't even know what the intrinsic properties of a function may be in this context. Since it's an infinite series I suppose it doesn't really matter, given enough time you could prove out any rational requirement? which is why you can throw math at it. If it was just a bunch of random numbers that started somewhere and stopped somewhere, I doubt anyone would be writing equations that mean anything. I'd guess we would turn to statistics at that point to supply some context. As a broad answer to questions posted in a couple of the replies, my interest lies in improving my understanding of specifically what the SNST proves, and the requirements for it to be valid.
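Ethan's point about looking at the Fourier transform of the correlation function rather than of the random signal itself can be sketched numerically. The snippet below is an editor's illustrative sketch in Python/NumPy (the one-pole filter and all names are my own choices, not anything from the thread): it averages the magnitude-squared FFT over many noise realizations, which estimates the power spectrum, and compares that against the deterministic |H(f)|^2 of the filter that shaped the noise.

```python
import numpy as np

# Estimate the power spectrum of a random signal by averaging periodograms
# (magnitude-squared FFTs) over many realizations: the expected periodogram
# of filtered unit-variance white noise is |H(f)|^2.
rng = np.random.default_rng(0)
N, trials, a = 2048, 100, 0.9          # one-pole lowpass: y[n] = a*y[n-1] + x[n]

acc = np.zeros(N // 2 + 1)
for _ in range(trials):
    x = rng.standard_normal(N)         # white noise, unit variance
    y = np.zeros(N)
    y[0] = x[0]
    for i in range(1, N):
        y[i] = a * y[i - 1] + x[i]
    acc += np.abs(np.fft.rfft(y)) ** 2
psd = acc / (trials * N)               # averaged periodogram

# The deterministic shape it should settle onto: |1/(1 - a e^{-jw})|^2
w = 2 * np.pi * np.arange(N // 2 + 1) / N
theory = 1.0 / np.abs(1.0 - a * np.exp(-1j * w)) ** 2
```

Any single realization's FFT is wildly noisy and has random phase; only the averaged magnitude-square settles down to something deterministic, which is why the power spectrum, not the raw transform, is the useful object for random signals.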
Re: [music-dsp] Nyquist–Shannon sampling theorem
Yeah, this is sometimes called bandpass sampling or undersampling (http://en.wikipedia.org/wiki/Undersampling) and is commonplace in contexts like RF communications. But it can also come up in audio applications, for example critically sampled filter banks. I.e., say you split a signal into a lowpass component and a highpass component, each with half the bandwidth of the original signal. Then you can downsample each of those components by 2 without losing anything - and also without applying any kind of modulation to shift the highpass component down to baseband. You just use a bandpass reconstruction filter at the end to recover the highpass component, instead of the traditional lowpass reconstruction filter. Analyzing this variation of the usual sampling approach is a good exercise for testing your understanding of the Sampling Theorem. E On Thu, Mar 27, 2014 at 3:41 AM, gwenhwyfaer gwenhwyf...@gmail.com wrote: On 27/03/2014, Doug Houghton doug_hough...@sympatico.ca wrote: consider this from a wiki page A bandlimited signal can be fully reconstructed from its samples, provided that the sampling rate exceeds twice the maximum frequency in the bandlimited signal. Actually twice the *bandwidth*. In music the distinction isn't terribly important, because the lower limit of the bandwidth is about 20Hz; other applications may find it more useful.
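A minimal numeric sketch of the undersampling idea above (editor's illustration in Python/NumPy; the specific frequencies are my own choices): a tone confined to the upper half-band survives decimation by two, landing at a mirror frequency that maps back uniquely once you know which band it came from.

```python
import numpy as np

# A 350 Hz tone sampled at 1 kHz lives in the upper half-band [250, 500) Hz.
# Decimating by 2 (deliberately with NO lowpass filter) folds it about the
# new 250 Hz Nyquist: 350 Hz is 100 Hz above it, so it lands at 150 Hz.
fs, f0, N = 1000.0, 350.0, 4096
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n / fs)

y = x[::2]                 # keep every other sample; new rate fs2 = 500 Hz
fs2 = fs / 2

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
peak_hz = np.argmax(spec) * fs2 / len(y)   # frequency of the strongest bin
```

Since we know the component came from [250, 500) Hz, the 150 Hz alias maps back uniquely to 500 - 150 = 350 Hz; that is exactly what the bandpass reconstruction filter exploits. Nothing was lost, only relabeled.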
Re: [music-dsp] Nyquist-Shannon sampling theorem
On 3/27/14 2:20 PM, Doug Houghton wrote: Some great replies, gives me a lot to think about Terms like well behaved when applied to the function make me wonder what stipulations might be implied by the language that you'd have to be a formal mathematician to interpret. i'm not so terribly worried about the existence of audio or music signals that are not sufficiently well behaved As an example, I don't even know what the intrinsic properties of a function may be in this context. the context is audio and music signals. the end receptacle of these signals is our ears and brains. i'm pretty sure that bandwidth restrictions apply. that *really* nails down the well-behaved. they are continuous-time, finite-power signals that are also bandlimited. whether it's bandlimited to 22.05 kHz or to 48 kHz or 96 kHz doesn't matter. that is a quantitative issue. doesn't change the validity nor the qualitative conditions for the theorem. Since it's an infinite series I suppose it doesn't really matter, given enough time you could prove out any rational requirement? which is why you can throw math at it. yup. and then, when you get practical about reconstruction, you realize that the infinite series of sinc() functions will turn into a finite approximation to the same thing. one approach, to get a finite sum, is to truncate the sinc() function to a finite length. that is the same as applying a rectangular window (which is often the worst kind), so then you try the sinc() function windowed by a good window function. now that is a slightly different low-pass filter than the ideal brick-wall filter (which has a sinc() function for its impulse response). so then you investigate how bad it is from different points of view (usually a spectral POV over some frequencies of interest). If it was just a bunch of random numbers that started somewhere and stopped somewhere, I doubt anyone would be writing equations that mean anything. ??? 
um, you can model *any* linear and time-invariant signal reconstruction problem (or interpolation problem) as a specific case of a string of impulses weighted with the sample values, x[n], going into a particular low-pass filter. you can write equations for that. in both the time domain (you would use these equations to implement the interpolation) and in the frequency domain: [what does this LPF do to the baseband signal? what does it do to the images?] so, even for polynomial interpolation (like Lagrange or Hermite), you can model it as a convolution with an impulse response and you can compute the Fourier transform of that continuous-time impulse response and see how good or how bad the frequency response is. how well does it kill the images and how safe is it to your original signal? you can write equations for that, and they mean something. I'd guess we would turn to statistics at that point to supply some context. but you can make some good guesses instead of doing this as a complicated statistical process As a broad answer to questions posted in a couple of the replies, my interest lies in improving my understanding of specifically what the SNST proves, and the requirements for it to be valid. take a look at that earlier wikipedia version; it'll show you. if you ideally uniformly sample, your spectrum is repeatedly shifted (by integer multiples of the sampling frequency; these are called images) and added together. to recover the original signal, you must remove all of the images, yet preserve the original (that's what the brick-wall LPF does). Only if there is no overlap of the adjacent images is it possible to recover the original spectrum. To make that happen, the sampling frequency must exceed twice the frequency of the highest frequency component of a bandlimited signal. if it does not do that, images will overlap, and once you add two numbers, it's pretty hard to separate them. 
so the overlapped images can be thought of as non-overlapped frequency components that just happened to be at those frequencies. those frequency components are called aliases. take a look at https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=217945915 the sampling theorem is actually quite simple to express rigorously. it is, at least, if you accept the EE notion of the Dirac delta function and not worry so much about it not really being a function, which is literally what the math folks tell us. -- r b-j r...@audioimagination.com Imagination is more important than knowledge.
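Robert's remark about truncating versus windowing the sinc can be made concrete. This is an editor's sketch in Python/NumPy (the filter length, cutoff, and the Blackman window are my illustrative choices, with Blackman standing in for "a good window function"): it compares the stopband of a half-band lowpass built both ways.

```python
import numpy as np

# Half-band lowpass (cutoff 0.25 cycles/sample) from a truncated ideal sinc,
# with and without a good window.  Plain truncation = rectangular window.
L = 101
n = np.arange(L) - (L - 1) // 2
h_rect = 0.5 * np.sinc(0.5 * n)        # ideal impulse response, just chopped
h_blk = h_rect * np.blackman(L)        # same sinc, tapered by a Blackman window

def stopband_peak_db(h, f_edge=0.32, nfft=8192):
    """Worst-case stopband gain (dB relative to DC) above f_edge cycles/sample."""
    H = np.abs(np.fft.rfft(h, nfft))
    f = np.arange(len(H)) / nfft
    return 20 * np.log10(H[f >= f_edge].max() / H[0])

db_rect = stopband_peak_db(h_rect)     # rectangular window: poor rejection
db_blk = stopband_peak_db(h_blk)       # Blackman window: tens of dB better
```

The windowed version trades a wider transition band for far deeper stopband rejection; that is the "how bad is it, from a spectral point of view" investigation described above.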
Re: [music-dsp] Nyquist-Shannon sampling theorem
On 3/27/14 4:05 PM, Ethan Duni wrote: Hi Doug- Regarding this: Terms like well behaved when applied to the function make me wonder what stipulations might be implied by the language that you'd have to be a formal mathematician to interpret. As an example, I don't even know what the intrinsic properties of a function may be in this context. It turns out to be mostly math details that don't really come up in practice, as somebody (Sampo? Robert?) already mentioned. You have to avoid stuff where the signal blows up to infinity, or has badly-behaved discontinuities or things like that. you only have to worry about discontinuities regarding the signal x(t) itself. but if it's an audio/music signal that is bandlimited to some finite frequency, i think the signal is, by any definition, sufficiently well behaved. of course there are huge discontinuities in the impulse train that is the sampling function (or Dirac comb), but we have math to deal with that. but it requires accepting T * SUM{n = -inf to +inf} delta(t - nT) = SUM{k = -inf to +inf} e^(i 2 pi k t / T) But in practice, there are no realistic signals that display these issues. certainly not that are finite power and bandlimited. The only part where practical signals are problematic is that they are necessarily time-limited, and so cannot be perfectly band-limited. but we can get very, very close. leave your audio device turned on for an hour or two. how bandlimited can you make that? So the Sampling Theorem conditions can't be exactly fulfilled, and we have to live with some (hopefully extremely small) aliasing as a result. yes. but we can write equations that tell us an upper limit to the energy in those aliases. But, again, this turns out to be a quite minor issue compared to various other practical concerns that come up in designing A/D/A converters. right, but it becomes about the only issue (or a main tradeoff issue) in software or totally digital interpolation or sample-rate-conversion processes. 
we do that basically by imagining what we would be doing with a D/A converter running at one Fs connected to an A/D running at a different Fs. so, we *reconstruct*, using Shannon-Nyquist, from the samples that were obtained at the source Fs, the continuous-time signal at instants of time that are the sampling instants of the destination Fs. And this one: If it was just a bunch of random numbers that started somewhere and stopped somewhere, I doubt anyone would be writing equations that mean anything. I'd guess we would turn to statistics at that point to supply some context. Fourier analysis also works on random signals. But usually in that case we are less interested in the Fourier Transform of the random signals directly, and look more at the Fourier Transform of their correlation functions (this is called the power spectrum). which is the ensemble magnitude-square of the Fourier Transform of the random signals themselves. so the Fourier analysis we do on random signals really doesn't know anything about the phase of their spectrums. their spectrum magnitudes may still be deterministic, but their phases are totally random. That quantity is generally more useful for your usual engineering stuff like filter design, system analysis, etc. If you go to get a graduate degree in signal processing, the core first-year courses are typically what's called statistical signal processing, which as the name suggests covers signal processing issues in the context of random signals. but, we can make some approximations and get good results without getting too deep in the weeds. This is an important, interesting, and worthwhile subject, but also maybe getting a bit afield from the Sampling Theorem issues you are most immediately interested in? -- r b-j r...@audioimagination.com Imagination is more important than knowledge. 
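The conceptual D/A-at-one-rate, A/D-at-another picture Robert describes can be sketched directly: reconstruct with a (windowed) sinc sum and evaluate it at the destination rate's sampling instants. This is an editor's sketch in Python/NumPy, not a production resampler; the rates, test tone, window length, and all names are my own illustrative choices.

```python
import numpy as np

def resample(x, fs1, fs2, t_start, t_end, hw=30):
    """Evaluate a Hann-tapered-sinc reconstruction of x (sampled at fs1)
    at the sampling instants of rate fs2, between t_start and t_end seconds."""
    t = np.arange(t_start, t_end, 1.0 / fs2)
    y = np.empty(len(t))
    for i, ti in enumerate(t):
        n0 = int(round(ti * fs1))
        ks = np.arange(n0 - hw, n0 + hw + 1)       # 2*hw+1 nearest input samples
        u = ti * fs1 - ks                          # distance in input samples
        taper = 0.5 + 0.5 * np.cos(np.pi * u / (hw + 1))   # Hann taper
        y[i] = np.dot(x[ks], np.sinc(u) * taper)   # windowed-sinc sum
    return t, y

fs1, fs2, f0 = 100.0, 73.0, 5.0                    # deliberately non-integer ratio
n = np.arange(400)
x = np.sin(2 * np.pi * f0 * n / fs1)
t, y = resample(x, fs1, fs2, 1.0, 3.0)             # interior span, away from edges
err = np.max(np.abs(y - np.sin(2 * np.pi * f0 * t)))
```

The recovered samples match the underlying sinusoid to within the small error of the truncated, windowed kernel; shrinking hw or choosing a worse window makes err grow, which is exactly the truncation trade-off discussed earlier in the thread.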
Re: [music-dsp] Nyquist-Shannon sampling theorem
it is, at least, if you accept the EE notion of the Dirac delta function and not worry so much about it not really being a function, which is literally what the math folks tell us. I may be misremembering, but can't non-standard analysis be used to make that whole Dirac delta function approach rigorous? I know that it works for the whole algebraic manipulation of delta-x terms that we also like to do in engineering classes; intuitively it seems like we could play the same trick with Dirac deltas and associated stuff. But I don't recall whether it actually works out entirely... although Wikipedia suggests that maybe it does ( http://en.wikipedia.org/wiki/Dirac_delta_function#Infinitesimal_delta_functions). Not that it's worth the trouble to really work out - we already know what the correct answers are from measure theory/distributions - but it's nice to keep in mind that these pedantic math complaints are actually kind of baseless, at least if some care is taken to adhere to the rules of non-standard analysis and so avoid various pitfalls. E
Re: [music-dsp] Nyquist-Shannon sampling theorem
In the time when Einstein started to work on his theories, the main hip and profound mathematics of the day came to be a consequence of the important physics problems of the time, and mostly (if I'm not forgetting some other factors) they were the higher maths, formulated as functional integrals. That's hard to explain in a way that achieves actual understanding; it's easier for people with at least an undergrad level in Mechanical Engineering or better. Apart from catching some small fish by applying the dynamic of "find an error in..." and such exercises, it isn't very worthwhile to try to grasp the mind of Fourier or something; really, seriously, it's usually pretty futile. I'm glad to see some influence of my repeated mention of some of my theoretical concerns leads to thoughts getting formulated, and more, with quite some precision being present. robert bristow-johnson wrote: On 3/27/14 5:27 PM, Ethan Duni wrote: it is, at least, if you accept the EE notion of the Dirac delta function and not worry so much about it not really being a function, which is literally what the math folks tell us. I may be misremembering, but can't non-standard analysis be used to make that whole Dirac delta function approach rigorous? i dunno what non-standard analysis you mean. the only truly rigorous usage of the Dirac delta is to keep it clothed with a surrounding integral. so naked Dirac deltas are a no-no. ... That, like what some others have phrased/quoted, sounds good. I had the good fortune when I graduated to have had a nice subject with influence from university and some commercial lab (HP) influence, and can't help thinking, when some people feel properly in place and motivated to influence the world of science and connected interested people, that there would be very, very many interesting subjects possible to think about and work on, without having to go rough on the lower regions of theory. T. 
Re: [music-dsp] Nyquist-Shannon sampling theorem
On 3/27/14 10:58 PM, Theo Verelst wrote: I'm glad to see some influence of my repeated mention of some of my theoretical concerns leads to thoughts getting formulated, and more, with quite some precision being present. well, Theo, i've been thinking (and writing: http://www.aes.org/e-lib/browse.cfm?elib=5122 ) about the sampling theorem and reconstruction issues for longer than this mailing list has existed. robert bristow-johnson wrote: On 3/27/14 5:27 PM, Ethan Duni wrote: it is, at least, if you accept the EE notion of the Dirac delta function and not worry so much about it not really being a function, which is literally what the math folks tell us. I may be misremembering, but can't non-standard analysis be used to make that whole Dirac delta function approach rigorous? i dunno what non-standard analysis you mean. the only truly rigorous usage of the Dirac delta is to keep it clothed with a surrounding integral. so naked Dirac deltas are a no-no. ... That, like what some others have phrased/quoted, sounds good. but it's *convenient* to be able to express and make use of naked delta functions. i *want* to be able to say: T * SUM{n = -inf to +inf} delta(t - nT) = SUM{k = -inf to +inf} e^(i 2 pi k t / T) and the summation on the left is a bunch of naked delta functions. no integral surrounding them (at least not until later). I had the good fortune when I graduated to have had a nice subject with influence from university and some commercial lab (HP) influence, and can't help thinking, when some people feel properly in place and motivated to influence the world of science and connected interested people, that there would be very, very many interesting subjects possible to think about and work on, without having to go rough on the lower regions of theory. 
prognosticating about whether the Dirac delta is a function or not is less useful to me than just moving past that and treating it as if it were a function defined by the limit of some nascent delta function (which is not the way the math guys do it). it still has all the properties i need from it. -- r b-j r...@audioimagination.com Imagination is more important than knowledge.
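Robert's naked-comb identity can at least be eyeballed numerically: the truncated right-hand side is the Dirichlet kernel, which piles up into ever-taller spikes at t = nT as terms are added. An editor's sketch in Python/NumPy (K, the grid, and the names are my own choices):

```python
import numpy as np

# Partial sum of SUM_k e^(i 2 pi k t / T) for |k| <= K: the Dirichlet kernel.
# It equals 2K+1 exactly at every t = nT and stays O(1) in between -- the
# shape that converges (as a distribution) to T times the Dirac comb.
T, K = 1.0, 50
t = np.linspace(-1.5, 1.5, 3001)
ks = np.arange(-K, K + 1)
D = np.exp(2j * np.pi * np.outer(ks, t) / T).sum(axis=0).real

spike0 = D[np.argmin(np.abs(t))]                    # value at t = 0
spike1 = D[np.argmin(np.abs(t - 1.0))]              # value at t = T
between = np.abs(D[np.abs(t - 0.5) < 0.05]).max()   # midway between spikes
```

Doubling K doubles the spike height while the in-between wiggles stay bounded, so against any smooth test function the partial sums behave more and more like T times the comb; that is the clothed-in-an-integral reading made visible.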
Re: [music-dsp] Nyquist-Shannon sampling theorem
Hi Robert- i dunno what non-standard analysis you mean. I'm referring to the stuff based on hyperreal numbers: http://en.wikipedia.org/wiki/Hyperreal_number These are an extension of the extended real numbers, where each hyperreal number has a standard part (which is an extended real) and an infinitesimal part (which corresponds to a convergence rate). The basic idea is that each hyperreal number represents an equivalence class of functions which converge (in the extended reals, so converging to infinity is allowed) to the same limit at the same rate. The limit is given by the standard part of the number, and the convergence rate by the infinitesimal part. So you can make sense of statements like 0/0 or infinity - infinity in this context, by comparing the infinitesimal parts. I.e., the usual epsilon-delta limit approach from standard analysis is embedded into the arithmetic of the hyperreals. So using this approach you can rigorously do the kinds of sloppy algebraic manipulations of dx-type terms that we often see in introductory calculus classes, for one example. the only truly rigorous usage of the Dirac delta is to keep it clothed with a surrounding integral. That's true, but a Dirac delta in the context of non-standard analysis isn't naked - it comes clothed with an associated limiting process given by the infinitesimal part. I.e., consider a sequence of functions that converges to a Dirac delta, as is used in the standard approach (there's the boxcar example you've already given, or you can use a two-sided exponential decay, or a Gaussian distribution with variance shrinking to zero, or any number of other things). For any such sequence, there is an associated hyperreal Dirac delta, which expresses all of the relevant analytic properties of that class of sequences - the fact that it tends to zero everywhere except the origin and blows up there, and also the rates at which each point converges. 
Using that, we should be able to do the usual non-rigorous algebraic manipulations used in undergrad engineering proofs, but make them rigorous (with a bit of care - you have to work out what effects the non-standard versions of various operations have, take the standard part at appropriate places to get back to the final answer, etc.). Anyway the whole thing is a bit of a curiosity. It's generally easier to just do the proofs the standard way if you're really interested, and just use the regular sloppy approach if you aren't. But still kind of neat I think, that the fake way can actually be made rigorous by embedding the relevant analytic framework into an extended number system. E On Thu, Mar 27, 2014 at 7:17 PM, robert bristow-johnson r...@audioimagination.com wrote: On 3/27/14 5:27 PM, Ethan Duni wrote: it is, at least, if you accept the EE notion of the Dirac delta function and not worry so much about it not really being a function, which is literally what the math folks tell us. I may be misremembering, but can't non-standard analysis be used to make that whole Dirac delta function approach rigorous? i dunno what non-standard analysis you mean. the only truly rigorous usage of the Dirac delta is to keep it clothed with a surrounding integral. so naked Dirac deltas are a no-no. then we can't really have a notion of a Dirac comb function either. I know that it works for the whole algebraic manipulation of delta-x terms that we also like to do in engineering classes; intuitively it seems like we could play the same trick with Dirac deltas and associated stuff. But I don't recall whether it actually works out entirely... although Wikipedia suggests that maybe it does ( http://en.wikipedia.org/wiki/Dirac_delta_function#Infinitesimal_delta_functions). 
Not that it's worth the trouble to really work out - we already know what the correct answers are from measure theory/distributions - but it's nice to keep in mind that these pedantic math complaints are actually kind of baseless, at least if some care is taken to adhere to the rules of non-standard analysis and so avoid various pitfalls. i just treat the Dirac delta in time as if it has a Planck Time (10^(-43) second) width. then it's a true function and it still has, to within an immeasurable degree of accuracy, the same properties that i want. L8r, -- r b-j r...@audioimagination.com Imagination is more important than knowledge.
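Robert's give-it-a-tiny-width move is easy to check numerically: a unit-area boxcar of width w sifts out f(0) with an error that shrinks like w^2, so at anything remotely near Planck scale the difference from a "true" delta is unmeasurable. An editor's sketch in Python/NumPy (the test function and widths are my own choices):

```python
import numpy as np

def f(t):
    """An arbitrary smooth test function."""
    return np.exp(-t) * np.cos(3 * t)

def boxcar_sift(width, samples=20001):
    """Integrate (unit-area boxcar of the given width) * f, trapezoid rule."""
    t = np.linspace(-width / 2, width / 2, samples)
    g = f(t) / width                    # boxcar height 1/width -> unit area
    dt = t[1] - t[0]
    return np.sum((g[:-1] + g[1:]) * 0.5) * dt

# The sifting property delta * f -> f(0), approached as the width shrinks:
errs = [abs(boxcar_sift(w) - f(0.0)) for w in (1e-1, 1e-3, 1e-5)]
```

Each hundredfold narrowing of the boxcar knocks roughly four orders of magnitude off the error, so a 10^(-43)-second-wide "delta" is, for any audio purpose, indistinguishable from the distribution-theory object.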