On Thu, Mar 19, 2020 at 8:11 AM Dario Sanfilippo
wrote:
>
> I believe that the time complexity of FFT is O(nlog(n)); would you perhaps
> have a list or reference to a paper that shows the time complexity of
> common DSP systems such as a 1-pole filter?
>
The complexity depends on the topology.
On Tue, Mar 10, 2020 at 1:05 PM Richard Dobson wrote:
>
> Our ICMC paper can be found here, along with a few beguiling sound
> examples:
>
> http://dream.cs.bath.ac.uk/SDFT/
So this is pretty cool stuff. I can't say I've digested the whole idea yet,
but I had a couple of obvious questions.
In
dio if you want to hear whether or not there is quantization noise from
> this FFT EQ (from changing the coefficients, etc).
>
>
> cheers,
> Eric Z
> https://www.github.com/kardashevian
>
> On Fri, Mar 13, 2020 at 6:18 PM Ethan Duni wrote:
>
>> On Thu, Mar 12, 2020 at
On Thu, Mar 12, 2020 at 9:35 PM robert bristow-johnson <
r...@audioimagination.com> wrote:
> i am not always persuaded that the analysis window is preserved in the
> frequency-domain modification operation.
It definitely is *not* preserved under modification, generally.
The Perfect
Hi Robert
On Wed, Mar 11, 2020 at 4:19 PM robert bristow-johnson <
r...@audioimagination.com> wrote:
>
> i don't think it's too generic for "STFT processing". step #4 is pretty
> generic.
>
I think the part that chafes my intuition is more that the windows in steps
#2 and #6 should "match" in
On Tue, Mar 10, 2020 at 8:36 AM Spencer Russell wrote:
>
> The point I'm making here is that overlap-add fast FIR is a special case
> of STFT-domain multiplication and resynthesis. I'm defining the standard
> STFT pipeline here as:
>
> 1. slice your signal into frames
> 2. pointwise-multiply an
> On Mar 10, 2020, at 3:38 AM, Richard Dobson wrote:
>
> You can have windows when hop size is 1 sample (as used in the sliding phase
> vocoder (SPV) proposed by Andy Moorer exactly 20 years ago, and the focus of
> a research project I was part of around 2007). So long as the window is based
It is certainly possible to combine STFT with fast convolution in various ways.
But doing so imposes significant overhead costs and constrains the overall
design in strong ways.
For example, this approach:
> On Mar 9, 2020, at 7:16 AM, Spencer Russell wrote:
>
>
> if you have a KxN STFT
On Sun, Mar 8, 2020 at 8:02 PM Spencer Russell wrote:
> In fact, the standard STFT analysis/synthesis pipeline is the same
> thing as overlap-add "fast convolution" if you:
>
> 1. Use a rectangular window with a length equal to your hop size
> 2. zero-pad each input frame by the length of
>
> If the system is suitably designed (e.g. correct window and overlap),
> you can filter using an FFT and get identical results to a time domain
> FIR filter (up-to rounding/precision limits, of course). The
> appropriate window and overlap process will cause all circular
> convolution
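The equivalence being described is easy to verify numerically: with a rectangular window equal to the hop size and enough zero-padding, STFT-domain multiplication plus overlap-add reproduces the time-domain FIR output to rounding precision. A minimal sketch (numpy; the signal, tap count, and hop are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(32)              # FIR taps

hop = 128                                # rectangular window length = hop size
nfft = 256                               # >= hop + len(h) - 1, so no circular wrap

H = np.fft.rfft(h, nfft)
y = np.zeros(len(x) + len(h) - 1)
for start in range(0, len(x), hop):
    frame = x[start:start + hop]
    Y = np.fft.rfft(frame, nfft) * H                      # pointwise multiply per frame
    seg = np.fft.irfft(Y, nfft)[:len(frame) + len(h) - 1] # linear conv of this frame
    y[start:start + len(seg)] += seg                      # overlap-add the filter tails

ref = np.convolve(x, h)                  # direct time-domain FIR for comparison
```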
or
> DCT type IV which is ubiquitous in audio codecs.
>
>> On Sun, Mar 8, 2020, 7:41 PM Ethan Duni wrote:
>> FFT filterbanks are time variant due to framing effects and the circular
>> convolution property. They exhibit “perfect reconstruction” if you design
>
FFT filterbanks are time variant due to framing effects and the circular
convolution property. They exhibit “perfect reconstruction” if you design the
windows correctly, but this only applies if the FFT coefficients are not
altered between analysis and synthesis. If you alter the FFT
It is physically impossible to build a causal, zero-phase system with
non-trivial frequency response.
Ethan
> On Mar 7, 2020, at 7:42 PM, Zhiguang Eric Zhang wrote:
>
>
> Not to threadjack from Alan Wolfe, but the FFT EQ was responsive written in C
> and running on a previous gen MacBook
So as Nigel and Robert have already explained, in general you need to
separately handle the spectral shaping and pdf shaping. This dither
algorithm works by limiting to the particular case of triangular pdf with a
single pole at z=+/-1. For that case, the state of the spectral shaping
filter can
Looks like they use the Viterbi algorithm to get the pitch tracks.
> On Mar 6, 2019, at 6:59 PM, Jay wrote:
>
>
> Looks like there's a link to a python implementation on this topics page,
> might provide some insights:
> https://github.com/topics/pitch-tracking
>

Aren't Auto-Tune and similar built on LPC vocoders? I had the impression
that was publicly known (recalling magazine interviews/articles from the
late 90s). The secret sauce being all the stuff required for pitch
tracking, unvoiced segments, different tunings, vibrato, corner cases, etc.
But as
cations?
>
> The background is still that I want to use a higher resolution for
> analysis and
> a lower resolution for synthesis in a phase vocoder.
>
> On 08.11.2018 at 21:45, Ethan Duni wrote:
>
> Not sure you can get the odd bins *easily*, but it is certainly possible.
> Con
> X1 = x0 + (r - r*i)*x1 - i*x2 + (-r - r*i)*x3 - x4 + (-r + r*i)*x5 + i*x6
> + (r + r*i)*x7
>
> where r=sqrt(1/2)
>
> Is it actually possible? It seems like the phase of the coefficients in
> the Y's and Z's advance too quickly to be of any use.
>
> -Ethan
You can combine consecutive DFTs. Intuitively, the basis functions are
periodic on the transform length. But it won't be as efficient as having
done the big FFT (as you say, the decimation in time approach interleaves
the inputs, so you gotta pay the piper to unwind that). Note that this is
for
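The "periodic on the transform length" intuition can be checked directly: the even bins of a 2N-point DFT fall out of two consecutive N-point DFTs for free (since those basis functions repeat with period N), while the odd bins require the interleave/twiddle work. A small numpy check (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(2 * N)

X1 = np.fft.fft(x[:N])       # DFT of the first frame
X2 = np.fft.fft(x[N:])       # DFT of the second, consecutive frame
X = np.fft.fft(x)            # full 2N-point DFT

# even-index basis functions are N-periodic, so these bins combine trivially
even_bins = X1 + X2
```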
Well you definitely want a monotonic, equal-amplitude crossfade, and
probably also time symmetry. So I think raised sinc is right out.
In terms of finer design considerations it depends on the time scale. For
longer crossfades (>100ms), steady-state considerations apply, and you can
design for
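The requirements named above (monotonic, time-symmetric, with gains that sum appropriately) can be sketched as a gain-pair generator; this is an illustrative example, not a recipe from the thread. Constant-gain fades suit correlated material, constant-power fades suit uncorrelated material:

```python
import numpy as np

def crossfade_gains(n, equal_power=False):
    """Monotonic, time-symmetric fade-out/fade-in gain pair of length n."""
    t = np.linspace(0.0, 1.0, n)
    if equal_power:
        g_in = np.sin(0.5 * np.pi * t)   # g_in^2 + g_out^2 == 1 (constant power)
        g_out = np.cos(0.5 * np.pi * t)
    else:
        g_in = t                         # g_in + g_out == 1 (constant gain)
        g_out = 1.0 - t
    return g_out, g_in
```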
You should have a search for papers by Jean Laroche and Mark Dolson, such
as "About This Phasiness Business" for some good information on phase
vocoder processing. They address time scale modification mostly in that
specific paper, but many of the insights apply in general, and you will
find
Alex, it sounds like you are confusing algorithmic latency with framing
latency. At each frame, you take in 10ms (or whatever) of input, and then
provide 10ms of output. This (plus processing time to generate the output) is
the IO latency of the process. But the algorithm itself can add
rbj wrote:
>i, personally, would rather see a consistent method used throughout the
MIDI keyboard range
If you squint at it hard enough, you can maybe convince yourself that the
naive sawtooth generator is just a memory optimization for low-frequency
wavetable entries. I mean, it does a perfect
Hi ben
You don't need to evaluate the asin() - it's piecewise monotonic and
symmetrical, so you can get the same comparison directly in the signal
domain.
Specifically, notice that x(n) = sin(2*pi*(1/4)*n) = [...0,1,0,-1,...]. So
you get the same result just by checking ( abs( x[n] - x[n-1] ) ==
com> wrote:
>
>
> Original Message
> Subject: Re: [music-dsp] Sampling theory "best" explanation
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, September 6, 2017 4:49 pm
> To
rry you misinterpreted it.
>
> On Sep 7, 2017, at 5:34 AM, Ethan Duni <ethan.d...@gmail.com> wrote:
>
> Nigel Redmon wrote:
> >As an electrical engineer, we find great humor when people say we can't
> do impulses.
>
> I'm the electrical engineer who pointed out t
e shortcut is trivial. Like I said, audio
> sample rates are slow, not that hard to do a good enough job for
> demonstration with "close enough" impulses.
>
> Don't anyone get mad at me, please. Just sitting on a plane at LAX at 1AM,
> waiting to fly 14 hours...on the
esenting bandlimited functions is useful. Because if we're thinking
>> of things this way, we can simply define an operation in the space of
>> discrete signals as being LTI iff the corresponding operation in the space
>> of bandlimited functions is LTI. This generalizes the usual d
o violate the
> definition of LTI they were taught.
>
> On Sep 1, 2017, at 3:46 PM, Ethan Duni <ethan.d...@gmail.com> wrote:
>
> Ethan F wrote:
> >I see your nitpick and raise you. :o) Surely there are uncountably many
> such functions,
> >as the power at any ap
adio
>> applications.
>
>
> I see your nitpick and raise you. :o) Surely there are uncountably many
> such functions, as the power at any apparent frequency can be distributed
> arbitrarily among the bands.
>
> -Ethan F
>
>
> On Fri, Sep 1, 2017 at 5:30 PM, Ethan
>I'm one of those people who prefer to think of a discrete-time signal as
>representing the unique bandlimited function interpolating its samples.
This needs an additional qualifier, something about the bandlimited
function with the lowest possible bandwidth, or containing DC, or
"baseband," or
These PicoScopes look pretty cool :]
As it happens I am just now trying to free up some garage space to get an
electronics bench together. But it's coming up on 20 years since I last
soldered and it's a whole different world with scopes now. So thanks for
this thread!
Also if anybody knows good
> how do you quadrature modulate without Hilbert filters?
>
Perhaps I'm using the wrong term - the operation in question is just the
multiplication of a signal by e^jwn. Or, equivalently, multiplying the real
part by cos(wn) and the imaginary part by sin(wn) - a pair of "quadrature
oscillators."
On Tue, Feb 7, 2017 at 6:49 AM, Ethan Fenn wrote:
> So I guess the general idea with these frequency shifters is something
> like:
>
> pre-filter -> generate Hilbert pair -> multiply by e^iwt -> take the real
> part
>
> Am I getting that right?
>
Exactly, this is a
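That pipeline can be sketched end to end in a few lines. In this illustration the Hilbert pair is formed crudely in the FFT domain (zeroing negative-frequency bins), the pre-filter is omitted, and the frequencies are chosen to be bin-centered so the shift is easy to see; all of those are assumptions of the sketch, not part of the original exchange:

```python
import numpy as np

def analytic(x):
    """Crude Hilbert pair via the FFT: keep DC/Nyquist, double positive bins,
    zero the negative-frequency bins."""
    N = len(x)
    X = np.fft.fft(x)
    X[1:N // 2] *= 2.0
    X[N // 2 + 1:] = 0.0
    return np.fft.ifft(X)

fs, f0, shift = 8000.0, 500.0, 125.0
n = np.arange(4096)
x = np.cos(2 * np.pi * f0 * n / fs)

z = analytic(x)                                        # generate Hilbert pair
y = np.real(z * np.exp(2j * np.pi * shift * n / fs))   # multiply by e^{jwt}, take real part
```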
Ha this article made me chuckle. All the considerations about odd 8 bit
audio formats!
This method has the desired property that if all but one input is silent,
you get the non-silent one at the output without attenuation or other
degradation. But the inclusion of the cross term makes it quite
sponse and
truncate/window it to the desired length.
FFT domain is generally not a good place to design filters - you're only
controlling what happens at the bin centers, and all kinds of wild things
can happen in between them. And it's difficult to account for the
circular/finite length effects.
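The alternative being recommended — sample the desired response, inverse-FFT to an impulse response, then truncate and window — might look like the following sketch (the cutoff, tap count, and Hann window are arbitrary illustration choices):

```python
import numpy as np

nfft, taps = 1024, 63
freqs = np.fft.rfftfreq(nfft)              # normalized frequency, 0..0.5
desired = (freqs < 0.125).astype(float)    # ideal lowpass, cutoff at fs/8

h_full = np.fft.irfft(desired, nfft)       # long, zero-phase impulse response
h = np.roll(h_full, taps // 2)[:taps]      # center the main lobe, truncate...
h *= np.hanning(taps)                      # ...and window to tame the truncation ripple
```

Unlike editing bins directly, this controls the in-between behavior: the window bounds what happens off the bin centers.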
Right, aren't monotonic signals the worst case here? Or maybe not, since
they're worst for one wedge, but best for the other?
Ethan D
On Fri, Sep 2, 2016 at 10:12 AM, Evan Balster wrote:
> Just a few clarifications:
>
> - Local maxima and first difference don't really matter.
So like a cascade of allpass filters then?
Ethan D
On Fri, Jul 29, 2016 at 11:10 AM, gm wrote:
>
> I think what I am looking for would be the perfect reverb.
>
> So that's the question reformulated: how could you construct a perfectly
> flat short reverb?
>
> It's the
>okay, this PDF was more useful than the other. once i got down to slide
#31,
> i could see the essential definition of what a "unum" is.
>big deeel.
>first of all, if the word size is fixed and known (and how would you know
how far
>to go to get to the extra meta-data: inexact bit, num
Any noise other than white noise is correlated, by definition. That's what
"white noise" means - uncorrelated. Correlation in the time domain is
equivalent to non-constant shape in the frequency domain.
Ethan
On Thu, Apr 14, 2016 at 12:24 PM, Seth Nickell wrote:
> Maybe
Supposing this is some griefer it seems reasonable to ignore them - but is
there a possibility that this is a symptom of some kind of server attack or
attempt to profile/track list members?
I've never received any unsub notices myself but it is a little
disconcerting that somebody persists at
Yeah zeroing out the state is going to lead to a transient, since the
filter has to ring up.
If you want to go that route, one possibility is to use two filters in
parallel: one that keeps the old state/coeffs but gets zero input, and
another that has zero state and gets the new input/coeffs. You
Theo wrote:
>I get there are certain statistical ideas involved. I wonder
>however where those ideas in practice lead to, because
>of a number of assumptions, like the "statistical variance"
>of a signal. I get that a self correlation of a signal in some
>normal definition gives an idea of the
>Lastly, it's important to note that differentiation and semi-differentiation
>filters are always approximate for sampled signals, and will tend to
>exhibit poor behavior for very high frequencies and (for semi-differentiation)
>very low ones.
I'm not sure there's necessarily a problem at low
Not a purely time-domain approach, but you can consider comparing sparsity
in the time and Fourier domains. The idea is that periodic/tonal type
signals may be non-sparse in the time domain, but look sparse in the
frequency domain (because all of the energy is on/around harmonics).
Similarly,
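One way to make that comparison concrete is an L1/L2 norm ratio per domain (a common sparsity proxy; this particular cue is an illustration, not something specified in the thread): a tone is dense in time but a spike in the FFT, while noise is dense in both.

```python
import numpy as np

def l1_over_l2(v):
    """L1/L2 norm ratio: ~1 for a single spike, ~sqrt(N) for flat energy."""
    v = np.abs(v) + 1e-12
    return v.sum() / np.sqrt((v ** 2).sum())

def tonality(x):
    """Crude tonal-vs-noise cue: how much sparser the FFT domain is than time."""
    return l1_over_l2(x) / l1_over_l2(np.fft.rfft(x))

rng = np.random.default_rng(0)
n = np.arange(4096)
tone = np.sin(2 * np.pi * 64 * n / 4096)   # dense in time, one spike in the FFT
noise = rng.standard_normal(4096)          # dense in both domains
```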
ny brightness metric can be clearly understood, I'll stick
> to formulas whose mathematical properties are transparent -- these lend
> themselves infinitely better to being small pieces of larger systems.
>
> – Evan Balster
> creator of imitone <http://imitone.com>
>
&g
s with the differential brightness estimator.
>
> – Evan Balster
> creator of imitone <http://imitone.com>
>
> On Thu, Feb 18, 2016 at 1:00 AM, Ethan Duni <ethan.d...@gmail.com> wrote:
>
>> >normalized to fundamental frequency or not
>> >normalized (so t
om> wrote:
>
>
> Original Message
> Subject: Re: [music-dsp] Cheap spectral centroid recipe
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, February 17, 2016 11:21 pm
> To: "A discussion list f
>It's essentially computing a frequency median,
>rather than a frequency mean as is the case
>with the derivative-power technique described
> in my original approach.
So I'm wondering, is there any consensus on what is the best measure of
central tendency for a music signal spectrum? There's the
>given the same order N for the polynomials, whether your basis set are
> the Tchebyshevs, T_n(x), or the basis is just set of x^n, if you come up
>with a min/max optimal fit to your data, how can the two polynomials be
>different?
Right, if you do that you'll end up with equivalent answers (to
>> [..] the autocorrelation is
>>
>> = (1/3)*(1-P)^|k|
>>
>> (I checked that with a little MC code before posting.) So the power
>> spectrum is (1/3)/(1 + (1-P)z^-1)
The FT of (1/3)*(1-P)^|k| is (1/3)*(1-Q^2)/(1-2Qcos(w) + Q^2), where Q =
(1-P).
Looks like you were thinking of the
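The closed form for the Fourier transform of (1/3)*Q^|k| can be sanity-checked against a brute-force DTFT sum (Q and w here are arbitrary test values):

```python
import numpy as np

Q = 0.6       # Q = 1 - P
w = 1.3       # any frequency in radians/sample

# brute-force DTFT of r[k] = (1/3) * Q^|k| over a long lag range
k = np.arange(-2000, 2001)
dtft = np.sum((1.0 / 3.0) * Q ** np.abs(k) * np.exp(-1j * w * k)).real

# closed form: (1/3)*(1 - Q^2) / (1 - 2*Q*cos(w) + Q^2)
closed = (1.0 / 3.0) * (1 - Q ** 2) / (1 - 2 * Q * np.cos(w) + Q ** 2)
```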
-
> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
> noise?
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, November 11, 2015 7:36 pm
> To: "robert bristow-jo
--
> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
> noise?
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, November 11, 2015 5:57 pm
> To: "robert bristow-johnson" <r...@audioimaginatio
On Tue, Nov 10, 2015 at 6:33 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:
>
>
> Original Message
> Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold
> noise?
> From: &
>(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true,
>but it doesn't then really capture how your usual L^2 correlative
>measures truly work.
I think we need both conditions, no?
>Something like that, yes, except that you have to factor in aliasing.
What aliasing? Isn't this
's the basic idea.
https://en.wikipedia.org/wiki/Spectral_density_estimation
E
On Thu, Nov 5, 2015 at 2:00 AM, Ross Bencina <rossb-li...@audiomulch.com>
wrote:
> Thanks Ethan(s),
>
> I was able to follow your derivation. A few questions:
>
> On 4/11/2015 7:07 PM, E
about power per linear or angular frequency. And
>>> there could be others I'm not thinking of maybe someone else can
>>> shed more light here.
>>>
>>
>> I multiplied the psd by 1/3 and as you can see from the graph it looks as
>> thoug
P^2)
Unless I've screwed up somewhere?
E
On Tue, Nov 3, 2015 at 8:51 PM, Ross Bencina <rossb-li...@audiomulch.com>
wrote:
> On 4/11/2015 5:26 AM, Ethan Duni wrote:
>
>> Do you mean the literal Fourier spectrum of some realization of this
>> process, or the power spec
Yep that's the same approach I just posted :]
E
On Tue, Nov 3, 2015 at 11:48 PM, Ethan Fenn wrote:
> How about this:
>
> For a lag of t, the probability that no new samples have been accepted is
> (1-P)^|t|.
>
> So the autocorrelation should be:
>
> AF(t) =
Wait, just realized I wrote that last part backwards. It should be:
So in broad strokes, what you should see is a lowpass spectrum
parameterized by P - for P very small, you approach a DC spectrum, and for
P close to 1 you approach a spectrum that's flat.
On Tue, Nov 3, 2015 at 10:26 AM, Ethan
Do you mean the literal Fourier spectrum of some realization of this
process, or the power spectral density? I don't think you're going to get a
closed-form expression for the former (it has a random component). For the
latter what you need to do is work out an expression for the
autocorrelation
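A small Monte Carlo of the process makes the autocorrelation route concrete. This sketch assumes (as elsewhere in the thread) uniform [-1,1] held values — whose variance supplies the 1/3 factor — and a per-sample update probability P, and checks the geometric autocorrelation (1/3)*(1-P)^|k|:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 0.3, 200000
accept = rng.random(N) < P          # resample with prob P, hold with prob 1-P
vals = rng.uniform(-1.0, 1.0, N)    # candidate values, variance 1/3

x = np.empty(N)
cur = rng.uniform(-1.0, 1.0)
for i in range(N):
    if accept[i]:
        cur = vals[i]
    x[i] = cur                      # random sample-and-hold output

def acorr(x, k):
    """Empirical autocorrelation at integer lag k."""
    return float(np.mean(x[:len(x) - k] * x[k:]))
```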
>the reason why it's merely convention is that if the minus sign was swapped
>between the forward and inverse Fourier transform in all of the literature and
>practice, all of the theorems would work the same as they do now.
Note that in some other areas they do actually use other conventions.
think we're on the same page. ain't we?
Yeah, I was unclear on which scenario(s) the aliasing analysis was supposed
to apply to.
E
On Wed, Aug 26, 2015 at 12:53 PM, robert bristow-johnson
r...@audioimagination.com wrote:
On 8/25/15 7:08 PM, Ethan Duni wrote:
if you can, with optimal
, not a
discrete time signal of whatever sampling rate.
E
On Fri, Aug 21, 2015 at 2:09 AM, Peter S peter.schoffhau...@gmail.com
wrote:
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
In this graph, the signal frequency seems to be 250 Hz, so this graph
shows the equivalent of about 22000/250 = 88x
, 2015 at 1:24 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
It shows *exactly* the aliasing
It shows the aliasing left by linear interpolation into the continuous
time
domain. It doesn't show the additional aliasing produced
...@gmail.com
wrote:
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
*exactly* upsampling
That is not what is shown in that graph. The graph simply shows the
continuous-time frequency response of the interpolation
The details of how the graphs were generated don't really matter. The point
is that the only effect shown is the spectrum of the continuous-time
polynomial interpolator. The additional spectral effects of delaying and
resampling that continuous-time signal (to get fractional delay, for
example)
to
what I'm saying in the first place. It is indeed a waste of your time to
invent equivalent ways to generate graphs, since that is not the point.
E
On Fri, Aug 21, 2015 at 2:56 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 21/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
The details
1) Olli Niemitalo's graph *is* the equivalent of the spectrum of
upsampled white noise.
We've been over this repeatedly, including in the very post you are
responding to. The fact that there are many ways to produce a graph of the
interpolation spectrum is not in dispute, nor is it germane to my
of the noisiness no matter how
much data you throw at it).
E
On Fri, Aug 21, 2015 at 5:47 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 22/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
We've been over this repeatedly, including in the very post you are
responding to. The fact
In this graph, the signal frequency seems to be 250 Hz, so this graph
shows the equivalent of about 22000/250 = 88x oversampling.
That graph just shows the frequency responses of various interpolation
polynomials. It's not related to oversampling.
E
On Thu, Aug 20, 2015 at 5:40 PM, Peter S
If all you're trying to do is mitigate the rolloff of linear interp
That's one concern, and by itself it implies that you need to oversample by
at least some margin to avoid having a zero at the top of your audio band
(along with a transition band below that).
But the larger concern is the
a first-order interpolator.
quite familiar with it.
Yeah that was more for the list in general, to keep this discussion
(semi-)grounded.
E
On Wed, Aug 19, 2015 at 9:15 AM, robert bristow-johnson
r...@audioimagination.com wrote:
On 8/18/15 11:46 PM, Ethan Duni wrote:
for linear
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Well, you can compute those at runtime if you want - and you don't need a
terribly high order Lagrange interpolator if you're already oversampled, so
it's not necessarily a problematic overhead.
.
E
On Wed, Aug 19, 2015 at 3:55 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 20/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
I don't dispute that linear fractional interpolation is the right choice
if
you're going to oversample by a large ratio. The question is what
rbj
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
-1...
The sampling theorem requires that all frequencies be *below* the Nyquist
frequency. Sampling signals at exactly the Nyquist frequency is an edge
case that sort-of works in some limited special cases, but there is no
a nyquist frequency
sinusoid when you run it through a DAC.
E
On Tue, Aug 18, 2015 at 1:28 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
-1...
The sampling
no bearing on the frequency
response of fractional interpolators. I'd suggest dropping this whole
derail, if you are no longer hung up on this point.
E
On Tue, Aug 18, 2015 at 2:08 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 18/08/2015, Ethan Duni ethan.d...@gmail.com wrote:
That class
, the aliasing issue
works like this: I add two numbers together, and find that the answer is X.
I tell you X, and then ask you to determine what the two numbers were. Can
you do it?
E
On Tue, Aug 18, 2015 at 2:13 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 18/08/2015, Ethan Duni ethan.d
be used there.
But the example of the weird things that can happen when you try to sample
a sine wave right at the nyquist rate and then process it is orthogonal to
that point.
E
On Tue, Aug 18, 2015 at 1:16 PM, robert bristow-johnson
r...@audioimagination.com wrote:
On 8/18/15 3:44 PM, Ethan Duni
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
No, even an ideal reconstruction filter won't do it. You've got your
+Nyquist component sitting right on top of your -Nyquist component. Hence
the aliasing. The information has been lost in the
for linear interpolation, if you are a delayed by 3.5 samples and you
keep that delay constant, the transfer function is
H(z) = (1/2)*(1 + z^-1)*z^-3
that filter goes to -inf dB as omega gets closer to pi.
Note that this holds for symmetric fractional delay filter of any odd order
(i.e.,
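The half-sample case quoted above is easy to check numerically: setting aside the integer delay z^-3 (pure delay, flat magnitude), the kernel (1/2)*(1 + z^-1) has magnitude cos(w/2), which is unity at DC and drops to a null at Nyquist:

```python
import numpy as np

w = np.linspace(0.0, np.pi, 513)
H = 0.5 * (1 + np.exp(-1j * w))   # (1/2)*(1 + z^-1) on the unit circle
mag = np.abs(H)                   # equals cos(w/2): unity at DC, zero at pi
```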
Yeah I am also curious. It's not obvious to me where it would make sense to
spend resources compensating for interpolation rather than just juicing up
the interpolation scheme in the first place.
E
On Mon, Aug 17, 2015 at 11:39 AM, Nigel Redmon earle...@earlevel.com
wrote:
Since compensation
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
E
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
to
address the source of your hostility, and also that you gain more insight
into Information Theory.
My apologies to the list for encouraging this unfortunate tangent.
E
On Thu, Jul 16, 2015 at 8:38 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 17/07/2015, Ethan Duni ethan.d...@gmail.com
peter.schoffhau...@gmail.com
wrote:
On 15/07/2015, Ethan Duni ethan.d...@gmail.com wrote:
Right, this is an artifact of the approximation you're doing. The model
doesn't explicitly understand periodicity, but instead only looks for
transitions, so the more transitions per second (higher
This algorithm gives an entropy rate estimate approaching zero for any
periodic waveform, regardless of the shape (assuming the analysis
window is large enough).
But, it seems that it does *not* approach zero. If you fed an arbitrarily
long periodic waveform into this estimator, you won't see
into muddier and muddier waters as it proceeds, and the resulting
confusion seems to be provoking some unpleasantly combative behavior from
you.
E
On Thu, Jul 16, 2015 at 12:50 PM, Peter S peter.schoffhau...@gmail.com
wrote:
On 16/07/2015, Ethan Duni ethan.d...@gmail.com wrote:
But, it seems
I wondered a few times what a higher entropy estimate for a higher
frequency would mean according to this - I think it means that a
higher frequency signal needs a higher bandwidth channel to transmit,
as you need a transmission rate of 2*F to transmit a periodic square
wave of frequency F. Hence,
Well, I was thinking about this as well. How about a 1bit square wave then?
Such a signal is deterministic and so has entropy rate of zero.
Your bitflip counter would not be sensitive to duty cycle.
The simpler bit counter would be.
I don't see why entropy should change with duty cycle since I
, and then downsampling? Is there mileage to be had by
combining oversampling with BLEP?
E
On Thu, Jun 25, 2015 at 1:34 AM, Vadim Zavalishin
vadim.zavalis...@native-instruments.de wrote:
On 24-Jun-15 21:30, Ethan Duni wrote:
Could you expand a bit on exactly what it means to apply the BLEP method
at 12:49 PM, Sampo Syreeni de...@iki.fi wrote:
On 2015-06-12, Ethan Duni wrote:
Thanks for expanding on that, this is quite interesting stuff. However,
if I'm following this correctly, it seems to me that the problem of
multiplication of distributions means that the whole basic set-up
to
sampling/reconstruction of well-tempered distributions in the first place.
No?
E
On Thu, Jun 11, 2015 at 2:00 AM, Sampo Syreeni de...@iki.fi wrote:
On 2015-06-09, Ethan Duni wrote:
The Fourier transform does not exist for functions that blow up to +-
infinity like that. To do frequency domain
vadim.zavalis...@native-instruments.de wrote:
On 09-Jun-15 19:23, Ethan Duni wrote:
Could you give a little bit more of a clarification here? So the
finite-order polynomials are not bandlimited, except the DC? Any hints
to what their spectra look like? How a bandlimited polynomial would look
Could you give a little bit more of a clarification here? So the
finite-order polynomials are not bandlimited, except the DC? Any hints
as to what their spectra look like? What would a bandlimited polynomial
look like?
Any hints as to what the spectrum of an exponential function looks like? How
does a
Now the assignment is as follows: can we, given the output signal
coming from our filter which was fed the input signal, and the filter
coefficients, compute the input signal?
Invertible digital filters are invertible, up to numerical precision. Are
you wanting to talk about finite word length
If you try to take the Fourier transform integral of an exp(j*omega_0*t),
it will not converge in the sense in which an improper integral's
convergence is usually understood. You will need to employ something like
the Cauchy principal value or Cesaro convergence
to make it converge to zero at
Wow, good answer!
E
On Sat, Jun 6, 2015 at 4:34 PM, Sampo Syreeni de...@iki.fi wrote:
On 2015-06-06, Alan Wolfe wrote:
I am so sorry... meant to send this to myself to investigate later, my
name starts with A and my address book has this as A for some reason.
Please ignore... or feel
Also a good starting place for beginners are the xiph show-and-tell videos
(probably been posted here before, but whatever):
https://xiph.org/video/vid2.shtml
E
On Wed, Jun 3, 2015 at 3:05 PM, Ethan Duni ethan.d...@gmail.com wrote:
Perfect sinusoids/square waves/etc. only exist