Re: [music-dsp] Computational complexity of common DSP algorithms

2020-03-19 Thread Sampo Syreeni

On 2020-03-19, Dario Sanfilippo wrote:

Thanks for your email, all good points. Off the top of your head, could
you please point me to a reference for counting the operations needed in
direct-form filters?


The best computational complexity for direct-form convolution is
achieved by zero-delay convolution in the Gardner form, so O(N log M),
where N is the number of samples processed and M is the support (length)
of the nonzero portion of the LTI impulse response. (There are methods
which yield longer responses at lower cost, but they all trade something
for something. At full innovation density/critical sampling, you cannot
do any better.) This is because this sort of processing can be reduced
to a binary sorting problem, which is guaranteed to take asymptotically
as much time. (It is a research problem whether non-binary sorting
algorithms could perchance be generalized to this setting as well; no
such generalizations exist as of now.)


http://www.cs.ust.hk/mjg_lib/bibs/DPSu/DPSu.Files/Ga95.PDF

Gardner's algorithm is then best in more than one way. It doesn't just
yield the best asymptotic performance, but exactly zero delay as well.
And the guy didn't even stop there. He also broke the traditional
overlap-add FFT convolution algorithm into running parts, so that 1) it
becomes an exponential cascade of overlapping processes whose running
time sums to a constant, instead of peaking at any stage or as a whole,
and 2) it optimally reuses previous half-results in order to derive the
next one. (That has a lot to do with both the OLA structure, and at the
same time with how the FFT realizes a full Fourier rotation in a
function space by coordinate-wise, parallel rotations in a logarithmic,
divide-and-conquer cascade.)
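
To make the structure concrete, here is a rough sketch of such a
partition plan (my own illustration - the head length and block sizes
are arbitrary, not taken from the paper): a short direct-form head gives
the zero latency, and FFT partitions of doubling size cover the rest of
the response, which is where the O(N log M) bound comes from.

#include <stdio.h>

/* Print a Gardner-style non-uniform partition plan for an impulse
 * response of M taps: a direct (time-domain) head, then FFT-convolved
 * partitions whose sizes double until the response is covered. */
int main(void)
{
    int M = 4096;       /* impulse response length (support) */
    int head = 64;      /* direct-form head: zero latency */
    printf("direct head: taps 0..%d\n", head - 1);
    for (int offset = head, size = head; offset < M;
         offset += size, size *= 2) {
        int len = (offset + size > M) ? M - offset : size;
        printf("FFT partition: %d taps at offset %d\n", len, offset);
    }
    return 0;
}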


In fact I've just now been thinking that I should apply the algorithm to
certain coding-theoretic problems. Since it's guaranteed to be
zero-delay, you can apply it willy-nilly in decision-feedback decoding,
on the coding side. And since it's guaranteed to be optimal on the LTI
side of things, and it's a fully neutral, general, and provably
efficient LTI-DSP primitive, why not take advantage of it...? ;)

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Eric Brombaugh

Thanks for the clarification.

Sorry for the confusion re twiddle factors - I meant the per-bin complex 
rotations, so I believe we're on the same page.


It's good to know that you found single-precision floating point to be
insufficient for long-term stability. The low-resolution fixed-point
SDFT I built wasn't designed to run for more than a few tens of
milliseconds before being reset, but I did see some error build-up over
that period, so it's not surprising that high resolution is required for
long run times. It might be interesting to play with this in an FPGA
context, so it's good to set expectations properly at the outset.


Eric

On 3/19/20 11:11 AM, Richard Dobson wrote:

(caveat - 13 years since I worked on this)

This is a real single-sample-update Sliding DFT, not a block-based
method. The sample comes in and is used to perform a complex rotation on
each bin, followed by the frequency-domain convolution. There are no
twiddle factors as such. So the rectangular window is at best implicit -
I'm not sure it even has any meaning in this situation. The approach
from the outset was aimed at real-time processing - i.e. potentially for
hours non-stop. We found (in the Clearspeed project) that
single-precision floats would not support that; I don't know whether
anything less than double precision would suffice - single and double
were the only choices available.


It's "embarrassingly parallel" as an algorithm, so very suited to 
dedicated massively parallel hardware. I know FPGAs are pretty powerful 
these days so might well do the job (but some transformations are pretty 
cpu-intensive too!). The Bath Uni team said they were using a 
"mid-range" graphic card (on a Linux workstation).


Richard Dobson

On 19/03/2020 17:45, Eric Brombaugh wrote:

Wow - interesting discussion.

I've implemented a real-time SDFT on an FPGA for use in carrier 
acquisition of communications signals. It was surprisingly easy to do 
and didn't require particularly massive resources, although FPGAs 
naturally facilitate a degree of low-level parallelism that you can't 
easily achieve in CPU-based systems.


Based on this it might be feasible to build the SPV on a modest FPGA
rather than resorting to GPUs or specialized parallel CPU systems. The
main stumbling block I see is your use of double-precision floating
point. If that level of accuracy is really necessary, then a higher-end
FPGA would be needed, as most mid-range devices are geared more toward
fixed point or single-precision floating point.


I was a bit confused by the ICMC paper when it came to windowing. The 
SDFT structure I'm used to seeing (as discussed in the Lyons/Jacobsen 
article you referenced) involves a rectangular window applied prior to 
the twiddle calculations using a comb-filter structure. Is this window 
replaced by your frequency domain convolutions, or are the 
cosine-based windows applied in addition to the rectangular one?


Eric


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Computational complexity of common DSP algorithms

2020-03-19 Thread Dario Sanfilippo
Hi, Ethan.

Thanks for your email, all good points. Off the top of your head, could
you please point me to a reference for counting the operations needed in
direct-form filters?

Best,
Dario

On Thu, 19 Mar 2020 at 17:28, Ethan Duni  wrote:

>
>
> On Thu, Mar 19, 2020 at 8:11 AM Dario Sanfilippo <
> sanfilippo.da...@gmail.com> wrote:
>
>>
>> I believe that the time complexity of the FFT is O(n log n); would you
>> perhaps have a list or reference to a paper that shows the time
>> complexity of common DSP systems such as a 1-pole filter?
>>
>
> The complexity depends on the topology. The cheapest topologies (direct
> forms) are something like 2*M operations per sample, where M is the filter
> order. Other topologies are optimized for other properties (such as noise
> robustness, modulation robustness, etc.) and exhibit higher complexity -
> generic state-variable topologies can scale as M^2 operations per sample,
> for example.
>
>
>> If simply comparing two algorithms by the number of operations needed to
>> compute a sample, would you include delays in filters as an operation? I'm
>> just wondering as some papers about FFT only include real multiplications
>> and additions as operations.
>>
>
> Delays usually get accounted as memory requirements in this type of
> analysis. That isn't to say that copying data around in a real computer
> doesn't take time, but this is usually abstracted away in the generic DSP
> algorithm accounting. The underlying assumption being that the DSP
> throughput is essentially computation bound, and so reducing the total
> number of MACs is the goal. But that's not terribly appropriate for a
> software system running on a modern personal computer, for example.
>
> Ethan
>
>
>
>>
>> Thanks for your help,
>> Dario
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Richard Dobson

(caveat - 13 years since I worked on this)

This is a real single-sample-update Sliding DFT, not a block-based
method. The sample comes in and is used to perform a complex rotation on
each bin, followed by the frequency-domain convolution. There are no
twiddle factors as such. So the rectangular window is at best implicit -
I'm not sure it even has any meaning in this situation. The approach
from the outset was aimed at real-time processing - i.e. potentially for
hours non-stop. We found (in the Clearspeed project) that
single-precision floats would not support that; I don't know whether
anything less than double precision would suffice - single and double
were the only choices available.
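
For concreteness, here is a minimal sketch of a per-sample update in the
textbook sliding-DFT form that the Lyons/Jacobsen article describes,
with a Hann window applied afterwards as a three-point convolution
across bins (my own illustration - this is not the SPV code):

#include <complex.h>

/* One sliding-DFT step: X_k[n] = (X_k[n-1] + x[n] - x[n-N]) * e^(j*2*pi*k/N).
 * Each of the N bins is rotated once per incoming sample. */
void sdft_update(double complex X[], const double complex rot[],
                 int N, double x_new, double x_oldest)
{
    double delta = x_new - x_oldest;        /* comb term: x[n] - x[n-N] */
    for (int k = 0; k < N; k++)
        X[k] = (X[k] + delta) * rot[k];     /* rot[k] = cexp(I*2*M_PI*k/N) */
}

/* Hann windowing in the frequency domain, as a convolution across bins:
 * Xw[k] = -0.25*X[k-1] + 0.5*X[k] - 0.25*X[k+1], indices modulo N. */
double complex hann_bin(const double complex X[], int N, int k)
{
    return -0.25 * X[(k - 1 + N) % N] + 0.5 * X[k]
           - 0.25 * X[(k + 1) % N];
}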


It's "embarrassingly parallel" as an algorithm, so very suited to 
dedicated massively parallel hardware. I know FPGAs are pretty powerful 
these days so might well do the job (but some transformations are pretty 
cpu-intensive too!). The Bath Uni team said they were using a 
"mid-range" graphic card (on a Linux workstation).


Richard Dobson

On 19/03/2020 17:45, Eric Brombaugh wrote:

Wow - interesting discussion.

I've implemented a real-time SDFT on an FPGA for use in carrier 
acquisition of communications signals. It was surprisingly easy to do 
and didn't require particularly massive resources, although FPGAs 
naturally facilitate a degree of low-level parallelism that you can't 
easily achieve in CPU-based systems.


Based on this it might be feasible to build the SPV on a modest FPGA
rather than resorting to GPUs or specialized parallel CPU systems. The
main stumbling block I see is your use of double-precision floating
point. If that level of accuracy is really necessary, then a higher-end
FPGA would be needed, as most mid-range devices are geared more toward
fixed point or single-precision floating point.


I was a bit confused by the ICMC paper when it came to windowing. The 
SDFT structure I'm used to seeing (as discussed in the Lyons/Jacobsen 
article you referenced) involves a rectangular window applied prior to 
the twiddle calculations using a comb-filter structure. Is this window 
replaced by your frequency domain convolutions, or are the cosine-based 
windows applied in addition to the rectangular one?


Eric


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Computational complexity of common DSP algorithms

2020-03-19 Thread Leonardo Gabrielli
> If simply comparing two algorithms by the number of operations needed to
> compute a sample, would you include delays in filters as an operation? I'm
> just wondering as some papers about FFT only include real multiplications
> and additions as operations.
>

It depends on whether you are conducting an academic study or doing
real-world engineering.
In the past you had to know how much time a MUL operation required
compared to an ADD; nowadays they cost about the same, as both are
implemented in hardware, even on microcontrollers such as the Atmel
parts in Arduinos.

Anyway, for academic work what I usually do (based on what people
currently do) is to sum all real additions and multiplications into one
number. If you have operations on complex data, then 1 complex ADD =
2 real ADDs and 1 complex MUL = 4 real MULs + 2 real ADDs. As for
divisions, you can either report them as a separate count or fold them
into the global flop count together with the ADDs and MULs. In the
latter case, however, you can't stay fully abstract and have to take
some reference values: I take the instruction manual of the target
architecture, estimate the number of clock cycles a division takes, and
compare that to the ADD or MUL clock cycles.
Finally, for the delays, I sometimes indicate the size of the memory to
allocate.
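
As a sketch of the counting convention above (the helpers and the
counter are purely illustrative):

#include <complex.h>

static long flops = 0;  /* single global count of real ADDs and MULs */

/* 1 complex MUL = 4 real MULs + 2 real ADDs */
double complex cmul(double complex a, double complex b)
{
    flops += 6;
    return a * b;
}

/* 1 complex ADD = 2 real ADDs */
double complex cadd(double complex a, double complex b)
{
    flops += 2;
    return a + b;
}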

By contrast, if you are working on an engineering project, you should
really take care of all the pointer arithmetic, value copying, and
memory swapping. These can usually proceed in parallel with the
arithmetic operations (if your code is nice), but you never know. Memory
operations have a severe effect on embedded systems, as the system can
stall for many cycles waiting for the data to arrive. Divisions are
definitely a nightmare on embedded processors, and you should just try
to avoid them at all costs (unless you are devoting a whole DSP to a
digital effect). On x86, forget all of the above and write your code as
ugly as you like; you'll never run into bottlenecks :P :P :P (joking,
but really, you have no big issues with divisions and memory ops
there).

Hope this helps.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Eric Brombaugh

Wow - interesting discussion.

I've implemented a real-time SDFT on an FPGA for use in carrier 
acquisition of communications signals. It was surprisingly easy to do 
and didn't require particularly massive resources, although FPGAs 
naturally facilitate a degree of low-level parallelism that you can't 
easily achieve in CPU-based systems.


Based on this it might be feasible to build the SPV on a modest FPGA
rather than resorting to GPUs or specialized parallel CPU systems. The
main stumbling block I see is your use of double-precision floating
point. If that level of accuracy is really necessary, then a higher-end
FPGA would be needed, as most mid-range devices are geared more toward
fixed point or single-precision floating point.


I was a bit confused by the ICMC paper when it came to windowing. The 
SDFT structure I'm used to seeing (as discussed in the Lyons/Jacobsen 
article you referenced) involves a rectangular window applied prior to 
the twiddle calculations using a comb-filter structure. Is this window 
replaced by your frequency domain convolutions, or are the cosine-based 
windows applied in addition to the rectangular one?


Eric

On 3/19/20 10:23 AM, Richard Dobson wrote:
In my original C programs it was all implemented in double-precision
floating point, and the results were pretty clean (though we never
assessed them formally at the time); but as the computational burden was
substantial on a standard PC, there was no way to run them in real time
to perform a soak test.


However, we received some advanced (at the time) highly parallel
accelerator cards from a Bristol company, "Clearspeed", which did offer
the opportunity to perform real-time oscillator bank synthesis (by
making a rudimentary VST synth) - for example, to generate band-limited
square and sawtooth waves. With single precision and real-time
generation, it did not take long at all (I once ran it for 20 minutes,
monitoring on an oscilloscope) for the phases, and thus the waveform
shape, to degrade. Conversely, with double precision (which those cards
fully supported, most unusually for the time), I was able to leave it
running for some hours with no visible degradation of the waveform or
audible increase in noise.


It doesn't fully answer your question, but I hope it offers some 
indication of the potential of the process.


Later on, colleagues at Bath University got the SPV fully running in
real time on Nvidia GPU cards programmed using CUDA, fed with real-time
audio input, and this was presented (I think) at either ICMC or DAFx. If
John Fitch is following this, he will be able to give more details. GPUs
are definitely the way to go for SPV in real time. I estimated
(back-of-an-envelope-style) demands on the order of 50 GFLOPS. Of course
there remain many unanswered questions!


Richard Dobson

On 19/03/2020 16:18, Ethan Duni wrote:



On Tue, Mar 10, 2020 at 1:05 PM Richard Dobson wrote:



    Our ICMC paper can be found here, along with a few beguiling sound
    examples:

    http://dream.cs.bath.ac.uk/SDFT/


So this is pretty cool stuff. I can't say I've digested the whole idea 
yet, but I had a couple of obvious questions.


In particular, the analyzer is defined by a recursive formula, and I 
gather that the synthesizer effectively becomes an oscillator bank. 
So, are special numerical techniques required to implement this, in 
order to avoid the build-up of round-off noise over time?


Ethan

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Richard Dobson

sorry for the repeats - don't know how that happened!
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Richard Dobson
In my original C programs it was all implemented in double-precision
floating point, and the results were pretty clean (though we never
assessed them formally at the time); but as the computational burden was
substantial on a standard PC, there was no way to run them in real time
to perform a soak test.


However, we received some advanced (at the time) highly parallel
accelerator cards from a Bristol company, "Clearspeed", which did offer
the opportunity to perform real-time oscillator bank synthesis (by
making a rudimentary VST synth) - for example, to generate band-limited
square and sawtooth waves. With single precision and real-time
generation, it did not take long at all (I once ran it for 20 minutes,
monitoring on an oscilloscope) for the phases, and thus the waveform
shape, to degrade. Conversely, with double precision (which those cards
fully supported, most unusually for the time), I was able to leave it
running for some hours with no visible degradation of the waveform or
audible increase in noise.
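
For anyone wondering where the drift comes from, here is a minimal
sketch of a coupled-form recursive oscillator of the kind an oscillator
bank is built from (my own illustration, not the actual Clearspeed
code): each sample multiplies a unit phasor by a fixed rotation, so any
rounding error in the products compounds without bound.

#include <math.h>

/* Generate a sine by rotating a unit phasor once per sample. In single
 * precision the phasor's magnitude and phase drift audibly within
 * minutes; in double precision the drift stays negligible for hours. */
void osc_run(double freq_hz, double fs, long n, float out[])
{
    float c = (float)cos(2.0 * M_PI * freq_hz / fs);
    float s = (float)sin(2.0 * M_PI * freq_hz / fs);
    float re = 1.0f, im = 0.0f;             /* unit phasor */
    for (long i = 0; i < n; i++) {
        out[i] = im;                        /* sine output */
        float t = re * c - im * s;          /* rounding error enters here */
        im = re * s + im * c;               /* and accumulates each sample */
        re = t;
    }
}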


It doesn't fully answer your question, but I hope it offers some 
indication of the potential of the process.


Later on, colleagues at Bath University got the SPV fully running in
real time on Nvidia GPU cards programmed using CUDA, fed with real-time
audio input, and this was presented (I think) at either ICMC or DAFx. If
John Fitch is following this, he will be able to give more details. GPUs
are definitely the way to go for SPV in real time. I estimated
(back-of-an-envelope-style) demands on the order of 50 GFLOPS. Of course
there remain many unanswered questions!


Richard Dobson

On 19/03/2020 16:18, Ethan Duni wrote:



On Tue, Mar 10, 2020 at 1:05 PM Richard Dobson wrote:



Our ICMC paper can be found here, along with a few beguiling sound
examples:

http://dream.cs.bath.ac.uk/SDFT/


So this is pretty cool stuff. I can't say I've digested the whole idea 
yet, but I had a couple of obvious questions.


In particular, the analyzer is defined by a recursive formula, and I 
gather that the synthesizer effectively becomes an oscillator bank. So, 
are special numerical techniques required to implement this, in order to 
avoid the build-up of round-off noise over time?


Ethan

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FIR blog post & interactive demo

2020-03-19 Thread STEFFAN DIEDRICHSEN
Like many other things ….

Steffan 

> On 19.03.2020|KW12, at 17:01, Ethan Fenn  wrote:
> 
> So interestingly those two #define's together would have no effect!
> 

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Computational complexity of common DSP algorithms

2020-03-19 Thread Ethan Duni
On Thu, Mar 19, 2020 at 8:11 AM Dario Sanfilippo 
wrote:

>
> I believe that the time complexity of the FFT is O(n log n); would you perhaps
> have a list or reference to a paper that shows the time complexity of
> common DSP systems such as a 1-pole filter?
>

The complexity depends on the topology. The cheapest topologies (direct
forms) are something like 2*M operations per sample, where M is the filter
order. Other topologies are optimized for other properties (such as noise
robustness, modulation robustness, etc.) and exhibit higher complexity -
generic state-variable topologies can scale as M^2 operations per sample,
for example.
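
To make the direct-form count concrete, here is a sketch of a transposed
direct-form II filter of order M = 2 (a biquad): 5 multiplies and 4
additions per sample, in line with the ~2*M figure.

/* Transposed direct-form II biquad with states s[0], s[1]:
 * y = b0*x + s0; s0' = b1*x - a1*y + s1; s1' = b2*x - a2*y.
 * Per sample: 5 MUL + 4 ADD. */
double biquad_df2t(double x, const double b[3], const double a[2],
                   double s[2])
{
    double y = b[0] * x + s[0];
    s[0] = b[1] * x - a[0] * y + s[1];
    s[1] = b[2] * x - a[1] * y;
    return y;
}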


> If simply comparing two algorithms by the number of operations needed to
> compute a sample, would you include delays in filters as an operation? I'm
> just wondering as some papers about FFT only include real multiplications
> and additions as operations.
>

Delays usually get accounted as memory requirements in this type of
analysis. That isn't to say that copying data around in a real computer
doesn't take time, but this is usually abstracted away in the generic DSP
algorithm accounting. The underlying assumption being that the DSP
throughput is essentially computation bound, and so reducing the total
number of MACs is the goal. But that's not terribly appropriate for a
software system running on a modern personal computer, for example.

Ethan



>
> Thanks for your help,
> Dario
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Ethan Duni
On Tue, Mar 10, 2020 at 1:05 PM Richard Dobson  wrote:

>
> Our ICMC paper can be found here, along with a few beguiling sound
> examples:
>
> http://dream.cs.bath.ac.uk/SDFT/


So this is pretty cool stuff. I can't say I've digested the whole idea yet,
but I had a couple of obvious questions.

In particular, the analyzer is defined by a recursive formula, and I gather
that the synthesizer effectively becomes an oscillator bank. So, are
special numerical techniques required to implement this, in order to avoid
the build-up of round-off noise over time?

Ethan
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FIR blog post & interactive demo

2020-03-19 Thread Ethan Fenn
As long as we're going off the rails...

This provoked me into learning something new:
https://stackoverflow.com/questions/24177503/how-does-the-c-preprocessor-handle-circular-dependencies

So interestingly those two #define's together would have no effect!
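
A quick sketch of why (per the linked answer; the rule is C11 6.10.3.4):

#define analog digital
#define digital analog

/* "analog" expands to "digital", which rescanning expands to "analog";
 * at that point "analog" is already being expanded, and a macro is not
 * re-expanded inside its own expansion ("blue paint"), so it stops.
 * Net result: analog -> analog and digital -> digital - no effect. */
int analog;   /* after preprocessing: int analog;  */
int digital;  /* after preprocessing: int digital; */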

-Ethan



On Thu, Mar 19, 2020 at 7:34 AM STEFFAN DIEDRICHSEN 
wrote:

> #define analog digital
> #define digital analog
>
> and now read again ….
>
>
> Best,
>
> Steffan
>
>
> > On 19.03.2020|KW12, at 12:31, Theo Verelst  wrote:
> >
> > Maybe a side remark, interesting nevertheless: filtering in the
> > digital domain, as compared with good ol' analog electronic filters,
> > isn't the same under any of the important interpretations of sampled
> > signals being put through a regular digital-to-analog converter, by
> > and large regardless of the sampled data and its known properties
> > offered to the digital filter.
> >
> > So, reconstructing the digital simulation of an analog filter into an
> > electronic signal, through either a (theoretically, or near-) perfect
> > reconstruction DAC or an ordinary DAC with any of the widely used
> > limited-interval oversampled FIR or IIR simplified "reconstruction"
> > filters, isn't going to yield a perfect equivalent of a normal,
> > phase-shift-based electronic (or mechanical) filter. Maybe
> > unfortunately, but it's only an approximation, and no theoretically
> > pleasing mathematical derivation of filter properties is going to
> > change that.
> >
> > It is possible to construct digital signals about which certain
> > givens are hard-known, such that a given DAC will 'reconstruct' them
> > into - or simply produce - an output signal approaching a certain
> > engineered ideal to any degree of accuracy. In general, though, the
> > signal between samples can only be known through perfect
> > reconstruction filtering (taking infinite time and resources), and
> > for the DACs used in studio and consumer equipment, the digital
> > signal feeding them should be thoroughly pre-conditioned, so that
> > their very limited reconstruction filtering still approximates
> > certain output-signal ideals to the required degree of accuracy.
> >
> > Including even a modest filter in that picture isn't easy!
> >
> > Theo V.
> > ___
> > dupswapdrop: music-dsp mailing list
> > music-dsp@music.columbia.edu
> > https://lists.columbia.edu/mailman/listinfo/music-dsp
> >
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] Computational complexity of common DSP algorithms

2020-03-19 Thread Dario Sanfilippo
Hello, list.

I would like to compare the efficiency of some feature extraction
algorithms based on FFT with some original time-domain algorithms used to
measure the same perceptual characteristics.

I believe that the time complexity of the FFT is O(n log n); would you perhaps
have a list or reference to a paper that shows the time complexity of
common DSP systems such as a 1-pole filter?

If simply comparing two algorithms by the number of operations needed to
compute a sample, would you include delays in filters as an operation? I'm
just wondering as some papers about FFT only include real multiplications
and additions as operations.

Thanks for your help,
Dario
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FIR blog post & interactive demo

2020-03-19 Thread STEFFAN DIEDRICHSEN
#define analog digital
#define digital analog

and now read again ….


Best,

Steffan


> On 19.03.2020|KW12, at 12:31, Theo Verelst  wrote:
> 
> Maybe a side remark, interesting nevertheless: filtering in the
> digital domain, as compared with good ol' analog electronic filters,
> isn't the same under any of the important interpretations of sampled
> signals being put through a regular digital-to-analog converter, by
> and large regardless of the sampled data and its known properties
> offered to the digital filter.
>
> So, reconstructing the digital simulation of an analog filter into an
> electronic signal, through either a (theoretically, or near-) perfect
> reconstruction DAC or an ordinary DAC with any of the widely used
> limited-interval oversampled FIR or IIR simplified "reconstruction"
> filters, isn't going to yield a perfect equivalent of a normal,
> phase-shift-based electronic (or mechanical) filter. Maybe
> unfortunately, but it's only an approximation, and no theoretically
> pleasing mathematical derivation of filter properties is going to
> change that.
>
> It is possible to construct digital signals about which certain
> givens are hard-known, such that a given DAC will 'reconstruct' them
> into - or simply produce - an output signal approaching a certain
> engineered ideal to any degree of accuracy. In general, though, the
> signal between samples can only be known through perfect
> reconstruction filtering (taking infinite time and resources), and
> for the DACs used in studio and consumer equipment, the digital
> signal feeding them should be thoroughly pre-conditioned, so that
> their very limited reconstruction filtering still approximates
> certain output-signal ideals to the required degree of accuracy.
>
> Including even a modest filter in that picture isn't easy!
>
> Theo V.
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
> 

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] FIR blog post & interactive demo

2020-03-19 Thread Theo Verelst

Maybe a side remark, interesting nevertheless: filtering in the digital
domain, as compared with good ol' analog electronic filters, isn't the
same under any of the important interpretations of sampled signals being
put through a regular digital-to-analog converter, by and large
regardless of the sampled data and its known properties offered to the
digital filter.

So, reconstructing the digital simulation of an analog filter into an
electronic signal, through either a (theoretically, or near-) perfect
reconstruction DAC or an ordinary DAC with any of the widely used
limited-interval oversampled FIR or IIR simplified "reconstruction"
filters, isn't going to yield a perfect equivalent of a normal,
phase-shift-based electronic (or mechanical) filter. Maybe
unfortunately, but it's only an approximation, and no theoretically
pleasing mathematical derivation of filter properties is going to change
that.

It is possible to construct digital signals about which certain givens
are hard-known, such that a given DAC will 'reconstruct' them into - or
simply produce - an output signal approaching a certain engineered ideal
to any degree of accuracy. In general, though, the signal between
samples can only be known through perfect reconstruction filtering
(taking infinite time and resources), and for the DACs used in studio
and consumer equipment, the digital signal feeding them should be
thoroughly pre-conditioned, so that their very limited reconstruction
filtering still approximates certain output-signal ideals to the
required degree of accuracy.

Including even a modest filter in that picture isn't easy!
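
For illustration, a truncated version of that perfect-reconstruction sum
(the function name and parameters are my own): the ideal interpolator
needs the full infinite sinc sum, and any finite width only
approximates it.

#include <math.h>

/* Evaluate the sampled signal x[0..len-1] at fractional time t (in
 * samples) with a sinc sum truncated to +/- half_width taps. Exact
 * reconstruction would need half_width -> infinity. */
double reconstruct(const double x[], int len, double t, int half_width)
{
    double y = 0.0;
    int n0 = (int)floor(t);
    for (int n = n0 - half_width; n <= n0 + half_width; n++) {
        if (n < 0 || n >= len)
            continue;
        double a = M_PI * (t - (double)n);
        y += x[n] * (fabs(a) < 1e-12 ? 1.0 : sin(a) / a);
    }
    return y;
}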

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp