Re: [music-dsp] Introducing: Axoloti

2014-12-08 Thread Johannes Taelman
Thanks, Eric!
Not only could other hardware be supported; it could also become a native
software plug-in composer.
I already referred to the STM32F4Discovery; that is not my own kit. Early
Axoloti development started on that kit, so making it compatible (with
inherent limitations such as no audio ADC and no SD card) did not require
much work.
Fragmentation is a concern if it becomes a jungle of all sorts of hardware
early on.

Is there any hardware you are thinking of in particular?

On Sat, Dec 6, 2014 at 6:11 PM, Eric Brombaugh ebrombau...@cox.net wrote:

 Nice work! Seems like it would be fairly straightforward to add support
 for additional hardware platforms. Given that it's open source code, what's
 your feeling about this being used as a front-end for other hardware
 besides your own?

 Eric


 On 12/05/2014 01:05 PM, Johannes Taelman wrote:

 Hi,

 I'm pleased to announce the open source release of Axoloti.

 Axoloti is a platform for sketching music-DSP algorithms running on
 standalone hardware built around an ARM Cortex M4F microcontroller.
 Axoloti has a graphical patcher that generates C++ code, and also manages
 compilation and upload to the microcontroller. The GUI runs on Windows,
 OSX
 and Linux.

 It's still in the alpha stage, with many improvements left to be made at all
 layers.
 But it's already complete enough to support a variety of techniques and
 applications.

 Axoloti Core boards are not available currently, but with a few documented
 changes in the code, the editor runs with the STM32F4Discovery kit.

 Website: www.axoloti.be
 Source code: https://github.com/JohannesTaelman/axoloti

 You're invited to comment, test, report bugs, contribute, etc...

 thanks,
 Johannes Taelman
 --
 dupswapdrop -- the music-dsp mailing list and website:
 subscription info, FAQ, source code archive, list archive, book reviews,
 dsp links
 http://music.columbia.edu/cmc/music-dsp
 http://music.columbia.edu/mailman/listinfo/music-dsp




Re: [music-dsp] FFT and harmonic distortion (short note)

2014-12-08 Thread robert bristow-johnson

On 12/8/14 11:52 AM, Theo Verelst wrote:

Hi,

Having seen a lot of threads here lately on basic subjects, I 
think it might be interesting for some to think about yet another not 
so highly spun mathematical subject directly relevant to certain DSP 
activities (I use it too).


It's a common notion to take it that if we take a sequence of samples 
(presuming an equidistant and linearly sampled signal with sufficiently 
accurate digital sample representation) 


what other presumption is there?  i, personally, have never seen a 
sequence of samples of audio or music that was not equidistant and 
linearly sampled.  it's what we call uniform sampling.



we can apply the well known Fast Fourier Transform to them, to get a 
set of frequency+phase tuples. Of course there's a correlation between 
the magnitude of the transformed frequency components and frequencies 
present in the signal we have sampled (presume for the moment we 
honored the Nyquist criterion). However, if we want to be accurate, or 
claim generality like present in the (continuous, infinite) Fourier 
transform, what I'm pointing at is that without precautions, it isn't 
a good idea to presume the FFT transformed spectrum is the same, or 
even close to the Fourier spectrum of the sampled signal. If sampling 
(and if needed reconstruction) is accurate, the frequencies present in 
the digital version of an analog signal should of course be exactly 
the same as in the analog signal that was sampled.


Let's look at simple examples of the errors that can take place. 
Here's a decent (simple) example, 8 harmonics of a square wave that 
fits exactly in the FFT interval, i.e. if we take an FFT length (in 
terms of samples) of 256 we make sure the fundamental period of the 
square wave corresponds to 256 samples, too:


   http://www.theover.org/Musicdspexas/fft_square8.png

To a certain accuracy, the measured frequency components (shown at the 
bottom of the figure) will have the same magnitude as the components 
summed together to make for the (above) waveform.


Now say we take a saw wave (with all harmonics) at a frequency 
which in the sample domain doesn't correspond to a multiple of the 
FFT interval; then we are going to get a *wrong* frequency graph:


   http://www.theover.org/Musicdspexas/fft_sawpl20perc.png
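[Both figures are easy to reproduce with a short numpy sketch. This is an illustration of the same setup, not Theo's actual script: it uses a 256-point FFT and a saw-like 1/k harmonic series for both cases, and an assumed 8.2-cycles-per-window mistuning stands in for the misaligned case. Harmonics that land exactly on FFT bins stay confined to those bins; a mistuned fundamental smears energy across the whole spectrum.]

```python
import numpy as np

N = 256
n = np.arange(N)

def harmonics(f0, K=8):
    """First K harmonics with 1/k amplitudes; f0 in cycles per window."""
    return sum(np.sin(2 * np.pi * f0 * k * n / N) / k for k in range(1, K + 1))

x_exact = harmonics(8.0)   # fundamental period divides the window exactly
x_off   = harmonics(8.2)   # period does not divide the window

X_exact = np.abs(np.fft.rfft(x_exact)) / N
X_off   = np.abs(np.fft.rfft(x_off)) / N

# Exact fit: all energy sits in bins 8, 16, ..., 64; everything else ~0.
print(np.count_nonzero(X_exact > 1e-6))   # -> 8
# Misaligned: sinc-shaped skirts leak into essentially every bin.
print(np.count_nonzero(X_off > 1e-6))     # far more than 8
```

[Nothing nonlinear happened between the two runs; only the alignment of the period with the analysis window changed.]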

dunno what you're getting at, Theo.  both graphics appear fully as 
expected to me.


besides

  1.  Uniform sampling (and the effects thereof)

it's about

  2.  Windowing (and the effects thereof)

and the

  3.  Periodic extension inherent to the DFT (and the effects thereof).


there is aliasing involved in the line spectra regarding the effects 
thereof.


not much else happening.


??

--

r b-j  r...@audioimagination.com

Imagination is more important than knowledge.





Re: [music-dsp] FFT and harmonic distortion (short note)

2014-12-08 Thread gwenhwyfaer
On 08/12/2014, robert bristow-johnson r...@audioimagination.com wrote:
 it's about

2.  Windowing (and the effects thereof)

 and the

3.  Periodic extension inherent to the DFT (and the effects thereof).

That's the key, it seems to me. Theo saw a sawtooth whose cycle length
doesn't match the FFT width, and complained that the spectrum looked
wrong. I saw a hard sync'd sawtooth wave of matching cycle length,
and apparently so did the FFT, because the spectrum looks just right
for that.

Duckrabbit.
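[The duckrabbit can be made concrete in a few lines of numpy. This is an illustrative sketch with an assumed 1.2-cycles-per-window figure, not Theo's exact parameters: periodically extending a window that holds a non-integer number of sawtooth cycles produces a mid-ramp reset at the seam, exactly the discontinuity of a hard-synced oscillator.]

```python
import numpy as np

N = 256
cycles = 1.2                             # period does not divide the window
t = np.arange(N) / N
saw = 2.0 * ((cycles * t) % 1.0) - 1.0   # ideal sawtooth, 1.2 cycles per window

# The DFT implicitly repeats the analysis window. Butting the window
# against a copy of itself exposes the seam the DFT "sees".
ext = np.concatenate([saw, saw])
jump = abs(ext[N] - ext[N - 1])          # mid-ramp reset: a hard-sync step
print(round(jump, 3))                    # -> 0.391
```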


Re: [music-dsp] FFT and harmonic distortion (short note)

2014-12-08 Thread Ethan Duni
what other presumption is there?  i, personally, have never seen a
sequence of
samples of audio or music that was not equidistant and linearly
sampled.  it's
 what we call uniform sampling.

Some of this new stuff in compressive sensing/sparse reconstruction
involves non-uniform sampling. Not that I've seen it used much in practice
in audio and music, and it would typically be done on top of
conventional sampling in those contexts anyway (and largely invisible to
the audio/music side of things, at least if done right).

dunno what you're getting at, Theo.  both graphics appear fully as
expected to me.

Yeah I'm at a loss as well. Isn't this stuff well-explained on Wikipedia,
and in every book that covers spectral analysis?

E





[music-dsp] LLVM or GCC for DSP Architectures

2014-12-08 Thread Stefan Sullivan
Hey music DSP folks,

I'm wondering if anybody knows much about using these open source compilers
to target various DSP architectures (e.g. SHARC, ARM, TI, etc). To be
honest I don't know much about the compilers/toolchains for these
architectures (they are mostly proprietary compilers, right?). I'm just
wondering if anybody has hooked the back-end of a more widely used
compiler up to these architectures.

The reason I ask is that I've done quite a bit of development lately
with C++ and template programming. I'm always struggling between being able
to develop more advanced, widely applicable audio code and being able to
target lower-level DSP architectures. I am assuming that the more advanced
feature set of C++11 (and eventually C++14) will be slower to appear in
these proprietary compilers.

Thanks all,
Stefan


Re: [music-dsp] FFT and harmonic distortion (short note)

2014-12-08 Thread Jerry

On Dec 8, 2014, at 2:33 PM, Ethan Duni ethan.d...@gmail.com wrote:

 Yeah I'm at a loss as well. Isn't this stuff well-explained on Wikipedia,
 and in every book that covers spectral analysis?

I think the OP is a bit of a beginner and that the usual generous response of 
this list would be helpful. Others' comments on leakage/windowing and 
aliasing are obviously correct, but maybe not tuned to the level of the OP.

Jerry



Re: [music-dsp] FFT and harmonic distortion (short note)

2014-12-08 Thread Sampo Syreeni

On 2014-12-08, Ethan Duni wrote:

dunno what you're getting at, Theo.  both graphics appear fully as 
expected to me.


Yeah I'm at a loss as well. Isn't this stuff well-explained on 
Wikipedia, and in every book that covers spectral analysis?


Well, not *too* well. The difference between spectral spillover and 
aliasing on the one hand, and nonlinearity on the other, is actually 
still a bit novel for most.


I think the main reason is that you can't really easily discern the 
separate effects from a common spectrogram by eye. Even the linear, 
fully invertible effects often just seem to royally fuck up the 
spectrum you see, and in the case of spectrally sparse, periodic 
waveforms, seemingly without any warning or consistency as the period 
drifts in and out of resonance with your analysis filter. Doubly so in the 
presence of noise/background and varying transform length/analysis 
bandwidth in the algorithm you used to derive your spectrogram, because 
it really is rather unintuitive what a progressively more matched filter 
does to your utility signal, against the floor.


After that even a proper theoretical background doesn't shield you from 
the prima facie reaction: what the fuck is this shit, now, something 
must have gone wrong, where is it.


But yeah, you ought to know about this stuff already. Theo at the very 
least.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


Re: [music-dsp] LLVM or GCC for DSP Architectures

2014-12-08 Thread Paul Stoffregen

On 12/08/2014 01:35 PM, Stefan Sullivan wrote:

Hey music DSP folks,

I'm wondering if anybody knows much about using these open source compilers
to compile to various DSP architectures (e.g. SHARC, ARM, TI, etc).


I have some experience with ARM Cortex-M4, using fixed point. Everything 
in this message is specific only to Cortex-M4, and might apply to the 
upcoming Cortex-M7, but it's unrelated to other processors.


ARM's marketing material makes a lot of carefully worded claims about 
DSP extensions which provide up to a 2X speed improvement.  That is 
true.  But many people mistakenly leap to the conclusion that there's a 
DSP co-processor or something resembling a DSP architecture in the 
chip.  In traditional DSP, you'd expect simultaneous fetching of data 
and coefficient, multiply-accumulate and loop counting.  Cortex-M4 has 
nothing like that.  It's still very much a traditional microprocessor 
where those operations are separate instructions.


The DSP extensions are designed for 16 bit fixed point data, which you 
pack into the native 32 bit memory and registers.  Much of the 2X 
speedup occurs in the loading and storing of data.  With ARM's 
traditional instructions, 16 bit data is sign extended into 32 bit 
registers on load, and those upper 16 bits are discarded when storing it 
back to memory.  Cortex-M4 also has an optimization in hardware where it 
detects successive instructions performing similar load or store and 
combines them into a single burst access on the bus, so the 2nd, 3rd, 
4th access take only a single cycle.  In traditional DSP, the 
architecture loads data in the same cycle as the math is performed.  At 
best, ARM's extension gets the loading overhead close to 0.5 cycles per 
16 bit word.


Actually using the DSP extensions requires keen awareness and planning 
of the ARM register usage.  As far as I know, the only way to cause gcc 
to use them is inline assembly, which is usually wrapped with inline 
functions or preprocessor macros.  ARM's marketing material makes a lot 
of claims about how only C programming is needed.  While that's 
technically true, given an already-written header file with the inline 
assembly (some commercial compilers have intrinsics which are 
basically the same thing), the honest truth is assembly code is 
involved.  Really leveraging these instructions requires careful 
planning of how many registers you'll use to bring in packed pairs of 
samples, how many will hold your intermediate calculations, loop 
counters, pointers, and other overhead.  If you exceed the 12 or 13 
available ARM 32 bit registers, the compiler needs to spill variables 
onto the stack, which ruins any speed benefit you might hope to achieve 
by going to so much effort to use the DSP extensions.


Another feature of DSP fixed point architectures is automatic saturation 
(clipping) during addition.  This too is usually done with a separate 
instruction on ARM.  They do provide a couple add instructions with 
automatic saturation, but pervasive support for saturation during all 
calculations is not present.


Looping overhead is also still an issue.  Typically, you would compose 
your code to process 4, 8, or 16 samples in each loop iteration.  That 
lets you use the pipeline burst to bring the packed samples into 2, 3 
or 4 registers.  Then you'd unroll your code, placing 4, 8 or 16 copies 
of whatever math you're doing, and store the results to the output 
buffer, taking advantage of the pipeline burst for writing.  Then you'd 
suffer looping overhead, which isn't so bad if you're processing 8 or 16 
samples per iteration.


I've written a lot about code structure, planning of data packing, and 
register allocation, so far without any specifics of the 
actual operations, for a good reason.  Really using the DSP extension is 
like this.  You spend almost all the time (or at least I do) planning 
this stuff, so you can actually take advantage of the narrow but useful 
features those instructions provide.


The actual instructions are documented in the ARM v7-M reference manual 
(ARM document DDI0403D), starting on page 133, section A4.4.3.


Probably the most interesting instructions are SMLALD & SMLALDX. Each 
performs two 16x16 signed multiplies and adds both products to a signed 64 
bit accumulator.   The 4 numbers to multiply have to be packed into 2 
normal 32 bit ARM registers.  SMLALD multiplies the lower halves together 
and the upper halves together, and SMLALDX multiplies the lower half in 
one register with the upper half in the other, and vice versa.  No other 
combinations are possible, so you must arrange your data appropriately 
if you want to get 2 multiply-accumulates in a single cycle.  But there 
is a version that subtracts one of the products.  There's also versions 
that accumulate to only 32 bits, which give you one extra precious 32 
bit register, in cases where you're sure overflow isn't an issue 
(remember, these don't automatically saturate if your 

Re: [music-dsp] LLVM or GCC for DSP Architectures

2014-12-08 Thread Sham Beam
Thanks for the info Paul. I've been considering using Teensy for a 
DIY project for a while now. Just haven't found the time to start yet.



Shannon



Re: [music-dsp] FFT and harmonic distortion (short note)

2014-12-08 Thread gwenhwyfaer
On 08/12/2014, Jerry lancebo...@qwest.net wrote:

 I think the OP is a bit of a beginner and that the usual generous response
 of this list would be helpful.

That's, um, not how Theo represents himself.