Re: [music-dsp] Low cost DSPs

2017-02-16 Thread Andrew McPherson
On Wed, Feb 15, 2017 at 11:02 PM, Pablo Riera  wrote:

> I am collecting information on how to accomplish DSP projects (mainly
> synths, only output) with low cost or makers boards.
> 
> -Arduino Uno, 16-bit PWM, 25 usd
> -Arduino Due, 12-bit DAC, 50 usd
> -Beaglebone + Bela, 16-bit DAC, 170 usd
> -Raspberry Pi + Behringer UCA222, 16-bit DAC, 70 usd
> -16-bit DAC module, 5 usd (and some fast board, Due?)
> 
> I am not sure about the maximum sampling rate for each case, but I think they
> all reach 44.1 kHz (maybe not the Arduino Uno). It would be nice to run
> code at higher bit depths and rates but with DAC output at CD quality.
> 
> Does anyone have experience with any of these combinations (or others) and
> could share comments on technical issues, ease of use, latency, audio
> quality (noise, max sampling rate), etc.?
> 
> Thanks in advance. Great list by the way.
> Pablo

I'm one of the developers of Bela and can supply a bit more info. It has a 
16-bit stereo audio ADC and DAC, plus 8x each of 16-bit DC-coupled analog I/O.

Low latency was one of our core design principles for Bela. We use a Xenomai 
Linux environment to get audio buffer sizes as small as 2 samples, producing 
round-trip audio latency as low as 1ms (or down to 100us using the non-audio 
analog I/Os). The design also samples the analog and digital I/Os at audio 
rates, synchronously with the audio clock. You can read a paper about how the 
environment works here:

http://www.eecs.qmul.ac.uk/~andrewm/mcpherson_aes2015.pdf

(More info at http://bela.io and https://github.com/BelaPlatform/Bela/wiki.)

The CPU on the BeagleBone Black isn't as powerful as the Raspberry Pi 3's; on the other 
hand, the hard real-time audio environment means that you can run with much 
smaller latencies without glitches, as long as your code is fast enough to run 
in real time. At the moment we support C/C++ and Pd with experimental support 
for a few other languages and environments (SuperCollider, FAUST, Pyo).

Andrew

--
Andrew McPherson
Reader in Digital Media
Centre for Digital Music
School of Electronic Engineering and Computer Science
Queen Mary, University of London
Mile End Road
London E1 4NS

Phone: +44 (0)20 7882 5774
Email: a.mcpher...@qmul.ac.uk
Web: http://www.eecs.qmul.ac.uk/~andrewm
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Postdoc vacancy in digital musical instrument design at QMUL

2016-10-27 Thread Andrew McPherson
Hi all,

The Centre for Digital Music (C4DM) at Queen Mary University of London is 
recruiting a postdoctoral researcher in digital musical instrument design. DSP 
skills and musical proficiency would be particularly useful for this position. 
Full details below; feel free to contact me with any questions.

***

2-year postdoc in digital musical instrument design and evaluation. Deadline 18 
November; target start date January to March 2017

Complete details: https://webapps2.is.qmul.ac.uk/jobs/job.action?jobRef=QMUL9914

The responsibilities of the role are to develop sensor systems and data 
processing strategies for capturing the interaction between a performer and 
instrument, to work with musicians in controlled experiments to assess the 
usability of new instrument designs, and to publish the results in appropriate 
venues. The candidate will be expected to collaborate on a multidisciplinary 
team with other researchers and postgraduate students.

The ideal candidate will have experience in either or both of digital signal 
processing or electronic hardware design, combined with experience creating 
digital musical instruments or other interactive systems in artistic contexts. 
The candidate should have a PhD or equivalent degree in one of the following: 
Computer Science, Electrical/Electronic Engineering, Psychology, or Music 
(performance/composition/musicology with an electronic element). Musical 
proficiency (any genre/instrument) at an intermediate level or higher and 
experience conducting studies with human participants are also desirable.

This post is part of the EPSRC project “Design for Virtuosity: Modelling and 
Supporting Expertise in Digital Musical Interaction”, based in the Centre for 
Digital Music (C4DM) in the School of Electronic Engineering and Computer 
Science, Queen Mary University of London. 

Please submit applications via the link above, or contact Andrew McPherson 
(a.mcpher...@qmul.ac.uk) with any informal queries.



Re: [music-dsp] Bela low-latency audio platform

2016-03-22 Thread Andrew McPherson
> okay, can someone tell me if this is right?:
> this Bela is this board from TI: 
> http://www.ti.com/tool/beaglebk with an IDE from http://faust.grame.fr/about/
>  or Bela *is* that IDE that implements a realization of FAUST.
> and you have to run this with a Linux machine,
> but it's agnostic regarding Fedora vs. Ubuntu.

Not exactly. Bela runs on the BeagleBone Black, but there's a lot more to it. 
On the hardware side, there is a custom cape with a stereo audio codec, a 
16-bit 8ch. DC-coupled ADC, a 16-bit 8ch. DC-coupled DAC, and 2x 1W speaker 
amplifiers.

On the software side, the core of Bela is a custom environment using the 
Xenomai real-time Linux extensions and the BeagleBone's PRU to pass data 
straight to the hardware rather than using the usual Linux kernel drivers. This 
is written specifically to the capabilities of the BeagleBone and the ICs on 
the cape to get maximum performance. Because Xenomai lets you interrupt the 
kernel itself for real-time tasks, we can get buffer sizes down to 2 audio 
samples which you'd never get working without underruns on a standard OS. The 
Bela core code also samples the ADC, DAC and GPIOs at audio rates (or 
fractions/multiples thereof) to give you jitter-free alignment between the 
different signals.

Those things, plus a fairly straightforward C/C++ API, are the core of Bela's 
audio/sensor capabilities. Then there are a bunch of things that build on top 
of that core. One is the node.js IDE which runs onboard (i.e. browser-based but 
not cloud-based) -- we consider this part of Bela as we have developed it and 
we release it with the board, but it is not the only way to build code for the 
board. There are also terminal-based build scripts and external third-party 
tools. The external tools include Faust, which has its own cloud-based IDE to 
produce C++ code or binaries for Bela, and the Heavy Audio Tools from Enzien 
Audio to convert Pd-vanilla patches into C code for Bela. I expect we will add 
support for other third-party tools in the coming weeks. 

You can find the (pre-release) software and hardware designs here: 
http://bela.io/code/

Andrew



Re: [music-dsp] Bela low-latency audio platform

2016-03-22 Thread Andrew McPherson

> That looks really nice - just curious, what's the boot time of Bela?
> 
> Thanks, Tom Erbe


Hi Tom -- thanks! I just measured the boot time at around 25 seconds from power 
on to audio code running, and 35 seconds from power on to IDE loading in the 
browser. Based on the pattern of the LEDs, about 6 seconds of that looks to be 
the bootloader, before the kernel starts.

The Bela image is based on a Debian distribution for BeagleBone Black which we 
thus far haven't optimised specifically for boot time. I would think this could 
be cut down a fair bit with some paring back of drivers and processes and 
reordering of startup scripts. Even in the best circumstances though, an 
embedded Linux board is unlikely to match a microcontroller for instant-on.

> Sounds interesting. The website says something about "browser-based IDE
> built with Node.js". Can it be programmed normally in C or C++, using
> whichever editor I prefer, without the browser-based stuff?

Hi Johannes, just to add to what Giulio said, the browser-based IDE uses 
node.js but the audio code within it is still written in C/C++ and runs at 
native Xenomai priority. In fact you can build the exact same code via the 
browser or the external build scripts. Personally I use both -- the IDE more 
often when I want to mock something up quickly or work with the in-browser 
scope, and an external text editor plus command-line scripts when I want to 
work on a more complex project.

Andrew



[music-dsp] Bela low-latency audio platform

2016-03-22 Thread Andrew McPherson
Hi all,

I'd like to announce the upcoming release of Bela (http://bela.io), an embedded 
audio/sensor platform based on the BeagleBone Black which features extremely 
low latency (< 1ms from action to sound). I'm sure some of you will have seen 
this already-- it is a platform aimed specifically at real-time audio systems 
and musical instruments, so I think it will be of interest to this community.

I posted some details to this list in early February (see below), but since 
then we've launched a Kickstarter campaign to produce the hardware: 

http://bit.do/belakickstarter

In fact we're in the final 10 days of the campaign! I didn't post here when the 
campaign first launched, but by now it is fully funded, and more recently we 
reached our two stretch goals. That means we'll also be making some accessory 
boards: one to convert the analog I/Os into extra audio channels, the other to 
add support for 64 analog inputs.

Another recent update is that in addition to the C++ and Pd workflows, Bela is 
now supported by the Faust DSP language (http://faust.grame.fr). We're working 
on support for other environments as well and will post on the campaign page as 
features are added.

This is an open-source hardware/software project coming out of the Augmented 
Instruments Laboratory, part of the Centre for Digital Music at Queen Mary 
University of London. Our goal is to build a sustainable community of audio 
developers, musicians and researchers using Bela, and we are very interested in 
any thoughts or ideas. A couple weeks ago we started a discussion list for 
anyone interested:

http://lists.bela.io/listinfo.cgi/discussion-bela.io

Alternatively, feel free to drop us a line at i...@bela.io, which reaches all 
the members of the lab.

Best wishes,
Andrew

--
Andrew McPherson
Senior Lecturer in Digital Media
Centre for Digital Music
School of Electronic Engineering and Computer Science
Queen Mary University of London
http://www.eecs.qmul.ac.uk/~andrewm



> Bela uses the BeagleBone Black and a custom cape with stereo audio in/out, 8 
> channels each of 16-bit, DC-coupled analog I/O, and 16 GPIO pins. It uses 
> Xenomai Linux to run audio and sensor code at nearly bare metal priority, 
> which means that you can use buffer sizes as small as 2 audio samples and 
> achieve latencies under 1ms. (To be precise, it's about 1.0ms round-trip 
> using the audio codec, mainly because of the codec's internal filters, and 
> down to 100us round-trip using the DC-coupled ADC and DAC.)
> 
> With Bela we're trying to get the best of both worlds: the connectivity of a 
> Linux machine with the timing precision of a microcontroller. Xenomai is 
> great for this, because it can run the audio code in a hard real-time 
> environment where general system load won't lead to underruns, but it runs 
> alongside the OS for things like storage, networking and USB. To get this 
> kind of performance we don't use ALSA (or any kernel driver); instead, Bela 
> uses the BeagleBone's PRU to pass data to and from the audio codec (I2S), ADC 
> and DAC (SPI).  The tradeoff is that it is specific to particular hardware, 
> but in an embedded device that's not necessarily such a problem.
> 
> Another handy feature is that the analog and digital pins are sampled 
> synchronously with the audio, with basically no jitter. When using all 8 
> analog I/O channels plus audio, the analog sample rate is 22.05kHz; with 4 
> channels it's 44.1kHz or 2 channels at 88.2kHz.
> 
> For the past year or so, we have been developing Bela in the Augmented 
> Instruments Laboratory, part of the Centre for Digital Music at Queen Mary 
> University of London. It's an open-source project designed for creating 
> self-contained musical instruments and interactive audio systems. It's got a 
> browser-based IDE (all compiling done on the board) with an in-browser 
> oscilloscope. Separately, you can use Enzien Audio's Heavy Audio Tools 
> (http://enzienaudio.com) to compile Pd patches into optimised C code for the 
> Bela environment.
> 
> Here's a paper with some more info and performance metrics: 
> http://www.eecs.qmul.ac.uk/~andrewm/mcpherson_aes2015.pdf
> 
> And here you can find the code and hardware designs: http://bela.io/code




Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-07 Thread Andrew McPherson
Hi Scott and all,

You might also be interested in a low-latency embedded audio platform we are 
soon releasing called Bela: http://bela.io

Bela uses the BeagleBone Black and a custom cape with stereo audio in/out, 8 
channels each of 16-bit, DC-coupled analog I/O, and 16 GPIO pins. It uses 
Xenomai Linux to run audio and sensor code at nearly bare metal priority, which 
means that you can use buffer sizes as small as 2 audio samples and achieve 
latencies under 1ms. (To be precise, it's about 1.0ms round-trip using the 
audio codec, mainly because of the codec's internal filters, and down to 100us 
round-trip using the DC-coupled ADC and DAC.)

With Bela we're trying to get the best of both worlds: the connectivity of a 
Linux machine with the timing precision of a microcontroller. Xenomai is great 
for this, because it can run the audio code in a hard real-time environment 
where general system load won't lead to underruns, but it runs alongside the OS 
for things like storage, networking and USB. To get this kind of performance we 
don't use ALSA (or any kernel driver); instead, Bela uses the BeagleBone's PRU 
to pass data to and from the audio codec (I2S), ADC and DAC (SPI).  The 
tradeoff is that it is specific to particular hardware, but in an embedded 
device that's not necessarily such a problem.

Another handy feature is that the analog and digital pins are sampled 
synchronously with the audio, with basically no jitter. When using all 8 analog 
I/O channels plus audio, the analog sample rate is 22.05kHz; with 4 channels 
it's 44.1kHz or 2 channels at 88.2kHz.

For the past year or so, we have been developing Bela in the Augmented 
Instruments Laboratory, part of the Centre for Digital Music at Queen Mary 
University of London. It's an open-source project designed for creating 
self-contained musical instruments and interactive audio systems. It's got a 
browser-based IDE (all compiling done on the board) with an in-browser 
oscilloscope. Separately, you can use Enzien Audio's Heavy Audio Tools 
(http://enzienaudio.com) to compile Pd patches into optimised C code for the 
Bela environment.

Here's a paper with some more info and performance metrics: 
http://www.eecs.qmul.ac.uk/~andrewm/mcpherson_aes2015.pdf

And here you can find the code and hardware designs: http://bela.io/code

It's in an early public beta state at the moment, but later this month we're 
planning a Kickstarter campaign to support making more capes and building a 
larger user and developer community around it. I can share some more info on 
that later on, but I thought it was worth mentioning in this discussion since 
our goals seem to be quite similar.

Best wishes,
Andrew




Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-22 Thread Andrew McPherson
> One critical thing that I had to do was to remove the McBSP stuff from
> the Linux kernel and write a McBSP driver that runs on the DSP. Doing
> DMA on the audio data would have been impossible otherwise, and this is
> the critical thing for low latency.

That sounds great-- I'm very interested to see more details when you are ready 
to release it.  A DSP-side McBSP driver could have a lot of applications even 
beyond audio.

> This is not a nice solution because you lose the entire audio support
> from the Linux side, but well - everything has a cost.

In principle, I bet it would be possible to write a Linux driver creating a 
virtual audio device that passes the data to/from the DSP.  In that case you 
could have audio support from both ARM and DSP simultaneously.  As a side 
benefit, the DSP could perform post-processing on audio that originated from 
the Linux side.  For example, an active 3-way speaker crossover: the Linux 
process would see an ordinary stereo output, and the DSP would transparently 
filter the output for woofer, midrange, tweeter.

Andrew
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-20 Thread Andrew McPherson
For what it's worth, I've been poking at using the beagleboard for audio 
processing.  I'm working on a multichannel audio board (8 in, 16 out) for the 
beagle expansion port using the AD1938 ADC/DAC-- I have a prototype built and 
am trying to get the Linux drivers going.  Eventually I hope to communicate 
with the audio board directly from the C64x, which could allow very low-latency 
audio processing while using the ARM for less time-critical user interaction 
tasks.

I'm still in the early stages, but I'm hoping to produce the board in quantity 
once it works.  Of course, the beagleboard also has stereo in/out built in, and 
it ought to be possible to communicate between ARM and DSP to do all the heavy 
audio lifting on the DSP side.

Andrew

> Date: Wed, 20 Apr 2011 08:44:36 +0100
> From: Andy Farnell 
> To: music-dsp@music.columbia.edu
> Subject: Re: [music-dsp] good introductory microcontroller platform
>for audio tasks?
> [...]
> What I noticed about the Beagle Board the other day,
> which I had always thought of as "just another SBC" is
> that it has a C64 DSP on there just begging to be turned
> into an effect pedal or synth. Anyone gone that road
> with the Beagleboards yet?
