Hi,
A while back I asked for some help choosing a framework to support some synthesis code. I got some very useful replies and I've read a lot more since then. Below I've summarised what I know, in the hope that (1) someone will point out if I've made any terrible errors and (2) it might be useful to someone else.

1. LADSPA (www.ladspa.org) - a simple API for code that processes audio data (and, afaik, developed on this list). Quite a few programs can use plug-ins written to this API, including some that provide simple GUI support for the parameters. However, it is too simple to support synthesis, as it has no support for (Midi) events - it assumes the plug-in is a filter that runs continuously, rather than being triggered by some external action. It might be possible to work round this (eg using the change in a parameter to trigger a note - some hosts can translate Midi to a control value), but at face value the API is not really suitable for my needs.

2. LAAGA, JACK, MAIA etc (www.linuxdj.com/audio/lad/resourcesapi.php3) - different names for various future extensions to LADSPA that support the connection of higher-level components (and possibly events too). Nothing seems to be finalised or widely supported.

3. aRTs (www.arts-project.org) - a project that started as a synthesiser and now also appears to be the basis for audio in KDE. It seems to have both an API for synthesis and support for interconnecting higher-level components. There seem to be some questions about the design and performance, but I couldn't find any concrete figures (some code runs within the kernel, for some reason). Incidentally, this seems to be a completely different community from linux-audio-dev. Was there some horrible split that separated the two, or is it a Euro/USA thing?

4. cSound, PD, jMax (use google!) - dedicated languages/environments for sound synthesis. Live-performance oriented offspring of research projects.
jMax is Java based and therefore not yet as fast as PD or cSound. PD is more modern but less popular than cSound. They appear to take a more monolithic approach (to the code) than LADSPA and a more decentralised approach (to development) than aRTs.

5. SAOL/sfront (www.cs.berkeley.edu/~lazzaro/sa) - similar to (4), but embedded within MPEG-4. Very well documented, but with an unclear future (it depends on the success of MPEG-4). sfront compiles SAOL via C, but user coding is restricted (afaik, apart from drivers) to SAOL, which is effectively a simplified C (no explicit memory handling, array support, etc).

6. Cannibalise an existing synth (eg savannah.gnu.org/project/iiwusynth) - a good approach if the existing program is close to what you want, but inflexible otherwise (and less likely to be used by others than a plug-in, for example).

Those seem to be the main options. If a GUI is important, and the basic control values from LADSPA are insufficient, then you're restricted to aRTs, an existing synth, or one of the cSound bunch plus some additional code.

For me, LADSPA seems to be ruled out by the lack of events (also, there's neither GUI support nor, as far as I can see, support for string parameters, which makes opening external files for complex config, or data dumps for later display, cumbersome). cSound et al scare me - I've tried working with sprawling Linux projects before and there's always something that doesn't work. SAOL is very attractive, but I'm worried it's dying and I don't completely trust the language (it seems aimed more at supporting simple synthesis scripts than at complex numerical work, but that may just be my lack of knowledge).

So I'm left with aRTs or IIWUSynth (or similar). aRTs seems more general and headed for success - but I'm left wondering why I've seen no mention of it here over the last few weeks...

Cheers,
Andrew

-- 
http://www.acooke.org
