Re: [linux-audio-dev] soft synth as a plugin

2002-10-28 Thread Tommi Ilmonen
Hi.

On 17 Oct 2002, nick wrote:

 Hi

 IMO running each synth in its own thread with many synths going is
 definitely _not_ the way forward. The host should definitely be the only
 process, much how VST, DXi, pro tools et. al. work.

 No, there is no real instrument or synth plugin API. but since my
 original post I have been brewing something up. its quite vst-like in
 some ways, but ive been wanting to make it more elegant before
 announcing it. It does, however, work, and is totally C++ based ATM. You
 just inherit the Instrument class and voila. (ok, so it got renamed
 along the way)

 Although in light of Tommi's post (mastajuuri) i have to reconsider
 working on my API. My only problem with mastajuuri is its dependance on
 QT (if im not mistaken), sorry.

You are not mistaken. I could change the structure of Mustajuuri to make
some kind of core system (just the DSP engine) with no Qt dependencies *IF*
Mustajuuri got enough additional developers to justify dropping all of Qt's
tools (and I do mean tools besides the graphical stuff: Unicode strings,
XML, directory management, date and time services, language translations,
etc.).

But unless there is a clear promise of this, there is little point in going
to the extra effort.

Mustajuuri is modular in the sense that you can run the DSP without
running a GUI. Or you can build alternate GUIs with other toolkits. Then
again, there is probably little point in making the Mustajuuri GUI with
anything but Qt, since Qt will be necessary anyhow. Since all Linux
vendors distribute (and usually install) Qt, it is a fairly safe library to
build on.


Tommi Ilmonen          Researcher
                       => http://www.hut.fi/u/tilmonen/
Linux/IRIX audio:      Mustajuuri
                       => http://www.tml.hut.fi/~tilmonen/mustajuuri/
3D audio/animation:    DIVA
                       => http://www.tml.hut.fi/Research/DIVA/




Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Josh Green
On Fri, 2002-10-18 at 11:16, Paul Davis wrote:
 but personally i find it much more desirable that the plugin provides
 effectively a widget which can be added to a container (provided by the
 host) rather than the plugin creating its own window. its just much
 neater..
 
 this makes no difference to the problem. whether the plugin creates a
 widget or a window, the integration of the event loop is still a
 central blocking problem.
 

Please enlighten me: what was the problem, then, with multiple GUI
toolkits? IIRC it had to do with not being able to integrate multiple
applications together very well (at least as far as the user interface goes).

It seems like something of this nature should be implemented at a layer
above the GUI toolkit, like the window manager for example. If there were
a way to lay out multiple X windows (interface elements) on a single
window and be able to edit their dimensions, manage placement, etc., the GUI
toolkit would not matter so much. This also seems to tie back
into the metadata/session storage discussion that was going on before. I
like the idea of multiple programs working together in a Linux
music/audio session, but I can foresee the nightmare of having dozens of
windows scattered all over the desktop.

 I thought Gtk(mm) is a more natural choice, since it is freely available
 on all platforms, voila.
 
 many Qt users feel otherwise. and wxWin is available too, along with
 several other kits.
 
 --p

GTK doesn't support as many platforms as I would like (although I use it
with Swami/Smurf). It doesn't natively support Mac OS X (yet?) and
probably will never support Mac OS classic. wxWin seems to kick ass in
this regard. I'm currently really liking the Glib/GObject/GTK+ layering.
All my non-GUI stuff is written using GObject, so it's GUI-independent
and I can do OO programming and stick with C (C++ has scared
me away a little, although I can understand the advantages). Cheers.
Josh Green




Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Steve Harris
On Fri, Oct 18, 2002 at 11:37:46 -0700, Josh Green wrote:
 Please enlighten me, what was the problem then with multiple GUI
 toolkits? IIRC it had to do with not being able to integrate multiple
 applications together very well (at least as far as the user interface).

It's to do with the X event loop. Only one toolkit can receive the events
and the others just sit there. Some toolkits can be made to play nicely
(e.g. GTK 1.2, IIRC), but not all, and maybe not in a standard way.

- Steve 



Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Steve Harris
On Fri, Oct 18, 2002 at 08:34:20 +0100, nick wrote:
 indeed, for a plugin soft-synth, it would only ever make sense to write
 it in c/c++ or assembler really, a question of speed. Are there really
 people who seriously want to write a synth in aynthing else?

Of course, plenty of MacOS audio software is written in Max/MSP (a
relative of pd). There is also some Windows software written in Sync -
both are graphical languages.

- Steve 



Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Steve Harris
On Fri, Oct 18, 2002 at 06:47:15 +, Stefan Nitschke wrote:
 -O3 with C is broken, i got an endless loop!

What gcc version? What flags did you use with C?

The test I did had the C code using a struct, and I used the . syntax for
C++ method calls, FWIW. I'll dig out the code in a minute.

I think it was loop unrolling that was crappy in C++.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Stefan Nitschke
On Fri, Oct 18, 2002 at 06:47:15 +, Stefan Nitschke wrote:
 -O3 with C is broken, i got an endless loop!

What gcc version? What flags did you use with C?


I used the gcc 3.2 that comes with SuSE 8.1.
Today I changed the initial values to a=0.5; b=0.001; x=0.1; and now
-O3 works!?? Here are the results:
C: -O3 -march=pentium
user    0m11.380s

C++: -O3 -march=pentium
user    0m11.960s

BTW, to my surprise, today I was able to use Ardour without it freezing my
machine as it always did last week. I didn't change the system and used the
same binaries. I have never seen such a random problem on a Linux box before.

> The test I did had the C code using a struct, and I used the . syntax for
> c++ method calls FWIW. I'l dig out the code in a min.
>
> I think it was loop unrolling that was crappy in c++.


That would be a bad thing.

- Stefan




Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread nick
On Sat, 2002-10-19 at 00:49, Jack O'Quin wrote:
 The main drawback to using C++ subroutine linkage in today's Linux
 environment is the unstable ABI.  This has caused many problems when
 trying to call binary libraries built using different compilers or
 sometimes even different compiler options.  Exception handling is a
 particularly thorny issue.
 
 Please note that these problems are deadly to any proposal for a
 standard plugin interface.
 
 We can all hope that ABI instabilities will eventually become a thing
 of the past, perhaps once GCC version 3 becomes widely adopted.  But,
 there is no proof that this will happen.  Right now, the Linux world
 is full of incompatible C++ compiler implementations.

Yes, this is the main problem, and I am aware of it (it's bitten me
enough times..), must have slipped my mind before..

But I guess I'm living in the hope that gcc will settle on a stable ABI
in the not too distant future. I really don't understand why this hasn't
happened a lot earlier (although I'm no compiler guru); it's really vital
to building a successful platform, I would have thought. This is the only
plus Windows/MacOS etc. have on their side..

Are different versions of gcc3 ABI-compatible?

cheers

-nick





Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Stefan Nitschke

are different versions of gcc3 ABI - compatible?


AFAIK all versions of gcc3 are compatible, except version 3.0, which had a
bug.

- Stefan




Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Steve Harris
On Sat, Oct 19, 2002 at 09:45:32 +, Stefan Nitschke wrote:
 The test I did had the C code using a struct, and I used the . syntax for
 c++ method calls FWIW. I'l dig out the code in a min.
 
 I think it was loop unrolling that was crappy in c++.
 
 That would be a bad thing.

But by the look of your results they have fixed it in 3.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread David Gerard Matthews
Steve Harris wrote:


On Fri, Oct 18, 2002 at 08:34:20 +0100, nick wrote:


indeed, for a plugin soft-synth, it would only ever make sense to write
it in c/c++ or assembler really, a question of speed. Are there really
people who seriously want to write a synth in aynthing else?



Of course, plenty of MacOS audio software is written in Max/MSP (a
relative of pd). There is also some Windows software written in Sync -
both are graphical languages.

- Steve 

Although all the actual DSP objects themselves in MSP are written in C,
just like in pd and jMax.  I don't know about Sync (I don't know much about
it), but I suspect that is the case there as well.  The point is that all
the actual number-crunching code is in fact written in C, even if you use a
very high-level, object-oriented graphical language like Max/MSP to build
the app.
OTOH, CLM is written in Lisp (hence the name), and some modern Lisps
(e.g. CMUCL) claim to be as fast as C for floats.  So I suppose you could
write DSP code in Lisp if you really wanted to.
-dgm






Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Steve Harris
On Sat, Oct 19, 2002 at 11:55:09 -0400, David Gerard Matthews wrote:
 Of course, plenty of MacOS audio software is written in Max/MSP (a
 relative of pd). There is also some Windows software written in Sync -
 both are graphical languages.
 
 Although all the actual DSP objects themselves in MSP are written in C, 
 just like
 in pd and jMax.  I don't know about Sync (don't know much about it), but 
 I suspect
 that to also be the case there as well.  The point is that all the 
 actual number
 crunching code is in fact written in C, even if you use a very 
 high-level object-oriented

While it's true that MSP objects are written in C, they are not all
high level. Sync is a compiler; it may well generate C as an intermediate
language, I don't know.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-19 Thread Likai Liu
STEFFL, ERIK (SBCSI) wrote:


> > erm, sorry, but why not use pointers
>
>
>  it's dangerous... null pointers, memory leaks etc. tendency is not to use
> pointers unless absolutely neccessary...


References in C++ are just pointers in a sugared form. Actually they are
the same thing in a slightly different syntax. There is no memory safety
in C/C++, so you end up having the same risks whether you use pointers
or references. The difference in performance probably lies in the
fact that the compiler understands your code better with references,
hence it can do better optimizations. My guess is that references are
easier to optimize because you don't do pointer arithmetic and
pointer type casting on references.
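
To make the point concrete, here is a minimal illustration (the names are
made up for this example, they come from none of the code discussed in the
thread). Both versions receive the address of the object; the reference
only hides the indirection syntax:

struct Voice { double phase; };

/* pointer parameter: caller passes &v, callee dereferences it */
void advance_ptr(Voice* v, double inc) { v->phase += inc; }

/* reference parameter: same address passed under the hood, but it cannot
   be null, rebound, or stepped through with pointer arithmetic */
void advance_ref(Voice& v, double inc) { v.phase  += inc; }

With g++ -O2 the generated code for the two is normally identical; any
measured difference is more likely to come from what the surrounding code
lets the optimizer prove (aliasing, inlining) than from the reference
itself.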

Stefan Nitschke wrote:

> > are different versions of gcc3 ABI - compatible?
>
>
> AFAIK all versions of gcc3 except version 3.0 which had a bug are
> compatible.

Starting with gcc 3.2, they changed the ABI again, so g++ 3.2 produces
code that is incompatible with 3.1, 3.0, etc. I hope the gcc 3.2 ABI will
remain stable from now on.

liulk




Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Steve Harris
On Thu, Oct 17, 2002 at 09:49:53 +0100, nick wrote:
 No, there is no real instrument or synth plugin API. but since my
 original post I have been brewing something up. its quite vst-like in
 some ways, but ive been wanting to make it more elegant before
 announcing it. It does, however, work, and is totally C++ based ATM. You
 just inherit the Instrument class and voila. (ok, so it got renamed

Hmmm... My experiments with C++, DSP code and gcc (recent 2.96) did not
turn out very well. For some reason the optimiser totally chokes on C++
code. I only tried one routine, and I'm no C++ expert, so it's possible I
screwed something up, but it did not look encouraging. I will revisit this
and also try gcc3, which has much better C++ support, IIRC.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Steve Harris
On Thu, Oct 17, 2002 at 05:00:39 -0400, Paul Davis wrote:
 i think you need to scan back a year or 18 months in the archives to
 where we measured this. the context switch under linux can be
 extremely quick - on the order of 20-50 usecs on a PII-450, and is not

Do we know if this is getting better or worse? My experience of large-cache
Xeons is that they context switch very slowly (compared with PIIs of the
same generation). What's the switch time on P4s and Athlon XPs like?

 this is why JACK is designed in the way that it is, and why it
 (theoretically) allows for both in-process and out-of-process
 plugins. this allows programmers to choose which model they want to
 use. i predict that any API that forces the programmer to use a
 particular toolkit will fail. JACK's problem in this arena is that its

Which raises another important point: IMHO any softsynth API that wasn't
JACK-backed would have to have some pretty good reasons.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Stefan Nitschke

From: Steve Harris [EMAIL PROTECTED]

Hmmm... My experiments with c++, dsp code and gcc (recent 2.96) did not
turn out very well. For some reason the optimiser totaly chokes on c++
code. I only tried one routine, and I'm no c++ expert, so its possible I
screwed something up, but it did not look encouraging. I will revisit this
and also try gcc3, which has much better c++ support IIRC.

- Steve


I think gcc is in general not the best choice when you want to have highly
optimized code. I have had no problems with C++ so far. You should avoid
using pointers whenever possible and use references instead. RTSynth is
written in C++ and it performs quite well, I think...

- Stefan





Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Steve Harris
On Fri, Oct 18, 2002 at 12:44:56 +, Stefan Nitschke wrote:
 Hmmm... My experiments with c++, dsp code and gcc (recent 2.96) did not
 turn out very well. For some reason the optimiser totaly chokes on c++
 code. I only tried one routine, and I'm no c++ expert, so its possible I
 screwed something up, but it did not look encouraging. I will revisit this
 and also try gcc3, which has much better c++ support IIRC.
 
 I think gcc is in general not the best choice when you want to have highly 
 optimized code. I had no problems with C++ so far. You should avoid to use 
 pointers when ever possible and use references instead. RTSynth is written 
 in C++ and it performs quite well i think...

gcc3 actually does a good job with C; gcc2 was not so good.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Tim Goetze
Steve Harris wrote:

Hmmm... My experiments with c++, dsp code and gcc (recent 2.96) did not
turn out very well. For some reason the optimiser totaly chokes on c++
code. I only tried one routine, and I'm no c++ expert, so its possible I
screwed something up, but it did not look encouraging. I will revisit this
and also try gcc3, which has much better c++ support IIRC.

from my experience the contrary is true. last time i checked the
assembly code produced by -O3 i found nothing to object to. that
was g++ 2.95 compiling drawing routines on rgb buffers (just ints, 
no floats, but comparable to audio dsp in a way).

i admit i designed the code around what i expected the optimizer
to do to it, so yes, maybe you need a little experience to get it
right. 

tim




Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Steve Harris
On Fri, Oct 18, 2002 at 01:39:13 +0200, Tim Goetze wrote:
 Hmmm... My experiments with c++, dsp code and gcc (recent 2.96) did not
 turn out very well. For some reason the optimiser totaly chokes on c++
 code. I only tried one routine, and I'm no c++ expert, so its possible I
 screwed something up, but it did not look encouraging. I will revisit this
 and also try gcc3, which has much better c++ support IIRC.
 
 from my experience the contrary is true. last time i checked the
 assembly code produced by -O3 i found nothing to object to. that
 was g++ 2.95 compiling drawing routines on rgb buffers (just ints, 
 no floats, but comparable to audio dsp in a way).

I was trying with float lowpass filters, written in an OO style (so I
expected the C++ to be faster).

The assembler output from the C++ was obviously inefficient, and I
checked that all the inlining was taking place. The equivalent C was much
better.

- Steve



Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread nick

 I think gcc is in general not the best choice when you want to have highly 
 optimized code. I had no problems with C++ so far. You should avoid to use 
 pointers when ever possible and use references instead. RTSynth is written 
 in C++ and it performs quite well i think...
 
 - Stefan

erm, sorry, but why not use pointers?

-nick





Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread nick

 Until we have such instrument plugin API, what is the 
 right way to implement the the system
 (30 softsynths working together)
 with what we have 
 I mean a bunch of software synths /dev/midi -> /dev/dsp
 
 Can I use these together right now?

oh yeah, you need to use ALSA though, I think you'll find. Use the
virmidi driver to run loads of OSS-based softsynths, although using
ALSA-native synths is more desirable..

 Is there a right way to control them all via 
 a single sequencer and to get their output
 into one place?

yep
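
A rough sketch of what the ALSA-native input side can look like: the synth
opens a sequencer client and exposes a writable port that a sequencer
application can then connect to (for example with aconnect). Error handling
is omitted, and the client and port names are made up for illustration.

#include <alsa/asoundlib.h>

int main()
{
    /* open a sequencer client for input and give it a name */
    snd_seq_t* seq = 0;
    snd_seq_open(&seq, "default", SND_SEQ_OPEN_INPUT, 0);
    snd_seq_set_client_name(seq, "softsynth");

    /* a writable port that sequencers can subscribe to */
    snd_seq_create_simple_port(seq, "midi_in",
                               SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
                               SND_SEQ_PORT_TYPE_APPLICATION);

    for (;;) {                              /* this would live in a dedicated MIDI thread */
        snd_seq_event_t* ev = 0;
        snd_seq_event_input(seq, &ev);      /* blocks until an event arrives */
        if (ev && ev->type == SND_SEQ_EVENT_NOTEON) {
            /* hand ev->data.note.note / velocity over to the audio thread,
               e.g. through a lock-free queue */
        }
    }
}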

-nick





Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread nick
 This discussion is open!
 
 the discussion is several years old :)

and it doesn't look set to end anytime soon ;-)
 
 you managed to touch upon the central problem in your penultimate
 sentence, apparently without realizing the depth of the problem.
 
 if a synth comes with a GUI, then the issue of toolkit compatibility
 rears its ugly and essentially insoluble head once again. you can't
 put GTK based code into a Qt application, or vice versa. this also
 fails with any combination of toolkits, whether they are fltk, xforms,
 motif etc. etc.
 
 if the synth doesn't come with a GUI, but runs in the same process as
 the host, then every synth has to have some kind of inter-process
 control protocol to enable a GUI to control it.

but personally I find it much more desirable that the plugin provide,
effectively, a widget which can be added to a container (provided by the
host) rather than the plugin creating its own window. It's just much
neater..

I thought Gtk(mm) would be a more natural choice, since it is freely
available on all platforms, voila.

 these are deep problems that arise from the lack of a single toolkit
 on linux (and unix in general). 

:-/

 --p

-nick






RE: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread STEFFL, ERIK (SBCSI)
 -Original Message-
 From: nick [mailto:nixx;nixx.org.uk]
 
 > > I think gcc is in general not the best choice when you want to have
 > > highly optimized code. I had no problems with C++ so far. You should
 > > avoid to use pointers when ever possible and use references instead.
 > > RTSynth is written in C++ and it performs quite well i think...
 > >
 > > - Stefan
 >
 > erm, sorry, but why not use pointers?

  it's dangerous... null pointers, memory leaks, etc. The tendency is not to
use pointers unless absolutely necessary...

  as for the context above, I don't think it has anything to do with
performance (it should be the same).

erik



Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Paul Davis
> but personally i find it much more desirable that the plugin provides
> effectively a widget which can be added to a container (provided by the
> host) rather than the plugin creating its own window. its just much
> neater..

this makes no difference to the problem. whether the plugin creates a
widget or a window, the integration of the event loop is still a
central blocking problem.

> I thought Gtk(mm) is a more natural choice, since it is freely available
> on all platforms, voila.

many Qt users feel otherwise. and wxWin is available too, along with
several other kits.

--p



Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Stefan Nitschke

erm, sorry, but why not use pointers?



Just out of curiosity I made a benchmark test between C and C++ with
gcc3. I don't have a clue about x86 assembler, so I made a measurement instead.

Here is the C code (not really useful, as real code would need a
struct and a pointer operation to call the filter() function) and the
C++ code.
Both simulate a low-pass filter and are compiled with:
 gcc -O2 -march=pentium -o filter filter.xx

-O3 with C is broken, I got an endless loop!
-
double x1,y1,a,b;

const double filter(const double x)
{
 register double y;
 y  = a*(x + x1) - b*y1;
 x1  = x;
 y1  = y;
 return y;
}

int main()
{
 double x=1;
 int i;
 x1 = y1 = 0;
 a  = b  = 0.5;
 for (i=0; i < 10; ++i) {   /* loop count as archived; the timings below imply far more iterations */
	x = filter(x);
 }
}
-
class LowPass {
public:
 LowPass() { x1 = y1 = 0; a = b = 0.5; };
 ~LowPass() {};
 const double filter(const double x);
private:
 double x1,y1,a,b;
};
inline const double LowPass::filter(const double x)
{
 register double y;
 y  = a*(x + x1) - b*y1;
 x1  = x;
 y1  = y;
 return y;
}

int main()
{
 //LowPass* LP = new LowPass();
 LowPass LP;
 double  x=1;

 for (int i=0; i < 10; ++i) {   /* loop count as archived */
    //x = LP->filter(x);
    x = LP.filter(x);
 }
}
-

The results on my AthlonXP machine are:

C++ with member:
real    0m11.847s
user    0m11.850s
sys     0m0.000s

C++ with new() and pointer:
real    0m12.337s
user    0m12.330s
sys     0m0.000s

C:
real    0m16.673s
user    0m16.670s
sys     0m0.000s


Well, I will stay with pointerless C++ :-)

- Stefan






Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread nick
On Thu, 2002-10-17 at 22:00, Paul Davis wrote:
 thus guaranteeing that no instruments can be written in other
 languages. for all the mistakes the GTK+ crew made, their design to
 use C as the base language so as to allow for other languages to
 provide wrappers was a far-sighted and wise choice. OTOH, i will
 concede that the real-time nature of most synthesis would tend to rule
 out most of the languages of interest.

I'd like to pick all your brains on this one in particular.

Indeed, for a plugin soft-synth it would only ever make sense to write
it in C/C++ or assembler really, as a question of speed. Are there really
people who seriously want to write a synth in anything else?

And I don't see how enforcing C++ is an issue: it doesn't force you into
a particular style of programming. You can just as easily write C-like
code in C++, but not the other way round.

Basically, your instrument would be a C++ class, but whatever you do
_inside_ that class is up to you.. no need to use objects inside it.
And basically it just cleans up all the code. I'm really having trouble
seeing the drawbacks (other than that one needs a C++ compiler installed,
and the fact that C++ projects take longer to compile). Oh, and the C++
class could always include (link to) an assembler version of your code..
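
As a purely hypothetical sketch of what such an API could look like: the
Instrument name comes from this thread, but every method, type and
parameter below is invented for illustration and is not nick's actual
interface.

class Instrument {
public:
    virtual ~Instrument() {}
    /* render nframes samples into out (mono here, for brevity) */
    virtual void process(float* out, unsigned long nframes) = 0;
    /* note events delivered by the host's sequencer/MIDI layer */
    virtual void noteOn(int note, int velocity) = 0;
    virtual void noteOff(int note) = 0;
};

/* a plugin author would "just inherit the Instrument class": */
class MySynth : public Instrument {
public:
    void process(float* out, unsigned long nframes)
    {
        for (unsigned long i = 0; i < nframes; ++i)
            out[i] = 0.0f;      /* the actual DSP (C-like, asm, whatever) goes here */
    }
    void noteOn(int note, int velocity) { /* start a voice */ }
    void noteOff(int note)              { /* release it */ }
};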

The other plugin environments are using C++, and I feel that a similar
approach is definitely advantageous when people from other platforms may
consider writing for/porting to a Linux-compatible plugin standard.

-Nick






Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Tim Goetze
Stefan Nitschke wrote:


erm, sorry, but why not use pointers?


Just out of couriosity i made a benchmark test between C and C++ with
gcc3. I dont have a clue abour x86 assembler so i made a measurement.

Here is the C code (not realy useful as real code would have a need for a
struct and a pointer operation to call the filter() function) and the
C++ code.
Both simulate a low pass filter and are compiled with:
  gcc -O2 -march=pentium -o filter filter.xx
[...]
C++ with member:
real0m11.847s
user0m11.850s
sys 0m0.000s

C++ with new() and pointer:
real0m12.337s
user0m12.330s
sys 0m0.000s

C:
real0m16.673s
user0m16.670s
sys 0m0.000s

my interpretations:

c++ sans new() might be quicker because of better cache 
locality (the class instance is just a local stack var,
while with new() it is somewhere on the heap in another
memory page).

i don't think reference and pointer access make the 
difference, after all the internal representation should
be the same. granted, new() is a lot slower than a local 
class on the stack but your code only allocates once.

have you checked whether the optimizer inlined the C
function call? it looks like it didn't.
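
One way to check is to compile with gcc -O2 -S and look for a "call filter"
in the assembly. A guess at why the call might survive: the C filter() has
external linkage and is not declared inline, so gcc will not necessarily
fold it away at -O2. The variant below (a suggestion only, not part of the
original posting) gives it internal linkage and asks for inlining:

/* same filter, but with internal linkage so gcc may inline it at -O2 */
static double x1, y1, a, b;

static inline double filter(const double x)
{
    double y = a * (x + x1) - b * y1;
    x1 = x;
    y1 = y;
    return y;
}

/* verify with:  gcc -O2 -S filter.c   (no "call filter" should remain) */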

tim




Re: [linux-audio-dev] soft synth as a plugin

2002-10-18 Thread Jack O'Quin

 On Thu, 2002-10-17 at 22:00, Paul Davis wrote:
  thus guaranteeing that no instruments can be written in other
  languages. for all the mistakes the GTK+ crew made, their design to
  use C as the base language so as to allow for other languages to
  provide wrappers was a far-sighted and wise choice. OTOH, i will
  concede that the real-time nature of most synthesis would tend to rule
  out most of the languages of interest.

nick [EMAIL PROTECTED] writes:
 Basically, your instrument would be a c++ class, but whatever you do
 _inside_ that class is up to you.. no need to use objects inside that.
 And basically it just cleans up all the code. I'm really having trouble
 seeing the drawbacks (other than one needs a c++ compiler installed, and
 the fact that c++ projects take longer to compile). oh, and the c++
 class could always include (link to) an assembler version of your code..

The main drawback to using C++ subroutine linkage in today's Linux
environment is the unstable ABI.  This has caused many problems when
trying to call binary libraries built using different compilers or
sometimes even different compiler options.  Exception handling is a
particularly thorny issue.

The result is a debugging nightmare.  Using incompatibly compiled C++
libraries may cause random SEGV's, which are often very hard to debug.
The compilers don't seem to leave enough information in the binaries
they produce for maintainers to determine exactly what ABI options
were used.

The Ardour project struggled so much with this problem that Paul Davis
eventually decided to distribute (almost) all the C++ libraries he
uses in source, statically linking them to ensure that they are always
compiled identically.  This draconian solution was not adopted for
casual reasons.

Please note that these problems are deadly to any proposal for a
standard plugin interface.

We can all hope that ABI instabilities will eventually become a thing
of the past, perhaps once GCC version 3 becomes widely adopted.  But,
there is no proof that this will happen.  Right now, the Linux world
is full of incompatible C++ compiler implementations.

Regards,
-- 
  Jack O'Quin
  Austin, Texas, USA




Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread nikodimka

Guys,

This answer appeared just after I decided to ask the very same question.

Is it true that there is no _common_ instrument or synth plugin API on linux?

Is it true that there is no similar kind of medium for out-of-process instruments?

I see that there are some kinds of possible plugin APIs:
-- MusE's LADSPA extensions
-- mustajuuri plugin
-- maybe there's some more (MAIA? OX?)
-- I remember Juan Linietsky working on binding a sequencer with softsynths,
   but I don't remember hearing anything about the results

So can anyone _please_ answer:

What is the right way to use multiple (e.g. thirty)
softsynths together simultaneously with one host?
I mean working completely inside my computer,
with just one (or even no) MIDI keyboard as input,
so all the synthesis, mixing and processing goes on inside,
and one audio channel is sent out to a sound card.


thanks,
nikodimka


===8 Tommi Ilmonen wrote: ===8=

Hi.

Sorry to come in very late. The Mustajuuri plugin interface includes all
the bits you need. In fact I already have two synthesizer engines under
the hood.

With Mustajuuri you can write the synth as a plugin and the host is only
responsible for delivering the control messages to it.

Alternatively you could write a new voice type for the Mustajuuri synth,
which can lead to smaller overhead ... or not, depending on what you are
after.

http://www.tml.hut.fi/~tilmonen/mustajuuri/

On 3 Jul 2002, nick wrote:

 Hi all

 I've been scratching my head for a while now, planning out how im going
 to write amSynthe (aka amSynth2)

 Ideally i don't want to be touching low-level stuff again, and it makes
 sense to write it as a plugin for some host. Obviously in the Win/Mac
 world theres VST/DXi/whatever - but that doesnt really concern me as I
 dont use em ;) I just want to make my music on my OS of choice..

 Now somebody please put me straight here - as far as I can see, there's
 LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
 impression that these only deal with the audio data - only half what I
 need for a synth. Or can LADSPA deal with MIDI?

 So how should I go about it?
 Is it acceptable to (for example) read the midi events from the ALSA
 sequencer in the audio callback? My gut instinct is no, no, no!

 Even if that's feasible with the alsa sequencer, it still has problems -
 say the host wanted to render the `song' to an audio file - using the
 sequencer surely it would have to be done in real time?

 I just want to get on, write amSynthe and then everyone can enjoy it,
 but this hurdle is bigger than it seems.

 Thanks,
 Nick


 _
 Do You Yahoo!?
 Get your free yahoo.com address at http://mail.yahoo.com


Tommi Ilmonen Researcher
= http://www.hut.fi/u/tilmonen/
  Linux/IRIX audio: Mustajuuri
= http://www.tml.hut.fi/~tilmonen/mustajuuri/
3D audio/animation: DIVA
= http://www.tml.hut.fi/Research/DIVA/ 




Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread nick
Hi

IMO running each synth in its own thread with many synths going is
definitely _not_ the way forward. The host should definitely be the only
process, much as VST, DXi, Pro Tools et al. work.

No, there is no real instrument or synth plugin API. But since my
original post I have been brewing something up. It's quite VST-like in
some ways, but I've been wanting to make it more elegant before
announcing it. It does, however, work, and is totally C++ based ATM. You
just inherit the Instrument class and voila. (OK, so it got renamed
along the way.)

Although in light of Tommi's post (Mustajuuri) I have to reconsider
working on my API. My only problem with Mustajuuri is its dependence on
Qt (if I'm not mistaken), sorry.

If people would like to see my work-in-progress, I could definitely use some
feedback ;-)


This discussion is open!


-Nick

On Thu, 2002-10-17 at 20:53, nikodimka wrote:
 
 Guys,
 
 This answer appeared just after I decided to ask the very same question.
 
 Is it true that there is no _common_ instrument or synth plugin API on linux?
 
 Is it true that there is no the same kind of media for out-of-process instruments?
 
 I see that there are some kinds of possible plugin APIs:
 -- MusE's LADSPA extensions
 -- mustajuuri plugin
 -- maybe there's some more (MAIA? OX?)
 -- I remember Juan Linietsky working on binding sequencer with softsynths
But I dont remember to hear anything about the results
 
 So can anyone _please_ answer:
 
 What is the right way to use the multiple (e.g. thirty)
 softsynths together simultaneously with one host?
 I mean working completely inside my computer
 to have just one (or even none) midi keyboard as input. 
 So all the synthesys, mixing, processing goes on inside.
 And to send one audio channel out to any sound card.
 
 
 thanks,
 nikodimka
 
 
 ===8 Tommi Ilmonen wrote: ===8=
 
 Hi.
 
 Sorry to come in very late. The Mustajuuri plugin interface includes all
 the bits you need. In fact I already have two synthesizer engines under
 the hood.
 
 With Mustajuuri you can write the synth as a plugin and the host is only
 responsible for delivering the control messages to it.
 
 Alternatively you could write a new voice type for the Mustajuuri synth,
 which can lead to smaller overhead ... or not, depending on what you are
 after.
 
 http://www.tml.hut.fi/~tilmonen/mustajuuri/
 
 On 3 Jul 2002, nick wrote:
 
  Hi all
 
  I've been scratching my head for a while now, planning out how im going
  to write amSynthe (aka amSynth2)
 
  Ideally i don't want to be touching low-level stuff again, and it makes
  sense to write it as a plugin for some host. Obviously in the Win/Mac
  world theres VST/DXi/whatever - but that doesnt really concern me as I
  dont use em ;) I just want to make my music on my OS of choice..
 
  Now somebody please put me straight here - as far as I can see, there's
  LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
  impression that these only deal with the audio data - only half what I
  need for a synth. Or can LADSPA deal with MIDI?
 
  So how should I go about it?
  Is it acceptable to (for example) read the midi events from the ALSA
  sequencer in the audio callback? My gut instinct is no, no, no!
 
  Even if that's feasible with the alsa sequencer, it still has problems -
  say the host wanted to render the `song' to an audio file - using the
  sequencer surely it would have to be done in real time?
 
  I just want to get on, write amSynthe and then everyone can enjoy it,
  but this hurdle is bigger than it seems.
 
  Thanks,
  Nick
 
 
  _
  Do You Yahoo!?
  Get your free yahoo.com address at http://mail.yahoo.com
 
 
 Tommi Ilmonen Researcher
 = http://www.hut.fi/u/tilmonen/
   Linux/IRIX audio: Mustajuuri
 = http://www.tml.hut.fi/~tilmonen/mustajuuri/
 3D audio/animation: DIVA
 = http://www.tml.hut.fi/Research/DIVA/ 
 
 __
 Do you Yahoo!?
 Faith Hill - Exclusive Performances, Videos  More
 http://faith.yahoo.com






Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread Paul Davis
> IMO running each synth in its own thread with many synths going is
> definitely _not_ the way forward. The host should definitely be the only
> process, much how VST, DXi, pro tools et. al. work.

i think you need to scan back a year or 18 months in the archives to
where we measured this. the context switch under linux can be
extremely quick - on the order of 20-50 usecs on a PII-450, and is not
necessarily a problem. switching between contexts is massively more
expensive under windows and macos (at least pre-X), and hence the
multi-process design is not and cannot be an option for them at this time.

> No, there is no real instrument or synth plugin API. but since my
> original post I have been brewing something up. its quite vst-like in
> some ways, but ive been wanting to make it more elegant before
> announcing it. It does, however, work, and is totally C++ based ATM. You
> just inherit the Instrument class and voila. (ok, so it got renamed
> along the way)

thus guaranteeing that no instruments can be written in other
languages. for all the mistakes the GTK+ crew made, their design to
use C as the base language so as to allow for other languages to
provide wrappers was a far-sighted and wise choice. OTOH, i will
concede that the real-time nature of most synthesis would tend to rule
out most of the languages of interest.
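
For what it's worth, the two positions aren't mutually exclusive: a plugin
can be implemented in C++ internally while exposing only a C-callable entry
point, which is roughly what LADSPA does with its C descriptor. A
hypothetical sketch (every name below is invented for illustration):

/* C-callable boundary; only these symbols are visible to the host,
   so no C++ name mangling or class layout crosses the plugin ABI. */
extern "C" {
    typedef void* synth_handle;
    synth_handle synth_new(double sample_rate);
    void         synth_run(synth_handle h, float* out, unsigned long nframes);
    void         synth_free(synth_handle h);
}

/* internal C++ implementation, free to use classes, templates, ... */
class MySynth {
public:
    explicit MySynth(double sr) : sr_(sr) {}
    void run(float* out, unsigned long n)
    {
        for (unsigned long i = 0; i < n; ++i)
            out[i] = 0.0f;              /* real synthesis would go here */
    }
private:
    double sr_;
};

extern "C" synth_handle synth_new(double sr) { return new MySynth(sr); }
extern "C" void synth_run(synth_handle h, float* out, unsigned long n)
{
    static_cast<MySynth*>(h)->run(out, n);
}
extern "C" void synth_free(synth_handle h) { delete static_cast<MySynth*>(h); }

Exceptions must still not escape across the C boundary, and the plugin has
to be internally consistent about which C++ runtime it links, but wrappers
in other languages only ever see the C functions.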

> Although in light of Tommi's post (mastajuuri) i have to reconsider
> working on my API. My only problem with mastajuuri is its dependance on
> QT (if im not mistaken), sorry.
>
> If people would like to my work-in-progress, i could definitely use some
> feedback ;-)
>
>
> This discussion is open!

the discussion is several years old :)

you managed to touch upon the central problem in your penultimate
sentence, apparently without realizing the depth of the problem.

if a synth comes with a GUI, then the issue of toolkit compatibility
rears its ugly and essentially insoluble head once again. you can't
put GTK based code into a Qt application, or vice versa. this also
fails with any combination of toolkits, whether they are fltk, xforms,
motif etc. etc.

if the synth doesn't come with a GUI, but runs in the same process as
the host, then every synth has to have some kind of inter-process
control protocol to enable a GUI to control it.

these are deep problems that arise from the lack of a single toolkit
on linux (and unix in general). 

this is why JACK is designed in the way that it is, and why it
(theoretically) allows for both in-process and out-of-process
plugins. this allows programmers to choose which model they want to
use. i predict that any API that forces the programmer to use a
particular toolkit will fail. JACK's problem in this arena is that its
designed for sharing audio data, and does not provide any method for
sharing MIDI or some other protocol to control synthesis parameters.
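
For the audio side, a minimal out-of-process JACK client looks roughly like
the sketch below (written against the current JACK C API; the exact calls
have changed a little over time, so treat it as illustrative). It also shows
the limitation just mentioned: the only thing being shared is audio, there
is no MIDI or control path in sight.

#include <jack/jack.h>
#include <stdlib.h>
#include <unistd.h>

static jack_port_t* out_port = 0;

static int process(jack_nframes_t nframes, void* /*arg*/)
{
    float* out = (float*) jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; ++i)
        out[i] = 0.0f;                       /* synth rendering goes here */
    return 0;
}

int main()
{
    jack_client_t* client = jack_client_open("softsynth", JackNullOption, 0);
    if (!client)
        return EXIT_FAILURE;

    jack_set_process_callback(client, process, 0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_activate(client);

    sleep(30);                               /* the GUI / MIDI thread would run here instead */

    jack_client_close(client);
    return 0;
}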

besides, if SC for linux is in the offing, who needs any other
synthesizers anyway? :))

--p



Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread John Lazzaro
 Paul Davis [EMAIL PROTECTED] writes

 switching between contexts is massively more
 expensive under windows and macos (at least pre-X),

As a data point, I ran two different sa.c files (the audio
engines sfront produces) set up as softsynths using different
patches under OS X (using CoreAudio + CoreMIDI, not the
AudioUnits API), and it worked -- two patches doubling together,
both looking at the same MIDI stream from my keyboard, both
creating different audio outputs into CoreAudio that were
mixed together by the HAL. So, for N=2 at least, OS X seems
to handle N low-latency softsynth apps in different processes
OK ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread Peter L Jones
On Thursday 17 Oct 2002 21:49, nick wrote:
 Hi

 IMO running each synth in its own thread with many synths going is
 definitely _not_ the way forward. The host should definitely be the only
 process, much how VST, DXi, pro tools et. al. work.

 No, there is no real instrument or synth plugin API. but since my
 original post I have been brewing something up. its quite vst-like in
 some ways, but ive been wanting to make it more elegant before
 announcing it. It does, however, work, and is totally C++ based ATM. You
 just inherit the Instrument class and voila. (ok, so it got renamed
 along the way)

 Although in light of Tommi's post (mastajuuri) i have to reconsider
 working on my API. My only problem with mastajuuri is its dependance on
 QT (if im not mistaken), sorry.

 If people would like to my work-in-progress, i could definitely use some
 feedback ;-)


 This discussion is open!


 -Nick

Mmm, I'm a bit of a TiMidity++ fan, in case anyone's forgotten... so...

I'd want a soft-synth to track MIDI Time Codes or something on each synth 
channel so it could sync to an external source, as well as being in sync with 
itself internally.

I'd want to be able to generate trigger events locally - I also want to be 
able to generate events remotely.  I'd prefer MIDI for this (as I've just 
bought a MIDI controller keyboard ;-) ).

I'd expect the system to be componentised so there would be event source plug 
ins, user interface plug ins, audio sink plug ins, data (sample) source plug 
ins.  Plus the scheduling engine.

I think that's the basic tool set that I'd be looking for.  :-)

Applications should then just be a matter of picking the plugins and gluing 
them together...

(And keep the latency introduced by the engine to zero...)

Ta.

-- Peter




Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread nikodimka
--- Paul Davis wrote:
 IMO running each synth in its own thread with many synths going is
 definitely _not_ the way forward. The host should definitely be the only
 process, much how VST, DXi, pro tools et. al. work.
 
 i think you need to scan back a year or 18 months in the archives to
 where we measured this. the context switch under linux can be
 extremely quick - on the order of 20-50 usecs on a PII-450, and is not
 necessarily a problem. switching between contexts is massively more
 expensive under windows and macos (at least pre-X), and hence the
 multi-process design is not and cannot be an option for them at this time.

But could something change through 18 months? Sigh...

 
 No, there is no real instrument or synth plugin API. but since my
 original post I have been brewing something up. its quite vst-like in
 some ways, but ive been wanting to make it more elegant before
 announcing it. It does, however, work, and is totally C++ based ATM. You
 just inherit the Instrument class and voila. (ok, so it got renamed
 along the way)
 
 thus guaranteeing that no instruments can be written in other
 languages. for all the mistakes the GTK+ crew made, their design to
 use C as the base language so as to allow for other languages to
 provide wrappers was a far-sighted and wise choice. OTOH, i will
 concede that the real-time nature of most synthesis would tend to rule
 out most of the languages of interest.

Yes. I was asking mostly about a C API.

 
 Although in light of Tommi's post (mastajuuri) i have to reconsider
 working on my API. My only problem with mastajuuri is its dependance on
 QT (if im not mistaken), sorry.
 
 If people would like to my work-in-progress, i could definitely use some
 feedback ;-)

But anyway, I would really love to look at what you have.
Is this only a specification, or do you have a reference implementation?

 
 
 This discussion is open!
 
 the discussion is several years old :)

But could something change in several years? Sigh...
Still no API.
Wasn't the great idea behind LADSPA that
a simple API _now_ is better than a several-years-old discussion?

 
 you managed to touch upon the central problem in your penultimate
 sentence, apparently without realizing the depth of the problem.
 
 if a synth comes with a GUI, then the issue of toolkit compatibility
 rears its ugly and essentially insoluble head once again. you can't
 put GTK based code into a Qt application, or vice versa. this also
 fails with any combination of toolkits, whether they are fltk, xforms,
 motif etc. etc.
 
 if the synth doesn't come with a GUI, but runs in the same process as
 the host, then every synth has to have some kind of inter-process
 control protocol to enable a GUI to control it.

So what? Isn't that just two or three more API functions?

Damn! I have never written any audio application,
and I don't want to create the ninth or tenth would-be winner
instrument/synth API for Linux.
So I see no reason for me to say okay, I can do it.

So I ask all of you guys once again:

What should we (I) use?

We already have some possibilities:
-- MusE LADSPA extensions,
-- nick can finish his work,
-- MAIA,
-- Mustajuuri's API.


Why don't we use what we have?

 
 these are deep problems that arise from the lack of a single toolkit
 on linux (and unix in general).
 
 this is why JACK is designed in the way that it is, and why it
 (theoretically) allows for both in-process and out-of-process
 plugins. this allows programmers to choose which model they want to
 use. i predict that any API that forces the programmer to use a
 particular toolkit will fail. JACK's problem in this arena is that its
 designed for sharing audio data, and does not provide any method for
 sharing MIDI or some other protocol to control synthesis parameters.
 
 besides, if SC for linux is in the offing, who needs any other
 synthesizers anyway? :))

Which SC are you talking about?



 
 --p 
 





Re: [linux-audio-dev] soft synth as a plugin

2002-10-17 Thread nikodimka

Oh yeah I forgot!

And there's another question I _realy_ want to know the answer for:

Until we have such an instrument plugin API, what is the
right way to implement the system
(30 softsynths working together)
with what we have?
I mean a bunch of software synths, /dev/midi -> /dev/dsp.

Can I use these together right now?

Is there a right way to control them all via 
a single sequencer and to get their output
into one place?

nikodimka

--- nick wrote:
 Hi
 
 IMO running each synth in its own thread with many synths going is
 definitely _not_ the way forward. The host should definitely be the only
 process, much how VST, DXi, pro tools et. al. work.
 
 No, there is no real instrument or synth plugin API. but since my
 original post I have been brewing something up. its quite vst-like in
 some ways, but ive been wanting to make it more elegant before
 announcing it. It does, however, work, and is totally C++ based ATM. You
 just inherit the Instrument class and voila. (ok, so it got renamed
 along the way)
 
 Although in light of Tommi's post (mastajuuri) i have to reconsider
 working on my API. My only problem with mastajuuri is its dependance on
 QT (if im not mistaken), sorry.
 
 If people would like to my work-in-progress, i could definitely use some
 feedback ;-)
 
 This discussion is open!
 
 -Nick
 
 On Thu, 2002-10-17 at 20:53, nikodimka wrote:
 
  Guys,
 
  This answer appeared just after I decided to ask the very same question.
 
  Is it true that there is no _common_ instrument or synth plugin API on linux?
 
  Is it true that there is no the same kind of media for out-of-process instruments?
 
  I see that there are some kinds of possible plugin APIs:
  -- MusE's LADSPA extensions
  -- mustajuuri plugin
  -- maybe there's some more (MAIA? OX?)
  -- I remember Juan Linietsky working on binding sequencer with softsynths
  But I dont remember to hear anything about the results
 
  So can anyone _please_ answer:
 
  What is the right way to use the multiple (e.g. thirty)
  softsynths together simultaneously with one host?
  I mean working completely inside my computer
  to have just one (or even none) midi keyboard as input.
  So all the synthesys, mixing, processing goes on inside.
  And to send one audio channel out to any sound card.
 
 
  thanks,
  nikodimka
 
 
  ===8 Tommi Ilmonen wrote: ===8=
 
  Hi.
 
  Sorry to come in very late. The Mustajuuri plugin interface includes all
  the bits you need. In fact I already have two synthesizer engines under
  the hood.
 
  With Mustajuuri you can write the synth as a plugin and the host is only
  responsible for delivering the control messages to it.
 
  Alternatively you could write a new voice type for the Mustajuuri synth,
  which can lead to smaller overhead ... or not, depending on what you are
  after.
 
  http://www.tml.hut.fi/~tilmonen/mustajuuri/
 
  On 3 Jul 2002, nick wrote:
 
   Hi all
  
   I've been scratching my head for a while now, planning out how im going
   to write amSynthe (aka amSynth2)
  
   Ideally i don't want to be touching low-level stuff again, and it makes
   sense to write it as a plugin for some host. Obviously in the Win/Mac
   world theres VST/DXi/whatever - but that doesnt really concern me as I
   dont use em ;) I just want to make my music on my OS of choice..
  
   Now somebody please put me straight here - as far as I can see, there's
   LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
   impression that these only deal with the audio data - only half what I
   need for a synth. Or can LADSPA deal with MIDI?
  
   So how should I go about it?
   Is it acceptable to (for example) read the midi events from the ALSA
   sequencer in the audio callback? My gut instinct is no, no, no!
  
   Even if that's feasible with the alsa sequencer, it still has problems -
   say the host wanted to render the `song' to an audio file - using the
   sequencer surely it would have to be done in real time?
  
   I just want to get on, write amSynthe and then everyone can enjoy it,
   but this hurdle is bigger than it seems.
  
   Thanks,
   Nick
  
  
   _
   Do You Yahoo!?
   Get your free yahoo.com address at http://mail.yahoo.com
  
 
  Tommi Ilmonen Researcher
  = http://www.hut.fi/u/tilmonen/
  Linux/IRIX audio: Mustajuuri
  = http://www.tml.hut.fi/~tilmonen/mustajuuri/
  3D audio/animation: DIVA
  = http://www.tml.hut.fi/Research/DIVA/
 
  __
  Do you Yahoo!?
  Faith Hill - Exclusive Performances, Videos  More
  http://faith.yahoo.com
 
 __
 Do You Yahoo!?
 Everything you'll ever need on one web page
 from News and Sport to Email and Music Charts
 http://uk.my.yahoo.com
 
 
 
 __
 Do you Yahoo!?
 Faith Hill - Exclusive Performances, Videos  More
 http://faith.yahoo.com



Re: [linux-audio-dev] soft synth as a plugin

2002-07-04 Thread Tim Goetze

nick wrote:

Now somebody please put me straight here - as far as I can see, there's
LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
impression that these only deal with the audio data - only half what I
need for a synth. Or can LADSPA deal with MIDI? 

[...]

I just want to get on, write amSynthe and then everyone can enjoy it,
but this hurdle is bigger than it seems.

for a softsynth, i'd prefer a design that puts the core functionality
in a library, because from there on, you can integrate it with *all*
audio applications -- if your synth sounds good, people will write the
necessary wrappers for their favourite audio app themselves :)

for a standalone version to work with other audio applications your
best choice for audio is using jack i think. of course to receive
midi, the alsa sequencer comes to mind first. personally i like raw
midi byte streams as a means of connecting via files/sockets, too. 

(unfortunately it will probably take another decade until we have
 a ladspa equivalent for realtime event transmittance ;)

maybe the already mentioned iiwusynth can serve as a design model for
your project; i found it very versatile.

cheers,

tim




Re: [linux-audio-dev] soft synth as a plugin

2002-07-03 Thread Paul Davis

> Now somebody please put me straight here - as far as I can see, there's
> LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
> impression that these only deal with the audio data - only half what I
> need for a synth. Or can LADSPA deal with MIDI?
>
> So how should I go about it?
> Is it acceptable to (for example) read the midi events from the ALSA
> sequencer in the audio callback? My gut instinct is no, no, no!
>
> Even if that's feasible with the alsa sequencer, it still has problems -
> say the host wanted to render the `song' to an audio file - using the
> sequencer surely it would have to be done in real time?
>
> I just want to get on, write amSynthe and then everyone can enjoy it,
> but this hurdle is bigger than it seems.

You handle MIDI I/O in its own thread. You use a lock-free
buffer/fifo/queue between this thread and the one that executes the
process() (or LADSPA's run()) callback. The queue should contain an
abstract description of recent MIDI events, preferably in some format
not tied to MIDI because of its poor handling of pitch. The process()
callback will then pay attention to the events in the
queue/fifo/buffer and change the synthesis accordingly.

The MIDI I/O can be done any way you want: ALSA sequencer, ALSA
rawmidi, or raw file descriptors.
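
A sketch of the lock-free single-producer/single-consumer queue described
above, using present-day C++ atomics (which postdate this thread). The
event struct is deliberately not raw MIDI, with pitch carried as a
frequency; all names are illustrative.

#include <atomic>
#include <cstddef>

struct SynthEvent {
    enum Type { NoteOn, NoteOff, Control } type;
    float pitch_hz;   /* abstract pitch in Hz, not a MIDI note number */
    float value;      /* velocity or controller value, 0..1 */
};

/* single producer (MIDI thread) / single consumer (audio callback) ring;
   N must be a power of two */
template <std::size_t N>
class EventFifo {
public:
    bool push(const SynthEvent& e)           /* MIDI thread only */
    {
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) & (N - 1);
        if (next == read_.load(std::memory_order_acquire))
            return false;                    /* full: drop or count overruns */
        buf_[w] = e;
        write_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(SynthEvent& e)                  /* process() callback only */
    {
        std::size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                    /* empty */
        e = buf_[r];
        read_.store((r + 1) & (N - 1), std::memory_order_release);
        return true;
    }
private:
    SynthEvent buf_[N];
    std::atomic<std::size_t> write_{0};
    std::atomic<std::size_t> read_{0};
};

The process() callback drains the queue at the start of each block and
applies the events; since each index is written by only one thread, no
locks are needed and the audio thread never blocks.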

If its not obvious, I've thought about this a great deal, but I have
no time to code it :)

I'd recommend JACK for this. LADSPA is a bit too limited to support
this because of the GUI issues, unless you use LCP, which might
still not be enough and is not supported by any notable LADSPA hosts
at this time (mea culpa).

--p



Re: [linux-audio-dev] soft synth as a plugin

2002-07-03 Thread Bob Ham

On Thu, 2002-07-04 at 00:06, Paul Davis wrote:

 You handle MIDI I/O in its own thread. You use a lock-free
 buffer/fifo/queue between this thread and the one that executes
 process() (or LADSPA's run()) callback. The queue should contain an
 abstract description of recent MIDI events, preferably in some format
 not tied to MIDI because of its poor handling of pitch. The process()
 callback will then pay attention to the events in the
 queue/fifo/buffer, and change the synthesis accordingly.
 
 The MIDI I/O can be done anyway you want: ALSA sequencer, ALSA
 rawmidi, or raw file descriptors. 
 
 If its not obvious, I've thought about this a great deal, but I have
 no time to code it :)
 
 I'd recommend JACK for this. LADSPA is a bit too limited to support
 this because of the GUI issues, unless you use LCP, which might be
 still not be enough and is not supported by any notable LADSPA hosts
 at this time (mea culpa).

Scarey..

http://pkl.net/~node/misc/sequencer-plans-1-scan-small.png
http://pkl.net/~node/misc/sequencer-plans-2-scan-small.png


-- 
Bob Ham: [EMAIL PROTECTED]  http://pkl.net/~node/

My music: http://mp3.com/obelisk_uk
GNU Hurd: http://hurd.gnu.org/