as long as we include the possibility of float having a compile-time
switch to mean either 32 or 64 bit floats, I agree. but only compile
time. there should be no support for any format other than whatever
float format was chosen at compile time.
This I don't like. If there are situations
so, i ask again, what's missing?
Can you show me:
1) a C++ header that shows classes and methods
the most recent version of the API is in C, and is intended to remain
that way. I showed you the header in the mail message. It is missing
mostly just the definition of a port and nframes_t. I will
yep. as described in the comment for audioengine_port_register(), if
the port type is not a builtin, then a buffer must be provided as the
4th argument. i should also have documented that PortIsMulti is
illegal for such ports (to ensure 1:1 connections at all times, for
reasons documented in
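to make that concrete, a call for a non-builtin port type would look something like this (only audioengine_port_register and PortIsMulti come from the message above; every other identifier is invented for illustration):

/* hypothetical: registering a port of a non-builtin type, so the
   caller must supply a buffer as the 4th argument.  PortIsMulti
   would be illegal for this port, guaranteeing 1:1 connections. */
port = audioengine_port_register (client, "midi_out_1",
                                  MY_MIDI_PORT_TYPE,  /* not a builtin */
                                  my_midi_buffer);    /* caller-supplied buffer */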
Can someone give me a reference to the pdf doc for LAAGA?
I would be interested to read up on the specification.
no such document exists. the discussions on the list are an attempt to
work it out. kai vehmanen has an excellent website that describes the
problem space we're working in, but he'll
the engine always makes the frame time available. that's the only sort
of time we can establish, but even that relies on having a client that
can tell the engine. in order:
How? I don't see it explicitly in the API. And what exactly is frame
time? How is it different from sample time?
its
Can you describe a situation where a client would want to meter
something other than its own input? If all input metering can be
handled by the owner of the input port, then there is no need for the
server to make this data available to non-owners.
The problem is that there may be more than one
I would say that for example a softsynth can (and therefore should)
use the in-process model, because it should have that separation
between the synth machinery and the GUI anyway; control messages
through MIDI or similar. (At least that's how I think a softsynth
should be designed, good thing
i had a couple of hours today to work on the multiprocess
audioengine. its now doing its basic tasks of starting a server on a
socket, accepting new connections from clients, waking periodically
from poll(2), telling its clients to do some work, etc. note that i
short-circuited the design used by
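for anyone curious what that looks like, the skeleton is roughly this (a from-memory sketch, not the actual audioengine source; names are illustrative and error checking is omitted):

/* sketch only: accept clients on a unix socket and wake periodically
   from poll(2) to tell each of them to do some work. */
#include <poll.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <unistd.h>

#define MAX_CLIENTS 16

void engine_serve (const char *sockpath, int period_msecs)
{
    int clients[MAX_CLIENTS];
    int nclients = 0;
    int i;
    struct sockaddr_un addr;
    struct pollfd pfd;
    int server = socket (AF_UNIX, SOCK_STREAM, 0);

    memset (&addr, 0, sizeof (addr));
    addr.sun_family = AF_UNIX;
    strncpy (addr.sun_path, sockpath, sizeof (addr.sun_path) - 1);
    bind (server, (struct sockaddr *) &addr, sizeof (addr));
    listen (server, MAX_CLIENTS);

    while (1) {
        pfd.fd = server;
        pfd.events = POLLIN;

        /* sleep until a new client connects or the period expires */
        if (poll (&pfd, 1, period_msecs) > 0 && (pfd.revents & POLLIN))
            if (nclients < MAX_CLIENTS)
                clients[nclients++] = accept (server, 0, 0);

        /* tell every client to do some work (a one-byte "tick") */
        for (i = 0; i < nclients; i++)
            write (clients[i], "t", 1);
    }
}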
I don't understand what you're speaking of: in my design for the multi-
process model the switch back to the engine happens only once for each
period (as it's needed).
Yes, but that's because your example didn't solve a rather fundamental
problem of resorting the graph.
its easy to shuffle a set of
In message [EMAIL PROTECTED] you write:
Paul Davis wrote:
I don't understand what you're speaking of: in my design for the multi-
process model the switch back to the engine happens only once for each
period (as it's needed).
Yes, but that's because your example didn't solve a rather fundamental
Note that here you doubled the cost compared to a pull model (you would
i don't understand. how has this doubled the cost? i know there
is extra cost from supporting multiple processes, but i don't
see an algorithmic doubling in the model. can you explain?
call it that way? I.e. I'm
just while we have this latency discussion going again, I would
be interested in which hardware people use besides the RME Hammerfall
that gives low latency.
I have a trident-based hoontech card that I can run quite happily at
64 frames. My Tropez+ is a bit more problematic at 64 frames - the two
but what happens when this particular client is now supposed to run
later in the graph? how do inputportfd and outputportfd get reset to
point to the correct next client?
You have the same race with your approach - just doing the forward
from the client does not work if f.i. the supposed
I have a trident-based hoontech card that I can run quite happily at
64 frames. My Tropez+ is a bit more problematic at 64 frames - the two
streams don't seem to run precisely in sync with each other all the
time. I don't know if this a h/w problem or a design problem.
So what kind of buffer
a few people asked if they could see my slides from the free software
multimedia workshop in firenze. for whatever it's worth, which i
suspect is not much, there's a PDF at:
http://www.op.net/~pbd/firenze.pdf
--p
My understanding is that LAAGA currently defines only a free-running
server. Any client that performs recording, sequencing, arranging, or
edl based editing must define its own internal timeline.
personally, the thought of being able to run Muse and Ardour in
complete sync with each other
Radikal have informed me, and strongly reiterated their position
after an email exchange, that they will not release the specs for the
protocol their very nice control surface uses.
I suggested the comparison with MIDI to them (in that MIDI succeeded
because Yamaha/Sequential/et al. shared
And that's the subject of this message: why couldn't somebody make a
powerful audio/MIDI studio, such as Cakewalk Pro Audio or Cubase VST,
but for Linux? I think it is possible but I'm afraid I cannot do it
myself. Please, if somebody knows where I can get some manuals about
sound output (and
I must quibble with page 8, deficiencies of libre stuff as of 2001:
no soundfile editors capable of edl editing
OOPS!!! Major edit alert. In the actual talk, I take a marker, and
cross out all the problems above the line, to indicate what *has*
been done since 1999. The idea is that we have
- I suppose Unix sockets are more adequate to shm/signal constraints
why? as i've tried to explain, it seems to me that using fd's is a lot
more work with no apparent gain ... if the job can be done using
signals, what's the benefit of using fd's?
- multiple clients do not work
why not? did
Just to speak with code, I've (quick and dirty) hacked something
up which is different from Paul's approach. It's in commented
header style, it helps if you know GLAME, but I'll try to
elaborate. Note that I don't explicitly show implementation details
(f.i. if using fds or not - I'm less convinced
I made a track with jazz/timidity/csound that i would like to record
digitally to disk.
I would like some way to monitor the soundcard's output, and record it
to disk at the same time as I'm playing it.
I'm using alsa 0511.
I've seen somewhere that this is possible with OSS, but don't remember
Wouldn't it be possible to do an LD_PRELOAD hack: preload a shared object
that implements snd_pcm_write and writes all data to disk?
true, that would work for the single data stream case.
--p
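a rough sketch of that interposer; the snd_pcm_write signature is assumed from the 0.5-era API, so check it against the installed headers before relying on it:

/* LD_PRELOAD this shared object to write a copy of everything the app
   sends to snd_pcm_write() into a raw file.  sketch only. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/types.h>

typedef void snd_pcm_t;   /* opaque to us */

ssize_t snd_pcm_write (snd_pcm_t *pcm, const void *buf, size_t count)
{
    static ssize_t (*real_write)(snd_pcm_t *, const void *, size_t) = 0;
    static FILE *tap = 0;

    if (!real_write)
        real_write = dlsym (RTLD_NEXT, "snd_pcm_write");
    if (!tap)
        tap = fopen ("tap.raw", "wb");

    fwrite (buf, 1, count, tap);          /* raw samples, no header */
    return real_write (pcm, buf, count);  /* pass through to ALSA */
}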
Before answering the following questions I'd like to describe my view
of the typical usage pattern from the UI point of view. Let's suppose
[ ... description elided ... ]
yes, that more or less matches my notion of things closely enough that
we clearly are aiming at the same general idea.
I
Ok, we seem to know what both approaches do and what advantages and
disadvantages are (but we don't agree on them). So I think we either
need input from some other guy or we can stop discussion.
More or less true, I agree. But not quite :)
Perhaps I have time to do an implementation of my API,
so, since people here seem to think that the low-latency kernel is
really great, i suspect i have missed something. is there a way to get
processes to get scheduled promptly, but at the same time not hog the cpu
if they try?
welcome to the complex world of RT-scheduled cooperating apps.
the
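for reference, the prompt-scheduling half is the easy part (sketch below); it's the "don't hog the cpu" half that has no kernel-provided answer, because a SCHED_FIFO task that never blocks simply owns the machine:

/* sketch: request FIFO realtime scheduling (requires root).  a
   SCHED_FIFO thread runs until it blocks, so a runaway loop here
   will hog the cpu - the kernel won't save you. */
#include <sched.h>

int become_realtime (int priority)
{
    struct sched_param p;
    p.sched_priority = priority;   /* e.g. 10 */
    return sched_setscheduler (0, SCHED_FIFO, &p);
}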
Instant followup:
I've just played with it, seems quite clean, though there seem to be some
bugs in the way it gives up shm memory, or doesn't rather. Is that related
to IPC_RMID?
Yes, that's all it was. It's fixed in my local code now. Sorry for the
segment overflow :)
--p
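for anyone who hits the same thing, the usual trick is to mark the segment for removal once every process that needs it has attached, so the kernel frees it when the last one detaches (sketch):

/* sketch: call this after all clients have attached; the segment
   stays usable until the last detach, but can't be leaked if a
   process dies before cleaning up. */
#include <sys/shm.h>

void release_when_done (int shm_id)
{
    shmctl (shm_id, IPC_RMID, 0);
}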
Yes, you're right, they are not very portable. But read/write can't work
without horrendous complexity unless you have fd passing, which is
also not portable. But you're also right that it's not part of the API,
so we don't have to worry too much.
You're forgetting Unix socket/named pipe
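for reference, passing a descriptor over a unix socket looks roughly like this (works on Linux and the BSDs, not everywhere, and is not part of any proposed API):

/* sketch: send one fd over a connected AF_UNIX socket using SCM_RIGHTS. */
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

int send_fd (int sock, int fd)
{
    struct msghdr msg;
    struct iovec iov;
    struct cmsghdr *cmsg;
    char dummy = '*';
    char cbuf[CMSG_SPACE (sizeof (int))];

    memset (&msg, 0, sizeof (msg));
    iov.iov_base = &dummy;           /* must send at least one data byte */
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof (cbuf);

    cmsg = CMSG_FIRSTHDR (&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN (sizeof (int));
    memcpy (CMSG_DATA (cmsg), &fd, sizeof (int));

    return sendmsg (sock, &msg, 0) < 0 ? -1 : 0;
}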
the usual place: http://www.op.net/~pbd/audioengine-0.0.1.tar.gz
One nasty little problem right now is that the client aborts during
exit(3). i traced it back to using shmat(2) with a specific address where
the segment should be attached. i don't know what to make of this -
the client runs fine but
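for context, the two forms of the call in question (the fixed address here is just an example):

#include <sys/shm.h>

void *attach_anywhere (int shm_id)
{
    /* letting the kernel pick the attach address - the safe default */
    return shmat (shm_id, 0, 0);
}

void *attach_at (int shm_id, void *fixed)
{
    /* forcing a specific address (one common reason: so pointers stored
       inside the segment mean the same thing in every process).  this is
       the variant that seems to upset exit(3) above. */
    return shmat (shm_id, fixed, 0);
}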
Are you now speaking of a physical setup or about the equivalent
(i.e. all nodes above exchanged with an app) software setup?
either.
For the physical setup you plug before the quadra (i.e. a connection
that is not visible above)? Or you plug between the quadra and the
mixer?
before the
OK, well that matches what I was imagining, except I wanted a System
Timebase plugin with a timebase output, e.g. a well known plugin with a
well known port. If it's possible, what's to stop a plugin existing which
reads SMPTE sync from hardware and produces a timebase signal in the same
format as
as of this evening, the engine is now driven by a driver, which is
an abstract type. i have implemented alsa_driver, which supports
ALSA 0.9.X. the ae_main demo program loads this driver, then hooks
it up to the engine to allow any ALSA supported PCM interface to drive
the audioengine.
snapshot,
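in case it helps to picture it, an abstract driver in this style is usually just a struct of function pointers that the engine calls without knowing what's behind them - something like this (names invented, not the actual audioengine header):

/* sketch of an abstract audio driver: the engine only ever sees this
   struct; alsa_driver (or any other backend) fills in the pointers. */
typedef struct _driver {
    int  (*attach)  (struct _driver *, void *engine);
    int  (*start)   (struct _driver *);
    int  (*wait)    (struct _driver *);   /* block until the hw wants a period */
    int  (*process) (struct _driver *, unsigned long nframes);
    int  (*stop)    (struct _driver *);
    void  *private_data;                  /* e.g. the ALSA handles */
} driver_t;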
I was wondering: aren't there any toolkits 'out there' to simplify
writing multithreaded apps? (providing monitors for synchronisation
etc.) It would be difficult to design such a toolkit though as most
of the code is very application specific, with maybe the exception
of locks / monitors for
I may be missing something but my proposal would be to have the receiver (the
client) doing the mixing.
E.g. if client1's input port has only 1 source feeding it then simply evaluate
that buffer directly.
If there is more than one source (many-to-one) then get the data from the
other sources and mix
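in code, that receiver-side rule would look roughly like this (a sketch; the port/source types are made up):

#include <string.h>

typedef struct { float *buffer; } source_t;
typedef struct { int nsources; source_t **sources; } port_t;

/* sketch: resolve one of a client's input ports.  with a single source
   we can just borrow that buffer (zero copy); with several we have to
   sum them into our own scratch buffer. */
float *resolve_input (port_t *in, float *scratch, unsigned long nframes)
{
    unsigned long i;
    int s;

    if (in->nsources == 1)
        return in->sources[0]->buffer;

    memset (scratch, 0, nframes * sizeof (float));
    for (s = 0; s < in->nsources; s++)
        for (i = 0; i < nframes; i++)
            scratch[i] += in->sources[s]->buffer[i];

    return scratch;
}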
assume we have a video input source that sends SMPTE data and we want to
syncronize an audio stream we have to it.
eg:
SMPTE --------------+
                    |
internal HDR track -+--> LAAGA --> audio out
                    |
softsynth ----------+
the problem with semaphores is that Unix/POSIX semantics don't allow
integration of a process sleeping on a file descriptor and a
semaphore (one of the few areas where Windows definitely improves upon
Unix). this would mean adding yet another thread that just sleeps on
the semaphore used to
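the workaround alluded to, sketched: a helper thread that turns semaphore posts into bytes on a pipe, so the engine can keep sleeping in poll(2) on file descriptors only:

/* sketch: bridge a POSIX semaphore into something poll(2) can wait on.
   the helper thread blocks on the semaphore and writes one byte to a
   pipe per post; the engine's poll loop just adds pipefd[0] to its set. */
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

static sem_t the_sem;
static int pipefd[2];

static void *sem_to_pipe (void *arg)
{
    char c = 's';
    while (1) {
        sem_wait (&the_sem);
        write (pipefd[1], &c, 1);
    }
    return 0;
}

int start_bridge (void)
{
    pthread_t t;
    pipe (pipefd);
    sem_init (&the_sem, 0, 0);
    return pthread_create (&t, 0, sem_to_pipe, 0);
}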
I thought I should point out the existence of an existing cross-platform
audio API that has been around for a while. It has been adopted by many
computer music practitioners. Perhaps it will be useful to some. There is a
Linux/OSS port in beta. More details at:
Phil, thanks for the pointer to PA.
But I could imagine two ways in which PortAudio and LAAGA could complement
each other. LAAGA could call PortAudio for access to actual audio HW.
As mentioned previously, the LAAGA prototype I've been working on
doesn't contain any statically linked code to do audio I/O at
all. That's implemented
This is with abramo's patch applied, and current cvs alsa.
With my ens1271 at work (not a very good card, I know, but it works with
ardour) I get:
[swh@inanna laaga-0.1.0]$ ./engine
creating alsa driver ... default|64|48000
ALSA: cannot set fragment count minimum to 2 for capture
ALSA-MCD:
http://www.op.net/~pbd/laaga-0.2.1.tar.gz
CHANGES
---
* the sample engine application now accepts cmd-line arguments
to specify the ALSA PCM device name, the sample rate,
and the frames-per-interrupt value. The default values
are default,
Paul, I'd suggest the following (less functions and infinite
extensibility without any new functions):
typedef int (*LaagaCallback)(laaga_client_t *client, LaagaPropertyId);
/* To get callback arg */
void *laaga_get_property_custom(laaga_client_t *client, LaagaPropertyId);
/* To get sample rate,
Why not use a signal? This signal would interrupt the select(), making
it return with errno==EINTR. You can then retry the select() after getting
the new file descriptor. This also preserves the select/poll returning
an error for true engine errors, such as an engine crash.
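roughly, the client loop being proposed (a sketch; which signal is used and how the new descriptor is fetched are left open):

#include <sys/select.h>
#include <errno.h>

int wait_for_engine (int *engine_fd)
{
    fd_set rd;
    int r;

    for (;;) {
        FD_ZERO (&rd);
        FD_SET (*engine_fd, &rd);
        r = select (*engine_fd + 1, &rd, 0, 0, 0);
        if (r > 0)
            return 0;          /* the engine has something for us */
        if (r < 0 && errno == EINTR) {
            /* the signal told us the fd changed: re-fetch *engine_fd
               here (by whatever mechanism the API settles on), retry */
            continue;
        }
        return -1;             /* a true error, e.g. engine crash */
    }
}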
The problem is that
As someone responsible for distributing a sample .asoundrc file with
ardour, I must remind people that the boilerplate text quoted below
should NOT show up in a regular .asoundrc. The sample that came with
ardour is from a couple-of-ALSA-generations ago.
--p
pcmtype.hw {
open
Has anyone looked at IBM's new linux threading stuff?
http://oss.software.ibm.com/developerworks/projects/pthreads/
yes, i read it last week. i don't view it as particularly
significant. here's the simple scoop.
there are kernel threads, which are execution contexts that the kernel
knows about,
In message [EMAIL PROTECTED] you write:
A heads-up for you all to let you know that I've just started building an
alternative API to LAAGA (in case you care). As posted before, LAAGA doesn't
seem the right approach to me - it seems to be more of a
framework/application than an API.
well, that's
So what do you suggest Paul ?
Is my problem normal ?
(I can't even run alsactl store, while apps in OSS emulation mode work
perfectly)
if you have one soundcard, then ALSA should work out-of-the-box, so to
speak.
you only need a .asoundrc file if (for example):
1) you want to use
I've read that the fastest an x86 Intel architecture can do interrupts is
with max latencies of 40us, using rtlinux. You might be able to do at least the
interrupt part with a PPC-based system (1us interrupts).
not for 96kHz. you've only got 10usec per sample. sorry.
--p
But N is not fixed! The host is free to call the plugin's run() or
runAdding() function with any non-zero argument. It might call it
like:
run(16);
run(21467);
run(1);
run(480);
run(16384);
May i just ask, out of curiosity, why the host would want to do
But N is not fixed! The host is free to call the plugin's run() or
runAdding() function with any non-zero argument. It might call it
like:
run(16);
run(21467);
run(1);
run(480);
run(16384);
what control rate is that? The point is that the plugin has no
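which is exactly why plugin code has to be written against an arbitrary n on every call - in the LADSPA style, something like this generic sketch (not tied to any particular plugin header):

/* sketch: a gain "plugin" run() that cannot assume anything about the
   block size; it just processes whatever sample count it is handed. */
typedef struct {
    float *input;
    float *output;
    float  gain;
} gain_plugin;

void run (gain_plugin *p, unsigned long sample_count)
{
    unsigned long i;
    for (i = 0; i < sample_count; i++)
        p->output[i] = p->input[i] * p->gain;
}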
From a review in EQ of the new RADAR-24 from iZ:
--
Next I synchronized the RADAR 24 to the TASCAM DA-98 via SMPTE, using
the RADAR as both slave and master. Synchronizing a tape machine to a
hard disk recorder can be a finicky
For me this kind of software is a real gift for anyone working with
audio. A track contains clips, each of them has vector envelopes
for volume/pan/whatever automatable parameter. The volume envelope
is generally the one you use the most but you can choose what appears
in superposition of the
on Linux yet, and Paul
Davis seems to not care much about adding MIDI support to either
Ardour or LAAGA.
I have not said that, strictly speaking. I have considered integrating
MidiMountain into Ardour. However, I prefer a different solution (see
below for more details).
LAAGA already has implicit
steve answered most of this already, but i just wanted to add:
i have somehow managed to be unaware of ardour in the time i have been
good thing. ardour is under development. it's not a tool suitable for
most people's usage. i doubt if more than 10% of the people on the
list could even get it
With respect to Paul's suggestion that installing libraries from
packages is asking for trouble, I beg to differ. I'm inclined to
think that if packages cause trouble, it's that there's a bug
somewhere, either in the package or in the app. Lib packages are
not inherently broken IMHO ; they
Can anyone name for me a widely-used Open Source application that
(1) is only available from CVS and (2) requires compilation of all of its
supporting libraries from scratch?
widely used is not fair. ardour is not widely used. sourceforge
contains many such projects.
also, can anyone
In what way is ALSA 0.9.0beta broken? The only trouble it's given me
is syncing sound with images watching DVDs...
if you install 0.9beta5, then nothing will work because alsa.conf is
installed in the wrong place. nothing much to experienced ALSA beta
users, but rather surprising to everyone else
I realized what the problem really is and what the answer really is, akin to a
real modular studio setup. The default audio I/O hooking should be done
with a patchbay client. That is, a client that has pairs of ports like "audio
output 1" and "audio output to driver 1" which are tied together (zero
these low latency patches look good, but I can't seem to find one for kernel
2.2.19?
is there a work-around if I can't find one?
also I use an AMD Athlon 1GHz; latency is low anyway, but not 2ms!
eg : are these patches made for pentiums or 'pentium class' cpu's ?
As far as I remember they are
I've been playing with LAAGA, I wrote a simple client that just sends a
440Hz sine to ALSA I/O:Output 1. It usually works fine unless you Ctrl-C
the client, then, the next time you run it, the output is garbled; there's
still a signal there but it's been distorted.
occasionally it will just glitch
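for reference, the guts of that client are just a per-tick callback filling the output buffer - roughly like this (the function name and buffer handling are invented; only the arithmetic matters):

/* sketch of the 440 Hz client's per-tick work. */
#include <math.h>

#define RATE 48000.0

static double phase = 0.0;

void process (float *out, unsigned long nframes)
{
    unsigned long i;
    double incr = 2.0 * M_PI * 440.0 / RATE;

    for (i = 0; i < nframes; i++) {
        out[i] = (float) (0.5 * sin (phase));
        phase += incr;
        if (phase > 2.0 * M_PI)
            phase -= 2.0 * M_PI;
    }
}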
Paul, I am trying to use FLTK instead of GTK, but I don't get it to work...
It crashed inside X11 functions. I have seen this before, and it always
was related to multithreading and FLTK not being threadsafe, but I don't
understand how that could be a problem, since I run all fltk stuff in the
ljp writes, in response to two criticisms of GNOME dependency:
To me, music is more important than any library ideologies. I wouldn't give
a rat's ass if software was made with QBASIC, as long as it compiles fairly
easily
and then continues:
(not a lot of excessive library inclusion that I
If you have any suggestion on how to reduce the library set, or
improve on the functionality offered by each part, or package Ardour
for easier compilation, or whatever, I'd love to hear about it. And
I'm not being sarcastic.
Include your custom libs into the ardour CVS / tarball. Or at
It's partly politics
(they wanted to provide more reasons for people to use GNOME) and
partly development issues (GTK+ was under a feature freeze).
I don't think so.
This is not speculation on my part. I've been told so by people who
work on both GTK+ and GNOME for RH.
apt-get install
Err - for which part of GNOME or its dependencies are no binaries
available??? Paul, what are you smoking??? The above is precisely
the problem with ardour which certainly doesn't depend on GNOME but on
libraries for which no binaries are available...
the words were:
(not a lot of excessive library
If your connection is limited, use a CD set - GLAME doesn't require
up-to-date versions of any lib it depends on.
GLAME might not, but other applications that use part of the GNOME lib
set do. So if I install GNOME from a CD, and then find that another
GNOME app wants a later version, I'm stuck
(not a lot of excessive library inclusion that I have to install
every libtom-libdick-and-libharry lib just to compile it - because there
are no binaries available),
which i read as saying i have to install a bunch of libraries because
i have to compile an application and i have to compile
would it be too dreadfully obnoxious and steinberg sniping to rename
LAAGA as FreeWire ?
True, but I suppose ardour isn't any better?
No, Ardour is not better. However, the set of libraries on which it
depends is smaller than GNOME.
I want to try ardour, but gave up trying to compile it. WHY? Because the
libraries you use are 1) obscure and
All the ~multi-track~ programs I have had experience of (Cubase, Cool
Edit Pro, and most recently, cakewalk SONAR) use the semantic 'Import',
for a simple reason - in a multi-track project, you have N tracks, and
any soundfile must be 'imported' to one of these (You may indeed need to
select a
To put it bluntly: LAD makes a mockery of the idea of lazy hackers.
Everyone is trying to do the same thing over and over and over and.
yeah, just like soundforge and cool edit pro and cakewalk and logic
and cubase and samplitude and session and protools and bias peak
and digital performer
I think everybody would expect FreeWire to be fully portable to other OSes
than linux then. Is it the case? (I'm talking about the API, not the actual
implementation...).
indeed it is.
--p
Yeh but this isn't doze or mac world. The only reason there is such a
proliferation of stuff on those other platforms is because they sell it.
We don't so why do we have so much competition? AFAIK we are the ones
who aren't in it for the money. Not that money is evil or anything or
even if it is
POSA
Paul's Own Sound Architecture
now *that* made me laugh ...
true, but i wonder if taybin has any idea of quite why, since i think
i do (given your understanding of the cortina mk.III situation) ...
But it should be recursive, to get that old school unix in-joke flavor.
I still quite like the suggestion of API (the Audio Processing
Interface), so that we have the API API :)
--p
I would like to add that I feel it is the ideas that are more important
than the individual apps. Each one has different strong points. Things
would progress so much faster with the sound editors if we combined these
ideas.
That is where the true value of the Gimp lies. It's not the useful gui
design
The library of useful ideas for each project is reasonably sized but
nothing as extensive as the code base for win or mac editors. But if we
had combined them all from the start then we would already have a very
strong editing suite. How unrealistic is it? Is that possibility just too
fantastic?
On the subject of:
JACK - Jack Audio Connection Kit.
Rick writes:
Hehe, I can see it now: 'You haven't used pro-audio tools on linux, well
let me tell you buddy, you don't know JACK :)
this is so good, so very, very good it almost makes me want to run:
sed -e 's/laaga/jack/g' -e
On Fri, Jul 27, 2001 at 03:40:24PM +0100, Ellis Breen wrote:
Another couple of questions regarding LAAGA. Would this allow softsynths to
be synchronised with audio to sample accuracy? And how would one go about
bouncing their output to a hard disk recorder such as Ardour? What tools are
Maarten had written an FLTK client for LAAGA:
It is a very simple example, it's your ae_client.c with a slider to control
the gain.
I just added it to the laaga source base. It seems to work flawlessly
here. So I suspect some library issues, or something like that.
Very strange.. Which
Reasonable people differ here. For example, the autoconf authors
(http://subversions.gnu.org/cgi-bin/cvsweb/autoconf/).
One of the simplest and least-dependent configure.in files you'll find :)
I think it's a better idea to include configure in CVS. Some macros only
exist in certain versions of
Yup (you knew it), I tried to write a simple command line app that could
do linking, e.g.:
$ laaga_link list
Foo:out 1 - Ardour:Line 5
$ laaga_link make Foo:out 2 Ardour:Line 6
$ laaga_link list
Foo:out 1 - Ardour:Line 5
Foo:out 2 - Ardour:Line 6
Well, you get the gist. But there didn't seem to
The ardour-dev list would really be the right place for this, but
since it's been raised here ...
Not on the list because I've never gotten the thing close to
running...
Such conversations dominate the list, most of the time.
First, you need to generate the autoconf scripts and Makefiles,
Is there any card that works fine with full duplex 44kHz and small
fragment sizes? Not the Hammerfall, I want something cheaper.
I use a Trident 4D-NX based card from Hoontech that cost about $60,
and it works 100% with 64 frames per interrupt.
--p
No program is using so much memory that the swap space is needed.
It looks like the cp, md5sum, tar and/or flac programs (which have all
handled large data for me) either use a lot of memory, or the Linux VM is
always loading data to a new memory location and eventually fills the
You would imagine that web
Hoontech could but somehow the driver jerks. Is there a difference
between opening a device with full duplex in alsa or opening two handles,
one for input, one for output?
No, ALSA's architecture doesn't differentiate: there is no open for
duplex in the ALSA API. You always open two handles
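concretely, under the 0.9 API that means two snd_pcm_open() calls, one per direction:

/* two independent handles, one per stream direction; this is all
   "full duplex" means at the ALSA 0.9 API level.  error handling omitted. */
#include <alsa/asoundlib.h>

int open_duplex (const char *name, snd_pcm_t **play, snd_pcm_t **capt)
{
    if (snd_pcm_open (play, name, SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return -1;
    if (snd_pcm_open (capt, name, SND_PCM_STREAM_CAPTURE, 0) < 0)
        return -1;
    return 0;
}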
In message [EMAIL PROTECTED] you write:
Paul Davis wrote:
Paul Winkler writes:
I was just wondering why people on this list seem to ignore glame, when
the discussion turns to wave editors. [ ... ]
Can't compile it without GNOME. I don't like that. I guess that makes me a
luddite. Oh
does someone know a simple program to monitor MIDI input? I'm just about
to get into MIDI, as I got a nice Technics WSA Synth now :) As I have
fixed my hardware problems with a dodgy MIDI Adapter I'd like to be able
to just see, what kind of midi commands are sent from the synthesizer.
It's got
Ok here's some more info. If I hit the rec enable button on a channel
and then disable it. I get
wrong list for this information (should be on ardour-dev).
as mentioned on ardour-dev, this buglet is fixed already but not
propagated to CVS yet.
--p
last night, i thought of an analogy/metaphor to illustrate the
difference between the way i think about processing networks and the
model used by GLAME, GStreamer and now proposed in Richard's LADMEA.
In both cases, we have a bunch of nodes that generate and/or process
a data stream.
My model
Yep, I understand in LAAGA the client *must* do something when the clock
ticks - this is fundamentally the way that LAAGA does things. It isn't
necessarily useful - for instance, a client listening to a MIDI port will
generate data asynchronously and will have no interest in the tick (it's only
In message [EMAIL PROTECTED] you write:
The problem is isomorphic to disk I/O only in that I/O happens - some
exchange implementations might perform the lazy-writing for you.
So an exchange has properties above and beyond those of its clients
and the channels they declare? This gets a little
A few corrections and additions:
There are other similar API's, like DirectX (DirectMusic or what's it
precisely?) and then Emagic, www.emagic.de, a rival of Steinberg's,
probably has an API of their own. IMHO VST and DX are the widest
spread.
Emagic uses VST as well.
In practice, the VST
Finally, there *are* VST hosts for Linux. We just can't distribute the
source code for them since that involves redistributing the VST SDK,
which is illegal.
This is a bit knotty, and is an ugly/possibly short-sighted solution to VST
licensing issues, but is there any way of writing a wrapper
| a better, and simpler solution, is a perl or other script that takes
| the VST SDK as distributed by Steinberg and hacks it into shape for
| use under Linux. someone should feel free to do this :) once they're
| done with that, only libvstgui awaits :)))
|
| --p
|
oh
why not
Well, that's really sad. Let's hope that some people may go on with the
previous GPL code...
it also strikes me as mostly bullshit, though i'd want to speak to a
lawyer before concluding that completely. GPL'ed software is almost
universally made available with no warranty. as a post on slashdot
What is required to support VST plugins on Linux/x86 is twofold:
1) a way of executing a dynamically linked object that was
built for Windows/x86.
2) a library that implements the libvstgui API
Couldn't you apply (1) to execute the Windows DLL version of libvstgui
Wouldn't having to run plugins with Wine totally kill any hope for
low-latency, except on maybe a dual 1.2-Ghz box?
probably, except that my point was really that running under Wine
isn't actually viable anyway. besides, it's not really running under
Wine. it would be something more like using part
This might (?) be useful [in the event that a Qt port would be possible for
Ardour etc.]
Bwahahahahahahah!
% cd /usr/local/music/src/ardour/gtk_ardour
% grep ';[ \t]*$' *.cc *.c *.h | wc -l
9207
% cd /usr/local/music/src/ardour/libs/ardour
% grep ';[ \t]*$' *.cc ardour/*.h | wc -l
4590
I won't pretend to know much of this issue, but a proof-of-concept text
front-end for ardour was written a while ago, wasn't it?
It's under active development (though mostly by someone else). It's
written to be usable by sight-impaired computer users, and seems to
work pretty well.
it's still unbelievable to me that anyone would be choosing
to work in an environment in which a pointer overrun in
an application can crash the entire program ...
--- Forwarded Message
From: elided
To: elided
Subject: Buffered plugin problems
Hi,
I am implementing a