> I don't agree here that ALSA covers only the HAL model. We have a great
> plugin system, which allows you to build anything you need. Application
> code doesn't know if it is referencing the real audio device, a pseudo
> device, or a logical device created by an arbiter application for sharing
> and/or mixing of multiple audio streams.
True, but it does know one key thing: *it* controls when it generates and/or processes audio. And this is the key problem with using ALSA directly as the basis for the kind of features I'm talking about, particularly low-latency sample synchronization across all participants. Abramo spent a week or two on LAD arguing the case for an ALSA-provided solution, a case I'm actually very sympathetic to, but my impression is that he was unable to convince anybody that ALSA's current general design can be used for what we're talking about.

The key point shared by all the models for what I'm talking about (CoreAudio, ASIO, *all* (?) plugin systems, ReWire, PortAudio, etc.) is that they are callback driven. There is no similar concept in the ALSA API (except for, ironically, the SIGIO-based stuff that started all this :). Applications using ALSA drive the audio i/o as part of their own program flow. This isn't the right model for the kind of flexibility I'm talking about. It's absolutely essential that the program give up the idea that it knows when it's the right time to deliver and/or process audio data.

> From my look, we can cover everything under one API. We already have
> source and destination points (playback and capture applications), so
> let's go create arbiters and not waste your time creating some other
> system, maybe a bit simpler, but

A bit simpler? Did you see the code to jack_simple_client? Do you have any idea what would be involved in setting up a comparable ALSA program?

> the simplicity will cost some features at some level.

What many people (all except Abramo, IIRC) agreed to in the discussions on LAD was that in order to have ALSA provide an API that worked this way, you'd basically be completely hiding almost all of ALSA. So much so that making it part of the ALSA API is rather beside the point. I'd be quite happy if alsa-lib provided the kind of service that JACK does/will provide.
But right now, nothing in alsa-lib comes close to the design goals that JACK has, design goals carefully laid out by Kai Vehmanen (see www.eca.cx/laaga). The closest is Abramo's "share" server, but it offers neither sample sync nor the abstraction of a port, which, despite Abramo's protestations, everybody seemed to prefer over continuing to (potentially) deal with sample formats, interleave configuration, and so on. There are good reasons why, in the list of inspirations I gave above, not one of those systems has any support for variable sample formats (they all use 32-bit floating point).

I'm not proposing dumping ALSA. For applications that really do want to interact with a HAL (such as the ALSA client/driver for JACK), it's a fantastic API. I'm merely trying to encourage people writing applications away from the program-driven audio i/o model toward a callback-driven one, preferably using an API that allows inter-application data exchange. Note that for many people interested in MacOS X, this has to be done anyway, and many of the most interesting Linux audio programs (e.g. jMax) work this way already. It's also required by the rather nice PortAudio library which some people like to use.

--p

_______________________________________________
Alsa-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/alsa-devel