> API which pretends to be "one and only" for linux must work not only with

JACK is not an API pretending to be the one and only anything. It's an API designed to fill a niche that nothing else fills on Linux. It has been written with 98% of the attention paid to high-end apps. If it happens to be useful as a general server, all the better, but its goal is focused on the high end.

> "high-end" but also "low-end" applications (like an "embedded" mp3 player
> made of an old Pentium-90 box), i.e. the server must be aware of general
> system performance, and not eat all CPU time by pumping PCM data in
> 100 sample chunks.

sorry, but you can't do that. it's technically impossible. the definition of a "synchronously executed" system is that every component does the same thing at the same time. if the server is running with a 100 frame cycle, then every component must supply audio data in 100 frame chunks. no ifs, ands or buts :)

the best you can do is to pre-buffer your audio and hand it over to the JACK system via the process() callback in 100 frame chunks. this is perfectly reasonable to do if it matters. of course, latency is shot to hell, but if your app doesn't care, then neither do we :)

the sample capture client does the opposite: it buffers captured data till there is enough to write to disk in a big chunk, and then does so (in another thread).

> I'd prefer to have the possibility to negotiate with the server, besides
> the sample rate, the output delay window (say 20..25 ms or even more).
> In this case the server must decide on its own how often and how big
> the demanded chunks are, depending on CPU performance / load and the
> length of the connection chain (i.e. an mp3 or whatever player with a
> compressor attached)... and use the biggest possible buffer length to
> assure that latency is in this window.

i don't think you grasp the point of a system like JACK. clients are *slaves*. they have no control over the operation of the system. they don't get to control anything. their only role is to do what they are supposed to do, when they are supposed to do it.
this is necessary so that:

1) overall latency can be controlled
2) all clients run in sync, all the time
3) the server doesn't make assumptions about buffering designs

your request would require that the server start buffering data on behalf of clients. moreover, clients have essentially no clue what they are connected to, or how long the processing chains are. just like in other SE-APIs, they are dumb, ignorant black boxes whose job is to run a callback, and that's all.

> More, there are some time-domain effects like a compressor, or some FFT
> based ones like a vocoder, which naturally have some delay, and I think
> there should be a possibility to tell the server how big it is. I'm
> thinking about sample-accurate mixing down with real-time effects.
> Tracks with such effects applied must be "played ahead". I didn't find
> such functions in the JACK API.

1) from the perspective of clients, the system never starts and it never stops. there is no way to "play ahead" when there is no start point.

2) JACK is for **inter-application** audio routing. It is not a system like gstreamer that allows for intra-application design. You would not use JACK to do sample-accurate mixdown between two different apps, because you cannot achieve timing resolution better than the server cycle time (even though everything runs with sample sync, which is different). a program like ardour, which does do sample-accurate mixing, runs LADSPA plugins internally. JACK is not intended to replace LADSPA (or VST) - the cost of inter-address-space context switches is too high to run a full set of "plugins" as JACK clients.

3) LADSPA needs this feature :)

4) you actually have to play all the other tracks "behind" rather than play that one track "ahead".

5) dedicated, individual digital audio gear doesn't provide that kind of functionality either :)

--p

_______________________________________________
Alsa-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/alsa-devel