> I still don't understand the difference. Seeing the JACK code, there is
> nothing new. You are using threads, shm for multiprocess communication
> and callbacks for the final communication between the application audio
> code and the arbiter client. Can I see the global JACK's scheme which
> beats this scheme?
synchronous execution of the processing graph. read on.

> Here is our scheme for share and future mix plugins / aserver:
>
>  <audio stream producer> -> <shm sender & poll> -> <arbiter & poll> ->
>  <final point - audio device or other consumer & poll>
>
> Note that alsa-lib covers the <shm sender & poll>.

aserver doesn't provide a frame count to the client telling it how much
to do. when the client returns from poll(2), it can find out how much
space/data is available, but at this time, that has no direct correlation
with how much it should actually process. i can't see that it ever would
unless you come up with a whole new set of fake ALSA PCM devices in which
the relevant alsa-lib calls meet this requirement. in general, aserver is
just not written in a way conducive to low latency operation. it's not
that it's badly written - Abramo and/or yourself just clearly did not
have this idea in mind.

let's take the scheme above and make it have 2 producers and one
destination:

  producer1 -> [ whatever ] -+
                             |-> consumer
  producer2 -> [ whatever ] -+

producer1 and producer2 will not run in sample sync unless something
causes them to be executed synchronously. i've seen nothing in aserver
that provides for synchronous execution. having them both return from
poll isn't adequate unless upon return from poll they are guaranteed to
process the same number of frames *and* it's also guaranteed that the
consumer will do nothing until both producers are finished. aserver
doesn't, as far as i can tell, guarantee either of these things.

[ note: i am assuming that all the participants are "well behaved"; a
  different set of actions becomes necessary when some are not, and JACK
  handles that by removing them from the processing graph ]

we went over and over this on LAD, even with Richard from GLAME.
eventually, it became clear to everyone, i believe, that if operating the
way that dedicated h/w works is the goal (which for almost everyone, it
is), asynchronous execution of graph nodes and blocking on data-ready
conditions is not acceptable. any other design can lead to stalls in the
graph and dropouts. further, there was widespread agreement on LAD that
most people don't want "arbiters". everyone on other OSes (including some
Unix systems like IRIX) has gotten along fine with a standard sample
format.

> I have a strong suspicion that the JACK engine is only some pre-cache
> tool which can be solved using a bigger ring-buffer.

if this means what i think it means (i don't really understand the
sentence), then no. more buffering is precisely what's unacceptable to
those of us who want to use linux for realtime work. if buffering was
acceptable, then none of this stuff would be up for discussion: just
buffer the hell out of everything, and it will work.

> I think that the global serializing and parallelizing scheme can't be
> avoided or changed in the audio dataflow.

the JACK engine orders all clients in the correct execution order (and it
picks an order if there is no correct execution order). it dynamically
reorders the execution chain whenever the processing graph is changed.
note that it also does not involve a context switch back to the server
every time a client is finished - clients are chained so that they
directly cause the next one in the chain to execute its callback.

--p
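p.s. here is a minimal sketch, in plain C, of what "synchronous execution
of the processing graph" means for the 2-producer case above. none of
this is JACK's real API or code - node_t, run_cycle(), NFRAMES and the
rest are invented for illustration - the point is only that every node is
handed the same frame count each cycle, and the consumer runs only after
both producers have finished:

  /* sketch: one synchronous cycle over a tiny processing graph.
     every node processes exactly `nframes`; a node runs only after
     all the nodes it depends on have run. */

  #include <stddef.h>

  #define NFRAMES 64   /* one period; fixed for the whole cycle */

  typedef struct node {
          const char    *name;
          struct node  **deps;     /* nodes that must finish first */
          size_t         n_deps;
          int            done;     /* reset at the start of each cycle */
          void         (*process)(struct node *self, size_t nframes);
  } node_t;

  static void run_node(node_t *n, size_t nframes)
  {
          size_t i;

          if (n->done)
                  return;
          for (i = 0; i < n->n_deps; i++)
                  run_node(n->deps[i], nframes);  /* producers before the consumer */
          n->process(n, nframes);                 /* same nframes for every node */
          n->done = 1;
  }

  static void run_cycle(node_t **nodes, size_t n_nodes, size_t nframes)
  {
          size_t i;

          for (i = 0; i < n_nodes; i++)
                  nodes[i]->done = 0;
          for (i = 0; i < n_nodes; i++)
                  run_node(nodes[i], nframes);
  }

  /* the scheme from above: producer1 and producer2 both feed consumer */
  static void do_nothing(node_t *self, size_t nframes) { (void)self; (void)nframes; }

  int main(void)
  {
          node_t p1 = { "producer1", NULL, 0, 0, do_nothing };
          node_t p2 = { "producer2", NULL, 0, 0, do_nothing };
          node_t *cdeps[2] = { &p1, &p2 };
          node_t consumer = { "consumer", cdeps, 2, 0, do_nothing };
          node_t *graph[3] = { &p1, &p2, &consumer };

          run_cycle(graph, 3, NFRAMES);  /* all three process the same 64 frames */
          return 0;
  }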
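and a second sketch for the last paragraph, again with invented names
(wait_fd/next_fd are assumed to be per-client FIFOs or pipes that the
engine set up when it ordered the graph; this is not JACK's actual
implementation). the engine pushes one token to the first client, and
each client wakes the next one itself, so between clients nothing ever
bounces back through the server:

  /* sketch: "chained" execution of clients in graph order.
     each client blocks on its own fd, runs its process callback for
     this cycle, then wakes the next client in the chain directly. */

  #include <unistd.h>
  #include <stdint.h>

  struct chained_client {
          int      wait_fd;   /* read end: it is our turn to run          */
          int      next_fd;   /* write end: wakes the next client in line */
          int    (*process)(uint32_t nframes, void *arg);
          void    *arg;
  };

  /* one cycle, as seen from one client's realtime thread */
  static void client_cycle(struct chained_client *c, uint32_t nframes)
  {
          char token;

          if (read(c->wait_fd, &token, 1) != 1)    /* block until the previous node is done */
                  return;

          c->process(nframes, c->arg);             /* this cycle's work */

          if (write(c->next_fd, &token, 1) != 1) {
                  /* in a real system the engine would notice the stalled cycle */
          }
  }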