Jaroslav:
> I think that we can lose more in the client/server model.

Also note that client/server will have higher latency. The server has to copy the samples to the DMA buffer "at the last minute", and the client has to deliver its data before the server copies it. In the direct model, only the client's timing has to stay within the typical (maximum) system latency.

Please note that on many DMA-capable cards, if the client is just a few samples late but still writes the whole period, only those few samples will be silence. The "nondestructive underrun detection" is the beauty here: the client knows it is late (by comparing its pointer with the HW pointer) but may continue nevertheless if it knows the next data will arrive on time. You know, throwing out all samples or stopping the card on a small underrun is like pulling the emergency brake because the train is a bit late; it only makes things worse. With client/server, either all is good or the whole period is lost.

Do I understand correctly that the server stores data in a 32-bit buffer and then puts it into the card's 16-bit DMA buffer? That is one more operation compared with mixing directly in the DMA buffer.

Best regards,
-- Tomek
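
To make the pointer comparison above concrete, here is a minimal sketch of nondestructive underrun handling. The names (ring, hw_ptr, appl_ptr) mirror the ALSA ring-buffer concepts but are illustrative only, not the actual driver API:

/*
 * Sketch: nondestructive underrun handling in a playback ring buffer.
 * appl_ptr - frames the client has written so far
 * hw_ptr   - frames the hardware has already consumed
 * All names are illustrative, not the real ALSA API.
 */
#include <stdint.h>
#include <stdbool.h>

struct ring {
    uint64_t hw_ptr;      /* frames consumed by the DMA engine */
    uint64_t appl_ptr;    /* frames written by the client */
    unsigned buffer_size; /* ring size in frames */
};

/* The client is late when the hardware pointer has passed its
 * write pointer; the frames in between came out as silence. */
static bool client_is_late(const struct ring *r)
{
    return r->hw_ptr > r->appl_ptr;
}

/* Instead of stopping the stream, skip the frames that were
 * already played and keep going: only those frames are lost. */
static void recover_nondestructively(struct ring *r)
{
    if (client_is_late(r))
        r->appl_ptr = r->hw_ptr; /* resume writing at the live edge */
}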
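And a sketch of the extra copy the question refers to, assuming the server sums clients into a 32-bit accumulator and then converts to the card's 16-bit DMA format; again, all names are illustrative:

/* Sketch: the extra pass a mixing server performs. Clients are summed
 * into a 32-bit accumulator (so the sum cannot overflow int16 range
 * during mixing), then saturated down to the 16-bit DMA buffer. In
 * the direct model, a client would write int16 samples straight into
 * the DMA buffer and skip flush_to_dma() entirely. */
#include <stdint.h>
#include <stddef.h>

static int16_t saturate16(int32_t s)
{
    if (s > INT16_MAX) return INT16_MAX;
    if (s < INT16_MIN) return INT16_MIN;
    return (int16_t)s;
}

/* Mix nclients streams into the 32-bit intermediate buffer. */
static void mix_clients(int32_t *acc, const int16_t *const *clients,
                        size_t nclients, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++) {
        int32_t sum = 0;
        for (size_t c = 0; c < nclients; c++)
            sum += clients[c][i];
        acc[i] = sum;
    }
}

/* The extra operation: 32-bit accumulator -> 16-bit DMA buffer. */
static void flush_to_dma(int16_t *dma, const int32_t *acc, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++)
        dma[i] = saturate16(acc[i]);
}

The 32-bit intermediate buffer is presumably there so the sum of several clients can be clamped once at the end rather than clipping on every addition; that benefit is what the extra copy pays for.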
client/server will have higher latency. The server has to copy the samples "last minute" to DMA buffer and the client has to manage before the server copies the data. In the direct model only the client's timing has to be within the typical(maximum) system latency. Please note that on many cards supporting DMA if the client is late just a few samples but still adds the whole period, only these few samples will be silence. The "nondestructive underrun detection" is the beauty here. The client knows it is late (by comparing its pointer with HW pointer) but may continue nevertheless if it knows next data will be coming on time. You know, throwing out all samples or stopping the card in case of small underrun is like pulling emergency brake because the train is a bit late. It only makes things worse. With client/server either either all is good, or the whole period is lost. Do I understand it correctly that the server stores data in 32 bit buffer and then puts it in 16 bit DMA buffer of the card? This is one operation more compared with mixing directly in DMA buffer. Best regards, -- Tomek ------------------------------------------------------- This SF.net email is sponsored by: SlickEdit Inc. Develop an edge. The most comprehensive and flexible code editor you can use. Code faster. C/C++, C#, Java, HTML, XML, many more. FREE 30-Day Trial. www.slickedit.com/sourceforge _______________________________________________ Alsa-devel mailing list [EMAIL PROTECTED] https://lists.sourceforge.net/lists/listinfo/alsa-devel