That makes sense. I also fully agree that callback-driven APIs are better suited for
audio. On the other hand, nobody would claim that o/c/r/w apps can't achieve low
enough latency for GUI-type apps like EQs (e.g. with buffer sizes of 10-20 ms the
total latency isn't unacceptable).
But that's
Paul Davis wrote:
the APIs that are used to write almost all audio software code in
production these days all use a callback model.
Sorry for questioning this statement. Of course we all don't have any statistical
data but
you miss what I see as the majority of applications that use
On Wed, 27 Nov 2002, James Courtier-Dutton wrote:
> Paul Davis wrote:
>
> >
> >
> >>>the APIs that are used to write almost all audio software code in
> >>>production these days all use a callback model.
> >>>
> >>>
> >>Sorry for questioning this statement. Of course we all don't have a
On Wed, 27 Nov 2002, Jaroslav Kysela wrote:
> In my brain, there is also a totally different solution with zero context
> switching overhead - sharing the soundcard DMA buffer among more
> applications. There is only one problem: snd_pcm_rewind() implementation
This is my personal preference.
I have read your comments below, and I would like to try to explain the
problems I am coming up against when writing a multi-media app.
I am not going to say that I know everything about kernel scheduling,
but for multi media applications, avoiding xruns is a major concern.
This becomes particu
>Ok, it's only a simple example, showing that there are more solutions than Paul
>suggests. I fully agree that the callback model is suitable for perfect
>synchronization among multiple applications.
Let's be totally clear about this. it's not just that the callback
model is suitable - the mserver model
tomasz motylewski wrote:
Please stop the complication of "available/delay" etc. Just the raw pointer.
Each application knows where its application pointer is, so it can easily
calculate delay/available and decide for itself whether there was an overrun or
not.
I use the delay() function.
I he
>Sorry, it's not as easy as you've described. It's not possible to invoke
>any user code from the kernel code directly. There is a scheduler which is
>informed that a task has been woken up. It depends on scheduler when the
>task is really invoked. It's quite the same as for the r/w model where the
>ap
>This is my personal preference. In this model the only service ALSA has to
>supply are:
>
>1. initial configuration/start/stop.
>2. mmapable DMA buffer
>3. fast and precise ioctl telling the current HW pointer in the buffer. If the
>card is not queried each time, then the "last period interrupt" t
Paul Davis wrote:
I am currently taking the following approach: -
Always prepare 2 audio hardware periods of sample frames in advance
inside the user app.
1) snd_pcm_wait()
2) write()
3) prepare new sample frames, then go back to (1).
for lower latency, you'd do:
1) snd_pcm_wait()
2) pre
>I am currently taking the following approach: -
>Always prepare 2 audio hardware periods of sample frames in advance
>inside the user app.
>
>1) snd_pcm_wait()
>2) write()
>3) prepare new sample frames, then go back to (1).
for lower latency, you'd do:
1) snd_pcm_wait()
2) prepare new sample
On Wed, 27 Nov 2002, Paul Davis wrote:
> >This is my personal preference. In this model the only service ALSA has to
> >supply are:
> >
> >1. initial configuration/start/stop.
> >2. mmapable DMA buffer
> >3. fast and precise ioctl telling the current HW pointer in the buffer. If the
> >card is not
On Wed, 27 Nov 2002, tomasz motylewski wrote:
> On Wed, 27 Nov 2002, Jaroslav Kysela wrote:
>
> > In my brain, there is also a totally different solution with zero context
> > switching overhead - sharing the soundcard DMA buffer among more
> > applications. There is only one problem: snd_pcm_
On Wed, 27 Nov 2002, Paul Davis wrote:
> >Sorry, it's not as easy as you've described. It's not possible to invoke
> >any user code from the kernel code directly. There is a scheduler which is
> >informed that a task has been woken up. It depends on scheduler when the
> >task is really invoked. It
>> actually, it can't. if the user space application is delayed for
>> precisely 1 buffer's worth of data, it will see the pointer at what
>> appears to be the right place and believe that no xrun has
>> occurred. the only way around this is to provide either:
>>
>> * h/w pointer position as
On Wed, 27 Nov 2002, Paul Davis wrote:
> Let's be totally clear about this. it's not just that the callback
> model is suitable - the mserver model will actually not work for
> sample sync between applications. I have always been sure that the
I think this is the critical point. ALSA's smix/mserve
On Wed, 27 Nov 2002, Paul Davis wrote:
> >Please stop the complication of "available/delay" etc. Just the raw
> >pointer. Each application knows where its application pointer is, so
> >it can easily calculate delay/available and decide for itself whether
> >there was an overrun or not.
>
> actua
On Wed, 27 Nov 2002, Paul Davis wrote:
> i see this as more promising than the approach i think you are
> thinking of. you can't avoid the context switching - they *have* to
> happen so that the apps can run!! the question is *when* does it
> happen ... in JACK, they are initiated in a chain when
>> actually, it can't. if the user space application is delayed for
>> precisely 1 buffer's worth of data, it will see the pointer at what
>> appears to be the right place and believe that no xrun has
>> occurred. the only way around this is to provide either:
>
>Well, but if you combine it with th
Hello,
A gentleman by the name of Abramo Bagnara recently stated he may have some
code that would kick-start the development of the smix plugin. I think this
is a very useful and increasingly important component. Abramo stated he had
no problem releasing it, but I could not find where or if he
2 clarifications:
>It is not logical for every program to write support for esd, artsd, jack,
>alsa, etc. Programs should write to ALSA and let ALSA do software mixing if
>required. Windows provided this since DirectX (3?). Solaris provides this too
>(esd apparently doesn't block on Solaris).
On November 25, 2002 10:19 pm, Paul Davis wrote:
> 2 clarifications:
> >It is not logical for every program to write support for esd, artsd, jack,
> >alsa, etc. Programs should write to ALSA and let ALSA do software mixing
> > if required. Windows provided this since DirectX (3?). Solaris provides
>> You see, if all apps are written to use the ALSA API, that's going to
>> be great for the purposes you have in mind, but totally awful for
>> those of us who want our audio apps to work in a sample synchronous
>> way and ignorant of the ultimate routing of their data. Many of us
>> don't think t
On Mon, 25 Nov 2002 23:13:35 -0500
Paul Davis <[EMAIL PROTECTED]> wrote:
> >> All that being said, I'd love to see the smix plugin implemented and
> >> available, if only because it would allow ALSA native apps to
> >> participate in a JACK system, albeit without sample synchronous
> >> behaviou
On November 26, 2002 03:13 am, Frans Ketelaars wrote:
>
> http://www.mail-archive.com/alsa-devel@lists.sourceforge.net/msg04592.html
Thanks!
I had seen this posting but the particular web interface I was using didn't
show the attachment. Got it.
Hi Paul,
very interesting discussion.
Paul Davis wrote:
>
> (...)
>
> >One of the big reasons this is affecting me is that Java sound will not work
> >unless you have a hardware mixer. My understanding is that the Sun folks seem
> >to think that it is wrong to have to implement many different w
>> ALSA is *a* sound library. There are lots of things that it doesn't
>
>I would really say: ALSA is *the* sound library (at least on Linux). Isn't it
>in kernel
>2.5+ ?
alsa-lib isn't in kernel 2.5, because it's not part of the kernel.
alsa-lib doesn't contain any code to read or write audio fi