On 03/11/2011 07:56 PM, Tim Mensch wrote:
> On 3/11/2011 11:07 AM, Olivier Guilyardi wrote:
>> On 03/11/2011 06:40 PM, Tim Mensch wrote:
>>> An important feature still missing from Android is low, predictable
>>> audio latency.
>>> There needs to be an API that will tell you exactly what the latency is,
>>> and EVERY device should have a low-latency sound configuration that
>>> brings the value to 20ms or lower. The new "low-latency" flag in 2.3
>>> guarantees 45ms, which isn't even really low in terms of latency, and as
>>> I understand it that feature isn't even available on the Nexus S (!!).
>> I do agree with this. But this wouldn't exactly be a new feature. It is
>> about consolidation and optimization, and this is what is needed in Android
>> currently. We don't need new high-level features such as OpenSL reverb and
>> the like. Working on all this is a waste of resources in my opinion, when
>> reliable low latency isn't here. We just need good raw input and output,
>> the most basic thing on earth, no bells, no whistles.
> 
> I agree that it should be simple, and that it shouldn't need bells and
> whistles. BUT, there's no current way at all to query latency, and
> there's no access to a low-level audio buffer, so it does in that
> respect qualify as a new feature.

Yes, of course you may consider this a new feature. But it's very different from
adding new functionality such as reverb in OpenSL. It's about bringing some
reliability and consistency to the audio API. From my point of view, it is
consolidation and fixing more than anything else.
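
For the record, the closest thing the SDK exposes today is the boolean feature
flag Tim mentions (added in 2.3 / API 9). It only tells you whether the device
*claims* low latency; there is no call returning an actual latency figure.
Roughly (the wrapper class is just for illustration):

import android.content.Context;
import android.content.pm.PackageManager;

public class LatencyCheck {
    // A yes/no declaration by the device maker, nothing more: the SDK
    // has no API returning the real latency in milliseconds.
    public static boolean claimsLowLatency(Context context) {
        PackageManager pm = context.getPackageManager();
        return pm.hasSystemFeature(PackageManager.FEATURE_AUDIO_LOW_LATENCY);
    }
}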

> Granted they may need to fix and/or optimize their current audio stack
> for this to work. Probably the easiest way would be to BYPASS most of it
> (including OpenSL ES) to just give us low-level buffer access. But even
> that, from our point of view as developers, is a new feature.

You can't just access the low-level buffers directly. Otherwise you'd acquire
exclusive audio access, preventing the system from playing notifications, other
apps from producing sound at the same time, and so on.

The current audio stack is right, IMO, in being built around a sound server
(audioflinger) to which several clients can connect at the same time. The idea
is good; it just isn't done right.

You can achieve true low latency with that kind of client/server design by
using real-time threads. And this can be done fairly safely on current Linux
kernels, which provide fine-grained CPU usage limits for realtime tasks.
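
From the SDK, by the way, the most an application thread can get today is a
better nice level: as far as I can tell, android.os.Process.setThreadPriority()
maps to setpriority(), not to a SCHED_FIFO/SCHED_RR realtime policy. For
contrast only (this is *not* realtime scheduling):

import android.os.Process;

class AudioThread extends Thread {
    @Override
    public void run() {
        // Highest audio-related priority an SDK app can request. It only
        // adjusts the thread's nice level; it does NOT grant a realtime
        // scheduling policy, which is what a low-latency client/server
        // path would need on the audioflinger side.
        Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
        // ... audio I/O loop ...
    }
}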

> What we (or at least most of us) don't want is a lot of additional
> bloat; the "feature" I'm talking about could potentially be "added", in
> part, by removing code, but in the end there would still be new APIs. :)

I agree that there's something wrong with the audio APIs, from the public APIs
down to the lowest level. For instance, there is no way to perform synchronized
I/O: input and output are distinct objects in the public APIs, and they stay
distinct all the way down to libaudio. Worse than that, I've been told that some
devices use different clocks for input and output.
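
To make that concrete, this is roughly what a record-and-play loop looks like
with the public SDK classes: two unrelated objects, each with its own buffer
and its own start call, and nothing tying a read() to a write(). Mono, 44.1 kHz
and the minimal buffer sizes are just illustrative choices here (and it needs
the RECORD_AUDIO permission):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class Passthrough implements Runnable {
    private static final int RATE = 44100;

    public void run() {
        int inSize = AudioRecord.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        int outSize = AudioTrack.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        // Input and output are two separate objects: separate buffers,
        // separate start calls, and (on some devices) separate clocks.
        AudioRecord in = new AudioRecord(MediaRecorder.AudioSource.MIC,
                RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, inSize);
        AudioTrack out = new AudioTrack(AudioManager.STREAM_MUSIC,
                RATE, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, outSize,
                AudioTrack.MODE_STREAM);

        short[] buf = new short[inSize / 2];
        in.startRecording();
        out.play();
        while (!Thread.interrupted()) {
            // Nothing here synchronizes the read with the write: the two
            // streams simply drift apart if their clocks don't match.
            int n = in.read(buf, 0, buf.length);
            if (n > 0) {
                out.write(buf, 0, n);
            }
        }
        in.stop();
        out.stop();
        in.release();
        out.release();
    }
}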

So, as I said previously, this isn't only about Android. The libaudio
implementation is provided by the platform vendor or the device OEM. So are the
kernel drivers.

I think the latency problem on Android is one of the best examples of the
device fragmentation issue. In particular, the Android audio internals are
clearly designed to give a lot of freedom to the OEMs and vendors. The
requirements (CDD) would need to be much stricter and more precisely defined to
achieve anything near low latency. Only then, given those new requirements,
could the audio stack be refactored, I think.

--
  Olivier
