First, regarding your specific question:

Line 95 has:
    res = (*queueItf)->Enqueue(...
which makes sense to me as a *possible* test point.

But line 264 has:
     /* Make sure player is stopped */
     res = (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_STOPPED);
That doesn't seem to make sense; are you sure you meant that line number?

----

Second, for the larger question of how to test:
unfortunately I'm not permitted to give opinions on this issue,
as it's outside my scope [I work on the audio platform, not on
conformance].
My unofficial "guess" is that the time of the Enqueue call would make
sense as the starting point for measuring the continuous latency.
But you'll really need to check with
your partner conformance contact to get an official answer.
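
To illustrate what I mean (a rough sketch only, not an official test
method; bqPlayerCallback, nextBuffer, and the external output-time
measurement are all assumptions on my part):

    #include <time.h>
    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    /* Hypothetical PCM buffer supplied by the application. */
    static short nextBuffer[512];
    static struct timespec enqueueTime;

    void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
    {
        /* Start the stopwatch at the moment of Enqueue ... */
        clock_gettime(CLOCK_MONOTONIC, &enqueueTime);
        SLresult res = (*bq)->Enqueue(bq, nextBuffer, sizeof(nextBuffer));
        (void) res;  /* a real test would check SL_RESULT_SUCCESS */
        /* ... and stop it when these samples are audible at the output,
         * measured externally (e.g. a microphone or loopback rig):
         * continuous latency ~= output time - enqueueTime */
    }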


On Thursday, April 11, 2013 11:16:24 PM UTC-7, Ship Hsu wrote:
>
> Dear Glenn Kasten,
> We are concerned about the audio low-latency requirements in the Android 
> 4.2 Compatibility Definition Document (android-4.2-cdd), which requires:
>     *1. cold output latency of 100 milliseconds or less*
>     *2. continuous output latency of 45 milliseconds or less*
> The document says: *"If a device implementation meets the requirements of 
> this section after any initial calibration when using the OpenSL ES PCM 
> buffer queue API"*. We have a question about this definition: what is the 
> state of the OpenSL ES AudioPlayer when we start the stopwatch? If there 
> is a background sound playing, should we start the stopwatch from 
> *"enable an audioplayer"* or from *"enqueue buffer in callback function"*?
>
> In the attachment (the Google OpenSL ES example), should we start the 
> audio latency measurement from line #264 or from line #95? 
>
> Thank you!
>
> Sincerely,
>
On Friday, September 7, 2012 at 11:44:29 PM UTC+8, Glenn Kasten wrote:
>>
>> 1. You didn't mention whether you're developing Android apps or the 
>> platform. If you're an Android app developer, you should be using only 
>> documented public APIs. For audio output, that's the Java-language 
>> android.media.AudioTrack in the SDK and the C-language OpenSL ES 
>> AudioPlayer with PCM buffer queue in the NDK. AUDIO_OUTPUT_FLAG_FAST is 
>> an internal symbol that's used only at the AudioTrack C++ level, and it 
>> is not a documented public API. So you should not need to deal with 
>> AUDIO_OUTPUT_FLAG_FAST.
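>>
>> For reference, "OpenSL ES AudioPlayer with PCM buffer queue" means 
>> roughly the following (a minimal sketch; engineItf, outputMixObject, 
>> and playerObject are assumed to be set up already, and error checking 
>> is omitted):
>>
>>     SLDataLocator_AndroidSimpleBufferQueue locBq =
>>         { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 /* buffers */ };
>>     SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 2 /* channels */,
>>         SL_SAMPLINGRATE_44_1, SL_PCMSAMPLEFORMAT_FIXED_16,
>>         SL_PCMSAMPLEFORMAT_FIXED_16,
>>         SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
>>         SL_BYTEORDER_LITTLEENDIAN };
>>     SLDataSource audioSrc = { &locBq, &fmt };
>>     SLDataLocator_OutputMix locOutmix =
>>         { SL_DATALOCATOR_OUTPUTMIX, outputMixObject };
>>     SLDataSink audioSnk = { &locOutmix, NULL };
>>     const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
>>     const SLboolean req[] = { SL_BOOLEAN_TRUE };
>>     (*engineItf)->CreateAudioPlayer(engineItf, &playerObject,
>>         &audioSrc, &audioSnk, 1, ids, req);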
>>
>> But if you're doing platform development such as porting, it can be 
>> helpful to understand the internal implementation in JB ... 
>> AUDIO_OUTPUT_FLAG_FAST is a hint from the API level that this 
>> application would like to use a lower-latency, fewer-feature audio 
>> track if one is available. The request is not guaranteed to be accepted 
>> by the audio server (AudioFlinger). The features that are not available 
>> include effects, as you said, and also sample rate conversion. If 
>> AudioFlinger can handle the request it will create a "fast track", 
>> otherwise a normal track.
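>>
>> As a rough simplification of that decision (needsEffects and 
>> kMaxFastTracks are illustrative names, not actual AudioFlinger 
>> identifiers):
>>
>>     /* Sketch of the server-side decision, simplified: */
>>     bool acceptFast = (flags & AUDIO_OUTPUT_FLAG_FAST)
>>             && (trackSampleRate == halSampleRate)  /* no resampling   */
>>             && !needsEffects                       /* no effect chain */
>>             && (numFastTracks < kMaxFastTracks);   /* slot available  */
>>     if (!acceptFast) {
>>         flags &= ~AUDIO_OUTPUT_FLAG_FAST;  /* fall back to normal track */
>>     }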
>>
>> 2. The "fast" in FastMixer means that it executes more often, and that it 
>> uses less CPU time each time it runs, than the normal mixer thread.  The 
>> normal mixer thread runs about once every 20 ms, and the FastMixer thread 
>> runs at rate of once per HAL buffer (which is ideally less than 20 ms). The 
>> FastMixer thread supports up to 7 fast tracks, and does not support sample 
>> rate conversion of effects. So it uses a limited amount of CPU each time it 
>> runs. The normal mixer thread supports more tracks (up to 32), and supports 
>> sample rate conversion and effects. So it can use more CPU each time it 
>> runs. The main purpose of FastMixer design was not to take advantage of 
>> multi-core.
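>>
>> To put numbers on that (illustrative values only; the actual HAL 
>> buffer size varies by device):
>>
>>     /* FastMixer period = HAL buffer size / sample rate, e.g.      */
>>     /*     192 frames / 48000 Hz = 0.004 s = 4 ms per cycle,       */
>>     /* versus ~20 ms (~960 frames at 48 kHz) for the normal mixer. */
>>     double fastMixerPeriodMs = 192.0 / 48000.0 * 1000.0;  /* == 4.0 */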
>>
>> On Tuesday, September 4, 2012 6:59:27 PM UTC-7, big_fish_ wrote:
>>>
>>> I am an Android developer, and I just read the FastMixer code in 
>>> Jelly Bean.
>>>
>>> I have some questions:
>>>
>>> 1. If I submit an AudioTrack with the AUDIO_OUTPUT_FLAG_FAST flag, 
>>> then this track can't have AudioEffect processing, right?
>>>
>>>     I noticed that the FastMixer thread handles all FastTracks without 
>>> AudioEffect, except mFastTracks[0], because that track is passed from 
>>> MixerThread and has already been through mixing and effect processing, 
>>> right?
>>>
>>> 2. About performance: why is FastMixer faster than before?
>>>
>>> If we have 20 tracks, and we set 8 of them as fast tracks and 12 as 
>>> normal tracks, then there are two threads doing the mixing. So if we 
>>> run on a dual-core CPU, we have a multithreading advantage.
>>>
>>> But if all 32 tracks are fast tracks, then MixerThread will not do 
>>> any mixing, and there will be no multithreading advantage.
