Hi Gang Shen,

I was just looking through my email and saw the below. Where
are you at with this, still having problems?

--greg.


[EMAIL PROTECTED] wrote:
Please see comments below...

-----Original Message-----
From: ext Greg Wright [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 17, 2005 1:45 PM
To: Shen Gang.1 (Nokia-TP-MSW/Dallas)
Cc: [email protected]
Subject: Re: [Audio-dev] HXAudioSession & CHXAudioStream


[EMAIL PROTECTED] wrote:

Hi, Greg,

Actually, my HW decoder/device is doing exactly what you suggested. All dummy 
PCMs coming through Device::Write() are thrown away. Encoded data (if there is 
any) is pushed to HW as soon as possible. OnTimeSync() has no problem. 
Device::Write() only updates a counter (nBytesToWrite) and triggers the 
HW device to play if the HW is stopped for some reason. It has been working fine 
in most cases.

The problem I had is a deadlock that can happen when there is a mismatch between 
the total frames (bytes) pushed down through Device::Write() and the total frames 
generated through Decoder::Decode(). This happens very often during 
streaming. It is related to the code for a replaced device in 
HXAudioSession::CheckToPlayMoreAudio().

For example, at one moment (t0), 200ms of PCM has been pushed down through 
Device::Write(), while at the same time there are 4 encoded frames (80ms) in the 
buffer, made by Decoder::Decode().

                        Bytes pushed down       Bytes buffered
                        via Device::Write()     by Decoder::Decode()
        t0              200ms                   80ms
        t0+80ms         120ms                   0       // no encoded data



First, what do you mean by "Device::Write()"? I hope you mean
CHXAudioDevice::Write().


Then let's check HXAudioSession::CheckToPlayMoreAudio()
           if (m_pAudioDev->GetCurrentAudioTime(ulCurTime) == HXR_OK)
           {
               UINT32 ulNumBlocksPlayed = (UINT32) ((double) ulCurTime / m_dGranularity);
               if (m_ulBlocksWritten > ulNumBlocksPlayed)
               {
                   m_uNumToBePlayed = uNumBlocks = (UINT16) (m_ulBlocksWritten - ulNumBlocksPlayed);
               }

               /* Now that m_ulMinimumPushdown can be set by the user, it is possible
                * for MIN_BLOCKS_TOBEQUEUED to be 0.
                */
               if (uNumBlocks == 0 ||
                   uNumBlocks < m_ulMinBlocksTobeQueued)
               {
                   bPlay = TRUE;
               }


If you are in this code block, then it means that you do not return true
from IsWaveOutDevice(). Is that correct? If so, this code uses the current

==== GS =====
I am going into this block because "m_bReplacedDev" is true. The HW device 
is not a CHXAudioDevice. IsWaveOutDevice() is a specific method of CHXAudioDevice, not 
part of IHXAudioDevice.
=============

audio time to determine how many blocks have been played. If this is causing
you problems, make sure that GetCurrentAudioTime(ulCurTime) is smooth and
constantly increasing. It should never just sit at a given number. Since your
hardware decoder should be pushing data as fast as it can to the audio hardware,
you should never see an underflow. If you do, then it could mean that the
renderer is not getting packets fast enough (or in lumps). This, again, can be
caused by a bad implementation of GetCurrentAudioTime(ulCurTime) or no
OnTimeSync() calls to the core (or just not often enough).

Please verify that GetCurrentAudioTime() returns good values and is *always*
increasing. Next, verify that you are calling the equivalent of:

==== GS ====
The underflow happens during streaming. When the HW decoder/device consumes all 
encoded frames and no packets are coming in, GetCurrentAudioTime() will not 
increase unless the device fakes the timeline. Apart from that, the following 
code is very close to mine. The deadlock case is a special case of underflow, 
where GetCurrentAudioTime() cannot give a correct measure of whether the 
AudioDevice still has data to play. This is probably true for all HW devices, 
because the total number of frames pushed down is not equal to the total 
encoded frames buffered in the HW decoder, at least in the current code.

On the renderer side, there are three places in the Renderer that can produce 
audio frames:
        1) OnTimeSync()
                // not for the Symbian s60basic profile; HELIX_CONFIG_MIN_PCM_PUSH_DOWN is not defined
        2) OnPacket()
                // not working in the deadlock case, because m_PlayState == playing
        3) OnDryNotification()
                // in the deadlock case I mentioned, HXAudioSession::CheckToPlayMoreAudio()
                // cannot reach PlayAudio(), so OnDryNotification() won't be invoked

If underflow happens and CheckToPlayMoreAudio() gets stuck as described, the 
Renderer won't pass any data into the decoder. The pipeline then blocks itself.
============

     if (!m_bPaused)
     {
         ULONG32 ulAudioTime = 0;
         theErr = _Imp_GetCurrentTime( ulAudioTime );

         if (m_pDeviceResponse)
         {
             theErr = m_pDeviceResponse->OnTimeSync(ulAudioTime);
         }
     }

It should be calling that every 10-20ms as a start.
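For what it's worth, here is a tiny sketch of the clamping a device-side tick could do before each OnTimeSync() call. Only the monotonicity requirement comes from the advice above; the class name and HwClockMs parameter are made up for illustration and are not Helix APIs:

```cpp
// Hypothetical sketch: guard the value reported to the core every
// ~10-20ms so the audio clock never moves backwards, even if the
// underlying hardware clock jitters. Illustrative only.
class TimelineReporter
{
public:
    TimelineReporter() : m_ulLastReported(0) {}

    // Clamp: never report a time earlier than the previous report.
    // The return value would be passed to m_pDeviceResponse->OnTimeSync().
    unsigned long NextTimeSync(unsigned long ulHwClockMs)
    {
        if (ulHwClockMs > m_ulLastReported)
            m_ulLastReported = ulHwClockMs;
        return m_ulLastReported;
    }

private:
    unsigned long m_ulLastReported;
};
```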

--greg.





After 80ms, 'uNumBlocks' is 1 (100ms per block) and 'm_ulMinBlocksTobeQueued' 
is 1 too, so 'bPlay' can never be TRUE. That means, from now on, 
CheckToPlayMoreAudio() will never be able to reach PlayAudio().
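The stuck state follows directly from the arithmetic in CheckToPlayMoreAudio(). A standalone sketch (not Helix code; it just mirrors the uNumBlocks test with the numbers from the example above):

```cpp
// Illustrative only: the bPlay condition from CheckToPlayMoreAudio(),
// extracted as a free function. With 100ms granularity, 2 blocks (200ms)
// written, and a frozen audio clock, uNumBlocks is pinned at 1 and the
// condition (uNumBlocks == 0 || uNumBlocks < min) can never become true.
bool ShouldPlayMore(unsigned long ulBlocksWritten,
                    unsigned long ulCurTimeMs,
                    double dGranularityMs,
                    unsigned long ulMinBlocksTobeQueued)
{
    unsigned long ulNumBlocksPlayed =
        (unsigned long)((double)ulCurTimeMs / dGranularityMs);
    unsigned long uNumBlocks = 0;
    if (ulBlocksWritten > ulNumBlocksPlayed)
        uNumBlocks = ulBlocksWritten - ulNumBlocksPlayed;

    return (uNumBlocks == 0 || uNumBlocks < ulMinBlocksTobeQueued);
}
```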

Meanwhile, the Renderer stops decoding too. There are three places where 
CAudioRenderer invokes DoAudio() -- depacketize, decode, and pass PCM to the stream:
        1) OnTimeSync()
                // not for s60basic.pcf; HELIX_CONFIG_MIN_PCM_PUSH_DOWN is defined
        2) OnPacket()
                // not working; at this moment, m_PlayState == playing
        3) OnDryNotification()
                // since HXAudioSession::CheckToPlayMoreAudio() cannot reach PlayAudio(),
                // OnDryNotification() won't be invoked

So the producer and consumer get stuck there.
The reasons the original device class works even when there is a mismatch could 
be:
        1) The original device consumes PCMs. It is fine for it to consume dummy 
PCMs and keep CheckToPlayMoreAudio() moving.
        2) The original device has the specific method 
NumberOfBlocksRemainingToPlay(). HXAudioSession::CheckToPlayMoreAudio() depends 
on this method to decide whether to PlayAudio(). Even if there is a mismatch, 
NumberOfBlocksRemainingToPlay() can still give HXAudioSession the right 
number.
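For illustration, a replaced HW device could keep its own NumberOfBlocksRemainingToPlay()-style accounting based on buffered encoded frames rather than dummy PCM bytes, so the session's "is there still data to play?" question gets an honest answer. All names here are made up; this is not part of IHXAudioDevice:

```cpp
// Hypothetical accounting for a replaced HW device: track buffered
// playback time in ms from the encoded-frame side, and report it in
// 100ms-style blocks. Illustrative sketch only.
class HwDeviceAccounting
{
public:
    HwDeviceAccounting() : m_ulMsBuffered(0) {}

    // Called when Decoder::Decode() hands us an encoded frame.
    void OnEncodedFrame(unsigned long ulFrameDurMs)
    { m_ulMsBuffered += ulFrameDurMs; }

    // Called from the HW playback-complete callback.
    void OnFramePlayed(unsigned long ulFrameDurMs)
    {
        m_ulMsBuffered = (m_ulMsBuffered > ulFrameDurMs)
                       ? m_ulMsBuffered - ulFrameDurMs : 0;
    }

    // Analogue of NumberOfBlocksRemainingToPlay(), rounded up.
    unsigned long BlocksRemaining(unsigned long ulBlockMs) const
    { return (m_ulMsBuffered + ulBlockMs - 1) / ulBlockMs; }

private:
    unsigned long m_ulMsBuffered;
};
```

With this, the device can report 0 blocks remaining as soon as the encoded queue drains, regardless of how many dummy PCM bytes came through Write().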

Above is my analysis of the deadlock I met while making a HW 
decoder/device. If anything is incorrect, please let me know.

Thanks,

Gang Shen




-----Original Message-----
From: ext Greg Wright [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 16, 2005 6:51 PM
To: Shen Gang.1 (Nokia-TP-MSW/Dallas)
Cc: [email protected]
Subject: Re: [Audio-dev] HXAudioSession & CHXAudioStream


[EMAIL PROTECTED] wrote:


Hi, Greg,

Our DSP decoder and device are on the same physical device. We push encoded frames into the device but cannot get decoded PCMs back. So the decoder class (its Decode() method) buffers each encoded frame and feeds dummy PCMs back to the audio services. When AudioDevice::Write() is called, we push a proper amount of buffered encoded frames to the physical device.
                Decoder::Decode() ----> Audio Services ----> Device::Write()

The mismatch happens between the total bytes coming into AudioDevice::Write() and the total bytes Decoder::Decode() feeds back (equal to the total frames the Decoder buffered). There are moments when encoded frames run out while Device::Write() still gets called with dummy PCMs pushed down. I am trying to understand in what cases this mismatch will happen -- so far we are testing AMR-NB and it happens quite frequently during streaming.



You should not be using incoming PCM data to meter your flow of decoded audio
data. The audio stream is the master source of the timeline. So you should
just be pushing audio data into your audio device as fast as it can consume
it (which will be samplerate*channels*bits/sample).

Your audio device code must then provide the master timeline information
via OnTimeSync() calls into the core on a regular basis. This timeline
information is sent out to all the other renderers in the system to
provide them with the current timeline. That is how video renderers know
which frame to blit.

So, just ignore and throw away all the dummy PCM data coming into the
audio device code (CHXAudioDevice) and provide the timeline information
via OnTimeSync() (and GetCurrentAudioTime()) and all should work fine.
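To make the pattern concrete, here is a minimal sketch of the Write() behavior described above: count the dummy PCM bytes for bookkeeping, throw the data away, and flush buffered encoded frames to the hardware instead. The class, PushToHw() stand-in, and member names are assumptions for illustration, not Helix APIs:

```cpp
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Hypothetical replaced-device sketch: dummy PCM in, encoded frames out.
class HwAudioDevice
{
public:
    HwAudioDevice() : m_cbBytesWritten(0) {}

    // Called by the audio session with dummy (silent) PCM.
    void Write(const unsigned char* /*pDummyPcm*/, std::size_t cbLen)
    {
        m_cbBytesWritten += cbLen;  // track for timeline bookkeeping only
        FlushEncoded();             // the PCM itself is discarded
    }

    // Called by the decoder side with a buffered encoded frame.
    void QueueEncodedFrame(std::vector<unsigned char> frame)
    { m_EncodedQueue.push_back(std::move(frame)); }

    std::size_t BytesWritten() const { return m_cbBytesWritten; }
    std::size_t QueuedFrames() const { return m_EncodedQueue.size(); }

private:
    void FlushEncoded()
    {
        // Push encoded frames to the hardware as fast as it will take
        // them; PushToHw() is a stand-in for the real driver call.
        while (!m_EncodedQueue.empty())
        {
            // PushToHw(m_EncodedQueue.front());
            m_EncodedQueue.pop_front();
        }
    }

    std::deque< std::vector<unsigned char> > m_EncodedQueue;
    std::size_t m_cbBytesWritten;
};
```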

--greg.







Thanks,

Gang Shen



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of ext Greg
Wright
Sent: Tuesday, August 16, 2005 1:54 PM
To: Shen Gang.1 (Nokia-TP-MSW/Dallas)
Cc: [email protected]
Subject: Re: [Audio-dev] HXAudioSession & CHXAudioStream


[EMAIL PROTECTED] wrote:



Hi, Greg,

You are right. I am using that guide to code a HW audio device. I don't have problems with the dummy PCMs illustrated in that guide. As you know, a HW audio device/decoder buffers encoded audio frames and feeds back corresponding dummy PCMs. I did exactly that. Later, I found that HXAudioSession pushes more PCMs down to the device than the dummy PCMs the decoder fed back. It is not clear to me in what cases HXAudioSession will insert silent PCMs into the stream. Maybe packet loss? If so, how does the HW audio device know that the PCMs pushed down are actually fake silence? -- this is very important for HW devices/decoders, because they don't accept silent PCMs.


All dummy PCMs sent through the audio services should be silence
(in case they get mixed or faded with other streams). Each dummy
PCM should also be thrown away. I assume that the real audio data
is being sent directly from your DSP decoder to the physical audio
device. Is that correct? Or are you decoding in the DSP
and then sending *real* audio PCM data back to the renderer, which,
in turn, sends it to the audio services?


Also, can you tell me how you know there is *more* PCM than
what your decoder provided? Are you measuring by bytes or by
number of chunks sent (Write())? Your decoder will decode in
the native format of the coded audio, right? It is possible that
the media engine is resampling the PCM data to match whatever
format the audio device was opened at. For example, the audio
could be 44.1K but the audio device could only open up at 16K.
That will result in a different amount of data being written to
the CHXAudioDevice code. You can take a look at:

HX_RESULT CHXAudioSession::GetDeviceFormat()
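A quick back-of-the-envelope check for the resampling case: the byte rate seen at CHXAudioDevice::Write() follows the format the device was opened at, not the codec's native format. A trivial helper (illustrative only, not a Helix function):

```cpp
// Bytes of PCM per second for a given open format. Comparing the
// device-format rate against the codec's native rate shows how large
// the Write() byte-count mismatch from resampling can be.
unsigned long BytesPerSecond(unsigned long ulSampleRate,
                             unsigned long ulChannels,
                             unsigned long ulBitsPerSample)
{
    return ulSampleRate * ulChannels * (ulBitsPerSample / 8);
}
```

For instance, 44.1kHz stereo 16-bit source audio versus a device opened at 16kHz mono 16-bit differ by more than a factor of five in bytes per second.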





You mentioned that "there are a few places in hxaudses.cpp where the PCM data is silenced and/or inserted into the audio stream". Would you please explain those places? It seems silent PCMs are generated in CHXAudioStream, which is controlled by HXAudioSession. Is that right?


Look in hxaudstr_net.cpp for CAudioSvcSampleConverter::silence() and the
following code chunks in hxaudses.cpp:

                   /* If the mixer buffer was not used, make sure it is initialized
                    * to silence since we pass it to post process hooks
                    */
                   if (!bIsMixBufferDirty)
                   {
//{FILE* f1 = ::fopen("e:\\audioses.txt", "a+"); ::fprintf(f1, "%lu\t%p\tsilence in mix buffer\n", HX_GET_BETTERTICKCOUNT(), this);::fclose(f1);}
                       ::memset(pMixBuffer, 0, HX_SAFESIZE_T(m_ulBytesPerGran));
                   }


           /* did we ever write to the session buffer ? */
           if (m_pPlayerList->GetCount() > 1 && !m_bSessionBufferDirty)
           {
               HXLOGL4(HXLOG_ADEV, "CHXAudioSession[%p]::PlayAudio(): silence in session buffer", this);
               ::memset(pSessionBuf, 0, HX_SAFESIZE_T(m_ulBytesPerGran));
           }

--greg.





Thanks,

Gang Shen








-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of ext Greg
Wright
Sent: Monday, August 15, 2005 3:45 PM
To: Shen Gang.1 (Nokia-TP-MSW/Dallas)
Cc: [email protected]
Subject: Re: [Audio-dev] HXAudioSession & CHXAudioStream


[EMAIL PROTECTED] wrote:




Hi,

When I program a DSP audio device, I found something interesting.
When there is no data in the buffer (CHXAudioStream), HXAudioSession still pushes fake PCMs down to the audio device (::CheckToPlayMoreAudio). Those PCMs are generated by CHXAudioStream. Since the DSP device has to buffer every encoded frame in a separate queue, it is disturbing to receive ::Write() with extra PCMs. Although I made a workaround for that, I am wondering why HXAudioSession & CHXAudioStream are designed that way. Could anyone kindly explain a little?

Thanks,

Gang Shen


Could you explain a little more about what it is you are doing
exactly? From the above it sounds like you have a hardware decoder
for some audio stream. Is that correct? If so, have you read
the "Hardware Decoder Integration Guide":

  https://client.helixcommunity.org/2004/devdocs/dsp_inte

It talks a bit about where some of this 'dummy PCM' can come
from and why it is used.


If the above is not the case, then there are a few places in
hxaudses.cpp where the PCM data is silenced and/or inserted into
the audio stream. I would need to know more about your specific
playback scenario to tell you more however.

Let me know if you have any other questions,
--greg.


_______________________________________________
Audio-dev mailing list
[email protected]
http://lists.helixcommunity.org/mailman/listinfo/audio-dev








