I've built a few different systems similar to what you're describing - a radio front-end tuner that generates baseband I & Q at audio rates, which are then further processed by a DSP to extract the actual audio.

Normally what I do is slave the DSP rate to the tuner audio rate. That's usually possible since the data from the tuner is in I2S format and my DSP's I2S port can act as a slave. All subsequent processing happens at rates derived from the tuner's sample rate, and the audio output DAC is also running at that rate.

If your system architecture doesn't support running everything from the tuner's sample rate, then you will need an ASRC as discussed earlier. Depending on which DSP you're using, you may find that an ASRC co-processor is already available - many TI and ADI DSPs include one as an IP core you can access. Otherwise you'll have to code it up yourself. These aren't too hard to do; I have built them using the buffer depth measurement as the observable. Maintain a short input buffer and use servo techniques to keep the read pointer trailing the write pointer by a set amount, with the depth error driving small adjustments to the resampling ratio. Fairly simple polyphase resampling, such as described by JOS, works well and can maintain an SNR of 70 dB or better, which is often sufficient for radio applications where the noise floor is generally pretty high anyway.
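
To make that concrete, here is a minimal sketch in C of the buffer-depth servo. Everything here is illustrative - the names, the buffer size, and the loop gain are made up, and plain linear interpolation stands in for the polyphase filter (underrun/overrun handling is also omitted):

    #include <stddef.h>

    #define RING_SIZE    1024               /* power of two for cheap wrap */
    #define TARGET_DEPTH (RING_SIZE / 2)    /* desired read/write spacing */

    typedef struct {
        float  ring[RING_SIZE];
        size_t wr, rd;      /* free-running indices, masked on access */
        double frac;        /* fractional part of the read position */
        double ratio;       /* instantaneous resampling ratio, near 1.0 */
    } asrc_t;

    /* Producer side: the tuner's I2S/DMA interrupt pushes samples at
       the tuner's clock. */
    void asrc_push(asrc_t *a, float s)
    {
        a->ring[a->wr++ & (RING_SIZE - 1)] = s;
    }

    /* Consumer side: called once per DAC block at the DSP's clock.
       The depth error servos the ratio so the read pointer trails the
       write pointer by TARGET_DEPTH samples on average. Depth is the
       integral of the rate mismatch, so a small proportional gain
       here acts like an integrator on the rate error. */
    void asrc_pull(asrc_t *a, float *out, size_t n)
    {
        double err = (double)(a->wr - a->rd) - TARGET_DEPTH;
        a->ratio = 1.0 + 1e-6 * err;    /* tiny gain: move slowly */

        for (size_t i = 0; i < n; i++) {
            /* Linear interpolation between adjacent input samples; a
               real implementation would use a polyphase filter here. */
            float s0 = a->ring[a->rd & (RING_SIZE - 1)];
            float s1 = a->ring[(a->rd + 1) & (RING_SIZE - 1)];
            out[i]   = s0 + (float)a->frac * (s1 - s0);

            a->frac += a->ratio;
            while (a->frac >= 1.0) {
                a->frac -= 1.0;
                a->rd++;
            }
        }
    }

Start with ratio = 1.0, prefill the ring to TARGET_DEPTH before the first pull, and keep the gain small enough that the ratio moves much more slowly than the block rate.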

Eric

On 02/05/2018 10:20 AM, Benny Alexandar wrote:
Hi Robert,

Yes, I need to do ASRC, and the challenge is how to estimate the drift
and correct it.

As I mentioned in the figure attached earlier, the DSP is the slave and the tuner chip, which feeds it the baseband samples, is the master. Now the question is where to do the timestamping to correctly estimate the drift. The system is an embedded platform with a tuner chip and a DSP chip, each clocked by its own independent crystal oscillator. So my question is how to timestamp the audio data. After the channel decoder, the compressed audio has variable decoding times depending on the audio content, so that is not a good place to timestamp - it is very jittery.

Suppose every digital radio transmission frame of duration T seconds corresponds to T seconds of audio. Can I timestamp the baseband RF IQ samples when they arrive at the DSP? Then, after demodulation and audio decoding, calculate the maximum delay the chain can have in the worst case, and add that as a target delay before playing out. While playing out each audio period, read the current time; the difference, current time - (RF packet arrival time + target delay), should ideally be zero
if the audio plays out at the same rate at which it was transmitted.
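
In rough C, that check might look like this (just a sketch - the names are placeholders, and the one-pole smoother is only there to average out the per-period jitter):

    typedef struct {
        double smoothed;   /* low-pass filtered drift estimate, seconds */
        double alpha;      /* smoothing coefficient, e.g. 0.01 */
    } drift_est_t;

    /* Called once per audio playout period. 'now' is the DSP's local
       clock at playout; 'rf_arrival' is the timestamp taken when the
       RF frame's IQ samples reached the DSP; 'target_delay' is the
       fixed worst-case decode budget. The raw difference should stay
       at zero if the two clocks agree; its slope over time is the
       drift to feed back into the ASRC ratio. */
    double drift_update(drift_est_t *d, double now,
                        double rf_arrival, double target_delay)
    {
        double raw = now - (rf_arrival + target_delay);
        d->smoothed += d->alpha * (raw - d->smoothed);
        return d->smoothed;
    }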

-ben
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
