Hi Doug,

Many thanks for responding to my query. I completely agree with your explanation: to keep the end-user experience acceptable, audio shouldn't be compromised. However, I have a related point. If the CPU is not sufficient for video decoding, the current algorithm drops frames.
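Just to make sure we are describing the same behaviour, here is roughly the kind of lateness check I believe the current drop decision amounts to. This is only my own sketch; none of the names below are the actual AwesomePlayer symbols, and the threshold is an assumed value.

#include <cstdint>

// Rough illustration only -- these are not the real AwesomePlayer symbols.
struct VideoFrame {
    int64_t mediaTimeUs;   // presentation timestamp of the decoded frame
};

// Assumed tolerance before a late frame is dropped (~one frame at 25 fps).
static const int64_t kMaxLateThresholdUs = 40000;

// audioClockUs is the playback position reported by the audio path.
bool shouldDropFrame(const VideoFrame &frame, int64_t audioClockUs) {
    // Positive lateness means the frame's display time has already passed.
    int64_t latenessUs = audioClockUs - frame.mediaTimeUs;
    return latenessUs > kMaxLateThresholdUs;
}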
However, the drop logic itself is what interests me. Consider a scenario where audio is uninterrupted and will continue to be so, but video is lagging. The amount of lag is known to the video rendering part, i.e. AwesomePlayer: the lateness is the current video time compared against the audio system time. So the video sub-system knows it is behind audio by some amount X, which could be passed down to the extractor. Since the extractor is already supplying the corresponding audio frames (the audio path is uninterrupted), the video path could start receiving the video frames closest to the audio position. Let's treat that as a synchronization point. There is a very high probability that this sync point will not be a sync frame (I-frame or IDR frame), but if one either starts decoding there and displays the gray pictures, or waits until the next sync frame arrives, the video sub-system could potentially be showing synchronized video sooner. In the current solution there is no such flexibility; the only way video can resynchronize is to decode faster and faster. That is my concern, and hence I wanted to check whether such a solution had been discussed and what the rationale behind the design was. A rough sketch of the idea follows the quoted message below.

Thanks,
Ganesh

On Sep 21, 11:19 am, Doug <[email protected]> wrote:
> On Sep 19, 5:00 pm, Ganesh <[email protected]> wrote:
>
> > In these scenarios, shouldn't the system be able to resynchronize
> > faster, i.e. skip frames at the parser level and start decoding from a key
> > frame at the new position?
>
> > Could someone explain why a similar strategy wasn't adopted?
>
> Is there software that you know of that does this effectively?
>
> The thing about A/V media is that the A packets are interleaved with
> the V packets. You can't count on an entire V frame to occur without
> A possibly interrupting it in the sequence. And you are required to
> read both in the sequence they appear in the stream. If your A
> decodes and renders in time, but your V does not, then there is no
> incentive to skip ahead unless you want to disrupt BOTH A and V from
> the user's perspective. Instead, you just keep rendering the available
> A at the prescribed rate and skip the V that doesn't decode in time.
> V takes a whole lot more processing power, so it will always fall
> behind if there is not enough CPU to process both.
>
> Doug
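For reference, here is a rough sketch of the resynchronization I have in mind. The Extractor and VideoDecoder interfaces are my own placeholders, not the real stagefright classes; only the control flow matters.

#include <cstdint>

// Hypothetical interfaces, not the real stagefright classes.
struct Extractor {
    // Jump the video track to the first sample at or after targetUs;
    // if syncOnly is true, land on the nearest sync frame (I/IDR) instead.
    virtual void seekVideoTo(int64_t targetUs, bool syncOnly) = 0;
    virtual ~Extractor() {}
};

struct VideoDecoder {
    virtual void flush() = 0;                   // discard queued input/output
    virtual void resumeFrom(int64_t timeUs) = 0;
    virtual ~VideoDecoder() {}
};

// latenessUs = audioClockUs - lastVideoTimeUs, i.e. how far video is behind
// the uninterrupted audio clock.
void resyncVideo(Extractor &extractor, VideoDecoder &decoder,
                 int64_t audioClockUs, int64_t latenessUs,
                 bool tolerateGrayFrames) {
    if (latenessUs <= 0) {
        return;  // video is not behind; nothing to do
    }

    // Option A (tolerateGrayFrames == true): start from the closest sample
    // to the audio clock and accept gray/partial pictures until the next
    // sync frame.  Option B: jump straight to the nearest sync frame.
    extractor.seekVideoTo(audioClockUs, /*syncOnly=*/!tolerateGrayFrames);

    decoder.flush();
    decoder.resumeFrom(audioClockUs);
    // Audio is left untouched; only the video track skips ahead, so the
    // decoder no longer has to decode faster and faster to catch up.
}

Either option trades a brief visual artifact (or a short wait until the next sync frame) for getting audio and video back in step without touching the audio path.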

