I am porting Real Media to Android. Please let me know the following:

1) Can I share some global variables between the parser interface file
(fileformats/rm/parser/src/irmff.cpp) and omx_ra_component.cpp? If so,
how can I share them?
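
For (1), to show what I mean: if irmff.cpp and omx_ra_component.cpp
end up loaded into the same process, one way I can imagine sharing
state is a small singleton in a common header instead of raw globals.
This is only a sketch (RmSharedState and its fields are names I made
up, not OpenCore's), and it cannot work if the OMX component runs in
a separate process; that case would need real IPC.

// rm_shared_state.h -- hypothetical common header, included by both
// fileformats/rm/parser/src/irmff.cpp and omx_ra_component.cpp.
// Only valid if parser and OMX component live in the SAME process.
#include <stdint.h>
#include <pthread.h>

struct RmSharedState {
    pthread_mutex_t lock;
    uint32_t targetNptMs;        // seek target seen by the parser
    uint32_t audioKeyframePtsMs; // PTS of the audio keyframe after seek

    static RmSharedState& get() {
        // Function-local static: one instance per process.
        static RmSharedState s = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };
        return s;
    }
};

// Parser side (irmff.cpp), after a repositioning request:
//     pthread_mutex_lock(&RmSharedState::get().lock);
//     RmSharedState::get().audioKeyframePtsMs = keyframePts;
//     pthread_mutex_unlock(&RmSharedState::get().lock);
// The OMX component reads the same fields under the same lock.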

2) After a seek, can I drop audio frames before giving them to the
renderer at media_output_inport, based on the targetNPT of the seek
and the actual PTS of the audio packets received after the seek?
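
To make (2) concrete, the kind of check I have in mind at the media
output inport is sketched below; shouldDropAudioBuffer is a name I
made up, and I am assuming the targetNPT of the seek is available
there, which is part of my question.

#include <stdint.h>

// Hypothetical helper for media_output_inport: decide whether a
// decoded audio buffer is entirely stale after a seek.
static bool shouldDropAudioBuffer(uint32_t targetNptMs,  // seek target
                                  uint32_t bufferPtsMs,  // PTS of this buffer
                                  uint32_t durationMs) { // buffer duration
    // The whole buffer ends at or before the seek target: drop it.
    return bufferPtsMs + durationMs <= targetNptMs;
}

// A buffer that straddles the target (bufferPtsMs < targetNptMs <
// bufferPtsMs + durationMs) would instead need its leading samples
// clipped rather than being dropped outright.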

Here are some details of the issue I hit while porting:

I have integrated Real Media (RM) into Eclair. In Real Media files,
the audio codec can be one of two formats: AAC or G2COOK (also called
Real Media 8 Low Bitrate; G2COOK is a constant-bitrate codec, while
AAC is a VBR codec).

During normal playback there is AV sync in all Real Media files,
irrespective of whether the audio format is AAC or G2COOK. If I seek,
however, there is still AV sync in all files with AAC audio, but there
is an AV sync mismatch in all files with G2COOK audio. Most of the
time the audio is heard ahead of the video.

One of the differences between the AAC and G2COOK formats is that with
AAC, an RM audio packet carries multiple encoded audio frames and the
packet can be decoded independently by the decoder.

With G2COOK, on the other hand, the RM audio packets are interleaved
up to a factor of 30 packets, and the decoder needs to buffer all 30
packets before it can deinterleave them and decode the encoded frames
in each of those packets.

The first of these 30 packets is called an audio keyframe. So,
essentially, after repositioning (seeking) in the file, I need to look
for the closest audio keyframe as well as the closest video keyframe
around the seek position and return the PTS of these two packets to
the player upon request.
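
The repositioning logic I am describing is roughly the following;
IndexEntry and the index vector are my own simplification of whatever
irmff.cpp actually keeps internally, so treat this as a sketch only.

#include <stddef.h>
#include <stdint.h>
#include <vector>

// Simplified stand-in for the parser's packet index of one stream.
struct IndexEntry {
    uint32_t ptsMs;      // packet timestamp
    bool     isKeyframe; // audio keyframe (first packet of a COOK
                         // superblock) or video keyframe
};

// Return the PTS of the last keyframe at or before targetNptMs, so
// that decoding from it covers the seek target.
static uint32_t findKeyframePts(const std::vector<IndexEntry>& index,
                                uint32_t targetNptMs) {
    uint32_t best = index.empty() ? 0 : index[0].ptsMs;
    for (size_t i = 0; i < index.size(); ++i) {
        if (index[i].ptsMs > targetNptMs)
            break;                  // index is sorted by PTS
        if (index[i].isKeyframe)
            best = index[i].ptsMs;  // latest keyframe before the target
    }
    return best;
}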

One observation on the PTS of the AV keyframe packets after a seek
with COOK audio is:
TargetNPT = 27500 ms
Audio PTS = 25000 ms
Video PTS = 20500 ms

While the PTS of the AV keyframe packets after a seek with AAC audio is:
TargetNPT = 27500 ms
Audio PTS = 27430 ms
Video PTS = 20500 ms

So there can be a difference of as much as 6-7 seconds between the
keyframes of the two streams: with COOK the audio keyframe lands
2500 ms before the target (25000 vs 27500 ms), while with AAC it is
only 70 ms off; the video keyframe is 7000 ms before the target in
both cases.

In omx_ra_component.cpp, whenever I receive an audio keyframe I just
memcpy the packet into the decoder's internal memory and report the
input buffer as fully consumed while the decoded output size is zero.
I do this until I have received 29 packets; on receiving the last
packet of this audio block I again report the input buffer as fully
consumed, but this time I send out the total decoded samples of all 30
audio packets received so far.
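
In code, what I do per packet is roughly the following. This is
heavily simplified: the buffer structs and decodeSuperblock are
stand-ins I wrote for this mail, not the real OMX buffer headers or
the real COOK decoder call.

#include <stddef.h>
#include <stdint.h>
#include <vector>

static const int kInterleaveFactor = 30; // COOK superblock, in packets

struct InBuf  { const uint8_t* data; size_t size; };      // input packet
struct OutBuf { int16_t* pcm; size_t maxSamples; size_t filledSamples; };

class CookSuperblockDecoder {
public:
    CookSuperblockDecoder() : packetsBuffered_(0) {}

    // Called once per RM audio packet. The input is always reported as
    // fully consumed; output appears only on the superblock's last packet.
    void onInputPacket(const InBuf& in, OutBuf& out) {
        block_.insert(block_.end(), in.data, in.data + in.size); // the memcpy
        ++packetsBuffered_;
        out.filledSamples = 0;                 // nothing decoded yet
        if (packetsBuffered_ == kInterleaveFactor) {
            // Deinterleave and decode all 30 packets in one go.
            out.filledSamples = decodeSuperblock(block_, out);
            block_.clear();
            packetsBuffered_ = 0;
        }
    }

private:
    // Stand-in for the real deinterleave + COOK decode.
    size_t decodeSuperblock(const std::vector<uint8_t>&, OutBuf&) { return 0; }

    std::vector<uint8_t> block_;
    int packetsBuffered_;
};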

While the AV sync mismatch shows up right after the seek, the same
constant delay between audio and video then persists until EOF; the
audio always plays ahead. Any pointers to resolve this issue are
highly appreciated.

Here are some pointers I got from the Helix community:

Here is the explanation I got from the Real Media people regarding the
AV sync issue:

""In case of cook audio the timestamp of first packet (audio keyframe)
of the superblock after seek will be always be ahead of the playback
clock, when compared to that of aac where every packet is a key
frame.

Helix player is aware of this and hence after decoding the entire
superblock the client engine will clip the decoded buffer at the
begining before  giving it to the renderer.

The number of frames of decoded audio that will be clipped will be
calculated using the sampling rate of the audio , play duration of the
current decoded super block as well as the difference between PTS of
the first packet of superblock with refrence to the actual playback
clock.""


Any of the PV experts, please let me know how I can drop the decoded
audio before giving it to the renderer in OpenCore, since at the OMX
component level, after decoding, I do not have the playback clock and
hence cannot compute the difference of the audio packet's PTS w.r.t.
the playback clock.
