>> Is this process correct?
The understanding is correct; it describes single-buffer usage for
the input port and the output port.
In practice, depending on component capability, we typically use
something like 4 input buffers and 8 output buffers to achieve better
performance on complex clips
(with multiple buffers the pipeline is parallelized: while one decoded
frame is being displayed, other frames can be read from the file,
decoded in the component, or queued for display).


> 1. How can I make my file format (say MJPEG) be recognized by pvmf, so
> that pvmf will know which component/decoder to use?
If you are going to modify the PV OMX core, then you need to plug the
MJPEG codec into it.
Check how a codec is added and later obtained by PVMF from the
component repository (it walks through pvplayer.cfg; in libOMX.so, a
table of components is published, which is made available during OMX
init).
You can add your MJPEG component following that pattern.

> 2. How can I decide how many frames to put in one input/output buffer? I
> guess this rule is defined in the decode node.
PVMF parses the file and passes each frame in its own buffer (it can
be an I, P, or B frame). Even I would need some clarity here.


> 3. As you mentioned, there is a jpeg component; where can I find it? Is
> this jpeg component also a video component?

A few pointers you can check:
- The OMX library is published at
./external/opencore/tools_v2/build/package/opencore/elem/common/pvplayer.cfg
link - 
http://git.omapzoom.org/?p=patch/platform/external/opencore.git;a=blob;f=tools_v2/build/package/opencore/elem/common/pvplayer.cfg

- You can walk through each OMX core, component, and registry at
./external/opencore/codecs_v2
link: 
http://git.omapzoom.org/?p=patch/platform/external/opencore.git;a=tree;f=codecs_v2/omx;hb=HEAD




On Tue, Apr 27, 2010 at 8:49 PM, song guo <[email protected]> wrote:
> Hi Deva,
> Thank you very much!
> The following is my understanding for the whole integration process:
> The pvmf first parses the video/image file, reads the header, gets the
> encoded data via the PV OMX video/image decode node, and puts it in
> an input buffer. The pvmf sends an EmptyThisBuffer command to the
> component. The component checks whether an output buffer is available;
> if yes, it uses the decode node to decode the data in the input buffer,
> puts the decoded data in the output buffer, and sends back an
> EmptyBufferDone event once all the data in the input buffer has been
> decoded, so pvmf can reuse that input buffer. Once the output buffer
> is filled, the component sends a FillBufferDone event to pvmf, and
> pvmf consumes the decoded data in the output buffer to play it. Once
> all the data in the output buffer has been consumed, pvmf sends a
> FillThisBuffer command to the component, returning the empty output
> buffer for further decoding.
> Is this process correct?
> If yes I have two more questions:
> 1. How can I make my file format (say MJPEG) be recognized by pvmf, so
> that pvmf will know which component/decoder to use?
> 2. How can I decide how many frames to put in one input/output buffer? I
> guess this rule is defined in the decode node.
> 3. As you mentioned, there is a jpeg component; where can I find it? Is
> this jpeg component also a video component?
> Thank you very much,
> Best Regards,
> Dadao
>
> On Tue, Apr 27, 2010 at 5:40 AM, Deva R <[email protected]> wrote:
>>
>> I have a brief idea of how OpenCore and PVMF work; please find my
>> inputs below.
>>
>> > 3. When pvmf first sends a FillThisBuffer command to the component,
>> > I wonder how pvmf could indicate to the component where the file is?
>> The component doesn't need to know about the file, only a buffer to
>> operate on. PVMF parses the video/image file, reads the header, gets
>> the encoded data, and sends it to the component via the PV OMX
>> video/image decode node.
>>
>> > 4. Does the file need to be serialized before filling the input
>> > buffer, and who serializes it, the component or pvmf?
>> It should be PVMF; as said above, components don't need to be aware
>> of source files.
>>
>> > 1. can I directly inherit the omx_component_video class to form my
>> > own
>> > mjpeg component?
>> Not sure; we'll wait for the OpenCore experts.
>>
>>
>> > 2. How does pvmf consume the output buffer of the component? What
>> > kind of decoded data should I put in the output buffer so that it
>> > will be recognized by pvmf and played? (this helps me to design my
>> > decoder)
>> It should be one of the uncompressed video formats and subformats
>> (say YUV420 semi-planar), understandable by the video MIO for
>> display. You can refer to the existing JPEG component for how it
>> fills the output buffer, and in what format.
>>
>>
>>
>> On Mon, Apr 26, 2010 at 8:12 PM, dadaowuwei <[email protected]> wrote:
>> > Hi,
>> > I am quite new to the Android OpenCore framework and have some
>> > questions about the process of integrating a new OpenMAX MJPEG
>> > component.
>> > 1. can I directly inherit the omx_component_video class to form my
>> > own
>> > mjpeg component?
>> > 2. How does pvmf consume the output buffer of the component? What
>> > kind of decoded data should I put in the output buffer so that it
>> > will be recognized by pvmf and played? (this helps me to design my
>> > decoder)
>> > 3. When pvmf first sends a FillThisBuffer command to the component,
>> > I wonder how pvmf could indicate to the component where the file is?
>> > 4. Does the file need to be serialized before filling the input
>> > buffer, and who serializes it, the component or pvmf?
>> > thank you very much,
>> > Best Regards,
>> > Dadao
>> >
>> > --
>> > unsubscribe: [email protected]
>> > website: http://groups.google.com/group/android-porting
>> >
>
>
