Hi Deva, Thank you very much! The following is my understanding of the whole integration process:
PVMF first parses the video/image file, reads the header, gets the encoded data via the PV OMX video/image decode node, and puts it in an input buffer. PVMF then sends an EmptyThisBuffer command to the component. The component checks whether an output buffer is available; if so, it uses the decode node to decode the data in the input buffer, puts the decoded data in the output buffer, and sends back an EmptyBufferDone event once all the data in the input buffer has been decoded, so PVMF can reuse that input buffer. Once the output buffer is filled, the component sends a FillBufferDone event to PVMF, and PVMF consumes the decoded data in the output buffer to play it. Once all the data in the output buffer has been consumed, PVMF sends a FillThisBuffer command to the component, returning the empty output buffer so it can be filled with further decoded data.

Is this process correct? (Two small sketches are appended after the quoted thread below: one of this buffer handshake and one of the YUV420 semi-planar output layout mentioned in the reply.)

If yes, I have a few more questions:
1. How can I make my file format (say MJPEG) be recognized by PVMF, so that PVMF knows which component/decoder to use?
2. How do I decide how many frames to put in one input/output buffer? I guess this rule is defined in the decode node.
3. As you mentioned, there is a JPEG component; where can I find it? Is this JPEG component also a video component?

Thank you very much,
Best Regards,
Dadao

On Tue, Apr 27, 2010 at 5:40 AM, Deva R <[email protected]> wrote:
> I have a brief idea about how OpenCore and PVMF work; please find my inputs below.
>
> > 3. When PVMF first sends the FillThisBuffer command to the component, I wonder
> > how PVMF could indicate to the component where the file is?
> The component doesn't need to know the file, only a buffer to operate on.
> PVMF parses the video/image file, reads the header, gets the encoded data, and
> sends it to the component via the PV OMX video/image decode node.
>
> > 4. If the file needs to be serialized before filling the input buffer,
> > who serializes it, the component or PVMF?
> It should be PVMF. As said above, components don't need to be aware of
> source files.
>
> > 1. Can I directly inherit the omx_component_video class to form my own
> > MJPEG component?
> Not sure, we'll wait for the OpenCore experts.
>
> > 2. How does PVMF consume the output buffer of the component? What kind of
> > decoded data should I put into the output buffer so that the data will be
> > recognized by PVMF and played? (This helps me design my decoder.)
> It should be one of the uncompressed video formats and subformats (say
> YUV420 semi-planar) understandable by the video MIO to be displayed.
> You can refer to the existing JPEG component for how it fills the output
> buffer and in what format.
>
> On Mon, Apr 26, 2010 at 8:12 PM, dadaowuwei <[email protected]> wrote:
> > Hi,
> > I am quite new to the Android OpenCore framework and have some questions
> > about the process of integrating a new OpenMAX MJPEG component.
> > 1. Can I directly inherit the omx_component_video class to form my own
> > MJPEG component?
> > 2. How does PVMF consume the output buffer of the component? What kind of
> > decoded data should I put into the output buffer so that the data will be
> > recognized by PVMF and played? (This helps me design my decoder.)
> > 3. When PVMF first sends the FillThisBuffer command to the component, I wonder
> > how PVMF could indicate to the component where the file is?
> > 4. If the file needs to be serialized before filling the input buffer,
> > who serializes it, the component or PVMF?
> > Thank you very much,
> > Best Regards,
> > Dadao
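
P.S. To make the buffer handshake above concrete, below is a minimal sketch of the OpenMAX IL buffer exchange as the IL client (in OpenCore's case, the PV OMX decode node) drives it. It only illustrates the standard OMX_EmptyThisBuffer / OMX_FillThisBuffer flow and its callbacks, not OpenCore's actual decode-node code, and the component name "OMX.PV.mjpegdec" is a placeholder for the new MJPEG component.

    /*
     * Minimal sketch of the OMX IL buffer handshake (illustration only).
     * "OMX.PV.mjpegdec" is a placeholder name; error handling and state
     * transitions are omitted for brevity.
     */
    #include <stdio.h>
    #include <OMX_Core.h>
    #include <OMX_Component.h>

    /* Component has consumed an input buffer: the client may refill it
     * with the next encoded access unit parsed from the file. */
    static OMX_ERRORTYPE OnEmptyBufferDone(OMX_HANDLETYPE hComp,
                                           OMX_PTR pAppData,
                                           OMX_BUFFERHEADERTYPE *pBuf)
    {
        printf("input buffer %p returned, refill with next encoded frame\n",
               (void *)pBuf);
        /* ...copy the next encoded frame into pBuf->pBuffer, set
         * pBuf->nFilledLen, then call OMX_EmptyThisBuffer(hComp, pBuf)... */
        return OMX_ErrorNone;
    }

    /* Component has filled an output buffer with decoded data: the client
     * hands it to the video MIO/sink, then returns it for reuse. */
    static OMX_ERRORTYPE OnFillBufferDone(OMX_HANDLETYPE hComp,
                                          OMX_PTR pAppData,
                                          OMX_BUFFERHEADERTYPE *pBuf)
    {
        printf("output buffer %p holds %lu bytes of decoded data\n",
               (void *)pBuf, (unsigned long)pBuf->nFilledLen);
        /* ...render the frame, then return the buffer with
         * OMX_FillThisBuffer(hComp, pBuf)... */
        return OMX_ErrorNone;
    }

    /* State changes, port-settings-changed, errors, end-of-stream, etc. */
    static OMX_ERRORTYPE OnEvent(OMX_HANDLETYPE hComp, OMX_PTR pAppData,
                                 OMX_EVENTTYPE eEvent, OMX_U32 nData1,
                                 OMX_U32 nData2, OMX_PTR pEventData)
    {
        return OMX_ErrorNone;
    }

    int main(void)
    {
        OMX_CALLBACKTYPE cb = { OnEvent, OnEmptyBufferDone, OnFillBufferDone };
        OMX_HANDLETYPE hDec = NULL;

        OMX_Init();
        OMX_GetHandle(&hDec, (OMX_STRING)"OMX.PV.mjpegdec", NULL, &cb);

        /* After buffers are allocated (OMX_AllocateBuffer/OMX_UseBuffer) and
         * the component reaches OMX_StateExecuting, the client starts the loop:
         *   OMX_EmptyThisBuffer(hDec, inBuf);   encoded MJPEG goes in
         *   OMX_FillThisBuffer(hDec, outBuf);   decoded YUV comes out
         * and the two callbacks above keep both buffer queues circulating. */

        OMX_FreeHandle(hDec);
        OMX_Deinit();
        return 0;
    }

The point of the sketch is that the client never tells the component where the source file is; it only circulates buffers, which matches the answer to question 3 in the quoted mail.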

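P.P.S. On the output format: assuming the component produces YUV420 semi-planar as suggested (a full-resolution Y plane followed by an interleaved chroma plane at quarter resolution), one frame maps onto an OMX output buffer roughly as sketched below. The function names and the plain memcpy are my own illustrative assumptions; the existing PV JPEG/AVC OMX components should be consulted for the exact color format, stride alignment, and buffer flags they actually use.

    /*
     * Sketch of one YUV420 semi-planar frame in an OMX output buffer
     * (illustration only; function names are assumptions, not OpenCore's).
     */
    #include <string.h>
    #include <OMX_Core.h>

    /* Bytes needed for one YUV420 semi-planar frame:
     * a width*height Y plane plus an interleaved chroma plane at half size. */
    OMX_U32 Yuv420spFrameSize(OMX_U32 width, OMX_U32 height)
    {
        return width * height + (width * height) / 2;
    }

    /* Copy one decoded frame (Y plane + interleaved UV plane) into the
     * output buffer and mark how much of it is valid. */
    void FillOutputBuffer(OMX_BUFFERHEADERTYPE *pBuf,
                          const OMX_U8 *y, const OMX_U8 *uv,
                          OMX_U32 width, OMX_U32 height)
    {
        OMX_U32 ySize  = width * height;
        OMX_U32 uvSize = ySize / 2;

        memcpy(pBuf->pBuffer, y, ySize);            /* Y plane first        */
        memcpy(pBuf->pBuffer + ySize, uv, uvSize);  /* then interleaved U/V */

        pBuf->nOffset    = 0;
        pBuf->nFilledLen = ySize + uvSize;            /* bytes the sink reads */
        pBuf->nFlags    |= OMX_BUFFERFLAG_ENDOFFRAME; /* a complete frame     */
    }

nFilledLen is what tells the decode node and the video MIO how many bytes of the buffer to consume, which is why it is set to the full frame size here.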