Thanks, Dave, for the great info! But for creating an ashmem heap from a file of YUV frames, can I use the MemoryDealer class to allocate the memory and hand out IMemory instances, and use those? Is this the right approach?
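Something like this is what I have in mind (a rough sketch against the 1.x headers; the constructor and allocate() signatures are from my reading of utils/MemoryDealer.h, the function name and sizes are just placeholders of mine, so please correct me if I have them wrong):

    #include <utils/IMemory.h>
    #include <utils/MemoryDealer.h>

    using namespace android;

    void createFrameHeap()
    {
        // Example size: QVGA YUV 420 (Y plane + chroma sub-sampled by 2).
        const int width = 320, height = 240;
        const size_t frameSize = (width * height * 3) / 2;

        // One ashmem-backed heap with room for two frames, carved up by a
        // MemoryDealer into one IMemory chunk per frame buffer.
        sp<MemoryDealer> dealer = new MemoryDealer(2 * frameSize, 0, "yuv-frames");
        sp<IMemory> frame0 = dealer->allocate(frameSize);
        sp<IMemory> frame1 = dealer->allocate(frameSize);

        // frame0->pointer() is where the YUV data read from the file would
        // be copied, and frame0->offset() is its offset within the heap.
    }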
Or is there any other approach for handling memory heaps? I basically could not understand how a memory heap is built from a uint8 data array. I am passing the data source (a YUV file) from the Java wrapper to a JNI function. I did go through MediaPlayer.cpp as well as CameraService.cpp, but I am unable to follow the flow. I have also put my current understanding of the flow as two sketches at the bottom of this mail, below the quoted text.

Regards,
iblues

On Jan 18, 12:31 am, Dave Sparks <[email protected]> wrote:
> If you just want to play around with it, you could write a thin Java
> wrapper app that does most of its work in a JNI library. The Java app
> creates a SurfaceView with a PUSH_BUFFER surface and passes the
> SurfaceHolder into a JNI function. You'll need to make your function a
> friend of the SurfaceHolder class to extract the ISurface (see
> MediaPlayer.cpp as an example).
>
> As you surmised, when you are ready to render, you create an ashmem
> heap with room for at least two frame buffers and call registerBuffers
> to register the frame buffer heap with SurfaceFlinger. Read the YUV
> from the file and copy the first frame into your frame heap, then call
> pushBuffer() with the heap offset for the frame. Copy the next frame
> into the second frame buffer on the heap. At presentation time, call
> pushBuffer() with the offset of the second frame. Now go back and
> refill the first frame buffer with the next frame and repeat.
>
> For the G1, you will need to convert your frames to YUV 420 semi-
> planar, which is a Y plane followed by an interleaved plane of VU
> (sub-sampled by 2 in both directions, and V comes before U, which is
> the reverse of the usual).
>
> If you are looking for a cross-platform solution, be aware that we are
> adding overlay capability, and that will require a different mechanism
> for pushing your frames to the overlay memory.
>
> On Jan 17, 6:09 am, iblues <[email protected]> wrote:
> > Thanks Dave!
> >
> > On Jan 16, 6:35 pm, Dave Sparks <[email protected]> wrote:
> > > We are not ready to support this use case yet. It works on the G1
> > > now only because the codec and the display processor use the same
> > > YUV format. We can get away with that because it's buried in the
> > > media framework and doesn't rely on an application having special
> > > knowledge of the format. The software renderer does not do full
> > > color conversion. On the emulator, that means you will only see
> > > the Y plane.
> > >
> > > We are hashing out a new API that will provide deeper access to
> > > the video rendering pipeline in a future release.
> > >
> > > On Jan 16, 4:39 am, iblues <[email protected]> wrote:
> > > > Hi,
> > > >
> > > > My requirement needs me to draw the YUV data from the framework
> > > > layer. Is this possible? From the Android code, I seem to
> > > > understand the following:
> > > >
> > > > 1. Create an ISurface object.
> > > > 2. Extract frame data in the form of YUV.
> > > > 3. Create an IMemory object with the YUV data.
> > > > 4. Call mSurface->RegisterBuffer().
> > > >
> > > > My doubts are:
> > > > 1. Is my understanding right? Is this approach feasible?
> > > > 2. How do we create an IMemory object referencing the frame
> > > > data? Say I set the YUV file (on the SD card) as a data source
> > > > to the framework class.
> > > >
> > > > Thanks & Regards,
> > > > iblues
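P.S. To make sure I follow the flow described above, here is roughly the render loop I am planning on the JNI side. This is only a sketch: playYuvFile and the variable names are my own placeholders, the registerBuffers() signature and format constant are what I read in my copy of ui/ISurface.h and ui/PixelFormat.h (where pushBuffer() appears as postBuffer(), so other trees may differ), and a real loop would wait for each frame's presentation time instead of posting immediately. Please correct me where I have it wrong:

    #include <stdio.h>
    #include <ui/ISurface.h>
    #include <ui/PixelFormat.h>
    #include <utils/Errors.h>
    #include <utils/MemoryDealer.h>

    using namespace android;

    // Double-buffered YUV playback, as I understand the description above.
    status_t playYuvFile(const sp<ISurface>& surface, FILE* yuvFile,
                         int width, int height)
    {
        // YUV 420 semi-planar: Y plane + half-size interleaved VU plane.
        const size_t frameSize = (width * height * 3) / 2;

        // Ashmem heap with room for two frame buffers.
        sp<MemoryDealer> dealer = new MemoryDealer(2 * frameSize, 0, "yuv");
        sp<IMemory> frames[2];
        frames[0] = dealer->allocate(frameSize);
        frames[1] = dealer->allocate(frameSize);
        if (frames[0] == 0 || frames[1] == 0) return NO_MEMORY;

        // Register the whole heap with SurfaceFlinger once, up front.
        status_t err = surface->registerBuffers(width, height, width, height,
                PIXEL_FORMAT_YCbCr_420_SP, dealer->getMemoryHeap());
        if (err != NO_ERROR) return err;

        int index = 0;
        while (fread(frames[index]->pointer(), 1, frameSize, yuvFile)
                == frameSize) {
            // Hand SurfaceFlinger the heap offset of the filled buffer.
            // (Real code would post at the frame's presentation time.)
            surface->postBuffer(frames[index]->offset());
            index = 1 - index;  // refill the other buffer next time around
        }
        return NO_ERROR;
    }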
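And for the G1 format mentioned above, this is how I understand the repacking from ordinary planar YUV 420 (Y plane, then U, then V) to the semi-planar layout with V before U; again just my sketch with a made-up function name:

    #include <stdint.h>
    #include <string.h>

    // Planar YUV 420 (Y, U, V planes) to YUV 420 semi-planar: Y plane
    // followed by an interleaved VU plane, chroma sub-sampled by 2 in
    // both directions, with V before U ("the reverse of the usual").
    static void planarToYuv420sp(const uint8_t* src, uint8_t* dst,
                                 int width, int height)
    {
        const int ySize = width * height;
        const int cSize = ySize / 4;      // each chroma plane is 1/4 size

        memcpy(dst, src, ySize);          // Y plane is unchanged

        const uint8_t* u = src + ySize;   // source U plane
        const uint8_t* v = u + cSize;     // source V plane
        uint8_t* vu = dst + ySize;        // destination interleaved plane

        for (int i = 0; i < cSize; i++) {
            *vu++ = v[i];                 // V first...
            *vu++ = u[i];                 // ...then U
        }
    }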
