The assumption is that most smartphone chipsets have color conversion,
rotation, and scaling in hardware. The pixel format enumerations are
there to support some typical formats that we expect to see used by
hardware. The software renderer (SurfaceFlinger/PixelFlinger) only
supports RGB565.

We are working on improving the 2D and 3D graphics abstraction layers.
I think this work will probably be done on a topic branch in February
since it is likely to break a few things during the process.

On Jan 21, 1:27 am, "Wu, Jackie" <[email protected]> wrote:
> Hi, Dave:
> I have a similar question about my implementation of a YUV data view
> for the camera.
>
> I have done the camera porting on the EeePC for x86. The camera only
> generates YUV422 format data, but Android currently does not seem to
> support that format directly, so I convert the YUV422 to RGB565 in the
> hardware abstraction layer (CameraHardwareInterface).
> The YUV422 data is first stored in a local buffer (shared memory with
> the kernel). During the conversion, the RGB565 data is written into a
> new buffer. The new buffer is actually allocated by "new
> MemoryHeapBase", in which I create 4 buffers. That MemoryHeapBase
> object can be used by ISurface->registerBuffers. When calling
> ISurface->registerBuffers, I used PIXEL_FORMAT_RGB_565 as the pixel
> format. Then everything runs OK.
>
> Dave, do you have any comments about this implementation? I also want
> to know how Android will support such cases. I found that Android has
> defined PIXEL_FORMAT_YCbCr_422_SP, but it seems it's not actually
> supported (see LayerBase.cpp). Did I misunderstand anything?
>
> Thanks
> Jackie (Weihua) Wu
>
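The YUV422-to-RGB565 conversion described above can be done with plain
integer math. Below is a minimal sketch, assuming packed YUYV (Y0 U Y1 V)
ordering with no row padding and BT.601-style coefficients; the function
names and fixed-point constants are illustrative, not taken from the HAL
code discussed here.

    #include <stdint.h>

    static inline uint16_t packRGB565(int r, int g, int b) {
        // Clamp to [0, 255] before packing into 5:6:5 bits.
        if (r < 0) r = 0; if (r > 255) r = 255;
        if (g < 0) g = 0; if (g > 255) g = 255;
        if (b < 0) b = 0; if (b > 255) b = 255;
        return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    void yuyvToRgb565(const uint8_t* src, uint16_t* dst,
                      int width, int height) {
        // Each 4-byte group Y0 U Y1 V yields two RGB565 pixels.
        int pairs = (width * height) / 2;
        for (int i = 0; i < pairs; i++) {
            int y0 = src[0], u = src[1] - 128;
            int y1 = src[2], v = src[3] - 128;
            src += 4;
            // BT.601 in 8.8 fixed point: R = Y + 1.402 V,
            // G = Y - 0.344 U - 0.714 V, B = Y + 1.772 U.
            int rOfs = (359 * v) >> 8;
            int gOfs = (88 * u + 183 * v) >> 8;
            int bOfs = (454 * u) >> 8;
            *dst++ = packRGB565(y0 + rOfs, y0 - gOfs, y0 + bOfs);
            *dst++ = packRGB565(y1 + rOfs, y1 - gOfs, y1 + bOfs);
        }
    }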
> >-----Original Message-----
> >From: [email protected]
> >[mailto:[email protected]] On Behalf Of Girish
> >Sent: Wednesday, January 21, 2009 1:45 PM
> >To: android-framework
> >Subject: Re: Displaying YUV data from middleware component
>
> >My use case would be to display the decoded YUV data from middleware
> >(far-end video) and the camera data (near-end video). It's an H324M
> >VT use case.
>
> >Regards
> >Girish
>
> >On Jan 21, 7:51 am, Dave Sparks <[email protected]> wrote:
> >> I apologize, but I don't quite understand what you are asking for.
> >> Can you give a bit more detail about the use case?
>
> >> On Jan 20, 5:17 pm, Girish <[email protected]> wrote:
>
> >> > Hi Dave,
>
> >> > Is it possible to create two video surfaces to meet requirements
> >> > like displaying data from the camera and YUV data from middleware?
> >> > How should we go about this problem?
>
> >> > Please clarify.
>
> >> > Regards
> >> > Girish
>
> >> > On Jan 20, 9:34 pm, Dave Sparks <[email protected]> wrote:
>
> >> > > Look at android_surface_output.cpp in the external/opencore
> >> > > project and follow the code for the software codecs. This is
> >> > > exactly the same scenario.
>
> >> > > On Jan 18, 9:56 pm, iblues <[email protected]> wrote:
>
> >> > > > Thanks Dave for the great info..
>
> >> > > > But for creating an ashmem heap from a file of YUV frames,
> >> > > > can I use a MemoryDealer class to allocate memory and create
> >> > > > an IMemory instance and use the same? Is this the right
> >> > > > approach?
>
> >> > > > Or is there any other approach for handling memory heaps?
>
> >> > > > I basically could not understand how a memory heap is built
> >> > > > from a uint8 data array. I am passing a data source of a YUV
> >> > > > file from the Java wrapper to a JNI function.
>
> >> > > > I did go through MediaPlayer.cpp as well as CameraService.cpp,
> >> > > > but I am unable to follow the flow.
>
> >> > > > Regards,
> >> > > > iblues
>
> >> > > > On Jan 18, 12:31 am, Dave Sparks
> ><[email protected]> wrote:
>
> >> > > > > If you just want to play around with it, you could write a
> >> > > > > thin Java wrapper app that does most of its work in a JNI
> >> > > > > library. The Java app creates a SurfaceView with a
> >> > > > > PUSH_BUFFER surface and passes the SurfaceHolder into a JNI
> >> > > > > function. You'll need to make your function a friend of the
> >> > > > > SurfaceHolder class to extract the ISurface (see
> >> > > > > MediaPlayer.cpp as an example).
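A rough JNI-side sketch of that extraction, modeled on the 1.x-era
android_media_MediaPlayer.cpp pattern. The Java package and method names
are hypothetical, the "mSurface" field name is an assumption taken from
the 1.x android.view.Surface sources, and getISurface() is private on the
C++ Surface class, which is why the friend declaration mentioned above is
needed; the exact mechanics vary by release.

    #include <jni.h>
    #include <ui/Surface.h>    // 1.x include path; may differ in your tree

    using namespace android;

    // Java side (hypothetical): pass surfaceHolder.getSurface() down here.
    extern "C" JNIEXPORT void JNICALL
    Java_com_example_YuvPlayer_setSurface(JNIEnv* env, jobject thiz,
                                          jobject jSurface)
    {
        // android.view.Surface holds the native Surface pointer in an
        // int field ("mSurface" in the 1.x sources -- an assumption).
        jclass clazz = env->GetObjectClass(jSurface);
        jfieldID fid = env->GetFieldID(clazz, "mSurface", "I");
        sp<Surface> surface((Surface*)env->GetIntField(jSurface, fid));

        // getISurface() is private, so this compiles only if this
        // function (or a helper) is a friend of the C++ Surface class.
        sp<ISurface> isurface = surface->getISurface();

        // ... keep isurface around for registerBuffers()/pushes ...
    }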
>
> >> > > > > As you surmised, when you are ready to render, you create
> >> > > > > an ashmem heap with room for at least two frame buffers and
> >> > > > > call registerBuffers to register the frame buffer heap with
> >> > > > > SurfaceFlinger. Read the YUV from the file and copy the first
> >> > > > > frame into your frame heap, then call pushBuffer() with the
> >> > > > > heap offset for the frame. Copy the next frame into the
> >> > > > > second frame buffer on the heap. At presentation time, call
> >> > > > > pushBuffer() with the offset of the second frame. Now go back
> >> > > > > and refill the first frame buffer with the next frame and
> >> > > > > repeat.
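Put together, that double-buffering dance might look roughly like the
sketch below. Note that in the 1.x ISurface interface the push call is
named postBuffer(ssize_t offset) (described as "pushBuffer()" above), and
the registerBuffers() signature here is from that era's ISurface.h; both
should be checked against your tree.

    #include <stdio.h>
    #include <utils/MemoryHeapBase.h>   // under binder/ in later trees
    #include <ui/ISurface.h>
    #include <ui/PixelFormat.h>

    using namespace android;

    void pushYuvFrames(const sp<ISurface>& isurface, FILE* yuvFile,
                       int w, int h) {
        const size_t frameSize = w * h * 3 / 2;   // YUV420 semi-planar
        sp<MemoryHeapBase> heap = new MemoryHeapBase(2 * frameSize);

        // Register a two-buffer heap with SurfaceFlinger.
        isurface->registerBuffers(w, h, w, h,
                                  PIXEL_FORMAT_YCbCr_420_SP, heap);

        uint8_t* base = (uint8_t*)heap->getBase();
        int index = 0;
        while (fread(base + index * frameSize, 1, frameSize, yuvFile)
                   == frameSize) {
            // At presentation time, hand SurfaceFlinger the heap offset
            // of the frame just filled, then flip to the other buffer.
            isurface->postBuffer(index * frameSize);
            index = 1 - index;
            // (A real loop would pace this to the frame rate.)
        }
    }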
>
> >> > > > > For the G1, you will need to convert your frames to YUV 420
> >> > > > > semi-planar, which is a Y plane followed by an interleaved
> >> > > > > plane of VU (sub-sampled by 2 in both directions, and V comes
> >> > > > > before U, which is the reverse of the usual).
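If the source frames are standard planar YUV420 (I420: Y plane, then U,
then V), the repack into that Y-plus-interleaved-VU layout is plain
pointer arithmetic; a sketch (the function name is illustrative, and
nothing here is Android-specific):

    #include <stdint.h>
    #include <string.h>

    void i420ToYuv420spVU(const uint8_t* src, uint8_t* dst,
                          int w, int h) {
        const int ySize = w * h;
        const int cSize = ySize / 4;          // each chroma plane
        const uint8_t* srcU = src + ySize;
        const uint8_t* srcV = srcU + cSize;

        // Y plane is unchanged.
        memcpy(dst, src, ySize);

        // Chroma: V first, then U (the reverse of the usual UV order).
        uint8_t* dstC = dst + ySize;
        for (int i = 0; i < cSize; i++) {
            *dstC++ = srcV[i];
            *dstC++ = srcU[i];
        }
    }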
>
> >> > > > > If you are looking for a cross-platform solution, be aware
> >> > > > > that we are adding overlay capability and that will require
> >> > > > > a different mechanism for pushing your frames to the overlay
> >> > > > > memory.
>
> >> > > > > On Jan 17, 6:09 am, iblues <[email protected]> wrote:
>
> >> > > > > > Thanks Dave!
>
> >> > > > > > On Jan 16, 6:35 pm, Dave Sparks
> ><[email protected]> wrote:
>
> >> > > > > > We are not ready to support this use case yet. It works
> >> > > > > > on the G1 now only because the codec and the display
> >> > > > > > processor use the same YUV format. We can get away with
> >> > > > > > that because it's buried in the media framework and doesn't
> >> > > > > > rely on an application having special knowledge of the
> >> > > > > > format. The software renderer does not do full color
> >> > > > > > conversion. On the emulator, that means you will only see
> >> > > > > > the Y plane.
>
> >> > > > > > We are hashing out a new API that will provide deeper
> >> > > > > > access to the video rendering pipeline in a future
> >> > > > > > release.
>
> >> > > > > > > On Jan 16, 4:39 am, iblues <[email protected]> wrote:
>
> >> > > > > > > > Hi,
>
> >> > > > > > > My requirement needs me to draw YUV data from the
> >> > > > > > > framework layer. Is this possible? From the Android code,
> >> > > > > > > I seem to understand the following:
>
> >> > > > > > > > 1. Create an ISurface object.
> >> > > > > > > > 2. Extract frame data in form of YUV
> >> > > > > > > > 3. Create a IMemory object with the YUV data.
> >> > > > > > > 4. Call mSurface->registerBuffers()
>
> >> > > > > > > My doubts are:
> >> > > > > > > 1. Is my understanding right? Is this approach feasible?
> >> > > > > > > 2. How do we create an IMemory object referencing the
> >> > > > > > > frame data? Say, I set the YUV file (on the SD card) as
> >> > > > > > > a data source to the framework class.
>
> >> > > > > > > > Thanks & Regards,
> >> > > > > > > iblues
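On doubt 2: following the MemoryHeapBase approach from the replies above
(rather than a MemoryDealer), an IMemory that references one frame of the
shared heap can be built with MemoryBase. A minimal sketch, assuming the
1.x include paths:

    #include <utils/MemoryHeapBase.h>   // under binder/ in later trees
    #include <utils/MemoryBase.h>

    using namespace android;

    // Wrap one (offset, size) window of a shared heap as an IMemory
    // that can be passed across binder.
    sp<IMemory> frameMemory(const sp<MemoryHeapBase>& heap,
                            size_t offset, size_t frameSize)
    {
        return new MemoryBase(heap, offset, frameSize);
    }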
>