Hi all, I am trying to understand the different components of Android's display system and how they work together. I have been sifting through the source, but I still have a few fundamental questions. Here is my current understanding of a couple of these components:
SurfaceFlinger: It composes the various Surfaces (layers) together. As of 1.6, it appears that EGLDisplaySurface works with the frame buffer device (/dev/graphics/fb0) to output to the screen.

Framebuffer device: An abstraction of the graphics hardware. The most direct way to access the display is through the frame buffer device (e.g. reading from it to take a screenshot).

Hardware overlays: SurfaceFlinger punches a hole in the window surface to let the hardware overlay compose its frame data directly to the screen. They are used with image-capture and hardware-acceleration devices.

With this in mind, I am unclear about the following:

1. What is responsible for composing the overlay image with the main surface? From what I've read, it seems the overlay implementation would be, but I haven't seen anything address this directly.

2. If so, does the overlay driver write to the frame buffer device, or does it output to the screen in some other way? Is this implementation-dependent?

3. My main concern: if hardware overlays are in use and I take a screenshot by reading from the frame buffer device, will I see the "hole" punched out by SurfaceFlinger, or the same image that appears on the screen? (I've sketched below the kind of raw frame buffer read I have in mind.)

4. What are the use cases for overlays? I assume they let the hardware handle the frame data manipulation (rather than software), but I still don't know why/when an overlay would be necessary.

Answers to these questions, and any further information about how SurfaceFlinger, hardware overlays, and the frame buffer device interact, would be very much appreciated!

Thanks,
Ryan
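P.S. For concreteness, here is roughly what I mean by "reading from the frame buffer device" for a screenshot. This is only a minimal, untested sketch: it assumes the visible frame starts at offset 0 of the mapping (a panned/double-buffered device would need vinfo.yoffset taken into account), and the output path /data/screenshot.raw is just a placeholder I picked.

/* Sketch: dump the raw contents of /dev/graphics/fb0 to a file.
 * Error handling is minimal; pixel format conversion is left to
 * an offline tool that knows bits_per_pixel and the RGB layout. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/graphics/fb0", O_RDONLY);
    if (fd < 0) { perror("open fb0"); return 1; }

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        perror("ioctl");
        return 1;
    }

    /* One visible frame; line_length already includes any row padding. */
    size_t frame_bytes = (size_t)finfo.line_length * vinfo.yres;

    void *fb = mmap(NULL, frame_bytes, PROT_READ, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Write out the raw pixels as-is. */
    FILE *out = fopen("/data/screenshot.raw", "wb");
    if (!out) { perror("fopen"); return 1; }
    fwrite(fb, 1, frame_bytes, out);
    fclose(out);

    fprintf(stderr, "dumped %ux%u at %u bpp\n",
            vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);

    munmap(fb, frame_bytes);
    close(fd);
    return 0;
}

My question 3 is essentially whether the pixels this produces match what is on the panel when an overlay is active, or whether the overlay region comes back as the punched-out hole.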