The silence was a bit deafening on my last question, so I'll try
again with some more specific questions.

Obviously, I'm having a bit of trouble hooking up my non-framebuffer
accelerated graphics.  While working on this, I've noticed something
very odd: SurfaceFlinger's createSurface() is being called with a
width and height of 0, which causes the LayerBitmap to allocate
0 bytes of data for that layer's texture.  Eventually, I see that
LayerBitmap::setBits() is called again with an appropriately sized
texture, and the buffer is re-allocated in a different place.

So, my questions are:
1)  Where do the width and height passed to createSurface()
come from?
2)  How does the texture get "rendered" into the layer's mBuffers[]
data?  Why are there two mBuffers?

I think that my rendering is working correctly; the problem is that
there doesn't seem to be any texture data in the mBuffers[] data
buffer.  I suspect this is because the initial buffer allocation was
made based on a 0x0 surface size.

Can anyone offer some insight into how the sizes passed to
createSurface are determined, and how the layer buffer gets used in
the rendering path?

If I am using the "generic" layer texture rendering path, does that
mean I'm not getting full acceleration?  I assume the answer is no,
and that the texture data is a static resource that is simply copied
into the buffer location for blitting by the hardware later.

Regards,
Steve Aarnio
--~--~---------~--~----~------------~-------~--~----~
unsubscribe: [email protected]
website: http://groups.google.com/group/android-porting
-~----------~----~----~----~------~----~------~--~---