Thanks Mathias,
I'm still unclear as to why you think it won't work if the following is true:
- SurfaceFlinger is the only thing that allocates surface memory,
which is exposed to the outside world as a base heap-pointer plus
offset.
- Clients don't do any memory management, but always use their base
heap-pointer coupled with the associated offset to access the surface.
Provided that they have a base pointer and offset, they do not care
how the memory is allocated or managed. From their point of view a
heap-base of 0 with an offset is no different from any other heap; in
that case the offset would just be a user-mapped virtual address (see
the sketch after this list).
- I can map a surface buffer into both SurfaceFlinger and the client
and hook up the base address (0) and offset (mapped address), while
ensuring that clients can only see their own surfaces.
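
Roughly what I mean, as a sketch (the names here are made up for
illustration; they are not the real SurfaceFlinger types):

  #include <cstddef>
  #include <cstdint>

  // Per-surface control block living in shared memory; the offset is
  // written by SurfaceFlinger and read by the client.
  struct SurfaceControlBlock {
      size_t offset;
      // ... stride, format, etc.
  };

  // Normal case: each process has its own mapping of the shared heap,
  // so the same offset yields a valid pointer in both address spaces.
  void* surfacePixels(void* heapBase, const SurfaceControlBlock& scb) {
      return static_cast<uint8_t*>(heapBase) + scb.offset;
  }

  // The case I'm proposing: heap base of 0, with the offset carrying
  // the per-process user-mapped virtual address of our buffer directly.
  void* surfacePixelsNoHeap(const SurfaceControlBlock& scb) {
      return reinterpret_cast<void*>(scb.offset);
  }

From the client's side the arithmetic is identical either way; only the
meaning of the two operands changes.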

Given that, why would it not work to layer SurfaceFlinger's surface
management on top of our own abstraction, effectively bypassing the
heap management? I would just need a slot in each surface pointing to
a class that represents our own allocation, and to hook up the base
address/offset entities correctly; that would be it. i.e. all I
should need to do is:
- At surface creation time, allocate our own surface abstraction, set
the offset in the control block to the user-mapped virtual address, and
keep a reference to our own class somewhere.
- When SurfaceFlinger makes use of the offset/base for memory access,
I'd need to make sure that it was instead using the address mapped
into SurfaceFlinger. I can see this might be a little tricky, perhaps
needing either a 'pseudo' per-surface 'heap' or a bit of fiddling in
SurfaceFlinger wherever the data pointer is retrieved (a rough sketch
of the pseudo-heap idea follows this list).
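
Something like this is what I have in mind for the 'pseudo' per-surface
heap; again purely illustrative, and not modelled on the real heap
interface that SurfaceFlinger expects:

  #include <cstddef>

  // Wraps one buffer that our own allocator has already mapped into the
  // current process (either SurfaceFlinger or a client).
  class PseudoSurfaceHeap {
  public:
      PseudoSurfaceHeap(void* mappedBase, size_t size)
          : mBase(mappedBase), mSize(size) {}

      // Code that does base() + offset keeps working: base() returns the
      // mapping in *this* process, and the offset stored in the control
      // block stays 0 (or a small offset within the buffer).
      void*  base() const { return mBase; }
      size_t size() const { return mSize; }

  private:
      void*  mBase;  // per-process virtual address from our own mapping
      size_t mSize;
  };

The real work would obviously be hooking an object like this up behind
whatever interface SurfaceFlinger actually uses to get at the data
pointer, which is the fiddly part I mentioned above.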

BR,
Fred.


On 1/16/09, Mathias Agopian <[email protected]> wrote:
>
> On Fri, Jan 16, 2009 at 8:18 AM, F H <[email protected]> wrote:
>> Thanks Mathias, that helps *a lot*.
>>
>> My brain hurts from looking at the memory stuff!
>>
>> I can now see that each client maps in the 8MB shared memory region once.
>>
>> What I don't understand at the moment is where the pointer to the surface
>> data is passed back to the client - can you say at what point that
>> happens?
>
> There is a control block in shared memory, per surface. It contains an
> offset to the surface's buffers from the beginning of their heap. Both
> the client and SurfaceFlinger use this offset to access the buffer,
> but they each have their own view of the pointer to the heap.
>
> Have a look at SurfaceComposerClient.cpp for implementation details.
>
>
>> When a client uses a buffer (e.g. to draw to it) does it retrieve the
>> pointer by first getting the base address of the related heap, and then
>> digging out an offset from the surface?
>
> yes. the offset is controlled by SurfaceFlinger.
>
>> If we assume that I have my own buffer abstraction (whereby I could map a
>> buffer into both SurfaceFlinger and the client process) that wasn't based
>> on
>> having a Heap of contiguous memory.
>
> That would not work. SurfaceFlinger's design assumes Surfaces are
> attached to a heap.
>
> A heap is by definition contiguous address-space (not to be confused
> with physically contiguous memory).
>
>
>> Then could the following be made to work
>> easily:
>>
>> - Add a new Heap type, which would have an effective base of zero (see
>> later).
>> - The allocator would be simple (wouldn't need to worry about
>> fragmentation
>> itself).
>> - An offset associated with the buffer that was effectively an offset from
>> 0
>> (yielding a 32-bit virtual pointer).
>> - I might need to add and subtract a bias, or would a heap base of 0 be OK?
>
> No. absolutely not (see below).
>
>
>> Also:
>> - Presumably all rendering done by Android honours the fact that the
>> buffer
>> stride may not be the same as the width?
>
> yes.
>
>> - Presumably Android doesn't 'lock' and 'unlock' a buffer between
>> rendering
>> anywhere?
>
> correct.
>
> If you need to use your own memory type, you'd make the changes in
> VRamHeap.cpp, which is where all the memory management is done
> (you'd need the cupcake branch, because things were even more
> complicated before).
>
> In theory all you need to do is write a kernel driver that can
> allocate your own type of memory, and use that instead of /dev/pmem or
> /dev/ashmem.
>
> For this to work, your driver will have to be able to allocate
> several of these heaps, which may not be possible. On the G1, we had
> 8MB total to share among all apps. For obvious security reasons, we
> couldn't allow each client app to map the same 8MB into its address
> space.
> This is where pmem comes into play. pmem wraps a new "heap" around a
> master heap and allows the server process to "slap" and "unslap"
> individual pages in its client processes. "Slapping" is like
> "mmapping", except that when you "unslap", the pages are replaced by a
> "garbage" page (instead of a black hole).
>
> This way, the single 8MB heap is shared securely between all
> clients, each of which is allowed to "see" only the pages it needs.
>
> You would have to replicate this mechanism if your kernel allocator
> cannot create multiple heaps. The easiest way would be to start from
> our pmem driver and modify it to suit your needs.
>
>
> Mathias
>
>
>
>>
>> Thanks,
>> Fred.
>>
>>
>> On Fri, Jan 16, 2009 at 3:02 AM, Mathias Agopian <[email protected]>
>> wrote:
>>>
>>> Hi,
>>>
>>> On Thu, Jan 15, 2009 at 9:53 AM, F H <[email protected]> wrote:
>>> > I have a few questions regarding integration of an accelerated
>>> > capability
>>> > into Android.
>>>
>>> Before we go further, I would like to point out that there is no
>>> OpenGL ES driver model in Android 1.0. We're trying very hard to
>>> finalize it for the cupcake release, but there are no guarantees.
>>> You must understand that any work you do for h/w integration
>>> before that happens will become obsolete, and we will not try to
>>> ensure backward compatibility.
>>>
>>> > I understand that Android must draw to surfaces in Software. In our
>>> > environment we can create a buffer that can be drawn to both by
>>> > Hardware
>>> > and
>>> > software, but it is one of our components that must allocate the memory
>>> > in
>>> > order to do this.
>>>
>>> > I've been looking at some various components in SurfaceFlinger that may
>>> > be
>>> > of help (or possibly red-herrings) and have a few questions:
>>>
>>> > - Are *all* surfaces which can be drawn to instances of one
>>> > particular class, and if so which one? (I'd like to hook in our own
>>> > memory surface class at this point rather than a raw memory
>>> > allocation.) Is there a single point at which this allocation is
>>> > done?
>>>
>>> The allocation is done in SurfaceFlinger. The current scheme is very
>>> messy. Look at Layer.cpp and LayerBitmap.cpp. They eventually acquire
>>> an allocator object.
>>>
>>> In fact, on the G1, there is the same issue: all the surfaces need to
>>> be allocated in a certain way (look for the Pmem allocator).
>>>
>>> > - Does a layer have a 1:1 correspondence with a drawable Surface? Or
>>> > are
>>> > multiple Surfaces placed into a Layer?
>>>
>>> The naming conventions are not always consistent, but in short:
>>>
>>> A Layer is something that can be composited by SurfaceFlinger (which
>>> should have been called LayerFlinger). There are several types of
>>> Layers if you look in the code: in particular the regular ones
>>> (Layer.cpp), which are backed by a Surface, and the LayerBuffer (a
>>> very badly chosen name), which doesn't have a backing store but
>>> receives one from its client.
>>>
>>> A Surface has a certain number of Buffers, usually 2.
>>>
>>> Note that the GGLSurface type should have been called GGLBuffer.
>>>
>>>
>>> > - Presumably multiple layers are just composited to the final buffer in
>>> > their Z order?
>>>
>>> Yes.
>>>
>>> > - What's the GPUHardware class for? - is it there just for chip access
>>> > arbitration or debugging (It looked for example like the allocator was
>>> > only
>>> > used if the debug.egl.hw property were set). Is this needed if
>>> > arbitration
>>> > is handled through EGL/GL?
>>>
>>> GPUHardware will go away. It is used for managing the GPU
>>> (Arbitration) and allocating the GPU memory, so that GL Surfaces can
>>> reside on the chip.
>>>
>>> > - Is it true that for surfaces only one allocator will be used either
>>> > GPU,
>>> > /dev/pmem or heap in that order? Under what circumstances is /dev/pmem
>>> > needed?
>>>
>>> The current implementation deals only with the emulator and the G1.
>>> The G1 needs pmem memory to use h/w acceleration for the compositing,
>>> so pmem is always used on the G1 (which doesn't use ashmem at all).
>>>
>>> > - Is the 8Mb SurfaceFlinger allocation per process or a one-off for the
>>> > system?
>>>
>>> Depends. On the G1 it's for everyone, and it's physical. On the
>>> emulator we have 8MB of *address space* per process.
>>>
>>>
>>> > - Presumably there is only one compositor (running in a system thread)?
>>> > When
>>> > a surface is allocated is it done through the applications thread or
>>> > the
>>> > thread that looks after the composition? (Is there an accompanying call
>>> > somewhere to map memory from one process into another?)
>>>
>>> Memory is always allocated in SurfaceFlinger and stuffed into an
>>> IMemory object, which takes care of mapping it into the destination
>>> process automatically (see IMemory.cpp, if you want to hurt your
>>> brain).
>>>
>>> > - When the compositor is refreshing the display what mechanisms is it
>>> > using
>>> > to access buffers that are in a different address space?
>>>
>>> They're all in shared memory. SurfaceFlinger mmaps the surface
>>> heap (which is different from the main heap) of each of its client
>>> processes into its own address space. This consumes 8MB of address
>>> space per client process inside SurfaceFlinger.
>>>
>>> > - On the off-chance is there any porting documentation related to
>>> > surface
>>> > flinger and how it works?
>>>
>>> Sorry. :)
>>>
>>>
>>> You should be able to implement basic 2D h/w acceleration through the
>>> copybit HAL module.
>>>
>>> I hope this helps.
>>>
>>> Mathias
