Hello,

I'm working on Android in a virtualized environment under the Xen hypervisor.
I suppose that drm_hwcomposer is the convenient choice as a generic solution
in that case.

I've implemented a platformXXX backend for our case, which implements the
Importer, i.e. ImportBuffer and ReleaseBuffer for native buffers (i.e. ION
allocated memory) into/out of DRM.
I use the same ImportImage as in the platformgeneric.cpp implementation
(though I suppose it is not used in our use cases for now).
Please note that we use a custom paravirtualized DRM driver on our platform.
It has only one primary plane so far.
Taking that into account, I suppose that all composition is done in
SurfaceFlinger in this case,
i.e. hwc reports that it won't take any layers, SurfaceFlinger does the
composition and propagates one buffer as the framebuffer target (via
SetClientTarget) to hwc, and that buffer is placed on the primary plane.
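
Just to illustrate the flow I mean (the types and function names below are
made-up placeholders for illustration, not the real HWC2 entry points):

#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the HWC2 composition types and a layer list.
enum class Composition { Device, Client };

struct Layer {
  Composition requested;  // what SurfaceFlinger asked for
  Composition chosen;     // what the hwc actually accepts
};

// Validate step: with a single primary plane we accept no layers, so every
// layer falls back to client (GLES) composition in SurfaceFlinger.
void ValidateDisplay(std::vector<Layer> &layers) {
  for (Layer &l : layers)
    l.chosen = Composition::Client;
}

// Present step: SurfaceFlinger has composited everything into one buffer and
// handed it over via SetClientTarget(); the hwc only has to import it and
// put it on the primary plane (e.g. via a DRM commit).
void PresentDisplay(int drm_fd, uint32_t client_target_fb_id) {
  // ... set FB_ID = client_target_fb_id on the primary plane and commit ...
  (void)drm_fd;
  (void)client_target_fb_id;
}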

Using drm_hwcomposer I see buffer registration/deregistration
(DrmHwcBuffer::ImportBuffer / DrmHwcBuffer::Clear) for each composition.
That causes some performance issues, which may be specific to our
platform/DRM implementation/use case:
in our virtualized environment, framebuffer registration/deregistration
(drmModeAddFB2/drmModeRmFB) is costly, and doing it for every frame is de
facto useless, because SurfaceFlinger will not reallocate the framebuffer
targets (at least until the display resolution changes).

For example: if I allocate several buffers for the final composition and
register them in DRM once, then copy the composition from the framebuffer
surface (provided by SurfaceFlinger) into one of those preallocated buffers
and put it on a plane, that is more optimal than doing
drmModeAddFB2/drmModeRmFB for every buffer per composition.
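
An alternative along the same lines would be to cache the fb_id per buffer,
so that drmModeAddFB2 runs only the first time a given buffer is seen. The
sketch below keys the cache on the GEM handle, under the assumption that the
same underlying dma-buf always resolves to the same GEM handle on a given
DRM fd and that SurfaceFlinger keeps recycling a small, stable set of
client-target buffers:

#include <cstdint>
#include <map>
#include <xf86drm.h>
#include <xf86drmMode.h>

// Cache fb_ids per GEM handle so drmModeAddFB2 runs only once per buffer.
class FbCache {
 public:
  explicit FbCache(int drm_fd) : drm_fd_(drm_fd) {}

  int GetFbId(int prime_fd, uint32_t width, uint32_t height, uint32_t stride,
              uint32_t drm_format, uint32_t *fb_id) {
    uint32_t gem_handle = 0;
    int ret = drmPrimeFDToHandle(drm_fd_, prime_fd, &gem_handle);
    if (ret)
      return ret;

    auto it = cache_.find(gem_handle);
    if (it != cache_.end()) {        // already registered: reuse the fb_id
      *fb_id = it->second;
      return 0;
    }

    uint32_t handles[4] = {gem_handle, 0, 0, 0};
    uint32_t pitches[4] = {stride, 0, 0, 0};
    uint32_t offsets[4] = {0, 0, 0, 0};
    ret = drmModeAddFB2(drm_fd_, width, height, drm_format, handles, pitches,
                        offsets, fb_id, 0);
    if (!ret)
      cache_[gem_handle] = *fb_id;   // register once, keep until teardown
    return ret;
  }

  // Drop everything, e.g. on resolution change or hwc teardown.
  // (closing the GEM handles is omitted here for brevity)
  void Clear() {
    for (auto &entry : cache_)
      drmModeRmFB(drm_fd_, entry.second);
    cache_.clear();
  }

 private:
  int drm_fd_;
  std::map<uint32_t, uint32_t> cache_;  // GEM handle -> fb_id
};

Compared to the copy approach above, this keeps the imported framebuffers
alive between compositions instead of copying into preallocated buffers, but
in both cases the per-frame drmModeAddFB2/drmModeRmFB pair goes away.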

Could you please suggest how to overcome this buffer
registration/deregistration overhead?

Kind regards,
Andrii.
