On 18.06.24 07:01, Pierre Ossman wrote:
On 17/06/2024 20:18, Christian König wrote:
On 17.06.24 19:18, Pierre Ossman wrote:
On 17/06/2024 18:09, Michel Dänzer wrote:

Is there a way for me to know whether it is needed or not? Or should I be cautious and always do it?

Assuming GBM in the X server uses the GPU HW driver, I'd say it shouldn't be needed.


It does not (except for the driver that libgbm itself loads). We're trying to use this in Xvnc, so it's all CPU on the server side. We're just trying to make sure applications can use the full power of the GPU to render their stuff before handing it over to the X server. :)

That whole approach won't work.

When you don't have a HW driver loaded, or at least somehow tell the client that it should render into a linear buffer, then the data in the buffer will be tiled in a HW-specific format.

As far as I know you can't read that in a vendor-agnostic way with the CPU; you need the HW driver for that.


I'm confused. What's the goal of the GBM abstraction and specifically gbm_bo_map() if it's not a hardware-agnostic way of accessing buffers?

There is no hardware-agnostic way of accessing buffers which contain HW-specific data.

You always need a HW-specific backend for that, or use the linear flag, which makes the data layout HW-agnostic.
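
A minimal sketch of what that CPU access path looks like with the stock gbm.h entry points (illustrative only: the helper name and the 32bpp assumption are made up, error handling is omitted):

#include <gbm.h>
#include <stdint.h>
#include <string.h>

/* Copy the contents of a gbm_bo into a caller-provided linear buffer. */
static void read_back(struct gbm_bo *bo, void *dst, size_t dst_pitch)
{
    uint32_t stride = 0;
    void *map_data = NULL;
    uint32_t width  = gbm_bo_get_width(bo);
    uint32_t height = gbm_bo_get_height(bo);

    /* Without GBM_BO_USE_LINEAR at creation time, the backend either has
     * to blit/detile into a staging buffer here, or the mapping exposes a
     * HW-specific layout. */
    void *src = gbm_bo_map(bo, 0, 0, width, height,
                           GBM_BO_TRANSFER_READ, &stride, &map_data);
    if (!src)
        return;

    for (uint32_t y = 0; y < height; y++)
        memcpy((char *)dst + y * dst_pitch,
               (char *)src + y * stride,
               (size_t)width * 4 /* assumes a 32bpp format */);

    gbm_bo_unmap(bo, map_data);
}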


In practice, we are getting linear buffers, at least on Intel and AMD GPUs. Nvidia is being a bit difficult about getting GBM working, so we haven't tested that yet.

That's either because you happen to have a linear buffer for some reason, or because the hardware-specific GBM backend has inserted a blit, as Michel described.

I see there is the GBM_BO_USE_LINEAR flag. We have not used it yet, as we haven't seen a need for it. What is the effect of that? Would it guarantee what we are just lucky to see at the moment?

Michel and/or Marek need to answer that. I'm coming from the kernel side and maintaining the DMA-buf implementation backing all this, but I'm not an expert on gbm.
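
From the gbm.h side, GBM_BO_USE_LINEAR is something you request at buffer creation time; a minimal sketch (the render node path and helper name are illustrative only, error handling is omitted):

#include <fcntl.h>
#include <gbm.h>
#include <stdint.h>

/* Create a buffer whose memory layout is linear, so that a later
 * gbm_bo_map() does not depend on a HW-specific tiling format. */
static struct gbm_bo *create_linear_bo(uint32_t width, uint32_t height)
{
    int fd = open("/dev/dri/renderD128", O_RDWR); /* illustrative node */
    struct gbm_device *dev = gbm_create_device(fd);

    return gbm_bo_create(dev, width, height, GBM_FORMAT_XRGB8888,
                         GBM_BO_USE_RENDERING | GBM_BO_USE_LINEAR);
}

Whether that guarantees what you are currently seeing by luck is the part Michel or Marek would have to confirm.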

Regards,
Christian.


