On 17/06/2024 20:18, Christian König wrote:
> On 17.06.24 at 19:18, Pierre Ossman wrote:
>> On 17/06/2024 18:09, Michel Dänzer wrote:
>>>> Can I know whether it is needed or not? Or should I be cautious
>>>> and always do it?
>>> Assuming GBM in the X server uses the GPU HW driver, I'd say it
>>> shouldn't be needed.
>> It does not (except for whatever driver libgbm itself loads). We're
>> trying to use this in Xvnc, so it's all CPU. We're just trying to
>> make sure the applications can use the full power of the GPU to
>> render their stuff before handing it over to the X server. :)
> That whole approach won't work.
>
> When you don't have a HW driver loaded, or at least somehow tell the
> client that it should render into a linear buffer, the data in the
> buffer will be tiled in a HW-specific format.
>
> As far as I know you can't read that in a vendor-agnostic way with
> the CPU; you need the HW driver for that.

I'm confused. What's the goal of the GBM abstraction and specifically
gbm_bo_map() if it's not a hardware-agnostic way of accessing buffers?
In practice, we are getting linear buffers, at least on Intel and AMD
GPUs. Nvidia is being a bit difficult about getting GBM working, so we
haven't been able to test that yet.
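
For reference, the access on our side is essentially just the
following (a simplified sketch; the helper name and the shadow
framebuffer variables are made up here, and error handling is
omitted):

    #include <stdint.h>
    #include <string.h>
    #include <gbm.h>

    /* Copy the client's buffer into our CPU-side shadow framebuffer.
     * "bo" was imported earlier with gbm_bo_import() from the dma-buf
     * fd handed to us via DRI3. Assumes a 32 bpp format. */
    static void copy_bo_to_shadow(struct gbm_bo *bo,
                                  uint32_t width, uint32_t height,
                                  uint8_t *shadow, uint32_t shadow_stride)
    {
        uint32_t stride = 0;
        void *map_data = NULL;
        uint8_t *pixels;

        pixels = gbm_bo_map(bo, 0, 0, width, height,
                            GBM_BO_TRANSFER_READ, &stride, &map_data);

        for (uint32_t y = 0; y < height; y++)
            memcpy(shadow + y * shadow_stride,
                   pixels + y * stride, width * 4);

        gbm_bo_unmap(bo, map_data);
    }
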
I see there is a GBM_BO_USE_LINEAR flag. We have not used it yet, as
we haven't seen a need for it. What exactly does it do? Would it
guarantee the linear layout we are just lucky to be getting at the
moment?
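
To make sure I'm asking about the right thing, this is the kind of
allocation I mean (just a sketch; the device variable, dimensions,
format and the extra RENDERING flag are examples, not our actual
code):

    #include <gbm.h>

    /* Hypothetical allocation with an explicit linear-layout request.
     * The question is whether GBM_BO_USE_LINEAR would guarantee that
     * gbm_bo_map() later hands us plain, untiled pixels. */
    struct gbm_bo *bo = gbm_bo_create(dev, width, height,
                                      GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_RENDERING |
                                      GBM_BO_USE_LINEAR);
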
Regards
--
Pierre Ossman           Software Development
Cendio AB               http://cendio.com
Teknikringen 8          http://twitter.com/ThinLinc
583 30 Linköping        http://facebook.com/ThinLinc
Phone: +46-13-214600
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?