On 14.02.2018 at 17:35, Felix Kuehling wrote:
I don't see how you can have separate TTM objects referring to the
same memory.
Well, that is trivial. We do this all the time with prime and I+A.
As I understand it, you use DMABuf to export/import buffers on multiple
devices. I believe all devices share a single amdgpu_bo, which contains
the ttm_buffer_object.
That's incorrect as well. Normally multiple devices have separate
ttm_buffer_object instances, one for each device.
Going a bit higher, that actually makes sense, because the status of
each BO is different for each device. E.g. one device could have the BO
in use while it is idle on another device.
Can you point me to where this is done? I'm looking at
amdgpu_gem_prime_foreign_bo. It is used if an AMDGPU BO is imported into
a different AMDGPU device. It creates a new GEM object with a reference
to the same amdgpu BO (gobj->bo = amdgpu_bo_ref(bo)). To me this looks
very much like the same amdgpu_bo, and consequently the same TTM BO,
being shared by two GEM objects and two devices.

As Michel pointed out as well, that stuff isn't upstream, and judging from the recent requirements it will never go upstream.

If this is enabled by any changes that break existing buffer sharing for
A+A or A+I systems, please point it out to me. I'm not aware that this
patch series does anything to that effect.
As I said it completely breaks scanout with A+I systems.
Please tell me what "it" is. What in the changes I have posted is
breaking A+I systems? I don't see it.

Using the same amdgpu_bo structure with multiple devices is what "it" means here.

As I said that concept is incompatible with the requirements on A+A systems, so we need to find another solution to provide the functionality.

What's on my TODO list anyway is to extend DMA-buf to not require pinning and to be able to deal with P2P.

The former is actually rather easy and already mostly done by sharing the reservation object between exporter and importer.

The latter is a bit more tricky because I need to create the necessary P2P infrastructure, but even that is doable in the mid term.

amd-gfx mailing list
