On 19.03.2018 at 16:53, Chris Wilson wrote:
Quoting Christian König (2018-03-16 14:22:32)
[snip, probably lost too much context]
This allows for full-grown pipelining, e.g. the exporter can say it needs
to move the buffer for some operation, then let the move operation wait
for all existing fences in the reservation object and install the fence
of the move operation as the exclusive fence.
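
A rough sketch of that pipelined move, assuming the reservation_object
API as it existed at the time of this thread (since renamed to dma_resv).
The schedule_move_job() hook is hypothetical; the reservation_object_*
calls are the real interfaces. Dropping the collected fence references
again is omitted for brevity.

#include <linux/reservation.h>
#include <linux/dma-fence.h>

/* Hypothetical driver hook: queue the copy and make it wait on @excl
 * and @shared before it runs, returning the fence of the copy job. */
extern struct dma_fence *schedule_move_job(struct dma_fence *excl,
					   struct dma_fence **shared,
					   unsigned int shared_count);

static int exporter_pipeline_move(struct reservation_object *resv)
{
	struct dma_fence *excl, **shared, *move_fence;
	unsigned int shared_count;
	int ret;

	/* Collect every fence currently attached to the buffer. */
	ret = reservation_object_get_fences_rcu(resv, &excl,
						&shared_count, &shared);
	if (ret)
		return ret;

	/* The move job waits for all of them before it runs. */
	move_fence = schedule_move_job(excl, shared, shared_count);

	/* Its fence becomes the new exclusive fence, so importers block
	 * on the move before touching the buffer again. */
	reservation_object_lock(resv, NULL);
	reservation_object_add_excl_fence(resv, move_fence);
	reservation_object_unlock(resv);

	return 0;
}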
Ok, the situation I have in mind is the non-pipelined case: revoking the
dma-buf for mmu_invalidate_range or shrink_slab. I would need a
completion event that the CPU can wait on for all the invalidate
callbacks. (Essentially an atomic_t counter plus a struct completion; a
lighter version of dma_fence, I wonder where I've seen that before ;)
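
A minimal sketch of that "atomic_t counter plus struct completion" idea:
every outstanding invalidate callback holds one count, and the CPU-side
waiter blocks until the last one drops it. The revoke_waiter name and
helpers are made up for illustration.

#include <linux/atomic.h>
#include <linux/completion.h>

struct revoke_waiter {
	atomic_t pending;	/* outstanding callbacks + our own ref */
	struct completion done;	/* fires when pending drops to zero */
};

static void revoke_waiter_init(struct revoke_waiter *w)
{
	atomic_set(&w->pending, 1);	/* the waiter's own reference */
	init_completion(&w->done);
}

/* Take one reference per async invalidate callback that is dispatched. */
static void revoke_waiter_get(struct revoke_waiter *w)
{
	atomic_inc(&w->pending);
}

/* Called by each callback when it has finished invalidating. */
static void revoke_waiter_put(struct revoke_waiter *w)
{
	if (atomic_dec_and_test(&w->pending))
		complete(&w->done);
}

/* Called from mmu_invalidate_range/shrink_slab to wait for all of them. */
static void revoke_waiter_wait(struct revoke_waiter *w)
{
	revoke_waiter_put(w);		/* drop our own reference ... */
	wait_for_completion(&w->done);	/* ... and block for the rest */
}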

Actually that is harmless.

When you need to unmap a DMA-buf because of mmu_invalidate_range or shrink_slab, you need to wait for its reservation object anyway.

This needs to be done to make sure that the backing memory is idle; it doesn't matter whether the jobs were submitted by DMA-buf importers or by your own driver.
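
In code terms, "wait for its reservation object" boils down to something
like the following, again assuming the reservation_object API of that
time; the wait call and MAX_SCHEDULE_TIMEOUT are real interfaces, the
surrounding function is only illustrative.

#include <linux/reservation.h>
#include <linux/sched.h>

static long wait_for_backing_memory_idle(struct reservation_object *resv)
{
	/* Wait for all fences, shared and exclusive: it does not matter
	 * whether the jobs came from importers or from our own driver. */
	return reservation_object_wait_timeout_rcu(resv,
						   true,  /* wait_all */
						   false, /* intr */
						   MAX_SCHEDULE_TIMEOUT);
}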

The sg tables pointing to the now-released memory might live a bit longer, but that is unproblematic and actually intended.

If we tried to destroy the sg tables from an mmu_invalidate_range or shrink_slab callback, we would run into a lockdep horror.

Regards,
Christian.


Even so, it basically means passing a fence object down to the async
callbacks for them to signal when they are complete. Just to handle the
non-pipelined version. :|
-Chris
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
