Hi, Christian,

On 8/19/22 10:52, Christian König wrote:
Hi Thomas,

IIRC we intentionally dropped that approach of balancing ttm_mem_io_reserve()/ttm_mem_io_free().

Instead, the results from ttm_mem_io_reserve() are cached, and that cached information is freed by ttm_mem_io_free(). In other words, every time we need to make sure the cache is filled we call ttm_mem_io_reserve(), and every time we are about to free the resource or no longer need the mapping we call ttm_mem_io_free().

The callbacks to io_mem_reserve() and io_mem_free() are then balanced.

Hmm, yes, by the time the resource is destroyed anything reserved would indeed have been freed, but consider the following call sequence:

ttm_bo_vm_fault();
ttm_bo_vmap();
ttm_bo_vunmap();
ttm_bo_unmap_virtual();

Here, wouldn't we release the io reservation already at ttm_bo_vunmap(), when it should really only be released at ttm_bo_unmap_virtual()?



Fixing missing _free() calls in the error path is probably a good idea, but I wouldn't go beyond that.

Why should any of that be racy? You need to hold the reservation lock to call any of those functions.

The race arises once a ttm_resource has been detached from its bo: combined with an ongoing async memcpy, we no longer have a bo reservation lock to protect with. The async memcpy currently exists only in i915, and we might at some point be able to get rid of it, but it illustrates the problem.

Thanks,

Thomas



Regards,
Christian.




On 19.08.22 at 10:13, Thomas Hellström wrote:
Hi Christian,

I'm looking for a way to take some sort of reference across possible VRAM accesses over the PCI BAR, be it for runtime PM, workarounds or whatever.

The ttm_mem_io_reserve()/ttm_mem_io_free() pair seems like a good candidate, but the calls are completely unbalanced and look racy. In particular, error paths forget to call ttm_mem_io_free().

Would you have any objections if I took a look at attempting to balance calls to those functions, or do you have any other suggestions for a better method?

Thanks,

Thomas


