On 12/3/21 16:00, Christian König wrote:
Am 03.12.21 um 15:50 schrieb Thomas Hellström:

On 12/3/21 15:26, Christian König wrote:
[Adding Daniel here as well]

Am 03.12.21 um 15:18 schrieb Thomas Hellström:
[SNIP]
Well, that's ok as well. My question is why this single dma_fence
then shows up in the dma_fence_chain representing the whole
migration?
What we'd like to happen during eviction is that we

1) Await any exclusive or moving fences, then schedule the migration
blit. The blit manages its own GPU PTEs and results in a single fence.
2) Schedule unbind of any gpu vmas, possibly resulting in multiple
fences.
3) Most, but not all, of the remaining resv shared fences will have
finished in 2). We can't easily tell which, so we have a couple of
shared fences left. (Rough sketch of the flow below.)
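Roughly, as a sketch (evict_blit() and vma_unbind_async() are placeholder names for our driver-side helpers; locking and error handling omitted):

/* 1) The blit depends only on the exclusive / moving fence and manages
 *    its own GPU PTEs, so it results in a single fence.
 */
struct dma_fence *excl = dma_resv_excl_fence(bo->base.resv);
struct dma_fence *blit_fence = evict_blit(bo, excl);

/* 2) Unbind of the gpu vmas is scheduled async, possibly yielding
 *    multiple fences.
 */
list_for_each_entry(vma, &bo->vma_list, bo_link)
	vma_unbind_async(vma);

/* 3) Shared fences not signaled by 2) are still tracked in the resv
 *    and would need to be carried over. */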

Stop, wait a second here. We are going a bit in circles.

Before you migrate a buffer, you *MUST* wait for all shared fences to complete. This is documented mandatory DMA-buf behavior.
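That is, roughly the following before the migration touches the buffer (bo standing in for the object about to be migrated):

long ret;

/* Wait for *all* fences on the object, shared and exclusive. */
ret = dma_resv_wait_timeout(bo->base.resv, true /* wait_all */,
			    true /* intr */, MAX_SCHEDULE_TIMEOUT);
if (ret <= 0)
	return ret ? ret : -ETIME;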

Daniel and I have discussed that quite extensively over the last few months.

So how come you do the blit before all shared fences have completed?

Well, we don't currently, but wanted to... (I haven't consulted Daniel on the matter, tbh).

I was under the impression that all writes would add an exclusive fence to the dma_resv.

Yes, that's correct. I'm working on allowing more than one write fence, but that is currently under review.
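(With today's single exclusive slot that looks roughly like this; bo and write_fence are placeholders:)

dma_resv_lock(bo->base.resv, NULL);
/* A write publishes one exclusive fence; readers go into the shared
 * slots. */
dma_resv_add_excl_fence(bo->base.resv, write_fence);
dma_resv_unlock(bo->base.resv);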

If that's not the case, or this is otherwise against the mandatory DMA-buf behavior, we can certainly keep that part as-is, and that would eliminate 3).

Ah, now that somewhat starts to make sense.

So your blit only waits for the writes to finish before starting. Yes, that's legal as long as you don't change the original content with the blit.

But don't you then need to wait for both reads and writes before you unmap the VMAs?

Yes, but that's planned to be done fully async: the unbind jobs are scheduled simultaneously with the blit, and the blit itself manages its own page-table entries, so there's no need to unbind any blit vmas.


Anyway, the good news is that your problem totally goes away with the DMA-resv rework I've already sent out. Basically, it is now possible to have more than one fence in the DMA-resv object for migrations, and all existing fences are kept around until they are finished.

Sounds good.

Thanks,

Thomas
