On 29.11.2016 15:36, Christian König wrote:
On 29.11.2016 15:28, Nicolai Hähnle wrote:
On 29.11.2016 15:12, Christian König wrote:
On 29.11.2016 15:06, Nicolai Hähnle wrote:
On 29.11.2016 14:50, Christian König wrote:
On 29.11.2016 14:46, Nicolai Hähnle wrote:
On 28.11.2016 15:51, Christian König wrote:
From: sguttula <suresh.gutt...@amd.com>

This will flush the pipeline, which will allow sharing of dma-buf
based buffers.

Signed-off-by: Suresh Guttula <suresh.gutt...@amd.com>
Reviewed-by: Christian König <christian.koe...@amd.com>

Why is there no fence? Relying on the correctness of doing a flush
without a fence seems very likely to be wrong... it might seemingly
fix a sharing issue, but once the timing changes, the other side of
the buffer sharing might still see wrong results if it isn't properly
synchronized.

Well, there is no facility to share a fence with the other side, so
when the DMA-buf is used by multiple processes, the kernel must make
sure that everything executes in the right order.
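
For context, the pattern the patch enables looks roughly like this
(a sketch only, not the actual call sites in the video code; the
resource_get_handle parameters vary across Mesa versions):

  #include <string.h>
  #include "pipe/p_context.h"
  #include "pipe/p_screen.h"
  #include "state_tracker/drm_driver.h"

  /* Exporter side, sketched: flush so the pending work reaches the
   * kernel, then hand out a dma-buf fd.  The kernel's implicit sync
   * on the buffer then orders the importer's GPU work after ours. */
  static int export_dmabuf(struct pipe_context *pipe,
                           struct pipe_screen *screen,
                           struct pipe_resource *resource)
  {
      struct winsys_handle whandle;
      memset(&whandle, 0, sizeof(whandle));
      whandle.type = DRM_API_HANDLE_TYPE_FD;

      pipe->flush(pipe, NULL, 0);  /* submit pending work first */
      screen->resource_get_handle(screen, resource, &whandle);
      return whandle.handle;       /* dma-buf fd for the other side */
  }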

Ah right, the kernel does most of the job. Still, because of
multi-threaded submit, returning from pipe->flush doesn't actually
guarantee that the work has even been submitted to the kernel.

So unless the only guarantee you want here is that progress happens
eventually, you'll still need a fence.

I don't think we have an interface that guarantees that work has
reached the kernel without also waiting for job completion.
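
Concretely, the best a caller can do today is something like this
(sketch only; the fence_finish signature is the current one and has
tended to change between versions):

  #include "pipe/p_context.h"
  #include "pipe/p_screen.h"
  #include "pipe/p_defines.h"

  static void flush_and_reach_kernel(struct pipe_context *pipe,
                                     struct pipe_screen *screen)
  {
      struct pipe_fence_handle *fence = NULL;

      pipe->flush(pipe, &fence, 0);
      /* The only available guarantee also waits for the job to
       * *complete*, not just for the CS ioctl to return: */
      screen->fence_finish(screen, fence, PIPE_TIMEOUT_INFINITE);
      screen->fence_reference(screen, &fence, NULL);
  }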

I'm pretty sure that this isn't correct, otherwise VDPAU interop won't
work either.

Maybe we're just getting lucky?


When pipe->flush() is called it must be guaranteed that all work is
submitted to the kernel.

It guarantees that the work is (eventually, in practice very soon)
submitted to the kernel, but it does _not_ guarantee that the kernel
has already returned from the CS ioctl.

The whole point of multi-threaded dispatch is to avoid the wait that
would be required to guarantee that.
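
To illustrate, here is a toy model of multi-threaded dispatch (just
the shape of it, not the actual radeon winsys code): flush only
queues the CS for a worker thread, and that worker is the only place
the submit ioctl happens.

  #include <pthread.h>
  #include <stddef.h>

  struct cs_job { struct cs_job *next; /* ... command stream ... */ };

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
  static struct cs_job *head, *tail;  /* FIFO keeps submission order */

  static void submit_ioctl(struct cs_job *job)
  {
      (void)job;  /* the DRM CS ioctl would happen here */
  }

  /* What "flush" does in this model: enqueue, return immediately. */
  void flush_async(struct cs_job *job)
  {
      pthread_mutex_lock(&lock);
      job->next = NULL;
      if (tail)
          tail->next = job;
      else
          head = job;
      tail = job;
      pthread_cond_signal(&cond);
      pthread_mutex_unlock(&lock);
      /* Returning here does NOT mean the kernel has seen the job. */
  }

  /* The submission thread: the only caller of the ioctl. */
  void *submit_thread(void *arg)
  {
      (void)arg;
      for (;;) {
          pthread_mutex_lock(&lock);
          while (!head)
              pthread_cond_wait(&cond, &lock);
          struct cs_job *job = head;
          head = job->next;
          if (!head)
              tail = NULL;
          pthread_mutex_unlock(&lock);
          submit_ioctl(job);
      }
      return NULL;
  }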

We neither need nor want multi-threaded dispatch for any of the MM
(multimedia) parts.

When I implemented the whole multi-ring dispatch logic this was
clearly the case, and async flushes were only initiated by the winsys
if the RADEON_FLUSH_ASYNC flag was given.

And as far as I understand it, on the pipe layer the driver may only
use an async flush if the PIPE_FLUSH_DEFERRED flag is given.

If that isn't the case here then that's clearly a bug which needs to be
fixed.
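
For reference, the translation being talked about looks roughly like
this (paraphrased, not the verbatim Mesa code):

  /* In the r600_flush_from_st path, pipe-level flags get translated
   * to winsys flags; only DEFERRED is supposed to become ASYNC: */
  unsigned rflags = 0;
  if (flags & PIPE_FLUSH_END_OF_FRAME)
      rflags |= RADEON_FLUSH_END_OF_FRAME;
  if (flags & PIPE_FLUSH_DEFERRED)
      rflags |= RADEON_FLUSH_ASYNC;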

Well, the function you call to ensure things reach the kernel is
ws->cs_sync_flush.

This function is never called from any of the video code, but that's probably okay.

It is called from r600_flush_from_st for non-deferred flushes, but
only when nothing has been submitted to the gfx ring. When something
is on the gfx ring, we go to si_context_gfx_flush, which doesn't call
it.
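
From memory, the control flow I mean is roughly this (paraphrased,
not the exact code; radeon_emitted() is the helper I remember, the
details may differ):

  if (!radeon_emitted(rctx->gfx.cs, rctx->initial_gfx_cs_size)) {
      /* nothing on the gfx ring: non-deferred flushes get synced */
      if (!(flags & PIPE_FLUSH_DEFERRED))
          ws->cs_sync_flush(rctx->gfx.cs);
  } else {
      /* work on the gfx ring: si_context_gfx_flush runs and never
       * calls cs_sync_flush */
      rctx->gfx.flush(rctx, rflags, fence);
  }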

As for PIPE_FLUSH_DEFERRED, the idea is that this may not actually trigger a flush at all. All it gives you is a fence which will complete based on the next real flush.
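
In other words, the intended usage is something like this (a sketch
of the semantics as I understand them):

  struct pipe_fence_handle *f = NULL;
  pipe->flush(pipe, &f, PIPE_FLUSH_DEFERRED); /* may not flush at all */
  /* ... more rendering ... */
  pipe->flush(pipe, NULL, 0);                 /* the next real flush */
  /* only now is f backed by actually submitted work */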

Without flags, pipe->flush is supposed to have the semantics of
glFlush: it simply guarantees that all previous commands will
eventually be submitted to the GPU, but nothing else.

Because of that, I actually think the fact that r600_flush_from_st calls cs_sync_flush at all right now is a bug. But clearly it seems necessary to call cs_sync_flush *sometimes*.

So yes, it looks like there are a bunch of bugs here, some of them perhaps occasionally canceling each other out. And I kind of wonder whether they might not actually affect all OpenGL apps... hooray :)

Cheers,
Nicolai