On 21.09.2016 13:04, Daniel Vetter wrote:
On Wed, Sep 21, 2016 at 12:30 PM, Christian König
<> wrote:
On 21.09.2016 11:56, Michel Dänzer wrote:

Looks like there are different interpretations of the semantics of
exclusive vs. shared fences. Where are these semantics documented?

Yeah, I think as well that this is the primary question here.

IIRC the fences were explicitly called exclusive/shared instead of
writing/reading on purpose.

I absolutely don't mind switching them to writing/reading semantics, but
amdgpu really needs multiple writers at the same time.

So in this case the writing side of a reservation object needs to be a
collection of fences as well.
You can't have multiple writers with implicit syncing. That confusion
is exactly why we called them shared/exclusive. Multiple writers
generally means that you do some form of fencing in userspace
(unsync'ed gl buffer access is the common one). What you do for
private buffers doesn't matter, but when you render into a
shared/winsys buffer you really need to set the exclusive fence (and
there can only ever be one). So this probably needs some userspace
adjustments to make sure you don't accidentally set an exclusive write
hazard when you don't really want that implicit sync.
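
(To make the shared/exclusive semantics concrete, here is a minimal
sketch against the reservation object API of this time frame; struct
fence was later renamed to struct dma_fence. Error handling is
trimmed, and the two helper names are made up for illustration.)

#include <linux/fence.h>
#include <linux/reservation.h>

/* Read hazard: any number of shared fences can be attached. */
static int attach_read_fence(struct reservation_object *resv,
			     struct fence *fence)
{
	int ret;

	ww_mutex_lock(&resv->lock, NULL);
	/* Make room for one more shared fence slot. */
	ret = reservation_object_reserve_shared(resv);
	if (!ret)
		reservation_object_add_shared_fence(resv, fence);
	ww_mutex_unlock(&resv->lock);
	return ret;
}

/* Write hazard: there is exactly one exclusive slot, and setting it
 * replaces the shared fences, so everyone doing implicit sync will
 * wait on this fence. */
static void attach_write_fence(struct reservation_object *resv,
			       struct fence *fence)
{
	ww_mutex_lock(&resv->lock, NULL);
	reservation_object_add_excl_fence(resv, fence);
	ww_mutex_unlock(&resv->lock);
}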

Nope, that isn't true.

In the amdgpu stack we use multiple writers without implicit syncing between processes perfectly fine.

See amdgpu_sync.c for the implementation. What we do there is look at all the fences associated with a reservation object and only sync to those that come from another process.
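
(For reference, the core of that logic looks roughly like the
following condensed paraphrase of amdgpu_sync_resv(); error paths are
trimmed and fence_owner_of() is a hypothetical stand-in for amdgpu's
internal fence-owner lookup.)

#include <linux/fence.h>
#include <linux/reservation.h>
#include "amdgpu.h" /* amdgpu_device, amdgpu_sync, amdgpu_sync_fence() */

/* Hypothetical stand-in for amdgpu's internal owner lookup. */
void *fence_owner_of(struct fence *f);

static int sync_to_foreign_fences(struct amdgpu_device *adev,
				  struct amdgpu_sync *sync,
				  struct reservation_object *resv,
				  void *owner)
{
	struct reservation_object_list *flist;
	struct fence *f;
	unsigned int i;
	int r;

	/* The exclusive fence is always honoured. */
	f = reservation_object_get_excl(resv);
	r = amdgpu_sync_fence(adev, sync, f);
	if (r)
		return r;

	flist = reservation_object_get_list(resv);
	if (!flist)
		return 0;

	for (i = 0; i < flist->shared_count; ++i) {
		f = rcu_dereference_protected(flist->shared[i],
					      reservation_object_held(resv));
		/* Skip our own fences; ordering between our own
		 * engines is handled via explicit dependencies. */
		if (fence_owner_of(f) == owner)
			continue;
		r = amdgpu_sync_fence(adev, sync, f);
		if (r)
			return r;
	}
	return 0;
}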

Then we use explicit syncing for command submissions, in the form of "dependencies". E.g. for each CS we report an identifier for that submission back to user space, and on the next submission you can pass this identifier as a dependency that needs to be satisfied before the command submission can start running.

This was done to allow multiple engines (3D, DMA, Compute) to compose a buffer while still allowing compatibility with protocols like DRI2/DRI3.
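
(From user space the dependency mechanism looks roughly like this
sketch against the amdgpu UAPI; submit_cs() is a hypothetical wrapper
around DRM_IOCTL_AMDGPU_CS that returns the sequence number the kernel
reports for the submission.)

#include <stddef.h>
#include <stdint.h>
#include <drm/amdgpu_drm.h>

/* Hypothetical CS wrapper: submits a job with the given dependency
 * chunks and returns the kernel's sequence number for that job. */
extern uint64_t submit_cs(int fd, uint32_t ctx_id,
			  struct drm_amdgpu_cs_chunk_dep *deps,
			  unsigned int num_deps);

/* Chain two submissions: the second lists the first as a dependency,
 * so it cannot start running before the first has finished. */
static void submit_chained(int fd, uint32_t ctx_id)
{
	uint64_t first = submit_cs(fd, ctx_id, NULL, 0);

	struct drm_amdgpu_cs_chunk_dep dep = {
		.ip_type     = AMDGPU_HW_IP_GFX, /* the 3D engine */
		.ip_instance = 0,
		.ring        = 0,
		.ctx_id      = ctx_id,
		.handle      = first, /* identifier from the first CS */
	};

	submit_cs(fd, ctx_id, &dep, 1);
}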


