Hi Thomas,

On Fri, Apr 13, 2018 at 10:23 PM, Thomas Hellstrom
<thellst...@vmware.com> wrote:
> On 04/13/2018 07:13 PM, Daniel Vetter wrote:
>> On Wed, Apr 11, 2018 at 10:27:06AM +0200, Thomas Hellstrom wrote:
>>> 2) Should we add a *real* wound-wait choice to our wound-wait mutexes.
>>> Otherwise perhaps rename them or document that they're actually doing
>>> wait-die.
>> I think a doc patch would be good at least. Including all the data you
>> assembled here.
> Actually, a further investigation appears to indicate that manipulating the
> lock state under a local spinlock is about as fast as using atomic operations,
> even for the completely uncontended cases.
> This means that we could have a solution where you decide on a per-mutex or
> per-reservation object basis whether you want to manipulate lock-state under
> a "batch group" spinlock, meaning certain performance characteristics or
> traditional local locking, meaning other performance characteristics.
> Like, vmwgfx could choose batching locks, radeon traditional locks, but the
> same API would work for both and locks could be shared between drivers.

Don't we need to make this decision at least on a per-class level? Or
how will the spinlock/batch-lock approach interact with the normal
ww_mutex_lock path (which does require the atomics/ordered stores
we're trying to avoid)?
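For reference, the normal path in question looks roughly like this. This is a
sketch against the existing in-tree ww_mutex API (ww_acquire_init,
ww_mutex_lock, ww_mutex_lock_slow, ww_acquire_done/fini are real; the helper
and class names are made up), showing where the atomics sit that a batch-group
spinlock would have to coexist with:

```c
static DEFINE_WW_CLASS(demo_ww_class);	/* name is illustrative only */

/* Acquire two ww mutexes with the standard wait-die backoff dance.
 * The uncontended ww_mutex_lock() fast path is an atomic cmpxchg on
 * the owner field -- exactly the state a batch-group spinlock would
 * manipulate instead. */
static int lock_two(struct ww_mutex *a, struct ww_mutex *b,
		    struct ww_acquire_ctx *ctx)
{
	int ret;

	ww_acquire_init(ctx, &demo_ww_class);
	ret = ww_mutex_lock(a, ctx);
	while (!ret) {
		ret = ww_mutex_lock(b, ctx);
		if (ret != -EDEADLK)
			break;		/* 0: both held */
		/* We lost: drop what we hold, sleep-wait on the
		 * contended lock, then retry the other one with our
		 * unchanged stamp. */
		ww_mutex_unlock(a);
		ww_mutex_lock_slow(b, ctx);
		swap(a, b);		/* the held lock is now "a" */
		ret = 0;
	}
	if (!ret)
		ww_acquire_done(ctx);	/* no more locks in this ctx */
	else
		ww_acquire_fini(ctx);
	return ret;
}
```

If a batched lock can also be taken through this path by another driver, the
owner-field updates still have to be atomic, which is the mixing problem above.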

If we can't mix them I'm kinda leaning towards a
ww_batch_mutex/ww_batch_acquire_ctx with an otherwise exactly matching
API. We probably do need the new batch_start/end api, since
ww_acquire_done isn't quite the right place ...
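To make that concrete, the separate-type variant might look something like the
following. Nothing here exists yet: ww_batch_mutex and ww_batch_acquire_ctx
are the names floated above, and every other identifier is a hypothetical
placeholder sketched by analogy with the current ww_mutex API:

```c
/* Hypothetical sketch only. A batch class carries the shared "batch
 * group" spinlock; lock-state changes between batch_start/end happen
 * under that spinlock instead of via atomics. */
struct ww_batch_class {
	struct ww_class base;
	spinlock_t batch_lock;	/* serializes all lock state in the group */
};

struct ww_batch_acquire_ctx {
	struct ww_acquire_ctx base;
	struct ww_batch_class *batch;
};

static inline void ww_batch_start(struct ww_batch_acquire_ctx *ctx)
{
	spin_lock(&ctx->batch->batch_lock);
}

static inline void ww_batch_end(struct ww_batch_acquire_ctx *ctx)
{
	spin_unlock(&ctx->batch->batch_lock);
}

/* Same shape as ww_mutex_lock(); with the batch spinlock held the
 * uncontended case can be a plain store rather than a cmpxchg. */
int ww_batch_mutex_lock(struct ww_batch_mutex *lock,
			struct ww_batch_acquire_ctx *ctx);
```

Keeping the batch lock confined to its own mutex type sidesteps the mixing
question entirely, at the cost of drivers having to agree up front on which
flavour a given reservation object uses.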

> I'll see if I get time to put together an RFC.

Yeah I think there's definitely some use for batched ww locks, where
parallelism is generally low, or at least where the ratio of "time
spent acquiring locks" to "time spent doing stuff while holding locks"
is small enough that the reduced parallelism while acquiring isn't an
issue.
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
dri-devel mailing list
