07-May-2013 10:47, Mehrdad wrote:
On Monday, 6 May 2013 at 18:56:08 UTC, Dmitry Olshansky wrote:


Thanks for the detailed explanation!


And now the compiler/CPU decides to optimize/execute it out of order
(again, this is just an illustration) as:

lock _static_mutex;
x = alloc int;
// even if this store is atomic
static_ = x;
// BOOM! a reader that doesn't lock the mutex may
// already see static_ in a "half-baked" state
x[0] = 42;
unlock _static_mutex;



That's exactly the same as the classic double-checked lock bug, right?


Yeah, and that was my point to begin with - your method doesn't bring anything new. It's the same as the null + 'if-null check' approach, with the same issues, and it requires atomics or barriers.

As I wrote in my original code -- and as you also mentioned yourself --
isn't it trivially fixed with a memory barrier?

Like maybe replacing

     _static = new ActualValue<T>();

with

     var value = new ActualValue<T>();
     _ReadWriteBarrier();
     _static = value;



Wouldn't this make it correct?

It would, but then it's the same as the old fixed double-checked locking.
Barriers hurt the performance we were after to begin with.
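
For concreteness, that "old fixed double-checked locking" could look roughly like this in D - a sketch only; ActualValue, get and the lock object are my names, and atomicFence stands in for _ReadWriteBarrier. Note that a complete fix also wants acquire ordering (or at least an atomic load) on the fast-path read, which is exactly the per-access cost being discussed:

import core.atomic : atomicFence;

class ActualValue
{
    int payload = 42;
}

private __gshared ActualValue _static;   // visible to all threads, starts null
private __gshared Object _lock;

shared static this()
{
    _lock = new Object;
}

ActualValue get()
{
    auto local = _static;   // fast path: strictly this should be an acquire load
    if (local is null)
    {
        synchronized (_lock)
        {
            if (_static is null)
            {
                auto value = new ActualValue;  // construct first...
                atomicFence();                 // ...full fence...
                _static = value;               // ...then publish
            }
            local = _static;
        }
    }
    return local;
}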

Now it would be interesting to measure the speed of this TLS low-lock approach vs. an atomic load/memory barrier + mutex. That measurement is absent from the blog post, but Andrei claims a memory barrier on each access is too slow.
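
For reference, the TLS low-lock pattern being compared looks roughly like this (my rendering, names are mine): each thread pays for the lock and its implied barriers only once, and the plain fast-path read is safe because D module/class-level variables are thread-local by default:

class MySingleton
{
    private this() {}

    // Thread-local flag (D statics are TLS by default): once a thread
    // has taken the slow path once, it never locks or fences again.
    private static bool instantiated_;

    // The one instance shared by all threads.
    private __gshared MySingleton instance_;

    static MySingleton get()
    {
        if (!instantiated_)        // plain TLS read, no barrier
        {
            synchronized (MySingleton.classinfo)
            {
                if (instance_ is null)
                    instance_ = new MySingleton;
                instantiated_ = true;
            }
        }
        return instance_;
    }
}

So the benchmark in question would pit one TLS read and branch per call against an acquire load or fence per call.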


--
Dmitry Olshansky
