On 2017-06-15 00:33:12 [+0200], Jason A. Donenfeld wrote:
> There's a potential race that I fixed in my v5 of that patch set, but
> Ted only took v4, and for whatever reason has been too busy to submit
> the additional patch I already posted showing the diff between v4 and
> v5. Hopefully he actually gets around to it and sends this for the
> next rc. Here it is:
> 
> https://patchwork.kernel.org/patch/9774563/

So you replace the "crng_init < 2" check with use_lock instead. That is
not the problem I am talking about. Again:
        add_interrupt_randomness()
        -> crng_fast_load()                   spin_trylock_irqsave(&primary_crng.lock, );
           -> invalidate_batched_entropy()    write_lock_irqsave(&batched_entropy_reset_lock, );

in that order while the code path
        get_random_uXX()                      read_lock_irqsave(&batched_entropy_reset_lock, );
        -> extract_crng()
           -> _extract_crng()                 spin_lock_irqsave(&crng->lock, );

acquires the same locks in the opposite order (crng->lock here is
primary_crng.lock, since the primary crng is the one in use at this
point).
That means:

  T1                              T2
  crng_fast_load()                get_random_u64()
                                  extract_crng()
  invalidate_batched_entropy()
                                  _extract_crng()

                  *deadlock*

So T1 waits for batched_entropy_reset_lock while holding
primary_crng.lock, and T2 waits for primary_crng.lock while holding
batched_entropy_reset_lock: a classic ABBA deadlock.
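To make the inversion easy to reproduce outside the kernel, here is a
minimal userspace sketch of the same ABBA pattern using POSIX mutexes.
The lock and function names are borrowed from random.c purely for
illustration; this is not the kernel code, just the same lock-order
bug in miniature. Build with "cc -pthread abba.c -o abba"; it usually
hangs within seconds:

/* abba.c: two threads take the same two locks in opposite order.
 * Names mirror the random.c paths described above, for illustration. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t primary_crng_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t batched_entropy_reset_lock = PTHREAD_MUTEX_INITIALIZER;

/* T1: like crng_fast_load() -> invalidate_batched_entropy() */
static void *t1(void *unused)
{
	for (;;) {
		pthread_mutex_lock(&primary_crng_lock);          /* A ... */
		pthread_mutex_lock(&batched_entropy_reset_lock); /* ... then B */
		pthread_mutex_unlock(&batched_entropy_reset_lock);
		pthread_mutex_unlock(&primary_crng_lock);
	}
	return NULL;
}

/* T2: like get_random_u64() -> extract_crng() -> _extract_crng() */
static void *t2(void *unused)
{
	for (;;) {
		pthread_mutex_lock(&batched_entropy_reset_lock); /* B ... */
		pthread_mutex_lock(&primary_crng_lock);          /* ... then A */
		pthread_mutex_unlock(&primary_crng_lock);
		pthread_mutex_unlock(&batched_entropy_reset_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, t1, NULL);
	pthread_create(&b, NULL, t2, NULL);
	pthread_join(a, NULL);	/* never returns once both threads block */
	pthread_join(b, NULL);
	return 0;
}

The usual cure is to make every path take the two locks in the same
order, or to drop one of the acquisitions from one path entirely; in
the kernel, lockdep (CONFIG_PROVE_LOCKING) reports exactly this kind
of inversion before it ever has to trigger for real.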

Sebastian
