On Wed, Jul 26, 2017 at 01:15:06PM +0200, Pablo Neira Ayuso wrote:
> On Wed, Jul 26, 2017 at 02:09:41AM +0200, Florian Westphal wrote:
> [...]
> > @@ -144,7 +159,9 @@ static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
> >  	int err;
> >
> >  	write_lock_bh(&priv->lock);
> > +	write_seqcount_begin(&priv->count);
> >  	err =
Eric Dumazet wrote:
> On Wed, 2017-07-26 at 02:09 +0200, Florian Westphal wrote:
> > switch to lockless lookup. write side now also increments sequence
> > counter. On lookup, sample counter value and only take the lock
> > if we did not find a match and the counter has changed.
> >
> > This avoids need to write to private