Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-06 Thread Eric Wong
Roman Penyaev  wrote:
> On 2018-12-06 00:46, Eric Wong wrote:
> > Roman Penyaev  wrote:
> > > Hi all,
> > > 
> > > The goal of this patch is to reduce contention of ep_poll_callback()
> > > which
> > > can be called concurrently from different CPUs in case of high events
> > > rates and many fds per epoll.  Problem can be very well reproduced by
> > > generating events (write to pipe or eventfd) from many threads, while
> > > consumer thread does polling.  In other words this patch increases the
> > > bandwidth of events which can be delivered from sources to the
> > > poller by
> > > adding poll items in a lockless way to the list.
> > 
> > Hi Roman,
> > 
> > I also tried to solve this problem many years ago with help of
> > the well-tested-in-userspace wfcqueue from Mathieu's URCU.
> > 
> > I was also looking to solve contention with parallel epoll_wait
> > callers with this.  AFAIK, it worked well; but needed the
> > userspace tests from wfcqueue ported over to the kernel and more
> > review.
> > 
> > I didn't have enough computing power to show the real-world
> > benefits or funding to continue:
> > 
> > https://lore.kernel.org/lkml/?q=wfcqueue+d:..20130501
> 
> Hi Eric,
> 
> Nice work.  That was a huge change by itself and by dependency
> on wfcqueue.  I could not find any valuable discussion on this,
> what was the reaction of the community?

Hi Roman, AFAIK there wasn't much reaction.  Mathieu was VERY
helpful with wfcqueue but there wasn't much else.  Honestly, I'm
surprised wfcqueue hasn't made it into more places; I love it :)

(More recently, I started an effort to get glibc malloc to use wfcqueue:
https://public-inbox.org/libc-alpha/20180731084936.g4yw6wnvt677miti@dcvr/ )

> > It might not be too much trouble for you to brush up the wait-free
> > patches and test them against the rwlock implementation.
> 
> Ha :)  I may try to cherry-pick these patches, let's see how many
> conflicts I have to resolve; eventpoll.c has changed a lot
> since then (6 years passed, right?)

AFAIK not, epoll remains a queue with a key-value mapping.
I'm not a regular/experienced kernel hacker and I had no trouble
understanding eventpoll.c years ago.

> But reading your work description I can assume that epoll_wait() calls
> should be faster, because they do not contend with ep_poll_callback(),
> and I did not try to solve this, only contention between producers,
> which makes my change tiny.

Yes, I recall that was it.  My real-world programs[1], even without
slow HDD access, didn't show it, though.

> I also found your https://yhbt.net/eponeshotmt.c , where you count the
> number of bare epoll_wait() calls, which IMO is not correct, because
> we need to count how many events are delivered, not how fast
> you've returned from epoll_wait().  But as I said, no doubt that
> getting rid of contention between consumer and producers will show
> even better results.

"epoll_wait calls" == "events delivered" in my case since I (ab)use
epoll_wait with max_events=1 as a work-distribution mechanism
between threads.  Not a common use-case, I admit.
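A minimal sketch of that oneshot work-distribution idea (my illustration only, not the actual eponeshotmt.c; the function name and the single-fd setup are hypothetical). With EPOLLONESHOT, each ready fd is reported to exactly one epoll_wait() caller and is then disarmed until re-armed:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Hand one event to exactly one epoll_wait() caller: the fd is armed
 * with EPOLLONESHOT, so after the first wakeup it is disarmed until
 * re-armed with EPOLL_CTL_MOD.  Returns 0 when behavior matches. */
int oneshot_demo(void)
{
    int ep = epoll_create1(0);
    int efd = eventfd(0, 0);
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT,
                              .data.fd = efd };
    struct epoll_event out;
    uint64_t one = 1;
    int n1, n2;

    if (ep < 0 || efd < 0)
        return -1;
    epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ev);

    write(efd, &one, sizeof(one));    /* produce one event */
    n1 = epoll_wait(ep, &out, 1, 0);  /* first caller gets it */
    n2 = epoll_wait(ep, &out, 1, 0);  /* oneshot: fd already disarmed */

    close(efd);
    close(ep);
    return (n1 == 1 && n2 == 0) ? 0 : -1;
}
```

In a multi-threaded setup, the worker that consumed the event would handle the fd and then re-arm it with EPOLL_CTL_MOD, so only one thread ever owns a given fd at a time.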

My design was terrible from a syscall overhead POV, but my
bottleneck for real-world use for cmogstored[1] was dozens of
rotational HDDs in JBOD configuration; so I favored elimination
of head-of-line blocking over throughput of epoll itself.

My motivation for hacking on epoll back then was only to look
better on synthetic benchmarks that didn't hit slow HDDs :)



[1] git clone https://bogomips.org/cmogstored.git/
the Ragel-generated HTTP parser was also a bottleneck in
synthetic benchmarks, as we


Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-06 Thread Roman Penyaev

On 2018-12-06 00:46, Eric Wong wrote:

Roman Penyaev  wrote:

Hi all,

The goal of this patch is to reduce contention of ep_poll_callback() which
can be called concurrently from different CPUs in case of high events
rates and many fds per epoll.  Problem can be very well reproduced by
generating events (write to pipe or eventfd) from many threads, while
consumer thread does polling.  In other words this patch increases the
bandwidth of events which can be delivered from sources to the poller by
adding poll items in a lockless way to the list.


Hi Roman,

I also tried to solve this problem many years ago with help of
the well-tested-in-userspace wfcqueue from Mathieu's URCU.

I was also looking to solve contention with parallel epoll_wait
callers with this.  AFAIK, it worked well; but needed the
userspace tests from wfcqueue ported over to the kernel and more
review.

I didn't have enough computing power to show the real-world
benefits or funding to continue:

https://lore.kernel.org/lkml/?q=wfcqueue+d:..20130501


Hi Eric,

Nice work.  That was a huge change by itself and by dependency
on wfcqueue.  I could not find any valuable discussion on this,
what was the reaction of the community?



It might not be too much trouble for you to brush up the wait-free
patches and test them against the rwlock implementation.


Ha :)  I may try to cherry-pick these patches, let's see how many
conflicts I have to resolve; eventpoll.c has changed a lot
since then (6 years passed, right?)

But reading your work description I can assume that epoll_wait() calls
should be faster, because they do not contend with ep_poll_callback(),
and I did not try to solve this, only contention between producers,
which makes my change tiny.

I also found your https://yhbt.net/eponeshotmt.c , where you count the
number of bare epoll_wait() calls, which IMO is not correct, because
we need to count how many events are delivered, not how fast
you've returned from epoll_wait().  But as I said, no doubt that
getting rid of contention between consumer and producers will show
even better results.

--
Roman


Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-06 Thread Roman Penyaev

On 2018-12-06 04:08, Davidlohr Bueso wrote:

On 12/3/18 6:02 AM, Roman Penyaev wrote:



        if (!ep_is_linked(epi)) {
-               list_add_tail(&epi->rdllink, &ep->rdllist);
+               /* Reverse ->ovflist, events should be in FIFO */
+               list_add(&epi->rdllink, &ep->rdllist);
                ep_pm_stay_awake(epi);
        }


This should probably be a separate patch as it fixes the ordering,
regardless of the rwlock+xchg optimization.


Yes, should not be a part of this RFC.  Thanks.

--
Roman



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-06 Thread Roman Penyaev

On 2018-12-06 05:04, Davidlohr Bueso wrote:

On 12/3/18 6:02 AM, Roman Penyaev wrote:

The main change is in replacement of the spinlock with a rwlock, which is
taken on read in ep_poll_callback(), and then by adding poll items to the
tail of the list using xchg atomic instruction.  Write lock is taken
everywhere else in order to stop list modifications and guarantee that list
updates are fully completed (I assume that write side of a rwlock does not
starve, it seems qrwlock implementation has these guarantees).


It's good then that Will recently ported qrwlocks to arm64, which iirc had
a bad case of writer starvation.  In general, qrwlock will maintain reader
to writer ratios of acquisitions fairly well, but will favor readers over
writers in scenarios where there are too many tasks (more than ncpus).


Thanks for noting that.  Then that should not be a problem, since the number
of parallel ep_poll_callback() calls can't be greater than the number of CPUs
because of the wq.lock which is taken by the caller of ep_poll_callback().


BTW, did anyone estimate how much the latency on the write side
increases if the number of readers is greater than the number of CPUs?

--
Roman



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Davidlohr Bueso

On 12/3/18 6:02 AM, Roman Penyaev wrote:


The main change is in replacement of the spinlock with a rwlock, which is
taken on read in ep_poll_callback(), and then by adding poll items to the
tail of the list using xchg atomic instruction.  Write lock is taken
everywhere else in order to stop list modifications and guarantee that list
updates are fully completed (I assume that write side of a rwlock does not
starve, it seems qrwlock implementation has these guarantees).


It's good then that Will recently ported qrwlocks to arm64, which iirc had
a bad case of writer starvation.  In general, qrwlock will maintain reader
to writer ratios of acquisitions fairly well, but will favor readers over
writers in scenarios where there are too many tasks (more than ncpus).

Thanks,
Davidlohr



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Davidlohr Bueso

On 12/3/18 6:02 AM, Roman Penyaev wrote:



        if (!ep_is_linked(epi)) {
-               list_add_tail(&epi->rdllink, &ep->rdllist);
+               /* Reverse ->ovflist, events should be in FIFO */
+               list_add(&epi->rdllink, &ep->rdllist);
                ep_pm_stay_awake(epi);
        }


This should probably be a separate patch as it fixes the ordering,
regardless of the rwlock+xchg optimization.

Thanks,
Davidlohr


Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Davidlohr Bueso

+ akpm.

Also, there are some epoll patches queued for -next, and as such
this patch does not apply against linux-next.

Thanks,
Davidlohr

On Tue, 04 Dec 2018, Jason Baron wrote:

On 12/3/18 6:02 AM, Roman Penyaev wrote:

Hi all,

The goal of this patch is to reduce contention of ep_poll_callback() which
can be called concurrently from different CPUs in case of high events
rates and many fds per epoll.  Problem can be very well reproduced by
generating events (write to pipe or eventfd) from many threads, while
consumer thread does polling.  In other words this patch increases the
bandwidth of events which can be delivered from sources to the poller by
adding poll items in a lockless way to the list.

The main change is in replacement of the spinlock with a rwlock, which is
taken on read in ep_poll_callback(), and then by adding poll items to the
tail of the list using xchg atomic instruction.  Write lock is taken
everywhere else in order to stop list modifications and guarantee that list
updates are fully completed (I assume that write side of a rwlock does not
starve, it seems qrwlock implementation has these guarantees).

The following are some microbenchmark results based on the test [1] which
starts threads which generate N events each.  The test ends when all
events are successfully fetched by the poller thread:

spinlock
========

threads   run time   events per ms
-------   --------   -------------
      8   13191ms      6064/ms
     16   30758ms      5201/ms
     32   44315ms      7220/ms

rwlock + xchg
=============

threads   run time   events per ms
-------   --------   -------------
      8    8581ms      9323/ms
     16   13800ms     11594/ms
     32   24167ms     13240/ms

According to the results, the bandwidth of delivered events is significantly
increased, and thus execution time is reduced.

This is RFC because I did not run any benchmarks comparing current
qrwlock and spinlock implementations (4.19 kernel), although I did
not notice any epoll performance degradations in other benchmarks.

Also I'm not quite sure where to put the very special lockless variant
of adding an element to the list (list_add_tail_lockless() in this
patch).  Seems keeping it local is safer.
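The xchg tail-append idea can be modelled in userspace with C11 atomics. This is a sketch of the general technique only, not the patch's list_add_tail_lockless() (which works on struct list_head); the names `queue`, `enqueue`, and `sum_all` are my own:

```c
#include <stdatomic.h>
#include <stddef.h>

struct node  { _Atomic(struct node *) next; int val; };
struct queue { struct node head; _Atomic(struct node *) tail; };

void queue_init(struct queue *q)
{
    atomic_store(&q->head.next, NULL);
    atomic_store(&q->tail, &q->head);   /* empty list: tail points at head */
}

/* Safe to call from many threads concurrently: the single atomic
 * exchange claims a unique predecessor node, and one store then
 * publishes the forward link from that predecessor. */
void enqueue(struct queue *q, struct node *n)
{
    atomic_store(&n->next, NULL);
    struct node *prev = atomic_exchange(&q->tail, n);
    atomic_store(&prev->next, n);
}

/* Single-threaded drain, just for demonstration. */
int sum_all(struct queue *q)
{
    int s = 0;
    for (struct node *n = atomic_load(&q->head.next); n;
         n = atomic_load(&n->next))
        s += n->val;
    return s;
}
```

The key property is that producers never spin on each other: each pays one xchg, and only the consumer (holding the write lock in the patch) ever sees a momentarily unlinked predecessor.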

[1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c
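The scenario that test exercises can be approximated with a plain eventfd. This is a rough sketch under simplifying assumptions (one fd, EFD_SEMAPHORE accounting), not stress-epoll.c itself; NPROD and EVENTS are arbitrary demo values:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

enum { NPROD = 4, EVENTS = 1000 };
static int efd;

/* Each write wakes the poller via ep_poll_callback(); with many
 * producer threads these callbacks run concurrently on many CPUs. */
static void *producer(void *arg)
{
    (void)arg;
    uint64_t one = 1;
    for (int i = 0; i < EVENTS; i++)
        write(efd, &one, sizeof(one));
    return NULL;
}

/* Consumer polls until every produced event has been read back.
 * EFD_SEMAPHORE makes each read() consume exactly one event. */
long consume_all(void)
{
    int ep = epoll_create1(0);
    efd = eventfd(0, EFD_SEMAPHORE | EFD_NONBLOCK);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
    epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ev);

    pthread_t t[NPROD];
    for (int i = 0; i < NPROD; i++)
        pthread_create(&t[i], NULL, producer, NULL);

    long consumed = 0;
    uint64_t v;
    while (consumed < (long)NPROD * EVENTS) {
        struct epoll_event out;
        if (epoll_wait(ep, &out, 1, -1) == 1 &&
            read(efd, &v, sizeof(v)) == sizeof(v))
            consumed++;
    }
    for (int i = 0; i < NPROD; i++)
        pthread_join(t[i], NULL);
    close(efd);
    close(ep);
    return consumed;
}
```

Note this counts events actually consumed, not bare epoll_wait() returns, matching the measurement point argued for earlier in the thread.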

Signed-off-by: Roman Penyaev 
Cc: Alexander Viro 
Cc: "Paul E. McKenney" 
Cc: Linus Torvalds 
Cc: linux-fsde...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 fs/eventpoll.c | 107 +++--
 1 file changed, 69 insertions(+), 38 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 42bbe6824b4b..89debda47aca 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -50,10 +50,10 @@
  *
  * 1) epmutex (mutex)
  * 2) ep->mtx (mutex)
- * 3) ep->wq.lock (spinlock)
+ * 3) ep->lock (rwlock)
  *
  * The acquire order is the one listed above, from 1 to 3.
- * We need a spinlock (ep->wq.lock) because we manipulate objects
+ * We need a rwlock (ep->lock) because we manipulate objects
  * from inside the poll callback, that might be triggered from
  * a wake_up() that in turn might be called from IRQ context.
  * So we can't sleep inside the poll callback and hence we need
@@ -85,7 +85,7 @@
  * of epoll file descriptors, we use the current recursion depth as
  * the lockdep subkey.
  * It is possible to drop the "ep->mtx" and to use the global
- * mutex "epmutex" (together with "ep->wq.lock") to have it working,
+ * mutex "epmutex" (together with "ep->lock") to have it working,
  * but having "ep->mtx" will make the interface more scalable.
  * Events that require holding "epmutex" are very rare, while for
  * normal operations the epoll private "ep->mtx" will guarantee
@@ -182,8 +182,6 @@ struct epitem {
  * This structure is stored inside the "private_data" member of the file
  * structure and represents the main data structure for the eventpoll
  * interface.
- *
- * Access to it is protected by the lock inside wq.
  */
 struct eventpoll {
/*
@@ -203,13 +201,16 @@ struct eventpoll {
/* List of ready file descriptors */
struct list_head rdllist;

+   /* Lock which protects rdllist and ovflist */
+   rwlock_t lock;
+
/* RB tree root used to store monitored fd structs */
struct rb_root_cached rbr;

/*
 * This is a single linked list that chains all the "struct epitem" that
 * happened while transferring ready events to userspace w/out
-* holding ->wq.lock.
+* holding ->lock.
 */
struct epitem *ovflist;

@@ -697,17 +698,17 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
 * because we want the "sproc" callback to be able to do it
 * in a lockless way.
 */
-	spin_lock_irq(&ep->wq.lock);
+	write_lock_irq(&ep->lock);
	list_splice_init(&ep->rdllist, &txlist);
	ep->ovflist = NULL;
-	spin_unlock_irq(&ep->wq.lock);
+	write_unlock_irq(&ep->lock);

Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Eric Wong
Roman Penyaev  wrote:
> Hi all,
> 
> The goal of this patch is to reduce contention of ep_poll_callback() which
> can be called concurrently from different CPUs in case of high events
> rates and many fds per epoll.  Problem can be very well reproduced by
> generating events (write to pipe or eventfd) from many threads, while
> consumer thread does polling.  In other words this patch increases the
> bandwidth of events which can be delivered from sources to the poller by
> adding poll items in a lockless way to the list.

Hi Roman,

I also tried to solve this problem many years ago with help of
the well-tested-in-userspace wfcqueue from Mathieu's URCU.

I was also looking to solve contention with parallel epoll_wait
callers with this.  AFAIK, it worked well; but needed the
userspace tests from wfcqueue ported over to the kernel and more
review.

I didn't have enough computing power to show the real-world
benefits or funding to continue:

https://lore.kernel.org/lkml/?q=wfcqueue+d:..20130501

It might not be too much trouble for you to brush up the wait-free
patches and test them against the rwlock implementation.

(I only noticed this thread since I was catching up on some
 public-inbox work :>)



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Roman Penyaev

On 2018-12-05 17:38, Jason Baron wrote:


I think it might be interesting, at least for testing, to see if not grabbing
wq.lock improves your benchmarks any further?  fwiw, epoll only recently
started grabbing wq.lock bc lockdep required it.


That's easy!  I've just tested with the following hunk applied on top of
my patch:


+++ b/fs/eventpoll.c
@@ -1228,7 +1228,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v

				break;
			}
		}
-		wake_up(&ep->wq);
+		wake_up_locked(&ep->wq);
	}

Run time:

threads   w/ wq.lock   w/o wq.lock
-------   ----------   -----------
      8      8581ms       8602ms
     16     13800ms      13715ms
     32     24167ms      23817ms

No big difference.  According to perf the contention is on the read lock and
on try_to_wake_up(), the p->pi_lock, which serializes access exactly like
the vanished wq.lock.


-   24.41% 5.39%  a.out[kernel.kallsyms]   [k] ep_poll_callback
   - 19.02% ep_poll_callback
  + 11.88% _raw_read_lock_irqsave
  + 5.74% _raw_read_unlock_irqrestore
  - 1.39% __wake_up_common
 - 1.22% try_to_wake_up
+ 0.98% _raw_spin_lock_irqsave


--
Roman


Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Jason Baron



On 12/5/18 6:16 AM, Roman Penyaev wrote:
> On 2018-12-04 18:23, Jason Baron wrote:
>> On 12/3/18 6:02 AM, Roman Penyaev wrote:
> 
> [...]
> 
>>>
>>>  ep_set_busy_poll_napi_id(epi);
>>>
>>> @@ -1156,8 +1187,8 @@ static int ep_poll_callback(wait_queue_entry_t *wait, 
>>> unsigned mode, int sync, v
>>>   */
>>>  if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) {
>>>  if (epi->next == EP_UNACTIVE_PTR) {
>>> -    epi->next = ep->ovflist;
>>> -    ep->ovflist = epi;
>>> +    /* Atomically exchange tail */
>>> +    epi->next = xchg(&ep->ovflist, epi);
>>
>> This also relies on the fact that the same epi can't be added to the
>> list in parallel, because the wait queue doing the wakeup will have the
>> wait_queue_head lock. That was a little confusing for me b/c we only had
>> the read_lock() above.
> 
> Yes, that is indeed not an obvious path, but wq is locked by the
> wake_up_*_poll() call, or the caller of wake_up_locked_poll() has to be
> sure wq.lock is taken.
> 
> I'll add an explicit comment here, thanks for pointing out.
> 
>>
>>>  if (epi->ws) {
>>>  /*
>>>   * Activate ep->ws since epi->ws may get
>>> @@ -1172,7 +1203,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, 
>>> unsigned mode, int sync, v
>>>
>>>  /* If this file is already in the ready list we exit soon */
>>>  if (!ep_is_linked(epi)) {
>>> -    list_add_tail(&epi->rdllink, &ep->rdllist);
>>> +    list_add_tail_lockless(&epi->rdllink, &ep->rdllist);
>>>  ep_pm_stay_awake_rcu(epi);
>>>  }
>>
>> same for this.
> 
> ... and an explicit comment here.
> 
>>
>>>
>>> @@ -1197,13 +1228,13 @@ static int ep_poll_callback(wait_queue_entry_t 
>>> *wait, unsigned mode, int sync, v
>>>  break;
>>>  }
>>>  }
>>> -    wake_up_locked(&ep->wq);
>>> +    wake_up(&ep->wq);
>>
>> why the switch here to the locked() variant? Shouldn't the 'reader'
>> side, in this case which takes the rwlock for write see all updates in a
>> coherent state at this point?
> 
> lockdep inside __wake_up_common expects wq_head->lock is taken, and
> seems this is not a good idea to leave wq naked on wake up path,
> when several CPUs can enter wake function.  Although default_wake_function
> is protected by spinlock inside try_to_wake_up(), but for example
> autoremove_wake_function() can't be called concurrently for the same wq
> (it removes wq entry from the list).  Also in case of bookmarks
> __wake_up_common adds an entry to the list, thus can't be called without
> any locks.
> 
> I understand your concern and you are right saying that the read side
> sees wq entries as stable, but that will work only if __wake_up_common
> does not modify anything; that seems true now, but of course it is
> too scary to rely on that in the future.

I think it might be interesting for, at least testing, to see if not grabbing
wq.lock improves your benchmarks any further? fwiw, epoll only recently started
grabbing wq.lock bc lockdep required it.

Thanks,

-Jason



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Roman Penyaev

On 2018-12-05 00:59, Andrea Parri wrote:

Hi Roman,

On Tue, Dec 04, 2018 at 12:50:58PM +0100, Roman Penyaev wrote:

On 2018-12-03 18:34, Linus Torvalds wrote:



> This also ends up making the memory ordering of "xchg()" very very
> important. Yes, we've documented it as being an ordering op, but I'm
> not sure we've relied on it this directly before.

Seems exit_mm() does exactly the same, the following chunk:

up_read(&mm->mmap_sem);

self.task = current;
self.next = xchg(&core_state->dumper.next, &self);


At least code pattern looks similar.


Maybe add a comment on top of (your) xchg() to note/justify these memory
ordering requirements?  As Paul said: "if there are races, this would
help force them to happen" (and simplify the review, this/future).


Hi Andrea,

Sure, this path is tricky, so I will cook something up.

--
Roman
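The pattern both sites share — publish yourself as the new list head and remember the old head in a single xchg — can be sketched in user space with C11 atomics. This is an illustration of the shape of the code, not the kernel implementation; names and constants are made up:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct pnode {
	struct pnode *next;
};

static _Atomic(struct pnode *) head;

/* Lock-free push, the shape of both the ep->ovflist and the
 * core_state->dumper chains: the xchg is a full barrier, so everything
 * written before the push is visible to whoever later walks the list
 * from 'head'.  Note the window where 'head' already points at n but
 * n->next is not yet assigned -- pushers never read next, and walkers
 * must be excluded until pushers are quiescent. */
static void push(struct pnode *n)
{
	n->next = atomic_exchange(&head, n);
}

#define THREADS    4
#define PER_THREAD 10000

static struct pnode nodes[THREADS][PER_THREAD];

static void *pusher(void *arg)
{
	struct pnode *mine = arg;
	for (int i = 0; i < PER_THREAD; i++)
		push(&mine[i]);
	return NULL;
}

/* Push from several threads concurrently, then count the chain. */
int run_push_test(void)
{
	pthread_t t[THREADS];
	atomic_store(&head, (struct pnode *)NULL);
	for (int i = 0; i < THREADS; i++)
		pthread_create(&t[i], NULL, pusher, nodes[i]);
	for (int i = 0; i < THREADS; i++)
		pthread_join(t[i], NULL);

	int count = 0;
	for (struct pnode *p = atomic_load(&head); p; p = p->next)
		count++;
	return count;
}
```

The resulting list comes out in reverse push order, which both callers tolerate: they drain the whole chain rather than relying on ordering.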



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Roman Penyaev

On 2018-12-04 20:02, Paul E. McKenney wrote:

On Tue, Dec 04, 2018 at 12:23:08PM -0500, Jason Baron wrote:



On 12/3/18 6:02 AM, Roman Penyaev wrote:
> Hi all,
>
> The goal of this patch is to reduce contention of ep_poll_callback() which
> can be called concurrently from different CPUs in case of high events
> rates and many fds per epoll.  Problem can be very well reproduced by
> generating events (write to pipe or eventfd) from many threads, while
> consumer thread does polling.  In other words this patch increases the
> bandwidth of events which can be delivered from sources to the poller by
> adding poll items in a lockless way to the list.
>
> The main change is in replacement of the spinlock with a rwlock, which is
> taken on read in ep_poll_callback(), and then by adding poll items to the
> tail of the list using xchg atomic instruction.  Write lock is taken
> everywhere else in order to stop list modifications and guarantee that list
> updates are fully completed (I assume that write side of a rwlock does not
> starve, it seems qrwlock implementation has these guarantees).
>
> The following are some microbenchmark results based on the test [1] which
> starts threads which generate N events each.  The test ends when all
> events are successfully fetched by the poller thread:
>
> spinlock
> ========
>
> threads   run time   events per ms
> -------   --------   -------------
>       8    13191ms         6064/ms
>      16    30758ms         5201/ms
>      32    44315ms         7220/ms
>
> rwlock + xchg
> =============
>
> threads   run time   events per ms
> -------   --------   -------------
>       8     8581ms         9323/ms
>      16    13800ms        11594/ms
>      32    24167ms        13240/ms
>
> According to the results bandwidth of delivered events is significantly
> increased, thus execution time is reduced.
>
> This is RFC because I did not run any benchmarks comparing current
> qrwlock and spinlock implementations (4.19 kernel), although I did
> not notice any epoll performance degradations in other benchmarks.
>
> Also I'm not quite sure where to put very special lockless variant
> of adding element to the list (list_add_tail_lockless() in this
> patch).  Seems keeping it locally is safer.
>
> [1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c
>
> Signed-off-by: Roman Penyaev 
> Cc: Alexander Viro 
> Cc: "Paul E. McKenney" 
> Cc: Linus Torvalds 
> Cc: linux-fsde...@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  fs/eventpoll.c | 107 +++--
>  1 file changed, 69 insertions(+), 38 deletions(-)
>
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 42bbe6824b4b..89debda47aca 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -50,10 +50,10 @@
>   *
>   * 1) epmutex (mutex)
>   * 2) ep->mtx (mutex)
> - * 3) ep->wq.lock (spinlock)
> + * 3) ep->lock (rwlock)
>   *
>   * The acquire order is the one listed above, from 1 to 3.
> - * We need a spinlock (ep->wq.lock) because we manipulate objects
> + * We need a rwlock (ep->lock) because we manipulate objects
>   * from inside the poll callback, that might be triggered from
>   * a wake_up() that in turn might be called from IRQ context.
>   * So we can't sleep inside the poll callback and hence we need
> @@ -85,7 +85,7 @@
>   * of epoll file descriptors, we use the current recursion depth as
>   * the lockdep subkey.
>   * It is possible to drop the "ep->mtx" and to use the global
> - * mutex "epmutex" (together with "ep->wq.lock") to have it working,
> + * mutex "epmutex" (together with "ep->lock") to have it working,
>   * but having "ep->mtx" will make the interface more scalable.
>   * Events that require holding "epmutex" are very rare, while for
>   * normal operations the epoll private "ep->mtx" will guarantee
> @@ -182,8 +182,6 @@ struct epitem {
>   * This structure is stored inside the "private_data" member of the file
>   * structure and represents the main data structure for the eventpoll
>   * interface.
> - *
> - * Access to it is protected by the lock inside wq.
>   */
>  struct eventpoll {
>/*
> @@ -203,13 +201,16 @@ struct eventpoll {
>/* List of ready file descriptors */
>struct list_head rdllist;
>
> +  /* Lock which protects rdllist and ovflist */
> +  rwlock_t lock;
> +
>/* RB tree root used to store monitored fd structs */
>struct rb_root_cached rbr;
>
>/*
> * This is a single linked list that chains all the "struct epitem" that
> * happened while transferring ready events to userspace w/out
> -   * holding ->wq.lock.
> +   * holding ->lock.
> */
>struct epitem *ovflist;
>
> @@ -697,17 +698,17 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
> * because we want the "sproc" callback to be able to do it
> * in a lockless way.
> */
> -  spin_lock_irq(&ep->wq.lock);
> +  write_lock_irq(&ep->lock);
>    list_splice_init(&ep->rdllist, &txlist);
>

Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-05 Thread Roman Penyaev

On 2018-12-04 18:23, Jason Baron wrote:

On 12/3/18 6:02 AM, Roman Penyaev wrote:


[...]



ep_set_busy_poll_napi_id(epi);

@@ -1156,8 +1187,8 @@ static int ep_poll_callback(wait_queue_entry_t 
*wait, unsigned mode, int sync, v

 */
if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) {
if (epi->next == EP_UNACTIVE_PTR) {
-   epi->next = ep->ovflist;
-   ep->ovflist = epi;
+   /* Atomically exchange tail */
+   epi->next = xchg(&ep->ovflist, epi);


This also relies on the fact that the same epi can't be added to the
list in parallel, because the wait queue doing the wakeup will have the
wait_queue_head lock. That was a little confusing for me b/c we only had
the read_lock() above.


Yes, that is indeed not an obvious path, but wq is locked by the
wake_up_*_poll() call, or the caller of wake_up_locked_poll() has to be
sure wq.lock is taken.

I'll add an explicit comment here, thanks for pointing out.




if (epi->ws) {
/*
 * Activate ep->ws since epi->ws may get
@@ -1172,7 +1203,7 @@ static int ep_poll_callback(wait_queue_entry_t 
*wait, unsigned mode, int sync, v


/* If this file is already in the ready list we exit soon */
if (!ep_is_linked(epi)) {
-   list_add_tail(&epi->rdllink, &ep->rdllist);
+   list_add_tail_lockless(&epi->rdllink, &ep->rdllist);
ep_pm_stay_awake_rcu(epi);
}


same for this.


... and an explicit comment here.





@@ -1197,13 +1228,13 @@ static int ep_poll_callback(wait_queue_entry_t 
*wait, unsigned mode, int sync, v

break;
}
}
-   wake_up_locked(&ep->wq);
+   wake_up(&ep->wq);


why the switch here to the locked() variant? Shouldn't the 'reader'
side, in this case which takes the rwlock for write, see all updates in
a coherent state at this point?


lockdep inside __wake_up_common expects wq_head->lock is taken, and it
seems this is not a good idea to leave wq naked on the wake up path,
when several CPUs can enter the wake function.  Although
default_wake_function is protected by a spinlock inside try_to_wake_up(),
for example autoremove_wake_function() can't be called concurrently for
the same wq (it removes the wq entry from the list).  Also in case of
bookmarks __wake_up_common adds an entry to the list, thus it can't be
called without any locks.

I understand your concern and you are right saying that the read side
sees wq entries as stable, but that will work only if __wake_up_common
does not modify anything; that seems true now, but of course it is
too scary to rely on that in the future.




}
if (waitqueue_active(&ep->poll_wait))
pwake++;

 out_unlock:
-   spin_unlock_irqrestore(&ep->wq.lock, flags);
+   read_unlock_irqrestore(&ep->lock, flags);

/* We have to call this outside the lock */
if (pwake)
@@ -1489,7 +1520,7 @@ static int ep_insert(struct eventpoll *ep, const 
struct epoll_event *event,

goto error_remove_epi;

	/* We have to drop the new item inside our item list to keep track of it */

-   spin_lock_irq(&ep->wq.lock);
+   write_lock_irq(&ep->lock);

/* record NAPI ID of new item if present */
ep_set_busy_poll_napi_id(epi);
@@ -1501,12 +1532,12 @@ static int ep_insert(struct eventpoll *ep, 
const struct epoll_event *event,


/* Notify waiting tasks that events are available */
if (waitqueue_active(&ep->wq))
-   wake_up_locked(&ep->wq);
+   wake_up(&ep->wq);


is this necessary to switch as well? Is this to make lockdep happy?
Looks like there are few more conversions too below...


Yes, necessary, wq.lock should be taken.

--
Roman
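The lockless tail add being discussed can be mimicked in user space. The sketch below shows the xchg-on-prev idea for a circular doubly linked list — it is an illustration of the technique, not the patch's actual list_add_tail_lockless(); all names are made up:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct llist {
	struct llist *next;
	_Atomic(struct llist *) prev;
};

/* Sketch of an xchg-based lockless tail add for a circular doubly
 * linked list.  Safe for concurrent adders only; removals and list
 * walks must be excluded (in the epoll patch, by taking the rwlock
 * for write). */
static void add_tail_lockless(struct llist *node, struct llist *head)
{
	node->next = head;
	/* The xchg atomically claims the old tail and publishes 'node'
	 * as the new one; the two stores below stitch it in. */
	struct llist *prev = atomic_exchange(&head->prev, node);
	prev->next = node;
	atomic_store(&node->prev, prev);
}

#define THREADS    4
#define PER_THREAD 10000

static struct llist head;
static struct llist nodes[THREADS][PER_THREAD];

static void *adder(void *arg)
{
	struct llist *mine = arg;
	for (int i = 0; i < PER_THREAD; i++)
		add_tail_lockless(&mine[i], &head);
	return NULL;
}

/* Add from several threads, then (once all adders are quiescent)
 * walk the list forward and count the entries. */
int run_list_test(void)
{
	pthread_t t[THREADS];
	head.next = &head;
	atomic_store(&head.prev, &head);
	for (int i = 0; i < THREADS; i++)
		pthread_create(&t[i], NULL, adder, nodes[i]);
	for (int i = 0; i < THREADS; i++)
		pthread_join(t[i], NULL);

	int count = 0;
	for (struct llist *p = head.next; p != &head; p = p->next)
		count++;
	return count;
}
```

The xchg on head->prev is the single serialization point: each adder atomically claims its predecessor, so concurrent adders never interleave on the same tail slot.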




Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-04 Thread Andrea Parri
Hi Roman,

On Tue, Dec 04, 2018 at 12:50:58PM +0100, Roman Penyaev wrote:
> On 2018-12-03 18:34, Linus Torvalds wrote:

> > This also ends up making the memory ordering of "xchg()" very very
> > important. Yes, we've documented it as being an ordering op, but I'm
> > not sure we've relied on it this directly before.
> 
> Seems exit_mm() does exactly the same, the following chunk:
> 
>   up_read(&mm->mmap_sem);
> 
>   self.task = current;
>   self.next = xchg(&core_state->dumper.next, &self);
> 
> 
> At least code pattern looks similar.

Maybe add a comment on top of (your) xchg() to note/justify these memory
ordering requirements?  As Paul said: "if there are races, this would
help force them to happen" (and simplify the review, this/future).

  Andrea



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-04 Thread Paul E. McKenney
On Tue, Dec 04, 2018 at 12:23:08PM -0500, Jason Baron wrote:
> 
> 
> On 12/3/18 6:02 AM, Roman Penyaev wrote:
> > Hi all,
> > 
> > The goal of this patch is to reduce contention of ep_poll_callback() which
> > can be called concurrently from different CPUs in case of high events
> > rates and many fds per epoll.  Problem can be very well reproduced by
> > generating events (write to pipe or eventfd) from many threads, while
> > consumer thread does polling.  In other words this patch increases the
> > bandwidth of events which can be delivered from sources to the poller by
> > adding poll items in a lockless way to the list.
> > 
> > The main change is in replacement of the spinlock with a rwlock, which is
> > taken on read in ep_poll_callback(), and then by adding poll items to the
> > tail of the list using xchg atomic instruction.  Write lock is taken
> > everywhere else in order to stop list modifications and guarantee that list
> > updates are fully completed (I assume that write side of a rwlock does not
> > starve, it seems qrwlock implementation has these guarantees).
> > 
> > The following are some microbenchmark results based on the test [1] which
> > starts threads which generate N events each.  The test ends when all
> > events are successfully fetched by the poller thread:
> > 
> > spinlock
> > ========
> > 
> > threads   run time   events per ms
> > -------   --------   -------------
> >       8    13191ms         6064/ms
> >      16    30758ms         5201/ms
> >      32    44315ms         7220/ms
> > 
> > rwlock + xchg
> > =============
> > 
> > threads   run time   events per ms
> > -------   --------   -------------
> >       8     8581ms         9323/ms
> >      16    13800ms        11594/ms
> >      32    24167ms        13240/ms
> > 
> > According to the results bandwidth of delivered events is significantly
> > increased, thus execution time is reduced.
> > 
> > This is RFC because I did not run any benchmarks comparing current
> > qrwlock and spinlock implementations (4.19 kernel), although I did
> > not notice any epoll performance degradations in other benchmarks.
> > 
> > Also I'm not quite sure where to put very special lockless variant
> > of adding element to the list (list_add_tail_lockless() in this
> > patch).  Seems keeping it locally is safer.
> > 
> > [1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c
> > 
> > Signed-off-by: Roman Penyaev 
> > Cc: Alexander Viro 
> > Cc: "Paul E. McKenney" 
> > Cc: Linus Torvalds 
> > Cc: linux-fsde...@vger.kernel.org
> > Cc: linux-kernel@vger.kernel.org
> > ---
> >  fs/eventpoll.c | 107 +++--
> >  1 file changed, 69 insertions(+), 38 deletions(-)
> > 
> > diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> > index 42bbe6824b4b..89debda47aca 100644
> > --- a/fs/eventpoll.c
> > +++ b/fs/eventpoll.c
> > @@ -50,10 +50,10 @@
> >   *
> >   * 1) epmutex (mutex)
> >   * 2) ep->mtx (mutex)
> > - * 3) ep->wq.lock (spinlock)
> > + * 3) ep->lock (rwlock)
> >   *
> >   * The acquire order is the one listed above, from 1 to 3.
> > - * We need a spinlock (ep->wq.lock) because we manipulate objects
> > + * We need a rwlock (ep->lock) because we manipulate objects
> >   * from inside the poll callback, that might be triggered from
> >   * a wake_up() that in turn might be called from IRQ context.
> >   * So we can't sleep inside the poll callback and hence we need
> > @@ -85,7 +85,7 @@
> >   * of epoll file descriptors, we use the current recursion depth as
> >   * the lockdep subkey.
> >   * It is possible to drop the "ep->mtx" and to use the global
> > - * mutex "epmutex" (together with "ep->wq.lock") to have it working,
> > + * mutex "epmutex" (together with "ep->lock") to have it working,
> >   * but having "ep->mtx" will make the interface more scalable.
> >   * Events that require holding "epmutex" are very rare, while for
> >   * normal operations the epoll private "ep->mtx" will guarantee
> > @@ -182,8 +182,6 @@ struct epitem {
> >   * This structure is stored inside the "private_data" member of the file
> >   * structure and represents the main data structure for the eventpoll
> >   * interface.
> > - *
> > - * Access to it is protected by the lock inside wq.
> >   */
> >  struct eventpoll {
> > /*
> > @@ -203,13 +201,16 @@ struct eventpoll {
> > /* List of ready file descriptors */
> > struct list_head rdllist;
> >  
> > +   /* Lock which protects rdllist and ovflist */
> > +   rwlock_t lock;
> > +
> > /* RB tree root used to store monitored fd structs */
> > struct rb_root_cached rbr;
> >  
> > /*
> >  * This is a single linked list that chains all the "struct epitem" that
> >  * happened while transferring ready events to userspace w/out
> > -* holding ->wq.lock.
> > +* holding ->lock.
> >  */
> > struct epitem *ovflist;
> >  
> > @@ -697,17 +698,17 @@ static __poll_t 

Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-04 Thread Paul E. McKenney
On Tue, Dec 04, 2018 at 12:23:08PM -0500, Jason Baron wrote:
> 
> 
> On 12/3/18 6:02 AM, Roman Penyaev wrote:
> > Hi all,
> > 
> > The goal of this patch is to reduce contention of ep_poll_callback() which
> > can be called concurrently from different CPUs in case of high events
> > rates and many fds per epoll.  Problem can be very well reproduced by
> > generating events (write to pipe or eventfd) from many threads, while
> > consumer thread does polling.  In other words this patch increases the
> > bandwidth of events which can be delivered from sources to the poller by
> > adding poll items in a lockless way to the list.
> > 
> > The main change is in replacement of the spinlock with a rwlock, which is
> > taken on read in ep_poll_callback(), and then by adding poll items to the
> > tail of the list using xchg atomic instruction.  Write lock is taken
> > everywhere else in order to stop list modifications and guarantee that list
> > updates are fully completed (I assume that write side of a rwlock does not
> > starve, it seems qrwlock implementation has these guarantees).
> > 
> > The following are some microbenchmark results based on the test [1] which
> > starts threads which generate N events each.  The test ends when all
> > events are successfully fetched by the poller thread:
> > 
> > spinlock
> > ========
> > 
> > threads   run time   events per ms
> > -------   --------   -------------
> >       8    13191ms         6064/ms
> >      16    30758ms         5201/ms
> >      32    44315ms         7220/ms
> > 
> > rwlock + xchg
> > =============
> > 
> > threads   run time   events per ms
> > -------   --------   -------------
> >       8     8581ms         9323/ms
> >      16    13800ms        11594/ms
> >      32    24167ms        13240/ms
> > 
> > According to the results bandwidth of delivered events is significantly
> > increased, thus execution time is reduced.
> > 
> > This is RFC because I did not run any benchmarks comparing current
> > qrwlock and spinlock implementations (4.19 kernel), although I did
> > not notice any epoll performance degradations in other benchmarks.
> > 
> > Also I'm not quite sure where to put very special lockless variant
> > of adding element to the list (list_add_tail_lockless() in this
> > patch).  Seems keeping it locally is safer.
> > 
> > [1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c
> > 
> > Signed-off-by: Roman Penyaev 
> > Cc: Alexander Viro 
> > Cc: "Paul E. McKenney" 
> > Cc: Linus Torvalds 
> > Cc: linux-fsde...@vger.kernel.org
> > Cc: linux-kernel@vger.kernel.org
> > ---
> >  fs/eventpoll.c | 107 +++--
> >  1 file changed, 69 insertions(+), 38 deletions(-)
> > 
> > diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> > index 42bbe6824b4b..89debda47aca 100644
> > --- a/fs/eventpoll.c
> > +++ b/fs/eventpoll.c
> > @@ -50,10 +50,10 @@
> >   *
> >   * 1) epmutex (mutex)
> >   * 2) ep->mtx (mutex)
> > - * 3) ep->wq.lock (spinlock)
> > + * 3) ep->lock (rwlock)
> >   *
> >   * The acquire order is the one listed above, from 1 to 3.
> > - * We need a spinlock (ep->wq.lock) because we manipulate objects
> > + * We need a rwlock (ep->lock) because we manipulate objects
> >   * from inside the poll callback, that might be triggered from
> >   * a wake_up() that in turn might be called from IRQ context.
> >   * So we can't sleep inside the poll callback and hence we need
> > @@ -85,7 +85,7 @@
> >   * of epoll file descriptors, we use the current recursion depth as
> >   * the lockdep subkey.
> >   * It is possible to drop the "ep->mtx" and to use the global
> > - * mutex "epmutex" (together with "ep->wq.lock") to have it working,
> > + * mutex "epmutex" (together with "ep->lock") to have it working,
> >   * but having "ep->mtx" will make the interface more scalable.
> >   * Events that require holding "epmutex" are very rare, while for
> >   * normal operations the epoll private "ep->mtx" will guarantee
> > @@ -182,8 +182,6 @@ struct epitem {
> >   * This structure is stored inside the "private_data" member of the file
> >   * structure and represents the main data structure for the eventpoll
> >   * interface.
> > - *
> > - * Access to it is protected by the lock inside wq.
> >   */
> >  struct eventpoll {
> > /*
> > @@ -203,13 +201,16 @@ struct eventpoll {
> > /* List of ready file descriptors */
> > struct list_head rdllist;
> >  
> > +   /* Lock which protects rdllist and ovflist */
> > +   rwlock_t lock;
> > +
> > /* RB tree root used to store monitored fd structs */
> > struct rb_root_cached rbr;
> >  
> > /*
> >  * This is a single linked list that chains all the "struct epitem" that
> >  * happened while transferring ready events to userspace w/out
> > -* holding ->wq.lock.
> > +* holding ->lock.
> >  */
> > struct epitem *ovflist;
> >  
> > @@ -697,17 +698,17 @@ static __poll_t 

Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-04 Thread Jason Baron



On 12/3/18 6:02 AM, Roman Penyaev wrote:
> Hi all,
> 
> The goal of this patch is to reduce contention of ep_poll_callback() which
> can be called concurrently from different CPUs in case of high events
> rates and many fds per epoll.  Problem can be very well reproduced by
> generating events (write to pipe or eventfd) from many threads, while
> consumer thread does polling.  In other words this patch increases the
> bandwidth of events which can be delivered from sources to the poller by
> adding poll items in a lockless way to the list.
> 
> The main change is in replacement of the spinlock with a rwlock, which is
> taken on read in ep_poll_callback(), and then by adding poll items to the
> tail of the list using xchg atomic instruction.  Write lock is taken
> everywhere else in order to stop list modifications and guarantee that list
> updates are fully completed (I assume that write side of a rwlock does not
> starve, it seems qrwlock implementation has these guarantees).
> 
> The following are some microbenchmark results based on the test [1] which
> starts threads which generate N events each.  The test ends when all
> events are successfully fetched by the poller thread:
> 
> spinlock
> ========
> 
> threads   run time   events per ms
> -------   --------   -------------
>       8    13191ms         6064/ms
>      16    30758ms         5201/ms
>      32    44315ms         7220/ms
> 
> rwlock + xchg
> =============
> 
> threads   run time   events per ms
> -------   --------   -------------
>       8     8581ms         9323/ms
>      16    13800ms        11594/ms
>      32    24167ms        13240/ms
> 
> According to the results bandwidth of delivered events is significantly
> increased, thus execution time is reduced.
> 
> This is RFC because I did not run any benchmarks comparing current
> qrwlock and spinlock implementations (4.19 kernel), although I did
> not notice any epoll performance degradations in other benchmarks.
> 
> Also I'm not quite sure where to put very special lockless variant
> of adding element to the list (list_add_tail_lockless() in this
> patch).  Seems keeping it locally is safer.
> 
> [1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c
> 
> Signed-off-by: Roman Penyaev 
> Cc: Alexander Viro 
> Cc: "Paul E. McKenney" 
> Cc: Linus Torvalds 
> Cc: linux-fsde...@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  fs/eventpoll.c | 107 +++--
>  1 file changed, 69 insertions(+), 38 deletions(-)
> 
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 42bbe6824b4b..89debda47aca 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -50,10 +50,10 @@
>   *
>   * 1) epmutex (mutex)
>   * 2) ep->mtx (mutex)
> - * 3) ep->wq.lock (spinlock)
> + * 3) ep->lock (rwlock)
>   *
>   * The acquire order is the one listed above, from 1 to 3.
> - * We need a spinlock (ep->wq.lock) because we manipulate objects
> + * We need a rwlock (ep->lock) because we manipulate objects
>   * from inside the poll callback, that might be triggered from
>   * a wake_up() that in turn might be called from IRQ context.
>   * So we can't sleep inside the poll callback and hence we need
> @@ -85,7 +85,7 @@
>   * of epoll file descriptors, we use the current recursion depth as
>   * the lockdep subkey.
>   * It is possible to drop the "ep->mtx" and to use the global
> - * mutex "epmutex" (together with "ep->wq.lock") to have it working,
> + * mutex "epmutex" (together with "ep->lock") to have it working,
>   * but having "ep->mtx" will make the interface more scalable.
>   * Events that require holding "epmutex" are very rare, while for
>   * normal operations the epoll private "ep->mtx" will guarantee
> @@ -182,8 +182,6 @@ struct epitem {
>   * This structure is stored inside the "private_data" member of the file
>   * structure and represents the main data structure for the eventpoll
>   * interface.
> - *
> - * Access to it is protected by the lock inside wq.
>   */
>  struct eventpoll {
>   /*
> @@ -203,13 +201,16 @@ struct eventpoll {
>   /* List of ready file descriptors */
>   struct list_head rdllist;
>  
> + /* Lock which protects rdllist and ovflist */
> + rwlock_t lock;
> +
>   /* RB tree root used to store monitored fd structs */
>   struct rb_root_cached rbr;
>  
>   /*
>* This is a single linked list that chains all the "struct epitem" that
>* happened while transferring ready events to userspace w/out
> -  * holding ->wq.lock.
> +  * holding ->lock.
>*/
>   struct epitem *ovflist;
>  
> @@ -697,17 +698,17 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
>* because we want the "sproc" callback to be able to do it
>* in a lockless way.
>*/
> - spin_lock_irq(&ep->wq.lock);
> + write_lock_irq(&ep->lock);
>   list_splice_init(&ep->rdllist, &txlist);
>   ep->ovflist = NULL;
> - 

Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-04 Thread Roman Penyaev

On 2018-12-03 18:34, Linus Torvalds wrote:

On Mon, Dec 3, 2018 at 3:03 AM Roman Penyaev  wrote:


Also I'm not quite sure where to put very special lockless variant
of adding element to the list (list_add_tail_lockless() in this
patch).  Seems keeping it locally is safer.


That function is scary, and can be mis-used so easily that I
definitely don't want to see it anywhere else.

Afaik, it's *really* important that only "add_tail" operations can be
done in parallel.


True, adding elements only to the head or only to the tail can work in
parallel; any mix will corrupt the list.  I tried to reflect this in the
comment of list_add_tail_lockless(), although I'm not sure whether that
makes it clearer to a reader.



This also ends up making the memory ordering of "xchg()" very very
important. Yes, we've documented it as being an ordering op, but I'm
not sure we've relied on it this directly before.


Seems exit_mm() does exactly the same, the following chunk:

up_read(&mm->mmap_sem);

self.task = current;
self.next = xchg(&core_state->dumper.next, &self);


At least the code pattern looks similar.



I also note that now we do more/different locking in the waitqueue
handling, because the code now takes both that rwlock _and_ the
waitqueue spinlock for wakeup. That also makes me worried that the
"waitqueue_active()" games are no longer reliable. I think they're
fine (looks like they are only done under the write-lock, so it's
effectively the same serialization anyway),



The only difference in waking up is that the same epollitem waitqueue can
be observed as active from different CPUs; the real wake up happens only
once (wake_up() takes wq.lock, so it should be fine to call it multiple
times), but 1 is returned to all callers of ep_poll_callback() who have
seen the wq as active.

If the epollitem is created with the EPOLLEXCLUSIVE flag, then the 1
returned from ep_poll_callback() indicates "break the loop, an exclusive
wake up has happened" (the loop is in __wake_up_common).  But even in
this exclusive wake up case everything seems fine, because wake up events
are not lost and the epoll waiter will scan all ready fds and eventually
observe all of the callers (who returned 1 from ep_poll_callback()) as
ready.  I hope I did not miss anything.



but the upshot of all of
this is that I *really* want others to look at this patch too. A lot
of small subtle things here.


Would be great if someone can look through, eventpoll.c looks a
bit abandoned.

--
Roman



Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-03 Thread Linus Torvalds
On Mon, Dec 3, 2018 at 3:03 AM Roman Penyaev  wrote:
>
> Also I'm not quite sure where to put very special lockless variant
> of adding element to the list (list_add_tail_lockless() in this
> patch).  Seems keeping it locally is safer.

That function is scary, and can be mis-used so easily that I
definitely don't want to see it anywhere else.

Afaik, it's *really* important that only "add_tail" operations can be
done in parallel.

This also ends up making the memory ordering of "xchg()" very very
important. Yes, we've documented it as being an ordering op, but I'm
not sure we've relied on it this directly before.

I also note that now we do more/different locking in the waitqueue
handling, because the code now takes both that rwlock _and_ the
waitqueue spinlock for wakeup. That also makes me worried that the
"waitqueue_active()" games are no longer reliable. I think they're
fine (looks like they are only done under the write-lock, so it's
effectively the same serialization anyway), but the upshot of all of
this is that I *really* want others to look at this patch too. A lot
of small subtle things here.

Linus



[RFC PATCH 1/1] epoll: use rwlock in order to reduce ep_poll_callback() contention

2018-12-03 Thread Roman Penyaev
Hi all,

The goal of this patch is to reduce contention on ep_poll_callback(),
which can be called concurrently from different CPUs in case of high
event rates and many fds per epoll.  The problem can be reproduced well
by generating events (writes to a pipe or eventfd) from many threads
while the consumer thread does the polling.  In other words, this patch
increases the bandwidth of events which can be delivered from sources to
the poller by adding poll items to the list in a lockless way.

The main change is the replacement of the spinlock with an rwlock, which
is taken on read in ep_poll_callback(); poll items are then added to the
tail of the list using the xchg atomic instruction.  The write lock is
taken everywhere else in order to stop list modifications and guarantee
that list updates are fully completed (I assume that the write side of an
rwlock does not starve; it seems the qrwlock implementation has this
guarantee).

The following are some microbenchmark results based on the test [1],
which starts threads that generate N events each.  The test ends when all
events have been successfully fetched by the poller thread:

spinlock
========

threads   run time   events per ms
-------   --------   -------------
      8    13191ms         6064/ms
     16    30758ms         5201/ms
     32    44315ms         7220/ms

rwlock + xchg
=============

threads   run time   events per ms
-------   --------   -------------
      8     8581ms         9323/ms
     16    13800ms        11594/ms
     32    24167ms        13240/ms

According to the results, the bandwidth of delivered events is
significantly increased, and thus the execution time is reduced.

This is RFC because I did not run any benchmarks comparing current
qrwlock and spinlock implementations (4.19 kernel), although I did
not notice any epoll performance degradations in other benchmarks.

Also I'm not quite sure where to put the very special lockless variant
of adding an element to the list (list_add_tail_lockless() in this
patch).  Keeping it local seems safer.

[1] https://github.com/rouming/test-tools/blob/master/stress-epoll.c

Signed-off-by: Roman Penyaev 
Cc: Alexander Viro 
Cc: "Paul E. McKenney" 
Cc: Linus Torvalds 
Cc: linux-fsde...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 fs/eventpoll.c | 107 +++--
 1 file changed, 69 insertions(+), 38 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 42bbe6824b4b..89debda47aca 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -50,10 +50,10 @@
  *
  * 1) epmutex (mutex)
  * 2) ep->mtx (mutex)
- * 3) ep->wq.lock (spinlock)
+ * 3) ep->lock (rwlock)
  *
  * The acquire order is the one listed above, from 1 to 3.
- * We need a spinlock (ep->wq.lock) because we manipulate objects
+ * We need a rwlock (ep->lock) because we manipulate objects
  * from inside the poll callback, that might be triggered from
  * a wake_up() that in turn might be called from IRQ context.
  * So we can't sleep inside the poll callback and hence we need
@@ -85,7 +85,7 @@
  * of epoll file descriptors, we use the current recursion depth as
  * the lockdep subkey.
  * It is possible to drop the "ep->mtx" and to use the global
- * mutex "epmutex" (together with "ep->wq.lock") to have it working,
+ * mutex "epmutex" (together with "ep->lock") to have it working,
  * but having "ep->mtx" will make the interface more scalable.
  * Events that require holding "epmutex" are very rare, while for
  * normal operations the epoll private "ep->mtx" will guarantee
@@ -182,8 +182,6 @@ struct epitem {
  * This structure is stored inside the "private_data" member of the file
  * structure and represents the main data structure for the eventpoll
  * interface.
- *
- * Access to it is protected by the lock inside wq.
  */
 struct eventpoll {
/*
@@ -203,13 +201,16 @@ struct eventpoll {
/* List of ready file descriptors */
struct list_head rdllist;
 
+   /* Lock which protects rdllist and ovflist */
+   rwlock_t lock;
+
/* RB tree root used to store monitored fd structs */
struct rb_root_cached rbr;
 
/*
 * This is a single linked list that chains all the "struct epitem" that
 * happened while transferring ready events to userspace w/out
-* holding ->wq.lock.
+* holding ->lock.
 */
struct epitem *ovflist;
 
@@ -697,17 +698,17 @@ static __poll_t ep_scan_ready_list(struct eventpoll *ep,
 * because we want the "sproc" callback to be able to do it
 * in a lockless way.
 */
-   spin_lock_irq(&ep->wq.lock);
+   write_lock_irq(&ep->lock);
    list_splice_init(&ep->rdllist, &txlist);
    ep->ovflist = NULL;
-   spin_unlock_irq(&ep->wq.lock);
+   write_unlock_irq(&ep->lock);
 
/*
 * Now call the callback function.
 */
res = (*sproc)(ep, , priv);
 
-   spin_lock_irq(&ep->wq.lock);
+   write_lock_irq(&ep->lock);
/*
 * 
