On Tue,  1 Oct 2013 17:08:14 +0000 (GMT) Jason Baron <jba...@akamai.com> wrote:

> When calling EPOLL_CTL_ADD for an epoll file descriptor that is attached
> directly to a wakeup source, we do not need to take the global 'epmutex',
> unless the epoll file descriptor is nested. The purpose of taking
> the 'epmutex' on add is to prevent complex topologies such as loops and
> deep wakeup paths from forming in parallel through multiple EPOLL_CTL_ADD
> operations. However, for the simple case of an epoll file descriptor
> attached directly to a wakeup source (with no nesting), we do not need
> to hold the 'epmutex'.
> 
> This patch along with 'epoll: optimize EPOLL_CTL_DEL using rcu' improves
> scalability on larger systems. Quoting Nathan Zimmer's mail on SPECjbb
> performance:
> 
> "
> On the 16 socket run the performance went from 35k jOPS to 125k jOPS.
> In addition the benchmark went from scaling well on 10 sockets to scaling well
> on just over 40 sockets.
> 
> ...
> 
> Currently the benchmark stops scaling at around 40-44 sockets but it seems
> like I found a second unrelated bottleneck.
> "

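As an aside, to make the quoted topology distinction concrete, here is a
minimal userspace sketch (illustrative only, not part of the patch; the fd
names are invented and error checking is omitted).  The add into ep_simple
is the "simple", non-nested case that can now skip 'epmutex'; the add of
ep_simple into ep_outer is the nested case that still takes it:

#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
	int pipefd[2];
	struct epoll_event ev = { .events = EPOLLIN };

	pipe(pipefd);

	/* Simple topology: the epoll fd watches a wakeup source directly. */
	int ep_simple = epoll_create1(0);
	ev.data.fd = pipefd[0];
	epoll_ctl(ep_simple, EPOLL_CTL_ADD, pipefd[0], &ev);

	/* Nested topology: one epoll fd added to another epoll fd; loops
	 * and deep wakeup paths can only form this way. */
	int ep_outer = epoll_create1(0);
	ev.data.fd = ep_simple;
	epoll_ctl(ep_outer, EPOLL_CTL_ADD, ep_simple, &ev);

	close(ep_outer);
	close(ep_simple);
	close(pipefd[0]);
	close(pipefd[1]);
	return 0;
}
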
I couldn't resist fiddling.  Please review:

From: Andrew Morton <a...@linux-foundation.org>
Subject: epoll-do-not-take-global-epmutex-for-simple-topologies-fix

- use `bool' for boolean variables
- remove unneeded/undesirable cast of void*
- add missing ep_scan_ready_list() kerneldoc for the new @ep_locked argument

Cc: "Paul E. McKenney" <paul...@us.ibm.com>
Cc: Al Viro <v...@zeniv.linux.org.uk>
Cc: Davide Libenzi <davi...@xmailserver.org>
Cc: Eric Wong <normalper...@yhbt.net>
Cc: Jason Baron <jba...@akamai.com>
Cc: Nathan Zimmer <nzim...@sgi.com>
Cc: Nelson Elhage <nelh...@nelhage.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
---

 fs/eventpoll.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff -puN fs/eventpoll.c~epoll-do-not-take-global-epmutex-for-simple-topologies-fix fs/eventpoll.c
--- a/fs/eventpoll.c~epoll-do-not-take-global-epmutex-for-simple-topologies-fix
+++ a/fs/eventpoll.c
@@ -589,13 +589,14 @@ static inline void ep_pm_stay_awake_rcu(
  * @sproc: Pointer to the scan callback.
  * @priv: Private opaque data passed to the @sproc callback.
  * @depth: The current depth of recursive f_op->poll calls.
+ * @ep_locked: caller already holds ep->mtx
  *
  * Returns: The same integer error code returned by the @sproc callback.
  */
 static int ep_scan_ready_list(struct eventpoll *ep,
                              int (*sproc)(struct eventpoll *,
                                           struct list_head *, void *),
-                             void *priv, int depth, int ep_locked)
+                             void *priv, int depth, bool ep_locked)
 {
        int error, pwake = 0;
        unsigned long flags;
@@ -836,12 +837,12 @@ static void ep_ptable_queue_proc(struct
 
 struct readyevents_arg {
        struct eventpoll *ep;
-       int locked;
+       bool locked;
 };
 
 static int ep_poll_readyevents_proc(void *priv, void *cookie, int call_nests)
 {
-       struct readyevents_arg *arg = (struct readyevents_arg *)priv;
+       struct readyevents_arg *arg = priv;
 
        return ep_scan_ready_list(arg->ep, ep_read_events_proc, NULL,
                                  call_nests + 1, arg->locked);
@@ -857,7 +858,7 @@ static unsigned int ep_eventpoll_poll(st
         * During ep_insert() we already hold the ep->mtx for the tfile.
         * Prevent re-aquisition.
         */
-       arg.locked = ((wait && (wait->_qproc == ep_ptable_queue_proc)) ? 1 : 0);
+       arg.locked = wait && (wait->_qproc == ep_ptable_queue_proc);
        arg.ep = ep;
 
        /* Insert inside our poll wait queue */
@@ -1563,7 +1564,7 @@ static int ep_send_events(struct eventpo
        esed.maxevents = maxevents;
        esed.events = events;
 
-       return ep_scan_ready_list(ep, ep_send_events_proc, &esed, 0, 0);
+       return ep_scan_ready_list(ep, ep_send_events_proc, &esed, 0, false);
 }
 
 static inline struct timespec ep_set_mstimeout(long ms)
_
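
For reference on the first two cleanups above: a void * converts implicitly
to any object pointer type in C, so the cast in ep_poll_readyevents_proc()
was redundant, and the && expression already evaluates to 0 or 1, so the old
ternary added nothing.  A standalone illustration (made-up names, not kernel
code):

#include <stdbool.h>
#include <stdio.h>

struct demo_arg {
	bool locked;
};

/* Mirrors the two cleanups: no cast when assigning from a void *, and a
 * logical expression used directly as the boolean value. */
static void demo_set(void *priv, const void *qproc, const void *expected)
{
	struct demo_arg *arg = priv;			/* implicit conversion, no cast */

	arg->locked = qproc && (qproc == expected);	/* already yields 0 or 1 */
}

int main(void)
{
	struct demo_arg arg = { .locked = false };
	int token;

	demo_set(&arg, &token, &token);
	printf("locked=%d\n", arg.locked);		/* prints locked=1 */
	return 0;
}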
