Author: jimharris
Date: Wed Oct 24 18:36:41 2012
New Revision: 242014
URL: http://svn.freebsd.org/changeset/base/242014

Log:
  Pad tdq_lock to avoid false sharing with tdq_load and tdq_cpu_idle.
  
  This enables CPU searches (which read tdq_load) to operate independently
  of any contention on the spinlock.  Some scheduler-intensive workloads
  running on an 8-core single-socket Sandy Bridge (SNB) Xeon show considerable
  improvement with this change (2-3% performance improvement, 5-6% decrease in
  CPU utilization).
  
  Sponsored by: Intel
  Reviewed by:  jeff

Modified:
  head/sys/kern/sched_ule.c

Modified: head/sys/kern/sched_ule.c
==============================================================================
--- head/sys/kern/sched_ule.c   Wed Oct 24 18:33:44 2012        (r242013)
+++ head/sys/kern/sched_ule.c   Wed Oct 24 18:36:41 2012        (r242014)
@@ -223,8 +223,13 @@ static int sched_idlespinthresh = -1;
  * locking in sched_pickcpu();
  */
 struct tdq {
-       /* Ordered to improve efficiency of cpu_search() and switch(). */
+       /* 
+        * Ordered to improve efficiency of cpu_search() and switch().
+        * tdq_lock is padded to avoid false sharing with tdq_load and
+        * tdq_cpu_idle.
+        */
        struct mtx      tdq_lock;               /* run queue lock. */
+       char            pad[64 - sizeof(struct mtx)];
        struct cpu_group *tdq_cg;               /* Pointer to cpu topology. */
        volatile int    tdq_load;               /* Aggregate load. */
        volatile int    tdq_cpu_idle;           /* cpu_idle() is active. */
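
The layout idea in the diff can be shown in isolation.  The sketch below is a
minimal, self-contained userland illustration of the same padding pattern,
not the committed kernel code: the names (toy_lock, padded_queue), the toy
spinlock type, and the fixed 64-byte line size are assumptions chosen for the
example.

#include <stdatomic.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64              /* assumed line size, as in the diff */

/* Toy spinlock standing in for struct mtx in this sketch. */
struct toy_lock {
        atomic_flag     locked;
};

/*
 * Same idea as the padded tdq: the pad pushes the hot, read-mostly fields
 * onto a cache line the lock never occupies, so remote CPUs polling "load"
 * do not bounce the line being written by lock contention.  The _Alignas on
 * the first member keeps the whole struct cache-line aligned, which the pad
 * needs in order to actually separate the lines.
 */
struct padded_queue {
        _Alignas(CACHE_LINE_SIZE)
        struct toy_lock lock;           /* cf. tdq_lock */
        char            pad[CACHE_LINE_SIZE - sizeof(struct toy_lock)];
        volatile int    load;           /* cf. tdq_load */
        volatile int    cpu_idle;       /* cf. tdq_cpu_idle */
};

/* Compile-time check that "load" starts past the lock's cache line. */
_Static_assert(offsetof(struct padded_queue, load) >= CACHE_LINE_SIZE,
    "load must not share a cache line with the lock");

The static assertion documents what the pad is for: the analogue of tdq_load
must land on a different cache line than the lock, so that searches reading
the load never contend with CPUs spinning on or releasing the lock.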