On Tue, 2013-03-05 at 07:40 -0800, Linus Torvalds wrote:
> On Tue, Mar 5, 2013 at 1:35 AM, Davidlohr Bueso <davidlohr.bu...@hp.com> 
> wrote:
> >
> > The following set of patches are based on the discussion of holding the
> > ipc lock unnecessarily, such as for permissions and security checks:
> 
> Ok, looks fine from a quick look (but then, so did your previous patch-set ;)
> 
> You still open-code the spinlock in at least a few places (I saw
> sem_getref), but I still don't care deeply.
> 
> > 2) While on an Oracle swingbench DSS (data mining) workload the
> > improvements are not as exciting as with Rik's benchmark, we can see
> > some positive numbers. For an 8 socket machine the following are the
> > percentages of %sys time incurred in the ipc lock:
> 
> Ok, I hoped for it being more noticeable. Since that benchmark is less
> trivial than Rik's, can you do a perf record -fg of it and give a more
> complete picture of what the kernel footprint is - and in particular
> who now gets that ipc lock function? Is it purely semtimedop, or what?
> Look out for inlining - ipc_rcu_getref() looks like it would be
> inlined, for example.
> 
> It would be good to get a "top twenty kernel functions" from the
> profile, along with some call data on where the lock callers are.. I
> know that Rik's benchmark *only* had that one call-site, I'm wondering
> if the swingbench one has slightly more complex behavior...

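A profile of this kind can be collected with something along these lines
(illustrative invocation only, not necessarily the exact one used here):

    perf record -a -g -- sleep 60          # system-wide sampling with call graphs
    perf report --sort comm,dso,symbol     # flat "top functions" view, as below
    perf report -g graph                   # expanded call chains, as further down

Note that inlined helpers (e.g. ipc_rcu_getref()) won't show up as separate
symbols in such a profile; their samples are attributed to the caller.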
For a 400 user workload (the top kernel functions remain basically the same
for any number of users):

    17.86%   oracle  [kernel.kallsyms]   [k] _raw_spin_lock
     8.46%  swapper  [kernel.kallsyms]   [k] intel_idle
     5.51%   oracle  [kernel.kallsyms]   [k] try_atomic_semop
     5.05%   oracle  [kernel.kallsyms]   [k] update_sd_lb_stats
     2.81%   oracle  [kernel.kallsyms]   [k] tg_load_down
     2.41%  swapper  [kernel.kallsyms]   [k] update_blocked_averages
     2.38%   oracle  [kernel.kallsyms]   [k] idle_cpu
     2.37%  swapper  [kernel.kallsyms]   [k] native_write_msr_safe
     2.28%   oracle  [kernel.kallsyms]   [k] update_cfs_rq_blocked_load
     1.84%   oracle  [kernel.kallsyms]   [k] update_blocked_averages
     1.79%   oracle  [kernel.kallsyms]   [k] update_queue
     1.73%  swapper  [kernel.kallsyms]   [k] update_cfs_rq_blocked_load
     1.29%   oracle  [kernel.kallsyms]   [k] native_write_msr_safe
     1.07%     java  [kernel.kallsyms]   [k] update_sd_lb_stats
     0.91%  swapper  [kernel.kallsyms]   [k] poll_idle
     0.86%   oracle  [kernel.kallsyms]   [k] try_to_wake_up
     0.80%     java  [kernel.kallsyms]   [k] tg_load_down
     0.72%   oracle  [kernel.kallsyms]   [k] load_balance
     0.67%   oracle  [kernel.kallsyms]   [k] __schedule
     0.67%   oracle  [kernel.kallsyms]   [k] cpumask_next_and

Digging into the _raw_spin_lock call:

    17.86%   oracle  [kernel.kallsyms]   [k] _raw_spin_lock
             |
             --- _raw_spin_lock
                |
                |--49.55%-- sys_semtimedop
                |          |
                |          |--77.41%-- system_call
                |          |          semtimedop
                |          |          skgpwwait
                |          |          ksliwat
                |          |          kslwaitctx

Thanks,
Davidlohr

