On Wed, Apr 11, 2012 at 02:18:14PM -0400, Richard Lowe wrote:
> > Different application algorithms show different high-runners but the 
> > high-runner locks are usually
> > not called very often but are held for an abnormally long time.  For some
> > algorithms the high-runners are in libc (e.g. malloc) and OpenMP but in some
> > others it is my application's explicit locks.
> >
> 
> Based on something Hans mentioned, something approaching:
> 
> cputrack -c PAPI_l1_icm,PAPI_l2_ich,PAPI_l2_icm <your application>
> 
> Could be useful in comparison to a system on which this doesn't occur
> (a synthetic test case would be better, I'm not sure these counters
> would be so useful under the real workload).
> 
> There's also the DTrace cpc provider, but I can't currently come up
> with a good way to make use of it for what we'd want to know.
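
Regarding the cpc provider: off the top of my head, something along these
lines might at least show which user-level modules are taking the L1I
misses (an untested sketch, and whether the PAPI_l1_icm event is available
depends on the CPU; check what cpustat -h lists on your machines):

  dtrace -n 'cpc:::PAPI_l1_icm-user-10000 { @[umod(arg1)] = count(); }' \
      -c ./your_application

The probe fires once every 10000 user-mode L1 I-cache misses, and arg1 is
the user PC at that point, so the aggregation gives a rough picture of
where the misses land.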

FYI, there is a white paper about the L1I cache aliasing issue on family
0x15 here:

http://developer.amd.com/Assets/SharedL1InstructionCacheonAMD15hCPU.pdf

I don't know whether this applies to Illumos or this particular problem
at all, but it was the first thing I remembered when I read "slow" and
"Opteron" :)

There was also a lock performance problem that affected older AMD CPU
families, but it was specific to the mutex implementation in glibc: the
mutexes were spinning for too short a time, and increasing that time
massively reduced the contention seen.

Although our default spin count for adaptive mutexes seems to be 10 times
what glibc used, a higher value, as described in
http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libc/port/threads/synch.c#88
could be worth a try.
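
If I remember correctly, you don't even need to rebuild libc to experiment
with that: the thread tunables are read from the environment at startup
(there is a comment block describing them in libc/port/threads/thr.c), so
something like

  _THREAD_ADAPTIVE_SPIN=10000 ./your_application

should bump the spin count from the default of 1000. The variable name is
from memory, so please double-check it against thr.c before relying on it.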


Hans


-- 
%SYSTEM-F-ANARCHISM, The operating system has been overthrown

