On Thu, May 12, 2016 at 12:05:37PM +1000, Michael Neuling wrote:
> On Wed, 2016-05-11 at 20:24 +0200, Peter Zijlstra wrote:
> > On Wed, May 11, 2016 at 02:33:45PM +0200, Peter Zijlstra wrote:
> > > 
> > > Hmm, PPC folks; what does your topology look like?
> > > 
> > > Currently your sched_domain_topology, as per arch/powerpc/kernel/smp.c,
> > > seems to suggest your cores do not share cache at all.
> > > 
> > > https://en.wikipedia.org/wiki/POWER7 seems to agree and states
> > > 
> > >   "4 MB L3 cache per C1 core"
> > > 
> > > And http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_underthehood.pdf
> > > also explicitly draws pictures with the L3 per core.
> > > 
> > > _however_, that same document describes L3 inter-core fill and lateral
> > > cast-out, which sounds like the L3s work together to form a node-wide
> > > caching system.
> > > 
> > > Do we want to model this co-operative L3 slices thing as a sort of
> > > node-wide LLC for the purpose of the scheduler?
> > 
> > Going back a generation; Power6 seems to have a shared L3 (off package)
> > between the two cores on the package. The current topology does not
> > reflect that at all.
> > 
> > And going forward a generation; Power8 seems to share the per-core
> > (chiplet) L3 amongst all cores (chiplets) + it has the centaur (memory
> > controller) 16M L4.
> 
> Yep, L1/L2/L3 is per core on POWER8 and POWER7. POWER6 and POWER5 (both
> dual core chips) had a shared off-chip cache.
But as per the above, Power7 and Power8 have explicit logic to share the
per-core L3 with the other cores. How effective is that? Some of the
slides/documents I've looked at suggest the L3s are connected by a
high-speed fabric, which would make the cross-core sharing fairly
efficient. In that case it would make sense to model the combined L3 as
a single large LLC covering all cores.
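As a rough illustration of what that modelling might look like: a sketch
(not a tested patch) of how arch/powerpc/kernel/smp.c's topology table
could grow a cache level whose span is the set of cores whose L3 slices
co-operate. cpu_l3_mask() and the CACHE level name are assumptions here,
not existing kernel symbols; the SMT and DIE entries follow the shape of
the existing powerpc_topology[] array.

```c
/*
 * Hypothetical sketch only, for arch/powerpc/kernel/smp.c.
 * cpu_l3_mask() is assumed: it would return the cpumask of all CPUs
 * whose L3 slices participate in inter-core fill / lateral cast-out.
 */
static int powerpc_shared_cache_flags(void)
{
	/* Tell the scheduler these CPUs effectively share an LLC. */
	return SD_SHARE_PKG_RESOURCES;
}

static struct sched_domain_topology_level powerpc_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
	/* New level: the co-operative L3 slices as one node-wide cache. */
	{ cpu_l3_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};
```

Whether SD_SHARE_PKG_RESOURCES at that span is actually a win would of
course depend on how efficient the cross-core L3 traffic really is.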

