* Chuck Ebbert <[email protected]> wrote:
> On Tue, 16 Sep 2014 08:44:03 +0200
> Ingo Molnar <[email protected]> wrote:
>
> >
> > * Chuck Ebbert <[email protected]> wrote:
> >
> > > On Tue, 16 Sep 2014 05:29:20 +0200
> > > Peter Zijlstra <[email protected]> wrote:
> > >
> > > > On Mon, Sep 15, 2014 at 03:26:41PM -0700, Dave Hansen wrote:
> > > > >
> > > > > I'm getting the spew below when booting with Haswell (Xeon
> > > > > E5-2699) CPUs and the "Cluster-on-Die" (CoD) feature
> > > > > enabled in the BIOS.
> > > >
> > > > What is that cluster-on-die thing? I've heard it before but
> > > > never could find anything on it.
> > >
> > > Each core has a 2.5MB slice of L3, and the slices are
> > > connected in a ring so that they act like a single shared
> > > cache. The HW tries to place data closest to the core that
> > > uses it. On the larger processors there are two rings with
> > > an interconnect between them, which adds latency whenever a
> > > cache fetch has to cross it. CoD breaks that connection and
> > > effectively gives you two nodes on one die.
> >
> > Note that that's not really a 'NUMA node' in the way lots of
> > places in the kernel assume it: permanent placement asymmetry
> > (and access cost asymmetry) of RAM.
> >
> > It's a new topology construct that needs new handling (and
> > probably a new mask): Non-Uniform Cache Architecture (NUCA)
> > or so.
>
> Hmm, looking closer at the diagram, each ring has its own
> memory controller, so it really is NUMA if you break the
> interconnect between the caches.
Fair enough, I only went by the description.
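
( Side note, and just an untested userspace sketch from me, not
  anything that exists in the tree: one way to double-check what
  the firmware did is to compare each CPU's L3 shared_cpu_list
  with the per-node cpulist in sysfs. With CoD enabled, each
  cluster's L3 mask should match its node's mask instead of
  spanning the whole socket: )

#include <stdio.h>
#include <string.h>

/* Read the first line of a sysfs file, strip the newline. */
static int read_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}

int main(void)
{
	char path[128], list[256];
	int cpu, node;

	/* index3 is the L3 on these parts; stop at the first hole. */
	for (cpu = 0; cpu < 4096; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list",
			 cpu);
		if (read_line(path, list, sizeof(list)))
			break;
		printf("cpu%d: L3 shared with CPUs %s\n", cpu, list);
	}

	for (node = 0; node < 64; node++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/cpulist", node);
		if (read_line(path, list, sizeof(list)))
			break;
		printf("node%d: CPUs %s\n", node, list);
	}

	return 0;
}

If each node's cpulist matches the L3 mask of exactly one cluster
then the two-nodes-on-one-die view is at least self-consistent.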
Thanks,
Ingo