On Thu, 2015-05-14 at 09:39 +0200, Vlastimil Babka wrote:
> On 05/14/2015 01:38 AM, Benjamin Herrenschmidt wrote:
> > On Wed, 2015-05-13 at 16:10 +0200, Vlastimil Babka wrote:
> >> Sorry for reviving oldish thread...
> >
> > Well, that's actually appreciated since this is constructive discussion
> > of the kind I was hoping to trigger initially :-) I'll look at
> 
> I hoped so :)
> 
> > ZONE_MOVABLE, I wasn't aware of its existence.
> >
> > Don't we still have the problem that ZONEs must be somewhat contiguous
> > chunks? I.e., my "CAPI memory" will be interleaved in the physical
> > address space somewhat. This is due to the address layout on some of
> > those systems, where you'll basically have something along the lines of:
> >
> > [ node 0 mem ] [ node 0 CAPI dev ] .... [ node 1 mem] [ node 1 CAPI dev] ...
> 
> Oh, I see. The VM code should cope with that, but some operations would 
> inefficiently loop over the holes in the CAPI zone one 2MB pageblock 
> at a time. This would include compaction scanning, which would suck if 
> you need those large contiguous allocations as you said. Interleaving 
> works better if it's done with a smaller granularity.
> 
> But I guess you could just represent the CAPI memory as multiple NUMA 
> nodes, each with a single ZONE_MOVABLE zone. Especially if "node 0 CAPI 
> dev" and "node 1 CAPI dev" differ in other characteristics than just 
> using a different range of PFNs... otherwise what's the point of this 
> split anyway?

Correct, I think we want the CAPI devs to look like CPU-less NUMA nodes
anyway. This is the right way to target an allocation at one of them, and
it conveys the distance properly, so it makes sense.
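
To make that a bit more concrete, here's a purely illustrative sketch of
what targeting one of those CPU-less nodes could look like from kernel
code, assuming the CAPI memory ends up in that node's ZONE_MOVABLE (the
capi_* names are made up, nothing like this exists in our tree yet):

/*
 * Illustrative only: allocate movable pages from a specific CPU-less
 * "CAPI" node.  GFP_HIGHUSER_MOVABLE makes the allocation eligible for
 * ZONE_MOVABLE, and __GFP_THISNODE keeps it from falling back to other
 * nodes.  capi_nid would come from whatever firmware description the
 * driver uses.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *capi_alloc_pages(int capi_nid, unsigned int order)
{
	return alloc_pages_node(capi_nid,
				GFP_HIGHUSER_MOVABLE | __GFP_THISNODE,
				order);
}

Userspace should get much the same effect by binding to that node with
set_mempolicy()/mbind(), since normal anonymous allocations already use
GFP_HIGHUSER_MOVABLE.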

I'll add ZONE_MOVABLE to the list of things to investigate on our side,
thanks for the pointer!
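
For the record, the route I'd expect to get there is the memory hotplug
one: hand the device's range to the VM with add_memory() and online the
resulting sections as movable. Very rough sketch below; the capi_* names
are invented and the exact add_memory() signature should be double-checked
against the current tree:

/*
 * Rough sketch, not from a real driver: register the device's memory
 * range as a hotplugged block on its own (CPU-less) node.  Once the
 * sections exist, writing "online_movable" to the corresponding
 * /sys/devices/system/memory/memoryN/state files (from the driver or
 * udev) puts the pages into that node's ZONE_MOVABLE.
 * Requires CONFIG_MEMORY_HOTPLUG.
 */
#include <linux/memory_hotplug.h>

static int capi_add_device_memory(int capi_nid, u64 capi_base, u64 capi_size)
{
	/* nid/base/size assumed to come from the firmware description */
	return add_memory(capi_nid, capi_base, capi_size);
}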

Cheers,
Ben.


