On Sun, Nov 30, 2008 at 07:11:40PM +0200, Avi Kivity wrote:
> Andi Kleen wrote:
> >On Sun, Nov 30, 2008 at 06:38:14PM +0200, Avi Kivity wrote:
> >  
> >>The guest allocates when it touches the page for the first time.  This 
> >>means very little since all of memory may be touched during guest bootup 
> >>or shortly afterwards.  Even if not, it is still a one-time operation, 
> >>and any choices we make based on it will last the lifetime of the guest.
> >>    
> >
> >I was more thinking about some heuristics that checks when a page
> >is first mapped into user space. The only problem is that it is zeroed
> >through the direct mapping before, but perhaps there is a way around it. 
> >That's one of the rare cases when 32bit highmem actually makes things 
> >easier.
> >It might be also easier on some other OS than Linux who don't use
> >direct mapping that aggressively.
> >  
> 
> In the context of kvm, the mmap() calls happen before the guest ever 

The mmap call doesn't matter at all; what matters is when the
page is actually allocated.

> executes.  First access happens somewhat later, but still we cannot 
> count on the majority of accesses to come from the same cpu as the first 
> access.

It is a reasonable heuristic. It's just like the rather
successful default local allocation heuristic the native kernel uses.
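
To make that concrete, here is a rough userspace sketch of what the
first-touch local allocation policy does: the page backing an anonymous
mapping is allocated on the node of the CPU that first writes to it,
not at mmap() time. Purely illustrative, error handling omitted,
build with -lnuma:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <numaif.h>

int main(void)
{
	cpu_set_t set;
	char *p;
	int node = -1;

	/* Pin ourselves to CPU 0 so the first touch happens there. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	/* No physical page exists yet after mmap()... */
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* ...it is allocated, node locally, by this first write. */
	p[0] = 1;

	/* Ask the kernel which node the page ended up on. */
	get_mempolicy(&node, NULL, 0, p, MPOL_F_NODE | MPOL_F_ADDR);
	printf("page allocated on node %d\n", node);
	return 0;
}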

> >
> >The alternative is to keep your own pools and allocate from the
> >correct pool, but then you either need pinning or getcpu()
> >  
> 
> This is meaningless in kvm context.  Other than small bits of memory 
> needed for I/O and shadow page tables, the bulk of memory is allocated 
> once. 

Mapped once. Anyway, that could be changed too if there were a need.

> 
> >>We need to mimic real hardware.
> >>    
> >
> >The underlying allocation is in pages, so the NUMA affinity can 
> >be as well handled by this. 
> >
> >Basic algorithm:
> >- If guest touches virtual node that is the same as the local node
> >of the current vcpu assume it's a local allocation.
> >  
> 
> The guest is not making the same assumption; lying to the guest is 

Huh? Pretty much all NUMA-aware OSes should. Linux definitely will.


> (1) with npt/ept we have no clue as to guest mappings

Yes, that is tricky. In theory it could be made to work with EPT
using accessed bits, but there are none, and it would still
not work very well.

> (2) even without npt/ept, we have no idea how often mappings are used 
> and by which cpu.  finding out is expensive.

You see a fault on the first mapping. That fault is on the CPU that
did the access.  Therefore you know which one it was.
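
Something like this hypothetical sketch (the names are made up, this is
not existing kvm code): on the first shadow fault for a guest frame,
remember the node of the CPU the faulting vcpu was running on, and
prefer that node when allocating the backing page:

/* -1 = frame not touched yet; MAX_GUEST_FRAMES is a made-up bound. */
static int first_touch_node[MAX_GUEST_FRAMES];

static void record_first_touch_node(unsigned long gfn)
{
	if (first_touch_node[gfn] < 0)
		first_touch_node[gfn] = numa_node_id(); /* node of the faulting CPU */
}

static struct page *alloc_backing_page(unsigned long gfn)
{
	int nid = first_touch_node[gfn];

	/* Fall back to plain local allocation if we never saw a fault. */
	if (nid < 0)
		nid = numa_node_id();
	return alloc_pages_node(nid, GFP_HIGHUSER, 0);
}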

> (3) for many workloads, there are no unused pages.  the guest 
> application allocates all memory and manages memory by itself.

First, a common case of a guest using all memory is file cache,
but for NUMA purposes file cache locality typically doesn't
matter, because it's not accessed frequently enough for
non-locality to be a problem. It really only matters for mappings
that are used often by the CPU.

When a single application allocates everything and keeps it, that is fine
too, because you'll give it approximately local memory on the initial
setup (assuming the application has reasonable NUMA behaviour by itself
under a first-touch local allocation policy).

When there's lots of remapping or many new processes, one would probably
need some heuristics to detect reallocations, like the mapping heuristics
I described earlier, or PV help.
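
Continuing the hypothetical sketch from above: if the hypervisor can see
the guest tearing down a mapping (only possible with shadow paging or PV
help), it could treat the next fault on that frame as a fresh first
touch, so memory reused by a new process can move to the new user's
node. The hook name is invented for illustration:

static void on_guest_mapping_removed(unsigned long gfn)
{
	/* Re-arm first-touch detection for this guest frame. */
	first_touch_node[gfn] = -1;
}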

> Right.  The situation I'm trying to avoid is process A with memory on 
> node X running on node Y, and process B with memory on node Y running on 
> node X.  The scheduler arrives at a local optimum, caused by some 
> spurious load, and won't move to the global optimum because migrating 
> processes across cpus is considered expensive.
> 
> I don't know, perhaps the current scheduler is clever enough to do this 
> already.

It tries to, but there are always extreme cases where it doesn't work.
Also, once a process has been migrated, it won't find its way back to
its memory. Still, for an approximate dynamic solution, trusting it is
not the worst you can do.

-Andi