Re: [PATCH -mm 0/3] fix numa vs kvm scalability issue

2014-02-19 Thread Andrew Morton
On Wed, 19 Feb 2014 09:59:17 +0100 Peter Zijlstra <pet...@infradead.org> wrote:

> On Tue, Feb 18, 2014 at 05:12:43PM -0500, r...@redhat.com wrote:
> > The NUMA scanning code can end up iterating over many gigabytes
> > of unpopulated memory, especially in the case of a freshly started
> > KVM guest with lots of memory.
> > 
> > This results in the mmu notifier code being called even when
> > there are no mapped pages in a virtual address range. The amount
> > of time wasted can be enough to trigger soft lockup warnings
> > with very large (>2TB) KVM guests.
> > 
> > This patch moves the mmu notifier call to the pmd level, which
> > represents 1GB areas of memory on x86-64. Furthermore, the mmu
> > notifier code is only called from the address in the PMD where
> > present mappings are first encountered.
> > 
> > The hugetlbfs code is left alone for now; hugetlb mappings are
> > not relocatable, and as such are left alone by the NUMA code,
> > and should never trigger this problem to begin with.
> > 
> > The series also adds a cond_resched to task_numa_work, to
> > fix another potential latency issue.
> 
> Andrew, I'll pick up the first kernel/sched/ patch; do you want the
> other two mm/ patches?

That works, thanks.
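
For reference, the main change described in the quoted cover letter above -- calling the
mmu notifier only once per pmd-covered range, and only from the first address where a
present mapping is found -- could look roughly like the sketch below. This is a simplified
illustration of the idea, not the actual patch: change_pte_range() and the other helper
names follow the 2014-era mm/mprotect.c API, and THP/hugetlbfs handling is omitted.

/*
 * Illustrative sketch only: defer mmu_notifier_invalidate_range_start()
 * until the first populated pmd in the range, and skip the notifier
 * entirely when the whole range is unpopulated.
 */
static unsigned long change_pmd_range_sketch(struct vm_area_struct *vma,
		pud_t *pud, unsigned long addr, unsigned long end,
		pgprot_t newprot, int dirty_accountable, int prot_numa)
{
	struct mm_struct *mm = vma->vm_mm;
	unsigned long mni_start = 0;	/* 0 == notifier not started yet */
	unsigned long pages = 0;
	unsigned long next;
	pmd_t *pmd;

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);

		/* Nothing mapped in this pmd: nothing to notify about. */
		if (pmd_none_or_clear_bad(pmd))
			continue;

		/* First present mapping: start the invalidation here. */
		if (!mni_start) {
			mni_start = addr;
			mmu_notifier_invalidate_range_start(mm, mni_start, end);
		}

		pages += change_pte_range(vma, pmd, addr, next, newprot,
					  dirty_accountable, prot_numa);
	} while (pmd++, addr = next, addr != end);

	/* Only end what was actually started. */
	if (mni_start)
		mmu_notifier_invalidate_range_end(mm, mni_start, end);

	return pages;
}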


Re: [PATCH -mm 0/3] fix numa vs kvm scalability issue

2014-02-19 Thread Peter Zijlstra
On Tue, Feb 18, 2014 at 05:12:43PM -0500, r...@redhat.com wrote:
> The NUMA scanning code can end up iterating over many gigabytes
> of unpopulated memory, especially in the case of a freshly started
> KVM guest with lots of memory.
> 
> This results in the mmu notifier code being called even when
> there are no mapped pages in a virtual address range. The amount
> of time wasted can be enough to trigger soft lockup warnings
> with very large (>2TB) KVM guests.
> 
> This patch moves the mmu notifier call to the pmd level, which
> represents 1GB areas of memory on x86-64. Furthermore, the mmu
> notifier code is only called from the address in the PMD where
> present mappings are first encountered.
> 
> The hugetlbfs code is left alone for now; hugetlb mappings are
> not relocatable, and as such are left alone by the NUMA code,
> and should never trigger this problem to begin with.
> 
> The series also adds a cond_resched to task_numa_work, to
> fix another potential latency issue.

Andrew, I'll pick up the first kernel/sched/ patch; do you want the
other two mm/ patches?
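
The smaller latency fix mentioned at the end of the cover letter -- a cond_resched() in
task_numa_work() -- amounts to yielding the CPU inside the VMA walk, so that a very long
scan over a huge address space cannot monopolize a CPU for long enough to trigger soft
lockup warnings. A heavily simplified sketch of that shape follows; it is not the actual
kernel/sched/fair.c code, and the locking, scan-period bookkeeping and migratability
checks are reduced to the bare minimum.

/*
 * Illustrative sketch only: walk the task's VMAs and mark them for NUMA
 * hinting faults, yielding between VMAs so the walk stays preemptible.
 */
static void task_numa_work_sketch(struct task_struct *p)
{
	struct mm_struct *mm = p->mm;
	struct vm_area_struct *vma;

	down_read(&mm->mmap_sem);
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		cond_resched();		/* the added call: avoid soft lockups */

		if (!vma_migratable(vma))
			continue;

		change_prot_numa(vma, vma->vm_start, vma->vm_end);
	}
	up_read(&mm->mmap_sem);
}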