Re: [Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-21 Thread Wei Liu
On Tue, Sep 20, 2016 at 06:39:32PM -0700, Dongli Zhang wrote:
> > > This patch implements part of the TODO left in commit
> > > a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
> > > filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
> > > populate_physmap. Because of the TLB flush in alloc_heap_pages, it is
> > > very slow to create a guest with more than 100GB of memory on a host
> > > with 100+ CPUs.
> > >
> > >
> > Do you have some actual numbers on how much faster it is after applying
> > this patch?
> > 
> > This is mostly for writing release notes etc., so it is fine if you
> > don't have the numbers at hand.
> 
> I do not have data for the upstream version at hand right now. Before
> sending out the patchset for review, I always tested its performance by
> backporting it to Oracle VM, which is based on a very old Xen version.
> The backport just involves: (1) copying and pasting the code,
> (2) changing bool to bool_t, and (3) changing true/false to 1/0.
> 
> The test machine has 8 nodes, 2048GB of memory and 128 CPUs. With this
> patchset applied, the time to re-create a VM (with 135GB of memory and
> 12 vcpus) is reduced from 5 minutes to 20 seconds.

Thanks, this is useful information.

Wei.



Re: [Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-20 Thread Dongli Zhang
> > This patch implements part of the TODO left in commit
> > a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
> > filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
> > populate_physmap. Because of the TLB flush in alloc_heap_pages, it is
> > very slow to create a guest with more than 100GB of memory on a host
> > with 100+ CPUs.
> >
> >
> Do you have some actual numbers on how much faster it is after applying
> this patch?
> 
> This is mostly for writing release notes etc., so it is fine if you
> don't have the numbers at hand.

I do not have data for the upstream version at hand right now. Before
sending out the patchset for review, I always tested its performance by
backporting it to Oracle VM, which is based on a very old Xen version.
The backport just involves: (1) copying and pasting the code, (2) changing
bool to bool_t, and (3) changing true/false to 1/0.

The test machine has 8 nodes, 2048GB of memory and 128 CPUs. With this
patchset applied, the time to re-create a VM (with 135GB of memory and
12 vcpus) is reduced from 5 minutes to 20 seconds.
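
A minimal, self-contained sketch of the mechanical conversion described
above (illustrative only, not code from the patchset; the helper
set_flush_pending() is made up for the example). Older Xen trees only have
bool_t with 0/1, while the upstream patch uses C99 bool with true/false;
the backport does not change the logic.

/* backport_sketch.c -- build with: cc -o backport_sketch backport_sketch.c */
#include <stdio.h>

typedef unsigned char bool_t;  /* stand-in; old Xen trees define bool_t themselves */

/*
 * Upstream (v6 patch) form:
 *     bool need_tlbflush = false;
 *     ...
 *     need_tlbflush = true;
 *
 * Backported form: bool -> bool_t, false -> 0, true -> 1.
 */
static bool_t need_tlbflush = 0;

static void set_flush_pending(void)    /* made-up helper, for illustration */
{
    need_tlbflush = 1;
}

int main(void)
{
    set_flush_pending();
    printf("need_tlbflush = %u\n", (unsigned)need_tlbflush);
    return 0;
}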



Re: [Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-20 Thread Wei Liu
On Tue, Sep 20, 2016 at 10:31:04AM +0800, Dongli Zhang wrote:
> This patch implements part of the TODO left in commit
> a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
> filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
> populate_physmap. Because of the TLB flush in alloc_heap_pages, it is
> very slow to create a guest with more than 100GB of memory on a host
> with 100+ CPUs.
> 

Do you have some actual numbers on how much faster it is after applying
this patch?

This is mostly for writing release notes etc., so it is fine if you don't
have the numbers at hand.

Wei.



Re: [Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-20 Thread Dario Faggioli
On Tue, 2016-09-20 at 12:20 +0100, George Dunlap wrote:
> On 20/09/16 03:31, Dongli Zhang wrote:
> > 
> > This patch implements part of the TODO left in commit
> > a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
> > filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
> > populate_physmap. Because of the TLB flush in alloc_heap_pages, it is
> > very slow to create a guest with more than 100GB of memory on a host
> > with 100+ CPUs.
> > 
> > This patch introduces a "MEMF_no_tlbflush" bit in memflags to indicate
> > whether the TLB flush should be done in alloc_heap_pages or in its
> > caller populate_physmap.  Once this bit is set in memflags,
> > alloc_heap_pages will skip the TLB flush. Using this bit after the VM
> > has been created could lead to a security issue: it would make pages
> > accessible to guest B while guest A may still have a cached mapping to
> > them.
> > 
> > Therefore, this patch also introduces a "creation_finished" field in
> > struct domain to indicate whether the domain has ever been unpaused by
> > the hypervisor. MEMF_no_tlbflush can be set only during the VM creation
> > phase, while creation_finished is still false, i.e. before the domain
> > is unpaused for the first time.
> > 
> > Signed-off-by: Dongli Zhang 
> 
> Acked-by: George Dunlap 
> 
FWIW, and if I'm still in time:

Reviewed-by: Dario Faggioli 

Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



Re: [Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-20 Thread George Dunlap
On 20/09/16 03:31, Dongli Zhang wrote:
> This patch implements part of the TODO left in commit
> a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
> filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
> populate_physmap. Because of the TLB flush in alloc_heap_pages, it is
> very slow to create a guest with more than 100GB of memory on a host
> with 100+ CPUs.
> 
> This patch introduces a "MEMF_no_tlbflush" bit in memflags to indicate
> whether the TLB flush should be done in alloc_heap_pages or in its caller
> populate_physmap.  Once this bit is set in memflags, alloc_heap_pages
> will skip the TLB flush. Using this bit after the VM has been created
> could lead to a security issue: it would make pages accessible to guest B
> while guest A may still have a cached mapping to them.
> 
> Therefore, this patch also introduces a "creation_finished" field in
> struct domain to indicate whether the domain has ever been unpaused by
> the hypervisor. MEMF_no_tlbflush can be set only during the VM creation
> phase, while creation_finished is still false, i.e. before the domain is
> unpaused for the first time.
> 
> Signed-off-by: Dongli Zhang 

Acked-by: George Dunlap 

> ---
> Changed since v5:
>   * Remove conditional check before "d->creation_finished = true;".
>   * Place "bool creation_finished;" next to the other group of booleans.
>   * Remove duplicate "only" in comments.
> 
> Changed since v4:
>   * Rename is_ever_unpaused to creation_finished.
>   * Change bool_t to bool.
>   * Polish comments.
> 
> Changed since v3:
>   * Set the flag to true in domain_unpause_by_systemcontroller when
> unpausing the guest domain for the first time.
>   * Use true/false for all bool_t variables.
>   * Add unlikely to optimize "if statement".
>   * Correct comment style.
> 
> Changed since v2:
>   * Limit this optimization to domain creation time.
> 
> ---
>  xen/common/domain.c |  7 +++
>  xen/common/memory.c | 22 ++
>  xen/common/page_alloc.c |  4 +++-
>  xen/include/xen/mm.h|  2 ++
>  xen/include/xen/sched.h |  6 ++
>  5 files changed, 40 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index a8804e4..3abaca9 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -1004,6 +1004,13 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>  {
>  int old, new, prev = d->controller_pause_count;
>  
> +/*
> + * We record this information here for populate_physmap to figure out
> + * that the domain has finished being created. In fact, we're only
> + * allowed to set the MEMF_no_tlbflush flag during VM creation.
> + */
> +d->creation_finished = true;
> +
>  do
>  {
>  old = prev;
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index cc0f69e..21797ca 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -141,6 +141,8 @@ static void populate_physmap(struct memop_args *a)
>  unsigned int i, j;
>  xen_pfn_t gpfn, mfn;
>  struct domain *d = a->domain, *curr_d = current->domain;
> +bool need_tlbflush = false;
> +uint32_t tlbflush_timestamp = 0;
>  
>  if ( !guest_handle_subrange_okay(a->extent_list, a->nr_done,
>   a->nr_extents-1) )
> @@ -150,6 +152,17 @@ static void populate_physmap(struct memop_args *a)
>  max_order(curr_d)) )
>  return;
>  
> +/*
> + * With MEMF_no_tlbflush set, alloc_heap_pages() will ignore
> + * TLB-flushes. After VM creation, this is a security issue (it can
> + * make pages accessible to guest B, when guest A may still have a
> + * cached mapping to them). So we do this only during domain creation,
> + * when the domain itself has not yet been unpaused for the first
> + * time.
> + */
> +if ( unlikely(!d->creation_finished) )
> +a->memflags |= MEMF_no_tlbflush;
> +
>  for ( i = a->nr_done; i < a->nr_extents; i++ )
>  {
>  if ( i != a->nr_done && hypercall_preempt_check() )
> @@ -214,6 +227,13 @@ static void populate_physmap(struct memop_args *a)
>  goto out;
>  }
>  
> +if ( unlikely(a->memflags & MEMF_no_tlbflush) )
> +{
> +for ( j = 0; j < (1U << a->extent_order); j++ )
> +accumulate_tlbflush(&need_tlbflush, &pg[j],
> +                    &tlbflush_timestamp);
> +}
> +
>  mfn = page_to_mfn(page);
>  }
>  
> @@ -232,6 +252,8 @@ static void populate_physmap(struct memop_args *a)
>  }
>  
>  out:
> +if ( need_tlbflush )
> +filtered_flush_tlb_mask(tlbflush_timestamp);
>  a->nr_done = i;
>  }
>  
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index d7ca3a0..ae2476d 100644
> --- a/xen/common/page_alloc.c
> +++ 

Re: [Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-20 Thread Jan Beulich
>>> On 20.09.16 at 04:31,  wrote:
> This patch implements part of the TODO left in commit
> a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
> filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
> populate_physmap. Because of the TLB flush in alloc_heap_pages, it is
> very slow to create a guest with more than 100GB of memory on a host
> with 100+ CPUs.
> 
> This patch introduces a "MEMF_no_tlbflush" bit in memflags to indicate
> whether the TLB flush should be done in alloc_heap_pages or in its caller
> populate_physmap.  Once this bit is set in memflags, alloc_heap_pages
> will skip the TLB flush. Using this bit after the VM has been created
> could lead to a security issue: it would make pages accessible to guest B
> while guest A may still have a cached mapping to them.
> 
> Therefore, this patch also introduces a "creation_finished" field in
> struct domain to indicate whether the domain has ever been unpaused by
> the hypervisor. MEMF_no_tlbflush can be set only during the VM creation
> phase, while creation_finished is still false, i.e. before the domain is
> unpaused for the first time.
> 
> Signed-off-by: Dongli Zhang 

Acked-by: Jan Beulich 
with ...

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -386,6 +386,12 @@ struct domain
>  bool_t   disable_migrate;
>  /* Is this guest being debugged by dom0? */
>  bool_t   debugger_attached;
> +/*
> + * Set to true at the very end of domain creation, when the domain is
> + * unpaused for the first time by the systemcontroller.
> + */
> +bool creation_finished;

... blank padding added here to match the style of the surrounding
code. I'll try to remember to take care of this during commit, but I'd
appreciate it if you'd look at the neighboring code next time round.

Jan
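
A guess at the layout Jan is asking for (illustrative only; the struct name
and exact padding width below are assumptions, not the real
xen/include/xen/sched.h): the new field's name is column-aligned with the
neighbouring bool_t fields.

#include <stdbool.h>

typedef bool bool_t;               /* stand-in for Xen's bool_t */

struct domain_flags_example {      /* illustrative, not Xen's struct domain */
    /* Is this guest being debugged by dom0? */
    bool_t           debugger_attached;
    /*
     * Set to true at the very end of domain creation, when the domain is
     * unpaused for the first time by the systemcontroller.
     */
    bool             creation_finished;
};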




[Xen-devel] [PATCH v6 2/2] xen: move TLB-flush filtering out into populate_physmap during vm creation

2016-09-19 Thread Dongli Zhang
This patch implements part of the TODO left in commit
a902c12ee45fc9389eb8fe54eeddaf267a555c58 (More efficient TLB-flush
filtering in alloc_heap_pages()): it moves TLB-flush filtering out into
populate_physmap. Because of the TLB flush in alloc_heap_pages, it is very
slow to create a guest with more than 100GB of memory on a host with 100+
CPUs.

This patch introduces a "MEMF_no_tlbflush" bit in memflags to indicate
whether the TLB flush should be done in alloc_heap_pages or in its caller
populate_physmap.  Once this bit is set in memflags, alloc_heap_pages will
skip the TLB flush. Using this bit after the VM has been created could
lead to a security issue: it would make pages accessible to guest B while
guest A may still have a cached mapping to them.

Therefore, this patch also introduces a "creation_finished" field in
struct domain to indicate whether the domain has ever been unpaused by the
hypervisor. MEMF_no_tlbflush can be set only during the VM creation phase,
while creation_finished is still false, i.e. before the domain is unpaused
for the first time.

Signed-off-by: Dongli Zhang 
---
Changed since v5:
  * Remove conditional check before "d->creation_finished = true;".
  * Place "bool creation_finished;" next to the other group of booleans.
  * Remove duplicate "only" in comments.

Changed since v4:
  * Rename is_ever_unpaused to creation_finished.
  * Change bool_t to bool.
  * Polish comments.

Changed since v3:
  * Set the flag to true in domain_unpause_by_systemcontroller when
unpausing the guest domain for the first time.
  * Use true/false for all bool_t variables.
  * Add unlikely to optimize "if statement".
  * Correct comment style.

Changed since v2:
  * Limit this optimization to domain creation time.

---
 xen/common/domain.c |  7 +++
 xen/common/memory.c | 22 ++
 xen/common/page_alloc.c |  4 +++-
 xen/include/xen/mm.h|  2 ++
 xen/include/xen/sched.h |  6 ++
 5 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index a8804e4..3abaca9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1004,6 +1004,13 @@ int domain_unpause_by_systemcontroller(struct domain *d)
 {
 int old, new, prev = d->controller_pause_count;
 
+/*
+ * We record this information here for populate_physmap to figure out
+ * that the domain has finished being created. In fact, we're only
+ * allowed to set the MEMF_no_tlbflush flag during VM creation.
+ */
+d->creation_finished = true;
+
 do
 {
 old = prev;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index cc0f69e..21797ca 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -141,6 +141,8 @@ static void populate_physmap(struct memop_args *a)
 unsigned int i, j;
 xen_pfn_t gpfn, mfn;
 struct domain *d = a->domain, *curr_d = current->domain;
+bool need_tlbflush = false;
+uint32_t tlbflush_timestamp = 0;
 
 if ( !guest_handle_subrange_okay(a->extent_list, a->nr_done,
  a->nr_extents-1) )
@@ -150,6 +152,17 @@ static void populate_physmap(struct memop_args *a)
 max_order(curr_d)) )
 return;
 
+/*
+ * With MEMF_no_tlbflush set, alloc_heap_pages() will ignore
+ * TLB-flushes. After VM creation, this is a security issue (it can
+ * make pages accessible to guest B, when guest A may still have a
+ * cached mapping to them). So we do this only during domain creation,
+ * when the domain itself has not yet been unpaused for the first
+ * time.
+ */
+if ( unlikely(!d->creation_finished) )
+a->memflags |= MEMF_no_tlbflush;
+
 for ( i = a->nr_done; i < a->nr_extents; i++ )
 {
 if ( i != a->nr_done && hypercall_preempt_check() )
@@ -214,6 +227,13 @@ static void populate_physmap(struct memop_args *a)
 goto out;
 }
 
+if ( unlikely(a->memflags & MEMF_no_tlbflush) )
+{
+for ( j = 0; j < (1U << a->extent_order); j++ )
+accumulate_tlbflush(&need_tlbflush, &pg[j],
+                    &tlbflush_timestamp);
+}
+
 mfn = page_to_mfn(page);
 }
 
@@ -232,6 +252,8 @@ static void populate_physmap(struct memop_args *a)
 }
 
 out:
+if ( need_tlbflush )
+filtered_flush_tlb_mask(tlbflush_timestamp);
 a->nr_done = i;
 }
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index d7ca3a0..ae2476d 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -827,7 +827,9 @@ static struct page_info *alloc_heap_pages(
 BUG_ON(pg[i].count_info != PGC_state_free);
 pg[i].count_info = PGC_state_inuse;
 
-accumulate_tlbflush(&need_tlbflush, &pg[i], &tlbflush_timestamp);
+if ( !(memflags & MEMF_no_tlbflush) )
+    accumulate_tlbflush(&need_tlbflush, &pg[i],
+                        &tlbflush_timestamp);
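
For readers skimming the archive, here is a compressed, self-contained C
model of the control flow this patch introduces (an illustrative sketch
with stand-in types, not Xen code; accumulate_tlbflush() is reduced to a
plain timestamp comparison). It shows the three pieces fitting together:
MEMF_no_tlbflush is set only while creation_finished is still false, the
allocator-side per-page flush is replaced by timestamp accumulation, and a
single filtered flush runs once at the end of populate_physmap().

/* tlbflush_model.c -- build with: cc -o tlbflush_model tlbflush_model.c */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MEMF_no_tlbflush  (1U << 0)   /* stand-in for the new memflags bit */

struct domain {
    bool creation_finished;           /* set when the domain is first unpaused */
};

struct page_info {
    uint32_t tlbflush_timestamp;      /* when the page was last freed */
};

/* Remember the newest timestamp seen instead of flushing per allocation. */
static void accumulate_tlbflush(bool *need_flush, const struct page_info *pg,
                                uint32_t *timestamp)
{
    if ( pg->tlbflush_timestamp > *timestamp )
    {
        *timestamp = pg->tlbflush_timestamp;
        *need_flush = true;
    }
}

static void filtered_flush_tlb_mask(uint32_t timestamp)
{
    printf("one TLB flush covering everything freed up to timestamp %u\n",
           (unsigned)timestamp);
}

/* Sketch of populate_physmap(): defer the flush to a single call at the end. */
static void populate_physmap(struct domain *d, struct page_info *pages,
                             unsigned int nr, unsigned int memflags)
{
    bool need_tlbflush = false;
    uint32_t tlbflush_timestamp = 0;
    unsigned int i;

    /* Only safe while the domain has never run (no stale guest mappings). */
    if ( !d->creation_finished )
        memflags |= MEMF_no_tlbflush;

    for ( i = 0; i < nr; i++ )
        if ( memflags & MEMF_no_tlbflush )
            accumulate_tlbflush(&need_tlbflush, &pages[i],
                                &tlbflush_timestamp);

    if ( need_tlbflush )
        filtered_flush_tlb_mask(tlbflush_timestamp);
}

int main(void)
{
    struct domain d = { .creation_finished = false };
    struct page_info pages[3] = { { 5 }, { 9 }, { 7 } };

    populate_physmap(&d, pages, 3, 0);   /* prints a single flush, timestamp 9 */
    return 0;
}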