Re: [Xen-devel] [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest

2017-08-25 Thread Olaf Hering
On Fri, Aug 25, Olaf Hering wrote:

> I think with the new max_pages check an over-allocation cannot happen
> anymore. If at some point the domU still has room for a superpage, it
> will be allocated. In case the batch does not fully fill the superpage,
> the holes will be freed. In the next batch no superpage can be allocated
> anymore, but single pages will be used.

There is one case where over-allocation will happen: assume
x86_hvm_populate_pfns gets a batch of pfns that triggers the allocation
of a 1G page. All pfns will fit into that partly populated superpage.
Then the guest has a hole right after the max_pfn of that batch. The
next batch will start in a new superpage. As a result the freeing part
of x86_hvm_populate_pfns will not consider the previous superpage
anymore. Now 512MB are allocated, but unpopulated.

To handle this case, min_pfn/max_pfn have to be global so that the
current batch can free allocated pfns from previous batches.
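
For illustration, a minimal stand-alone sketch of what "global" could mean
here (the struct and field names are invented for this example, they are
not from the patch): the range is kept in the restore state and only
widened per batch, so the freeing loop of a later batch can still reach
back into a superpage allocated for an earlier one.

/* Illustration only: keep the pfn range across batches instead of local
 * to one call of x86_hvm_populate_pfns. */
#include <stdint.h>

typedef uint64_t xen_pfn_t;

struct restore_range {
    xen_pfn_t min_pfn;   /* lowest pfn seen in any batch so far */
    xen_pfn_t max_pfn;   /* highest pfn seen in any batch so far */
};

/* Widen the globally tracked range with the pfns of the current batch. */
static void track_batch_range(struct restore_range *rr,
                              const xen_pfn_t *pfns, unsigned int count)
{
    unsigned int i;

    for ( i = 0; i < count; ++i )
    {
        if ( pfns[i] < rr->min_pfn )
            rr->min_pfn = pfns[i];
        if ( pfns[i] > rr->max_pfn )
            rr->max_pfn = pfns[i];
    }
}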

Olaf




Re: [Xen-devel] [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest

2017-08-25 Thread Olaf Hering
On Fri, Aug 25, Wei Liu wrote:

> Maybe a middle ground is to scan the batch to see if the pfns can fit
> into a whole superpage? I don't think you can get a batch as big as 1G,
> but there should be a lot of 2M batches?

I think with the new max_pages check an over-allocation cannot happen
anymore. If at some point the domU still has room for a superpage, it
will be allocated. In case the batch does not fully fill the superpage,
the holes will be freed. In the next batch no superpage can be allocated
anymore, but single pages will be used.

This punching of holes might be inefficient, but the win is the use of
superpages when the pfns are contiguous.
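
Roughly, that check boils down to something like the following sketch
(illustration only; the helper name is invented and the exact shape of the
real check in x86_hvm_allocate_pfn is not shown in this mail, the
superpage sizes are the usual x86 values in 4k pages):

#define SP_2MB_NR_PFNS   512UL
#define SP_1GB_NR_PFNS   (512UL * 512UL)

/* Pick the largest allocation order that cannot push the domain past
 * max_pages; order 0 means falling back to single 4k pages. */
static unsigned int max_safe_order(unsigned long tot_pages,
                                   unsigned long max_pages)
{
    if ( tot_pages + SP_1GB_NR_PFNS <= max_pages )
        return 18;   /* 1GB == 2^18 4k pages */
    if ( tot_pages + SP_2MB_NR_PFNS <= max_pages )
        return 9;    /* 2MB == 2^9 4k pages */
    return 0;
}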

Olaf




Re: [Xen-devel] [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest

2017-08-25 Thread Wei Liu
On Fri, Aug 25, 2017 at 02:51:01PM +0200, Olaf Hering wrote:
> On Fri, Aug 25, Wei Liu wrote:
> 
> > I'm still unconvinced this works all the time because it still needs the
> > assumption that the stream contains contiguous pfns.
> 
> This is how it is done today. If the pfns start to arrive in another
> order the format has to be changed to send a memory layout in advance.
> 

Maybe a middle ground is to scan the batch to see if the pfns can fit
into a whole superpage? I don't think you can get a batch as big as 1G,
but there should be a lot of 2M batches?
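
For illustration, such a scan could look roughly like this (stand-alone
sketch with invented names, not code from the patch): given a sorted
batch, check whether it contains one whole, aligned 2MB superpage worth
of consecutive pfns.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t xen_pfn_t;
#define SP_2MB_NR_PFNS 512U

static bool covers_whole_2mb(const xen_pfn_t *sorted_pfns,
                             unsigned int count, unsigned int start)
{
    unsigned int i;

    if ( start + SP_2MB_NR_PFNS > count )
        return false;                      /* batch too short */
    if ( sorted_pfns[start] & (SP_2MB_NR_PFNS - 1) )
        return false;                      /* not 2MB aligned */

    for ( i = 1; i < SP_2MB_NR_PFNS; ++i )
        if ( sorted_pfns[start + i] != sorted_pfns[start] + i )
            return false;                  /* hole inside the candidate */

    return true;
}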

> I will check if some sort of retry logic can be added.
> 

This would be useful too.

> 
> Olaf





Re: [Xen-devel] [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest

2017-08-25 Thread Olaf Hering
On Fri, Aug 25, Wei Liu wrote:

> I'm still unconvinced this works all the time because it still needs the
> assumption that the stream contains contiguous pfns.

This is how it is done today. If the pfns start to arrive in another
order the format has to be changed to send a memory layout in advance.

I will check if some sort of retry logic can be added.
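
One possible shape for such a retry, as an illustration only (not from the
patch; populate_with_fallback is an invented name, the libxc call itself
exists with this signature): try a 1GB extent, then 2MB, then a single 4k
page.

#include <xenctrl.h>

static int populate_with_fallback(xc_interface *xch, uint32_t domid,
                                  xen_pfn_t base_pfn)
{
    static const unsigned int orders[] = { 18 /* 1GB */, 9 /* 2MB */, 0 /* 4k */ };
    unsigned int i;
    int rc = -1;

    for ( i = 0; i < sizeof(orders) / sizeof(orders[0]); ++i )
    {
        xen_pfn_t extent = base_pfn;

        /* Skip orders the pfn is not aligned to. */
        if ( base_pfn & ((1UL << orders[i]) - 1) )
            continue;

        rc = xc_domain_populate_physmap_exact(xch, domid, 1, orders[i], 0,
                                              &extent);
        if ( rc == 0 )
            break;   /* success; otherwise retry with the next smaller order */
    }

    return rc;
}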


Olaf




Re: [Xen-devel] [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest

2017-08-25 Thread Wei Liu
On Thu, Aug 24, 2017 at 12:14:43PM +0200, Olaf Hering wrote:
> During creation of a HVM domU, meminit_hvm() tries to map superpages.
> After save/restore or migration this mapping is lost, and everything is
> allocated in single pages. This causes a performance degradation after
> migration.
> 
> Add the necessary code to preallocate a superpage for the chunk of pfns
> that is received. In case a pfn was not populated on the sending side,
> it must be freed on the receiving side to avoid over-allocation.
> 
> The existing code for x86_pv is moved unmodified into its own file.
> 
[...]
> +static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
> +                                 const xen_pfn_t *original_pfns,
> +                                 const uint32_t *types)
> +{
> +    xc_interface *xch = ctx->xch;
> +    xen_pfn_t min_pfn = original_pfns[0], max_pfn = original_pfns[0];
> +    unsigned i;
> +    int rc = -1;
> +
> +    for ( i = 0; i < count; ++i )
> +    {
> +        if ( original_pfns[i] < min_pfn )
> +            min_pfn = original_pfns[i];
> +        if ( original_pfns[i] > max_pfn )
> +            max_pfn = original_pfns[i];
> +        if ( (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
> +              types[i] != XEN_DOMCTL_PFINFO_BROKEN) &&
> +             !pfn_is_populated(ctx, original_pfns[i]) )
> +        {
> +            rc = x86_hvm_allocate_pfn(ctx, original_pfns[i]);
> +            if ( rc )
> +                goto err;
> +            rc = pfn_set_populated(ctx, original_pfns[i]);
> +            if ( rc )
> +                goto err;
> +        }
> +    }
> +
> +    while ( min_pfn < max_pfn )
> +    {
> +        if ( !xc_sr_bitmap_resize(&ctx->x86_hvm.restore.allocated_pfns, min_pfn) )
> +        {
> +            PERROR("Failed to realloc allocated_pfns %" PRI_xen_pfn, min_pfn);
> +            goto err;
> +        }
> +        if ( !pfn_is_populated(ctx, min_pfn) &&
> +            xc_sr_test_and_clear(min_pfn, &ctx->x86_hvm.restore.allocated_pfns) ) {
> +            xen_pfn_t pfn = min_pfn;
> +            rc = xc_domain_decrease_reservation_exact(xch, ctx->domid, 1, 0, &pfn);
> +            if ( rc )
> +            {
> +                PERROR("Failed to release pfn %" PRI_xen_pfn, min_pfn);
> +                goto err;
> +            }
> +            ctx->restore.tot_pages--;
> +        }
> +        min_pfn++;
> +    }
> +

I'm still unconvinced this works all the time because it still needs the
assumption that the stream contains contiguous pfns.

It probably works now because the mmio hole is placed right under 1G and
you preemptively avoid allocating a 1G page for the first GB.

Suppose we have the following memory layout:

   [0...A) [mmio_start...B) [B...C]
                          ^ 1G here (at B)

A - 0 < 1G
C - B < 1G
The guest can use up to 1G ram
 
And then you receive the following batch of pfns from remote

 B,A-1

When B arrives, a 1G page is thus allocated, using up all of the guest's
allowance. Decreasing the reservation happens only after the batch has
been processed, which means A-1 will trigger over-allocation even for a
4K page. Then there is no fallback in x86_hvm_allocate_pfn, which will
cause the migration to fail.

Another less contrived example, without expanding the mmio hole beyond
1G:

   [0...A) [mmio_start...1G) [1G...C...2G...D...E]

E < 2G
The guest can use up to 2G ram

And then the batch of pfns from remote:

  C,D,A-1

So maybe you need to adjust the location of decrease reservation? Or do
I misread your code?
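
One way to read "adjust the location" is sketched below (illustration
only, not code from the patch; trim_superpage_tail is an invented helper):
give the unused tail of a freshly allocated superpage back to Xen right
away, before the rest of the batch is processed, so the allowance is free
again when a later low pfn (A-1 above) needs a 4k page.

#include <xenctrl.h>

static int trim_superpage_tail(xc_interface *xch, uint32_t domid,
                               xen_pfn_t sp_base, unsigned long sp_nr_pfns,
                               xen_pfn_t last_used_pfn)
{
    xen_pfn_t pfn;
    int rc = 0;

    /* Free every pfn of the superpage beyond the last one this batch uses. */
    for ( pfn = last_used_pfn + 1; pfn < sp_base + sp_nr_pfns && rc == 0; ++pfn )
    {
        xen_pfn_t victim = pfn;

        rc = xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &victim);
    }

    return rc;
}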



[Xen-devel] [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest

2017-08-24 Thread Olaf Hering
During creation of a HVM domU, meminit_hvm() tries to map superpages.
After save/restore or migration this mapping is lost, and everything is
allocated in single pages. This causes a performance degradation after
migration.

Add the necessary code to preallocate a superpage for the chunk of pfns
that is received. In case a pfn was not populated on the sending side,
it must be freed on the receiving side to avoid over-allocation.

The existing code for x86_pv is moved unmodified into its own file.

Signed-off-by: Olaf Hering 
---
 tools/libxc/xc_sr_common.h  |  25 +++--
 tools/libxc/xc_sr_restore.c |  75 ++---
 tools/libxc/xc_sr_restore_x86_hvm.c | 202 
 tools/libxc/xc_sr_restore_x86_pv.c  |  72 -
 4 files changed, 296 insertions(+), 78 deletions(-)

diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 8901af112a..bf2758e91a 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -139,6 +139,16 @@ struct xc_sr_restore_ops
      */
     int (*setup)(struct xc_sr_context *ctx);
 
+    /**
+     * Populate PFNs
+     *
+     * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+     * unpopulated subset.
+     */
+    int (*populate_pfns)(struct xc_sr_context *ctx, unsigned count,
+                         const xen_pfn_t *original_pfns, const uint32_t *types);
+
+
     /**
      * Process an individual record from the stream.  The caller shall take
      * care of processing common records (e.g. END, PAGE_DATA).
@@ -224,6 +234,8 @@ struct xc_sr_context
 
             int send_back_fd;
             unsigned long p2m_size;
+            unsigned long max_pages;
+            unsigned long tot_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
 
             /* From Image Header. */
@@ -336,6 +348,11 @@ struct xc_sr_context
                     /* HVM context blob. */
                     void *context;
                     size_t contextsz;
+
+                    /* Bitmap of currently allocated PFNs during restore. */
+                    struct xc_sr_bitmap attempted_1g;
+                    struct xc_sr_bitmap attempted_2m;
+                    struct xc_sr_bitmap allocated_pfns;
                 } restore;
             };
         } x86_hvm;
@@ -455,14 +472,6 @@ static inline int write_record(struct xc_sr_context *ctx,
  */
 int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 
-/*
- * This would ideally be private in restore.c, but is needed by
- * x86_pv_localise_page() if we receive pagetables frames ahead of the
- * contents of the frames they point at.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types);
-
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index d53948e1a6..8cd9289d1a 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -68,74 +68,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Given a set of pfns, obtain memory from Xen to fill the physmap for the
- * unpopulated subset.  If types is NULL, no page type checking is performed
- * and all unpopulated pfns are populated.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types)
-{
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns)),
-        *pfns = malloc(count * sizeof(*pfns));
-    unsigned i, nr_pfns = 0;
-    int rc = -1;
-
-    if ( !mfns || !pfns )
-    {
-        ERROR("Failed to allocate %zu bytes for populating the physmap",
-              2 * count * sizeof(*mfns));
-        goto err;
-    }
-
-    for ( i = 0; i < count; ++i )
-    {
-        if ( (!types || (types &&
-                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
-                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
-             !pfn_is_populated(ctx, original_pfns[i]) )
-        {
-            rc = pfn_set_populated(ctx, original_pfns[i]);
-            if ( rc )
-                goto err;
-            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
-            ++nr_pfns;
-        }
-    }
-
-    if ( nr_pfns )
-    {
-        rc = xc_domain_populate_physmap_exact(
-            xch, ctx->domid, nr_pfns, 0, 0, mfns);
-        if ( rc )
-        {
-            PERROR("Failed to populate physmap");
-            goto err;
-        }
-
-        for ( i = 0; i < nr_pfns; ++i )
-        {
-            if ( mfns[i] == INVALID_MFN )
-            {
-                ERROR("Populate physmap failed for pfn %u", i);
-                rc = -1;
-                goto err;
-            }
-
-            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
-        }
-    }
-
-    rc = 0;
-
- err:
-    free(pfns);
-    free(mfns);
-
-    return rc;
-}
-
 /*
  * Given a