Re: [PATCH 0/5] x86/vmware: Steal time accounting support

2020-03-12 Thread Thomas Gleixner
Alexey Makhalov  writes:
>
> Alexey Makhalov (5):
>   x86/vmware: Make vmware_select_hypercall() __init
>   x86/vmware: Remove vmware_sched_clock_setup()
>   x86/vmware: Steal time clock for VMware guest
>   x86/vmware: Enable steal time accounting
>   x86/vmware: Use bool type for vmw_sched_clock

Reviewed-by: Thomas Gleixner 
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [RESEND PATCH v2 1/9] iomap: Constify ioreadX() iomem argument (as in generic implementation)

2020-03-12 Thread Michael Ellerman
Krzysztof Kozlowski  writes:
> diff --git a/arch/powerpc/kernel/iomap.c b/arch/powerpc/kernel/iomap.c
> index 5ac84efc6ede..9fe4fb3b08aa 100644
> --- a/arch/powerpc/kernel/iomap.c
> +++ b/arch/powerpc/kernel/iomap.c
> @@ -15,23 +15,23 @@
>   * Here comes the ppc64 implementation of the IOMAP 
>   * interfaces.
>   */
> -unsigned int ioread8(void __iomem *addr)
> +unsigned int ioread8(const void __iomem *addr)
>  {
>   return readb(addr);
>  }
> -unsigned int ioread16(void __iomem *addr)
> +unsigned int ioread16(const void __iomem *addr)
>  {
>   return readw(addr);
>  }
> -unsigned int ioread16be(void __iomem *addr)
> +unsigned int ioread16be(const void __iomem *addr)
>  {
>   return readw_be(addr);
>  }
> -unsigned int ioread32(void __iomem *addr)
> +unsigned int ioread32(const void __iomem *addr)
>  {
>   return readl(addr);
>  }
> -unsigned int ioread32be(void __iomem *addr)
> +unsigned int ioread32be(const void __iomem *addr)
>  {
>   return readl_be(addr);
>  }
> @@ -41,27 +41,27 @@ EXPORT_SYMBOL(ioread16be);
>  EXPORT_SYMBOL(ioread32);
>  EXPORT_SYMBOL(ioread32be);
>  #ifdef __powerpc64__
> -u64 ioread64(void __iomem *addr)
> +u64 ioread64(const void __iomem *addr)
>  {
>   return readq(addr);
>  }
> -u64 ioread64_lo_hi(void __iomem *addr)
> +u64 ioread64_lo_hi(const void __iomem *addr)
>  {
>   return readq(addr);
>  }
> -u64 ioread64_hi_lo(void __iomem *addr)
> +u64 ioread64_hi_lo(const void __iomem *addr)
>  {
>   return readq(addr);
>  }
> -u64 ioread64be(void __iomem *addr)
> +u64 ioread64be(const void __iomem *addr)
>  {
>   return readq_be(addr);
>  }
> -u64 ioread64be_lo_hi(void __iomem *addr)
> +u64 ioread64be_lo_hi(const void __iomem *addr)
>  {
>   return readq_be(addr);
>  }
> -u64 ioread64be_hi_lo(void __iomem *addr)
> +u64 ioread64be_hi_lo(const void __iomem *addr)
>  {
>   return readq_be(addr);
>  }
> @@ -139,15 +139,15 @@ EXPORT_SYMBOL(iowrite64be_hi_lo);
>   * FIXME! We could make these do EEH handling if we really
>   * wanted. Not clear if we do.
>   */
> -void ioread8_rep(void __iomem *addr, void *dst, unsigned long count)
> +void ioread8_rep(const void __iomem *addr, void *dst, unsigned long count)
>  {
>   readsb(addr, dst, count);
>  }
> -void ioread16_rep(void __iomem *addr, void *dst, unsigned long count)
> +void ioread16_rep(const void __iomem *addr, void *dst, unsigned long count)
>  {
>   readsw(addr, dst, count);
>  }
> -void ioread32_rep(void __iomem *addr, void *dst, unsigned long count)
> +void ioread32_rep(const void __iomem *addr, void *dst, unsigned long count)
>  {
>   readsl(addr, dst, count);
>  }

This looks OK to me.

Acked-by: Michael Ellerman  (powerpc)

cheers


Re: [RESEND PATCH v2 6/9] drm/mgag200: Constify ioreadX() iomem argument (as in generic implementation)

2020-03-12 Thread Thomas Zimmermann
Hi Krzysztof,

I just received a resend email from 3 weeks ago :/

Do you want me to merge the mgag200 patch into drm-misc-next?

Best regards
Thomas

On 19.02.20 at 18:50, Krzysztof Kozlowski wrote:
> The ioreadX() helpers have an inconsistent interface.  On some architectures
> the void __iomem * address argument is a pointer to const, on others not.
> 
> Implementations of ioreadX() do not modify the memory under the address
> so they can be converted to a "const" version for const-safety and
> consistency among architectures.
> 
> Signed-off-by: Krzysztof Kozlowski 
> Reviewed-by: Thomas Zimmermann 
> 
> ---
> 
> Changes since v1:
> 1. Add Thomas' review.
> ---
>  drivers/gpu/drm/mgag200/mgag200_drv.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h 
> b/drivers/gpu/drm/mgag200/mgag200_drv.h
> index aa32aad222c2..6512b3af4fb7 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_drv.h
> +++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
> @@ -34,9 +34,9 @@
>  
>  #define MGAG200FB_CONN_LIMIT 1
>  
> -#define RREG8(reg) ioread8(((void __iomem *)mdev->rmmio) + (reg))
> +#define RREG8(reg) ioread8(((const void __iomem *)mdev->rmmio) + (reg))
>  #define WREG8(reg, v) iowrite8(v, ((void __iomem *)mdev->rmmio) + (reg))
> -#define RREG32(reg) ioread32(((void __iomem *)mdev->rmmio) + (reg))
> +#define RREG32(reg) ioread32(((const void __iomem *)mdev->rmmio) + (reg))
>  #define WREG32(reg, v) iowrite32(v, ((void __iomem *)mdev->rmmio) + (reg))
>  
>  #define ATTR_INDEX 0x1fc0
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




Re: [RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue

2020-03-12 Thread David Hildenbrand
On 12.03.20 09:47, Michael S. Tsirkin wrote:
> On Thu, Mar 12, 2020 at 09:37:32AM +0100, David Hildenbrand wrote:
>> 2. You are essentially stealing THPs in the guest. So the fastest
>> mapping (THP in guest and host) is gone. The guest won't be able to make
>> use of THP where it previously was able to. I can imagine this implies a
>> performance degradation for some workloads. This needs a proper
>> performance evaluation.
> 
> I think the problem is more with the alloc_pages API.
> That gives you exactly the given order, and if there's
> a larger chunk available, it will split it up.
> 
> But for the balloon - and I suspect lots of other users -
> we do not want to stress the system; if a large
> chunk is available anyway, then we could handle
> that more optimally by getting it all in one go.
> 
> 
> So if we want to address this, IMHO this calls for a new API.
> Along the lines of
> 
>   struct page *alloc_page_range(gfp_t gfp, unsigned int min_order,
>                                 unsigned int max_order, unsigned int *order)
> 
> the idea would then be to return a number of pages in the given
> range.
> 
> What do you think? Want to try implementing that?

You can just start with the highest order and decrement the order until
your allocation succeeds using alloc_pages(), which would be enough for
a first version. At least I don't see the immediate need for a new
kernel API.

-- 
Thanks,

David / dhildenb



Re: [RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue

2020-03-12 Thread Michael S. Tsirkin
On Thu, Mar 12, 2020 at 09:37:32AM +0100, David Hildenbrand wrote:
> 2. You are essentially stealing THPs in the guest. So the fastest
> mapping (THP in guest and host) is gone. The guest won't be able to make
> use of THP where it previously was able to. I can imagine this implies a
> performance degradation for some workloads. This needs a proper
> performance evaluation.

I think the problem is more with the alloc_pages API.
That gives you exactly the given order, and if there's
a larger chunk available, it will split it up.

But for the balloon - and I suspect lots of other users -
we do not want to stress the system; if a large
chunk is available anyway, then we could handle
that more optimally by getting it all in one go.


So if we want to address this, IMHO this calls for a new API.
Along the lines of

struct page *alloc_page_range(gfp_t gfp, unsigned int min_order,
                              unsigned int max_order, unsigned int *order)

the idea would then be to return a number of pages in the given
range.

What do you think? Want to try implementing that?

-- 
MST



Re: [RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue

2020-03-12 Thread David Hildenbrand
On 12.03.20 08:49, Hui Zhu wrote:
> If the guest kernel has many fragmented pages, using virtio_balloon
> will split QEMU's THPs when it calls the MADV_DONTNEED madvise to release
> the balloon pages.
> This is an example in a VM with 1G memory 1CPU:
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages: 0 kB
> 
> usemem --punch-holes -s -1 800m &
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:976896 kB
> 
> (qemu) device_add virtio-balloon-pci,id=balloon1
> (qemu) info balloon
> balloon: actual=1024
> (qemu) balloon 624
> (qemu) info balloon
> balloon: actual=624
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:153600 kB
> 
> The THP amount decreased by more than 800M.
> The reason is that usemem with the punch-holes option frees every other
> page after allocation.  Then 400M of free memory inside the guest kernel
> consists of fragmented pages.
> The guest kernel will use them to inflate the balloon.  When these
> fragmented pages are freed, the THPs will be split.
> 
> This commit tries to handle this by adding a new flag,
> VIRTIO_BALLOON_F_THP_ORDER.
> When this flag is set, the balloon page order will be set to the THP order.
> Then THP pages will be freed together in the host.
> This is an example in a VM with 1G memory 1CPU:
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages: 0 kB
> 
> usemem --punch-holes -s -1 800m &
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:976896 kB
> 
> (qemu) device_add virtio-balloon-pci,id=balloon1,thp-order=on
> (qemu) info balloon
> balloon: actual=1024
> (qemu) balloon 624
> (qemu) info balloon
> balloon: actual=624
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:583680 kB
> 
> The THP amount decreased by only 384M.  This shows that VIRTIO_BALLOON_F_THP_ORDER
> can help handle the THP split issue.


Multiple things:

I recently had a similar discussion with Alex [1] and I think this needs
more thought.

My thoughts:

1. You most certainly want to fall back to allocating pages in a smaller
granularity once you run out of bigger allocations, sacrificing
performance for memory inflation - which has always been the case and
is what people expect to happen (e.g., to shrink the page cache
properly).

2. You are essentially stealing THPs in the guest. So the fastest
mapping (THP in guest and host) is gone. The guest won't be able to make
use of THP where it previously was able to. I can imagine this implies a
performance degradation for some workloads. This needs a proper
performance evaluation.

3. The pages you allocate are not migratable, e.g., for memory
offlining or alloc_contig_range() users like gigantic pages or soon
virtio-mem. I strongly dislike that. This is IMHO a step backwards. We
want to be able to migrate or even split up and migrate such pages.

Assume the guest could make good use of a THP somewhere. Who says it
wouldn't be better to sacrifice a huge balloon page to be able to use
THP both in the guest and the host for that mapping? I am not convinced
stealing possible THPs in the guest and not being able to split them up
is really what we want performance wise.


4. I think we also want a better mechanism to directly inflate/deflate
higher-order pages and not reuse the 4k inflate/deflate queues.

5. I think we don't want to hard code such THP values but let the host
tell us the THP size instead, which can easily differ between guest and
host.

Also, I do wonder if balloon compaction in the guest will already result
in more THP getting used again long term. Assume the guest compacts
balloon pages into a single THP again. This will result in a bunch of
DONTNEED/WILLNEED in the hypervisor due to inflation/deflation. I wonder
if the WILLNEED on the sub-pages of a candidate THP in the host will
allow the host to use a THP again.


[1]
https://lore.kernel.org/linux-mm/939de9de-d82a-aed2-6a51-57a55d81c...@redhat.com/

> 
> Signed-off-by: Hui Zhu 
> ---
>  drivers/virtio/virtio_balloon.c | 57 ++---
>  include/linux/balloon_compaction.h  | 14 ++---
>  include/uapi/linux/virtio_balloon.h |  4 +++
>  3 files changed, 54 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7bfe365..1e1dc76 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -175,18 +175,31 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
>   unsigned num_pfns;
>   struct page *page;
>   LIST_HEAD(pages);
> + int page_order = 0;
>  
>   /* We can only do one array worth at a time. */
>   num = min(num, ARRAY_SIZE(vb->pfns));
>  
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_THP_ORDER))
> + page_order = VIRTIO_BALLOON_THP_ORDER;
> +
>   for (num_pfns = 0; num_pfns < num;
>num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
> - struct page *page = balloon_page_alloc();
> + struct 

Re: [RFC for QEMU] virtio-balloon: Add option thp-order to set VIRTIO_BALLOON_F_THP_ORDER

2020-03-12 Thread Michael S. Tsirkin
On Thu, Mar 12, 2020 at 03:49:55PM +0800, Hui Zhu wrote:
> If the guest kernel has many fragmented pages, using virtio_balloon
> will split QEMU's THPs when it calls the MADV_DONTNEED madvise to release
> the balloon pages.
> Setting the thp-order option to on will enable the VIRTIO_BALLOON_F_THP_ORDER flag.
> It will set the balloon page size to the THP size to handle the THP split issue.
> 
> Signed-off-by: Hui Zhu 

What's wrong with just using the PartiallyBalloonedPage machinery
instead? That would make it guest transparent.

> ---
>  hw/virtio/virtio-balloon.c  | 67 -
>  include/standard-headers/linux/virtio_balloon.h |  4 ++
>  2 files changed, 47 insertions(+), 24 deletions(-)
> 
> diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
> index a4729f7..cfe86b0 100644
> --- a/hw/virtio/virtio-balloon.c
> +++ b/hw/virtio/virtio-balloon.c
> @@ -340,37 +340,49 @@ static void virtio_balloon_handle_output(VirtIODevice *vdev, VirtQueue *vq)
>  while (iov_to_buf(elem->out_sg, elem->out_num, offset, , 4) == 4) {
>  unsigned int p = virtio_ldl_p(vdev, );
>  hwaddr pa;
> +size_t handle_size = BALLOON_PAGE_SIZE;
>  
>  pa = (hwaddr) p << VIRTIO_BALLOON_PFN_SHIFT;
>  offset += 4;
>  
> -section = memory_region_find(get_system_memory(), pa,
> - BALLOON_PAGE_SIZE);
> -if (!section.mr) {
> -trace_virtio_balloon_bad_addr(pa);
> -continue;
> -}
> -if (!memory_region_is_ram(section.mr) ||
> -memory_region_is_rom(section.mr) ||
> -memory_region_is_romd(section.mr)) {
> -trace_virtio_balloon_bad_addr(pa);
> -memory_region_unref(section.mr);
> -continue;
> -}
> +if (virtio_has_feature(s->host_features,
> +   VIRTIO_BALLOON_F_THP_ORDER))
> +handle_size = BALLOON_PAGE_SIZE << VIRTIO_BALLOON_THP_ORDER;
> +
> +while (handle_size > 0) {
> +section = memory_region_find(get_system_memory(), pa,
> + BALLOON_PAGE_SIZE);
> +if (!section.mr) {
> +trace_virtio_balloon_bad_addr(pa);
> +continue;
> +}
> +if (!memory_region_is_ram(section.mr) ||
> +memory_region_is_rom(section.mr) ||
> +memory_region_is_romd(section.mr)) {
> +trace_virtio_balloon_bad_addr(pa);
> +memory_region_unref(section.mr);
> +continue;
> +}
>  
> -trace_virtio_balloon_handle_output(memory_region_name(section.mr),
> -   pa);
> -if (!qemu_balloon_is_inhibited()) {
> -if (vq == s->ivq) {
> -balloon_inflate_page(s, section.mr,
> - section.offset_within_region, );
> -} else if (vq == s->dvq) {
> -balloon_deflate_page(s, section.mr, 
> section.offset_within_region);
> -} else {
> -g_assert_not_reached();
> +trace_virtio_balloon_handle_output(memory_region_name(section.mr),
> +   pa);
> +if (!qemu_balloon_is_inhibited()) {
> +if (vq == s->ivq) {
> +balloon_inflate_page(s, section.mr,
> + section.offset_within_region,
> + );
> +} else if (vq == s->dvq) {
> +balloon_deflate_page(s, section.mr,
> + section.offset_within_region);
> +} else {
> +g_assert_not_reached();
> +}
>  }
> +memory_region_unref(section.mr);
> +
> +pa += BALLOON_PAGE_SIZE;
> +handle_size -= BALLOON_PAGE_SIZE;
>  }
> -memory_region_unref(section.mr);
>  }
>  
>  virtqueue_push(vq, elem, offset);
> @@ -693,6 +705,8 @@ static void virtio_balloon_set_config(VirtIODevice *vdev,
>  
>  memcpy(, config_data, virtio_balloon_config_size(dev));
>  dev->actual = le32_to_cpu(config.actual);
> +if (virtio_has_feature(vdev->host_features, VIRTIO_BALLOON_F_THP_ORDER))
> +dev->actual <<= VIRTIO_BALLOON_THP_ORDER;
>  if (dev->actual != oldactual) {
>  qapi_event_send_balloon_change(vm_ram_size -
>  ((ram_addr_t) dev->actual << VIRTIO_BALLOON_PFN_SHIFT));
> @@ -728,6 +742,9 @@ static void virtio_balloon_to_target(void *opaque, ram_addr_t 

Re: [RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue

2020-03-12 Thread Michael S. Tsirkin
On Thu, Mar 12, 2020 at 03:49:54PM +0800, Hui Zhu wrote:
> If the guest kernel has many fragmented pages, using virtio_balloon
> will split QEMU's THPs when it calls the MADV_DONTNEED madvise to release
> the balloon pages.
> This is an example in a VM with 1G memory 1CPU:
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages: 0 kB
> 
> usemem --punch-holes -s -1 800m &
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:976896 kB
> 
> (qemu) device_add virtio-balloon-pci,id=balloon1
> (qemu) info balloon
> balloon: actual=1024
> (qemu) balloon 624
> (qemu) info balloon
> balloon: actual=624
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:153600 kB
> 
> The THP amount decreased by more than 800M.
> The reason is that usemem with the punch-holes option frees every other
> page after allocation.  Then 400M of free memory inside the guest kernel
> consists of fragmented pages.
> The guest kernel will use them to inflate the balloon.  When these
> fragmented pages are freed, the THPs will be split.
> 
> This commit tries to handle this by adding a new flag,
> VIRTIO_BALLOON_F_THP_ORDER.
> When this flag is set, the balloon page order will be set to the THP order.
> Then THP pages will be freed together in the host.
> This is an example in a VM with 1G memory 1CPU:
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages: 0 kB
> 
> usemem --punch-holes -s -1 800m &
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:976896 kB
> 
> (qemu) device_add virtio-balloon-pci,id=balloon1,thp-order=on
> (qemu) info balloon
> balloon: actual=1024
> (qemu) balloon 624
> (qemu) info balloon
> balloon: actual=624
> 
> cat /proc/meminfo | grep AnonHugePages:
> AnonHugePages:583680 kB
> 
> The THP amount decreased by only 384M.  This shows that VIRTIO_BALLOON_F_THP_ORDER
> can help handle the THP split issue.
> 
> Signed-off-by: Hui Zhu 
> ---
>  drivers/virtio/virtio_balloon.c | 57 ++---
>  include/linux/balloon_compaction.h  | 14 ++---
>  include/uapi/linux/virtio_balloon.h |  4 +++
>  3 files changed, 54 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7bfe365..1e1dc76 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -175,18 +175,31 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
>   unsigned num_pfns;
>   struct page *page;
>   LIST_HEAD(pages);
> + int page_order = 0;
>  
>   /* We can only do one array worth at a time. */
>   num = min(num, ARRAY_SIZE(vb->pfns));
>  
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_THP_ORDER))
> + page_order = VIRTIO_BALLOON_THP_ORDER;
> +
>   for (num_pfns = 0; num_pfns < num;
>num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
> - struct page *page = balloon_page_alloc();
> + struct page *page;
> +
> + if (page_order)
> + page = alloc_pages(__GFP_HIGHMEM |
> +__GFP_KSWAPD_RECLAIM |
> +__GFP_RETRY_MAYFAIL |
> +__GFP_NOWARN | __GFP_NOMEMALLOC,

The set of flags is inconsistent with balloon_page_alloc().
Please extend that function rather than bypassing it.


> +page_order);
> + else
> + page = balloon_page_alloc();
>  
>   if (!page) {
>   dev_info_ratelimited(&vb->vdev->dev,
> -  "Out of puff! Can't get %u pages\n",
> -  VIRTIO_BALLOON_PAGES_PER_PAGE);
> + "Out of puff! Can't get %u pages\n",
> + VIRTIO_BALLOON_PAGES_PER_PAGE << page_order);
>   /* Sleep for at least 1/5 of a second before retry. */
>   msleep(200);
>   break;

I suggest we do something guest-side only for starters: if we need a
power-of-two number of pages, try to get them in a single chunk, with no
retrying. If that fails, go back to a single page.


> @@ -206,7 +219,7 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
>   vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
>   if (!virtio_has_feature(vb->vdev,
>   VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> - adjust_managed_page_count(page, -1);
> + adjust_managed_page_count(page, -(1 << page_order));
>   vb->num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE;
>   }
>  
> @@ -223,13 +236,20 @@ static void release_pages_balloon(struct virtio_balloon *vb,
>struct list_head *pages)
>  {
>   struct page *page, *next;
> + int page_order = 0;
> +
> + if (virtio_has_feature(vb->vdev,