Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wei Wang

On 06/25/2018 08:05 PM, Wei Wang wrote:

This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.
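
To make the round-by-round scheme above concrete, here is a minimal,
self-contained C sketch of a hypervisor-side migration loop. All names are
hypothetical stand-ins rather than QEMU/KVM code, and the dirty-tracking
primitives are stubbed out; it only illustrates the control flow.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define NPAGES        4096   /* toy guest: 4096 x 4KiB pages                 */
#define DOWNTIME_GOAL 64     /* stop-and-copy once <= 64 pages remain dirty  */

static bool dirty[NPAGES];

/* Stand-ins for real hypervisor primitives (hypothetical, illustration only). */
static void write_protect_all(void) { }            /* arm dirty logging       */
static void pause_guest(void) { }                  /* stop the vCPUs          */
static void send_page(size_t pfn) { (void)pfn; }   /* stream one page out     */

/* Fill dirty[] with pages written since the last call; return their count.  */
static size_t fetch_and_clear_dirty(bool *bitmap)
{
    memset(bitmap, 0, NPAGES * sizeof(*bitmap));   /* stub: nothing dirty     */
    return 0;
}

static size_t send_dirty(void)
{
    size_t n = fetch_and_clear_dirty(dirty);
    for (size_t pfn = 0; pfn < NPAGES; pfn++)
        if (dirty[pfn])
            send_page(pfn);
    return n;
}

int main(void)
{
    write_protect_all();                   /* start tracking guest writes     */
    for (size_t pfn = 0; pfn < NPAGES; pfn++)
        send_page(pfn);                    /* round 1: all guest memory       */

    while (send_dirty() > DOWNTIME_GOAL)   /* rounds 2..N: resend dirty pages */
        ;
    pause_guest();                         /* dirty set is small enough now   */
    send_dirty();                          /* final stop-and-copy round       */

    printf("migration stream complete\n");
    return 0;
}

The write protection is what makes a fetch_and_clear_dirty()-style harvest
possible: after it is armed, every guest write faults once and marks the
corresponding page dirty for the next round.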

This feature enables the optimization by skipping the transfer of guest
free pages during VM live migration. It is not concerned that the memory
pages are used after they are given to the hypervisor as a hint of the
free pages, because they will be tracked by the hypervisor and transferred
in the subsequent round if they are used and written.
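
The safety argument above can be illustrated with a hedged sketch of how the
host side might consume a hint: it only clears bits in the still-to-send
bitmap, while dirty logging stays armed, so any hinted page that the guest
reuses and writes is transmitted again in a later round. This is illustrative
C with made-up names, not the actual QEMU implementation.

#include <stdio.h>

#define PAGE_SHIFT    12
#define NPAGES        4096
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* One bit per guest page: bit set => page still has to be sent this round. */
static unsigned long migration_bitmap[NPAGES / BITS_PER_LONG];

static void clear_page_bit(unsigned long pfn)
{
    migration_bitmap[pfn / BITS_PER_LONG] &= ~(1UL << (pfn % BITS_PER_LONG));
}

/*
 * Consume one hint: a guest-physical range the guest reported as free.
 * Skipping it is safe because dirty logging stays armed; a page the guest
 * writes later is marked dirty and sent in a subsequent round.
 */
static void apply_free_page_hint(unsigned long gpa, unsigned long len)
{
    unsigned long first  = gpa >> PAGE_SHIFT;
    unsigned long npages = len >> PAGE_SHIFT;

    for (unsigned long i = 0; i < npages && first + i < NPAGES; i++)
        clear_page_bit(first + i);
}

int main(void)
{
    /* Start with every page pending, as in the first round. */
    for (unsigned long i = 0; i < NPAGES / BITS_PER_LONG; i++)
        migration_bitmap[i] = ~0UL;

    apply_free_page_hint(4UL << 20, 4UL << 20);   /* e.g. a 4MB block at 4MB */

    printf("bitmap word for first hinted page: %lx\n",
           migration_bitmap[(4UL << 20 >> PAGE_SHIFT) / BITS_PER_LONG]);
    return 0;
}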

* Tests
- Test Environment
 Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
 Guest: 8G RAM, 4 vCPU
 Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
 - Idle Guest Live Migration Time (results are averaged over 10 runs):
 - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction


Following Michael's comments, here is one more set of data:

Enabling page poisoning with value=0 and enabling KSM:

The legacy live migration time is 1806ms (averaged over 10 runs). Compared
to the case with this optimization feature in use (i.e. 284ms), there is
still roughly an 84% reduction.



Best,
Wei


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Michael S. Tsirkin
On Fri, Jun 29, 2018 at 03:52:40PM +, Wang, Wei W wrote:
> On Friday, June 29, 2018 10:46 PM, Michael S. Tsirkin wrote:
> > To: David Hildenbrand 
> > Cc: Wang, Wei W ; virtio-...@lists.oasis-open.org;
> > linux-ker...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > k...@vger.kernel.org; linux...@kvack.org; mho...@kernel.org;
> > a...@linux-foundation.org; torva...@linux-foundation.org;
> > pbonz...@redhat.com; liliang.opensou...@gmail.com;
> > yang.zhang...@gmail.com; quan@gmail.com; ni...@redhat.com;
> > r...@redhat.com; pet...@redhat.com
> > Subject: Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting
> > 
> > On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> > > On 25.06.2018 14:05, Wei Wang wrote:
> > > > This patch series is separated from the previous "Virtio-balloon
> > > > Enhancement" series. The new feature,
> > > > VIRTIO_BALLOON_F_FREE_PAGE_HINT, implemented by this series
> > enables
> > > > the virtio-balloon driver to report hints of guest free pages to the
> > > > host. It can be used to accelerate live migration of VMs. Here is an
> > introduction of this usage:
> > > >
> > > > Live migration needs to transfer the VM's memory from the source
> > > > machine to the destination round by round. For the 1st round, all
> > > > the VM's memory is transferred. From the 2nd round, only the pieces
> > > > of memory that were written by the guest (after the 1st round) are
> > > > transferred. One method that is popularly used by the hypervisor to
> > > > track which part of memory is written is to write-protect all the guest
> > memory.
> > > >
> > > > This feature enables the optimization by skipping the transfer of
> > > > guest free pages during VM live migration. It is not concerned that
> > > > the memory pages are used after they are given to the hypervisor as
> > > > a hint of the free pages, because they will be tracked by the
> > > > hypervisor and transferred in the subsequent round if they are used and
> > written.
> > > >
> > > > * Tests
> > > > - Test Environment
> > > > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > > > Guest: 8G RAM, 4 vCPU
> > > > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2
> > > > second
> > > >
> > > > - Test Results
> > > > - Idle Guest Live Migration Time (results are averaged over 10 
> > > > runs):
> > > > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> > > > - Guest with Linux Compilation Workload (make bzImage -j4):
> > > > - Live Migration Time (average)
> > > >   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% 
> > > > reduction
> > > > - Linux Compilation Time
> > > >   Optimization v.s. Legacy = 5min6s v.s. 5min12s
> > > >   --> no obvious difference
> > > >
> > >
> > > Being in version 34 already, this whole thing still looks and feels
> > > like a big hack to me. It might just be me, but especially if I read
> > > about assumptions like "QEMU will not hotplug memory during
> > > migration". This does not feel like a clean solution.
> > >
> > > I am still not sure if we really need this interface, especially as
> > > real free page hinting might be on its way.
> > >
> > > a) we perform free page hinting by setting all free pages
> > > (arch_free_page()) to zero. Migration will detect zero pages and
> > > minimize #pages to migrate. I don't think this is a good idea but
> > > Michel suggested to do a performance evaluation and Nitesh is looking
> > > into that right now.
> > 
> > Yes this test is needed I think. If we can get most of the benefit without 
> > PV
> > interfaces, that's nice.
> > 
> > Wei, I think you need this as part of your performance comparison
> > too: set page poisoning value to 0 and enable KSM, compare with your
> > patches.
> 
> Do you mean live migration with zero pages?
> I can first share the amount of memory transferred during live migration
> that I observed:
> Legacy is around 380MB,
> Optimization is around 340MB.
> This shows that most pages were already zero and were skipped during the
> legacy live migration. But the legacy migration time is still much larger
> because zero-page checking is costly.
> (It's late night here, I can get you that with my server probably tomorrow)
> 
> Best,
> Wei

Sure thing.

Also we might want to look at optimizing the RLE compressor for
the common case of pages full of zeroes.

Here are some ideas:
https://rusty.ozlabs.org/?p=560

Note Epiphany #2 as well as the comments by Paolo Bonzini and by Victor Kaplansky.
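
For reference, the zero-page checking cost mentioned above comes from having
to scan every candidate page in full before it can be skipped. A minimal
sketch of such a check follows (illustrative only; QEMU's own buffer_is_zero()
is far more heavily optimized):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096

/*
 * Word-at-a-time scan of one 4KiB page.  Even this simple version has to
 * touch the whole page for every candidate, which is where the per-page
 * zero-detection cost during migration comes from.
 */
static bool page_is_zero(const uint64_t *page)
{
    for (size_t i = 0; i < PAGE_SIZE / sizeof(uint64_t); i++)
        if (page[i] != 0)
            return false;
    return true;
}

int main(void)
{
    static uint64_t page[PAGE_SIZE / sizeof(uint64_t)];   /* zero-initialized */

    printf("zero page detected: %d\n", page_is_zero(page));
    page[100] = 1;
    printf("zero page detected: %d\n", page_is_zero(page));
    return 0;
}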

-- 
MST


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand


>> Why would your suggestion still be applicable?
>>
>> Your point for now is "I might not want to have page hinting enabled due to
>> the overhead, but still a live migration speedup". If that overhead actually
>> exists (we'll have to see) or there might be another reason to disable page
>> hinting, then we have to decide whether that specific setup is worth
>> merging your changes.
> 
> All the above "if we have", "assume we have" don't sound like a valid 
> argument to me.

Argument? *confused* And that hinders you from answering the question
"Why would your suggestion still be applicable?" ? Well, okay.

So I will answer it by myself: Because somebody would want to disable
page hinting. Maybe there are some people out there.

>  
>> I am not (and don't want to be) in the position to make any decisions here 
>> :) I
>> just want to understand if two interfaces for free pages actually make sense.
> 
> I responded to Nitesh about the differences; you may want to check with him
> about this.
> I would suggest you send your patches out to LKML to get a discussion with
> the mm folks.

Indeed, Nitesh is trying to solve the problems we found in the RFC, so
this can take some time.

> 
> Best,
> Wei
> 


-- 

Thanks,

David / dhildenb


RE: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wang, Wei W
On Friday, June 29, 2018 7:54 PM, David Hildenbrand wrote:
> On 29.06.2018 13:31, Wei Wang wrote:
> > On 06/29/2018 03:46 PM, David Hildenbrand wrote:
> >>>
> >>> I'm afraid it can't. For example, when we have a guest booted,
> >>> without too many memory activities. Assume the guest has 8GB free
> >>> memory. The arch_free_page there won't be able to capture the 8GB
> >>> free pages since there is no free() called. This results in no free pages
> reported to host.
> >>
> >> So, it takes some time from when the guest boots up until the balloon
> >> device was initialized and therefore page hinting can start. For that
> >> period, you won't get any arch_free_page()/page hinting callbacks, correct.
> >>
> >> However in the hypervisor, you can theoretically track which pages
> >> the guest actually touched ("dirty"), so you already know "which
> >> pages were never touched while booting up until virtio-balloon was
> >> brought to life". These, you can directly exclude from migration. No
> >> interface required.
> >>
> >> The remaining problem is pages that were touched ("allocated") by the
> >> guest during bootup but freed again, before virtio-balloon came up.
> >> One would have to measure how many pages these usually are, I would
> >> say it would not be that many (because recently freed pages are
> >> likely to be used again next for allocation). However, there are some
> >> pages not being reported.
> >>
> >> During the lifetime of the guest, this should not be a problem,
> >> eventually one of these pages would get allocated/freed again, so the
> >> problem "solves itself over time". You are looking into the special
> >> case of migrating the VM just after it has been started. But we have
> >> the exact same problem also for ordinary free page hinting, so we
> >> should rather solve that problem. It is not migration specific.
> >>
> >> If we are looking for an alternative to "problem solves itself",
> >> something like "if virtio-balloon comes up, it will report all free
> >> pages step by step using free page hinting, just like we would have
> >> from "arch_free_pages()"". This would be the same interface we are
> >> using for free page hinting - and it could even be made configurable in the
> guest.
> >>
> >> The current approach we are discussing internally for details about
> >> Nitesh's work ("how the magic inside arch_free_pages() will work
> >> efficiently") would allow this as far as I can see just fine.
> >>
> >> There would be a tiny little window between virtio-balloon comes up
> >> and it has reported all free pages step by step, but that can be
> >> considered a very special corner case that I would argue is not worth
> >> it to be optimized.
> >>
> >> If I am missing something important here, sorry in advance :)
> >>
> >
> > Probably I didn't explain that well. Please see my re-try:
> >
> > That work is to monitor page allocation and free activities via
> > arch_alloc_pages and arch_free_pages. It has per-CPU lists to record
> > the pages that are freed to the mm free list, and the per-CPU lists
> > dump the recorded pages to a global list when any of them is full.
> > So its per-CPU lists will only see free pages when an mm free() call is
> > made. If we have 8GB of free memory on the mm free list, but no
> > application uses it and thus no mm free() calls are made, then
> > arch_free_pages() isn't called and no free pages are added to the per-CPU
> > lists, even though we have 8GB of free memory sitting right on the mm
> > free list.
> > How would you guarantee the per-CPU lists have got all the free pages
> > that the mm free lists have?
> 
> As I said, if we have some mechanism that will scan the free pages (not
> arch_free_page()) once and report hints using the same mechanism step by
> step (not your bulk interface), this problem is solved. And as I said, this
> is
> not a migration specific problem, we have the same problem in the current
> page hinting RFC. These pages have to be reported.
> 
> >
> > - I'm also worried about the overhead of maintaining so many per-CPU
> > lists and the global list. For example, if we have applications
> > frequently allocate and free 4KB pages, and each per-CPU list needs to
> > implement the buddy algorithm to sort and merge neighbor pages. Today
> > a server can have more than 100 CPUs, then there will be more than 100
> > per-CPU lists which need to sync to a global list under a lock, I'm
> > not sure if this would scale well.
> 
> The overhead in the current RFC is definitely too high. But I consider this a
> problem to be solved before page hinting would go upstream. And we are
> discussing right now "if we have a reasonable page hinting implementation,
> why would we need your interface in addition".
> 
> >
> > - This seems to be a burden imposed on the core mm memory
> > allocation/free path. The whole overhead needs to be carried during
> > the whole system life cycle. What we actually expected is to just make
> > one call to get the free page hints only when live migration happens.

RE: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wang, Wei W
On Friday, June 29, 2018 10:46 PM, Michael S. Tsirkin wrote:
> To: David Hildenbrand 
> Cc: Wang, Wei W ; virtio-...@lists.oasis-open.org;
> linux-ker...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> k...@vger.kernel.org; linux...@kvack.org; mho...@kernel.org;
> a...@linux-foundation.org; torva...@linux-foundation.org;
> pbonz...@redhat.com; liliang.opensou...@gmail.com;
> yang.zhang...@gmail.com; quan@gmail.com; ni...@redhat.com;
> r...@redhat.com; pet...@redhat.com
> Subject: Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting
> 
> On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> > On 25.06.2018 14:05, Wei Wang wrote:
> > > This patch series is separated from the previous "Virtio-balloon
> > > Enhancement" series. The new feature,
> > > VIRTIO_BALLOON_F_FREE_PAGE_HINT, implemented by this series
> enables
> > > the virtio-balloon driver to report hints of guest free pages to the
> > > host. It can be used to accelerate live migration of VMs. Here is an
> introduction of this usage:
> > >
> > > Live migration needs to transfer the VM's memory from the source
> > > machine to the destination round by round. For the 1st round, all
> > > the VM's memory is transferred. From the 2nd round, only the pieces
> > > of memory that were written by the guest (after the 1st round) are
> > > transferred. One method that is popularly used by the hypervisor to
> > > track which part of memory is written is to write-protect all the guest
> memory.
> > >
> > > This feature enables the optimization by skipping the transfer of
> > > guest free pages during VM live migration. It is not concerned that
> > > the memory pages are used after they are given to the hypervisor as
> > > a hint of the free pages, because they will be tracked by the
> > > hypervisor and transferred in the subsequent round if they are used and
> written.
> > >
> > > * Tests
> > > - Test Environment
> > > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > > Guest: 8G RAM, 4 vCPU
> > > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2
> > > second
> > >
> > > - Test Results
> > > - Idle Guest Live Migration Time (results are averaged over 10 runs):
> > > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> > > - Guest with Linux Compilation Workload (make bzImage -j4):
> > > - Live Migration Time (average)
> > >   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
> > > - Linux Compilation Time
> > >   Optimization v.s. Legacy = 5min6s v.s. 5min12s
> > >   --> no obvious difference
> > >
> >
> > Being in version 34 already, this whole thing still looks and feels
> > like a big hack to me. It might just be me, but especially if I read
> > about assumptions like "QEMU will not hotplug memory during
> > migration". This does not feel like a clean solution.
> >
> > I am still not sure if we really need this interface, especially as
> > real free page hinting might be on its way.
> >
> > a) we perform free page hinting by setting all free pages
> > (arch_free_page()) to zero. Migration will detect zero pages and
> > minimize #pages to migrate. I don't think this is a good idea but
> > Michel suggested to do a performance evaluation and Nitesh is looking
> > into that right now.
> 
> Yes this test is needed I think. If we can get most of the benefit without PV
> interfaces, that's nice.
> 
> Wei, I think you need this as part of your performance comparison
> too: set page poisoning value to 0 and enable KSM, compare with your
> patches.

Do you mean live migration with zero pages?
I can first share the amount of memory transferred during live migration
that I observed:
Legacy is around 380MB,
Optimization is around 340MB.
This shows that most pages were already zero and were skipped during the
legacy live migration. But the legacy migration time is still much larger
because zero-page checking is costly.
(It's late night here, I can get you that with my server probably tomorrow)

Best,
Wei







Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand
>> And looking at all the discussions and problems that already happened
>> during the development of this series, I think we should rather look
>> into how clean free page hinting might solve the same problem.
> 
> I'm not sure I follow the logic. We found some neat tricks,
> especially re-using the max-order free pages for reporting.

Let me rephrase: the history of this series has shown that this is some really
complicated stuff. I am asking whether this complexity is actually necessary.

No question that we had a very valuable outcome so far (that especially
is also relevant for other projects like Nitesh's proposal - talking
about virtio requests and locking).

> 
>> If it can't be solved using free page hinting, fair enough.
> 
> I suspect Nitesh will need to find a way not to have mm code
> call out to random drivers or subsystems before that code
> is acceptable.
> 
> 


-- 

Thanks,

David / dhildenb


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Michael S. Tsirkin
On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> On 25.06.2018 14:05, Wei Wang wrote:
> > This patch series is separated from the previous "Virtio-balloon
> > Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
> > implemented by this series enables the virtio-balloon driver to report
> > hints of guest free pages to the host. It can be used to accelerate live
> > migration of VMs. Here is an introduction of this usage:
> > 
> > Live migration needs to transfer the VM's memory from the source machine
> > to the destination round by round. For the 1st round, all the VM's memory
> > is transferred. From the 2nd round, only the pieces of memory that were
> > written by the guest (after the 1st round) are transferred. One method
> > that is popularly used by the hypervisor to track which part of memory is
> > written is to write-protect all the guest memory.
> > 
> > This feature enables the optimization by skipping the transfer of guest
> > free pages during VM live migration. It is not concerned that the memory
> > pages are used after they are given to the hypervisor as a hint of the
> > free pages, because they will be tracked by the hypervisor and transferred
> > in the subsequent round if they are used and written.
> > 
> > * Tests
> > - Test Environment
> > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > Guest: 8G RAM, 4 vCPU
> > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
> > 
> > - Test Results
> > - Idle Guest Live Migration Time (results are averaged over 10 runs):
> > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> > - Guest with Linux Compilation Workload (make bzImage -j4):
> > - Live Migration Time (average)
> >   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
> > - Linux Compilation Time
> >   Optimization v.s. Legacy = 5min6s v.s. 5min12s
> >   --> no obvious difference
> > 
> 
> Being in version 34 already, this whole thing still looks and feels like
> a big hack to me. It might just be me, but especially if I read about
> assumptions like "QEMU will not hotplug memory during migration". This
> does not feel like a clean solution.
> 
> I am still not sure if we really need this interface, especially as real
> free page hinting might be on its way.
> 
> a) we perform free page hinting by setting all free pages
> (arch_free_page()) to zero. Migration will detect zero pages and
> minimize #pages to migrate. I don't think this is a good idea but Michel
> suggested to do a performance evaluation and Nitesh is looking into that
> right now.

Yes this test is needed I think. If we can get most of the benefit
without PV interfaces, that's nice.

Wei, I think you need this as part of your performance comparison
too: set page poisoning value to 0 and enable KSM, compare with
your patches.


> b) we perform free page hinting using something that Nitesh proposed. We
> get in QEMU blocks of free pages that we can MADV_FREE. In addition we
> could e.g. clear the dirty bit of these pages in the dirty bitmap, to
> hinder them from getting migrated. Right now the hinting mechanism is
> synchronous (called from arch_free_page()) but we might be able to
> convert it into something asynchronous.
> 
> So we might be able to completely get rid of this interface.

The way I see it, hinting during alloc/free will always add
overhead which might be unacceptable for some people.  So even with
Nitesh's patches there's value in enabling / disabling hinting
dynamically. And Wei's patches would then be useful to set
the stage where we know the initial page state.
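
As a purely illustrative picture of option (b) quoted above, the host-side
handling of one hinted block could look roughly like the following C sketch.
madvise() and MADV_FREE are real Linux interfaces (MADV_FREE since kernel
4.5, with MADV_DONTNEED as the older alternative); everything else here,
including treating an anonymous mmap() as stand-in guest RAM, is made up for
illustration and is not QEMU's actual machinery.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define GUEST_RAM_SIZE (16UL << 20)   /* 16MB stand-in for guest RAM */
#define PAGE_SHIFT     12
#define BITS_PER_LONG  (8 * sizeof(unsigned long))

static unsigned char *guest_ram;      /* host mapping of the "guest" RAM */
static unsigned long  migration_bitmap[(GUEST_RAM_SIZE >> PAGE_SHIFT) / BITS_PER_LONG];

/* Handle one hinted guest-physical range: let the kernel reclaim the
 * backing memory lazily (MADV_FREE keeps the mapping but drops the
 * contents) and clear the range in the migration bitmap so it is not
 * transmitted in this round.                                            */
static int handle_free_block(unsigned long gpa, unsigned long len)
{
    if (madvise(guest_ram + gpa, len, MADV_FREE) < 0) {
        perror("madvise");
        return -1;
    }
    for (unsigned long pfn = gpa >> PAGE_SHIFT;
         pfn < ((gpa + len) >> PAGE_SHIFT); pfn++)
        migration_bitmap[pfn / BITS_PER_LONG] &= ~(1UL << (pfn % BITS_PER_LONG));
    return 0;
}

int main(void)
{
    guest_ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest_ram == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(migration_bitmap, 0xff, sizeof(migration_bitmap));  /* all pending */

    /* Pretend the guest hinted a free 4MB block at guest-physical 8MB. */
    return handle_free_block(8UL << 20, 4UL << 20) ? 1 : 0;
}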


> And looking at all the discussions and problems that already happened
> during the development of this series, I think we should rather look
> into how clean free page hinting might solve the same problem.

I'm not sure I follow the logic. We found some neat tricks,
especially re-using the max-order free pages for reporting.

> If it can't be solved using free page hinting, fair enough.

I suspect Nitesh will need to find a way not to have mm code
call out to random drivers or subsystems before that code
is acceptable.


-- 
MST


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand
On 29.06.2018 13:31, Wei Wang wrote:
> On 06/29/2018 03:46 PM, David Hildenbrand wrote:
>>>
>>> I'm afraid it can't. For example, when we have a guest booted, without
>>> too many memory activities. Assume the guest has 8GB free memory. The
>>> arch_free_page there won't be able to capture the 8GB free pages since
>>> there is no free() called. This results in no free pages reported to host.
>>
>> So, it takes some time from when the guest boots up until the balloon
>> device was initialized and therefore page hinting can start. For that
>> period, you won't get any arch_free_page()/page hinting callbacks, correct.
>>
>> However in the hypervisor, you can theoretically track which pages the
>> guest actually touched ("dirty"), so you already know "which pages were
>> never touched while booting up until virtio-balloon was brought to
>> life". These, you can directly exclude from migration. No interface
>> required.
>>
>> The remaining problem is pages that were touched ("allocated") by the
>> guest during bootup but freed again, before virtio-balloon came up. One
>> would have to measure how many pages these usually are, I would say it
>> would not be that many (because recently freed pages are likely to be
>> used again next for allocation). However, there are some pages not being
>> reported.
>>
>> During the lifetime of the guest, this should not be a problem,
>> eventually one of these pages would get allocated/freed again, so the
>> problem "solves itself over time". You are looking into the special case
>> of migrating the VM just after it has been started. But we have the
>> exact same problem also for ordinary free page hinting, so we should
>> rather solve that problem. It is not migration specific.
>>
>> If we are looking for an alternative to "problem solves itself",
>> something like "if virtio-balloon comes up, it will report all free
>> pages step by step using free page hinting, just like we would have from
>> "arch_free_pages()"". This would be the same interface we are using for
>> free page hinting - and it could even be made configurable in the guest.
>>
>> The current approach we are discussing internally for details about
>> Nitesh's work ("how the magic inside arch_free_pages() will work
>> efficiently") would allow this as far as I can see just fine.
>>
>> There would be a tiny little window between virtio-balloon comes up and
>> it has reported all free pages step by step, but that can be considered
>> a very special corner case that I would argue is not worth it to be
>> optimized.
>>
>> If I am missing something important here, sorry in advance :)
>>
> 
> Probably I didn't explain that well. Please see my re-try:
> 
> That work is to monitor page allocation and free activities via 
> arch_alloc_pages and arch_free_pages. It has per-CPU lists to record the 
> pages that are freed to the mm free list, and the per-CPU lists dump the 
> recorded pages to a global list when any of them is full.
> So its per-CPU lists will only see free pages when an mm free() call is
> made. If we have 8GB of free memory on the mm free list, but no application
> uses it and thus no mm free() calls are made, then arch_free_pages() isn't
> called and no free pages are added to the per-CPU lists, even though we
> have 8GB of free memory sitting right on the mm free list.
> How would you guarantee the per-CPU lists have got all the free pages 
> that the mm free lists have?

As I said, if we have some mechanism that will scan the free pages (not
arch_free_page()) once and report hints using the same mechanism step by
step (not your bulk interface), this problem is solved. And as I said,
this is not a migration specific problem, we have the same problem in
the current page hinting RFC. These pages have to be reported.

> 
> - I'm also worried about the overhead of maintaining so many per-CPU 
> lists and the global list. For example, if we have applications 
> frequently allocate and free 4KB pages, and each per-CPU list needs to 
> implement the buddy algorithm to sort and merge neighbor pages. Today a 
> server can have more than 100 CPUs, then there will be more than 100 
> per-CPU lists which need to sync to a global list under a lock, I'm not 
> sure if this would scale well.

The overhead in the current RFC is definitely too high. But I consider
this a problem to be solved before page hinting would go upstream. And
we are discussing right now "if we have a reasonable page hinting
implementation, why would we need your interface in addition".

> 
> - This seems to be a burden imposed on the core mm memory 
> allocation/free path. The whole overhead needs to be carried during the 
> whole system life cycle. What we actually expected is to just make one 
> call to get the free page hints only when live migration happens.

You're focusing too much on the actual implementation of the page
hinting RFC right now. Assume for now that we would have
- efficient page hinting without 

Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wei Wang

On 06/29/2018 03:46 PM, David Hildenbrand wrote:


I'm afraid it can't. For example, suppose we have a guest that has just
booted, without much memory activity. Assume the guest has 8GB of free
memory. The arch_free_page() hook there won't be able to capture those 8GB
of free pages since no free() is called. This results in no free pages
being reported to the host.


So, it takes some time from when the guest boots up until the balloon
device was initialized and therefore page hinting can start. For that
period, you won't get any arch_free_page()/page hinting callbacks, correct.

However in the hypervisor, you can theoretically track which pages the
guest actually touched ("dirty"), so you already know "which pages were
never touched while booting up until virtio-balloon was brought to
life". These, you can directly exclude from migration. No interface
required.

The remaining problem is pages that were touched ("allocated") by the
guest during bootup but freed again, before virtio-balloon came up. One
would have to measure how many pages these usually are, I would say it
would not be that many (because recently freed pages are likely to be
used again next for allocation). However, there are some pages not being
reported.

During the lifetime of the guest, this should not be a problem,
eventually one of these pages would get allocated/freed again, so the
problem "solves itself over time". You are looking into the special case
of migrating the VM just after it has been started. But we have the
exact same problem also for ordinary free page hinting, so we should
rather solve that problem. It is not migration specific.

If we are looking for an alternative to "problem solves itself",
something like "if virtio-balloon comes up, it will report all free
pages step by step using free page hinting, just like we would have from
"arch_free_pages()"". This would be the same interface we are using for
free page hinting - and it could even be made configurable in the guest.

The current approach we are discussing internally for details about
Nitesh's work ("how the magic inside arch_free_pages() will work
efficiently") would allow this as far as I can see just fine.

There would be a tiny little window between virtio-balloon comes up and
it has reported all free pages step by step, but that can be considered
a very special corner case that I would argue is not worth it to be
optimized.

If I am missing something important here, sorry in advance :)
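
A hedged sketch of the "no interface required" idea above: in the first
round the hypervisor simply skips pages the guest has never written since
boot, assuming destination RAM starts out zero-filled. All names below are
hypothetical; the "touched" bit would be set by the hypervisor itself, e.g.
on the first write fault for a page.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NPAGES 4096   /* toy guest size */

/* Hypothetical hypervisor-side state: one bit per guest page, set the
 * first time the guest ever writes that page.  No guest interface needed. */
static bool ever_touched[NPAGES];

static void send_page(size_t pfn) { (void)pfn; }   /* stub: stream page out */

/* First migration round: pages never written since boot are skipped
 * outright (the destination sees them as zero-filled RAM).               */
static size_t send_first_round(void)
{
    size_t sent = 0;

    for (size_t pfn = 0; pfn < NPAGES; pfn++)
        if (ever_touched[pfn]) {
            send_page(pfn);
            sent++;
        }
    return sent;
}

int main(void)
{
    ever_touched[0] = ever_touched[42] = true;   /* pretend two pages were written */
    printf("pages sent in round 1: %zu of %d\n", send_first_round(), NPAGES);
    return 0;
}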



Probably I didn't explain that well. Please see my re-try:

That work is to monitor page allocation and free activities via 
arch_alloc_pages and arch_free_pages. It has per-CPU lists to record the 
pages that are freed to the mm free list, and the per-CPU lists dump the 
recorded pages to a global list when any of them is full.
So its per-CPU lists will only see free pages when an mm free() call is
made. If we have 8GB of free memory on the mm free list, but no application
uses it and thus no mm free() calls are made, then arch_free_pages() isn't
called and no free pages are added to the per-CPU lists, even though we have
8GB of free memory sitting right on the mm free list.
How would you guarantee the per-CPU lists have got all the free pages 
that the mm free lists have?


- I'm also worried about the overhead of maintaining so many per-CPU
lists and the global list. For example, if applications frequently
allocate and free 4KB pages, each per-CPU list needs to implement the
buddy algorithm to sort and merge neighboring pages. Today a server can
have more than 100 CPUs, so there will be more than 100 per-CPU lists
that need to sync to a global list under a lock; I'm not sure this
would scale well.


- This seems to be a burden imposed on the core mm memory 
allocation/free path. The whole overhead needs to be carried during the 
whole system life cycle. What we actually expected is to just make one 
call to get the free page hints only when live migration happens.
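
To make the structure under discussion concrete, here is a rough,
self-contained C sketch of the kind of per-CPU recording described above.
This is not Nitesh's actual RFC code; the names, sizes, and the flat global
array are made up for illustration (compile with -pthread). An
arch_free_pages()-time hook appends freed PFNs to a small per-CPU array, and
a full array is flushed to the global list under one lock.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define NR_CPUS       4        /* toy value; the concern is hosts with > 100 CPUs */
#define PER_CPU_SLOTS 256      /* capacity of one per-CPU record                  */
#define GLOBAL_SLOTS  (NR_CPUS * PER_CPU_SLOTS * 16)

struct percpu_log {
    unsigned long pfn[PER_CPU_SLOTS];
    size_t count;
};

static struct percpu_log cpu_log[NR_CPUS];

/* Global list shared by all CPUs, protected by one lock: the cross-CPU
 * synchronization point whose scalability is being questioned above.     */
static unsigned long   global_log[GLOBAL_SLOTS];
static size_t          global_count;
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

static void flush_to_global(struct percpu_log *log)
{
    pthread_mutex_lock(&global_lock);
    for (size_t i = 0; i < log->count && global_count < GLOBAL_SLOTS; i++)
        global_log[global_count++] = log->pfn[i];
    pthread_mutex_unlock(&global_lock);
    log->count = 0;
}

/* Called from the (hypothetical) arch_free_pages()-time hook on CPU 'cpu'. */
void record_freed_page(int cpu, unsigned long pfn)
{
    struct percpu_log *log = &cpu_log[cpu];

    log->pfn[log->count++] = pfn;
    if (log->count == PER_CPU_SLOTS)
        flush_to_global(log);
}

int main(void)
{
    for (unsigned long pfn = 0; pfn < 1000; pfn++)
        record_freed_page(0, pfn);                 /* simulate frees on CPU 0 */
    printf("recorded in global list: %zu\n", global_count);
    return 0;
}

Note that pages which were already free before the hook was installed never
pass through record_freed_page(), which is exactly the 8GB-free-at-boot case
raised above; and every flush serializes on global_lock, which is the
scalability concern with more than 100 CPUs.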


Best,
Wei












Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand
On 29.06.2018 05:51, Wei Wang wrote:
> On 06/27/2018 07:06 PM, David Hildenbrand wrote:
>> On 25.06.2018 14:05, Wei Wang wrote:
>>> This patch series is separated from the previous "Virtio-balloon
>>> Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
>>> implemented by this series enables the virtio-balloon driver to report
>>> hints of guest free pages to the host. It can be used to accelerate live
>>> migration of VMs. Here is an introduction of this usage:
>>>
>>> Live migration needs to transfer the VM's memory from the source machine
>>> to the destination round by round. For the 1st round, all the VM's memory
>>> is transferred. From the 2nd round, only the pieces of memory that were
>>> written by the guest (after the 1st round) are transferred. One method
>>> that is popularly used by the hypervisor to track which part of memory is
>>> written is to write-protect all the guest memory.
>>>
>>> This feature enables the optimization by skipping the transfer of guest
>>> free pages during VM live migration. It is not concerned that the memory
>>> pages are used after they are given to the hypervisor as a hint of the
>>> free pages, because they will be tracked by the hypervisor and transferred
>>> in the subsequent round if they are used and written.
>>>
>>> * Tests
>>> - Test Environment
>>>  Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
>>>  Guest: 8G RAM, 4 vCPU
>>>  Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
>>>
>>> - Test Results
>>>  - Idle Guest Live Migration Time (results are averaged over 10 runs):
>>>  - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
>>>  - Guest with Linux Compilation Workload (make bzImage -j4):
>>>  - Live Migration Time (average)
>>>Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
>>>  - Linux Compilation Time
>>>Optimization v.s. Legacy = 5min6s v.s. 5min12s
>>>--> no obvious difference
>>>
>> Being in version 34 already, this whole thing still looks and feels like
>> a big hack to me. It might just be me, but especially if I read about
>> assumptions like "QEMU will not hotplug memory during migration". This
>> does not feel like a clean solution.
>>
>> I am still not sure if we really need this interface, especially as real
>> free page hinting might be on its way.
>>
>> a) we perform free page hinting by setting all free pages
>> (arch_free_page()) to zero. Migration will detect zero pages and
>> minimize #pages to migrate. I don't think this is a good idea but Michel
>> suggested to do a performance evaluation and Nitesh is looking into that
>> right now.
> 
> The hypervisor doesn't get the zero pages for free. It costs a lot of CPU
> utilization and memory bandwidth (some people have complained about
> this on the QEMU mailing list).
> In the above results, the legacy case already has the zero page feature
> in use.

Indeed, I don't consider this attempt very practical in general,
especially as it would rely on KSM right now, which is frowned upon by
many people.

> 
>>
>> b) we perform free page hinting using something that Nitesh proposed. We
>> get in QEMU blocks of free pages that we can MADV_FREE. In addition we
>> could e.g. clear the dirty bit of these pages in the dirty bitmap, to
>> hinder them from getting migrated. Right now the hinting mechanism is
>> synchronous (called from arch_free_page()) but we might be able to
>> convert it into something asynchronous.
>>
>> So we might be able to completely get rid of this interface. And looking
>> at all the discussions and problems that already happened during the
>> development of this series, I think we should rather look into how clean
>> free page hinting might solve the same problem.
>>
>> If it can't be solved using free page hinting, fair enough.
>>
> 
> I'm afraid it can't. For example, when we have a guest booted, without 
> too many memory activities. Assume the guest has 8GB free memory. The 
> arch_free_page there won't be able to capture the 8GB free pages since 
> there is no free() called. This results in no free pages reported to host.


So, it takes some time from when the guest boots up until the balloon
device was initialized and therefore page hinting can start. For that
period, you won't get any arch_free_page()/page hinting callbacks, correct.

However in the hypervisor, you can theoretically track which pages the
guest actually touched ("dirty"), so you already know "which pages were
never touched while booting up until virtio-balloon was brought to
life". These, you can directly exclude from migration. No interface
required.

The remaining problem is pages that were touched ("allocated") by the
guest during bootup but freed again, before virtio-balloon came up. One
would have to measure how many pages these usually are, I would say it
would not be that many (because recently freed pages are likely to be
used again next 

Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-28 Thread Wei Wang

On 06/27/2018 07:06 PM, David Hildenbrand wrote:

On 25.06.2018 14:05, Wei Wang wrote:

This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.

This feature enables the optimization by skipping the transfer of guest
free pages during VM live migration. It is not concerned that the memory
pages are used after they are given to the hypervisor as a hint of the
free pages, because they will be tracked by the hypervisor and transferred
in the subsequent round if they are used and written.

* Tests
- Test Environment
 Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
 Guest: 8G RAM, 4 vCPU
 Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
 - Idle Guest Live Migration Time (results are averaged over 10 runs):
 - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
 - Guest with Linux Compilation Workload (make bzImage -j4):
 - Live Migration Time (average)
   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
 - Linux Compilation Time
   Optimization v.s. Legacy = 5min6s v.s. 5min12s
   --> no obvious difference


Being in version 34 already, this whole thing still looks and feels like
a big hack to me. It might just be me, but especially if I read about
assumptions like "QEMU will not hotplug memory during migration". This
does not feel like a clean solution.

I am still not sure if we really need this interface, especially as real
free page hinting might be on its way.

a) we perform free page hinting by setting all free pages
(arch_free_page()) to zero. Migration will detect zero pages and
minimize #pages to migrate. I don't think this is a good idea but Michel
suggested to do a performance evaluation and Nitesh is looking into that
right now.


The hypervisor doesn't get the zero pages for free. It costs a lot of CPU
utilization and memory bandwidth (some people have complained about
this on the QEMU mailing list).
In the above results, the legacy case already has the zero page feature
in use.




b) we perform free page hinting using something that Nitesh proposed. We
get in QEMU blocks of free pages that we can MADV_FREE. In addition we
could e.g. clear the dirty bit of these pages in the dirty bitmap, to
hinder them from getting migrated. Right now the hinting mechanism is
synchronous (called from arch_free_page()) but we might be able to
convert it into something asynchronous.

So we might be able to completely get rid of this interface. And looking
at all the discussions and problems that already happened during the
development of this series, I think we should rather look into how clean
free page hinting might solve the same problem.

If it can't be solved using free page hinting, fair enough.



I'm afraid it can't. For example, suppose we have a guest that has just
booted, without much memory activity. Assume the guest has 8GB of free
memory. The arch_free_page() hook there won't be able to capture those 8GB
of free pages since no free() is called. This results in no free pages
being reported to the host.


Best,
Wei


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-27 Thread David Hildenbrand
On 25.06.2018 14:05, Wei Wang wrote:
> This patch series is separated from the previous "Virtio-balloon
> Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
> implemented by this series enables the virtio-balloon driver to report
> hints of guest free pages to the host. It can be used to accelerate live
> migration of VMs. Here is an introduction of this usage:
> 
> Live migration needs to transfer the VM's memory from the source machine
> to the destination round by round. For the 1st round, all the VM's memory
> is transferred. From the 2nd round, only the pieces of memory that were
> written by the guest (after the 1st round) are transferred. One method
> that is popularly used by the hypervisor to track which part of memory is
> written is to write-protect all the guest memory.
> 
> This feature enables the optimization by skipping the transfer of guest
> free pages during VM live migration. It is not concerned that the memory
> pages are used after they are given to the hypervisor as a hint of the
> free pages, because they will be tracked by the hypervisor and transferred
> in the subsequent round if they are used and written.
> 
> * Tests
> - Test Environment
> Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> Guest: 8G RAM, 4 vCPU
> Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
> 
> - Test Results
> - Idle Guest Live Migration Time (results are averaged over 10 runs):
> - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> - Guest with Linux Compilation Workload (make bzImage -j4):
> - Live Migration Time (average)
>   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
> - Linux Compilation Time
>   Optimization v.s. Legacy = 5min6s v.s. 5min12s
>   --> no obvious difference
> 

Being in version 34 already, this whole thing still looks and feels like
a big hack to me. It might just be me, but especially if I read about
assumptions like "QEMU will not hotplug memory during migration". This
does not feel like a clean solution.

I am still not sure if we really need this interface, especially as real
free page hinting might be on its way.

a) we perform free page hinting by setting all free pages
(arch_free_page()) to zero. Migration will detect zero pages and
minimize #pages to migrate. I don't think this is a good idea but Michel
suggested to do a performance evaluation and Nitesh is looking into that
right now.

b) we perform free page hinting using something that Nitesh proposed. We
get in QEMU blocks of free pages that we can MADV_FREE. In addition we
could e.g. clear the dirty bit of these pages in the dirty bitmap, to
hinder them from getting migrated. Right now the hinting mechanism is
synchronous (called from arch_free_page()) but we might be able to
convert it into something asynchronous.

So we might be able to completely get rid of this interface. And looking
at all the discussions and problems that already happened during the
development of this series, I think we should rather look into how clean
free page hinting might solve the same problem.

If it can't be solved using free page hinting, fair enough.


> ChangeLog:
> v33->v34:
> - mm:
> - add a new API max_free_page_blocks, which estimates the max
>   number of free page blocks that a free page list may have
> - get_from_free_page_list: store addresses to multiple arrays,
>   instead of just one array. This removes the limitation of being
>   able to report only 2TB free memory (the largest array memory
>   that can be allocated on x86 is 4MB, which can store 2^19
>   addresses of 4MB free page blocks).
> - virtio-balloon:
> - Allocate multiple arrays to load free page hints;
> - Use the same method in v32 to do guest/host interaction, the
>   differences are
>   - the hints are transferred array by array, instead of
> one by one.
> - send the free page block size of a hint along with the cmd
> id to host, so that host knows each address represents e.g.
> a 4MB memory in our case. 
> v32->v33:
> - mm/get_from_free_page_list: The new implementation to get free page
>   hints based on the suggestions from Linus:
>   https://lkml.org/lkml/2018/6/11/764
>   This avoids the complex call chain, and looks more prudent.
> - virtio-balloon:
>   - use a fixed-size buffer to get free page hints;
>   - remove the cmd id related interface. Now host can just send a free
> page hint command to the guest (via the host_cmd config register)
> to start the reporting. Currently the guest reports only the max
> order free page hints to host, which has generated similar good
> results as before. But the interface used by virtio-balloon to
> report can support reporting more orders in the future 

[PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-25 Thread Wei Wang
This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.

This feature enables the optimization by skipping the transfer of guest
free pages during VM live migration. It is not concerned that the memory
pages are used after they are given to the hypervisor as a hint of the
free pages, because they will be tracked by the hypervisor and transferred
in the subsequent round if they are used and written.

* Tests
- Test Environment
Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
Guest: 8G RAM, 4 vCPU
Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
- Idle Guest Live Migration Time (results are averaged over 10 runs):
- Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
- Guest with Linux Compilation Workload (make bzImage -j4):
- Live Migration Time (average)
  Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
- Linux Compilation Time
  Optimization v.s. Legacy = 5min6s v.s. 5min12s
  --> no obvious difference

ChangeLog:
v33->v34:
- mm:
- add a new API max_free_page_blocks, which estimates the max
  number of free page blocks that a free page list may have
- get_from_free_page_list: store addresses to multiple arrays,
  instead of just one array. This removes the limitation of being
  able to report only 2TB free memory (the largest array memory
  that can be allocated on x86 is 4MB, which can store 2^19
  addresses of 4MB free page blocks).
- virtio-balloon:
- Allocate multiple arrays to load free page hints;
- Use the same method in v32 to do guest/host interaction, the
  differences are
  - the hints are transferred array by array, instead of
one by one.
  - send the free page block size of a hint along with the cmd
id to host, so that host knows each address represents e.g.
a 4MB memory in our case. 
v32->v33:
- mm/get_from_free_page_list: The new implementation to get free page
  hints based on the suggestions from Linus:
  https://lkml.org/lkml/2018/6/11/764
  This avoids the complex call chain, and looks more prudent.
- virtio-balloon: 
  - use a fixed-size buffer to get free page hints;
  - remove the cmd id related interface. Now host can just send a free
page hint command to the guest (via the host_cmd config register)
to start the reporting. Currently the guest reports only the max
order free page hints to host, which has generated similar good
results as before. But the interface used by virtio-balloon to
report can support reporting more orders in the future when there
is a need.
v31->v32:
- virtio-balloon:
- rename cmd_id_use to cmd_id_active;
- report_free_page_func: detach used buffers after host sends a vq
  interrupt, instead of busy waiting for used buffers.
v30->v31:
- virtio-balloon:
- virtio_balloon_send_free_pages: return -EINTR rather than 1 to
  indicate an active stop requested by host; and add more
  comments to explain about access to cmd_id_received without
  locks;
-  add_one_sg: add TODO to comments about possible improvement.
v29->v30:
- mm/walk_free_mem_block: add cond_sched() for each order
v28->v29:
- mm/page_poison: only expose page_poison_enabled(), rather than more
  changes did in v28, as we are not 100% confident about that for now.
- virtio-balloon: use a separate buffer for the stop cmd, instead of
  having the start and stop cmd use the same buffer. This avoids the
  corner case that the start cmd is overridden by the stop cmd when
  the host has a delay in reading the start cmd.
v27->v28:
- mm/page_poison: Move PAGE_POISON to page_poison.c and add a function
  to expose page poison val to kernel modules.
v26->v27:
- add a new patch to expose page_poisoning_enabled to kernel modules
- virtio-balloon: set poison_val to 0x, instead of 0xaa
v25->v26: virtio-balloon changes only
- remove kicking free page vq since the host now polls the vq after
  initiating the reporting
-