Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wei Wang

On 06/25/2018 08:05 PM, Wei Wang wrote:

This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.

This feature enables the optimization by skipping the transfer of guest
free pages during VM live migration. It is not concerned that the memory
pages are used after they are given to the hypervisor as a hint of the
free pages, because they will be tracked by the hypervisor and transferred
in the subsequent round if they are used and written.
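
To make the mechanism concrete, here is a rough sketch of how the host side could consume such a hint (illustrative pseudo-code only; the names and structures below are made up and do not correspond to the actual QEMU implementation):

/* Illustrative sketch only -- not QEMU code. */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

struct ram_state {
    unsigned long *dirty_bitmap;    /* one bit per guest page frame */
    unsigned long  nr_pages;
};

static void clear_dirty(struct ram_state *rs, unsigned long pfn)
{
    rs->dirty_bitmap[pfn / BITS_PER_LONG] &= ~(1UL << (pfn % BITS_PER_LONG));
}

/*
 * Guest reported [pfn, pfn + npages) as free via the hint vq: drop the
 * pages from the current round.  If the guest reuses a page later, write
 * protection marks it dirty again and a later round resends it.
 */
void on_free_page_hint(struct ram_state *rs, unsigned long pfn,
                       unsigned long npages)
{
    unsigned long i;

    for (i = 0; i < npages && pfn + i < rs->nr_pages; i++)
        clear_dirty(rs, pfn + i);
}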

* Tests
- Test Environment
 Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
 Guest: 8G RAM, 4 vCPU
 Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
 - Idle Guest Live Migration Time (results are averaged over 10 runs):
 - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction


According to Michael's comments, here is one more set of data:

Enable page poisoning with value=0, and enable KSM.

The legacy live migration time is 1806ms (averaged across 10 runs).
Compared to the case with this optimization feature in use (i.e. 284ms),
that is still around an 84% reduction.



Best,
Wei


Re: [PATCH vhost] vhost_net: Fix too many vring kick on busypoll

2018-06-29 Thread Michael S. Tsirkin
On Fri, Jun 29, 2018 at 05:09:50PM +0900, Toshiaki Makita wrote:
> Under heavy load vhost busypoll may run without suppressing
> notification. For example tx zerocopy callback can push tx work while
> handle_tx() is running, then busyloop exits due to vhost_has_work()
> condition and enables notification but immediately reenters handle_tx()
> because the pushed work was tx. In this case handle_tx() tries to
> disable notification again, but when using event_idx it by design
> cannot. Then busyloop will run without suppressing notification.
> Another example is the case where handle_tx() tries to enable
> notification but avail idx is advanced so disables it again. This case
> also leads to the same situation with event_idx.
> 
> The problem is that once we enter this situation busyloop does not work
> under heavy load for considerable amount of time, because notification
> is likely to happen during busyloop and handle_tx() immediately enables
> notification after notification happens. Specifically busyloop detects
> notification by vhost_has_work() and then handle_tx() calls
> vhost_enable_notify().

I'd like to understand the problem a bit better.
Why does this happen?
Doesn't this only happen if ring is empty?

> Because the detected work was the tx work, it
> enters handle_tx(), and enters busyloop without suppression again.
> This is likely to be repeated, so with event_idx we are almost not able
> to suppress notification in this case.
> 
> To fix this, poll the work instead of enabling notification when
> busypoll is interrupted by something. IMHO signal_pending() and
> vhost_has_work() are kind of interruptions rather than signals to
> completely cancel the busypoll, so let's run busypoll after the
> necessary work is done. In order to avoid too long busyloop due to
> interruption, save the endtime in vq field and use it when reentering
> the work function.
> 
> I observed this problem makes tx performance poor but does not affect rx. I
> guess it is because rx notification from socket is not that heavy so
> did not impact the performance even when we failed to mask the
> notification. Anyway for consistency I fixed rx routine as well as tx.
> 
> Performance numbers:
> 
> - Bulk transfer from guest to external physical server.
> [Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
> - Set 10us busypoll.
> - Guest disables checksum and TSO because of host XDP.
> - Measured single flow Mbps by netperf, and kicks by perf kvm stat
>   (EPT_MISCONFIG event).
> 
>                             Before               After
>                           Mbps    kicks/s      Mbps    kicks/s
> UDP_STREAM 1472byte                247758                   27
>              Send       3645.37              6958.10
>              Recv       3588.56              6958.10
>               1byte                  9865                   37
>              Send          4.34                 5.43
>              Recv          4.17                 5.26
> TCP_STREAM              8801.03     45794     9592.77      2884
> 
> Signed-off-by: Toshiaki Makita 

Is this with busy poll enabled?
Are there CPU utilization #s?

> ---
>  drivers/vhost/net.c   | 94 +++
>  drivers/vhost/vhost.c |  1 +
>  drivers/vhost/vhost.h |  1 +
>  3 files changed, 66 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index eeaf6739215f..0e85f628b965 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -391,13 +391,14 @@ static inline unsigned long busy_clock(void)
>   return local_clock() >> 10;
>  }
>  
> -static bool vhost_can_busy_poll(struct vhost_dev *dev,
> - unsigned long endtime)
> +static bool vhost_can_busy_poll(unsigned long endtime)
>  {
> - return likely(!need_resched()) &&
> -likely(!time_after(busy_clock(), endtime)) &&
> -likely(!signal_pending(current)) &&
> -!vhost_has_work(dev);
> + return likely(!need_resched() && !time_after(busy_clock(), endtime));
> +}
> +
> +static bool vhost_busy_poll_interrupted(struct vhost_dev *dev)
> +{
> + return unlikely(signal_pending(current)) || vhost_has_work(dev);
>  }
>  
>  static void vhost_net_disable_vq(struct vhost_net *n,
> @@ -437,10 +438,21 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
>  
>   if (r == vq->num && vq->busyloop_timeout) {
>   preempt_disable();
> - endtime = busy_clock() + vq->busyloop_timeout;
> - while (vhost_can_busy_poll(vq->dev, endtime) &&
> -vhost_vq_avail_empty(vq->dev, vq))
> + if (vq->busyloop_endtime) {
> + endtime = vq->busyloop_endtime;
> + vq->busyloop_endtime = 0;
> + } else {
> + endtime = busy_clock() + vq->busyloop_timeout;
> + }
> + while (vhost_can_busy_poll(endtime)) {
> + if 

Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Michael S. Tsirkin
On Fri, Jun 29, 2018 at 03:52:40PM +, Wang, Wei W wrote:
> On Friday, June 29, 2018 10:46 PM, Michael S. Tsirkin wrote:
> > To: David Hildenbrand 
> > Cc: Wang, Wei W ; virtio-...@lists.oasis-open.org;
> > linux-ker...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> > k...@vger.kernel.org; linux...@kvack.org; mho...@kernel.org;
> > a...@linux-foundation.org; torva...@linux-foundation.org;
> > pbonz...@redhat.com; liliang.opensou...@gmail.com;
> > yang.zhang...@gmail.com; quan@gmail.com; ni...@redhat.com;
> > r...@redhat.com; pet...@redhat.com
> > Subject: Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting
> > 
> > On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> > > On 25.06.2018 14:05, Wei Wang wrote:
> > > > This patch series is separated from the previous "Virtio-balloon
> > > > Enhancement" series. The new feature,
> > > > VIRTIO_BALLOON_F_FREE_PAGE_HINT, implemented by this series
> > enables
> > > > the virtio-balloon driver to report hints of guest free pages to the
> > > > host. It can be used to accelerate live migration of VMs. Here is an
> > introduction of this usage:
> > > >
> > > > Live migration needs to transfer the VM's memory from the source
> > > > machine to the destination round by round. For the 1st round, all
> > > > the VM's memory is transferred. From the 2nd round, only the pieces
> > > > of memory that were written by the guest (after the 1st round) are
> > > > transferred. One method that is popularly used by the hypervisor to
> > > > track which part of memory is written is to write-protect all the guest
> > memory.
> > > >
> > > > This feature enables the optimization by skipping the transfer of
> > > > guest free pages during VM live migration. It is not concerned that
> > > > the memory pages are used after they are given to the hypervisor as
> > > > a hint of the free pages, because they will be tracked by the
> > > > hypervisor and transferred in the subsequent round if they are used and
> > written.
> > > >
> > > > * Tests
> > > > - Test Environment
> > > > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > > > Guest: 8G RAM, 4 vCPU
> > > > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2
> > > > second
> > > >
> > > > - Test Results
> > > > - Idle Guest Live Migration Time (results are averaged over 10 
> > > > runs):
> > > > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> > > > - Guest with Linux Compilation Workload (make bzImage -j4):
> > > > - Live Migration Time (average)
> > > >   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% 
> > > > reduction
> > > > - Linux Compilation Time
> > > >   Optimization v.s. Legacy = 5min6s v.s. 5min12s
> > > >   --> no obvious difference
> > > >
> > >
> > > Being in version 34 already, this whole thing still looks and feels
> > > like a big hack to me. It might just be me, but especially if I read
> > > about assumptions like "QEMU will not hotplug memory during
> > > migration". This does not feel like a clean solution.
> > >
> > > I am still not sure if we really need this interface, especially as
> > > real free page hinting might be on its way.
> > >
> > > a) we perform free page hinting by setting all free pages
> > > (arch_free_page()) to zero. Migration will detect zero pages and
> > > minimize #pages to migrate. I don't think this is a good idea but
> > > Michel suggested to do a performance evaluation and Nitesh is looking
> > > into that right now.
> > 
> > Yes this test is needed I think. If we can get most of the benefit without 
> > PV
> > interfaces, that's nice.
> > 
> > Wei, I think you need this as part of your performance comparison
> > too: set page poisoning value to 0 and enable KSM, compare with your
> > patches.
> 
> Do you mean live migration with zero pages?
> I can first share the amount of memory transferred during live migration I 
> saw,
> Legacy is around 380MB,
> Optimization is around 340MB.
> This proves that most pages were already zero and were skipped during the legacy 
> live migration. But the legacy migration time is still much larger because zero-page 
> checking is costly. 
> (It's late night here, I can get you that with my server probably tomorrow)
> 
> Best,
> Wei

Sure thing.

Also we might want to look at optimizing the RLE compressor for
the common case of pages full of zeroes.

Here are some ideas:
https://rusty.ozlabs.org/?p=560

Note Epiphany #2 as well as the comments by Paolo Bonzini and by Victor Kaplansky.
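
To make the zero-page case concrete, here is a toy sketch of run-length encoding where an all-zero 4KiB page collapses into a single 3-byte record (purely illustrative, not the compressor QEMU actually uses):

/* Toy RLE sketch -- illustrative only, not QEMU's migration compressor. */
#include <stdint.h>
#include <stddef.h>

size_t rle_encode(const uint8_t *page, size_t len, uint8_t *out, size_t out_max)
{
    size_t i, o = 0;

    for (i = 0; i < len; ) {
        uint8_t val = page[i];
        size_t run = 1;

        while (i + run < len && page[i + run] == val && run < 0xffff)
            run++;
        if (o + 3 > out_max)
            return 0;             /* did not compress; caller sends the page raw */
        out[o++] = val;           /* run value */
        out[o++] = run & 0xff;    /* run length, 16-bit little-endian */
        out[o++] = run >> 8;
        i += run;
    }
    return o;                     /* encoded size: 3 bytes for an all-zero page */
}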

-- 
MST


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand


>> Why would your suggestion still be applicable?
>>
>> Your point for now is "I might not want to have page hinting enabled due to
>> the overhead, but still a live migration speedup". If that overhead actually
>> exists (we'll have to see) or there might be another reason to disable page
>> hinting, then we have to decide if that specific setup is worth it merging 
>> your
>> changes.
> 
> All the above "if we have", "assume we have" don't sound like a valid 
> argument to me.

Argument? *confused* And that hinders you from answering the question
"Why would your suggestion still be applicable?" ? Well, okay.

So I will answer it by myself: Because somebody would want to disable
page hinting. Maybe there are some people out there.

>  
>> I am not (and don't want to be) in the position to make any decisions here 
>> :) I
>> just want to understand if two interfaces for free pages actually make sense.
> 
> I responded to Nitesh about the differences, you may want to check with him 
> about this.
> I would suggest you send out your patches to LKML to get a discussion with 
> the mm folks.

Indeed, Nitesh is trying to solve the problems we found in the RFC, so
this can take some time.

> 
> Best,
> Wei
> 


-- 

Thanks,

David / dhildenb


RE: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wang, Wei W
On Friday, June 29, 2018 7:54 PM, David Hildenbrand wrote:
> On 29.06.2018 13:31, Wei Wang wrote:
> > On 06/29/2018 03:46 PM, David Hildenbrand wrote:
> >>>
> >>> I'm afraid it can't. For example, when we have a guest booted,
> >>> without too many memory activities. Assume the guest has 8GB free
> >>> memory. The arch_free_page there won't be able to capture the 8GB
> >>> free pages since there is no free() called. This results in no free pages
> reported to host.
> >>
> >> So, it takes some time from when the guest boots up until the balloon
> >> device was initialized and therefore page hinting can start. For that
> >> period, you won't get any arch_free_page()/page hinting callbacks, correct.
> >>
> >> However in the hypervisor, you can theoretically track which pages
> >> the guest actually touched ("dirty"), so you already know "which
> >> pages were never touched while booting up until virtio-balloon was
> >> brought to life". These, you can directly exclude from migration. No
> >> interface required.
> >>
> >> The remaining problem is pages that were touched ("allocated") by the
> >> guest during bootup but freed again, before virtio-balloon came up.
> >> One would have to measure how many pages these usually are, I would
> >> say it would not be that many (because recently freed pages are
> >> likely to be used again next for allocation). However, there are some
> >> pages not being reported.
> >>
> >> During the lifetime of the guest, this should not be a problem,
> >> eventually one of these pages would get allocated/freed again, so the
> >> problem "solves itself over time". You are looking into the special
> >> case of migrating the VM just after it has been started. But we have
> >> the exact same problem also for ordinary free page hinting, so we
> >> should rather solve that problem. It is not migration specific.
> >>
> >> If we are looking for an alternative to "problem solves itself",
> >> something like "if virtio-balloon comes up, it will report all free
> >> pages step by step using free page hinting, just like we would have
> >> from "arch_free_pages()"". This would be the same interface we are
> >> using for free page hinting - and it could even be made configurable in the
> guest.
> >>
> >> The current approach we are discussing internally for details about
> >> Nitesh's work ("how the magic inside arch_free_pages() will work
> >> efficiently") would allow this as far as I can see just fine.
> >>
> >> There would be a tiny little window between virtio-balloon comes up
> >> and it has reported all free pages step by step, but that can be
> >> considered a very special corner case that I would argue is not worth
> >> it to be optimized.
> >>
> >> If I am missing something important here, sorry in advance :)
> >>
> >
> > Probably I didn't explain that well. Please see my re-try:
> >
> > That work is to monitor page allocation and free activities via
> > arch_alloc_pages and arch_free_pages. It has per-CPU lists to record
> > the pages that are freed to the mm free list, and the per-CPU lists
> > dump the recorded pages to a global list when any of them is full.
> > So its own per-CPU list will only be able to get free pages when an mm
> > free() function gets called. If we have 8GB free memory on the mm free
> > list, but no application uses it and thus no mm free() calls are made,
> > then arch_free_pages isn't called and no free pages are added to the
> > per-CPU list, even though we have 8GB of free memory right on the mm
> > free list.
> > How would you guarantee the per-CPU lists have got all the free pages
> > that the mm free lists have?
> 
> As I said, if we have some mechanism that will scan the free pages (not
> arch_free_page()) once and report hints using the same mechanism step by
> step (not your bulk interface), this problem is solved. And as I said, this 
> is
> not a migration specific problem, we have the same problem in the current
> page hinting RFC. These pages have to be reported.
> 
> >
> > - I'm also worried about the overhead of maintaining so many per-CPU
> > lists and the global list. For example, if we have applications
> > frequently allocate and free 4KB pages, and each per-CPU list needs to
> > implement the buddy algorithm to sort and merge neighbor pages. Today
> > a server can have more than 100 CPUs, then there will be more than 100
> > per-CPU lists which need to sync to a global list under a lock, I'm
> > not sure if this would scale well.
> 
> The overhead in the current RFC is definitely too high. But I consider this a
> problem to be solved before page hinting would go upstream. And we are
> discussing right now "if we have a reasonable page hinting implementation,
> why would we need your interface in addition".
> 
> >
> > - This seems to be a burden imposed on the core mm memory
> > allocation/free path. The whole overhead needs to be carried during
> > the whole system life cycle. What we actually expected is to just make
> > one call 

RE: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wang, Wei W
On Friday, June 29, 2018 10:46 PM, Michael S. Tsirkin wrote:
> To: David Hildenbrand 
> Cc: Wang, Wei W ; virtio-...@lists.oasis-open.org;
> linux-ker...@vger.kernel.org; virtualization@lists.linux-foundation.org;
> k...@vger.kernel.org; linux...@kvack.org; mho...@kernel.org;
> a...@linux-foundation.org; torva...@linux-foundation.org;
> pbonz...@redhat.com; liliang.opensou...@gmail.com;
> yang.zhang...@gmail.com; quan@gmail.com; ni...@redhat.com;
> r...@redhat.com; pet...@redhat.com
> Subject: Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting
> 
> On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> > On 25.06.2018 14:05, Wei Wang wrote:
> > > This patch series is separated from the previous "Virtio-balloon
> > > Enhancement" series. The new feature,
> > > VIRTIO_BALLOON_F_FREE_PAGE_HINT, implemented by this series
> enables
> > > the virtio-balloon driver to report hints of guest free pages to the
> > > host. It can be used to accelerate live migration of VMs. Here is an
> introduction of this usage:
> > >
> > > Live migration needs to transfer the VM's memory from the source
> > > machine to the destination round by round. For the 1st round, all
> > > the VM's memory is transferred. From the 2nd round, only the pieces
> > > of memory that were written by the guest (after the 1st round) are
> > > transferred. One method that is popularly used by the hypervisor to
> > > track which part of memory is written is to write-protect all the guest
> memory.
> > >
> > > This feature enables the optimization by skipping the transfer of
> > > guest free pages during VM live migration. It is not concerned that
> > > the memory pages are used after they are given to the hypervisor as
> > > a hint of the free pages, because they will be tracked by the
> > > hypervisor and transferred in the subsequent round if they are used and
> written.
> > >
> > > * Tests
> > > - Test Environment
> > > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > > Guest: 8G RAM, 4 vCPU
> > > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2
> > > second
> > >
> > > - Test Results
> > > - Idle Guest Live Migration Time (results are averaged over 10 runs):
> > > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> > > - Guest with Linux Compilation Workload (make bzImage -j4):
> > > - Live Migration Time (average)
> > >   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
> > > - Linux Compilation Time
> > >   Optimization v.s. Legacy = 5min6s v.s. 5min12s
> > >   --> no obvious difference
> > >
> >
> > Being in version 34 already, this whole thing still looks and feels
> > like a big hack to me. It might just be me, but especially if I read
> > about assumptions like "QEMU will not hotplug memory during
> > migration". This does not feel like a clean solution.
> >
> > I am still not sure if we really need this interface, especially as
> > real free page hinting might be on its way.
> >
> > a) we perform free page hinting by setting all free pages
> > (arch_free_page()) to zero. Migration will detect zero pages and
> > minimize #pages to migrate. I don't think this is a good idea but
> > Michel suggested to do a performance evaluation and Nitesh is looking
> > into that right now.
> 
> Yes this test is needed I think. If we can get most of the benefit without PV
> interfaces, that's nice.
> 
> Wei, I think you need this as part of your performance comparison
> too: set page poisoning value to 0 and enable KSM, compare with your
> patches.

Do you mean live migration with zero pages?
I can first share the amount of memory transferred during live migration I saw,
Legacy is around 380MB,
Optimization is around 340MB.
This proves that most pages were already zero and were skipped during the legacy 
live migration. But the legacy migration time is still much larger because zero-page 
checking is costly. 
(It's late night here, I can get you that with my server probably tomorrow)
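
To illustrate where that cost comes from: even a simple check like the sketch below has to read every word of every candidate page on each iteration (simplified and illustrative only; the real implementation is vectorized, but the memory traffic remains):

/* Simplified zero-page check -- illustrative only. */
#include <stdbool.h>
#include <stddef.h>

static bool page_is_zero(const unsigned long *page, size_t page_size)
{
    size_t i, n = page_size / sizeof(unsigned long);

    for (i = 0; i < n; i++) {
        if (page[i])
            return false;
    }
    return true;
}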

Best,
Wei







Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand
>> And looking at all the discussions and problems that already happened
>> during the development of this series, I think we should rather look
>> into how clean free page hinting might solve the same problem.
> 
> I'm not sure I follow the logic. We found some neat tricks,
> especially re-using the max order free page for reporting.

Let me rephrase: history of this series showed that this is some really
complicated stuff. I am asking if this complexity is actually necessary.

No question that we had a very valuable outcome so far (that especially
is also relevant for other projects like Nitesh's proposal - talking
about virtio requests and locking).

> 
>> If it can't be solved using free page hinting, fair enough.
> 
> I suspect Nitesh will need to find a way not to have mm code
> call out to random drivers or subsystems before that code
> is acceptable.
> 
> 


-- 

Thanks,

David / dhildenb


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Michael S. Tsirkin
On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> On 25.06.2018 14:05, Wei Wang wrote:
> > This patch series is separated from the previous "Virtio-balloon
> > Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
> > implemented by this series enables the virtio-balloon driver to report
> > hints of guest free pages to the host. It can be used to accelerate live
> > migration of VMs. Here is an introduction of this usage:
> > 
> > Live migration needs to transfer the VM's memory from the source machine
> > to the destination round by round. For the 1st round, all the VM's memory
> > is transferred. From the 2nd round, only the pieces of memory that were
> > written by the guest (after the 1st round) are transferred. One method
> > that is popularly used by the hypervisor to track which part of memory is
> > written is to write-protect all the guest memory.
> > 
> > This feature enables the optimization by skipping the transfer of guest
> > free pages during VM live migration. It is not concerned that the memory
> > pages are used after they are given to the hypervisor as a hint of the
> > free pages, because they will be tracked by the hypervisor and transferred
> > in the subsequent round if they are used and written.
> > 
> > * Tests
> > - Test Environment
> > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > Guest: 8G RAM, 4 vCPU
> > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
> > 
> > - Test Results
> > - Idle Guest Live Migration Time (results are averaged over 10 runs):
> > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
> > - Guest with Linux Compilation Workload (make bzImage -j4):
> > - Live Migration Time (average)
> >   Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
> > - Linux Compilation Time
> >   Optimization v.s. Legacy = 5min6s v.s. 5min12s
> >   --> no obvious difference
> > 
> 
> Being in version 34 already, this whole thing still looks and feels like
> a big hack to me. It might just be me, but especially if I read about
> assumptions like "QEMU will not hotplug memory during migration". This
> does not feel like a clean solution.
> 
> I am still not sure if we really need this interface, especially as real
> free page hinting might be on its way.
> 
> a) we perform free page hinting by setting all free pages
> (arch_free_page()) to zero. Migration will detect zero pages and
> minimize #pages to migrate. I don't think this is a good idea but Michel
> suggested to do a performance evaluation and Nitesh is looking into that
> right now.

Yes this test is needed I think. If we can get most of the benefit
without PV interfaces, that's nice.

Wei, I think you need this as part of your performance comparison
too: set page poisoning value to 0 and enable KSM, compare with
your patches.


> b) we perform free page hinting using something that Nitesh proposed. We
> get in QEMU blocks of free pages that we can MADV_FREE. In addition we
> could e.g. clear the dirty bit of these pages in the dirty bitmap, to
> hinder them from getting migrated. Right now the hinting mechanism is
> synchronous (called from arch_free_page()) but we might be able to
> convert it into something asynchronous.
> 
> So we might be able to completely get rid of this interface.

The way I see it, hinting during alloc/free will always add
overhead which might be unacceptable for some people.  So even with
Nitesh's patches there's value in enabling / disabling hinting
dynamically. And Wei's patches would then be useful to set
the stage where we know the initial page state.


> And looking at all the discussions and problems that already happened
> during the development of this series, I think we should rather look
> into how clean free page hinting might solve the same problem.

I'm not sure I follow the logic. We found some neat tricks,
especially re-using the max order free page for reporting.

> If it can't be solved using free page hinting, fair enough.

I suspect Nitesh will need to find a way not to have mm code
call out to random drivers or subsystems before that code
is acceptable.


-- 
MST


Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand
On 29.06.2018 13:31, Wei Wang wrote:
> On 06/29/2018 03:46 PM, David Hildenbrand wrote:
>>>
>>> I'm afraid it can't. For example, when we have a guest booted, without
>>> too many memory activities. Assume the guest has 8GB free memory. The
>>> arch_free_page there won't be able to capture the 8GB free pages since
>>> there is no free() called. This results in no free pages reported to host.
>>
>> So, it takes some time from when the guest boots up until the balloon
>> device was initialized and therefore page hinting can start. For that
>> period, you won't get any arch_free_page()/page hinting callbacks, correct.
>>
>> However in the hypervisor, you can theoretically track which pages the
>> guest actually touched ("dirty"), so you already know "which pages were
>> never touched while booting up until virtio-balloon was brought to
>> life". These, you can directly exclude from migration. No interface
>> required.
>>
>> The remaining problem is pages that were touched ("allocated") by the
>> guest during bootup but freed again, before virtio-balloon came up. One
>> would have to measure how many pages these usually are, I would say it
>> would not be that many (because recently freed pages are likely to be
>> used again next for allocation). However, there are some pages not being
>> reported.
>>
>> During the lifetime of the guest, this should not be a problem,
>> eventually one of these pages would get allocated/freed again, so the
>> problem "solves itself over time". You are looking into the special case
>> of migrating the VM just after it has been started. But we have the
>> exact same problem also for ordinary free page hinting, so we should
>> rather solve that problem. It is not migration specific.
>>
>> If we are looking for an alternative to "problem solves itself",
>> something like "if virtio-balloon comes up, it will report all free
>> pages step by step using free page hinting, just like we would have from
>> "arch_free_pages()"". This would be the same interface we are using for
>> free page hinting - and it could even be made configurable in the guest.
>>
>> The current approach we are discussing internally for details about
>> Nitesh's work ("how the magic inside arch_free_pages() will work
>> efficiently") would allow this as far as I can see just fine.
>>
>> There would be a tiny little window between virtio-balloon comes up and
>> it has reported all free pages step by step, but that can be considered
>> a very special corner case that I would argue is not worth it to be
>> optimized.
>>
>> If I am missing something important here, sorry in advance :)
>>
> 
> Probably I didn't explain that well. Please see my re-try:
> 
> That work is to monitor page allocation and free activities via 
> arch_alloc_pages and arch_free_pages. It has per-CPU lists to record the 
> pages that are freed to the mm free list, and the per-CPU lists dump the 
> recorded pages to a global list when any of them is full.
> So its own per-CPU list will only be able to get free pages when an mm
> free() function gets called. If we have 8GB free memory on the mm free
> list, but no application uses it and thus no mm free() calls are made,
> then arch_free_pages isn't called and no free pages are added to the
> per-CPU list, even though we have 8GB of free memory right on the mm
> free list.
> How would you guarantee the per-CPU lists have got all the free pages 
> that the mm free lists have?

As I said, if we have some mechanism that will scan the free pages (not
arch_free_page()) once and report hints using the same mechanism step by
step (not your bulk interface), this problem is solved. And as I said,
this is not a migration specific problem, we have the same problem in
the current page hinting RFC. These pages have to be reported.

> 
> - I'm also worried about the overhead of maintaining so many per-CPU 
> lists and the global list. For example, if we have applications 
> frequently allocate and free 4KB pages, and each per-CPU list needs to 
> implement the buddy algorithm to sort and merge neighbor pages. Today a 
> server can have more than 100 CPUs, then there will be more than 100 
> per-CPU lists which need to sync to a global list under a lock, I'm not 
> sure if this would scale well.

The overhead in the current RFC is definitely too high. But I consider
this a problem to be solved before page hinting would go upstream. And
we are discussing right now "if we have a reasonable page hinting
implementation, why would we need your interface in addition".

> 
> - This seems to be a burden imposed on the core mm memory 
> allocation/free path. The whole overhead needs to be carried during the 
> whole system life cycle. What we actually expected is to just make one 
> call to get the free page hints only when live migration happens.

You're focusing too much on the actual implementation of the page
hinting RFC right now. Assume for now that we would have
- efficient page hinting without 

Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread Wei Wang

On 06/29/2018 03:46 PM, David Hildenbrand wrote:


I'm afraid it can't. For example, suppose we have a guest that has just
booted, without too many memory activities, and assume the guest has 8GB
of free memory. arch_free_page won't be able to capture those 8GB of free
pages since there is no free() called. This results in no free pages being
reported to the host.


So, it takes some time from when the guest boots up until the balloon
device was initialized and therefore page hinting can start. For that
period, you won't get any arch_free_page()/page hinting callbacks, correct.

However in the hypervisor, you can theoretically track which pages the
guest actually touched ("dirty"), so you already know "which pages were
never touched while booting up until virtio-balloon was brought to
life". These, you can directly exclude from migration. No interface
required.

The remaining problem is pages that were touched ("allocated") by the
guest during bootup but freed again, before virtio-balloon came up. One
would have to measure how many pages these usually are, I would say it
would not be that many (because recently freed pages are likely to be
used again next for allocation). However, there are some pages not being
reported.

During the lifetime of the guest, this should not be a problem,
eventually one of these pages would get allocated/freed again, so the
problem "solves itself over time". You are looking into the special case
of migrating the VM just after it has been started. But we have the
exact same problem also for ordinary free page hinting, so we should
rather solve that problem. It is not migration specific.

If we are looking for an alternative to "problem solves itself",
something like "if virtio-balloon comes up, it will report all free
pages step by step using free page hinting, just like we would have from
"arch_free_pages()"". This would be the same interface we are using for
free page hinting - and it could even be made configurable in the guest.

The current approach we are discussing internally for details about
Nitesh's work ("how the magic inside arch_free_pages() will work
efficiently") would allow this as far as I can see just fine.

There would be a tiny little window between virtio-balloon comes up and
it has reported all free pages step by step, but that can be considered
a very special corner case that I would argue is not worth it to be
optimized.

If I am missing something important here, sorry in advance :)



Probably I didn't explain that well. Please see my re-try:

That work is to monitor page allocation and free activities via 
arch_alloc_pages and arch_free_pages. It has per-CPU lists to record the 
pages that are freed to the mm free list, and the per-CPU lists dump the 
recorded pages to a global list when any of them is full.
So its own per-CPU list will only be able to get free pages when an mm
free() function gets called. If we have 8GB free memory on the mm free
list, but no application uses it and thus no mm free() calls are made,
then arch_free_pages isn't called and no free pages are added to the
per-CPU list, even though we have 8GB of free memory right on the mm
free list.
How would you guarantee the per-CPU lists have got all the free pages 
that the mm free lists have?


- I'm also worried about the overhead of maintaining so many per-CPU
lists and the global list. For example, applications may frequently
allocate and free 4KB pages, and each per-CPU list needs to implement
the buddy algorithm to sort and merge neighbor pages. Today a server
can have more than 100 CPUs, so there will be more than 100 per-CPU
lists which need to sync to a global list under a lock; I'm not sure
this would scale well.


- This seems to be a burden imposed on the core mm memory 
allocation/free path. The whole overhead needs to be carried during the 
whole system life cycle. What we actually expected is to just make one 
call to get the free page hints only when live migration happens.
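
For reference, here is a rough sketch of the per-CPU recording scheme described above (the names, sizes and hook are purely illustrative and not taken from Nitesh's RFC):

/* Rough, illustrative sketch of per-CPU free-page recording; not RFC code. */
#include <linux/percpu.h>
#include <linux/spinlock.h>

#define PCPU_REC_CAP 256

struct pcpu_free_rec {
    unsigned long pfns[PCPU_REC_CAP];
    unsigned int  nr;
};

static DEFINE_PER_CPU(struct pcpu_free_rec, free_rec);
static DEFINE_SPINLOCK(global_rec_lock);

/* Hypothetical hook, called from the page free path (preemption disabled). */
static void record_freed_page(unsigned long pfn)
{
    struct pcpu_free_rec *rec = this_cpu_ptr(&free_rec);

    rec->pfns[rec->nr++] = pfn;
    if (rec->nr == PCPU_REC_CAP) {
        spin_lock(&global_rec_lock);
        /* ... sort/merge the batch into a global list (the costly part) ... */
        spin_unlock(&global_rec_lock);
        rec->nr = 0;
    }
}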


Best,
Wei












Re: [PATCH vhost] vhost_net: Fix too many vring kick on busypoll

2018-06-29 Thread Jason Wang



On 2018年06月29日 16:09, Toshiaki Makita wrote:

Under heavy load vhost busypoll may run without suppressing
notification. For example tx zerocopy callback can push tx work while
handle_tx() is running, then busyloop exits due to vhost_has_work()
condition and enables notification but immediately reenters handle_tx()
because the pushed work was tx. In this case handle_tx() tries to
disable notification again, but when using event_idx it by design
cannot. Then busyloop will run without suppressing notification.
Another example is the case where handle_tx() tries to enable
notification but avail idx is advanced so disables it again. This case
also leads to the same situation with event_idx.


Good catch.



The problem is that once we enter this situation busyloop does not work
under heavy load for considerable amount of time, because notification
is likely to happen during busyloop and handle_tx() immediately enables
notification after notification happens. Specifically busyloop detects
notification by vhost_has_work() and then handle_tx() calls
vhost_enable_notify(). Because the detected work was the tx work, it
enters handle_tx(), and enters busyloop without suppression again.
This is likely to be repeated, so with event_idx we are almost not able
to suppress notification in this case.

To fix this, poll the work instead of enabling notification when
busypoll is interrupted by something. IMHO signal_pending() and
vhost_has_work() are kind of interruptions rather than signals to
completely cancel the busypoll, so let's run busypoll after the
necessary work is done. In order to avoid too long busyloop due to
interruption, save the endtime in vq field and use it when reentering
the work function.


I think we don't care long busyloop unless e.g tx can starve rx?



I observed this problem makes tx performance poor but does not affect rx. I
guess it is because rx notification from socket is not that heavy so
did not impact the performance even when we failed to mask the
notification.


I think the reason is:

For tx, we may only process used ring under handle_tx(). Busy polling 
code can not recognize this case.
For rx, we call peek_head_len() after exiting busy loop, so if the work 
is for rx, it can be processed in handle_rx() immediately.



Anyway for consistency I fixed rx routine as well as tx.

Performance numbers:

- Bulk transfer from guest to external physical server.
 [Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]


Just to confirm: in this case, since zerocopy is enabled, we in fact
use the generic XDP datapath?



- Set 10us busypoll.
- Guest disables checksum and TSO because of host XDP.
- Measured single flow Mbps by netperf, and kicks by perf kvm stat
   (EPT_MISCONFIG event).

                            Before               After
                          Mbps    kicks/s      Mbps    kicks/s
UDP_STREAM 1472byte                247758                   27
             Send       3645.37              6958.10
             Recv       3588.56              6958.10
              1byte                  9865                   37
             Send          4.34                 5.43
             Recv          4.17                 5.26
TCP_STREAM              8801.03     45794     9592.77      2884


Impressive numbers.



Signed-off-by: Toshiaki Makita 
---
  drivers/vhost/net.c   | 94 +++
  drivers/vhost/vhost.c |  1 +
  drivers/vhost/vhost.h |  1 +
  3 files changed, 66 insertions(+), 30 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index eeaf6739215f..0e85f628b965 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -391,13 +391,14 @@ static inline unsigned long busy_clock(void)
return local_clock() >> 10;
  }
  
-static bool vhost_can_busy_poll(struct vhost_dev *dev,

-   unsigned long endtime)
+static bool vhost_can_busy_poll(unsigned long endtime)
  {
-   return likely(!need_resched()) &&
-  likely(!time_after(busy_clock(), endtime)) &&
-  likely(!signal_pending(current)) &&
-  !vhost_has_work(dev);
+   return likely(!need_resched() && !time_after(busy_clock(), endtime));
+}
+
+static bool vhost_busy_poll_interrupted(struct vhost_dev *dev)
+{
+   return unlikely(signal_pending(current)) || vhost_has_work(dev);
  }
  
  static void vhost_net_disable_vq(struct vhost_net *n,

@@ -437,10 +438,21 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
  
  	if (r == vq->num && vq->busyloop_timeout) {

preempt_disable();
-   endtime = busy_clock() + vq->busyloop_timeout;
-   while (vhost_can_busy_poll(vq->dev, endtime) &&
-  vhost_vq_avail_empty(vq->dev, vq))
+   if (vq->busyloop_endtime) {
+   endtime = vq->busyloop_endtime;
+   vq->busyloop_endtime = 0;


Looks like endtime may be before current time. So we skip busy loop in 

[PATCH vhost] vhost_net: Fix too many vring kick on busypoll

2018-06-29 Thread Toshiaki Makita
Under heavy load vhost busypoll may run without suppressing
notification. For example tx zerocopy callback can push tx work while
handle_tx() is running, then busyloop exits due to vhost_has_work()
condition and enables notification but immediately reenters handle_tx()
because the pushed work was tx. In this case handle_tx() tries to
disable notification again, but when using event_idx it by design
cannot. Then busyloop will run without suppressing notification.
Another example is the case where handle_tx() tries to enable
notification but avail idx is advanced so disables it again. This case
also leads to the same situation with event_idx.

The problem is that once we enter this situation busyloop does not work
under heavy load for considerable amount of time, because notification
is likely to happen during busyloop and handle_tx() immediately enables
notification after notification happens. Specifically busyloop detects
notification by vhost_has_work() and then handle_tx() calls
vhost_enable_notify(). Because the detected work was the tx work, it
enters handle_tx(), and enters busyloop without suppression again.
This is likely to be repeated, so with event_idx we are almost not able
to suppress notification in this case.

To fix this, poll the work instead of enabling notification when
busypoll is interrupted by something. IMHO signal_pending() and
vhost_has_work() are kind of interruptions rather than signals to
completely cancel the busypoll, so let's run busypoll after the
necessary work is done. In order to avoid too long busyloop due to
interruption, save the endtime in vq field and use it when reentering
the work function.

I observed this problem makes tx performance poor but does not affect rx. I
guess it is because rx notification from socket is not that heavy so
did not impact the performance even when we failed to mask the
notification. Anyway for consistency I fixed rx routine as well as tx.

Performance numbers:

- Bulk transfer from guest to external physical server.
[Guest]->vhost_net->tap--(XDP_REDIRECT)-->i40e --(wire)--> [Server]
- Set 10us busypoll.
- Guest disables checksum and TSO because of host XDP.
- Measured single flow Mbps by netperf, and kicks by perf kvm stat
  (EPT_MISCONFIG event).

                            Before               After
                          Mbps    kicks/s      Mbps    kicks/s
UDP_STREAM 1472byte                247758                   27
             Send       3645.37              6958.10
             Recv       3588.56              6958.10
              1byte                  9865                   37
             Send          4.34                 5.43
             Recv          4.17                 5.26
TCP_STREAM              8801.03     45794     9592.77      2884

Signed-off-by: Toshiaki Makita 
---
 drivers/vhost/net.c   | 94 +++
 drivers/vhost/vhost.c |  1 +
 drivers/vhost/vhost.h |  1 +
 3 files changed, 66 insertions(+), 30 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index eeaf6739215f..0e85f628b965 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -391,13 +391,14 @@ static inline unsigned long busy_clock(void)
return local_clock() >> 10;
 }
 
-static bool vhost_can_busy_poll(struct vhost_dev *dev,
-   unsigned long endtime)
+static bool vhost_can_busy_poll(unsigned long endtime)
 {
-   return likely(!need_resched()) &&
-  likely(!time_after(busy_clock(), endtime)) &&
-  likely(!signal_pending(current)) &&
-  !vhost_has_work(dev);
+   return likely(!need_resched() && !time_after(busy_clock(), endtime));
+}
+
+static bool vhost_busy_poll_interrupted(struct vhost_dev *dev)
+{
+   return unlikely(signal_pending(current)) || vhost_has_work(dev);
 }
 
 static void vhost_net_disable_vq(struct vhost_net *n,
@@ -437,10 +438,21 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
 
if (r == vq->num && vq->busyloop_timeout) {
preempt_disable();
-   endtime = busy_clock() + vq->busyloop_timeout;
-   while (vhost_can_busy_poll(vq->dev, endtime) &&
-  vhost_vq_avail_empty(vq->dev, vq))
+   if (vq->busyloop_endtime) {
+   endtime = vq->busyloop_endtime;
+   vq->busyloop_endtime = 0;
+   } else {
+   endtime = busy_clock() + vq->busyloop_timeout;
+   }
+   while (vhost_can_busy_poll(endtime)) {
+   if (vhost_busy_poll_interrupted(vq->dev)) {
+   vq->busyloop_endtime = endtime;
+   break;
+   }
+   if (!vhost_vq_avail_empty(vq->dev, vq))
+   break;
cpu_relax();
+   }
preempt_enable();
r = vhost_get_vq_desc(vq, vq->iov, 

Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-29 Thread David Hildenbrand
On 29.06.2018 05:51, Wei Wang wrote:
> On 06/27/2018 07:06 PM, David Hildenbrand wrote:
>> On 25.06.2018 14:05, Wei Wang wrote:
>>> This patch series is separated from the previous "Virtio-balloon
>>> Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
>>> implemented by this series enables the virtio-balloon driver to report
>>> hints of guest free pages to the host. It can be used to accelerate live
>>> migration of VMs. Here is an introduction of this usage:
>>>
>>> Live migration needs to transfer the VM's memory from the source machine
>>> to the destination round by round. For the 1st round, all the VM's memory
>>> is transferred. From the 2nd round, only the pieces of memory that were
>>> written by the guest (after the 1st round) are transferred. One method
>>> that is popularly used by the hypervisor to track which part of memory is
>>> written is to write-protect all the guest memory.
>>>
>>> This feature enables the optimization by skipping the transfer of guest
>>> free pages during VM live migration. It is not concerned that the memory
>>> pages are used after they are given to the hypervisor as a hint of the
>>> free pages, because they will be tracked by the hypervisor and transferred
>>> in the subsequent round if they are used and written.
>>>
>>> * Tests
>>> - Test Environment
>>>  Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
>>>  Guest: 8G RAM, 4 vCPU
>>>  Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
>>>
>>> - Test Results
>>>  - Idle Guest Live Migration Time (results are averaged over 10 runs):
>>>  - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction
>>>  - Guest with Linux Compilation Workload (make bzImage -j4):
>>>  - Live Migration Time (average)
>>>Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction
>>>  - Linux Compilation Time
>>>Optimization v.s. Legacy = 5min6s v.s. 5min12s
>>>--> no obvious difference
>>>
>> Being in version 34 already, this whole thing still looks and feels like
>> a big hack to me. It might just be me, but especially if I read about
>> assumptions like "QEMU will not hotplug memory during migration". This
>> does not feel like a clean solution.
>>
>> I am still not sure if we really need this interface, especially as real
>> free page hinting might be on its way.
>>
>> a) we perform free page hinting by setting all free pages
>> (arch_free_page()) to zero. Migration will detect zero pages and
>> minimize #pages to migrate. I don't think this is a good idea but Michel
>> suggested to do a performance evaluation and Nitesh is looking into that
>> right now.
> 
> The hypervisor doesn't get the zero pages for free. It costs a lot of CPU 
> utilization and memory bandwidth (there are some guys complaining about 
> this on the QEMU mailing list).
> In the above results, the legacy part already has the zero page feature 
> in use.

Indeed, I don't consider this attempt very practical in general,
especially as it would rely on KSM right now, which is frowned upon by
many people.

> 
>>
>> b) we perform free page hinting using something that Nitesh proposed. We
>> get in QEMU blocks of free pages that we can MADV_FREE. In addition we
>> could e.g. clear the dirty bit of these pages in the dirty bitmap, to
>> hinder them from getting migrated. Right now the hinting mechanism is
>> synchronous (called from arch_free_page()) but we might be able to
>> convert it into something asynchronous.
>>
>> So we might be able to completely get rid of this interface. And looking
>> at all the discussions and problems that already happened during the
>> development of this series, I think we should rather look into how clean
>> free page hinting might solve the same problem.
>>
>> If it can't be solved using free page hinting, fair enough.
>>
> 
> I'm afraid it can't. For example, when we have a guest booted, without 
> too many memory activities. Assume the guest has 8GB free memory. The 
> arch_free_page there won't be able to capture the 8GB free pages since 
> there is no free() called. This results in no free pages reported to host.


So, it takes some time from when the guest boots up until the balloon
device was initialized and therefore page hinting can start. For that
period, you won't get any arch_free_page()/page hinting callbacks, correct.

However in the hypervisor, you can theoretically track which pages the
guest actually touched ("dirty"), so you already know "which pages were
never touched while booting up until virtio-balloon was brought to
life". These, you can directly exclude from migration. No interface
required.

The remaining problem is pages that were touched ("allocated") by the
guest during bootup but freed again, before virtio-balloon came up. One
would have to measure how many pages these usually are, I would say it
would not be that many (because recently freed pages are likely to be
used again next