Re: [PATCH v34 0/4] Virtio-balloon: support free page reporting

2018-06-28 Thread Wei Wang

On 06/27/2018 07:06 PM, David Hildenbrand wrote:

On 25.06.2018 14:05, Wei Wang wrote:

This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction to this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
commonly used by the hypervisor to track which parts of memory have been
written is to write-protect all the guest memory.

This feature enables an optimization that skips the transfer of guest
free pages during VM live migration. It is not a problem if those pages
are reused by the guest after they have been reported to the hypervisor
as free-page hints, because the hypervisor keeps tracking guest writes
and will transfer such pages in a subsequent round if they are written.
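To illustrate the idea, the source-side logic can be sketched roughly as
follows (the types and helpers here are hypothetical, not the actual QEMU
code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sketch of one migration round that skips pages the
     * guest hinted as free, unless they were re-dirtied afterwards. */
    struct vm_state {
        unsigned long nr_pages;
        unsigned char *free_page_hints; /* 1 bit per guest page */
        unsigned char *dirty_bitmap;    /* 1 bit per guest page */
    };

    static bool bit_set(const unsigned char *map, unsigned long i)
    {
        return map[i / 8] & (1u << (i % 8));
    }

    /* Placeholder: the real code would copy the page to the destination. */
    static void send_page(struct vm_state *vm, unsigned long pfn)
    {
        (void)vm;
        printf("send pfn %lu\n", pfn);
    }

    static void migrate_round(struct vm_state *vm)
    {
        unsigned long pfn;

        for (pfn = 0; pfn < vm->nr_pages; pfn++) {
            if (bit_set(vm->free_page_hints, pfn) &&
                !bit_set(vm->dirty_bitmap, pfn))
                continue; /* hinted free and not written again: skip */
            send_page(vm, pfn);
        }
    }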

* Tests
- Test Environment
 Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
 Guest: 8G RAM, 4 vCPU
 Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 seconds

- Test Results
 - Idle Guest Live Migration Time (results are averaged over 10 runs):
 - Optimization vs. Legacy = 284ms vs. 1757ms --> ~84% reduction
 - Guest with Linux Compilation Workload (make bzImage -j4):
 - Live Migration Time (average)
   Optimization vs. Legacy = 1402ms vs. 2528ms --> ~44% reduction
 - Linux Compilation Time
   Optimization vs. Legacy = 5min6s vs. 5min12s
   --> no obvious difference


Being at version 34 already, this whole thing still looks and feels like
a big hack to me. It might just be me, but especially when I read about
assumptions like "QEMU will not hotplug memory during migration", this
does not feel like a clean solution.

I am still not sure if we really need this interface, especially as real
free page hinting might be on its way.

a) we perform free page hinting by setting all free pages
(arch_free_page()) to zero. Migration will detect zero pages and
minimize the number of pages to migrate. I don't think this is a good
idea, but Michel suggested doing a performance evaluation and Nitesh is
looking into that right now.


The hypervisor doesn't get the zero pages for free. It costs a lot of CPU 
utilization and memory bandwidth (there are people complaining about this 
on the QEMU mailing list). In the above results, the legacy case already 
has the zero-page feature in use.
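For context, the zero-page optimization still requires the migration
thread to read every candidate page before it can decide to skip it. A
simplified stand-in for what QEMU's buffer_is_zero() check has to do per
page (not the real, vectorized implementation) would be:

    #include <stdbool.h>
    #include <stddef.h>

    /* The host has to touch the whole page just to learn that it is
     * zero, which is where the CPU time and memory bandwidth go. */
    static bool page_is_zero(const unsigned char *page, size_t len)
    {
        size_t i;

        for (i = 0; i < len; i++) {
            if (page[i])
                return false;
        }
        return true;
    }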




b) we perform free page hinting using something that Nitesh proposed. In
QEMU we get blocks of free pages that we can MADV_FREE. In addition we
could e.g. clear the dirty bits of these pages in the dirty bitmap, to
prevent them from getting migrated. Right now the hinting mechanism is
synchronous (called from arch_free_page()) but we might be able to
convert it into something asynchronous.
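A very rough userspace sketch of that idea (plain madvise() on a block of
guest memory inside the QEMU process plus clearing the corresponding dirty
bits; the bitmap layout and names are made up, this is not the proposed
code):

    #include <sys/mman.h>
    #include <stddef.h>

    #define GUEST_PAGE_SIZE 4096UL

    /* 'hva' is the host virtual address of a block of guest-free pages.
     * Return the memory to the host lazily and make sure migration no
     * longer considers these pages dirty. */
    static int drop_free_block(void *hva, size_t len,
                               unsigned char *dirty_bitmap,
                               unsigned long first_pfn)
    {
        unsigned long i;

        if (madvise(hva, len, MADV_FREE) < 0)
            return -1;

        for (i = 0; i < len / GUEST_PAGE_SIZE; i++) {
            unsigned long pfn = first_pfn + i;

            /* clear the dirty bit so the page is not transferred */
            dirty_bitmap[pfn / 8] &= ~(1u << (pfn % 8));
        }
        return 0;
    }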

So we might be able to completely get rid of this interface. And looking
at all the discussions and problems that already happened during the
development of this series, I think we should rather look into how clean
free page hinting might solve the same problem.

If it can't be solved using free page hinting, fair enough.



I'm afraid it can't. For example, consider a guest that has just booted 
and has little memory activity. Assume the guest has 8GB of free memory. 
arch_free_page() there won't be able to capture those 8GB of free pages, 
since nothing frees them, so no free pages would be reported to the host.


Best,
Wei


Re: [PATCH v2 0/5] Add virtio-iommu driver

2018-06-28 Thread Peter Maydell
On 27 June 2018 at 20:59, Michael S. Tsirkin  wrote:
> My point was virtio mmio isn't widely used outside of ARM.
> Rather than try to make everyone use it, IMHO it's better
> to start with PCI.

Yes. I would much rather we let virtio-mmio disappear into
history as a random bit of legacy stuff. One of the things
I didn't like about the virtio-iommu design was that it
resurrected virtio-mmio as something we need to actively care
about again, so if there is a path forward that will work with
PCI virtio, I would much prefer that.

thanks
-- PMM


Re: [PATCH v33 1/4] mm: add a function to get free page blocks

2018-06-28 Thread Wei Wang

On 06/28/2018 03:07 AM, Michael S. Tsirkin wrote:

On Wed, Jun 27, 2018 at 09:05:39AM -0700, Linus Torvalds wrote:

[ Sorry for slow reply, my travels have made a mess of my inbox ]

On Mon, Jun 25, 2018 at 6:55 PM Michael S. Tsirkin  wrote:

Linus, do you think it would be ok to have get_from_free_page_list
actually pop entries from the free list and use them as the buffer
to store PAs?

Honestly, what I think the best option would be is to get rid of this
interface *entirely*, and just have the balloon code do

 #define GFP_MINFLAGS (__GFP_NORETRY | __GFP_NOWARN | \
                       __GFP_THISNODE | __GFP_NOMEMALLOC)

 struct page *page = alloc_pages(GFP_MINFLAGS, MAX_ORDER-1);

  which is not a new interface, and simply removes the max-order page
from the list if at all possible.

The above has the advantage of "just working", and not having any races.

Now, because you don't want to necessarily *entirely* deplete the max
order, I'd suggest that the *one* new interface you add is just a "how
many max-order pages are there" interface. So then you can query
(either before or after getting the max-order page) just how many of
them there were and whether you want to give that page back.

Notice? No need for any page lists or physical addresses. No races. No
complex new functions.

The physical address you can just get from the "struct page" you got.

And if you run out of memory because of getting a page, you get all
the usual "hey, we ran out of memory" responses..

Wouldn't the above be sufficient?

 Linus
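For reference, the balloon side of this approach could look roughly like
the sketch below; report_pfn() and max_pages stand in for the virtio
transport and for the "how many max-order pages are there" query mentioned
above. This is only a sketch of the idea, not tested code.

    #include <linux/gfp.h>
    #include <linux/list.h>
    #include <linux/mm.h>

    #define GFP_MINFLAGS (__GFP_NORETRY | __GFP_NOWARN | \
                          __GFP_THISNODE | __GFP_NOMEMALLOC)

    static void report_free_pages(unsigned long max_pages,
                                  void (*report_pfn)(unsigned long pfn,
                                                     unsigned int order))
    {
        LIST_HEAD(pages);
        struct page *page, *next;
        unsigned long got = 0;

        /* Hold the pages on a local list so alloc_pages() doesn't hand
         * us the same block again; stop before draining every max-order
         * page. */
        while (got < max_pages &&
               (page = alloc_pages(GFP_MINFLAGS, MAX_ORDER - 1))) {
            list_add(&page->lru, &pages);
            report_pfn(page_to_pfn(page), MAX_ORDER - 1);
            got++;
        }

        /* The pages were never used, so give them straight back. */
        list_for_each_entry_safe(page, next, &pages, lru) {
            list_del(&page->lru);
            __free_pages(page, MAX_ORDER - 1);
        }
    }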



Thanks for the elaboration.


I think so, thanks!

Wei, to put it in balloon terms, I think there's one thing we missed: if
you do manage to allocate a page, and you don't have a use for it, then
hey, you can just give it to the host because you know it's free - you
are going to return it to the free list.



I'm not sure this would be better than Linus' previous suggestion, 
because live migration is expected to be performed without disturbing 
the guest. If we allocate as many of the free pages as possible, guest 
applications would be seriously affected. For example, the network would 
become very slow, as sk_buff allocations would often trigger OOM during 
live migration. And if live migration happens from time to time, users 
who run memory-related tools like "free -h" in the guest would be 
confused by the reported statistics (e.g. the free memory abruptly 
dropping very low due to the balloon allocations).


With the previous suggestion, we only get hints of the free pages (i.e. 
we just report the addresses of free pages to the host without taking 
them off the free list).
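Roughly, hint-only reporting just walks the buddy free lists under the
zone lock and records page frame numbers, without allocating anything.
A sketch of the idea (not the actual get_from_free_page_list patch;
record_pfn() is a placeholder, and the real code also has to bound the
buffer size and the time spent under zone->lock):

    #include <linux/mm.h>
    #include <linux/mmzone.h>
    #include <linux/spinlock.h>

    static void hint_free_pages(unsigned int min_order,
                                void (*record_pfn)(unsigned long pfn,
                                                   unsigned int order))
    {
        struct zone *zone;
        struct page *page;
        unsigned long flags;
        unsigned int order;
        int mt;

        for_each_populated_zone(zone) {
            spin_lock_irqsave(&zone->lock, flags);
            for (order = min_order; order < MAX_ORDER; order++) {
                for (mt = 0; mt < MIGRATE_TYPES; mt++) {
                    /* report, but do not touch, each free block */
                    list_for_each_entry(page,
                            &zone->free_area[order].free_list[mt], lru)
                        record_pfn(page_to_pfn(page), order);
                }
            }
            spin_unlock_irqrestore(&zone->lock, flags);
        }
    }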


Best,
Wei


Re: [PATCH net-next v2] net: vhost: improve performance when enable busyloop

2018-06-28 Thread Tonghao Zhang
On Wed, Jun 27, 2018 at 11:58 PM Michael S. Tsirkin  wrote:
>
> On Wed, Jun 27, 2018 at 10:24:43PM +0800, Jason Wang wrote:
> >
> >
> > > On 2018-06-26 13:17, xiangxia.m@gmail.com wrote:
> > > From: Tonghao Zhang 
> > >
> > > This patch improves the guest receive performance from the
> > > host. On the handle_tx side, we also poll the sock receive
> > > queue; handle_rx does the same.
> > >
> > > To avoid deadlock, change the code to lock the vqs one by
> > > one and use the VHOST_NET_VQ_XX index as the subclass for
> > > mutex_lock_nested(). With the patch, QEMU can set the
> > > busyloop_timeout differently for the rx and tx queues.
> > >
> > > We set poll-us=100us and use iperf3 to test the
> > > throughput. The iperf3 commands are shown below.
> > >
> > > on the guest:
> > > iperf3  -s -D
> > >
> > > on the host:
> > > iperf3  -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400
> > >
> > > * With the patch: 23.1 Gbits/sec
> > > * Without the patch:  12.7 Gbits/sec
> > >
> > > Signed-off-by: Tonghao Zhang 
> >
> > Thanks a lot for the patch. Looks good generally, but please split this big
> > patch into separate ones like:
> >
> > patch 1: lock vqs one by one
> > patch 2: replace magic number of lock annotation
> > patch 3: factor out generic busy polling logic to vhost_net_busy_poll()
> > patch 4: add rx busy polling in tx path.
> >
> > And please cc Michael in v3.
> >
> > Thanks
>
> Pls include host CPU utilization numbers. You can get them e.g. using
> vmstat.
OK, thanks.
> I suspect we also want the polling to be controllable, e.g. through
> an ioctl.
>
> --
> MST
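
The "lock the vqs one by one" idea with VHOST_NET_VQ_XX lockdep subclasses
might look roughly like the sketch below; it only illustrates the idea
(assuming the definitions in drivers/vhost/net.c) and is not the actual
patch.

    /* Take both vq mutexes in a fixed order, using the VHOST_NET_VQ_*
     * index as the lockdep subclass so mutex_lock_nested() does not
     * report a false deadlock between the two locks. */
    static void vhost_net_lock_vqs(struct vhost_net *net)
    {
        mutex_lock_nested(&net->vqs[VHOST_NET_VQ_RX].vq.mutex,
                          VHOST_NET_VQ_RX);
        mutex_lock_nested(&net->vqs[VHOST_NET_VQ_TX].vq.mutex,
                          VHOST_NET_VQ_TX);
    }

    static void vhost_net_unlock_vqs(struct vhost_net *net)
    {
        mutex_unlock(&net->vqs[VHOST_NET_VQ_TX].vq.mutex);
        mutex_unlock(&net->vqs[VHOST_NET_VQ_RX].vq.mutex);
    }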

Re: [PATCH net-next v2] net: vhost: improve performance when enable busyloop

2018-06-28 Thread Tonghao Zhang
On Wed, Jun 27, 2018 at 10:24 PM Jason Wang  wrote:
>
>
>
> > On 2018-06-26 13:17, xiangxia.m@gmail.com wrote:
> > From: Tonghao Zhang 
> >
> > This patch improves the guest receive performance from the
> > host. On the handle_tx side, we also poll the sock receive
> > queue; handle_rx does the same.
> >
> > To avoid deadlock, change the code to lock the vqs one by
> > one and use the VHOST_NET_VQ_XX index as the subclass for
> > mutex_lock_nested(). With the patch, QEMU can set the
> > busyloop_timeout differently for the rx and tx queues.
> >
> > We set poll-us=100us and use iperf3 to test the
> > throughput. The iperf3 commands are shown below.
> >
> > on the guest:
> > iperf3  -s -D
> >
> > on the host:
> > iperf3  -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400
> >
> > * With the patch: 23.1 Gbits/sec
> > * Without the patch:  12.7 Gbits/sec
> >
> > Signed-off-by: Tonghao Zhang 
>
> Thanks a lot for the patch. Looks good generally, but please split this
> big patch into separate ones like:
>
> patch 1: lock vqs one by one
> patch 2: replace magic number of lock annotation
> patch 3: factor out generic busy polling logic to vhost_net_busy_poll()
> patch 4: add rx busy polling in tx path.
>
> And please cc Michael in v3.
Thanks, will be done.

> Thanks