Is it better to use guest_cpuid_has_mpx() instead of
vmx_mpx_supported()?
CPUID hasn't been set yet, so I think it is okay to key it on
vmx_mpx_supported(). It will be deactivated soon afterwards.
Or even do it unconditionally; just make sure to add a comment about it.
diff --git
Correct Gleb's email address.
Liang
-Original Message-
From: Li, Liang Z
Sent: Wednesday, May 20, 2015 10:36 PM
To: k...@vger.kernel.org; linux-kernel@vger.kernel.org
Cc: g...@kernel.or; pbonz...@redhat.com; t...@linutronix.de;
mi...@redhat.com; h...@zytor.com; x...@kernel.org
Cc: linux-kernel@vger.kernel.org; ian.campb...@citrix.com;
wei.l...@citrix.com; xen-de...@lists.xenproject.org;
net...@vger.kernel.org
Subject: Re: [PATCH] xen-netback: remove duplicated function definition
From: Liang Li liang.z...@intel.com
Date: Sat, 4 Jul 2015 03:33:00 +0800
There
kernel has. Forgive
my neglect.
Liang
-Original Message-
From: Li, Liang Z
Sent: Thursday, August 06, 2015 11:47 AM
To: 'Paolo Bonzini'; 'Juan Quintela'; linux-kernel@vger.kernel.org
Cc: Zhang, Yang Z; k...@vger.kernel.org
Subject: about the time consuming kvm_vcpu_ioctl
synchronize_rcu() is a time-consuming operation. The upstream kernel still
has some issues here: the KVM_RUN ioctl will take more than 10ms when resuming
the VM after migration.
Liang
-Original Message-
From: Li, Liang Z
Sent: Thursday, August 06, 2015 11:47 AM
To: 'Paolo Bonzini
Hi Paolo, Juan,
I found that some of the kvm_vcpu_ioctl operations take more than 10ms with
the 3.10.0-229.el7.x86_64 kernel, which prolongs the VM service downtime during
live migration by about 20~30ms.
This happened when doing the KVM_KVMCLOCK_CTRL ioctl. It's worse if more VCPUs
are used by
Ping...
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Wednesday, June 01, 2016 10:41 AM
> To: linux-kernel@vger.kernel.org
> Cc: k...@vger.kernel.org; qemu-de...@nongnu.org; Michael S. Tsirkin;
> Paolo Bonzini; Cornelia Huck; Amit Shah
> Subject: RE:
> > > Interesting. How about instead of tell host, we do multiple scans,
> > > each time ignoring pages out of range?
> > >
> > > for (pfn = min pfn; pfn < max pfn; pfn += 1G) {
> > > foreach page
> > > if page pfn < pfn || page pfn >= pfn + 1G
> > >
ive?
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Friday, May 27, 2016 6:34 PM
> To: linux-kernel@vger.kernel.org
> Cc: k...@vger.kernel.org; qemu-de...@nongnu.org; Li, Liang Z; Michael S.
> Tsirkin; Paolo Bonzini; Cornelia Huck; Amit Shah
> Subject: [PATC
Ping ...
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Monday, June 13, 2016 5:47 PM
> To: k...@vger.kernel.org
> Cc: virtio-...@lists.oasis-open.org; qemu-de...@nongun.org; linux-
> ker...@vger.kernel.org; m...@redhat.com; Li, Liang Z; Paolo Bonzini; Cor
Any comments?
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Monday, June 13, 2016 5:47 PM
> To: k...@vger.kernel.org
> Cc: virtio-...@lists.oasis-open.org; qemu-de...@nongun.org; linux-
> ker...@vger.kernel.org; m...@redhat.com; Li, Liang Z
> Subject: [PA
Hi Michael,
Could you help to review this patch set and give some comments when you have
time?
My work is blocked here.
Thanks !
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Monday, June 13, 2016 5:47 PM
> To: k...@vger.kernel.org
> Cc: virtio-...@lists.oasis
Hi Michael,
Thanks for your comments!
>
> 2<< 30 is 2G but that is not a useful comment.
> pls explain what is the reason for this selection.
>
Will change in the next version.
> > +struct balloon_bmap_hdr {
> > + __virtio32 id;
> > + __virtio32 page_shift;
> > + __virtio64 start_pfn;
> This is a -EBUSY. Is there anything magic about mfn 188d903? It just looks
> like plain RAM in the E820 table.
> Have you got dom0 configured to use linear p2m mode? Without it, dom0 can
> only have a maximum of 512GB of RAM.
> ~Andrew
No special configuration for dom0, actually, the
> >> We found dom0 will crash when booting on an HSW-EX server; the dom0
> >> kernel version is v4.4. By debugging I found that your patch
> >> 'x86/xen: discard RAM regions above the maximum reservation', whose
> >> commit ID is f5775e0b6116b7e2425ccf535243b21, caused the regression.
> The debug
> > Could you provide more information on how to use virtio-serial to
> > exchange data? A thread, wiki, or code are all OK.
> > I have not found any useful information yet.
>
> See this commit in the Linux sources:
>
> 108fc82596e3b66b819df9d28c1ebbc9ab5de14c
>
> that adds a way to send guest trace
> > This patch set is the QEMU side implementation.
> >
> > The virtio-balloon is extended so that QEMU can get the free pages
> > information from the guest through virtio.
> >
> > After getting the free pages information (a bitmap), QEMU can use it
> > to filter out the guest's free pages in the
> On 3/8/2016 4:44 PM, Amit Shah wrote:
> > On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
>
> * Liang Li (liang.z...@intel.com) wrote:
> > The current QEMU live migration implementation marks all the
> > guest's RAM pages as dirtied in the ram bulk stage; all these
>
> > > > > > I'm just catching back up on this thread; so without
> > > > > > reference to any particular previous mail in the thread.
> > > > > >
> > > > > > 1) How many of the free pages do we tell the host about?
> > > > > > Your main change is telling the host about all the
> > > > > >
> On Mon, Mar 14, 2016 at 05:03:34PM +, Dr. David Alan Gilbert wrote:
> > * Li, Liang Z (liang.z...@intel.com) wrote:
> > > >
> > > > Hi,
> > > > I'm just catching back up on this thread; so without reference
> > > > to any particu
> On 04/03/2016 15:26, Li, Liang Z wrote:
> >> >
> >> > The memory usage will keep increasing due to ever growing caches,
> >> > etc, so you'll be left with very little free memory fairly soon.
> >> >
> > I don't think so.
> >
> > > Hi,
> > > I'm just catching back up on this thread; so without reference to
> > > any particular previous mail in the thread.
> > >
> > > 1) How many of the free pages do we tell the host about?
> > > Your main change is telling the host about all the
> > > free pages.
> >
> >
> On Fri, Mar 04, 2016 at 06:51:21PM +, Dr. David Alan Gilbert wrote:
> > * Paolo Bonzini (pbonz...@redhat.com) wrote:
> > >
> > >
> > > On 04/03/2016 15:26, Li, Liang Z wrote:
> > > >> >
> > > >> > The memory usage wi
>
> Hi,
> I'm just catching back up on this thread; so without reference to any
> particular previous mail in the thread.
>
> 1) How many of the free pages do we tell the host about?
> Your main change is telling the host about all the
> free pages.
Yes, all the guest's free
> On Mon, Mar 07, 2016 at 01:40:06PM +0200, Michael S. Tsirkin wrote:
> > On Mon, Mar 07, 2016 at 06:49:19AM +0000, Li, Liang Z wrote:
> > > > > No. And it's exactly what I mean. The ballooned memory is still
> > > > > processed during live migration without
> > > > > Yes, we really can teach qemu to skip these pages and it's not hard.
> > > > > The problem is the poor performance, this PV solution
> > > >
> > > > Balloon is always PV. And do not call patches solutions please.
> > > >
> > > > > is aimed to make it more
> > > > > efficient and reduce
> On Fri, Mar 04, 2016 at 03:13:03PM +0000, Li, Liang Z wrote:
> > > > Maybe I am not clear enough.
> > > >
> > > > I mean if we inflate balloon before live migration, for a 8GB
> > > > guest, it takes
> > > about 5 Seconds for the in
> Subject: Re: [RFC qemu 0/4] A PV solution for live migration optimization
>
> On (Thu) 03 Mar 2016 [18:44:24], Liang Li wrote:
> > The current QEMU live migration implementation marks all the
> > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > will be processed and
> Subject: Re: [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon
> device
>
> On Thu, Mar 03, 2016 at 06:44:26PM +0800, Liang Li wrote:
> > Extend the virtio balloon device to support a new feature, this new
> > feature can help to get guest's free pages information, which can be
> >
> On Thu, Mar 03, 2016 at 06:44:28PM +0800, Liang Li wrote:
> > Get the free pages information through virtio and filter out the free
> > pages in the ram bulk stage. This can significantly reduce the total
> > live migration time as well as network traffic.
> >
> > Signed-off-by: Liang Li
> On Thu, 3 Mar 2016 18:44:26 +0800
> Liang Li wrote:
>
> > Extend the virtio balloon device to support a new feature, this new
> > feature can help to get guest's free pages information, which can be
> > used for live migration optimization.
>
> Do you have a spec for
> On Thu, Mar 03, 2016 at 06:44:24PM +0800, Liang Li wrote:
> > The current QEMU live migration implementation marks all the
> > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > will be processed and that takes quite a lot of CPU cycles.
> >
> > From guest's point of view,
> Subject: Re: [RFC qemu 0/4] A PV solution for live migration optimization
>
> * Liang Li (liang.z...@intel.com) wrote:
> > The current QEMU live migration implementation marks all the
> > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > will be processed and that takes
> On Thu, 3 Mar 2016 18:44:28 +0800
> Liang Li wrote:
>
> > Get the free pages information through virtio and filter out the free
> > pages in the ram bulk stage. This can significantly reduce the total
> > live migration time as well as network traffic.
> >
> >
> On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote:
> > > I wonder if it would be possible to avoid the kernel changes by
> > > parsing /proc/self/pagemap - if that can be used to detect
> > > unmapped/zero mapped pages in the guest ram, woul
> * Roman Kagan (rka...@virtuozzo.com) wrote:
> > On Fri, Mar 04, 2016 at 08:23:09AM +0000, Li, Liang Z wrote:
> > > > On Thu, Mar 03, 2016 at 05:46:15PM +, Dr. David Alan Gilbert wrote:
> > > > > * Liang Li (liang.z...@intel.com) wrote:
> &g
> On Thu, Mar 03, 2016 at 05:46:15PM +, Dr. David Alan Gilbert wrote:
> > * Liang Li (liang.z...@intel.com) wrote:
> > > The current QEMU live migration implementation marks all the
> > > guest's RAM pages as dirtied in the ram bulk stage; all these pages
> > > will be processed and that
> > > * Liang Li (liang.z...@intel.com) wrote:
> > > > The current QEMU live migration implementation marks all the
> > > > guest's RAM pages as dirtied in the ram bulk stage; all these
> > > > pages will be processed and that takes quite a lot of CPU cycles.
> > > >
> > > > From guest's point
> On Fri, Mar 04, 2016 at 09:12:12AM +0000, Li, Liang Z wrote:
> > > Although I wonder which is cheaper; that would be fairly expensive
> > > for the guest wouldn't it? And you'd somehow have to kick the guest
> > > before migration to do the ballooning
> > Maybe I am not clear enough.
> >
> > I mean if we inflate balloon before live migration, for a 8GB guest, it
> > takes
> about 5 Seconds for the inflating operation to finish.
>
> And these 5 seconds are spent where?
>
The time is spent on allocating the pages and sending the allocated pages
> Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration
> optimization
>
> On Fri, Mar 04, 2016 at 09:08:44AM +0000, Li, Liang Z wrote:
> > > On Fri, Mar 04, 2016 at 01:52:53AM +, Li, Liang Z wrote:
> > > > > I wonder if it would be
> > > > > > Only detect the unmapped/zero mapped pages is not enough.
> > > Consider
> > > > > the
> > > > > > situation like case 2, it can't achieve the same result.
> > > > >
> > > > > Your case 2 doesn't exist in the real world. If people could
> > > > > stop their main memory consumer in the
> > On 04/03/2016 15:26, Li, Liang Z wrote:
> > >> >
> > >> > The memory usage will keep increasing due to ever growing caches,
> > >> > etc, so you'll be left with very little free memory fairly soon.
> > >> >
> > > I do
...@lists.linux-foundation.org; r...@twiddle.net; r...@redhat.com
> Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration
> optimization
>
> On Mon, Mar 07, 2016 at 06:49:19AM +, Li, Liang Z wrote:
> > > > No. And it's exactly what I mean. The ballooned memory
> > No. And it's exactly what I mean. The ballooned memory is still
> > processed during live migration without skipping. The live migration code is
> in migration/ram.c.
>
> So if guest acknowledged VIRTIO_BALLOON_F_MUST_TELL_HOST, we can
> teach qemu to skip these pages.
> Want to write a patch
> On Fri, Apr 22, 2016 at 10:48:38AM +0100, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (m...@redhat.com) wrote:
> > > On Tue, Apr 19, 2016 at 03:02:09PM +, Li, Liang Z wrote:
> > > > > On Tue, 2016-04-19 at 22:34 +0800, Liang Li wrote:
> > &g
> On Mon, Apr 25, 2016 at 03:11:05AM +0000, Li, Liang Z wrote:
> > > On Fri, Apr 22, 2016 at 10:48:38AM +0100, Dr. David Alan Gilbert wrote:
> > > > * Michael S. Tsirkin (m...@redhat.com) wrote:
> > > > > On Tue, Apr 19, 2016 at 03:02:09PM +, Li, Liang Z
> On Wed, Apr 20, 2016 at 01:41:24AM +0000, Li, Liang Z wrote:
> > > Cc: Rik van Riel; v...@zeniv.linux.org.uk;
> > > linux-kernel@vger.kernel.org; quint...@redhat.com;
> > > amit.s...@redhat.com; pbonz...@redhat.com; dgilb...@redhat.com;
> > > linux...@kv
> On Tue, May 24, 2016 at 02:36:08PM +0000, Li, Liang Z wrote:
> > > > > > > This can be pre-initialized, correct?
> > > > > >
> > > > > > pre-initialized? I don't quite understand what you mean.
> > > > >
> > > > > > > This is grossly inefficient if you only requested a single page.
> > > > > > > And it's also allocating memory very aggressively without
> > > > > > > ever telling the host what is going on.
> > > > > >
> > > > > > If only requested a single page, there is no need to send the
> > >
> On 20/05/2016 11:59, Liang Li wrote:
> > +
> > + sg_init_table(sg, 5);
> > + sg_set_buf(&sg[0], &flags, sizeof(flags));
> > + sg_set_buf(&sg[1], &start_pfn, sizeof(start_pfn));
> > + sg_set_buf(&sg[2], &page_shift, sizeof(page_shift));
> > + sg_set_buf(&sg[3], &bmap_len,
> On Fri, 20 May 2016 17:59:46 +0800
> Liang Li wrote:
>
> > The implementation of the current virtio-balloon is not very
> > efficient. Below is the test result of time spent on inflating the
> > balloon to 3GB of a 4GB idle guest:
> >
> > a. allocating pages (6.5%, 103ms)
>
> > > > {
> > > > - struct scatterlist sg;
> > > > unsigned int len;
> > > >
> > > > - sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);
> > > > + if (virtio_has_feature(vb->vdev,
> > > VIRTIO_BALLOON_F_PAGE_BITMAP)) {
> > > > + u32 page_shift =
> On Fri, May 20, 2016 at 05:59:46PM +0800, Liang Li wrote:
> > The implementation of the current virtio-balloon is not very
> > efficient. Below is the test result of time spent on inflating the
> > balloon to 3GB of a 4GB idle guest:
> >
> > a. allocating pages (6.5%, 103ms)
> > b. sending PFNs to
> > On Fri, May 20, 2016 at 05:59:46PM +0800, Liang Li wrote:
> > > The implementation of the current virtio-balloon is not very
> > > efficient. Below is the test result of time spent on inflating the
> > > balloon to 3GB of a 4GB idle guest:
> > >
> > > a. allocating pages (6.5%, 103ms)
> > > b.
> > > > > This can be pre-initialized, correct?
> > > >
> > > > pre-initialized? I don't quite understand what you mean.
> > >
> > > I think you can maintain sg as part of device state and init sg with the
> bitmap.
> > >
> >
> > I got it.
> >
> > > > > This is grossly inefficient if you only
> On Tue, 2016-04-19 at 22:34 +0800, Liang Li wrote:
> > The free page bitmap will be sent to QEMU through virtio interface and
> > used for live migration optimization.
> > Dropping the cache before building the free page bitmap can get more free
> > pages. Whether dropping the cache is decided by
: [PATCH kernel 1/2] mm: add the related functions to build the
> free page bitmap
>
> On Tue, Apr 19, 2016 at 03:02:09PM +, Li, Liang Z wrote:
> > > On Tue, 2016-04-19 at 22:34 +0800, Liang Li wrote:
> > > > The free page bitmap will be sent to QEMU throu
> On Tue, 2016-04-19 at 15:02 +0000, Li, Liang Z wrote:
> > >
> > > On Tue, 2016-04-19 at 22:34 +0800, Liang Li wrote:
> > > >
> > > > The free page bitmap will be sent to QEMU through virtio interface
> > > > and used for live migra
> On Wed, May 25, 2016 at 08:48:17AM +0000, Li, Liang Z wrote:
> > > > > Suggestion to address all above comments:
> > > > > 1. allocate a bunch of pages and link them up,
> > > > > calculating the min and the max pfn.
> > >
> > > >
> > > > Hi MST,
> > > >
> > > > I have measured the performance when using a 32K page bitmap,
> > >
> > > Just to make sure. Do you mean a 32Kbyte bitmap?
> > > Covering 1Gbyte of memory?
> > Yes.
> >
> > >
> > > > and inflate the balloon to 3GB
> > > > of an idle guest with 4GB RAM.
> > >
> > > Suggestion to address all above comments:
> > > 1. allocate a bunch of pages and link them up,
> > > calculating the min and the max pfn.
> > > if max-min exceeds the allocated bitmap size,
> > > tell host.
> >
> > I am not sure if it works well in some cases, e.g. The
> So I'm fine with this patchset, but I noticed it was not yet reviewed by MM
> people. And that is not surprising since you did not copy memory
> management mailing list on it.
>
> I added linux...@kvack.org Cc on this mail but this might not be enough.
>
> Please repost (e.g. [PATCH v2
> > }
> >
> > +static void update_free_pages_stats(struct virtio_balloon *vb,
>
> why _stats?
Will change.
> > + max_pfn = get_max_pfn();
> > + mutex_lock(&vb->balloon_lock);
> > + while (pfn < max_pfn) {
> > + memset(vb->page_bitmap, 0, vb->bmap_len);
> > + ret =
> > > This ends up doing a 1MB kmalloc() right? That seems a _bit_ big.
> > > How big was the pfn buffer before?
> >
> > Yes, it is if the max pfn corresponds to more than 32GB of RAM.
> > The size of the pfn buffer used before was 256*4 = 1024 bytes; it's too
> > small, and that's the main reason for the bad performance.
> > > On Wed, Jul 27, 2016 at 09:03:21AM -0700, Dave Hansen wrote:
> > > > On 07/26/2016 06:23 PM, Liang Li wrote:
> > > > > + vb->pfn_limit = VIRTIO_BALLOON_PFNS_LIMIT;
> > > > > + vb->pfn_limit = min(vb->pfn_limit, get_max_pfn());
> > > > > + vb->bmap_len = ALIGN(vb->pfn_limit,
> On Thu, Jul 28, 2016 at 06:36:18AM +0000, Li, Liang Z wrote:
> > > > > This ends up doing a 1MB kmalloc() right? That seems a _bit_ big.
> > > > > How big was the pfn buffer before?
> > > >
> > > > Yes, it is if the max pfn is more than 3
> On Thu, Jul 28, 2016 at 03:06:37AM +0000, Li, Liang Z wrote:
> > > > + * VIRTIO_BALLOON_PFNS_LIMIT is used to limit the size of page bitmap
> > > > + * to prevent a very large page bitmap, there are two reasons for this:
> > > > + *
> Subject: Re: [PATCH v2 repost 6/7] mm: add the related functions to get free
> page info
>
> On 07/26/2016 06:23 PM, Liang Li wrote:
> > + for_each_migratetype_order(order, t) {
> > + list_for_each(curr, &zone->free_area[order].free_list[t]) {
> > + pfn =
> > + * VIRTIO_BALLOON_PFNS_LIMIT is used to limit the size of page bitmap
> > + * to prevent a very large page bitmap, there are two reasons for this:
> > + * 1) to save memory.
> > + * 2) allocating a large bitmap may fail.
> > + *
> > + * The actual limit of pfn is determined by:
> > + *
> Subject: Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate
> process
>
> On Wed, Jul 27, 2016 at 09:03:21AM -0700, Dave Hansen wrote:
> > On 07/26/2016 06:23 PM, Liang Li wrote:
> > > + vb->pfn_limit = VIRTIO_BALLOON_PFNS_LIMIT;
> > > + vb->pfn_limit = min(vb->pfn_limit,
> > +/*
> > + * VIRTIO_BALLOON_PFNS_LIMIT is used to limit the size of page bitmap
> > + * to prevent a very large page bitmap, there are two reasons for this:
> > + * 1) to save memory.
> > + * 2) allocating a large bitmap may fail.
> > + *
> > + * The actual limit of pfn is determined by:
> > + *
> Subject: Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate
> process
>
> On 07/26/2016 06:23 PM, Liang Li wrote:
> > + vb->pfn_limit = VIRTIO_BALLOON_PFNS_LIMIT;
> > + vb->pfn_limit = min(vb->pfn_limit, get_max_pfn());
> > + vb->bmap_len = ALIGN(vb->pfn_limit,
> On 07/27/2016 03:05 PM, Michael S. Tsirkin wrote:
> > On Wed, Jul 27, 2016 at 09:40:56AM -0700, Dave Hansen wrote:
> >> On 07/26/2016 06:23 PM, Liang Li wrote:
> >>> + for_each_migratetype_order(order, t) {
> >>> + list_for_each(curr, &zone->free_area[order].free_list[t]) {
> >>> +
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7da61ad..3ad8b10
> > 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4523,6 +4523,52 @@ unsigned long get_max_pfn(void) }
> > EXPORT_SYMBOL(get_max_pfn);
> >
> > +static void mark_free_pages_bitmap(struct zone *zone,
> > It's only small because it makes you rescan the free list.
> > So maybe you should do something else.
> > I looked at it a bit. Instead of scanning the free list, how about
> > scanning actual page structures? If page is unused, pass it to host.
> > Solves the problem of rescanning multiple
Hi Michael,
If you have time, could you help to review this patch set?
Thanks!
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Wednesday, June 29, 2016 6:32 PM
> To: m...@redhat.com
> Cc: linux-kernel@vger.kernel.org; virtualizat...@lists.linux-foun
Ping ...
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Wednesday, June 29, 2016 6:32 PM
> To: m...@redhat.com
> Cc: linux-kernel@vger.kernel.org; virtualizat...@lists.linux-foundation.org;
> k...@vger.kernel.org; qemu-de...@nongnu.org; virtio-dev@lists.oasis-
> Subject: Re: [PATCH v3 kernel 0/7] Extend virtio-balloon for fast
> (de)inflating
> & fast live migration
>
> On 08/07/2016 11:35 PM, Liang Li wrote:
> > Dave Hansen suggested a new scheme to encode the data structure;
> > because of the additional complexity, it's not implemented in v3.
>
>
>
>
> > +static void free_extended_page_bitmap(struct virtio_balloon *vb) {
> > + int i, bmap_count = vb->nr_page_bmap;
> > +
> > + for (i = 1; i < bmap_count; i++) {
> > + kfree(vb->page_bitmap[i]);
> > + vb->page_bitmap[i] = NULL;
> > + vb->nr_page_bmap--;
> >
> Am 21.12.2016 um 07:52 schrieb Liang Li:
> > This patch set contains two parts of changes to the virtio-balloon.
> >
> > One is the change for speeding up the inflating & deflating process,
> > the main idea of this optimization is to use {pfn|length} to present
> > the page information instead
> > > > > > Signed-off-by: Liang Li
> > > > > > Cc: Michael S. Tsirkin
> > > > > > Cc: Paolo Bonzini
> > > > > > Cc: Cornelia Huck
> > > > > > Cc: Amit Shah
> > > > > > Cc: Dave Hansen
> On Wed, Jan 18, 2017 at 04:56:58AM +0000, Li, Liang Z wrote:
> > > > - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
> > > > - virtqueue_kick(vq);
> > > > +static void do_set_resp_bitmap(struct virtio_balloon *vb,
> > > >
> Sent: Wednesday, January 18, 2017 3:11 AM
> To: Li, Liang Z
> Cc: k...@vger.kernel.org; virtio-...@lists.oasis-open.org; qemu-
> de...@nongnu.org; linux...@kvack.org; linux-kernel@vger.kernel.org;
> virtualizat...@lists.linux-foundation.org; amit.s...@redhat.com; Hansen,
>
> > - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
> > - virtqueue_kick(vq);
> > +static void do_set_resp_bitmap(struct virtio_balloon *vb,
> > + unsigned long base_pfn, int pages)
> >
> > - /* When host has read buffer, this completes via balloon_ack */
> > -
> On 29/12/2016 10:25, Liang Li wrote:
> > x86-64 is currently limited physical address width to 46 bits, which
> > can support 64 TiB of memory. Some vendors require to support more for
> > some use case. Intel plans to extend the physical address width to
> > 52 bits in some of the future
> Subject: Re: [PATCH v3 kernel 0/7] Extend virtio-balloon for fast
> (de)inflating
> & fast live migration
>
> 2016-08-08 14:35 GMT+08:00 Liang Li :
> > This patch set contains two parts of changes to the virtio-balloon.
> >
> > One is the change for speeding up the
The Intel SDM doesn't describe whether the A bit will be set when the CPU
accesses a not-present EPT page table entry.
Even if this patch works on current CPUs, it's not good to make such an
assumption.
Should we revert it?
Thanks!
Liang
> -Original Message-
> From:
Hi Michael,
I know you are very busy. If you have time, could you help to take a look at
this patch set?
Thanks!
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Thursday, August 18, 2016 9:06 AM
> To: Michael S. Tsirkin
> Cc: virtualizat...@lists.linux-founda
Hi Michael,
Could you help to review this version when you have time?
Thanks!
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Monday, August 08, 2016 2:35 PM
> To: linux-kernel@vger.kernel.org
> Cc: virtualizat...@lists.linux-foundation.org; linux...@kvack.o
.@redhat.com
> Subject: Re: [RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast
> (de)inflating & fast live migration
>
> On 10/26/2016 03:13 AM, Li, Liang Z wrote:
> > 3 times memory required is not accurate, please ignore this. sorry ...
> > The complexity is the
> On 10/26/2016 03:06 AM, Li, Liang Z wrote:
> > I am working on Dave's new bitmap scheme. I have finished the part of
> > getting the 'hybrid scheme bitmap' and found the complexity was more
> > than I expected. The main issue is more memory is required to save the
>
> Please squish this and patch 5 together. It makes no sense to separate them.
>
OK.
> > +static void send_unused_pages_info(struct virtio_balloon *vb,
> > + unsigned long req_id)
> > +{
> > + struct scatterlist sg_in;
> > + unsigned long pfn = 0, bmap_len,
> On 11/06/2016 07:37 PM, Li, Liang Z wrote:
> >> Let's say we do a 32k bitmap that can hold ~1M pages. That's 4GB of RAM.
> >> On a 1TB system, that's 256 passes through the top-level loop.
> >> The bottom-level lists have tens of thousands of pages in them, even
> On Fri, Oct 21, 2016 at 10:25:21AM -0700, Dave Hansen wrote:
> > On 10/20/2016 11:24 PM, Liang Li wrote:
> > > Dave Hansen suggested a new scheme to encode the data structure;
> > > because of the additional complexity, it's not implemented in v3.
> >
> > So, what do you want done with this patch
> On 10/20/2016 11:24 PM, Liang Li wrote:
> > Expose the function to get the max pfn, so it can be used in the
> > virtio-balloon device driver. Simply including 'linux/bootmem.h'
> > is not enough; if the device driver is built as a module, directly
> > referring to max_pfn leads to a build failure.
>
> On 10/20/2016 11:24 PM, Liang Li wrote:
> > Add a new feature which supports sending the page information with a
> > bitmap. The current implementation uses PFNs array, which is not very
> > efficient. Using bitmap can improve the performance of
> > inflating/deflating significantly
>
> Why is
> On 10/20/2016 11:24 PM, Liang Li wrote:
> > Will allow faster notifications using a bitmap down the road.
> > balloon_pfn_to_page() can be removed because it's useless.
>
> This is a pretty terse description of what's going on here. Could you try to
> elaborate a bit? What *is* the current
> > +static inline void init_pfn_range(struct virtio_balloon *vb) {
> > + vb->min_pfn = ULONG_MAX;
> > + vb->max_pfn = 0;
> > +}
> > +
> > +static inline void update_pfn_range(struct virtio_balloon *vb,
> > +struct page *page)
> > +{
> > + unsigned long
> On 10/20/2016 11:24 PM, Liang Li wrote:
> > Dave Hansen suggested a new scheme to encode the data structure;
> > because of the additional complexity, it's not implemented in v3.
>
> So, what do you want done with this patch set? Do you want it applied as-is
> so that we can introduce a new