Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.

2016-06-15 Thread Jitendra Kolhe
ping ... also received some bounce-backs from a few individual email IDs, so please consider this one a resend. Thanks, - Jitendra On 5/30/2016 4:19 PM, Jitendra Kolhe wrote: > ping... > for the entire v3 version of the patchset. > http://patchwork.ozlabs.org/project/qemu-devel/list/?submit

Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.

2016-05-30 Thread Jitendra Kolhe
ping... for the entire v3 version of the patchset. http://patchwork.ozlabs.org/project/qemu-devel/list/?submitter=68462 - Jitendra On Wed, May 18, 2016 at 4:50 PM, Jitendra Kolhe <jitendra.ko...@hpe.com> wrote: > While measuring live migration performance for qemu/kvm guest, it was

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-10 Thread Jitendra Kolhe
On 3/7/2016 10:35 PM, Eric Blake wrote: > On 03/04/2016 02:02 AM, Jitendra Kolhe wrote: >> While measuring live migration performance for qemu/kvm guest, it >> was observed that the qemu doesn't maintain any intelligence for the >> guest ram pages which are released by the

Re: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization

2016-03-09 Thread Jitendra Kolhe
On 3/8/2016 4:44 PM, Amit Shah wrote: > On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote: >>>> >>>> * Liang Li (liang.z...@intel.com) wrote: >>>>> The current QEMU live migration implementation marks all the >>>>> guest's RAM

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-11 Thread Jitendra Kolhe
On 3/11/2016 12:55 PM, Li, Liang Z wrote: On 3/10/2016 3:19 PM, Roman Kagan wrote: On Fri, Mar 04, 2016 at 02:32:47PM +0530, Jitendra Kolhe wrote: Even though the pages which are returned to the host by virtio-balloon driver are zero pages, the migration algorithm will still end up scanning

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-15 Thread Jitendra Kolhe
On 3/11/2016 8:09 PM, Jitendra Kolhe wrote: You mean the total live migration time for the unmodified qemu and the 'you modified for test' qemu are almost the same? Not sure I understand the question, but if 'you modified for test' means the modifications below to save_zero_page(), then the answer

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-11 Thread Jitendra Kolhe
On 3/11/2016 4:24 PM, Li, Liang Z wrote: I wonder if it is the scanning for zeros or sending the whiteout which affects the total migration time more. If it is the former (as I would expect) then a rather local change to is_zero_range() to make use of the mapping information before scanning
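The optimization discussed in this thread amounts to consulting the balloon-released-pages information before paying for a byte-by-byte zero scan. Below is a minimal, self-contained sketch of that idea; the bitmap layout, PAGE_SIZE, and helper names are assumptions for illustration and are not QEMU's actual is_zero_range()/ram_find_and_save_block() code.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* 1 bit per guest page; a set bit means the page was released by the
 * virtio-balloon driver and its contents need not be inspected. */
static unsigned long *balloon_bitmap;

static bool test_bit_ul(const unsigned long *map, uint64_t nr)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1UL;
}

/* Plain byte scan, standing in for the real zero-detection routine. */
static bool range_is_zero(const uint8_t *p, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

/* Treat a page as zero either because the balloon bitmap says it was
 * ballooned out (no scan needed) or because scanning finds only zeroes. */
bool page_is_zero(const uint8_t *host_addr, uint64_t page_index)
{
    if (balloon_bitmap && test_bit_ul(balloon_bitmap, page_index)) {
        return true;                    /* skip the expensive memory scan */
    }
    return range_is_zero(host_addr, PAGE_SIZE);
}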

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-10 Thread Jitendra Kolhe
On 3/10/2016 10:57 PM, Eric Blake wrote: On 03/10/2016 01:57 AM, Jitendra Kolhe wrote:
+++ b/qapi-schema.json
@@ -544,11 +544,14 @@
 # been migrated, pulling the remaining pages along as needed. NOTE: If
 # the migration fails during postcopy the VM will fail. (since 2.5

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-10 Thread Jitendra Kolhe
On 3/10/2016 3:19 PM, Roman Kagan wrote: On Fri, Mar 04, 2016 at 02:32:47PM +0530, Jitendra Kolhe wrote: Even though the pages which are returned to the host by virtio-balloon driver are zero pages, the migration algorithm will still end up scanning the entire page ram_find_and_save_block

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-04-13 Thread Jitendra Kolhe
On 4/10/2016 10:29 PM, Michael S. Tsirkin wrote: > On Fri, Apr 01, 2016 at 04:38:28PM +0530, Jitendra Kolhe wrote: >> On 3/29/2016 5:58 PM, Michael S. Tsirkin wrote: >>> On Mon, Mar 28, 2016 at 09:46:05AM +0530, Jitendra Kolhe wrote: >>>> While measuring live migr

Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-21 Thread Jitendra Kolhe
On 3/18/2016 4:57 PM, Roman Kagan wrote: > [ Sorry I've lost this thread with email setup changes on my side; > catching up ] > > On Tue, Mar 15, 2016 at 06:50:45PM +0530, Jitendra Kolhe wrote: >> On 3/11/2016 8:09 PM, Jitendra Kolhe wrote: >>> Here is what

[Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-04 Thread Jitendra Kolhe
ct on the downtime. Moreover, the applications in the guest space won't be actually faulting on the ram pages which are already ballooned out, so the proposed optimization will not show any improvement in migration time during postcopy. Signed-off-by: Jitendra Kolhe <jitendra.ko...@hp

[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization

2016-03-04 Thread Jitendra Kolhe
> > > > * Liang Li (liang.z...@intel.com) wrote: > > > The current QEMU live migration implementation marks all the > > > guest's RAM pages as dirtied in the ram bulk stage; all these pages > > > will be processed and that takes quite a lot of CPU cycles. > > > > > > From guest's point of view,

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-04-05 Thread Jitendra Kolhe
On 3/31/2016 10:09 PM, Dr. David Alan Gilbert wrote: > * Jitendra Kolhe (jitendra.ko...@hpe.com) wrote: >> While measuring live migration performance for qemu/kvm guest, it >> was observed that the qemu doesn't maintain any intelligence for the >> guest ram pages which are

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-04-01 Thread Jitendra Kolhe
On 3/29/2016 5:58 PM, Michael S. Tsirkin wrote: > On Mon, Mar 28, 2016 at 09:46:05AM +0530, Jitendra Kolhe wrote: >> While measuring live migration performance for qemu/kvm guest, it >> was observed that the qemu doesn't maintain any intelligence for the >> guest ram pag

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-04-01 Thread Jitendra Kolhe
On 3/29/2016 4:18 PM, Paolo Bonzini wrote: > > > On 29/03/2016 12:47, Jitendra Kolhe wrote: >>> Indeed. It is correct for the main system RAM, but hot-plugged RAM >>> would also have a zero-based section.offset_within_region. You need to >>> add memory_r

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-29 Thread Jitendra Kolhe
On 3/29/2016 3:35 PM, Paolo Bonzini wrote: > > > On 28/03/2016 08:59, Michael S. Tsirkin wrote:
+    qemu_mutex_lock_balloon_bitmap();
     for (;;) {
         size_t offset = 0;
         uint32_t pfn;
         elem = virtqueue_pop(vq, sizeof(VirtQueueElement));

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-29 Thread Jitendra Kolhe
On 3/28/2016 7:41 PM, Eric Blake wrote: > On 03/27/2016 10:16 PM, Jitendra Kolhe wrote: >> While measuring live migration performance for qemu/kvm guest, it >> was observed that the qemu doesn't maintain any intelligence for the >> guest ram pages which are released by the

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-29 Thread Jitendra Kolhe
On 3/28/2016 4:06 PM, Denis V. Lunev wrote: > On 03/28/2016 07:16 AM, Jitendra Kolhe wrote: >> While measuring live migration performance for qemu/kvm guest, it >> was observed that the qemu doesn't maintain any intelligence for the >> guest ram pages which are released by the

[Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-03-27 Thread Jitendra Kolhe
vm_stop, which has significant impact on the downtime. Moreover, the applications in the guest space won't be actually faulting on the ram pages which are already ballooned out, so the proposed optimization will not show any improvement in migration time during postcopy. Signed-off-by: Jitendra Kolhe <j

Re: [Qemu-devel] [PATCH v2] migration: skip sending ram pages released by virtio-balloon driver.

2016-04-29 Thread Jitendra Kolhe
On 4/13/2016 5:06 PM, Michael S. Tsirkin wrote: > On Wed, Apr 13, 2016 at 12:15:38PM +0100, Dr. David Alan Gilbert wrote: >> * Michael S. Tsirkin (m...@redhat.com) wrote: >>> On Wed, Apr 13, 2016 at 04:24:55PM +0530, Jitendra Kolhe wrote: >>>> Can we extend suppor

Re: [Qemu-devel] [PATCH] hw/virtio/balloon: Fixes for different host page sizes

2016-05-23 Thread Jitendra Kolhe
re of them yet, so that will be a chance > for a really proper final solution, I hope. > >> How about we just skip madvise if host page size is > balloon >> page size, for 2.6? > > That would mean a regression compared to what we have today. Currently, > the ba

[Qemu-devel] [PATCH v3 2/4] balloon: add balloon bitmap migration capability and setup bitmap migration status.

2016-05-18 Thread Jitendra Kolhe
is disabled, migration setup will resize balloon bitmap ramblock size to zero to avoid overhead of bitmap migration. Signed-off-by: Jitendra Kolhe <jitendra.ko...@hpe.com> --- balloon.c | 58 +-- hw/virtio/virtio-balloon.c

[Qemu-devel] [PATCH v3 1/4] balloon: maintain bitmap for pages released by guest balloon driver.

2016-05-18 Thread Jitendra Kolhe
virtio-balloon driver will be represented by 1 in the bitmap. The bitmap is also resized in case more RAM is hotplugged. Signed-off-by: Jitendra Kolhe <jitendra.ko...@hpe.com> --- balloon.c | 91 +- exec.c | 6 ++
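As a rough illustration of the bookkeeping this patch describes, the sketch below keeps one bit per balloon page, sets it on inflate, clears it on deflate, and grows the bitmap when RAM is hotplugged. The type, function names, and the plain realloc()-based bitmap are invented for the example; the actual patch uses QEMU's own bitmap helpers and RAMBlock plumbing.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct BalloonBitmap {
    unsigned char *bits;       /* 1 bit per balloon page */
    uint64_t nr_pages;
} BalloonBitmap;

/* Mark a page as released to the host (inflate) or back in use (deflate). */
void balloon_bitmap_update(BalloonBitmap *bm, uint64_t pfn, bool released)
{
    if (pfn >= bm->nr_pages) {
        return;
    }
    if (released) {
        bm->bits[pfn / 8] |= (unsigned char)(1u << (pfn % 8));
    } else {
        bm->bits[pfn / 8] &= (unsigned char)~(1u << (pfn % 8));
    }
}

/* Grow the bitmap when more RAM is hotplugged; new pages start unballooned. */
int balloon_bitmap_extend(BalloonBitmap *bm, uint64_t new_nr_pages)
{
    size_t old_bytes = (size_t)((bm->nr_pages + 7) / 8);
    size_t new_bytes = (size_t)((new_nr_pages + 7) / 8);
    unsigned char *p;

    if (new_nr_pages <= bm->nr_pages) {
        return 0;                       /* never shrink here */
    }
    p = realloc(bm->bits, new_bytes);
    if (!p) {
        return -1;
    }
    memset(p + old_bytes, 0, new_bytes - old_bytes);
    bm->bits = p;
    bm->nr_pages = new_nr_pages;
    return 0;
}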

[Qemu-devel] [PATCH v3 3/4] balloon: reset balloon bitmap ramblock size on source and target.

2016-05-18 Thread Jitendra Kolhe
. Signed-off-by: Jitendra Kolhe <jitendra.ko...@hpe.com> --- balloon.c | 15 +++ hw/virtio/virtio-balloon.c | 15 +++ include/hw/virtio/virtio-balloon.h | 1 + include/sysemu/balloon.h | 1 + 4 files changed, 32 insertions(+)

[Qemu-devel] [PATCH v3 4/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.

2016-05-18 Thread Jitendra Kolhe
> VIRTIO_BALLOON_PFN_SHIFT, the bitmap test function will return true if all sub-pages of size (1UL << VIRTIO_BALLOON_PFN_SHIFT) within dirty page are ballooned out. The test against bitmap gets disabled in case balloon bitmap status is set to disable during migration setup. Signed-off-by:
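A sketch of the sub-page test described here, assuming the balloon bitmap tracks pages of size (1UL << VIRTIO_BALLOON_PFN_SHIFT) while migration may work on larger dirty pages: the dirty page is skippable only if every balloon-sized sub-page inside it is ballooned out. Function and parameter names are illustrative, not the patch's.

#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_BALLOON_PFN_SHIFT 12        /* balloon pages are 4 KiB */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static bool test_bit_ul(const unsigned long *map, uint64_t nr)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1UL;
}

/* Return true only if every balloon-sized sub-page of the dirty page
 * starting at guest address 'addr' is marked as ballooned out. */
bool balloon_bitmap_test(const unsigned long *bitmap,
                         uint64_t addr, uint64_t dirty_page_size)
{
    uint64_t balloon_page_size = 1ULL << VIRTIO_BALLOON_PFN_SHIFT;
    uint64_t first = addr >> VIRTIO_BALLOON_PFN_SHIFT;
    uint64_t count = dirty_page_size / balloon_page_size;

    if (count == 0) {
        count = 1;     /* dirty page no larger than a balloon page */
    }
    for (uint64_t i = 0; i < count; i++) {
        if (!test_bit_ul(bitmap, first + i)) {
            return false;              /* at least one sub-page still in use */
        }
    }
    return true;
}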

[Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.

2016-05-18 Thread Jitendra Kolhe
Balloon bitmap ramblock size is set to zero if the optimization is disabled, to avoid the overhead of migrating the bitmap. If the bitmap is not migrated to the target, the destination starts with a fresh bitmap and tracks the ballooning operation thereafter. Jitendra Kolhe (4): balloon: maintain bitmap for pages re

Re: [Qemu-devel] [PATCH RFC] mem-prealloc: Reduce large guest start-up and migration time.

2017-02-02 Thread Jitendra Kolhe
On 1/27/2017 6:56 PM, Daniel P. Berrange wrote: > On Thu, Jan 05, 2017 at 12:54:02PM +0530, Jitendra Kolhe wrote: >> Using "-mem-prealloc" option for a very large guest leads to huge guest >> start-up and migration time. This is because with "-mem-prealloc" opti

Re: [Qemu-devel] [PATCH RFC] mem-prealloc: Reduce large guest start-up and migration time.

2017-02-06 Thread Jitendra Kolhe
On 1/30/2017 2:02 PM, Jitendra Kolhe wrote: > On 1/27/2017 6:33 PM, Dr. David Alan Gilbert wrote: >> * Jitendra Kolhe (jitendra.ko...@hpe.com) wrote: >>> Using "-mem-prealloc" option for a very large guest leads to huge guest >>> start-up and migration time

Re: [Qemu-devel] [PATCH RFC] mem-prealloc: Reduce large guest start-up and migration time.

2017-01-30 Thread Jitendra Kolhe
On 1/27/2017 6:23 PM, Juan Quintela wrote: > Jitendra Kolhe <jitendra.ko...@hpe.com> wrote: >> Using "-mem-prealloc" option for a very large guest leads to huge guest >> start-up and migration time. This is because with "-mem-prealloc" option >> qem

Re: [Qemu-devel] [PATCH RFC] mem-prealloc: Reduce large guest start-up and migration time.

2017-01-30 Thread Jitendra Kolhe
On 1/27/2017 6:33 PM, Dr. David Alan Gilbert wrote: > * Jitendra Kolhe (jitendra.ko...@hpe.com) wrote: >> Using "-mem-prealloc" option for a very large guest leads to huge guest >> start-up and migration time. This is because with "-mem-prealloc" option

[Qemu-devel] [PATCH v4] mem-prealloc: reduce large guest start-up and migration time.

2017-02-23 Thread Jitendra Kolhe
no longer touches any pages. - simplify code by returning memset_thread_failed status from touch_all_pages. Signed-off-by: Jitendra Kolhe <jitendra.ko...@hpe.com> --- backends/hostmem.c | 4 +- exec.c | 2 +- include/qem

Re: [Qemu-devel] [PATCH v3] mem-prealloc: reduce large guest start-up and migration time.

2017-02-23 Thread Jitendra Kolhe
On 2/23/2017 3:31 PM, Paolo Bonzini wrote: > > > On 23/02/2017 10:56, Jitendra Kolhe wrote:
>>      if (sigsetjmp(sigjump, 1)) {
>> -        error_setg(errp, "os_mem_prealloc: Insufficient free host memory "

[Qemu-devel] [PATCH v3] mem-prealloc: reduce large guest start-up and migration time.

2017-02-23 Thread Jitendra Kolhe
memset threads. Changed in v3: - limit number of threads spawned based on min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus) - implement memset thread specific siglongjmp in SIGBUS signal_handler. Signed-off-by: Jitendra Kolhe <jitendra.ko...@hpe.com> --- backends/hostmem.c | 4 +-- exec
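The two v3 changes listed above can be pictured with the sketch below: the memset thread count is capped at min(sysconf(_SC_NPROCESSORS_ONLN), 16, smp_cpus), and each thread sigsetjmp()s so a SIGBUS hit while touching its share of pages becomes a failure status rather than a crash. This is only the shape of the idea with invented names (struct touch_args, do_touch_pages, touch_all_pages), not the exact patch; it also shows a failure flag collected from the threads, in the spirit of the memset_thread_failed status mentioned in the v4 entry.

#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

#define MAX_MEM_PREALLOC_THREAD_COUNT 16

static __thread sigjmp_buf sigjump;        /* one jump buffer per thread */
static volatile bool memset_thread_failed;

static void sigbus_handler(int sig)
{
    /* A synchronous SIGBUS is delivered to the faulting thread, so this
     * jumps back into that thread's own sigsetjmp() below. */
    siglongjmp(sigjump, 1);
}

static int get_memset_num_threads(int smp_cpus)
{
    long host_procs = sysconf(_SC_NPROCESSORS_ONLN);
    int ret = MAX_MEM_PREALLOC_THREAD_COUNT;

    if (host_procs > 0 && host_procs < ret) {
        ret = (int)host_procs;
    }
    if (smp_cpus > 0 && smp_cpus < ret) {
        ret = smp_cpus;
    }
    return ret;
}

struct touch_args {
    char *start;
    size_t npages;
    size_t pagesize;
};

static void *do_touch_pages(void *arg)
{
    struct touch_args *a = arg;

    if (sigsetjmp(sigjump, 1)) {
        memset_thread_failed = true;    /* SIGBUS while touching pages */
        return NULL;
    }
    for (size_t i = 0; i < a->npages; i++) {
        *(volatile char *)(a->start + i * a->pagesize) = 0;
    }
    return NULL;
}

/* Touch all pages with a capped number of threads; returns true on success. */
bool touch_all_pages(char *area, size_t pagesize, size_t numpages, int smp_cpus)
{
    int nthreads = get_memset_num_threads(smp_cpus);
    pthread_t threads[MAX_MEM_PREALLOC_THREAD_COUNT];
    struct touch_args args[MAX_MEM_PREALLOC_THREAD_COUNT];
    struct sigaction act, oldact;
    size_t per_thread = numpages / nthreads;

    memset(&act, 0, sizeof(act));
    act.sa_handler = sigbus_handler;
    sigemptyset(&act.sa_mask);

    memset_thread_failed = false;
    sigaction(SIGBUS, &act, &oldact);

    for (int i = 0; i < nthreads; i++) {
        args[i].start = area + (size_t)i * per_thread * pagesize;
        args[i].npages = (i == nthreads - 1) ? numpages - (size_t)i * per_thread
                                             : per_thread;
        args[i].pagesize = pagesize;
        pthread_create(&threads[i], NULL, do_touch_pages, &args[i]);
    }
    for (int i = 0; i < nthreads; i++) {
        pthread_join(threads[i], NULL);
    }
    sigaction(SIGBUS, &oldact, NULL);
    return !memset_thread_failed;
}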

Re: [Qemu-devel] [PATCH v2] mem-prealloc: reduce large guest start-up and migration time.

2017-02-13 Thread Jitendra Kolhe
On 2/13/2017 5:34 PM, Igor Mammedov wrote: > On Mon, 13 Feb 2017 11:23:17 + > "Daniel P. Berrange" <berra...@redhat.com> wrote: > >> On Mon, Feb 13, 2017 at 11:45:46AM +0100, Igor Mammedov wrote: >>> On Mon, 13 Feb 2017 14:30:56 +0530

[Qemu-devel] [PATCH v2] mem-prealloc: reduce large guest start-up and migration time.

2017-02-13 Thread Jitendra Kolhe
64 Core - 4TB   | 1m58.970s | 31m43.400s
64 Core - 1TB   | 0m39.885s | 7m55.289s
64 Core - 256GB | 0m11.960s | 2m0.135s
---
Changed in v2:
- modify number of memset threads spawned to min(smp_cpus, 16).
- removed 64GB memory restriction for spawning memset

Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.

2017-01-04 Thread Jitendra Kolhe
On 1/5/2017 7:03 AM, Li, Liang Z wrote: >> On 23.12.2016 at 03:50, Li, Liang Z wrote: While measuring live migration performance for qemu/kvm guest, it was observed that the qemu doesn't maintain any intelligence for the guest ram pages released by the guest balloon driver and

[Qemu-devel] [PATCH RFC] mem-prealloc: Reduce large guest start-up and migration time.

2017-01-04 Thread Jitendra Kolhe
usage - map guest pages using 16 threads
---
64 Core - 4TB   | 1m58.970s | 31m43.400s
64 Core - 1TB   | 0m39.885s | 7m55.289s
64 Core - 256GB | 0m11.960s | 2m0.135s
--

Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.

2016-12-26 Thread Jitendra Kolhe
On 12/23/2016 8:20 AM, Li, Liang Z wrote: >> While measuring live migration performance for qemu/kvm guest, it was >> observed that the qemu doesn't maintain any intelligence for the guest ram >> pages released by the guest balloon driver and treats such pages as any other >> normal guest ram

[Qemu-devel] [PATCH] mem-prealloc: fix sysconf(_SC_NPROCESSORS_ONLN) failure case.

2017-03-21 Thread Jitendra Kolhe
sysconf() failure gracefully. In case sysconf() fails, we fall back to single threaded. (Spotted by Coverity, CID 1372465.) Signed-off-by: Jitendra Kolhe <jitendra.ko...@hpe.com> --- util/oslib-posix.c | 16 ++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/util
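The gist of this fix, as a hedged sketch rather than the literal patch: check the sysconf(_SC_NPROCESSORS_ONLN) result for failure before feeding it into the thread-count calculation, and degrade to a single preallocation thread when it fails. The helper name mirrors the one used in the earlier mem-prealloc sketch and is not necessarily the patch's.

#include <unistd.h>

#define MAX_MEM_PREALLOC_THREAD_COUNT 16

/* Pick the number of memset threads, degrading to one thread when
 * sysconf(_SC_NPROCESSORS_ONLN) cannot report the online CPU count. */
int get_memset_num_threads(int smp_cpus)
{
    long host_procs = sysconf(_SC_NPROCESSORS_ONLN);
    int n;

    if (host_procs <= 0) {
        return 1;               /* sysconf() failed: stay single-threaded */
    }
    n = host_procs < MAX_MEM_PREALLOC_THREAD_COUNT
            ? (int)host_procs : MAX_MEM_PREALLOC_THREAD_COUNT;
    return (smp_cpus > 0 && smp_cpus < n) ? smp_cpus : n;
}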

Re: [Qemu-devel] [PULL 03/18] mem-prealloc: reduce large guest start-up and migration time.

2017-03-21 Thread Jitendra Kolhe
On 3/18/2017 7:28 PM, Peter Maydell wrote: > On 14 March 2017 at 16:18, Paolo Bonzini <pbonz...@redhat.com> wrote: >> From: Jitendra Kolhe <jitendra.ko...@hpe.com> >> >> Using "-mem-prealloc" option for a large guest leads to higher guest >> start-