On 3/18/2016 4:57 PM, Roman Kagan wrote:
> [ Sorry I've lost this thread with email setup changes on my side;
> catching up ]
>
> On Tue, Mar 15, 2016 at 06:50:45PM +0530, Jitendra Kolhe wrote:
>> On 3/11/2016 8:09 PM, Jitendra Kolhe wrote:
>>> Here is what I tried, let's say we have 3 versions of qemu (below
>>> timings are for 16GB idle

On 3/11/2016 8:09 PM, Jitendra Kolhe wrote:
You mean the total live migration time for the unmodified qemu and the
'you modified for test' qemu are almost the same?

Not sure I understand the question, but if 'you modified for test' means
the below modifications to save_zero_page(), then the answer is
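
For context, a standalone sketch of the kind of test change being talked
about here; the real save_zero_page() is part of QEMU's migration code and
this is not it, just an illustration with made-up names: if migration
already knows a page was released by the balloon driver, it can report the
page as zero without scanning it.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Stand-in for the existing byte-wise zero check used during migration. */
static bool buffer_is_zero_scan(const uint8_t *p, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

/*
 * "Modified" zero-page decision: if migration already knows the page was
 * handed back by the balloon driver, report it as zero without reading it,
 * so a ballooned-out page costs one flag test instead of a 4 KiB scan.
 */
static bool save_page_is_zero(const uint8_t *host_addr, bool known_ballooned)
{
    return known_ballooned || buffer_is_zero_scan(host_addr, PAGE_SIZE);
}

int main(void)
{
    static const uint8_t page[PAGE_SIZE];   /* an all-zero page */
    /* Same answer either way, but the second call never touches the page
     * (the || short-circuits before the scan). */
    return save_page_is_zero(page, false) == save_page_is_zero(NULL, true)
               ? 0 : 1;
}

If the scan itself dominates, a change of this shape should show up
directly in the total migration time for a mostly idle guest.
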
On 3/11/2016 4:24 PM, Li, Liang Z wrote:
> >>> I wonder if it is the scanning for zeros or sending the whiteout
> >>> which affects the total migration time more. If it is the former
> >>> (as I would expect) then a rather local change to is_zero_range()
> >>> to make use of the mapping information before scanning would get you
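
One way to read "use the mapping information before scanning" is to ask
the kernel which pages of the range are resident at all. Below is a
self-contained sketch under my own assumptions (mincore() residency as a
stand-in for whatever mapping information QEMU would actually consult;
this is not the proposed change to QEMU's is_zero_range()).

#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/* Plain byte-wise check: true if every byte in [p, p + size) is zero. */
static bool is_zero_range_scan(const unsigned char *p, size_t size)
{
    for (size_t i = 0; i < size; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

/*
 * Variant that asks the kernel first.  'p' and 'size' must be page-aligned.
 * Caveat: mincore() only reports residency, so skipping non-resident pages
 * is only safe when the range cannot have been swapped out.
 */
static bool is_zero_range_mapped(const unsigned char *p, size_t size)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = size / page;
    unsigned char *vec = malloc(npages);
    bool zero = true;

    if (!vec || mincore((void *)p, size, vec) != 0) {
        free(vec);
        return is_zero_range_scan(p, size);    /* fall back to scanning */
    }
    for (size_t i = 0; i < npages; i++) {
        /* Only resident pages can hold non-zero data; scan just those. */
        if ((vec[i] & 1) && !is_zero_range_scan(p + i * page, page)) {
            zero = false;
            break;
        }
    }
    free(vec);
    return zero;
}

int main(void)
{
    /* 16 untouched anonymous pages: reported zero without touching them. */
    size_t size = 16 * 4096;
    unsigned char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return buf != MAP_FAILED && is_zero_range_mapped(buf, size) ? 0 : 1;
}

In QEMU itself the same information could come from the balloon device
rather than mincore(), which would also avoid the swap caveat noted in the
comment.
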
On 3/11/2016 12:55 PM, Li, Liang Z wrote:
On 3/10/2016 3:19 PM, Roman Kagan wrote:
> On Fri, Mar 04, 2016 at 02:32:47PM +0530, Jitendra Kolhe wrote:
> > Even though the pages which are returned to the host by the
> > virtio-balloon driver are zero pages, the migration algorithm will
> > still end up scanning the entire page ram_find_and_save_block() ->
> > ram_save_page/
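
The reason those pages read back as zero: when the guest balloons a page
out, the host drops its backing memory, typically via a MADV_DONTNEED-style
hint, and the next read of the anonymous mapping faults in a fresh zero
page. A minimal standalone demo of that effect (not QEMU code):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(p, 0xAB, len);               /* dirty the page */
    printf("before: p[0] = 0x%02x\n", p[0]);

    /* "Return" the page to the host, as ballooning does. */
    if (madvise(p, len, MADV_DONTNEED) != 0) {
        perror("madvise");
        return 1;
    }

    /* The next access faults in a zero page. */
    printf("after : p[0] = 0x%02x\n", p[0]);

    munmap(p, len);
    return 0;
}

So the migration code only ever sees zeroes for those pages, but today it
still pays the cost of scanning each one to find that out.
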
On 3/10/2016 10:57 PM, Eric Blake wrote:
> On 03/10/2016 01:57 AM, Jitendra Kolhe wrote:
>> +++ b/qapi-schema.json
>> @@ -544,11 +544,14 @@
>> #     been migrated, pulling the remaining pages along as needed. NOTE: If
>> #     the migration fails during postcopy the VM will fail. (since 2.5)
>> #
>> +#

On 3/7/2016 10:35 PM, Eric Blake wrote:
> On 03/04/2016 02:02 AM, Jitendra Kolhe wrote:
>> While measuring live migration performance for a qemu/kvm guest, it
>> was observed that qemu doesn't maintain any intelligence for the
>> guest ram pages which are released by the guest balloon driver, and
>> treats such pages as any other normal guest ram pages. This has a
>> direct impact on overall migration
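
To make the idea concrete, here is a self-contained toy sketch of what
such "intelligence" could look like; all of the names and the bitmap
layout are assumptions of mine for illustration, not the interface the
patch series actually adds. The balloon path marks pages as released, and
the migration scan consults the bitmap instead of reading 4 KiB per
ballooned page.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long *balloon_bitmap;   /* one bit per guest page */

static int balloon_bitmap_init(uint64_t npages)
{
    balloon_bitmap = calloc((npages + BITS_PER_LONG - 1) / BITS_PER_LONG,
                            sizeof(unsigned long));
    return balloon_bitmap ? 0 : -1;
}

/* Guest handed a page to the host (balloon inflate). */
static void balloon_page_released(uint64_t pfn)
{
    balloon_bitmap[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
}

/* Guest took the page back (balloon deflate). */
static void balloon_page_reclaimed(uint64_t pfn)
{
    balloon_bitmap[pfn / BITS_PER_LONG] &= ~(1UL << (pfn % BITS_PER_LONG));
}

/* Migration side: may this page be sent as a zero page without scanning? */
static bool balloon_page_is_released(uint64_t pfn)
{
    return (balloon_bitmap[pfn / BITS_PER_LONG] >> (pfn % BITS_PER_LONG)) & 1;
}

int main(void)
{
    /* Toy numbers: a 16 GiB guest of 4 KiB pages with 12 GiB ballooned out. */
    uint64_t total = (16ULL << 30) / 4096;
    uint64_t ballooned = (12ULL << 30) / 4096;
    uint64_t scans_needed = 0;

    if (balloon_bitmap_init(total) < 0) {
        return 1;
    }
    for (uint64_t pfn = 0; pfn < ballooned; pfn++) {
        balloon_page_released(pfn);
    }
    balloon_page_reclaimed(0);           /* guest later deflated one page */

    for (uint64_t pfn = 0; pfn < total; pfn++) {
        if (!balloon_page_is_released(pfn)) {
            scans_needed++;              /* only these still need a zero scan */
        }
    }
    /* Three quarters of the per-page scanning work disappears
     * (plus the one page the guest deflated). */
    return scans_needed == total - ballooned + 1 ? 0 : 1;
}

A real implementation would of course have to keep such a structure in
sync with the balloon device and decide how it interacts with migration of
the bitmap itself; this is only meant to show where the scanning time goes.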