Liang
> -----Original Message-----
> From: qemu-devel-bounces+liang.z.li=intel@nongnu.org [mailto:qemu-
> devel-bounces+liang.z.li=intel@nongnu.org] On Behalf Of Juan Quintela
> Sent: Tuesday, February 23, 2016 5:57 PM
> To: Li, Liang
> On 23 February 2016 at 09:48, Paolo Bonzini wrote:
> > On 23/02/2016 10:09, Peter Maydell wrote:
> >> Hi. I'm afraid this doesn't compile for x86-64 Linux:
> >
> > What compiler is this, and does the following compile with no
> > particular extra options?
> >
> > #pragma
> Cc: qemu list; Dr. David Alan Gilbert; Juan Quintela
> Subject: Re: [Qemu-devel] [PULL 0/5] migration pull
>
> On 23 February 2016 at 07:30, Amit Shah wrote:
> > The following changes since commit
> 8eb779e4223a18db9838a49ece1bc72cfdfb7761:
> >
> > Merge remote-tracking
Ping ...
Liang
> -----Original Message-----
> From: Li, Liang Z
> Sent: Friday, January 15, 2016 6:06 PM
> To: qemu-devel@nongnu.org
> Cc: quint...@redhat.com; amit.s...@redhat.com; dgilb...@redhat.com;
> zhang.zhanghaili...@huawei.com; Li, Liang Z
> Subject: [PATCH RES
> Not sure; I could take it from the migration tree if no one objects.
>
> Amit
Thanks, Amit. If rework is needed, just let me know.
Liang
> On 27/01/2016 08:33, Liang Li wrote:
> > buffer_find_nonzero_offset() is a hot function during live migration.
> > Now it uses SSE2 instructions for optimization. For platforms that
> > support AVX2 instructions, using the AVX2 instructions for optimization
> > can help to improve the performance of
> On 20/01/2016 10:05, Liang Li wrote:
> > Detect if the compiler can support the ifunc and avx2, if so, set
> > CONFIG_AVX2_OPT which will be used to turn on the avx2 instruction
> > optimization.
> >
> > Signed-off-by: Liang Li
> > ---
> > configure | 20
This patch will break LM.
> >
> > Which portion of the VM's RAM pages will be written by QEMU? Do you
> have any exact information?
> > I can't wait for Paolo's response.
>
> It is basically anything that uses rom_add_file_fixed or rom_add_blob_fixed
> with an address that points into RAM.
>
>
> On 2016/1/15 18:24, Li, Liang Z wrote:
> >> It seems that this patch is incorrect: if the non-zero pages are
> >> zeroed again during !ram_bulk_stage, we don't send the newly zeroed
> >> page, so there will be an error.
> >>
> >
> > If not in ram_b
> > Not yet. I saw Dave's comments; it will break postcopy, but it's not hard
> > to fix
> this.
> > A more important thing is Paolo's comment: I don't know in which case
> this patch will break LM. Do you have any idea about this?
> > Hope that QEMU doesn't write data to the block 'pc.ram'.
> >
>
>
> Actually, someone did that before and caused a migration bug; see
> commit f1c72795af573b24a7da5eb52375c9aba8a37972, and the fixing patch is
> commit 9ef051e5536b6368a1076046ec6c4ec4ac12b5c6
> Revert "migration: do not sent zero pages in bulk stage"
Thanks for your information, I
> On 15/01/2016 10:48, Liang Li wrote:
> > Now that VM's RAM pages are initialized to zero (VM's RAM is allocated
> > with the mmap() and MAP_ANONYMOUS option, or mmap() without
> MAP_SHARED
> > if hugetlbfs is used), there is no need to send the zero page
> > header to destination.
> >
> > For
> * Liang Li (liang.z...@intel.com) wrote:
> > Now that VM's RAM pages are initialized to zero (VM's RAM is allocated
> > with the mmap() and MAP_ANONYMOUS option, or mmap() without
> MAP_SHARED
> > if hugetlbfs is used), there is no need to send the zero page
> > header to destination.
> >
> >
> It seems that this patch is incorrect: if the non-zero pages are zeroed again
> during !ram_bulk_stage, we don't send the newly zeroed page, so there will be
> an error.
>
If not in ram_bulk_stage, we still send the header; could you explain why it's
wrong?
Liang
> > For a guest that just uses a small
Hi Kevin,
We just found that when starting QEMU with the '-smp 20 -no-acpi' option, the
centos6.6 or rhel 7.2 guest failed to boot. By debugging,
I found that your seabios patch, commit id '9ee2e26255661a', caused the
failure.
I don't know what issue your patch tried to fix, assuming it's the
Correct something.
The actual parameters for QEMU in our test case are:
'qemu-system-x86_64 -enable-kvm -smp 20 -m 2048 -no-acpi -monitor stdio
-drive file=/mnt/centos6u6.qcow,if=none,id=foo -device virtio-blk-pci,
drive=foo'
if there is no virtio-blk device: 'qemu-system-x86_64
> On Thu, Jan 14, 2016 at 10:36:07AM +0000, Li, Liang Z wrote:
> > Correct something.
> > The actual parameter for QEMU in our test case is:
> > 'qemu-system-x86_64 -enable-kvm -smp 20 -m 2048 -no-acpi -monitor
> > stdio -drive file=/mnt/centos6u6.qcow,if=none
> From: "Dr. David Alan Gilbert"
>
> qemu_get_buffer does a copy, we can avoid the memcpy, and we can then
> remove the extra buffer.
>
> Signed-off-by: Dr. David Alan Gilbert
> ---
> migration/ram.c | 11 +++
> 1 file changed, 3
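For context, the pattern Dave describes, as a self-contained C sketch; the File type and function names below are illustrative stand-ins for QEMUFile and (presumably) qemu_get_buffer_in_place(), not the actual QEMU code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t buf[32768];   /* internal read-ahead buffer */
    size_t  buf_index;    /* next unread byte */
    size_t  buf_size;     /* bytes currently buffered */
} File;

/* Copying read: the data is memcpy'd into caller-supplied storage. */
size_t file_get_buffer(File *f, uint8_t *dst, size_t size)
{
    size_t avail = f->buf_size - f->buf_index;
    size_t n = size < avail ? size : avail;

    memcpy(dst, f->buf + f->buf_index, n);
    f->buf_index += n;
    return n;
}

/* In-place read: hand back a pointer into the file's own buffer, valid
 * only until the next operation on 'f'. No memcpy, no extra buffer. */
size_t file_get_buffer_in_place(File *f, uint8_t **dst, size_t size)
{
    if (size > f->buf_size - f->buf_index) {
        return 0;   /* caller must fall back to the copying variant */
    }
    *dst = f->buf + f->buf_index;
    f->buf_index += size;
    return size;
}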
> >> without the '-mavx2' option for gcc, there is a compile error:
> >> '__m256i undeclared'; the __attribute__((target("avx2"))) can't solve
> >> this issue. Any idea?
> >
> > You're right that you can't use the normal __m256i, as it doesn't get
> > declared.
>
> It should be declared.
> On 12/08/2015 04:08 AM, Liang Li wrote:
> > +++ b/util/buffer-zero-avx2.c
> > @@ -0,0 +1,54 @@
> > +#include "qemu-common.h"
> > +
> > +#if defined CONFIG_IFUNC && defined CONFIG_AVX2
> > +#include <immintrin.h>
> > +
> > +#define AVX2_VECTYPE __m256i
> > +#define AVX2_SPLAT(p) _mm256_set1_epi8(*(p))
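For reference, a self-contained sketch of an AVX2 zero-buffer check along these lines, assuming a compiler recent enough that __attribute__((target("avx2"))) works without -mavx2; the function name and size assumptions are illustrative, not the exact patch:

#include <immintrin.h>
#include <stdbool.h>
#include <stddef.h>

#define AVX2_VECTYPE __m256i

/* Returns true if 'buf' contains only zero bytes. Assumes len is a
 * non-zero multiple of sizeof(AVX2_VECTYPE) and buf is 32-byte aligned. */
__attribute__((target("avx2")))
bool buffer_is_zero_avx2(const void *buf, size_t len)
{
    const AVX2_VECTYPE *p = buf;
    const AVX2_VECTYPE zero = _mm256_setzero_si256();

    for (size_t i = 0; i < len / sizeof(AVX2_VECTYPE); i++) {
        AVX2_VECTYPE cmp = _mm256_cmpeq_epi8(p[i], zero);
        /* movemask yields one bit per byte; all-ones (-1) means every
         * byte of the vector compared equal to zero. */
        if (_mm256_movemask_epi8(cmp) != -1) {
            return false;
        }
    }
    return true;
}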
> On 12/09/2015 01:32 AM, Li, Liang Z wrote:
> > I think you mean the '__attribute__((target("avx2")))'; I have tried this
> way, and the issue here is:
> > without the '-mavx2' option for gcc, there is a compile error:
> > '__m256i undeclared', the __attribu
> On 8 December 2015 at 12:08, Liang Li wrote:
> > Add the '--enable-avx2' & '--disable-avx2' options so as to configure the
> > AVX2 instruction optimization.
> >
> > If '--disable-avx2' is not set, configure will detect if the compiler
> > can support AVX2 option, if yes, AVX2
> >> - blen could still be smaller than compressBound(size), you need to
> >> recheck
> >> - blen could have changed, but you don't take that into account for the
> >> following caller.
> >>
> >> So, I think code has a bug?
> >
> > Yes, there is a bug, I should consider the case QEMUFile with
> > Add pci = [ '$VF_BDF', '$VF_BDF', '$VF_BDF'] in
>
> This is a bit confusing: it is not actually correct to assign the same
> device, even
> an SR_IOV VF, multiple times, so these must be all different. More like:
>
> pci = [ '$VF_BDF1', '$VF_BDF2', '$VF_BDF3']
>
>
> > hvm guest
> On Fri, Dec 04, 2015 at 11:52:07AM +0800, Liang Li wrote:
> > There are some flaws in qemu_put_compression_data, this patch tries to
> > fix it. Now it can be used by other code.
> >
> > Signed-off-by: Liang Li
> > ---
> > migration/qemu-file.c | 10 +-
> > 1 file
> > There are some flaws in qemu_put_compression_data, this patch tries to
> > fix it. Now it can be used by other code.
> >
> > Signed-off-by: Liang Li
> > ---
> > migration/qemu-file.c | 10 +-
> > 1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git
> On (Fri) 04 Dec 2015 [11:53:09], Liang Li wrote:
> > There are some flaws in qemu_put_compression_data, this patch tries to
> > fix it. Now it can be used by other code.
>
> Can you please write a better description here? What are the flaws?
> What is being fixed? What other users, and how is
> Hi
>
> We are in hard freeze. My understanding is that these are "optimizations"
> that can wait for 2.6:
> - my understanding from the commit message and from a quick look at
> the code is that this change is not needed for current users; is that
> correct?
> - we avoid a copy at the
>
> Thanks for describing how to reproduce the bug.
> If some pages are not transferred to the destination then it is a bug, so we
> need to know what the problem is. Notice that the problem can be that TCG is
> not marking some page dirty, that the migration code "forgets" about that page, or
> anything
>
> The main issue here is that you are not testing whether the compiler supports
> gnu_indirect_function.
>
> I suggest that you start by moving the functions to util/buffer-zero.c
>
> Then the structure should be something like
>
> #ifdef CONFIG_HAVE_AVX2
> #include <immintrin.h>
> #endif
>
> ... define
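A minimal sketch of the structure Paolo seems to be suggesting, assuming configure defines CONFIG_HAVE_AVX2 and the toolchain supports gnu_indirect_function; all function names here are illustrative:

#include <stdbool.h>
#include <stddef.h>

static bool buffer_is_zero_base(const void *buf, size_t len)
{
    const unsigned long *p = buf;

    for (size_t i = 0; i < len / sizeof(unsigned long); i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

#ifdef CONFIG_HAVE_AVX2
#include <immintrin.h>

__attribute__((target("avx2")))
static bool buffer_is_zero_avx2(const void *buf, size_t len)
{
    const __m256i *p = buf;
    const __m256i zero = _mm256_setzero_si256();

    for (size_t i = 0; i < len / sizeof(__m256i); i++) {
        if (_mm256_movemask_epi8(_mm256_cmpeq_epi8(p[i], zero)) != -1) {
            return false;
        }
    }
    return true;
}

/* The resolver runs once at load time and picks the implementation. */
static bool (*resolve_buffer_is_zero(void))(const void *, size_t)
{
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? buffer_is_zero_avx2
                                          : buffer_is_zero_base;
}

bool buffer_is_zero(const void *buf, size_t len)
    __attribute__((ifunc("resolve_buffer_is_zero")));
#else
bool buffer_is_zero(const void *buf, size_t len)
{
    return buffer_is_zero_base(buf, len);
}
#endif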
> On 12/11/2015 03:49, Li, Liang Z wrote:
> > I am very surprised about the live migration performance result when
> > I use your ' memeqzero4_paolo' instead of these SSE2 Intrinsics to
> > check the zero pages.
>
> What code were you using? Remember I suggest
> >>> I am very surprised about the live migration performance result
> >>> when I use your ' memeqzero4_paolo' instead of these SSE2 Intrinsics
> >>> to check the zero pages.
> >>
> >> What code were you using? Remember I suggested using only unsigned
> >> long checks, like
> >>
> >>
> On 12/11/2015 10:40, Li, Liang Z wrote:
> > I migrated an 8GB RAM idle guest; I think most of its pages are zero pages.
> >
> > I use your new code:
> > -
> > unsigned long *p = ...
> > if (p[0] || p[1
> >> >
> >> > I use your new code:
> >> > -
> >> > unsigned long *p = ...
> >> > if (p[0] || p[1] || p[2] || p[3]
> >> >     || memcmp(p+4, p, size - 4 * sizeof(unsigned long)) != 0)
> >> >         return BUFFER_NOT_ZERO;
> >> > else
> >> >
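For reference, the complete shape of that check (the memeqzero trick: test the first words by hand, then let memcmp compare the rest of the buffer against its own already-verified zero prefix). A self-contained sketch with assumed naming, and with the boolean sense inverted relative to the BUFFER_NOT_ZERO fragment above:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Returns true if 'buf' is all zero bytes. Assumes size is at least
 * 4 * sizeof(unsigned long), a multiple of the word size, and that
 * buf is word-aligned. */
bool buffer_is_zero_memcmp(const void *buf, size_t size)
{
    const unsigned long *p = buf;

    /* Check the first four words directly... */
    if (p[0] || p[1] || p[2] || p[3]) {
        return false;
    }
    /* ...then compare the remainder against the zeroed prefix: every
     * byte must equal the byte four words earlier, so by induction the
     * whole buffer is zero iff memcmp reports equality. */
    return memcmp(p + 4, p, size - 4 * sizeof(unsigned long)) == 0;
}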
> > This patch uses the ifunc mechanism to select the proper function at
> > runtime: for platforms that support AVX2, execute the AVX2 instructions;
> > else, execute the original code.
> >
> > Signed-off-by: Liang Li
> > ---
> > include/qemu-common.h | 28 +++--
>
> On 10/11/2015 10:26, Li, Liang Z wrote:
> > I don't know Paolo's opinion about how to deal with the SSE2
> > intrinsics; he is the author. From my personal view, now that we have
> > found a better way, why use such low-level SSE2/AVX2 intrinsics?
>
> I to
> On 10/11/2015 10:41, Li, Liang Z wrote:
> >> On 10/11/2015 10:26, Li, Liang Z wrote:
> >>> I don't know Paolo's opinion about how to deal with the SSE2
> >>> intrinsics; he is the author. From my personal view, now that we
> >>> have found a
> On 10/11/2015 10:26, Li, Liang Z wrote:
> > I don't know Paolo's opinion about how to deal with the SSE2
> > intrinsics; he is the author. From my personal view, now that we have
> > found a better way, why use such low-level SSE2/AVX2 intrinsics?
>
> I totally agr
> On 10/11/2015 10:56, Li, Liang Z wrote:
> > > I agree that your patch can be dropped, but go ahead and submit your
> > > improvements!
> >
> > You mean I do this work?
> > If you are busy, I can do this.
>
> It's not that I'm busy, it's that it's you
> > Eric, thanks for your information. I didn't notice that discussion before.
> >
> >
> > I rewrote the buffer_find_nonzero_offset() with the 'bool memeqzero4_paolo
> length'
> > then wrote a test program to check a large number of zero pages, and
> > used 'time' to record the time taken by
> Rather than trying to cater to multiple assembly instruction implementations
> ourselves, have you tried taking the ideas in this earlier thread?
> https://lists.gnu.org/archive/html/qemu-devel/2015-10/msg05298.html
>
> Ideally, libc's memcmp() will already be using the most efficient assembly
> >>> Yes, you are right. Thanks a lot.
> >>>
> >>> BTW, can this patch fix the regression you reported?
> >>>
> >>> Reviewed-by: Liang Li
> >>>
> >> yes
> > Great. You'd better change the commit message to make it clearer.
> >
> > Liang
> argh.. you are right...
>
>
> -----Original Message-----
> From: Denis V. Lunev [mailto:d...@openvz.org]
> Sent: Saturday, November 07, 2015 11:20 PM
> To: Li, Liang Z; Paolo Bonzini; Juan Quintela; Amit Shah
> Cc: QEMU
> Subject: assert during internal snapshot
>
> Hello, All!
>
> Hello, All!
>
> This commit
>
> commit 94f5a43704129ca4995aa3385303c5ae225bde42
> Author: Liang Li
> Date: Mon Nov 2 15:37:00 2015 +0800
>
> migration: defer migration_end & blk_mig_cleanup
>
> Because of the patch 3ea3b7fa9af067982f34b of kvm, which
> migration: defer migration_end & blk_mig_cleanup
>
> Because of the KVM patch 3ea3b7fa9af067982f34b, which introduces a
> mechanism for lazy collapsing of small sptes into large sptes,
> migration_end() is now a time-consuming operation because it calls
>
> On 11/09/2015 08:10 AM, Li, Liang Z wrote:
> >> since commit
> >> commit 94f5a43704129ca4995aa3385303c5ae225bde42
> >> Author: Liang Li <liang.z...@intel.com>
> >> Date: Mon Nov 2 15:37:00 2015 +0800
> >>
> >>
> since commit
> commit 94f5a43704129ca4995aa3385303c5ae225bde42
> Author: Liang Li
> Date: Mon Nov 2 15:37:00 2015 +0800
>
> migration: defer migration_end & blk_mig_cleanup
>
> when the actual .cleanup callback calls were removed from the completion operations.
> This can be simplified a bit:
>
> int kvm_get_tsc(CPUState *cs)
> {
> X86CPU *cpu = X86_CPU(cs);
> CPUX86State *env = &cpu->env;
> struct {
> struct kvm_msrs info;
> struct kvm_msr_entry entries[1];
> } msr_data;
> int ret;
>
> if (env->tsc_valid) {
>
Hi mg,
Have you tested this patch? Can it fix the kvm-clock issue?
Liang
> -----Original Message-----
> From: Marcin Gibuła [mailto:m.gib...@beyond.pl]
> Sent: Wednesday, August 26, 2015 3:26 AM
> To: Li, Liang Z; qemu-devel@nongnu.org
> Cc: pbonz...@redhat.com; mtosa...@redha
Hi Amit,
I am very glad that you are looking at this patchset. Looking forward to your
comments. :)
Liang
> -----Original Message-----
> From: Amit Shah [mailto:amit.s...@redhat.com]
> Sent: Monday, October 26, 2015 2:10 PM
> To: Li, Liang Z
> Cc: 'qemu-devel@nong
> > Some cleanup operations take a long time during the pause-and-copy
> > stage, especially with the KVM patch 3ea3b7fa9af067; doing these
> > operations after the completion of live migration can help to reduce
> > VM
> downtime.
> >
> > Only the first patch changes the behavior; the remaining 3 patches
> > Some cleanup operations take a long time during the pause-and-copy
> > stage, especially with the KVM patch 3ea3b7fa9af067; doing these
> > operations after the completion of live migration can help to reduce VM
> downtime.
> >
> > Only the first patch changes the behavior; the remaining 3 patches are
> On 02/09/2015 07:40, Amit Shah wrote:
> >> The buffer_find_nonzero_offset() will be called to check for the zero
> >> page
> >> > during live migration; it's a hot function.
> >> > buffer_find_nonzero_offset() has already been optimized with SSE2
> >> > instructions; for platforms that support AVX2,
On 2015-08-25 at 07:52, Liang Li wrote:
This patch is for kvm live migration optimization; it fixes the issue
which commit 317b0a6d8ba tries to fix in another way, and it can
reduce the live migration VM downtime by about 300us.
*This patch is not tested for the issue commit 317b0a6d8ba
Subject: Re: [Qemu-devel] about the patch kvmclock: Ensure proper env->tsc
value for kvmclock_current_nsec calculation
Thanks for your reply. I have read the thread in your email; what is meant
by 'switching from old to new disk'? Could you give a detailed description?
The test case was
Could you please point out what issue the patch 317b0a6d8ba44e tries
to fix? I
found that in live migration cpu_synchronize_all_states will be called
twice, and it sometimes takes more than 1 ms. I tried to do some
optimization but lack the knowledge about the background.
What the
On Thu, Aug 13, 2015 at 01:25:29AM +, Li, Liang Z wrote:
Hi Paolo, Marcelo,
Could you please point out what issue the patch 317b0a6d8ba44e tries to fix?
I
found that in live migration cpu_synchronize_all_states will be called twice,
and it sometimes takes more than 1 ms. I try
Hi Paolo, Marcelo,
Could you please point out what issue the patch 317b0a6d8ba44e tries to fix? I
found that in live migration cpu_synchronize_all_states will be called twice,
and it sometimes takes more than 1 ms. I tried to do some optimization but
lack the knowledge about the background.
On 12/08/2015 23:04, Liang Li wrote:
@@ -1008,8 +1009,10 @@ static void *migration_thread(void *opaque)
             }
             qemu_mutex_lock_iothread();
+            end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+            qemu_savevm_state_cancel();
+
You can remove the
Hi Kevin, Juan,
I found that the function bdrv_invalidate_cache_all() in
process_incoming_migration_co() takes more than 10ms when doing live migration
with shared storage, which prolongs the service downtime.
The bdrv_invalidate_cache_all() is needed when doing the live migration with
'-b'
Right now, we don't have an interface to detect that case and
go back to the iterative stage.
How about going back to the iterative stage when we detect that the
pending_size is larger than max_size, like this:
+/* do flush here is aimed to shorten the VM
diff --git a/docs/migration.txt b/docs/migration.txt
index f6df4be..b4b93d1 100644
--- a/docs/migration.txt
+++ b/docs/migration.txt
@@ -291,3 +291,170 @@ save/send this state when we are in the middle of a
pio operation (that is what ide_drive_pio_state_needed() checks). If
DRQ_STAT is
* Liang Li (liang.z...@intel.com) wrote:
Add the qmp commands to tune and query the parameters used in live
migration.
Hi,
Do you know if there's anyone working on libvirt code to drive this
interface
and turn on your compression code?
Yes, I have confirmed that one person of
Thanks Dave, I will retry according to your suggestion.
Did that work for you?
Yes, it works.
Great.
By the way, I found that the source guest will resume after about 15
minutes if some network errors happen during postcopy. Is it
the expected behavior?
* Li, Liang Z (liang.z...@intel.com) wrote:
Hi David,
I have tried your v6 postcopy patches and found they don't work. When
I tried to start the postcopy in live migration, some errors were printed. I
just did the following things:
On destination side, started the qemu like
Hi David,
I have tried your v6 postcopy patches and found they don't work. When I tried
to start the
postcopy in live migration, some errors were printed. I just did the following
things:
On destination side, started the qemu like this:
On 13/04/2015 16:12, Liang Li wrote:
2. Do the attach and detach operations with a time interval, e.g. 10s.
The error message will not disappear on retry; in this case, it's
a bug.
In the 'xen_pt_region_add' and 'xen_pt_region_del', we should only
care about the
Eric Blake ebl...@redhat.com wrote:
On 04/08/2015 12:20 AM, Liang Li wrote:
Put the three parameters related to multiple thread (de)compression
into an int array, and use an enum type to index the parameter.
Signed-off-by: Liang Li liang.z...@intel.com
Signed-off-by: Yang Zhang
Eric, can you review this and the following patch? I think they are correct,
but
I prefer someone more versed in QMP to review them.
Thanks, Juan.
Hi Juan Eric,
Since the latest QEMU is 2.3.0-rc2, is it possible to merge this series of
patches into QEMU 2.3?
If it can't, then
@@ -889,7 +889,6 @@ static inline void start_compression(CompressParam *param)
     qemu_mutex_unlock(&param->mutex);
 }
-
 static uint64_t bytes_transferred;

 static void flush_compressed_data(QEMUFile *f)
@@ -1458,8 +1457,28 @@ void ram_handle_compressed(void *host, uint8_t ch,
Now, multiple thread compression can co-work with xbzrle. When xbzrle
is on, multiple thread compression will only work at the first round
of RAM data sync.
Signed-off-by: Liang Li liang.z...@intel.com
Signed-off-by: Yang Zhang yang.z.zh...@intel.com
Reviewed-by: Dr. David Alan
Hi guys,
Any more comments about this patch series? Especially the last
three patches about the qmp and hmp interfaces.
Markus, Eric, could you help to take a look? Sorry for missing
Markus in the CC list.
Liang
-----Original Message-----
From: Li, Liang Z
Sent
-
1 file changed, 177 insertions(+), 7 deletions(-)
diff --git a/arch_init.c b/arch_init.c
index 48cae22..9f63c0f 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -355,12 +355,33 @@ static DecompressParam *decomp_param;
 static QemuThread *decompress_threads;
 static uint8_t
--- a/arch_init.c
+++ b/arch_init.c
@@ -355,12 +355,33 @@ static DecompressParam *decomp_param;
 static QemuThread *decompress_threads;
 static uint8_t *compressed_data_buf;
+static int do_compress_ram_page(CompressParam *param);
+
static void *do_data_compress(void
--- a/arch_init.c
+++ b/arch_init.c
@@ -355,12 +355,33 @@ static DecompressParam *decomp_param;
 static QemuThread *decompress_threads;
 static uint8_t *compressed_data_buf;
+static int do_compress_ram_page(CompressParam *param);
+
static void *do_data_compress(void *opaque) {
Then, how should we deal with this issue in 2.3: leave it as is, or make an
incomplete fix like I did above?
I think it is better to leave it as is for 2.3. With a patch like this
one, we improve on one workload and get worse on a different workload
(it depends a lot on the ratio of dirtying memory vs
+    qemu_mutex_lock(&param->mutex);
+    while (!param->start && !quit_decomp_thread) {
start protected by param->mutex.
+        qemu_cond_wait(&param->cond, &param->mutex);
+    pagesize = TARGET_PAGE_SIZE;
+    if (!quit_decomp_thread) {
+        /*
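For readers following the locking discussion, the pattern as a generic, self-contained sketch; pthreads stand in for QemuMutex/QemuCond, and all names are illustrative:

#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    bool            start;   /* protected by mutex: work is pending */
    bool            quit;    /* protected by mutex: thread should exit */
} DecompressParam;

static void *do_data_decompress(void *opaque)
{
    DecompressParam *param = opaque;

    pthread_mutex_lock(&param->mutex);
    while (!param->quit) {
        /* Sleep until there is work, or we are told to quit. */
        while (!param->start && !param->quit) {
            pthread_cond_wait(&param->cond, &param->mutex);
        }
        if (!param->quit) {
            /* ... decompress one page here ... */
            param->start = false;   /* mark the pending work consumed */
        }
    }
    pthread_mutex_unlock(&param->mutex);
    return NULL;
}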
Right now, we don't have an interface to detect that case and go
back to the iterative stage.
How about going back to the iterative stage when we detect that the
pending_size is larger than max_size, like this:
+/* do flush here is aimed to shorten the VM downtime,
+
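A self-contained model of the proposed check; pending_size and max_size are the quantities from the migration thread, but this function is an illustration, not the actual patch:

#include <stdbool.h>
#include <stdint.h>

/* After the final dirty-bitmap sync, completion is only allowed if what
 * remains can be sent within the downtime budget; otherwise the thread
 * should return to the iterative stage instead of pausing the guest. */
bool may_enter_completion_stage(uint64_t pending_size, uint64_t max_size)
{
    if (pending_size && pending_size >= max_size) {
        return false;   /* too much outstanding data: keep iterating */
    }
    return true;        /* safe to pause the guest and finish */
}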
The command takes a list of key-value pairs. Looks like this
(example stolen from your patch to qmp-commands.hx):
{ "execute": "migrate-set-parameters",
  "arguments": { "parameters":
    [ { "parameter": "compress-level", "value": 1
    } ] } }
Awkward. I'd
* Li, Liang Z (liang.z...@intel.com) wrote:
First, an explanation of why I think this doesn't fix the full problem.
With this patch, we fix the problem where we have a dirty block
layer but basically nothing dirtying the memory on the guest (we
are moving the 20 seconds from
The command takes a list of key-value pairs. Looks like this
(example stolen from your patch to qmp-commands.hx):
{ "execute": "migrate-set-parameters",
  "arguments": { "parameters":
    [ { "parameter": "compress-level", "value": 1 }
    ] } }
Awkward. I'd very much
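For comparison, a flatter shape along the lines the reviewer appears to be asking for; this mirrors the form QEMU later adopted for migrate-set-parameters, but it is shown here as an assumption, not a quote from the thread:

{ "execute": "migrate-set-parameters",
  "arguments": { "compress-level": 1 } }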
First, an explanation of why I think this doesn't fix the full problem.
With this patch, we fix the problem where we have a dirty block
layer but basically nothing dirtying the memory on the guest (we are
moving the 20 seconds from max_downtime for the blocklayer flush),
to 20 seconds
+#
+# Migration parameter information
+#
+# @compress-level: compression level
+#
+# @compress-threads: compression thread count
+#
+# @decompress-threads: decompression thread count
+#
+# Since: 2.3
+##
+{ 'union': 'MigrationParameterStatus',
+  'base': 'MigrationParameterBase',
+
This needs further review/changes on the block layer.
First, an explanation of why I think this doesn't fix the full problem.
With this patch, we fix the problem where we have a dirty block layer but
basically nothing dirtying the memory on the guest (we are moving the 20
seconds from max_downtime
Hi Juan
This patch will make my work more difficult. As we discussed before, after
this modification
I have to use a lock before I can reuse the function save_block_hdr in my
compression threads,
which will lead to low efficiency. Could you help to revert this patch?
Liang
-Original
Current migration code returns the number of bytes transferred, and from there
we decide if we have sent something or not. Problem: we need two results,
the number of pages written and the number of bytes written (depending on
compression, zero pages, etc., it is not possible to derive one value from the
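The point is easiest to see as a tiny sketch (names assumed): once zero pages and compression are involved, neither count can be derived from the other, so both must be returned.

#include <stdint.h>

/* Both quantities must be tracked explicitly: a zero page adds 1 page
 * but only a few header bytes, while a compressed page adds 1 page and
 * an unpredictable byte count. */
typedef struct {
    uint64_t pages;   /* RAM pages placed on the wire */
    uint64_t bytes;   /* bytes actually written to the stream */
} SaveResult;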
I think this would make the code work, but not the locking. You are using
here:
quit_comp_thread: global, and it is not completely clear what protects it
comp_done_lock: global
comp_done_cond: global
param[i].busy: I would suggest renaming it to 'pending work'
param[i].mutex:
param[i].cond:
-----Original Message-----
From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com]
Sent: Tuesday, February 17, 2015 10:24 PM
To: Juan Quintela
Cc: qemu-devel@nongnu.org; Li, Liang Z
Subject: Re: [Qemu-devel] [PATCH 0/6] migration: differentiate between
pages and bytes
* Juan Quintela
(Li special edition)
Current migration code returns the number of bytes transferred, and from there
we decide if we have sent something or not. Problem: we need two results,
the number of pages written and the number of bytes written (depending on
compression, zero pages, etc., it is not possible to
Drop this patch and just give an error when trying to set both xbzrle and
compression? The user has to pick one and only one; no second-guessing
him/her?
Live migration can benefit from compression co-working with xbzrle. You
know, xbzrle transfers the raw RAM pages to the destination in the ram bulk
Liang Li liang.z...@intel.com wrote:
Now, multiple thread compression can co-work with xbzrle. When xbzrle
is on, multiple thread compression will only work at the first round
of RAM data sync.
Signed-off-by: Liang Li liang.z...@intel.com
Signed-off-by: Yang Zhang
Reviewing patch 8, I found that we need to fix some things here.
+static int ram_save_compressed_page(QEMUFile *f, RAMBlock *block,
+                                    ram_addr_t offset, bool last_stage)
+{
+    int bytes_sent = -1;
+
+    /* To be done */
+
+    return
Hi Juan,
Have you reviewed patch 04 of the series? I didn't see a reply
email.
Liang
-----Original Message-----
From: Juan Quintela [mailto:quint...@redhat.com]
Sent: Wednesday, February 11, 2015 5:03 PM
To: Li, Liang Z
Cc: qemu-devel@nongnu.org; ebl...@redhat.com
-----Original Message-----
From: Juan Quintela [mailto:quint...@redhat.com]
Sent: Wednesday, February 11, 2015 7:45 PM
To: Li, Liang Z
Cc: qemu-devel@nongnu.org; ebl...@redhat.com; amit.s...@redhat.com;
lcapitul...@redhat.com; arm...@redhat.com; dgilb...@redhat.com; Zhang,
Yang Z
Subject
+++ b/migration/migration.c
@@ -66,9 +66,12 @@ MigrationState *migrate_get_current(void)
         .bandwidth_limit = MAX_THROTTLE,
         .xbzrle_cache_size = DEFAULT_MIGRATE_CACHE_SIZE,
         .mbps = -1,
-        .compress_thread_count = DEFAULT_MIGRATE_COMPRESS_THREAD_COUNT,
Thanks Dave, Eric, for spending time reviewing my patches and giving
valuable comments; I will refine my patches in a later version according
to your suggestions.
* Liang Li (liang.z...@intel.com) wrote:
This feature can help to reduce the data transferred by about 60%, and
the migration
* Liang Li (liang.z...@intel.com) wrote:
Add the qmp and hmp commands to tune the parameters used in live
migration.
If I understand correctly, on the destination side we need to set the number
of decompression threads very early on an incoming migration; I'm not clear
how early that
+size_t migrate_qemu_add_compression_data(QEMUFile *f,
+        const uint8_t *p, size_t size, int level)
It's an odd name; QEMUFile is only used by migration anyway. Maybe
qemufile_add_compression_data?
+{
+    size_t blen = IO_BUF_SIZE - f->buf_index - sizeof(int);
+
+    if
typedef struct compress_param compress_param;
+enum {
+    DONE,
+    START,
+};
+
Do you really need any more than a 'bool busy'?
Good idea.
struct decompress_param {
    /* To be done */
};
typedef struct decompress_param decompress_param;
static
-
+/* When starting to process a new block, the first page of
+ * the block should be sent out before other pages in the same
+ * block, and all the pages in the last block should have been sent
+ * out; keeping this order is important.
Why? Is this just because of