Re: [Qemu-devel] [PATCH V2] add migration capability to bypass the shared memory

2017-09-25 Thread Zhang Haoyu
If memory is hotplugged during migration, the calculation of migration_dirty_pages may be incorrect; it should be fixed as below: -void migration_bitmap_extend(ram_addr_t old, ram_addr_t new) +void migration_bitmap_extend(RAMBlock *block, ram_addr_t old, ram_addr_t new) { /* called in qemu main thread, so
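For reference, a minimal standalone C sketch of the accounting the fix aims at, with the bitmap reallocation and RCU swap elided; the names, the page-count convention for old/new, and the shared flag are illustrative stand-ins, not QEMU's exact internals:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t ram_addr_t;

typedef struct RAMBlock {
    const char *idstr;
    bool shared;                              /* backed by shared memory */
} RAMBlock;

static uint64_t migration_dirty_pages;
static bool bypass_shared_memory = true;      /* the capability under discussion */

/* old/new are page counts here; passing the RAMBlock lets the function
 * skip blocks that the bypass-shared-memory capability will not migrate */
static void migration_bitmap_extend(RAMBlock *block,
                                    ram_addr_t old, ram_addr_t new)
{
    /* ... bitmap reallocation and call_rcu() of the old bitmap elided ... */
    if (!(bypass_shared_memory && block->shared)) {
        migration_dirty_pages += new - old;
    }
}

int main(void)
{
    RAMBlock ivshmem = { "ivshmem", true };
    RAMBlock dimm    = { "dimm1",   false };

    migration_bitmap_extend(&ivshmem, 256, 512);   /* skipped: shared */
    migration_bitmap_extend(&dimm,    256, 512);   /* counted */
    printf("migration_dirty_pages = %" PRIu64 "\n", migration_dirty_pages);
    return 0;
}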

Re: [Qemu-devel] [PATCH V2] add migration capability to bypass the shared memory

2017-09-25 Thread Zhang Haoyu
If memory is hotplugged during migration, the calculation of migration_dirty_pages may not be correct: void migration_bitmap_extend(ram_addr_t old, ram_addr_t new) { ... migration_dirty_pages += new - old; call_rcu(old_bitmap, migration_bitmap_free, rcu); ... } Thanks,

Re: [Qemu-devel] [PATCH] add migration capability to bypass the shared memory

2017-09-21 Thread Zhang Haoyu
Hi, Any update? Thanks, Zhang Haoyu On 2016/8/30 12:11, Lai Jiangshan wrote: > On Wed, Aug 10, 2016 at 5:03 PM, Juan Quintela <quint...@redhat.com> wrote: >> Lai Jiangshan <jiangshan...@gmail.com> wrote: >> >> Hi >> >> First of all, I like a l

Re: [Qemu-devel] [PATCH V2] add migration capability to bypass the shared memory

2017-09-20 Thread Zhang Haoyu
Hi Jiangshan, Any update on this patch? Thanks, Zhang Haoyu On 2016/8/11 22:45, Lai Jiangshan wrote: > Note, the old local migration patchset: > https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg00073.html > > this patch can be considered as a new local migration im

Re: [Qemu-devel] [RFC] introduce bitmap to bdrv_commit to track dirty sector

2015-03-09 Thread Zhang Haoyu
On 2015-03-10 08:29:19, Fam Zheng wrote: On Mon, 03/09 16:14, Zhang Haoyu wrote: Hi John, Vladimir We can use active block commit to implement incremental backup without guest disruption, e.g., origin = A = B = C = current BDS, a new external snapshot will be produced before

Re: [Qemu-devel] question about live migration with storage

2015-03-09 Thread Zhang Haoyu
On 2015-01-15 18:08:39, Paolo Bonzini wrote: On 15/01/2015 10:56, Zhang Haoyu wrote: I see, when waiting for the completion of drive_mirror IO, the coroutine will be switched back to the main thread to poll and process other events, like qmp requests, then after the IO completes, the coroutine

Re: [Qemu-devel] [RFC] introduce bitmap to bdrv_commit to track dirty sector

2015-03-09 Thread Zhang Haoyu
On 2015-03-10 09:54:47, Fam Zheng wrote: On Tue, 03/10 09:30, Zhang Haoyu wrote: On 2015-03-10 08:29:19, Fam Zheng wrote: On Mon, 03/09 16:14, Zhang Haoyu wrote: Hi John, Vladimir We can use active block commit to implement incremental backup without guest disruption

Re: [Qemu-devel] [RFC] introduce bitmap to bdrv_commit to track dirty sector

2015-03-09 Thread Zhang Haoyu
missed something. Thanks, Zhang Haoyu And does qemu support committing any external snapshot to its backing file? Yes.

Re: [Qemu-devel] [RFC] introduce bitmap to bdrv_commit to track dirty sector

2015-03-09 Thread Zhang Haoyu
() the unneeded snapshot in the source or destination end. So, compared with the above mechanism, what are the advantages of the incremental backup implemented by John and Vladimir? Thanks, Zhang Haoyu On 2015-03-09 15:38:40, Paolo Bonzini wrote: On 09/03/2015 08:03, Zhang Haoyu wrote: On 2015-03-03 18:00:09

Re: [Qemu-devel] [RFC] introduce bitmap to bdrv_commit to track dirty sector

2015-03-09 Thread Zhang Haoyu
On 2015-03-03 18:00:09, Paolo Bonzini wrote: On 03/03/2015 07:52, Zhang Haoyu wrote: Hi, If we introduce a bitmap to bdrv_commit to track dirty sectors, could we implement guest non-disruption while performing commit? That is already implemented. It uses the same code that implements

[Qemu-devel] [RFC] introduce bitmap to bdrv_commit to track dirty sector

2015-03-02 Thread Zhang Haoyu
Hi, If we introduce a bitmap to bdrv_commit to track dirty sectors, could we implement guest non-disruption while performing commit? Thanks, Zhang Haoyu

[Qemu-devel] [PATCH] fix mc146818rtc wrong subsection name to avoid vmstate_subsection_load() fail

2015-02-05 Thread Zhang Haoyu
Fix the wrong subsection name in mc146818rtc to avoid vmstate_subsection_load() failure during incoming migration or loadvm. Signed-off-by: Zhang Haoyu zhan...@sangfor.com.cn --- hw/timer/mc146818rtc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hw/timer/mc146818rtc.c b/hw/timer
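A standalone model of the naming rule behind this fix, assuming the convention in QEMU's migration code that a subsection's name is the parent vmsd name plus "/" plus the subsection name; the check below is a simplification of what vmstate_subsection_load() does, and the concrete names are inferred from the patch subject:

#include <stdio.h>
#include <string.h>

/* simplified stand-in for QEMU's subsection matching: an incoming
 * subsection ident only matches when it is "<parent vmsd name>/<name>" */
static int subsection_belongs_to(const char *parent, const char *ident)
{
    size_t n = strlen(parent);
    return strncmp(ident, parent, n) == 0 && ident[n] == '/';
}

int main(void)
{
    const char *parent = "mc146818rtc";

    /* a bare name is never matched, so the subsection load fails */
    printf("%d\n", subsection_belongs_to(parent, "irq_reinject_on_ack_count"));
    /* prefixed name, per the naming convention the fix restores */
    printf("%d\n", subsection_belongs_to(parent, "mc146818rtc/irq_reinject_on_ack_count"));
    return 0;
}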

[Qemu-devel] [RFC] optimization for qcow2 cache get/put

2015-01-26 Thread Zhang Haoyu
, Zhang Haoyu

Re: [Qemu-devel] [question] incremental backup a running vm

2015-01-26 Thread Zhang Haoyu
On 2015-01-26 19:29:03, Paolo Bonzini wrote: On 26/01/2015 12:13, Zhang Haoyu wrote: Thanks, Paolo, but too many internal snapshots were saved by customers; switching to the external snapshot mechanism has a significant impact on subsequent upgrades. In that case, patches are welcome

Re: [Qemu-devel] [RFC] optimization for qcow2 cache get/put

2015-01-26 Thread Zhang Haoyu
On 2015-01-27 09:24:13, Zhang Haoyu wrote: On 2015-01-26 22:11:59, Max Reitz wrote: On 2015-01-26 at 08:20, Zhang Haoyu wrote: Hi, all For a very large qcow2 image, e.g., 2TB, a long disruption happens when taking a snapshot, caused by cache updates and IO waits

Re: [Qemu-devel] [RFC] optimization for qcow2 cache get/put

2015-01-26 Thread Zhang Haoyu
On 2015-01-26 22:11:59, Max Reitz wrote: On 2015-01-26 at 08:20, Zhang Haoyu wrote: Hi, all For a very large qcow2 image, e.g., 2TB, a long disruption happens when taking a snapshot, caused by cache updates and IO waits. perf top data shown below, PerfTop

Re: [Qemu-devel] [question] incremental backup a running vm

2015-01-26 Thread Zhang Haoyu
On 2015-01-26 17:29:43, Paolo Bonzini wrote: On 26/01/2015 02:07, Zhang Haoyu wrote: Hi, Kashyap I've tried ‘drive_backup’ via QMP, but the snapshots were not backed up to the destination; I think the reason is that backup_run() only copies the guest data of the qcow2 image. Yes

Re: [Qemu-devel] [question] incremental backup a running vm

2015-01-25 Thread Zhang Haoyu
On 2015-01-23 07:30:19, Kashyap Chamarthy wrote: On Wed, Jan 21, 2015 at 11:39:44AM +0100, Paolo Bonzini wrote: On 21/01/2015 11:32, Zhang Haoyu wrote: Hi, Does drive_mirror support incrementally backing up a running vm? Or does some other mechanism? incremental backup a running vm

[Qemu-devel] [question] incremental backup a running vm

2015-01-21 Thread Zhang Haoyu
for the changed data. At the next backup, only the dirty data will be mirrored to the destination. Even if the VM is shut down and started after several days, the bitmap will be loaded while starting the vm. Any ideas? Thanks, Zhang Haoyu

Re: [Qemu-devel] [PATCH] spice-char: fix wrong assert condition

2015-01-18 Thread Zhang Haoyu
On 2015-01-17 19:55:16, Peter Maydell wrote: On 17 January 2015 at 11:52, Peter Maydell peter.mayd...@linaro.org wrote: On 17 January 2015 at 06:48, Zhang Haoyu zhan...@sangfor.com.cn wrote: G_IO_OUT|G_IO_HUP are passed from all of the callers of the chr_add_watch hook, the assert condition

[Qemu-devel] [PATCH] spice-char: fix wrong assert condition

2015-01-16 Thread Zhang Haoyu
G_IO_OUT|G_IO_HUP are passed from all of the callers of the chr_add_watch hook, so the assert condition MUST be changed. Signed-off-by: Zhang Haoyu zhan...@sangfor.com.cn --- spice-qemu-char.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spice-qemu-char.c b/spice-qemu-char.c
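A small compilable model of the condition being discussed, assuming glib; the exact replacement assertion is whatever the patch carries, but the sketch shows why exact equality with G_IO_OUT fires once callers pass G_IO_OUT|G_IO_HUP:

#include <assert.h>
#include <glib.h>

/* stand-in for the spice chardev's chr_add_watch hook */
static void chr_add_watch(GIOCondition cond)
{
    /* old check: assert(cond == G_IO_OUT); -- fires for the call below */
    assert(cond & G_IO_OUT);   /* require writability, tolerate G_IO_HUP */
    /* ... set up and attach the GSource ... */
}

int main(void)
{
    chr_add_watch(G_IO_OUT | G_IO_HUP);   /* what the callers actually pass */
    return 0;
}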

Re: [Qemu-devel] question about live migration with storage

2015-01-15 Thread Zhang Haoyu
On 2015-01-15 17:11:49, Paolo Bonzini wrote: On 15/01/2015 04:54, Zhang Haoyu wrote: 2) Finer-grained control of the parameters of block migration (dirty bitmap granularity). 3) Block and RAM migration do not share the same socket and thus can more easily be parallelized

Re: [Qemu-devel] question about live migration with storage

2015-01-14 Thread Zhang Haoyu
On 2015-01-14 17:07:08, Paolo Bonzini wrote: On 14/01/2015 08:58, Zhang Haoyu wrote: 2) Finer-grained control of the parameters of block migration (dirty bitmap granularity). 3) Block and RAM migration do not share the same socket and thus can more easily be parallelized. drive_mirror

Re: [Qemu-devel] question about live migration with storage

2015-01-14 Thread Zhang Haoyu
On 2015-01-14 15:42:41, Paolo Bonzini wrote: On 14/01/2015 03:41, Zhang Haoyu wrote: Hi, Paolo, what are the advantages of drive_mirror over the traditional mechanism implemented in block-migration.c? Why does libvirt use drive_mirror instead of the traditional iterative mechanism as the default way

Re: [Qemu-devel] question about live migration with storage

2015-01-13 Thread Zhang Haoyu
On 2015-01-13 17:45:45, Paolo Bonzini wrote: On 13/01/2015 03:03, Zhang Haoyu wrote: I want to live migrate a vm with storage; with regard to the migration of storage, should I use drive_mirror or the traditional mechanism implemented in block-migration.c? Because I don't use

Re: [Qemu-devel] How to clone a running vm?

2015-01-12 Thread Zhang Haoyu
On 2015-01-12 15:50:13, Zhang Haoyu wrote: Hi, I want to clone a running vm without shutting it down; can the method below work? 1) create a snapshot for the vm 2) create a new qcow2 image from the snapshot, but how? 3) use the new qcow2 image as a backing image to clone vms Can drive_mirror clone a running

[Qemu-devel] question about live migration with storage

2015-01-12 Thread Zhang Haoyu
Hi, I want to live migrate a vm with storage; with regard to the migration of storage, should I use drive_mirror or the traditional mechanism implemented in block-migration.c? Any advice? Thanks, Zhang Haoyu

Re: [Qemu-devel] question about live migration with storage

2015-01-12 Thread Zhang Haoyu
On 2015-01-13 09:49:00, Zhang Haoyu wrote: Hi, I want to live migrate a vm with storage; with regard to the migration of storage, should I use drive_mirror or the traditional mechanism implemented in block-migration.c? Because I don't use libvirtd to manage the vm, if I want to use drive_mirror

Re: [Qemu-devel] Does kvm properly support GPT?

2015-01-11 Thread Zhang Haoyu
On 2014-12-22 09:28:52, Paolo Bonzini wrote: On 22/12/2014 07:39, Zhang Haoyu wrote: Hi, When I perform P2V from native servers with win2008 to kvm vm, some cases failed because the physical disk was using GPT for partitioning, and QEMU doesn't support GPT by default. And, I see in below

[Qemu-devel] How to clone a running vm?

2015-01-11 Thread Zhang Haoyu
Hi, I want to clone a running vm without shutting it down; can the method below work? 1) create a snapshot for the vm 2) create a new qcow2 image from the snapshot, but how? 3) use the new qcow2 image as a backing image to clone vms Any ideas? Thanks, Zhang Haoyu

Re: [Qemu-devel] vhost-user: migration?

2015-01-09 Thread Zhang Haoyu
Hi, what's the status of migration support for vhost-user? Thanks, Zhang Haoyu On 2014-06-18 22:07:49, Michael S. Tsirkin wrote: On Wed, Jun 18, 2014 at 04:37:57PM +0300, Nikolay Nikolaev wrote: On Wed, Jun 18, 2014 at 3:35 PM, Michael S

Re: [Qemu-devel] [question] How to get the guest physical memory usage from host?

2014-12-22 Thread Zhang Haoyu
On 2014/12/22 16:41, Andrey Korolyov wrote: On Mon, Dec 22, 2014 at 6:59 AM, Zhang Haoyu zhhy.zhangha...@gmail.com wrote: Hi, How to get the guest physical memory usage from the host? I don't want to introduce a guest-agent to get the info. Thanks, Zhang Haoyu There's probably one

Re: [Qemu-devel] [question] How to get the guest physical memory usage from host?

2014-12-22 Thread Zhang Haoyu
On 2014/12/22 17:16, Andrey Korolyov wrote: On Mon, Dec 22, 2014 at 11:57 AM, Zhang Haoyu zhhy.zhangha...@gmail.com wrote: On 2014/12/22 16:41, Andrey Korolyov wrote: On Mon, Dec 22, 2014 at 6:59 AM, Zhang Haoyu zhhy.zhangha...@gmail.com wrote: Hi, How to get the guest physical memory

Re: [Qemu-devel] Does kvm properly support GPT?

2014-12-22 Thread Zhang Haoyu
On 2014/12/22 17:28, Paolo Bonzini wrote: On 22/12/2014 07:39, Zhang Haoyu wrote: Hi, When I perform P2V from native servers with win2008 to kvm vm, some cases failed because the physical disk was using GPT for partitioning, and QEMU doesn't support GPT by default. And, I see in below

[Qemu-devel] cannot receive qemu-dev/kvm-dev mails sent by myself

2014-12-22 Thread Zhang Haoyu
Hi, I cannot receive qemu-dev/kvm-dev mails sent by myself, but mails from others can be received; any help? Thanks, Zhang Haoyu

Re: [Qemu-devel] Does kvm properly support GPT?

2014-12-22 Thread Zhang Haoyu
On 2014/12/22 17:52, Paolo Bonzini wrote: On 22/12/2014 10:40, Zhang Haoyu wrote: 2) the FAT driver is not free, which prevents distribution in Fedora and several other distributions Sorry, I cannot follow you; does the FAT mentioned above mean the FAT filesystem? What's the relationship

Re: [Qemu-devel] cannot receive qemu-dev/kvm-dev mails sent by myself

2014-12-22 Thread Zhang Haoyu
On 2014/12/22 17:54, Paolo Bonzini wrote: On 22/12/2014 10:48, Zhang Haoyu wrote: Hi, I cannot receive qemu-dev/kvm-dev mails sent by myself, but mails from others can be received; any help? For qemu-devel, you need to configure mailman to send messages even if they are yours

Re: [Qemu-devel] cannot receive qemu-dev/kvm-dev mails sent by myself

2014-12-22 Thread Zhang Haoyu
On 2014/12/22 20:05, Paolo Bonzini wrote: On 22/12/2014 12:40, Zhang Haoyu wrote: On 2014/12/22 17:54, Paolo Bonzini wrote: On 22/12/2014 10:48, Zhang Haoyu wrote: Hi, I cannot receive qemu-dev/kvm-dev mails sent by myself, but mails from others can be received, any help

Re: [Qemu-devel] cannot receive qemu-dev/kvm-dev mails sent by myself

2014-12-22 Thread Zhang Haoyu
On 2014/12/23 9:36, Fam Zheng wrote: On Mon, 12/22 20:21, Zhang Haoyu wrote: On 2014/12/22 20:05, Paolo Bonzini wrote: On 22/12/2014 12:40, Zhang Haoyu wrote: On 2014/12/22 17:54, Paolo Bonzini wrote: On 22/12/2014 10:48, Zhang Haoyu wrote: Hi, I cannot receive qemu-dev/kvm-dev mails

Re: [Qemu-devel] [question] How to get the guest physical memory usage from host?

2014-12-22 Thread Zhang Haoyu
above? Thanks, Zhang Haoyu Generally I meant virDomainMemoryPeek, but nothing prevents you from writing code with the same functionality; if libvirt usage is not preferred, it is only a matter of asking the monitor for chunks of memory and parsing them in a proper way. Thanks, Andrey.

[Qemu-devel] [question] How to get the guest physical memory usage from host?

2014-12-21 Thread Zhang Haoyu
Hi, How to get the guest physical memory usage from the host? I don't want to introduce a guest-agent to get the info. Thanks, Zhang Haoyu

[Qemu-devel] Does kvm properly support GPT?

2014-12-21 Thread Zhang Haoyu
But, it seems that OVMF is not stable enough for kvm. Any advice? Thanks, Zhang Haoyu

Re: [Qemu-devel] [PATCH] support vhost-user socket to reconnect

2014-12-21 Thread Zhang Haoyu
Hi, Kun Is this patch part of a patch series? I don't see any place that references the is_reconnect field. On 2014/12/22 15:06, zhangkun wrote: From: zhangkun zhang.zhang...@huawei.com Signed-off-by: zhangkun zhang.zhang...@huawei.com --- net/vhost-user.c | 10 +- 1 file changed, 9

[Qemu-devel] [question] does kvm fully support vga adapter pass-through?

2014-11-18 Thread Zhang Haoyu
Hi all, Does the combination of qemu-2.0.1 and linux-3.10 fully support directly assigning vga adapters to a vm? Thanks, Zhang Haoyu

Re: [Qemu-devel] Where is the VM live migration code?

2014-11-17 Thread Zhang Haoyu
Hi, I saw this page: http://www.linux-kvm.org/page/Migration. It looks like migration is a feature provided by KVM? But when I look at the Linux kernel source code, i.e., virt/kvm and arch/x86/kvm, I don't see the code for this migration feature. Most of the live migration code is in

[Qemu-devel] [PATCH] qcow2-cache: conditionally call bdrv_flush() in qcow2_cache_flush()

2014-11-06 Thread Zhang Haoyu
There is no need to call bdrv_flush() in qcow2_cache_flush() if no cache entry is dirty. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- block/qcow2-cache.c | 24 +--- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c index
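A standalone model of the optimization: remember whether any entry was actually written back and skip the disk flush otherwise; the types and helpers here are illustrative, not QEMU's Qcow2Cache API:

#include <stdbool.h>
#include <stdio.h>

typedef struct CacheEntry { bool dirty; } CacheEntry;

static int flush_entry(CacheEntry *e) { e->dirty = false; return 0; }
static int disk_flush(void) { puts("disk flush"); return 0; }

static int cache_flush(CacheEntry *entries, int n)
{
    bool any_dirty = false;
    for (int i = 0; i < n; i++) {
        if (entries[i].dirty) {
            any_dirty = true;
            int ret = flush_entry(&entries[i]);
            if (ret < 0) {
                return ret;
            }
        }
    }
    /* only pay for the flush when something was written back */
    return any_dirty ? disk_flush() : 0;
}

int main(void)
{
    CacheEntry cache[4] = { { false }, { true }, { false }, { false } };
    cache_flush(cache, 4);   /* flushes: one entry was dirty */
    cache_flush(cache, 4);   /* no-op: nothing dirty anymore */
    return 0;
}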

Re: [Qemu-devel] [question] updating the base image for all clones which have been running for months

2014-11-06 Thread Zhang Haoyu
the same for your guests. For bare-metal, I use a manager to push the applications to the host-agent running on each host; the host-agent is responsible for installing the applications. Thanks, Zhang Haoyu Install the applications on each clone separately, or use some other method to make

[Qemu-devel] [question] updating the base image for all clones which have been running for months

2014-11-03 Thread Zhang Haoyu
is responsible for installing the applications. Thanks, Zhang Haoyu

Re: [Qemu-devel] [question] updating the base image for all clones which have been running for months

2014-11-03 Thread Zhang Haoyu
separately, or use some other method to make it available (like installing on a shared network resource). Could you give details on installing on a shared network resource? Thanks, Zhang Haoyu Can I rebase image A to B, which has the applications to be installed, then change the base image to B for all

[Qemu-devel] [question] How is the progress of optimizing qcow2_check_metadata_overlap() with regard to cpu overhead?

2014-10-25 Thread Zhang Haoyu
Hi, Max How is the progress of optimizing qcow2_check_metadata_overlap? http://thread.gmane.org/gmane.comp.emulators.kvm.devel/127037/focus=127364 Thanks, Zhang Haoyu

[Qemu-devel] [PATCH v4] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-22 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com Reviewed-by: Max Reitz mre...@redhat.com --- v3 -> v4: - convert the local L1 table to host byte order before copying it back
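A standalone sketch of the approach: byte-swap into a throwaway buffer for the on-disk write, so the cached table is never flipped to big-endian and back in place; to_be64() and the write comment stand in for QEMU's cpu_to_be64() and bdrv_pwrite_sync():

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* portable stand-in for cpu_to_be64() */
static uint64_t to_be64(uint64_t v)
{
    uint8_t b[8];
    for (int i = 7; i >= 0; i--) { b[i] = (uint8_t)v; v >>= 8; }
    uint64_t r;
    memcpy(&r, b, sizeof(r));
    return r;
}

/* write the cached L1 table in big-endian order without touching the cache */
static int write_l1_table(const uint64_t *cached_l1, size_t l1_size)
{
    uint64_t *tmp = malloc(l1_size * sizeof(*tmp));
    if (!tmp) {
        return -1;
    }
    for (size_t i = 0; i < l1_size; i++) {
        tmp[i] = to_be64(cached_l1[i]);   /* convert the copy, not the cache */
    }
    /* in QEMU: bdrv_pwrite_sync(bs->file, l1_offset, tmp, l1_size * 8) */
    free(tmp);
    return 0;
}

int main(void)
{
    uint64_t l1[4] = { 0x10000, 0x20000, 0, 0x30000 };
    return write_l1_table(l1, 4);
}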

Re: [Qemu-devel] [PATCH v4] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-22 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com Reviewed-by: Max Reitz mre...@redhat.com --- v3 -> v4: - convert the local L1 table to host byte order before copy

[Qemu-devel] [PATCH] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-21 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- block/qcow2-refcount.c | 22 +++--- 1 file changed, 7 insertions(+), 15 deletions(-) diff --git

[Qemu-devel] [PATCH bugfix] snapshot: add bdrv_drain_all() to bdrv_snapshot_delete() to avoid concurrency problem

2014-10-21 Thread Zhang Haoyu
Add bdrv_drain_all() to bdrv_snapshot_delete() to avoid this problem. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- block/snapshot.c | 4 1 file changed, 4 insertions(+) diff --git a/block/snapshot.c b/block/snapshot.c index 85c52ff..ebc386a 100644 --- a/block/snapshot.c +++ b/block/snapshot.c @@ -236,6 +236,10

Re: [Qemu-devel] [question] savevm/delvm: Is it necessary to perform bdrv_drain_all before savevm and delvm?

2014-10-21 Thread Zhang Haoyu
it into a coroutine or add a bdrv_drain_all() indeed. I'm inclined to add bdrv_drain_all(), just to keep consistent with the other snapshot-related operations, like savevm, loadvm, internal_snapshot_prepare, etc. Thanks, Zhang Haoyu This also means that we probably need to review all other cases where

Re: [Qemu-devel] [PATCH] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-21 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- block/qcow2-refcount.c | 22 +++--- 1 file changed, 7 insertions(+), 15 deletions

[Qemu-devel] [PATCH v2] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-21 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- v1 -> v2: - remove the superfluous assignment, l1_table = NULL; - replace 512 with BDRV_SECTOR_SIZE

Re: [Qemu-devel] [Qemu-stable] [PATCH v2] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-21 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- v1 -> v2: - remove the superfluous assignment, l1_table = NULL; - replace 512 with BDRV_SECTOR_SIZE

[Qemu-devel] [PATCH v3] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-21 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com Reviewed-by: Max Reitz mre...@redhat.com --- v2 -> v3: - replace g_try_malloc0 with qemu_try_blockalign - copy

Re: [Qemu-devel] [Qemu-trivial] [PATCH v3] snapshot: use local variable to bdrv_pwrite_sync L1 table

2014-10-21 Thread Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table; there is then no need to convert the cached L1 table between big-endian and host byte order. Signed-off-by: Zhang Haoyu zhan...@sangfor.com Reviewed-by: Max Reitz mre...@redhat.com --- v2 -> v3: - replace g_try_malloc0 with qemu_try_blockalign - copy

[Qemu-devel] [question] savevm/delvm: Is it necessary to perform bdrv_drain_all before savevm and delvm?

2014-10-20 Thread Zhang Haoyu
Hi, I noticed that bdrv_drain_all is performed in load_vmstate before bdrv_snapshot_goto, and bdrv_drain_all is performed in qmp_transaction before internal_snapshot_prepare, so is it also necessary to perform bdrv_drain_all in savevm and delvm? Thanks, Zhang Haoyu

Re: [Qemu-devel] [question] savevm/delvm: Is it necessary to perform bdrv_drain_all before savevm and delvm?

2014-10-20 Thread Zhang Haoyu
IOs while deleting a snapshot, is it then possible that there is a concurrency problem between the process of deleting the snapshot and the coroutine of io read/write (bdrv_co_do_rw) invoked by the pending IOs? This coroutine is also in the main thread. Am I missing something? Thanks, Zhang Haoyu Kevin

Re: [Qemu-devel] [question] savevm/delvm: Is it necessary to perform bdrv_drain_all before savevm and delvm?

2014-10-20 Thread Zhang Haoyu
read/write (bdrv_co_do_rw) are performed in the main thread, could BDRVQcowState.lock work? Thanks, Zhang Haoyu This might actually be a valid concern. Kevin

Re: [Qemu-devel] [question] is it possible that big-endian l1 table offset referenced by other I/O while updating l1 table offset in qcow2_update_snapshot_refcount?

2014-10-13 Thread Zhang Haoyu
to perform I/O operation. Thanks, Zhang Haoyu But I find it rather ugly to convert the cached L1 table to big endian, so I'd be fine with the patch you proposed. Max

Re: [Qemu-devel] [question] is it possible that big-endian l1 table offset referenced by other I/O while updating l1 table offset in qcow2_update_snapshot_refcount?

2014-10-13 Thread Zhang Haoyu
? Thanks, Zhang Haoyu Max |-- bdrv_pwrite |--- bdrv_pwritev | bdrv_prwv_co |- aio_poll(aio_context) <== this aio_context is qemu_aio_context |-- aio_dispatch |--- bdrv_co_io_em_complete | qemu_coroutine_enter(co->coroutine, NULL); <== coroutine entry is bdrv_co_do_rw

Re: [Qemu-devel] [question] is it possible that big-endian l1 table offset referenced by other I/O while updating l1 table offset in qcow2_update_snapshot_refcount?

2014-10-13 Thread Zhang Haoyu
, not the other thread; both bdrv_co_do_rw and qcow2_update_snapshot_refcount are performed in the same thread (main thread), so how does BDRVQcowState.lock avoid the reentrance? Thanks, Zhang Haoyu Max Thanks, Zhang Haoyu Max |-- bdrv_pwrite |--- bdrv_pwritev | bdrv_prwv_co |- aio_poll

Re: [Qemu-devel] [PATCH] qcow2: fix double-free of Qcow2DiscardRegion in qcow2_process_discards

2014-10-12 Thread Zhang Haoyu
On 2014-10-12 15:34, Kevin Wolf wrote: On 11.10.2014 09:14, Zhang Haoyu wrote: In qcow2_update_snapshot_refcount -> qcow2_process_discards() -> bdrv_discard() may free the Qcow2DiscardRegion which is referenced by the next pointer in qcow2_process_discards(); now, in the next iteration, d

Re: [Qemu-devel] [question] is it possible that big-endian l1 table offset referenced by other I/O while updating l1 table offset in qcow2_update_snapshot_refcount?

2014-10-12 Thread Zhang Haoyu
. l1_table is not necessarily a local variable in qcow2_update_snapshot_refcount; it depends on the condition if (l1_table_offset != s->l1_table_offset); if the condition is not true, l1_table = s->l1_table. Thanks, Zhang Haoyu Max

[Qemu-devel] [PATCH] qcow2: fix double-free of Qcow2DiscardRegion in qcow2_process_discards

2014-10-11 Thread Zhang Haoyu
|-- qcow2_free_any_clusters |--- qcow2_free_clusters | update_refcount |- qcow2_process_discards |-- g_free(d) <== In the next iteration, this Qcow2DiscardRegion will be double-freed. Signed-off-by: Zhang Haoyu zhan...@sangfor.com Signed-off-by: Fu Xuewei f...@sangfor.com
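A standalone model of the unsafe iteration and one robust shape for a fix: detach each element before processing it, so nested code that edits the queue can never leave the iterator pointing at freed memory; plain pointers here stand in for QEMU's QTAILQ:

#include <stdlib.h>

typedef struct Region {
    struct Region *next;
} Region;

/* pop-before-process: d is off the list before any nested code
 * (e.g. a discard triggering further refcount updates) can free
 * or unlink anything, so each node is freed exactly once */
static void process_discards(Region **head)
{
    while (*head) {
        Region *d = *head;
        *head = d->next;      /* detach first */
        /* ... issue the discard for d here ... */
        free(d);
    }
}

int main(void)
{
    Region *head = NULL;
    for (int i = 0; i < 3; i++) {
        Region *r = malloc(sizeof(*r));
        r->next = head;
        head = r;
    }
    process_discards(&head);
    return 0;
}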

[Qemu-devel] [PATCH] qcow2: fix leak of Qcow2DiscardRegion in update_refcount_discard

2014-10-11 Thread Zhang Haoyu
When the Qcow2DiscardRegion is adjacent to another one referenced by d, free the Qcow2DiscardRegion metadata referenced by p after it is removed from the s->discards queue. Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- block/qcow2-refcount.c | 1 + 1 file changed, 1 insertion(+) diff --git
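A standalone model of the leak: when a new region absorbs an adjacent one, the absorbed node is unlinked but never released, and freeing it right after the unlink is the one-liner the patch adds; the list layout and names are illustrative:

#include <stdio.h>
#include <stdlib.h>

typedef struct Region {
    unsigned long offset, bytes;
    struct Region *next;
} Region;

/* fold any queued region that ends exactly where d begins into d */
static void queue_discard(Region **list, Region *d)
{
    Region **pp = list;
    while (*pp) {
        Region *p = *pp;
        if (p->offset + p->bytes == d->offset) {
            d->offset = p->offset;
            d->bytes += p->bytes;
            *pp = p->next;    /* unlink the absorbed region ... */
            free(p);          /* ... and free it: the missing line */
            continue;         /* pp already points at the successor */
        }
        pp = &p->next;
    }
    d->next = *list;
    *list = d;
}

int main(void)
{
    Region *a = malloc(sizeof(*a));
    Region *b = malloc(sizeof(*b));
    Region *list = NULL;

    *a = (Region){ .offset = 0, .bytes = 512 };
    *b = (Region){ .offset = 512, .bytes = 512 };
    queue_discard(&list, a);
    queue_discard(&list, b);                         /* absorbs and frees a */
    printf("%lu+%lu\n", list->offset, list->bytes);  /* 0+1024 */
    return 0;
}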

[Qemu-devel] [question] Is there a plan to introduce a unified co-scheduling mechanism to CFS ?

2014-10-10 Thread Zhang Haoyu
/task is running in the guest. Is there a plan for this work? Thanks, Zhang Haoyu

Re: [Qemu-devel] [question] Is there a plan to introduce a unified co-scheduling mechanism to CFS ?

2014-10-10 Thread Zhang Haoyu
thought. Regards, Wanpeng Li Thanks, Zhang Haoyu

[Qemu-devel] [question] is it possible that big-endian l1 table offset referenced by other I/O while updating l1 table offset in qcow2_update_snapshot_refcount?

2014-10-09 Thread Zhang Haoyu
offset (a very large value), so the file is truncated to a very large size. Any ideas? Thanks, Zhang Haoyu

Re: [Qemu-devel] [question] is it possible that big-endian l1 table offset referenced by other I/O while updating l1 table offset in qcow2_update_snapshot_refcount?

2014-10-09 Thread Zhang Haoyu
, l1_size2); free(tmp_l1_table); } Thanks, Zhang Haoyu

[Qemu-devel] [PATCH bugfix v2] snapshot: fix referencing wrong variable in while loop in do_delvm

2014-09-29 Thread Zhang Haoyu
The while loop variable is bs1, but bs is always passed to bdrv_snapshot_delete_by_id_or_name. Broken in commit a89d89d, v1.7.0. v1 -> v2: * add broken commit id to commit message Signed-off-by: Zhang Haoyu zhan...@sangfor.com Reviewed-by: Markus Armbruster arm...@redhat.com --- savevm.c | 11

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-22 Thread Zhang Haoyu
-serial in the guest, and the difference of perf top data on the guest when virtio-serial is disabled/enabled in the guest, any ideas? Thanks, Zhang Haoyu If you restrict the number of vectors the virtio-serial device gets (using the -device virtio-serial-pci,vectors= param), does that make things

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-16 Thread Zhang Haoyu
. Emulating a virtio-balloon device instead of a virtio-serial device, then seeing whether the virtio-blk performance is hampered. Based on the test result, corresponding analysis will be performed. Any ideas? Thanks, Zhang Haoyu

Re: [Qemu-devel] [PATCH] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-11 Thread Zhang Haoyu
Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- include/trace/events/kvm.h | 20 ++ virt/kvm/ioapic.c | 51 -- virt/kvm/ioapic.h | 6 ++ 3 files changed, 75 insertions(+), 2 deletions(-) diff --git

[Qemu-devel] [PATCH] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-11 Thread Zhang Haoyu
in case it may register one very soon, and for a guest that has a bad irq detection routine (such as note_interrupt() in linux), this bad irq would be recognized soon, as in the past. Cc: Michael S. Tsirkin m...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com Signed-off-by: Zhang Haoyu zhan
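A standalone model of the throttling idea, using a per-pin counter as proposed later in the thread; the threshold and names are illustrative, and in the real patch the postponed re-delivery is a kernel delayed_work rather than a boolean:

#include <stdbool.h>
#include <stdio.h>

#define IOAPIC_NUM_PINS            24
#define SUCCESSIVE_IRQ_MAX_COUNT   10   /* illustrative threshold */

static unsigned irq_eoi[IOAPIC_NUM_PINS];

/* called on EOI broadcast: returns true when the next delivery should
 * be postponed instead of re-raised immediately */
static bool eoi_should_delay(int pin, bool irr_still_set)
{
    if (!irr_still_set) {
        irq_eoi[pin] = 0;       /* normal case: line deasserted, reset */
        return false;
    }
    if (++irq_eoi[pin] >= SUCCESSIVE_IRQ_MAX_COUNT) {
        irq_eoi[pin] = 0;
        return true;            /* looks like a storm: delay re-delivery */
    }
    return false;               /* deliver immediately, as before */
}

int main(void)
{
    /* a stuck irr bit: the e1000's line stays asserted across every EOI */
    for (int i = 1; i <= 12; i++) {
        if (eoi_should_delay(10, true)) {
            printf("EOI %d on pin 10: delaying re-delivery\n", i);
        }
    }
    return 0;
}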

Re: [Qemu-devel] [PATCH] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-11 Thread Zhang Haoyu
as in the past. Cc: Michael S. Tsirkin m...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- include/trace/events/kvm.h | 20 +++ virt/kvm/ioapic.c | 50 -- virt/kvm

[Qemu-devel] [PATCH v2] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-11 Thread Zhang Haoyu
->irq_eoi[i] == IOAPIC_SUCCESSIVE_IRQ_MAX_COUNT) { Cc: Michael S. Tsirkin m...@redhat.com Cc: Jan Kiszka jan.kis...@siemens.com Signed-off-by: Jason Wang jasow...@redhat.com Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- include/trace/events/kvm.h | 20 +++ virt/kvm/ioapic.c

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-11 Thread Zhang Haoyu
: virtio-pci Kernel modules: virtio_pci Thanks, Zhang Haoyu

[Qemu-devel] [PATCH] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-10 Thread Zhang Haoyu
in case it may register one very soon, and for a guest that has a bad irq detection routine (such as note_interrupt() in linux), this bad irq would be recognized soon, as in the past. Cc: Michael S. Tsirkin m...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com Signed-off-by: Zhang Haoyu zhan

Re: [Qemu-devel] [PATCH] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-10 Thread Zhang Haoyu
Signed-off-by: Jason Wang jasow...@redhat.com Signed-off-by: Zhang Haoyu zhan...@sangfor.com --- include/trace/events/kvm.h | 20 ++ virt/kvm/ioapic.c | 51 -- virt/kvm/ioapic.h | 6 ++ 3 files changed, 75 insertions

[Qemu-devel] [PATCH] kvm: ioapic: conditionally delay irq delivery during eoi broadcast

2014-09-10 Thread Zhang Haoyu
in case it may register one very soon, and for a guest that has a bad irq detection routine (such as note_interrupt() in linux), this bad irq would be recognized soon, as in the past. Cc: Michael S. Tsirkin m...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com Signed-off-by: Zhang Haoyu zhan

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-07 Thread Zhang Haoyu
Hi, Paolo, Amit, any ideas? Thanks, Zhang Haoyu On 2014-9-4 15:56, Zhang Haoyu wrote: If virtio-blk and virtio-serial share an IRQ, the guest operating system has to check each virtqueue for activity. Maybe there is some inefficiency doing that. AFAIK virtio-serial registers 64 virtqueues

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-04 Thread Zhang Haoyu
[disabled] [size=256K] Capabilities: [40] MSI-X: Enable+ Count=3 Masked- Vector table: BAR=1 offset= PBA: BAR=1 offset=0800 Kernel driver in use: virtio-pci Kernel modules: virtio_pci Thanks, Zhang Haoyu Paolo

[Qemu-devel] [question] git clone kvm.git failed

2014-09-04 Thread Zhang Haoyu
/unaligned': File too large How to resolve these errors? Thanks, Zhang Haoyu

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-03 Thread Zhang Haoyu
. with virtio-serial enabled: 64k-write-sequence: 4200 IOPS with virtio-serial disabled: 64k-write-sequence: 5300 IOPS How to confirm whether it's MSI in windows? Thanks, Zhang Haoyu So, I think it has nothing to do with legacy interrupt mode, right? I am going to observe the difference of perf top data

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-09-03 Thread Zhang Haoyu
->irq_eoi will reach 100. I want to add u32 irq_eoi[IOAPIC_NUM_PINS]; instead of u32 irq_eoi;. Any ideas? Zhang Haoyu I'm a bit concerned how this will affect realtime guests. Worth adding a flag to enable this, so that e.g. virtio is not affected? Your concern is reasonable. If applying

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Zhang Haoyu
/enable virtio-serial in guest, and the difference of perf top data on the guest when virtio-serial is disabled/enabled in the guest, any ideas? Thanks, Zhang Haoyu If you restrict the number of vectors the virtio-serial device gets (using the -device virtio-serial-pci,vectors= param), does that make things

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Zhang Haoyu
mode, right? I am going to observe the difference of perf top data on qemu and perf kvm stat data when virtio-serial is disabled/enabled in the guest, and the difference of perf top data on the guest when virtio-serial is disabled/enabled in the guest, any ideas? Thanks, Zhang Haoyu If you restrict the number

Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Zhang Haoyu
windows driver specific, too. I have not tested a linux guest; I'll test it later. Thanks, Zhang Haoyu Amit

[Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-08-29 Thread Zhang Haoyu
,base=localtime -global kvm-pit.lost_tick_policy=discard -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 Any ideas? Thanks, Zhang Haoyu

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-28 Thread Zhang Haoyu
continually, and before too long, ioapic->irq_eoi will reach 100. I want to add u32 irq_eoi[IOAPIC_NUM_PINS]; instead of u32 irq_eoi;. Any ideas? Zhang Haoyu @@ -375,12 +414,14 @@ void kvm_ioapic_reset(struct kvm_ioapic *ioapic) { int i; + cancel_delayed_work_sync(ioapic

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-28 Thread Zhang Haoyu
Hi, Yang, Gleb, Michael, Could you help review the patch below, please? Thanks, Zhang Haoyu Hi Jason, I tested the patch below; it's okay, the e1000 interrupt storm disappeared. But I am going to make a small change to it, could you help review it? Currently, we call ioapic_service() immediately when

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-27 Thread Zhang Haoyu
. Under what cases did you meet this issue? Some scenarios, not constant and not 100% reproducible, e.g., rebooting the vm, ifdown on the e1000 nic, installing kaspersky (network configuration is performed during the install stage), etc. Thanks, Zhang Haoyu

Re: [Qemu-devel] [PATCH v6 0/3] linux-aio: introduce submit I/O as a batch

2014-08-26 Thread Zhang Haoyu
, if not, then read the data from the disk or host page cache. Any ideas? Thanks, Zhang Haoyu In Patch 2 we should complete requests with -EIO if io_submit() returned 0 <= ret < len. I fixed this up when applying because the patch was completing with a bogus ret value. Stefan

Re: [Qemu-devel] [PATCH v6 0/3] linux-aio: introduce submit I/O as a batch

2014-08-26 Thread Zhang Haoyu
directly, bypassing the host page cache. IO merging can also be performed in the queue. Any ideas? Thanks, Zhang Haoyu In Patch 2 we should complete requests with -EIO if io_submit() returned 0 <= ret < len. I fixed this up when applying because the patch was completing with a bogus ret value. Stefan
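A standalone sketch of the partial-submission rule Stefan describes: io_submit() may queue only the first ret of len requests, and the tail that never reached the kernel must be completed with -EIO by hand; the submit and completion calls below are stubs, not libaio:

#include <errno.h>
#include <stdio.h>

typedef struct Req { int id; } Req;

/* stub: pretend the kernel accepted only half the batch */
static int stub_io_submit(Req **reqs, int len)
{
    (void)reqs;
    return len / 2;
}

static void complete_request(Req *r, int ret)
{
    printf("req %d completed with %d\n", r->id, ret);
}

static int submit_batch(Req **reqs, int len)
{
    int ret = stub_io_submit(reqs, len);
    if (ret < 0) {
        return ret;             /* nothing queued: caller fails the whole batch */
    }
    for (int i = ret; i < len; i++) {
        complete_request(reqs[i], -EIO);   /* tail never reached the kernel */
    }
    return ret;
}

int main(void)
{
    Req r[4] = { {0}, {1}, {2}, {3} };
    Req *batch[4] = { &r[0], &r[1], &r[2], &r[3] };
    submit_batch(batch, 4);     /* reqs 2 and 3 are failed with -EIO */
    return 0;
}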

Re: [Qemu-devel] [PATCH v6 0/3] linux-aio: introduce submit I/O as a batch

2014-08-26 Thread Zhang Haoyu
, Zhang Haoyu Fam
