Re: [RFC PATCH 03/21] contrib/gitdm: Add Baidu to the domain map

2020-10-05 Thread Chai,Wen
Reviewed-by: chai...@baidu.com


Thanks
Chai Wen

On 2020/10/5 2:04 AM, "Philippe Mathieu-Daudé" wrote:

>There are a number of contributors from this domain;
>add its own entry to the gitdm domain map.
>
>Cc: Jia Lina 
>Cc: Li Hangjing 
>Cc: Xie Yongji 
>Cc: Chai Wen 
>Cc: Ni Xun 
>Cc: Zhang Yu 
>Signed-off-by: Philippe Mathieu-Daudé 
>---
>One Reviewed-by/Ack-by from someone from this domain
>should be sufficient to get this patch merged.
>---
>contrib/gitdm/domain-map | 1 +
>1 file changed, 1 insertion(+)
>
>diff --git a/contrib/gitdm/domain-map b/contrib/gitdm/domain-map
>index 2956f9e4de..0eaa4ec051 100644
>--- a/contrib/gitdm/domain-map
>+++ b/contrib/gitdm/domain-map
>@@ -8,6 +8,7 @@ linux.alibaba.com Alibaba
>amazon.com  Amazon
>amazon.de   Amazon
>amd.com AMD
>+baidu.com   Baidu
>cmss.chinamobile.com China Mobile
>citrix.com  Citrix
>greensocs.com   GreenSocs
>-- 
>2.26.2
>
>



Re: [Qemu-devel] [PATCH] migration: avoid segfault when taking a snapshot of a VM which is being migrated

2018-10-14 Thread Chai,Wen


On 2018/10/12 5:01 PM, "Dr. David Alan Gilbert" wrote:

>* jialina01 (jialin...@baidu.com) wrote:
>> During an active background migration, taking a snapshot will trigger a
>> segmentation fault. The snapshot path clears the "current_migration"
>> struct and updates "to_dst_file" before it finds out that a migration
>> task is already running, so the migration code then accesses a null
>> pointer in the "current_migration" struct and qemu eventually crashes.
>> 
>> Signed-off-by: jialina01 
>> Signed-off-by: chaiwen 
>> Signed-off-by: zhangyu 
>
>Yes, that looks like an improvement, but is it enough?

Indeed it's not enough; this patch fails to handle the case of taking
a snapshot during an active non-block migration.

>With that change does qemu_savevm_state fail cleanly if attempted
>during a migration?  I suspect it will still try and do a snapshot,
>because I don't see where it's checking the current state
>(like the check in migrate_prepare that does a QERR_MIGRATION_ACTIVE).

Yes, migration_is_setup_or_active looks like a more sensible check for
this case, and a proper error message should be set as well.
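Roughly, the extra check we have in mind for v2 is sketched below (untested;
the exact helper and error message are assumptions based on this discussion
and may change):

    /* At the top of qemu_savevm_state(), before any migration state is
     * touched: refuse to snapshot while an outgoing migration is set up
     * or active, mirroring the check migrate_prepare() already does. */
    MigrationState *ms = migrate_get_current();

    if (migration_is_setup_or_active(ms->state)) {
        error_setg(errp, "Cannot snapshot while a migration is in progress");
        return -EINVAL;
    }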
Thanks for your comment, we will send a v2 patch later.

Thanks
Chai Wen

>
>Dave
>
>
>> ---
>>  migration/savevm.c | 14 +-
>>  1 file changed, 5 insertions(+), 9 deletions(-)
>> 
>> diff --git a/migration/savevm.c b/migration/savevm.c
>> index 2d10e45582..9cb97ca343 100644
>> --- a/migration/savevm.c
>> +++ b/migration/savevm.c
>> @@ -1319,21 +1319,18 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
>>  MigrationState *ms = migrate_get_current();
>>  MigrationStatus status;
>>  
>> -migrate_init(ms);
>> -
>> -ms->to_dst_file = f;
>> -
>>  if (migration_is_blocked(errp)) {
>> -ret = -EINVAL;
>> -goto done;
>> +return -EINVAL;
>>  }
>>  
>>  if (migrate_use_block()) {
>>  error_setg(errp, "Block migration and snapshots are incompatible");
>> -ret = -EINVAL;
>> -goto done;
>> +return -EINVAL;
>>  }
>>  
>> +migrate_init(ms);
>> +ms->to_dst_file = f;
>> +
>>  qemu_mutex_unlock_iothread();
>>  qemu_savevm_state_header(f);
>>  qemu_savevm_state_setup(f);
>> @@ -1355,7 +1352,6 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
>>  error_setg_errno(errp, -ret, "Error while writing VM state");
>>  }
>>  
>> -done:
>>  if (ret != 0) {
>>  status = MIGRATION_STATUS_FAILED;
>>  } else {
>> -- 
>> 2.13.2.windows.1
>> 
>--
>Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



[Qemu-devel] [Bug] block: virtio-blk-dataplane causes segfault

2014-07-01 Thread Chai Wen
Hi

I tested the virtio-dataplane feature on the latest QEMU v2.1 release
candidate, and it seems there is a small bug in it.
Please see the details below; thanks.

git: git.qemu.org/qemu.git
branch: master
top commit:
commit 92259b7f434b382fc865d1f65f7d5adeed295749
Author: Peter Maydell peter.mayd...@linaro.org
Date:   Tue Jul 1 18:48:01 2014 +0100

Update version for v2.1.0-rc0 release


reproduce steps:
./qemu-system-x86_64 -enable-kvm -smp 2 -m 1024 \
    -drive file=/var/vm_images/images/6G_smallsuse.img,if=none,format=raw,cache=none,id=image1 \
    -device virtio-blk-pci,drive=image1,scsi=off,x-data-plane=on -net none
Segmentation fault (core dumped)


And I think there is something wrong in the completion callback chain for
dataplane.
static void complete_request_vring(VirtIOBlockReq *req, unsigned char status)
{
    stb_p(&req->in->status, status);

    vring_push(&req->dev->dataplane->vring, req->elem,
               req->qiov.size + sizeof(*req->in));
    notify_guest(req->dev->dataplane);
    g_slice_free(VirtIOBlockReq, req);             <--- req is freed here
}

static void virtio_blk_rw_complete(void *opaque, int ret)
{
    VirtIOBlockReq *req = opaque;

    trace_virtio_blk_rw_complete(req, ret);

    if (ret) {
        int p = virtio_ldl_p(VIRTIO_DEVICE(req->dev), &req->out.type);
        bool is_read = !(p & VIRTIO_BLK_T_OUT);
        if (virtio_blk_handle_rw_error(req, -ret, is_read))
            return;
    }

    virtio_blk_req_complete(req, VIRTIO_BLK_S_OK); <--- for dataplane this calls complete_request_vring()
    bdrv_acct_done(req->dev->bs, &req->acct);      <--- req has already been freed: use-after-free
    virtio_blk_free_request(req);
}
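Below is a minimal, self-contained C illustration of the pattern I suspect
(plain C, not QEMU code; all names here are made up): the completion hook
frees the request, and the caller then touches it again.

#include <stdio.h>
#include <stdlib.h>

typedef struct Req {
    unsigned char status;
    long acct;
} Req;

static void complete_hook(Req *req, unsigned char status)
{
    req->status = status;
    free(req);                       /* request freed inside the hook */
}

static void rw_complete(Req *req)
{
    complete_hook(req, 0);
    printf("acct=%ld\n", req->acct); /* req already freed: use-after-free */
}

int main(void)
{
    Req *req = malloc(sizeof(*req));
    req->acct = 42;
    rw_complete(req);                /* undefined behaviour, may segfault */
    return 0;
}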




(gdb) bt
#0  0x5588236f in bdrv_acct_done (bs=0x48004800480048, cookie=0x563016e8) at block.c:5478
#1  0x5564035b in virtio_blk_rw_complete (opaque=0x563016a0, ret=0) at /home/git_dev/qemu_115/qemu/hw/block/virtio-blk.c:99
#2  0x55883d22 in bdrv_co_em_bh (opaque=<value optimized out>) at block.c:4665
#3  0x5587c9f7 in aio_bh_poll (ctx=0x562267f0) at async.c:81
#4  0x5588dc17 in aio_poll (ctx=0x562267f0, blocking=true) at aio-posix.c:188
#5  0x556fa357 in iothread_run (opaque=0x56227268) at iothread.c:41
#6  0x76bb9851 in start_thread () from /lib64/libpthread.so.0
#7  0x71ac890d in clone () from /lib64/libc.so.6
(gdb) info thread
  7 Thread 0x7fff9bfff700 (LWP 25553)  0x76bbf811 in sem_timedwait () from /lib64/libpthread.so.0
  6 Thread 0x7fffe61fd700 (LWP 25552)  0x76bbd43c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
* 5 Thread 0x7fffe69fe700 (LWP 25551)  0x5588236f in bdrv_acct_done (bs=0x48004800480048, cookie=0x563016e8) at block.c:5478
  4 Thread 0x7fffe71ff700 (LWP 25550)  0x76bbf811 in sem_timedwait () from /lib64/libpthread.so.0
  3 Thread 0x7fffec9d8700 (LWP 25548)  0x71ac0a47 in ioctl () from /lib64/libc.so.6
  2 Thread 0x7fffed1d9700 (LWP 25547)  0x71ac0a47 in ioctl () from /lib64/libc.so.6
  1 Thread 0x77fc09a0 (LWP 25544)  0x71abf487 in ppoll () from /lib64/libc.so.6


-- 
Regards

Chai Wen



Re: [Qemu-devel] [Bug] block: virtio-blk-dataplane causes segfault

2014-07-01 Thread Chai Wen
On 07/02/2014 11:24 AM, Chai Wen wrote:

 Hi
 
 I tested the virtio-dataplane feature on the latest QEMU v2.1 release
 candidate, and it seems there is a small bug in it.
 Please see the details below; thanks.
 


Oh, I just saw Stefan's fix series. Please ignore this report.

Stefan Hajnoczi (4):
  virtio-blk: avoid dataplane VirtIOBlockReq early free
  dataplane: do not free VirtQueueElement in vring_push()
  virtio-blk: avoid g_slice_new0() for VirtIOBlockReq and
VirtQueueElement
  virtio-blk: embed VirtQueueElement in VirtIOBlockReq


thanks
chai wen



-- 
Regards

Chai Wen



Re: [Qemu-devel] [PATCH resend] block: fix wrong order in live block migration setup

2014-06-04 Thread Chai Wen
On 06/04/2014 05:23 PM, Stefan Hajnoczi wrote:

 On Wed, Jun 04, 2014 at 11:47:37AM +0800, chai wen wrote:

 The function init_blk_migration should be called before set_dirty_tracking,
 for the reasons below.

 If we want to track dirty blocks via dirty bitmaps on a BlockDriverState
 when doing live block migration, its corresponding 'BlkMigDevState' must be
 added to block_mig_state.bmds_list first for subsequent processing.
 Otherwise set_dirty_tracking will do nothing on an empty list instead of
 allocating dirty bitmaps for the devices, and bdrv_get_dirty_count will then
 access bmds->dirty_bitmap directly, triggering a segfault.

 If set_dirty_tracking fails, qemu_savevm_state_cancel will handle the
 cleanup of init_blk_migration automatically.


 Reviewed-by: Fam Zheng f...@redhat.com
 Signed-off-by: chai wen chaiw.f...@cn.fujitsu.com
 ---
  block-migration.c |3 +--
  1 files changed, 1 insertions(+), 2 deletions(-)
 
 Thanks, applied to my block tree:
 https://github.com/stefanha/qemu/commits/block
 


OK, thanks for taking care of this fix. :)

thanks
chai wen

 Stefan
 .
 



-- 
Regards

Chai Wen



[Qemu-devel] [PATCH resend] block: fix wrong order in live block migration setup

2014-06-03 Thread chai wen

The function init_blk_migration should be called before set_dirty_tracking,
for the reasons below.

If we want to track dirty blocks via dirty bitmaps on a BlockDriverState
when doing live block migration, its corresponding 'BlkMigDevState' must be
added to block_mig_state.bmds_list first for subsequent processing.
Otherwise set_dirty_tracking will do nothing on an empty list instead of
allocating dirty bitmaps for the devices, and bdrv_get_dirty_count will then
access bmds->dirty_bitmap directly, triggering a segfault.

If set_dirty_tracking fails, qemu_savevm_state_cancel will handle the
cleanup of init_blk_migration automatically.


Reviewed-by: Fam Zheng f...@redhat.com
Signed-off-by: chai wen chaiw.f...@cn.fujitsu.com
---
 block-migration.c |3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/block-migration.c b/block-migration.c
index 1656270..25a0388 100644
--- a/block-migration.c
+++ b/block-migration.c
@@ -629,6 +629,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
 block_mig_state.submitted, block_mig_state.transferred);
 
 qemu_mutex_lock_iothread();
+init_blk_migration(f);
 
 /* start track dirty blocks */
 ret = set_dirty_tracking();
@@ -638,8 +639,6 @@ static int block_save_setup(QEMUFile *f, void *opaque)
 return ret;
 }
 
-init_blk_migration(f);
-
 qemu_mutex_unlock_iothread();
 
 ret = flush_blks(f);
-- 
1.7.1




[Qemu-devel] [Bug Report] snapshot under a background migration

2014-05-28 Thread Chai Wen

Hi,

There is an issue where taking a snapshot during a background migration can
cause a segfault.

Steps to reproduce:
1. dirty plenty of pages in the 1st guest
2. run 'migrate -d tcp:***:***' in the 1st guest's monitor to migrate it to
   the 2nd guest in the background
3. run 'savevm' in the 1st monitor
(host test env
arch: x86_64 i3-2120 CPU
qemu: master on git://git.qemu.org/qemu.git
kernel: 3.0.76)
The corresponding stack is below. It looks like some memory is being wrongly
re-accessed.

But I am not sure whether this should be treated as a functional restriction
of migration/savevm rather than a bug (that is, we simply should not take a
snapshot while a migration is in progress). Even if it is a restriction,
shouldn't qemu detect and reject this illegal operation?

And this issue is also found in stable-1.5, stable-1.6.


=
...
(qemu) migrate -d tcp:0:
(qemu) [New Thread 0x75055700 (LWP 31620)]

(qemu) savevm

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x75055700 (LWP 31620)]
find_next_bit (addr=<optimized out>, size=1048576, offset=0) at util/bitops.c:47
47  tmp = *p;
(gdb) bt
#0  find_next_bit (addr=<optimized out>, size=1048576, offset=0) at util/bitops.c:47
#1  0x557aa09a in migration_bitmap_find_and_reset_dirty (start=<optimized out>, mr=<optimized out>) at /home/chaiwen/upstream-qemu/qemu/arch_init.c:427
#2  ram_find_and_save_block (f=0x563729b0, last_stage=false) at /home/chaiwen/upstream-qemu/qemu/arch_init.c:656
#3  0x557aa5c1 in ram_save_iterate (f=0x563729b0, opaque=<optimized out>) at /home/chaiwen/upstream-qemu/qemu/arch_init.c:870
#4  0x55827b76 in qemu_savevm_state_iterate (f=0x563729b0) at /home/chaiwen/upstream-qemu/qemu/savevm.c:541
#5  0x5572692e in migration_thread (opaque=0x55caa920 <current_migration.29169>) at migration.c:602
#6  0x75a177b6 in start_thread () from /lib64/libpthread.so.0
#7  0x75772d6d in clone () from /lib64/libc.so.6
#8  0x in ?? ()
(gdb) info thread

(gdb) thread 1
[Switching to thread 1 (Thread 0x77fbb8e0 (LWP 30987))]
#0  steal_time_msr_needed (opaque=0x563151a0) at /home/chaiwen/upstream-qemu/qemu/target-i386/machine.c:348
348 {
(gdb) bt
#0  steal_time_msr_needed (opaque=0x563151a0) at /home/chaiwen/upstream-qemu/qemu/target-i386/machine.c:348
#1  0x557a7f3e in vmstate_subsection_save (opaque=<optimized out>, vmsd=<optimized out>, f=<optimized out>) at vmstate.c:221
#2  vmstate_save_state (f=0x56418000, vmsd=0x55c6d9e0 <vmstate_x86_cpu>, opaque=0x563151a0) at vmstate.c:159
#3  0x55827a1a in vmstate_save (se=<optimized out>, f=<optimized out>) at /home/chaiwen/upstream-qemu/qemu/savevm.c:447
#4  qemu_savevm_state_complete (f=0x56418000) at /home/chaiwen/upstream-qemu/qemu/savevm.c:608
#5  0x55827fae in qemu_savevm_state (f=<optimized out>) at /home/chaiwen/upstream-qemu/qemu/savevm.c:671
#6  do_savevm (mon=0x561a7980, qdict=<optimized out>) at /home/chaiwen/upstream-qemu/qemu/savevm.c:976
#7  0x55824449 in handle_user_command (mon=0x561a7980, cmdline=<optimized out>) at /home/chaiwen/upstream-qemu/qemu/monitor.c:4159
#8  0x5582476b in monitor_command_cb (opaque=0x561a7980, cmdline=0x56418a1e \001, readline_opaque=0x0) at /home/chaiwen/upstream-qemu/qemu/monitor.c:5021
#9  0x558d07c9 in readline_handle_byte (rs=0x5630f410, ch=<optimized out>) at util/readline.c:376
#10 0x55824519 in monitor_read (opaque=<optimized out>, buf=0x7fffcc60 \r, size=1) at /home/chaiwen/upstream-qemu/qemu/monitor.c:5004
#11 0x55755f2b in fd_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x5619a020) at qemu-char.c:848
#12 0x7732d60a in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#13 0x55725732 in glib_pollfds_poll () at main-loop.c:190
#14 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:235
#15 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:484
#16 0x557a5de5 in main_loop () at vl.c:2077
#17 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4561



-- 
Regards

Chai Wen



Re: [Qemu-devel] [PATCH] [qemu-devel] fix wrong order when doing live block migration setup

2014-05-28 Thread Chai Wen

Hi Kevin & Stefan,

How about this fix?




-- 
Regards

Chai Wen



[Qemu-devel] [PATCH] [qemu-devel] fix wrong order when doing live block migration setup

2014-05-27 Thread chai wen
If we want to track dirty blocks using dirty bitmaps on a BlockDriverState
when doing live block migration, its corresponding 'BlkMigDevState' should be
added to block_mig_state.bmds_list first for subsequent processing.
Otherwise set_dirty_tracking will do nothing on an empty list instead of
allocating dirty bitmaps for the devices.

What's worse, bdrv_get_dirty_count will then access bmds->dirty_bitmap
directly, which can trigger a segfault for the reasons above.

Signed-off-by: chai wen chaiw.f...@cn.fujitsu.com
---
 block-migration.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/block-migration.c b/block-migration.c
index 56951e0..43203aa 100644
--- a/block-migration.c
+++ b/block-migration.c
@@ -626,6 +626,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
 block_mig_state.submitted, block_mig_state.transferred);
 
 qemu_mutex_lock_iothread();
+init_blk_migration(f);
 
 /* start track dirty blocks */
 ret = set_dirty_tracking();
@@ -635,7 +636,6 @@ static int block_save_setup(QEMUFile *f, void *opaque)
 return ret;
 }
 
-init_blk_migration(f);
 
 qemu_mutex_unlock_iothread();
 
-- 
1.7.1




Re: [Qemu-devel] [PATCH] [qemu-devel] fix wrong order when doing live block migration setup

2014-05-27 Thread Chai Wen
Hi,

Sorry for forgetting to cc the maintainers.
I hit this issue while doing a live migration test; simple steps to reproduce
are:
master: qemu -enable-kvm -smp 1 -m 512 -drive file=/data1/src.img,if=virtio \
        -net none -monitor stdio -vnc 0:2
slave:  qemu -enable-kvm -smp 1 -m 512 -drive file=/data2/dest.img,if=virtio \
        -net none -monitor stdio -vnc 0:3 -incoming tcp:127.0.0.1:

When running the migration command 'migrate -b tcp:127.0.0.1:' in the
master's monitor, qemu throws a segfault.

After checking some of the block migration code, I think there is something
wrong with the setup sequence in block_save_setup; roughly, the relationship
between the two setup steps looks like the sketch below.
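The sketch is paraphrased from block-migration.c from memory; exact
signatures and error handling may differ from the source:

/* init_blk_migration() adds one BlkMigDevState per block device to
 * block_mig_state.bmds_list.  set_dirty_tracking() only creates dirty
 * bitmaps for devices already on that list: */
static int set_dirty_tracking(void)
{
    BlkMigDevState *bmds;

    QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
        bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE,
                                                      NULL);
        if (!bmds->dirty_bitmap) {
            return -errno;
        }
    }
    return 0;
}
/* With the pre-patch order the list is still empty at this point, so every
 * bmds added afterwards keeps dirty_bitmap == NULL, and the first
 * bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap) in the dirty phase
 * dereferences it and segfaults. */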

thanks
chai wen


On 05/27/2014 04:54 PM, chai wen wrote:

 If we want to track dirty blocks using dirty bitmaps on a BlockDriverState
 when doing live block migration, its corresponding 'BlkMigDevState' should be
 added to block_mig_state.bmds_list first for subsequent processing.
 Otherwise set_dirty_tracking will do nothing on an empty list instead of
 allocating dirty bitmaps for the devices.

 What's worse, bdrv_get_dirty_count will then access bmds->dirty_bitmap
 directly, which can trigger a segfault for the reasons above.
 
 Signed-off-by: chai wen chaiw.f...@cn.fujitsu.com
 ---
  block-migration.c |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)
 
 diff --git a/block-migration.c b/block-migration.c
 index 56951e0..43203aa 100644
 --- a/block-migration.c
 +++ b/block-migration.c
 @@ -626,6 +626,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
  block_mig_state.submitted, block_mig_state.transferred);
  
  qemu_mutex_lock_iothread();
 +init_blk_migration(f);
  
  /* start track dirty blocks */
  ret = set_dirty_tracking();
 @@ -635,7 +636,6 @@ static int block_save_setup(QEMUFile *f, void *opaque)
  return ret;
  }
  
 -init_blk_migration(f);
  
  qemu_mutex_unlock_iothread();
  



-- 
Regards

Chai Wen



Re: [Qemu-devel] [PATCH] [qemu-devel] fix wrong order when doing live block migration setup

2014-05-27 Thread Chai Wen
On 05/27/2014 06:11 PM, Fam Zheng wrote:

 On Tue, 05/27 16:54, chai wen wrote:
 If we want to track dirty blocks using dirty bitmaps on a BlockDriverState
 when doing live block migration, its corresponding 'BlkMigDevState' should be
 added to block_mig_state.bmds_list first for subsequent processing.
 Otherwise set_dirty_tracking will do nothing on an empty list instead of
 allocating dirty bitmaps for the devices.

 What's worse, bdrv_get_dirty_count will then access bmds->dirty_bitmap
 directly, which can trigger a segfault for the reasons above.

 Signed-off-by: chai wen chaiw.f...@cn.fujitsu.com
 ---
  block-migration.c |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

 diff --git a/block-migration.c b/block-migration.c
 index 56951e0..43203aa 100644
 --- a/block-migration.c
 +++ b/block-migration.c
 @@ -626,6 +626,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
  block_mig_state.submitted, block_mig_state.transferred);
  
  qemu_mutex_lock_iothread();
 +init_blk_migration(f);
 
 Thanks for spotting this!
 
 I reverted the order of init_blk_migration and set_dirty_tracking in commit
 b8afb520e (block: Handle error of bdrv_getlength in bdrv_create_dirty_bitmap)
 incorrectly, thought that in this way, no clean up is needed if
 set_dirty_tracking fails.
 
 But by looking at savevm.c:qemu_savevm_state() we can see that
 qemu_savevm_state_cancel() will do the clean up automatically, so this fix is
 valid.
 
 Reviewed-by: Fam Zheng f...@redhat.com


Yeah, thank you for the review.


thanks
chai wen

 
  
  /* start track dirty blocks */
  ret = set_dirty_tracking();
 @@ -635,7 +636,6 @@ static int block_save_setup(QEMUFile *f, void *opaque)
  return ret;
  }
  
 -init_blk_migration(f);
  
  qemu_mutex_unlock_iothread();
  
 -- 
 1.7.1


 .
 



-- 
Regards

Chai Wen