Re: [Qemu-block] [PATCH 00/14] block: Image locking series

2016-12-01 Thread Max Reitz
On 31.10.2016 16:38, Fam Zheng wrote:
> This is v9 of the image locking series. I redid the whole series, adopting the
> "two locks" approach from Kevin and Max.
> 
> Depends on "[Qemu-devel] [PATCH] raw-posix: Rename 'raw_s' to 'rs'" in Max's
> block branch.
> 
> Fam Zheng (14):
>   osdep: Add qemu_lock_fd and qemu_unlock_fd
>   block: Define BDRV_O_SHARE_RW
>   qemu-io: Set "share-rw" flag together with read-only
>   qemu-img: Set "share-rw" flag in read-only commands
>   block: Set "share-rw" flag in drive-backup when sync=none
>   block: Set "share-rw" flag for incoming migration
>   iotests: 055: Don't attach the drive to vm for drive-backup
>   iotests: 030: Read-only open image for getting map
>   iotests: 087: Don't attach test image twice
>   iotests: 085: Avoid image locking conflict
>   iotests: 091: Quit QEMU before checking image
>   tests: Use null-co:// instead of /dev/null as the dummy image
>   raw-posix: Implement image locking
>   tests: Add test-image-lock

One issue I have with the series in its current state is that it does
not involve the format layer. For raw images, it's fine for the image to
be shared if BDRV_O_SHARE_RW comes from the block backend, but for qcow2
images it's not. Therefore, most drivers in the format layer should
force BDRV_O_SHARE_RW off for the protocol layer.

In fact, BDRV_O_SHARE_RW does not mean that an image has to allow
sharing; it means that the block backend (i.e. the user of the block
device) is fine with sharing. Therefore, it's perfectly fine to set
this flag for the drive-backup target because we can easily justify not
caring about concurrent writes there.

However, we need a way to override locking, and you have four patches
in your series that look like they're trying to do this:

We don't really have to override locking in patches 3 and 4; it's
enough to give a hint to the block layer that sharing is fine (i.e. to
set BDRV_O_SHARE_RW). If something in the block layer overrides this
decision, we can just let the user override it again (by setting
disable-lock).

Patch 5 can easily justify setting BDRV_O_SHARE_RW (as said above), but
it cannot justify setting disable-lock, and there is no user to do so.
If the target image is e.g. in qcow2 format, we should not just
force-share the image but we should fix drive-backup.

Patch 6 on the other hand is very justified in force-sharing the image.
Unfortunately, it doesn't really do that. Setting BDRV_O_SHARE_RW should
not do that, as I said above. It should ideally set disable-lock. But
then again, patch 13 does all kinds of funny force-sharing based on
whether BDRV_O_INACTIVE is set, so maybe this could be the general
model: Would it work to drop patch 6 and just make raw-posix always keep
the image unlocked if BDRV_O_INACTIVE is set?

(A further issue with setting disable-lock internally is that it's
raw-posix-specific (and I think that is justified) -- you cannot just
set it blindly and hope it gets to the right BDS eventually.)

Max



signature.asc
Description: OpenPGP digital signature


Re: [Qemu-block] [PATCH 05/14] block: Set "share-rw" flag in drive-backup when sync=none

2016-12-01 Thread Max Reitz
On 31.10.2016 16:38, Fam Zheng wrote:
> In this case we may open the source's backing image chain multiple
> times. Setting the share flag means the new open won't try to acquire
> or check any lock, once we implement image locking.
> 
> Signed-off-by: Fam Zheng 
> 
> ---
> 
> An alternative is reusing (and bdrv_ref) the existing source's backing
> bs instead of opening another one. If we decide that approach is better,
> it's better to do it in a separate series.

Yes, it is better (there's a reason why we do it for drive-mirror), and
while I somewhat agree that we could put it off until later, I'm not
sure we should. Opening an image with both BDRV_O_RDWR and
BDRV_O_SHARE_RW at the same time just because our implementation is
lacking is not ideal.

Anyway, the whole issue becomes more complex when involving format
drivers. I'll write more about that in a response to the cover letter.

> ---
>  blockdev.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/blockdev.c b/blockdev.c
> index d11a74f..9992c5d 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -3160,6 +3160,7 @@ static void do_drive_backup(DriveBackup *backup, BlockJobTxn *txn, Error **errp)
>  }
>  if (backup->sync == MIRROR_SYNC_MODE_NONE) {
>  source = bs;
> +flags |= BDRV_O_SHARE_RW;

In any case, there should be a comment explaining the situation here
(having to go through git blame is a bit tedious...), possibly involving
a TODO or FIXME regarding that we really shouldn't be using
BDRV_O_SHARE_RW (or maybe we should? I'm not sure, I'll explore it in
said cover letter response).

Max

>  }
>  
>  size = bdrv_getlength(bs);
> 






Re: [Qemu-block] [PATCH 04/14] qemu-img: Set "share-rw" flag in read-only commands

2016-12-01 Thread Max Reitz
On 31.10.2016 16:38, Fam Zheng wrote:
> Checking the status of an image when it is being used by a guest is
> usually useful,

True for qemu-img info and maybe even qemu-img compare (and qemu-img map
is just a debugging tool, so that's fine, too), but I don't think
qemu-img check is very useful. You are quite likely to see leaks and
maybe even errors (with lazy_refcounts=on) that don't mean anything
because they will go away once the VM is shut down.

> and there is no risk of corrupting data, so don't let
> the upcoming image locking feature limit this use case.

I agree that there is no harm in doing it, but for qemu-img check I
don't think it is very useful either.

Anyway, you can keep it, since I think it should not be doing anything:
the formats implementing qemu-img check should clear BDRV_O_SHARE_RW
anyway (unless overridden, however that may work).

> 
> Signed-off-by: Fam Zheng 
> ---
>  qemu-img.c | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/qemu-img.c b/qemu-img.c
> index afcd51f..b2f4077 100644
> --- a/qemu-img.c
> +++ b/qemu-img.c
> @@ -679,6 +679,10 @@ static int img_check(int argc, char **argv)
>  break;
>  }
>  }
> +
> +if (!(flags & BDRV_O_RDWR)) {
> +flags |= BDRV_O_SHARE_RW;
> +}

If you want to keep this for img_check() (and I'm not going to stop you
if you do), I think it would be better to put this right in front of
img_open() to make it really clear that both are not set at the same
time (without having to look into bdrv_parse_cache_mode()).

Max

>  if (optind != argc - 1) {
>  error_exit("Expecting one image file name");
>  }
> @@ -1231,6 +1235,7 @@ static int img_compare(int argc, char **argv)
>  goto out3;
>  }
>  
> +flags |= BDRV_O_SHARE_RW;
>  blk1 = img_open(image_opts, filename1, fmt1, flags, writethrough, quiet);
>  if (!blk1) {
>  ret = 2;
> @@ -2279,7 +2284,8 @@ static ImageInfoList *collect_image_info_list(bool image_opts,
>  g_hash_table_insert(filenames, (gpointer)filename, NULL);
>  
>  blk = img_open(image_opts, filename, fmt,
> -   BDRV_O_NO_BACKING | BDRV_O_NO_IO, false, false);
> +   BDRV_O_NO_BACKING | BDRV_O_NO_IO | BDRV_O_SHARE_RW,
> +   false, false);
>  if (!blk) {
>  goto err;
>  }
> @@ -2605,7 +2611,7 @@ static int img_map(int argc, char **argv)
>  return 1;
>  }
>  
> -blk = img_open(image_opts, filename, fmt, 0, false, false);
> +blk = img_open(image_opts, filename, fmt, BDRV_O_SHARE_RW, false, false);
>  if (!blk) {
>  return 1;
>  }
> 






Re: [Qemu-block] [PATCH 01/14] osdep: Add qemu_lock_fd and qemu_unlock_fd

2016-12-01 Thread Max Reitz
On 31.10.2016 16:38, Fam Zheng wrote:
> They are wrappers around POSIX fcntl "file private locking" (now known
> as open file description locks).
> 
> Signed-off-by: Fam Zheng 
> ---
>  include/qemu/osdep.h |  2 ++
>  util/osdep.c | 29 +
>  2 files changed, 31 insertions(+)
> 
> diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
> index 0e3c330..f15e122 100644
> --- a/include/qemu/osdep.h
> +++ b/include/qemu/osdep.h
> @@ -294,6 +294,8 @@ int qemu_close(int fd);
>  #ifndef _WIN32
>  int qemu_dup(int fd);
>  #endif
> +int qemu_lock_fd(int fd, int64_t start, int64_t len, bool exclusive);
> +int qemu_unlock_fd(int fd, int64_t start, int64_t len);
>  
>  #if defined(__HAIKU__) && defined(__i386__)
>  #define FMT_pid "%ld"
> diff --git a/util/osdep.c b/util/osdep.c
> index 06fb1cf..b85a490 100644
> --- a/util/osdep.c
> +++ b/util/osdep.c
> @@ -140,6 +140,35 @@ static int qemu_parse_fdset(const char *param)
>  {
>  return qemu_parse_fd(param);
>  }
> +
> +static int qemu_lock_fcntl(int fd, int64_t start, int64_t len, int fl_type)
> +{
> +#ifdef F_OFD_SETLK
> +int ret;
> +struct flock fl = {
> +.l_whence = SEEK_SET,
> +.l_start  = start,
> +.l_len= len,
> +.l_type   = fl_type,
> +};
> +do {
> +ret = fcntl(fd, F_OFD_SETLK, &fl);
> +} while (ret == -1 && errno == EINTR);

As I've asked in the last version: Can EINTR happen at all? My man page
tells me it's possible only with F(_OFD)_SETLKW.

Max

> +return ret == -1 ? -errno : 0;
> +#else
> +return -ENOTSUP;
> +#endif
> +}
> +
> +int qemu_lock_fd(int fd, int64_t start, int64_t len, bool exclusive)
> +{
> +return qemu_lock_fcntl(fd, start, len, exclusive ? F_WRLCK : F_RDLCK);
> +}
> +
> +int qemu_unlock_fd(int fd, int64_t start, int64_t len)
> +{
> +return qemu_lock_fcntl(fd, start, len, F_UNLCK);
> +}
>  #endif
>  
>  /*
> 






Re: [Qemu-block] [Qemu-devel] [PATCH] qemu-img: Improve commit invalid base message

2016-12-01 Thread Max Reitz
On 01.12.2016 03:36, Eric Blake wrote:
> On 11/30/2016 08:05 PM, Max Reitz wrote:
>> When trying to invoke qemu-img commit with a base image file name that
>> is not part of the top image's backing chain, the user receives a rather
>> plain "Base not found" error message. This is not really helpful because
>> it does not explain what "not found" means, potentially leaving the user
>> wondering why qemu cannot find a file despite it clearly existing in the
>> file system.
>>
>> Improve the error message by clarifying that "not found" means "not
>> found in the top image's backing chain".
>>
>> Reported-by: Ala Hino 
>> Signed-off-by: Max Reitz 
>> ---
>> Reported in: https://bugzilla.redhat.com/show_bug.cgi?id=1390991
>> ---
>>  qemu-img.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> Safe for 2.8, if the block maintainers agree.
> 
> Reviewed-by: Eric Blake 

Thanks!

I agree that it's safe, but we're at a point where only critical things
should go into the release. Getting this into 2.9 will be sufficient.

Max





Re: [Qemu-block] [PATCH 06/10] aio-posix: remove walking_handlers, protecting AioHandler list with list_lock

2016-12-01 Thread Paolo Bonzini


On 30/11/2016 14:36, Paolo Bonzini wrote:
> 
> 
> On 30/11/2016 14:31, Stefan Hajnoczi wrote:
>> On Tue, Nov 29, 2016 at 12:47:03PM +0100, Paolo Bonzini wrote:
>>> @@ -272,22 +275,32 @@ bool aio_prepare(AioContext *ctx)
>>>  bool aio_pending(AioContext *ctx)
>>>  {
>>>  AioHandler *node;
>>> +bool result = false;
>>>  
>>> -QLIST_FOREACH(node, &ctx->aio_handlers, node) {
>>> +/*
>>> + * We have to walk very carefully in case aio_set_fd_handler is
>>> + * called while we're walking.
>>> + */
>>> +qemu_lockcnt_inc(&ctx->list_lock);
>>>  
>>> +QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
>>>  int revents;
>>>  
>>>  revents = node->pfd.revents & node->pfd.events;
>>>  if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
>>>  aio_node_check(ctx, node->is_external)) {
>>> -return true;
>>> +result = true;
>>> +break;
>>>  }
>>>  if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
>>>  aio_node_check(ctx, node->is_external)) {
>>> -return true;
>>> +result = true;
>>> +break;
>>>  }
>>>  }
>>> +qemu_lockcnt_dec(&ctx->list_lock);
>>>  
>>> -return false;
>>> +return result;
>>>  }
>>>  
>>>  bool aio_dispatch(AioContext *ctx)
>>> @@ -308,13 +321,12 @@ bool aio_dispatch(AioContext *ctx)
>>>   * We have to walk very carefully in case aio_set_fd_handler is
>>>   * called while we're walking.
>>>   */
>>> -ctx->walking_handlers++;
>>> +qemu_lockcnt_inc(&ctx->list_lock);
>>>  
>>> -QLIST_FOREACH_SAFE(node, &ctx->aio_handlers, node, tmp) {
>>> +QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
>>>  int revents;
>>>  
>>> -revents = node->pfd.revents & node->pfd.events;
>>> -node->pfd.revents = 0;
>>> +revents = atomic_xchg(&node->pfd.revents, 0) & node->pfd.events;
>>
>> Why is node->pfd.revents accessed with atomic_*() here and in aio_poll()
>> but not in aio_pending()?
> 
> It could use atomic_read there, indeed.

Actually, thanks to the (already committed) patches that limit aio_poll
to the I/O thread, these atomic accesses are not needed anymore.

Paolo



Re: [Qemu-block] [PATCH 1/3] timer: fix misleading comment in timer.h

2016-12-01 Thread Paolo Bonzini


On 01/12/2016 14:50, Stefan Hajnoczi wrote:
> On Wed, Nov 30, 2016 at 11:30:38PM -0500, Yaowei Bai wrote:
>> It's timer to expire, not clock.
>>
>> Signed-off-by: Yaowei Bai 
>> ---
>>  include/qemu/timer.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> For the whole series:
> 
> Reviewed-by: Stefan Hajnoczi 
> 
> PS: I suggest sending a cover letter "[PATCH 0/3]" in the future.  This
> makes it easy for reviewers to indicate they have reviewed the whole
> series.  Without a cover letter it's ambiguous whether my single
> Reviewed-by: applies to just this patch or to the whole series - and
> patch management tools will probably get it wrong too.
> 

I've queued the series for QEMU 2.9.  Patches of this kind can probably
be sent to qemu-triv...@nongnu.org, which will simplify their inclusion.

Of course, this is not meant to diminish your contribution!  "Trivial"
patches are important and good comments will also help the next person
studying QEMU's source code.

Thanks,

Paolo



[Qemu-block] [RFC PATCH] glusterfs: allow partial reads

2016-12-01 Thread Wolfgang Bumiller
Fixes #1644754.

Signed-off-by: Wolfgang Bumiller 
---
I'm not sure what the original rationale was to treat partial reads as
well as partial writes as I/O errors. (This seems to have happened
between the original glusterfs v1 and v2 series, with a note but no
reasoning for the read side as far as I could see.)
The general direction lately seems to be to move away from sector-based
block APIs. Also, e.g. the NFS code allows partial reads. (It does,
however, have an old patch (c2eb918e3) dedicated to aligning sizes to
512-byte boundaries at file creation, for compatibility with other
parts of qemu like qcow2. This already happens in glusterfs, though;
but if you move a file from a different storage over to glusterfs, you
may end up with a qcow2 file whose L1 table, say, occupies the last 80
bytes of the file, aligned to _begin_ at a 512-byte boundary but not to
_end_ at one.)

 block/gluster.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/block/gluster.c b/block/gluster.c
index 891c13b..3db0bf8 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -41,6 +41,7 @@ typedef struct GlusterAIOCB {
 int ret;
 Coroutine *coroutine;
 AioContext *aio_context;
+bool is_write;
 } GlusterAIOCB;
 
 typedef struct BDRVGlusterState {
@@ -716,8 +717,10 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
 acb->ret = 0; /* Success */
 } else if (ret < 0) {
 acb->ret = -errno; /* Read/Write failed */
+} else if (acb->is_write) {
+acb->ret = -EIO; /* Partial write - fail it */
 } else {
-acb->ret = -EIO; /* Partial read/write - fail it */
+acb->ret = 0; /* Success */
 }
 
 aio_bh_schedule_oneshot(acb->aio_context, qemu_gluster_complete_aio, acb);
@@ -965,6 +968,7 @@ static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs,
 acb.ret = 0;
 acb.coroutine = qemu_coroutine_self();
 acb.aio_context = bdrv_get_aio_context(bs);
+acb.is_write = true;
 
ret = glfs_zerofill_async(s->fd, offset, size, gluster_finish_aiocb, &acb);
 if (ret < 0) {
@@ -1087,9 +1091,11 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
 acb.aio_context = bdrv_get_aio_context(bs);
 
 if (write) {
+acb.is_write = true;
 ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
 gluster_finish_aiocb, &acb);
 } else {
+acb.is_write = false;
 ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
 gluster_finish_aiocb, &acb);
 }
@@ -1153,6 +1159,7 @@ static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
 acb.ret = 0;
 acb.coroutine = qemu_coroutine_self();
 acb.aio_context = bdrv_get_aio_context(bs);
+acb.is_write = true;
 
ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, &acb);
 if (ret < 0) {
@@ -1199,6 +1206,7 @@ static coroutine_fn int qemu_gluster_co_pdiscard(BlockDriverState *bs,
 acb.ret = 0;
 acb.coroutine = qemu_coroutine_self();
 acb.aio_context = bdrv_get_aio_context(bs);
+acb.is_write = true;
 
ret = glfs_discard_async(s->fd, offset, size, gluster_finish_aiocb, &acb);
 if (ret < 0) {
-- 
2.1.4





Re: [Qemu-block] [Qemu-devel] [PATCH] migration: re-active images when migration fails to complete

2016-12-01 Thread Kevin Wolf
Forwarding to qemu-block so I won't forget to have a look.

On 19.11.2016 at 12:43, zhanghailiang wrote:
> commit fe904ea8242cbae2d7e69c052c754b8f5f1ba1d6 fixed a case
> which migration aborted QEMU because it didn't regain the control
> of images while some errors happened.
> 
> Actually, we have another case in that error path to abort QEMU
> because of the same reason:
> migration_thread()
>   migration_completion()
>     bdrv_inactivate_all() ---------------> inactivate images
>     qemu_savevm_state_complete_precopy()
>       socket_writev_buffer() -----------> error because destination fails
>         qemu_fflush() ------------------> set error on migration stream
>     qemu_mutex_unlock_iothread() -------> unlock
> qmp_migrate_cancel() -------------------> user cancelled migration
>   migrate_set_state() ------------------> set migrate CANCELLING
> migration_completion() -----------------> go on to fail_invalidate
>   if (s->state == MIGRATION_STATUS_ACTIVE) -> jump this branch
> migration_thread() ---------------------> break migration loop
> vm_start() -----------------------------> restart guest with inactive images
> We failed to regain the control of images because we only regain it
> while the migration state is "active", but here users cancelled the migration
> when they found some errors happened (for example, libvirtd daemon is shutdown
> in destination unexpectedly).
> 
> Signed-off-by: zhanghailiang 
> ---
>  migration/migration.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index f498ab8..0c1ee6d 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1752,7 +1752,8 @@ fail_invalidate:
>  /* If not doing postcopy, vm_start() will be called: let's regain
>   * control on images.
>   */
> -if (s->state == MIGRATION_STATUS_ACTIVE) {
> +if (s->state == MIGRATION_STATUS_ACTIVE ||
> +s->state == MIGRATION_STATUS_CANCELLING) {
>  Error *local_err = NULL;
>  
>  bdrv_invalidate_cache_all(&local_err);
> -- 
> 1.8.3.1
> 
> 
> 



Re: [Qemu-block] [PATCH 1/3] timer: fix misleading comment in timer.h

2016-12-01 Thread Stefan Hajnoczi
On Wed, Nov 30, 2016 at 11:30:38PM -0500, Yaowei Bai wrote:
> It's timer to expire, not clock.
> 
> Signed-off-by: Yaowei Bai 
> ---
>  include/qemu/timer.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

For the whole series:

Reviewed-by: Stefan Hajnoczi 

PS: I suggest sending a cover letter "[PATCH 0/3]" in the future.  This
makes it easy for reviewers to indicate they have reviewed the whole
series.  Without a cover letter it's ambiguous whether my single
Reviewed-by: applies to just this patch or to the whole series - and
patch management tools will probably get it wrong too.




Re: [Qemu-block] QEMU 1.1.2: block IO throttle might occasionally freeze running process's IO to zero

2016-12-01 Thread Paolo Bonzini


On 01/12/2016 05:07, Bob Chen wrote:
> Test case:
> 
> 1. QEMU 1.1.2
> 2. Run fio inside the vm, give it some pressure. Watch the realtime
> throughput
> 3. block_set_io_throttle drive_2 1 0 0 2000 0 0  # throttle bps and iops, any value
> 4. Observed that the IO is very likely to freeze to zero. The fio
> process gets stuck!
> 5. Kill the former fio process, start a new one. The IO turns back to normal
> 
> Didn't reproduce it with QEMU 2.5.
> 
> 
> Actually I'm not wishfully expecting the community to help fix this
> bug on such an ancient version. I just hope someone can tell me what
> the root cause is. Then I can evaluate whether I should move to a
> higher QEMU version, or fix this bug on 1.1.2 in place (if it is a
> small one).

Hi,

throttling has been rewritten in QEMU 2.0 (see the commits around
5ddfffb, "throttle: Add a new throttling API implementing continuous
leaky bucket.", 2013-09-06), so the root cause is simply that the old
algorithms were buggy. :)

I think that the new implementation has been backported to QEMU versions
as old as 1.1.2, but if you can move to a newer version it would be simpler.

Paolo



Re: [Qemu-block] [Qemu-devel] QEMU 1.1.2: block IO throttle might occasionally freeze running process's IO to zero

2016-12-01 Thread Fam Zheng
On Thu, 12/01 12:07, Bob Chen wrote:
> Test case:
> 
> 1. QEMU 1.1.2
> 2. Run fio inside the vm, give it some pressure. Watch the realtime
> throughput
> 3. block_set_io_throttle drive_2 1 0 0 2000 0 0  # throttle bps and iops, any value
> 4. Observed that the IO is very likely to freeze to zero. The fio
> process gets stuck!
> 5. Kill the former fio process, start a new one. The IO turns back to normal
> 
> Didn't reproduce it with QEMU 2.5.
> 
> 
> Actually I'm not wishfully expecting the community to help fix this
> bug on such an ancient version. I just hope someone can tell me what
> the root cause is. Then I can evaluate whether I should move to a
> higher QEMU version, or fix this bug on 1.1.2 in place (if it is a
> small one).

The throttling implementation has been completely rewritten since then,
so I don't think it's easy to suggest a root cause for you.

Fam