On 2019/10/24 21:56, Dr. David Alan Gilbert wrote:
> * Zhenyu Ye (yezhen...@huawei.com) wrote:
>>
>>
>> On 2019/10/23 23:19, Stefan Hajnoczi wrote:
>>> On Tue, Oct 22, 2019 at 04:12:03PM +0800, yezhenyu (A) wrote:
>>>> Since qemu2.9, QEMU added
adjust iothread poll-* properties
>> for a specified iothread:
>>
>> set_iothread_poll_max_ns: set the maximum polling time in ns;
>> set_iothread_poll_grow: set how many ns will be added to polling time;
>> set_iothread_poll_shrink: set how many ns will be removed from polling time.
On 2019/10/24 22:38, Dr. David Alan Gilbert wrote:
> * Zhenyu Ye (yezhen...@huawei.com) wrote:
>>
>>
>> On 2019/10/24 21:56, Dr. David Alan Gilbert wrote:
>>> * Zhenyu Ye (yezhen...@huawei.com) wrote:
>>>>
>>>>
>>>> On 2019/10/23
> time.
>
> Thanks; I don't know much about iothread, so I'll answer just from the
> HMP side.
>
Thanks for your review. I will update my patch according to your advice and
submit a new patch. Some of my answers are below...
>> Signed-off-by: Zhenyu Ye
>> ---
adjust iothread poll-* properties
>> for a specified iothread:
>>
>> set_iothread_poll_max_ns: set the maximum polling time in ns;
>> set_iothread_poll_grow: set how many ns will be added to polling time;
>> set_iothread_poll_shrink: set how many ns will be removed from polling
>> time.
>
Hi Stefan,
On 2020/10/13 18:00, Stefan Hajnoczi wrote:
>
> Sorry, I lost track of this on-going email thread.
>
> Thanks for the backtrace. It shows the io_submit call is done while the
> AioContext lock is held. The monitor thread is waiting for the
> IOThread's AioContext lock. vcpus threads
On 2020/10/19 21:25, Paolo Bonzini wrote:
> On 19/10/20 14:40, Zhenyu Ye wrote:
>> The kernel backtrace for io_submit in GUEST is:
>>
>> guest# ./offcputime -K -p `pgrep -nx fio`
>> b'finish_task_switch'
>> b'__schedule'
>>
Hi Stefan,
On 2020/9/14 21:27, Stefan Hajnoczi wrote:
>>
>> Theoretically, everything running in an iothread is asynchronous. However,
>> some 'asynchronous' actions are not entirely non-blocking, such as
>> io_submit(). This can block when the iodepth is too big and the I/O pressure
>> is too
Hi Daniel,
On 2020/9/14 22:42, Daniel P. Berrangé wrote:
> On Tue, Aug 11, 2020 at 09:54:08PM +0800, Zhenyu Ye wrote:
>> Hi Kevin,
>>
>> On 2020/8/10 23:38, Kevin Wolf wrote:
>>> Am 10.08.2020 um 16:52 hat Zhenyu Ye geschrieben:
>>>> Before doing qmp acti
On 2020/9/18 22:06, Fam Zheng wrote:
>
> I can see how blocking in a slow io_submit can cause trouble for main
> thread. I think one way to fix it (until it's made truly async in new
> kernels) is moving the io_submit call to thread pool, and wrapped in a
> coroutine, perhaps.
>
I'm not sure if
Hi Stefan, Fam,
On 2020/9/18 0:01, Fam Zheng wrote:
> On 2020-09-17 16:44, Stefan Hajnoczi wrote:
>> On Thu, Sep 17, 2020 at 03:36:57PM +0800, Zhenyu Ye wrote:
>>> When the hang occurs, the QEMU is blocked at:
>>>
>>> #0 0x95762b64 in ?? () fr
On 2020/7/21 19:54, Daniel P. Berrangé wrote:
> On Fri, Jul 17, 2020 at 05:19:43PM +0800, Zhenyu Ye wrote:
>> We add the reference of creds in migration_tls_get_creds(),
>> but there was no place to unref it. So the OBJECT(creds) will
>> never be freed and result in a memory leak.
in g_main_context_dispatch
(/usr/lib64/libglib-2.0.so.0+0x52a7b)
Since it's fine to use the borrowed reference when using the creds,
just remove the object_ref() in migration_tls_get_creds().
Signed-off-by: Zhenyu Ye
---
migration/tls.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/migration/tls.c b
aio_context_acquire_timeout(),
which will return ETIMEDOUT after @t seconds.
This will be used in next patch.
Signed-off-by: Zhenyu Ye
---
include/block/aio.h | 5 +
include/qemu/thread-posix.h | 1 +
include/qemu/thread.h | 1 +
util/async.c | 10 ++
util/qemu-thread
if the
waiting time exceeds LOCK_TIMEOUT (default set to 3 seconds).
Signed-off-by: Zhenyu Ye
---
block/qapi-sysemu.c | 7 ++-
block/qapi.c | 6 +-
blockdev.c | 35 ++-
include/block/aio.h | 1 +
4 files changed, 42 insertions(+), 7
qmp actions.
Zhenyu Ye (2):
util: introduce aio_context_acquire_timeout
qmp: use aio_context_acquire_timeout to replace aio_context_acquire
block/qapi-sysemu.c | 7 ++-
block/qapi.c | 6 +-
blockdev.c | 35
Hi Stefan,
On 2020/8/12 21:51, Stefan Hajnoczi wrote:
> On Mon, Aug 10, 2020 at 10:52:44PM +0800, Zhenyu Ye wrote:
>> Before doing qmp actions, we need to lock the qemu_global_mutex,
>> so the qmp actions should not take too long.
>>
>> Unfortunately, some qmp
Hi Kevin,
On 2020/8/10 23:38, Kevin Wolf wrote:
> Am 10.08.2020 um 16:52 hat Zhenyu Ye geschrieben:
>> Before doing qmp actions, we need to lock the qemu_global_mutex,
>> so the qmp actions should not take too long.
>>
>> Unfortunately, some qmp actions
and the I/O returns slowly, the main thread will be stuck
until the lock is released, which will affect vCPU operation
and finally cause the VM to be stuck.
Signed-off-by: Zhenyu Ye
---
block/qapi.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/block/qapi.c b/block/qapi.c
index
We add the reference of creds in migration_tls_get_creds(),
but there was no place to unref it. So the OBJECT(creds) will
never be freed and result in memory leak.
Unref the creds after creating the tls-channel server/client.
Signed-off-by: Zhenyu Ye
---
migration/tls.c | 12 +---
1
From 0b4318c9dbf6fa152ec14eab29837ea06e2d78e5 Mon Sep 17 00:00:00 2001
From: eillon
Date: Wed, 25 Nov 2020 19:17:03 +0800
Subject: [PATCH] x86/cpu: initialize the CPU concurrently
Currently we initialize CPUs one by one in qemu_init_vcpu(); every CPU
has to wait until cpu->created = true.
Hi Eduardo,
Sorry for the delay.
On 2020/12/22 5:36, Eduardo Habkost wrote:
> On Mon, Dec 21, 2020 at 07:36:18PM +0800, Zhenyu Ye wrote:
>> Providing an optional mechanism to wait for all vCPU threads to be
>> created out of qemu_init_vcpu(); then we can initialize the CPUs
>> concurrently
On 2020/12/15 0:33, Stefan Hajnoczi wrote:
> On Tue, Dec 08, 2020 at 08:47:42AM -0500, Glauber Costa wrote:
>> The work we did at the time was in fixing those things in the kernel
>> as much as we could.
>> But the API is just like that...
>
The best way for us is to replace io_submit with
Provide an optional mechanism to wait for all vCPU threads to be
created out of qemu_init_vcpu(); then we can initialize the CPUs
concurrently on the x86 architecture.
This reduces the time needed to create virtual machines. For example, when
HAXM is used as the accelerator,
Hi Eduardo,
Thanks for your review.
On 2020/12/19 2:47, Eduardo Habkost wrote:
> On Wed, Nov 25, 2020 at 07:54:17PM +0800, Zhenyu Ye wrote:
>> From 0b4318c9dbf6fa152ec14eab29837ea06e2d78e5 Mon Sep 17 00:00:00 2001
>> From: eillon
>> Date: Wed, 25 Nov 2020 19:17:03 +0800
Hi Igor Mammedov,
Thanks for your review.
On 2020/12/19 1:17, Igor Mammedov wrote:
> On Wed, 25 Nov 2020 19:54:17 +0800
> Zhenyu Ye wrote:
>
>> From 0b4318c9dbf6fa152ec14eab29837ea06e2d78e5 Mon Sep 17 00:00:00 2001
>> From: eillon
>> Date: Wed, 25 Nov 2020 19:17:
Hi Eduardo,
On 2020/12/25 2:06, Eduardo Habkost wrote:
>>
>> The most time-consuming operation in haxm is ioctl(HAX_VM_IOCTL_VCPU_CREATE).
>> Sadly, this cannot be split.
>>
>> Even if we fix the problem in haxm, other accelerators may also have
>> this problem. So I think if we can make the
Hi all,
commit 8dcd3c9b91 ("qemu-img: align result of is_allocated_sectors")
introduces block alignment when doing qemu-img convert. However, the
alignment is:
s.alignment = MAX(pow2floor(s.min_sparse),
DIV_ROUND_UP(out_bs->bl.request_alignment,
ping?
On 2021/4/2 11:52, Zhenyu Ye wrote:
> Hi all,
>
> commit 8dcd3c9b91 ("qemu-img: align result of is_allocated_sectors")
> introduces block alignment when doing qemu-img convert. However, the
> alignment is:
>
> s.alignment = MAX(pow2floor(s.min_sparse),
> DIV_ROUND_UP(out_bs->bl.request_alignment,
On 2021/8/3 23:03, Eric Blake wrote:
> On Fri, Apr 02, 2021 at 11:52:25AM +0800, Zhenyu Ye wrote:
>> Hi all,
>>
>> commit 8dcd3c9b91 ("qemu-img: align result of is_allocated_sectors")
>> introduces block alignment when doing qemu-img convert. However, the