If memory is hotplugged during migration, the calculation of migration_dirty_pages
may be incorrect; it should be fixed as below,
-void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
+void migration_bitmap_extend(RAMBlock *block, ram_addr_t old, ram_addr_t new)
{
/* called in qemu main thread, so
If memory is hotplugged during migration, the calculation of migration_dirty_pages
may not be correct,
void migration_bitmap_extend(ram_addr_t old, ram_addr_t new)
{
    ...
    migration_dirty_pages += new - old;
    call_rcu(old_bitmap, migration_bitmap_free, rcu);
    ...
}
Thanks,
Hi,
Any update?
Thanks,
Zhang Haoyu
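For illustration, a minimal sketch of the direction the changed signature suggests
(hypothetical: it assumes old/new become the block's old and new sizes in pages,
which is my assumption, not the actual patch):

/* Hypothetical sketch: count only the hotplugged block's added pages,
 * using the block's own offset instead of assuming the new pages sit
 * at the end of the address space. */
void migration_bitmap_extend(RAMBlock *block, ram_addr_t old, ram_addr_t new)
{
    if (migration_bitmap_rcu) {
        ram_addr_t start = (block->offset >> TARGET_PAGE_BITS) + old;
        ram_addr_t added = new - old;

        qemu_mutex_lock(&migration_bitmap_mutex);
        /* ... reallocate and copy the bitmap as before ... */
        bitmap_set(migration_bitmap_rcu->bmap, start, added);
        migration_dirty_pages += added;   /* count only the new pages */
        qemu_mutex_unlock(&migration_bitmap_mutex);
    }
}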
On 2016/8/30 12:11, Lai Jiangshan wrote:
> On Wed, Aug 10, 2016 at 5:03 PM, Juan Quintela <quint...@redhat.com> wrote:
>> Lai Jiangshan <jiangshan...@gmail.com> wrote:
>>
>> Hi
>>
>> First of all, I like a l
Hi Jiangshan,
Any update on this patch?
Thanks,
Zhang Haoyu
On 2016/8/11 22:45, Lai Jiangshan wrote:
> Note, the old local migration patchset:
> https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg00073.html
>
> this patch can be considered as a new local migration im
On 2015-03-10 08:29:19, Fam Zheng wrote:
On Mon, 03/09 16:14, Zhang Haoyu wrote:
Hi John, Vladimir
We can use active block commit to implement incremental backup without
guest disruption,
e.g.,
origin <- A <- B <- C <- current BDS,
a new external snapshot will be produced before
On 2015-01-15 18:08:39, Paolo Bonzini wrote:
On 15/01/2015 10:56, Zhang Haoyu wrote:
I see; while waiting for the completion of drive_mirror I/O, the coroutine will be
switched back to the main thread to poll and process other events, like QMP
requests,
then after the I/O completes, the coroutine
On 2015-03-10 09:54:47, Fam Zheng wrote:
On Tue, 03/10 09:30, Zhang Haoyu wrote:
On 2015-03-10 08:29:19, Fam Zheng wrote:
On Mon, 03/09 16:14, Zhang Haoyu wrote:
Hi John, Vladimir
We can use active block commit to implement incremental backup
without guest disruption
missed something.
Thanks,
Zhang Haoyu
And does qemu support committing any external snapshot to its backing file?
Yes.
() the unneeded snapshot in source or destination end.
So, compared with the above mechanism,
what are the advantages of the incremental backup implemented by John and
Vladimir?
Thanks,
Zhang Haoyu
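For concreteness, the active-commit flow described above could look roughly like
this over QMP (device name and file paths are placeholders; a sketch, not the
exact commands for any particular QEMU version):

{ "execute": "blockdev-snapshot-sync",
  "arguments": { "device": "drive0",
                 "snapshot-file": "/backup/overlay-1.qcow2",
                 "format": "qcow2" } }
... copy the now read-only former active image to the backup destination ...
{ "execute": "block-commit",
  "arguments": { "device": "drive0", "top": "/backup/overlay-1.qcow2" } }

Since the overlay is the active layer, the commit runs as an active block-commit
job that is finished with block-job-complete once it reports ready.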
On 2015-03-09 15:38:40, Paolo Bonzini wrote:
On 09/03/2015 08:03, Zhang Haoyu wrote:
On 2015-03-03 18:00:09
On 2015-03-03 18:00:09, Paolo Bonzini wrote:
On 03/03/2015 07:52, Zhang Haoyu wrote:
Hi,
If we introduce a bitmap to bdrv_commit to track dirty sectors,
could we make commit non-disruptive to the guest?
That is already implemented. It uses the same code that implements
Hi,
If we introduce a bitmap to bdrv_commit to track dirty sectors,
could we make commit non-disruptive to the guest?
Thanks,
Zhang Haoyu
fix the wrong mc146818rtc subsection name to avoid vmstate_subsection_load() failing
during incoming migration or loadvm.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com.cn
---
hw/timer/mc146818rtc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/timer/mc146818rtc.c b/hw/timer
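For reference, vmstate subsection names are expected to be prefixed with the
parent VMStateDescription name, so the one-line fix presumably has this shape
(a sketch under that assumption; the exact identifier is not taken from the patch):

 static const VMStateDescription vmstate_irq_reinject_on_ack_count = {
-    .name = "irq_reinject_on_ack_count",
+    .name = "mc146818rtc/irq_reinject_on_ack_count",
     .version_id = 1,
     ...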
,
Zhang Haoyu
On 2015-01-26 19:29:03, Paolo Bonzini wrote:
On 26/01/2015 12:13, Zhang Haoyu wrote:
Thanks, Paolo,
but too many internal snapshots have been saved by customers,
so switching to the external snapshot mechanism would have a significant impact
on subsequent upgrades.
In that case, patches are welcome
On 2015-01-27 09:24:13, Zhang Haoyu wrote:
On 2015-01-26 22:11:59, Max Reitz wrote:
On 2015-01-26 at 08:20, Zhang Haoyu wrote:
Hi, all
Regarding very large qcow2 images, e.g., 2TB,
a long disruption happens when performing a snapshot,
which is caused by cache updates and I/O waits
On 2015-01-26 22:11:59, Max Reitz wrote:
On 2015-01-26 at 08:20, Zhang Haoyu wrote:
Hi, all
Regarding very large qcow2 images, e.g., 2TB,
a long disruption happens when performing a snapshot,
which is caused by cache updates and I/O waits.
perf top data is shown below,
PerfTop
On 2015-01-26 17:29:43, Paolo Bonzini wrote:
On 26/01/2015 02:07, Zhang Haoyu wrote:
Hi, Kashyap
I've tried ‘drive_backup’ via QMP,
but the snapshots were not backed up to the destination;
I think the reason is that backup_run() only copies the
guest-visible data of the qcow2 image.
Yes
On 2015-01-23 07:30:19, Kashyap Chamarthy wrote:
On Wed, Jan 21, 2015 at 11:39:44AM +0100, Paolo Bonzini wrote:
On 21/01/2015 11:32, Zhang Haoyu wrote:
Hi,
Does drive_mirror support incremental backup of a running VM?
Or does some other mechanism?
incremental backup of a running VM
for
the changed data.
At the next backup, only the dirty data will be mirrored to the destination.
Even if the VM is shut down and started again after several days,
the bitmap will be loaded when the VM starts.
Any ideas?
Thanks,
Zhang Haoyu
On 2015-01-17 19:55:16, Peter Maydell wrote:
On 17 January 2015 at 11:52, Peter Maydell peter.mayd...@linaro.org wrote:
On 17 January 2015 at 06:48, Zhang Haoyu zhan...@sangfor.com.cn wrote:
G_IO_OUT|G_IO_HUP is passed by all of the callers
of the chr_add_watch hook, so the assert condition
G_IO_OUT|G_IO_HUP is passed by all of the callers
of the chr_add_watch hook, so the assert condition MUST be
changed.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com.cn
---
spice-qemu-char.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/spice-qemu-char.c b/spice-qemu-char.c
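The single changed line presumably just relaxes the assertion so the extra
G_IO_HUP bit is tolerated, along these lines (a sketch under that assumption,
not the verbatim patch):

-    assert(cond == G_IO_OUT);
+    assert(cond & G_IO_OUT);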
On 2015-01-15 17:11:49, Paolo Bonzini wrote:
On 15/01/2015 04:54, Zhang Haoyu wrote:
2) Finer-grained control of the parameters of block migration (dirty bitmap
granularity).
3) Block and RAM migration do not share the same socket and thus can
more easily be parallelized
On 2015-01-14 17:07:08, Paolo Bonzini wrote:
On 14/01/2015 08:58, Zhang Haoyu wrote:
2) Finer-grained control of the parameters of block migration (dirty bitmap
granularity).
3) Block and RAM migration do not share the same socket and thus can
more easily be parallelized.
drive_mirror
On 2015-01-14 15:42:41, Paolo Bonzini wrote:
On 14/01/2015 03:41, Zhang Haoyu wrote:
Hi, Paolo,
what are the advantages of drive_mirror over the traditional mechanism implemented in
block-migration.c?
Why does libvirt use drive_mirror instead of the traditional iterative mechanism as
the default way
On 2015-01-13 17:45:45, Paolo Bonzini wrote:
On 13/01/2015 03:03, Zhang Haoyu wrote:
I want to live migrate a VM with storage; with regard to the migration of
storage,
should I use drive_mirror or the traditional mechanism implemented in
block-migration.c?
Because I don't use
On 2015-01-12 15:50:13, Zhang Haoyu wrote:
Hi,
I want to clone a running VM without shutting it down;
can the method below work?
1) create a snapshot of the VM
2) create a new qcow2 image from the snapshot, but how?
3) use the new qcow2 image as a backing image to clone VMs
Can drive_mirror clone a running
Hi,
I want to live migrate a VM with storage; with regard to the migration of
storage,
should I use drive_mirror or the traditional mechanism implemented in
block-migration.c?
Any advices?
Thanks,
Zhang Haoyu
On 2015-01-13 09:49:00, Zhang Haoyu wrote:
Hi,
I want to live migrate a VM with storage; with regard to the migration of
storage,
should I use drive_mirror or the traditional mechanism implemented in
block-migration.c?
Because I don't use libvirtd to manage VMs,
if I want to use drive_mirror
On 2014-12-22 09:28:52, Paolo Bonzini wrote:
On 22/12/2014 07:39, Zhang Haoyu wrote:
Hi,
When I perform P2V from physical servers with win2008 to KVM VMs,
some cases failed because the physical disk was using GPT for partitioning,
and QEMU doesn't support GPT by default.
And, I see in below
Hi,
I want to clone a running VM without shutting it down;
can the method below work?
1) create a snapshot of the VM
2) create a new qcow2 image from the snapshot, but how? (see the sketch below)
3) use the new qcow2 image as a backing image to clone VMs
Any ideas?
Thanks,
Zhang Haoyu
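One way step 2 could be done with qemu-img (image names and the snapshot name
snap1 are placeholders; a sketch, assuming an internal qcow2 snapshot was taken
in step 1):

# extract the internal snapshot into a standalone base image
qemu-img convert -f qcow2 -s snap1 -O qcow2 vm.qcow2 base.qcow2
# create one thin overlay per clone on top of that base
qemu-img create -f qcow2 -b base.qcow2 clone1.qcow2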
Hi,
what's the status of migration support for vhost-user?
Thanks,
Zhang Haoyu
On 2014-06-18 22:07:49, Michael S. Tsirkin wrote:
On Wed, Jun 18, 2014 at 04:37:57PM +0300, Nikolay Nikolaev wrote:
On Wed, Jun 18, 2014 at 3:35 PM, Michael S
On 2014/12/22 16:41, Andrey Korolyov wrote:
On Mon, Dec 22, 2014 at 6:59 AM, Zhang Haoyu zhhy.zhangha...@gmail.com
wrote:
Hi,
How can I get the guest physical memory usage from the host?
I don't want to introduce a guest agent to get the info.
Thanks,
Zhang Haoyu
There's probably one
On 2014/12/22 17:16, Andrey Korolyov wrote:
On Mon, Dec 22, 2014 at 11:57 AM, Zhang Haoyu zhhy.zhangha...@gmail.com
wrote:
On 2014/12/22 16:41, Andrey Korolyov wrote:
On Mon, Dec 22, 2014 at 6:59 AM, Zhang Haoyu zhhy.zhangha...@gmail.com
wrote:
Hi,
How to get the guest physical memory
On 2014/12/22 17:28, Paolo Bonzini wrote:
On 22/12/2014 07:39, Zhang Haoyu wrote:
Hi,
When I perform P2V from physical servers with win2008 to KVM VMs,
some cases failed because the physical disk was using GPT for partitioning,
and QEMU doesn't support GPT by default.
And, I see in below
Hi,
I cannot receive qemu-dev/kvm-dev mails sent by myself,
but mails from others can be received;
any help?
Thanks,
Zhang Haoyu
On 2014/12/22 17:52, Paolo Bonzini wrote:
On 22/12/2014 10:40, Zhang Haoyu wrote:
2) the FAT driver is not free, which prevents distribution in Fedora and
several other distributions
Sorry, I cannot follow you;
does the FAT mentioned above mean the FAT filesystem?
what's the relationship
On 2014/12/22 17:54, Paolo Bonzini wrote:
On 22/12/2014 10:48, Zhang Haoyu wrote:
Hi,
I cannot receive qemu-dev/kvm-dev mails sent by myself,
but mails from others can be received;
any help?
For qemu-devel, you need to configure mailman to send messages even if
they are yours
On 2014/12/22 20:05, Paolo Bonzini wrote:
On 22/12/2014 12:40, Zhang Haoyu wrote:
On 2014/12/22 17:54, Paolo Bonzini wrote:
On 22/12/2014 10:48, Zhang Haoyu wrote:
Hi,
I cannot receive qemu-dev/kvm-dev mails sent by myself,
but mails from others can be received,
any help
On 2014/12/23 9:36, Fam Zheng wrote:
On Mon, 12/22 20:21, Zhang Haoyu wrote:
On 2014/12/22 20:05, Paolo Bonzini wrote:
On 22/12/2014 12:40, Zhang Haoyu wrote:
On 2014/12/22 17:54, Paolo Bonzini wrote:
On 22/12/2014 10:48, Zhang Haoyu wrote:
Hi,
I cannot receive qemu-dev/kvm-dev mails
above?
Thanks,
Zhang Haoyu
Generally I meant virDomainMemoryPeek, but nothing prevents you from
writing code with the same functionality if libvirt usage is not preferred;
it is only about asking the monitor for chunks of memory and parsing them in
a proper way.
Thanks, Andrey.
Hi,
How can I get the guest physical memory usage from the host?
I don't want to introduce a guest agent to get the info.
Thanks,
Zhang Haoyu
But, it seems that OVMF is not stable enough for KVM.
Any advice?
Thanks,
Zhang Haoyu
Hi, Kun
Is this patch one of a patch series?
I don't see any place that references the is_reconnect field.
On 2014/12/22 15:06, zhangkun wrote:
From: zhangkun zhang.zhang...@huawei.com
Signed-off-by: zhangkun zhang.zhang...@huawei.com
---
net/vhost-user.c | 10 +-
1 file changed, 9
Hi all,
Does the combination of qemu-2.0.1 and linux-3.10 fully support direct-assigning
VGA adapters to a VM?
Thanks,
Zhang Haoyu
Hi,
I saw this page:
http://www.linux-kvm.org/page/Migration.
It looks like Migration is a feature provided by KVM? But when I look
at the Linux kernel source code, i.e., virt/kvm, and arch/x86/kvm, I
don't see the code for this migration feature.
Most of live migration code is in
There is no need to call bdrv_flush() in qcow2_cache_flush()
if no cache entry is dirty.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
block/qcow2-cache.c | 24 +---
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
index
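The idea is simply to remember whether any table entry was actually written back
and to skip the flush otherwise; roughly (a sketch against the 2014-era qcow2
cache code, not the applied patch):

int qcow2_cache_flush(BlockDriverState *bs, Qcow2Cache *c)
{
    int result = 0;
    bool dirty = false;
    int ret, i;

    for (i = 0; i < c->size; i++) {
        if (c->entries[i].dirty) {
            dirty = true;                       /* something will be written */
            ret = qcow2_cache_entry_flush(bs, c, i);
            if (ret < 0 && result != -ENOSPC) {
                result = ret;
            }
        }
    }

    if (result == 0 && dirty) {
        ret = bdrv_flush(bs->file);             /* flush only if needed */
        if (ret < 0) {
            result = ret;
        }
    }
    return result;
}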
the same for your guests.
For bare metal, I use a manager to push the applications to the host-agent
which is running on each host; the host-agent is responsible for installing the
applications.
Thanks,
Zhang Haoyu
Install the applications on each clone separately, or use some other
method to make
is responsible for installing the applications.
Thanks,
Zhang Haoyu
separately, or use some other
method to make it available (like installing on a shared network
resource).
Could you give details on installing on a shared network resource?
Thanks,
Zhang Haoyu
Can I rebase image A onto B, which has the applications to be installed,
then change the base image to B for all
Hi, Max
What is the progress on optimizing qcow2_check_metadata_overlap?
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/127037/focus=127364
Thanks,
Zhang Haoyu
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
Reviewed-by: Max Reitz mre...@redhat.com
---
v3 -> v4:
- convert the local L1 table to host byte order before copying it
back
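The core of the change is to do the big-endian conversion on a scratch copy and
write that, leaving the cached table untouched; roughly (a sketch using 2014-era
helpers and invented variable names, not the exact patch):

    /* write the L1 table via a local, byte-swapped copy */
    l1_size2 = l1_size * sizeof(uint64_t);
    local_l1 = qemu_try_blockalign(bs->file, align_offset(l1_size2, BDRV_SECTOR_SIZE));
    if (local_l1 == NULL) {
        return -ENOMEM;
    }
    memcpy(local_l1, l1_table, l1_size2);
    for (i = 0; i < l1_size; i++) {
        cpu_to_be64s(&local_l1[i]);             /* convert only the copy */
    }
    ret = bdrv_pwrite_sync(bs->file, l1_table_offset, local_l1, l1_size2);
    qemu_vfree(local_l1);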
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
Reviewed-by: Max Reitz mre...@redhat.com
---
v3 -> v4:
- convert the local L1 table to host byte order before copying
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
block/qcow2-refcount.c | 22 +++---
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git
() to bdrv_snapshot_delete() to avoid this problem.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
block/snapshot.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/snapshot.c b/block/snapshot.c
index 85c52ff..ebc386a 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -236,6 +236,10
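The four inserted lines are presumably a comment plus a bdrv_drain_all() call at
the top of bdrv_snapshot_delete(), along these lines (my reconstruction, not the
actual hunk):

 int bdrv_snapshot_delete(BlockDriverState *bs, ...)
 {
+    /* drain all pending i/o before deleting the snapshot, so the delete
+       cannot race with in-flight requests */
+    bdrv_drain_all();
+
     ...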
it into a coroutine or add a bdrv_drain_all() indeed.
I'm inclined to add bdrv_drain_all(), just to keep consistent with the other
snapshot-related operations, like savevm, loadvm, internal_snapshot_prepare,
etc.
Thanks,
Zhang Haoyu
This also means that we probably need to review all other cases where
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
block/qcow2-refcount.c | 22 +++---
1 file changed, 7 insertions(+), 15 deletions
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
v1 -> v2:
- remove the superfluous assignment, l1_table = NULL;
- replace 512 with BDRV_SECTOR_SIZE
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
v1 -> v2:
- remove the superfluous assignment, l1_table = NULL;
- replace 512 with BDRV_SECTOR_SIZE
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
Reviewed-by: Max Reitz mre...@redhat.com
---
v2 -> v3:
- replace g_try_malloc0 with qemu_try_blockalign
- copy
Use a local variable to bdrv_pwrite_sync the L1 table;
there is no need to convert the cached L1 table between
big-endian and host byte order.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
Reviewed-by: Max Reitz mre...@redhat.com
---
v2 -> v3:
- replace g_try_malloc0 with qemu_try_blockalign
- copy
Hi,
I noticed that bdrv_drain_all is performed in load_vmstate before
bdrv_snapshot_goto,
and bdrv_drain_all is performed in qmp_transaction before
internal_snapshot_prepare,
so is it also necessary to perform bdrv_drain_all in savevm and delvm?
Thanks,
Zhang Haoyu
I/Os while deleting a snapshot,
then is it possible that there is a concurrency problem between the
process of deleting the snapshot
and the coroutine of I/O read/write (bdrv_co_do_rw) invoked by the pending
I/Os?
This coroutine also runs in the main thread.
Am I missing something?
Thanks,
Zhang Haoyu
Kevin
read/write (bdrv_co_do_rw)
are performed in the main thread, could BDRVQcowState.lock work?
Thanks,
Zhang Haoyu
This might actually be a valid concern.
Kevin
to perform I/O operation.
Thanks,
Zhang Haoyu
But I find it rather ugly to convert the cached L1 table to big endian,
so I'd be fine with the patch you proposed.
Max
?
Thanks,
Zhang Haoyu
Max
|-- bdrv_pwrite
|--- bdrv_pwritev
|---- bdrv_prwv_co
|----- aio_poll(aio_context) <== this aio_context is qemu_aio_context
|------ aio_dispatch
|------- bdrv_co_io_em_complete
|-------- qemu_coroutine_enter(co->coroutine, NULL); <== coroutine entry is bdrv_co_do_rw
, not the other
thread,
both bdrv_co_do_rw and qcow2_update_snapshot_refcount are performed in the same
thread (the main thread),
so how does BDRVQcowState.lock prevent re-entrance?
Thanks,
Zhang Haoyu
Max
Thanks,
Zhang Haoyu
Max
|-- bdrv_pwrite
|--- bdrv_pwritev
|---- bdrv_prwv_co
|----- aio_poll
On 2014-10-12 15:34, Kevin Wolf wrote:
On 11.10.2014 at 09:14, Zhang Haoyu wrote:
In qcow2_update_snapshot_refcount -> qcow2_process_discards() -> bdrv_discard(),
the Qcow2DiscardRegion which is referenced by the next pointer in
qcow2_process_discards() may be freed; then, in the next iteration, d
.
l1_table is not necessarily a local variable of qcow2_update_snapshot_refcount;
it depends on the condition if (l1_table_offset != s->l1_table_offset):
if the condition is not true, then l1_table = s->l1_table.
Thanks,
Zhang Haoyu
Max
|-- qcow2_free_any_clusters
|--- qcow2_free_clusters
|---- update_refcount
|----- qcow2_process_discards
|------ g_free(d) <== in the next iteration, this Qcow2DiscardRegion will be double-freed.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
Signed-off-by: Fu Xuewei f...@sangfor.com
When the Qcow2DiscardRegion is adjacent to another one referenced by d,
free the Qcow2DiscardRegion metadata referenced by p after
it has been removed from the s->discards queue.
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
block/qcow2-refcount.c | 1 +
1 file changed, 1 insertion(+)
diff --git
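In other words, when two queued regions are merged, the one taken off the queue
must also be freed; the gist of the one-liner (a sketch, assuming the
update_refcount_discard() merge loop of that era):

         /* d absorbs p, so drop p from the queue */
         QTAILQ_REMOVE(&s->discards, p, next);
+        g_free(p);      /* without this, the merged-away region leaks */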
/task is running in guest.
Is there a plan for this work?
Thanks,
Zhang Haoyu
thought.
Regards,
Wanpeng Li
Thanks,
Zhang Haoyu
offset (a very large value), so the file is truncated to a very
large size.
Any ideas?
Thanks,
Zhang Haoyu
,
l1_size2);
free(tmp_l1_table);
}
Thanks,
Zhang Haoyu
The while loop variable is bs1,
but bs is always passed to bdrv_snapshot_delete_by_id_or_name.
Broken in commit a89d89d, v1.7.0.
v1 -> v2:
* add the broken commit id to the commit message
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
Reviewed-by: Markus Armbruster arm...@redhat.com
---
savevm.c | 11
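The essence of the change is to pass the iteration variable rather than the
outer bs (a sketch of the shape of the fix, not the full diff):

-        ret = bdrv_snapshot_delete_by_id_or_name(bs, name, &err);
+        ret = bdrv_snapshot_delete_by_id_or_name(bs1, name, &err);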
-serial in the guest,
and the difference in perf top data on the guest when disabling/enabling
virtio-serial in the guest,
any ideas?
Thanks,
Zhang Haoyu
If you restrict the number of vectors the virtio-serial device gets
(using the -device virtio-serial-pci,vectors= param), does that make
things
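For example (the value 4 is only illustrative), capping the MSI-X vectors of the
virtio-serial device looks like:

    -device virtio-serial-pci,vectors=4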
. Emulate a virtio-balloon device instead of a virtio-serial device,
then see whether the virtio-blk performance is hampered.
Based on the test result, corresponding analysis will be performed.
Any ideas?
Thanks,
Zhang Haoyu
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
include/trace/events/kvm.h | 20 ++
virt/kvm/ioapic.c | 51
--
virt/kvm/ioapic.h | 6 ++
3 files changed, 75 insertions(+), 2 deletions(-)
diff --git
in case it may
register one very soon and for guest who has a bad irq detection routine ( such
as note_interrupt() in linux ), this bad irq would be recognized soon as in the
past.
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Zhang Haoyu zhan
as in
the
past.
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
include/trace/events/kvm.h | 20 +++
virt/kvm/ioapic.c | 50
--
virt/kvm
-irq_eoi[i] ==
IOAPIC_SUCCESSIVE_IRQ_MAX_COUNT) {
Cc: Michael S. Tsirkin m...@redhat.com
Cc: Jan Kiszka jan.kis...@siemens.com
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
include/trace/events/kvm.h | 20 +++
virt/kvm/ioapic.c
: virtio-pci
Kernel modules: virtio_pci
Thanks,
Zhang Haoyu
in case it may
register one very soon and for guest who has a bad irq detection routine ( such
as note_interrupt() in linux ), this bad irq would be recognized soon as in the
past.
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Zhang Haoyu zhan
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Zhang Haoyu zhan...@sangfor.com
---
include/trace/events/kvm.h | 20 ++
virt/kvm/ioapic.c | 51 --
virt/kvm/ioapic.h | 6 ++
3 files changed, 75 insertions
in case it may
register one very soon and for guest who has a bad irq detection routine ( such
as note_interrupt() in linux ), this bad irq would be recognized soon as in the
past.
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Zhang Haoyu zhan
Hi, Paolo, Amit,
any ideas?
Thanks,
Zhang Haoyu
On 2014-9-4 15:56, Zhang Haoyu wrote:
If virtio-blk and virtio-serial share an IRQ, the guest operating system
has to check each virtqueue for activity. Maybe there is some
inefficiency doing that.
AFAIK virtio-serial registers 64 virtqueues
[disabled] [size=256K]
Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
Vector table: BAR=1 offset=
PBA: BAR=1 offset=0800
Kernel driver in use: virtio-pci
Kernel modules: virtio_pci
Thanks,
Zhang Haoyu
Paolo
/unaligned': File too large
How to resolve these errors?
Thanks,
Zhang Haoyu
.
with virtio-serial enabled:
64k-write-sequence: 4200 IOPS
with virtio-serial disabled:
64k-write-sequence: 5300 IOPS
How can I confirm whether it's MSI in Windows?
Thanks,
Zhang Haoyu
So, I think it has nothing to do with legacy interrupt mode, right?
I am going to observe the difference in perf top data
ioapic->irq_eoi will reach 100.
I want to add u32 irq_eoi[IOAPIC_NUM_PINS]; instead of u32 irq_eoi;.
Any ideas?
Zhang Haoyu
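A minimal sketch of that per-pin counter idea (hypothetical; field and constant
names are taken from the surrounding discussion, not from an actual patch):

struct kvm_ioapic {
    ...
    u32 irq_eoi[IOAPIC_NUM_PINS];   /* successive-EOI count, one per pin */
};

/* on EOI for pin i */
if (++ioapic->irq_eoi[i] == IOAPIC_SUCCESSIVE_IRQ_MAX_COUNT) {
    /* only this pin is treated as storming; delay its next injection */
    ioapic->irq_eoi[i] = 0;
    ...
}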
I'm a bit concerned how this will affect realtime guests.
Worth adding a flag to enable this, so that e.g. virtio is not
affected?
Your concern is reasonable.
If applying
/enable virtio-serial in the guest,
and the difference in perf top data on the guest when disabling/enabling virtio-serial
in the guest,
any ideas?
Thanks,
Zhang Haoyu
If you restrict the number of vectors the virtio-serial device gets
(using the -device virtio-serial-pci,vectors= param), does that make
things
mode, right?
I am going to observe the difference in perf top data on qemu and the perf kvm
stat data when disabling/enabling virtio-serial in the guest,
and the difference in perf top data on the guest when disabling/enabling virtio-serial
in the guest,
any ideas?
Thanks,
Zhang Haoyu
If you restrict the number
windows driver specific, too.
I have not tested a Linux guest; I'll test it later.
Thanks,
Zhang Haoyu
Amit
,base=localtime -global kvm-pit.lost_tick_policy=discard -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1
Any ideas?
Thanks,
Zhang Haoyu
continually,
and before too long, ioapic->irq_eoi will reach 100.
I want to add u32 irq_eoi[IOAPIC_NUM_PINS]; instead of u32 irq_eoi;.
Any ideas?
Zhang Haoyu
@@ -375,12 +414,14 @@ void kvm_ioapic_reset(struct kvm_ioapic *ioapic)
 {
     int i;
+    cancel_delayed_work_sync(ioapic
Hi, Yang, Gleb, Michael,
Could you help review the patch below, please?
Thanks,
Zhang Haoyu
Hi Jason,
I tested the patch below; it's okay, the e1000 interrupt storm disappeared.
But I am going to make a small change to it; could you help review it?
Currently, we call ioapic_service() immediately when
. Under what circumstances
did you meet this issue?
Some scenarios, not constant and not 100% reproducible,
e.g., rebooting the VM, ifdown of the e1000 NIC, installing Kaspersky (network configuration is
performed during the install stage), etc.
Thanks,
Zhang Haoyu
Thanks,
Zhang Haoyu
,
if not, then read the data from the disk or host page cache.
Any ideas?
Thanks,
Zhang Haoyu
In Patch 2 we should complete requests with -EIO if io_submit() returned
0 <= ret < len. I fixed this up when applying because the patch was
completing with a bogus ret value.
Stefan
directly,
bypassing the host page cache.
I/O merging can also be performed in the queue.
Any ideas?
Thanks,
Zhang Haoyu
In Patch 2 we should complete requests with -EIO if io_submit() returned
0 <= ret < len. I fixed this up when applying because the patch was
completing with a bogus ret value.
Stefan
,
Zhang Haoyu
Fam