Hi Peter,
On 2024/2/29 17:44, Peter Maydell wrote:
On Thu, 29 Feb 2024 at 03:01, Kunkun Jiang wrote:
Hi Peter,
On 2024/2/27 23:28, Peter Maydell wrote:
On Tue, 27 Feb 2024 at 14:42, Kunkun Jiang via wrote:
Hi everybody,
I want to start qemu-system-aarch64 with a vmlinux,
which is an ELF format file. The arm_load_elf() is
implemented in arm_setup_direct_kernel_boot(). So I
thought
virt,gic-version=3 -enable-kvm -smp 4 -m
1G -cpu host -kernel vmlinux -initrd fs -append "xxx"
Am I using it the wrong way?
Looking forward to your reply.
Thanks,
Kunkun Jiang
Hi Steve,
On 2023/7/10 23:43, Steven Sistare wrote:
On 7/5/2023 4:56 AM, Kunkun Jiang wrote:
Hi Steve,
I have a few questions about the msi part of the vfio device.
In the reboot mode, you mentioned "The guest drivers' suspend methods
flush outstanding requests and re-initialize the de
addition, ARM GICv4 provides support for the direct injection of vLPIs.
Interrupts are more difficult to handle. In this case, what should be done?
Look forward to your reply.
Kunkun Jiang
On 2022/7/27 0:10, Steve Sistare wrote:
Finish cpr for vfio-pci MSI/MSI-X devices by preserving eventfd's and
v
The ACPI device may not implement the ospm_status callback. Executing
the QMP command "query-acpi-ospm-status" will then cause a segmentation
fault. Add error proofing and a log message to avoid such serious
consequences.
Signed-off-by: Kunkun Jiang
---
monitor/qmp-cmds.c | 7 ++-
1 file changed, 6 insert
ement, the default
state of a VFIO device is _RUNNING. And if a VFIO device is hot-plugged
while the VM is running, "vm_running" should be 1. This patch fixes it.
Fixes: 02a7e71b1e5 (vfio: Add VM state change handler to know state of VM)
Signed-off-by: Kunkun Jiang
---
hw/vfio/migration.c | 2 ++
age [Eric Auger]
v1 -> v2:
- Add iterate sub-page BARs in vfio_pci_load_config and try to update them
[Alex Williamson]
Kunkun Jiang (2):
vfio/pci: Add support for mmapping sub-page MMIO BARs after live
migration
vfio/common: Add a trace point when an MMIO RAM section cannot be
mappe
()
(vfio_pci_load_config) and vfio_sub_page_bar_update_mapping()
will not be called.
This may result in poor performance after live migration.
So iterate BARs in vfio_pci_load_config() and try to update
sub-page BARs.
Reported-by: Nianyao Tang
Reported-by: Qixin Gan
Signed-off-by: Kunkun Jiang
---
hw/vfio/pci.c
annot be DMA mapped")
did.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index a784b219e6..dd387b0d39 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -893,6 +893,13 @@ s
Hi Eric,
On 2021/10/23 22:26, Eric Auger wrote:
Hi Kunkun,
On 10/22/21 12:01 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/10/22 0:15, Eric Auger wrote:
Hi Kunkun,
On 9/14/21 3:53 AM, Kunkun Jiang wrote:
We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
vfio_pci_write_config to improve
Hi Eric,
On 2021/10/22 1:02, Eric Auger wrote:
Hi Kunkun,
On 9/14/21 3:53 AM, Kunkun Jiang wrote:
The MSI-X structures of some devices and other non-MSI-X structures
are in the same BAR. They may share one host page, especially in the
may be in the same bar?
You are right. So embarrassing
Hi Eric,
On 2021/10/22 0:15, Eric Auger wrote:
Hi Kunkun,
On 9/14/21 3:53 AM, Kunkun Jiang wrote:
We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
vfio_pci_write_config to improve IO performance.
s/to vfio_pci_write_config/ in vfio_pci_write_config()
Thank you for your review. I
Hi Eric,
On 2021/10/8 0:58, Eric Auger wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot
Kindly ping,
Hi all,
Will this patch be picked up soon, or is there any other advice?
Thanks,
Kunkun Jiang
On 2021/9/14 9:53, Kunkun Jiang wrote:
This series include patches as below:
Patch 1:
- vfio/pci: Fix vfio-pci sub-page MMIO BAR mmapping in live migration
Patch 2:
- Added a trace
Hi Kevin:
On 2021/9/24 14:47, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Friday, September 24, 2021 2:19 PM
Hi all,
I encountered a problem in vfio device migration test. The
vCPU may be paused during vfio-pci DMA in iommu nested
stage mode && vSVA. This may lead to migration fail a
vCPU is not paused, the vfio device is
always running. This looks like a *deadlock*.
Do you have any ideas to solve this problem?
Looking forward to your reply.
Thanks,
Kunkun Jiang
ction cannot be DMA mapped")
did.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 8728d4d5c2..2fc6213c0f 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -892,6 +892,13 @@ s
in vfio_pci_write_config and try to update
sub-page BARs.
Fixes: c5e2fb3ce4d (vfio: Add save and load functions for VFIO PCI devices)
Reported-by: Nianyao Tang
Reported-by: Qixin Gan
Signed-off-by: Kunkun Jiang
---
hw/vfio/pci.c | 15 ++-
1 file changed, 14 insertions(+), 1
try to update them
[Alex Williamson]
Kunkun Jiang (2):
vfio/pci: Fix vfio-pci sub-page MMIO BAR mmapping in live migration
vfio/common: Add trace point when an MMIO RAM section less than
PAGE_SIZE
hw/vfio/common.c | 7 +++
hw/vfio/pci.c | 15 ++-
2 files changed, 21 inserti
On 2021/9/11 6:24, Alex Williamson wrote:
On Fri, 10 Sep 2021 16:33:12 +0800
Kunkun Jiang wrote:
Hi Alex,
On 2021/9/9 4:45, Alex Williamson wrote:
On Fri, 3 Sep 2021 17:36:10 +0800
Kunkun Jiang wrote:
We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
vfio_pci_write_config
Hi Alex,
On 2021/9/9 4:45, Alex Williamson wrote:
On Fri, 3 Sep 2021 17:36:11 +0800
Kunkun Jiang wrote:
The MSI-X structures of some devices and other non-MSI-X structures
are in the same BAR. They may share one host page, especially in the
case of large page granularity, such as 64K.
s
Hi Alex,
On 2021/9/9 4:45, Alex Williamson wrote:
On Fri, 3 Sep 2021 17:36:10 +0800
Kunkun Jiang wrote:
We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
vfio_pci_write_config to improve IO performance.
The MemoryRegions of the destination VM will not be expanded
successfully in live
is 64KB.
vfio_listener_region_add() will be called to map the remaining range
(0x30-0x). And it will return early at
'int128_ge((int128_make64(iova), llend))' without printing any message.
Let's add a trace point to inform users, like 5c08600547c did.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c
This series include patches as below:
Patch 1:
- Deleted a check to fix vfio-pci sub-page MMIO BAR mmapping in live migration
Patch 2:
- Added a trace point to inform users when an MMIO RAM section is less than
PAGE_SIZE
Kunkun Jiang (2):
vfio/pci: Fix vfio-pci sub-page MMIO BAR mmapping in live
Signed-off-by: Kunkun Jiang
---
hw/vfio/pci.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index e1ea1d8a23..891b211ddf 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -1189,18 +1189,12 @@ void vfio_pci_write_config(PCIDevice *pdev
s. Some devices' BAR
may map MSI-X structures and others in one host page.
By the way, is this patch set going to be updated after "/dev/iommu" is
sent out?
Thanks,
Kunkun Jiang
+end = int128_get64(int128_sub(llend, int128_one()));
+
+vaddr = memory_region_get_ram_p
a trace point to inform users.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 7d80f43e39..bbb8d1ea0c 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -892,6 +892,13 @@ static void
This series include patches as below:
Patch 1:
- Add a trace point to inform users when an MMIO RAM section is less than the minimum
size
Patch 2:
- Fix address alignment in region_add/region_del with vfio iommu smallest page
size
Kunkun Jiang (2):
vfio/common: Add trace point when an MMIO RAM
the smallest page size to align the address.
Fixes: 1eb7f642750 (vfio: Support host translation granule size)
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 30 +-
1 file changed, 21 insertions(+), 9 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index
On 2021/7/6 22:27, Eric Auger wrote:
Hi Dave,
On 7/6/21 4:19 PM, Dr. David Alan Gilbert wrote:
* Eric Auger (eric.au...@redhat.com) wrote:
Hi,
On 7/6/21 10:18 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/6/30 17:16, Eric Auger wrote:
On 6/30/21 3:38 AM, Kunkun Jiang wrote:
On 2021/6/30 4:14
Hi Eric,
On 2021/7/6 21:52, Eric Auger wrote:
Hi,
On 7/6/21 10:18 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/6/30 17:16, Eric Auger wrote:
On 6/30/21 3:38 AM, Kunkun Jiang wrote:
On 2021/6/30 4:14, Eric Auger wrote:
Hi Kunkun,
On 6/29/21 11:33 AM, Kunkun Jiang wrote:
Hi all,
According
On 2021/7/6 18:27, Dr. David Alan Gilbert wrote:
* Kunkun Jiang (jiangkun...@huawei.com) wrote:
Hi Daniel,
On 2021/7/5 20:48, Daniel P. Berrangé wrote:
On Mon, Jul 05, 2021 at 08:36:52PM +0800, Kunkun Jiang wrote:
In the current version, the source QEMU process does not automatically
exit after
Hi Eric,
On 2021/6/30 17:16, Eric Auger wrote:
On 6/30/21 3:38 AM, Kunkun Jiang wrote:
On 2021/6/30 4:14, Eric Auger wrote:
Hi Kunkun,
On 6/29/21 11:33 AM, Kunkun Jiang wrote:
Hi all,
According to the patch cddafd8f353d2d251b1a5c6c948a577a85838582,
our original intention is to flush
Hi Daniel,
On 2021/7/5 20:48, Daniel P. Berrangé wrote:
On Mon, Jul 05, 2021 at 08:36:52PM +0800, Kunkun Jiang wrote:
In the current version, the source QEMU process does not automatically
exit after a successful migration. Additional action is required,
such as sending { "execute"
QEMU process after a successful migration
Kunkun Jiang (2):
qapi/run-state: Add a new shutdown cause 'migration-completed'
qapi/migration: Add a new migration capability 'auto-quit'
migration/migration.c | 13 +
migration/migration.h | 1 +
qapi/migration.json | 6
For compatibility, a new migration capability 'auto-quit' is added
to control the exit of source QEMU after a successful migration.
Signed-off-by: Kunkun Jiang
---
migration/migration.c | 14 +-
migration/migration.h | 1 +
qapi/migration.json | 6 +-
3 files changed, 19
s after a successful migration.
Signed-off-by: Kunkun Jiang
---
migration/migration.c | 1 +
qapi/run-state.json | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/migration/migration.c b/migration/migration.c
index 4228635d18..16782c93c2 100644
--- a/migration/migrati
On 2021/6/30 4:14, Eric Auger wrote:
Hi Kunkun,
On 6/29/21 11:33 AM, Kunkun Jiang wrote:
Hi all,
According to the patch cddafd8f353d2d251b1a5c6c948a577a85838582,
our original intention is to flush the ITS tables into guest RAM at
the point
RUN_STATE_FINISH_MIGRATE, but sometimes the VM gets
direct access to that state without GICv4.1.
* Let's simply fail the save operation...
*/
if (ite->irq->hw && !kvm_vgic_global_state.has_gicv4_1)
return -EACCES;
Looking forward to your reply.
Thanks,
Kunkun Jiang
Kindly ping,
Hi everyone,
Will this patch be picked up soon, or is there any other work for me to do?
Best Regards,
Kunkun Jiang
On 2021/5/27 20:31, Kunkun Jiang wrote:
In the vfio_migration_init(), the SaveVMHandler is registered for
VFIO device. But it lacks the operation of 'unregister
Hi Philippe,
On 2021/5/27 21:44, Philippe Mathieu-Daudé wrote:
On 5/27/21 2:31 PM, Kunkun Jiang wrote:
In the vfio_migration_init(), the SaveVMHandler is registered for
VFIO device. But it lacks the operation of 'unregister'. It will
lead to 'Segmentation fault (core dumped
: Register SaveVMHandlers for VFIO device)
Reported-by: Qixin Gan
Signed-off-by: Kunkun Jiang
---
hw/vfio/migration.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 201642d75e..ef397ebe6c 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
tg, num_pages);
Thanks,
Kunkun Jiang
@@ -877,7 +878,7 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
tg, num_pages);
IOMMU_NOTIFIER_FOREACH(n, mr) {
-smmuv3_notify_iova(mr, n, asid, iova,
Hi all,
This series has been updated to v3.[1]
Any comments and reviews are welcome.
Thanks,
Kunkun Jiang
[1] [RFC PATCH v3 0/4] Add migration support for VFIO PCI devices in
SMMUv3 nested mode
https://lore.kernel.org/qemu-devel/20210511020816.2905-1-jiangkun...@huawei.com/
On 2021/3/31 18
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 14 insertions(+), 8 deletions
. If this operation fails, the migration fails.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 33 -
1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index ca690513e6..ac1de572f3 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm
stage.
This patch adds vfio_prereg_listener_log_sync to mark dirty
pages in nested mode.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 9fb8d44a6d..149e535a75 100644
--- a/h
this won't cause any errors. Adding
global_log_start/stop interfaces to vfio_memory_prereg_listener
can separate stage 2 from stage 1.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 24
1 file changed, 24 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index
he post_load function to vmstate_smmuv3 for passing stage 1
configuration to the destination host after the migration
Best regards,
Kunkun Jiang
History:
v2 -> v3:
- Rebase to v9 of Eric's series 'vSMMUv3/pSMMUv3 2 stage VFIO integration'[1]
- Delete smmuv3_manual_set_pci_device_pasid_table() and r
Hi all,
Sorry for my carelessness.
This is the v2 of this series.
Thanks,
Kunkun Jiang
On 2021/5/8 17:31, Kunkun Jiang wrote:
In the past, we cleared the dirty log immediately after syncing it to
userspace. This may cause redundant dirty handling if userspace
handles the dirty log iteratively
9f4bf4baa8b820c7930e23c9566c9493db7e1d25. ]
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 62 +++
include/hw/vfio/vfio-common.h | 9 +
2 files changed, 65 insertions(+), 6 deletions(-)
diff --git a/hw/vfio/common.c b/hw
From: Zenghui Yu
The new capability VFIO_DIRTY_LOG_MANUAL_CLEAR and the new ioctl
VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR and
VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP have been introduced in
the kernel, update the header to add them.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
VFIO_DIRTY_LOG_MANUAL_CLEAR and
provide the log_clear() hook for vfio_memory_listener. If the
kernel supports it, deliver the clear message to the kernel.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 149 +-
include/hw/vfio/vfio-common.h
log, which can
eliminate some redundant dirty handling
History:
v1 -> v2:
- Add a new ioctl VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP_NOCLEAR to get
vfio dirty log when support manual clear.
Thanks,
Kunkun Jiang
[1]
IOMMU part:
https://lore.kernel.org/linux-iommu/20210507102211.8836-1-zhuk
Hi Dave,
On 2021/5/6 21:05, Dr. David Alan Gilbert wrote:
* Kunkun Jiang (jiangkun...@huawei.com) wrote:
Hi all,
Hi,
Recently I am learning about the part of live migration.
I have a question about the last round.
When the pending_size is less than the threshold, it will enter
the last
().
Is my understanding correct?
Should the source wait for the result of the last round from the destination?
Thanks,
Kunkun Jiang
Hi Eric,
On 2021/4/27 3:16, Auger Eric wrote:
Hi Kunkun,
On 4/15/21 4:03 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/14 16:05, Auger Eric wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi
t be true for all IOMMU types.
*/
}
I think we need a check here. If it is nested mode, just return after
g_free(giommu), because in nested mode, stage 2 (gpa->hpa) and
stage 1 (giova->gpa) are set separately.
When hot-deleting a PCI device, we are going to call
vfio_lis
vfio_lis
Hi Eric,
On 2021/4/26 20:30, Auger Eric wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot
Hi Eric,
On 2021/4/14 16:05, Auger Eric wrote:
Hi Kunkun,
On 4/14/21 3:45 AM, Kunkun Jiang wrote:
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot
On 2021/4/13 20:57, Auger Eric wrote:
Hi Kunkun,
On 4/13/21 2:10 PM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/11 20:08, Eric Auger wrote:
In nested mode, legacy vfio_iommu_map_notify cannot be used as
there is no "caching" mode and we do not trap on map.
On Intel, vfio_iommu_
2M huge page. Then QEMU passed
the iova and size (4K) to the host kernel. Finally, the host kernel issues
a TLBI cmd with a "range" of 4K, which cannot invalidate the TLB entry
of the 2M huge page. (pSMMU supports RIL.)
Thanks,
Kunkun Jiang
+}
+
+ret = ioctl(container->fd, V
Hi Eric,
On 2021/4/12 16:34, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
In nested mode, we call the set_pasid_table() callback on each STE
update to pass the guest stage 1 configuration to the host and
apply it at physical level.
In the case of live migration, we
Hi Eric,
On 2021/4/8 21:56, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
On Intel, the DMA is mapped through the host's single stage. Instead,
we set up stage 2 and stage 1 separately in nested mode, as there
is no "Caching Mode".
You need to rewrite the above
Hi Eric,
On 2021/4/12 16:40, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
Hi all,
Since SMMUv3 nested translation stages [1] have been introduced by Eric, we
need to pay attention to the migration of VFIO PCI devices in SMMUv3 nested
stage
mode. At present
Hi Eric,
On 2021/4/8 21:46, Auger Eric wrote:
Hi Kunkun,
On 2/19/21 10:42 AM, Kunkun Jiang wrote:
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun
Hi Eric,
On 2021/4/8 15:27, Auger Eric wrote:
Hi Kunkun,
On 4/7/21 11:26 AM, Kunkun Jiang wrote:
Hi Eric,
On 2021/4/7 3:50, Auger Eric wrote:
Hi Kunkun,
On 3/27/21 3:24 AM, Kunkun Jiang wrote:
Hi all,
Recently, I did some tests on SMMU nested mode. Here is
a question about
Hi Dave,
On 2021/4/7 1:14, Dr. David Alan Gilbert wrote:
* Kunkun Jiang (jiangkun...@huawei.com) wrote:
Kindly ping,
Hi David Alan Gilbert,
Will this series be picked up soon, or is there any other work for me to do?
You don't need to do anything, but it did miss the cutoff for soft
freeze
Hi Eric,
On 2021/4/7 3:50, Auger Eric wrote:
Hi Kunkun,
On 3/27/21 3:24 AM, Kunkun Jiang wrote:
Hi all,
Recently, I did some tests on SMMU nested mode. Here is
a question about the translation granule size supported by
vSMMU.
There is such a code in SMMUv3_init_regs():
/* 4K and 64K
Kindly ping,
Hi David Alan Gilbert,
Will this series be picked up soon, or is there any other work for me to do?
Best Regards,
Kunkun Jiang
On 2021/3/16 20:57, Kunkun Jiang wrote:
Hi all,
This series include patches as below:
Patch 1:
- reduce unnecessary rate limiting in ram_save_host_page
patch adds
vfio_prereg_listener_log_sync to mark dirty pages in nested mode.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 3117979307..86722814d4 100644
--- a/hw/vfio/common.
this won't cause any errors. Adding
global_log_start/stop interfaces to vfio_memory_prereg_listener
can separate stage 2 from stage 1.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 22 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index
ination
host after the migration.
Best Regards,
Kunkun Jiang
[1] [RFC,v8,00/28] vSMMUv3/pSMMUv3 2 stage VFIO integration
http://patchwork.ozlabs.org/project/qemu-devel/cover/20210225105233.650545-1-eric.au...@redhat.com/
This Patch set includes patches as below:
Patch 1-2:
- Refactor the vfio_lis
. If this operation fails, the migration fails.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 62 +
hw/arm/trace-events | 1 +
2 files changed, 63 insertions(+)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 55aa6ad874..4d28ca3777 100644
Extract part of the code from vfio_sync_dirty_bitmap to form a
new helper, which allows marking dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 22 ++
1 file changed, 14 insertions(+), 8 deletions
is not supported,
vSVA will fail to be enabled in the future for a 16K guest
kernel. So it'd be better to support it.
Signed-off-by: Kunkun Jiang
---
hw/arm/smmuv3.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 3b87324ce2..0a483b0bab 100644
ssions between Eric and Linu about
this [1], but this idea does not seem to be implemented.
[1] https://lists.gnu.org/archive/html/qemu-arm/2017-09/msg00149.html
Best regards,
Kunkun Jiang
On 2021/3/18 20:36, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 8:29 PM
Hi Kevin,
On 2021/3/18 17:04, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 3:59 PM
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday
On 2021/3/18 20:36, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 8:29 PM
Hi Kevin,
On 2021/3/18 17:04, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 3:59 PM
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday
Hi Kevin,
On 2021/3/18 17:04, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Thursday, March 18, 2021 3:59 PM
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday, March 10, 2021 5:41 PM
Hi all,
In the past, we clear dirty log immediately after sync dirty log
Hi Kevin,
On 2021/3/18 14:28, Tian, Kevin wrote:
From: Kunkun Jiang
Sent: Wednesday, March 10, 2021 5:41 PM
Hi all,
In the past, we cleared the dirty log immediately after syncing it to
userspace. This may cause redundant dirty handling if userspace
handles the dirty log iteratively:
After vfio
kindly ping,
Any comments and reviews are welcome.
Thanks,
Kunkun Jiang
On 2021/3/10 17:41, Kunkun Jiang wrote:
Hi all,
In the past, we cleared the dirty log immediately after syncing it to
userspace. This may cause redundant dirty handling if userspace
handles the dirty log iteratively:
After
Hi Peter,
On 2021/3/17 5:39, Peter Xu wrote:
On Tue, Mar 16, 2021 at 08:57:15PM +0800, Kunkun Jiang wrote:
When the host page is a huge page and something is sent in the
current iteration, migration_rate_limit() should be executed.
If not, it can be omitted.
Signed-off-by: Keqian Zhu
Signed
rmance to use migration_bitmap_find_dirty().
Tested on Kunpeng 920; VM parameters: 1U 4G (page size 1G)
The time of ram_save_host_page() in the last round of RAM saving:
before optimization: 9250us; after optimization: 34us
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migrati
When the host page is a huge page and something is sent in the
current iteration, migration_rate_limit() should be executed.
If not, it can be omitted.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
Reviewed-by: David Edmondson
---
migration/ram.c | 9 +++--
1 file changed, 7
- Remove 'goto' [David Edmondson]
Kunkun Jiang (2):
migration/ram: Reduce unnecessary rate limiting
migration/ram: Optimize ram_save_host_page()
migration/ram.c | 34 +++---
1 file changed, 19 insertions(+), 15 deletions(-)
--
2.23.0
9f4bf4baa8b820c7930e23c9566c9493db7e1d25. ]
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 62 +++
include/hw/vfio/vfio-common.h | 9 +
2 files changed, 65 insertions(+), 6 deletions(-)
diff --git a/hw/vfio/common.c b/hw
for vfio_memory_listener. If the
kernel supports it, deliver the clear message to the kernel.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
hw/vfio/common.c | 145 +-
include/hw/vfio/vfio-common.h | 1 +
2 files changed, 145 insertions(+), 1
From: Zenghui Yu
The new capability VFIO_DIRTY_LOG_MANUAL_CLEAR and the new ioctl
VFIO_IOMMU_DIRTY_PAGES_FLAG_CLEAR_BITMAP have been introduced in
the kernel, update the header to add them.
Signed-off-by: Zenghui Yu
Signed-off-by: Kunkun Jiang
---
linux-headers/linux/vfio.h | 55
vfio dirty log, which can
eliminate some redundant dirty handling
Thanks,
Kunkun Jiang
[1]
https://lore.kernel.org/linux-iommu/20210310090614.26668-1-zhukeqi...@huawei.com/T/#mb168c9738ecd3d8794e2da14f970545d5820f863
Zenghui Yu (3):
linux-headers: update against 5.12-rc2 and "vfio log
Hi Alex,
On 2021/3/10 7:17, Alex Williamson wrote:
On Thu, 4 Mar 2021 21:34:46 +0800
Kunkun Jiang wrote:
The cpu_physical_memory_set_dirty_lebitmap() can quickly deal with
the dirty pages of memory by bitmap-traveling, regardless of whether
the bitmap is aligned correctly
Hi,
On 2021/3/10 0:15, Peter Xu wrote:
On Tue, Mar 09, 2021 at 10:33:04PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/9 5:12, Peter Xu wrote:
On Mon, Mar 08, 2021 at 06:34:58PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/5 22:22, Peter Xu wrote:
Kunkun,
On Fri, Mar 05, 2021 at 03:50:34PM +0800
Hi,
On 2021/3/9 5:12, Peter Xu wrote:
On Mon, Mar 08, 2021 at 06:34:58PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/5 22:22, Peter Xu wrote:
Kunkun,
On Fri, Mar 05, 2021 at 03:50:34PM +0800, Kunkun Jiang wrote:
When the host page is a huge page and something is sent in the
current iteration
Hi,
On 2021/3/9 5:03, Peter Xu wrote:
On Mon, Mar 08, 2021 at 06:33:56PM +0800, Kunkun Jiang wrote:
Hi, Peter
On 2021/3/5 21:59, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:33PM +0800, Kunkun Jiang wrote:
The ram_save_host_page() has been modified several times
since its birth
Hi,
On 2021/3/9 5:36, Peter Xu wrote:
On Mon, Mar 08, 2021 at 09:58:02PM +0800, Kunkun Jiang wrote:
Hi,
On 2021/3/5 22:30, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:35PM +0800, Kunkun Jiang wrote:
Starting from pss->page, ram_save_host_page() will check every page
and send the di
Hi,
On 2021/3/5 22:30, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:35PM +0800, Kunkun Jiang wrote:
Starting from pss->page, ram_save_host_page() will check every page
and send the dirty pages up to the end of the current host page or
the boundary of used_length of the block. If the host p
Hi,
On 2021/3/5 22:22, Peter Xu wrote:
Kunkun,
On Fri, Mar 05, 2021 at 03:50:34PM +0800, Kunkun Jiang wrote:
When the host page is a huge page and something is sent in the
current iteration, the migration_rate_limit() should be executed.
If not, this function can be omitted to save time
Hi, Peter
On 2021/3/5 21:59, Peter Xu wrote:
On Fri, Mar 05, 2021 at 03:50:33PM +0800, Kunkun Jiang wrote:
The ram_save_host_page() has been modified several times
since its birth, but the comment hasn't been updated accordingly.
It'd be better to modify the comment to explain
rmance to use migration_bitmap_find_dirty().
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
migration/ram.c | 39 +++
1 file changed, 19 insertions(+), 20 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 9fc5b2997c..2821