On 01/05/2024 13:28, Avihai Horon wrote:
>
> On 01/05/2024 14:50, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> On 30/04/2024 06:16, Avihai Horon wrote:
>>> Emit VFIO device migration state change QAPI event when
On 30/04/2024 06:16, Avihai Horon wrote:
> Emit VFIO device migration state change QAPI event when a VFIO device
> changes its migration state. This can be used by management applications
> to get updates on the current state of the VFIO device for their own
> purposes.
>
> A new per VFIO device
On 30/04/2024 06:16, Avihai Horon wrote:
> Add a new QAPI event for VFIO device migration state change. This event
> will be emitted when a VFIO device changes its migration state, for
> example, during migration or when stopping/starting the guest.
>
> This event can be used by management
On 18/03/2024 07:54, Eric Auger wrote:
> Hi Zhenzhong,
>
> On 2/28/24 04:59, Zhenzhong Duan wrote:
>> Introduce a helper function iommufd_device_get_info() to get
>> host IOMMU related information through iommufd uAPI.
> Looks strange to have this patch in this series. I would rather put it
> in
On 27/02/2024 02:41, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Joao Martins
>> Subject: Re: [PATCH rfcv2 18/18] intel_iommu: Block migration if cap is
>> updated
>>
>> On 01/02/2024 07:28, Zhenzhong Duan wrote:
>>> When
On 27/02/2024 07:41, Peter Xu wrote:
> On Thu, Feb 22, 2024 at 05:56:27PM +0200, Avihai Horon wrote:
>> This bug was observed in several VFIO migration scenarios where some
>> workload on the VM prevented RAM from ever reaching a hard zero, not
>> allowing VFIO initial pre-copy data to be sent,
On 26/02/2024 07:29, Duan, Zhenzhong wrote:
> Hi Joao,
>
>> -----Original Message-----
>> From: Joao Martins
>> Subject: [PATCH RFCv2 1/8] backends/iommufd: Introduce helper function
>> iommufd_device_get_hw_capabilities()
>>
>> The new helper wil
On 20/02/2024 17:27, John Allen wrote:
> On Wed, Feb 07, 2024 at 11:21:05AM +0000, Joao Martins wrote:
>> On 12/09/2023 22:18, John Allen wrote:
>>> In the event that a guest process attempts to access memory that has
>>> been poisoned in response to a deferred unc
On 20/02/2024 12:52, Avihai Horon wrote:
>
> On 20/02/2024 12:59, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> On 19/02/2024 09:30, Avihai Horon wrote:
>>> Hi Joao,
>>>
>>> On 12/02/2024 15
On 19/02/2024 10:12, Avihai Horon wrote:
> Hi Joao,
>
> On 12/02/2024 15:56, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> By default VFIO migration is set to auto, which will support live
>> migration if the migra
On 19/02/2024 10:05, Avihai Horon wrote:
> Hi Joao,
>
> On 12/02/2024 15:56, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> Allow disabling hugepages to be dirty track at base page
>> granularity in similar vei
On 19/02/2024 09:30, Avihai Horon wrote:
> Hi Joao,
>
> On 12/02/2024 15:56, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> ioctl(iommufd, IOMMU_HWPT_GET_DIRTY_BITMAP, arg) is the UAPI
>> that fetches the bitm
On 19/02/2024 09:03, Avihai Horon wrote:
> Hi Joao,
>
> On 12/02/2024 15:56, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> Probe hardware dirty tracking support by querying device hw capabilities
>> via IOMMUFD_GET
On 19/02/2024 08:58, Avihai Horon wrote:
> Hi Joao,
>
> On 12/02/2024 15:56, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> There's generally two modes of operation for IOMMUFD:
>>
>> * The simple user API whic
On 14/02/2024 15:40, Cédric Le Goater wrote:
> Hello Joao,
>
> On 2/13/24 12:59, Joao Martins wrote:
>> On 12/02/2024 13:56, Joao Martins wrote:
>>> This small series adds support for Dirty Tracking in IOMMUFD backend.
>>> The sole reason I still made it R
On 12/02/2024 13:56, Joao Martins wrote:
> diff --git a/backends/iommufd.c b/backends/iommufd.c
> index 8486894f1b3f..2970135af4b9 100644
> --- a/backends/iommufd.c
> +++ b/backends/iommufd.c
> @@ -211,6 +211,35 @@ int iommufd_backend_unmap_dma(IOMMUFDBackend *be,
>
On 12/02/2024 13:56, Joao Martins wrote:
> This small series adds support for Dirty Tracking in IOMMUFD backend.
> The sole reason I still made it RFC is because of the second patch,
> where we are implementing user-managed auto domains.
>
> In essence it is quite similar to the o
On 01/02/2024 07:28, Zhenzhong Duan wrote:
> When there is VFIO device and vIOMMU cap/ecap is updated based on host
> IOMMU cap/ecap, migration should be blocked.
>
> Signed-off-by: Zhenzhong Duan
Is this really needed considering migration with vIOMMU is already blocked
anyways?
> ---
>
On 12/02/2024 17:17, Markus Armbruster wrote:
> Joao Martins writes:
>
>> Allow disabling hugepages so that dirty tracking is done at base page
>> granularity, in a similar vein to vfio_type1_iommu.disable_hugepages,
>> but per IOAS.
>>
>> Signed-off-by: Joao Martins
>
On 12/02/2024 16:27, Jason Gunthorpe wrote:
> On Mon, Feb 12, 2024 at 01:56:37PM +0000, Joao Martins wrote:
>> There's generally two modes of operation for IOMMUFD:
>>
>> * The simple user API which intends to perform relatively simple things
>> with IOMMUs e.g. DPDK. I
dirty page tracking. This also allows using
IOMMU dirty tracking even on VFs with their own dirty
tracker scheme.
Signed-off-by: Joao Martins
---
hw/vfio/common.c | 7 +++
hw/vfio/migration.c | 3 ++-
hw/vfio/pci.c | 3 +++
include/hw/vfio/vfio
purposes.
So, starting with IOMMU dirty tracking, it can be used to accommodate the lack
of VF dirty page tracking, allowing us to minimize the VF requirements for
migration and thus enable migration by default for those.
Signed-off-by: Joao Martins
---
hw/vfio/iommufd.c| 3 +--
hw/vfio
ioctl(iommufd, IOMMU_HWPT_SET_DIRTY_TRACKING, arg) is the UAPI that
enables or disables dirty page tracking.
It is called on the whole list of IOMMU domains it is tracking,
and on failure it rolls it back.
Signed-off-by: Joao Martins
---
backends/iommufd.c | 19
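The commit message above describes enabling dirty tracking across every tracked IOMMU domain, rolling back on failure. Below is a minimal, self-contained sketch of that rollback pattern; `FakeHwpt` and `fake_set_dirty_tracking` are illustrative stand-ins (the real code issues the IOMMU_HWPT_SET_DIRTY_TRACKING ioctl per hw pagetable), not QEMU's actual types.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for one hw pagetable; not the QEMU struct. */
typedef struct {
    bool dirty_tracking;
    bool fail_on_enable;   /* test hook to simulate an ioctl failure */
} FakeHwpt;

/* Mock of the per-domain enable/disable call (the real code would issue
 * ioctl(iommufd, IOMMU_HWPT_SET_DIRTY_TRACKING, ...)). */
static int fake_set_dirty_tracking(FakeHwpt *hwpt, bool start)
{
    if (start && hwpt->fail_on_enable) {
        return -1;
    }
    hwpt->dirty_tracking = start;
    return 0;
}

/* Enable (or disable) tracking on every domain in the list; on failure,
 * roll back the ones already changed so the state stays consistent. */
static int set_dirty_tracking_all(FakeHwpt *hwpts, size_t n, bool start)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (fake_set_dirty_tracking(&hwpts[i], start)) {
            while (i--) {
                fake_set_dirty_tracking(&hwpts[i], !start);
            }
            return -1;
        }
    }
    return 0;
}
```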
IOMMU supports dirty
tracking. The latter covers the possibility of a device being attached
that doesn't have a dirty tracker.
Signed-off-by: Joao Martins
---
hw/vfio/common.c | 18 ++
hw/vfio/iommufd.c | 25 -
include/hw/vfio/vfio
Allow disabling hugepages so that dirty tracking is done at base page
granularity, in a similar vein to vfio_type1_iommu.disable_hugepages,
but per IOAS.
Signed-off-by: Joao Martins
---
backends/iommufd.c | 36
backends/trace-events| 1 +
hw/vfio/iommufd.c
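As a rough illustration of why base-page granularity matters: with hugepage dirty reporting, one write dirties the whole huge page, so migration must re-send every base page under it. A small sketch of the arithmetic (illustrative only; the helper name is made up, and the real knob is a per-IOAS iommufd option):

```c
#include <assert.h>
#include <stdint.h>

/* Number of base pages that must be re-sent when a single write dirties
 * one tracking unit of `granule` bytes, counted in base pages of `base`
 * bytes. Illustrative arithmetic only. */
static uint64_t pages_resent_per_write(uint64_t granule, uint64_t base)
{
    return granule / base;
}
```

With 2 MiB hugepage granularity and 4 KiB base pages, each write costs 512 pages of re-send traffic instead of 1.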
ioctl(iommufd, IOMMU_HWPT_GET_DIRTY_BITMAP, arg) is the UAPI
that fetches the bitmap that tells what was dirty in an IOVA
range.
A single bitmap is allocated and used across all the hwpts
sharing an IOAS, which is then used in log_sync() to set QEMU
global bitmaps.
Signed-off-by: Joao Martins
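A sketch of the bitmap arithmetic behind such an interface, assuming one bit per 4 KiB base page over the IOVA range; the helpers are illustrative, not QEMU's implementation:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096ULL

/* Size in bytes of a dirty bitmap covering `size` bytes of IOVA space,
 * one bit per base page, rounded up to whole bytes. */
static uint64_t dirty_bitmap_bytes(uint64_t size)
{
    uint64_t pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
    return (pages + 7) / 8;
}

/* Check whether the page containing `iova` is marked dirty in a bitmap
 * that describes the range starting at `base_iova`. */
static int page_is_dirty(const uint8_t *bitmap, uint64_t base_iova,
                         uint64_t iova)
{
    uint64_t bit = (iova - base_iova) / PAGE_SIZE;
    return (bitmap[bit / 8] >> (bit % 8)) & 1;
}
```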
The new helper will fetch vendor-agnostic IOMMU capabilities supported
both by hardware and software. Right now this covers only IOMMU dirty
tracking.
Signed-off-by: Joao Martins
---
backends/iommufd.c | 25 +
include/sysemu/iommufd.h | 2 ++
2 files changed, 27
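A trivial sketch of how a caller might test the returned capability mask for dirty tracking; the flag name here is illustrative (the kernel exposes a similar out_capabilities mask via IOMMU_GET_HW_INFO):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative capability bit, not the actual UAPI constant. */
#define CAP_DIRTY_TRACKING (1ULL << 0)

/* Decide whether the device/IOMMU pair advertised dirty tracking. */
static bool caps_support_dirty_tracking(uint64_t caps)
{
    return (caps & CAP_DIRTY_TRACKING) != 0;
}
```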
.327930-10-zhenzhong.d...@intel.com/
[2]
https://lore.kernel.org/qemu-devel/20220428211351.3897-1-joao.m.mart...@oracle.com/
[3]
https://lore.kernel.org/qemu-devel/20230622214845.3980-1-joao.m.mart...@oracle.com/
Joao Martins (8):
backends/iommufd: Introduce helpe
. Essentially mimicking the kernel's
iommufd_device_auto_get_domain(). If this fails (e.g. mdevs) it falls back
to IOAS attach.
Signed-off-by: Joao Martins
---
Right now the only alternative to a userspace autodomains implementation
is to mimic all the flags being added to HWPT_ALLOC but into VFIO
IOAS attach
On 12/09/2023 22:18, John Allen wrote:
> In the event that a guest process attempts to access memory that has
> been poisoned in response to a deferred uncorrected MCE, an AMD system
> will currently generate a SIGBUS error which will result in the entire
> guest being shutdown. Ideally, we only
On 07/02/2024 06:29, Alexander Monakov wrote:
> On Tue, 6 Feb 2024, Elena Ufimtseva wrote:
>> Hello Alexander
>>
>> On Tue, Feb 6, 2024 at 12:50 PM Alexander Monakov
>> wrote:
>>
>>> Thanks to early checks in the inline buffer_is_zero wrapper, the SIMD
>>> routines are invoked much more rarely in
On 18/01/2024 10:17, Yi Liu wrote:
> On 2024/1/18 16:17, Duan, Zhenzhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: Joao Martins
>>> Subject: Re: [PATCH rfcv1 4/6] vfio: initialize IOMMUFDDevice and pass to
>>> vIOMMU
>>>
On 15/01/2024 10:13, Zhenzhong Duan wrote:
> diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
> index 9bfddc1360..cbd035f148 100644
> --- a/hw/vfio/iommufd.c
> +++ b/hw/vfio/iommufd.c
> @@ -309,6 +309,7 @@ static int iommufd_cdev_attach(const char *name,
> VFIODevice *vbasedev,
>
On 21/11/2023 08:43, Zhenzhong Duan wrote:
> Hi,
>
> Thanks all for giving guides and comments on previous series, this is
> the remaining part of the iommufd support.
>
> Besides suggested changes in v6, I'd like to highlight two changes
> for final review:
> 1. Instantiate can_be_deleted
Thu, Nov 09, 2023 at 01:21:59PM +0000, Joao Martins wrote:
>>> On 09/11/2023 13:09, Jason Gunthorpe wrote:
>>>> On Thu, Nov 09, 2023 at 01:03:02PM +, Joao Martins wrote:
>>>>
>>>>>> I am not talking about mdevs; but rather the regular (n
On 09/11/2023 14:10, Bui Quang Minh wrote:
> On 11/9/23 17:11, Santosh Shukla wrote:
>> On 10/24/2023 8:51 PM, Bui Quang Minh wrote:
>>> Hi everyone,
>>>
>>> This series implements x2APIC mode in userspace local APIC and the
>>> RDMSR/WRMSR helper to access x2APIC registers in x2APIC mode. Intel
On 09/11/2023 13:09, Jason Gunthorpe wrote:
> On Thu, Nov 09, 2023 at 01:03:02PM +0000, Joao Martins wrote:
>
>>> I am not talking about mdevs; but rather the regular (non mdev) case not
>>> being
>>> able to use dirty tracking with autodomains hwpt allocat
On 09/11/2023 12:59, Joao Martins wrote:
> On 09/11/2023 12:57, Jason Gunthorpe wrote:
>> On Thu, Nov 09, 2023 at 12:17:35PM +0000, Joao Martins wrote:
>>> On 08/11/2023 12:48, Jason Gunthorpe wrote:
>>>> On Wed, Nov 08, 2023 at 07:16:52AM +0000, Duan, Zhenzhon
On 09/11/2023 12:57, Jason Gunthorpe wrote:
> On Thu, Nov 09, 2023 at 12:17:35PM +0000, Joao Martins wrote:
>> On 08/11/2023 12:48, Jason Gunthorpe wrote:
>>> On Wed, Nov 08, 2023 at 07:16:52AM +0000, Duan, Zhenzhong wrote:
>>>
>>>>>> +ret = iommuf
On 08/11/2023 12:48, Jason Gunthorpe wrote:
> On Wed, Nov 08, 2023 at 07:16:52AM +0000, Duan, Zhenzhong wrote:
>
+ret = iommufd_backend_alloc_hwpt(iommufd, vbasedev->devid,
+ container->ioas_id, _id);
+
+if (ret) {
+
On 31/10/2023 13:14, Juan Quintela wrote:
> Joao Martins wrote:
>> Right now, migration statistics either print downtime or expected
>> downtime depending on migration completing of in progress. Also in the
>> beginning of migration by printing the downtime limit as expec
On 30/10/2023 16:09, Peter Xu wrote:
> On Mon, Oct 30, 2023 at 11:13:55AM -0400, Peter Xu wrote:
>>> Perhaps it is easy to wrap the checkpoint tracepoint in its own function to
>>> allow extension of something else e.g. add the timestamp (or any other data
>>> into
>>> the checkpoints) or do
On 27/10/2023 15:41, Peter Xu wrote:
> On Fri, Oct 27, 2023 at 09:58:03AM +0100, Joao Martins wrote:
>> On 26/10/2023 21:07, Peter Xu wrote:
>>> On Thu, Oct 26, 2023 at 08:33:13PM +0100, Joao Martins wrote:
>>>> Sure. For the fourth patch, feel free to
On 26/10/2023 21:07, Peter Xu wrote:
> On Thu, Oct 26, 2023 at 08:33:13PM +0100, Joao Martins wrote:
>> Sure. For the fourth patch, feel free to add Suggested-by and/or a Link,
>> considering it started on the other patches (if you also agree it is right).
>> The
On 26/10/2023 20:01, Peter Xu wrote:
> Add tracepoints for major downtime checkpoints on both src and dst. They
> share the same tracepoint with a string showing its stage.
>
> On src, we have these checkpoints added:
>
> - downtime-start: right before vm stops on src
> - vm-stopped: after
On 26/10/2023 19:18, Peter Xu wrote:
> On Thu, Oct 26, 2023 at 01:03:57PM -0400, Peter Xu wrote:
>> On Thu, Oct 26, 2023 at 05:06:37PM +0100, Joao Martins wrote:
>>> On 26/10/2023 16:53, Peter Xu wrote:
>>>> This small series (actually only the last patch; firs
On 26/10/2023 16:53, Peter Xu wrote:
> This small series (actually only the last patch; first two are cleanups)
> wants to improve ability of QEMU downtime analysis similarly to what Joao
> used to propose here:
>
> https://lore.kernel.org/r/20230926161841.98464-1-joao.m.mart...@oracle.com
>
On 06/10/2023 18:09, Cédric Le Goater wrote:
>>> Getting acks from everyone will be difficult since some PHBs are orphans.
>>
>> [...] This is what gets me a bit hesitant
>
> orphans shouldn't be an issue, nor the PPC emulated machines. We will see
> what other maintainers have to say.
How about
On 04/10/2023 20:33, Peter Xu wrote:
> On Tue, Sep 26, 2023 at 05:18:41PM +0100, Joao Martins wrote:
>> Right now, migration statistics either print downtime or expected
>> downtime depending on migration completing of in progress. Also in the
>> beginning of migration by
On 04/10/2023 18:19, Peter Xu wrote:
> On Tue, Sep 26, 2023 at 05:18:36PM +0100, Joao Martins wrote:
>> For now, mainly precopy data, and here I added both tracepoints and
>> QMP stats via query-migrate. Postcopy is still missing.
>
> IIUC many of those will cover post
On 04/10/2023 18:10, Peter Xu wrote:
> Hi, Joao,
>
> On Tue, Sep 26, 2023 at 05:18:40PM +0100, Joao Martins wrote:
>> Deliver the downtime breakdown also via `query-migrate`
>> to allow users to understand what their downtime value
>> represents.
>
> I agree
On 06/10/2023 09:52, Eric Auger wrote:
> Hi Joao,
>
> On 6/22/23 23:48, Joao Martins wrote:
>> From: Yi Liu
>>
>> Refactor pci_device_iommu_address_space() and move the
>> code that fetches the device bus and iommu bus into its
>> own privat
On 06/10/2023 09:50, Cédric Le Goater wrote:
> On 10/6/23 10:38, Joao Martins wrote:
>> On 02/10/2023 16:12, Cédric Le Goater wrote:
>>> Hello Joao,
>>>
>>> On 6/22/23 23:48, Joao Martins wrote:
>>>> From: Yi Liu
>>>>
On 06/10/2023 09:45, Eric Auger wrote:
> Hi Joao,
>
> On 6/22/23 23:48, Joao Martins wrote:
>> From: Yi Liu
>>
>> Add a pci_setup_iommu_ops() that uses a newly added structure
>> (PCIIOMMUOps) instead of using PCIIOMMUFunc. The old pci_setup_iommu()
On 06/10/2023 10:48, Michael S. Tsirkin wrote:
> On Fri, Oct 06, 2023 at 09:58:30AM +0100, Joao Martins wrote:
>> On 03/10/2023 15:01, Michael S. Tsirkin wrote:
>>> On Wed, Sep 27, 2023 at 12:14:28PM +0100, Joao Martins wrote:
>>>> On setups with one or more
On 03/10/2023 15:01, Michael S. Tsirkin wrote:
> On Wed, Sep 27, 2023 at 12:14:28PM +0100, Joao Martins wrote:
>> On setups with one or more virtio-net devices with vhost on,
>> dirty tracking iteration increases in cost the bigger the number
>> of queues that are set u
On 02/10/2023 16:42, Cédric Le Goater wrote:
> On 7/10/23 15:44, Joao Martins wrote:
>>
>>
>> On 09/07/2023 16:17, Avihai Horon wrote:
>>>
>>> On 23/06/2023 0:48, Joao Martins wrote:
>>>> External email: Use caution opening lin
On 02/10/2023 16:23, Cédric Le Goater wrote:
> On 6/22/23 23:48, Joao Martins wrote:
>> Implement IOMMU MR get_attr() method and use the dma_translation
>> property to report the IOMMU_ATTR_DMA_TRANSLATION attribute.
>> Additionally add the necessary get_iommu_attr
On 06/10/2023 09:39, Joao Martins wrote:
>
>
> On 02/10/2023 16:22, Cédric Le Goater wrote:
>> On 6/22/23 23:48, Joao Martins wrote:
>>> From: Yi Liu
>>>
>>> Refactor pci_device_iommu_address_space() and move the
>>> code that fetches th
On 02/10/2023 16:22, Cédric Le Goater wrote:
> On 6/22/23 23:48, Joao Martins wrote:
>> From: Yi Liu
>>
>> Refactor pci_device_iommu_address_space() and move the
>> code that fetches the device bus and iommu bus into its
>> own private help
On 02/10/2023 16:12, Cédric Le Goater wrote:
> Hello Joao,
>
> On 6/22/23 23:48, Joao Martins wrote:
>> From: Yi Liu
>>
>> Add a pci_setup_iommu_ops() that uses a newly added structure
>> (PCIIOMMUOps) instead of using PCIIOMMUFunc. The old pci_setup_iommu()
On 28/09/2023 02:55, Wang, Lei wrote:
> On 9/27/2023 0:18, Joao Martins wrote:
>> Right now downtime_start is stored in MigrationState.
>>
>> In preparation to having more downtime timestamps during
>> switchover, move downtime_start to an array namely, @timestamp
of sw vhost to fix this "over log scan" issue.
Signed-off-by: Joao Martins
---
I am not fully sure the heuristic captures the myriad of different vhost
devices -- I think so. IIUC, the log is always shared, it's just whether
it's qemu head memory or via /dev/shm when other processes want
data when available to give
some comparison.
For now, mainly precopy data, and here I added both tracepoints and
QMP stats via query-migrate. Postcopy is still missing.
Thoughts, comments appreciated as usual.
Thanks!
Joao
Joao Martins (5):
migration: Store downtime timestamps in
Deliver the downtime breakdown also via `query-migrate`
to allow users to understand what their downtime value
represents.
Signed-off-by: Joao Martins
---
qapi/migration.json | 22 ++
migration/migration.c | 14 ++
2 files changed, 36 insertions(+)
diff --git
To facilitate understanding of what constitutes downtime, add
a tracepoint that gives the downtime breakdown throughout all
steps of switchover.
Signed-off-by: Joao Martins
---
migration/migration.c | 34 ++
migration/trace-events | 1 +
2 files changed, 35
and not necessarily
accessible outside. Given the non-determinism of the switchover cost, it
can be useful to understand if the downtime was far off from the one
detected by the migration algorithm, thus print the resultant downtime
alongside its estimation.
Signed-off-by: Joao Martins
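A minimal sketch of turning an array of switchover checkpoint timestamps into a per-step breakdown plus the total downtime, in the spirit of the series above; the names and layout are illustrative, not the QEMU ones:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Given `n` monotonic checkpoint timestamps (ms) collected during
 * switchover, fill `deltas` with the n-1 per-step durations and return
 * the total downtime. Illustrative helper only. */
static int64_t downtime_breakdown(const int64_t *ts, size_t n,
                                  int64_t *deltas)
{
    size_t i;

    for (i = 1; i < n; i++) {
        deltas[i - 1] = ts[i] - ts[i - 1];
    }
    return ts[n - 1] - ts[0];   /* total downtime */
}
```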
-off-by: Joao Martins
---
qapi/migration.json | 16 +++-
migration/migration.c | 5 +
migration/savevm.c| 2 ++
3 files changed, 22 insertions(+), 1 deletion(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index b836cc881d33..2d91fbcb22ff 100644
--- a/qapi
.
Signed-off-by: Joao Martins
---
qapi/migration.json | 14 ++
migration/migration.h | 7 +--
migration/migration.c | 24
3 files changed, 39 insertions(+), 6 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index 8843e74b59c7
On 08/09/2023 13:05, Joao Martins wrote:
> Add an option 'x-migration-iommu-pt' to VFIO that allows it to relax
> whether the vIOMMU usage blocks the migration. The current behaviour
> is kept and we block migration in the following conditions:
>
> * By default if the guest does tr
On 18/09/2023 23:00, William Roche wrote:
> Hi John,
>
> I'd like to put the emphasis on the fact that ignoring the SRAO error
> for a VM is a real problem at least for a specific (rare) case I'm
> currently working on: The VM migration.
>
> Context:
>
> - In the case of a poisoned page in the
On 11/09/2023 19:35, Alex Williamson wrote:
> On Mon, 11 Sep 2023 11:12:55 +0100
> Joao Martins wrote:
>
>> On 11/09/2023 10:48, Duan, Zhenzhong wrote:
>>>> -----Original Message-----
>>>> From: Joao Martins
>>>> Sent: Monday, September 11, 20
On 11/09/2023 10:48, Duan, Zhenzhong wrote:
+static bool vfio_section_is_vfio_pci(MemoryRegionSection *section,
+ VFIOContainer *container)
+{
+VFIOPCIDevice *pcidev;
+VFIODevice *vbasedev;
+VFIOGroup *group;
+
On 11/09/2023 10:48, Duan, Zhenzhong wrote:
>> -----Original Message-----
>> From: Joao Martins
>> Sent: Monday, September 11, 2023 5:07 PM
>> Subject: Re: [PATCH v1] vfio/common: Separate vfio-pci ranges
>>
>> On 11/09/2023 09:57, Duan, Zhenzhong wrote:
>&
On 11/09/2023 09:57, Duan, Zhenzhong wrote:
>> -----Original Message-----
>> From: qemu-devel-bounces+zhenzhong.duan=intel@nongnu.org > devel-bounces+zhenzhong.duan=intel@nongnu.org> On Behalf Of Joao
>> Martins
>> Sent: Friday, September 8, 2023 5:30 PM
>
On 06/09/2023 22:29, William Roche wrote:
> On 9/6/23 17:16, Peter Xu wrote:
>>
>> Just a note..
>>
>> Probably fine for now to reuse block page size, but IIUC the right thing to
>> do is to fetch it from the signal info (in QEMU's sigbus_handler()) of
>> kernel_siginfo.si_addr_lsb.
>>
>> At least
sending switchover data, assuming that should always be the most important
> way to use the network at that time.
>
> This can resolve issues like "unconvergence migration" which is caused by
> hilarious low "migration bandwidth" detected for whatever reason.
On 08/09/2023 10:29, Joao Martins wrote:
> QEMU computes the DMA logging ranges for two predefined ranges: 32-bit
> and 64-bit. In the OVMF case, when the dynamic MMIO window is enabled,
> QEMU includes in the 64-bit range the RAM regions at the lower part
> and vfio-pci device
On 08/09/2023 12:52, Duan, Zhenzhong wrote:
> On 9/8/2023 6:11 PM, Joao Martins wrote:
>> On 08/09/2023 07:11, Duan, Zhenzhong wrote:
>>> Hi Joao,
>>>
>>> On 6/23/2023 5:48 AM, Joao Martins wrote:
>>>> Currently, device dirty page tracking with vIOM
non-deterministic). But let the
user enable it if it can tolerate migration failures.
Signed-off-by: Joao Martins
---
Followup from discussion here:
https://lore.kernel.org/qemu-devel/d5d30f58-31f0-1103-6956-377de34a7...@redhat.com/
This is a smaller (and simpler) take than [0], but is likely the only
option
On 08/09/2023 07:28, Duan, Zhenzhong wrote:
>
> On 6/23/2023 5:48 AM, Joao Martins wrote:
>> Only block the case when the underlying vIOMMU model does not report any
>> address space limits, in addition to DMA translation being off or no
>> vIOMMU present. The limits
On 08/09/2023 07:11, Duan, Zhenzhong wrote:
> Hi Joao,
>
> On 6/23/2023 5:48 AM, Joao Martins wrote:
>> Currently, device dirty page tracking with vIOMMU is not supported,
>> and a blocker is added and the migration is prevented.
>>
>> When vIOMMU is used, IO
On 08/09/2023 07:23, Duan, Zhenzhong wrote:
>
> On 6/23/2023 5:48 AM, Joao Martins wrote:
>> Implement IOMMU MR get_attr() method and use the dma_translation
>> property to report the IOMMU_ATTR_DMA_TRANSLATION attribute.
>> Additionally add the necessary get_iommu_a
guests.
Signed-off-by: Joao Martins
[ clg: - wrote commit log
- fixed overlapping 32-bit and PCI ranges when using SeaBIOS ]
Signed-off-by: Cédric Le Goater
---
v2:
* s/minpci/minpci64/
* s/maxpci/maxpci64/
* Expand comment to cover the pci-hole64 and why we don't do special
handling of pci
On 08/09/2023 09:28, Cédric Le Goater wrote:
> On 9/8/23 10:16, Joao Martins wrote:
>> On 08/09/2023 08:14, Cédric Le Goater wrote:
>>> From: Joao Martins
>>>
>>> QEMU computes the DMA logging ranges for two predefined ranges: 32-bit
>>> and 64-bit. I
On 08/09/2023 08:14, Cédric Le Goater wrote:
> From: Joao Martins
>
> QEMU computes the DMA logging ranges for two predefined ranges: 32-bit
> and 64-bit. In the OVMF case, when the dynamic MMIO window is enabled,
> QEMU includes in the 64-bit range the RAM regions at
On 07/09/2023 13:40, Cédric Le Goater wrote:
> Hello Joao,
>
>> Cedric, you mentioned that you take a look at this after you come back, not
>> sure
>> if that's still the plan. But it's been a while since the last version, so
>> would
>> you have me repost/rebase on the latest (post your PR)?
>
On 22/06/2023 22:48, Joao Martins wrote:
> Hey,
>
> This series introduces support for vIOMMU with VFIO device migration,
> particularly related to how we do the dirty page tracking.
>
> Today vIOMMUs serve two purposes: 1) enable interrupt remapping 2)
> provide dma
On 06/09/2023 14:59, William Roche wrote:
> From: William Roche
>
> A memory page poisoned from the hypervisor level is no longer readable.
> Thus, it is now treated as a zero-page for the ram saving migration phase.
>
> The migration of a VM will crash Qemu when it tries to read the
> memory
On 01/09/2023 18:59, Joao Martins wrote:
> On 03/08/2023 16:53, Peter Xu wrote:
>> @@ -2694,7 +2694,17 @@ static void migration_update_counters(MigrationState
>> *s,
>> transferred = current_bytes - s->iteration_initial_bytes;
>> time_spent = current_
On 03/08/2023 16:53, Peter Xu wrote:
> @@ -2694,7 +2694,17 @@ static void migration_update_counters(MigrationState
> *s,
> transferred = current_bytes - s->iteration_initial_bytes;
> time_spent = current_time - s->iteration_start_time;
> bandwidth = (double)transferred /
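The quoted code measures per-iteration bandwidth and, in QEMU, derives from it how much data could still be sent within the configured downtime limit. A simplified, self-contained sketch of that estimate (not the exact QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* Measured bandwidth over the last iteration, in bytes per millisecond,
 * following the quoted migration_update_counters() arithmetic. */
static double iter_bandwidth(uint64_t transferred_bytes, int64_t time_ms)
{
    return (double)transferred_bytes / time_ms;
}

/* Remaining-data threshold below which switchover can fit in the
 * downtime limit: bandwidth * downtime_limit. Simplified sketch. */
static uint64_t threshold_size(double bandwidth, int64_t downtime_limit_ms)
{
    return (uint64_t)(bandwidth * downtime_limit_ms);
}
```

With 100 MB transferred in one second, bandwidth is 100000 bytes/ms, so a 300 ms downtime limit allows roughly 30 MB of remaining data at switchover.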
k this is matching the last discussion:
Reviewed-by: Joao Martins
The patch ordering doesn't look correct though. Perhaps we should expose succor
only after MCE is fixed so this patch would be the second, not the first?
Also, this should in generally be OK for -cpu host, but might be mis
On 03/08/2023 16:53, Peter Xu wrote:
> @@ -2719,7 +2729,8 @@ static void migration_update_counters(MigrationState *s,
> update_iteration_initial_status(s);
>
> trace_migrate_transferred(transferred, time_spent,
> - bandwidth, s->threshold_size);
> +
On 12/07/2023 20:11, John Allen wrote:
> On Fri, Jul 07, 2023 at 04:25:22PM +0200, Paolo Bonzini wrote:
>> On 7/6/23 21:40, John Allen wrote:
>>> case 0x8007:
>>> *eax = 0;
>>> -*ebx = 0;
>>> +*ebx = env->features[FEAT_8000_0007_EBX] |
>>>
+Peter, +Jason (intel-iommu maintainer/reviewer)
On 15/07/2023 16:22, Bui Quang Minh wrote:
> As userspace APIC now supports x2APIC, intel interrupt remapping
> hardware can be set to EIM mode when userspace local APIC is used.
>
> Reviewed-by: Michael S. Tsirkin
> Signed-off-by: Bui Quang Minh
On 09/07/2023 16:24, Avihai Horon wrote:
> On 23/06/2023 0:48, Joao Martins wrote:
>> Currently, device dirty page tracking with vIOMMU is not supported,
>> and a blocker is added and the migration is prevented.
>>
>> When vIOMMU is used, IOVA ranges are DMA
On 09/07/2023 16:17, Avihai Horon wrote:
>
> On 23/06/2023 0:48, Joao Martins wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> From: Avihai Horon
>>
>> Implement get_attr() method and use the address width property to r
On 09/07/2023 16:10, Avihai Horon wrote:
> On 23/06/2023 0:48, Joao Martins wrote:
>> vfio_get_group() allocates and fills the group/container/space on
>> success which will store the AddressSpace inside the VFIOSpace struct.
>> Use the newly added pci_device_iommu_get
+x86 qemu folks
On 06/07/2023 20:40, John Allen wrote:
> For the most part, AMD hosts can use the same MCE injection code as Intel but,
> there are instances where the qemu implementation is Intel specific. First,
> MCE
> delivery works differently on AMD and does not support broadcast. Second,
+x86 qemu folks
On 06/07/2023 21:22, Moger, Babu wrote:
> Hi John,
> Thanks for the patches. Few comments below.
>
> On 7/6/23 14:40, John Allen wrote:
>> Add cpuid bit definition for the SUCCOR feature. This cpuid bit is required
>> to
>> be exposed to guests to allow them to handle machine
1 - 100 of 477 matches