On Thu, 10 Mar 2022 14:55:50 -0500
Steven Sistare <steven.sist...@oracle.com> wrote:

> On 3/10/2022 1:35 PM, Alex Williamson wrote:
> > On Thu, 10 Mar 2022 10:00:29 -0500
> > Steven Sistare <steven.sist...@oracle.com> wrote:
> >   
> >> On 3/7/2022 5:16 PM, Alex Williamson wrote:  
> >>> On Wed, 22 Dec 2021 11:05:24 -0800
> >>> Steve Sistare <steven.sist...@oracle.com> wrote:  
> >>>> @@ -1878,6 +1908,18 @@ static int vfio_init_container(VFIOContainer *container, int group_fd,
> >>>>  {
> >>>>      int iommu_type, ret;
> >>>>  
> >>>> +    /*
> >>>> +     * If container is reused, just set its type and skip the ioctls, as the
> >>>> +     * container and group are already configured in the kernel.
> >>>> +     * VFIO_TYPE1v2_IOMMU is the only type that supports reuse/cpr.
> >>>> +     * If you ever add new types or spapr cpr support, kind reader, please
> >>>> +     * also implement VFIO_GET_IOMMU.
> >>>> +     */
> >>>
> >>> VFIO_CHECK_EXTENSION should be able to tell us this, right?  Maybe the
> >>> problem is that vfio_iommu_type1_check_extension() should actually base
> >>> some of the details on the instantiated vfio_iommu, ex.
> >>>
> >>>   switch (arg) {
> >>>   case VFIO_TYPE1_IOMMU:
> >>>           return (iommu && iommu->v2) ? 0 : 1;
> >>>   case VFIO_UNMAP_ALL:
> >>>   case VFIO_UPDATE_VADDR:
> >>>   case VFIO_TYPE1v2_IOMMU:
> >>>           return (iommu && !iommu->v2) ? 0 : 1;
> >>>   case VFIO_TYPE1_NESTING_IOMMU:
> >>>           return (iommu && !iommu->nesting) ? 0 : 1;
> >>>   ...
> >>>
> >>> We can't support v1 if we've already set a v2 container and vice versa.
> >>> There are probably some corner cases and compatibility to puzzle
> >>> through, but I wouldn't think we need a new ioctl to check this.    
> >>
> >> That change makes sense, and may be worthwhile on its own merits, but does
> >> not solve the problem, which is that qemu will not be able to infer
> >> iommu_type in the future if new types are added.  Given:
> >>   * a new kernel supporting shiny new TYPE1v3
> >>   * old qemu starts and selects TYPE1v2 in vfio_get_iommu_type because it
> >>     has no knowledge of v3
> >>   * live update to qemu which supports v3, which will be listed first in
> >>     vfio_get_iommu_type.
> >>
> >> Then the new qemu has no way to infer iommu_type.  If it has code that
> >> makes decisions based on iommu_type (eg, VFIO_SPAPR_TCE_v2_IOMMU in
> >> vfio_container_region_add, or vfio_ram_block_discard_disable, or ...),
> >> then new qemu cannot function correctly.
> >>
> >> For that, VFIO_GET_IOMMU would be the cleanest solution, to be added at
> >> the same time our hypothetical future developer adds TYPE1v3.  The current
> >> inability to ask the kernel "what are you" about a container feels like a
> >> bug to me.
> > 
> > Hmm, I don't think the kernel has an innate responsibility to remind
> > the user of a configuration that they've already made.    
> 
> No, but it can make userland cleaner.  For example, CRIU checkpoint/restart
> queries the kernel to save process state, and later makes syscalls to restore
> it.  Where the kernel does not export sufficient information, CRIU must
> provide interpose libraries so it can remember state internally on its way
> to the kernel.  And applications must link against the interpose libraries.

The counterargument is that it bloats the kernel to add interfaces to
report back things that userspace should already know.  Which has more
exploit vectors, a new kernel ioctl or yet another userspace library?
 
> > But I also
> > don't follow your TYPE1v3 example.  If we added such a type, I imagine
> > the switch would change to:
> > 
> >     switch (arg) {
> >     case VFIO_TYPE1_IOMMU:
> >             return (iommu && (iommu->v2 || iommu->v3)) ? 0 : 1;
> >     case VFIO_UNMAP_ALL:
> >     case VFIO_UPDATE_VADDR:
> >             return (iommu && !(iommu->v2 || iommu->v3)) ? 0 : 1;
> >     case VFIO_TYPE1v2_IOMMU:
> >             return (iommu && !iommu->v2) ? 0 : 1;
> >     case VFIO_TYPE1v3_IOMMU:
> >             return (iommu && !iommu->v3) ? 0 : 1;
> >     ...
> > 
> > How would that not allow exactly the scenario described, ie. new QEMU
> > can see that old QEMU left it a v2 IOMMU.  
> 
> OK, that works as long as the switch returns true for all options before
> VFIO_SET_IOMMU is called.  I guess your test for "iommu" above does that,
> which I missed before.  If we are on the same page now, I will modify my
> comment "please also implement VFIO_GET_IOMMU".

Yes, in the above all extensions are supported before the container
type is set, then once set only the relevant extensions are available.
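
To make that concrete: on a reused container, new QEMU could recover the
type by probing, since once VFIO_SET_IOMMU has been done only the
configured type is still advertised.  A rough sketch (the helper name is
made up, and it is only meaningful when the container is known to be
reused, i.e. already configured by old QEMU):

    /*
     * Sketch: infer the iommu type of an already-configured container by
     * probing VFIO_CHECK_EXTENSION, newest type first.
     */
    static int vfio_reused_iommu_type(VFIOContainer *container)
    {
        int types[] = { VFIO_TYPE1v2_IOMMU, VFIO_TYPE1_IOMMU };
        int i;

        for (i = 0; i < ARRAY_SIZE(types); i++) {
            if (ioctl(container->fd, VFIO_CHECK_EXTENSION, types[i]) > 0) {
                return types[i];
            }
        }
        return -EINVAL;
    }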

...
> >>>> diff --git a/migration/cpr.c b/migration/cpr.c
> >>>> index 37eca66..cee82cf 100644
> >>>> --- a/migration/cpr.c
> >>>> +++ b/migration/cpr.c
> >>>> @@ -7,6 +7,7 @@
> >>>>  
> >>>>  #include "qemu/osdep.h"
> >>>>  #include "exec/memory.h"
> >>>> +#include "hw/vfio/vfio-common.h"
> >>>>  #include "io/channel-buffer.h"
> >>>>  #include "io/channel-file.h"
> >>>>  #include "migration.h"
> >>>> @@ -101,7 +102,9 @@ void qmp_cpr_exec(strList *args, Error **errp)
> >>>>          error_setg(errp, "cpr-exec requires cpr-save with restart mode");
> >>>>          return;
> >>>>      }
> >>>> -
> >>>> +    if (cpr_vfio_save(errp)) {
> >>>> +        return;
> >>>> +    }    
> >>>
> >>> Why is vfio so unique that it needs separate handlers versus other
> >>> devices?  Thanks,    
> >>
> >> In earlier patches these functions fiddled with more objects, but at this
> >> point they are simple enough to convert to pre_save and post_load vmstate
> >> handlers for the container and group objects.  However, we would still
> >> need to call special functions for vfio from qmp_cpr_exec:
> >>
> >>   * validate all containers support CPR before we start blasting vaddrs.
> >>     However, I could validate all, in every call to pre_save for each
> >>     container.  That would be less efficient, but fits the vmstate model.
> > 
> > Would it be a better option to mirror the migration blocker support, ie.
> > any device that doesn't support cpr registers a blocker and generic
> > code only needs to keep track of whether any blockers are registered.  
> 
> We cannot specifically use migrate_add_blocker(), because it is checked in
> the migration specific function migrate_prepare(), in a layer of functions 
> above the simpler qemu_save_device_state() used in cpr.  But yes, we could
> do something similar for vfio.  Increment a global counter in vfio_realize
> if the container does not support cpr, and decrement it when the container is
> destroyed.  pre_save could just check the counter.

Right, I'm not suggesting to piggyback on migrate_add_blocker(), only to
use a similar mechanism.  Only drivers that can't support cpr need to
register a blocker, but testing for blockers is done generically, not just
for vfio devices.
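
Something minimal along these lines would do; this is only a sketch and
none of these names exist today:

    /* sketch: generic cpr blocker count, usable by any device type */
    static unsigned int cpr_blockers;

    void cpr_add_blocker(void)
    {
        cpr_blockers++;
    }

    void cpr_del_blocker(void)
    {
        assert(cpr_blockers);
        cpr_blockers--;
    }

    bool cpr_is_blocked(Error **errp)
    {
        if (cpr_blockers) {
            error_setg(errp, "cpr blocked by %u device(s)", cpr_blockers);
            return true;
        }
        return false;
    }

vfio container setup would add a blocker when the kernel lacks
VFIO_UPDATE_VADDR/VFIO_UNMAP_ALL support, drop it on teardown, and
qmp_cpr_save() would check cpr_is_blocked() before touching any vaddrs.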

> >>   * restore all vaddr's if qemu_save_device_state fails.
> >>     However, I could recover for all containers inside pre_save when one
> >>     container fails.  Feels strange touching all objects in a function
> >>     for one, but there is no real downside.
> > 
> > I'm not as familiar as I should be with migration callbacks, thanks to
> > mostly not supporting it for vfio devices, but it seems strange to me
> > that there's no existing callback or notifier per device to propagate
> > save failure.  Do we not at least get some sort of resume callback in
> > that case?  
> 
> We do not:
>     struct VMStateDescription {
>         int (*pre_load)(void *opaque);
>         int (*post_load)(void *opaque, int version_id);
>         int (*pre_save)(void *opaque);
>         int (*post_save)(void *opaque);
> 
> The handler returns an error, which stops further saves and is propagated back
> to the top level caller qemu_save_device_state().
> 
> The vast majority of handlers do not have side effects, with no need to
> unwind anything on failure.
> 
> This raises another point.  If pre_save succeeds for all the containers,
> but fails for some non-vfio object, then the overall operation is abandoned,
> but we do not restore the vaddr's.  To plug that hole, we need to call the
> unwind code from qmp_cpr_save, or implement your alternative below.

We're trying to reuse migration interfaces; are we also triggering
migration state change notifiers, i.e.
MIGRATION_STATUS_{CANCELLING,CANCELLED,FAILED}?  We already hook vfio
devices supporting migration into that notifier to tell the driver to
move the device back to the running state on failure, which seems a bit
unique to vfio devices.  Containers could maybe register their own
callbacks.
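
If cpr ends up driving those notifiers, a per-container hook could look
roughly like the following; this is only a sketch, and the vaddr_unmapped
flag, the remap helper, and the migration_state notifier field on
VFIOContainer are all hypothetical:

    static void vfio_container_migr_state_notify(Notifier *notifier, void *data)
    {
        MigrationState *s = data;
        VFIOContainer *container = container_of(notifier, VFIOContainer,
                                                migration_state);

        switch (s->state) {
        case MIGRATION_STATUS_CANCELLING:
        case MIGRATION_STATUS_CANCELLED:
        case MIGRATION_STATUS_FAILED:
            if (container->vaddr_unmapped) {
                /* hypothetical helper that re-establishes the mappings */
                vfio_container_remap_vaddrs(container);
                container->vaddr_unmapped = false;
            }
            break;
        default:
            break;
        }
    }

registered with add_migration_state_change_notifier() at container setup,
mirroring what vfio_migration_state_notifier() already does per device.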

> > As an alternative, maybe each container could register a vm change
> > handler that would trigger reloading vaddrs if we move to a running
> > state and a flag on the container indicates vaddrs were invalidated?
> > Thanks,  
> 
> That works and is modular, but I dislike that it adds checks on the
> happy path for a case that will rarely happen, and it pushes recovery from
> failure further away from the original failure, which would make debugging
> cascading failures more difficult.

Would using the migration notifier move us sufficiently closer to the
failure point?  Otherwise I think you're talking about unwinding all
the containers when any one fails, where you didn't like that object
overreach, or maybe adding an optional callback... but I wonder if the
above notifier essentially already does that.
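
For reference, the vm change state variant would amount to roughly this
per container (again only a sketch, with the same hypothetical flag and
helper as above):

    static void vfio_container_vm_state_change(void *opaque, bool running,
                                               RunState state)
    {
        VFIOContainer *container = opaque;

        if (running && container->vaddr_unmapped) {
            vfio_container_remap_vaddrs(container);   /* hypothetical */
            container->vaddr_unmapped = false;
        }
    }

registered via qemu_add_vm_change_state_handler(), so the happy-path cost
is a single flag test when the machine resumes.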

In any case, I think we have options to either implement new or use
existing notifier-like functionality to avoid all these vfio specific
callouts.  Thanks,

Alex

