Re: [PATCH v3] vfio/pci: Propagate ACPI notifications to user-space via eventfd

2023-05-17 Thread Grzegorz Jaszczyk
On Tue, 16 May 2023 at 01:00, Alex Williamson wrote:
>
> On Tue,  2 May 2023 13:27:00 +
> Grzegorz Jaszczyk  wrote:
>
> > From: Grzegorz Jaszczyk 
> >
> > To allow pass-through devices to receive ACPI notifications, permit
> > registering an ACPI notify handler (via a newly introduced ioctl) for a given
>
> This shouldn't require a new ioctl, it should fit within the
> abstraction of the VFIO_DEVICE_SET_IRQS ioctl which already supports
> virtual IRQs for things like error notification and device release
> requests.  Support for this IRQ index on a given device should also be
> discoverable via VFIO_DEVICE_GET_IRQ_INFO that way.

Ok, I will try utilizing VFIO_DEVICE_SET_IRQS instead of introducing new IOCTL.
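
From the user-space side I would expect the registration to look roughly
as below (a sketch only: VFIO_PCI_ACPI_NTFY_IRQ_INDEX is my placeholder
name for the new IRQ index, not an existing define):

#include <linux/vfio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <string.h>

/* Sketch: probe the (assumed) ACPI notify IRQ index, then attach an eventfd. */
static int register_acpi_notify(int device_fd)
{
	struct vfio_irq_info irq_info = {
		.argsz = sizeof(irq_info),
		.index = VFIO_PCI_ACPI_NTFY_IRQ_INDEX,	/* assumed new index */
	};
	char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
	struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;
	int32_t efd;

	if (ioctl(device_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq_info) || !irq_info.count)
		return -1;	/* ACPI notifications not available for this device */

	efd = eventfd(0, EFD_CLOEXEC);
	if (efd < 0)
		return -1;

	irq_set->argsz = sizeof(buf);
	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
	irq_set->index = VFIO_PCI_ACPI_NTFY_IRQ_INDEX;
	irq_set->start = 0;
	irq_set->count = 1;
	memcpy(irq_set->data, &efd, sizeof(efd));

	return ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irq_set);
}

The VMM would then read() the eventfd to consume one notification value
at a time, each read unblocking delivery of the next queued value.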

>
> > device. The handler's role is to receive such ACPI notifications and
> > propagate them to user-space through the user-provided eventfd. This
> > allows the VMM to receive them and forward them to the VM, where the
> > actual driver for the pass-through device resides and can react to
> > device-specific notifications accordingly.
> >
> > The eventfd usage ensures VMM and device isolation: it allows using a
> > dedicated channel associated with the device for such events, such that
> > the VMM has direct access.
> >
> > Since the eventfd counter is used as the ACPI notification value
> > placeholder, the eventfd signaling needs to be serialized so that
> > notification values do not get coalesced. Therefore, ACPI notification
> > values are buffered and signaled one by one, each once the previous
> > notification value has been consumed.
>
> I don't see anything that prevents this queuing mechanism from creating
> an arbitrarily large queue, don't we need to drop events at some point
> to avoid introducing an exploit vector?  Aren't these notifications
> often for things like "device check", where queuing duplicate entries
> doesn't make sense and perhaps the most recent notification is the only
> relevant value otherwise?  If we only need to avoid calling
> eventfd_signal() while a non-zero value is pending, couldn't we call
> eventfd_ctx_do_read() ourselves to clear the old value rather than
> queuing?  Thanks,

I think we have to queue. There are multiple different notification
values: some are common across devices, some are reserved by vendors for
hardware-specific notifications, and others are device specific. The
notification value signifies the purpose of the notification, so the
most recent notification may not be the only relevant one.

Indeed, a mechanism preventing the creation of large queues is missing.
I will fix it in the next version, probably by allowing only a certain
number of notifications to be queued, e.g. the 20 most recent (I do not
expect multiple notification values to be queued, but I want to avoid
losing several different kinds if they arrive in a row).
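
Something along these lines in the notify handler (just a sketch: the
notification_queue_count field and the cap value are assumptions for the
next version, not part of the posted patch):

#define VFIO_ACPI_NOTIFY_QUEUE_MAX	20	/* assumed cap */

	mutex_lock(&acpi_notify->notification_list_lock);
	if (acpi_notify->notification_queue_count >= VFIO_ACPI_NOTIFY_QUEUE_MAX) {
		/* Queue full: drop the oldest value to bound memory usage. */
		struct notification_queue *oldest =
			list_first_entry(&acpi_notify->notification_list,
					 struct notification_queue,
					 notify_val_next);

		list_del(&oldest->notify_val_next);
		kfree(oldest);
		acpi_notify->notification_queue_count--;
	}
	list_add_tail(&entry->notify_val_next, &acpi_notify->notification_list);
	acpi_notify->notification_queue_count++;
	mutex_unlock(&acpi_notify->notification_list_lock);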

Thank you,
Grzegorz
>
>
> Alex
>
> > Signed-off-by: Grzegorz Jaszczyk 
> > ---
> > Changelog v2..v3:
> > - Fix compilation warnings when building with "W=1"
> > Changelog v1..v2:
> > - The v2 implementation is actually completely different from v1:
> >   instead of using ACPI netlink events for propagating ACPI
> >   notifications to user space, it takes advantage of eventfd, which can
> >   provide better VMM and device isolation: it allows using a dedicated
> >   channel associated with the device for such events, such that the VMM
> >   has direct access.
> > - Using the eventfd counter as the notification value placeholder was
> >   suggested in v1 and requires the additional serialization logic
> >   introduced in v2.
> > - Since vfio-pci supports non-ACPI platforms, address the !CONFIG_ACPI
> >   case.
> > - v1 discussion: 
> > https://patchwork.kernel.org/project/kvm/patch/20230307220553.631069-1-...@semihalf.com/
> > ---
> >  drivers/vfio/pci/vfio_pci_core.c | 214 +++
> >  include/linux/vfio_pci_core.h    |  11 ++
> >  include/uapi/linux/vfio.h        |  15 +++
> >  3 files changed, 240 insertions(+)
> >
> > diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> > index a5ab416cf476..2d6101e89fde 100644
> > --- a/drivers/vfio/pci/vfio_pci_core.c
> > +++ b/drivers/vfio/pci/vfio_pci_core.c
> > @@ -10,6 +10,7 @@
> >
> >  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> >
> > +#include 
> >  #include 
> >  #include 
> >  #include 
> > @@ -679,6 +680,70 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
> >  }
> >  EXPORT_SYMBOL_GPL(vfio_pci_core_disable);
> >
> > +struct notification_queue {
> > +	int notification_val;
> > +	struct list_head notify_val_next;
> > +};
> > +
> > +#if IS_ENABLED(CONFIG_ACPI)
> > +static void vfio_pci_core_acpi_notify(acpi_handle handle, u32 event, void *data)
> > +{
> > +	struct vfio_pci_core_device *vdev = (struct vfio_pci_core_device *)data;
> > +	struct vfio_acpi_notification *acpi_notify = vdev->acpi_notification;
> > +	struct notification_queue *entry;
> > +
> > +	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
Re: [PATCH v3] vfio/pci: Propagate ACPI notifications to user-space via eventfd

2023-05-15 Thread Alex Williamson
On Tue,  2 May 2023 13:27:00 +
Grzegorz Jaszczyk  wrote:

> From: Grzegorz Jaszczyk 
> 
> To allow pass-through devices to receive ACPI notifications, permit
> registering an ACPI notify handler (via a newly introduced ioctl) for a given

This shouldn't require a new ioctl, it should fit within the
abstraction of the VFIO_DEVICE_SET_IRQS ioctl which already supports
virtual IRQs for things like error notification and device release
requests.  Support for this IRQ index on a given device should also be
discoverable via VFIO_DEVICE_GET_IRQ_INFO that way.

> device. The handler's role is to receive such ACPI notifications and
> propagate them to user-space through the user-provided eventfd. This
> allows the VMM to receive them and forward them to the VM, where the
> actual driver for the pass-through device resides and can react to
> device-specific notifications accordingly.
> 
> The eventfd usage ensures VMM and device isolation: it allows using a
> dedicated channel associated with the device for such events, such that
> the VMM has direct access.
> 
> Since the eventfd counter is used as the ACPI notification value
> placeholder, the eventfd signaling needs to be serialized so that
> notification values do not get coalesced. Therefore, ACPI notification
> values are buffered and signaled one by one, each once the previous
> notification value has been consumed.

I don't see anything that prevents this queuing mechanism from creating
an arbitrarily large queue, don't we need to drop events at some point
to avoid introducing an exploit vector?  Aren't these notifications
often for things like "device check", where queuing duplicate entries
doesn't make sense and perhaps the most recent notification is the only
relevant value otherwise?  If we only need to avoid calling
eventfd_signal() while a non-zero value is pending, couldn't we call
eventfd_ctx_do_read() ourselves to clear the old value rather than
queuing?  Thanks,
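
I.e., roughly, as a sketch of that alternative (assuming the eventfd ctx
is reachable from the device, and ignoring locking against a concurrent
reader):

static void vfio_pci_core_acpi_notify(acpi_handle handle, u32 event, void *data)
{
	struct vfio_pci_core_device *vdev = data;
	struct eventfd_ctx *efdctx = vdev->acpi_notify_trigger;	/* assumed field */
	u64 cnt;

	/* Discard any unconsumed value so the new one is not summed into it... */
	eventfd_ctx_do_read(efdctx, &cnt);
	/* ...then publish only the most recent notification value. */
	eventfd_signal(efdctx, event);
}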

Alex
 
> Signed-off-by: Grzegorz Jaszczyk 
> ---
> Changelog v2..v3:
> - Fix compilation warnings when building with "W=1"
> Changelog v1..v2:
> - The v2 implementation is actually completely different from v1:
>   instead of using ACPI netlink events for propagating ACPI
>   notifications to user space, it takes advantage of eventfd, which can
>   provide better VMM and device isolation: it allows using a dedicated
>   channel associated with the device for such events, such that the VMM
>   has direct access.
> - Using the eventfd counter as the notification value placeholder was
>   suggested in v1 and requires the additional serialization logic
>   introduced in v2.
> - Since vfio-pci supports non-ACPI platforms, address the !CONFIG_ACPI
>   case.
> - v1 discussion: 
> https://patchwork.kernel.org/project/kvm/patch/20230307220553.631069-1-...@semihalf.com/
> ---
>  drivers/vfio/pci/vfio_pci_core.c | 214 +++
>  include/linux/vfio_pci_core.h    |  11 ++
>  include/uapi/linux/vfio.h        |  15 +++
>  3 files changed, 240 insertions(+)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index a5ab416cf476..2d6101e89fde 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -10,6 +10,7 @@
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>  
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -679,6 +680,70 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
>  }
>  EXPORT_SYMBOL_GPL(vfio_pci_core_disable);
>  
> +struct notification_queue {
> +	int notification_val;
> +	struct list_head notify_val_next;
> +};
> +
> +#if IS_ENABLED(CONFIG_ACPI)
> +static void vfio_pci_core_acpi_notify(acpi_handle handle, u32 event, void *data)
> +{
> +	struct vfio_pci_core_device *vdev = (struct vfio_pci_core_device *)data;
> +	struct vfio_acpi_notification *acpi_notify = vdev->acpi_notification;
> +	struct notification_queue *entry;
> +
> +	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
> +	if (!entry)
> +		return;
> +
> +	entry->notification_val = event;
> +	INIT_LIST_HEAD(&entry->notify_val_next);
> +
> +	mutex_lock(&acpi_notify->notification_list_lock);
> +	list_add_tail(&entry->notify_val_next, &acpi_notify->notification_list);
> +	mutex_unlock(&acpi_notify->notification_list_lock);
> +
> +	schedule_work(&acpi_notify->acpi_notification_work);
> +}
> +
> +static void vfio_pci_acpi_notify_close_device(struct vfio_pci_core_device *vdev)
> +{
> +	struct vfio_acpi_notification *acpi_notify = vdev->acpi_notification;
> +	struct pci_dev *pdev = vdev->pdev;
> +	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
> +	struct notification_queue *entry, *entry_tmp;
> +	u64 cnt;
> +
> +	if (!acpi_notify || !acpi_notify->acpi_notify_trigger)
> +		return;
> +
> +	acpi_remove_notify_handler(adev->handle, ACPI_DEVICE_NOTIFY,
> +				   vfio_pci_core_acpi_notify);
> +
> + even