> From: Nicolin Chen <nicol...@nvidia.com>
> Sent: Sunday, May 18, 2025 11:22 AM
> 
> + * @nesting_parent_iova: Base address of the queue memory in the guest
> + *                       physical address space

'nesting_parent' is a bit redundant. 'iova' should be sufficient as it
implies an s2 input address.


> + * @length: Length of the queue memory
> + *
> + * Allocate a HW queue object for a vIOMMU-specific HW-accelerated queue,
> + * which allows HW to access a guest queue memory described using
> + * @nesting_parent_iova and @length.
> + *
> + * Upon success, the underlying physical pages of the guest queue memory
> + * will be pinned to prevent VMM from unmapping them in the IOAS until
> + * the HW queue gets destroyed.

This is conditional — pinning only happens when the driver requests it,
so the comment shouldn't state it unconditionally.

> +void iommufd_hw_queue_destroy(struct iommufd_object *obj)
> +{
> +     struct iommufd_hw_queue *hw_queue =
> +             container_of(obj, struct iommufd_hw_queue, obj);
> +     struct iommufd_viommu *viommu = hw_queue->viommu;
> +
> +     if (viommu->ops->hw_queue_destroy)
> +             viommu->ops->hw_queue_destroy(hw_queue);
> +     iopt_unpin_pages(&viommu->hwpt->ioas->iopt, hw_queue->base_addr,
> +                      hw_queue->length, true);

Check the flag here — the pages are only pinned conditionally, so
unpinning unconditionally on destroy is wrong.

> +
> +             /* Validate if the underlying physical pages are contiguous */
> +             for (i = 1; i < max_npages && pages[i]; i++) {

pages[i] must be valid here after a successful pin — if a NULL entry
can appear, isn't that a bug to warn on rather than a loop-exit
condition?

Reviewed-by: Kevin Tian <kevin.t...@intel.com>
