On Thu, Jul 25, 2024 at 6:08 PM Michal Hocko <[email protected]> wrote:
>
> On Thu 25-07-24 10:50:45, Barry Song wrote:
> > On Thu, Jul 25, 2024 at 12:27 AM Michal Hocko <[email protected]> wrote:
> > >
> > > On Wed 24-07-24 20:55:40, Barry Song wrote:
> [...]
> > > > diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> > > > index 791d38d6284c..eff700e5f7a2 100644
> > > > --- a/drivers/vdpa/vdpa_user/iova_domain.c
> > > > +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> > > > @@ -287,28 +287,44 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> > > > {
> > > > struct vduse_bounce_map *map;
> > > > unsigned long i, count;
> > > > + struct page **pages = NULL;
> > > >
> > > > write_lock(&domain->bounce_lock);
> > > > if (!domain->user_bounce_pages)
> > > > goto out;
> > > > -
> > > > count = domain->bounce_size >> PAGE_SHIFT;
> > > > + write_unlock(&domain->bounce_lock);
> > > > +
> > > > + pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
> > > > + for (i = 0; i < count; i++)
> > > > + pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
> > >
> > > AFAICS vduse_domain_release calls this function with
> > > spin_lock(&domain->iotlb_lock) so dropping &domain->bounce_lock is not
> > > sufficient.
> >
> > yes. this is true:
> >
> > static int vduse_domain_release(struct inode *inode, struct file *file)
> > {
> > struct vduse_iova_domain *domain = file->private_data;
> >
> > spin_lock(&domain->iotlb_lock);
> > vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
> > vduse_domain_remove_user_bounce_pages(domain);
> > vduse_domain_free_kernel_bounce_pages(domain);
> > spin_unlock(&domain->iotlb_lock);
> > put_iova_domain(&domain->stream_iovad);
> > put_iova_domain(&domain->consistent_iovad);
> > vhost_iotlb_free(domain->iotlb);
> > vfree(domain->bounce_maps);
> > kfree(domain);
> >
> > return 0;
> > }
> >
> > This is quite a pain. I admit I don't have deep knowledge of this
> > driver, and I don't think it's safe to release the two locks and then
> > reacquire them. The situation is rather complex, so I would prefer
> > that the VDPA maintainers take the lead in implementing a proper fix.
>
> Would it be possible to move all that work to a deferred context?
My understanding is that we need to hold both iotlb_lock and bounce_lock
to implement the change correctly. As long as a deferred context would
still have to acquire these same two locks, moving the work there doesn't
seem to make any difference.
I can do the memory preallocation before spin_lock(&domain->iotlb_lock),
but I don't know whether "count" can change between the preallocation and
the point where the pages are actually used.
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 791d38d6284c..7ec87ef33d42 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -544,9 +544,12 @@ static int vduse_domain_release(struct inode *inode, struct file *file)
{
struct vduse_iova_domain *domain = file->private_data;
+ struct page **pages;
+ unsigned long count;
+
+ spin_lock(&domain->iotlb_lock); /* maybe also take bounce_lock? */
+ count = domain->bounce_size >> PAGE_SHIFT;
+ spin_unlock(&domain->iotlb_lock);
+
+ preallocate_count_pages(&pages, count);
+
....
spin_lock(&domain->iotlb_lock);
vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
- vduse_domain_remove_user_bounce_pages(domain);
+ vduse_domain_remove_user_bounce_pages(domain, pages);
vduse_domain_free_kernel_bounce_pages(domain);
spin_unlock(&domain->iotlb_lock);
put_iova_domain(&domain->stream_iovad);
> --
> Michal Hocko
> SUSE Labs