On 1/13/23 1:52 PM, Jason Gunthorpe wrote:
> On Fri, Jan 13, 2023 at 12:11:32PM -0500, Matthew Rosato wrote:
>> @@ -462,9 +520,19 @@ static inline void vfio_device_pm_runtime_put(struct 
>> vfio_device *device)
>>  static int vfio_device_fops_release(struct inode *inode, struct file *filep)
>>  {
>>      struct vfio_device *device = filep->private_data;
>> +    struct kvm *kvm = NULL;
>>  
>>      vfio_device_group_close(device);
>>  
>> +    mutex_lock(&device->dev_set->lock);
>> +    if (device->open_count == 0 && device->kvm) {
>> +            kvm = device->kvm;
>> +            device->kvm = NULL;
>> +    }
>> +    mutex_unlock(&device->dev_set->lock);
> 
> This still doesn't seem right, another thread could have incr'd the
> open_count already 
> 
> This has to be done at the moment open_count is decremented to zero,
> while still under the original lock.

Hmm..  Fair.  Well, we could go back to clearing device->kvm in 
vfio_device_last_close(), but the group lock is held there, so we can't do the 
kvm_put immediately at that time -- unless we go back to the notion of deferring 
the kvm_put to a workqueue, but now from vfio.  If we do that, I think we also 
have to scrap the idea of putting the kvm_put_kvm function pointer into 
device->put_kvm (or otherwise stash it along with the kvm value to be picked up 
by the scheduled work).

Another thought: vfio_close_device() / vfio_device_group_close() could return 
the device->open_count that was read while holding dev_set->lock, as an 
indicator of whether vfio_device_last_close() was called.  Then the caller can 
use the stashed kvm value -- it doesn't matter what's currently in device->kvm 
or what the current device->open_count is; we know that kvm reference needs to 
be put.

e.g.:
struct kvm *kvm = device->kvm;
void (*put)(struct kvm *kvm) = device->put_kvm;
int opened = vfio_device_group_close(device);

if (opened == 0 && kvm)
        put(kvm);
