Yan Zhao <[email protected]> writes:
> On Fri, Oct 17, 2025 at 01:11:52PM -0700, Ackerley Tng wrote:
>> For shared to private conversions, if refcounts on any of the folios
>> within the range are elevated, fail the conversion with -EAGAIN.
>>
>> At the point of shared to private conversion, all folios in range are
>> also unmapped. The filemap_invalidate_lock() is held, so no faulting
>> can occur. Hence, from that point on, only transient refcounts can be
>> taken on the folios associated with that guest_memfd.
>>
>> It is therefore safe to do the conversion from shared to private.
>>
>> After conversion is complete, refcounts may become elevated, but that
>> is fine since users of transient refcounts don't actually access
>> memory.
>>
>> For private to shared conversions, there are no refcount checks: any
>> transient refcounts are expected to be dropped soon, and the
>> conversion process will spin waiting for them to go away.
> Where's the code to spin?
>
Thanks, I will fix the commit message for the next revision.
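
For illustration, the wait is conceptually along these lines (hypothetical
sketch only, not code from this series; "expected" stands for whatever
baseline refcount the caller computes, e.g. the filemap's own reference):

        /*
         * Hypothetical: spin until only the expected baseline references
         * remain on the folio, i.e. all transient refcounts have been
         * dropped.
         */
        static void gmem_wait_for_transient_refs(struct folio *folio,
                                                 int expected)
        {
                while (folio_ref_count(folio) > expected)
                        cpu_relax();
        }

Since holders of transient refcounts never actually access the memory,
draining the count like this is enough before completing the conversion.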
>> +/*
>> + * Preallocate memory for attributes to be stored on a maple tree, pointed to
>> + * by mas. Adjacent ranges with attributes identical to the new attributes
>> + * will be merged. Also sets mas's bounds up for storing attributes.
>> + *
>> + * This maintains the invariant that ranges with the same attributes will
>> + * always be merged.
>> + */
>> +static int kvm_gmem_mas_preallocate(struct ma_state *mas, u64 attributes,
>> + pgoff_t start, size_t nr_pages)
>> +{
>> + pgoff_t end = start + nr_pages;
>> + pgoff_t last = end - 1;
>> + void *entry;
>> +
>> + /* Try extending range. entry is NULL on overflow/wrap-around. */
>> + mas_set_range(mas, end, end);
>> + entry = mas_find(mas, end);
>> + if (entry && xa_to_value(entry) == attributes)
>> + last = mas->last;
>> +
>> + mas_set_range(mas, start - 1, start - 1);
> Check start == 0 ?
>
Thanks!
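
Will guard the whole backward probe so that start - 1 cannot wrap
(untested sketch):

        /* Try merging backward; skip the probe when start == 0. */
        if (start) {
                mas_set_range(mas, start - 1, start - 1);
                entry = mas_find(mas, start - 1);
                if (entry && xa_to_value(entry) == attributes)
                        start = mas->index;
        }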
>> + entry = mas_find(mas, start - 1);
>> + if (entry && xa_to_value(entry) == attributes)
>> + start = mas->index;
>> +
>> + mas_set_range(mas, start, last);
>> + return mas_preallocate(mas, xa_mk_value(attributes), GFP_KERNEL);
>> +}
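
For anyone reading along: the preallocation pairs with a later
mas_store_prealloc() under the filemap invalidate lock, roughly like this
(hypothetical caller sketch; "mt" stands for whichever maple tree holds
the inode's attributes):

        MA_STATE(mas, mt, index, index + nr_pages - 1);
        int ret;

        ret = kvm_gmem_mas_preallocate(&mas, attributes, index, nr_pages);
        if (ret)
                return ret;
        /* mas's bounds now span the merged range set up above. */
        mas_store_prealloc(&mas, xa_mk_value(attributes));

Because both neighbours are folded into [start, last] before
preallocating, the store keeps equal-attribute ranges merged: storing
attributes A over [10, 19] next to an existing A entry at [0, 9] leaves a
single [0, 19] entry.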
> ...
>
>> +static long kvm_gmem_set_attributes(struct file *file, void __user *argp)
>> +{
>> + struct gmem_file *f = file->private_data;
>> + struct inode *inode = file_inode(file);
>> + struct kvm_memory_attributes2 attrs;
>> + pgoff_t err_index;
>> + size_t nr_pages;
>> + pgoff_t index;
>> + int r;
>> +
>> + if (copy_from_user(&attrs, argp, sizeof(attrs)))
>> + return -EFAULT;
>> +
>> + if (attrs.flags)
>> + return -EINVAL;
>> + if (attrs.attributes & ~kvm_supported_mem_attributes(f->kvm))
>> + return -EINVAL;
>> + if (attrs.size == 0 || attrs.offset + attrs.size < attrs.offset)
>> + return -EINVAL;
>> + if (!PAGE_ALIGNED(attrs.offset) || !PAGE_ALIGNED(attrs.offset))
> Should be
> if (!PAGE_ALIGNED(attrs.offset) || !PAGE_ALIGNED(attrs.size))
> ?
>
Thanks!
>> + return -EINVAL;
>> +
>> + if (attrs.offset > inode->i_size ||
> Should be
> if (attrs.offset >= inode->i_size ||
> ?
Thanks!
>> + attrs.offset + attrs.size > inode->i_size)
>> + return -EINVAL;
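
With both fixes above folded in, the validation becomes:

        if (!PAGE_ALIGNED(attrs.offset) || !PAGE_ALIGNED(attrs.size))
                return -EINVAL;

        if (attrs.offset >= inode->i_size ||
            attrs.offset + attrs.size > inode->i_size)
                return -EINVAL;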
>> +
>> + nr_pages = attrs.size >> PAGE_SHIFT;
>> + index = attrs.offset >> PAGE_SHIFT;
>> + r = __kvm_gmem_set_attributes(inode, index, nr_pages, attrs.attributes,
>> + &err_index);
>> + if (r) {
>> + attrs.error_offset = err_index << PAGE_SHIFT;
>> +
>> + if (copy_to_user(argp, &attrs, sizeof(attrs)))
>> + return -EFAULT;
>> + }
>> +
>> + return r;
>> +}
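
For completeness, userspace usage could look roughly like this, retrying
on -EAGAIN for shared-to-private conversions (hypothetical sketch:
GMEM_SET_ATTRIBUTES is a placeholder ioctl number and gmem_fd a
guest_memfd descriptor; only the struct fields come from this patch):

        struct kvm_memory_attributes2 attrs = {
                .offset = 0,
                .size = 2UL << 20,      /* first 2M of the guest_memfd */
                .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
        };

        while (ioctl(gmem_fd, GMEM_SET_ATTRIBUTES, &attrs) < 0) {
                if (errno != EAGAIN)
                        err(1, "conversion failed at offset %llu",
                            (unsigned long long)attrs.error_offset);
                /* Elevated refcounts in range are transient; retry. */
        }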