On Fri, Jul 17, 2020 at 01:00:25AM -0700, Ram Pai wrote:
>  
> +int kvmppc_uv_migrate_mem_slot(struct kvm *kvm,
> +             const struct kvm_memory_slot *memslot)

Don't see any callers for this outside of this file, so why not static?
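i.e. just (same signature as in the patch, only the storage class changed):

```c
static int kvmppc_uv_migrate_mem_slot(struct kvm *kvm,
		const struct kvm_memory_slot *memslot)
```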

> +{
> +     unsigned long gfn = memslot->base_gfn;
> +     struct vm_area_struct *vma;
> +     unsigned long start, end;
> +     int ret = 0;
> +
> +     while (kvmppc_next_nontransitioned_gfn(memslot, kvm, &gfn)) {

So kvmppc_next_nontransitioned_gfn() checks the state of the gfn under
uvmem_lock, but the lock is dropped again before the migration below.

> +
> +             mmap_read_lock(kvm->mm);
> +             start = gfn_to_hva(kvm, gfn);
> +             if (kvm_is_error_hva(start)) {
> +                     ret = H_STATE;
> +                     goto next;
> +             }
> +
> +             end = start + (1UL << PAGE_SHIFT);
> +             vma = find_vma_intersection(kvm->mm, start, end);
> +             if (!vma || vma->vm_start > start || vma->vm_end < end) {
> +                     ret = H_STATE;
> +                     goto next;
> +             }
> +
> +             mutex_lock(&kvm->arch.uvmem_lock);
> +             ret = kvmppc_svm_migrate_page(vma, start, end,
> +                             (gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);

What guarantees that the gfn is still in the same (nontransitioned) state
when you do the migration here? uvmem_lock was dropped in between, so the
state could have changed by now.
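One way to close that window would be to re-validate the gfn's state after
re-taking uvmem_lock, before migrating. A sketch only:
kvmppc_gfn_is_nontransitioned() is a made-up helper name here, not an
existing function.

```c
		mutex_lock(&kvm->arch.uvmem_lock);
		/*
		 * Re-check under the lock: the gfn may have transitioned
		 * while uvmem_lock was dropped (hypothetical helper).
		 */
		if (!kvmppc_gfn_is_nontransitioned(kvm, gfn)) {
			mutex_unlock(&kvm->arch.uvmem_lock);
			goto next;
		}
		ret = kvmppc_svm_migrate_page(vma, start, end,
				(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
```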

Regards,
Bharata.