----- On Oct 17, 2018, at 3:19 AM, Srikar Dronamraju sri...@linux.vnet.ibm.com wrote:

> Hi Mathieu,
> 
>> +static int do_cpu_opv(struct cpu_op *cpuop, int cpuopcnt,
>> +                  struct cpu_opv_vaddr *vaddr_ptrs, int cpu)
>> +{
>> +    struct mm_struct *mm = current->mm;
>> +    int ret;
>> +
>> +retry:
>> +    if (cpu != raw_smp_processor_id()) {
>> +            ret = push_task_to_cpu(current, cpu);
>> +            if (ret)
>> +                    goto check_online;
>> +    }
>> +    down_read(&mm->mmap_sem);
>> +    ret = vaddr_ptrs_check(vaddr_ptrs);
>> +    if (ret)
>> +            goto end;
>> +    preempt_disable();
>> +    if (cpu != smp_processor_id()) {
>> +            preempt_enable();
>> +            up_read(&mm->mmap_sem);
>> +            goto retry;
>> +    }
> 
> If we have a higher-priority task (or tasks) pinned to the cpu, don't we
> end up busy-looping until that task exits/sleeps?

You're right!

How about we ditch the thread migration altogether, and simply perform
the cpu_opv operations in an IPI handler?

This is possible now that cpu_opv uses a temporary vmap() rather than
trying to touch the user-space pages through the current thread's page
tables.
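A rough sketch of what I have in mind (not the posted patch; the helper
names and argument struct here are illustrative, reusing the existing
__do_cpu_opv()):

```c
/*
 * Hypothetical sketch: execute the operation vector on the target CPU
 * from an IPI handler instead of migrating the current task there.
 * This relies on the operands having already been pinned and vmap()'d,
 * since the IPI handler runs with interrupts disabled and cannot sleep
 * or fault on user mappings.
 */
struct cpu_opv_ipi_args {
	struct cpu_op *cpuop;
	int cpuopcnt;
	int ret;
};

static void cpu_opv_ipi_func(void *info)
{
	struct cpu_opv_ipi_args *args = info;

	/* Runs on the target CPU, interrupts disabled. */
	args->ret = __do_cpu_opv(args->cpuop, args->cpuopcnt);
}

static int do_cpu_opv_ipi(struct cpu_op *cpuop, int cpuopcnt, int cpu)
{
	struct cpu_opv_ipi_args args = {
		.cpuop = cpuop,
		.cpuopcnt = cpuopcnt,
	};
	int ret;

	/*
	 * smp_call_function_single() with wait=1 blocks until the
	 * handler completes, and returns -ENXIO if the target CPU is
	 * offline, so the retry/check_online dance and the dependency
	 * on push_task_to_cpu() both go away.
	 */
	ret = smp_call_function_single(cpu, cpu_opv_ipi_func, &args, 1);
	if (ret)
		return ret;
	return args.ret;
}
```

This also sidesteps the priority-inversion concern: we never need the
scheduler to place the current task on the target CPU at all.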

Thoughts?

Thanks,

Mathieu

> 
>> +    ret = __do_cpu_opv(cpuop, cpuopcnt);
>> +    preempt_enable();
>> +end:
>> +    up_read(&mm->mmap_sem);
>> +    return ret;
>> +
>> +check_online:
>> +    /*
>> +     * push_task_to_cpu() returns -EINVAL if the requested cpu is not part
>> +     * of the current thread's cpus_allowed mask.
>> +     */
>> +    if (ret == -EINVAL)
>> +            return ret;
>> +    get_online_cpus();
>> +    if (cpu_online(cpu)) {
>> +            put_online_cpus();
>> +            goto retry;
>> +    }

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
