>>> On 16.05.19 at 14:46, <jgr...@suse.com> wrote:
> On 16/05/2019 14:20, Jan Beulich wrote:
>>>>> On 06.05.19 at 08:56, <jgr...@suse.com> wrote:
>>> --- a/xen/common/schedule.c
>>> +++ b/xen/common/schedule.c
>>> @@ -314,14 +314,42 @@ static struct sched_item *sched_alloc_item(struct vcpu *v)
>>>      return NULL;
>>>  }
>>>  
>>> -int sched_init_vcpu(struct vcpu *v, unsigned int processor)
>>> +static unsigned int sched_select_initial_cpu(struct vcpu *v)
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    nodeid_t node;
>>> +    cpumask_t cpus;
>> 
>> To be honest, I'm not happy to see new on-stack instances of
>> cpumask_t appear. Seeing ...
>> 
>>> +    cpumask_clear(&cpus);
>>> +    for_each_node_mask ( node, d->node_affinity )
>>> +        cpumask_or(&cpus, &cpus, &node_to_cpumask(node));
>>> +    cpumask_and(&cpus, &cpus, cpupool_domain_cpumask(d));
>>> +    if ( cpumask_empty(&cpus) )
>>> +        cpumask_copy(&cpus, cpupool_domain_cpumask(d));
>> 
>> ... this fallback you use anyway, is there any issue with it also
>> serving the case where zalloc_cpumask_var() fails?
> 
> Either that, or:
> 
> - just fail to create the vcpu in that case, as chances are rather
>   high that e.g. the following arch_vcpu_create() will fail anyway

Ah, right, this is for vCPU creation only anyway.

> - take the scheduling lock and use cpumask_scratch
> - (ab)use one of the available cpumasks in struct sched_unit which
>   are not in use yet
> 
> My preference would be using cpumask_scratch.

I'm actually fine with any of the variants, including that of simply
returning -ENOMEM.

Jan
