On 16/05/2019 14:20, Jan Beulich wrote:
>>>> On 06.05.19 at 08:56, <jgr...@suse.com> wrote:
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -314,14 +314,42 @@ static struct sched_item *sched_alloc_item(struct vcpu *v)
>>          return NULL;
>>  }
>>
>> -int sched_init_vcpu(struct vcpu *v, unsigned int processor)
>> +static unsigned int sched_select_initial_cpu(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>
> const (perhaps also the function parameter)?
Yes.

>
>> +    nodeid_t node;
>> +    cpumask_t cpus;
>
> To be honest, I'm not happy to see new on-stack instances of
> cpumask_t appear. Seeing ...
>
>> +    cpumask_clear(&cpus);
>> +    for_each_node_mask ( node, d->node_affinity )
>> +        cpumask_or(&cpus, &cpus, &node_to_cpumask(node));
>> +    cpumask_and(&cpus, &cpus, cpupool_domain_cpumask(d));
>> +    if ( cpumask_empty(&cpus) )
>> +        cpumask_copy(&cpus, cpupool_domain_cpumask(d));
>
> ... this fallback you use anyway, is there any issue with it also
> serving the case where zalloc_cpumask_var() fails?

Either that, or:

- just fail to create the vcpu in that case, as chances are rather high
  e.g. the following arch_vcpu_create() will fail anyway
- take the scheduling lock and use cpumask_scratch
- (ab)use one of the available cpumasks in struct sched_unit which are
  not in use yet

My preference would be using cpumask_scratch.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
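For readers following along, the selection logic under discussion can be modeled outside the hypervisor. The sketch below is NOT the actual Xen code: it replaces cpumask_t with plain 64-bit words, models node_to_cpumask() as a hypothetical lookup table, and only demonstrates the algorithm from the quoted hunk (union the CPU masks of all nodes in the domain's node affinity, intersect with the cpupool's mask, and fall back to the whole cpupool mask if the intersection is empty — the same fallback Jan suggests reusing for the allocation-failure case).

```c
#include <stdint.h>

#define MAX_NODES 4

/* Hypothetical per-node CPU masks: node n owns CPUs 16n .. 16n+15. */
static const uint64_t node_cpumask[MAX_NODES] = {
    0x000000000000FFFFULL,
    0x00000000FFFF0000ULL,
    0x0000FFFF00000000ULL,
    0xFFFF000000000000ULL,
};

/*
 * Simplified model of sched_select_initial_cpu()'s mask computation:
 * union the masks of all nodes set in node_affinity, intersect with
 * the cpupool's mask, and fall back to the full cpupool mask when the
 * intersection is empty.
 */
uint64_t select_initial_cpus(uint64_t node_affinity, uint64_t pool_cpus)
{
    uint64_t cpus = 0;
    unsigned int node;

    for ( node = 0; node < MAX_NODES; node++ )
        if ( node_affinity & (1ULL << node) )
            cpus |= node_cpumask[node];

    cpus &= pool_cpus;

    return cpus ? cpus : pool_cpus;
}
```

Note that in the real code the result would land in a scratch mask (e.g. cpumask_scratch, held under the scheduling lock) rather than a return value, which is exactly the point of the on-stack cpumask_t objection above.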