On Thu, 2015-09-24 at 03:29 -0600, Jan Beulich wrote:
> >>> On 24.09.15 at 06:31, <dario.faggi...@citrix.com> wrote:
> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> >  #define csched_balance_mask (CSCHED_PCPU(smp_processor_id())->balance_mask)
> > 
> > +#define csched_balance_mask_cpu(c) (CSCHED_PCPU(c)->balance_mask)
> 
> csched_runq_steal() gets called with peer_cpu's runqueue lock held
> afaics, but uses smp_processor_id()'s balance_mask. I.e. it looks to
> me that what Jürgen suggested as an option is actually a requirement.
> 
And I'm very much in favour of taking his suggestion, because I
actually like it.

Correctness should not be an issue, though. In fact, here is the story
about csched_runq_steal():

  schedule()
    cpu = smp_processor_id()
    lock = pcpu_schedule_lock_irq(cpu);
    sched = this_cpu(scheduler);
    next_slice = sched->do_schedule(sched, ...);
     |
     --> csched_schedule()
           cpu = smp_processor_id();
           snext = csched_load_balance(..., cpu, ...);
             peer_cpu = xxx;
             lock = pcpu_schedule_trylock(peer_cpu);
             speer = csched_runq_steal(peer_cpu, cpu, ...);
               csched_balance_cpumask(..., csched_balance_mask);
             pcpu_schedule_unlock(lock, peer_cpu);
    pcpu_schedule_unlock_irq(lock, cpu);

So, by the time csched_runq_steal() runs, we hold both peer_cpu's
runqueue lock (taken with the trylock) and our own (taken by
schedule() itself). That means we can safely use smp_processor_id()'s
scratch space: nothing else can be running on this CPU and touching
its balance_mask while we do.
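FWIW, with the new macro in place, actually taking the suggestion
should be mostly mechanical. Just to illustrate what I mean, something
along these lines (only a sketch against the hunk quoted above, not a
tested patch; the hunk header is a placeholder and the elided
arguments, "...", are as in the trace):

--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ csched_runq_steal(peer_cpu, cpu, ...) @@
-            csched_balance_cpumask(..., csched_balance_mask);
+            csched_balance_cpumask(..., csched_balance_mask_cpu(cpu));

I.e., csched_runq_steal() keeps using the scratch space of the CPU it
is stealing work for, but names it explicitly via the new macro, so
the code no longer looks like it relies on smp_processor_id() while
holding someone else's runqueue lock.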
In any case, thanks a lot for having a look.

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)