On Fri, 17 Mar 2017, Dario Faggioli wrote:
> Hello,
>
> This patch series implements what I call the 'null' scheduler.
>
> It's a very simple, very static scheduling policy that always schedules the
> same vCPU(s) on the same pCPU(s). That's it.
>
> If there are fewer vCPUs than pCPUs, some of the pCPUs are _always_ idle. If
> there are more, some vCPUs _never_ run.
> That is not entirely true, as there is some logic to make sure that vCPUs
> waiting to run are executed, for instance, on a new pCPU that enters the
> cpupool, and things like that.
>
> The demand for this comes from the Xen on ARM people and the embedded world
> in general (Hey, Stefano! :-P), where it is not uncommon to have super static
> systems that perceive an advanced general-purpose scheduler as pure
> overhead.
Yep, this scheduler is exactly what I want, thanks! :-)

> As a matter of fact, this may turn out useful in less embedded scenarios,
> like High Performance Computing (where, again, scheduling is often
> unnecessary overhead), but even in some of our much more classic Xen use
> cases (like consolidation; Cc-ing Jonathan and Marcus, who said they were
> interested in it).
>
> The scheduler is really simple, and especially the hot paths --i.e., sleep,
> wakeup and schedule-- are super lean and quick, in 99% of the cases. All the
> slightly more complicated logic for dealing with pCPUs coming and going from
> a cpupool that uses this scheduler resides in functions that handle
> insertion, removal and migration of vCPUs, which are only called when such
> configuration changes happen (so, typically, "offline", in most of the
> embedded use cases).
>
> I implemented support for hard affinity in order to provide at least a
> rudimentary interface for interacting with the scheduler and affecting the
> placement (it's called assignment within the code) of vCPUs on pCPUs.
>
> I've tested the scheduler both inside a cpupool (using both Credit1 and
> Credit2 as boot schedulers) and as the default, choosing it at boot and
> using it for Dom0 and a few other domains. In the latter case, you probably
> want to limit the number of Dom0's vCPUs too, or there will be very few left
> to experiment with! :-P
>
> I haven't done any performance or overhead measurements so far, but I will
> soon enough.
>
> I also consider this to be experimental, and I'll also write a feature
> document ASAP.
>
> Thanks and Regards,
> Dario
> ---
> Dario Faggioli (3):
>       xen: sched: introduce the 'null' semi-static scheduler
>       xen: sched_null: support for hard affinity
>       tools: sched: add support for 'null' scheduler
>
>  docs/misc/xen-command-line.markdown |    2
>  tools/libxl/libxl.h                 |    6
>  tools/libxl/libxl_sched.c           |   24 +
>  tools/libxl/libxl_types.idl         |    1
>  xen/common/Kconfig                  |   11
>  xen/common/Makefile                 |    1
>  xen/common/sched_null.c             |  837 +++++++++++++++++++++++++++++++++++
>  xen/common/schedule.c               |    2
>  xen/include/public/domctl.h         |    1
>  9 files changed, 884 insertions(+), 1 deletion(-)
>  create mode 100644 xen/common/sched_null.c
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xenfirstname.lastname@example.org
https://lists.xen.org/xen-devel