On 29/05/2019 05:23, Andrew Cooper wrote:
> Drop introduced trailing whitespace, excessively long lines, mal-indentation,
> superfluous use of PRI macros for int-or-smaller types, and incorrect PRI
> macros for gfns and mfns.
>
> Signed-off-by: Andrew Cooper
> ---
> CC: George Dunlap
> CC: Tamas K
Drop introduced trailing whitespace, excessively long lines, mal-indentation,
superfluous use of PRI macros for int-or-smaller types, and incorrect PRI
macros for gfns and mfns.
Signed-off-by: Andrew Cooper
---
CC: George Dunlap
CC: Tamas K Lengyel
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau
This also introduced the top-level Guest Documentation section.
Signed-off-by: Andrew Cooper
---
CC: George Dunlap
CC: Ian Jackson
CC: Jan Beulich
CC: Konrad Rzeszutek Wilk
CC: Stefano Stabellini
CC: Tim Deegan
CC: Wei Liu
CC: Julien Grall
v2:
* Drop AT ligatures
* Move into an x86
On 29/05/2019 02:34, Tamas K Lengyel wrote:
>> And some questions.
>>
>> 1) I'm guessing the drakvuf_inject_trap(drakvuf, 0x293e6a0, 0) call is
>> specific to the exact windows kernel in use?
>>
>> 2) In vmi_init(), what is the purpose of fmask and zero_page_gfn? You add
>> one extra gfn to the
> @Tamas, if you could check the traps implementation.
I had a quick look and it seems like you forgot to set the mem_access
permissions on the pages. You need the remapped gfns to be marked
execute-only in the altp2m_idx view, and their actual gfn completely
inaccessible in altp2m_idx. You need
> And some questions.
>
> 1) I'm guessing the drakvuf_inject_trap(drakvuf, 0x293e6a0, 0) call is
> specific to the exact windows kernel in use?
>
> 2) In vmi_init(), what is the purpose of fmask and zero_page_gfn? You add
> one extra gfn to the guest, called zero_page, and fill it with 1's from
On 28/05/2019 13:33, Mathieu Tarral wrote:
> Hi Andrew,
>
>>> The bug is still here, so we can exclude a microcode issue.
>> Good - that is one further angle excluded. Always make sure you are
>> running with up-to-date microcode, but it looks like we are back to
>> investigating a logical bug in
On 5/28/19 6:48 PM, Stefano Stabellini wrote:
> From: Stefano Stabellini
>
> On arm64 swiotlb is often (not always) already initialized by mem_init.
> We don't want to initialize it twice, which would trigger a second
> memory allocation. Moreover, the second memory pool is typically made of
>
On Thu, 23 May 2019, Julien Grall wrote:
> On 23/05/2019 00:26, Stefano Stabellini wrote:
> > From: Stefano Stabellini
> >
> > On arm64 swiotlb is already initialized by mem_init. We don't want to
>
> Arm64 will not always initialize the swiotlb. It will only be done if the user
> forces it or
From: Stefano Stabellini
On arm64 swiotlb is often (not always) already initialized by mem_init.
We don't want to initialize it twice, which would trigger a second
memory allocation. Moreover, the second memory pool is typically made of
high pages and ends up replacing the original memory pool
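The fix described above boils down to a run-once guard around the pool setup; a minimal sketch of the pattern (names hypothetical, not the actual kernel code):

```c
#include <stdbool.h>

static bool pool_initialized;
static int init_calls;

/* Stand-in pool allocator: counts how often it actually runs. */
static int do_init(void)
{
    init_calls++;
    return 0;
}

/* Skip the second allocation if mem_init already set the pool up. */
static int init_pool_once(void)
{
    if (pool_initialized)
        return 0;          /* already initialized, no second pool */
    int rc = do_init();
    if (rc == 0)
        pool_initialized = true;
    return rc;
}
```

Calling it twice performs only one allocation, which is exactly the double-initialization the patch avoids.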
flight 137012 linux-4.19 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/137012/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-qemut-rhel6hvm-amd 12 guest-start/redhat.repeat fail REGR. vs.
129313
> diff --git a/include/xen/balloon.h b/include/xen/balloon.h
> index 4914b93a23f2..a72ef3f88b39 100644
> --- a/include/xen/balloon.h
> +++ b/include/xen/balloon.h
> @@ -28,14 +28,6 @@ int alloc_xenballooned_pages(int nr_pages, struct page
> **pages);
> void free_xenballooned_pages(int nr_pages,
Ian Jackson writes ("[PATCH STABLE] tools/firmware: update OVMF Makefile, when
necessary"):
> Now done, including for staging-4.6. 4.6 is out of security support,
> but osstest wants to be able to build 4.6 so that it can test
> migration from 4.6 to 4.7, and 4.7 *is* still (just about) in
flight 137009 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/137009/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemut-ws16-amd64 13 guest-saverestore fail REGR. vs. 136516
Tests which
flight 137004 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/137004/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-shadow 18 guest-localmigrate/x10 fail REGR. vs. 136823
Tests which did not
On 5/20/19 4:13 PM, Julien Grall wrote:
> Hi,
>
> On 10/05/2019 14:25, Julien Grall wrote:
>>
>>
>> On 10/05/2019 14:24, Jan Beulich wrote:
>> On 10.05.19 at 15:02, wrote:
>>>
On 10/05/2019 12:35, Jan Beulich wrote:
On 07.05.19 at 17:14, wrote:
>> ---
(+ Andre)
Hi,
Title: Interrupts are still unmasked when executing action for interrupt
routed to Xen. So you need to be more specific. How about
"xen/arm: gic: Defer the decision to unmask interrupts to do_{LPI, IRQ}()"?
On 5/27/19 10:29 AM, Andrii Anisov wrote:
From: Andrii Anisov
This
flight 137036 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/137036/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 13 migrate-support-checkfail never pass
test-arm64-arm64-xl-xsm
Hi Volodymyr,
Sorry for the late reply.
On 5/20/19 3:57 PM, Volodymyr Babchuk wrote:
Julien Grall writes:
Hi,
On 20/05/2019 14:41, Volodymyr Babchuk wrote:
Julien Grall writes:
Hi,
First of all, please add a cover letter when you send a series. This
helps with threading and also a place
On 5/28/2019 12:41 AM, Roger Pau Monné wrote:
On Mon, May 27, 2019 at 03:35:21PM -0700, John L. Poole wrote:
On 5/27/2019 9:18 AM, Roger Pau Monné wrote:
On Mon, Apr 29, 2019 at 05:27:34PM +0200, Roger Pau Monné wrote:
IMO it would be better if you can build directly from the upstream git
Currently runtime parameters of the hypervisor cannot be inspected through an
xl command; however, they can be changed with the "xl set-parameter" command.
Being able to check these parameters at runtime would be a useful diagnostic
tool.
This patch series implements a new xl command "xl
Add a sysctl hypercall to support reading hypervisor runtime parameters.
Limitations:
- Custom runtime parameters (OPT_CUSTOM) are not supported yet.
- For integer parameters (OPT_UINT), only unsigned parameters are printed
correctly.
- The implementation only reads runtime parameters, but it can
Add a new libxc function to get hypervisor parameters.
Signed-off-by: Vasilis Liaskovitis
---
tools/libxc/include/xenctrl.h | 1 +
tools/libxc/xc_misc.c | 26 ++
2 files changed, 27 insertions(+)
diff --git a/tools/libxc/include/xenctrl.h
Add a new libxl function to get hypervisor parameters.
Signed-off-by: Vasilis Liaskovitis
---
tools/libxl/libxl.c | 19 +++
tools/libxl/libxl.h | 1 +
2 files changed, 20 insertions(+)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index ec71574e99..9bb0382c38 100644
Add a new xl command "get-parameters" to get hypervisor runtime parameters.
Examples:
xl get-parameters "gnttab_max_frames gnttab_max_maptrack_frames"
gnttab_max_frames gnttab_max_maptrack_frames : 64 1024
xl set-parameters gnttab_max_frames=128
xl get-parameters gnttab_max_frames
On 28/05/2019 16:32, Jan Beulich wrote:
On 28.05.19 at 15:08, wrote:
>> --- a/xen/common/stop_machine.c
>> +++ b/xen/common/stop_machine.c
>> @@ -69,8 +69,8 @@ static void stopmachine_wait_state(void)
>>
>> int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
>> {
>> -
>>> On 28.05.19 at 15:08, wrote:
> --- a/xen/common/stop_machine.c
> +++ b/xen/common/stop_machine.c
> @@ -69,8 +69,8 @@ static void stopmachine_wait_state(void)
>
> int stop_machine_run(int (*fn)(void *), void *data, unsigned int cpu)
> {
> -cpumask_t allbutself;
> unsigned int i,
>>> On 28.05.19 at 15:59, wrote:
> Tmem has been removed. Reflect that in SUPPORT.md
>
> Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
Tmem has been removed. Reflect that in SUPPORT.md
Signed-off-by: Juergen Gross
---
SUPPORT.md | 10 --
1 file changed, 10 deletions(-)
diff --git a/SUPPORT.md b/SUPPORT.md
index e4fb15b2f8..375473a456 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -236,16 +236,6 @@ Allow pages belonging
> * Improvements to domain creation (v2)
> - Andrew Cooper
Hi Andrew,
could you point me to a git branch where you have this work? I'm
experimenting with some stuff and would like to see what your work in
this area touches.
Thanks,
Tamas
The "allbutself" cpumask in stop_machine_run() is not needed. Instead
of allocating it on the stack, its use can easily be avoided altogether.
Signed-off-by: Juergen Gross
---
xen/common/stop_machine.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/xen/common/stop_machine.c
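The preview above proposes dropping the on-stack "allbutself" cpumask; a minimal sketch of the idea (a toy 64-bit mask standing in for cpumask_t, names hypothetical) is to skip the calling CPU inside the loop instead of materializing the complement mask:

```c
#include <stdint.h>

/* Count CPUs in 'online' other than 'self' without building an
 * "all but self" mask first. */
static int count_other_cpus(uint64_t online, unsigned int self)
{
    int n = 0;
    for (unsigned int cpu = 0; cpu < 64; cpu++)
        if ((online & (UINT64_C(1) << cpu)) && cpu != self)
            n++;           /* the 'cpu != self' test replaces the mask */
    return n;
}
```

The same inline test works for signaling the other CPUs, so no temporary mask (stack or heap) is needed.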
Hi Andrew,
> > The bug is still here, so we can exclude a microcode issue.
>
> Good - that is one further angle excluded. Always make sure you are
> running with up-to-date microcode, but it looks like we are back to
> investigating a logical bug in libvmi or Xen.
I reimplemented a small test,
On 28/05/2019 13:51, Jan Beulich wrote:
On 28.05.19 at 12:33, wrote:
>> @@ -61,6 +62,23 @@ unsigned int sched_granularity = 1;
>> bool sched_disable_smt_switching;
>> cpumask_var_t sched_res_mask;
>>
>> +#ifdef CONFIG_X86
>> +static int __init sched_select_granularity(const char *str)
>>
On 28/05/2019 13:47, Jan Beulich wrote:
On 28.05.19 at 12:33, wrote:
>> Instead of having a full blown scheduler running for the free cpus
>> add a very minimalistic scheduler for that purpose only ever scheduling
>> the related idle vcpu. This has the big advantage of not needing any
>>
>>> On 28.05.19 at 12:28, wrote:
> From: Tamas K Lengyel
>
> The p2m_altp2m_lazy_copy is responsible for lazily populating an
> altp2m view when the guest traps out due to no EPT entry being present
> in the active view. Currently, in addition to taking a number of
> unused arguments, the
flight 137003 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/137003/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 136969
On 28/05/2019 13:44, Jan Beulich wrote:
On 28.05.19 at 12:33, wrote:
>> --- a/xen/arch/x86/sysctl.c
>> +++ b/xen/arch/x86/sysctl.c
>> @@ -200,7 +200,8 @@ long arch_do_sysctl(
>>
>> case XEN_SYSCTL_CPU_HOTPLUG_SMT_ENABLE:
>> case XEN_SYSCTL_CPU_HOTPLUG_SMT_DISABLE:
>> -
>>> On 28.05.19 at 12:33, wrote:
> @@ -61,6 +62,23 @@ unsigned int sched_granularity = 1;
> bool sched_disable_smt_switching;
> cpumask_var_t sched_res_mask;
>
> +#ifdef CONFIG_X86
> +static int __init sched_select_granularity(const char *str)
> +{
> +if (strcmp("cpu", str) == 0)
> +
>>> On 28.05.19 at 12:33, wrote:
> Instead of having a full blown scheduler running for the free cpus
> add a very minimalistic scheduler for that purpose only ever scheduling
> the related idle vcpu. This has the big advantage of not needing any
> per-cpu, per-domain or per-scheduling unit data
>>> On 28.05.19 at 12:33, wrote:
> --- a/xen/arch/x86/sysctl.c
> +++ b/xen/arch/x86/sysctl.c
> @@ -200,7 +200,8 @@ long arch_do_sysctl(
>
> case XEN_SYSCTL_CPU_HOTPLUG_SMT_ENABLE:
> case XEN_SYSCTL_CPU_HOTPLUG_SMT_DISABLE:
> -if ( !cpu_has_htt ||
>>> On 28.05.19 at 12:32, wrote:
> This prepares support of larger scheduling granularities, e.g. core
> scheduling.
>
> While at it move sched_has_urgent_vcpu() from include/asm-x86/cpuidle.h
> into schedule.c removing the need for including sched-if.h in
> cpuidle.h and multiple other C
This email only tracks big items for xen.git tree. Please reply for items you
would like to see in 4.13 so that people have an idea what is going on and
prioritise accordingly.
You're welcome to provide a description and use cases of the feature you're
working on.
= Timeline =
We now adopt a
>>> On 28.05.19 at 12:05, wrote:
> On Tue, May 28, 2019 at 02:51:22AM -0600, Jan Beulich wrote:
>> >>> On 27.05.19 at 18:44, wrote:
>> > On Fri, May 24, 2019 at 04:01:23AM -0600, Jan Beulich wrote:
>> >> >>> On 10.05.19 at 18:10, wrote:
>> >> > --- a/xen/include/xen/pci.h
>> >> > +++
Today the vcpu runstate of a new scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule unit have to be "running" when being
Having a pointer to struct scheduler in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/sched_credit.c | 18 +++---
xen/common/sched_credit2.c | 3 ++-
xen/common/schedule.c | 21 ++---
Switch null scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_null.c | 304
1 file changed, 149 insertions(+), 155 deletions(-)
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
Instead of letting schedule_cpu_switch() handle moving cpus from and
to cpupools, split it into schedule_cpu_add() and schedule_cpu_rm().
This will allow us to drop allocating/freeing scheduler data for free
cpus as the idle scheduler doesn't need such data.
Signed-off-by: Juergen Gross
---
V1:
In preparation of core scheduling let the percpu pointer
schedule_data.curr point to a struct sched_unit instead of the related
vcpu. At the same time rename the per-vcpu scheduler specific structs
to per-unit ones.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 2 +-
Switch credit2 scheduler completely from vcpu to sched_unit usage.
As we are touching lots of lines remove some white space at the end of
the line, too.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit2.c | 820 ++---
1 file changed, 403
In order to be able to move cpus to cpupools with core scheduling
active it is mandatory to merge multiple cpus into one scheduling
resource or to split a scheduling resource with multiple cpus in it
into multiple scheduling resources. This in turn requires modifying
the cpu <-> scheduling
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
For support of core scheduling the scheduler cpu callback for
CPU_STARTING has to be moved into a dedicated function called by
start_secondary() as it needs to run before spin_debug_enable() then
due to potentially calling xfree().
Signed-off-by: Juergen Gross
---
RFC V2: fix ARM build
---
When core or socket scheduling are active enabling or disabling smt is
not possible as that would require a major host reconfiguration.
Add a bool sched_disable_smt_switching which will be set for core or
socket scheduling.
Signed-off-by: Juergen Gross
---
V1: new patch
---
Add a scheduling granularity enum ("cpu", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"cpu", this can be modified by the new boot parameter (x86 only)
"sched-gran".
According to the selected granularity sched_granularity is set after
all cpus are
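A sketch of what such a "sched-gran=" keyword parser might look like (enum values and function name hypothetical, not the actual patch):

```c
#include <string.h>

enum sched_gran { SCHED_GRAN_CPU, SCHED_GRAN_CORE, SCHED_GRAN_SOCKET };

/* Map the boot-parameter keyword to a granularity; -1 on unknown input. */
static int parse_sched_gran(const char *str, enum sched_gran *out)
{
    if (!strcmp(str, "cpu"))
        *out = SCHED_GRAN_CPU;
    else if (!strcmp(str, "core"))
        *out = SCHED_GRAN_CORE;
    else if (!strcmp(str, "socket"))
        *out = SCHED_GRAN_SOCKET;
    else
        return -1;     /* reject anything else at boot */
    return 0;
}
```

The actual number of cpus per unit would then be derived from the topology once all cpus are known, as the description says.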
In order to prepare always using cpu scheduling for free cpus
regardless of other cpupools scheduling granularity always use a single
fixed lock for all free cpus shared by all schedulers. This will allow
moving any number of free cpus to a cpupool guarded by only one lock.
This requires dropping
On- and offlining cpus with core scheduling is rather complicated as
the cpus are taken on- or offline one by one, but scheduling wants them
rather to be handled per core.
As the future plan is to be able to select scheduling granularity per
cpupool prepare that by storing the granularity in
Especially in the do_schedule() functions of the different schedulers
using smp_processor_id() for the local cpu number is correct only if
the sched_unit is a single vcpu. As soon as larger sched_units are
used most uses should be replaced by the cpu number of the local
sched_resource instead.
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling units spanning
multiple cpus let cpupool_domain_cpumask() return a cpumask with only
one bit set per scheduling resource.
Signed-off-by: Juergen Gross
---
xen/common/cpupool.c
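With a toy 64-bit mask and granularity-sized groups of consecutive cpu numbers (an assumption made for this sketch; the real mapping lives in the sched_resource structures), the reduction described above could look like:

```c
#include <stdint.h>

/* Keep only the first CPU of each granularity-sized group that has any
 * bit set, yielding one bit per scheduling resource.  Assumes a small
 * granularity (1, 2, 4, ...), well below 64. */
static uint64_t one_bit_per_resource(uint64_t mask, unsigned int gran)
{
    uint64_t out = 0;
    for (unsigned int cpu = 0; cpu < 64; cpu += gran)
    {
        uint64_t group = ((UINT64_C(1) << gran) - 1) << cpu;
        if (mask & group)
            out |= UINT64_C(1) << cpu;   /* representative bit only */
    }
    return out;
}
```

Iterating over the resulting mask then visits each scheduling resource exactly once rather than once per cpu.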
Instead of having a full blown scheduler running for the free cpus
add a very minimalistic scheduler for that purpose only ever scheduling
the related idle vcpu. This has the big advantage of not needing any
per-cpu, per-domain or per-scheduling unit data for free cpus and in
turn simplifying
With core scheduling active it is necessary to move multiple cpus at
the same time to or from a cpupool in order to avoid split scheduling
resources in between.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/cpupool.c | 87 +++---
With core scheduling active schedule_cpu_[add/rm]() has to cope with
different scheduling granularity: a cpu not in any cpupool is subject
to granularity 1 (cpu scheduling), while a cpu in a cpupool might be
in a scheduling resource with more than one cpu.
Handle that by having arrays of old/new
Having a pointer to struct cpupool in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/cpupool.c | 4 +---
xen/common/sched_credit.c | 2 +-
xen/common/sched_rt.c | 2 +-
xen/common/schedule.c | 8
When scheduling an unit with multiple vcpus there is no guarantee all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to
idle vcpu of the current cpu in that case. This requires storing the
correct schedule_unit pointer in the idle vcpu as long as it is used as
a fallback vcpu.
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require scheduling the idle unit it is necessary to set the
tasklet_work_scheduled parameter of
With core or socket scheduling we need to know the number of siblings
per scheduling unit before we can setup the scheduler properly. In
order to prepare that do cpupool0 population only after all cpus are
up.
With that in place there is no need to create cpupool0 earlier, so
do that just before
Today a cpu which is removed from the system is taken directly from
Pool0 to the offline state. This will conflict with core scheduling,
so remove it from Pool0 first. Additionally accept removing a free cpu
instead of requiring it to be in Pool0.
For the resume failed case we need to call the
The credit scheduler calls vcpu_pause_nosync() and vcpu_unpause()
today. Add sched_unit_pause_nosync() and sched_unit_unpause() to
perform the same operations on scheduler units instead.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 6 +++---
xen/include/xen/sched-if.h | 10
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit special care must be taken in
In order to prepare core- and socket-scheduling use a new struct
sched_unit instead of struct vcpu for interfaces of the different
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_unit and remove_unit to reflect the change of the parameter.
In the schedulers
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_unit. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_unit_vcpu() which in
turn needs the vcpu already as a member of the
vcpu_force_reschedule() is only used for modifying the periodic timer
of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
is kind of brutal.
So instead of doing the reschedule dance just operate on the timer
directly.
In case we are modifying the timer of the currently running
This prepares making the different schedulers vcpu agnostic.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 4 ++--
xen/common/sched_credit.c | 6 +++---
xen/common/sched_credit2.c | 10 +-
xen/common/sched_null.c | 4 ++--
xen/common/sched_rt.c | 4 ++--
In order to prepare for multiple vcpus per schedule unit move struct
task_slice in schedule() from the local stack into struct sched_unit
of the currently running unit. To make access easier for the single
schedulers add the pointer of the currently running unit as a parameter
of do_schedule().
In several places there is support for multiple vcpus per sched unit
missing. Add that missing support (with the exception of initial
allocation) and missing helpers for that.
Signed-off-by: Juergen Gross
---
RFC V2: fix vcpu_runstate_helper()
V1: add special handling for idle unit in
Rename the scheduler related perf counters from vcpu* to unit* where
appropriate.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c| 32
xen/common/sched_credit2.c | 18 +-
xen/common/sched_null.c | 18 +-
Where appropriate switch from for_each_vcpu() to for_each_sched_unit()
in order to prepare core scheduling.
Signed-off-by: Juergen Gross
---
xen/common/domain.c | 9 ++---
xen/common/schedule.c | 109 ++
2 files changed, 60 insertions(+), 58
Instead of using the SCHED_OP() macro to call the different
scheduler-specific functions, add inline wrappers for that purpose.
Signed-off-by: Juergen Gross
---
RFC V2: new patch (Andrew Cooper)
V1: use conditional operator (Jan Beulich, Dario Faggioli)
drop no longer needed ASSERT()s
---
Affinities are scheduler specific attributes, they should be per
scheduling unit. So move all affinity related fields in struct vcpu
to struct sched_unit. While at it switch affinity related functions in
sched-if.h to use a pointer to sched_unit instead to vcpu as parameter.
vcpu->last_run_time
Switch credit scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 506 +++---
1 file changed, 252 insertions(+), 254 deletions(-)
diff --git a/xen/common/sched_credit.c
sched_move_irqs() should work on a sched_unit as that is the unit
moved between cpus.
Rename the current function to vcpu_move_irqs() as it is still needed
in schedule().
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 18 +-
1 file changed, 13 insertions(+), 5
Add an is_running indicator to struct sched_unit which will be set
whenever the unit is being scheduled. Switch scheduler code to use
unit->is_running instead of vcpu->is_running for scheduling decisions.
At the same time introduce a state_entry_time field in struct
sched_unit being updated
We'll need a way to free a sched_unit structure without side effects
in a later patch.
Signed-off-by: Juergen Gross
---
RFC V2: new patch, carved out from RFC V1 patch 49
---
xen/common/schedule.c | 38 +-
1 file changed, 21 insertions(+), 17 deletions(-)
In preparation for core scheduling carve out the GDT related
functionality (writing GDT related PTEs, loading default of full GDT)
into sub-functions.
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
RFC V2: split off non-refactoring part
V1: constify pointers, use initializers (Jan
Some functions of struct scheduler are mandatory. Check in the
scheduler initialization loop that they are present, and drop any
scheduler not complying.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/schedule.c | 26 +-
1 file changed, 25 insertions(+), 1
Instead of dynamically deciding whether the previous vcpu was using full
or default GDT just add a percpu variable for that purpose. This at
once removes the need to test twice whether the vcpu_ids differ.
Cache the need_full_gdt(nd) value in a local variable.
Signed-off-by: Juergen Gross
Reviewed-by:
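The per-cpu flag described above amounts to remembering the last choice and reloading only on change; a minimal sketch (names hypothetical, not the actual patch):

```c
#include <stdbool.h>

/* Stand-in for the per-CPU "previous vcpu used the full GDT" flag. */
static bool prev_full_gdt;

/* Return true when the GDT must be (re)loaded, i.e. when the new
 * vcpu's requirement differs from what the CPU currently has. */
static bool gdt_reload_needed(bool want_full)
{
    bool need = (prev_full_gdt != want_full);
    prev_full_gdt = want_full;
    return need;
}
```

This replaces recomputing the decision from vcpu ids on every context switch with a single cached comparison.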
Prepare supporting multiple cpus per scheduling resource by allocating
the cpumask per resource dynamically.
Modify sched_res_mask to have only one bit per scheduling resource set.
Signed-off-by: Juergen Gross
---
V1: new patch (carved out from other patch)
---
xen/common/schedule.c | 16
Today there are two distinct scenarios for vcpu_create(): either for
creation of idle-domain vcpus (vcpuid == processor) or for creation of
"normal" domain vcpus (including dom0), where the caller selects the
initial processor on a round-robin scheme of the allowed processors
(allowed being based
Add a scheduling abstraction layer between physical processors and the
schedulers by introducing a struct sched_resource. Each scheduler unit
running is active on such a scheduler resource. For the time being
there is one struct sched_resource per cpu, but in future there might
be one for each
Add the following helpers using a sched_unit as input instead of a
vcpu:
- is_idle_unit() similar to is_idle_vcpu()
- unit_runnable() like vcpu_runnable()
- sched_set_res() to set the current processor of a unit
- sched_unit_cpu() to get the current processor of a unit
-
Use sched_units instead of vcpus in schedule(). This includes the
introduction of sched_unit_runstate_change() as a replacement of
vcpu_runstate_change() in schedule().
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 70 +--
1 file
Switch rt scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_rt.c | 356 --
1 file changed, 174 insertions(+), 182 deletions(-)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index
Switch arinc653 scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 208 +---
1 file changed, 101 insertions(+), 107 deletions(-)
diff --git a/xen/common/sched_arinc653.c
Add counters to struct sched_unit summing up runstates of associated
vcpus.
Signed-off-by: Juergen Gross
---
RFC V2: add counters for each possible runstate
---
xen/common/schedule.c | 5 +
xen/include/xen/sched.h | 2 ++
2 files changed, 7 insertions(+)
diff --git
Let the schedulers put a sched_unit pointer into struct task_slice
instead of a vcpu pointer.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 8
xen/common/sched_credit.c | 4 ++--
xen/common/sched_credit2.c | 4 ++--
xen/common/sched_null.c | 12 ++--
Instead of returning a physical cpu number let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to
reflect that change.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 12 ++--
xen/common/sched_credit.c| 16
Add a pointer to the domain to struct sched_unit in order to avoid
having to dereference the vcpu pointer of struct sched_unit to find
the related domain.
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 3 ++-
xen/include/xen/sched.h | 1 +
2 files changed, 3 insertions(+), 1
In order to make it easy to iterate over sched_unit elements of a
domain build a singly linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list as it
is modified only via vcpu_create() or vcpu_destroy().
For completeness add another iterator
Rename vcpu_schedule_[un]lock[_irq]() to unit_schedule_[un]lock[_irq]()
and let it take a sched_unit pointer instead of a vcpu pointer as
parameter.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 17
xen/common/sched_credit2.c | 40
Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only
to ensure a vcpu is running on a suitable processor they can be
switched to operate on schedule units instead of vcpus.
While doing that rename them accordingly and make the _start() variant
static.
vcpu_move_locked() is