On 06/05/2019 12:01, Jan Beulich wrote:
>>>> On 06.05.19 at 11:23, <jgr...@suse.com> wrote:
>> And that was mentioned in the cover letter: cpu hotplug is not yet
>> handled (hence the RFC status of the series).
>>
>> When cpu hotplug is added it might be appropriate to switch to the
>> scheme you suggested. For now the current solution is much simpler.
> 
> I see (I did notice the cover letter remark, but managed not to
> honor it when writing the reply), but I'm unconvinced that incurring
> more code churn by not dealing with things the "dynamic" way
> right away is indeed the "simpler" (overall) solution.

I have started to address cpu on/offlining now.

There are several design decisions to make:

1. Interaction between sched-gran and smt boot parameters
2. Interaction between sched-gran and xen-hptool smt switching
3. Interaction between sched-gran and single cpu on/offlining

Right now no guest will see any difference based on the sched-gran
selection, so we don't have to think about potential migration
restrictions. This might change in the future when we want to enable
guests to e.g. use core scheduling themselves in order to mitigate
side channel attacks within the guest.

The simplest solution, which I'd like to have implemented in V1 of my
series, would be:

sched-gran=core and sched-gran=socket don't allow dynamic switching
of smt via xen-hptool.
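
In code this could boil down to a guard in the smt switching path.
This is a sketch only: the enum, variable and function names below are
made up for illustration and not taken from the Xen sources.

enum sched_gran {
    SCHED_GRAN_CPU,
    SCHED_GRAN_CORE,
    SCHED_GRAN_SOCKET,
};

static enum sched_gran opt_sched_granularity = SCHED_GRAN_CPU;

static int smt_switch(bool enable_smt)
{
    /*
     * With core or socket granularity all threads of a core are
     * scheduled as one unit, so toggling SMT at runtime would
     * invalidate every scheduling unit: refuse the request.
     */
    if ( opt_sched_granularity != SCHED_GRAN_CPU )
        return -EOPNOTSUPP;

    /* With per-cpu granularity threads can be switched freely. */
    /* ... do the actual per-thread online/offline here ... */
    return 0;
}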

With sched-gran=core or sched-gran=socket, offlining a single cpu
results in moving the complete core or socket to cpupool_free_cpus and
then offlining the cpu from there. Only complete cores/sockets can be
moved to any cpupool. When a cpu is onlined it is added to
cpupool_free_cpus, and as soon as its core/socket is completely online
the whole unit is automatically added to Pool-0 (just as any single
onlined cpu is today). A sketch of both transitions follows below.
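
A rough sketch of that flow (cpu_down(), cpu_up(), cpupool0 and
cpupool_unassign_cpu() resemble existing Xen interfaces; the other
helpers are invented here purely for illustration):

/* Offline one cpu while sched-gran is core or socket. */
static int cpu_offline_gran(unsigned int cpu)
{
    /* All cpus of the scheduling unit (core/socket) cpu belongs to. */
    const cpumask_t *unit = sched_gran_cpumask(cpu);
    unsigned int sibling;
    int ret;

    /* Move the complete core/socket to cpupool_free_cpus first... */
    for_each_cpu ( sibling, unit )
    {
        ret = cpupool_unassign_cpu(cpu_to_pool(sibling), sibling);
        if ( ret )
            return ret;
    }

    /* ...then offline just the requested cpu from there. */
    return cpu_down(cpu);
}

/*
 * Online one cpu: it stays in cpupool_free_cpus until its whole
 * core/socket is online, at which point the unit moves to Pool-0.
 */
static int cpu_online_gran(unsigned int cpu)
{
    int ret = cpu_up(cpu);

    if ( ret )
        return ret;

    /* cpu is sitting in cpupool_free_cpus now. */
    if ( sched_gran_fully_online(cpu) )
        ret = cpupool_assign_unit(cpupool0, cpu); /* moves all siblings */

    return ret;
}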

The next steps (for future patches) could be:

- per-cpupool smt settings (static at cpupool creation; moving a domain
  between cpupools with different smt settings not supported; see the
  sketch after this list)

- support moving domains between cpupools with different smt settings
  (a guest started with smt=0 would only ever use 1 thread per core)

- support per-cpupool scheduling granularity
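
For the per-cpupool smt and granularity settings this could mean
extending struct cpupool roughly like this (pure speculation on my
side, field names invented, enum sched_gran as in the sketch further
up):

struct cpupool
{
    unsigned int cpupool_id;
    cpumask_var_t cpu_valid;
    struct scheduler *sched;
    /* New per-pool settings, fixed at pool creation time: */
    bool smt_enabled;            /* all threads of a core vs. 1 only */
    enum sched_gran granularity; /* per-pool scheduling granularity */
};

Moving a domain between pools would initially be allowed only if both
new settings match in source and target pool.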

Thoughts?


Juergen
