On 11/10/2018 19:37, Dario Faggioli wrote:
> Hey,
> 
> Sorry if replying took some time. :-P
> 
> On Fri, 2018-09-07 at 18:00 +0200, Juergen Gross wrote:
>> On 25/08/18 01:35, Dario Faggioli wrote:
>>>
>>> There are git branches here:
>>>  https://gitlab.com/dfaggioli/xen.git rel/sched/core-scheduling-
>>> RFCv1
>>>  https://github.com/fdario/xen.git rel/sched/core-scheduling-RFCv1
>>>
>>> Any comment is more than welcome.
>>
>> Have you thought about a more generic approach?
>>
> I had. And I have thought about it more since this email. :-)
> 
>> Instead of trying to schedule only vcpus of the same domain on a core
> >> I'd rather switch from vcpu scheduling to real core scheduling. The
>> scheduler would see guest cores to be scheduled on physical cores. A
>> guest core consists of "guest threads" being vcpus (vcpus are bound
>> to their guest cores, so that part of the topology could even be used
>> by the guest for performance tuning). 
>>
> Right, so I think I got the big picture. And it was something that, as
> I've said above, I've been thinking too, and we've also talked about
> something similar with Andrew in Nanjing.
> 
> I'm still missing how something like this would work in details,
> perhaps because I'm really used to reason within the boundaries of the
> model we currently have.
> 
> So, for example:
> - domain A has vCore0 and vCore1
> - each vCore has 2 threads ({vCore0.0, vCore0.1} and
>   {vCore1.0, vCore1.1})
> - we're on a 2-way SMT host
> - vCore1 is running on physical core 3 on the host
> - more specifically, vCore1.0 is currently executing on thread 0 of
>   physical core 3 of the host, and vCore1.1 is currently executing on
>   thread 1 of core 3 of the host
> - say that both vCore1.0 and vCore1.1 are in guest context
> 
> Now:
> * vCore1.0 blocks. What happens?

It goes to vBlocked (the physical thread sits in the hypervisor
waiting for either a (core-)scheduling event or for vCore1.0 to
unblock). vCore1.1 keeps running. Or, if vCore1.1 is already
vIdle/vBlocked, vCore1 switches to blocked and the scheduler looks
for another vCore to schedule on the physical core.

> * vCore1.0 makes a hypercall. What happens?

Same as today. The hypercall is being executed.

> * vCore1.0 VMEXITs. What happens?

Same as today. The VMEXIT is handled.

In case you are referring to a potential rendezvous for e.g. L1TF
mitigation: this would be handled in a scheduler-agnostic way.

>> The state machine determining the core state from its vcpus would be
>> scheduler agnostic (schedule.c), same for switching guest cores on a
>> physical core.
>>
> What do you mean with "same for switching guest cores on a physical
> core"?

No per-scheduler handling, but a common schedule.c function (maybe with
new per-scheduler hooks if needed). So schedule() would be modified to
work on scheduling entities (threads/cores/sockets).

> All in all, I like the idea, because it is about introducing nice
> abstractions, it is general, etc., but it looks like a major rework of
> the scheduler.

Correct. Finally something to do :-p

> And it's not that I am not up for major reworks, but I'd like to
> understand properly what that is buying us.

I would hope so!

> Note that, while this series which tries to implement core-scheduling
> for Credit1 is rather long and messy, doing the same (and with a
> similar approach) for Credit2 is a lot easier and nicer. I have it
> almost ready, and will send it soon.

Okay, but would it keep vThreads of the same vCore always running
together on the same physical core?

>> This scheme could even be expanded for socket scheduling.
>>
> Right. But again, in Credit2, I've been able to implement socket-wise
> coscheduling with this approach (I mean, an approach similar to the one
> in this series, but adapted to Credit2).

And then there is still sched_rt.c.


Juergen
