On 19/09/16 11:06, Julien Grall wrote:
> Hi George,
> On 19/09/2016 11:45, George Dunlap wrote:
>> On Mon, Sep 19, 2016 at 9:53 AM, Julien Grall <julien.gr...@arm.com> wrote:
>>>>> As mentioned in the mail you pointed to above, this series is not
>>>>> enough to get big.LITTLE working on Xen. Xen always uses the boot
>>>>> CPU to detect the list of features, but with big.LITTLE the
>>>>> features may not be the same on every core.
>>>>> And I would prefer to see Xen supporting big.LITTLE correctly before
>>>>> beginning to think about exposing big.LITTLE to userspace (via cpupool).
>>>> Do you mean vCPUs would be scheduled between big and little CPUs freely?
>>> By supporting big.LITTLE correctly I meant that Xen currently assumes
>>> all the cores have the same set of features, so feature detection is
>>> only done on the boot CPU. See processor_setup for instance...
>>> Moving vCPUs between big and little cores would be a hard task (cache
>>> issues, and possibly feature differences) and I don't expect us to
>>> ever do this.
>>> However, I am expecting to see big.LITTLE exposed to the guest (i.e.
>>> the guest having both big and little vCPUs).
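[Editor's note: the per-class feature detection Julien alludes to could start from the MIDR_EL1 part number. A minimal standalone sketch, not Xen code: the function and enum names are invented for illustration, though the Cortex-A53/A57 part numbers and the MIDR bit layout are the real architectural values.]

```c
/*
 * Toy sketch, not Xen code: classify a core by its MIDR_EL1 part number
 * (bits [15:4]) so features could be detected once per class instead of
 * only on the boot CPU.  The Cortex-A53/A57 part numbers are real; the
 * names classify_cpu/cpu_class are invented.
 */
#include <stdint.h>

#define MIDR_PART(midr)  (((midr) >> 4) & 0xfff)  /* part number, bits [15:4] */
#define PART_CORTEX_A53  0xd03                    /* LITTLE core */
#define PART_CORTEX_A57  0xd07                    /* big core */

enum cpu_class { CPU_CLASS_LITTLE, CPU_CLASS_BIG, CPU_CLASS_UNKNOWN };

static enum cpu_class classify_cpu(uint32_t midr)
{
    switch ( MIDR_PART(midr) )
    {
    case PART_CORTEX_A53: return CPU_CLASS_LITTLE;
    case PART_CORTEX_A57: return CPU_CLASS_BIG;
    default:              return CPU_CLASS_UNKNOWN;
    }
}
```

Each class would then carry its own feature set, and a vCPU would only ever be shown the features of the class it is bound to.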
>> So it sounds like the big and LITTLE cores are architecturally
>> different enough that software must be aware of which one it's running
>> on.
> That's correct. The big and LITTLE cores may have different errata,
> different features, and so on.
> It also has the advantage of letting the guest manage its own power
> efficiency itself, without introducing a specific Xen interface.
Well, in theory there would be advantages either way -- either to
allowing Xen to automatically add power-saving "smarts" to guests which
weren't programmed for them, or to exposing the power-saving abilities
to guests which were. But it sounds like automatically migrating
between them isn't really an option (or would be a lot more trouble than
it's worth).
>>> I care about having a design that allows easy use of big.LITTLE on
>>> Xen. Your solution requires the administrator to know the underlying
>>> platform and create the pools themselves.
>>> In the solution I suggested, the pools would be created by Xen (and
>>> the info exposed to userspace for the admin).
>> FWIW another approach could be the one taken by "xl
>> cpupool-numa-split": you could have "xl cpupool-bigLITTLE-split" or
>> something that would automatically set up the pools.
>> But expanding the schedulers to know about different classes of cpus,
>> and having vcpus specified as running only on specific types of pcpus,
>> seems like a more flexible approach.
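[Editor's note: restricting vCPUs to pCPUs of one class could be expressed as a per-class hard-affinity mask. A toy standalone sketch with invented struct and function names, not Xen's actual cpumask machinery, assuming at most 64 pCPUs.]

```c
/*
 * Toy sketch (invented names, not Xen's data structures): build an
 * affinity mask of all pCPUs belonging to one class, assuming at most
 * 64 pCPUs.  A big vCPU would get the big mask and a little vCPU the
 * little mask, so the scheduler never migrates a vCPU across classes.
 */
#include <stdint.h>

enum cpu_class { CLASS_LITTLE, CLASS_BIG };

struct pcpu {
    int id;                 /* pCPU number, 0..63 */
    enum cpu_class cls;     /* which class this pCPU belongs to */
};

static uint64_t class_cpumask(const struct pcpu *pcpus, int n,
                              enum cpu_class c)
{
    uint64_t mask = 0;

    for ( int i = 0; i < n; i++ )
        if ( pcpus[i].cls == c )
            mask |= UINT64_C(1) << pcpus[i].id;

    return mask;
}
```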
> So, if I understand correctly, you would not recommend extending the
> number of cpupools per domain, correct?
Well, imagine trying to set scheduling parameters, such as weight,
which in the past have been per-domain. Now you would have to specify
parameters for the domain in each of the cpupools that it's in.
No, I think it would be a lot simpler to just teach the scheduler about
different classes of cpus. credit1 would probably need to be modified
so that its credit algorithm would be per-class rather than pool-wide;
but credit2 shouldn't need much modification at all, other than to make
sure that a given runqueue doesn't include more than one class; and to
do load-balancing only with runqueues of the same class.
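[Editor's note: the per-class load-balancing constraint described above could look roughly like this. A standalone toy model, none of these names exist in Xen and runqueue load is reduced to a single integer.]

```c
/*
 * Toy model of class-aware load balancing (invented names, not credit2
 * code): a runqueue only balances against the least-loaded runqueue of
 * its own class, never across classes.
 */
enum cpu_class { CLASS_LITTLE, CLASS_BIG };

struct runqueue {
    enum cpu_class cls;     /* every pCPU in this runqueue has this class */
    unsigned int load;      /* simplified load metric */
};

/*
 * Return the index of the least-loaded runqueue of the same class as
 * rqs[from], or -1 if rqs[from] is already the least loaded.
 */
static int balance_target(const struct runqueue *rqs, int n, int from)
{
    int best = from;

    for ( int i = 0; i < n; i++ )
        if ( rqs[i].cls == rqs[from].cls && rqs[i].load < rqs[best].load )
            best = i;

    return best == from ? -1 : best;
}
```

Note that a heavily loaded little runqueue never offloads to an idle big one: classes stay isolated, which is exactly the property that lets the rest of the scheduler stay unchanged.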
Xen-devel mailing list