On Tue, 2015-03-24 at 14:41 +0000, Wei Liu wrote:
> On Tue, Mar 24, 2015 at 02:41:48PM +0100, Dario Faggioli wrote:
> Currently, all the checks that return an error are for guest visible
> mis-configurations. I'm trying to reason whether we should return an
> error or just print a warning.
>
> What is the outcome if you have conflicting settings in vNUMA and vcpu
> affinity?
>
The outcome is that vcpus will either always run (if hard affinity and
vnuma info mismatch) or prefer to run (if soft affinity and vnuma info
mismatch) on pcpus from pnode(s) different from the pnode specified in
the vnuma configuration, and hence from where the memory is.

So, for instance, with this:

  vnuma = [ [ "pnode=0","size=1000","vcpus=0-1","vdistances=10,20" ],
            [ "pnode=1","size=1000","vcpus=2-3","vdistances=20,10" ] ]

and this:

  cpus_soft = "node:1"
  cpus      = "node:0"

in the config file, we have a soft affinity mismatch for vcpus 0,1 and a
hard affinity mismatch for vcpus 2,3.

This means that vcpus 0,1 can run everywhere, but they will prefer to
run on pcpus from pnode 1, while, when the domain was built, they were
assigned to vnode 0, which has its memory allocated from pnode 0. This
will (potentially) cause a lot of remote memory accesses.

It also means that vcpus 2,3 will only run on pcpus from pnode 0, while
they were assigned to vnode 1, which has its memory allocated from
pnode 1. This means that all their memory accesses will be remote.

So, no functional consequences, but performance will most likely be
affected.

> I guess it's just a performance penalty but nothing guest visible
> could happen?
>
Exactly.

Regards,
Dario
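
P.S. Just to make the mismatch above concrete, here is a minimal sketch
(plain Python, purely illustrative, not libxl code; the data structures
and names are made up) of the cross-check between the vnuma
vcpu-to-pnode mapping and the node sets implied by the affinity settings
in the config above:

  # Illustrative only: compare vnuma's vcpu->pnode mapping against the
  # node sets implied by hard ("cpus") and soft ("cpus_soft") affinity.
  vnuma = {0: {"pnode": 0, "vcpus": {0, 1}},   # vnode 0 -> pnode 0, vcpus 0-1
           1: {"pnode": 1, "vcpus": {2, 3}}}   # vnode 1 -> pnode 1, vcpus 2-3
  hard_nodes = {0}   # cpus      = "node:0"
  soft_nodes = {1}   # cpus_soft = "node:1"

  for vnode, info in vnuma.items():
      pnode = info["pnode"]
      for vcpu in sorted(info["vcpus"]):
          if pnode not in hard_nodes:
              # hard affinity keeps the vcpu away from its memory's node
              print("vcpu %d: hard affinity excludes pnode %d "
                    "-> all memory accesses will be remote" % (vcpu, pnode))
          elif pnode not in soft_nodes:
              # vcpu can still run on its memory's node, but won't prefer it
              print("vcpu %d: soft affinity prefers nodes other than pnode %d "
                    "-> likely remote accesses" % (vcpu, pnode))

Running it reports a soft affinity mismatch for vcpus 0,1 and a hard
affinity mismatch for vcpus 2,3, matching the description above.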