Re: l3 cache and cpu pinning

2021-04-23 Thread Daniel P. Berrangé
On Thu, Apr 22, 2021 at 01:34:18PM +0200, Roman Mohr wrote:
> On Thu, Apr 22, 2021 at 1:24 PM Roman Mohr  wrote:
> 
> >
> >
> > On Thu, Apr 22, 2021 at 1:19 PM Roman Mohr  wrote:
> >
> >>
> >>
> >> On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé 
> >> wrote:
> >>
> >>> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> >>> > Hi,
> >>> >
> >>> > I have a question regarding enabling l3 cache emulation on Domains. Can
> >>> > this also be enabled without cpu-pinning, or does it need cpu pinning
> >>> to
> >>> > emulate the l3 caches according to the cpus where the guest is pinned
> >>> to?
> >>>
> >>> I presume you're referring to
> >>>
> >>>   <cpu>
> >>>     <cache level='3' mode='emulate|passthrough'/>
> >>>   </cpu>
> >>>
> >>> There is no hard restriction placed on usage of these modes by QEMU.
> >>>
> >>> Conceptually though, you only want to use "passthrough" mode if you
> >>> have configured the sockets/cores/threads topology to match the host
> >>> CPUs. In turn you only ever want to set sockets/cores/threads to
> >>> match the host if you have done CPU pinning such that the topology
> >>> actually matches the host CPUs that have been pinned to.
> >>>
> >>> As a rule of thumb
> >>>
> >>>  - If letting CPUs float
> >>>
> >>>  -> Always use sockets=1, cores=num-vCPUs, threads=1
> >>>  -> cache==emulate
> >>>  -> Always use 1 guest NUMA node (ie the default)
> >>>
> >>>
> >> Is `emulate` also the default in libvirt? If not, would you see any
> >> reason, e.g. thinking about migrations, to not set it always if no cpu
> >> pinning is done?
> >>
> >
> > To answer my own question: I guess something like [1] is a good reason to
> > not enable l3-cache by default, since it seems to have an impact on VM
> > density on nodes.
> >
> 
> Hm, it seems like this change only got merged for older machine types. So
> according to the libvirt docs (not setting it means hypervisor default), it
> is probably set to emulate?

Actually that patch didn't get merged at all afaict.

The support for l3-cache was introduced in QEMU 2.8.0, defaulting to
enabled. The code you see that disables it in older machine types dates
from this time, because we had to preserve ABI for machine types < 2.8.0.


So in practice today you'll be getting "emulate" mode already with any
non-ancient QEMU.
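For anyone who prefers to pin the behaviour down explicitly rather than rely on the hypervisor default, the mode can also be set in the domain XML. A minimal sketch, following libvirt's <cpu>/<cache> schema:

```xml
<!-- Minimal sketch: request L3 cache emulation explicitly instead of
     relying on the hypervisor default. -->
<cpu mode='host-model'>
  <cache level='3' mode='emulate'/>
</cpu>
```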

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



Re: l3 cache and cpu pinning

2021-04-22 Thread Roman Mohr
On Thu, Apr 22, 2021 at 1:24 PM Roman Mohr  wrote:

>
>
> On Thu, Apr 22, 2021 at 1:19 PM Roman Mohr  wrote:
>
>>
>>
>> On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé 
>> wrote:
>>
>>> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
>>> > Hi,
>>> >
>>> > I have a question regarding enabling l3 cache emulation on Domains. Can
>>> > this also be enabled without cpu-pinning, or does it need cpu pinning
>>> to
>>> > emulate the l3 caches according to the cpus where the guest is pinned
>>> to?
>>>
>>> I presume you're referring to
>>>
>>>   <cpu>
>>>     <cache level='3' mode='emulate|passthrough'/>
>>>   </cpu>
>>>
>>> There is no hard restriction placed on usage of these modes by QEMU.
>>>
>>> Conceptually though, you only want to use "passthrough" mode if you
>>> have configured the sockets/cores/threads topology to match the host
>>> CPUs. In turn you only ever want to set sockets/cores/threads to
>>> match the host if you have done CPU pinning such that the topology
>>> actually matches the host CPUs that have been pinned to.
>>>
>>> As a rule of thumb
>>>
>>>  - If letting CPUs float
>>>
>>>  -> Always use sockets=1, cores=num-vCPUs, threads=1
>>>  -> cache==emulate
>>>  -> Always use 1 guest NUMA node (ie the default)
>>>
>>>
>> Is `emulate` also the default in libvirt? If not, would you see any
>> reason, e.g. thinking about migrations, to not set it always if no cpu
>> pinning is done?
>>
>
> To answer my own question: I guess something like [1] is a good reason to
> not enable l3-cache by default, since it seems to have an impact on VM
> density on nodes.
>

Hm, it seems like this change only got merged for older machine types. So
according to the libvirt docs (not setting it means hypervisor default), it
is probably set to emulate?

>
>
>>
>>
>>>
>>>  - If strictly pinning CPUs 1:1
>>>
>>>  -> Use sockets=N, cores=M, threads=0 to match the topology
>>> of the CPUs that have been pinned to
>>>  -> cache==passthrough
>>>  -> Configure virtual NUMA nodes if the CPU pinning or guest
>>> RAM needs cross host NUMA nodes.
>>>
>>>
>>>
>>> Regards,
>>> Daniel
>>> --
>>> |: https://berrange.com  -o-
>>> https://www.flickr.com/photos/dberrange :|
>>> |: https://libvirt.org -o-
>>> https://fstop138.berrange.com :|
>>> |: https://entangle-photo.org-o-
>>> https://www.instagram.com/dberrange :|
>>>
>>>
>
> [1] https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg04592.html
>


Re: l3 cache and cpu pinning

2021-04-22 Thread Roman Mohr
On Thu, Apr 22, 2021 at 1:19 PM Roman Mohr  wrote:

>
>
> On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé 
> wrote:
>
>> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
>> > Hi,
>> >
>> > I have a question regarding enabling l3 cache emulation on Domains. Can
>> > this also be enabled without cpu-pinning, or does it need cpu pinning to
>> > emulate the l3 caches according to the cpus where the guest is pinned
>> to?
>>
>> I presume you're referring to
>>
>>   <cpu>
>>     <cache level='3' mode='emulate|passthrough'/>
>>   </cpu>
>>
>> There is no hard restriction placed on usage of these modes by QEMU.
>>
>> Conceptually though, you only want to use "passthrough" mode if you
>> have configured the sockets/cores/threads topology to match the host
>> CPUs. In turn you only ever want to set sockets/cores/threads to
>> match the host if you have done CPU pinning such that the topology
>> actually matches the host CPUs that have been pinned to.
>>
>> As a rule of thumb
>>
>>  - If letting CPUs float
>>
>>  -> Always use sockets=1, cores=num-vCPUs, threads=1
>>  -> cache==emulate
>>  -> Always use 1 guest NUMA node (ie the default)
>>
>>
> Is `emulate` also the default in libvirt? If not, would you see any
> reason, e.g. thinking about migrations, to not set it always if no cpu
> pinning is done?
>

To answer my own question: I guess something like [1] is a good reason to
not enable l3-cache by default, since it seems to have an impact on VM
density on nodes.


>
>
>>
>>  - If strictly pinning CPUs 1:1
>>
>>  -> Use sockets=N, cores=M, threads=0 to match the topology
>> of the CPUs that have been pinned to
>>  -> cache==passthrough
>>  -> Configure virtual NUMA nodes if the CPU pinning or guest
>> RAM needs cross host NUMA nodes.
>>
>>
>>
>> Regards,
>> Daniel
>> --
>> |: https://berrange.com  -o-
>> https://www.flickr.com/photos/dberrange :|
>> |: https://libvirt.org -o-
>> https://fstop138.berrange.com :|
>> |: https://entangle-photo.org-o-
>> https://www.instagram.com/dberrange :|
>>
>>

[1] https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg04592.html


Re: l3 cache and cpu pinning

2021-04-22 Thread Roman Mohr
On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé 
wrote:

> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> > Hi,
> >
> > I have a question regarding enabling l3 cache emulation on Domains. Can
> > this also be enabled without cpu-pinning, or does it need cpu pinning to
> > emulate the l3 caches according to the cpus where the guest is pinned to?
>
> I presume you're referring to
>
>   <cpu>
>     <cache level='3' mode='emulate|passthrough'/>
>   </cpu>
>
> There is no hard restriction placed on usage of these modes by QEMU.
>
> Conceptually though, you only want to use "passthrough" mode if you
> have configured the sockets/cores/threads topology to match the host
> CPUs. In turn you only ever want to set sockets/cores/threads to
> match the host if you have done CPU pinning such that the topology
> actually matches the host CPUs that have been pinned to.
>
> As a rule of thumb
>
>  - If letting CPUs float
>
>  -> Always use sockets=1, cores=num-vCPUs, threads=1
>  -> cache==emulate
>  -> Always use 1 guest NUMA node (ie the default)
>
>
Is `emulate` also the default in libvirt? If not, would you see any reason,
e.g. thinking about migrations, to not set it always if no cpu pinning is
done?


>
>  - If strictly pinning CPUs 1:1
>
>  -> Use sockets=N, cores=M, threads=0 to match the topology
> of the CPUs that have been pinned to
>  -> cache==passthrough
>  -> Configure virtual NUMA nodes if the CPU pinning or guest
> RAM needs cross host NUMA nodes.
>
>
>
> Regards,
> Daniel
> --
> |: https://berrange.com  -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-
> https://www.instagram.com/dberrange :|
>
>


Re: l3 cache and cpu pinning

2021-04-21 Thread Daniel P. Berrangé
On Wed, Apr 21, 2021 at 12:09:42PM +0100, Daniel P. Berrangé wrote:
> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> > Hi,
> > 
> > I have a question regarding enabling l3 cache emulation on Domains. Can
> > this also be enabled without cpu-pinning, or does it need cpu pinning to
> > emulate the l3 caches according to the cpus where the guest is pinned to?
> 
> I presume you're referring to
> 
>   <cpu>
>     <cache level='3' mode='emulate|passthrough'/>
>   </cpu>
> 
> There is no hard restriction placed on usage of these modes by QEMU.
> 
> Conceptually though, you only want to use "passthrough" mode if you
> have configured the sockets/cores/threads topology to match the host
> CPUs. In turn you only ever want to set sockets/cores/threads to
> match the host if you have done CPU pinning such that the topology
> actually matches the host CPUs that have been pinned to.
> 
> As a rule of thumb
> 
>  - If letting CPUs float
>  
>  -> Always use sockets=1, cores=num-vCPUs, threads=1
>  -> cache==emulate
>  -> Always use 1 guest NUMA node (ie the default)
> 
> 
>  - If strictly pinning CPUs 1:1
> 
>  -> Use sockets=N, cores=M, threads=0 to match the topology
> of the CPUs that have been pinned to

Oops, I meant threads=P there, not 0 - i.e. match host threads.

With recentish libvirt+QEMU there is also a "dies=NNN" parameter for
topology, which may be relevant for some host CPUs (very recent Intel
ones).
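As a sketch, a topology using the "dies" attribute could look like the following (the counts are hypothetical, not taken from any real host; libvirt expects sockets x dies x cores x threads to equal the vCPU count):

```xml
<!-- Hypothetical host shape: 1 socket x 2 dies x 4 cores x 2 threads
     = 16 vCPUs, pinned 1:1, with the L3 cache passed through. -->
<vcpu placement='static'>16</vcpu>
<cpu mode='host-passthrough'>
  <topology sockets='1' dies='2' cores='4' threads='2'/>
  <cache level='3' mode='passthrough'/>
</cpu>
```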

>  -> cache==passthrough
>  -> Configure virtual NUMA nodes if the CPU pinning or guest
> RAM needs cross host NUMA nodes.

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



Re: l3 cache and cpu pinning

2021-04-21 Thread Roman Mohr
On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé 
wrote:

> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> > Hi,
> >
> > I have a question regarding enabling l3 cache emulation on Domains. Can
> > this also be enabled without cpu-pinning, or does it need cpu pinning to
> > emulate the l3 caches according to the cpus where the guest is pinned to?
>
> I presume you're referring to
>
>   <cpu>
>     <cache level='3' mode='emulate|passthrough'/>
>   </cpu>
>

Exactly.


>
> There is no hard restriction placed on usage of these modes by QEMU.
>
> Conceptually though, you only want to use "passthrough" mode if you
> have configured the sockets/cores/threads topology to match the host
> CPUs. In turn you only ever want to set sockets/cores/threads to
> match the host if you have done CPU pinning such that the topology
> actually matches the host CPUs that have been pinned to.
>
> As a rule of thumb
>
>  - If letting CPUs float
>
>  -> Always use sockets=1, cores=num-vCPUs, threads=1
>  -> cache==emulate
>  -> Always use 1 guest NUMA node (ie the default)
>
>
>  - If strictly pinning CPUs 1:1
>
>  -> Use sockets=N, cores=M, threads=0 to match the topology
> of the CPUs that have been pinned to
>  -> cache==passthrough
>  -> Configure virtual NUMA nodes if the CPU pinning or guest
> RAM needs cross host NUMA nodes.
>
>
>
Thanks, that answers my questions.

Best regards,
Roman


>
> Regards,
> Daniel
> --
> |: https://berrange.com  -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-
> https://www.instagram.com/dberrange :|
>
>


Re: l3 cache and cpu pinning

2021-04-21 Thread Daniel P. Berrangé
On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> Hi,
> 
> I have a question regarding enabling l3 cache emulation on Domains. Can
> this also be enabled without cpu-pinning, or does it need cpu pinning to
> emulate the l3 caches according to the cpus where the guest is pinned to?

I presume you're referring to

  <cpu>
    <cache level='3' mode='emulate|passthrough'/>
  </cpu>

There is no hard restriction placed on usage of these modes by QEMU.

Conceptually though, you only want to use "passthrough" mode if you
have configured the sockets/cores/threads topology to match the host
CPUs. In turn you only ever want to set sockets/cores/threads to
match the host if you have done CPU pinning such that the topology
actually matches the host CPUs that have been pinned to.

As a rule of thumb

 - If letting CPUs float
 
 -> Always use sockets=1, cores=num-vCPUs, threads=1
 -> cache==emulate
 -> Always use 1 guest NUMA node (ie the default)


 - If strictly pinning CPUs 1:1

 -> Use sockets=N, cores=M, threads=0 to match the topology
of the CPUs that have been pinned to
 -> cache==passthrough
 -> Configure virtual NUMA nodes if the CPU pinning or guest
RAM needs to cross host NUMA nodes.
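Expressed as domain XML, the two cases above might look roughly like this (a sketch: the vCPU counts and the host CPU numbers in cpuset are hypothetical, so substitute the real host topology when pinning):

```xml
<!-- Case 1: floating vCPUs - flat topology, emulated L3 cache,
     single (default) guest NUMA node. 4 vCPUs chosen arbitrarily. -->
<vcpu placement='static'>4</vcpu>
<cpu mode='host-model'>
  <topology sockets='1' cores='4' threads='1'/>
  <cache level='3' mode='emulate'/>
</cpu>

<!-- Case 2: strict 1:1 pinning - topology mirrors the pinned host
     CPUs (here: 1 socket, 2 cores, 2 threads per core, with host
     CPUs 8-11 picked as an example), L3 cache passed through. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='8'/>
  <vcpupin vcpu='1' cpuset='9'/>
  <vcpupin vcpu='2' cpuset='10'/>
  <vcpupin vcpu='3' cpuset='11'/>
</cputune>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='2' threads='2'/>
  <cache level='3' mode='passthrough'/>
</cpu>
```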



Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



l3 cache and cpu pinning

2021-04-21 Thread Roman Mohr
Hi,

I have a question regarding enabling l3 cache emulation on Domains. Can
this also be enabled without cpu-pinning, or does it need cpu pinning to
emulate the l3 caches according to the cpus where the guest is pinned to?

Thank you and best regards,
Roman