Hello Paco - 

> On 13 Apr 2017, at 11:10, Paco Bernabé <[email protected]> wrote:
> 
> Hi,
> 
> The issue is apparently solved; we found a solution at 
> https://www.stackhpc.com/tripleo-numa-vcpu-pinning.html, where the libvirt and 
> qemu-kvm version requirements were indicated. The CentOS 7.3 repo has an 
> older qemu-kvm version (1.5.3) than the one needed (>= 2.1.0), so we added 
> the kvm-common repo, as recommended in the post. Now 1 host is returned 
> (Filter NUMATopologyFilter returned 1 hosts) and the guest VM has the desired 
> CPU topology.
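> 
> For reference, a quick way to check the installed hypervisor versions on a 
> compute node (a minimal sketch, assuming the standard CentOS package names 
> and the usual /usr/libexec location of the qemu-kvm binary):
> 
> # rpm -q qemu-kvm libvirt
> # /usr/libexec/qemu-kvm --version
> # libvirtd --version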

I’d seen your mail on my way onto a plane, and wanted to get home and get my 
facts straight before responding.  Great to see you got there first, and that 
our post was helpful: made my day :-)

Share and enjoy,
Stig

> 
> -- 
> With kind regards / Best regards,
> Paco Bernabé
> Senior Systems Programmer | SURFsara | Science Park 140 | 1098XG Amsterdam | T 
> +31 610 961 785 | [email protected] | www.surfsara.nl
> 
> 
>> On 13 Apr 2017, at 11:38, Paco Bernabé <[email protected]> wrote:
>> 
>> Hi,
>> 
>> More info: in the log file of the nova-scheduler we see messages like the 
>> following (<HOST> is the compute host name):
>> 
>>      • <HOST>, <HOST> fails NUMA topology requirements. No host NUMA 
>> topology while the instance specified one. host_passes 
>> /usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py:100
>>      • Filter NUMATopologyFilter returned 0 hosts
>> 
>> So, we are not sure if the filters are ok in nova.conf:
>> 
>> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter
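>> 
>> One way to confirm that the scheduler actually loaded this filter list is to 
>> restart it and check its log; a minimal sketch, assuming the usual RDO/CentOS 
>> service name and log path:
>> 
>> # systemctl restart openstack-nova-scheduler
>> # grep -i numatopologyfilter /var/log/nova/nova-scheduler.log | tail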
>> 
>> 
>>> On 13 Apr 2017, at 09:34, Paco Bernabé <[email protected]> wrote:
>>> 
>>> Hi,
>>> 
>>> After reading the following articles:
>>> 
>>>     • https://docs.openstack.org/admin-guide/compute-flavors.html
>>>     • 
>>> http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
>>>     • 
>>> http://openstack-in-production.blogspot.nl/2015/08/numa-and-cpu-pinning-in-high-throughput.html
>>>     • 
>>> http://www.stratoscale.com/blog/openstack/cpu-pinning-and-numa-awareness/
>>> 
>>> We are still not able to expose the NUMA configuration to the guest VM. This 
>>> is the configuration of one of our compute nodes:
>>> 
>>> # lscpu
>>> Architecture:          x86_64
>>> CPU op-mode(s):        32-bit, 64-bit
>>> Byte Order:            Little Endian
>>> CPU(s):                48
>>> On-line CPU(s) list:   0-47
>>> Thread(s) per core:    2
>>> Core(s) per socket:    12
>>> Socket(s):             2
>>> NUMA node(s):          4
>>> Vendor ID:             GenuineIntel
>>> CPU family:            6
>>> Model:                 79
>>> Model name:            Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
>>> Stepping:              1
>>> CPU MHz:               2266.085
>>> BogoMIPS:              4404.00
>>> Virtualization:        VT-x
>>> L1d cache:             32K
>>> L1i cache:             32K
>>> L2 cache:              256K
>>> L3 cache:              15360K
>>> NUMA node0 CPU(s):     0-5,24-29
>>> NUMA node1 CPU(s):     6-11,30-35
>>> NUMA node2 CPU(s):     12-17,36-41
>>> NUMA node3 CPU(s):     18-23,42-47
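>>> 
>>> (For completeness, numactl shows the same node layout plus the per-node 
>>> memory sizes, assuming the numactl package is installed:)
>>> 
>>> # numactl --hardware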
>>> 
>>> 
>>> And this is the flavor configuration:
>>> 
>>> OS-FLV-DISABLED:disabled   | False
>>> OS-FLV-EXT-DATA:ephemeral  | 2048
>>> disk                       | 30
>>> extra_specs                | {"hw:numa_nodes": "8",
>>>                               "hw:numa_cpus.0": "0-5",   "hw:numa_mem.0": "24576",
>>>                               "hw:numa_cpus.1": "6-11",  "hw:numa_mem.1": "24576",
>>>                               "hw:numa_cpus.2": "12-17", "hw:numa_mem.2": "24576",
>>>                               "hw:numa_cpus.3": "18-23", "hw:numa_mem.3": "24576",
>>>                               "hw:numa_cpus.4": "24-29", "hw:numa_mem.4": "24576",
>>>                               "hw:numa_cpus.5": "30-35", "hw:numa_mem.5": "24576",
>>>                               "hw:numa_cpus.6": "36-41", "hw:numa_mem.6": "24576",
>>>                               "hw:numa_cpus.7": "42-45", "hw:numa_mem.7": "16384"}
>>> os-flavor-access:is_public | True
>>> ram                        | 188416
>>> rxtx_factor                | 1.0
>>> vcpus                      | 46
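>>> 
>>> (For reference, these extra specs were applied with flavor properties along 
>>> these lines; a sketch only, where <flavor> is a placeholder for our flavor 
>>> name and only the first two NUMA cells are shown:)
>>> 
>>> # openstack flavor set --property hw:numa_nodes=8 \
>>>     --property hw:numa_cpus.0=0-5  --property hw:numa_mem.0=24576 \
>>>     --property hw:numa_cpus.1=6-11 --property hw:numa_mem.1=24576 \
>>>     <flavor>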
>>> 
>>> We have set 8 NUMA nodes because we read that non-contiguous ranges of CPUs 
>>> are not supported in CentOS 7, and that the workaround is to create twice the 
>>> number of NUMA nodes. What you see below is what is currently passed to 
>>> libvirt on the compute node:
>>> 
>>> <cpu mode='host-passthrough'>
>>>     <topology sockets='46' cores='1' threads='1'/>
>>> </cpu>
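>>> 
>>> (This is taken from the generated guest definition; a sketch of how we 
>>> inspect it, with <instance> as a placeholder for the libvirt domain name:)
>>> 
>>> # virsh dumpxml <instance> | grep -A 10 '<cpu'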
>>> 
>>> But we want something like:
>>> 
>>> <cpu mode='host-passthrough'>
>>>     <numa>
>>>             <cell id='0' cpus='0-5' memory='24576'/>
>>>             <cell id='1' cpus='6-11' memory='24576'/>
>>>             …
>>>             <cell id='6' cpus='36-41' memory='24576'/>
>>>             <cell id='7' cpus='42-45' memory='16384'/>
>>>     </numa>
>>> </cpu>
>>> 
>>> We have edited nova.conf on the compute node and set 
>>> cpu_mode=host-passthrough. On the nova scheduler we have added 
>>> NUMATopologyFilter to the scheduler_default_filters parameter in nova.conf. 
>>> Of course, we have restarted all OpenStack services on the controller and 
>>> nova-compute on the compute node.
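>>> 
>>> Concretely, the relevant nova.conf fragments look roughly like this (a 
>>> sketch; we assume cpu_mode belongs in the [libvirt] section on this release):
>>> 
>>> # compute node
>>> [libvirt]
>>> cpu_mode = host-passthrough
>>> 
>>> # controller / scheduler
>>> [DEFAULT]
>>> scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter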
>>> 
>>> We have also tried a simpler version with the following extra specs:
>>> 
>>> | extra_specs                | {"hw:numa_cpus.0": "0,1,2,3,4,5", 
>>> "hw:numa_nodes": "1", "hw:numa_mem.0": "24576"} |
>>> 
>>> But we still see:
>>> 
>>> <cpu mode='host-passthrough'>
>>>     <topology sockets='6' cores='1' threads='1'/>
>>> </cpu>
>>> 
>>> Any ideas? I'm sure there must be something that we have skipped. Does CPU 
>>> pinning have something to do with it? Our understanding is that pinning only 
>>> affects performance, so it would be the next step and shouldn't interfere 
>>> with what we are trying to achieve; the flavor key we would try for that is 
>>> sketched below. Thanks in advance.
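>>> 
>>> (If pinning does turn out to matter, the flavor key described in the posts 
>>> linked above is hw:cpu_policy=dedicated; a minimal sketch, with <flavor> as 
>>> a placeholder:)
>>> 
>>> # openstack flavor set --property hw:cpu_policy=dedicated <flavor>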
>>> 
>>> 
>>> 
>>> 
>> 
> 


_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
