Re: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
- Original Message -
From: Srinivasa Rao Ragolu srag...@mvista.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org

> Hi Steve,
>
> Thank you so much for your reply and the detailed steps. I am using a
> devstack setup on the Nova master branch. As I could not find a
> CPUPinningFilter implementation in the source, I used NUMATopologyFilter,
> but the same problem exists: I cannot see any vcpupin elements in the
> guest XML. Please see the relevant section of the XML below.
>
>   <metadata>
>     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
>       <nova:package version="2015.1"/>
>       <nova:name>test_pinning</nova:name>
>       <nova:creationTime>2014-12-29 07:30:04</nova:creationTime>
>       <nova:flavor name="pinned.medium">
>         <nova:memory>2048</nova:memory>
>         <nova:disk>20</nova:disk>
>         <nova:swap>0</nova:swap>
>         <nova:ephemeral>0</nova:ephemeral>
>         <nova:vcpus>2</nova:vcpus>
>       </nova:flavor>
>       <nova:owner>
>         <nova:user uuid="d72f55401b924e36ac88efd223717c75">admin</nova:user>
>         <nova:project uuid="4904cdf59c254546981f577351b818de">admin</nova:project>
>       </nova:owner>
>       <nova:root type="image" uuid="fe017c19-6b4e-4625-93b1-2618dc5ce323"/>
>     </nova:instance>
>   </metadata>
>   <memory unit='KiB'>2097152</memory>
>   <currentMemory unit='KiB'>2097152</currentMemory>
>   <vcpu placement='static' cpuset='0-3'>2</vcpu>
>
> Kindly suggest which branch of Nova I need to take to validate the pinning
> feature. Also, please let me know whether CPUPinningFilter is required to
> validate it.

There are still a few outstanding patches, e.g. see:

https://review.openstack.org/#/q/topic:bp/virt-driver-cpu-pinning,n,z

Thanks,
Steve

> [earlier quoted exchange and CPU pinning walkthrough trimmed; the full
> walkthrough appears in the original message below]
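When chasing a missing element like this, it can help to script the check rather than eyeball the dumped XML. A minimal sketch (the instance-0001 domain name in the comment is only the example from this thread; here a sample line is fed on stdin instead of virsh output):

```shell
# has_vcpupin: report whether domain XML read from stdin contains a
# <vcpupin> element. Real use would be:
#   virsh dumpxml instance-0001 | has_vcpupin
has_vcpupin() {
    if grep -q '<vcpupin '; then
        echo "pinned"
    else
        echo "not pinned"
    fi
}

# Demo against the unpinned <vcpu> line from the message above:
printf "%s\n" "<vcpu placement='static' cpuset='0-3'>2</vcpu>" | has_vcpupin
# prints: not pinned
```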
Re: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
Hi Steve,

Thank you so much for your reply and the detailed steps. I am using a
devstack setup on the Nova master branch. As I could not find a
CPUPinningFilter implementation in the source, I used NUMATopologyFilter,
but the same problem exists: I cannot see any vcpupin elements in the guest
XML. Please see the relevant section of the XML below.

  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1"/>
      <nova:name>test_pinning</nova:name>
      <nova:creationTime>2014-12-29 07:30:04</nova:creationTime>
      <nova:flavor name="pinned.medium">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>2</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="d72f55401b924e36ac88efd223717c75">admin</nova:user>
        <nova:project uuid="4904cdf59c254546981f577351b818de">admin</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="fe017c19-6b4e-4625-93b1-2618dc5ce323"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static' cpuset='0-3'>2</vcpu>

Kindly suggest which branch of Nova I need to take to validate the pinning
feature. Also, please let me know whether CPUPinningFilter is required to
validate it.

Thanks a lot,
Srinivas.

On Sat, Dec 27, 2014 at 4:37 AM, Steve Gordon sgor...@redhat.com wrote:

> - Original Message -
> From: Srinivasa Rao Ragolu srag...@mvista.com
> To: joejiang ifz...@126.com
>
> > Hi Joejing,
> >
> > Thanks for the quick reply. The above XML is generated fine if I set
> > vcpu_pin_set=1-12 in /etc/nova/nova.conf. But how do I pin each vCPU
> > to a pCPU, something like the below?
> >
> >   <cputune>
> >     <vcpupin vcpu='0' cpuset='1-5,12-17'/>
> >     <vcpupin vcpu='1' cpuset='2-3,12-17'/>
> >   </cputune>
> >
> > One more question: are NUMA nodes compulsory for pinning each vCPU
> > to a pCPU?
> [Steve's reply with the full CPU pinning walkthrough is quoted here; it is
> reproduced in full in the original message below]
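Srinivasa's XML above shows the symptom: the <vcpu> element carries a cpuset, but there is no <cputune> block. A small sketch to pull that cpuset attribute out of a domain XML dump (again, the domain name is just this thread's example; a sample line is fed on stdin):

```shell
# vcpu_cpuset: print the cpuset attribute of the <vcpu> element in domain
# XML read from stdin. Real use: virsh dumpxml instance-0001 | vcpu_cpuset
vcpu_cpuset() {
    sed -n "s/.*<vcpu[^>]*cpuset='\([^']*\)'.*/\1/p"
}

# Demo against the <vcpu> line quoted above; prints: 0-3
printf "%s\n" "<vcpu placement='static' cpuset='0-3'>2</vcpu>" | vcpu_cpuset
```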
Re: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
Hi Srinivasa,

This section of the instance XML controls CPU affinity:

  <vcpu placement='static' cpuset='1-12'>4</vcpu>

Regards,
Joe Chiang

At 2014-12-26 12:25:29, Srinivasa Rao Ragolu srag...@mvista.com wrote:

> Hi All,
>
> I have been working on validating the CPU pinning feature. I am able to
> set vcpu_pin_set in nova.conf, can see the cpuset in the guest XML, and
> the instance launches fine. Please let me know how I can get
> cputune/vcpupin set in the guest XML, or point me to any documents for
> validating CPU pinning use cases.
>
> Thanks,
> Srinivas.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
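The cpuset attribute uses libvirt's range-list syntax: a plain range like '1-12', or a mix of ranges and single ids like '2-3,12-17'. A sketch of a helper (not part of any OpenStack or libvirt tool) that expands such a string into individual CPU ids, assuming GNU/BSD seq is available:

```shell
# expand_cpuset: expand a libvirt cpuset string such as '2-3,12-17' into a
# space-separated list of CPU ids. Single ids (no dash) pass through as-is.
expand_cpuset() {
    echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
        seq "$lo" "${hi:-$lo}"
    done | xargs
}

expand_cpuset '2-3,12-17'   # prints: 2 3 12 13 14 15 16 17
```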
Re: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
Hi Joejing,

Thanks for the quick reply. The above XML is generated fine if I set
vcpu_pin_set=1-12 in /etc/nova/nova.conf. But how do I pin each vCPU to a
pCPU, something like the below?

  <cputune>
    <vcpupin vcpu='0' cpuset='1-5,12-17'/>
    <vcpupin vcpu='1' cpuset='2-3,12-17'/>
  </cputune>

One more question: are NUMA nodes compulsory for pinning each vCPU to a
pCPU?

Thanks,
Srinivas.

On Fri, Dec 26, 2014 at 4:37 PM, joejiang ifz...@126.com wrote:

> Hi Srinivasa,
>
> This section of the instance XML controls CPU affinity:
>
>   <vcpu placement='static' cpuset='1-12'>4</vcpu>
>
> Regards,
> Joe Chiang
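Nova computes and writes the cputune element itself once dedicated pinning is configured, so this block is never hand-authored. Purely to illustrate the shape of the target XML, a sketch that prints a cputune block from "vcpu:cpuset" pairs (the pairs are the ones from the question):

```shell
# emit_cputune: print a libvirt <cputune> element from "vcpu:cpuset" pairs.
# Illustration only -- Nova generates this element itself when pinning works.
emit_cputune() {
    echo "<cputune>"
    for pair in "$@"; do
        vcpu=${pair%%:*}      # text before the first ':'
        cpuset=${pair#*:}     # text after the first ':'
        echo "  <vcpupin vcpu='$vcpu' cpuset='$cpuset'/>"
    done
    echo "</cputune>"
}

emit_cputune "0:1-5,12-17" "1:2-3,12-17"
```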
Re: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature
- Original Message -
From: Srinivasa Rao Ragolu srag...@mvista.com
To: joejiang ifz...@126.com

> Hi Joejing,
>
> Thanks for the quick reply. The above XML is generated fine if I set
> vcpu_pin_set=1-12 in /etc/nova/nova.conf. But how do I pin each vCPU to
> a pCPU, something like the below?
>
>   <cputune>
>     <vcpupin vcpu='0' cpuset='1-5,12-17'/>
>     <vcpupin vcpu='1' cpuset='2-3,12-17'/>
>   </cputune>
>
> One more question: are NUMA nodes compulsory for pinning each vCPU to a
> pCPU?

The specification for the CPU pinning functionality recently implemented in
Nova is here:

http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html

Note that exact vCPU to pCPU pinning is not exposed to the user, as this
would require them to have direct knowledge of the host pCPU layout. Instead
they request that the instance receive dedicated CPU resourcing, and Nova
handles the allocation of pCPUs and the pinning of vCPUs to them.

Example usage:

* Create a host aggregate and set metadata on it to indicate it is to be
  used for pinning. 'pinned' is used as the key in this example, but any
  key can be used; the same key must be used in the later steps though::

    $ nova aggregate-create cpu_pinning
    $ nova aggregate-set-metadata 1 pinned=true

  NB: For aggregates/flavors that won't be dedicated, set pinned=false.

* Set all existing flavors to avoid this aggregate::

    $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o [0-9]*`; \
        do nova flavor-key ${FLAVOR} set \
        aggregate_instance_extra_specs:pinned=false; done

* Create a flavor that has the extra spec hw:cpu_policy set to dedicated.
  In this example it is created with an ID of 6, 2048 MB of RAM, a 20 GB
  drive, and 2 vCPUs::

    $ nova flavor-create pinned.medium 6 2048 20 2
    $ nova flavor-key 6 set hw:cpu_policy=dedicated

* Set the flavor to require the aggregate set aside for dedicated pinning
  of guests::

    $ nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true

* Add a compute host to the created aggregate (see nova host-list to get
  the host name(s))::

    $ nova aggregate-add-host 1 my_packstack_host_name

* Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters
  to scheduler_default_filters in /etc/nova/nova.conf::

    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,
      RamFilter,ComputeFilter,ComputeCapabilitiesFilter,
      ImagePropertiesFilter,ServerGroupAntiAffinityFilter,
      ServerGroupAffinityFilter,AggregateInstanceExtraSpecsFilter,
      CPUPinningFilter

  NB: On the Kilo code base I believe the filter is NUMATopologyFilter.

* Restart the scheduler::

    # systemctl restart openstack-nova-scheduler

* After the above, as a normal (non-admin) user, try to boot an instance
  with the newly created flavor::

    $ nova boot --image fedora --flavor 6 test_pinning

* Confirm the instance has successfully booted and that its vCPUs are each
  pinned to _a single_ host CPU by observing the cputune element of the
  generated domain XML::

    # virsh list
     Id    Name             State
     2     instance-0001    running

    # virsh dumpxml instance-0001
    ...
    <vcpu placement='static' cpuset='0-3'>2</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
    </cputune>

-Steve
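The steps above can be collected into a single script. A sketch using the example IDs and host name from the walkthrough (all placeholders to be substituted); the run wrapper only echoes each command so the sequence can be reviewed before anything is executed:

```shell
# Dry-run of the aggregate/flavor setup above. AGG_ID, FLAVOR_ID and HOST
# are the example values from the walkthrough; substitute your own.
AGG_ID=1
FLAVOR_ID=6
HOST=my_packstack_host_name

run() { echo "$@"; }   # swap the body for "$@" to actually execute

run nova aggregate-create cpu_pinning
run nova aggregate-set-metadata "$AGG_ID" pinned=true
run nova flavor-create pinned.medium "$FLAVOR_ID" 2048 20 2
run nova flavor-key "$FLAVOR_ID" set hw:cpu_policy=dedicated
run nova flavor-key "$FLAVOR_ID" set aggregate_instance_extra_specs:pinned=true
run nova aggregate-add-host "$AGG_ID" "$HOST"
run nova boot --image fedora --flavor "$FLAVOR_ID" test_pinning
```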