[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2017-09-11 Thread Ricardo Noriega
** Changed in: networking-l2gw
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Bandit:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  Triaged
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in quark:
  In Progress
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  Fix Released
Status in PBR:
  Fix Released
Status in pycadf:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in Glance Client:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  In Progress
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  In Progress
Status in SWIFT:
  In Progress
Status in tacker:
  Fix Committed
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The affected test cases pass the observed value where the expected value
  belongs, so they will produce a confusing failure message, with the
  expected and actual values swapped, if the tests ever fail; this is
  worth fixing.
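
  As a minimal sketch of the problem, assuming a testtools-based test case
  as is common across OpenStack projects (observed_value is a hypothetical
  stand-in for the code under test):

    import testtools

    def observed_value():
        # Hypothetical stand-in for whatever the code under test returns.
        return 3

    class ExampleTest(testtools.TestCase):
        def test_wrong_order(self):
            # Wrong: assertEqual treats its first argument as the expected
            # value, so a failure here reports the observed 3 as "expected"
            # and the literal 4 as "actual", the opposite of what happened.
            self.assertEqual(observed_value(), 4)

        def test_right_order(self):
            # Right: expected value first, observed value second.
            self.assertEqual(4, observed_value())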

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470625] Re: Mechanism to register and run all external neutron alembic migrations automatically

2017-09-11 Thread Ricardo Noriega
** Changed in: networking-l2gw
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470625

Title:
  Mechanism to register and run all external neutron alembic migrations
  automatically

Status in devstack:
  Fix Released
Status in networking-cisco:
  Fix Committed
Status in networking-l2gw:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  For alembic migration branches that are out-of-tree, we need a
  mechanism whereby the external code can register its branches when it
  is installed, and then neutron will provide automation of running all
  installed external migration branches when neutron-db-manage is used
  for upgrading.
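
  A rough sketch of what such registration can look like, assuming the
  setuptools entry-point mechanism that neutron sub-projects use for this
  (the project and module names below are illustrative, not taken from
  this bug):

    # Hypothetical setup.py fragment for an out-of-tree networking project.
    # Declaring the entry point lets neutron-db-manage discover and run the
    # sub-project's alembic migration branch alongside neutron's own.
    import setuptools

    setuptools.setup(
        name='networking-example',
        packages=setuptools.find_packages(),
        entry_points={
            'neutron.db.alembic_migrations': [
                'networking-example = '
                'networking_example.db.migration:alembic_migrations',
            ],
        },
    )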

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1470625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599936] Re: l2gw provider config prevents *aas provider config from being loaded

2017-09-11 Thread Ricardo Noriega
I don't think this is a bug, since oslo.config will gather and merge
parameters such as service_providers across multiple files. I'm setting
this as "Invalid".

** Changed in: networking-l2gw
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599936

Title:
  l2gw provider config prevents *aas provider config from being loaded

Status in devstack:
  New
Status in networking-l2gw:
  Invalid
Status in neutron:
  In Progress

Bug description:
  The networking-l2gw devstack plugin stores its service_providers config
  in /etc/l2gw_plugin.ini and adds a --config-file option for it.
  As a result, neutron-server is invoked like the following:

  /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/midonet/midonet.ini --config-file
  /etc/neutron/l2gw_plugin.ini

  This breaks the *aas service providers because
  NeutronModule.service_providers finds the l2gw providers in
  cfg.CONF.service_providers.service_provider and thus doesn't look at the
  *aas service_providers config, which is in /etc/neutron/neutron_*aas.conf.
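
  As a minimal sketch of the merging behaviour mentioned in the comment
  above, assuming the standard oslo.config multi-valued service_provider
  option (the file names below are placeholders, not the actual devstack
  paths):

    from oslo_config import cfg

    # service_provider is a multi-valued option, so every --config-file
    # that sets it adds entries instead of overriding earlier ones.
    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.MultiStrOpt('service_provider', default=[])],
                       group='service_providers')

    # Each placeholder file would carry its own
    # [service_providers] service_provider = ... line.
    conf(['--config-file', 'neutron.conf',
          '--config-file', 'neutron_lbaas.conf',
          '--config-file', 'l2gw_plugin.ini'])

    print(conf.service_providers.service_provider)  # merged list from all files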

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1599936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590746] [NEW] SRIOV PF/VF allocation fails with NUMA aware flavor

2016-06-09 Thread Ricardo Noriega
Public bug reported:

Description
===
It seems that the main failure happens due to incorrect NUMA filtering in
the PCI allocation mechanism. The allocation is done according to the
instance NUMA topology, but this is not always correct: in particular,
when a user selects hw:numa_nodes=1, the VM should take resources from
any single NUMA node, not from one specific node.
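
A toy illustration of the behaviour described above (this is not nova
code, just a sketch of the filtering problem, with made-up PCI
addresses):

  # Free PCI devices per host NUMA node.
  free_devices = [
      {'address': '0000:05:00.0', 'numa_node': 0},
      {'address': '0000:81:00.0', 'numa_node': 1},
      {'address': '0000:81:00.1', 'numa_node': 1},
  ]

  def candidates(devices, instance_cell):
      # Filtering strictly by the cell already chosen for the instance
      # rejects devices that are free on the other cell, even though
      # hw:numa_nodes=1 only asks for "some single node".
      return [d for d in devices if d['numa_node'] == instance_cell]

  print(candidates(free_devices, instance_cell=0))  # only one device considered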


Steps to reproduce
==

Create nova flavor with NUMA awareness, CPU pinning, Huge pages, etc:

#  nova flavor-create prefer_pin_1 auto 2048 20 1
#  nova flavor-key prefer_pin_1 set  hw:numa_nodes=1
#  nova flavor-key prefer_pin_1 set  hw:mem_page_size=1048576
#  nova flavor-key prefer_pin_1 set hw:numa_mempolicy=strict
#  nova flavor-key prefer_pin_1 set hw:cpu_policy=dedicated
#  nova flavor-key prefer_pin_1 set hw:cpu_thread_policy=prefer

Then instantiate VMs with direct-physical neutron ports:

neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf1
nova boot pf1 --flavor prefer_pin_1 --image centos_udev --nic port-id=a0fe88f6-07cc-4c70-b702-1915e36ed728
neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf2
nova boot pf2 --flavor prefer_pin_1 --image centos_udev --nic port-id=b96de3ec-ef94-428b-96bc-dc46623a2427

Instantiation of a third VM fails, even though our environment has 4
NICs configured for allocation. With a regular flavor (m1.normal),
however, the instantiation works:

neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf3
nova boot pf3 --flavor 2 --image centos_udev --nic port-id=52caacfe-0324-42bd-84ad-9a54d80e8fbe
neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf4
nova boot pf4 --flavor 2 --image centos_udev --nic port-id=7335a9a6-82d0-4595-bb88-754678db56ef


Expected result
===

PCI passthrough (PFs and VFs) should work in an environment with
NUMATopologyFilter enabled.


Actual result
=

Checking NIC availability with NUMATopologyFilter is not working;
scheduling fails even though free NICs remain.


Environment
===

1 controller + 1 compute.

OpenStack Mitaka

Logs & Configs
==

See attachment

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "sosreport-nfv100.hi.inet-20160609134718.tar.xz"
   https://bugs.launchpad.net/bugs/1590746/+attachment/4680374/+files/sosreport-nfv100.hi.inet-20160609134718.tar.xz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590746

Title:
  SRIOV PF/VF allocation fails with NUMA aware flavor

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  It seems that the main failure happens due to incorrect NUMA filtering
  in the PCI allocation mechanism. The allocation is done according to
  the instance NUMA topology, but this is not always correct: in
  particular, when a user selects hw:numa_nodes=1, the VM should take
  resources from any single NUMA node, not from one specific node.

  
  Steps to reproduce
  ==

  Create nova flavor with NUMA awareness, CPU pinning, Huge pages, etc:

  #  nova flavor-create prefer_pin_1 auto 2048 20 1
  #  nova flavor-key prefer_pin_1 set  hw:numa_nodes=1
  #  nova flavor-key prefer_pin_1 set  hw:mem_page_size=1048576
  #  nova flavor-key prefer_pin_1 set hw:numa_mempolicy=strict
  #  nova flavor-key prefer_pin_1 set hw:cpu_policy=dedicated
  #  nova flavor-key prefer_pin_1 set hw:cpu_thread_policy=prefer

  Then instantiate VMs with direct-physical neutron ports:

  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf1
  nova boot pf1 --flavor prefer_pin_1 --image centos_udev --nic port-id=a0fe88f6-07cc-4c70-b702-1915e36ed728
  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf2
  nova boot pf2 --flavor prefer_pin_1 --image centos_udev --nic port-id=b96de3ec-ef94-428b-96bc-dc46623a2427

  Instantiation of a third VM fails, even though our environment has 4
  NICs configured for allocation. With a regular flavor (m1.normal),
  however, the instantiation works:

  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf3
  nova boot pf3 --flavor 2 --image centos_udev --nic port-id=52caacfe-0324-42bd-84ad-9a54d80e8fbe
  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf4
  nova boot pf4 --flavor 2 --image centos_udev --nic port-id=7335a9a6-82d0-4595-bb88-754678db56ef

  
  Expected result
  ===

  PCI passthrough (PFs and VFs) should work in an environment with
  NUMATopologyFilter enabled.

  
  Actual result
  =

  Checking NIC availability with NUMATopologyFilter is not working;
  scheduling fails even though free NICs remain.

  
  Environment
  ===

  1 controller + 1 compute.

  OpenStack Mitaka

  Logs & Configs
  ==

  See attachment

To manage notifications about this 

[Yahoo-eng-team] [Bug 1578150] Re: hw:cpu_thread_policy=isolated allows allocation of siblings

2016-05-04 Thread Ricardo Noriega
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: nova-solver-scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578150

Title:
  hw:cpu_thread_policy=isolated allows allocation of siblings

Status in OpenStack Compute (nova):
  New

Bug description:
  According to description of cpu_thread_policy:

  ``isolate``: The host must not have an SMT architecture or must emulate
  a non-SMT architecture. If the host does not have an SMT architecture,
  each vCPU is placed on a different core as expected. If the host does
  have an SMT architecture - that is, one or more cores have thread
  siblings - then each vCPU is placed on a different physical core. No
  vCPUs from other guests are placed on the same core. All but one thread
  sibling on each utilized core is therefore guaranteed to be unusable.

  Having 20 threads available to allocate 10 isolated vCPUs:

  nova flavor-create isolated auto 1024 10 5
  nova flavor-key isolated set hw:cpu_policy=dedicated
  nova flavor-key isolated set hw:cpu_thread_policy=isolate
  nova flavor-key isolated set hw:numa_nodes=1
  nova boot testIso1 --flavor isolated --image cirros --nic net-id=$NET_ID
  nova boot testIso2 --flavor isolated --image cirros --nic net-id=$NET_ID

  
  # virsh vcpupin 2
  VCPU: CPU Affinity
  --
 0: 10
 1: 16
 2: 8
 3: 18
 4: 2

  
  # virsh vcpupin 3
  VCPU: CPU Affinity
  --
 0: 17
 1: 3
 2: 11
 3: 9
 4: 19

  
  Checking CPU topology:

  NUMANode L#0 (P#0 32GB)
  Socket L#0 + L3 L#0 (15MB)
L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
  PU L#0 (P#0)
  PU L#1 (P#12)
L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
  PU L#2 (P#2)
  PU L#3 (P#14)
L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
  PU L#4 (P#4)
  PU L#5 (P#16)
L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
  PU L#6 (P#6)
  PU L#7 (P#18)
L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
  PU L#8 (P#8)
  PU L#9 (P#20)
L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
  PU L#10 (P#10)
  PU L#11 (P#22)

NUMANode L#1 (P#1 32GB) + Socket L#1 + L3 L#1 (15MB)
  L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
PU L#12 (P#1)
PU L#13 (P#13)
  L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
PU L#14 (P#3)
PU L#15 (P#15)
  L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
PU L#16 (P#5)
PU L#17 (P#17)
  L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
PU L#18 (P#7)
PU L#19 (P#19)
  L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
PU L#20 (P#9)
PU L#21 (P#21)
  L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
PU L#22 (P#11)
PU L#23 (P#23)

  Now we have allocated 10 vCPUs in an isolated way (note that threads
  0,12 and 1,13 are not available), which consumes all usable cores.

  If we boot a VM with only one vCPU using the isolate policy, it fails
  with an error (the expected behaviour, since no free core remains):

  nova flavor-create iso auto 1024 10 1
  nova flavor-key iso set hw:cpu_policy=dedicated
  nova flavor-key iso set hw:cpu_thread_policy=isolate
  nova flavor-key iso set hw:numa_nodes=1
  nova boot testI --flavor iso --image cirros --nic net-id=$NET_ID

  If we boot a VM with one vCPU with m1.tiny flavor, it will be allowed:

  nova boot test --flavor 1 --image cirros --nic net-id=$NET_ID

  # nova list
  +--------------------------------------+----------+--------+------------+-------------+-----------------+
  | ID                                   | Name     | Status | Task State | Power State | Networks        |
  +--------------------------------------+----------+--------+------------+-------------+-----------------+
  | a669fbca-0607-44b5-b167-da49edf0276b | test     | ACTIVE | -          | Running     | om=192.168.1.17 |
  | 783a013c-78b8-4b66-89d9-eaab4e4d0ade | testI    | ERROR  | -          | NOSTATE     |                 |
  | 66e4a64d-b55e-4b37-94b9-c7330118467e | testIso1 | ACTIVE | -          | Running     | om=192.168.1.15 |
  | cd5e51b3-de30-4765-b407-a707385cb45c | testIso2 | ACTIVE | -          | Running     | om=192.168.1.16 |
  +--------------------------------------+----------+--------+------------+-------------+-----------------+

  # virsh vcpupin 4
  VCPU: CPU Affinity
  --
 0: 2-11,14-23

  So the non-pinned VM's vCPU is allowed to float over the pinned CPUs and
  their thread siblings, i.e. the vCPUs are not properly isolated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1578150/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1578155] [NEW] 'hw:cpu_thread_policy=prefer' misbehaviour

2016-05-04 Thread Ricardo Noriega
Public bug reported:

Description
===

'hw:cpu_thread_policy=prefer' correctly allocates vCPUs in pairs of
sibling threads. An odd number of vCPUs is allocated as pairs plus a
single thread, and that single thread should not be isolated. So 20
available threads should be able to hold 4 VMs of 5 vCPUs each, yet
booting the third VM gives an error.
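
A minimal sketch of the expected accounting (illustrative only, not
nova's placement code):

  # With the prefer policy, 5 vCPUs map to 2 sibling pairs plus one
  # lone vCPU, so each guest needs 5 threads and wastes no core.
  vcpus_per_vm = 5
  pairs, single = divmod(vcpus_per_vm, 2)   # 2 pairs + 1 single
  threads_per_vm = pairs * 2 + single       # 5 threads per guest
  free_threads = 20
  print(free_threads // threads_per_vm)     # 4 guests should fit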

Steps to reproduce
==

1.- Creating a flavor:

nova flavor-create pinning auto 1024 10 5
nova flavor-key pinning set hw:cpu_policy=dedicated
nova flavor-key pinning set hw:cpu_thread_policy=prefer
nova flavor-key pinning set hw:numa_nodes=1

2.- Booting up simple VMs:

nova boot testPin1 --flavor pinning --image cirros --nic net-id=$NET_ID

In my setup, I have 20 available threads:

  NUMANode L#0 (P#0 32GB)
Socket L#0 + L3 L#0 (15MB)
  L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
PU L#0 (P#0)
PU L#1 (P#12)
  L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
PU L#2 (P#2)
PU L#3 (P#14)
  L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
PU L#4 (P#4)
PU L#5 (P#16)
  L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
PU L#6 (P#6)
PU L#7 (P#18)
  L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
PU L#8 (P#8)
PU L#9 (P#20)
  L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
PU L#10 (P#10)
PU L#11 (P#22)

  NUMANode L#1 (P#1 32GB) + Socket L#1 + L3 L#1 (15MB)
L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
  PU L#12 (P#1)
  PU L#13 (P#13)
L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
  PU L#14 (P#3)
  PU L#15 (P#15)
L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
  PU L#16 (P#5)
  PU L#17 (P#17)
L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
  PU L#18 (P#7)
  PU L#19 (P#19)
L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
  PU L#20 (P#9)
  PU L#21 (P#21)
L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
  PU L#22 (P#11)
  PU L#23 (P#23)

Using cpu_thread_policy=prefer, the behaviour is correct for the first
two VMs: their 5 vCPUs are allocated as two sibling pairs plus a single
thread.

[root@nfvsdn-04 ~(keystone_admin)]# virsh vcpupin 2
VCPU: CPU Affinity
--
   0: 10
   1: 22
   2: 16
   3: 4
   4: 8

[root@nfvsdn-04 ~(keystone_admin)]# virsh vcpupin 3
VCPU: CPU Affinity
--
   0: 17
   1: 5
   2: 3
   3: 15
   4: 11

However, even though there are enough threads to allocate another 2 VMs
with the same flavor, I get the following error booting up the third VM:

INFO nova.filters Filtering removed all hosts for the request with
instance ID 'cbb53e29-a7da-4c14-a3ad-4fb3aa04f101'. Filter results:
['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1,
end: 1)', 'RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1,
end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)',
'ImagePropertiesFilter: (start: 1, end: 1)', 'CoreFilter: (start: 1,
end: 1)', 'NUMATopologyFilter: (start: 1, end: 0)']


There should be enough room for 4 VMs allocated with the
cpu_thread_policy=prefer flavor.

Expected result
===

To have 4 VMs up with flavor 'pinning'.

Actual result
=

3rd VM fails at scheduling.

Environment
==

All-in-one environment.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cpu prefer thread

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578155

Title:
  'hw:cpu_thread_policy=prefer' misbehaviour

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  'hw:cpu_thread_policy=prefer' correctly allocates vCPUs in pairs of
  sibling threads. An odd number of vCPUs is allocated as pairs plus a
  single thread, and that single thread should not be isolated. So 20
  available threads should be able to hold 4 VMs of 5 vCPUs each, yet
  booting the third VM gives an error.

  Steps to reproduce
  ==

  1.- Creating a flavor:

  nova flavor-create pinning auto 1024 10 5
  nova flavor-key pinning set hw:cpu_policy=dedicated
  nova flavor-key pinning set hw:cpu_thread_policy=prefer
  nova flavor-key pinning set hw:numa_nodes=1

  2.- Booting up simple VMs:

  nova boot testPin1 --flavor pinning --image cirros --nic net-id=$NET_ID

  In my setup, I have 20 available threads:

NUMANode L#0 (P#0 32GB)
  Socket L#0 + L3 L#0 (15MB)
L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
  PU L#0 (P#0)
  PU L#1 (P#12)
L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
  PU L#2 (P#2)
  PU L#3 (P#14)
L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2