[Yahoo-eng-team] [Bug 1816507] [NEW] In the import keypair form, check if the name already exists

2019-02-18 Thread pengyuesheng
Public bug reported:

In the import keypair form, check if the name already exists.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1816507

Title:
  In the import keypair form, check if the name already exists

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the import keypair form, check if the name already exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1816507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1814143] Re: Bump pyroute2 version to 0.5.3

2019-02-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/634279
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=5fff6e3b9429f8e7216e25c8e83b1f3af0661261
Submitter: Zuul
Branch: master

commit 5fff6e3b9429f8e7216e25c8e83b1f3af0661261
Author: Rodolfo Alonso Hernandez 
Date:   Thu Jan 31 17:05:10 2019 +

Bump pyroute2 version to 0.5.3

Bump pyroute2 version to 0.5.3 in order to retrieve the latest updates
and features.

A code refactor and reorganization was done between version 0.5.1 and
0.5.2. In order to enforce the new code structure, the version should
be bumped to the last stable one.

Change-Id: Ia75186570e7a320a3fbdf35bd01ec43dc071f6e8
Closes-Bug: #1814143


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1814143

Title:
  Bump pyroute2 version to 0.5.3

Status in neutron:
  Fix Released

Bug description:
  Bump pyroute2 version to 0.5.3 in order to retrieve the latest updates
  and features.

  A code refactor and reorganization was done between version 0.5.1 and
  0.5.2. In order to enforce the new code structure, the version should
  be bumped to the last stable one.

  Example of error using version 0.5.3 structure with lowest accepted
  version 0.4.21:
  http://logs.openstack.org/85/625685/7/check/openstack-tox-lower-constraints/55bfeb0/job-output.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1814143/+subscriptions



[Yahoo-eng-team] [Bug 1816502] [NEW] Using a subnetpool to create a subnet fails when the min_prefixlen of the subnetpool is not set

2019-02-18 Thread wangwei
Public bug reported:

Creating a subnet from a subnetpool fails when the min_prefixlen of the
subnetpool is not set.

[root@EXTENV-10-254-8-11 ~]# neutron subnetpool-create ww_v4_subnetpool1 --pool-prefix 172.1.0.0/16
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnetpool:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| address_scope_id  |                                      |
| created_at        | 2019-02-19T02:07:29Z                 |
| default_prefixlen | 8                                    |
| default_quota     |                                      |
| description       |                                      |
| id                | cc490736-d912-4ccd-9919-cfd4f92f34d0 |
| ip_version        | 4                                    |
| is_default        | False                                |
| max_prefixlen     | 32                                   |
| min_prefixlen     | 8                                    |
| name              | ww_v4_subnetpool1                    |
| prefixes          | 172.1.0.0/16                         |
| project_id        | f0561ceca7874b188e43266c60a65128     |
| revision_number   | 0                                    |
| shared            | False                                |
| tags              |                                      |
| tenant_id         | f0561ceca7874b188e43266c60a65128     |
| updated_at        | 2019-02-19T02:07:29Z                 |
+-------------------+--------------------------------------+
[root@EXTENV-10-254-8-11 ~]# neutron subnet-create --subnetpool ww_v4_subnetpool1 ww_v4_net1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Failed to allocate subnet: Insufficient prefix space to allocate subnet size /8.
Neutron server returns request_ids: ['req-63d8430c-fa6e-4113-a59e-62a2bd97bb7e']

I think when min_prefixlen is not set, it would be better to default it
to the prefixlen (16) of the pool_prefix (172.1.0.0/16). And when
min_prefixlen is set (as in the following test result), it should be
validated to be no smaller than the prefixlen of the pool prefix.

[root@EXTENV-10-254-8-11 ~]# neutron subnetpool-create --min-prefixlen 16 --pool-prefix 172.2.0.0/24 ww_v4_subnetpool2
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new subnetpool:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| address_scope_id  |                                      |
| created_at        | 2019-02-18T09:33:12Z                 |
| default_prefixlen | 16                                   |
| default_quota     |                                      |
| description       |                                      |
| id                | 9da2fc9c-dade-4399-a4a9-fd22df45d3b7 |
| ip_version        | 4                                    |
| is_default        | False                                |
| max_prefixlen     | 32                                   |
| min_prefixlen     | 16                                   |
| name              | ww_v4_subnetpool2                    |
| prefixes          | 172.2.0.0/24                         |
| project_id        | f0561ceca7874b188e43266c60a65128     |
| revision_number   | 0                                    |
| shared            | False                                |
| tags              |                                      |
| tenant_id         | f0561ceca7874b188e43266c60a65128     |
| updated_at        | 2019-02-18T09:33:12Z                 |
+-------------------+--------------------------------------+
[root@EXTENV-10-254-8-11 ~]#
[root@EXTENV-10-254-8-11 ~]# neutron subnet-create --subnetpool ww_v4_subnetpool2 ww_v4_net1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Failed to allocate subnet: Insufficient prefix space to allocate subnet size /16.
Neutron server returns request_ids: ['req-09744170-b8fb-42e9-ba77-1d63fa75de29']
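The defaulting and validation the reporter proposes can be sketched with the stdlib ipaddress module. This is a hedged illustration only: the helper name is invented and this is not neutron's actual allocator code.

```python
import ipaddress

def effective_min_prefixlen(pool_prefixes, requested_min=None):
    """Derive min_prefixlen for a subnetpool (illustrative, not neutron code).

    If the caller did not set min_prefixlen, default it to the shortest
    prefix length actually present in the pool, so allocating a
    pool-sized subnet can succeed.  If the caller did set it, reject a
    value smaller than the prefixlen of the smallest pool prefix.
    """
    pool_len = min(ipaddress.ip_network(p).prefixlen for p in pool_prefixes)
    if requested_min is None:
        return pool_len
    if requested_min < pool_len:
        raise ValueError(
            "min_prefixlen /%d is shorter than the smallest pool prefix /%d"
            % (requested_min, pool_len))
    return requested_min

# The reporter's first pool: /16 prefix, min_prefixlen unset -> default to 16
print(effective_min_prefixlen(["172.1.0.0/16"]))  # 16
# The second pool: /24 prefix but min_prefixlen=16 -> should be rejected
try:
    effective_min_prefixlen(["172.2.0.0/24"], requested_min=16)
except ValueError as exc:
    print(exc)
```

With this behaviour, both failing commands above would either succeed (first case) or fail at subnetpool creation time with a clear message (second case) instead of failing later at subnet allocation.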

** Affects: neutron
 Importance: Undecided
 Assignee: wangwei (emma2019)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => wangwei (emma2019)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816502

Title:
  Using a subnetpool to create a subnet fails when the min_prefixlen of
  the subnetpool is not set

Status in neutron:
  In Progress

Bug description:
  Creating a subnet from a subnetpool fails when the min_prefixlen of
  the subnetpool is not set.

  [root@EXTENV-10-254-8-11 ~]# neutron subnetpool-create ww_v4_subnetpool1 
--pool-prefix

[Yahoo-eng-team] [Bug 1816498] [NEW] In the create keypair form, an error should be reported when only a space is entered

2019-02-18 Thread pengyuesheng
Public bug reported:

In the create keypair form, an error should be reported when only a
space is entered.
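The requested check can be sketched as a small validation helper. This is an illustration only: the function name is invented and it is not Horizon's actual form code.

```python
def clean_keypair_name(raw_name):
    """Reject key pair names that are empty or whitespace-only
    (illustrative sketch of the requested validation, not Horizon's
    actual form code)."""
    name = raw_name.strip()
    if not name:
        raise ValueError("Key pair name must contain at least one "
                         "non-whitespace character.")
    return name

print(clean_keypair_name("  my-key  "))  # my-key
```

A real Horizon fix would hook equivalent logic into the form's field validation so the user sees an inline error instead of a server-side failure.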

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1816498

Title:
  In the create keypair form, an error should be reported when only a
  space is entered

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the create keypair form, an error should be reported when only a
  space is entered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1816498/+subscriptions



[Yahoo-eng-team] [Bug 1798475] Re: Fullstack test test_ha_router_restart_agents_no_packet_lost failing

2019-02-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/627285
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=5b7d444b3176dd3f8bf166d332781ac93670a51d
Submitter: Zuul
Branch: master

commit 5b7d444b3176dd3f8bf166d332781ac93670a51d
Author: LIU Yulong 
Date:   Tue Dec 25 17:45:05 2018 +0800

Not set the HA port down at regular l3-agent restart

If the l3-agent was restarted by a regular action, such as a config
change, package upgrade, or manual service restart, we should not set
the HA port down, unless the physical host was rebooted, i.e. the VRRP
processes were all terminated.

This patch adds a new RPC call during l3-agent init: it first retrieves
the HA router count, then compares the VRRP process (keepalived) count
and the 'neutron-keepalived-state-change' process count with the
hosting router count. If the counts match, the action that sets HA
ports to the 'DOWN' state is no longer triggered.

Closes-Bug: #1798475
Change-Id: I5e2bb64df0aaab11a640a798963372c8d91a06a8
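The heuristic the commit message describes can be sketched as follows. The names are illustrative, not the actual neutron code.

```python
def should_reset_ha_ports(ha_router_count, keepalived_count,
                          state_change_count):
    """Sketch of the restart heuristic above (illustrative, not the
    actual neutron implementation): if every hosted HA router still has
    its keepalived and neutron-keepalived-state-change processes, the
    agent restart was "regular" and HA ports should be left alone.
    Otherwise (e.g. after a host reboot) fall back to the old behaviour
    of setting HA ports DOWN."""
    survived = (keepalived_count == ha_router_count and
                state_change_count == ha_router_count)
    return not survived

# Regular agent restart: all VRRP processes survived -> leave ports up.
print(should_reset_ha_ports(3, 3, 3))  # False
# Host reboot: the processes are gone -> set HA ports DOWN as before.
print(should_reset_ha_ports(3, 0, 0))  # True
```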


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1798475

Title:
  Fullstack test test_ha_router_restart_agents_no_packet_lost failing

Status in neutron:
  Fix Released

Bug description:
  Found at least 4 times recently:

  
http://logs.openstack.org/97/602497/5/gate/neutron-fullstack/b8ba2f9/logs/testr_results.html.gz
  
http://logs.openstack.org/90/610190/2/gate/neutron-fullstack/1f633ed/logs/testr_results.html.gz
  
http://logs.openstack.org/52/608052/1/gate/neutron-fullstack/6d36706/logs/testr_results.html.gz
  
http://logs.openstack.org/48/609748/1/gate/neutron-fullstack/f74a133/logs/testr_results.html.gz

  
  It looks like some packet loss sometimes occurs during L3 agent
  restart, which causes the failure. We need to investigate that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1798475/+subscriptions



[Yahoo-eng-team] [Bug 1804521] Re: Mapping API doesn't use default roles

2019-02-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/619614
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=e94dff934a07aabfce5cf23943cb338b07093912
Submitter: Zuul
Branch: master

commit e94dff934a07aabfce5cf23943cb338b07093912
Author: Lance Bragstad 
Date:   Thu Nov 22 16:09:43 2018 +

Update mapping policies for system admin

This change makes the policy definitions for admin mapping operations
consistent with the other mapping policies. Subsequent patches will
incorporate:

 - testing for domain users
 - testing for project users

Change-Id: Iad665112c73de41e2c1727a557fe5255e89b3fb6
Related-Bug: 1804519
Closes-Bug: 1804521


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1804521

Title:
  Mapping API doesn't use default roles

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In Rocky, keystone implemented support to ensure at least three
  default roles were available [0]. The federated mapping API doesn't
  incorporate these defaults into its default policies [1], but it
  should.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/mapping.py?id=fb73912d87b61c419a86c0a9415ebdcf1e186927

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1804521/+subscriptions



[Yahoo-eng-team] [Bug 1816489] [NEW] Functional test neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase. test_ha_router_lifecycle failing

2019-02-18 Thread Slawek Kaplonski
Public bug reported:

Functional test
neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.
test_ha_router_lifecycle is failing from time to time.

Example of failure: http://logs.openstack.org/68/623268/14/gate/neutron-functional-python27/4dc7fb8/logs/testr_results.html.gz

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%2081%2C%20in%20test_ha_router_lifecycle%5C%22

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816489

Title:
  Functional test
  neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.
  test_ha_router_lifecycle failing

Status in neutron:
  Confirmed

Bug description:
  Functional test
  neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.
  test_ha_router_lifecycle is failing from time to time.

  Example of failure: http://logs.openstack.org/68/623268/14/gate/neutron-functional-python27/4dc7fb8/logs/testr_results.html.gz

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%2081%2C%20in%20test_ha_router_lifecycle%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816489/+subscriptions



[Yahoo-eng-team] [Bug 1815758] Re: Error in ip_lib.get_devices_info() retrieving veth interface info

2019-02-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/636652
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=05644f79add6ce323e45676a7137f77d746877ea
Submitter: Zuul
Branch: master

commit 05644f79add6ce323e45676a7137f77d746877ea
Author: Rodolfo Alonso Hernandez 
Date:   Wed Feb 13 15:22:35 2019 +

Retrieve devices with link not present

In ip_lib.get_devices_info(), privileged.get_link_devices() can return
devices whose links are not present in this namespace or not listed. In
this situation, get_devices_info() will always try to find the device
to set the parameter "parent_name", which triggers an exception.

This patch solves the issue by avoiding the population of "parent_name"
if the link device is not present in the devices list.

Change-Id: Ic5c7d9008a11da5c406dc383cfdae2892a3118d8
Closes-Bug: #1815758


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815758

Title:
  Error in ip_lib.get_devices_info() retrieving veth interface info

Status in neutron:
  Fix Released

Bug description:
  In ip_lib.get_devices_info(), if the device retrieved is one of the
  interfaces of a veth pair and the other one is created in other
  namespace, the information of the second interface won't be available
  in the list of interfaces of the first interface namespace. Because of
  this, is not possible to assign the "parent_name" information in the
  returned dict.

  By default, if the interface is a veth pair, this key shouldn't be
  populated.
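The fixed behaviour can be sketched as follows. The dict shapes and the function name are illustrative; the real code operates on pyroute2 device dicts inside neutron's ip_lib.

```python
def attach_parent_names(devices):
    """Populate 'parent_name' only when the peer link is present in the
    retrieved device list (illustrative sketch of the fix described
    above, not neutron's actual ip_lib code)."""
    by_index = {d['index']: d['name'] for d in devices}
    for device in devices:
        link = device.get('link')
        # A veth peer created in another namespace will not appear in
        # this listing; leave 'parent_name' unset in that case instead
        # of raising.
        if link is not None and link in by_index:
            device['parent_name'] = by_index[link]
    return devices

devices = [
    {'index': 2, 'name': 'veth0', 'link': 3},   # peer listed below
    {'index': 3, 'name': 'veth1', 'link': 2},
    {'index': 4, 'name': 'veth2', 'link': 99},  # peer in another namespace
]
attach_parent_names(devices)
print(devices[0].get('parent_name'), devices[2].get('parent_name'))  # veth1 None
```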

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815758/+subscriptions



[Yahoo-eng-team] [Bug 1816485] [NEW] [rfe] change neutron process names to match their role

2019-02-18 Thread Doug Wiegley
Public bug reported:

See the commit message description here:
https://review.openstack.org/#/c/637019/

** Affects: neutron
 Importance: Undecided
 Assignee: Doug Wiegley (dougwig)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816485

Title:
  [rfe] change neutron process names to match their role

Status in neutron:
  In Progress

Bug description:
  See the commit message description here:
  https://review.openstack.org/#/c/637019/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816485/+subscriptions



[Yahoo-eng-team] [Bug 1815989] Re: OVS drops RARP packets sent by QEMU upon live-migration, causing up to 40s ping pause in Rocky

2019-02-18 Thread Brian Haley
I've added the os-vif component based on Sean's comment.

** Also affects: os-vif
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815989

Title:
  OVS drops RARP packets sent by QEMU upon live-migration, causing up
  to 40s ping pause in Rocky

Status in neutron:
  New
Status in os-vif:
  New

Bug description:
  This issue is well known, and there were previous attempts to fix it,
  like this one

  https://bugs.launchpad.net/neutron/+bug/1414559

  
  This issue still exists in Rocky and gets worse. In Rocky, nova
  compute, nova libvirt and the neutron OVS agent all run inside
  containers.

  So far the only simple fix I have is to increase the number of RARP
  packets QEMU sends after live-migration from 5 to 10. For
  completeness, the nova change (not merged) proposed in the
  above-mentioned activity does not work.

  I am creating this ticket hoping to get up-to-date (for Rocky and
  onwards) expert advice on how to fix this in nova/neutron.

  
  For the record, below are the timestamps in my test between the
  neutron OVS agent "activating" the VM port and the RARP packets seen
  by tcpdump on the compute node. 10 RARP packets are sent by
  (recompiled) QEMU; 7 are seen by tcpdump, and the second-to-last
  packet barely made it through.

  openvswitch-agent.log:

  2019-02-14 19:00:13.568 73453 INFO
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
  [req-26129036-b514-4fa0-a39f-a6b21de17bb9 - - - - -] Port
  57d0c265-d971-404d-922d-963c8263e6eb updated. Details: {'profile': {},
  'network_qos_policy_id': None, 'qos_policy_id': None,
  'allowed_address_pairs': [], 'admin_state_up': True, 'network_id':
  '1bf4b8e0-9299-485b-80b0-52e18e7b9b42', 'segmentation_id': 648,
  'fixed_ips': [

  {'subnet_id': 'b7c09e83-f16f-4d4e-a31a-e33a922c0bac', 'ip_address': 
'10.0.1.4'}
  ], 'device_owner': u'compute:nova', 'physical_network': u'physnet0', 
'mac_address': 'fa:16:3e:de:af:47', 'device': 
u'57d0c265-d971-404d-922d-963c8263e6eb', 'port_security_enabled': True, 
'port_id': '57d0c265-d971-404d-922d-963c8263e6eb', 'network_type': u'vlan', 
'security_groups': [u'5f2175d7-c2c1-49fd-9d05-3a8de3846b9c']}
  2019-02-14 19:00:13.568 73453 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-26129036-b514-4fa0-a39f-a6b21de17bb9 - - - - -] Assigning 4 as local vlan 
for net-id=1bf4b8e0-9299-485b-80b0-52e18e7b9b42

   
  tcpdump for rarp packets:

  [root@overcloud-ovscompute-overcloud-0 nova]# tcpdump -i any rarp -nev
  tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes

  19:00:10.788220 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46
  19:00:11.138216 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46
  19:00:11.588216 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46
  19:00:12.138217 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46
  19:00:12.788216 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46
  19:00:13.538216 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46
  19:00:14.388320 B fa:16:3e:de:af:47 ethertype Reverse ARP (0x8035), length 62: Ethernet (len 6), IPv4 (len 4), Reverse Request who-is fa:16:3e:de:af:47 tell fa:16:3e:de:af:47, length 46

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815989/+subscriptions



[Yahoo-eng-team] [Bug 1792493] Re: DVR and floating IPs broken in latest 7.0.0.0rc1?

2019-02-18 Thread LIU Yulong
According to Rodolfo's explanation, we can close this bug.

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1792493

Title:
  DVR and floating IPs broken in latest 7.0.0.0rc1?

Status in kolla-ansible:
  New
Status in neutron:
  Fix Released

Bug description:
  Kolla-Ansible 7.0.0.0rc1 with binary image build (since the source
  option is failing to build ceilometer images currently) on CentOS 7.5
  (latest updates)

  What worked previously does not appear to work anymore.  I'm not sure
  if this is due to an update in CentOS 7.5 or OVS or other at this
  stage, but compute nodes are no longer ARP replying to ARP requests
  for who has the floating IP.

  For testing, I looked for the IP assigned to the FIP namespace's fg
  interface (in my case, fg-ba492724-bd).  This appears to be an IP on
  the ext-net network, but is not the floating IP assigned to a VM.
  Let's call this A.A.A.A and the floating IP B.B.B.B.

  I can tcpdump traffic on the physical port of the compute node and see
  the ARP requests for both A.A.A.A and B.B.B.B with respective pings
  from the Internet, but no ARP replies.

  I have attached a diagram showing, what I believe to be, the correct
  path for the packets.

  There appears to be something broken between my two arrows.

  Since tcpdump is not installed in the openvswitch_vswitchd container,
  nor is ovs-tcpdump, I can't figure out how to mirror and sniff ports
  on the br-ex and br-int bridges, at least in a containerized instance
  of OVS.  If anyone knows a way to do this, I would really appreciate
  the help.

  I haven't found any issues in the OVS configuration (ovs-vsctl show) -
  which matches the attached diagram.

  Has anyone else had issues?

  OVS returns this version info:
  ovs-vsctl (Open vSwitch) 2.9.0
  DB Schema 7.15.1

  in case it helps.

  Eric

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1792493/+subscriptions



[Yahoo-eng-team] [Bug 1816454] [NEW] hw:mem_page_size is not respecting all documented values

2019-02-18 Thread Tyler Stachecki
Public bug reported:

Per the Rocky documentation for hugepages:
https://docs.openstack.org/nova/rocky/admin/huge-pages.html

2MB hugepages can be specified either as:
--property hw:mem_page_size=2Mb, or
--property hw:mem_page_size=2048

However, whenever I use the former notation (2Mb), conductor fails with
the misleading NUMA error below... whereas with the latter notation
(2048), allocation succeeds and the resulting instance is backed with
2MB hugepages on an x86_64 platform (as verified by checking
`grep HugePages_Free /proc/meminfo` before/after stopping the created
instance).

ERROR nova.scheduler.utils [req-de6920d5-829b-411c-acd7-1343f48824c9
cb2abbb91da54209a5ad93a845b4cc26 cb226ff7932d40b0a48ec129e162a2fb -
default default] [instance: 5b53d1d4-6a16-4db9-ab52-b267551c6528] Error
from last host: node1 (node FQDN-REDACTED): ['Traceback (most recent
call last):\n', '  File "/usr/lib/python3/dist-
packages/nova/compute/manager.py", line 2106, in
_build_and_run_instance\nwith rt.instance_claim(context, instance,
node, limits):\n', '  File "/usr/lib/python3/dist-
packages/oslo_concurrency/lockutils.py", line 274, in inner\nreturn
f(*args, **kwargs)\n', '  File "/usr/lib/python3/dist-
packages/nova/compute/resource_tracker.py", line 217, in
instance_claim\npci_requests, overhead=overhead, limits=limits)\n',
'  File "/usr/lib/python3/dist-packages/nova/compute/claims.py", line
95, in __init__\nself._claim_test(resources, limits)\n', '  File
"/usr/lib/python3/dist-packages/nova/compute/claims.py", line 162, in
_claim_test\n"; ".join(reasons))\n',
'nova.exception.ComputeResourcesUnavailable: Insufficient compute
resources: Requested instance NUMA topology cannot fit the given host
NUMA topology.\n', '\nDuring handling of the above exception, another
exception occurred:\n\n', 'Traceback (most recent call last):\n', '
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line
1940, in _do_build_and_run_instance\nfilter_properties,
request_spec)\n', '  File "/usr/lib/python3/dist-
packages/nova/compute/manager.py", line 2156, in
_build_and_run_instance\ninstance_uuid=instance.uuid,
reason=e.format_message())\n', 'nova.exception.RescheduledException:
Build of instance 5b53d1d4-6a16-4db9-ab52-b267551c6528 was re-scheduled:
Insufficient compute resources: Requested instance NUMA topology cannot
fit the given host NUMA topology.\n']
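Since '2Mb' and '2048' describe the same page size, the symptom suggests the unit suffix is not being normalized before the NUMA fitting check. A minimal sketch of such normalization to KiB follows; it is illustrative only and not nova's actual extra-spec parser.

```python
def normalize_page_size(value):
    """Normalize an hw:mem_page_size value to KiB (illustrative sketch
    of the documented behaviour; not nova's actual parser).

    Accepts a bare number, which is already KiB (e.g. '2048'), or a
    number with a unit suffix such as '2Mb', '2MB' or '1GB'.
    """
    units = {'kb': 1, 'mb': 1024, 'gb': 1024 * 1024}
    value = str(value).strip().lower()
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)  # no suffix: already KiB

# Both documented spellings should land on the same size:
print(normalize_page_size('2Mb'), normalize_page_size('2048'))  # 2048 2048
```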

Additional info:
I am using Debian testing (buster) and all OpenStack packages included therein.

$ dpkg -l | grep nova
ii  nova-common         2:18.1.0-2   all  OpenStack Compute - common files
ii  nova-compute        2:18.1.0-2   all  OpenStack Compute - compute node
ii  nova-compute-kvm    2:18.1.0-2   all  OpenStack Compute - compute node (KVM)
ii  python3-nova        2:18.1.0-2   all  OpenStack Compute - libraries
ii  python3-novaclient  2:11.0.0-2   all  client library for OpenStack Compute API - 3.x

$ dpkg -l | grep qemu
ii  ipxe-qemu               1.0.0+git-20161027.b991c67-1  all    PXE boot firmware - ROM images for qemu
ii  qemu-block-extra:amd64  1:3.1+dfsg-2+b1               amd64  extra block backend modules for qemu-system and qemu-utils
ii  qemu-kvm                1:3.1+dfsg-2+b1               amd64  QEMU Full virtualization on x86 hardware
ii  qemu-system-common      1:3.1+dfsg-2+b1               amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-data        1:3.1+dfsg-2                  all    QEMU full system emulation (data files)
ii  qemu-system-gui         1:3.1+dfsg-2+b1               amd64  QEMU full system emulation binaries (user interface and audio support)
ii  qemu-system-x86         1:3.1+dfsg-2+b1               amd64  QEMU full system emulation binaries (x86)
ii  qemu-utils              1:3.1+dfsg-2+b1               amd64  QEMU utilities

* I forced nova to allocate on the same hypervisor (node1) when checking
for the issue and can repeatedly allocate using a flavor which specifies
hugepages with hw:mem_page_size=2048 -- on the contrary, when using a
flavor which is otherwise unchanged except for the 2048/2Mb difference,
allocation repeatedly fails.

* I am using libvirt+kvm.  I don't think it matters, but I am using Ceph
as a storage backend and neutron in a very basic VLAN-based segmentation
configuration (no OVS or anything remotely fancy).

* I specified hw:numa_nodes='1' when creating the flavor... and all my
hypervisors only have 1 NUMA node, so allocation should always succeed

[Yahoo-eng-team] [Bug 1816443] [NEW] ovs agent can fail with oslo_config.cfg.NoSuchOptError

2019-02-18 Thread Attila Fazekas
Public bug reported:

The Neutron OVS agent in some cases has this in its log:

The rpc_response_max_timeout option is supposed to have a default
value.

I wonder whether the issue is related to
https://bugs.launchpad.net/cinder/+bug/1796759, where the
oslo.messaging change affected two other components.
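The failure mode can be illustrated with a tiny stand-in for oslo.config: reading an option that no loaded module has registered raises, exactly like the NoSuchOptError in the traceback below. The class and the default value of 600 are assumptions for illustration, not neutron's actual code.

```python
class OptRegistry:
    """Tiny stand-in for oslo.config's ConfigOpts, to illustrate the
    crash: accessing an option that was never registered raises
    (illustrative only; names and the default are made up)."""

    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default):
        # Registering the option (with its intended default) before any
        # code reads it is what prevents the error.
        self._opts.setdefault(name, default)

    def __getattr__(self, name):
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError('no such option %s' % name)

conf = OptRegistry()
# Without this registration, conf.rpc_response_max_timeout raises --
# which is the agent's crash.  The fix is to ensure the module that
# registers the option is imported by the agent before first use.
conf.register_opt('rpc_response_max_timeout', default=600)  # assumed default
print(conf.rpc_response_max_timeout)  # 600
```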


Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.143 30426 ERROR neutron.agent.common.async_process [-] Error received 
from [ovsdb-client monitor tcp:127.0.0.1:6640 Interface nam>
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 CRITICAL neutron [-] Unhandled error: 
oslo_config.cfg.NoSuchOptError: no such option rpc_response_max_timeout in 
group >
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron Traceback (most recent call last):
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2183, in 
__getattr__
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron return self._get(name)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2617, in _get
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron value, loc = self._do_get(name, group, 
namespace)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 
14:33:12.260 30426 ERROR neutron   File 
"/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2635, in 
_do_get
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     info = self._get_opt_info(name, group)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/usr/local/lib/python3.7/site-packages/oslo_config/cfg.py", line 2835, in _get_opt_info
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     raise NoSuchOptError(opt_name, group)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron oslo_config.cfg.NoSuchOptError: no such option rpc_response_max_timeout in group [DEFAULT]
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron During handling of the above exception, another exception occurred:
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron Traceback (most recent call last):
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     sys.exit(main())
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/opt/stack/neutron/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 20, in main
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     agent_main.main()
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", line 47, in main
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     mod.main()
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py", line 3>
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     'neutron.plugins.ml2.drivers.openvswitch.agent.'
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/usr/local/lib/python3.7/site-packages/os_ken/base/app_manager.py", line 370, in run_apps
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     hub.joinall(services)
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron   File "/usr/local/lib/python3.7/site-packages/os_ken/lib/hub.py", line 102, in joinall
Feb 18 14:33:12 f29-dev-02 neutron-openvswitch-agent[30426]: 2019-02-18 14:33:12.260 30426 ERROR neutron     t.wait()
Feb 18 14:33:12 f29-dev-02 neutron-openvswitc

[Yahoo-eng-team] [Bug 1816399] [NEW] The periodic task to clean up expired console_auth tokens is invalid

2019-02-18 Thread jiangyuhao
Public bug reported:

Description
===
On the compute node, the periodic task to clean up expired console_auth tokens is invalid: it cannot remove expired console auth tokens for this host.

Steps to reproduce
==
1. On the controller node, configure nova-novncproxy to use the database to store noVNC auth tokens:
enable_consoleauth=false

2. On the compute node, configure the VNC server proxyclient address and token_ttl:
server_proxyclient_address=10.43.203.225
token_ttl=60

3. Restart nova-compute and nova-novncproxy.

4. Use the nova command to get novncproxy_base_url and a token.

Expected result
===
The periodic task removes expired console auth tokens from the database.

Actual result
=

The periodic task has no effect; expired console auth tokens remain in the database.


Environment
===
1. Exact version of OpenStack you are running:
master

2. Which hypervisor did you use?
Libvirt + KVM

3. Which networking type did you use?
Neutron with OpenVSwitch

Logs & Configs
==
1. In the console_auth_tokens table, the host column's value is CONF.vnc.server_proxyclient_address:

def get_vnc_console(self, context, instance):
    def get_vnc_port_for_instance(instance_name):
        guest = self._host.get_guest(instance)

        xml = guest.get_xml_desc()
        xml_dom = etree.fromstring(xml)

        graphic = xml_dom.find("./devices/graphics[@type='vnc']")
        if graphic is not None:
            return graphic.get('port')
        # NOTE(rmk): We had VNC consoles enabled but the instance in
        # question is not actually listening for connections.
        raise exception.ConsoleTypeUnavailable(console_type='vnc')

    port = get_vnc_port_for_instance(instance.name)
    host = CONF.vnc.server_proxyclient_address

    return ctype.ConsoleVNC(host=host, port=port)


2. In the periodic task, the host value passed to the cleanup is the hostname (self.host):

@periodic_task.periodic_task(spacing=CONF.instance_delete_interval)
def _cleanup_expired_console_auth_tokens(self, context):
    """Remove expired console auth tokens for this host.

    Console authorization tokens and their connection data are stored
    in the database when a user asks for a console connection to an
    instance. After a time they expire. We periodically remove any
    expired tokens from the database.
    """
    # If the database backend isn't in use, don't bother looking for
    # expired tokens. The database backend is not supported for cells v1.
    if not CONF.cells.enable:
        objects.ConsoleAuthToken.\
            clean_expired_console_auths_for_host(context, self.host)
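To make the mismatch concrete, here is a minimal standalone sketch, not nova code: the table contents, hostname, and helper function are illustrative stand-ins. It shows why the cleanup deletes nothing: tokens are stored under CONF.vnc.server_proxyclient_address, while the periodic task filters on the compute service's hostname.

```python
# Standalone illustration (NOT nova code) of the host-key mismatch.
server_proxyclient_address = "10.43.203.225"  # what get_vnc_console() stores
compute_hostname = "compute-01"               # what the periodic task passes

# Stand-in for the console_auth_tokens table: host, token, expired flag.
console_auth_tokens = [
    {"host": server_proxyclient_address, "token": "abc", "expired": True},
    {"host": server_proxyclient_address, "token": "def", "expired": False},
]

def clean_expired_console_auths_for_host(rows, host):
    """Delete expired rows whose host column equals ``host``."""
    return [r for r in rows if not (r["expired"] and r["host"] == host)]

# Filtering on the hostname matches no rows, so nothing is deleted...
after_cleanup = clean_expired_console_auths_for_host(
    console_auth_tokens, compute_hostname)
assert len(after_cleanup) == 2  # the expired token survives

# ...whereas filtering on the key that was actually stored would work.
after_fixed = clean_expired_console_auths_for_host(
    console_auth_tokens, server_proxyclient_address)
assert len(after_fixed) == 1
```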

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816399

Title:
  The periodic task to clean up expired console_auth tokens is invalid

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  On the compute node, the periodic task to clean up expired console_auth tokens is invalid: it cannot remove expired console auth tokens for this host.

  Steps to reproduce
  ==
  1. On the controller node, configure nova-novncproxy to use the database to store noVNC auth tokens:
  enable_consoleauth=false

  2. On the compute node, configure the VNC server proxyclient address and token_ttl:
  server_proxyclient_address=10.43.203.225
  token_ttl=60

  3. Restart nova-compute and nova-novncproxy.

  4. Use the nova command to get novncproxy_base_url and a token.

  Expected result
  ===
  The periodic task removes expired console auth tokens from the database.

  Actual result
  =
  The periodic task has no effect; expired console auth tokens remain in the database.

  Environment
  ===
  1. Exact version of OpenStack you are running:
  master

  2. Which hypervisor did you use?
  Libvirt + KVM

  3. Which networking type did you use?
  Neutron with OpenVSwitch

  Logs & Configs
  ==
  1. In the console_auth_tokens table, the host column's value is CONF.vnc.server_proxyclient_address:

  def get_vnc_console(self, context, instance):
      def get_vnc_port_for_instance(instance_name):
          guest = self._host.get_guest(instance)

          xml = guest.get_xml_desc()
          xml_dom = etree.fromstring(xml)

          graphic = xml_dom.find("./devices/graphics[@type='vnc']")
          if graphic is not None:
              return graphic.get('port')
          # NOTE(rmk): We had VNC consoles enabled but the instance in
          # question is not actually listening for connections.
          raise exception.ConsoleTypeUnavailable(console_type='vnc')

      port = get_vnc_port_for_instance(instance.name)
      host = CONF.vnc.server_proxyclient_address

      return ctype.ConsoleVNC(host=host, port=port)

  2. In the periodic task, the host value passed to the cleanup is the hostname (self.host):

  @periodic_task.periodic_task

[Yahoo-eng-team] [Bug 1816395] [NEW] L2 Networking with SR-IOV enabled NICs in neutron

2019-02-18 Thread mohit.048
Public bug reported:

The link for the text "SR-IOV Passthrough For Networking" is broken; it should point to
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
instead of the current
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking/

This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 11.0.7.dev52 on 2019-02-12 18:51
SHA: 58025f12c93b59b004e07e3412ac7db519070516
Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/contributor/internals/sriov_nic_agent.rst
URL: 
https://docs.openstack.org/neutron/pike/contributor/internals/sriov_nic_agent.html

** Affects: neutron
 Importance: Undecided
 Assignee: mohit.048 (mohit.048)
 Status: New


** Tags: doc

** Changed in: neutron
 Assignee: (unassigned) => mohit.048 (mohit.048)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816395

Title:
  L2 Networking with SR-IOV enabled NICs in neutron

Status in neutron:
  New

Bug description:
  The link for the text "SR-IOV Passthrough For Networking" is broken; it should point to
  https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
  instead of the current
  https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking/

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 11.0.7.dev52 on 2019-02-12 18:51
  SHA: 58025f12c93b59b004e07e3412ac7db519070516
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/contributor/internals/sriov_nic_agent.rst
  URL: 
https://docs.openstack.org/neutron/pike/contributor/internals/sriov_nic_agent.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816395/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816393] [NEW] collect-logs should capture /etc/cloud and /var/lib/cloud artifacts

2019-02-18 Thread Chad Smith
Public bug reported:

Collect-logs should surface all artifacts that could contain data affecting the current boot:
 - /etc/cloud/ files
 - /var/lib/cloud files, including the parsed instance-data.json

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1816393

Title:
  collect-logs should capture /etc/cloud and /var/lib/cloud artifacts

Status in cloud-init:
  Confirmed

Bug description:
  Collect-logs should surface all artifacts that could contain data affecting the current boot:
   - /etc/cloud/ files
   - /var/lib/cloud files, including the parsed instance-data.json

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1816393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1809454] Re: [SRU] nova rbd auth fallback uses cinder user with libvirt secret

2019-02-18 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:17.0.7-0ubuntu2

---
nova (2:17.0.7-0ubuntu2) bionic; urgency=medium

  * d/p/ensure-rbd-auth-fallback-uses-matching-credentials.patch: Cherry-
picked from upstream to ensure ceph backend continues to work for upgrades
from pre-Ocata (LP: #1809454).

 -- Corey Bryant   Mon, 07 Jan 2019 14:54:42
-0500

** Changed in: nova (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1809454

Title:
  [SRU] nova rbd auth fallback uses cinder user with libvirt secret

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Bionic:
  Fix Released
Status in nova source package in Cosmic:
  Fix Released
Status in nova source package in Disco:
  Fix Released

Bug description:
  [Impact]
  From David Ames (thedac), originally posted to 
https://bugs.launchpad.net/charm-nova-compute/+bug/1671422/comments/25:

  Updating this bug. We may decide to move this elsewhere at some point.

  We have a deployment that was upgraded through to Pike, at which point
  it was noticed that nova instances with ceph-backed volumes would not
  start.

  The cinder key was manually added to the nova-compute nodes in /etc/ceph and registered with:
  sudo virsh secret-define --file /tmp/cinder.secret

  However, this did not resolve the problem. It appeared libvirt was
  trying to use a mixed pair of username and key: the cinder username
  with the nova-compute key.

  Looking at nova's code, it falls back to nova.conf when it does not have a secret_uuid from cinder, but it was not setting the username correctly:
  https://github.com/openstack/nova/blob/stable/pike/nova/virt/libvirt/volume/net.py#L74

  The following seems to mitigate this as a temporary fix on nova-
  compute until we can come up with a complete plan:

  https://pastebin.ubuntu.com/p/tGm7C7fpXT/

  diff --git a/nova/virt/libvirt/volume/net.py b/nova/virt/libvirt/volume/net.py
  index cec43ce93b..8b0148df0b 100644
  --- a/nova/virt/libvirt/volume/net.py
  +++ b/nova/virt/libvirt/volume/net.py
  @@ -71,6 +71,7 @@ class LibvirtNetVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
           else:
               LOG.debug('Falling back to Nova configuration for RBD auth '
                         'secret_uuid value.')
  +            conf.auth_username = CONF.libvirt.rbd_user
               conf.auth_secret_uuid = CONF.libvirt.rbd_secret_uuid
               # secret_type is always hard-coded to 'ceph' in cinder
               conf.auth_secret_type = netdisk_properties['secret_type']

  Apply to /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume/net.py

  We still need a migration plan to get from the topology with nova-
  compute directly related to ceph to the topology with cinder-ceph
  related to nova-compute using ceph-access which would populate
  cinder's secret_uuid.

  It is possible we will need to carry the patch for existing instances.
  It may be worth getting that upstream as master has the same problem.

  [Test Case]
  Upgrade a juju-deployed cloud with a ceph backend for nova and cinder from pre-Ocata to Ocata or above. Ensure that nova instances with ceph-backed volumes start successfully.

  [Regression Potential]
  The fix is minimal and will not be released in Ubuntu until it has been approved upstream.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1809454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1808456] Re: ceph backend reporting meaningless error when no space left

2019-02-18 Thread Abhishek Kekane
** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance
   Status: New => Invalid

** Changed in: glance-store
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1808456

Title:
  ceph backend reporting meaningless error when no space left

Status in Glance:
  Invalid
Status in glance_store:
  New

Bug description:
  When uploading an image with no space left in the ceph (rbd) backend, the client (such as glanceclient) receives a meaningless error:
  500 Internal Server Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

  
  steps to reproduce:
  -
  1. Prepare a ceph backend for glance and make the free space small enough, e.g. 10 MB.
  For simplicity, you can also modify ceph's code (the resize function) to raise errno.ENOSPC; that is what I did.
  2. Upload the image: glance image-create --name img2-ceph --visibility public --disk-format raw --container-format bare --progress --backend rbd --file /opt/stack/data/glance/images/d4ca8259-168b-42f5-a719-40038362ae8c

  
  logs
  -
  stack@ubuntu16vmliang:~$ glance image-create --name img2-ceph --visibility 
public --disk-format raw --container-format bare --progress --backend rbd 
--file /opt/stack/data/glance/images/d4ca8259-168b-42f5-a719-40038362ae8c
  > 
/usr/local/lib/python2.7/dist-packages/glanceclient/v2/shell.py(555)do_image_upload()
  -> backend = None
  (Pdb) c
  [=>] 100%
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2018-12-14T02:08:36Z |
  | disk_format  | raw  |
  | id   | 8c2e48f0-aafc-4744-95b6-fe0b6fbfe975 |
  | min_disk | 0|
  | min_ram  | 0|
  | name | img2-ceph|
  | os_hash_algo | None |
  | os_hash_value| None |
  | os_hidden| False|
  | owner| 3242a198f7044fcd9b756866ec296391 |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2018-12-14T02:08:36Z |
  | virtual_size | Not available|
  | visibility   | public   |
  +--+--+
  500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

  
  expected
  -
  The error message should be something like "Storage Full": rbd.py should raise glance_store.StorageFull, and this exception will be caught by notifier.py.

  Code snippet in notifier.py:

      except glance_store.StorageFull as e:
          msg = (_("Image storage media is full: %s") %
                 encodeutils.exception_to_unicode(e))
          _send_notification(notify_error, 'image.upload', msg)
          raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg)
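As a standalone sketch of the kind of translation suggested above (StorageFull here is a local stand-in for glance_store's exception, and resize() merely simulates the failing rbd call; neither is the real library code):

```python
import errno

class StorageFull(Exception):
    """Stand-in for glance_store.exceptions.StorageFull."""

def resize(image, size):
    # Simulate the rbd resize call failing with "no space left on device".
    raise OSError(errno.ENOSPC, "no space left on device")

def add_image_chunk(image, size):
    """Translate a low-level ENOSPC into the store-level StorageFull."""
    try:
        resize(image, size)
    except OSError as e:
        if e.errno == errno.ENOSPC:
            raise StorageFull() from e
        raise

try:
    add_image_chunk("img2-ceph", 10 * 1024 * 1024)
except StorageFull:
    # This is the exception notifier.py would catch and turn into
    # "Image storage media is full" instead of a bare HTTP 500.
    print("Image storage media is full")
```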

  After doing this, the expected behavior will be:
  stack@ubuntu16vmliang:~$ glance image-create --name img2-ceph --visibility 
public --disk-format raw --container-format bare --progress --backend rbd 
--file /opt/stack/data/glance/images/d4ca8259-168b-42f5-a719-40038362ae8c
  > 
/usr/local/lib/python2.7/dist-packages/glanceclient/v2/shell.py(555)do_image_upload()
  -> backend = None
  (Pdb) c
  [=>] 100%
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2018-12-14T01:41:36Z |
  | disk_format  | raw  |
  | id   | 8aefa92d-bd9c-4726-95ae-d8f698d7bc82 |
  | min_disk | 0|
  | min_ram  | 0|
  | name | img2-ceph|
  | os_hash_algo | None

[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2019-02-18 Thread s10
** Also affects: mistral
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in OpenStack Barbican Charm:
  Fix Released
Status in OpenStack heat charm:
  Triaged
Status in Cinder:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Triaged
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Released
Status in Glance:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  New
Status in neutron:
  Fix Released
Status in Panko:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the protocol and host of the original request so that
  the receiving service can construct URLs to the load balancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling; however, exactly how this is done
  depends on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware, which
  handles these headers and the newer RFC 7239 Forwarded header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.
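For illustration, a hand-rolled sketch of what such middleware does; this is not oslo.middleware's actual implementation, and the header handling shown is deliberately simplified. It rewrites the WSGI environ from the X-Forwarded-* headers so the application builds URLs that point at the load balancer:

```python
# Simplified illustration (NOT oslo.middleware's code) of proxy-header
# middleware: rewrite the WSGI environ so URL construction sees the
# scheme and host the client originally used at the load balancer.
from wsgiref.util import application_uri

def http_proxy_to_wsgi(app):
    def middleware(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        host = environ.get("HTTP_X_FORWARDED_HOST")
        if proto:
            environ["wsgi.url_scheme"] = proto
        if host:
            environ["HTTP_HOST"] = host
        return app(environ, start_response)
    return middleware

def app(environ, start_response):
    # The app naively builds a URL from the environ it sees.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [application_uri(environ).encode()]

# A request that reached a private worker over plain HTTP, but was
# originally made to the load balancer over HTTPS:
environ = {
    "wsgi.url_scheme": "http",
    "HTTP_HOST": "10.0.0.5:8080",
    "REQUEST_METHOD": "GET",
    "SCRIPT_NAME": "",
    "PATH_INFO": "/",
    "SERVER_PROTOCOL": "HTTP/1.1",
    "HTTP_X_FORWARDED_PROTO": "https",
    "HTTP_X_FORWARDED_HOST": "cloud.example.com",
}
body = b"".join(http_proxy_to_wsgi(app)(environ, lambda *a: None))
print(body.decode())  # https://cloud.example.com/
```

Without the middleware, the same app would return http://10.0.0.5:8080/, a URL clients behind the load balancer cannot reach.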

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1590608/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp