[Yahoo-eng-team] [Bug 1754782] [NEW] we skip critical scheduler filters when forcing the host on instance boot

2018-03-09 Thread Chris Friesen
Public bug reported:

When booting an instance it's possible to force it to be placed on a
specific host using the  "--availability-zone nova:host" syntax.

If you do this, the code at 
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L581
 will return early rather than call self.filter_handler.get_filtered_objects()

Based on discussions at the PTG with Dan Smith, the simplest solution
would be to create a flag similar to RUN_ON_REBUILD which would be
applied to the various scheduler filters in a manner analogous to how
rebuild is handled now.

Presumably we'd want to call something like this during the instance
boot code to ensure we hit the existing "if not check_type" at L581:

request_spec.scheduler_hints['_nova_check_type'] = ['build']


Then in the various critical filters (NUMATopologyFilter, for example, and
PciPassthroughFilter, and maybe some others like ComputeFilter) we could define
something like "RUN_ON_BUILD = True" to ensure that they run even when forcing
a host.
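
Below is a minimal, self-contained sketch of the proposed behaviour. It is
illustrative only: the RUN_ON_BUILD flag and the '_nova_check_type' hint come
from the proposal above, while the class and function names are made up and do
not come from the nova source.

class BaseFilter(object):
    # Hypothetical flag, analogous to RUN_ON_REBUILD: filters that set this to
    # True would still be run when the host is forced at boot time.
    RUN_ON_BUILD = False

    def host_passes(self, host, spec):
        return True


class NUMATopologyFilter(BaseFilter):
    RUN_ON_BUILD = True

    def host_passes(self, host, spec):
        # the real filter would check the NUMA topology request here
        return host.get('numa_fits', True)


class ComputeFilter(BaseFilter):
    RUN_ON_BUILD = True

    def host_passes(self, host, spec):
        return host.get('enabled', True)


def forced_host_passes(host, spec, all_filters):
    # When the host is forced, run only the filters flagged RUN_ON_BUILD
    # instead of skipping filtering entirely.
    check_type = spec.get('scheduler_hints', {}).get('_nova_check_type')
    if check_type == ['build']:
        all_filters = [f for f in all_filters if f.RUN_ON_BUILD]
    return all(f.host_passes(host, spec) for f in all_filters)


# Example: a forced host that cannot satisfy the NUMA request is now rejected.
spec = {'scheduler_hints': {'_nova_check_type': ['build']}}
host = {'enabled': True, 'numa_fits': False}
print(forced_host_passes(host, spec, [NUMATopologyFilter(), ComputeFilter()]))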

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1754782

Title:
  we skip critical scheduler filters when forcing the host on instance
  boot

Status in OpenStack Compute (nova):
  New

Bug description:
  When booting an instance it's possible to force it to be placed on a
  specific host using the  "--availability-zone nova:host" syntax.

  If you do this, the code at 
  
https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L581
 will return early rather than call self.filter_handler.get_filtered_objects()

  Based on discussions at the PTG with Dan Smith, the simplest solution
  would be to create a flag similar to RUN_ON_REBUILD which would be
  applied to the various scheduler filters in a manner analogous to how
  rebuild is handled now.

  Presumably we'd want to call something like this during the instance
  boot code to ensure we hit the existing "if not check_type" at L581:

  request_spec.scheduler_hints['_nova_check_type'] = ['build']

  
  Then in the various critical filters (NUMATopologyFilter, for example, and
  PciPassthroughFilter, and maybe some others like ComputeFilter) we could
  define something like "RUN_ON_BUILD = True" to ensure that they run even
  when forcing a host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1754782/+subscriptions



[Yahoo-eng-team] [Bug 1754770] [NEW] Duplicate iptables rule detected in Linuxbridge agent logs

2018-03-09 Thread Slawek Kaplonski
Public bug reported:

After patch [1], which should close issue [2], was merged, there are
warnings about "Duplicate iptables rule detected." in the Linuxbridge
neutron agent logs. An example of such warnings is at [3].

[1] https://review.openstack.org/#/c/525607/
[2] https://bugs.launchpad.net/neutron/+bug/1720205
[3] 
http://logs.openstack.org/07/525607/12/check/neutron-tempest-plugin-scenario-linuxbridge/09f04f9/logs/screen-q-agt.txt.gz?level=WARNING

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: linuxbridge sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754770

Title:
  Duplicate iptables rule detected in Linuxbridge agent logs

Status in neutron:
  Confirmed

Bug description:
  After patch [1], which should close issue [2], was merged, there are
  warnings about "Duplicate iptables rule detected." in the Linuxbridge
  neutron agent logs. An example of such warnings is at [3].

  [1] https://review.openstack.org/#/c/525607/
  [2] https://bugs.launchpad.net/neutron/+bug/1720205
  [3] 
http://logs.openstack.org/07/525607/12/check/neutron-tempest-plugin-scenario-linuxbridge/09f04f9/logs/screen-q-agt.txt.gz?level=WARNING

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1754770/+subscriptions



[Yahoo-eng-team] [Bug 1754767] [NEW] bad error message when /var/lib/glance/* have wrong rights

2018-03-09 Thread Thomas Goirand
Public bug reported:

Because of an error in the packaging of Glance in Debian, I had:

# openstack image create --container-format bare --disk-format qcow2 --file 
debian-testing-openstack-amd64.qcow2 debian-buster-amd64
410 Gone: Error in store configuration. Adding images to store is disabled. 
(HTTP N/A)

It took me more than one hour to figure out that /var/lib/glance/images
and /var/lib/glance/image-cache were owned by root:root instead of
glance:glance. After fixing this and restarting the daemon, it just
worked, of course.

This was really my fault, because I attempted to fix the Debian postinst
to stop using chown -R, but still... While this is a normal behavior,
the "Adding images to store is disabled." error message is really
deceptive. It made me think that my glance-{api,registry}.conf files
were wrong.

So, of course, that's only a wishlist bug. Please fix the error message
and make it nicer and less deceptive, at least in the logs (no need to
have the user see that the admin is silly).
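
A minimal sketch of the kind of check that would make this obvious in the
logs. It is purely illustrative; the function name, option handling, and log
text below are assumptions and are not taken from the glance source.

import logging
import os

LOG = logging.getLogger(__name__)


def check_store_writable(datadir):
    # Log a clear hint when the filesystem store directory is not writable by
    # the service user, instead of only the generic "Adding images to store
    # is disabled." message.
    if not os.access(datadir, os.W_OK | os.X_OK):
        LOG.error("Filesystem store directory %s is not writable by uid %d; "
                  "check its ownership and permissions.",
                  datadir, os.geteuid())
        return False
    return True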

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1754767

Title:
  bad error message when /var/lib/glance/* have wrong rights

Status in Glance:
  New

Bug description:
  Because of an error in the packaging of Glance in Debian, I had:

  # openstack image create --container-format bare --disk-format qcow2 --file 
debian-testing-openstack-amd64.qcow2 debian-buster-amd64
  410 Gone: Error in store configuration. Adding images to store is disabled. 
(HTTP N/A)

  It took me more than one hour to figure out that
  /var/lib/glance/images and /var/lib/glance/image-cache were owned by
  root:root instead of glance:glance. After fixing this and restarting
  the daemon, it just worked, of course.

  This was really my fault, because I attempted to fix the Debian
  postinst to stop using chown -R, but still... While this is a normal
  behavior, the "Adding images to store is disabled." error message is
  really deceptive. It made me think that my glance-{api,registry}.conf
  files were wrong.

  So, of course, that's only a wishlist bug. Please fix the error
  message and make it nicer and less deceptive, at least in the logs (no
  need to have the user see that the admin is silly).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1754767/+subscriptions



[Yahoo-eng-team] [Bug 1754723] [NEW] 'openstack user list' is not listing userid correctly in case of LDAP

2018-03-09 Thread Deepak Ghuge
Public bug reported:

The command 'openstack user list' is not listing proper user details
when keystone is configured with LDAP.

The user_id_attribute is set to uid, but the user listing shows hash-like
IDs instead.

This behavior is seen in the Pike release.

keystone.conf
[ldap]
user_id_attribute = uid
user_mail_attribute = mail
user_name_attribute = cn

The first column is ID; it should show the correct ID of the user from LDAP
based on 'user_id_attribute', but here it is showing a hash-like ID.

[root@a2n1 ~]# openstack user list --domain EXT_USER_DOMAIN
+------------------------------------------------------------------+---------+
| ID                                                               | Name    |
+------------------------------------------------------------------+---------+
| dfda96a70eec870fe0cc154778e4c527001984589d69a4d602666a756b5dd35f | userr   |
| 98d8c9a1f148c15f42c954b3f54a2117dfe5a1db90b977af395dce3731ec6271 | userrw  |
| ee70d65cd729d20655c4aa966490e9210dc99879e4e22205442957a4805558a2 | userr_1 |
+------------------------------------------------------------------+---------+

In Mitaka and earlier releases, the value of ID came from LDAP and was
correctly shown in the ID column of the 'openstack user list' output.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1754723

Title:
  'openstack user list' is not listing userid correctly in case of LDAP

Status in OpenStack Identity (keystone):
  New

Bug description:
  The command 'openstack user list' is not listing proper user details
  when keystone is configured with LDAP.

  The user_id_attribute is set to uid, but the user listing shows hash-like
  IDs instead.

  This behavior is seen in the Pike release.

  keystone.conf
  [ldap]
  user_id_attribute = uid
  user_mail_attribute = mail
  user_name_attribute = cn

  The first column is ID; it should show the correct ID of the user from
  LDAP based on 'user_id_attribute', but here it is showing a hash-like ID.

  [root@a2n1 ~]# openstack user list --domain EXT_USER_DOMAIN
  +------------------------------------------------------------------+---------+
  | ID                                                               | Name    |
  +------------------------------------------------------------------+---------+
  | dfda96a70eec870fe0cc154778e4c527001984589d69a4d602666a756b5dd35f | userr   |
  | 98d8c9a1f148c15f42c954b3f54a2117dfe5a1db90b977af395dce3731ec6271 | userrw  |
  | ee70d65cd729d20655c4aa966490e9210dc99879e4e22205442957a4805558a2 | userr_1 |
  +------------------------------------------------------------------+---------+

  In Mitaka and earlier releases, the value of ID came from LDAP and was
  correctly shown in the ID column of the 'openstack user list' output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1754723/+subscriptions



[Yahoo-eng-team] [Bug 1754716] [NEW] Disconnect volume on live migration source fails if initialize_connection doesn't return identical output

2018-03-09 Thread Matthew Booth
Public bug reported:

During live migration we update bdm.connection_info for attached volumes
in pre_live_migration to reflect the new connection on the destination
node. This means that after migration completes we no longer have a
reference to the original connection_info to do the detach on the source
host, so we have to re-fetch it with a second call to
initialize_connection before calling disconnect.

Unfortunately the cinder driver interface does not strictly require that
multiple calls to initialize_connection will return consistent results.
Although they normally do in practice, there is at least one cinder
driver (delliscsi) which doesn't. This results in a failure to
disconnect on the source host post migration.
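
A toy illustration of the failure mode (no nova or cinder code is involved
here; the names below are made up): if the backend's initialize_connection is
not deterministic, the info re-fetched for the disconnect no longer matches
the connection that was actually set up on the source host.

import itertools

_targets = itertools.count()


def initialize_connection(volume_id):
    # e.g. the backend hands out a different target on every call
    return {'volume_id': volume_id, 'target': 'iqn.fake:%d' % next(_targets)}


attached = initialize_connection('vol-1')   # used when attaching on the source
# ... live migration runs; bdm.connection_info now describes the destination ...
refetched = initialize_connection('vol-1')  # re-fetched to disconnect the source

# False: the source-side disconnect is attempted with mismatched info.
print(attached == refetched)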

** Affects: nova
 Importance: Undecided
 Assignee: Matthew Booth (mbooth-9)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1754716

Title:
  Disconnect volume on live migration source fails if
  initialize_connection doesn't return identical output

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  During live migration we update bdm.connection_info for attached
  volumes in pre_live_migration to reflect the new connection on the
  destination node. This means that after migration completes we no
  longer have a reference to the original connection_info to do the
  detach on the source host, so we have to re-fetch it with a second
  call to initialize_connection before calling disconnect.

  Unfortunately the cinder driver interface does not strictly require
  that multiple calls to initialize_connection will return consistent
  results. Although they normally do in practice, there is at least one
  cinder driver (delliscsi) which doesn't. This results in a failure to
  disconnect on the source host post migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1754716/+subscriptions



[Yahoo-eng-team] [Bug 1754695] [NEW] Incorrect state of the Openflow table

2018-03-09 Thread Przemyslaw Wernicki
Public bug reported:

During provisioning of a large number of VMs, several percent of the VMs
fire up without network connectivity. We found that the reason for the
faulty networking is an incorrect state in the OpenFlow table: there is no
connectivity over VXLAN between the affected compute nodes and the
controllers.

A proper OpenFlow table shows the complete list of VXLAN interfaces to all
compute nodes and controllers:
cookie=0x98572d2e5f45dc06, duration=2639.513s, table=22, n_packets=212, 
n_bytes=57108, priority=1,dl_vlan=10 
actions=strip_vlan,load:0x30->NXM_NX_TUN_ID[],output:"vxlan-0afe0c74",output:"vxlan-0afe0c80",output:"vxlan-0afe0c0b",output:"vxlan-0afe0c0c",output:"vxlan-0afe0c0d",output:"vxlan-0afe0c7d",output:"vxlan-0afe0c66",output:"vxlan-0afe0c81",output:"vxlan-0afe0c6d",output:"vxlan-0afe0c6c",output:"vxlan-0afe0c69",output:"vxlan-0afe0c7a",output:"vxlan-0afe0c79",output:"vxlan-0afe0c78",output:"vxlan-0afe0c7f",output:"vxlan-0afe0c7e",output:"vxlan-0afe0c67",output:"vxlan-0afe0c7c",output:"vxlan-0afe0c83",output:"vxlan-0afe0c86",output:"vxlan-0afe0c87",output:"vxlan-0afe0c76",output:"vxlan-0afe0c84",output:"vxlan-0afe0c85",output:"vxlan-0afe0c75",output:"vxlan-0afe0c72",output:"vxlan-0afe0c73",output:"vxlan-0afe0c71",output:"vxlan-0afe0c6f",output:"vxlan-0afe0c7b",output:"vxlan-0afe0c6b",output:"vxlan-0afe0c6a",output:"vxlan-0afe0c6e",output:"vxlan-0afe0c77",output:"vxlan-0afe0c65",output:"vxlan-0afe0c70"

An incorrect OpenFlow table shows that the VXLAN interfaces to the
controllers are missing:
cookie=0xeee71baa637a6dde, duration=754.490s, table=22, n_packets=147, 
n_bytes=39834, priority=1,dl_vlan=10 
actions=strip_vlan,load:0x30->NXM_NX_TUN_ID[],output:"vxlan-0afe0c74",output:"vxlan-0afe0c80",output:"vxlan-0afe0c7d",output:"vxlan-0afe0c66",output:"vxlan-0afe0c81",output:"vxlan-0afe0c6d",output:"vxlan-0afe0c6c",output:"vxlan-0afe0c69",output:"vxlan-0afe0c7a",output:"vxlan-0afe0c79",output:"vxlan-0afe0c78",output:"vxlan-0afe0c7f",output:"vxlan-0afe0c7e",output:"vxlan-0afe0c67",output:"vxlan-0afe0c7c",output:"vxlan-0afe0c86",output:"vxlan-0afe0c87",output:"vxlan-0afe0c76",output:"vxlan-0afe0c84",output:"vxlan-0afe0c85",output:"vxlan-0afe0c75",output:"vxlan-0afe0c72",output:"vxlan-0afe0c73",output:"vxlan-0afe0c71",output:"vxlan-0afe0c6f",output:"vxlan-0afe0c7b",output:"vxlan-0afe0c6b",output:"vxlan-0afe0c6a",output:"vxlan-0afe0c6e",output:"vxlan-0afe0c65",output:"vxlan-0afe0c70"

Restarting the neutron_openvswitch_agent container fixes the problem on the
affected compute node by adding the missing VXLAN ports.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: in-stable-pike

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754695

Title:
  Incorrect state of the Openflow table

Status in neutron:
  New

Bug description:
  During provisioning of a large number of VMs, several percent of the VMs
  fire up without network connectivity. We found that the reason for the
  faulty networking is an incorrect state in the OpenFlow table: there is
  no connectivity over VXLAN between the affected compute nodes and the
  controllers.

  A proper OpenFlow table shows the complete list of VXLAN interfaces to all
  compute nodes and controllers:
  cookie=0x98572d2e5f45dc06, duration=2639.513s, table=22, n_packets=212, 
n_bytes=57108, priority=1,dl_vlan=10 
actions=strip_vlan,load:0x30->NXM_NX_TUN_ID[],output:"vxlan-0afe0c74",output:"vxlan-0afe0c80",output:"vxlan-0afe0c0b",output:"vxlan-0afe0c0c",output:"vxlan-0afe0c0d",output:"vxlan-0afe0c7d",output:"vxlan-0afe0c66",output:"vxlan-0afe0c81",output:"vxlan-0afe0c6d",output:"vxlan-0afe0c6c",output:"vxlan-0afe0c69",output:"vxlan-0afe0c7a",output:"vxlan-0afe0c79",output:"vxlan-0afe0c78",output:"vxlan-0afe0c7f",output:"vxlan-0afe0c7e",output:"vxlan-0afe0c67",output:"vxlan-0afe0c7c",output:"vxlan-0afe0c83",output:"vxlan-0afe0c86",output:"vxlan-0afe0c87",output:"vxlan-0afe0c76",output:"vxlan-0afe0c84",output:"vxlan-0afe0c85",output:"vxlan-0afe0c75",output:"vxlan-0afe0c72",output:"vxlan-0afe0c73",output:"vxlan-0afe0c71",output:"vxlan-0afe0c6f",output:"vxlan-0afe0c7b",output:"vxlan-0afe0c6b",output:"vxlan-0afe0c6a",output:"vxlan-0afe0c6e",output:"vxlan-0afe0c77",output:"vxlan-0afe0c65",output:"vxlan-0afe0c70"

  An incorrect OpenFlow table shows that the VXLAN interfaces to the
  controllers are missing:
  cookie=0xeee71baa637a6dde, duration=754.490s, table=22, n_packets=147, 
n_bytes=39834, priority=1,dl_vlan=10 

[Yahoo-eng-team] [Bug 1753555] Re: horizon test helper prevents mox free horizon plugin

2018-03-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/549842
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=fb9ef26a7b26f42c350b7a15e988ce58ce1da943
Submitter: Zuul
Branch:master

commit fb9ef26a7b26f42c350b7a15e988ce58ce1da943
Author: Akihiro Motoki 
Date:   Tue Mar 6 03:44:31 2018 +0900

Allow mox-free horizon plugins to consume horizon test helper

horizon test helpers now depend on mox3.
This prevents horizon plugins from consuming the test helpers
even if a plugin itself is mox free.
This commit makes mox import optional.

Change-Id: I631518d8f4cd9641920f68cd1e405298ddb7965a
Closes-Bug: #1753555


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1753555

Title:
  horizon test helper prevents mox free horizon plugin

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently horizon test helpers (horizon/test/helpers.py and
  openstack_dashboard/test/helpers.py) always import mox3.
  These test helpers are consumed by horizon plugins.
  Even when horizon plugins are mox free, these test helpers require mox3
  and plugins cannot drop mox3 from test-requirements.txt. mox should be
  optional in horizon test helpers.
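
  A minimal sketch of the optional-import pattern (illustrative only, not
  the exact horizon patch):

  import unittest

  # Import mox3 lazily so that mox-free plugins can still import the helpers.
  try:
      from mox3 import mox
  except ImportError:
      mox = None


  class HelperTestCase(unittest.TestCase):
      def setUp(self):
          super(HelperTestCase, self).setUp()
          if mox is not None:
              self.mox = mox.Mox()
              self.addCleanup(self.mox.UnsetStubs)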

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1753555/+subscriptions



[Yahoo-eng-team] [Bug 1754677] [NEW] Unable to remove an assignment from domain and project

2018-03-09 Thread Lance Bragstad
Public bug reported:

When you set up a user with a role assignment on a domain and then a role
assignment on a project "acting as a domain", you can't actually remove
them. The following paste sets up the environment:

http://paste.openstack.org/show/695978/

Which results in the following when a user tries to remove either of
those assignments:

http://paste.openstack.org/show/696013/

And the resulting trace:

http://paste.openstack.org/show/695994/

It appears the issue is that somewhere in the assignment code we're only
expecting a single assignment to be returned for us to delete, which isn't
the case here and causes ambiguity.
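
A toy illustration of that ambiguity (not keystone code; the data and helper
below are made up): a delete helper that assumes exactly one matching row
breaks when the same actor and target are matched by both the domain
assignment and the assignment on the project acting as that domain.

assignments = [
    {'user_id': 'u1', 'target_id': 'd1', 'type': 'UserDomain'},
    {'user_id': 'u1', 'target_id': 'd1', 'type': 'UserProject'},  # project acting as domain
]


def delete_single_assignment(user_id, target_id):
    matches = [a for a in assignments
               if a['user_id'] == user_id and a['target_id'] == target_id]
    if len(matches) != 1:
        raise RuntimeError('expected exactly one assignment, got %d'
                           % len(matches))
    assignments.remove(matches[0])


try:
    delete_single_assignment('u1', 'd1')
except RuntimeError as exc:
    print(exc)  # expected exactly one assignment, got 2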

** Affects: keystone
 Importance: High
 Status: Triaged

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1754677

Title:
  Unable to remove an assignment from domain and project

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  When you set up a user with a role assignment on a domain and then a
  role assignment on a project "acting as a domain", you can't actually
  remove them. The following paste sets up the environment:

  http://paste.openstack.org/show/695978/

  Which results in the following when a user tries to remove either of
  those assignments:

  http://paste.openstack.org/show/696013/

  And the resulting trace:

  http://paste.openstack.org/show/695994/

  It appears the issue is that somewhere in the assignment code we're
  only expecting a single assignment to be returned for us to delete,
  which isn't the case here and causes ambiguity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1754677/+subscriptions



[Yahoo-eng-team] [Bug 1748156] Re: Proper error message should get displayed while trying to delete dhcp port from horizon

2018-03-09 Thread Akihiro Motoki
This is a horizon issue. Retarget it to horizon.

** Project changed: neutron => horizon

** Tags added: error-reporting

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1748156

Title:
  Proper error message should get displayed while trying to delete dhcp
  port from horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps
  1) Log in to the OpenStack horizon dashboard.
  2) Create a network and a subnet.
  3) Go to Admin --> Networks --> Ports and check that a DHCP port is listed.
  4) Try to delete the DHCP port; an exception appears on horizon: "Error:
  Unable to delete port: (c33a1942-fa07)".

  But if I try to delete it from the CLI, it throws a proper error message:

  $ neutron port-delete c33a1942-fa07-454a-95ac-d0e6a52ff481
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Bad port request: Can not delete DHCP port 
c33a1942-fa07-454a-95ac-d0e6a52ff481.

  From the CLI a proper message gets displayed. Can we make changes on the
  horizon side as well to provide a proper error message when deleting a
  DHCP port?
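
  A minimal sketch of the general idea (illustrative only; horizon's actual
  table/view code differs, and the helper name below is made up): pass the
  message carried by the API exception through to horizon's error handler
  instead of a generic "Unable to delete port: (id)" string.

  from horizon import exceptions
  from openstack_dashboard.api import neutron


  def delete_dhcp_port(request, port_id):
      try:
          neutron.port_delete(request, port_id)
      except Exception as e:
          # e.g. "Can not delete DHCP port <uuid>." from neutron
          exceptions.handle(request, str(e))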

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1748156/+subscriptions



[Yahoo-eng-team] [Bug 1748153] Re: Proper error message should get displayed while trying to delete router interface

2018-03-09 Thread Akihiro Motoki
This is a horizon issue. Retarget it to horizon.

** Project changed: neutron => horizon

** Tags added: error-reporting

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1748153

Title:
  Proper error message should get displayed while trying to delete
  router interface

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps:
  1) Create a network and a subnet.
  2) Create a router and attach the subnet to the router.
  3) Attach the router to an external network as a gateway.
  4) Create a port and associate the port with a floating IP.
  5) Go to Horizon and try to delete the router interface; it doesn't give
  you a proper error message.

  Error message on Horizon:
  Error: Unable to delete interface: (5148c328-d7df)

  But if I try to delete it from the CLI, it throws a proper error message:
  $ neutron router-interface-delete 273b224e-033e-4fb4-89ca-6a7fcca595bd 
4b00cb1e-75df-4555-88c3-66dfb826e295
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Router interface for subnet 4b00cb1e-75df-4555-88c3-66dfb826e295 on router 
273b224e-033e-4fb4-89ca-6a7fcca595bd cannot be deleted, as it is required by 
one or more floating IPs.

  Can we make changes on the horizon side as well to provide a proper error
  message, just like the CLI does?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1748153/+subscriptions



[Yahoo-eng-team] [Bug 1754661] [NEW] Unexpected API Error.

2018-03-09 Thread Updesh Bhadoriya
Public bug reported:

Launching an instance in OpenStack Newton:

openstack server create --flavor m1.tiny --image cirros --nic net-id=8c1fa730-fe35-4e01-a0cf-774c1f417df3 --security-group default selfservice-instance

Error:

Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: req-3b5ca259-0c97-4dac-bec2-1b19876108cf)


nova-api log:

2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
631, in create
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
**create_kwargs)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1535, in create
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1128, in 
_create_instance
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
reservation_id, max_count)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 824, in 
_validate_and_build_base_options
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
requested_networks, max_count)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 439, in 
_check_requested_networks
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions max_count)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 391, in 
validate_networks
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
requested_networks)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 214, in 
validate_networks
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions return 
self.client.call(ctxt, 'validate_networks', networks=networks)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 428, in 
call
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions return 
self.prepare().call(ctxt, method, **kwargs)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in 
call
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
retry=self.retry)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 97, in 
_send
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
timeout=timeout, retry=retry)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
584, in send
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 
retry=retry)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
573, in _send
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions result = 
self._waiter.wait(msg_id, timeout)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
456, in wait
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions message = 
self.waiters.get(msg_id, timeout=timeout)
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
347, in get
2018-03-09 13:32:31.663 28121 ERROR nova.api.openstack.extensions 'to 
message ID %s' % msg_id)
2018-03-09 13:32:31.663 28121 ERROR 

[Yahoo-eng-team] [Bug 1754648] [NEW] tox startdash error

2018-03-09 Thread Trinh Nguyen
Public bug reported:

When I'm trying to create a new horizon dashboard using tox:

tox -e manage -- startdash cloud_studio --target
openstack_dashboard/dashboards/cloud_studio

I got this error:

django.template.exceptions.TemplateSyntaxError: 'horizon' is not a
registered tag library. Must be one of:

Full stacktrace: http://paste.openstack.org/show/696167/

My environment:

+ Horizon: master (HEAD is at a17a81aecefb440b552506a5a63543fdb15e6517)
+ OS: Ubuntu 17.10
+ tox=2.9.1

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  When I'm trying to create a new horizon dashboard using tox:
  
  tox -e manage -- startdash cloud_studio --target
  openstack_dashboard/dashboards/cloud_studio
  
  I got this errors:
  
  django.template.exceptions.TemplateSyntaxError: 'horizon' is not a
  registered tag library. Must be one of:
  
  Full stacktrace: http://paste.openstack.org/show/696167/
+ 
+ My environment:
+ 
+ + Horizon: master (HEAD is at a17a81aecefb440b552506a5a63543fdb15e6517)
+ + OS: Ubuntu 17.10
+ + tox=2.9.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1754648

Title:
  tox startdash error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I'm trying to create a new horizon dashboard using tox:

  tox -e manage -- startdash cloud_studio --target
  openstack_dashboard/dashboards/cloud_studio

  I got this error:

  django.template.exceptions.TemplateSyntaxError: 'horizon' is not a
  registered tag library. Must be one of:

  Full stacktrace: http://paste.openstack.org/show/696167/

  My environment:

  + Horizon: master (HEAD is at a17a81aecefb440b552506a5a63543fdb15e6517)
  + OS: Ubuntu 17.10
  + tox=2.9.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1754648/+subscriptions



[Yahoo-eng-team] [Bug 1754634] [NEW] Image Import call does not honour enabled methods config option

2018-03-09 Thread Erno Kuvaja
Public bug reported:

Regardless of what is configured, the import call will always accept all the
methods. This means that, for example, one cannot turn the 'web-download'
method off if the image import feature is enabled.

This can be easily corrected by changing the request de-serializer to
check the method in the request against the config option rather than a
hardcoded list.
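
A minimal sketch of such a check (illustrative only, not the actual glance
patch; it assumes the allowed methods are exposed via an
'enabled_import_methods' list option and that the request body has already
been parsed into a dict):

from oslo_config import cfg
import webob.exc

CONF = cfg.CONF
# Registered here only to keep the sketch self-contained.
CONF.register_opts([cfg.ListOpt('enabled_import_methods',
                                default=['glance-direct', 'web-download'])])


def validate_import_method(body):
    # Reject import methods that are not enabled in the configuration.
    method = body.get('method', {}).get('name')
    if method not in CONF.enabled_import_methods:
        raise webob.exc.HTTPBadRequest(
            explanation="Import method '%s' is not enabled" % method)
    return method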

** Affects: glance
 Importance: Critical
 Status: New

** Changed in: glance
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1754634

Title:
  Image Import call does not honour enabled methods config option

Status in Glance:
  New

Bug description:
  Regardless of what is configured, the import call will always accept all
  the methods. This means that, for example, one cannot turn the
  'web-download' method off if the image import feature is enabled.

  This can be easily corrected by changing the request de-serializer to
  check the method in the request against the config option rather than a
  hardcoded list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1754634/+subscriptions



[Yahoo-eng-team] [Bug 1751208] Re: api-ref: list-resource-type-associations example is incorrect

2018-03-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/550183
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=0b49605630084dd3203a3c5b5c9d5587d3ccdb20
Submitter: Zuul
Branch:master

commit 0b49605630084dd3203a3c5b5c9d5587d3ccdb20
Author: Brian Rosmaita 
Date:   Tue Mar 6 12:41:32 2018 -0500

api-ref: fix list-resource-type-assocs example

In the metadefs section of the api-ref, the current example response
for the list-resource-type-associations call is incorrect.  Add a
correct example response.

Change-Id: I10e92ce96b40563b3c4d02ac5c542960564837ec
Closes-bug: #1751208


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1751208

Title:
  api-ref: list-resource-type-associations example is incorrect

Status in Glance:
  Fix Released

Bug description:
  The example response for the list resource type associations call in the
  metadefs api-ref is erroneous.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1751208/+subscriptions



[Yahoo-eng-team] [Bug 1549915] Re: Lots of "NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported" observed in gate-cinder-python27 logs

2018-03-09 Thread Stephen Finucane
These occur on the latest DevStack deploy. The opt and the warning both
originate in glance so I'm reassigning.

** Changed in: cinder
   Status: Invalid => Confirmed

** Project changed: cinder => glance

** Project changed: glance => oslo.db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1549915

Title:
  Lots of "NotSupportedWarning: Configuration option(s) ['use_tpool']
  not supported" observed in gate-cinder-python27 logs

Status in oslo.db:
  Confirmed

Bug description:
  There are lots of instances of "NotSupportedWarning: Configuration option(s) 
['use_tpool'] not supported" observed in gate-cinder-python27 logs.
  eg:
  
http://logs.openstack.org/02/282002/1/check/gate-cinder-python27/332a226/console.html.gz

  
  ...
  2016-02-18 22:42:12.214 | 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241:
 NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  2016-02-18 22:42:12.214 |   exception.NotSupportedWarning
  2016-02-18 22:42:12.214 | 
  2016-02-18 22:42:12.224 | {3} 
cinder.tests.unit.api.contrib.test_admin_actions.AdminActionsAttachDetachTest.test_volume_force_detach_raises_remote_error
 [3.892236s] ... ok
  2016-02-18 22:42:12.224 | 
  2016-02-18 22:42:12.224 | Captured stderr:
  2016-02-18 22:42:12.224 | 
  2016-02-18 22:42:12.224 | 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241:
 NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  2016-02-18 22:42:12.224 |   exception.NotSupportedWarning
  ...
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.db/+bug/1549915/+subscriptions



[Yahoo-eng-team] [Bug 1749797] Re: placement returns 503 when keystone is down

2018-03-09 Thread Chris Dent
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749797

Title:
  placement returns 503 when keystone is down

Status in keystonemiddleware:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  See the logs here: http://logs.openstack.org/50/544750/8/check/ironic-
  grenade-dsvm-multinode-multitenant/5713fb8/logs/screen-placement-
  api.txt.gz#_Feb_15_17_58_22_463228

  This is during an upgrade while Keystone is down. Placement returns a
  503 because it cannot reach keystone.

  I'm not sure what the expected behavior should be, but a 503 feels
  wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystonemiddleware/+bug/1749797/+subscriptions



[Yahoo-eng-team] [Bug 1753964] Re: Image remains in queued state for web-download if node_staging_uri is not set

2018-03-09 Thread Erno Kuvaja
So the actual bug is invalid; there is clearly a bug in the side effect
though, which is that it throws a 500.

The node_staging_uri is used as a local cache for the taskflow to store
the data it downloads before the rest of the tasks are run. So it is
needed for 'web-download' (in a 'web-download'-only use case it does not
need to be shared between the glance nodes). If the methods are
incorrectly configured, the image should stay in 'queued', but we should
fail the import call gracefully.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1753964

Title:
  Image remains in queued state for web-download if node_staging_uri is
  not set

Status in Glance:
  Invalid

Bug description:
  If the operator does not set 'node_staging_uri' in glance-api.conf, then
  an image imported using web-download remains in the queued state.

  Steps to reproduce:
  1. Ensure glance-api is running under mod_wsgi (add WSGI_MODE=mod_wsgi in 
local.conf and run stack.sh)
  2. Do not set node_staging_uri in glance-api.conf

  3. Create image using below curl command:
  curl -i -X POST -H "x-auth-token: " 
http://192.168.0.13:9292/v2/images -d 
'{"container_format":"bare","disk_format":"raw","name":"Import web-download"}'

  4. Import image using below curl command:
  curl -i -X POST -H "Content-type: application/json" -H "x-auth-token: 
" 
http://192.168.0.13:9292/v2/images//import -d 
'{"method":{"name":"web-download","uri":"https://www.openstack.org/assets/openstack-logo/2016R/OpenStack-Logo-Horizontal.eps.zip"}}'

  Expected result:
  Image should be in active state.

  Actual result:
  Image remains in queued state.

  API Logs:
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: DEBUG glance_store.backend [-] 
Attempting to import store file {{(pid=3506) _load_store 
/usr/local/lib/python2.7/dist-packages/glance_store/backend.py:231}}
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: DEBUG glance_store.capabilities 
[-] Store glance_store._drivers.filesystem.Store doesn't support updating 
dynamic storage capabilities. Please overwrite 'update_capabilities' method of 
the store to implement updating logics if needed. {{(pid=3506) 
update_capabilities 
/usr/local/lib/python2.7/dist-packages/glance_store/capabilities.py:97}}
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: Traceback (most recent call last):
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, in 
_spawn_n_impl
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: func(*args, **kwargs)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/proxy.py", line 238, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self.base.run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/notifier.py", line 581, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: super(TaskProxy, 
self).run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/proxy.py", line 238, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self.base.run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/proxy.py", line 238, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self.base.run(executor)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/domain/__init__.py", line 438, in run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: 
executor.begin_processing(self.task_id)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 144, in 
begin_processing
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: super(TaskExecutor, 
self).begin_processing(task_id)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/__init__.py", line 63, in begin_processing
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: self._run(task_id, task.type)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 165, in _run
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: flow = self._get_flow(task)
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 134, in _get_flow
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: invoke_kwds=kwds).driver
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/usr/local/lib/python2.7/dist-packages/stevedore/driver.py", line 61, in 
__init__
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: 
warn_on_missing_entrypoint=warn_on_missing_entrypoint
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]:   File 
"/usr/local/lib/python2.7/dist-packages/stevedore/named.py", line 81, in 
__init__
  Mar 07 09:26:07 ubuntu-16 glance-api[3499]: verify_requirements)
  Mar 07 09:26:07 ubuntu-16 

[Yahoo-eng-team] [Bug 1549915] [NEW] Lots of "NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported" observed in gate-cinder-python27 logs

2018-03-09 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

There are lots of instances of "NotSupportedWarning: Configuration option(s) 
['use_tpool'] not supported" observed in gate-cinder-python27 logs.
eg:
http://logs.openstack.org/02/282002/1/check/gate-cinder-python27/332a226/console.html.gz


...
2016-02-18 22:42:12.214 | 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241:
 NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2016-02-18 22:42:12.214 |   exception.NotSupportedWarning
2016-02-18 22:42:12.214 | 
2016-02-18 22:42:12.224 | {3} 
cinder.tests.unit.api.contrib.test_admin_actions.AdminActionsAttachDetachTest.test_volume_force_detach_raises_remote_error
 [3.892236s] ... ok
2016-02-18 22:42:12.224 | 
2016-02-18 22:42:12.224 | Captured stderr:
2016-02-18 22:42:12.224 | 
2016-02-18 22:42:12.224 | 
/home/jenkins/workspace/gate-cinder-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241:
 NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2016-02-18 22:42:12.224 |   exception.NotSupportedWarning
...


** Affects: glance
 Importance: Undecided
 Status: Confirmed

-- 
Lots of "NotSupportedWarning: Configuration option(s) ['use_tpool'] not 
supported" observed in gate-cinder-python27 logs
https://bugs.launchpad.net/bugs/1549915
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1590608] Re: Services should use http_proxy_to_wsgi middleware

2018-03-09 Thread Ryan Beisner
** Changed in: charm-barbican
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590608

Title:
  Services should use http_proxy_to_wsgi middleware

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in OpenStack Barbican Charm:
  Fix Released
Status in OpenStack heat charm:
  Triaged
Status in Cinder:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Triaged
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Released
Status in Glance:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in neutron:
  Fix Released
Status in Panko:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  It's a common problem when putting a service behind a load balancer to
  need to forward the protocol and host of the original request so that
  the receiving service can construct URLs to the load balancer and not
  the private worker node.

  Most services have implemented some form of secure_proxy_ssl_header =
  HTTP_X_FORWARDED_PROTO handling; however, exactly how this is done is
  dependent on the service.

  oslo.middleware provides the http_proxy_to_wsgi middleware that
  handles these headers and the newer RFC7239 forwarding header and
  completely hides the problem from the service.

  This middleware should be adopted by all services in preference to
  their own HTTP_X_FORWARDED_PROTO handling.
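
  A minimal sketch of what adoption looks like (illustrative only; real
  services wire the middleware in via their paste pipeline or WSGI setup
  code rather than a helper like the one below):

  from oslo_config import cfg
  from oslo_middleware import http_proxy_to_wsgi


  def wrap_with_proxy_middleware(app):
      # Honour X-Forwarded-Proto / RFC7239 Forwarded headers set by the load
      # balancer when the wrapped application builds URLs.
      return http_proxy_to_wsgi.HTTPProxyToWSGI(app, cfg.CONF)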

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1590608/+subscriptions



[Yahoo-eng-team] [Bug 1754600] [NEW] the detail for openstack quota show is not supported

2018-03-09 Thread zhangyanxian
Public bug reported:

According to http://specs.openstack.org/openstack/neutron-
specs/specs/pike/extend-quota-api-to-send-usage-stats.html

A new optional argument will be added to the openstack client, for example:
$ openstack quota show {tenant_id/project_id} --detail
This adds the following endpoint: GET /v2.0/quotas/{tenant_id}/detail
It reports detailed quotas for a specific tenant, such as reserved, limit and
used, for each resource.
https://bugs.launchpad.net/neutron/+bug/1599488

But it still doesn't work in the latest Queens version of OpenStack:
root@ubuntudbs:/opt/backuplocalconf# openstack quota list --network
+----------------------------------+--------------+----------+-------+---------------+---------+-----------------+----------------------+---------+--------------+
| Project ID                       | Floating IPs | Networks | Ports | RBAC Policies | Routers | Security Groups | Security Group Rules | Subnets | Subnet Pools |
+----------------------------------+--------------+----------+-------+---------------+---------+-----------------+----------------------+---------+--------------+
| 754081c072dd4bf3bc698dac57747fd7 |          100 |      100 |  1000 |            10 |      10 |              10 |                  100 |     100 |           -1 |
+----------------------------------+--------------+----------+-------+---------------+---------+-----------------+----------------------+---------+--------------+
root@ubuntudbs:/opt/backuplocalconf# openstack quota show 
754081c072dd4bf3bc698dac57747fd7 --detail
usage: openstack quota show [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width ] [--fit-width]
[--print-empty] [--noindent] [--prefix PREFIX]
[--class | --default]
[]
openstack quota show: error: unrecognized arguments: --detail
root@ubuntudbs:/opt/backuplocalconf# 

root@ubuntudbs:/opt/backuplocalconf# neutron quota-list ZyxProject
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+------------+---------+------+----------------------------------+-------------+--------+----------------+---------------------+--------+------------+----------------------------------+
| floatingip | network | port | project_id                       | rbac_policy | router | security_group | security_group_rule | subnet | subnetpool | tenant_id                        |
+------------+---------+------+----------------------------------+-------------+--------+----------------+---------------------+--------+------------+----------------------------------+
|        100 |     100 | 1000 | 754081c072dd4bf3bc698dac57747fd7 |          10 |     10 |             10 |                 100 |    100 |         -1 | 754081c072dd4bf3bc698dac57747fd7 |
+------------+---------+------+----------------------------------+-------------+--------+----------------+---------------------+--------+------------+----------------------------------+
root@ubuntudbs:/opt/backuplocalconf# neutron quota-show 
754081c072dd4bf3bc698dac57747fd7 --detail
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
+-+---+
| Field   | Value |
+-+---+
| floatingip  | 100   |
| network | 100   |
| port| 1000  |
| rbac_policy | 10|
| router  | 10|
| security_group  | 10|
| security_group_rule | 100   |
| subnet  | 100   |
| subnetpool  | -1|
+-+---+
root@ubuntudbs:/opt/backuplocalconf# 

the neutron quota_details extension is ok:

root@ubuntudbs:/opt/backuplocalconf# neutron ext-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
++--+
| alias  | name 
|
++--+
| default-subnetpools| Default Subnetpools  
|
| qos| Quality of Service   
|
| availability_zone  | Availability Zone
|
| network_availability_zone  | Network Availability Zone
|
| auto-allocated-topology| Auto Allocated Topology Services 
|
| ext-gw-mode| Neutron L3 Configurable external gateway 
mode 

[Yahoo-eng-team] [Bug 1736171] Re: Update OS API charm default haproxy timeout values

2018-03-09 Thread Ryan Beisner
** Changed in: charm-neutron-api
   Status: Fix Committed => Fix Released

** Changed in: charm-keystone
   Status: Fix Committed => Fix Released

** Changed in: charm-nova-cloud-controller
   Status: Fix Committed => Fix Released

** Changed in: charm-cinder
   Status: Fix Committed => Fix Released

** Changed in: charm-glance
   Status: Fix Committed => Fix Released

** Changed in: charm-ceph-radosgw
   Status: Fix Committed => Fix Released

** Changed in: charm-heat
   Status: Fix Committed => Fix Released

** Changed in: charm-openstack-dashboard
   Status: Fix Committed => Fix Released

** Changed in: charm-barbican
   Status: Fix Committed => Fix Released

** Changed in: charm-ceilometer
   Status: Fix Committed => Fix Released

** Changed in: charm-swift-proxy
   Status: Fix Committed => Fix Released

** Changed in: charm-manila
   Status: Fix Committed => Fix Released

** Changed in: charm-aodh
   Status: Fix Committed => Fix Released

** Changed in: charm-designate
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  Update OS API charm default haproxy timeout values

Status in OpenStack AODH Charm:
  Fix Released
Status in OpenStack Barbican Charm:
  Fix Released
Status in OpenStack ceilometer charm:
  Fix Released
Status in OpenStack ceph-radosgw charm:
  Fix Released
Status in OpenStack cinder charm:
  Fix Released
Status in OpenStack Designate Charm:
  Fix Released
Status in OpenStack glance charm:
  Fix Released
Status in OpenStack heat charm:
  Fix Released
Status in OpenStack keystone charm:
  Fix Released
Status in OpenStack Manila Charm:
  Fix Released
Status in OpenStack neutron-api charm:
  Fix Released
Status in OpenStack neutron-gateway charm:
  Invalid
Status in OpenStack nova-cloud-controller charm:
  Fix Released
Status in OpenStack openstack-dashboard charm:
  Fix Released
Status in OpenStack swift-proxy charm:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  Change OpenStack API charm haproxy timeout values

haproxy-server-timeout: 9
haproxy-client-timeout: 9
haproxy-connect-timeout: 9000
haproxy-queue-timeout: 9000

  Workaround until this lands is to set these values in config:

  juju config neutron-api haproxy-server-timeout=9 haproxy-client-timeout=9
  haproxy-queue-timeout=9000 haproxy-connect-timeout=9000

  
  --- Original Bug -
  NeutronNetworks.create_and_delete_subnets is failing when run with 
concurrency greater than 1.

  Here's a snippet of a failure: http://paste.ubuntu.com/25927074/

  Here is my rally yaml: http://paste.ubuntu.com/26112719/

  This is happening using Pike on Xenial, from the Ubuntu Cloud
  Archive. The deployment is distributed across 9 nodes, with HA
  services.

  For now we have adjusted our test scenario to be more realistic. When
  we spread the test over 30 tenants instead of 3, and simulate 2 users
  per tenant instead of 3, we do not hit the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-aodh/+bug/1736171/+subscriptions



[Yahoo-eng-team] [Bug 1750705] Re: glance db_sync requires mysql db to have log_bin_trust_function_creators = 1

2018-03-09 Thread Ryan Beisner
** Changed in: charm-percona-cluster
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1750705

Title:
  glance db_sync requires mysql db to have
  log_bin_trust_function_creators = 1

Status in OpenStack percona-cluster charm:
  Fix Released
Status in Glance:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released

Bug description:
  Upon deploying glance via cs:~openstack-charmers-next/xenial/glance,
  glance appears to throw a CRIT unhandled error; so far I have
  experienced this on arm64. Not sure about other archs at this point in
  time. Decided to file a bug and will investigate further.

  Cloud: xenial-queens/proposed

  This occurs when the shared-db-relation hook fires for mysql:shared-db.

  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed CRITI 
[glance] Unhandled error
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
Traceback (most recent call last):
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/bin/glance-manage", line 10, in 
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
sys.exit(main())
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 528, in main
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
return CONF.command.action_fn()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 360, in sync
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self.command_object.sync(CONF.command.version)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 153, in sync
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self.expand()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 208, in expand
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self._sync(version=expand_head)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 168, in _sync
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
alembic_command.upgrade(a_config, version)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/command.py", line 254, in upgrade
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
script.run_env()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 425, in run_env
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
util.load_python_file(self.dir, 'env.py')
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in 
load_python_file
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
module = load_module_py(module_id, path)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 75, in 
load_module_py
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
mod = imp.load_source(module_id, path, fp)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/alembic_migrations/env.py",
 line 88, in 
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
run_migrations_online()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/alembic_migrations/env.py",
 line 83, in run_migrations_online
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
context.run_migrations()
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"", line 8, in run_migrations
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/runtime/environment.py", line 836, in 
run_migrations
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed 
self.get_context().run_migrations(**kw)
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed   File 
"/usr/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 330, in 
run_migrations
  unit-glance-0: 01:28:22 DEBUG unit.glance/0.shared-db-relation-changed