[Yahoo-eng-team] [Bug 1764259] [NEW] neutron openstack client returns ' Unknown error' instead of the real error

2018-04-15 Thread Adit Sarfaty
Public bug reported:

For several neutron create actions, when called via the openstack client you do 
not get the real error issued by the plugin, as you do with the neutron client. 
Instead you get: 
BadRequestException: Unknown error


For example, try to create a subnet without a cidr:
1) with the neutron client you see the real error:
neutron subnet-create --name sub1 net1
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
Bad subnets request: a subnetpool must be specified in the absence of a cidr.
Neutron server returns request_ids: ['req-8ee84525-6e98-4774-9392-ab8b596cde1a']

2) with the openstack client the information is missing:
openstack subnet create --network net1 sub1
BadRequestException: Unknown error
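The neutron client pulls the human-readable message out of the JSON error body that the Neutron server returns, while the openstack client path shown above falls back to a generic message. A minimal sketch of the kind of parsing involved (the `NeutronError` wrapper is the standard Neutron error body shape; the helper function name is hypothetical, not actual client code):

```python
import json

def extract_neutron_error(body):
    """Pull the human-readable message out of a Neutron error body.

    Neutron wraps errors as:
      {"NeutronError": {"type": ..., "message": ..., "detail": ...}}
    A client that never looks inside "NeutronError" can only report
    a generic "Unknown error".
    """
    try:
        data = json.loads(body)
    except ValueError:
        # Body was not JSON at all.
        return "Unknown error"
    return data.get("NeutronError", {}).get("message", "Unknown error")

body = ('{"NeutronError": {"type": "BadRequest", "message": '
        '"a subnetpool must be specified in the absence of a cidr."}}')
print(extract_neutron_error(body))
# a subnetpool must be specified in the absence of a cidr.
```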

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1764259

Title:
  neutron openstack client returns ' Unknown error' instead of the real
  error

Status in neutron:
  New

Bug description:
  For several neutron create actions, when called via the openstack client you 
do not get the real error issued by the plugin, as you do with the 
neutron client. Instead you get: 
  BadRequestException: Unknown error

  
  For example, try to create a subnet without a cidr:
  1) with the neutron client you see the real error:
  neutron subnet-create --name sub1 net1
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Bad subnets request: a subnetpool must be specified in the absence of a cidr.
  Neutron server returns request_ids: 
['req-8ee84525-6e98-4774-9392-ab8b596cde1a']

  2) with the openstack client the information is missing:
  openstack subnet create --network net1 sub1
  BadRequestException: Unknown error

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1764259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737172] Re: Resize on same host in multiple-compute nodes environment not possible

2018-04-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737172

Title:
  Resize on same host in multiple-compute nodes environment not possible

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi,

  On the Ocata release, nova 15.0.7 (Libvirt+KVM), it is not possible to resize 
a running instance on the same host in a multiple-compute-node environment.
  In nova.conf the following is set:
  allow_migrate_to_same_host = True

  but this is not enough. Resize in the Dashboard returns:
  Error: No valid host was found.

  2017-12-08 14:35:36.127 18389 INFO nova.filters [req-21f0e039-f92a-
  41d7-88c3-fbecd9b46c4b a719e1fa1583410e84a60584b057854f
  76544321203c4ea5b40651daf4505f35 - - -] Filtering removed all hosts
  for the request with instance ID 'cb76c5cc-3b0b-40eb-
  ba54-05b21331b693'. Filter results: ['RetryFilter: (start: 1, end:
  1)', 'AvailabilityZoneFilter: (start: 1, end: 0)']

  It is not possible to specify:
  scheduler_default_filters = AllHostsFilter
  as this will ignore Availability zone specified per each instance launch.

  Please advise whether this is expected behavior and whether it can be fixed.
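  As a point of reference, resize-to-same-host in nova is governed by a
  differently named option than the one quoted above; a nova.conf fragment
  (option name as documented for this era of nova — verify against your
  release before relying on it):

  ```ini
  [DEFAULT]
  # Allow the destination compute host to match the source host for resize.
  # This must be set on every compute node that should accept same-host resize.
  allow_resize_to_same_host = True
  ```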

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1737172/+subscriptions



[Yahoo-eng-team] [Bug 1764200] [NEW] Glance Cinder backed images & multiple regions

2018-04-15 Thread Ben O'Hara
Public bug reported:

When using Cinder-backed images as per

https://docs.openstack.org/cinder/latest/admin/blockstorage-volume-backed-image.html

we have multiple locations, with Glance configured as:

/etc/glance/glance-api.conf

[glance_store]
stores = swift, cinder
default_store = swift
-snip-
cinder_store_auth_address = https://hostname:5000/v3
cinder_os_region_name = Region
cinder_store_user_name = glance
cinder_store_password = Password
cinder_store_project_name = cinder-images
cinder_catalog_info = volume:cinder:internalURL


Cinder clones the volume correctly, then talks to Glance to add the
cinder:// location.

Glance then talks to Cinder to validate the volume ID; however, this step
uses the wrong Cinder endpoint and checks the other region.

From /usr/lib/python2.7/site-packages/glance_store/_drivers/cinder.py

It appears the region name is only used when not passing in the
project/user/password.

Passing the os_region_name to the cinderclient.Client call on line 351
appears to fix this.

i.e.

c = cinderclient.Client(username,
                        password,
                        project,
                        auth_url=url,
                        region_name=glance_store.cinder_os_region_name,
                        insecure=glance_store.cinder_api_insecure,
                        retries=glance_store.cinder_http_retries,
                        cacert=glance_store.cinder_ca_certificates_file)
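For context on why the wrong endpoint gets picked: endpoint selection from a Keystone-style service catalog filters by service type, interface, and (optionally) region, and without a region filter the first matching endpoint wins. A simplified illustration (the catalog shape and the `pick_endpoint` helper are illustrative only, not glance_store or keystoneauth code):

```python
def pick_endpoint(catalog, service_type, interface, region=None):
    """Return the first endpoint URL matching type/interface, and region if given."""
    for svc in catalog:
        if svc["type"] != service_type:
            continue
        for ep in svc["endpoints"]:
            if ep["interface"] != interface:
                continue
            if region is None or ep["region"] == region:
                return ep["url"]
    return None

catalog = [{
    "type": "volumev3",
    "endpoints": [
        {"interface": "internal", "region": "RegionOne", "url": "https://r1:8776/v3"},
        {"interface": "internal", "region": "RegionTwo", "url": "https://r2:8776/v3"},
    ],
}]

# Without a region the first match wins, which may be the wrong region:
print(pick_endpoint(catalog, "volumev3", "internal"))               # https://r1:8776/v3
print(pick_endpoint(catalog, "volumev3", "internal", "RegionTwo"))  # https://r2:8776/v3
```

This is why passing region_name through to the cinderclient.Client call, as above, makes the validation step land on the intended region's endpoint.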

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1764200

Title:
  Glance Cinder backed images & multiple regions

Status in Glance:
  New

Bug description:
  When using Cinder-backed images as per

  https://docs.openstack.org/cinder/latest/admin/blockstorage-volume-backed-image.html

  we have multiple locations, with Glance configured as:

  /etc/glance/glance-api.conf

  [glance_store]
  stores = swift, cinder
  default_store = swift
  -snip-
  cinder_store_auth_address = https://hostname:5000/v3
  cinder_os_region_name = Region
  cinder_store_user_name = glance
  cinder_store_password = Password
  cinder_store_project_name = cinder-images
  cinder_catalog_info = volume:cinder:internalURL

  
  Cinder clones the volume correctly, then talks to Glance to add the
cinder:// location.

  Glance then talks to Cinder to validate the volume ID; however, this
  step uses the wrong Cinder endpoint and checks the other region.

  From /usr/lib/python2.7/site-packages/glance_store/_drivers/cinder.py

  It appears the region name is only used when not passing in the
  project/user/password.

  Passing the os_region_name to the cinderclient.Client call on line 351
  appears to fix this.

  i.e.

  c = cinderclient.Client(username,
                          password,
                          project,
                          auth_url=url,
                          region_name=glance_store.cinder_os_region_name,
                          insecure=glance_store.cinder_api_insecure,
                          retries=glance_store.cinder_http_retries,
                          cacert=glance_store.cinder_ca_certificates_file)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1764200/+subscriptions



[Yahoo-eng-team] [Bug 1751354] Re: Drop deprecated Flavor Edit feature

2018-04-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/560977
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=377422b33e3db760cc1b0103147d599d8a9b4d3d
Submitter: Zuul
Branch: master

commit 377422b33e3db760cc1b0103147d599d8a9b4d3d
Author: Ivan Kolodyazhny 
Date:   Thu Apr 12 20:18:48 2018 +0300

Delete the deprecated Edit Flavor feature

Historically, Horizon has provided the ability to edit Flavors by
deleting and creating a new one with the same information. This is not
supported in the Nova API and causes unexpected issues and breakages.

Change-Id: I796259c66e01be088a50ab6ba63f515de9590c9b
Closes-Bug: #1751354


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1751354

Title:
  Drop deprecated Flavor Edit feature

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Flavor-Edit feature was deprecated in Pike (bug 1709056).
  We can drop it in Rocky release.

  Note that "Flavor Access" tab should be kept.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1751354/+subscriptions



[Yahoo-eng-team] [Bug 1764125] [NEW] Re-attaching an encrypted(Barbican) Cinder (RBD) volume to an instance fails

2018-04-15 Thread Tzach Shefi
Public bug reported:

Description of problem: 
An encrypted (Barbican) RBD Cinder volume was attached to an instance and data 
was written to it. 
The volume was then detached; when trying to reattach it to the same instance, 
the attach fails. Odd errors appear in the attached nova-compute.log: 

2018-04-15 13:14:06.274 1 ERROR nova.compute.manager [instance: 
923c5318-8502-4f85-a215-78afc4fd641b] uuid=managed_object_id)
2018-04-15 13:14:06.274 1 ERROR nova.compute.manager [instance: 
923c5318-8502-4f85-a215-78afc4fd641b] ManagedObjectNotFoundError: Key not 
found, uuid: 7912eac8-2652-4c92-b53f-3db4ecca7bc7

2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 177, in request
2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server raise 
exceptions.from_response(resp, body)
2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment 
c17e2b89-5a36-4e7e-8c71-b975f2f5ccb3 could not be found.
2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server 


How reproducible:
Unsure; it looks like it happens every time I try to re-attach. 

Steps to Reproduce:
1. Boot an instance
2. Create an encrypted (Barbican) Cinder (RBD) volume, attach it to the 
instance, and write data
3. Detach the volume from the instance
4. Try to reattach the same volume to the same instance

$ nova volume-attach 923c5318-8502-4f85-a215-78afc4fd641b 16584072-ef78-4a80-91ab-cbd47e9bc70d auto

5. Volume fails to attach
No error; the volume remains unattached:

cinder list
+--------------------------------------+-----------+-------------+------+----------------------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type                | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+----------------------------+----------+-------------+
| 16584072-ef78-4a80-91ab-cbd47e9bc70d | available | 2-Encrypted | 1    | LuksEncryptor-Template-256 | false    |             |
+--------------------------------------+-----------+-------------+------+----------------------------+----------+-------------+


Actual results:
Volume fails to attach. 


Expected results:
Volume should successfully reattach. 


Environment / Version-Release number of selected component (if applicable):
rhel7.5 
openstack-nova-conductor-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
python-nova-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
python-novaclient-9.1.1-1.el7ost.noarch
openstack-cinder-12.0.1-0.20180326201852.46c4ec1.el7ost.noarch
openstack-nova-scheduler-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
openstack-nova-console-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
puppet-cinder-12.3.1-0.20180222074326.18152ac.el7ost.noarch
openstack-nova-compute-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
python2-cinderclient-3.5.0-1.el7ost.noarch
python-cinder-12.0.1-0.20180326201852.46c4ec1.el7ost.noarch
openstack-nova-api-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
openstack-nova-novncproxy-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
puppet-nova-12.3.1-0.20180319062741.9db79a6.el7ost.noarch
openstack-nova-common-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
openstack-nova-migration-17.0.2-0.20180323024604.0390d5f.el7ost.noarch
openstack-nova-placement-api-17.0.2-0.20180323024604.0390d5f.el7ost.noarch

Libvirt + KVM
Neutron networking
Cinder volume is RBD backed and encrypted via Barbican.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "Nova compute log"
   
https://bugs.launchpad.net/bugs/1764125/+attachment/5116630/+files/nova-compute.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1764125

Title:
  Re-attaching an encrypted(Barbican) Cinder (RBD) volume to an instance
  fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Description of problem: 
  An encrypted (Barbican) RBD Cinder volume was attached to an instance and 
data was written to it. 
  The volume was then detached; when trying to reattach it to the same 
instance, the attach fails. Odd errors appear in the attached nova-compute.log: 

  2018-04-15 13:14:06.274 1 ERROR nova.compute.manager [instance: 
923c5318-8502-4f85-a215-78afc4fd641b] uuid=managed_object_id)
  2018-04-15 13:14:06.274 1 ERROR nova.compute.manager [instance: 
923c5318-8502-4f85-a215-78afc4fd641b] ManagedObjectNotFoundError: Key not 
found, uuid: 7912eac8-2652-4c92-b53f-3db4ecca7bc7

  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 177, in request
  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server raise 
exceptions.from_response(resp, body)
  2018-04-15 13:14:06.523 1 ERROR oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment 
c17e2b89-5a36-4e7e-8c71-b975f2f5ccb3 could not be found.