[Yahoo-eng-team] [Bug 1650373] Re: Image list displays incorrect type for instance snapshots

2018-01-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/529970
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=687d224485ced1b8d20fa24a2511abc597cfab09
Submitter: Zuul
Branch: master

commit 687d224485ced1b8d20fa24a2511abc597cfab09
Author: Abdallah Banishamsa 
Date:   Sun Dec 24 04:50:16 2017 -0500

Fix displayed type for instance snapshots

Fix displayed type for instance snapshots in image list

Change-Id: I780e6a8aecca421e6273c0e1ad39fbbaefaba956
Closes-bug: #1650373


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1650373

Title:
  Image list displays incorrect type for instance snapshots

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Prior to Newton, instance snapshots were displayed in the image list
  as having a type of "snapshot". This would allow you to distinguish
  between a base "image" and an instance "snapshot".

  As of Newton, instance snapshots are incorrectly displaying as having
  a type of "image", even though their properties indicate "image_type:
  snapshot".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1650373/+subscriptions



[Yahoo-eng-team] [Bug 1741185] [NEW] Install and configure a compute node for Red Hat Enterprise Linux and CentOS in nova

2018-01-03 Thread Darcy
Public bug reported:

Error: Package: 1:openstack-nova-compute-16.0.3-2.el7.noarch (pike) 
   Requires: qemu-kvm-rhev >= 2.9.0


To solve this problem, install qemu-kvm-ev before installing
openstack-nova-compute.
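
A minimal sketch of that workaround on CentOS 7 (assuming qemu-kvm-ev is
provided by the CentOS Virt SIG repository, as in a stock RDO setup):

  # yum install -y centos-release-qemu-ev
  # yum install -y qemu-kvm-ev
  # yum install -y openstack-nova-compute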

This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 16.0.5.dev11 on 2017-12-21 19:52
SHA: ae7aef15f6ce2354443f6cce379506e4d8eefb75
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/compute-install-rdo.rst
URL: https://docs.openstack.org/nova/pike/install/compute-install-rdo.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741185

Title:
  Install and configure a compute node for Red Hat Enterprise Linux and
  CentOS in nova

Status in OpenStack Compute (nova):
  New

Bug description:
  Error: Package: 1:openstack-nova-compute-16.0.3-2.el7.noarch (pike) 
 Requires: qemu-kvm-rhev >= 2.9.0

  
  To solve this problem, install qemu-kvm-ev before installing
  openstack-nova-compute.

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.0.5.dev11 on 2017-12-21 19:52
  SHA: ae7aef15f6ce2354443f6cce379506e4d8eefb75
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/compute-install-rdo.rst
  URL: https://docs.openstack.org/nova/pike/install/compute-install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741185/+subscriptions



[Yahoo-eng-team] [Bug 1662387] Re: occasional NetworkInUse in shared network tempest tests

2018-01-03 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662387

Title:
  occasional NetworkInUse in shared network tempest tests

Status in neutron:
  Expired

Bug description:
  Shared networks created during a test can be picked up by nova (e.g. for
  other tempest tests), and it ends up with InUse errors on cleanup.

  In the following example, an unrelated test
  (tempest-TestServerAdvancedOps-1737118212) happened to create a port on
  the network.

  http://logs.openstack.org/60/409560/9/check/gate-tempest-dsvm-networking-midonet-ml2-ubuntu-xenial/eac4f95/logs/testr_results.html.gz

  ft1.1: tearDownClass (neutron.tests.tempest.api.admin.test_shared_network_extension.AllowedAddressPairSharedNetworkTest)
  _StringException: Traceback (most recent call last):
    File "tempest/test.py", line 307, in tearDownClass
      six.reraise(etype, value, trace)
    File "tempest/test.py", line 290, in tearDownClass
      teardown()
    File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 169, in resource_cleanup
      network['id'])
    File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 210, in _try_delete_resource
      delete_callable(*args, **kwargs)
    File "/opt/stack/new/neutron/neutron/tests/tempest/services/network/json/network_client.py", line 115, in _delete
      resp, body = self.delete(uri)
    File "tempest/lib/common/rest_client.py", line 306, in delete
      return self.request('DELETE', url, extra_headers, headers, body)
    File "tempest/lib/common/rest_client.py", line 663, in request
      self._error_checker(resp, resp_body)
    File "tempest/lib/common/rest_client.py", line 775, in _error_checker
      raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'type': u'NetworkInUse', u'detail': u'', u'message': u'Unable to complete operation on network 773449f0-a5b0-4099-95fe-6b1a4ed096a6. There are one or more ports still in use on the network.'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662387/+subscriptions



[Yahoo-eng-team] [Bug 1738341] Re: Nova volume-attach shows volume in 'available' status instead of 'In-use'

2018-01-03 Thread Matt Riedemann
*** This bug is a duplicate of bug 1737724 ***
https://bugs.launchpad.net/bugs/1737724

This is already fixed in cinder, the failure is in the cinder volume
side:

Dec 14 21:52:09 cld13b9 cinder-volume[13929]: ERROR oslo_db.sqlalchemy.exc_filters
Dec 14 21:52:09 cld13b9 cinder-volume[13929]: ERROR oslo_messaging.rpc.server [None req-d6427368-7a5a-48e5-969a-ff79d27bbf9d admin None] Exception during message handling: DBError: (pymysql.err.InternalError) (1241, u'Operand should contain 1 column(s)') [SQL: u'INSERT INTO attachment_specs (created_at, updated_at, deleted_at, deleted, `key`, value, attachment_id) VALUES (%(created_at)s, %(updated_at)s, %(deleted_at)s, %(deleted)s, %(key)s, %(value)s, %(attachment_id)s)'] [parameters: {'attachment_id': '0784d959-fa1b-45e4-8185-eccbefdafda2', 'deleted': 0, 'created_at': datetime.datetime(2017, 12, 15, 5, 52, 9, 132697), 'updated_at': None, 'value': [u'1000b05ada00fea1', u'1000b05ada00fea9', u'10005cb901c32121', u'10005cb901c32129'], 'key': u'wwpns', 'deleted_at': None}]

I'll mark it as a duplicate once I find the cinder bug.

** This bug has been marked a duplicate of bug 1737724
   FC attachment issue

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738341

Title:
  Nova volume-attach shows volume in 'available' status instead of 'In-use'

Status in OpenStack Compute (nova):
  New

Bug description:
  stack@cld13b11:~$ openstack --version
  openstack 3.13.0
  stack@cld13b11:~$

  stack@cld13b11:~/devstack$ git branch -a
  * master
remotes/origin/HEAD -> origin/master
remotes/origin/master
remotes/origin/stable/newton
remotes/origin/stable/ocata
remotes/origin/stable/pike
  stack@cld13b11:~/devstack$

  Steps followed:

  CREATED AN INSTANCE
  stack@cld13b9:~$ nova boot --image 40ed0df4-2693-4b5e-9d17-666f1763de6c --flavor m1.tiny Instnace1
  +--------------------------------------+----------------------+
  | Property                             | Value                |
  +--------------------------------------+----------------------+
  | OS-DCF:diskConfig                    | MANUAL               |
  | OS-EXT-AZ:availability_zone          |                      |
  | OS-EXT-SRV-ATTR:host                 | -                    |
  | OS-EXT-SRV-ATTR:hostname             | instnace1            |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                    |
  | OS-EXT-SRV-ATTR:instance_name        |                      |
  | OS-EXT-SRV-ATTR:kernel_id            |                      |
  | OS-EXT-SRV-ATTR:launch_index         | 0                    |
  | OS-EXT-SRV-ATTR:ramdisk_id           |                      |
  | OS-EXT-SRV-ATTR:reservation_id       | r-jpn0k95b           |
  | OS-EXT-SRV-ATTR:root_device_name     | -                    |
  | OS-EXT-SRV-ATTR:user_data            | -                    |
  | OS-EXT-STS:power_state               | 0                    |
  | OS-EXT-STS:task_state                | scheduling           |
  | OS-EXT-STS:vm_state                  | building             |
  | OS-SRV-USG:launched_at               | -                    |
  | OS-SRV-USG:terminated_at             | -                    |
  | accessIPv4                           |                      |
  | accessIPv6                           |                      |
  | adminPass                            | nnUwV3XtYJuT         |
  | config_drive                         |                      |
  | created                              | 2017-12-15T05:35:00Z |
  | description

[Yahoo-eng-team] [Bug 1736212] Re: Facing Error while deploying Overcloud - " NoValidHost: No valid host was found. There are not enough hosts available."

2018-01-03 Thread Matt Riedemann
Which version of nova and ironic?

Can you attach the nova-scheduler and nova-compute logs? Without digging
into the logs for the failure one can't really tell why it failed.
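
A hedged sketch for pulling the relevant errors out of those logs (the log
paths are an assumption; they vary by distro and deployment tool):

  # grep -B2 -A15 "NoValidHost" /var/log/nova/nova-scheduler.log
  # grep -iE "error|fault" /var/log/nova/nova-compute.log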

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736212

Title:
  Facing Error while deploying Overcloud - " NoValidHost: No valid host
  was found. There are not enough hosts available."

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description:
  Facing the error message (No valid host was found. There are not enough hosts 
available., Code: 500) while deploying overcloud.

  Environment:
  Undercloud successfully deployed on VM
  Planning to deploy 'Compute' and 'Controller' on separate baremetal servers
  Using PIKE version

  Note:
  Introspection completed successfully with 2 nodes

  Issue:
  Facing the error message (No valid host was found. There are not enough hosts 
available., Code: 500) while deploying overcloud using the below command

  openstack overcloud deploy --template temlpates --control-scale 1
  --control-flavor controllertest --compute-scale 1 --control-flavor
  computetest

  Verified logs on nova-conductor, nova-scheduler and heat as well; all
  of them show the same error message. Please let me know how to fix this
  issue.

  Provided the required output below.

  Command Output:
  (undercloud) [stack@director ~]$ ironic node-list
  +--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
  | UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
  +--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
  | 2fb4794c-f4e4-46c6-bbee-18432e647225 | controller0 | None          | power off   | available          | False       |
  | 6bacea55-39d1-4e6b-a65a-6bb0c95c214b | compute0    | None          | power off   | available          | False       |
  +--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
  (undercloud) [stack@director ~]$ openstack flavor list
  +--------------------------------------+----------------+------+------+-----------+-------+-----------+
  | ID                                   | Name           |  RAM | Disk | Ephemeral | VCPUs | Is Public |
  +--------------------------------------+----------------+------+------+-----------+-------+-----------+
  | 09b9adde-37fd-4a49-af1e-e405a3bc892c | control        | 4096 |   40 |         0 |     1 | True      |
  | 09ca6eed-9f41-4e53-8470-ca21c1c773fc | compute        | 4096 |   40 |         0 |     1 | True      |
  | 3a57f910-3cfc-47d7-bd61-3c6459913ed0 | computetest    | 7200 |  140 |         0 |     8 | True      |
  | 8e840584-5ab0-4dc9-8fce-b05e16a77f21 | controllertest | 6400 |  100 |         0 |     4 | True      |
  | 93f697dc-ed34-4f6f-8041-7c26b648f307 | swift-storage  | 4096 |   40 |         0 |     1 | True      |
  | ab752d2c-c320-4026-80e0-53c687a12a9b | ceph-storage   | 4096 |   40 |         0 |     1 | True      |
  | cf54e4d2-f7ce-4152-bc1e-02a16f1bdca4 | block-storage  | 4096 |   40 |         0 |     1 | True      |
  | d6342b38-43a9-4f8c-89b1-7cf36f6f00a1 | baremetal      | 4096 |   40 |         0 |     1 | True      |
  +--------------------------------------+----------------+------+------+-----------+-------+-----------+
  (undercloud) [stack@director ~]$ openstack image list
  +--+++
  | ID   | Name   | Status |
  +--+++
  | ae19fc91-146f-4bac-9777-4031b23b36be | bm-deploy-kernel   | active |
  | bc10bb5f-1ed4-4b99-8d6d-3cd56881e942 | bm-deploy-ramdisk  | active |
  | be1f2a18-e3b0-43a5-a194-32fcf792faaa | overcloud-full | active |
  | 2f357051-f349-4131-b1b1-85f6fc8c3abe | overcloud-full-initrd  | active |
  | a3315b17-e411-4d0c-ad7d-d5e614b6b18d | overcloud-full-vmlinuz | active |
  +--+++

  (undercloud) [stack@director ~]$ ironic node-show controller0
  +----------------+-------+
  | Property       | Value |
  +----------------+-------+
  | boot_interface | None  |
  | chassis_uuid   | None  |
  | clean_step     | {}    |

[Yahoo-eng-team] [Bug 1738729] Re: keystoneclient.exceptions.EndpointNotFound

2018-01-03 Thread Matt Riedemann
Check the [neutron] section of your nova.conf: nova constructs a
python-neutronclient instance to communicate with neutron using the
credentials in that section, and apparently it's failing to look up the
networking endpoint in the service catalog.
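
A rough sketch of what that section typically contains (a hedged example;
option names shifted across releases, and the endpoints/credentials here are
placeholders to check against your release's config reference):

  [neutron]
  url = http://controller:9696
  auth_url = http://controller:35357
  auth_type = password
  project_domain_name = Default
  user_domain_name = Default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = NEUTRON_PASS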

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738729

Title:
  keystoneclient.exceptions.EndpointNotFound

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The nova-api service gives this error when I try to create an
  instance.

  My OpenStack release is Liberty; the network module is neutron (provider
  networks). I checked all my config files, but nothing made this work.

  Finally I tried to update the nova components, and that worked.

  But there is another problem. When creating an instance through the
  command line, the command must be:

  nova boot --flavor m1.tiny --image cirros --security-group default
  --key-name mykey public-instance

  The command line can't include the "--nic" option. If it does, the
  instance fails to be created, and the nova-api log outputs these messages:
  
---
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions [req-ac12c942-a2e8-4ad2-a905-9beadb052ebf 34400c494f91427ba0ed68aadd008a70 8dc0a0aa960d41a69cf114a3883fd085 - - -] Unexpected exception in API method
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, in wrapped
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in wrapper
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 611, in create
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     **create_kwargs)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     rv = f(*args, **kwargs)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1587, in create
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     check_server_group_quota=check_server_group_quota)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1187, in _create_instance
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     auto_disk_config, reservation_id, max_count)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 961, in _validate_and_build_base_options
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     pci_request_info, requested_networks)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1092, in create_pci_requests_for_sriov_ports
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     neutron = get_client(context, admin=True)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 237, in get_client
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     auth_token = _ADMIN_AUTH.get_token(_SESSION)
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/base.py", line 200, in get_token
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     return self.get_access(session).auth_token
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/dist-packages/keystoneclient/auth/identity/base.py", line 240, in get_access
  2017-12-18 14:12:28.613 27413 ERROR nova.api.openstack.extensions     self.auth_ref

[Yahoo-eng-team] [Bug 1738933] Re: openstack server create failed with http 500

2018-01-03 Thread Matt Riedemann
Looks like you have some invalid configuration in nova.conf, like nova
can't communicate with neutron.

NeutronAdminCredentialConfigurationInvalid
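
A hedged way to sanity-check those credentials and the catalog from the CLI
(substitute the values from the [neutron] section of your nova.conf; the
client prompts for the password):

  $ openstack --os-auth-url http://controller:35357/v3 \
      --os-project-name service --os-username neutron \
      --os-user-domain-name Default --os-project-domain-name Default \
      token issue
  $ openstack catalog list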

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738933

Title:
  openstack server create failed with http 500

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  START with options: [u'server', u'create', u'--flavor', u'1', u'--image', 
u'cirros', u'--nic', u'net-id=980b72ec-f4d1-48db-b552-4e0b918e2780', 
u'provider-instance1', u'--debug']
  options: Namespace(access_key='', access_secret='***', access_token='***', 
access_token_endpoint='', access_token_type='', auth_type='', 
auth_url='http://controller:35357/v3', cacert=None, cert='', client_id='', 
client_secret='***', cloud='', code='', consumer_key='', consumer_secret='***', 
debug=True, default_domain='default', default_domain_id='', 
default_domain_name='', deferred_help=False, discovery_endpoint='', 
domain_id='', domain_name='', endpoint='', identity_provider='', 
identity_provider_url='', insecure=None, interface='', key='', log_file=None, 
openid_scope='', os_beta_command=False, os_compute_api_version='', 
os_dns_api_version='2', os_identity_api_version='3', os_image_api_version='2', 
os_key_manager_api_version='1', os_network_api_version='', 
os_object_api_version='', os_project_id=None, os_project_name=None, 
os_volume_api_version='', passcode='', password='***', profile='', 
project_domain_id='', project_domain_name='Default', project_id='', 
project_name='admin', protocol='', redirect_uri='', region_name='', 
service_provider_endpoint='', service_provider_entity_id='', timing=False, 
token='***', trust_id='', url='', user_domain_id='', 
user_domain_name='Default', user_id='', username='admin', verbose_level=3, 
verify=None)
  Auth plugin password selected
  auth_config_hook(): {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'admin', 
u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': u'1', 'auth': 
{'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 
'Default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', 'timing': False, 'password': '***', 
u'application_catalog_api_version': u'1', 'cacert': None, 
u'key_manager_api_version': '1', u'workflow_api_version': u'2', 
'deferred_help': False, u'identity_api_version': '3', u'volume_api_version': 
u'2', 'cert': None, u'secgroup_source': u'neutron', u'status': u'active', 
'debug': True, u'interface': None, u'disable_vendor_agent': {}}
  defaults: {u'auth_type': 'password', u'status': u'active', 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
'api_timeout': None, u'baremetal_api_version': u'1', u'image_api_version': 
u'2', u'container_infra_api_version': u'1', u'metering_api_version': u'2', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', 'cacert': None, u'network_api_version': 
u'2', u'message': u'', u'image_format': u'qcow2', 
u'application_catalog_api_version': u'1', u'key_manager_api_version': u'v1', 
u'workflow_api_version': u'2', 'verify': True, u'identity_api_version': u'2.0', 
u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 
u'container_api_version': u'1', u'dns_api_version': u'2', 
u'object_store_api_version': u'1', u'interface': None, u'disable_vendor_agent': 
{}}
  cloud cfg: {'auth_type': 'password', 'beta_command': False, 
u'compute_api_version': u'2', 'key': None, u'database_api_version': u'1.0', 
u'metering_api_version': u'2', 'auth_url': 'http://controller:35357/v3', 
u'network_api_version': u'2', u'message': u'', u'image_format': u'qcow2', 
'networks': [], u'image_api_version': '2', 'verify': True, u'dns_api_version': 
'2', u'object_store_api_version': u'1', 'username': 'admin', 
u'container_infra_api_version': u'1', 'verbose_level': 3, 'region_name': '', 
'api_timeout': None, u'baremetal_api_version': u'1', 'auth': 
{'user_domain_name': 'Default', 'project_name': 'admin', 'project_domain_name': 
'Default'}, 'default_domain': 'default', u'container_api_version': u'1', 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', 'timing': False, 'password': '***', 
u'application_catalog_api_version': u'1', 'cacert': None, 

[Yahoo-eng-team] [Bug 1739542] Re: Volume-backed instance cannot backup

2018-01-03 Thread Matt Riedemann
Yeah see: https://review.openstack.org/#/c/164494/

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Tags added: api backup volumes

** Changed in: nova
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739542

Title:
  Volume-backed instance cannot backup

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova backup doesn't support volume-backed instances. When backing up a
  volume-backed instance, it raises InvalidRequest.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739542/+subscriptions



[Yahoo-eng-team] [Bug 1739950] Re: Nova config reference pike

2018-01-03 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Fix Released

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: nova/pike
   Status: New => In Progress

** Changed in: nova/pike
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739950

Title:
  Nova config reference pike

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  hi,

  Looking at https://docs.openstack.org/pike/configuration/ I can't find
  the nova documentation. Should it be there, or is it located somewhere
  else?

  b

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1739950/+subscriptions



[Yahoo-eng-team] [Bug 1740802] Re: execute nova boot with two port command failed

2018-01-03 Thread Matt Riedemann
*** This bug is a duplicate of bug 1706597 ***
https://bugs.launchpad.net/bugs/1706597

This is a python-novaclient bug; which version of python-novaclient are
you using?

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1740802

Title:
  execute nova boot with two port command failed

Status in python-novaclient:
  New

Bug description:
  Description
  ===
  execute nova boot with two port command failed

  Steps to reproduce
  ===
  [root@E9000slot5 versions(keystone_admin)]# neutron net-list
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +--------------------------------------+------+----------------------------------+----------------------------------------------------+
  | id                                   | name | tenant_id                        | subnets                                            |
  +--------------------------------------+------+----------------------------------+----------------------------------------------------+
  | 63bece24-45a3-445a-b170-ed9c34797f26 | net2 | e05b5607ff3344909a67490ba6c16384 |                                                    |
  | 95ac4b4e-84c9-4875-8c3f-4ed037b75f21 | net1 | e05b5607ff3344909a67490ba6c16384 | a84cf9dc-13e2-4f5d-828d-68ea83e960cf 11.11.11.0/24 |
  +--------------------------------------+------+----------------------------------+----------------------------------------------------+
  [root@E9000slot5 versions(keystone_admin)]# glance image-list
  +--------------------------------------+-----------+
  | ID                                   | Name      |
  +--------------------------------------+-----------+
  | ec3019a1-ebcd-4b12-9afe-f67ea1cc60ed | cirros228 |
  +--------------------------------------+-----------+
  [root@E9000slot5 versions(keystone_admin)]# nova flavor-list
  +-----+--------------------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID  | Name               | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +-----+--------------------+-----------+------+-----------+------+-------+-------------+-----------+
  | 100 | 2h2g2gdiskwithnuma | 2048      | 2    | 0         |      | 2     | 1.0         | True      |
  +-----+--------------------+-----------+------+-----------+------+-------+-------------+-----------+
  [root@E9000slot5 versions(keystone_admin)]# nova boot --flavor 100 --image ec3019a1-ebcd-4b12-9afe-f67ea1cc60ed --nic net-id=95ac4b4e-84c9-4875-8c3f-4ed037b75f21 --nic net-id=63bece24-45a3-445a-b170-ed9c34797f26 test_tx
  ERROR (CommandError): Invalid nic argument 'net-id=63bece24-45a3-445a-b170-ed9c34797f26'. Nic arguments must be of the form --nic <auto,none,net-id=net-uuid,net-name=network-name,port-id=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,tag=tag>, with only one of net-id, net-name or port-id specified. Specifying a --nic of auto or none cannot be used with any other --nic value
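
  A hedged workaround on the affected client is the unified CLI, which
  accepts repeated --nic values (same IDs as above):

    $ openstack server create --flavor 100 \
        --image ec3019a1-ebcd-4b12-9afe-f67ea1cc60ed \
        --nic net-id=95ac4b4e-84c9-4875-8c3f-4ed037b75f21 \
        --nic net-id=63bece24-45a3-445a-b170-ed9c34797f26 test_tx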

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1740802/+subscriptions



[Yahoo-eng-team] [Bug 1741001] Re: Got an unexpected keyword argument when starting nova-api

2018-01-03 Thread Matt Riedemann
This was the related nova change:

https://github.com/openstack/nova/commit/409d7db21e2d7faf0bbf76f982a3403c79897e4f#diff-8fec546e4c39f78d233f8e21dadaa3ff

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741001

Title:
  Got an unexpected keyword argument when starting nova-api

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  After upgrading oslo.db, the nova-api service failed to start.

  Steps to reproduce
  ==================
  * pip install oslo.db==4.24.0
  * starting the nova-api service works
  * `pip install --upgrade oslo.db` to 4.32.0
  * nova-api fails to start

  Expected result
  ===============
  requirements.txt allows oslo.db >= 4.24.0, so nova-api should start with
  any version that satisfies it, including 4.32.0.

  Actual result
  =============
  With oslo.db 4.32.0 (latest), nova-api fails to start with a TypeError:
  unexpected keyword argument 'retry_on_request'.
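
  A hedged interim workaround is pinning oslo.db back to the version that
  worked, e.g.:

    # pip install 'oslo.db==4.24.0'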

  Environment
  ===
  1. nova version
  # rpm -qa | grep nova
  openstack-nova-console-16.0.3-2.el7.noarch
  openstack-nova-common-16.0.3-2.el7.noarch
  python2-novaclient-9.1.1-1.el7.noarch
  openstack-nova-scheduler-16.0.3-2.el7.noarch
  openstack-nova-api-16.0.3-2.el7.noarch
  openstack-nova-placement-api-16.0.3-2.el7.noarch
  python-nova-16.0.3-2.el7.noarch
  openstack-nova-conductor-16.0.3-2.el7.noarch
  openstack-nova-novncproxy-16.0.3-2.el7.noarch

  Logs & Configs
  ==
  Jan  3 06:59:13 host-172-23-59-134 systemd: Starting OpenStack Nova API Server...
  Jan  3 06:59:16 host-172-23-59-134 nova-api: Traceback (most recent call last):
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/bin/nova-api", line 6, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova.cmd.api import main
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 29, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova import config
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/config.py", line 23, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova.db.sqlalchemy import api as sqlalchemy_api
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 925, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: retry_on_request=True)
  Jan  3 06:59:16 host-172-23-59-134 nova-api: TypeError: __init__() got an unexpected keyword argument 'retry_on_request'
  Jan  3 06:59:16 host-172-23-59-134 systemd: openstack-nova-api.service: main process exited, code=exited, status=1/FAILURE

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741001/+subscriptions



[Yahoo-eng-team] [Bug 1735687] Re: Not able to list the compute_nodes mapped under single cell using nova-manage

2018-01-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/524755
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c7b51a63b0c3b5ad80979060bbfe49287e6da847
Submitter: Zuul
Branch: master

commit c7b51a63b0c3b5ad80979060bbfe49287e6da847
Author: Hongbin Lu 
Date:   Fri Dec 1 23:03:04 2017 +

Add support for listing hosts in cellv2

Add a ``nova-manage cell_v2 list_hosts`` command for listing hosts
in one or all v2 cells.

Change-Id: Ie8eaa8701aafac10e030568107b8e6255a60434d
Closes-Bug: #1735687
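
With this change, listing hosts looks roughly like the following (a sketch
based on the commit message; the flag spelling follows the report below):

  nova-manage cell_v2 list_hosts
  nova-manage cell_v2 list_hosts --cell_uuid <cell-uuid>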


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1735687

Title:
  Not able to list the compute_nodes mapped under single cell using
  nova-manage

Status in OpenStack Compute (nova):
  Fix Released

Bug description:

  Description
  ===
  Currently I am not able to list the compute_nodes present under a single
  cell using nova-manage. The only way to check the available compute_nodes
  under a specific cell is from the host_mappings table.

  Steps to reproduce
  ==

  * nova-manage cell_v2 list_hosts --cell_uuid 
  * No method found

  
  Expected result
  ===
  nova-manage cell_v2 list_hosts --cell_uuid 1794f657-d8e9-41ad-bff7-01b284b55a9b

  ++--+---+
  | Id |   Host   | Cell Name |
  ++--+---+
  | 2  | compute2 |   cell2   |
  ++--+---+

  Actual result
  =
  No such method (list_hosts) found from nova-manage

  Environment
  ===

  Using Devstack stable/pike, nova-manage (16.0.3 ).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1735687/+subscriptions



[Yahoo-eng-team] [Bug 1741125] [NEW] Instance resize always fails when rescheduling

2018-01-03 Thread Ed Leafe
Public bug reported:

When resizing an instance, the migration task calls
"replace_allocation_with_migration()", which checks that the instance has
allocations against the source compute node [0]. It then replaces the
instance's allocation with the migration's, using the migration's UUID as the
consumer instead of the instance's. However, if the first attempt at
migrating fails and the resize is rescheduled, when
"replace_allocation_with_migration()" is called again, the allocations now
have the migration UUID as the consumer, so the check that the instance has
allocations on the source compute node fails, and the migration is put in an
ERROR state.

[0] https://github.com/openstack/nova/blob/f95f165b49fbc0efe29450b0e858a3ccadecedea/nova/conductor/tasks/migrate.py#L47-L48
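
A hedged sketch for checking which consumer (instance vs. migration UUID)
currently holds the allocations; the endpoint, port and microversion are
assumptions, but GET /allocations/{consumer_uuid} is the placement API
involved:

  $ TOKEN=$(openstack token issue -f value -c id)
  $ curl -s -H "X-Auth-Token: $TOKEN" \
      -H "OpenStack-API-Version: placement 1.17" \
      http://controller:8778/allocations/<instance-or-migration-uuid>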

** Affects: nova
 Importance: High
 Assignee: Ed Leafe (ed-leafe)
 Status: Confirmed


** Tags: migration placement queens-rc-potential resize

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741125

Title:
  Instance resize always fails when rescheduling

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When resizing an instance, the migration task calls
  "replace_allocation_with_migration()", which checks that the instance has
  allocations against the source compute node [0]. It then replaces the
  instance's allocation with the migration's, using the migration's UUID as
  the consumer instead of the instance's. However, if the first attempt at
  migrating fails and the resize is rescheduled, when
  "replace_allocation_with_migration()" is called again, the allocations now
  have the migration UUID as the consumer, so the check that the instance has
  allocations on the source compute node fails, and the migration is put in
  an ERROR state.

  [0] https://github.com/openstack/nova/blob/f95f165b49fbc0efe29450b0e858a3ccadecedea/nova/conductor/tasks/migrate.py#L47-L48

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741125/+subscriptions



[Yahoo-eng-team] [Bug 1741123] [NEW] neutron-lbaas endless haproxy restarts due to faulty Neutron port fixed_ip field

2018-01-03 Thread Alvaro Uría
Public bug reported:

Xenial-Ocata

A LBaaS has a port binding that did not complete (fixed_ips is empty).
This causes https://pastebin.ubuntu.com/26315229/ , making the LB agent
restart every minute (recreating all the LBs running on that agent).

"neutron port-show" shows:

{
  "allowed_address_pairs": [], 
  "extra_dhcp_opts": [], 
  "updated_at": "2018-01-03T18:14:31Z", 
  "device_owner": "neutron:LOADBALANCERV2", 
  "revision_number": 58905, 
  "binding:profile": {}, 
  "port_security_enabled": true, 
  "fixed_ips": [], 
  "id": "f28b4089-86f1-408e-9259-778171e3d21a", 
  "security_groups": [
"49316f86-4d84-4ff6-8485-aa0b6b925a21"
  ], 
  "binding:vif_details": {
"port_filter": true, 
"ovs_hybrid_plug": true
  }, 
  "binding:vif_type": "ovs", 
  "mac_address": "fa:16:3e:21:95:4b", 
  "project_id": "077055e5a7b74c29a7a7f77769b48a30", 
  "status": "BUILD", 
  "binding:host_id": "REDACTED", 
  "description": null, 
  "tags": [], 
  "dns_assignment": [], 
  "device_id": "08434380-226c-4ccd-be12-34cb6ba5efab", 
  "name": "loadbalancer-08434380-226c-4ccd-be12-34cb6ba5efab", 
  "admin_state_up": true, 
  "network_id": "cce08c18-d734-4d42-a0be-b630fdfde87a", 
  "dns_name": "", 
  "created_at": "2017-12-05T09:54:40Z", 
  "binding:vnic_type": "normal", 
  "tenant_id": "077055e5a7b74c29a7a7f77769b48a30"
}

By updating the lbaas with "--admin-state-up False", the LB agent restarts
stopped happening. I think this should be the default behaviour, to avoid
affecting other load balancers.
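
A hedged sketch of that workaround with the v2 LBaaS client (the load
balancer ID comes from the device_id/name in the port shown above):

  $ neutron lbaas-loadbalancer-update 08434380-226c-4ccd-be12-34cb6ba5efab \
      --admin-state-up False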

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1741123

Title:
  neutron-lbaas endless haproxy restarts due to faulty Neutron port
  fixed_ip field

Status in neutron:
  New

Bug description:
  Xenial-Ocata

  A LBaaS has a port binding that did not complete (fixed_ips is empty).
  This causes https://pastebin.ubuntu.com/26315229/ , making the LB agent
  restart every minute (recreating all the LBs running on that agent).

  "neutron port-show" shows:

  {
"allowed_address_pairs": [], 
"extra_dhcp_opts": [], 
"updated_at": "2018-01-03T18:14:31Z", 
"device_owner": "neutron:LOADBALANCERV2", 
"revision_number": 58905, 
"binding:profile": {}, 
"port_security_enabled": true, 
"fixed_ips": [], 
"id": "f28b4089-86f1-408e-9259-778171e3d21a", 
"security_groups": [
  "49316f86-4d84-4ff6-8485-aa0b6b925a21"
], 
"binding:vif_details": {
  "port_filter": true, 
  "ovs_hybrid_plug": true
}, 
"binding:vif_type": "ovs", 
"mac_address": "fa:16:3e:21:95:4b", 
"project_id": "077055e5a7b74c29a7a7f77769b48a30", 
"status": "BUILD", 
"binding:host_id": "REDACTED", 
"description": null, 
"tags": [], 
"dns_assignment": [], 
"device_id": "08434380-226c-4ccd-be12-34cb6ba5efab", 
"name": "loadbalancer-08434380-226c-4ccd-be12-34cb6ba5efab", 
"admin_state_up": true, 
"network_id": "cce08c18-d734-4d42-a0be-b630fdfde87a", 
"dns_name": "", 
"created_at": "2017-12-05T09:54:40Z", 
"binding:vnic_type": "normal", 
"tenant_id": "077055e5a7b74c29a7a7f77769b48a30"
  }

  By updating the lbaas with "--admin-state-up False", the LB agent restarts
  stopped happening. I think this should be the default behaviour, to avoid
  affecting other load balancers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1741123/+subscriptions



[Yahoo-eng-team] [Bug 1737900] Re: Random volume type on create volume from image

2018-01-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/529910
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=3aee4cbac666f080133671790baa9dd42223c996
Submitter: Zuul
Branch: master

commit 3aee4cbac666f080133671790baa9dd42223c996
Author: wei.ying 
Date:   Fri Nov 17 12:03:56 2017 +0800

Fix incorrect volume type value in ng images create volume form

The value of 'volume_type' is determined by the 'volumeType' object [1],
so it should change only when the 'volumeType' object changes. However,
the current code updates 'volume_type' whenever the volume object
changes [2].

This causes the following behaviour: when the page is initialized, the
value of 'volume_type' is empty; when we switch the volume type
drop-down box, its value stays empty; only after changing the name,
description, size or availability zone does 'volume_type' pick up the
page selection.

[1] https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/images/steps/create-volume/create-volume.html#L45
[2] https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/images/steps/create-volume/create-volume.controller.js#L140

Change-Id: If754d0c2ced844414c35829d4cefa1fb861522d5
Closes-Bug: #1737900


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1737900

Title:
  Random volume type on create volume from image

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  I want to create a new volume from an image, starting from the context menu
  on the images list (url: horizon/project/images).
  There is a form where I can specify name, size, AZ, and volume type.
  In all my tests, the volume type is randomly chosen from the existing two
  (in my setup).
  The same actions called from the CLI work correctly.
  There is no error in the logs.

  
  environment:
  Ocata
  openstack-dashboard 3:11.0.1-0ubuntu1~cloud0
  python-glanceclient 1:2.6.0-0ubuntu1~cloud0
  python-cinderclient 1:1.11.0-0ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1737900/+subscriptions



[Yahoo-eng-team] [Bug 1613900] Re: Unable to use 'Any' availability zone when spawning instance

2018-01-03 Thread Corey Bryant
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: horizon (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Importance: Undecided => High

** Changed in: horizon (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: horizon (Ubuntu)
   Status: In Progress => Invalid

** Changed in: cloud-archive
   Status: In Progress => Invalid

** Changed in: cloud-archive/mitaka
 Assignee: (unassigned) => Shane Peters (shaner)

** Changed in: horizon (Ubuntu Xenial)
 Assignee: (unassigned) => Shane Peters (shaner)

** Changed in: cloud-archive/mitaka
 Assignee: Shane Peters (shaner) => (unassigned)

** Changed in: cloud-archive/mitaka
 Assignee: (unassigned) => Shane Peters (shaner)

** Changed in: cloud-archive
 Assignee: Shane Peters (shaner) => (unassigned)

** Changed in: horizon (Ubuntu)
 Assignee: Shane Peters (shaner) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1613900

Title:
  Unable to use 'Any' availability zone when spawning instance

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Invalid
Status in horizon source package in Xenial:
  Triaged

Bug description:
  While using Mitaka, we found that by default, using the JS backend, it is
  not possible to choose the 'Any' availability zone. The issue is not fixed
  in the master branch.

  For the python implementation, the logic is:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L390

  The JS implementation misses this logic if the number of AZs is >1:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js#L321

  Also, the JS implementation looks ugly if you have a lot of subnets per
  network...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1613900/+subscriptions



[Yahoo-eng-team] [Bug 1740998] Re: DHCP Agents of multiple Allocation Pools

2018-01-03 Thread Miguel Lavalle
No, there is no way to see the address assigned to the DHCP port from
the subnet creation response. That is because the subnet creation
operation is independent from the DHCP port creation operation.
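
A hedged sketch for discovering the DHCP port's address once it exists, using
the neutron CLI's generic "--" filter passthrough (the network ID is taken
from the response quoted below):

  $ neutron port-list -- --network_id=0926e907-8e69-4231-ae2f-a051b2d77bf2 \
      --device_owner=network:dhcp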

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1740998

Title:
  DHCP Agents of multiple Allocation Pools

Status in neutron:
  Opinion

Bug description:
  Problem:
  When creating a subnet with "enable_dhcp": true, there is down time while
  OpenStack establishes the DHCP ports, and I cannot control or know which IP
  is going to be applied to the DHCP agents. The response cannot immediately
  provide the specific IP which is selected for the attached device
  (network:dhcp). In my observations and experiments, when creating a subnet
  with multiple allocation pools, the IP used by the DHCP agents is picked
  randomly from the different allocation pools. I need to use GET /v2.0/ports
  to map the networks and subnets repeatedly until the first IP shows up in
  the response.

  It would help if there were a way to know which IP is used by the DHCP
  agents across multiple allocation pools from the response to creating the
  subnet.

  --

  I sent a POST request per the Networking API v2.0 to create a subnet with
  multiple allocation pools on the network, and I also enabled DHCP (true).

  version: mitaka

  POST /v2.0/subnets

  #Response

  {"subnet":{"allocation_pools":[
  {"start":"172.18.0.1","end":"172.18.0.10"},
  {"start":"172.18.0.20","end":"172.18.0.30"}],
  "ipv6_address_mode":null,
  "tenant_id":"b3746b09cacb421cba95ce1afd380762",
  "subnetpool_id":null,
  "gateway_ip":"172.18.0.254",
  "cidr":"172.18.0.0/24",
  "id":"73b8ffaa-6fed-425a-b00b-fb75dffdaa57",
  "updated_at":"2018-01-03T04:18:22",
  "network_id":"0926e907-8e69-4231-ae2f-a051b2d77bf2",
  "dns_nameservers":["8.8.8.8"],
  "description":"","name":"",
  "enable_dhcp":true,
  
"created_at":"2018-01-03T04:18:22","ipv6_ra_mode":null,"host_routes":[],"ip_version":4}}

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1740998/+subscriptions



[Yahoo-eng-team] [Bug 1734167] Re: DNS doesn't work in no-cloud as launched by ubuntu

2018-01-03 Thread Launchpad Bug Tracker
This bug was fixed in the package systemd - 235-3ubuntu3

---
systemd (235-3ubuntu3) bionic; urgency=medium

  * networkd: add support for RequiredForOnline stanza. (LP: #1737570)
  * resolved.service: set DefaultDependencies=no (LP: #1734167)
  * systemd.postinst: enable persistent journal. (LP: #1618188)
  * core: add support for non-writable unified cgroup hierarchy for container 
support.
(LP: #1734410)

 -- Dimitri John Ledkov   Tue, 12 Dec 2017 13:25:32
+

** Changed in: systemd (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1734167

Title:
  DNS doesn't work in no-cloud as launched by ubuntu

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  Fix Released
Status in cloud-init source package in Zesty:
  Fix Released
Status in systemd source package in Zesty:
  Fix Released
Status in cloud-init source package in Artful:
  Confirmed
Status in systemd source package in Artful:
  In Progress
Status in cloud-init source package in Bionic:
  Confirmed
Status in systemd source package in Bionic:
  Fix Released

Bug description:
  I use no-cloud to test the kernel in CI (I am maintainer of the bcache
  subsystem), and have been running it successfully under 16.04 cloud
  images from qemu, using a qemu command that includes:

  -smbios "type=1,serial=ds=nocloud-
  net;s=https://raw.githubusercontent.com/mlyle/mlyle/master/cloud-
  metadata/linuxtst/"

  As documented here:

  http://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html

  Under the new 17.10 cloud images, this doesn't work: the network comes
  up, but name resolution doesn't. /etc/resolv.conf is a symlink to a
  nonexistent file at this point of the boot, and systemd-resolved is not
  running. When I manually hack /etc/resolv.conf in the cloud image to
  point to 4.2.2.1, it works fine.

  I don't know if nameservice not working is by design, but it seems
  like it should work.  The documentation states:

  "With ds=nocloud-net, the seedfrom value must start with http://,
  https:// or ftp://;

  And https is not going to work for a raw IP address.

  Related bugs:
   * bug 1734939: #include fails silently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1734167/+subscriptions



[Yahoo-eng-team] [Bug 1741093] [NEW] cloud-init clean traceback on instance dir symlink

2018-01-03 Thread Chad Smith
Public bug reported:

The cloud-init clean command raises a traceback when attempting to del_dir
on the 'instance' symlink, if 'instance' is ordered before 'instances' when
traversing the /var/lib/cloud directory.


smoser@milhouse:~$ lxc launch ubuntu-daily:bionic b4
Creating b4
Starting b4
smoser@milhouse:~$ lxc exec b4 -- ls -l /var/lib/cloud
total 4
drwxr-xr-x 2 root root  6 Jan  3 17:35 data
drwxr-xr-x 2 root root  2 Jan  3 17:35 handlers
lrwxrwxrwx 1 root root 27 Jan  3 17:35 instance -> /var/lib/cloud/instances/b4
drwxr-xr-x 3 root root  3 Jan  3 17:35 instances
drwxr-xr-x 6 root root  6 Jan  3 17:35 scripts
drwxr-xr-x 3 root root  3 Jan  3 17:35 seed
drwxr-xr-x 2 root root  2 Jan  3 17:35 sem
smoser@milhouse:~$ lxc exec b4 cloud-init clean
ERROR: Could not remove instance: Cannot call rmtree on a symbolic link
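
A hedged manual cleanup until the fix lands (paths per the listing above;
'instance' is a symlink, so a plain rm unlinks it rather than recursing):

  $ lxc exec b4 -- sh -c 'rm /var/lib/cloud/instance && rm -rf /var/lib/cloud/instances/*'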

** Affects: cloud-init
 Importance: Low
 Assignee: Chad Smith (chad.smith)
 Status: In Progress

** Changed in: cloud-init
   Importance: Undecided => Low

** Changed in: cloud-init
 Assignee: (unassigned) => Chad Smith (chad.smith)

** Changed in: cloud-init
   Status: New => Incomplete

** Changed in: cloud-init
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1741093

Title:
  cloud-init clean traceback on instance dir symlink

Status in cloud-init:
  In Progress

Bug description:
  The cloud-init clean command raises a traceback when attempting to del_dir
  on the 'instance' symlink, if 'instance' is ordered before 'instances' when
  traversing the /var/lib/cloud directory.

  
  smoser@milhouse:~$ lxc launch ubuntu-daily:bionic b4
  Creating b4
  Starting b4
  smoser@milhouse:~$ lxc exec b4 -- ls -l /var/lib/cloud
  total 4
  drwxr-xr-x 2 root root  6 Jan  3 17:35 data
  drwxr-xr-x 2 root root  2 Jan  3 17:35 handlers
  lrwxrwxrwx 1 root root 27 Jan  3 17:35 instance -> /var/lib/cloud/instances/b4
  drwxr-xr-x 3 root root  3 Jan  3 17:35 instances
  drwxr-xr-x 6 root root  6 Jan  3 17:35 scripts
  drwxr-xr-x 3 root root  3 Jan  3 17:35 seed
  drwxr-xr-x 2 root root  2 Jan  3 17:35 sem
  smoser@milhouse:~$ lxc exec b4 cloud-init clean
  ERROR: Could not remove instance: Cannot call rmtree on a symbolic link

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1741093/+subscriptions



[Yahoo-eng-team] [Bug 1741092] [NEW] project admin can delete everything in all domains

2018-01-03 Thread HT
Public bug reported:


Any user with the admin role in any project can perform arbitrary operations
in any other domain and project, including 'Default', for example deleting
cinder volumes and nova instances.
If I ask for a domain-scoped token (as domain admin) from the openstack CLI
or directly from the keystone API via curl, then I cannot do operations
outside of that particular domain, as expected.
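
For reference, a domain-scoped token request against the keystone v3 API
looks roughly like this (endpoint, domain name and credentials are
placeholders):

  curl -s http://controller:5000/v3/auth/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"identity": {"methods": ["password"],
          "password": {"user": {"name": "admin",
            "domain": {"name": "mydomain"}, "password": "secret"}}},
          "scope": {"domain": {"name": "mydomain"}}}}'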

Everything behaves normally when the domain admin concept is not used at
all, e.g. there is one Default domain, one user with the admin role, and all
other users in other domains use the _member_ role.

Horizon and keystone are using policy from here:
https://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json

Snippet from horizon local_settings.py
...
# Path to directory containing policy.json files
ROOT_PATH = '/etc/horizon/'
POLICY_FILES_PATH = os.path.join(ROOT_PATH, "conf")
POLICY_FILES = {
'identity': 'keystone_policy.json',
}
...


Versions:
horizon (12.0.2.dev6)
keystone (12.0.1.dev6)
keystoneauth1 (3.1.0)
keystonemiddleware (4.17.0)
python-keystoneclient (3.13.0)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1741092

Title:
  project admin can delete everything in all domains

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Any user with the admin role in any project can perform arbitrary
  operations in any other domain and project, including 'Default', for
  example deleting cinder volumes and nova instances.
  If I ask for a domain-scoped token (as domain admin) from the openstack CLI
  or directly from the keystone API via curl, then I cannot do operations
  outside of that particular domain, as expected.

  Everything behaves normally when the domain admin concept is not used at
  all, e.g. there is one Default domain, one user with the admin role, and
  all other users in other domains use the _member_ role.

  Horizon and keystone are using policy from here:
  
https://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json

  Snippet from horizon local_settings.py
  ...
  # Path to directory containing policy.json files
  ROOT_PATH = '/etc/horizon/'
  POLICY_FILES_PATH = os.path.join(ROOT_PATH, "conf")
  POLICY_FILES = {
  'identity': 'keystone_policy.json',
  }
  ...

  
  Versions:
  horizon (12.0.2.dev6)
  keystone (12.0.1.dev6)
  keystoneauth1 (3.1.0)
  keystonemiddleware (4.17.0)
  python-keystoneclient (3.13.0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1741092/+subscriptions



[Yahoo-eng-team] [Bug 1740259] Re: Document testing guide for new API contributions

2018-01-03 Thread Matt Riedemann
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1740259

Title:
  Document testing guide for new API contributions

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  https://review.openstack.org/529618
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 749f1ecbc5f3bf0a499bec328b158bf2b1d69240
  Author: Matt Riedemann 
  Date:   Thu Dec 21 10:22:49 2017 -0500

  Document testing guide for new API contributions
  
  This fills in the TODOs for the unit, functional and
  docs part of the API contributor guide.
  
  Since we don't rely on the DocImpact tag in the commit
  message for API changes (that tag results in a nova bug
  and was meant mostly for making changes to docs external
  to the nova repo, which is no longer the case), this
  changes that section to just talk about the in-tree docs
  that should be updated for API changes.
  
  Change-Id: I9ca423c09185d2e3733357fd47aaba82d716eea4

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1740259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613900] Re: Unable to use 'Any' availability zone when spawning instance

2018-01-03 Thread Billy Olsen
** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1613900

Title:
  Unable to use 'Any' availability zone when spawning instance

Status in Ubuntu Cloud Archive:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  In Progress

Bug description:
  While using Mitaka, we found that, with the default JS backend, it is
  not possible to choose 'Any' availability zone. The issue is not fixed
  in the master branch.

  For the Python implementation, the logic is:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L390

  The JS implementation misses this logic when the number of AZs is > 1
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js#L321
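
  A rough sketch of the "Any" choice logic the JS path misses
  (simplified; not the actual Horizon code, the function name is
  illustrative):

    def az_choices(zone_names):
        choices = [(name, name) for name in zone_names]
        if len(choices) > 1:
            # Only offer "Any" when there is more than one zone to
            # pick from; the empty value lets the scheduler choose.
            choices.insert(0, ("", "Any Availability Zone"))
        return choices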

  Also, the JS implementation looks ugly if you have a lot of subnets per
  network...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1613900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613900] Re: Unable to use 'Any' availability zone when spawning instance

2018-01-03 Thread Billy Olsen
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1613900

Title:
  Unable to use 'Any' availability zone when spawning instance

Status in Ubuntu Cloud Archive:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  In Progress

Bug description:
  While using Mitaka, we found that, with the default JS backend, it is
  not possible to choose 'Any' availability zone. The issue is not fixed
  in the master branch.

  For the Python implementation, the logic is:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L390

  The JS implementation misses this logic when the number of AZs is > 1
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js#L321

  Also, the JS implementation looks ugly if you have a lot of subnets per
  network...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1613900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741079] [NEW] Deleting a heat stack doesn't delete DNS records

2018-01-03 Thread Mark Ts
Public bug reported:

Environment: Ubuntu 16.04.3 LTS, Ocata

Summary:
For each new stack created, DNS records are automatically created for each
instance; on the other hand, deleting the stack doesn't trigger the deletion
of those records.


We have configured internal DNS integration using designate;
creating an instance/port triggers record creation, and deleting an
instance/port triggers record deletion.

However, while creating a heat stack, record creation works great for
each instance that is part of the stack, but deleting the stack does not
trigger record deletion.
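
For context, this style of integration is typically wired up roughly as
follows (an illustrative fragment with placeholder values, not our
exact configuration):

  # neutron.conf
  [DEFAULT]
  external_dns_driver = designate
  dns_domain = example.org.

  [designate]
  url = http://controller:9001/v2

  # ml2_conf.ini
  [ml2]
  extension_drivers = port_security,dns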

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1741079

Title:
  Deleting a heat stack doesn't delete DNS records

Status in neutron:
  New

Bug description:
  Environment: Ubuntu 16.04.3 LTS, Ocata

  Summary:
  For each new stack created, DNS records are automatically created for each
  instance; on the other hand, deleting the stack doesn't trigger the deletion
  of those records.

  
  We have configured internal DNS integration using designate;
  creating an instance/port triggers record creation, and deleting an
  instance/port triggers record deletion.

  However, while creating a heat stack, record creation works great for
  each instance that is part of the stack, but deleting the stack does
  not trigger record deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1741079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741075] [NEW] Excessive warning logs for "Detach interface failed" when the instance was deleted

2018-01-03 Thread Matt Riedemann
Public bug reported:

Looking at these nova-compute logs:

http://logs.openstack.org/60/513160/4/check/tempest-dsvm-neutron-nova-
next-
full/6458405/logs/screen-n-cpu.txt.gz?level=TRACE#_Dec_08_23_13_42_867535

This type of warning message shows up 42 times:

Dec 08 23:13:42.867535 ubuntu-xenial-ovh-gra1-0001332290 nova-
compute[31536]: WARNING nova.compute.manager [None req-5378dd72-7f5d-
4c39-82ec-2989263136fb service nova] [instance: 7a32f2d9-a7e6-42fa-
b0a6-def8fdb56887] Detach interface failed, port_id=a5cd433c-589f-4c65
-814a-e71923e408cb, reason: Instance 7a32f2d9-a7e6-42fa-
b0a6-def8fdb56887 could not be found.: InstanceNotFound: Instance
7a32f2d9-a7e6-42fa-b0a6-def8fdb56887 could not be found.

If we failed to detach an interface because the instance was deleted,
it's not really a warning that the operator should care about, so we
should downgrade this from warning to debug.
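
The change amounts to handling the not-found case separately, roughly
like this (a sketch, not the actual nova patch; variable names are
illustrative):

  try:
      self.driver.detach_interface(context, instance, condemned)
  except exception.InstanceNotFound:
      # The instance is already gone; nothing for an operator to act on.
      LOG.debug('Detach interface failed, port_id=%s: instance %s '
                'could not be found.', port_id, instance.uuid)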

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute logging serviceability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741075

Title:
  Excessive warning logs for "Detach interface failed" when the instance
  was deleted

Status in OpenStack Compute (nova):
  New

Bug description:
  Looking at these nova-compute logs:

  http://logs.openstack.org/60/513160/4/check/tempest-dsvm-neutron-nova-
  next-
  full/6458405/logs/screen-n-cpu.txt.gz?level=TRACE#_Dec_08_23_13_42_867535

  This type of warning message shows up 42 times:

  Dec 08 23:13:42.867535 ubuntu-xenial-ovh-gra1-0001332290 nova-
  compute[31536]: WARNING nova.compute.manager [None req-5378dd72-7f5d-
  4c39-82ec-2989263136fb service nova] [instance: 7a32f2d9-a7e6-42fa-
  b0a6-def8fdb56887] Detach interface failed, port_id=a5cd433c-589f-4c65
  -814a-e71923e408cb, reason: Instance 7a32f2d9-a7e6-42fa-
  b0a6-def8fdb56887 could not be found.: InstanceNotFound: Instance
  7a32f2d9-a7e6-42fa-b0a6-def8fdb56887 could not be found.

  If we failed to detach an interface because the instance was deleted,
  it's not really a warning that the operator should care about, so we
  should downgrade this from warning to debug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1736171] Re: Update OS API charm default haproxy timeout values

2018-01-03 Thread Ryan Beisner
The heat charm previously lacked the haproxy timeout controls, and that
was resolved with https://review.openstack.org/#/c/526674/.  With that
landed, the default values should now be proposed against it.

** Also affects: charm-barbican
   Importance: Undecided
   Status: New

** Changed in: charm-barbican
   Importance: Undecided => Medium

** Changed in: charm-barbican
   Status: New => Fix Committed

** Changed in: charm-barbican
Milestone: None => 18.02

** Changed in: charm-barbican
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-keystone
   Status: Triaged => Fix Committed

** Changed in: charm-keystone
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-glance
   Status: Triaged => Fix Committed

** Changed in: charm-glance
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-cinder
   Status: Triaged => Fix Committed

** Changed in: charm-cinder
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-neutron-api
   Status: Triaged => Fix Committed

** Changed in: charm-neutron-api
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-nova-cloud-controller
   Status: Triaged => Fix Committed

** Changed in: charm-nova-cloud-controller
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-ceilometer
   Importance: Undecided
   Status: New

** Changed in: charm-ceilometer
   Importance: Undecided => Medium

** Changed in: charm-ceilometer
   Status: New => Fix Committed

** Changed in: charm-ceilometer
Milestone: None => 18.02

** Changed in: charm-ceilometer
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-swift-proxy
   Importance: Undecided
   Status: New

** Changed in: charm-swift-proxy
   Importance: Undecided => Medium

** Changed in: charm-swift-proxy
   Status: New => Fix Committed

** Changed in: charm-swift-proxy
Milestone: None => 18.02

** Changed in: charm-swift-proxy
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-ceph-radosgw
   Status: Triaged => Fix Committed

** Changed in: charm-ceph-radosgw
 Assignee: (unassigned) => David Ames (thedac)

** Changed in: charm-openstack-dashboard
   Status: Triaged => Fix Committed

** Changed in: charm-openstack-dashboard
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-manila
   Importance: Undecided
   Status: New

** Changed in: charm-manila
   Importance: Undecided => Medium

** Changed in: charm-manila
   Status: New => Fix Committed

** Changed in: charm-manila
Milestone: None => 18.02

** Changed in: charm-manila
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-aodh
   Importance: Undecided
   Status: New

** Changed in: charm-aodh
   Importance: Undecided => Medium

** Changed in: charm-aodh
   Status: New => Fix Committed

** Changed in: charm-aodh
Milestone: None => 18.02

** Changed in: charm-aodh
 Assignee: (unassigned) => David Ames (thedac)

** Also affects: charm-designate
   Importance: Undecided
   Status: New

** Changed in: charm-designate
   Importance: Undecided => Medium

** Changed in: charm-designate
   Status: New => Fix Committed

** Changed in: charm-designate
Milestone: None => 18.02

** Changed in: charm-designate
 Assignee: (unassigned) => David Ames (thedac)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  Update OS API charm default haproxy timeout values

Status in OpenStack AODH Charm:
  Fix Committed
Status in OpenStack Barbican Charm:
  Fix Committed
Status in OpenStack ceilometer charm:
  Fix Committed
Status in OpenStack ceph-radosgw charm:
  Fix Committed
Status in OpenStack cinder charm:
  Fix Committed
Status in OpenStack Designate Charm:
  Fix Committed
Status in OpenStack glance charm:
  Fix Committed
Status in OpenStack heat charm:
  In Progress
Status in OpenStack keystone charm:
  Fix Committed
Status in OpenStack Manila Charm:
  Fix Committed
Status in OpenStack neutron-api charm:
  Fix Committed
Status in OpenStack neutron-gateway charm:
  Invalid
Status in OpenStack nova-cloud-controller charm:
  Fix Committed
Status in OpenStack openstack-dashboard charm:
  Fix Committed
Status in OpenStack swift-proxy charm:
  Fix Committed
Status in neutron:
  Invalid

Bug description:
  Change OpenStack API charm haproxy timeout values

haproxy-server-timeout: 90000
haproxy-client-timeout: 90000
haproxy-connect-timeout: 9000
haproxy-queue-timeout: 9000

  Workaround until this lands is to set these values in config:

  juju config neutron-api haproxy-server-timeout=90000 haproxy-client-
  timeout=90000 haproxy-queue-timeout=9000 haproxy-connect-timeout=9000
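
  For reference, these charm options map onto haproxy timeout directives
  (values in milliseconds), roughly as in this illustrative fragment:

    defaults
        timeout queue   9000
        timeout connect 9000
        timeout client  90000
        timeout server  90000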

  
  --- Original Bug -
  NeutronNetworks.create_and_delete_subnets is 

[Yahoo-eng-team] [Bug 1721286] Re: Create volume from image displays incorrect AZs

2018-01-03 Thread Corey Bryant
I've verified this bug is fixed in artful-proposed and pike-proposed by
creating a volume from the images tab and it is successfully created in
the selected volume AZ.

** Changed in: cloud-archive/pike
   Status: Fix Committed => Fix Released

** Changed in: horizon (Ubuntu Artful)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1721286

Title:
  Create volume from image displays incorrect AZs

Status in OpenStack openstack-dashboard charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Zesty:
  Fix Released
Status in horizon source package in Artful:
  Fix Released
Status in horizon source package in Bionic:
  Fix Released

Bug description:
  [Impact]

  Horizon will fetch and show the correct Cinder availability zones
  during volume creation and volume creation from image.

  [Test Case]

  See original description.

  [Regression Potential]

  Very low as this patch is cherry-picked with no additional changes
  from the upstream stable/ocata branch.

  [Original Description]

  Running openstack-origin cloud:ocata-xenial on stable/17.08 charms, we have 
an environment with the following Nova availability zones:
    west-java-1a
    west-java-1b
    west-java-1c
  When going to the Images tab of the project in Horizon and selecting "Create
Volume" from the far-right drop-down menu for any image, we are presented with
a dialog that includes an Availability Zone drop-down listing the above three
AZs, none of which has a cinder-api or cinder-volume host residing within it.
When trying to create a volume from an image on this dashboard, we get the
error:

  Invalid input received: Availability zone 'west-java-1a' is invalid.
  (HTTP 400)

  When using Launch Instance with Create New Image = Yes, from same
  Image on the Images tab, we still get the same AZ dropdowns, but the
  system initializes the new volume and attaches to a running instance
  in that AZ properly.

  Also, when using the Volumes tab and pressing the Create New Volume
  button, we can create a volume from any image, and the Availability
  Zone in this dialog only shows the "nova" AZ.

  To re-create, build openstack ocata-xenial with three computes, one in
  each of 3 new AZs, plus cinder-api, cinder-ceph, and a minimal ceph
  cluster, all with defaults, and load an image into glance either with
  glance-simplestreams-sync or another method.  Click into Horizon's
  Images tab of the admin project, click the drop-down of an image, and
  select Create Volume.  Fill out the form; you should see only the 3
  new AZs but no nova AZ for creation of the volume, and it should give
  the HTTP 400 error above.

  You'll notice that you might have the following availability zones:
  openstack availability zone list
  +--------------+-------------+
  | Zone Name    | Zone Status |
  +--------------+-------------+
  | internal     | available   |
  | west-java-1a | available   |
  | west-java-1b | available   |
  | west-java-1c | available   |
  | nova         | available   |
  | nova         | available   |
  | nova         | available   |
  +--------------+-------------+

  This 400 error is coming from the cinder api and has nothing to do
  with glance/images.  It's simply that cinder's availability zone is
  "nova" and the nova aggregate-based availability zones should not be
  used in a cinder availability zone pull-down on the Images tab Create
  Volume dialog.

  jujumanage@cmg01z00infr001:~/charms/cinder$ openstack volume create 
--availability-zone nova --size 50  foo
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | attachments         | []                                   |
  | availability_zone   | nova                                 |
  | bootable            | false                                |
  | consistencygroup_id | None                                 |
  | created_at          | 2017-10-04T15:37:34.804855           |
  | description         | None                                 |
  | encrypted           | False                                |
  | id                  | ca32eb14-60f8-42c8-a5ef-d7687d25d606 |
  | migration_status    | None                                 |
  | multiattach         | False                                |
  | name                | foo                                  |
  | properties          |                                      |
  | 

[Yahoo-eng-team] [Bug 1741051] [NEW] Views accessible via url even if user doesn't match policy rules

2018-01-03 Thread David Gutman
Public bug reported:

When a user doesn't match the policy rules of a panel, the panel tab
is removed from the menu on the left, but the panel's views are still
accessible directly via their URL (e.g. /admin/flavors/).

In most cases, the views won't work correctly because of the lack of
rights in the backend, but it may cause trouble when you play with
policies.

I think it would be more elegant to return a "You are not authorized
to access this page" directly from the frontend when a user tries to
access a view of a panel (via URL) without matching the policy rules.
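
One way to get that behaviour, sketched with Horizon's policy helper
(simplified; not the patch under review, names are illustrative):

  from django.core.exceptions import PermissionDenied

  from openstack_dashboard import policy

  class PolicyCheckMixin(object):
      # e.g. (("compute", "os_compute_api:os-flavor-manage"),)
      policy_rules = ()

      def dispatch(self, request, *args, **kwargs):
          if self.policy_rules and not policy.check(self.policy_rules,
                                                    request):
              # Reject up front instead of relying on the backend to
              # refuse the underlying API calls.
              raise PermissionDenied()
          return super(PolicyCheckMixin, self).dispatch(
              request, *args, **kwargs)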

** Affects: horizon
 Importance: Undecided
 Assignee: David Gutman (david.gutman)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => David Gutman (david.gutman)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1741051

Title:
  Views accessible via url even if user doesn't match policy rules

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When a user doesn't match the policy rules of a panel, the panel
  tab is removed from the menu on the left, but the panel's views are
  still accessible directly via their URL (e.g. /admin/flavors/).

  In most cases, the views won't work correctly because of the lack of
  rights in the backend, but it may cause trouble when you play with
  policies.

  I think it would be more elegant to return a "You are not authorized
  to access this page" directly from the frontend when a user tries to
  access a view of a panel (via URL) without matching the policy rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1741051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741009] [NEW] Filtering flavors by RAM when launching an instance gives wrong results

2018-01-03 Thread ninolin
Public bug reported:

When I choose a flavor while launching an instance, I use the filter
input to find a specific flavor.

Steps to reproduce:
1. Log in as admin
2. Navigate to the images page (Admin -> Compute -> Images)
3. Launch an instance from an image
4. Use the filter input to find a flavor
5. Entering RAM: 2 also finds the 8GB RAM flavor

Expected:
1. Entering RAM: 2 should not find the 8GB RAM flavor

Openstack 3.13.0
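
A plausible explanation (an assumption, not confirmed from the code) is
that the filter does a substring match against the raw RAM value in MB,
so "2" matches the 8GB flavor because "8192" contains a "2":

  # Illustrative only: substring match vs. matching on whole GB.
  flavors = {'m1.small': 2048, 'm1.large': 8192}

  query = '2'
  print([n for n, ram in flavors.items() if query in str(ram)])
  # ['m1.small', 'm1.large']  <- the false hit on '8192'

  print([n for n, ram in flavors.items() if int(query) == ram // 1024])
  # ['m1.small']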

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "horizon-bug.PNG"
   
https://bugs.launchpad.net/bugs/1741009/+attachment/5030362/+files/horizon-bug.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1741009

Title:
  Filtering flavors by RAM when launching an instance gives wrong results

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I choose a flavor while launching an instance, I use the filter
  input to find a specific flavor.

  Steps to reproduce:
  1. Log in as admin
  2. Navigate to the images page (Admin -> Compute -> Images)
  3. Launch an instance from an image
  4. Use the filter input to find a flavor
  5. Entering RAM: 2 also finds the 8GB RAM flavor

  Expected:
  1. Entering RAM: 2 should not find the 8GB RAM flavor

  Openstack 3.13.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1741009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741007] [NEW] If there are too many projects, the list doesn't show completely

2018-01-03 Thread Wu Ya Han
Public bug reported:

If the number of projects is too large (roughly more than 20), the project list
doesn't display completely (unless you zoom out in the browser).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1741007

Title:
  If there are too many projects, the list doesn't show completely

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If the number of projects is too large (roughly more than 20), the
  project list doesn't display completely (unless you zoom out in the
  browser).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1741007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp