[Yahoo-eng-team] [Bug 1569283] Re: Table column doesn't align with the operation in Horizon
[Expired for OpenStack Dashboard (Horizon) because there has been no activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1569283

Title: Table column doesn't align with the operation in Horizon

Status in OpenStack Dashboard (Horizon): Expired
Status in OpenStack Compute (nova): Invalid

Bug description: The column 'id' of table 'instance_types' in database 'nova' is auto-incremented:

MariaDB [nova]> desc instance_types;
+-------+---------+------+-----+---------+----------------+
| Field | Type    | Null | Key | Default | Extra          |
+-------+---------+------+-----+---------+----------------+
| id    | int(11) | NO   | PRI | NULL    | auto_increment |
+-------+---------+------+-----+---------+----------------+

In the 'create flavor' operation, however, the default value of 'id' is 'auto', which does not mean AUTO_INCREMENT; instead it generates a GUID, like '0257d5be-821e-4cdd-ad2c-45139e067aad'. So the database table design doesn't align with the operation in Horizon. Should this be fixed? Thanks.

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1569283/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
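For context, the mismatch is between the integer auto-increment `id` primary key in the database and the user-visible flavor id that Horizon's "auto" default fills in. A minimal sketch of what "auto" amounts to (`generate_flavorid` is an illustrative name, not Horizon's actual code):

```python
import uuid

def generate_flavorid(requested='auto'):
    # 'auto' means "pick a random UUID for the user-visible flavor id";
    # any other value is passed through unchanged. This is independent of
    # the integer auto_increment `id` column shown in the table above.
    if requested == 'auto':
        return str(uuid.uuid4())  # e.g. '0257d5be-821e-4cdd-ad2c-45139e067aad'
    return requested
```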
[Yahoo-eng-team] [Bug 1592253] Re: Bug: migrate instance after delete flavor
Hi Matt Riedemann,

Thank you for the comment. I think this is not the same issue as bug 1570748. In bug 1570748, only the resize-revert bug was fixed, and the issue above is for nova migrate. (I reported bug 1570748 and wrote that the issue could be reproduced with nova resize, but it was actually nova migrate, so the people who tested it couldn't reproduce the issue with "nova resize".)

When "nova resize" is requested there is no problem, because the flavor exists: nova resize
But when "nova migrate" is requested, the instance can end up in an error state because the flavor no longer exists: nova migrate

So, please check this again. You can easily reproduce it with the steps above. Thank you.

** This bug is no longer a duplicate of bug 1570748
   Bug: resize instance after edit flavor with horizon

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592253

Title: Bug: migrate instance after delete flavor

Status in OpenStack Compute (nova): New

Bug description: An error occurs when migrating an instance after deleting its flavor.

Steps to reproduce:
1. create flavor A
2. boot instance using flavor A
3. delete flavor A
4. migrate instance (ex: nova migrate [instance_uuid])
5. error occurs

Error log (nova-compute.log):

  File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in _object_dispatch
    return getattr(target, method)(*args, **kwargs)
  File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
    result = fn(cls, context, *args, **kwargs)
  File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in get_by_id
    db_flavor = db.flavor_get(context, id)
  File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
    return IMPL.flavor_get(context, id)
  File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in wrapper
    return f(*args, **kwargs)
  File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in flavor_get
    raise exception.FlavorNotFound(flavor_id=id)
FlavorNotFound: Flavor 8 could not be found.

The error occurs when the resize_instance method is called, as in the code below (/opt/openstack/src/nova/nova/compute/manager.py):

    def resize_instance(self, context, instance, image, reservations,
                        migration, instance_type, clean_shutdown=True):
        if (not instance_type or
                not isinstance(instance_type, objects.Flavor)):
            instance_type = objects.Flavor.get_by_id(
                context, migration['new_instance_type_id'])

The context parameter carries this data:

{'domain': None, 'project_name': u'admin', 'project_domain': None, 'timestamp': '2016-06-14T04:34:50.759410', 'auth_token': u'457802dc378442a6ac4a5b952587927e', 'remote_address': u'10.10.10.5', 'quota_class': None, 'resource_uuid': None, 'is_admin': True, 'user': u'694df2010229405e966aafc16a30784f', 'service_catalog': [{u'endpoints': [{u'adminURL': u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1', u'region': u'RegionOne', u'internalURL': u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1', u'publicURL': u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1'}], u'type': u'volumev2', u'name': u'cinderv2'}, {u'endpoints': [{u'adminURL': u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1', u'region': u'RegionOne', u'internalURL': u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1', u'publicURL': u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1'}], u'type': u'volume', u'name': u'cinder'}], 'tenant': u'9b7ce4df5e1549058687d82e31d127b1', 'read_only': False, 'project_id': u'9b7ce4df5e1549058687d82e31d127b1', 'user_id': u'694df2010229405e966aafc16a30784f', 'show_deleted': False, 'roles': [u'admin'], 'user_identity': '694df2010229405e966aafc16a30784f 9b7ce4df5e1549058687d82e31d127b1 - - -', 'read_deleted': 'no', 'request_id': u'req-59dca904-6384-4ca0-b696-5731c80198d7', 'instance_lock_checked': False, 'user_domain': None, 'user_name': u'admin'}

When objects.Flavor.get_by_id is called, the error occurs because the default value of read_deleted is "no". So I think the context.read_deleted attribute should be set to "yes" before objects.Flavor.get_by_id is called. I tested this on stable/kilo, and I believe liberty and mitaka have the same problem. Thanks.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1592253/+subscriptions
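A self-contained model of the proposed fix (names and structures are illustrative, not nova's actual API): the flavor lookup honours the context's read_deleted flag, so the migrate path has to opt in to seeing soft-deleted rows:

```python
class FlavorNotFound(Exception):
    pass

# Toy flavor table with soft deletion, mimicking nova's read_deleted semantics.
FLAVORS = {8: {'name': 'flavor-A', 'deleted': True}}

def flavor_get(context, flavor_id):
    flavor = FLAVORS.get(flavor_id)
    if flavor is None or (flavor['deleted'] and
                          context.get('read_deleted', 'no') == 'no'):
        raise FlavorNotFound('Flavor %s could not be found.' % flavor_id)
    return flavor

def get_flavor_for_migration(context, flavor_id):
    # The suggested change: flip read_deleted to "yes" before the lookup,
    # so a migrated instance can still load its deleted flavor.
    ctx = dict(context, read_deleted='yes')
    return flavor_get(ctx, flavor_id)
```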
[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong
** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
   Assignee: (unassigned) => yuyafei (yu-yafei)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1259292

Title: Some tests use assertEqual(observed, expected), the argument order is wrong

Status in Barbican: In Progress
Status in Ceilometer: Invalid
Status in Cinder: Fix Released
Status in congress: Fix Released
Status in Designate: Fix Released
Status in Glance: Fix Released
Status in glance_store: New
Status in heat: Fix Released
Status in OpenStack Dashboard (Horizon): In Progress
Status in OpenStack Identity (keystone): Fix Released
Status in Magnum: Fix Released
Status in Manila: Fix Released
Status in Mistral: Fix Released
Status in Murano: Fix Released
Status in OpenStack Compute (nova): Won't Fix
Status in os-brick: In Progress
Status in python-ceilometerclient: Invalid
Status in python-cinderclient: Fix Released
Status in python-designateclient: Fix Committed
Status in python-glanceclient: New
Status in python-mistralclient: Fix Released
Status in python-solumclient: Fix Released
Status in Python client library for Zaqar: Fix Released
Status in Sahara: Fix Released
Status in zaqar: Fix Released

Bug description: The test cases will produce a confusing error message if the tests ever fail, so this is worth fixing.

To manage notifications about this bug go to: https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions
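For reference, the convention at issue: assertEqual takes the expected value first, so a failure message labels the two values correctly. A minimal illustration (the test and helper names are made up for this sketch):

```python
import unittest

def observed_value():
    # Stand-in for whatever the code under test actually returns.
    return 4

class TestArgumentOrder(unittest.TestCase):
    def test_order(self):
        expected = 4
        # Wrong: self.assertEqual(observed_value(), expected) -- on failure
        # the mismatch report would label the observed value as "expected".
        # Right: expected first, observed second.
        self.assertEqual(expected, observed_value())
```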
[Yahoo-eng-team] [Bug 1592627] [NEW] sriov bond don't support.
Public bug reported: SR-IOV currently has no bonding support. In NFV, we need to add a bonded port using SR-IOV. One option is to use two SR-IOV ports as a bond port. Another option is a single port plus an option on port create, such as "--bond", to add two VFs to a port when using "neutron port-create". I prefer the second way because it is easier to use.

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: rfe
** Tags added: rfe

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592627

Title: sriov bond don't support.

Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592627/+subscriptions
[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1259292

Title: Some tests use assertEqual(observed, expected), the argument order is wrong

Status in Barbican: In Progress
Status in Ceilometer: Invalid
Status in Cinder: Fix Released
Status in congress: Fix Released
Status in Designate: Fix Released
Status in Glance: Fix Released
Status in heat: Fix Released
Status in OpenStack Dashboard (Horizon): In Progress
Status in OpenStack Identity (keystone): Fix Released
Status in Magnum: Fix Released
Status in Manila: Fix Released
Status in Mistral: Fix Released
Status in Murano: Fix Released
Status in OpenStack Compute (nova): Won't Fix
Status in os-brick: New
Status in python-ceilometerclient: Invalid
Status in python-cinderclient: Fix Released
Status in python-designateclient: Fix Committed
Status in python-glanceclient: New
Status in python-mistralclient: Fix Released
Status in python-solumclient: Fix Released
Status in Python client library for Zaqar: Fix Released
Status in Sahara: Fix Released
Status in zaqar: Fix Released

Bug description: The test cases will produce a confusing error message if the tests ever fail, so this is worth fixing.

To manage notifications about this bug go to: https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions
[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1259292

Title: Some tests use assertEqual(observed, expected), the argument order is wrong

Status in Barbican: In Progress
Status in Ceilometer: Invalid
Status in Cinder: Fix Released
Status in congress: Fix Released
Status in Designate: Fix Released
Status in Glance: Fix Released
Status in heat: Fix Released
Status in OpenStack Dashboard (Horizon): In Progress
Status in OpenStack Identity (keystone): Fix Released
Status in Magnum: Fix Released
Status in Manila: Fix Released
Status in Mistral: Fix Released
Status in Murano: Fix Released
Status in OpenStack Compute (nova): Won't Fix
Status in os-brick: New
Status in python-ceilometerclient: Invalid
Status in python-cinderclient: Fix Released
Status in python-designateclient: Fix Committed
Status in python-glanceclient: New
Status in python-mistralclient: Fix Released
Status in python-solumclient: Fix Released
Status in Python client library for Zaqar: Fix Released
Status in Sahara: Fix Released
Status in zaqar: Fix Released

Bug description: The test cases will produce a confusing error message if the tests ever fail, so this is worth fixing.

To manage notifications about this bug go to: https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions
[Yahoo-eng-team] [Bug 1548682] Re: xenapi: swap_xapi_host should use urlunparse to create new url
This looks like it should be proposed as a spec. More information about blueprints and specs is available here: https://wiki.openstack.org/wiki/Blueprints
For an example spec, see here: https://specs.openstack.org/openstack/nova-specs/specs/ocata/template.html

** Changed in: nova
   Status: Incomplete => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1548682

Title: xenapi: swap_xapi_host should use urlunparse to create new url

Status in OpenStack Compute (nova): Invalid

Bug description: Currently swap_xapi_host() only uses replace() to change the hostname of the original url, but this cannot handle a situation like:

swap_xapi_host("http://hostname:port/hostname", 'otherserver')

It should use the result of urlparse to create a new result with the host changed, and then use urlunparse().

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1548682/+subscriptions
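A minimal sketch of the suggested approach, using Python's standard urllib.parse (the real swap_xapi_host lives in nova's xenapi driver; this is an assumption-laden illustration, not that code): parse the URL, swap only the hostname inside the netloc, and reassemble with urlunparse:

```python
from urllib.parse import urlparse, urlunparse

def swap_xapi_host(url, new_host):
    # Parse once, replace only the authority's hostname, then reassemble.
    parsed = urlparse(url)
    # Rebuild netloc explicitly so an explicit port survives the swap.
    if parsed.port is None:
        netloc = new_host
    else:
        netloc = '%s:%d' % (new_host, parsed.port)
    return urlunparse(parsed._replace(netloc=netloc))
```

A plain url.replace('hostname', 'otherserver') would also rewrite the path component of "http://hostname:80/hostname"; the parse/unparse round trip touches only the authority part.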
[Yahoo-eng-team] [Bug 1549590] Re: the ram and disk should have a unit when display
Closing this out as invalid because it's been incomplete for a few months and we don't have enough information to reproduce or even identify an issue to fix.

** Changed in: nova
   Status: Incomplete => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1549590

Title: the ram and disk should have a unit when display

Status in OpenStack Compute (nova): Invalid

Bug description: "ram:%s disk:%s" should carry units, like "ram:%smb disk:%smb".

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1549590/+subscriptions
[Yahoo-eng-team] [Bug 1542302] Re: We should to initialize request_spec to handle expected exception
** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
   Status: Won't Fix => Invalid

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1542302

Title: We should to initialize request_spec to handle expected exception

Status in OpenStack Compute (nova): Invalid

Bug description: In nova/conductor/manager.py, in the build_instances method, populate_retry only does arithmetic without interacting with third parties, and it can throw an *expected* exception where build_request_spec() cannot. So we should initialize request_spec as early as possible, since it is used in the case where that *expected* exception is raised.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1542302/+subscriptions
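The pattern being requested can be sketched in a self-contained way (illustrative names only, not nova's actual code): bind request_spec before the call that may raise the expected exception, so the exception handler can still reference it:

```python
class NoValidHost(Exception):
    """Stand-in for the expected exception populate_retry can raise."""

def build_request_spec():
    return {'num_attempts': 0}

def populate_retry(spec, max_retries):
    # Pure arithmetic plus a possible *expected* exception, as described.
    spec['num_attempts'] += 1
    if spec['num_attempts'] > max_retries:
        raise NoValidHost('Exceeded max scheduling attempts')

def build_instances(max_retries):
    request_spec = build_request_spec()  # initialized before anything can raise
    try:
        populate_retry(request_spec, max_retries)
    except NoValidHost:
        # Safe: request_spec was bound up front, so the error path can use it.
        return {'error': True, 'request_spec': request_spec}
    return {'error': False, 'request_spec': request_spec}
```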
[Yahoo-eng-team] [Bug 1531826] Re: nova boot from image with attach volume failed
Marked as fix released; per the OP's comment, a fix has been released for python-novaclient that resolves this issue.

** Changed in: nova
   Status: Incomplete => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1531826

Title: nova boot from image with attach volume failed

Status in OpenStack Compute (nova): Fix Released

Bug description: Steps to reproduce:

1. Use the CLI to create a new server that boots from an image and, at the same time, attaches a volume:

[root@sl3j ~(keystone_admin)]# nova boot --flavor 7c063ff7-244a-46d8-b166-d15eae9ea172 --image f9b1084c-2527-4f27-844e-cc21d766d32c --block-device source=volume,id=cbf360c2-eb2c-47e3-a501-795406a9542b,dest=volume,device=vdb --nic port-id=fbc32d35-820a-409d-8768-aa1adea69142 test_vm

The following exception is raised:

ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid. (HTTP 400) (Request-ID: req-ab0f5fb2-5597-4c92-b1dd-d50364f84a57)

2. Therefore, creating a VM with the step 1 command fails. I use the kilo_2015.1.0 version; the same command could create a VM successfully in the icehouse version.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1531826/+subscriptions
[Yahoo-eng-team] [Bug 1592612] [NEW] TLS container could not be found
Public bug reported: I went through https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-loadbalancer with devstack, and all my branches were set to stable/mitaka. If I set my user and tenant to "admin admin", the workflow passed, but it failed if I set the user and tenant to "admin demo" and reran all the steps.

Steps to reproduce:
1. source ~/devstack/openrc admin demo
2. barbican secret store --payload-content-type='text/plain' --name='certificate' --payload="$(cat server.crt)"
3. barbican secret store --payload-content-type='text/plain' --name='private_key' --payload="$(cat server.key)"
4. barbican secret container create --name='tls_container' --type='certificate' --secret="certificate=$(barbican secret list | awk '/ certificate / {print $2}')" --secret="private_key=$(barbican secret list | awk '/ private_key / {print $2}')"
5. neutron lbaas-loadbalancer-create $(neutron subnet-list | awk '/ private-subnet / {print $2}') --name lb1
6. neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(barbican secret container list | awk '/ tls_container / {print $2}')

The error message I got is:

$ neutron lbaas-listener-create --loadbalancer 738689bd-b54e-485e-b742-57bd6e812270 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener2 --default-tls-container=$(barbican secret container list | awk '/ tls_container / {print $2}')
WARNING:barbicanclient.barbican:This Barbican CLI interface has been deprecated and will be removed in the O release. Please use the openstack unified client instead.
DEBUG:stevedore.extension:found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter')
DEBUG:stevedore.extension:found extension EntryPoint.parse('json = cliff.formatters.json_format:JSONFormatter')
DEBUG:stevedore.extension:found extension EntryPoint.parse('csv = cliff.formatters.commaseparated:CSVLister')
DEBUG:stevedore.extension:found extension EntryPoint.parse('value = cliff.formatters.value:ValueFormatter')
DEBUG:stevedore.extension:found extension EntryPoint.parse('yaml = cliff.formatters.yaml_format:YAMLFormatter')
DEBUG:barbicanclient.client:Creating Client object
DEBUG:barbicanclient.containers:Listing containers - offset 0 limit 10 name None type None
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://192.168.100.148:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 192.168.100.148
DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 3924
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET http://192.168.100.148:9311 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 192.168.100.148
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 300 353
DEBUG:keystoneclient.session:RESP: [300] Content-Length: 353 Content-Type: application/json; charset=UTF-8 Connection: close
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-04-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.key-manager-v1+json"}], "id": "v1", "links": [{"href": "http://192.168.100.148:9311/v1/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET http://192.168.100.148:9311/v1/containers -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}203d7de65f6cfb1fb170437ae2da98fef35f0942"
INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: 192.168.100.148
DEBUG:requests.packages.urllib3.connectionpool:"GET /v1/containers?limit=10&offset=0 HTTP/1.1" 200 585
DEBUG:keystoneclient.session:RESP: [200] Connection: close Content-Type: application/json; charset=UTF-8 Content-Length: 585 x-openstack-request-id: req-aa4bb861-3d1d-42c6-be3d-5d3935622043
RESP BODY: {"total": 1, "containers": [{"status": "ACTIVE", "updated": "2016-06-10T01:14:45", "name": "tls_container", "consumers": [], "created": "2016-06-10T01:14:45", "container_ref": "http://192.168.100.148:9311/v1/containers/4ca420a1-ed23-4e91-a08a-311dad3df801", "creator_id": "9ee7d4959bc74d2988d50e0e3a965c64", "secret_refs": [{"secret_ref": "http://192.168.100.148:9311/v1/secrets/c96944b3-174e-418f-8598-8979eafaa537", "name": "certificate"}, {"secret_ref": "http://192.168.100.148:9311/v1/secrets/2e25ad05-ecd6-43bd-95fa-046b9cbe2600", "name": "private_key"}], "type": "certificate"}]}
DEBUG:barbicanclient.client:Response status 200
DEBUG:barbicanclient.secrets:Getting secret - Secret href: http://192.168.100.148:9311/v1/secrets/2e25ad05-ecd6-43bd-95fa-046b9cbe2600
[Yahoo-eng-team] [Bug 1592167] Re: Deleted keypair causes metadata failure
Reviewed: https://review.openstack.org/329661
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=4317166b72bb0aadd0321acdf9f2450c1a99d0a4
Submitter: Jenkins
Branch: master

commit 4317166b72bb0aadd0321acdf9f2450c1a99d0a4
Author: Matt Riedemann
Date: Tue Jun 14 16:05:35 2016 -0400

    Handle keypair not found from metadata server

    With commit e83842b80b73c451f78a4bb9e7bd5dfcebdefcab we attempt to load keypairs for an instance from instance_extra, but if that hasn't been migrated yet we fall back to loading the keypair from the database by name. If the keypair was deleted, the instance object will just set an empty KeyPairList for instance.keypairs and we'll get an IndexError when using self.instance.keypairs[0] in _metadata_as_json.

    This adds a check that instance.keypairs actually has something in it. If not, we log a message and don't return any key values in the metadata dict - same as if instance.key_name wasn't set to begin with.

    Change-Id: If823867d1df4bafa46978e62e05826d1f12c9269
    Closes-Bug: #1592167

** Changed in: nova
   Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592167

Title: Deleted keypair causes metadata failure

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) mitaka series: Confirmed

Bug description:

Description
===========
If a user deletes a keypair that was used to create an instance, that instance receives HTTP 400 errors when attempting to get metadata via http://169.254.169.254/openstack/latest/meta_data.json. This causes problems in the instance when cloud-init fails to retrieve the OpenStack datasource.

Steps to reproduce
==================
1. Create instance with SSH keypair defined.
2. Delete SSH keypair
3. Attempt 'curl http://169.254.169.254/openstack/latest/meta_data.json' from the instance

Expected result
===============
Instance receives metadata from http://169.254.169.254/openstack/latest/meta_data.json

Actual result
=============
Instance receives HTTP 400 error. Additionally, Ubuntu Cloud Image instances will fall back to the ec2 datasource and re-generate host SSH keys.

Environment
===========
Nova: 2015.1.4.2
Hypervisor: Libvirt + KVM
Storage: Ceph
Network: Liberty Neutron ML2+OVS

Logs
====
[req-a8385839-6993-4289-96dc-1714afe82597 - - - - -] FaultWrapper error
Traceback (most recent call last):
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/ec2/__init__.py", line 93, in __call__
    return req.get_response(self.application)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1299, in send
    application, catch_exc_info=False)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
    app_iter = application(self.environ, start_response)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/ec2/__init__.py", line 105, in __call__
    rv = req.get_response(self.application)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1299, in send
    application, catch_exc_info=False)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
    app_iter = application(self.environ, start_response)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py", line 137, in __call__
    data = meta_data.lookup(req.path_info)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 418, in lookup
    data = self.get_openstack_item(path_tokens[1:])
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 297, in get_openstack_item
    return self._route_configuration().handle_path(path_tokens)
  File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 491, in handle_path
    return
[Yahoo-eng-team] [Bug 1583601] Re: Duplicated sg rules could be created with diff description
Reviewed: https://review.openstack.org/318981
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=387283d8de81fa4c7e28bc41d6c7707a388aeeef
Submitter: Jenkins
Branch: master

commit 387283d8de81fa4c7e28bc41d6c7707a388aeeef
Author: Hong Hui Xiao
Date: Thu May 19 13:12:06 2016 +0000

    Prevent adding duplicated sg rules with diff description

    Now the security group rules can be added with same content but different description. This should be prevented to stop creating duplicated sg rules.

    Change-Id: Ibafe39f9652ecd24ad9536e6abc7c4f4384b3a22
    Closes-bug: #1583601

** Changed in: neutron
   Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1583601

Title: Duplicated sg rules could be created with diff description

Status in neutron: Fix Released

Bug description: I can create multiple security group rules with the same content but different descriptions. For example:

[fedora@normal2 ~]$ neutron security-group-rule-create test --protocol tcp --remote-group-id 1b8c08e5-728d-48ef-a24b-e4ebc20808a3
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 09eaa983-7884-4c27-bffb-81064d164688 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | tcp                                  |
| remote_group_id   | 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 |
| remote_ip_prefix  |                                      |
| security_group_id | db8d1386-0b2e-4f0c-b4c2-16c10b30fd92 |
| tenant_id         | 02178a7c126a4066ab5c3fae571d89c8     |
+-------------------+--------------------------------------+

[fedora@normal2 ~]$ neutron security-group-rule-create test --protocol tcp --remote-group-id 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 --description "123"
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       | 123                                  |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 5282599c-4262-4c48-b999-052a0ce5cff7 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | tcp                                  |
| remote_group_id   | 1b8c08e5-728d-48ef-a24b-e4ebc20808a3 |
| remote_ip_prefix  |                                      |
| security_group_id | db8d1386-0b2e-4f0c-b4c2-16c10b30fd92 |
| tenant_id         | 02178a7c126a4066ab5c3fae571d89c8     |
+-------------------+--------------------------------------+

This should be prevented.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1583601/+subscriptions
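The duplicate check the fix implies can be sketched as follows (an illustrative sketch, not neutron's actual implementation): two rules count as duplicates when they agree on every field except the purely descriptive ones:

```python
def is_duplicate_rule(rule_a, rule_b, ignored=('id', 'description')):
    # Compare rules on their semantic fields only; 'description' (and the
    # generated 'id') must not make otherwise-identical rules distinct.
    def significant(rule):
        return {k: v for k, v in rule.items() if k not in ignored}
    return significant(rule_a) == significant(rule_b)
```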
[Yahoo-eng-team] [Bug 1582966] Re: Broken URL in Mitaka Neutron Release Notes
Reviewed: https://review.openstack.org/318134
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=a474eb0cb6ea4bba4cd4ee3144261f1ca3aeb9a6
Submitter: Jenkins
Branch: master

commit a474eb0cb6ea4bba4cd4ee3144261f1ca3aeb9a6
Author: shmcfarl
Date: Wed May 18 08:06:51 2016 -0600

    Fix broken URL in Mitaka Neutron release note

    The section "Other Notes" has a bullet referring to "Please read the OpenStack Networking Guide." The URL referenced in that bullet was incorrect. An updated URL of http://docs.openstack.org/mitaka/networking-guide/adv-config-availability-zone.html was added to the file https://github.com/openstack/neutron/blob/master/releasenotes/notes/add-availability-zone-4440cf00be7c54ba.yaml

    Change-Id: I3e6f6a6a15705820bc0ed489861465f54e07ae52
    Closes-Bug: #1582966

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1582966

Title: Broken URL in Mitaka Neutron Release Notes

Status in neutron: Fix Released

Bug description: In the Mitaka Neutron Release Notes (http://docs.openstack.org/releasenotes/neutron/mitaka.html) there is a broken URL under the section "Other Notes" which is incorrectly linked with the text "Please read the OpenStack Networking Guide." The URL incorrectly links to "http://docs.openstack.org/mitaka/networking-guide/adv_config_availability_zone.html", which doesn't exist, and it would be the wrong topic even if it did exist. The correct URL that should be referenced is http://docs.openstack.org/mitaka/networking-guide/

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1582966/+subscriptions
[Yahoo-eng-team] [Bug 1592594] [NEW] DHCP: delete config option dnsmasq_dns_server
Public bug reported: https://review.openstack.org/329306 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit eb965f99ded8a46db220d540bcaea5d67f5b2d08 Author: Gary Kotton Date: Tue Jun 14 00:47:14 2016 -0700 DHCP: delete config option dnsmasq_dns_server This field was marked as deprecated in commit a269541c603f8923b35b7e722f1b8c0ebd42c95a. That was in Kilo, which has provided enough time for admins to adjust to this. In addition, the patch sets the default value as []. If a value is not specified this is None, and that should not be the default list. DocImpact UpgradeImpact Change-Id: Ieaf18ffc9baf7e1caebe9de47017338bebd92c84 ** Affects: neutron Importance: Undecided Status: New ** Tags: doc neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592594 Title: DHCP: delete config option dnsmasq_dns_server Status in neutron: New Bug description: https://review.openstack.org/329306 Dear bug triager. This bug was created since a commit was marked with DOCIMPACT. Your project "openstack/neutron" is set up so that we directly report the documentation bugs against it. If this needs changing, the docimpact-group option needs to be added for the project. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit eb965f99ded8a46db220d540bcaea5d67f5b2d08 Author: Gary Kotton Date: Tue Jun 14 00:47:14 2016 -0700 DHCP: delete config option dnsmasq_dns_server This field was marked as deprecated in commit a269541c603f8923b35b7e722f1b8c0ebd42c95a. 
That was in Kilo, which has provided enough time for admins to adjust to this. In addition, the patch sets the default value as []. If a value is not specified this is None, and that should not be the default list. DocImpact UpgradeImpact Change-Id: Ieaf18ffc9baf7e1caebe9de47017338bebd92c84 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592594/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1477490] Re: Ironic: Deleting while spawning can leave orphan ACTIVE nodes in Ironic
Assigned to Lucas in the hope he'll fix it :) -- John and Michael... ** Also affects: ironic Importance: Undecided Status: New ** Changed in: ironic Status: New => Confirmed ** Changed in: ironic Importance: Undecided => Medium ** Changed in: ironic Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1477490 Title: Ironic: Deleting while spawning can leave orphan ACTIVE nodes in Ironic Status in Ironic: Confirmed Status in OpenStack Compute (nova): Confirmed Bug description: The Ironic nova driver won't try to delete the instance in Ironic if the node's provision state is DEPLOYING [1]. This is known to fail with the current Ironic code because we just can't abort the installation at the DEPLOYING stage. But the Ironic nova driver just keeps going and tries to clean up the deployment environment (without telling Ironic to unprovision the instance), and that will fail as well. The code that cleans up the instance will keep retrying [3] because there's a transition in progress and it can't update the node. But when the node finishes the deployment, if the retrying didn't time out, the destroy() method from the Nova driver will succeed in cleaning up the deployment environment and the Nova instance will be deleted, but the Ironic node will continue to be marked as ACTIVE in Ironic, now orphaned because there's no instance in Nova associated with it [4]. The good news is that since nova cleans up the network stuff, the instance won't be accessible. WORKAROUND: Unprovision the node using the Ironic API directly $ ironic node-set-provision-state deleted PROPOSED FIX: IMO the ironic nova driver should try to tell Ironic to delete the instance even when the provision state of the node is DEPLOYING. 
If it fails the nova delete command will fail saying it can not delete the instance, which is fine until this gets resolved in Ironic (there's work going on to be able to abort a deployment at any stage) [1] https://github.com/openstack/nova/blob/6a24bbeecd8a6d6d3135a10f4917b071896d14ee/nova/virt/ironic/driver.py#L865-L868 [2] https://github.com/openstack/nova/blob/6a24bbeecd8a6d6d3135a10f4917b071896d14ee/nova/virt/ironic/driver.py#L871 [3] From the nova-compute logs {"error_message": "{\"debuginfo\": null, \"faultcode\": \"Client\", \"faultstring\": \"Node d240ae0d-1844-48f0-adcf-b70680a1b6ce can not be updated while a state transition is in progress.\"}"} from (pid=6672) log_http_response /usr/local/lib/python2.7/dist-packages/ironicclient/common/http.py:260 2015-07-23 11:07:40.358 WARNING ironicclient.common.http [req-24b39fe8-435d-4869-970f-53f64b3512a8 demo demo] Request returned failure status. 2015-07-23 11:07:40.358 WARNING ironicclient.common.http [req-24b39fe8-435d-4869-970f-53f64b3512a8 demo demo] Error contacting Ironic server: Node d240ae0d-1844-48f0-adcf-b70680a1b6ce can not be updated while a state transition is in progress. (HTTP 409). Attempt 3 of 6 [4] http://paste.openstack.org/show/403569/ To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1477490/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
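The proposed fix can be sketched roughly as follows. This is a hypothetical illustration, not the actual nova driver code: destroy_node and the injected unprovision callable are stand-ins for the driver's destroy() path and the Ironic unprovision API call.

```python
# Sketch of the proposed behaviour: always ask Ironic to unprovision,
# even while the node is DEPLOYING, and fail the delete loudly if
# Ironic refuses, instead of cleaning up around it and leaving an
# orphan ACTIVE node behind.
def destroy_node(node, unprovision):
    """`unprovision` stands in for the Ironic API call; it may raise
    RuntimeError while a state transition is in progress."""
    try:
        unprovision(node)
    except RuntimeError as exc:
        # Surface the failure to the user rather than proceeding with
        # network/volume cleanup as the current driver does.
        raise RuntimeError('cannot delete instance: %s' % exc)
```

With this shape, a refused unprovision makes the nova delete fail visibly, which is the behaviour the report argues for until Ironic can abort a deploy at any stage.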
[Yahoo-eng-team] [Bug 1592564] [NEW] gate-neutron-lbaasv2-dsvm-minimal is failing
Public bug reported: https://review.openstack.org/#/c/329481/ https://review.openstack.org/#/c/219215/ 2016-06-14 17:05:38.571 | 2016-06-14 17:05:38.528 | + /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/post_test_hook.sh tempest lbaasv2 minimal 2016-06-14 17:05:38.573 | 2016-06-14 17:05:38.530 | + NEUTRON_LBAAS_DIR=/opt/stack/new/neutron-lbaas 2016-06-14 17:05:38.587 | 2016-06-14 17:05:38.532 | + TEMPEST_CONFIG_DIR=/opt/stack/new/tempest/etc 2016-06-14 17:05:38.587 | 2016-06-14 17:05:38.534 | + SCRIPTS_DIR=/usr/os-testr-env/bin 2016-06-14 17:05:38.588 | 2016-06-14 17:05:38.535 | + OCTAVIA_DIR=/opt/stack/new/octavia 2016-06-14 17:05:38.588 | 2016-06-14 17:05:38.539 | ++ dirname /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/post_test_hook.sh 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.541 | + . /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/decode_args.sh 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.543 | ++ lbaasversion=tempest 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.544 | ++ lbaastest=lbaasv2 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.546 | +++ echo lbaasv2 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.548 | +++ perl -ne '/^(.*)-([^-]+)$/ && print "$1";' 2016-06-14 17:05:38.590 | 2016-06-14 17:05:38.550 | ++ lbaasenv= 2016-06-14 17:05:38.590 | 2016-06-14 17:05:38.552 | ++ '[' -z '' ']' 2016-06-14 17:05:38.590 | 2016-06-14 17:05:38.554 | ++ lbaasenv=lbaasv2 2016-06-14 17:05:38.590 | 2016-06-14 17:05:38.556 | +++ echo lbaasv2 2016-06-14 17:05:38.601 | 2016-06-14 17:05:38.557 | +++ perl -ne '/^(.*)-([^-]+)$/ && print "$2";' 2016-06-14 17:05:38.601 | 2016-06-14 17:05:38.559 | ++ lbaasdriver= 2016-06-14 17:05:38.602 | 2016-06-14 17:05:38.561 | ++ '[' -z '' ']' 2016-06-14 17:05:38.602 | 2016-06-14 17:05:38.563 | ++ lbaasdriver=octavia 2016-06-14 17:05:38.602 | 2016-06-14 17:05:38.564 | + LBAAS_VERSION=tempest 2016-06-14 17:05:38.603 | 2016-06-14 17:05:38.566 | + LBAAS_TEST=lbaasv2 2016-06-14 17:05:38.603 | 2016-06-14 
17:05:38.568 | + LBAAS_DRIVER=octavia 2016-06-14 17:05:38.603 | 2016-06-14 17:05:38.569 | + '[' tempest = lbaasv1 ']' 2016-06-14 17:05:38.604 | 2016-06-14 17:05:38.571 | + testenv=apiv2 2016-06-14 17:05:38.606 | 2016-06-14 17:05:38.573 | + case "$LBAAS_TEST" in 2016-06-14 17:05:38.609 | 2016-06-14 17:05:38.575 | + testenv=lbaasv2 2016-06-14 17:05:38.611 | 2016-06-14 17:05:38.578 | + owner=tempest 2016-06-14 17:05:38.613 | 2016-06-14 17:05:38.581 | + cd /opt/stack/new/neutron-lbaas 2016-06-14 17:05:38.634 | 2016-06-14 17:05:38.583 | + sudo chown -R tempest:stack /opt/stack/new/neutron-lbaas 2016-06-14 17:05:38.634 | 2016-06-14 17:05:38.585 | + '[' octavia = octavia ']' 2016-06-14 17:05:38.634 | 2016-06-14 17:05:38.588 | + sudo chown -R tempest:stack /opt/stack/new/octavia 2016-06-14 17:05:38.634 | 2016-06-14 17:05:38.590 | + sudo_env=' OS_TESTR_CONCURRENCY=1' 2016-06-14 17:05:38.635 | 2016-06-14 17:05:38.591 | + sudo_env+=' TEMPEST_CONFIG_DIR=/opt/stack/new/tempest/etc' 2016-06-14 17:05:38.635 | 2016-06-14 17:05:38.594 | + '[' lbaasv2 = apiv2 ']' 2016-06-14 17:05:38.635 | 2016-06-14 17:05:38.596 | + '[' lbaasv2 = apiv1 ']' 2016-06-14 17:05:38.635 | 2016-06-14 17:05:38.598 | + '[' lbaasv2 = scenario ']' 2016-06-14 17:05:38.635 | 2016-06-14 17:05:38.600 | + echo 'ERROR: unsupported testenv: lbaasv2' 2016-06-14 17:05:38.636 | 2016-06-14 17:05:38.602 | ERROR: unsupported testenv: lbaasv2 2016-06-14 17:05:38.637 | 2016-06-14 17:05:38.604 | + exit 1 2016-06-14 17:05:38.637 | + return 1 ** Affects: neutron Importance: Critical Assignee: Elena Ezhova (eezhova) Status: Confirmed ** Changed in: neutron Importance: Undecided => Critical ** Changed in: neutron Assignee: (unassigned) => Elena Ezhova (eezhova) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1592564 Title: gate-neutron-lbaasv2-dsvm-minimal is failing Status in neutron: Confirmed Bug description: https://review.openstack.org/#/c/329481/ https://review.openstack.org/#/c/219215/ 2016-06-14 17:05:38.571 | 2016-06-14 17:05:38.528 | + /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/post_test_hook.sh tempest lbaasv2 minimal 2016-06-14 17:05:38.573 | 2016-06-14 17:05:38.530 | + NEUTRON_LBAAS_DIR=/opt/stack/new/neutron-lbaas 2016-06-14 17:05:38.587 | 2016-06-14 17:05:38.532 | + TEMPEST_CONFIG_DIR=/opt/stack/new/tempest/etc 2016-06-14 17:05:38.587 | 2016-06-14 17:05:38.534 | + SCRIPTS_DIR=/usr/os-testr-env/bin 2016-06-14 17:05:38.588 | 2016-06-14 17:05:38.535 | + OCTAVIA_DIR=/opt/stack/new/octavia 2016-06-14 17:05:38.588 | 2016-06-14 17:05:38.539 | ++ dirname /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/post_test_hook.sh 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.541 | + . /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/decode_args.sh 2016-06-14 17:05:38.589 | 2016-06-14 17:05:38.543 | ++
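The root cause is visible in the trace above: decode_args.sh splits its argument on the last hyphen with a perl one-liner, and a hyphen-less value like "lbaasv2" matches nothing, so both halves come back empty and the script falls through to its defaults (lbaasenv=lbaasv2, lbaasdriver=octavia). The regex behaviour can be reproduced, here in Python rather than perl, purely as an illustration:

```python
import re

def split_lbaas_arg(arg):
    """Mimic perl -ne '/^(.*)-([^-]+)$/ && print "$1"' from
    decode_args.sh: split an 'env-driver' argument on its last hyphen."""
    m = re.match(r'^(.*)-([^-]+)$', arg)
    # No hyphen means no match, so both parts are empty strings and the
    # calling shell script substitutes its defaults.
    return (m.group(1), m.group(2)) if m else ('', '')

print(split_lbaas_arg('lbaasv2'))          # ('', '')
print(split_lbaas_arg('lbaasv2-octavia'))  # ('lbaasv2', 'octavia')
```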
[Yahoo-eng-team] [Bug 1592167] Re: Deleted keypair causes metadata failure
** Also affects: nova/mitaka Importance: Undecided Status: New ** Changed in: nova/mitaka Status: New => Confirmed ** Changed in: nova/mitaka Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592167 Title: Deleted keypair causes metadata failure Status in OpenStack Compute (nova): In Progress Status in OpenStack Compute (nova) mitaka series: Confirmed Bug description: Description === If a user deletes a keypair that was used to create an instance, that instance receives HTTP 400 errors when attempting to get metadata via http://169.254.169.254/openstack/latest/meta_data.json. This causes problems in the instance when cloud-init fails to retrieve the OpenStack datasource. Steps to reproduce == 1. Create instance with SSH keypair defined. 2. Delete SSH keypair 3. Attempt 'curl http://169.254.169.254/openstack/latest/meta_data.json' from the instance Expected result === Instance receives metadata from http://169.254.169.254/openstack/latest/meta_data.json Actual result = Instance receives HTTP 400 error. Additionally, Ubuntu Cloud Image instances will fail back to the ec2 datasource and re-generate Host SSH keys. 
Environment === Nova: 2015.1.4.2 Hypervisor: Libvirt + KVM Storage:Ceph Network:Liberty Neutron ML2+OVS Logs [req-a8385839-6993-4289-96dc-1714afe82597 - - - - -] FaultWrapper error Traceback (most recent call last): File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/ec2/__init__.py", line 93, in __call__ return req.get_response(self.application) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1299, in send application, catch_exc_info=False) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application app_iter = application(self.environ, start_response) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/ec2/__init__.py", line 105, in __call__ rv = req.get_response(self.application) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1299, in send application, catch_exc_info=False) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application app_iter = application(self.environ, start_response) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func return self.func(req, *args, **kwargs) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/handler.py", line 137, in __call__ data = meta_data.lookup(req.path_info) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 418, in lookup data = 
self.get_openstack_item(path_tokens[1:]) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 297, in get_openstack_item return self._route_configuration().handle_path(path_tokens) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 491, in handle_path return path_handler(version, path) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/api/metadata/base.py", line 316, in _metadata_as_json self.instance.key_name) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/objects/base.py", line 163, in wrapper result = fn(cls, context, *args, **kwargs) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/objects/keypair.py", line 60, in get_by_name db_keypair = db.key_pair_get(context, user_id, name) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/db/api.py", line 937, in key_pair_get return IMPL.key_pair_get(context, user_id, name) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 233, in wrapper return f(*args, **kwargs) File "/opt/cat/openstack/nova/local/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2719, in key_pair_get raise
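The traceback shows the unhandled keypair lookup failure from key_pair_get propagating out of _metadata_as_json and surfacing as an HTTP 400. A defensive sketch of the metadata build is below; it is illustrative only: the dict-backed store stands in for nova's DB API, and this is not the actual upstream fix.

```python
# Illustrative sketch: tolerate a keypair deleted after boot when
# building the metadata JSON, instead of letting the not-found error
# bubble up and fail the whole metadata request.

class KeypairNotFound(Exception):
    pass

def lookup_keypair(store, user_id, name):
    # `store` is a stand-in for nova's key_pair_get DB call.
    try:
        return store[(user_id, name)]
    except KeyError:
        raise KeypairNotFound(name)

def build_metadata(store, user_id, key_name):
    """Return metadata, omitting the public key if it was deleted."""
    md = {'meta': {}}
    if key_name:
        try:
            kp = lookup_keypair(store, user_id, key_name)
            md['public_keys'] = {key_name: kp}
        except KeypairNotFound:
            # Keypair deleted after boot: serve the rest of the
            # metadata rather than failing the request with a 400.
            pass
    return md
```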
[Yahoo-eng-team] [Bug 1592553] [NEW] Port Validator Failing
Public bug reported: A recent update to netutils allows users to select port number 0. This change causes the tests below to fail. == FAIL: test_port_range_validator (horizon.test.tests.utils.ValidatorsTests) -- Traceback (most recent call last): File "/opt/stack/horizon/horizon/test/tests/utils.py", line 257, in test_port_range_validator self.assertRaises(ValidationError, test_call, prange) AssertionError: ValidationError not raised == FAIL: test_port_validator (horizon.test.tests.utils.ValidatorsTests) -- Traceback (most recent call last): File "/opt/stack/horizon/horizon/test/tests/utils.py", line 207, in test_port_validator port) AssertionError: ValidationError not raised ** Affects: horizon Importance: Undecided Assignee: Ankur (ankur-gupta-f) Status: New ** Changed in: horizon Assignee: (unassigned) => Ankur (ankur-gupta-f) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1592553 Title: Port Validator Failing Status in OpenStack Dashboard (Horizon): New Bug description: A recent update to netutils allows users to select port number 0. This change causes the tests below to fail. 
== FAIL: test_port_range_validator (horizon.test.tests.utils.ValidatorsTests) -- Traceback (most recent call last): File "/opt/stack/horizon/horizon/test/tests/utils.py", line 257, in test_port_range_validator self.assertRaises(ValidationError, test_call, prange) AssertionError: ValidationError not raised == FAIL: test_port_validator (horizon.test.tests.utils.ValidatorsTests) -- Traceback (most recent call last): File "/opt/stack/horizon/horizon/test/tests/utils.py", line 207, in test_port_validator port) AssertionError: ValidationError not raised To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1592553/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
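The failing assertions boil down to a boundary disagreement: the validators now follow netutils, which treats 0 as a valid port, while the tests still expect a ValidationError for it. A self-contained sketch of a validator with the old 1-65535 boundary is below; the names are illustrative, not Horizon's actual helpers, and whether 0 should now be accepted is exactly what this bug has to decide.

```python
# Illustrative port validators with the pre-change 1-65535 boundary.

class ValidationError(Exception):
    pass

def validate_port(port):
    # Rejects 0 as well as anything above 65535; netutils-based
    # validation would now accept 0 here.
    if not 1 <= int(port) <= 65535:
        raise ValidationError('Not a valid port number: %s' % port)

def validate_port_range(start, end):
    for p in (start, end):
        validate_port(p)
    if int(end) < int(start):
        raise ValidationError('Invalid port range: %s-%s' % (start, end))
```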
[Yahoo-eng-team] [Bug 1592546] [NEW] OVSLibTestCase.test_db_find_column_type_list is not isolated
Public bug reported: Spotted in a functional test log: neutron.tests.functional.agent.test_ovs_lib.OVSLibTestCase.test_db_find_column_type_list(vsctl) --- Captured traceback: ~~~ Traceback (most recent call last): File "neutron/tests/functional/agent/test_ovs_lib.py", line 395, in test_db_find_column_type_list self.assertEqual(tags_present, len_0_list) File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual self.assertThat(observed, matcher, message) File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat raise mismatch_error testtools.matchers._impl.MismatchError: [{u'tag': 42}, {u'tag': 1}] != [{u'tag': 42}, {u'tag': 1}, {u'tag': 1567}] Note the extra {'tag': 1567}. ** Affects: neutron Importance: Undecided Status: New ** Tags: functional-tests gate-failure ** Tags added: functional-tests gate-failure ** Description changed: - Spotted in a a functional test log: + Spotted in a functional test log: neutron.tests.functional.agent.test_ovs_lib.OVSLibTestCase.test_db_find_column_type_list(vsctl) --- Captured traceback: ~~~ - Traceback (most recent call last): - File "neutron/tests/functional/agent/test_ovs_lib.py", line 395, in test_db_find_column_type_list - self.assertEqual(tags_present, len_0_list) - File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual - self.assertThat(observed, matcher, message) - File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat - raise mismatch_error - testtools.matchers._impl.MismatchError: [{u'tag': 42}, {u'tag': 1}] != [{u'tag': 42}, {u'tag': 1}, {u'tag': 1567}] - + Traceback (most recent call last): + File "neutron/tests/functional/agent/test_ovs_lib.py", line 395, in test_db_find_column_type_list + self.assertEqual(tags_present, 
len_0_list) + File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual + self.assertThat(observed, matcher, message) + File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat + raise mismatch_error + testtools.matchers._impl.MismatchError: [{u'tag': 42}, {u'tag': 1}] != [{u'tag': 42}, {u'tag': 1}, {u'tag': 1567}] Note the extra {'tag': 1567}. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592546 Title: OVSLibTestCase.test_db_find_column_type_list is not isolated Status in neutron: New Bug description: Spotted in a functional test log: neutron.tests.functional.agent.test_ovs_lib.OVSLibTestCase.test_db_find_column_type_list(vsctl) --- Captured traceback: ~~~ Traceback (most recent call last): File "neutron/tests/functional/agent/test_ovs_lib.py", line 395, in test_db_find_column_type_list self.assertEqual(tags_present, len_0_list) File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual self.assertThat(observed, matcher, message) File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat raise mismatch_error testtools.matchers._impl.MismatchError: [{u'tag': 42}, {u'tag': 1}] != [{u'tag': 42}, {u'tag': 1}, {u'tag': 1567}] Note the extra {'tag': 1567}. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592546/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
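The usual fix for this kind of leakage is to register a cleanup for every row a test creates, so a row like {'tag': 1567} left behind by one test cannot show up in another test's db_find results. A minimal illustration with a fake in-memory table follows; this is not the actual neutron test code.

```python
import unittest

# Stand-in for the shared OVSDB state the functional tests query.
FAKE_DB = []

class OVSLibStyleTestCase(unittest.TestCase):
    def _create_port_with_tag(self, tag):
        row = {'tag': tag}
        FAKE_DB.append(row)
        # Remove the row even if the test fails mid-way, so later
        # tests never see it.
        self.addCleanup(FAKE_DB.remove, row)
        return row

    def test_db_find_column_type_list(self):
        self._create_port_with_tag(42)
        self._create_port_with_tag(1)
        # Only the rows this test created are visible.
        self.assertEqual([{'tag': 42}, {'tag': 1}], FAKE_DB)
```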
[Yahoo-eng-team] [Bug 1592543] [NEW] Images registration is using old registration technique
Public bug reported: The Images module is using an experimental version of getResourceType() to register names instead of the .setNames() registry feature. ** Affects: horizon Importance: Undecided Assignee: Matt Borland (palecrow) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1592543 Title: Images registration is using old registration technique Status in OpenStack Dashboard (Horizon): In Progress Bug description: The Images module is using an experimental version of getResourceType() to register names instead of the .setNames() registry feature. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1592543/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592535] [NEW] wrong default value for $pybasedir
Public bug reported: In nova/conf/paths.py, I can read: path_opts = [ cfg.StrOpt('pybasedir', default=os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')), help='Directory where the nova python module is installed'), This means that wherever the nova source code was installed when nova.conf was generated becomes the default value for pybasedir. This is almost guaranteed to be a wrong value. For example, if building from /home/zigo/sources/mitaka/nova/nova, we end up with: # Directory where the nova python module is installed (string value) #pybasedir = /home/zigo/sources/openstack/mitaka/nova/build-area/nova-13.0.0/debian/tmp/usr/lib/python2.7/dist-packages instead of: #pybasedir = /usr/lib/python2.7/dist-packages Unfortunately, this ends up in the package. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592535 Title: wrong default value for $pybasedir Status in OpenStack Compute (nova): New Bug description: In nova/conf/paths.py, I can read: path_opts = [ cfg.StrOpt('pybasedir', default=os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')), help='Directory where the nova python module is installed'), This means that wherever the nova source code was installed when nova.conf was generated becomes the default value for pybasedir. This is almost guaranteed to be a wrong value. For example, if building from /home/zigo/sources/mitaka/nova/nova, we end up with: # Directory where the nova python module is installed (string value) #pybasedir = /home/zigo/sources/openstack/mitaka/nova/build-area/nova-13.0.0/debian/tmp/usr/lib/python2.7/dist-packages instead of: #pybasedir = /usr/lib/python2.7/dist-packages Unfortunately, this ends up in the package. 
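The default can be reproduced outside nova: it is computed from __file__ at the moment the sample config is generated, so whatever tree the generator ran from leaks into the generated nova.conf.

```python
import os

def default_pybasedir(module_file):
    # Same expression as the default in nova/conf/paths.py: two
    # directories above the module that defines the option.
    return os.path.abspath(
        os.path.join(os.path.dirname(module_file), '../../'))

# Wherever paths.py happens to live at generation time wins:
print(default_pybasedir(
    '/usr/lib/python2.7/dist-packages/nova/conf/paths.py'))
# -> /usr/lib/python2.7/dist-packages
```

Run from a Debian build area instead, the same expression yields the build tree, which is exactly what shows up in the packaged sample config.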
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1592535/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592253] Re: Bug: migrate instance after delete flavor
*** This bug is a duplicate of bug 1570748 *** https://bugs.launchpad.net/bugs/1570748 This was the kilo fix for bug 1570748: https://review.openstack.org/#/c/309168/ I'm going to duplicate this bug against bug 1570748 - if it's not the same issue, please re-open and explain why. ** This bug has been marked a duplicate of bug 1570748 Bug: resize instance after edit flavor with horizon -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592253 Title: Bug: migrate instance after delete flavor Status in OpenStack Compute (nova): New Bug description: An error occurred when migrating an instance after deleting its flavor. Reproduce steps: 1. create flavor A 2. boot instance using flavor A 3. delete flavor A 4. migrate instance (ex : nova migrate [instance_uuid]) 5. Error occurred Error log: == nova-compute.log File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in _object_dispatch return getattr(target, method)(*args, **kwargs) File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper result = fn(cls, context, *args, **kwargs) File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in get_by_id db_flavor = db.flavor_get(context, id) File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get return IMPL.flavor_get(context, id) File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in wrapper return f(*args, **kwargs) File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in flavor_get raise exception.FlavorNotFound(flavor_id=id) FlavorNotFound: Flavor 8 could not be found. 
== This error occurs when the resize_instance method is called, as in the code below (/opt/openstack/src/nova/nova/compute/manager.py):

    def resize_instance(self, context, instance, image, reservations,
                        migration, instance_type, clean_shutdown=True):
        if (not instance_type or
                not isinstance(instance_type, objects.Flavor)):
            instance_type = objects.Flavor.get_by_id(
                context, migration['new_instance_type_id'])

The context parameter has this data: {'domain': None, 'project_name': u'admin', 'project_domain': None, 'timestamp': '2016-06-14T04:34:50.759410', 'auth_token': u'457802dc378442a6ac4a5b952587927e', 'remote_address': u'10.10.10.5', 'quota_class': None, 'resource_uuid': None, 'is_admin': True, 'user': u'694df2010229405e966aafc16a30784f', 'service_catalog': [{u'endpoints': [{u'adminURL': u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1', u'region': u'RegionOne', u'internalURL': u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1', u'publicURL': u'https://devel-api.com:8776/v2/9b7ce4df5e1549058687d82e31d127b1'}], u'type': u'volumev2', u'name': u'cinderv2'}, {u'endpoints': [{u'adminURL': u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1', u'region': u'RegionOne', u'internalURL': u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1', u'publicURL': u'https://devel-api.com:8776/v1/9b7ce4df5e1549058687d82e31d127b1'}], u'type': u'volume', u'name': u'cinder'}], 'tenant': u'9b7ce4df5e1549058687d82e31d127b1', 'read_only': False, 'project_id': u'9b7ce4df5e1549058687d82e31d127b1', 'user_id': u'694df2010229405e966aafc16a30784f', 'show_deleted': False, 'roles': [u'admin'], 'user_identity': '694df2010229405e966aafc16a30784f 9b7ce4df5e1549058687d82e31d127b1 - - -', 'read_deleted': 'no', 'request_id': u'req-59dca904-6384-4ca0-b696-5731c80198d7', 'instance_lock_checked': False, 'user_domain': None, 'user_name': u'admin'} When the objects.Flavor.get_by_id method is called, an error occurs because the default value of read_deleted is "no". 
So, I think the context.read_deleted attribute should be set to "yes" before the objects.Flavor.get_by_id method is called. I've tested this using stable/kilo, and I think liberty and mitaka have the same problem. Thanks. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1592253/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
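The reporter's suggested fix can be sketched as a context manager that temporarily flips read_deleted around the flavor lookup. This is illustrative only: Context, reading_deleted, and the dict-backed flavor_get below are stand-ins for nova's RequestContext and DB API, not actual nova code.

```python
from contextlib import contextmanager

class Context:
    """Stand-in for nova's RequestContext; read_deleted defaults to 'no'."""
    def __init__(self):
        self.read_deleted = 'no'

@contextmanager
def reading_deleted(ctxt):
    # Temporarily allow soft-deleted rows to be read, then restore.
    old = ctxt.read_deleted
    ctxt.read_deleted = 'yes'
    try:
        yield ctxt
    finally:
        ctxt.read_deleted = old

def flavor_get(ctxt, flavors, flavor_id):
    """Stand-in for db.flavor_get: honours read_deleted like the real API."""
    f = flavors[flavor_id]
    if f.get('deleted') and ctxt.read_deleted == 'no':
        raise LookupError('Flavor %s could not be found.' % flavor_id)
    return f
```

With this, the migration path could wrap its Flavor.get_by_id call in reading_deleted(context) and still resolve a flavor deleted after the instance was booted.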
[Yahoo-eng-team] [Bug 1592283] Re: openstack-db not drop nova_api database
What is the openstack-db script? Which package does it come from? I don't think it's part of nova. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592283 Title: openstack-db not drop nova_api database Status in OpenStack Compute (nova): Invalid Bug description: Description === When using the openstack-db script to drop the nova service, the nova database was dropped, but the nova_api database wasn't. Steps to reproduce == /usr/bin/openstack-db --service nova --drop Expected result === Log in to mysql with 'mysql -uroot -p'; both the nova and nova_api databases should have been dropped. Actual result = The nova database was dropped, but the nova_api database wasn't. Environment === openstack mitaka version. openstack-nova-common-13.0.0-1.el7.noarch openstack-nova-api-13.0.0-1.el7.noarch To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1592283/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1582790] Re: Selecting first option in transfer table confuses styling
Reviewed: https://review.openstack.org/317598 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=180bcf23748d1f7661565c623c61773fb481e088 Submitter: Jenkins Branch: master commit 180bcf23748d1f7661565c623c61773fb481e088 Author: Matt Borland Date: Tue May 17 09:57:37 2016 -0600 Correcting detail-row logic to not disrupt styling The detail-rows on source and flavor transfer tables get styling confused when items are selected because their detail-row elements remain and thus confuse the alternate styling. This patch fixes both of those tables to exclude the detail-row when the primary row is also excluded. Test this by (before the patch) selecting, say, the first option from the available and see how the styling screws up on the remaining ones. Then apply the patch and see how everything looks beautiful. Change-Id: Ia19fd4b47ea08c28dfb95c369b6b3f7271420f04 Closes-Bug: 1582790 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1582790 Title: Selecting first option in transfer table confuses styling Status in OpenStack Dashboard (Horizon): Fix Released Bug description: If you go to the Launch Instance wizard and select the first available option, for example in Source, you'll see the table-striping styling gets messed up (if you have the striping patch). This is because the markup improperly lets the unshown details row be presented; this needs to be restricted similarly to the primary row. To test the problem, verify that the Source option stripes rows properly when nothing is selected. Then select the first option. See how the rows now are formatted similarly (not striped). 
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1582790/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592476] [NEW] Branding: Detail Actions need Context
Public bug reported: Detail Page Action Menus need Context class to enable customization. horizon/templates/horizon/common/_detail_header.html ** Affects: horizon Importance: Low Assignee: Diana Whitten (hurgleburgler) Status: In Progress ** Tags: branding -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1592476 Title: Branding: Detail Actions need Context Status in OpenStack Dashboard (Horizon): In Progress Bug description: Detail Page Action Menus need Context class to enable customization. horizon/templates/horizon/common/_detail_header.html To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1592476/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592463] [NEW] Avoid removing SegmentHostMapping on other hosts when updating an agent
Public bug reported: Found this when working on OVN, but it should also apply to topologies with the l2 agent. Steps to reproduce: 1) Have segment1 with physical network physical_net1 Have segment2 with physical network physical_net2 2) Have 2 agents (host1, host2), both configured with physical_net1. When the agents are created/updated in neutron, there will be a SegmentHostMapping for segment1->host1, and a SegmentHostMapping for segment1->host2. 3) Update the agent at host2 to be configured with only physical_net2. There will be only one SegmentHostMapping for host2, segment2->host2. But the SegmentHostMapping for segment1->host1 will also be deleted. This is not expected. ** Affects: neutron Importance: Undecided Assignee: Hong Hui Xiao (xiaohhui) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592463 Title: Avoid removing SegmentHostMapping on other hosts when updating an agent Status in neutron: In Progress Bug description: Found this when working on OVN, but it should also apply to topologies with the l2 agent. Steps to reproduce: 1) Have segment1 with physical network physical_net1 Have segment2 with physical network physical_net2 2) Have 2 agents (host1, host2), both configured with physical_net1. When the agents are created/updated in neutron, there will be a SegmentHostMapping for segment1->host1, and a SegmentHostMapping for segment1->host2. 3) Update the agent at host2 to be configured with only physical_net2. There will be only one SegmentHostMapping for host2, segment2->host2. But the SegmentHostMapping for segment1->host1 will also be deleted. This is not expected. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592463/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
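The expected behavior can be sketched with a small in-memory model; the function name and data layout here are illustrative stand-ins, not neutron's actual code:

```python
# Hypothetical model of SegmentHostMapping maintenance: when one host's
# agent is updated, only that host's mappings may be replaced; mappings
# belonging to other hosts must be left untouched.
def update_segment_host_mapping(mappings, host, segments_for_host):
    """Replace the mappings for `host` only.

    mappings: set of (segment, host) tuples.
    segments_for_host: segments the updated agent can now reach.
    """
    kept = {(seg, h) for (seg, h) in mappings if h != host}
    return kept | {(seg, host) for seg in segments_for_host}

# Reproduce the bug scenario from the report:
mappings = {("segment1", "host1"), ("segment1", "host2")}
# host2 is reconfigured to physical_net2, i.e. segment2 only
mappings = update_segment_host_mapping(mappings, "host2", ["segment2"])
# segment1->host1 must survive the update of host2's agent
```

The buggy behavior described above corresponds to deleting all mappings for a segment rather than filtering on the updated host.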
[Yahoo-eng-team] [Bug 1592438] [NEW] [LBaaS] Devstack plugin sets auth_uri option instead of auth_url in neutron.conf and neutron_lbaas.conf
Public bug reported: The plugin.sh script calls iniset $NEUTRON_LBAAS_CONF service_auth auth_uri $AUTH_URI [1] and iniset $NEUTRON_CONF service_auth auth_uri $AUTH_URI [2] but the auth_uri option doesn't exist; in neutron-lbaas there is an auth_url option [3] that is used for authentication in keystone [4]. The way devstack deploys neutron-lbaas currently works only because the AUTH_URI default value in the settings file [5] equals the auth_url default value. We need to set auth_url in plugin.sh instead of auth_uri. [1] https://github.com/openstack/neutron-lbaas/blob/master/devstack/plugin.sh#L53 [2] https://github.com/openstack/neutron-lbaas/blob/master/devstack/plugin.sh#L60 [3] https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/common/keystone.py#L29-L33 [4] https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/common/keystone.py#L95 [5] https://github.com/openstack/neutron-lbaas/blob/master/devstack/settings#L15 ** Affects: neutron Importance: Undecided Assignee: Elena Ezhova (eezhova) Status: New ** Tags: lbaas ** Tags added: lbaas ** Changed in: neutron Assignee: (unassigned) => Elena Ezhova (eezhova) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592438 Title: [LBaaS] Devstack plugin sets auth_uri option instead of auth_url in neutron.conf and neutron_lbaas.conf Status in neutron: New Bug description: The plugin.sh script calls iniset $NEUTRON_LBAAS_CONF service_auth auth_uri $AUTH_URI [1] and iniset $NEUTRON_CONF service_auth auth_uri $AUTH_URI [2] but the auth_uri option doesn't exist; in neutron-lbaas there is an auth_url option [3] that is used for authentication in keystone [4]. The way devstack deploys neutron-lbaas currently works only because the AUTH_URI default value in the settings file [5] equals the auth_url default value. We need to set auth_url in plugin.sh instead of auth_uri.
[1] https://github.com/openstack/neutron-lbaas/blob/master/devstack/plugin.sh#L53 [2] https://github.com/openstack/neutron-lbaas/blob/master/devstack/plugin.sh#L60 [3] https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/common/keystone.py#L29-L33 [4] https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/common/keystone.py#L95 [5] https://github.com/openstack/neutron-lbaas/blob/master/devstack/settings#L15 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592438/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
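The proposed fix amounts to writing auth_url instead of auth_uri into the [service_auth] section. A sketch of that edit in Python (devstack itself would do this with iniset; the file path and endpoint value are placeholders):

```python
# Sketch: rewrite the [service_auth] section so it carries auth_url
# (the option neutron-lbaas actually reads) instead of auth_uri
# (which nothing reads). Paths and values are illustrative.
import configparser

def set_service_auth(conf_path, auth_url):
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    if not cfg.has_section("service_auth"):
        cfg.add_section("service_auth")
    # Drop the dead option if present; returns False harmlessly if absent.
    cfg.remove_option("service_auth", "auth_uri")
    cfg.set("service_auth", "auth_url", auth_url)
    with open(conf_path, "w") as f:
        cfg.write(f)
```

Applied to neutron.conf and neutron_lbaas.conf, this produces the configuration keystone.py expects regardless of whether the two defaults happen to coincide.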
[Yahoo-eng-team] [Bug 1416838] Re: Network Details should be a tabbed page
Reviewed: https://review.openstack.org/303510 Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=fe76b2f11f07d4c9f5a0f0ab117e05924e6babbe Submitter: Jenkins Branch: master commit fe76b2f11f07d4c9f5a0f0ab117e05924e6babbe Author: Paul Karikh Date: Tue Jun 14 12:08:58 2016 +0300 Refactoring of network details pages * network details tables refactored into tabs * filter actions added to each tab * commit affects both admin and project pages This refactoring gives us the ability to implement pagination for these new tabs to improve horizon performance. Closes-Bug: #1416838 Change-Id: I662ad8a8e1914a99cc97a65a4d90d08b9bea7a86 ** Changed in: horizon Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1416838 Title: Network Details should be a tabbed page Status in OpenStack Dashboard (Horizon): Fix Released Bug description: Network Details shows Network info and then has 3 tables inline. This quickly becomes very impractical, and should be changed into a tabbed page, with Overview, Subnets, Ports and DHCP Agents tabs. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1416838/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592396] [NEW] Specifying which floatingip to create should not be a restricted operation
Public bug reported: Hello In my opinion, the --floating-ip-address option to "neutron floatingip-create" should by default not be restricted to the admin user. I notice that we now have the option, when creating a floating ip, to specify which IP to create as opposed to only getting a (semi-)random IP from the pool. neutron floatingip-create --floating-ip-address xx.xx.xx.xx external Which is very nice. But I also noticed that this option, by default, is limited to an admin user. So why is this? If a user really wants an IP which is free, he can likely get it by creating and deleting addresses until the one he wants comes up. In my opinion, we should therefore relax the default policy, allowing ordinary users to specify which floating IP to use. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592396 Title: Specifying which floatingip to create should not be a restricted operation Status in neutron: New Bug description: Hello In my opinion, the --floating-ip-address option to "neutron floatingip-create" should by default not be restricted to the admin user. I notice that we now have the option, when creating a floating ip, to specify which IP to create as opposed to only getting a (semi-)random IP from the pool. neutron floatingip-create --floating-ip-address xx.xx.xx.xx external Which is very nice. But I also noticed that this option, by default, is limited to an admin user. So why is this? If a user really wants an IP which is free, he can likely get it by creating and deleting addresses until the one he wants comes up. In my opinion, we should therefore relax the default policy, allowing ordinary users to specify which floating IP to use.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592396/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
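Relaxing this would amount to a one-line change in neutron's policy file. A hypothetical fragment (the rule name follows the convention used in neutron's policy.json; verify it against your release before relying on it):

```json
{
    "create_floatingip:floating_ip_address": "rule:admin_or_owner"
}
```

The default shipped value restricts this attribute to admins; replacing it with admin_or_owner would implement the proposal in the report.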
[Yahoo-eng-team] [Bug 1590298] Re: DB retry wrapper needs to look for savepoint errors
Reviewed: https://review.openstack.org/326927 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=f21eed3998a793f5c40eb248469dc4245de3415a Submitter: Jenkins Branch: master commit f21eed3998a793f5c40eb248469dc4245de3415a Author: Kevin Benton Date: Tue Jun 7 14:28:59 2016 -0700 Check for mysql SAVEPOINT error in retry decorator Due to lost savepoints in mysql on deadlock errors, we can get a non-existent savepoint error that is just masking a deadlock error. This patch adjusts the retry decorator to check for these savepoint errors as well. See the bug for more details about the failure. Closes-Bug: #1590298 Change-Id: I29905817ad7c69986f182ff3f0d58496608cd665 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1590298 Title: DB retry wrapper needs to look for savepoint errors Status in neutron: Fix Released Bug description: If mysql triggers a deadlock error while in a nested transaction, the savepoint can be lost, which will cause a DBError from sqlalchemy that looks like the following: 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters [req-287a245f-f5da-4126-9625-148e889b3443 tempest-NetworksTestDHCPv6-2134889417 -] DBAPIError exception wrapped from (pymysql.err.InternalError) (1305, u'SAVEPOINT sa_savepoint_1 does not exist') [SQL: u'ROLLBACK TO SAVEPOINT sa_savepoint_1'] 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last): 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters context) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in
do_execute 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 161, in execute 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters result = self._query(query) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 317, in _query 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters conn.query(q) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 835, in query 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters self._affected_rows = self._read_query_result(unbuffered=unbuffered) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1019, in _read_query_result 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters result.read() 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1302, in read 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters first_packet = self.connection._read_packet() 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 981, in _read_packet 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters packet.check_error() 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 393, in check_error 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters err.raise_mysql_exception(self._data) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters 
File "/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 120, in raise_mysql_exception 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters _check_mysql_exception(errinfo) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters File "/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 115, in _check_mysql_exception 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters raise InternalError(errno, errorvalue) 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters InternalError: (1305, u'SAVEPOINT sa_savepoint_1 does not exist') 2016-06-08 05:15:22.609 18530 ERROR oslo_db.sqlalchemy.exc_filters /usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py:68:
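The check added by the commit above can be sketched as follows; the exception class and the regex are illustrative stand-ins, not oslo.db's or neutron's actual code:

```python
# Sketch: treat a mysql "SAVEPOINT ... does not exist" DBError as
# retriable, since it usually masks a deadlock that destroyed the
# savepoint. DBError stands in for oslo.db's exception type.
import re

class DBError(Exception):
    pass

SAVEPOINT_RE = re.compile(r"SAVEPOINT \S+ does not exist")

def is_savepoint_error(exc):
    """True if exc is a DBError caused by a lost mysql savepoint."""
    return isinstance(exc, DBError) and bool(SAVEPOINT_RE.search(str(exc)))

err = DBError("(1305, u'SAVEPOINT sa_savepoint_1 does not exist')")
```

A retry decorator would consult a check like this alongside its existing deadlock detection, so the masked deadlock gets retried instead of surfacing as an internal error.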
[Yahoo-eng-team] [Bug 1549527] Re: midonet decomposition mismatch between neutron and plugin
** Changed in: neutron Status: In Progress => Fix Released ** Changed in: networking-midonet Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1549527 Title: midonet decomposition mismatch between neutron and plugin Status in networking-midonet: Fix Released Status in neutron: Fix Released Bug description: Both neutron and networking-midonet provide the "midonet" stevedore alias for the plugin. We should either backport https://review.openstack.org/#/c/219174/ to liberty or revert the corresponding changes in the plugin. To manage notifications about this bug go to: https://bugs.launchpad.net/networking-midonet/+bug/1549527/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1592376] [NEW] Cinder driver: the function that calculates size_gb needs improvement
Public bug reported: In the line https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436, the intent is to get the ceiling of size_gb. We can use the Python math module's math.ceil() function, which improves the code's readability, so I suggest improving it. ** Affects: glance Importance: Undecided Assignee: YaoZheng_ZTE (zheng-yao1) Status: New ** Changed in: glance Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1) ** Description changed: - In the line https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436, Its intent is to get ceiling of size_gb. we can use python math module - math.ceil() function. This can improve the code readability. So i suggest improve it. + In the line + https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436, + Its intent is to get ceiling of size_gb. we can use python math module + math.ceil() function. This can improve the code readability. So i + suggest improve it. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1592376 Title: Cinder driver: the function that calculates size_gb needs improvement Status in Glance: New Bug description: In the line https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/cinder.py#L436, the intent is to get the ceiling of size_gb. We can use the Python math module's math.ceil() function, which improves the code's readability, so I suggest improving it. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1592376/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
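The suggested change can be sketched as follows; units.Gi is oslo.utils' 1024**3 constant, stood in for here by a plain literal:

```python
# The readability fix proposed in the report: compute the GB ceiling
# with math.ceil instead of manual add-and-divide arithmetic.
import math

Gi = 1024 ** 3  # stand-in for oslo_utils.units.Gi

def size_to_gb(size_bytes):
    """Round a byte count up to whole gigabytes."""
    return int(math.ceil(float(size_bytes) / Gi))
```

So a 1-byte image rounds up to 1 GB, and anything just over an exact gigabyte boundary rounds up to the next whole gigabyte.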
[Yahoo-eng-team] [Bug 1592362] [NEW] [XenAPI] add a maximum retry count for vbd unplug
Public bug reported: https://github.com/openstack/nova/blob/bc5035343d366a18cae587f92ecb4e871aba974a/plugins/xenserver/xenapi/etc/xapi.d/plugins/pluginlib_nova.py#L139 If the vbd unplug always returns DEVICE_DETACH_REJECTED, it will loop forever, so we need to add a maximum retry count in _vbd_unplug_with_retry. ** Affects: nova Importance: Undecided Assignee: Jianghua Wang (wjh-fresh) Status: New ** Changed in: nova Assignee: (unassigned) => Jianghua Wang (wjh-fresh) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592362 Title: [XenAPI] add a maximum retry count for vbd unplug Status in OpenStack Compute (nova): New Bug description: https://github.com/openstack/nova/blob/bc5035343d366a18cae587f92ecb4e871aba974a/plugins/xenserver/xenapi/etc/xapi.d/plugins/pluginlib_nova.py#L139 If the vbd unplug always returns DEVICE_DETACH_REJECTED, it will loop forever, so we need to add a maximum retry count in _vbd_unplug_with_retry. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1592362/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
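A bounded retry loop of the kind proposed could look like this; the function mirrors the role of _vbd_unplug_with_retry, but the implementation, retry count, and exception handling are assumptions for illustration:

```python
# Sketch: retry the unplug on DEVICE_DETACH_REJECTED, but give up after
# a fixed number of attempts instead of looping forever. A plain
# Exception stands in for XenAPI.Failure.
import time

MAX_RETRIES = 10  # illustrative bound, not nova's chosen value

def vbd_unplug_with_retry(unplug, max_retries=MAX_RETRIES, delay=0):
    """Call unplug(); retry on DEVICE_DETACH_REJECTED up to max_retries."""
    for _attempt in range(max_retries):
        try:
            unplug()
            return True
        except Exception as e:
            if "DEVICE_DETACH_REJECTED" not in str(e):
                raise  # unrelated failures propagate immediately
            time.sleep(delay)
    return False  # caller decides how to report the persistent rejection
```

The key change versus the linked plugin code is simply the finite range instead of an unconditional while loop.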
[Yahoo-eng-team] [Bug 1490917] Re: create_router regression for some of plugins
gate-tempest-dsvm-networking-midonet-ml2 failure with the following backtrace. i think it can happen with any plugins with a surrounding transaction for create_router. http://logs.openstack.org/42/328842/2/check/gate-tempest-dsvm- networking-midonet- ml2/32b6d98/logs/screen-q-svc.txt.gz?#_2016-06-14_09_02_13_857 2016-06-14 09:02:13.857 12058 ERROR root [req-753083b5-ba59-47d6-a4e8-90df2bcac4bd tempest-RoutersTest-1344545865 -] Original exception being dropped: ['Traceback (most recent call last):\n', ' File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 236, in create_router\n gw_info, router=router_db)\n', ' File "/opt/stack/new/neutron/neutron/db/l3_gwmode_db.py", line 69, in _update_router_gw_info\ncontext, router_id, info, router=router)\n', ' File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 479, in _update_router_gw_info\next_ips)\n', ' File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 449, in _create_gw_port\n new_network_id, ext_ips)\n', ' File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 350, in _create_router_gw_port\ncontext.elevated(), {\'port\': port_data})\n', ' File "/opt/stack/new/neutron/neutron/plugins/common/utils.py", line 164, in create_port\nreturn core_plugin.create_port(context, {\'port\': port_data})\n', ' File "/opt /stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1149, in create_port\n result, mech_context = self._create_port_db(context, port)\n', ' File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1139, in _create_port_db\nself._setup_dhcp_agent_provisioning_component(context, result)\n', ' File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__\n self.gen.throw(type, value, traceback)\n', ' File "/opt/stack/new/neutron/neutron/db/api.py", line 67, in exc_to_retry\nraise db_exc.RetryRequest(e)\n', 'RetryRequest\n'] 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource [req-753083b5-ba59-47d6-a4e8-90df2bcac4bd tempest-RoutersTest-1344545865 -] create failed 2016-06-14 09:02:13.871 
12058 ERROR neutron.api.v2.resource Traceback (most recent call last): 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/opt/stack/new/neutron/neutron/api/v2/resource.py", line 78, in resource 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource result = method(request=request, **args) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 424, in create 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource return self._create(request, body, **kwargs) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource ectxt.value = e.inner_exc 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__ 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource self.force_reraise() 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource return f(*args, **kwargs) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 535, in _create 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource obj = do_create(body) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 517, in do_create 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource request.context, reservation.reservation_id) 2016-06-14 09:02:13.871 12058 ERROR 
neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__ 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource self.force_reraise() 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 510, in do_create 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource return obj_creator(request.context, **kwargs) 2016-06-14 09:02:13.871 12058 ERROR neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py",
[Yahoo-eng-team] [Bug 1592349] [NEW] Page timeout in selenium integration tests is too small
Public bug reported: The default page timeout in selenium integration tests is set to 30 seconds, which is too small for the first couple of tests, which are run right after Apache in devstack is restarted. It takes around 30 seconds just to log in to Horizon, because a lot of data is fetched into the memory of the Apache workers during these first tests, hence they time out and fail. ** Affects: horizon Importance: Medium Assignee: Timur Sufiev (tsufiev-x) Status: In Progress ** Tags: integration-tests -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1592349 Title: Page timeout in selenium integration tests is too small Status in OpenStack Dashboard (Horizon): In Progress Bug description: The default page timeout in selenium integration tests is set to 30 seconds, which is too small for the first couple of tests, which are run right after Apache in devstack is restarted. It takes around 30 seconds just to log in to Horizon, because a lot of data is fetched into the memory of the Apache workers during these first tests, hence they time out and fail. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1592349/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
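One way to express a fix: give the first few tests after an Apache restart a larger page timeout (applied, for example, via the WebDriver's set_page_load_timeout()). The numbers and the warm-up heuristic below are placeholders, not Horizon's chosen values:

```python
# Sketch: a roomier page timeout while Apache workers are still cold,
# falling back to the tight default once the caches are warm.
DEFAULT_PAGE_TIMEOUT = 30  # seconds; too small right after a restart
WARMUP_PAGE_TIMEOUT = 60   # extra budget while worker memory fills

def page_timeout(tests_completed, warmup_tests=2):
    """Pick the page timeout for the next test.

    Would be applied as driver.set_page_load_timeout(page_timeout(n)).
    """
    if tests_completed < warmup_tests:
        return WARMUP_PAGE_TIMEOUT
    return DEFAULT_PAGE_TIMEOUT
```

A simpler alternative is to raise the single global timeout, at the cost of slower failure detection in every later test.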
[Yahoo-eng-team] [Bug 1589916] Re: "glance location-add" failed when url is "cinder://volume-id"
Thank you, wangxiyuan, Kairat. That OpenStack deployment no longer exists, and my new deployment does not show this error. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1589916 Title: "glance location-add" failed when url is "cinder://volume-id" Status in Glance: Invalid Bug description: The version is mitaka. Glance Configuration: show_image_direct_url=True show_multiple_locations=True. Steps: 1. Upload an image (cirros-0.3.1-x86_64-disk.img, f71dff58-36ca-46ea-8258-0f3c9a4cd747); 2. Create a volume (id: 123fb906-bed5-4b55-8a82-1f2e6bed424b) from the image (backend is fujitsu, others the same); 3. Add a location to the image (url: http), success; #glance location-add --url http://10.43.176.8/images/cirros-0.3.1-x86_64-disk.img f71dff58-36ca-46ea-8258-0f3c9a4cd747 4. Add a location to the image (url: cinder://volume-id), failed; #glance location-add --url cinder://123fb906-bed5-4b55-8a82-1f2e6bed424b f71dff58-36ca-46ea-8258-0f3c9a4cd747 400 Bad Request Invalid location (HTTP 400) The glance-api log is: 2016-06-08 01:38:04.265 DEBUG eventlet.wsgi.server [-] (30577) accepted ('10.43.203.135', 58926) from (pid=30577) server /usr/lib/python2.7/site-packages/eventlet/wsgi.py:868 2016-06-08 01:38:04.267 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: GET /versions Accept: */* from (pid=30577) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:46 2016-06-08 01:38:04.269 INFO eventlet.wsgi.server [-] 10.43.203.135 - - [08/Jun/2016 01:38:04] "GET /versions HTTP/1.1" 200 793 0.001778 2016-06-08 01:38:04.373 DEBUG eventlet.wsgi.server [-] (30577) accepted ('10.43.203.135', 58929) from (pid=30577) server /usr/lib/python2.7/site-packages/eventlet/wsgi.py:868 2016-06-08 01:38:04.374 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: PATCH
/v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 Accept: */* from (pid=30577) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:46 2016-06-08 01:38:04.375 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning from (pid=30577) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:58 2016-06-08 01:38:04.375 DEBUG glance.api.middleware.version_negotiation [-] Matched version: v2 from (pid=30577) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:70 2016-06-08 01:38:04.376 DEBUG glance.api.middleware.version_negotiation [-] new path /v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 from (pid=30577) process_request /opt/stack/glance/glance/api/middleware/version_negotiation.py:71 2016-06-08 01:38:04.604 INFO eventlet.wsgi.server [req-fe0ec689-75f0-4f11-b7ff-692ec84c3a2d 346ce385360c43588f48349ed8f4159e 97330b92c2144c0ea9b8826038d3abe3] 10.43.203.135 - - [08/Jun/2016 01:38:04] "PATCH /v2/images/f71dff58-36ca-46ea-8258-0f3c9a4cd747 HTTP/1.1" 400 254 0.229389 To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1589916/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1522677] Re: AZAwareWeightScheduler is not totally based on weight
Reviewed: https://review.openstack.org/253330 Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=5160d4e2ae285c62ab52c9dfc61121f8ca4ecd57 Submitter: Jenkins Branch: master commit 5160d4e2ae285c62ab52c9dfc61121f8ca4ecd57 Author: Hong Hui Xiao Date: Fri Dec 4 00:19:00 2015 -0500 Make sure AZAwareWeightScheduler base on weight of agent Problem details can be found in bug description. AZ here stands for availability zone. heapq.heapify() will sort tuples according to the first element. If the first elements are equal, then the second element is used. When creating a new network, no AZ holds the network yet. So, the AZ handling list is actually a name-ordered list. As a consequence, when creating a new network, a certain AZ will always be used, for example, 'nova1' in ['nova1', 'nova2', 'nova3']. This patch will sort the resource_hostable_agents first, so that the AZ that holds the dhcp-agent with the least load comes first. Then use min() to get the first AZ. Change-Id: Id57b4656337ab8f1bd2dc3e8bd679a23778a2dea Closes-Bug: #1522677 ** Changed in: neutron Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1522677 Title: AZAwareWeightScheduler is not totally based on weight Status in neutron: Fix Released Bug description: The AZ (availability zone) for networks has been enabled with the merging of [1]. I tried it in a local devstack with the latest code. 1) I deploy 3 dhcp-agents in 3 AZs (nova1, nova2, nova3). 2) set dhcp_agent_per_network=1, don't set default_availability_zones, and set network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler 3) create 10 networks without specifying availability_zone_hints All 10 networks go to nova1. This is not a reasonable result.
[1] https://review.openstack.org/#/c/204436/ To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1522677/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
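The heap behavior described in the commit message is easy to demonstrate: for a brand-new network every AZ has zero load, so the (load, az_name) tuples tie on the first element and fall back to name order.

```python
# Why 'nova1' always won: with equal loads, Python's tuple comparison
# decides the heap order by the second element, the AZ name.
import heapq

azs = [(0, "nova3"), (0, "nova1"), (0, "nova2")]  # equal loads for a new network
heapq.heapify(azs)
chosen = heapq.heappop(azs)[1]
# chosen is always 'nova1', so it absorbs every new network.
```

The fix sorts agents by load before AZs are compared, so ties no longer degenerate into alphabetical selection.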
[Yahoo-eng-team] [Bug 1592323] [NEW] VNC console page gets too slow with Google Chrome
Public bug reported: I found that the instance's console page gets too slow and hangs when I open it with the Google Chrome browser. I thought there might be some network problem between my desktop and the OpenStack servers, but there was not. If I open the console page with Firefox, it works well and responds much faster. root@mitaka-horizon:~# dpkg -l | egrep 'horizon|dashboard' ii openstack-dashboard 2:9.0.0-0ubuntu2.16.04.1 all Django web interface for OpenStack ii openstack-dashboard-ubuntu-theme 2:9.0.0-0ubuntu2.16.04.1 all Ubuntu theme for the OpenStack dashboard ii python-django-horizon 2:9.0.0-0ubuntu2.16.04.1 all Django module providing web based interaction with OpenStack dongwoncho:~$ dpkg -l | grep "chrome" ii google-chrome-stable 51.0.2704.84-1 amd64 The web browser from Google ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1592323 Title: VNC console page gets too slow with Google Chrome Status in OpenStack Dashboard (Horizon): New Bug description: I found that the instance's console page gets too slow and hangs when I open it with the Google Chrome browser. I thought there might be some network problem between my desktop and the OpenStack servers, but there was not. If I open the console page with Firefox, it works well and responds much faster.
root@mitaka-horizon:~# dpkg -l | egrep 'horizon|dashboard'
ii openstack-dashboard 2:9.0.0-0ubuntu2.16.04.1 all Django web interface for OpenStack
ii openstack-dashboard-ubuntu-theme 2:9.0.0-0ubuntu2.16.04.1 all Ubuntu theme for the OpenStack dashboard
ii python-django-horizon 2:9.0.0-0ubuntu2.16.04.1 all Django module providing web based interaction with OpenStack

dongwoncho:~$ dpkg -l | grep "chrome"
ii google-chrome-stable 51.0.2704.84-1 amd64 The web browser from Google

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1592323/+subscriptions
[Yahoo-eng-team] [Bug 1592300] [NEW] Any message with more than one variable should use named interpolation instead of positional
Public bug reported: Any message with more than one variable should use named interpolation instead of positional, to allow translators to move the variables around in the string to account for differences in grammar and writing direction.

For example, do not do this:

    # WRONG
    raise ValueError(_('some message: v1=%s v2=%s') % (v1, v2))

Instead, use this style:

    # RIGHT
    raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

Refer to this document: http://docs.openstack.org/developer/oslo.i18n/guidelines.html

** Affects: horizon Importance: Undecided Assignee: zhang.xiuhua (zhang-xiuhua) Status: In Progress
** Changed in: horizon Assignee: (unassigned) => zhang.xiuhua (zhang-xiuhua)
** Changed in: horizon Status: New => In Progress

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1592300

Title: Any message with more than one variable should use named interpolation instead of positional
Status in OpenStack Dashboard (Horizon): In Progress

Bug description: Any message with more than one variable should use named interpolation instead of positional, to allow translators to move the variables around in the string to account for differences in grammar and writing direction.

For example, do not do this:

    # WRONG
    raise ValueError(_('some message: v1=%s v2=%s') % (v1, v2))

Instead, use this style:

    # RIGHT
    raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

Refer to this document: http://docs.openstack.org/developer/oslo.i18n/guidelines.html

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1592300/+subscriptions
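A quick runnable illustration of why the bug report asks for named interpolation: a translated template can reorder the placeholders, which is impossible with positional '%s'. The message text here is made up for the example:

```python
# Hypothetical translated template: the translator swapped the order of
# the two variables, which named interpolation supports transparently.
translated = 'v2 is %(v2)s, and v1 is %(v1)s'
result = translated % {'v1': 'alpha', 'v2': 'beta'}
print(result)  # -> 'v2 is beta, and v1 is alpha'

# A positional template like 'v1=%s v2=%s' bakes the argument order into
# the code, so a translator cannot swap the placeholders without
# breaking every call site.
```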
[Yahoo-eng-team] [Bug 1592294] [NEW] delete router interface failed because of vpnservice
Public bug reported: I created a network and attached a subnet to it, then added this network to two routers named router1 and router2, and I created a vpnservice with router1 and the subnet. When I delete the router interface of this subnet from router2, neutron raises a SubnetInUseByVPNService exception: "Subnet is used by VPNService". I have to delete the vpnservice with router1 first.

** Affects: neutron Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592294

Title: delete router interface failed because of vpnservice
Status in neutron: New

Bug description: I created a network and attached a subnet to it, then added this network to two routers named router1 and router2, and I created a vpnservice with router1 and the subnet. When I delete the router interface of this subnet from router2, neutron raises a SubnetInUseByVPNService exception: "Subnet is used by VPNService". I have to delete the vpnservice with router1 first.

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592294/+subscriptions
[Yahoo-eng-team] [Bug 1502933] Re: [OSSA-2016-009] ICMPv6 anti-spoofing rules are too permissive (CVE-2015-8914)
** Summary changed:
- ICMPv6 anti-spoofing rules are too permissive (CVE-2015-8914)
+ [OSSA-2016-009] ICMPv6 anti-spoofing rules are too permissive (CVE-2015-8914)

** Changed in: ossa Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1502933

Title: [OSSA-2016-009] ICMPv6 anti-spoofing rules are too permissive (CVE-2015-8914)
Status in neutron: Fix Committed
Status in OpenStack Security Advisory: Fix Released

Bug description: ICMPv6 default firewall rules are too permissive on the hypervisors, leaving VMs able to do ICMPv6 source address spoofing.

Pre-condition:
- having a provider-network providing IPv6 connectivity to the VMs
- in my case the controllers are providing stateful DHCPv6 and my physical router provides the default gateway using Router Advertisements.

How to reproduce:
- spin a VM and attach to it an IPv6 enabled network
- obtain an IPv6 address using #dhclient -6
- try to ping6 an IPv6 enabled host
- remove your IPv6 address from the interface: #sudo ip addr del 2001:0DB8::100/32 dev eth0
- add a forged IPv6 address to your interface, in the same subnet as the original IPv6 address: #sudo ip addr add 2001:0DB8::200/32 dev eth0
- try to ping6 the previous IPv6 enabled host, it will still work
- try to assign another IPv6 address to your NIC, completely outside your IPv6 assignment: sudo ip addr add 2001:dead:beef::1/64 dev eth0
- try to ping6 the previous IPv6 enabled host -> the destination will still receive your echo requests with your forged address, but you won't receive answers; they won't be routed back to you.

Expected behavior:
- VMs should not be able to spoof their IPv6 address and issue forged ICMPv6 packets. The firewall rules on the hypervisor should restrict ICMPv6 egress to the VM's link-local and global-unicast addresses.
Affected versions:
- I saw the issue in OpenStack Juno, under Ubuntu 14.04. But according to the upstream code, the issue is still present in the master branch, in neutron/agent/linux/iptables_firewall.py, at line 385:

    ipv6_rules += [comment_rule('-p icmpv6 -j RETURN', comment=ic.IPV6_ICMP_ALLOW)]

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1502933/+subscriptions
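The tightening the reporter asks for, limiting ICMPv6 egress to the instance's own addresses, could be generated roughly like this. This is a sketch only: the function name, the rule strings, and the allowed type list are illustrative and are not the merged neutron patch:

```python
# Illustrative only: emit per-address ICMPv6 egress rules instead of a
# blanket '-p icmpv6 -j RETURN'. Type numbers: 128 = echo request,
# 135 = neighbour solicitation, 136 = neighbour advertisement.
ALLOWED_EGRESS_TYPES = (128, 135, 136)

def icmpv6_egress_rules(link_local, global_ips):
    rules = []
    for src in [link_local] + list(global_ips):
        for icmp_type in ALLOWED_EGRESS_TYPES:
            rules.append('-s %s -p icmpv6 --icmpv6-type %s -j RETURN'
                         % (src, icmp_type))
    return rules

# Hypothetical addresses for one instance port.
rules = icmpv6_egress_rules('fe80::f816:3eff:fe00:1', ['2001:db8::100'])
print(len(rules))  # -> 6 (2 addresses x 3 types)
```

A packet sourced from a spoofed address matches none of the per-address rules and falls through to the chain's drop policy.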
[Yahoo-eng-team] [Bug 1558658] Re: [OSSA-2016-009] Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP requests (CVE-2016-5362 and CVE-2016-5363)
** Summary changed:
- Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP requests (CVE-2016-5362 and CVE-2016-5363)
+ [OSSA-2016-009] Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP requests (CVE-2016-5362 and CVE-2016-5363)

** Changed in: ossa Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1558658

Title: [OSSA-2016-009] Security Groups do not prevent MAC and/or IPv4 spoofing in DHCP requests (CVE-2016-5362 and CVE-2016-5363)
Status in neutron: Fix Released
Status in neutron kilo series: Fix Released
Status in OpenStack Security Advisory: Fix Released

Bug description: The IptablesFirewallDriver does not prevent spoofing other instances' or a router's MAC and/or IP addresses. The rule to permit DHCP discovery and request messages:

    ipv4_rules += [comment_rule('-p udp -m udp --sport 68 --dport 67 ' '-j RETURN', comment=ic.DHCP_CLIENT)]

is too permissive: it does not enforce the source MAC or IP address. This is the IPv4 case of public bug https://bugs.launchpad.net/neutron/+bug/1502933, and a solution was previously mentioned in June 2013 in https://bugs.launchpad.net/neutron/+bug/1427054. If L2population is not used, an instance can spoof the Neutron router's MAC address and cause the switches to learn a MAC move, allowing the instance to intercept other instances' traffic, potentially belonging to other tenants if this is a shared network. The solution is to permit this DHCP traffic only from the instance's IP address and the unspecified IPv4 address 0.0.0.0/32, rather than from any IPv4 source; additionally, the source MAC address should be restricted to MAC addresses assigned to the instance's Neutron port.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1558658/+subscriptions
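The mitigation described in the bug's last paragraph can be sketched as follows. The function and variable names are hypothetical and this is not the actual neutron code, just an illustration of the rule shape under those assumptions:

```python
# Illustrative only: permit DHCP discovery/request packets solely from the
# port's own MAC address, and only from its fixed IPs plus the unspecified
# address 0.0.0.0 (the source a client uses before it holds a lease).
def dhcp_client_rules(port_mac, fixed_ips):
    rules = []
    for src_ip in list(fixed_ips) + ['0.0.0.0/32']:
        rules.append('-s %s -m mac --mac-source %s '
                     '-p udp -m udp --sport 68 --dport 67 -j RETURN'
                     % (src_ip, port_mac))
    return rules

# Hypothetical Neutron port with one fixed IP.
rules = dhcp_client_rules('fa:16:3e:00:00:01', ['10.0.0.5'])
for rule in rules:
    print(rule)
```

Compared with the quoted blanket rule, a DHCP packet with a spoofed source MAC or a foreign source IP no longer matches and falls through to the drop policy.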
[Yahoo-eng-team] [Bug 1592283] [NEW] openstack-db does not drop nova_api database
Public bug reported:

Description
===========
When using the openstack-db script to drop the nova service, the nova database was dropped, but the nova_api database wasn't.

Steps to reproduce
==================
/usr/bin/openstack-db --service nova --drop

Expected result
===============
After logging in with the command 'mysql -uroot -p', both the nova and nova_api databases should be dropped.

Actual result
=============
The nova database was dropped, but the nova_api database wasn't.

Environment
===========
openstack mitaka version.
openstack-nova-common-13.0.0-1.el7.noarch
openstack-nova-api-13.0.0-1.el7.noarch

** Affects: nova Importance: Undecided Assignee: zhaolihui (zhaolh) Status: New
** Changed in: nova Assignee: (unassigned) => zhaolihui (zhaolh)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1592283

Title: openstack-db does not drop nova_api database
Status in OpenStack Compute (nova): New

Bug description: When using the openstack-db script to drop the nova service, the nova database was dropped, but the nova_api database wasn't. Steps to reproduce: /usr/bin/openstack-db --service nova --drop. Expected result: after logging in with 'mysql -uroot -p', both the nova and nova_api databases should be dropped. Actual result: the nova database was dropped, but the nova_api database wasn't. Environment: openstack mitaka version, openstack-nova-common-13.0.0-1.el7.noarch, openstack-nova-api-13.0.0-1.el7.noarch.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1592283/+subscriptions
[Yahoo-eng-team] [Bug 1592270] [NEW] can get shared network/subnet, but fail to create port when fixed_ip is specified
Public bug reported: A user who doesn't have the admin role and isn't the shared network's owner can see the shared network and its related subnet, but fails to create a port when specifying fixed_ips. The policy allows the GETs but disallows creating a port with fixed_ips specified:

    #user can see shared networks
    "get_network": "rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc",

    #user can see shared subnets
    "get_subnet": "rule:admin_or_owner or rule:shared",

    #user won't be able to create a port when specifying fixed_ips
    "create_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",

** Affects: neutron Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1592270

Title: can get shared network/subnet, but fail to create port when fixed_ip is specified
Status in neutron: New

Bug description: A user who doesn't have the admin role and isn't the shared network's owner can see the shared network and its related subnet, but fails to create a port when specifying fixed_ips. The policy allows the GETs but disallows creating a port with fixed_ips specified:

    #user can see shared networks
    "get_network": "rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc",

    #user can see shared subnets
    "get_subnet": "rule:admin_or_owner or rule:shared",

    #user won't be able to create a port when specifying fixed_ips
    "create_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1592270/+subscriptions
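The mismatch between the three policy rules can be traced with a toy evaluation. This is a plain-Python sketch, not oslo.policy, and the context fields are made up for the example:

```python
# Hypothetical request context: a regular user operating on another
# tenant's shared network.
ctx = {'is_admin': False, 'owns_network': False, 'network_is_shared': True}

# Rough translation of the three policy lines quoted in the bug:
# admin_or_owner / shared grant the reads, but create_port:fixed_ips
# has no 'shared' disjunct, so the same user is rejected there.
get_network_ok = ctx['is_admin'] or ctx['owns_network'] or ctx['network_is_shared']
get_subnet_ok = ctx['is_admin'] or ctx['owns_network'] or ctx['network_is_shared']
create_port_fixed_ips_ok = ctx['is_admin'] or ctx['owns_network']

print(get_network_ok, get_subnet_ok, create_port_fixed_ips_ok)  # -> True True False
```

The asymmetry is exactly the reported behavior: the reads succeed via the rule:shared branch, while create_port:fixed_ips has no such branch and fails.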