[Yahoo-eng-team] [Bug 1393693] [NEW] Integrating OpenStack L3 agent to forward L3 calls (router and FIP) to OpenDaylight
Public bug reported:

I am trying to enable L3 calls from OpenStack to OpenDaylight (ODL). As I
understand it, the ML2 plugin currently forwards the neutron calls for
network, port, and subnet, but for L3 calls (router, floating IP) an
additional service plugin must be integrated. For this I am using the L3
plugin from https://github.com/dave-tucker/odl-neutron-drivers.

Steps followed:
1. Enabled the neutron-l3-agent as described at
   http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html
2. Followed the steps at https://github.com/dave-tucker/odl-neutron-drivers

While debugging, I found that the L3 calls return from create_router() in
/opt/stack/neutron/neutron/l3_db.py without being forwarded to the L3
plugin. I believe l3_db.py does not forward these calls to the L3 plugin,
which would in turn redirect the neutron calls to ODL.

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: opendaylight

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393693

Title:
  Integrating OpenStack L3 agent to forward L3 calls (router and FIP) to
  OpenDaylight

Status in OpenStack Neutron (virtual network service):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393693/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
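For reference, routing L3 calls to ODL is done by registering the L3 service plugin in neutron.conf rather than by changing l3_db.py. A minimal sketch of that wiring; the service_plugins class path here is a hypothetical placeholder, so check the odl-neutron-drivers README for the path matching your release:

```ini
[DEFAULT]
# ML2 keeps handling network/port/subnet calls; the service plugin below
# is what receives router and floating-IP calls and forwards them to ODL.
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
# Hypothetical class path -- taken to illustrate the mechanism only.
service_plugins = odl_l3.l3_odl.OpenDaylightL3RouterPlugin
```

With a service plugin registered this way, neutron dispatches create_router() and floating-IP operations to the plugin instead of terminating them in the l3_db mixin.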
[Yahoo-eng-team] [Bug 1393409] Re: In class CiscoCsrIPsecVpnAgentApi used unknown method from L3RouterPlugin
@Eugene, unfortunately I did not pay attention to the new changes
following https://review.openstack.org/#/c/123877 and the changes in
plugin configuration. Bug marked as invalid.

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
   Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393409

Title:
  In class CiscoCsrIPsecVpnAgentApi used unknown method from
  L3RouterPlugin

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  During testing of the VPNaaS feature I found a problem with incorrect
  calls from one class to another: class CiscoCsrIPsecVpnAgentApi tries
  to call the unknown method get_host_for_router on class L3RouterPlugin.

  The Tempest test is
  tempest.api.network.test_vpnaas_extensions.VPNaaSTestJSON.test_create_update_delete_vpn_service[gate,smoke].

  Tempest log:

  2014-11-16 20:03:45,861 7747 DEBUG [tempest.common.rest_client] Request (VPNaaSTestJSON:test_create_update_delete_vpn_service): 201 POST http://192.168.0.84:9696/v2.0/vpn/vpnservices 0.066s
      Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': 'omitted'}
          Body: {"vpnservice": {"subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "name": "vpn-service-1842998599", "admin_state_up": true}}
      Response - Headers: {'status': '201', 'content-length': '322', 'connection': 'close', 'date': 'Sun, 16 Nov 2014 20:03:45 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-a747adae-5586-4f52-b876-3d072ec28ce9'}
          Body: {"vpnservice": {"router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "status": "PENDING_CREATE", "name": "vpn-service-1842998599", "admin_state_up": true, "subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "tenant_id": "d6e97fa90ddc4ebab91ab57019826728", "id": "a521e5b9-3edf-4236-a0b0-3df3f9dce602", "description": ""}}

  2014-11-16 20:03:45,873 7747 DEBUG [tempest.common.rest_client] Request (VPNaaSTestJSON:test_create_update_delete_vpn_service): 200 GET http://192.168.0.84:9696/v2.0/vpn/vpnservices 0.011s
      Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': 'omitted'}
          Body: None
      Response - Headers: {'status': '200', 'content-length': '632', 'content-location': 'http://192.168.0.84:9696/v2.0/vpn/vpnservices', 'connection': 'close', 'date': 'Sun, 16 Nov 2014 20:03:45 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-a1fb0f92-3ab0-4c19-9de1-c17b603afd41'}
          Body: {"vpnservices": [{"router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "status": "PENDING_CREATE", "name": "vpnservice--691534310", "admin_state_up": true, "subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "tenant_id": "d6e97fa90ddc4ebab91ab57019826728", "id": "5a1e5e70-069b-462a-bc13-6c4b0945228f", "description": ""}, {"router_id": "0e383b58-33c3-4d55-b72a-ae19e76b2f6f", "status": "PENDING_CREATE", "name": "vpn-service-1842998599", "admin_state_up": true, "subnet_id": "20da093a-3c97-4757-859c-537a4206d967", "tenant_id": "d6e97fa90ddc4ebab91ab57019826728", "id": "a521e5b9-3edf-4236-a0b0-3df3f9dce602", "description": ""}]}

  2014-11-16 20:03:45,916 7747 DEBUG [tempest.common.rest_client] Request (VPNaaSTestJSON:_run_cleanups): 500 DELETE http://192.168.0.84:9696/v2.0/vpn/vpnservices/a521e5b9-3edf-4236-a0b0-3df3f9dce602 0.038s
      Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': 'omitted'}
          Body: None
      Response - Headers: {'status': '500', 'content-length': '150', 'connection': 'close', 'date': 'Sun, 16 Nov 2014 20:03:45 GMT', 'content-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 'req-1ab34864-f9bf-45ec-a870-439f88b4b974'}
          Body: {"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}

  Traceback (most recent call last):
    File "tempest/api/network/test_vpnaas_extensions.py", line 86, in _delete_vpn_service
      self.client.delete_vpnservice(vpn_service_id)
    File "tempest/services/network/network_client_base.py", line 124, in _delete
      resp, body = self.delete(uri)
    File "tempest/services/network/network_client_base.py", line 83, in delete
      return self.rest_client.delete(uri, headers)
    File "tempest/common/rest_client.py", line 240, in delete
      return self.request('DELETE', url, extra_headers, headers, body)
    File "tempest/common/rest_client.py", line 454, in request
      resp, resp_body)
    File "tempest/common/rest_client.py", line 550, in _error_checker
      raise exceptions.ServerFault(message)
  ServerFault: Got server fault
  Details: Request Failed: internal server error while processing your request.
  Traceback (most
[Yahoo-eng-team] [Bug 1374497] Re: change in oslo.db ping handling is causing issues in projects that are not using transactions
** Changed in: oslo.db
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1374497

Title:
  change in oslo.db ping handling is causing issues in projects that are
  not using transactions

Status in OpenStack Identity (Keystone):
  Triaged
Status in Oslo Database library:
  Fix Released
Status in oslo.db juno series:
  Fix Released

Bug description:
  In https://review.openstack.org/#/c/106491/, the ping listener which
  emits "SELECT 1" at connection start was moved from being a connection
  pool checkout handler to a transaction on-begin handler. Apparently
  Keystone and possibly others are using the Session in autocommit mode,
  even though this is explicitly warned against in SQLAlchemy's docs (see
  http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#autocommit-mode),
  and these projects are seeing failed connections that are not
  transparently recovered (see
  https://bugs.launchpad.net/keystone/+bug/1361378).

  Alternatives include:
  1. move the ping listener back to being a checkout handler
  2. fix downstream projects to not use the session in autocommit mode

  In all likelihood, the fix here should involve both. I have a longer
  term plan to fix EngineFacade once and for all so that the correct use
  patterns are explicit, but that still has to be blueprinted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1374497/+subscriptions
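Alternative 1 above (pinging at pool checkout rather than at transaction begin) can be illustrated with a toy pool. This is neither oslo.db's nor SQLAlchemy's implementation, just a self-contained sketch of the mechanics: a dead connection is detected and replaced the moment it is checked out, before any transaction touches it.

```python
import sqlite3


class PingingPool:
    """Toy connection pool that pings each connection on checkout,
    mimicking the checkout-handler behaviour this bug asks to restore."""

    def __init__(self, factory):
        self._factory = factory
        self._idle = []

    def checkout(self):
        while self._idle:
            conn = self._idle.pop()
            try:
                conn.execute("SELECT 1")  # the ping
                return conn
            except sqlite3.Error:
                # Stale connection: drop it and try the next idle one.
                conn.close()
        # Pool exhausted (or all idle connections were dead): open a new one.
        return self._factory()

    def checkin(self, conn):
        self._idle.append(conn)


pool = PingingPool(lambda: sqlite3.connect(":memory:"))
conn = pool.checkout()
pool.checkin(conn)
conn.close()  # simulate the server dropping the idle connection

# The ping fails on the stale connection, so a fresh one is handed out.
conn2 = pool.checkout()
value = conn2.execute("SELECT 1").fetchone()[0]
```

Because the ping happens at checkout, even a session running in autocommit mode (which may never emit a transaction "begin" event) still gets a live connection.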
[Yahoo-eng-team] [Bug 1376211] Re: Retry mechanism does not work on startup when used with MySQL
** Changed in: oslo.db
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376211

Title:
  Retry mechanism does not work on startup when used with MySQL

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Oslo Database library:
  Fix Released
Status in oslo.db juno series:
  Confirmed

Bug description:
  This was initially revealed as a Red Hat bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1144181

  The problem shows up when Neutron, or any other oslo.db based project,
  starts while the MySQL server is not up yet. Instead of retrying the
  connection as per max_retries and retry_interval, the service just
  crashes with return code 1. This is because during engine
  initialization, engine.execute("SHOW VARIABLES LIKE 'sql_mode'") is
  called, which opens a connection *before* _test_connection() succeeds,
  so the server just bails out to sys.exit() at the top of the stack.

  This behaviour was checked for both oslo.db 0.4.0 and 1.0.1. I suspect
  this is a regression from the original db code in oslo-incubator,
  though I haven't checked it specifically.

  The easiest way to reproduce the traceback is:
  1. Stop MariaDB.
  2.
  execute the following Python script:

    import oslo.db.sqlalchemy.session

    url = 'mysql://neutron:123456@10.35.161.235/neutron'
    engine = oslo.db.sqlalchemy.session.EngineFacade(url)

  The following traceback can be seen in the service log:

  2014-10-01 13:46:10.588 5812 TRACE neutron Traceback (most recent call last):
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/bin/neutron-server", line 10, in <module>
  2014-10-01 13:46:10.588 5812 TRACE neutron     sys.exit(main())
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/server/__init__.py", line 47, in main
  2014-10-01 13:46:10.588 5812 TRACE neutron     neutron_api = service.serve_wsgi(service.NeutronApiService)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/service.py", line 105, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron     LOG.exception(_('Unrecoverable error: please check log '
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py", line 82, in __exit__
  2014-10-01 13:46:10.588 5812 TRACE neutron     six.reraise(self.type_, self.value, self.tb)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/service.py", line 102, in serve_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron     service.start()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/service.py", line 73, in start
  2014-10-01 13:46:10.588 5812 TRACE neutron     self.wsgi_app = _run_wsgi(self.app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/service.py", line 168, in _run_wsgi
  2014-10-01 13:46:10.588 5812 TRACE neutron     app = config.load_paste_app(app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/common/config.py", line 182, in load_paste_app
  2014-10-01 13:46:10.588 5812 TRACE neutron     app = deploy.loadapp("config:%s" % config_path, name=app_name)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
  2014-10-01 13:46:10.588 5812 TRACE neutron     return loadobj(APP, uri, name=name, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
  2014-10-01 13:46:10.588 5812 TRACE neutron     return context.create()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
  2014-10-01 13:46:10.588 5812 TRACE neutron     return self.object_type.invoke(self)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2014-10-01 13:46:10.588 5812 TRACE neutron     **context.local_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 56, in fix_call
  2014-10-01 13:46:10.588 5812 TRACE neutron     val = callable(*args, **kw)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/urlmap.py", line 25, in urlmap_factory
  2014-10-01 13:46:10.588 5812 TRACE neutron     app = loader.get_app(app_name, global_conf=global_conf)
  2014-10-01 13:46:10.588 5812 TRACE neutron   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app
  2014-10-01 13:46:10.588 5812 TRACE neutron     name=name, global_conf=global_conf).create()
  2014-10-01 13:46:10.588 5812 TRACE neutron   File
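This is not oslo.db's actual code, but a sketch of the retry semantics that max_retries and retry_interval are documented to provide at startup (names here are hypothetical); the bug is precisely that the sql_mode query runs outside any such loop:

```python
import time


def connect_with_retry(connect, max_retries=10, retry_interval=2):
    """Keep calling connect() until it succeeds.

    Mirrors the documented option semantics: a negative max_retries
    retries forever; otherwise give up after max_retries failed attempts.
    """
    attempt = 0
    while True:
        try:
            return connect()
        except Exception:
            attempt += 1
            if 0 <= max_retries <= attempt:
                raise
            time.sleep(retry_interval)


# Demo: a "server" that refuses the first two connections, then accepts.
calls = {"n": 0}


def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("Can't connect to MySQL server")
    return "engine"


engine = connect_with_retry(flaky_connect, max_retries=5, retry_interval=0)
```

Any query issued before the connection has passed through this kind of loop (as the SHOW VARIABLES call does here) turns a transient "server not up yet" condition into a hard crash.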
[Yahoo-eng-team] [Bug 1393727] [NEW] Admin Create Image, Image File Image Location not marked as required
Public bug reported:

Admin > Images > Create Image: the Image Location and Image File fields
are not marked as required, yet one of them is required for image
creation.

** Affects: horizon
   Importance: Undecided
   Assignee: Rob Cresswell (robcresswell)
   Status: New

** Changed in: horizon
   Assignee: (unassigned) => Rob Cresswell (robcresswell)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1393727

Title:
  Admin Create Image, Image File Image Location not marked as required

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393727/+subscriptions
[Yahoo-eng-team] [Bug 1315032] Re: when using --size after image is created, the size shown is the image actual size
The behavior is consistent with the design. This is one of the drivers
for why --size was made read-only in the v2 API. We should not change
the v1 behavior while we are trying to deprecate it.

** Changed in: glance
   Status: Incomplete => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1315032

Title:
  when using --size after image is created, the size shown is the image
  actual size

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  When I create an image using --size, while the image is in status
  "saving" we see the requested size, but after the image is created we
  see the image's actual size.

  During saving:

  [root@orange-vdsf images(keystone_admin)]# glance image-list --human
  | ID                                   | Name        | Disk Format | Container Format | Size   | Status |
  | 292c11dd-a16b-46a0-b176-d665293273b3 | dafna_zise1 | qcow2       | bare             | 19.5MB | saving |

  After the image was saved:

  [root@orange-vdsf images(keystone_admin)]# glance image-list --human
  | ID                                   | Name        | Disk Format | Container Format | Size   | Status |
  | 292c11dd-a16b-46a0-b176-d665293273b3 | dafna_zise1 | qcow2       | bare             | 6GB    | active |

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1315032/+subscriptions
[Yahoo-eng-team] [Bug 1393780] [NEW] Do not import objects, only modules
Public bug reported:

Per http://docs.openstack.org/developer/hacking/ [H302]: we do not
import objects, only modules. The exceptions are:
  - imports from the migrate package
  - imports from the sqlalchemy package
  - imports from the oslo-incubator openstack.common.gettextutils module

** Affects: glance
   Importance: Undecided
   Assignee: Roman Vasilets (rvasilets)
   Status: New

** Changed in: glance
   Assignee: (unassigned) => Roman Vasilets (rvasilets)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1393780

Title:
  Do not import objects, only modules

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1393780/+subscriptions
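A minimal illustration of the H302 rule (the path components here are arbitrary):

```python
# H302-compliant: import the module and reference names through it, so
# every call site shows which module a name came from.
import os.path

full = os.path.join("images", "foo.qcow2")

# Non-compliant (shown only as a comment; H302 would flag it):
#   from os.path import join
# Importing the object hides its origin at the call site and makes
# monkeypatching in tests fragile, since callers hold their own reference.
```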
[Yahoo-eng-team] [Bug 1338795] Re: VMware store: upload and download performance need to be improved
** Project changed: glance => glance-store

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1338795

Title:
  VMware store: upload and download performance need to be improved

Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress

Bug description:
  It takes too much time to upload to the VMware store. The bits are
  uploaded to Glance, then go through vCenter, then through ESXi, to
  finally land on the datastore. The upload time is poor, and uploading
  through vCenter adds unnecessary load on the vCenter server.

  Since VC 5.5, it is possible to get a ticket from VC to upload to a
  specific host directly. This way we bypass vCenter, which makes the
  upload much faster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1338795/+subscriptions
[Yahoo-eng-team] [Bug 1361197] Re: Glance image-upload truncates the image.
** Project changed: glance => glance-store

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1361197

Title:
  Glance image-upload truncates the image.

Status in OpenStack Glance backend store-drivers library (glance_store):
  New

Bug description:
  This may be a duplicate of bug #1240355, but I am not sure.

  I have three hosts which are all connected to the same shared NFS
  datastore, with glance configured to use it. I upload an image and
  then try to download it, but the download returns an empty string and
  there is an ERROR in glance/api.log:

  2014-08-25 09:11:01.438 2724 ERROR glance.api.common [893f9ace-0176-42b1-947f-21b8875547be cffc8c555ebe44bb97b48baabd92e606 94a68b099a674d55986f4ce15fbb946b - - -] Backend storage for image 46b9b487-9c49-47a4-87aa-a11d0b17b6ff disconnected after writing only 0 bytes

  The reproducer:

  # echo 123456 | glance -d image-create --name foo --disk-format raw --container-format bare
  curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: ***' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: raw' -H 'x-image-meta-name: foo' -d '<open file 'stdin', mode 'r' at 0x7f38eea620c0>' http://172.16.40.19:9292/v1/images

  HTTP/1.1 201 Created
  content-length: 467
  etag: f447b20a7fcbf53a5d5be013ea0b15af
  location: http://172.16.40.19:9292/v1/images/46b9b487-9c49-47a4-87aa-a11d0b17b6ff
  date: Mon, 25 Aug 2014 13:10:38 GMT
  content-type: application/json
  x-openstack-request-id: req-c63d01a6-6c84-4867-8944-f9113497546c

  {"image": {"status": "active", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "2014-08-25T13:10:30", "owner": "94a68b099a674d55986f4ce15fbb946b", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "46b9b487-9c49-47a4-87aa-a11d0b17b6ff", "size": 7, "virtual_size": null, "name": "foo", "checksum": "f447b20a7fcbf53a5d5be013ea0b15af", "created_at": "2014-08-25T13:10:20", "disk_format": "raw", "properties": {}, "protected": false}}

  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | f447b20a7fcbf53a5d5be013ea0b15af     |
  | container_format | bare                                 |
  | created_at       | 2014-08-25T13:10:20                  |
  | deleted          | False                                |
  | deleted_at       | None                                 |
  | disk_format      | raw                                  |
  | id               | 46b9b487-9c49-47a4-87aa-a11d0b17b6ff |
  | is_public        | False                                |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | foo                                  |
  | owner            | 94a68b099a674d55986f4ce15fbb946b     |
  | protected        | False                                |
  | size             | 7                                    |
  | status           | active                               |
  | updated_at       | 2014-08-25T13:10:30                  |
  | virtual_size     | None                                 |
  +------------------+--------------------------------------+

  [root@incomplete-read ~(keystone_admin)]# glance -d image-download foo
  curl -i -X GET -H 'X-Auth-Token: ***' -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' 'http://172.16.40.19:9292/v1/images/detail?limit=20&name=foo'

  HTTP/1.1 200 OK
  date: Mon, 25 Aug 2014 13:10:52 GMT
  content-length: 470
  content-type: application/json; charset=UTF-8
  x-openstack-request-id: req-b8f1c595-baf4-4a15-b9ae-407e7db3899a

  {"images": [{"status": "active", "deleted_at": null, "name": "foo", "deleted": false, "container_format": "bare", "created_at": "2014-08-25T13:10:20", "disk_format": "raw", "updated_at": "2014-08-25T13:10:30", "min_disk": 0, "protected": false, "id": "46b9b487-9c49-47a4-87aa-a11d0b17b6ff", "min_ram": 0, "checksum": "f447b20a7fcbf53a5d5be013ea0b15af", "owner": "94a68b099a674d55986f4ce15fbb946b", "is_public": false, "virtual_size": null, "properties": {}, "size": 7}]}

  curl -i -X GET -H 'X-Auth-Token: ***' -H 'Content-Type: application/octet-stream' -H 'User-Agent: python-glanceclient' http://172.16.40.19:9292/v1/images/46b9b487-9c49-47a4-87aa-a11d0b17b6ff
  ''

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1361197/+subscriptions
[Yahoo-eng-team] [Bug 1350010] Re: VMware: re-use session token
** Project changed: glance => glance-store

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1350010

Title:
  VMware: re-use session token

Status in OpenStack Glance backend store-drivers library (glance_store):
  In Progress

Bug description:
  Currently, we generate a session token for each API call. Since token
  generation can take some time, each API call sees some overhead, which
  makes Glance slower than it should be.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1350010/+subscriptions
[Yahoo-eng-team] [Bug 1350892] Re: Nova VMWare provisioning errors
** Project changed: glance => glance-store

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350892

Title:
  Nova VMWare provisioning errors

Status in OpenStack Glance backend store-drivers library (glance_store):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am trying to provision a RHEL VMware image (a custom VMDK created
  from a template). The OpenStack dashboard shows "provisioning" status
  for a long time, but there is no activity on vCenter.

  PS: a CirrOS VMDK (converted with qemu-img) gets deployed without
  errors. Requesting help here.

  2014-07-31 16:48:44.017 2931 WARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 11.98221 sec
  2014-07-31 16:50:27.015 2931 WARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 12.987183 sec
  2014-07-31 16:51:57.715 2931 WARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 0.696367 sec
  2014-07-31 16:58:32.860 2931 ERROR suds.client [-]
  <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <ns1:Body>
      <ns0:SessionIsActive>
        <ns0:_this type="SessionManager">SessionManager</ns0:_this>
        <ns0:sessionID>5216dd75-609c-3c5a-b7e6-9708bd7dc786</ns0:sessionID>
        <ns0:userName>Administrator</ns0:userName>
      </ns0:SessionIsActive>
    </ns1:Body>
  </SOAP-ENV:Envelope>
  2014-07-31 16:58:32.863 2931 WARNING nova.virt.vmwareapi.driver [req-e6f5ba33-a37a-476b-a6b6-801ccd80bac6 6b875fcfe8344addb87382298c1a75be dad97a29e60849a2a6ad9d0ffb353161] Unable to validate session 5216dd75-609c-3c5a-b7e6-9708bd7dc786!
  2014-07-31 16:58:32.863 2931 WARNING nova.virt.vmwareapi.driver [req-e6f5ba33-a37a-476b-a6b6-801ccd80bac6 6b875fcfe8344addb87382298c1a75be dad97a29e60849a2a6ad9d0ffb353161] Session 5216dd75-609c-3c5a-b7e6-9708bd7dc786 is inactive!
  2014-07-31 16:58:48.406 2931 ERROR suds.client [-]
  <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <ns1:Body>
      <ns0:TerminateSession>
        <ns0:_this type="SessionManager">SessionManager</ns0:_this>
        <ns0:sessionId>5216dd75-609c-3c5a-b7e6-9708bd7dc786</ns0:sessionId>
      </ns0:TerminateSession>
    </ns1:Body>
  </SOAP-ENV:Envelope>

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1350892/+subscriptions
[Yahoo-eng-team] [Bug 1240355] Re: Broken pipe error when copying image from glance to vSphere
** Project changed: glance => glance-store

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240355

Title:
  Broken pipe error when copying image from glance to vSphere

Status in OpenStack Glance backend store-drivers library (glance_store):
  Confirmed
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Using the VMwareVCDriver on the latest nova (6affe67067) from master,
  launching an image for the first time is failing when copying the
  image from Glance to vSphere. The error that shows in the nova log is:

  Traceback (most recent call last):
    File "/opt/stack/nova/nova/virt/vmwareapi/io_util.py", line 176, in _inner
      self.output.write(data)
    File "/opt/stack/nova/nova/virt/vmwareapi/read_write_util.py", line 143, in write
      self.file_handle.send(data)
    File "/usr/lib/python2.7/httplib.py", line 790, in send
      self.sock.sendall(data)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 131, in sendall
      v = self.send(data[count:])
    File "/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 107, in send
      super(GreenSSLSocket, self).send, data, flags)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 77, in
      return func(*a, **kw)
    File "/usr/lib/python2.7/ssl.py", line 198, in send
      v = self._sslobj.write(data)
  error: [Errno 32] Broken pipe

  To reproduce, launch an instance using an image that has not yet been
  uploaded to vSphere. I have attached the full log here:
  http://paste.openstack.org/show/48536/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1240355/+subscriptions
[Yahoo-eng-team] [Bug 1360235] Re: swift functional tests are broken
Removing Glance, as the issue is rather in glance-store, where it has
already been marked.

** No longer affects: glance

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1360235

Title:
  swift functional tests are broken

Status in OpenStack Glance backend store-drivers library (glance_store):
  New

Bug description:
  Hi,

  The following tests are failing in master:

    GLANCE_TEST_SWIFT_CONF=/etc/glance/glance-api.conf ./run_tests.sh glance.tests.functional.store.test_swift.TestSwiftStore

  I believe this commit broke them:
  https://github.com/openstack/glance/commit/63195aaa3b12e56ae787598e001ac44d62e52865

  The problem, I believe, is that in glance/store/swift.py the line

    SWIFT_STORE_REF_PARAMS = swift_store_utils.SwiftParams().params

  is evaluated when the file is imported, which is too early for
  tests/functional/store/test_swift.py:TestSwiftStore (see its setUp
  method).

  Also glance.tests.functional.store.test_swift.TestSwiftStore.test_delayed_delete_with_auth
  is broken, because of this commit:
  https://github.com/openstack/glance/commit/66d24bb1a130902e824ca76cbee1deb6ef564873

  Jordan

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1360235/+subscriptions
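The import-time pitfall described above can be sketched in a few lines (all names hypothetical, standing in for SWIFT_STORE_REF_PARAMS and the test's setUp): a module-level assignment snapshots configuration the moment the module is imported, before a test's setUp can adjust it, whereas a lazy accessor reads it when actually needed.

```python
# Stand-in for configuration that a functional test wants to set up
# before the store is used.
CONFIG = {"auth_address": None}

# Eager, import-time evaluation: captures CONFIG's state at import,
# like the module-level SwiftParams().params assignment in the bug.
EAGER_PARAMS = dict(CONFIG)


def get_params():
    # Lazy evaluation: reads CONFIG when first needed, so a setUp that
    # runs after import (but before use) is still honored.
    return dict(CONFIG)


# A test-style setUp runs after the module has been imported:
CONFIG["auth_address"] = "http://localhost:8080/auth"

eager = EAGER_PARAMS["auth_address"]  # still the stale import-time value
lazy = get_params()["auth_address"]   # picks up the setUp value
```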
[Yahoo-eng-team] [Bug 1258342] Re: glance /v1.0/images does not return reasonable results
Two months without activity, closing.

** Changed in: glance
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1258342

Title:
  glance /v1.0/images does not return reasonable results

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  This may not be a bug, per se, but what is returned from a basic GET
  request to v1/images/detail does not make sense. Glance appears to
  query only for public images. [1] Indeed, adding is_public=False
  results in private images being shown; however, in that case it shows
  *all* private images, not just the images that are part of my tenant.

  Furthermore, I would expect the basic query to return images that
  either (a) my tenant has access to or (b) I personally have access to.
  Neither of these cases appears to be covered by the default request.

  [1] http://paste.openstack.org/show/54544/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1258342/+subscriptions
[Yahoo-eng-team] [Bug 810493] Re: No support for sparse images
Moving to store, as that's where this should be coming from.

** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/810493

Title: No support for sparse images

Status in OpenStack Glance backend store-drivers library (glance_store): Confirmed

Bug description:
I could have sworn I filed this bug already, but I don't see it now. Oh, well.

Glance does not seem to support any sort of sparse images. For example, Ubuntu's cloud images are a 1½ GB filesystem, but if it were sparsely allocated it would only take up a couple of hundred MB. Amazon handles this by using tarballs as their image transport format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/810493/+subscriptions
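The apparent-size vs. allocated-size distinction behind this report can be demonstrated with a small sketch (POSIX-specific; whether the hole actually stays unallocated depends on the filesystem):

```python
import os
import tempfile

def write_sparse(path, size):
    """Create a file whose apparent size is `size` bytes but which may
    occupy far fewer disk blocks (filesystem permitting)."""
    with open(path, "wb") as f:
        f.seek(size - 1)   # seek past the hole...
        f.write(b"\0")     # ...and write a single byte at the end

def apparent_and_allocated(path):
    st = os.stat(path)
    # st_blocks is counted in 512-byte units on POSIX systems.
    return st.st_size, st.st_blocks * 512

path = os.path.join(tempfile.mkdtemp(), "sparse.img")
write_sparse(path, 1 << 20)  # 1 MiB apparent size
```

A store that copies byte-for-byte (or a tarball transport, as the report mentions Amazon using) is what preserves this sparseness end to end.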
[Yahoo-eng-team] [Bug 1263067] Re: Glance HTTP store doesn't work behind proxy
Moved to glance-store instead of glance.

** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1263067

Title: Glance HTTP store doesn't work behind proxy

Status in OpenStack Glance backend store-drivers library (glance_store): Confirmed
Status in Python client library for Glance: New

Bug description:
If Glance is deployed in an environment which requires a proxy for most connections, then images stored in the HTTP store will fail to be downloaded, because the HTTP store uses httplib (a low-level HTTP library) which doesn't care about any proxy settings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1263067/+subscriptions
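The contrast is that Python's urllib layer, unlike bare httplib (http.client in Python 3), does consult the standard proxy environment variables. A minimal sketch (the proxy address is an assumed placeholder):

```python
import os
import urllib.request

# Unlike bare httplib/http.client, urllib picks up http_proxy /
# https_proxy from the environment, so a store built on it would
# work behind a proxy.
os.environ["http_proxy"] = "http://proxy.example.com:3128"  # assumed proxy

proxies = urllib.request.getproxies_environment()
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
# opener.open(url) would now route HTTP requests through the proxy.
```

Switching the store from httplib to a proxy-aware client is the shape of the fix the report implies.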
[Yahoo-eng-team] [Bug 1346525] Re: Snapshots when using RBD backend make full copy then upload unnecessarily
** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1346525

Title: Snapshots when using RBD backend make full copy then upload unnecessarily

Status in OpenStack Glance backend store-drivers library (glance_store): In Progress
Status in OpenStack Compute (Nova): In Progress

Bug description:
When performing a snapshot, a local copy is made. In the case of RBD, it reads what libvirt thinks is a raw block device and then converts that to a local raw file. The file is then uploaded to Glance, which reads the whole raw file and stores it in the backend. If the backend is Ceph, this is completely unnecessary and defeats the whole point of having a Ceph cluster.

The fix should go something like this:
1. Tell Ceph to make a snapshot of the RBD
2. Get Ceph metadata from backend, send that to Glance
3. Glance gets metadata; if it has a Ceph backend no download is necessary
4. If it doesn't, download image from Ceph location, store in backend

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1346525/+subscriptions
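The proposed flow above can be sketched as a decision function. Step names are illustrative assumptions, not Nova's real code:

```python
def snapshot_plan(nova_backend, glance_backend):
    """Sketch of the proposed flow (illustrative step names): skip the
    local copy and upload entirely when both sides are Ceph/RBD."""
    if nova_backend == "rbd" and glance_backend == "rbd":
        # Steps 1-3 above: snapshot in Ceph, hand only metadata to Glance.
        return ["ceph_snapshot", "send_metadata_to_glance"]
    # Step 4: Glance lacks a Ceph backend, so fetch and store normally.
    return ["download_from_ceph_location", "store_in_backend"]
```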
[Yahoo-eng-team] [Bug 1309338] Re: VMware store: problem with api_insecure configuration variable
Fix in glance-store, moving the bug there as well.

** Project changed: glance => glance-store

** Tags added: vmware

https://bugs.launchpad.net/bugs/1309338

Title: VMware store: problem with api_insecure configuration variable

Status in OpenStack Glance backend store-drivers library (glance_store): In Progress

Bug description:
The api_insecure configuration variable is documented to be True when an SSL connection is used without checking the certificates. However, the implementation is using the variable to choose between HTTP and HTTPS. This is confusing for deployers, especially those using the cinder client, where api_insecure really means "do not check the certificates". It would be better to change the variable to vmware_https_connection (True = HTTPS, False = HTTP).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1309338/+subscriptions
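The confusion comes from conflating two independent settings. A hypothetical helper (not the glance_store API) that keeps them apart:

```python
def connection_settings(use_https, insecure):
    """Keep the two concerns separate (illustrative helper): the scheme
    choice and certificate verification are independent flags, unlike
    the conflated api_insecure described in the bug."""
    scheme = "https" if use_https else "http"
    verify_certs = use_https and not insecure
    return scheme, verify_certs
```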
[Yahoo-eng-team] [Bug 1383117] Re: ./run_tests.sh raises error for glance_store in debug mode
** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1383117

Title: ./run_tests.sh raises error for glance_store in debug mode

Status in OpenStack Glance backend store-drivers library (glance_store): Fix Committed

Bug description:
When we try to run the glance_store test suite in debug mode it raises the error "ImportError: Start directory is not importable" with the following stack trace:

$ ./run_tests.sh -d
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/openstack/glance_store/glance_store/.venv/lib/python2.7/site-packages/testtools/run.py", line 535, in <module>
    main(sys.argv, sys.stdout)
  File "/home/openstack/glance_store/glance_store/.venv/lib/python2.7/site-packages/testtools/run.py", line 532, in main
    stdout=stdout)
  File "/home/openstack/glance_store/glance_store/.venv/lib/python2.7/site-packages/testtools/run.py", line 218, in __init__
    self.parseArgs(argv)
  File "/home/openstack/glance_store/glance_store/.venv/lib/python2.7/site-packages/testtools/run.py", line 257, in parseArgs
    self._do_discovery(argv[2:])
  File "/home/openstack/glance_store/glance_store/.venv/lib/python2.7/site-packages/testtools/run.py", line 376, in _do_discovery
    loaded = loader.discover(start_dir, pattern, top_level_dir)
  File "/usr/lib/python2.7/unittest/loader.py", line 204, in discover
    raise ImportError('Start directory is not importable: %r' % start_dir)
ImportError: Start directory is not importable: './glance/tests'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1383117/+subscriptions
[Yahoo-eng-team] [Bug 1377797] Re: VMware: Can't use the datastore under Folder / Datastore Cluster
** Tags added: vmware

** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1377797

Title: VMware: Can't use the datastore under Folder / Datastore Cluster

Status in OpenStack Glance backend store-drivers library (glance_store): In Progress

Bug description:
We should support datastores under a Folder / Datastore Cluster. It is common usage to have a Folder / Datastore Cluster under a DataCenter. Currently, only storing glance images on a datastore directly under the top of the datacenter path is supported.

---Current---
Inventory Path: /${vmware_datacenter_path}/${vmware_datastore_name}/
- Supporting Folder / Datastore Cluster improves datastore management (Default: None).

---After---
Inventory Path: /${vmware_datacenter_path}/${vmware_folder_path}/${vmware_datastore_name}/
---

To realize it, we should add a variable in glance-api.conf:

---glance-api.conf---
...
# Inventory path to a datacenter (string value)
# Value optional when vmware_server_ip is an ESX/ESXi host: if specified
# should be `ha-datacenter`.
vmware_datacenter_path = DC1

+ # Folder / Datastore Cluster path (string value)
+ # Optional value: Default None
+ vmware_folder_path = openstack

# Datastore associated with the datacenter (string value)
vmware_datastore_name = datastore01
...

- Add the "+" bullet lines. Add the vmware_folder_path variable and define its default value as None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1377797/+subscriptions
[Yahoo-eng-team] [Bug 1382681] Re: Support for cinder as glance default_store
Moved to glance-store as the driver is there.

** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1382681

Title: Support for cinder as glance default_store

Status in OpenStack Glance backend store-drivers library (glance_store): New

Bug description:
Cinder as glance default_store is not working as of now. It throws a 500 internal server error.

https://ask.openstack.org/en/question/7322/how-to-use-cinder-as-glance-default_store/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1382681/+subscriptions
[Yahoo-eng-team] [Bug 1386060] Re: [glance_store] should use os.statvfs to get available disk space in filesystem driver
** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1386060

Title: [glance_store] should use os.statvfs to get available disk space in filesystem driver

Status in OpenStack Glance backend store-drivers library (glance_store): Fix Committed

Bug description:
    df = processutils.execute("df", "-k", mount_point)[0].strip('\n')
    total_available_space = int(df.split('\n')[1].split()[3]) * units.Ki

The above code raises "IndexError: list index out of range" if the name of the filesystem is too long, e.g.:

[root@a-b-c-d ~]# df -k /root/.emacs.d/
Filesystem                 1K-blocks    Used Available Use% Mounted on
/dev/mapper/vg_foo-by_root
                            51606140 6169468  42815232  13% /

The available size is on the third line instead of the second line.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1386060/+subscriptions
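The statvfs-based approach the title suggests avoids parsing `df` output at all. A minimal sketch of the replacement:

```python
import os

def available_space_bytes(mount_point):
    """os.statvfs reports free space directly, avoiding the `df` output
    parsing that breaks when a long device name wraps onto its own line."""
    st = os.statvfs(mount_point)
    # f_bavail: blocks available to unprivileged users; f_frsize: the
    # fragment (block) size those counts are expressed in.
    return st.f_bavail * st.f_frsize
```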
[Yahoo-eng-team] [Bug 1387311] Re: Unprocessable Entity error for large images on Ceph Swift store
** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1387311

Title: Unprocessable Entity error for large images on Ceph Swift store

Status in OpenStack Glance backend store-drivers library (glance_store): New

Bug description:
There is an implementation difference between Ceph Swift and OS Swift in how the ETag/checksum of a dynamic large object (DLO) manifest object is verified.

OS Swift verifies it just like any other object, md5'ing the content of the object:
https://github.com/openstack/swift/blob/master/swift/obj/server.py#L439-L459

Ceph Swift actually does the full DLO checksum across all the component objects:
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_op.cc#L1765-L1781

The Glance Swift store driver assumes the OS Swift behavior, and sends an ETag of md5() with the PUT request for the manifest object. Technically, this is correct, since that object itself is a zero-byte object:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L552

However, when using a Ceph Swift store, this results in a 422 Unprocessable Entity response from Swift, because the provided ETag doesn't match the expected ETag for the DLO.

It would seem to make sense to just not send any ETag with the manifest object PUT request. It is not required by the API, and only marginally improves the validation of the object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1387311/+subscriptions
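Why the two verifications can never agree is easy to see in a simplified model: the manifest object itself has zero bytes of content (so glance presumably sends the md5 of empty input), while any checksum computed across the DLO's segments covers real data.

```python
import hashlib

# What the driver would send for the zero-byte manifest object:
EMPTY_MD5 = hashlib.md5(b"").hexdigest()

def checksum_of_content(chunks):
    """md5 over actual object content -- a checksum computed across the
    DLO's segment data cannot match the empty-content md5 above, which
    is the mismatch behind the 422 response."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()
```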
[Yahoo-eng-team] [Bug 1354512] Re: Anonymous user can download public image through Swift
Fix committed to glance-store, aiming the bug at that as well.

** Project changed: glance => glance-store

https://bugs.launchpad.net/bugs/1354512

Title: Anonymous user can download public image through Swift

Status in OpenStack Glance backend store-drivers library (glance_store): Fix Committed
Status in OpenStack Security Advisories: Won't Fix
Status in OpenStack Security Notes: Fix Released

Bug description:
When Glance uses Swift as backend, and Swift uses the delay_auth_decision feature (for temporary URLs, for example), anyone can download public images anonymously from Swift by direct URL.

Steps to reproduce:

1. Set delay_auth_decision = 1 in Swift's proxy-server.conf. Set
   default_store = swift
   swift_store_multi_tenant = True
   swift_store_create_container_on_put = True
   in Glance's glance-api.conf.

2. Create a public image:
   glance image-create --name fake_image --file some_text_file_name --is-public True
   You may use a text file to reproduce the error for descriptive reasons. Use the returned image id at the next step.

3. Download the created image by curl:
   curl swift_endpoint/glance_image_id/image_id
   See your file in the output. If swift_store_container in your glance-api.conf is not 'glance', use the appropriate prefix in the command above.

Glance sets the read ACL to '.r:*,.rlistings' for all public images. Thus, since anyone has access into Swift (by the delay_auth_decision parameter), anyone can download a public image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1354512/+subscriptions
[Yahoo-eng-team] [Bug 1393831] [NEW] [heat] certain template errors display a long stack trace in the red error popup
Public bug reported:

I have found that certain types of template errors are not caught in the first validation pass, and instead fail in the second phase, when all the parameters are submitted. For example, adding an invalid property to an output does this. Here is an example (only showing the outputs section):

outputs:
  mysql_instance_name:
    description: Name of the MySQL instance
    value: { get_attr: [mysql_instance, name] }
  mysql_instance_ip:
    description: The IP address of the MySQL instance
    value: { get_attr: [mysql_instance, first_address] }
  mysql_password:
    description: The MySQL root password
    value: { get_attr: [mysql_password, value] }
    hidden: true

The hidden parameter is invalid for an output, but the initial template validation does not find an error (likely a Heat problem). When the template is finally submitted, a stack trace appears in the red popup error (see attached screenshot).

** Affects: horizon
   Importance: Undecided
   Assignee: Miguel Grinberg (miguelgrinberg)
   Status: New

** Attachment added: screenshot of the error
   https://bugs.launchpad.net/bugs/1393831/+attachment/4263034/+files/heat_error.jpg

** Changed in: horizon
   Assignee: (unassigned) => Miguel Grinberg (miguelgrinberg)

https://bugs.launchpad.net/bugs/1393831

Title: [heat] certain template errors display a long stack trace in the red error popup

Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1393831/+subscriptions
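The up-front check the reporter expected can be sketched as a small validator. The allowed-key set here is an assumption for illustration, not Heat's actual output schema:

```python
# Assumed allowed keys for an output, for illustration only.
ALLOWED_OUTPUT_KEYS = {"description", "value"}

def validate_outputs(outputs):
    """Return human-readable errors for unknown output properties, so
    that e.g. 'hidden' is flagged in the first validation pass instead
    of surfacing as a stack trace on submission."""
    errors = []
    for name, body in outputs.items():
        for key in body:
            if key not in ALLOWED_OUTPUT_KEYS:
                errors.append("output %s: unknown property %r" % (name, key))
    return errors
```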
[Yahoo-eng-team] [Bug 1393837] [NEW] --member-status STATUS returns images whose member status is not STATUS
Public bug reported:

Note: the cirros-* images are public, 'private-image' is private, 'share-test' is shared with this user (in pending state).

Standard image list works ok:

$ glance --os-image-api-version 2 image-list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image                   |
| 3def22ed-e2eb-4575-b242-8704517e95d7 | cirros-0.3.2-x86_64-uec         |
| 32d53b24-5498-469e-a27a-328f87213738 | cirros-0.3.2-x86_64-uec-ramdisk |
| ca34f1b7-a839-474f-a16b-34779c65905c | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

Filtering on status 'rejected' includes images which do not have status 'rejected' (a bug -- I think no images should be listed here):

$ glance --os-image-api-version 2 image-list --member-status rejected
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image                   |
| 3def22ed-e2eb-4575-b242-8704517e95d7 | cirros-0.3.2-x86_64-uec         |
| 32d53b24-5498-469e-a27a-328f87213738 | cirros-0.3.2-x86_64-uec-ramdisk |
| ca34f1b7-a839-474f-a16b-34779c65905c | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

Filtering on status 'accepted' includes images which do not have status 'accepted' (a bug -- I think no images should be listed here):

$ glance --os-image-api-version 2 image-list --member-status accepted
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image                   |
| 3def22ed-e2eb-4575-b242-8704517e95d7 | cirros-0.3.2-x86_64-uec         |
| 32d53b24-5498-469e-a27a-328f87213738 | cirros-0.3.2-x86_64-uec-ramdisk |
| ca34f1b7-a839-474f-a16b-34779c65905c | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

Filtering on status 'pending' includes images which do not have status 'pending' (a bug I think):

$ glance --os-image-api-version 2 image-list --member-status pending
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 281d576a-9e4b-4d11-94bb-8b1e89f62a71 | private-image                   |
| 795518ca-13a6-4493-b3a3-91519ad7c067 | share-test                      |  <-- I think only this should be listed
| 3def22ed-e2eb-4575-b242-8704517e95d7 | cirros-0.3.2-x86_64-uec         |
| 32d53b24-5498-469e-a27a-328f87213738 | cirros-0.3.2-x86_64-uec-ramdisk |
| ca34f1b7-a839-474f-a16b-34779c65905c | cirros-0.3.2-x86_64-uec-kernel  |
+--------------------------------------+---------------------------------+

The API request is: GET /v2/images?limit=20&member_status=pending

** Affects: glance
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1393837

Title: --member-status STATUS returns images whose member status is not STATUS

Status in OpenStack Image Registry and Delivery Service (Glance): New
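The behaviour the reporter expects from the filter can be sketched over simplified records (plain dicts, not Glance's actual query code): only images whose membership record for the calling tenant carries the requested status should match.

```python
def list_shared_images(images, memberships, tenant_id, member_status):
    """Sketch of the expected --member-status semantics (simplified
    records): match only images with a membership record for this
    tenant in the requested state."""
    matched = []
    for img in images:
        status = memberships.get((img["id"], tenant_id))
        if status == member_status:
            matched.append(img)
    return matched
```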
[Yahoo-eng-team] [Bug 1329653] Re: Copying objects in swift with a leading / in the path
Thanks for including the details. This scenario works fine for me on master, so closing. Please re-open if you can reproduce it on Juno or Kilo.

** Changed in: horizon
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1329653

Title: Copying objects in swift with a leading / in the path

Status in OpenStack Dashboard (Horizon): Invalid

Bug description:
(RDO Havana) This seems to ignore the path altogether or give an error, depending on the path and destination container. That said, I'm not sure what the correct way to handle a leading slash is, but currently it's inconsistent. The resulting copies are visible in Horizon, but can't be downloaded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1329653/+subscriptions
[Yahoo-eng-team] [Bug 1390102] Re: Block device mapping not deleted
Thanks, it seems to be fixed now. Closing.

** Changed in: nova
   Status: Incomplete => Invalid

https://bugs.launchpad.net/bugs/1390102

Title: Block device mapping not deleted

Status in OpenStack Compute (Nova): Invalid

Bug description:
Deleting an instance does not mark the block device mapping as deleted. For example, boot an instance (type doesn't matter; I found this on an ephemeral instance), then delete the instance. In the database, you see that deleted=1 for the instance, but the corresponding entry in block_device_mapping is not deleted.

This becomes a problem during archive_deleted_rows, or in the proposed purge_deleted_rows, because of a foreign key constraint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390102/+subscriptions
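The behaviour the reporter expected can be sketched with plain dicts standing in for database rows (not the Nova ORM): soft-deleting the instance should cascade the deleted marker to its block_device_mapping rows.

```python
def soft_delete_instance(instance, bdms):
    """Sketch of the expected cascade (plain dicts, not Nova's ORM):
    mark the instance deleted and also soft-delete its
    block_device_mapping rows, so archive_deleted_rows does not trip
    over the foreign key constraint."""
    instance["deleted"] = 1
    for bdm in bdms:
        if bdm["instance_uuid"] == instance["uuid"]:
            bdm["deleted"] = 1
```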
[Yahoo-eng-team] [Bug 1393693] Re: Integrating OpenStack L3 agent to forward L3 calls (router and FIP) to OpenDaylight
That looks like a support request rather than a bug; I suggest you use ask.openstack.org for this. Marking as Invalid.

** Changed in: neutron
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1393693

Title: Integrating OpenStack L3 agent to forward L3 calls (router and FIP) to OpenDaylight

Status in OpenStack Neutron (virtual network service): Invalid

Bug description:
I am trying to enable L3 calls from OpenStack to OpenDaylight (ODL). As per my understanding, the ML2 plugin presently forwards the neutron calls for network, port and subnet, but for L3 calls (router, floating IP) an additional service plugin needs to be integrated. For this I am using the L3 plugin from https://github.com/dave-tucker/odl-neutron-drivers.

Steps followed:
1. Enabled the neutron-l3-agent as described at http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html
2. Followed the steps mentioned at https://github.com/dave-tucker/odl-neutron-drivers

I tried to debug the L3 calls, which return from create_router() in /opt/stack/neutron/neutron/l3_db.py without being forwarded to the L3 plugin. I believe l3_db.py does not handle forwarding the calls to the L3 plugin, which would in turn redirect the neutron calls to ODL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393693/+subscriptions
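The pattern the reporter is after can be sketched abstractly. This is illustrative only (not the dave-tucker driver's code): an L3 service plugin persists the router through the l3_db logic and then forwards the call to the ODL controller, rather than create_router() returning straight from l3_db.py.

```python
class OdlL3ForwardingSketch:
    """Illustrative L3 service-plugin shape (hypothetical names): persist
    locally via the injected db call, then notify the ODL controller."""

    def __init__(self, db_create_router, odl_notify):
        self._db_create_router = db_create_router  # stands in for l3_db
        self._odl_notify = odl_notify              # stands in for the ODL client

    def create_router(self, context, router):
        result = self._db_create_router(context, router)  # local DB write
        self._odl_notify("create", "router", result)      # forward to ODL
        return result
```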
[Yahoo-eng-team] [Bug 1393925] [NEW] Race condition adding a security group rule when another is in-progress
Public bug reported:

I've come across a race condition where I sometimes see that a security group rule is never added to iptables if the OVS agent is in the middle of applying another security group rule when the RPC arrives. Here's an example scenario:

nova boot --flavor 1 --image $nova_image dev_server1
sleep 4
neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min --port_range_max default
neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 1112 --port_range_max 1112 default

Wait for the VM to complete booting, then check iptables:

$ sudo iptables-save | grep 111
-A neutron-openvswi-i741ff910-1 -p tcp -m tcp --dport -j RETURN

The second rule is missing, and will only get added if you either add another rule or restart the agent. My config is just devstack, running with the latest openstack bits as of today. OVS agent w/vxlan and DVR enabled, nothing fancy.

I've been able to track this down to the following sequence (I'll attach the complete log as a file due to line wraps):

- OVS agent receives RPC to setup port
- Port info is gathered for devices and filters for security groups are created
- Iptables apply is called
- New security group rule is added, triggering RPC message
- RPC received, and agent seems to add device to list that needs refresh:
    Security group rule updated on remote: [u'5f0f5036-d14c-4b57-a855-ed39deaea256'] security_groups_rule_updated
    Security group rule updated [u'5f0f5036-d14c-4b57-a855-ed39deaea256']
    Adding [u'741ff910-12ba-4c1e-9dc9-38f7cbde0dc4'] devices to the list of devices for which firewall needs to be refreshed _security_group_updated
- Iptables apply is finished
- rpc_loop() in OVS agent does not notice there is more work to do on the next loop, so the rule never gets added

At this point I'm thinking it could be that self.devices_to_refilter is modified in both _security_group_updated() and setup_port_filters() without any lock/semaphore, but the log doesn't explicitly implicate it (perhaps we trust the timestamps too much?). I will continue to investigate, but if someone has an "aha!" moment after reading this far, please add a note. A colleague here has also been able to duplicate this on his own devstack install, so it wasn't my fat-fingering that caused it.

** Affects: neutron
   Importance: Undecided
   Status: New

** Attachment added: OVS agent log around time of VM boot and SG rule addition
   https://bugs.launchpad.net/bugs/1393925/+attachment/4263208/+files/ovs-agent-sg.log

https://bugs.launchpad.net/bugs/1393925

Title: Race condition adding a security group rule when another is in-progress

Status in OpenStack Neutron (virtual network service): New
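The suspected unsynchronized access to devices_to_refilter can be modeled with a small lock-guarded set. This is a simplified sketch of the idea, not the OVS agent's real code:

```python
import threading

class RefilterSet:
    """Sketch of guarding a shared 'devices to refilter' set with a lock
    (a simplified model, not the agent's code), so an update arriving
    while an iptables apply is in flight is not lost."""

    def __init__(self):
        self._lock = threading.Lock()
        self._devices = set()

    def add(self, devices):
        with self._lock:
            self._devices.update(devices)

    def pop_all(self):
        # Atomically take the pending set; anything added afterwards is
        # seen by the next loop iteration instead of disappearing.
        with self._lock:
            pending, self._devices = self._devices, set()
            return pending
```

With add() and pop_all() both holding the lock, a concurrent add cannot interleave with the swap and vanish.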
[Yahoo-eng-team] [Bug 1291414] Re: image create/edit image wizard allows name with white spaces only
** Changed in: horizon
   Status: In Progress => Won't Fix

https://bugs.launchpad.net/bugs/1291414

Title: image create/edit image wizard allows name with white spaces only

Status in OpenStack Dashboard (Horizon): Won't Fix

Bug description:
Description of problem:
When we create/modify an image name, the user interface does not allow setting an empty string. When the user sets a single white space, the image name is changed to the image_id. A name that contains only white spaces should be denied as an empty string.

Version-Release number of selected component (if applicable):

How reproducible: always

Steps to Reproduce:
1. Create an image name from the dashboard.
2. Change the name with the edit option; set spaces only.
3. Save, and the name is changed to the image_id.

Actual results: the image name is taken from the image_id in the database.

Expected results: a white-spaces-only name should be rejected by the UI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291414/+subscriptions
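The check the report asks for is a one-liner. A sketch, not Horizon's actual form code:

```python
def valid_image_name(name):
    """Reject names that are empty, None, or whitespace-only, as the
    report suggests the UI should (illustrative helper)."""
    return bool(name and name.strip())
```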
[Yahoo-eng-team] [Bug 1393958] [NEW] VMware: HTTPInternalServerError (HTTP 500) using --location vsphere://
Public bug reported: If I run the following command to create a new image using glance CLI, the glance API server will throw a 500 error and exception in the logs. Looks like the code needs to be more robust on handling junk in the URI. glance image-create --location vsphere://10.1.1.1/folder/images /openstack-template/openstack-template.vmdk I think this occurs because I did not include ?dcpath=Datacenter/dsName in the URI 2014-11-18 21:59:05.266 11866 INFO glance.wsgi.server [1dd339bd-bb0d-4ae1-a6cc-b3abc64a0644 3380f60eedc54f6eab23ed465b57b24c acbc0289cb974fee9e96e22674b7847f - - -] Traceback (most recent call last): File /usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 384, in handle_one_response result = self.application(self.environ, start_response) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func return self.func(req, *args, **kwargs) File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 378, in __call__ response = req.get_response(self.application) File /usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send application, catch_exc_info=False) File /usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in call_application app_iter = application(self.environ, start_response) File /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, line 582, in __call__ return self.app(env, start_response) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func return self.func(req, *args, **kwargs) File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 378, in __call__ response = req.get_response(self.application) File /usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send application, 
catch_exc_info=False) File /usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in call_application app_iter = application(self.environ, start_response) File /usr/lib/python2.7/dist-packages/paste/urlmap.py, line 206, in __call__ return app(environ, start_response) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__ return resp(environ, start_response) File /usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__ response = self.app(environ, start_response) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__ return resp(environ, start_response) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__ resp = self.call_func(req, *args, **self.kwargs) File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func return self.func(req, *args, **kwargs) File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 644, in __call__ request, **action_args) File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 668, in dispatch return method(*args, **kwargs) File /usr/lib/python2.7/dist-packages/glance/common/utils.py, line 438, in wrapped return func(self, req, *args, **kwargs) File /usr/lib/python2.7/dist-packages/glance/api/v1/images.py, line 795, in create image_meta = self._reserve(req, image_meta) File /usr/lib/python2.7/dist-packages/glance/api/v1/images.py, line 504, in _reserve store = get_store_from_location(location) File /usr/lib/python2.7/dist-packages/glance/store/__init__.py, line 314, in get_store_from_location loc = location.get_location_from_uri(uri) File /usr/lib/python2.7/dist-packages/glance/store/location.py, line 75, in get_location_from_uri store_location_class=scheme_info['location_class']) File /usr/lib/python2.7/dist-packages/glance/store/location.py, line 116, in __init__ self.store_location.parse_uri(uri) File /usr/lib/python2.7/dist-packages/glance/store/vmware_datastore.py, line 398, in parse_uri self.query = path[1] 
IndexError: list index out of range ** Affects: glance Importance: Undecided Status: New ** Tags: vmware -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1393958 Title: VMware: HTTPInternalServerError (HTTP 500) using --location vsphere:// Status in OpenStack Image Registry and Delivery Service (Glance): New Bug description: If I run the following command to create a new image using the glance CLI, the glance API server throws a 500 error and logs an exception. The code needs to be more robust in handling junk in the URI. glance image-create --location vsphere://10.1.1.1/folder/images /openstack-template/openstack-template.vmdk I think this occurs because I did not
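The traceback above ends in parse_uri indexing the second element of a split without checking it exists. The sketch below is a hypothetical, more defensive way to split such a location URI; it is not the actual glance.store.vmware_datastore implementation, and the function name is illustrative.

```python
from urllib.parse import urlparse

def parse_vsphere_uri(uri):
    """Split a vsphere:// location into (datastore_path, query).

    Returns query=None when the URI carries no '?dcPath=...' part,
    instead of raising IndexError on a missing split component.
    """
    parsed = urlparse(uri)
    if parsed.scheme != 'vsphere':
        raise ValueError('not a vsphere:// URI: %s' % uri)
    # urlparse already separates the query, so no fragile
    # path.split('?')[1] indexing is needed
    return parsed.path, parsed.query or None
```

With this shape, a URI lacking the dcPath query yields a None query that the caller can reject with a 400 instead of a 500.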
[Yahoo-eng-team] [Bug 1393991] [NEW] big switch L3: missing tenant ID for external gateway causes expensive query on backend
Public bug reported: The external gateway info in the router update only includes the network_id. Even though this is a unique ID on the OpenStack side, this field isn't unique on the backend, so the query is very expensive. The plugin side should simply look up the network and include its tenant ID in the request to the backend. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1393991 Title: big switch L3: missing tenant ID for external gateway causes expensive query on backend Status in OpenStack Neutron (virtual network service): New Bug description: The external gateway info in the router update only includes the network_id. Even though this is a unique ID on the OpenStack side, this field isn't unique on the backend, so the query is very expensive. The plugin side should simply look up the network and include its tenant ID in the request to the backend. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1393991/+subscriptions
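The suggested fix can be sketched as follows. All names here are illustrative, not the real Big Switch plugin API: the idea is just to resolve the external network locally and attach its tenant ID so the backend can scope the lookup to one tenant.

```python
def enrich_gateway_info(gw_info, get_network):
    """Attach the external network's tenant_id to the gateway info.

    gw_info: dict holding at least 'network_id' (or None).
    get_network: callable mapping a network id to a network dict,
    standing in for the plugin's local DB lookup.
    """
    if not gw_info or 'network_id' not in gw_info:
        return gw_info
    net = get_network(gw_info['network_id'])
    # copy so the caller's dict is left untouched
    return dict(gw_info, tenant_id=net['tenant_id'])
```

A single indexed lookup on the Neutron side replaces a full scan over all tenants on the controller.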
[Yahoo-eng-team] [Bug 1393920] Re: I18n
** Also affects: keystone Importance: Undecided Status: New ** No longer affects: openstack-manuals -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1393920 Title: I18n Status in OpenStack Identity (Keystone): New Bug description: https://review.openstack.org/131287 Dear documentation bug triager. This bug was created here because we did not know how to map the project name openstack/keystonemiddleware to a launchpad project name. This indicates that the notify_impact config needs tweaks. You can ask the OpenStack infra team (#openstack-infra on freenode) for help if you need to. commit 273539bf26a7848a8e49e3ec60b12d2964daca67 Author: Brant Knudson bknud...@us.ibm.com Date: Mon Oct 27 16:46:04 2014 -0500 I18n The strings weren't marked for translation. DocImpact implements bp keystonemiddleware-i18n Change-Id: Ic7da29b54b1547ff8df002bd77f61f2ebff35217 To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1393920/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog
** Also affects: python-saharaclient Importance: Undecided Status: New ** Changed in: python-saharaclient Assignee: (unassigned) = Telles Mota Vidal Nóbrega (tellesmvn) ** Changed in: sahara Status: In Progress = Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1356053 Title: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog Status in devstack - openstack dev environments: In Progress Status in OpenStack Dashboard (Horizon): In Progress Status in Python client library for Sahara (ex. Savanna): New Status in OpenStack Data Processing (Sahara, ex. Savanna): Confirmed Bug description: When using the keystone static catalog file to register endpoints (http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog), an endpoint registered (correctly) as catalog.region.data_processing gets read as data-processing by keystone. Thus, when Sahara looks for an endpoint, it is unable to find one for data_processing. This causes a problem with the commandline interface and the dashboard. Keystone seems to be converting underscores to dashes here: https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47 modifying this line to not perform the replacement seems to work fine for me, but may have unintended consequences. To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
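The substitution described above is small enough to reproduce in isolation. This minimal model of the behaviour at templated.py#L47 (the real code operates on catalog template keys) shows why a service registered as data_processing is never found under that name:

```python
def catalog_service_type(template_key):
    # The templated catalog backend rewrites '_' to '-' in keys read
    # from the template file, so the advertised service type differs
    # from the one the client asks for.
    return template_key.replace('_', '-')
```

A client querying the catalog for 'data_processing' therefore misses the entry published as 'data-processing', which is exactly the lookup failure Sahara and the dashboard hit.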
[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog
Fix proposed to branch: master Review: https://review.openstack.org/135458 ** Changed in: horizon Status: Won't Fix = In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1356053 Title: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog Status in devstack - openstack dev environments: In Progress Status in OpenStack Dashboard (Horizon): In Progress Status in Python client library for Sahara (ex. Savanna): New Status in OpenStack Data Processing (Sahara, ex. Savanna): Confirmed Bug description: When using the keystone static catalog file to register endpoints (http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog), an endpoint registered (correctly) as catalog.region.data_processing gets read as data-processing by keystone. Thus, when Sahara looks for an endpoint, it is unable to find one for data_processing. This causes a problem with the commandline interface and the dashboard. Keystone seems to be converting underscores to dashes here: https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47 modifying this line to not perform the replacement seems to work fine for me, but may have unintended consequences. To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1394009] [NEW] unmocked get_network calls in admin subnet tests
Public bug reported: _get_network on the subnets table calls api.neutron.network_get() in all the tests, but the call is not mocked. Although this does not show up in test output, it causes problems in a subsequent change. ** Affects: horizon Importance: Low Assignee: David Lyle (david-lyle) Status: New ** Tags: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1394009 Title: unmocked get_network calls in admin subnet tests Status in OpenStack Dashboard (Horizon): New Bug description: _get_network on the subnets table calls api.neutron.network_get() in all the tests, but the call is not mocked. Although this does not show up in test output, it causes problems in a subsequent change. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1394009/+subscriptions
[Yahoo-eng-team] [Bug 1394015] [NEW] volume should not have delete volume action item when it has snapshots
Public bug reported: In Project->Compute->Volumes and Admin->Volumes. Current: When a volume has a snapshot, the delete volume action item shows up; when you click on delete volume, it comes back with an error saying Unable to delete volume, one or more snapshots depend on it. Expected: The delete volume action item should not be shown if the volume has a snapshot on it. ** Affects: horizon Importance: Undecided Assignee: Gloria Gu (gloria-gu) Status: New ** Changed in: horizon Assignee: (unassigned) => Gloria Gu (gloria-gu) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1394015 Title: volume should not have delete volume action item when it has snapshots Status in OpenStack Dashboard (Horizon): New Bug description: In Project->Compute->Volumes and Admin->Volumes. Current: When a volume has a snapshot, the delete volume action item shows up; when you click on delete volume, it comes back with an error saying Unable to delete volume, one or more snapshots depend on it. Expected: The delete volume action item should not be shown if the volume has a snapshot on it. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1394015/+subscriptions
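The expected behaviour amounts to an extra condition in the delete action's allowed() check. This is a hedged sketch independent of the real Horizon table classes; snapshot_count is an illustrative field, not an actual Cinder volume attribute, and the deletable states are assumptions.

```python
# Assumed deletable states for the sketch
DELETABLE_STATES = ('available', 'error')

def delete_volume_allowed(volume):
    """Show the delete action only for volumes that can be deleted.

    volume: a dict standing in for the API object, with 'status'
    and an illustrative 'snapshot_count'.
    """
    return (volume.get('status') in DELETABLE_STATES
            and volume.get('snapshot_count', 0) == 0)
```

Hiding the action up front avoids the round trip that currently ends in the "one or more snapshots depend on it" error.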
[Yahoo-eng-team] [Bug 1394020] [NEW] Fix enable_metadata_network flag
Public bug reported: The following patch: 9569b2fe broke the desired functionality of the enable_metadata_network flag. The author of this patch was not aware that the enable_metadata_network flag was used to spin up ns-metadata-proxies for plugins that do not use the l3-agent (where this agent will spin up the metadata proxy). ** Affects: neutron Importance: High Assignee: Aaron Rosen (arosen) Status: New ** Changed in: neutron Importance: Undecided = Critical ** Changed in: neutron Importance: Critical = High ** Changed in: neutron Assignee: (unassigned) = Aaron Rosen (arosen) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1394020 Title: Fix enable_metadata_network flag Status in OpenStack Neutron (virtual network service): New Bug description: The following patch: 9569b2fe broke the desired functionality of the enable_metadata_network flag. The author of this patch was not aware that the enable_metadata_network flag was used to spin up ns-metadata-proxies for plugins that do not use the l3-agent (where this agent will spin up the metadata proxy). To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1394020/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1394022] [NEW] v2.1 api sample tests taking a long time to run
Public bug reported: There have been reports that the api sample tests for v2.1 are taking significantly longer to run than the v2 versions. This needs to be investigated and fixed. E.g. is it setup time (stevedore?), is it jsonschema input validation, or something else? ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1394022 Title: v2.1 api sample tests taking a long time to run Status in OpenStack Compute (Nova): New Bug description: There have been reports that the api sample tests for v2.1 are taking significantly longer to run than the v2 versions. This needs to be investigated and fixed. E.g. is it setup time (stevedore?), is it jsonschema input validation, or something else? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1394022/+subscriptions
[Yahoo-eng-team] [Bug 1394026] [NEW] floatingip_agent_gateway port is not deleted on fip disassociate
Public bug reported: When the last FIP is disassociated on a node, the floating IP agent gateway port should be deleted from the db. The same thing should happen when a nova VM is deleted on a host which was the last FIP associated VM. The delete VM path is currently working, but the disassociate path is not. ** Affects: neutron Importance: Undecided Status: New ** Tags: l3-dvr-backlog ** Tags added: l3-dvr-backlog -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1394026 Title: floatingip_agent_gateway port is not deleted on fip disassociate Status in OpenStack Neutron (virtual network service): New Bug description: When the last FIP is disassociated on a node, the floating IP agent gateway port should be deleted from the db. The same thing should happen when a nova VM is deleted on a host which was the last FIP associated VM. The delete VM path is currently working, but the disassociate path is not. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1394026/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
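The missing cleanup on the disassociate path can be sketched like this. The names and data shapes are assumptions for illustration, not the real DVR plugin API: the point is only that, after unbinding a floating IP, the agent gateway port for its host is removable exactly when no other associated floating IP remains on that host.

```python
def hosts_to_clean_after_disassociate(fips, fip_id):
    """Unbind one floating IP and report hosts whose agent gateway
    port is now unused.

    fips: list of dicts with 'id', 'host', and 'port_id' (None when
    the FIP is not associated). Mutates the matching entry.
    """
    host = None
    for fip in fips:
        if fip['id'] == fip_id:
            fip['port_id'] = None
            host = fip['host']
            break
    if host is None:
        return []
    # keep the gateway port while any other FIP on the host is bound
    still_used = any(f['port_id'] and f['host'] == host for f in fips)
    return [] if still_used else [host]
```

The delete-VM path described above already performs the equivalent check; the bug is that the disassociate path does not.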
[Yahoo-eng-team] [Bug 1394030] [NEW] big switch: optimized floating IP calls missing data
Public bug reported: The newest version of the backend controller supports a floating IP API instead of propagating floating IP operations through full network updates. When testing with this new API, we found that the data is missing from the body on the Big Switch neutron plugin side so the optimized path doesn't work correctly. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1394030 Title: big switch: optimized floating IP calls missing data Status in OpenStack Neutron (virtual network service): New Bug description: The newest version of the backend controller supports a floating IP API instead of propagating floating IP operations through full network updates. When testing with this new API, we found that the data is missing from the body on the Big Switch neutron plugin side so the optimized path doesn't work correctly. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1394030/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1394034] [NEW] No ports available for users without admin role
Public bug reported: When a regular user (without the admin role) tries to associate a Floating IP with an instance, the list of ports is empty. When the user has the admin role, the problem doesn't appear. I believe the regular user does not have access to something that is needed to ensure reachability of the network from the Floating IP's router. This problem does not have anything to do with similar reports related to DVR (distributed virtual routers). This bug is caused by the fix to bug #1252403. Going one commit before that bug's fix was committed solves the problem. Going back in RDO packages to openstack-dashboard-2014.2-0.2.el7.centos.noarch.rpm (from Sep 16) also solves the problem. ** Affects: horizon Importance: Undecided Status: New ** Tags: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1394034 Title: No ports available for users without admin role Status in OpenStack Dashboard (Horizon): New Bug description: When a regular user (without the admin role) tries to associate a Floating IP with an instance, the list of ports is empty. When the user has the admin role, the problem doesn't appear. I believe the regular user does not have access to something that is needed to ensure reachability of the network from the Floating IP's router. This problem does not have anything to do with similar reports related to DVR (distributed virtual routers). This bug is caused by the fix to bug #1252403. Going one commit before that bug's fix was committed solves the problem. Going back in RDO packages to openstack-dashboard-2014.2-0.2.el7.centos.noarch.rpm (from Sep 16) also solves the problem.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1394034/+subscriptions
[Yahoo-eng-team] [Bug 1394035] [NEW] Regular users cannot disassociate or delete Floating IP address
Public bug reported: In the Access &amp; Security view, regular users (Member, not admin role) do not see the buttons to disassociate or delete Floating IP addresses. This can be temporarily fixed by changing the neutron policy.json files that are fed to Horizon: "update_floatingip": "rule:admin_or_owner", "delete_floatingip": "rule:admin_or_owner", --- "update_floatingip": "rule:admin_or_owner or rule:regular_user", "delete_floatingip": "rule:admin_or_owner or rule:regular_user", Adding rule:regular_user forces Horizon to display those buttons. This is a bad workaround, and I believe the real problem lies somewhere in the way Horizon determines the admin_or_owner rule's value. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1394035 Title: Regular users cannot disassociate or delete Floating IP address Status in OpenStack Dashboard (Horizon): New Bug description: In the Access &amp; Security view, regular users (Member, not admin role) do not see the buttons to disassociate or delete Floating IP addresses. This can be temporarily fixed by changing the neutron policy.json files that are fed to Horizon: "update_floatingip": "rule:admin_or_owner", "delete_floatingip": "rule:admin_or_owner", --- "update_floatingip": "rule:admin_or_owner or rule:regular_user", "delete_floatingip": "rule:admin_or_owner or rule:regular_user", Adding rule:regular_user forces Horizon to display those buttons. This is a bad workaround, and I believe the real problem lies somewhere in the way Horizon determines the admin_or_owner rule's value. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1394035/+subscriptions
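Why admin_or_owner might evaluate false for actual owners can be modelled in a few lines. This is a simplified stand-in for the policy check Horizon performs, not the real oslo.policy engine; the rule typically expands to something like "role:admin or tenant_id:%(tenant_id)s", and if the dashboard supplies no tenant_id in the check target, the owner half can never match:

```python
def admin_or_owner(creds, target):
    """Simplified admin_or_owner rule: admin role, or matching tenant."""
    if 'admin' in creds.get('roles', ()):
        return True
    tgt_tenant = target.get('tenant_id')
    # With an empty target, the owner branch is unsatisfiable,
    # so only admins pass -- matching the symptom described above.
    return tgt_tenant is not None and creds.get('tenant_id') == tgt_tenant
```

Under this model the rule:regular_user workaround succeeds simply because it bypasses the tenant comparison entirely.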
[Yahoo-eng-team] [Bug 1394041] [NEW] Demo user's Identity dashboard :Info: Insufficient privilege level to view project information.
Public bug reported: Testing steps: 1: git clone https://git.openstack.org/openstack-dev/devstack 2: cd devstack; ./stack.sh 3: login as demo 4: there is an Identity dashboard, but it reports Info: Insufficient privilege level to view project information. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1394041 Title: Demo user's Identity dashboard :Info: Insufficient privilege level to view project information. Status in OpenStack Dashboard (Horizon): New Bug description: Testing steps: 1: git clone https://git.openstack.org/openstack-dev/devstack 2: cd devstack; ./stack.sh 3: login as demo 4: there is an Identity dashboard, but it reports Info: Insufficient privilege level to view project information. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1394041/+subscriptions
[Yahoo-eng-team] [Bug 1394043] [NEW] KeyError: 'gw_port_host' seen for DVR router removal
Public bug reported: In some multi-node setups, a qrouter namespace might be hosted on a node where only a dhcp port is hosted (no VMs, no SNAT). When the router is removed from the db, the host with only the qrouter and dhcp namespace will have the qrouter namespace remain. Other hosts with the same qrouter will remove the namespace. The following KeyError is seen on the host with the remaining namespace - 2014-11-18 17:18:43.334 ERROR neutron.agent.l3_agent [-] 'gw_port_host' 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent call last): 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent File /opt/stack/neutron/neutron/common/utils.py, line 341, in call 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent return func(*args, **kwargs) 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent self.external_gateway_removed(ri, ri.ex_gw_port, interface_name) 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in external_gateway_removed 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent ri.router['gw_port_host'] == self.host): 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent KeyError: 'gw_port_host' 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent call last): File /usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py, line 82, in _spawn_n_impl func(*args, **kwargs) File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1842, in _process_router_update self._process_router_if_compatible(router) File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1817, in _process_router_if_compatible self.process_router(ri) File /opt/stack/neutron/neutron/common/utils.py, line 344, in call self.logger(e) File /opt/stack/neutron/neutron/openstack/common/excutils.py, line 82, in __exit__ six.reraise(self.type_, self.value, 
self.tb) File /opt/stack/neutron/neutron/common/utils.py, line 341, in call return func(*args, **kwargs) File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router self.external_gateway_removed(ri, ri.ex_gw_port, interface_name) File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in external_gateway_removed ri.router['gw_port_host'] == self.host): KeyError: 'gw_port_host' For the issue to be seen, the router in question needs to have the router-gateway-set previously. ** Affects: neutron Importance: Undecided Assignee: Mike Smith (michael-smith6) Status: New ** Tags: l3-dvr-backlog ** Changed in: neutron Assignee: (unassigned) = Mike Smith (michael-smith6) ** Tags added: l3-dvr-backlog -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1394043 Title: KeyError: 'gw_port_host' seen for DVR router removal Status in OpenStack Neutron (virtual network service): New Bug description: In some multi-node setups, a qrouter namespace might be hosted on a node where only a dhcp port is hosted (no VMs, no SNAT). When the router is removed from the db, the host with only the qrouter and dhcp namespace will have the qrouter namespace remain. Other hosts with the same qrouter will remove the namespace. 
The following KeyError is seen on the host with the remaining namespace - 2014-11-18 17:18:43.334 ERROR neutron.agent.l3_agent [-] 'gw_port_host' 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent call last): 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent File /opt/stack/neutron/neutron/common/utils.py, line 341, in call 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent return func(*args, **kwargs) 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent self.external_gateway_removed(ri, ri.ex_gw_port, interface_name) 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in external_gateway_removed 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent ri.router['gw_port_host'] == self.host): 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent KeyError: 'gw_port_host' 2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent call last): File /usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py, line 82, in _spawn_n_impl func(*args, **kwargs) File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1842, in _process_router_update self._process_router_if_compatible(router) File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1817, in _process_router_if_compatible
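A hedged sketch of the defensive fix follows; the surrounding l3-agent code is paraphrased, not quoted. Routers arriving on hosts that never held the SNAT role may lack the 'gw_port_host' key, so the comparison at l3_agent.py line 1429 should use dict.get() instead of direct indexing:

```python
def is_snat_host(router, my_host):
    """True when this agent's host owns the router's SNAT gateway.

    router: the router dict pushed by the server, which may omit
    'gw_port_host' on hosts with only qrouter/dhcp namespaces.
    """
    return router.get('gw_port_host') == my_host
```

With .get(), the missing key simply evaluates as "not the SNAT host", letting external_gateway_removed proceed and the stale qrouter namespace be torn down.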
[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog
** Also affects: tempest Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1356053 Title: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog Status in devstack - openstack dev environments: In Progress Status in OpenStack Dashboard (Horizon): In Progress Status in Python client library for Sahara (ex. Savanna): In Progress Status in OpenStack Data Processing (Sahara, ex. Savanna): In Progress Status in Tempest: New Bug description: When using the keystone static catalog file to register endpoints (http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog), an endpoint registered (correctly) as catalog.region.data_processing gets read as data-processing by keystone. Thus, when Sahara looks for an endpoint, it is unable to find one for data_processing. This causes a problem with the commandline interface and the dashboard. Keystone seems to be converting underscores to dashes here: https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47 modifying this line to not perform the replacement seems to work fine for me, but may have unintended consequences. To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1394051] [NEW] Can't display port list in Manage Floating IP Associations page
Public bug reported: I used the commands below to configure floating IPs. Juno on CentOS 7.

neutron net-create public --shared --router:external True --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 125
neutron subnet-create public --name public-subnet \
  --allocation-pool start=125.2.249.170,end=125.2.249.248 \
  --disable-dhcp --gateway 125.2.249.1 --dns-nameserver 125.1.166.20 125.2.249.0/24
neutron net-create --shared OAM120 \
  --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 120
neutron subnet-create --name oam120-subnet \
  --allocation-pool start=192.168.120.1,end=192.168.120.200 \
  --gateway 192.168.120.254 --dns-nameserver 10.1.1.1 --dns-nameserver 125.1.166.20 OAM120 192.168.120.0/24
neutron router-create my-router
neutron router-interface-add my-router oam120-subnet
neutron router-gateway-set my-router public

I checked the dashboard code; it seems there are some errors in the code below, in /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py:

    def _get_reachable_subnets(self, ports):
        # Retrieve subnet list reachable from external network
        ext_net_ids = [ext_net.id for ext_net in self.list_pools()]
        gw_routers = [r.id for r in router_list(self.request)
                      if (r.external_gateway_info and
                          r.external_gateway_info.get('network_id') in ext_net_ids)]
        reachable_subnets = set([p.fixed_ips[0]['subnet_id'] for p in ports
                                 if ((p.device_owner == 'network:router_interface') and
                                     (p.device_id in gw_routers))])
        return reachable_subnets

Why does it only list device_owner == 'network:router_interface'? I guess it should list all device_owner == 'compute:xxx'. Here is my workaround, as diff output from /usr/share/openstack-dashboard:

[root@jn-controller openstack-dashboard]# diff ./openstack_dashboard/api/neutron.py.orig ./openstack_dashboard/api/neutron.py
413,415c415
< if ((p.device_owner == 'network:router_interface') and (p.device_id in gw_routers))])
---
> if (p.device_owner.startswith('compute:'))])

**
Affects: horizon
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394051

Title:
  Can't display port list in Manage Floating IP Associations page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I used the commands below to configure a floating IP. Juno on CentOS 7.

  neutron net-create public --shared --router:external True \
      --provider:network_type vlan --provider:physical_network physnet2 \
      --provider:segmentation_id 125
  neutron subnet-create public --name public-subnet \
      --allocation-pool start=125.2.249.170,end=125.2.249.248 \
      --disable-dhcp --gateway 125.2.249.1 \
      --dns-nameserver 125.1.166.20 125.2.249.0/24
  neutron net-create --shared OAM120 \
      --provider:network_type vlan --provider:physical_network physnet2 \
      --provider:segmentation_id 120
  neutron subnet-create --name oam120-subnet \
      --allocation-pool start=192.168.120.1,end=192.168.120.200 \
      --gateway 192.168.120.254 --dns-nameserver 10.1.1.1 \
      --dns-nameserver 125.1.166.20 OAM120 192.168.120.0/24
  neutron router-create my-router
  neutron router-interface-add my-router oam120-subnet
  neutron router-gateway-set my-router public

  I just checked the dashboard code; it seems there is an error in the
  code below.
  /usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py

      def _get_reachable_subnets(self, ports):
          # Retrieve subnet list reachable from external network
          ext_net_ids = [ext_net.id for ext_net in self.list_pools()]
          gw_routers = [r.id for r in router_list(self.request)
                        if (r.external_gateway_info and
                            r.external_gateway_info.get('network_id')
                            in ext_net_ids)]
          reachable_subnets = set([p.fixed_ips[0]['subnet_id'] for p in ports
                                   if ((p.device_owner ==
                                        'network:router_interface') and
                                       (p.device_id in gw_routers))])
          return reachable_subnets

  Why does it list only ports with device_owner ==
  'network:router_interface'? I guess it should list all ports with
  device_owner == 'compute:xxx'.

  Here is my workaround (diff output, run from /usr/share/openstack-dashboard):

  [root@jn-controller openstack-dashboard]# diff ./openstack_dashboard/api/neutron.py.orig ./openstack_dashboard/api/neutron.py
  413,415c415
  <                                  if ((p.device_owner == 'network:router_interface') and
  <                                      (p.device_id in gw_routers))])
  ---
  >                                  if (p.device_owner.startswith('compute:'))])
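The workaround can be exercised in isolation. Below is a hedged, self-contained sketch of the proposed filtering; the Port namedtuple is a stand-in for Horizon's API wrapper objects, not the real openstack_dashboard code:

```python
# Sketch of the reporter's workaround: list subnets of ports whose
# device_owner starts with 'compute:' instead of only router interfaces.
from collections import namedtuple

# Stand-in for the port objects Horizon's neutron API wrapper returns.
Port = namedtuple('Port', ['device_owner', 'device_id', 'fixed_ips'])

def reachable_subnets(ports):
    """Return subnet IDs of ports attached to compute instances."""
    return {p.fixed_ips[0]['subnet_id']
            for p in ports
            if p.device_owner.startswith('compute:')}

ports = [
    Port('network:router_interface', 'router-1', [{'subnet_id': 'subnet-a'}]),
    Port('compute:nova', 'vm-1', [{'subnet_id': 'subnet-b'}]),
]
print(reachable_subnets(ports))  # {'subnet-b'}
```

Note that this drops the gateway-router check entirely, which is what the reporter's diff does as well; a stricter fix might combine both conditions.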
[Yahoo-eng-team] [Bug 1394052] [NEW] Fix exception handling in _get_host_metrics()
Public bug reported:

In resource_tracker.py, the exception path of _get_host_metrics()
contains a wrong variable name.

    for monitor in self.monitors:
        try:
            metrics += monitor.get_metrics(nodename=nodename)
        except Exception:
            LOG.warn(_("Cannot get the metrics from %s."), monitors)

'monitors' needs to be changed to 'monitor'.

** Affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394052

Title:
  Fix exception handling in _get_host_metrics()

Status in OpenStack Compute (Nova):
  New

Bug description:
  In resource_tracker.py, the exception path of _get_host_metrics()
  contains a wrong variable name.

      for monitor in self.monitors:
          try:
              metrics += monitor.get_metrics(nodename=nodename)
          except Exception:
              LOG.warn(_("Cannot get the metrics from %s."), monitors)

  'monitors' needs to be changed to 'monitor'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394052/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
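The fix is a one-word change. Here is a hedged, runnable reproduction of the corrected loop; FakeMonitor and get_host_metrics are stand-ins for Nova's resource tracker internals, not the actual classes:

```python
# Demonstrates the corrected exception path: inside the loop the
# per-iteration variable is 'monitor', so the log call must reference
# it rather than the containing 'monitors' list.
import logging

LOG = logging.getLogger(__name__)

class FakeMonitor:
    """Stand-in for a nova compute monitor whose backend is down."""
    def get_metrics(self, nodename):
        raise RuntimeError('backend unavailable')

def get_host_metrics(monitors, nodename):
    metrics = []
    for monitor in monitors:
        try:
            metrics += monitor.get_metrics(nodename=nodename)
        except Exception:
            # Fixed: log the single failing monitor, not the whole list.
            LOG.warning('Cannot get the metrics from %s.', monitor)
    return metrics

print(get_host_metrics([FakeMonitor()], 'node-1'))  # []
```

With the original bug the message would print the entire monitor list on every failure, hiding which monitor actually raised.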
[Yahoo-eng-team] [Bug 1394061] [NEW] unable to set resolve.conf
Public bug reported:

# The top level settings are used as module
# and system configuration.

# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
  - default

# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true

locale: en_US.UTF-8

apt_preserve_sources_list: true

# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: true

manage_etc_hosts: false

manage-resolv-conf: true

resolv_conf:
  nameservers: ['208.67.222.222', '127.0.0.1', '208.67.220.220']
  searchdomains:
    - my.domain.net
  domain: domain.net
  options:
    rotate: true
    timeout: 1

# write_files:
#   - path: /etc/resolv.conf
#     permissions: 0644
#     owner: root
#     content: |
#       nameserver 208.67.222.222
#       nameserver 127.0.0.1
#       nameserver 208.67.220.220

# Example datasource config
# datasource:
#   Ec2:
#     metadata_urls: [ 'blah.com' ]
#     timeout: 5      # (defaults to 50 seconds)
#     max_wait: 10    # (defaults to 120 seconds)

# The modules that run in the 'init' stage
cloud_init_modules:
  - migrator
  - seed_random
  - bootcmd
  - write-files
  - growpart
  - resizefs
  - set_hostname
  - update_hostname
  - update_etc_hosts
  - ca-certs
  - rsyslog
  - users-groups
  - ssh
#  - resolv_conf

# The modules that run in the 'config' stage
cloud_config_modules:
  # Emit the cloud config ready event
  # this can be used by upstart jobs for 'start on cloud-config'.
  - emit_upstart
  - disk_setup
  - mounts
  - ssh-import-id
  - locale
  - set-passwords
  - grub-dpkg
  - apt-pipelining
  - apt-configure
  - package-update-upgrade-install
  - landscape
  - timezone
  - puppet
  - chef
  - salt-minion
  - mcollective
  - disable-ec2-metadata
  - runcmd
  - byobu

# The modules that run in the 'final' stage
cloud_final_modules:
  - rightscale_userdata
  - scripts-vendor
  - scripts-per-once
  - scripts-per-boot
  - scripts-per-instance
  - scripts-user
  - ssh-authkey-fingerprints
  - keys-to-console
  - phone-home
  - final-message
  - power-state-change

# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: ubuntu
  # Default user name + that default user's groups (if added/used)
  default_user:
    name: admin
    lock_passwd: false
    gecos: Ubuntu
    groups: [adm, audio, cdrom, dialout, dip, floppy, netdev, plugdev, sudo, video]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
    upstart_dir: /etc/init/
  ssh_svcname: ssh

For some odd reason resolv.conf keeps getting overwritten.

** Affects: cloud-init
   Importance: Undecided
       Status: New

** Description changed:

  # The top level settings are used as module
  # and system configuration.
  # A set of users which may be applied and/or used by various modules
  # when a 'default' entry is found it will reference the 'default_user'
  # from the distro configuration specified below
  users:
- - default
+ - default
- # If this is set, 'root' will not be able to ssh in and they
+ # If this is set, 'root' will not be able to ssh in and they
  # will get a message to login instead as the above $user (ubuntu)
  disable_root: true
  locale: en_US.UTF-8
  apt_preserve_sources_list: true
  # This will cause the set+update hostname module to not operate (if true)
  preserve_hostname: true
  manage_etc_hosts: false
  manage-resolv-conf: true
  resolv_conf:
- nameservers: ['208.67.222.222', '127.0.0.1', '208.67.220.220']
- searchdomains:
- - arjun.refugeez.net
- domain: refugeez.net
- options:
- rotate: true
- timeout: 1
+ nameservers: ['208.67.222.222', '127.0.0.1', '208.67.220.220']
+ searchdomains:
+ - my.domain.net
+ domain: domain.net
+ options:
+ rotate: true
+ timeout: 1
  # write_files:
  # - path: /etc/resolv.conf
  #permissions: 0644
  #owner: root
  #content: |
  # nameserver 208.67.222.222
  # nameserver 127.0.0.1
  # nameserver 208.67.220.220
- # Example datasource config
- # datasource:
- #Ec2:
+ # datasource:
+ #Ec2:
  # metadata_urls: [ 'blah.com' ]
  # timeout: 5 # (defaults to 50 seconds)
  # max_wait: 10 # (defaults to 120 seconds)
  # The modules that run in the 'init' stage
  cloud_init_modules:
- - migrator
- - seed_random
- - bootcmd
- - write-files
- - growpart
- - resizefs
- - set_hostname
- - update_hostname
- - update_etc_hosts
- - ca-certs
- - rsyslog
- - users-groups
- - ssh
+ - migrator
+ - seed_random
+ - bootcmd
+ - write-files
+ - growpart
+ - resizefs
+ - set_hostname
+ -
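A possible explanation for the overwriting (a hedged guess, not confirmed in this thread): the config spells the key `manage-resolv-conf` with hyphens, while cloud-init's `cc_resolv_conf` module reads `manage_resolv_conf` with underscores, and the `resolv_conf` entry is commented out in `cloud_init_modules`, so the settings are never applied and the DHCP client rewrites /etc/resolv.conf on each boot. A sketch of the corrected fragment, assuming a distro where `cc_resolv_conf` is supported:

```yaml
# Use the underscore spelling that the resolv_conf module actually reads.
manage_resolv_conf: true

resolv_conf:
  nameservers: ['208.67.222.222', '127.0.0.1', '208.67.220.220']
  searchdomains:
    - my.domain.net
  domain: domain.net
  options:
    rotate: true
    timeout: 1

cloud_init_modules:
  # ... other init-stage modules as listed above ...
  - resolv_conf   # must be enabled, not commented out
```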
[Yahoo-eng-team] [Bug 1374573] Re: Server hang on external network deletion with FIPs
Reviewed:  https://review.openstack.org/124722
Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=52ee13612feae9edadecb847f90d9f568aca69ec
Submitter: Jenkins
Branch:    master

commit 52ee13612feae9edadecb847f90d9f568aca69ec
Author: Yair Fried yfr...@redhat.com
Date:   Mon Sep 29 14:47:03 2014 +0300

    Adds test for deleting external network with floating IPs

    The attached neutron bug causes the server to hang when deleting an
    external network that still has a floating IP in it. This test should
    recreate the bug and verify it is fixed.

    Closes-Bug: #1374573
    Change-Id: Ib7d8dcbb4485e87a49cb008ace37c81f6b06a32c

** Changed in: tempest
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374573

Title:
  Server hang on external network deletion with FIPs

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  This happens on master. Follow these steps:

  1) neutron net-create test --router:external=True
  2) neutron subnet-create test 200.0.0.0/22 --name test
  3) neutron floatingip-create test
  4) neutron net-delete test

  Watch command 4) hang (the server never comes back). Expected behavior
  would be for the command to succeed and delete the network
  successfully.

  This looks like a regression caused by commit
  b1677dcb80ce8b83aadb2180efad3527a96bd3bc
  (https://review.openstack.org/#/c/82945/).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374573/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1394083] [NEW] ldap user_filter is not honored while authenticating
Public bug reported:

When full LDAP logging is enabled, we can see that the initial LDAP
search query does not use the user_filter while it tries to find the
user DN in the LDAP directory. This causes authentication to fail if we
have two users with the same name in the same tree but with different
ids. We use a memberOf filter to limit which users are seen by Keystone.

I traced the issue to keystone/common/ldap/core.py, method get_by_name,
which only seems to filter by user name, ignoring the filter set in the
configuration.

** Affects: keystone
   Importance: Undecided
       Status: New

** Tags: ldap

** Tags added: ldap

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1394083

Title:
  ldap user_filter is not honored while authenticating

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When full LDAP logging is enabled, we can see that the initial LDAP
  search query does not use the user_filter while it tries to find the
  user DN in the LDAP directory. This causes authentication to fail if
  we have two users with the same name in the same tree but with
  different ids. We use a memberOf filter to limit which users are seen
  by Keystone.

  I traced the issue to keystone/common/ldap/core.py, method
  get_by_name, which only seems to filter by user name, ignoring the
  filter set in the configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1394083/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
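The fix the reporter implies, making the name lookup honor the configured filter, amounts to AND-ing two LDAP filter clauses (RFC 4515 syntax). A hedged, self-contained sketch; the helper and its names are hypothetical, not Keystone's actual API:

```python
# Builds an LDAP search filter that combines the by-name lookup with a
# configured user_filter, so that only users matching both clauses are
# returned. It only constructs the filter string; no LDAP server needed.
def build_user_query(name_attr, name, user_filter=None):
    """Return an RFC 4515 filter honoring the configured user_filter."""
    name_clause = '(%s=%s)' % (name_attr, name)
    if user_filter:
        # AND the clauses: both the name and the configured filter
        # must match, which disambiguates same-named entries.
        return '(&%s%s)' % (name_clause, user_filter)
    return name_clause

print(build_user_query(
    'cn', 'alice', '(memberOf=cn=cloud,ou=groups,dc=example,dc=com)'))
# (&(cn=alice)(memberOf=cn=cloud,ou=groups,dc=example,dc=com))
```

A production version would also escape special characters in `name` per RFC 4515 (e.g. `*`, `(`, `)`) before interpolating it into the filter.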
[Yahoo-eng-team] [Bug 1385318] Re: Nova fails to add fixed IP
** Also affects: nova
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385318

Title:
  Nova fails to add fixed IP

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I created an instance with one NIC attached. Then I tried to attach
  another NIC:

  nova add-fixed-ip ServerId NetworkId

  Nova compute raises an exception:

  2014-10-24 15:40:33.925 31955 ERROR oslo.messaging.rpc.dispatcher [req-43570a05-937a-4ddf-a0e9-e05d42660817 ] Exception during message handling: Network could not be found for instance 09b6e137-37d6-475d-992c-bdcb7d3cb841.
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py", line 414, in decorated_function
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     payload)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py", line 326, in decorated_function
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py", line 314, in decorated_function
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py", line 3915, in add_fixed_ip_to_instance
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     network_id)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/base_api.py", line 61, in wrapper
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     res = f(self, context, *args, **kwargs)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File "/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 684, in add_fixed_ip_to_instance
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher     instance_id=instance['uuid'])
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher NetworkNotFoundForInstance: Network could not be found for instance 09b6e137-37d6-475d-992c-bdcb7d3cb841.
  2014-10-24
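The final frame of the traceback suggests why the call fails. As a hedged sketch (stand-in code, not Nova's actual implementation): in the Juno-era neutronv2 API, add-fixed-ip only looks for an existing port that the instance already has on the requested network, so asking it to attach a brand-new network finds no port and raises NetworkNotFoundForInstance:

```python
# Illustrates the failure mode seen above: add-fixed-ip adds an address
# to an existing port; it does not create a new port. With no port on
# the requested network, the lookup comes back empty and the exception
# from the log is raised. All names here are stand-ins.
class NetworkNotFoundForInstance(Exception):
    pass

def add_fixed_ip_to_instance(instance_ports, instance_id, network_id):
    """Find an existing port of the instance on network_id, or raise."""
    matching = [p for p in instance_ports
                if p['device_id'] == instance_id
                and p['network_id'] == network_id]
    if not matching:
        # No port on that network yet -> the error from the bug report.
        raise NetworkNotFoundForInstance(
            'Network could not be found for instance %s.' % instance_id)
    return matching[0]
```

If this reading is right, attaching a second NIC would instead go through an interface-attach style operation that creates a new port, rather than add-fixed-ip.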
[Yahoo-eng-team] [Bug 1394041] Re: Demo user's Identity dashboard :Info: Insufficient privilege level to view project information.
Closing this because it is not reproducible anymore.

** Changed in: horizon
       Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1394041

Title:
  Demo user's Identity dashboard: Info: Insufficient privilege level to
  view project information.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Testing steps:
  1: git clone https://git.openstack.org/openstack-dev/devstack
  2: cd devstack; ./stack.sh
  3: login as demo
  4: there is an Identity dashboard, but it reports "Info: Insufficient
     privilege level to view project information."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1394041/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp