[Yahoo-eng-team] [Bug 1301532] Re: Quotas can be exceeded by making highly parallel requests
** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301532

Title:
  Quotas can be exceeded by making highly parallel requests

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  By making parallel API requests to create new keypairs I was able to
  create 162 keypairs when my quota only allows for 100. I suspect this
  is due to the code in Nova doing the check for how many keypairs the
  user currently has at the beginning of the request cycle; if enough
  requests check in parallel, they all return zero before any are
  created, allowing far too many to sneak through. I also suspect this
  behavior holds for any quota'd resource that doesn't go through the
  scheduler.

  This doesn't seem like a high-priority issue with the data currently
  available, but it may be potentially exploitable, hence I'm setting
  the security flag on the ticket just to be sure it gets triaged
  appropriately before we allow any malicious user on the internet to
  exceed their quotas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
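The race described in the report is a classic check-then-act problem. A minimal standalone sketch (not Nova code; all names here are illustrative) showing how parallel requests can all pass the quota check before any record lands, and how making check-plus-insert atomic closes the window:

```python
import threading
import time

QUOTA = 100

def run(create, n_requests=200):
    """Fire n_requests concurrent create calls and return the final count."""
    store = []
    lock = threading.Lock()
    threads = [threading.Thread(target=create, args=(store, lock))
               for _ in range(n_requests)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(store)

def create_racy(store, lock):
    # Check and insert are separate steps, exactly the pattern the bug
    # describes: every request that reads the count before any insert
    # lands will pass the quota check.
    if len(store) < QUOTA:
        time.sleep(0.001)        # stands in for the API round trip
        store.append(object())

def create_atomic(store, lock):
    # Holding a lock across check + insert closes the window; a real
    # service would use a DB transaction or reservation instead.
    with lock:
        if len(store) < QUOTA:
            store.append(object())

print(run(create_racy))    # may well exceed 100
print(run(create_atomic))  # always exactly 100
```

The atomic variant caps the count at exactly the quota no matter how many requests race; the racy one only guarantees the quota is a floor, not a ceiling.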
[Yahoo-eng-team] [Bug 1312690] [NEW] nova-docker snapshot does not return proper image ID
Public bug reported:

With the current implementation of the nova-docker virt driver and the
docker-registry (https://github.com/dotcloud/docker-registry),
snapshotting a docker container does not return the image ID of the
final image created by the snapshot operation.

For example, consumer code should be able to do something like this:

  image_uuid = self.clients(nova).servers.create_image(server, server.name)
  image = self.clients(nova).images.get(image_uuid)
  image = bench_utils.wait_for(
      image,
      is_ready=bench_utils.resource_is(ACTIVE),
      update_resource=bench_utils.get_from_manager(),
      timeout=CONF.benchmark.nova_server_image_create_timeout,
      check_interval=CONF.benchmark.nova_server_image_create_poll_interval
  )

That is, the image returned from create_image should reflect the image
UUID of the final image created during capture. However, with the
docker driver the process actually creates a final image called
image_name:latest.

Example:
- Install devstack + the nova-docker driver
- Pull, tag and push a docker image into glance using docker-registry
  with the glance store
- Create a nova server for docker -- results in a docker container
- Use the nova python API to snapshot the server (see the code snippet
  above)
- The image_uuid returned in the above snippet might point to an image
  named 'myzirdsivgoftfqp'. However, the actual final image created by
  the snapshot is named 'myzirdsivgoftfqp:latest' and is not the same
  image referred to in the return response from the create_image call.

Such behavior impacts consumers and is not consistent with the nova
snapshot behavior.

** Affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312690

Title:
  nova-docker snapshot does not return proper image ID

Status in OpenStack Compute (Nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1312690/+subscriptions
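Until the driver is fixed, a consumer can only recover the real snapshot by name. A hypothetical client-side workaround sketch (the helper and the `Image` shape are ours, not the driver's API):

```python
from collections import namedtuple

# Minimal stand-in for an image record as listed by glance.
Image = namedtuple('Image', ['id', 'name'])

def find_snapshot_image(images, server_name):
    """Resolve the snapshot by its '<name>:latest' name instead of
    trusting the UUID returned by create_image(), since the docker
    driver publishes the final image under that name."""
    wanted = server_name + ':latest'
    for image in images:
        if image.name == wanted:
            return image
    return None
```

This is only a workaround for the mismatch described above; the proper fix is for create_image to return the UUID of the image it actually produces.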
[Yahoo-eng-team] [Bug 1297309] Re: Unauthorized: Unknown auth strategy
There was a nova fix here too: https://review.openstack.org/#/c/82851/

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Fix Committed

** Changed in: nova
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297309

Title:
  Unauthorized: Unknown auth strategy

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Neutron:
  Fix Committed

Bug description:
  I have seen this error occasionally in various (Nova) logs:

  2014-03-25 10:42:58.182 31770 TRACE nova.api.openstack raise exceptions.Unauthorized(message=_('Unknown auth strategy'))
  2014-03-25 10:42:58.182 31770 TRACE nova.api.openstack Unauthorized: Unknown auth strategy

  The full stacktrace can be found with this logstash query:
  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hdXRob3JpemVkOiBVbmtub3duIGF1dGggc3RyYXRlZ3lcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiODY0MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk1NzU2MzQxNTc3fQ==

  As a first step, it would be nice for neutronclient to say what auth
  strategy it is unable to handle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297309/+subscriptions
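The "first step" the reporter asks for is simply to put the offending value in the message. A sketch of that improvement (the supported-strategy list and function name are illustrative, not neutronclient's actual code):

```python
class Unauthorized(Exception):
    pass

def check_auth_strategy(strategy):
    # Include the rejected value and the known alternatives in the
    # message, instead of the bare "Unknown auth strategy".
    supported = ('keystone', 'noauth')   # illustrative list
    if strategy not in supported:
        raise Unauthorized("Unknown auth strategy: %r (supported: %s)"
                           % (strategy, ', '.join(supported)))
```

With this, the Nova trace above would at least show which strategy string reached the client.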
[Yahoo-eng-team] [Bug 1302774] Re: Failed to detach volume because of volume not found error prevents vm teardown
** Changed in: nova
   Status: New => Confirmed

** Changed in: cinder
   Importance: Undecided => High

** Changed in: tempest
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: tempest
   Status: New => Fix Released

** Changed in: tempest
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302774

Title:
  Failed to detach volume because of volume not found error prevents vm
  teardown

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Fix Released

Bug description:
  When running the boto tests in the gate we get a periodic race on
  Cinder volumes that can be seen as follows:

  Attach the volume (vdh) -
  http://logs.openstack.org/29/84829/6/check/check-dg-tempest-dsvm-full-reexec/7e595ac/logs/screen-n-cpu.txt.gz?level=INFO#_2014-04-04_11_38_35_995

  Detach the volume (vdh) -
  http://logs.openstack.org/29/84829/6/check/check-dg-tempest-dsvm-full-reexec/7e595ac/logs/screen-n-cpu.txt.gz?level=INFO#_2014-04-04_11_38_41_477

  Stack trace horribly because the volume is not found -
  http://logs.openstack.org/29/84829/6/check/check-dg-tempest-dsvm-full-reexec/7e595ac/logs/screen-n-cpu.txt.gz?level=INFO#_2014-04-04_11_38_43_866

  Stack trace horribly because we try again? -
  http://logs.openstack.org/29/84829/6/check/check-dg-tempest-dsvm-full-reexec/7e595ac/logs/screen-n-cpu.txt.gz?level=INFO#_2014-04-04_11_42_29_353

  Because of this we end up with a volume in an undeletable state, and
  the tests fail in making it go away (it remains marked in-use even
  though the guest that was using it is gone) -
  http://logs.openstack.org/29/84829/6/check/check-dg-tempest-dsvm-full-reexec/7e595ac/console.html#_2014-04-04_11_42_47_940

  Logstash for these results is:
  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGlza05vdEZvdW5kOiBObyBkaXNrIGF0XCIgQU5EIHRhZ3M6c2NyZWVuLW4tY3B1LnR4dCIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NjYzNDM5MjIyOX0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1302774/+subscriptions
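One common way to keep teardown moving in the face of this race is to treat "disk not found" during detach as already-detached rather than stack-tracing and leaving the volume stuck in-use. A sketch with illustrative names (not the actual Nova/Cinder code or fix):

```python
class DiskNotFound(Exception):
    """Stands in for the 'No disk at ...' error seen in the logs."""

def detach_volume(find_disk, do_detach, log):
    # Tolerant teardown path: if the disk has already vanished, log it
    # and report success so the volume can be marked detached, instead
    # of raising and leaving it undeletable.
    try:
        disk = find_disk()
    except DiskNotFound:
        log("disk already gone; treating volume as detached")
        return
    do_detach(disk)
```

Whether silently tolerating the missing disk is safe depends on why it vanished; the sketch only illustrates the shape of a fix, not the root-cause analysis.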
[Yahoo-eng-team] [Bug 1211602] Re: revocation list should not be a protected resource
** Changed in: keystone
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1211602

Title:
  revocation list should not be a protected resource

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  The Revocation List resources should not be protected. The Revocation
  list is akin to a CRL and likely should be available for public
  consumption as a CRL would be.

  Resources are:
  v3: /auth/tokens/OS-PKI/revoked
  v2: /tokens/revoked

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1211602/+subscriptions
[Yahoo-eng-team] [Bug 1312730] [NEW] exceptions must be old-style classes or derived from BaseException, not NoneType (HTTP 400)
Public bug reported:

I followed:
http://docs.openstack.org/trunk/install-guide/install/yum/content/keystone-users.html

When I run:

  keystone --os-token 487089b7d994017258c2 --os-endpoint http://controller:35357/v2.0 user-create --name=admin --pass=743b12e76412acb4bdc9 --email=root@localhost

it creates the user correctly:

  +----------+----------------------------------+
  | Property |              Value               |
  +----------+----------------------------------+
  |  email   |          root@localhost          |
  | enabled  |               True               |
  |    id    | 32862c8d841c46bf8bd0212ea64db72c |
  |   name   |              admin               |
  | username |              admin               |
  +----------+----------------------------------+

But if I run this command again, I get:

  exceptions must be old-style classes or derived from BaseException,
  not NoneType (HTTP 400)

  # rpm -qf /usr/bin/keystone
  python-keystoneclient-0.8.0-2.el6.noarch

I would expect a more helpful message like: "User already exists."

** Affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1312730

Title:
  exceptions must be old-style classes or derived from BaseException,
  not NoneType (HTTP 400)

Status in OpenStack Identity (Keystone):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1312730/+subscriptions
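The helpful message the reporter asks for comes from translating the backend's duplicate-entry failure into a well-formed conflict error instead of letting a malformed exception surface as the HTTP 400 above. A sketch with illustrative names (not Keystone's actual code):

```python
class DuplicateEntry(Exception):
    """Stands in for the database backend's duplicate-row error."""

class Conflict(Exception):
    """A proper exception type that maps cleanly to HTTP 409."""
    http_status = 409

def create_user(insert, name):
    # Catch the backend failure at the service boundary and re-raise a
    # real exception with a meaningful message, rather than whatever
    # half-constructed object produced the "not NoneType" error.
    try:
        insert(name)
    except DuplicateEntry:
        raise Conflict("User '%s' already exists" % name)
```

The point is purely that the API layer should only ever see real exception classes carrying a usable message.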
[Yahoo-eng-team] [Bug 1312221] Re: Add user objects to mapping rules examples in OS-FEDERATION docs
** Project changed: keystone => openstack-api-site

** Changed in: openstack-api-site
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1312221

Title:
  Add user objects to mapping rules examples in OS-FEDERATION docs

Status in OpenStack API documentation site:
  Confirmed

Bug description:
  All the mapping rules should produce not only a set of Keystone group
  ids but also a user_id. This is also required by the mapping engine
  (https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/utils.py#L224).
  Unfortunately, not all examples in the OS-FEDERATION extension docs
  include one. This should be fixed, and the docs should clearly state
  that all the rules should map the user name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1312221/+subscriptions
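For illustration, a minimal mapping in the OS-FEDERATION rule shape that yields both a user object and a group id might look like the following (the group id and the remote attribute type are placeholders of ours, not values from the bug or the official docs):

```python
import json

# Sketch of one OS-FEDERATION mapping rule: "local" emits a user object
# (satisfying the mapping engine's requirement) alongside a group id.
rules = [
    {
        "local": [
            {"user": {"name": "{0}"}},    # user name from the first remote match
            {"group": {"id": "0cd5e9"}},  # placeholder Keystone group id
        ],
        "remote": [
            {"type": "REMOTE_USER"},      # attribute asserted by the IdP
        ],
    }
]

print(json.dumps(rules, indent=2))
```

Every example in the docs would ideally follow this pattern: a user object in every rule's "local" list, not only groups.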
[Yahoo-eng-team] [Bug 1306699] Re: utils.find_resource return resource not depends on query
** Changed in: python-openstackclient
   Status: Invalid => Confirmed

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1306699

Title:
  utils.find_resource return resource not depends on query

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Command Line Client:
  Confirmed

Bug description:
  When I have one group, the query /groups?display_name=bogus returns:

  {u'groups': [{u'id': u'6ce42989b4ae41f89323813812ca6208', u'name': u'asdf', u'domain_id': u'default', u'links': {u'self': u'http://172.20.1.112:5000/v3/groups/6ce42989b4ae41f89323813812ca6208'}, u'description': u''}], u'links': {u'self': u'http://172.20.1.112:5000/v3/groups', u'next': None, u'previous': None}}

  even though the group did not match the query string.

  I have defined only one resource of each keystone type (one user and
  one group); then I try a command which calls utils.find_resource.
  That resource is returned by utils.find_resource regardless of what
  was specified as the name or id. Examples:

  (.venv)stack@eu:/opt/stack/python-openstackclient$ openstack user list --role --os-identity-api-version 3 non_existing_user
  | 54fbed994dc84616b2118e4fe6b77d8f | Member |

  (.venv)stack@eu:/opt/stack/python-openstackclient$ openstack user list --role --os-identity-api-version 3 admin
  | 54fbed994dc84616b2118e4fe6b77d8f | Member |

  openstack group list --role --os-identity-api-version 3 --domain admin group_not_exist
  | 54fbed994dc84616b2118e4fe6b77d8f | Member | Default | t_dr |

  So, utils.find_resource tries to find a user/group with an incorrect
  name but doesn't fail when only one resource of that type exists. It
  should raise an exception that it can't find a resource with the
  specified name or ID. I tried this also with nova and cinder
  commands; it works correctly with those services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1306699/+subscriptions
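The expected behaviour can be sketched in a few lines: match strictly on id or name and raise when nothing matches, instead of falling back to the only resource that happens to exist (names below are illustrative, not the real utils.find_resource):

```python
class NotFound(Exception):
    pass

def find_resource(resources, name_or_id):
    # Strict lookup: a resource matches only if its id or name equals
    # the query. No match -> error; multiple matches -> ambiguity error.
    matches = [r for r in resources
               if name_or_id in (r.get('id'), r.get('name'))]
    if not matches:
        raise NotFound("No resource with name or ID %r" % name_or_id)
    if len(matches) > 1:
        raise ValueError("Name %r matches more than one resource" % name_or_id)
    return matches[0]
```

With this contract, `non_existing_user` and `group_not_exist` in the examples above would raise rather than silently returning the lone existing record.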
[Yahoo-eng-team] [Bug 1312759] [NEW] Restore the Show Terminated link on the Usage reports
Public bug reported:

The Usage tables used to have an option for also showing Terminated
instances that were active during the selected time period. The backend
code to do this is still there (you can verify this by navigating to
/project/?show_terminated=True manually) but it would appear we lost
the link.

You can see an example of how it used to be in the templates here:
https://github.com/openstack/horizon/commit/64b81acc0a98a3e8a5cee834d206c443098a012b

** Affects: horizon
   Importance: Wishlist
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1312759

Title:
  Restore the Show Terminated link on the Usage reports

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1312759/+subscriptions
[Yahoo-eng-team] [Bug 1312781] [NEW] Inconsistent data in the Overview Usage Summary
Public bug reported:

The Usage summary on the overview shows information for a range of
time. However, it's not consistent: it appears some of the data is
related to the period and some of the data is related to the current
status. I think all the data should be related to the current period.

Steps to reproduce:
1. Start a VM and let it idle for a while.
2. Check the Overview page; the summary data is similar to this:
   Active Instances: 1
   Active RAM: 512MB
   This Period's VCPU-Hours: 0.54
   This Period's GB-Hours: 1.27
3. Terminate the VM and revisit the Overview page.

Actual results:
4. The data is like this:
   Active Instances: 0
   Active RAM: 0Bytes
   This Period's VCPU-Hours: 0.55
   This Period's GB-Hours: 1.28

Expected results:
I'm not actually sure how to make the data useful. I think having two
fields related to right now and two fields related to the period is
confusing. I'm not sure if showing the total cumulative RAM used in the
period is useful. Perhaps we could leave the Instances/RAM count as is
and move it closer to the usage table (as the table only displays
current data by default, too, unless show_terminated is specifically
set - see bug 1312759). This is somewhat related to bug 1286286.

** Affects: horizon
   Importance: Undecided
   Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1312781

Title:
  Inconsistent data in the Overview Usage Summary

Status in OpenStack Dashboard (Horizon):
  Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1312781/+subscriptions
[Yahoo-eng-team] [Bug 1312796] [NEW] we are able to destroy an instance while taking a snapshot
Public bug reported:

We are able to destroy an instance while taking a snapshot. The new
image's status depends on whether it was already created and uploaded
to /var/lib/glance/images.

I think that if we allow destroying the instance while taking the
snapshot, we run the risk of data corruption on the new snapshot, or of
the snapshot not being created at all. So I think that destroying the
instance while taking a snapshot should require a --force flag, so that
the admin user knowingly destroys the instance.

[root@puma31 ~(keystone_admin)]# nova list
+--------------------------------------+------+--------+----------------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State           | Power State | Networks                 |
+--------------------------------------+------+--------+----------------------+-------------+--------------------------+
| e00ae899-e285-4f09-8cda-2c2680799bba | from | ACTIVE | image_pending_upload | Running     | novanetwork=192.168.32.2 |
+--------------------------------------+------+--------+----------------------+-------------+--------------------------+

[root@puma31 ~(keystone_admin)]# nova delete e00ae899-e285-4f09-8cda-2c2680799bba

[root@puma31 ~(keystone_admin)]# nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| e00ae899-e285-4f09-8cda-2c2680799bba | from | ACTIVE | deleting   | Running     | novanetwork=192.168.32.2 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+

[root@puma31 ~(keystone_admin)]# nova image-create e00ae899-e285-4f09-8cda-2c2680799bba destroy_test --poll
Server snapshotting... 50% complete
Server snapshotting... 50% complete
Server snapshotting... 100% complete
Finished
ERROR: Instance could not be found (HTTP 404) (Request-ID: req-b6b7b066-0da8-441a-8788-b6969d7b1527)

[root@puma31 ~(keystone_admin)]# glance image-list
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size       | Status |
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| 6aa2362c-a1bb-490a-aeeb-3786ad7b9312 | destroy_test | qcow2       | bare             | 3629645824 | active |
| 73f92385-3080-4a4e-a100-76de38a3a569 | new_snap     | qcow2       | bare             | 3628728320 | active |
| deddabea-475f-4c2f-88e3-0c76612e529c | poll-test1   | qcow2       | bare             | 3629383680 | active |
| df06e227-0d6a-4e2c-90c1-13cd32721360 | rhel         | qcow2       | bare             | 3628990464 | active |
| 6175a441-8cb2-4d35-9b7d-241d51eaa270 | rhel1        | qcow2       | bare             | 3629383680 | active |
+--------------------------------------+--------------+-------------+------------------+------------+--------+

** Affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312796

Title:
  we are able to destroy an instance while taking a snapshot

Status in OpenStack Compute (Nova):
  New
[Yahoo-eng-team] [Bug 1312833] [NEW] no error is reported or logged when we fail to create a volume on Image minDisk
Public bug reported:

I tried to create a volume from an image and gave a volume size smaller
than the image's glance parameter --min-disk. All I got was an error
saying that I cannot create the volume. Nothing was logged in horizon,
and since there is no image id it was really problematic trying to find
anything in the cinder logs. I only managed to debug this by running
the cinder create command in the CLI and getting this error:

  [root@orange-vdsf ~(keystone_admin)]# cinder create 4 --image-id 6175a441-8cb2-4d35-9b7d-241d51eaa270
  ERROR: Invalid input received: Image minDisk size 40 is larger than the volume size 4. (HTTP 400) (Request-ID: req-5b50c2db-19f1-40eb-8237-5f955a90caab)

Can we add an error or print something to the horizon log?

** Affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1312833

Title:
  no error is reported or logged when we fail to create a volume on
  Image minDisk

Status in OpenStack Dashboard (Horizon):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1312833/+subscriptions
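The check the API performs is simple enough that a dashboard could run (and log) it up front, mirroring the "Image minDisk size X is larger than the volume size Y" rejection. A sketch with names of our own choosing, not Horizon's actual code:

```python
def validate_volume_size(size_gb, image_min_disk_gb):
    """Fail early, with a loggable message, when the requested volume
    is smaller than the source image's minDisk requirement."""
    if image_min_disk_gb and size_gb < image_min_disk_gb:
        raise ValueError(
            "Volume size %dGB is smaller than the image's minDisk %dGB"
            % (size_gb, image_min_disk_gb))
```

Running this before the cinder call would give the user (and the horizon log) the exact reason the request can never succeed, instead of a generic failure.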
[Yahoo-eng-team] [Bug 1312858] [NEW] Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID
Public bug reported:

Running devstack in a fresh Ubuntu 12.04 virtual machine with:

  $ cat local_rc
  KEYSTONE_TOKEN_FORMAT=UUID

...fails to start Keystone. Despite being configured for the UUID
provider, keystone attempts to read
`/etc/keystone/ssl/certs/signing_cert.pem` and fails (because it
doesn't exist):

  2014-04-25 10:36:25.289 INFO eventlet.wsgi.server [-] 192.168.121.46 - - [25/Apr/2014 10:36:25] GET /v2.0/tokens/69da781ae31c405e9aaa7adbf8f6f806 HTTP/1.1 200 3988 0.009096
  2014-04-25 10:36:25.294 DEBUG keystone.middleware.core [-] RBAC: auth_context: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u'_member_', u'service']} from (pid=13334) process_request /opt/stack/keystone/keystone/middleware/core.py:281
  2014-04-25 10:36:25.296 DEBUG keystone.common.wsgi [-] arg_dict: {} from (pid=13334) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181
  2014-04-25 10:36:25.296 DEBUG keystone.common.controller [-] RBAC: Authorizing identity:revocation_list() from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller.py:54
  2014-04-25 10:36:25.297 DEBUG keystone.common.controller [-] RBAC: using auth context from the request environment from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller.py:59
  2014-04-25 10:36:25.297 DEBUG keystone.policy.backends.rules [-] enforce identity:revocation_list: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u'_member_', u'service']} from (pid=13334) enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101
  2014-04-25 10:36:25.297 DEBUG keystone.openstack.common.policy [-] Rule identity:revocation_list will be now enforced from (pid=13334) enforce /opt/stack/keystone/keystone/openstack/common/policy.py:287
  2014-04-25 10:36:25.298 DEBUG keystone.common.controller [-] RBAC: Authorization granted from (pid=13334) inner /opt/stack/keystone/keystone/common/controller.py:151
  2014-04-25 10:36:25.309 ERROR keystoneclient.common.cms [-] Signing error: Error opening signer certificate /etc/keystone/ssl/certs/signing_cert.pem
  140424564475552:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/keystone/ssl/certs/signing_cert.pem','r')
  140424564475552:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
  unable to load certificate
  2014-04-25 10:36:25.310 ERROR keystone.common.wsgi [-] Command 'openssl' returned non-zero exit status 3
  2014-04-25 10:36:25.310 TRACE keystone.common.wsgi Traceback (most recent call last):
  2014-04-25 10:36:25.310 TRACE keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/wsgi.py", line 207, in __call__
  2014-04-25 10:36:25.310 TRACE keystone.common.wsgi     result = method(context, **params)
  2014-04-25 10:36:25.310 TRACE keystone.common.wsgi   File "/opt/stack/keystone/keystone/common/controller.py", line 152, in inner
  2014-04-25 10:36:25.310 TRACE keystone.common.wsgi     return f(self, context, *args, **kwargs)
[Yahoo-eng-team] [Bug 1312858] Re: Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID
Can you paste the keystone.conf that results from setting KEYSTONE_TOKEN_FORMAT=UUID ? ** Also affects: devstack Importance: Undecided Status: New ** Changed in: keystone Status: New = Incomplete ** Changed in: keystone Importance: Undecided = High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312858 Title: Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID Status in devstack - openstack dev environments: New Status in OpenStack Identity (Keystone): Incomplete Bug description: Running devstack in fresh Ubuntu 12.04 virtual machine with: $ cat local_rc KEYSTONE_TOKEN_FORMAT=UUID ...fails to start Keystone. Despite being configured for the UUID provider, keystone attempts to read `/etc/keystone/ssl/certs/signing_cert.pem` and fails (because it doesn't exist): 2014-04-25 10:36:25.289 INFO eventlet.wsgi.server [-] 192.168.121.46 - - [25/Apr/2014 10:36:25] GET /v2.0/tokens/69da781ae31c405e9aaa7adbf8f6f806 HTTP/1.1 200 3988 0.009096 2014-04-25 10:36:25.294 DEBUG keystone.middleware.core [-] RBAC: auth_context: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u'_member_', u'service ']} from (pid=13334) process_request /opt/stack/keystone/keystone/middleware/core.py:281 2014-04-25 10:36:25.296 DEBUG keystone.common.wsgi [-] arg_dict: {} from (pid=13334) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181 2014-04-25 10:36:25.296 DEBUG keystone.common.controller [-] RBAC: Authorizing identity:revocation_list() from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller.py:54 2014-04-25 10:36:25.297 DEBUG keystone.common.controller [-] RBAC: using auth context from the request environment from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller. 
py:59 2014-04-25 10:36:25.297 DEBUG keystone.policy.backends.rules [-] enforce identity:revocation_list: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u' _member_', u'service']} from (pid=13334) enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101 2014-04-25 10:36:25.297 DEBUG keystone.openstack.common.policy [-] Rule identity:revocation_list will be now enforced from (pid=13334) enforce /opt/stack/keystone/keystone/openstack/common/policy.py:287 2014-04-25 10:36:25.298 DEBUG keystone.common.controller [-] RBAC: Authorization granted from (pid=13334) inner /opt/stack/keystone/keystone/common/controller.py:151 2014-04-25 10:36:25.309 ERROR keystoneclient.common.cms [-] Signing error: Error opening signer certificate /etc/keystone/ssl/certs/signing_cert.pem 140424564475552:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/keystone/ssl/certs/signing_cert.pem','r') 140424564475552:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400: unable to load certificate 2014-04-25 10:36:25.310 ERROR keystone.common.wsgi [-] Command 'openssl' returned non-zero exit status 3 2014-04-25 10:36:25.310 TRACE keystone.common.wsgi Traceback (most recent call last): 2014-04-25 10:36:25.310 TRACE keystone.common.wsgi File /opt/stack/keystone/keystone/common/wsgi.py, line 207, in __call__
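A keystone.conf fragment of the kind the comment asks for might look like the following. This is a sketch, not the reporter's actual file, and the provider class path is an assumption about the Icehouse-era option names; the point is that with a UUID provider no PKI signing certificate should be needed, which makes the signing_cert.pem read in the log above surprising (note the identity:revocation_list call immediately before the openssl failure — the revocation list response is still being CMS-signed):

```ini
# Hypothetical keystone.conf excerpt for a UUID-token deployment.
[token]
provider = keystone.token.providers.uuid.Provider

[signing]
# In theory unused when tokens are UUIDs, yet the traceback shows
# keystone still trying to open it for the revocation list:
certfile = /etc/keystone/ssl/certs/signing_cert.pem
```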
[Yahoo-eng-team] [Bug 1312730] Re: exceptions must be old-style classes or derived from BaseException, not NoneType (HTTP 400)
*** This bug is a duplicate of bug 1276221 *** https://bugs.launchpad.net/bugs/1276221 ** This bug has been marked a duplicate of bug 1276221 Keystone returns HTTP 400 as SQLAlchemy raises None exceptions -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312730 Title: exceptions must be old-style classes or derived from BaseException, not NoneType (HTTP 400) Status in OpenStack Identity (Keystone): Incomplete Bug description: I followed: http://docs.openstack.org/trunk/install-guide/install/yum/content/keystone-users.html When I run: keystone --os-token 487089b7d994017258c2 --os-endpoint http://controller:35357/v2.0 user-create --name=admin --pass=743b12e76412acb4bdc9 --email=root@localhost it creates the user correctly:

+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | root@localhost                   |
| enabled  | True                             |
| id       | 32862c8d841c46bf8bd0212ea64db72c |
| name     | admin                            |
| username | admin                            |
+----------+----------------------------------+

But if I run this command again, I get: exceptions must be old-style classes or derived from BaseException, not NoneType (HTTP 400) # rpm -qf /usr/bin/keystone python-keystoneclient-0.8.0-2.el6.noarch I would expect a more helpful message like: User already exists. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1312730/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
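The wording of the error suggests a pattern like the following schematic sketch (hypothetical helper names, not the actual keystone or keystoneclient code): a status-to-exception lookup falls through to None, and raising None produces exactly this TypeError. Under Python 2 the message reads "exceptions must be old-style classes or derived from BaseException, not NoneType"; under Python 3 it is "exceptions must derive from BaseException".

```python
# Schematic reproduction of the failure mode (hypothetical code, for
# illustration only): an incomplete error-code mapping returns None,
# which is then raised as if it were an exception class.
def exception_for_status(status_code):
    known = {404: LookupError}      # hypothetical, incomplete mapping
    return known.get(status_code)   # silently returns None for 400


def report_error(status_code):
    exc_class = exception_for_status(status_code)
    try:
        raise exc_class             # raising None -> TypeError
    except TypeError as exc:
        return str(exc)             # the unhelpful message the user sees


message = report_error(400)
```

The real fix belongs in the error-translation path tracked by duplicate bug 1276221; the sketch only shows why the user sees a message about NoneType instead of "User already exists".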
[Yahoo-eng-team] [Bug 1312874] [NEW] resizing an instance - causes the drives to disappear - with it hung during reboot
Public bug reported: I have my OpenStack cluster on Havana. Resizing an instance (Deb-7-based) to have more disk space - followed by a soft reboot - causes the drives to disappear, and the instance hangs permanently at the boot screen because it cannot find any drives.

STEPS:
--
1. Create an instance (deb-7)
2. Resize the instance - with a flavour that has more disk space.
3. After the resize, the instance is permanently set in ERROR state - even though you can take a console to it and log in as usual.

amande@axcient:~/VMs/IMAGES$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                  |
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| cc867684-0fe9-48a7-95e9-60890d6e4fd0 | vapp | ERROR  | None       | Running     | 172_22-public=172.22.0.49 |
+--------------------------------------+------+--------+------------+-------------+---------------------------+

4. Reset the state of the instance.

amande@axcient:~/VMs/IMAGES$ nova reset-state --active cc867684-0fe9-48a7-95e9-60890d6e4fd0
amande@axcient:~/VMs/IMAGES$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                  |
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| cc867684-0fe9-48a7-95e9-60890d6e4fd0 | vapp | ACTIVE | None       | Running     | 172_22-public=172.22.0.49 |
+--------------------------------------+------+--------+------------+-------------+---------------------------+

5. After its state has changed, soft reboot the instance.
6. Right at this stage the drives are nowhere to be seen, and the instance hangs at the boot screen forever because it cannot find any drives to mount.

LOGS
2014-04-24 23:14:48 DEBUG nova.virt.disk.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Checking if we can resize image /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk.
size=563714457600 can_resize_image /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:157 2014-04-24 23:14:48 DEBUG nova.openstack.common.processutils [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147 2014-04-24 23:14:48 DEBUG nova.openstack.common.processutils [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Running cmd (subprocess): qemu-img resize /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk 563714457600 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147 2014-04-24 23:14:48 DEBUG nova.virt.disk.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Checking if we can resize filesystem inside /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk. 
CoW=True is_image_partitionless /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:171 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Instance for image imgfile=/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk imgfmt=qcow2 partition=None instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:31 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Trying to import guestfs instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:34 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Using primary VFSGuestFS instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:41 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.guestfs [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Setting up appliance for /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk qcow2 setup /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/guestfs.py:111 2014-04-24 23:14:48 DEBUG nova.virt.disk.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Unable to mount image
[Yahoo-eng-team] [Bug 1281940] Re: launch instance with volume fail when using PKIV3 with nocatalog token
In general nova doesn't support keystone v3 yet. ** Changed in: nova Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1281940 Title: launch instance with volume fail when using PKIV3 with nocatalog token Status in OpenStack Compute (Nova): Invalid Bug description: I want to launch an instance from a bootable volume. I get the token in PKI format using the V3 commands, but without a catalog. keystone URL: http://127.0.0.1:35357/v3/auth/tokens?nocatalog Then the launch fails with: {badRequest: {message: Block Device Mapping is Invalid: failed to get volume ***., code: 400}} I traced it and found that cinderclient.service_catalog.py raises cinderclient.exceptions.EndpointNotFound. If I use the keystone URL /v3/auth/tokens, another exception is raised (exceptions.AmbiguousEndpoints). see: https://bugs.launchpad.net/python-cinderclient/+bug/1263876 https://bugs.launchpad.net/python-novaclient/+bug/1154809 see another bug: https://bugs.launchpad.net/keystone/+bug/1186177, which is why I added nocatalog. But then the token does not contain endpoint information, so nova can't get the cinder endpoint from a PKI-format token. Although we can use /v3/auth/tokens to avoid it, I still think it is a bug once a user adds nocatalog to get a token. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1281940/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1295976] Re: check-tempest-dsvm-full: Image failed to reach ACTIVE status within the required time (196 s). Current status: SAVING.
This is too vague of a bug, too many things can cause this. ** Changed in: nova Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1295976 Title: check-tempest-dsvm-full: Image failed to reach ACTIVE status within the required time (196 s). Current status: SAVING. Status in OpenStack Compute (Nova): Invalid Bug description: check-tempest-dsvm-full: Image 5155226c-598a-40ff-b2dd-a4cbc26f7a82 failed to reach ACTIVE status within the required time (196 s). Current status: SAVING. http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJtZXNzYWdlOlwiZmFpbGVkIHRvIHJlYWNoIEFDVElWRSBzdGF0dXMgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lICgxOTYgcykuIEN1cnJlbnQgc3RhdHVzOiBTQVZJTkcuXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wMy0wMVQwNzo0MDo0MCswMDowMCIsInRvIjoiMjAxNC0wMy0yMlQwNzo0MDo0MCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxMzk1NDc0MzI5MjgzfQ== To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1295976/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
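The "196 s" in the error is a client-side polling deadline, not anything the services report. A minimal, self-contained sketch of the kind of wait loop that produces this message (illustrative names, not Tempest's actual implementation):

```python
import time


def wait_for_status(fetch, target="ACTIVE", timeout=196, interval=1.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll fetch() until the resource reaches `target` or `timeout` elapses.

    `fetch` returns an object with a `status` attribute; `clock` and
    `sleep` are injectable so the loop can be tested without waiting.
    """
    deadline = clock() + timeout
    while True:
        resource = fetch()
        if resource.status == target:
            return resource
        if clock() >= deadline:
            raise TimeoutError(
                "failed to reach %s status within the required time (%s s). "
                "Current status: %s." % (target, timeout, resource.status))
        sleep(interval)
```

Any slow step in the snapshot path (glance upload, hypervisor I/O, RPC backlog) can leave the image in SAVING past the deadline, which is why the bug is closed as too vague: the loop reports the symptom, not the cause.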
[Yahoo-eng-team] [Bug 1299517] Re: quota-class-update
** Changed in: nova Status: New = Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1299517 Title: quota-class-update Status in OpenStack Compute (Nova): Invalid Bug description: Can't update the default quota: root@blade1-1-live:~# nova --debug quota-class-update --ram -1 default REQ: curl -i 'http://XXX.XXX.XXX.XXX:8774/v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default' -X PUT -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 62837311542a42a495442d911cc8b12a" -d '{"quota_class_set": {"ram": -1}}' New session created for: (http://XXX.XXX.XXX.XXX:8774) INFO (connectionpool:258) Starting new HTTP connection (1): XXX.XXX.XXX.XXX DEBUG (connectionpool:375) Setting read timeout to 600.0 DEBUG (connectionpool:415) PUT /v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default HTTP/1.1 404 52 RESP: [404] CaseInsensitiveDict({'date': 'Sat, 29 Mar 2014 17:17:32 GMT', 'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'}) RESP BODY: 404 Not Found The resource could not be found.
DEBUG (shell:777) Not found (HTTP 404) Traceback (most recent call last): File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 774, in main OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:])) File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 710, in main args.func(self.cs, args) File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line 3378, in do_quota_class_update _quota_update(cs.quota_classes, args.class_name, args) File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line 3164, in _quota_update manager.update(identifier, **updates) File /usr/lib/python2.7/dist-packages/novaclient/v1_1/quota_classes.py, line 44, in update 'quota_class_set') File /usr/lib/python2.7/dist-packages/novaclient/base.py, line 165, in _update _resp, body = self.api.client.put(url, body=body) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 289, in put return self._cs_request(url, 'PUT', **kwargs) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 260, in _cs_request **kwargs) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 242, in _time_request resp, body = self.request(url, method, **kwargs) File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 236, in request raise exceptions.from_response(resp, body, url, method) NotFound: Not found (HTTP 404) ERROR: Not found (HTTP 404) To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1299517/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1312858] Re: Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID
** Also affects: python-keystoneclient Importance: Undecided Status: New ** Changed in: python-keystoneclient Assignee: (unassigned) = Brant Knudson (blk-u) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312858 Title: Keystone + Devstack fail when KEYSTONE_TOKEN_FORMAT=UUID Status in devstack - openstack dev environments: New Status in OpenStack Identity (Keystone): Confirmed Status in Python client library for Keystone: New Bug description: Running devstack in fresh Ubuntu 12.04 virtual machine with: $ cat local_rc KEYSTONE_TOKEN_FORMAT=UUID ...fails to start Keystone. Despite being configured for the UUID provider, keystone attempts to read `/etc/keystone/ssl/certs/signing_cert.pem` and fails (because it doesn't exist): 2014-04-25 10:36:25.289 INFO eventlet.wsgi.server [-] 192.168.121.46 - - [25/Apr/2014 10:36:25] GET /v2.0/tokens/69da781ae31c405e9aaa7adbf8f6f806 HTTP/1.1 200 3988 0.009096 2014-04-25 10:36:25.294 DEBUG keystone.middleware.core [-] RBAC: auth_context: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u'_member_', u'service ']} from (pid=13334) process_request /opt/stack/keystone/keystone/middleware/core.py:281 2014-04-25 10:36:25.296 DEBUG keystone.common.wsgi [-] arg_dict: {} from (pid=13334) __call__ /opt/stack/keystone/keystone/common/wsgi.py:181 2014-04-25 10:36:25.296 DEBUG keystone.common.controller [-] RBAC: Authorizing identity:revocation_list() from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller.py:54 2014-04-25 10:36:25.297 DEBUG keystone.common.controller [-] RBAC: using auth context from the request environment from (pid=13334) _build_policy_check_credentials /opt/stack/keystone/keystone/common/controller. 
py:59 2014-04-25 10:36:25.297 DEBUG keystone.policy.backends.rules [-] enforce identity:revocation_list: {'project_id': u'7fab1d7a9ba741208bd748749a394902', 'user_id': u'8d21c5353bdd4eb7a1a805cb3b7fd1b2', 'roles': [u' _member_', u'service']} from (pid=13334) enforce /opt/stack/keystone/keystone/policy/backends/rules.py:101 2014-04-25 10:36:25.297 DEBUG keystone.openstack.common.policy [-] Rule identity:revocation_list will be now enforced from (pid=13334) enforce /opt/stack/keystone/keystone/openstack/common/policy.py:287 2014-04-25 10:36:25.298 DEBUG keystone.common.controller [-] RBAC: Authorization granted from (pid=13334) inner /opt/stack/keystone/keystone/common/controller.py:151 2014-04-25 10:36:25.309 ERROR keystoneclient.common.cms [-] Signing error: Error opening signer certificate /etc/keystone/ssl/certs/signing_cert.pem 140424564475552:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/keystone/ssl/certs/signing_cert.pem','r') 140424564475552:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400: unable to load certificate 2014-04-25 10:36:25.310 ERROR keystone.common.wsgi [-] Command 'openssl' returned non-zero exit status 3 2014-04-25 10:36:25.310 TRACE keystone.common.wsgi Traceback (most recent call last): 2014-04-25 10:36:25.310 TRACE keystone.common.wsgi File /opt/stack/keystone/keystone/common/wsgi.py, line 207, in __call__
[Yahoo-eng-team] [Bug 1312874] Re: resizing an instance - causes the drives to disappear - with it hung during reboot
** Also affects: nova Importance: Undecided Status: New ** No longer affects: neutron -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1312874 Title: resizing an instance - causes the drives to disappear - with it hung during reboot Status in OpenStack Compute (Nova): New Bug description: I have my OpenStack cluster on Havana. Resizing an instance (Deb-7-based) to have more disk space - followed by a soft reboot - causes the drives to disappear, and the instance hangs permanently at the boot screen because it cannot find any drives.

STEPS:
--
1. Create an instance (deb-7)
2. Resize the instance - with a flavour that has more disk space.
3. After the resize, the instance is permanently set in ERROR state - even though you can take a console to it and log in as usual.

amande@axcient:~/VMs/IMAGES$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                  |
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| cc867684-0fe9-48a7-95e9-60890d6e4fd0 | vapp | ERROR  | None       | Running     | 172_22-public=172.22.0.49 |
+--------------------------------------+------+--------+------------+-------------+---------------------------+

4. Reset the state of the instance.

amande@axcient:~/VMs/IMAGES$ nova reset-state --active cc867684-0fe9-48a7-95e9-60890d6e4fd0
amande@axcient:~/VMs/IMAGES$ nova list
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                  |
+--------------------------------------+------+--------+------------+-------------+---------------------------+
| cc867684-0fe9-48a7-95e9-60890d6e4fd0 | vapp | ACTIVE | None       | Running     | 172_22-public=172.22.0.49 |
+--------------------------------------+------+--------+------------+-------------+---------------------------+

5. After its state has changed, soft reboot the instance.
6. Right at this stage the drives are nowhere to be seen, and the instance hangs at the boot screen forever because it cannot find any drives to mount.

LOGS
2014-04-24 23:14:48 DEBUG nova.virt.disk.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Checking if we can resize image /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk.
size=563714457600 can_resize_image /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:157 2014-04-24 23:14:48 DEBUG nova.openstack.common.processutils [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147 2014-04-24 23:14:48 DEBUG nova.openstack.common.processutils [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Running cmd (subprocess): qemu-img resize /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk 563714457600 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147 2014-04-24 23:14:48 DEBUG nova.virt.disk.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Checking if we can resize filesystem inside /var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk. CoW=True is_image_partitionless /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:171 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Instance for image imgfile=/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk imgfmt=qcow2 partition=None instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:31 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Trying to import guestfs instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:34 2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api [req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande df9536c185814408adce9b8c1cafcf1c] Using primary VFSGuestFS instance_for_image /usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:41
[Yahoo-eng-team] [Bug 1312942] [NEW] the tap interface created on the network node - doesn't accept packets
Public bug reported: A newly created instance cannot access the network (in or out); it can be neither SSHed into nor pinged. Only the console works! Analysis: - the instance cannot be pinged from the network node, but the network node itself can ping the DNS. [A] Ping from an outside server to the instance: after listening on the interface the instance is attached to (the tap interface) on that node, we can see that the interface does receive the packets sent in, but it does not pass them on to the instance itself. Solution: - service neutron-openvswitch-agent restart does not resolve the problem; the instance's network starts working again only for some time, until the bug kicks in. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1312942 Title: the tap interface created on the network node - doesn't accept packets Status in OpenStack Neutron (virtual network service): New Bug description: A newly created instance cannot access the network (in or out); it can be neither SSHed into nor pinged. Only the console works! Analysis: - the instance cannot be pinged from the network node, but the network node itself can ping the DNS. [A] Ping from an outside server to the instance: after listening on the interface the instance is attached to (the tap interface) on that node, we can see that the interface does receive the packets sent in, but it does not pass them on to the instance itself. Solution: - service neutron-openvswitch-agent restart does not resolve the problem; the instance's network starts working again only for some time, until the bug kicks in.
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1312942/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1312953] [NEW] neutron router-interface-add fails
Public bug reported: Invalid input for operation: IP address 10.0.0.1 is not a valid IP for the defined subnet This was found on my devstack when I enabled neutron. It looks like neutron tries to use the default NETWORK_GATEWAY, i.e. 10.0.0.1, for the subnet 192.168.78.0/24 ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1312953 Title: neutron router-interface-add fails Status in OpenStack Neutron (virtual network service): New Bug description: Invalid input for operation: IP address 10.0.0.1 is not a valid IP for the defined subnet This was found on my devstack when I enabled neutron. It looks like neutron tries to use the default NETWORK_GATEWAY, i.e. 10.0.0.1, for the subnet 192.168.78.0/24 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1312953/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
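A plausible devstack workaround (an assumption about this setup, not a verified fix) is to set the gateway explicitly so it falls inside the chosen fixed range, rather than relying on the 10.0.0.1 default:

```shell
# local.conf / localrc sketch (hypothetical values): keep NETWORK_GATEWAY
# inside FIXED_RANGE so router-interface-add is not handed an
# out-of-subnet gateway address.
FIXED_RANGE=192.168.78.0/24
NETWORK_GATEWAY=192.168.78.1
```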
[Yahoo-eng-team] [Bug 1312964] [NEW] lock wait timeout in update_port_status
Public bug reported: There have been several occurrences of this bug in check/gate queues.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiKE9wZXJhdGlvbmFsRXJyb3IpICgxMjA1LCAnTG9jayB3YWl0IHRpbWVvdXQgZXhjZWVkZWQ7IHRyeSByZXN0YXJ0aW5nIHRyYW5zYWN0aW9uJylcIiAgQU5EIG1lc3NhZ2U6XCJVUERBVEUgcG9ydHMgU0VUIHN0YXR1c1wiIEFORCBOT1QgbWVzc2FnZTpcIlRyYWNlYmFjayAobW9zdCByZWNlbnQgY2FsbCBsYXN0XCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1zdmMudHh0XCIgQU5EIE5PVCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1tYXN0ZXItZHN2bS1uZXV0cm9uLWhhdmFuYVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk4NDM5MjQ0NDY2fQ==

266 hits in 7 days at the time of the bug report, excluding havana jobs (which are susceptible to lock wait timeout errors anyway because of fixes not backportable from icehouse)
Build failure rate: 20.6%
Hits in gate queue: 26
Failures in gate queue: 1

Notes:
1) Occurrences and failure rate in the gate queue are lower because most of the failures happen with the full job, which is not yet voting.
2) Even if the failure rate is generally low, a lock wait timeout should always be considered an error, regardless of the outcome of the build job.
3) A detailed look at the logs reveals a pattern similar to bug 1283522, whose fingerprint is being matched. It seems the semaphore lock is ignored, but lockutils lacks the necessary logging to reveal whether a semaphore has been released or not; more investigations are in progress.

** Affects: neutron Importance: High Assignee: Salvatore Orlando (salvatore-orlando) Status: New ** Changed in: neutron Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando) ** Changed in: neutron Importance: Undecided = High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312964 Title: lock wait timeout in update_port_status Status in OpenStack Neutron (virtual network service): New Bug description: There have been several occurences of this bug in check/gate queues. http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiKE9wZXJhdGlvbmFsRXJyb3IpICgxMjA1LCAnTG9jayB3YWl0IHRpbWVvdXQgZXhjZWVkZWQ7IHRyeSByZXN0YXJ0aW5nIHRyYW5zYWN0aW9uJylcIiAgQU5EIG1lc3NhZ2U6XCJVUERBVEUgcG9ydHMgU0VUIHN0YXR1c1wiIEFORCBOT1QgbWVzc2FnZTpcIlRyYWNlYmFjayAobW9zdCByZWNlbnQgY2FsbCBsYXN0XCIgQU5EIHRhZ3M6XCJzY3JlZW4tcS1zdmMudHh0XCIgQU5EIE5PVCBidWlsZF9uYW1lOlwiY2hlY2stdGVtcGVzdC1tYXN0ZXItZHN2bS1uZXV0cm9uLWhhdmFuYVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk4NDM5MjQ0NDY2fQ== 266 hits in 7 days at the time of bug report, excluding havana jobs (which are susceptible to lock wait timeout errors anyway because of fixes not backportable from icehouse) Build failure rate: 20.6% Hits in gate queue: 26 Failures in gate queue: 1 Notes: 1) Occurrences and failure rate in gate queue are lower because most of the failure happen with the full job, which is not yet voting. 2) Even if failure rate is generally low, a lock wait timeout should be always considered an error, regardless of the outcome of the build job. 3) A detailed look at the logs reveals a pattern similar to bug 1283522, whose fingerprint is being matched. It seems the semaphore lock is ignored, but lockutils lacks the necessary logging to reveal whether a semaphore has been released or not; more investigations are in progress. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1312964/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
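Independent of the lockutils investigation mentioned in note 3, the usual mitigation for MySQL error 1205 is to retry the transaction with backoff. A generic sketch (not Neutron's actual code; LockWaitTimeout here is a stand-in class for the real OperationalError):

```python
import functools
import time


class LockWaitTimeout(Exception):
    """Stand-in for OperationalError 1205 ('Lock wait timeout exceeded')."""


def retry_on_lock_timeout(attempts=3, base_delay=0.05, sleep=time.sleep):
    """Re-run a DB transaction when it hits a lock wait timeout."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except LockWaitTimeout:
                    if attempt == attempts - 1:
                        raise          # give up after the last attempt
                    sleep(base_delay * (2 ** attempt))  # exponential backoff
        return wrapper
    return decorator
```

Wrapping update_port_status's UPDATE in such a decorator would mask the symptom; whether that is appropriate depends on the semaphore question above, since retrying cannot help if the lock is held for the full InnoDB wait interval.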
[Yahoo-eng-team] [Bug 1312971] [NEW] mod_wsgi exception processing UTF-F Header
Public bug reported: Using the master version of python-keystoneclient (not yet released) gives the following error when running Keystone in Apache HTTPD and requesting a V3 token [Fri Apr 25 18:28:14.775659 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] mod_wsgi (pid=5075): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/main'. [Fri Apr 25 18:28:14.775801 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] TypeError: expected byte string object for header value, value of type unicode found It's due to the utf-8 encoding in keystoneclient/common/cms.py, which makes the PKI token unicode instead of str. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312971 Title: mod_wsgi exception processing UTF-F Header Status in OpenStack Identity (Keystone): New Bug description: Using the master version of python-keystoneclient (not yet released) gives the following error when running Keystone in Apache HTTPD and requesting a V3 token [Fri Apr 25 18:28:14.775659 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] mod_wsgi (pid=5075): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/main'. [Fri Apr 25 18:28:14.775801 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] TypeError: expected byte string object for header value, value of type unicode found It's due to the utf-8 encoding in keystoneclient/common/cms.py, which makes the PKI token unicode instead of str. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1312971/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
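PEP 3333 requires WSGI header values to be native `str`, which is exactly what mod_wsgi is enforcing here. A coercion helper of the following shape is the usual workaround (a sketch of the general technique, not the patch actually submitted for keystone):

```python
def to_native_str(value, encoding="utf-8"):
    """Coerce a WSGI header value to the native `str` type (PEP 3333).

    Under Python 2 a u'' PKI token must be encoded back to a byte
    string before it is placed in a response header; under Python 3
    byte strings must instead be decoded.
    """
    if isinstance(value, str):
        return value
    if isinstance(value, bytes):        # Python 3 path: decode
        return value.decode(encoding)
    return value.encode(encoding)       # Python 2 unicode path: encode
```

Applying this at the point where headers are assembled keeps cms.py free to work in unicode internally while still satisfying mod_wsgi's type check.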
[Yahoo-eng-team] [Bug 1312971] Re: mod_wsgi exception processing UTF-F Header
** Project changed: keystone = python-keystoneclient ** Changed in: python-keystoneclient Importance: Undecided = High ** Changed in: python-keystoneclient Status: New = Triaged -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312971 Title: mod_wsgi exception processing UTF-F Header Status in Python client library for Keystone: Triaged Bug description: Using master version of python-keystoneclient (not yet released) gives the following error when running with Keystone in Apache HTTPD and requesting a V3 Token [Fri Apr 25 18:28:14.775659 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] mod_wsgi (pid=5075): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/main'. [Fri Apr 25 18:28:14.775801 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] TypeError: expected byte string object for header value, value of type unicode found Its due to the utf-8 encoding in keystoneclient/common/cms.py which is making the PKI token Unicode instead of str. To manage notifications about this bug go to: https://bugs.launchpad.net/python-keystoneclient/+bug/1312971/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1312971] Re: mod_wsgi exception processing UTF-8 Header
Let's track it for both: it might not really be an issue that cms has converted from str to utf-8 for most things, just that mod_wsgi is enforcing what comes across in the header. I have a patch submitted already that mitigates the Keystone problem: https://review.openstack.org/#/c/90476/ I'll make sure it gets linked here. ** Summary changed: - mod_wsgi exception processing UTF-F Header + mod_wsgi exception processing UTF-8 Header ** Also affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1312971 Title: mod_wsgi exception processing UTF-8 Header Status in OpenStack Identity (Keystone): New Status in Python client library for Keystone: Triaged Bug description: Using the master version of python-keystoneclient (not yet released) gives the following error when running Keystone in Apache HTTPD and requesting a V3 token [Fri Apr 25 18:28:14.775659 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] mod_wsgi (pid=5075): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/main'. [Fri Apr 25 18:28:14.775801 2014] [:error] [pid 5075] [remote 10.10.63.250:2982] TypeError: expected byte string object for header value, value of type unicode found It's due to the utf-8 encoding in keystoneclient/common/cms.py, which makes the PKI token unicode instead of str. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1312971/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1313009] [NEW] Memory reported improperly in admin dashboard
Public bug reported: The admin dashboard works with memory totals and usages as integers. This means that, for example, if you have a total of 1.95 TB of memory in your hypervisors you'll see it reported as 1 TB. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1313009 Title: Memory reported improperly in admin dashboard Status in OpenStack Dashboard (Horizon): New Bug description: The admin dashboard works with memory totals and usages as integers. This means that, for example, if you have a total of 1.95 TB of memory in your hypervisors you'll see it reported as 1 TB. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1313009/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
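The truncation described above is what integer arithmetic does when formatting the total. A minimal illustration (hypothetical helper names, not Horizon's actual code):

```python
MB_PER_TB = 1024 * 1024  # memory totals are tracked in MB


def format_memory_int(mem_mb):
    # Integer division truncates: 1.95 TB of RAM is reported as "1 TB".
    return "%d TB" % (mem_mb // MB_PER_TB)


def format_memory_float(mem_mb):
    # Keeping the fractional part reports the expected "1.95 TB".
    return "%.2f TB" % (mem_mb / float(MB_PER_TB))
```

With roughly 1.95 TB of hypervisor RAM (about 2044723 MB), the first helper reports "1 TB" while the second reports "1.95 TB", matching the behaviour the report describes.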