[Yahoo-eng-team] [Bug 1726914] Re: Non admin can change external of network through RBAC policy
I noticed that the current RBAC design is intended to allow a regular user to change admin-level attributes such as external and shared. We need to understand that design first.

** Changed in: neutron
   Status: In Progress => Opinion

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726914

Title:
  Non admin can change external of network through RBAC policy

Status in neutron:
  Opinion

Bug description:
  Generally we prohibit non-admin users from creating and updating a
  network with the external option [1, 2] via policy.json. However, a
  non-admin user can change the external option using an RBAC policy.

  [1]: https://github.com/openstack/neutron/blob/master/etc/policy.json#L52
  [2]: https://github.com/openstack/neutron/blob/master/etc/policy.json#L64

  $ openstack network create net
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2017-10-24T15:15:22Z                 |
  | description               |                                      |
  | dns_domain                | None                                 |
  | id                        | db82dcea-9e91-4f81-9447-6d90bccb050f |
  | ipv4_address_scope        | None                                 |
  | ipv6_address_scope        | None                                 |
  | is_default                | False                                |
  | is_vlan_transparent       | None                                 |
  | mtu                       | 1450                                 |
  | name                      | net                                  |
  | port_security_enabled     | True                                 |
  | project_id                | 9e01496fa46a425bb5228f3b6d73ca6c     |
  | provider:network_type     | None                                 |
  | provider:physical_network | None                                 |
  | provider:segmentation_id  | None                                 |
  | qos_policy_id             | None                                 |
  | revision_number           | 2                                    |
  | router:external           | Internal                             |
  | segments                  | None                                 |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tags                      |                                      |
  | updated_at                | 2017-10-24T15:15:22Z                 |
  +---------------------------+--------------------------------------+

  A non-admin user cannot update the external option of his own network
  directly:

  $ openstack network set --external net
  HttpException: Forbidden (HTTP 403) (Request-ID: req-21866c75-25f5-416c-80b3-312fef71b36f),
  (rule:update_network and rule:update_network:router:external) is disallowed by policy

  A non-admin user can, however, update the external option using an
  RBAC policy:
  $ openstack network rbac create --type network \
      --target-project 3b3ff25f99884355932f5d316847ebbe \
      --action access_as_external net
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | action            | access_as_external                   |
  | id                | 95bade41-77f7-4495-a90a-29fa6eba0518 |
  | name              | None                                 |
  | object_id         | db82dcea-9e91-4f81-9447-6d90bccb050f |
  | object_type       | network                              |
  | project_id        | 9e01496fa46a425bb5228f3b6d73ca6c     |
  | target_project_id | 3b3ff25f99884355932f5d316847ebbe     |
  +-------------------+--------------------------------------+

  $ openstack network show net
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2017-10-24T15:15:22Z                 |
  | description               |                                      |
  | dns_domain                | None                                 |
  | id                        | db82dcea-9e91-4f81-9447-6d90bccb050f |
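The policy.json rules referenced as [1] and [2] above are what block the direct update path; roughly, they look like the following fragment (exact rule names per the neutron policy.json in the linked tree):

```
"create_network:router:external": "rule:admin_only",
"update_network:router:external": "rule:admin_only"
```

The RBAC API path is governed by its own create_rbac_policy rules rather than these per-attribute rules, which appears to be how the access_as_external action slips past the admin_only check.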
[Yahoo-eng-team] [Bug 1728900] Re: Some link urls about gerrit is invalid
It's true that the whiteboard can be updated manually. But why not correct the initial URL and spare everyone that tedious, meaningless work?

** Project changed: nova => gerrit
** Changed in: gerrit
   Status: Invalid => New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728900

Title:
  Some link urls about gerrit is invalid

Status in gerrit:
  New

Bug description:
  In the blueprint web page, the "Gerrit topic" link URL pattern is
  'https://review.openstack.org/#q,topic:\1,n,z', which is no longer
  valid. The current valid pattern seems to be
  'https://review.openstack.org/#/q/topic:\1'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/gerrit/+bug/1728900/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
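The two URL patterns differ only mechanically, so rewriting the old form into the new one is trivial; a small illustrative converter (a hypothetical helper, not part of Launchpad or gerrit):

```python
import re

# Obsolete gerrit dashboard URL form: #q,topic:<topic>,n,z
OLD = re.compile(r"https://review\.openstack\.org/#q,topic:(?P<topic>[^,]+),n,z")

def fix_gerrit_url(url: str) -> str:
    """Rewrite the obsolete '#q,topic:...,n,z' form to the current
    '#/q/topic:...' form; unrelated URLs pass through unchanged."""
    return OLD.sub(
        lambda m: "https://review.openstack.org/#/q/topic:" + m.group("topic"),
        url,
    )

print(fix_gerrit_url("https://review.openstack.org/#q,topic:bp/cells,n,z"))
# → https://review.openstack.org/#/q/topic:bp/cells
```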
[Yahoo-eng-team] [Bug 1705772] Re: hardware offload support for openvswitch
Reviewed: https://review.openstack.org/504911
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=51758c8c39e48d6e2d8b8c24b5fa0ffaf2861f17
Submitter: Zuul
Branch: master

commit 51758c8c39e48d6e2d8b8c24b5fa0ffaf2861f17
Author: Lenny Verkhovsky
Date: Mon Sep 18 10:50:00 2017 +

    Adding OVS Offload documentation

    Closes-Bug: #1705772
    Change-Id: I0923609e172b1051e9df99a464b22c3fba440ee2

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705772

Title:
  hardware offload support for openvswitch

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/275616

  Dear bug triager. This bug was created since a commit was marked with
  DOCIMPACT. Your project "openstack/neutron" is set up so that we
  directly report the documentation bugs against it. If this needs
  changing, the docimpact-group option needs to be added for the
  project. You can ask the OpenStack infra team (#openstack-infra on
  freenode) for help if you need to.

  commit 8ff25e725e148acf83ef6e1f6a3c445da00e3932
  Author: Moshe Levi
  Date: Mon Jan 4 11:14:58 2016 +0200

      hardware offload support for openvswitch

      In Kernel 4.8 we introduced the Traffic Control (TC, see [1])
      hardware offloads framework for SR-IOV VFs, which allows us to
      configure the NIC [2]. Subsequent OVS patches [3] allow us to use
      the TC framework to offload OVS datapath rules. This patch allows
      the OVS mech driver to bind direct (SR-IOV) ports, so the OVS
      flows can be offloaded to the SR-IOV NIC using tc, accelerating
      OVS.

      [1] https://linux.die.net/man/8/tc
      [2] http://netdevconf.org/1.2/papers/efraim-gerlitz-sriov-ovs-final.pdf
      [3] https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/330606.html

      DocImpact: Add SR-IOV offload support for OVS mech driver
      Partial-Bug: #1627987
      Depends-On: I6bc2539a1ddbf7990164abeb8bb951ddcb45c993
      Change-Id: I77650be5f04775a72e2bdf694f93988825a84b72

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705772/+subscriptions
[Yahoo-eng-team] [Bug 1716081] Re: Validate security group rules for port ranges
Reviewed: https://review.openstack.org/510698
Committed: https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=6b5018ad3fb70dfad07cebbc95f429a61029e5df
Submitter: Zuul
Branch: master

commit 6b5018ad3fb70dfad07cebbc95f429a61029e5df
Author: Boden R
Date: Mon Oct 9 15:13:49 2017 -0600

    add DCCP, SCTP and UDP-Lite to validated protos for port ranges

    This patch updates the description for the security group rules
    port range to clarify that it now validates ranges for DCCP, SCTP
    and UDP-Lite. See the related bug for more details. The patch also
    removes 2 related port range parameters that aren't used in our
    api-ref today.

    Change-Id: I0ae2b1aee19d76e1ef61e424dd2f9224a53b91d9
    Closes-Bug: #1716081

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716081

Title:
  Validate security group rules for port ranges

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/496083

  Dear bug triager. This bug was created since a commit was marked with
  DOCIMPACT. Your project "openstack/neutron" is set up so that we
  directly report the documentation bugs against it. If this needs
  changing, the docimpact-group option needs to be added for the
  project. You can ask the OpenStack infra team (#openstack-infra on
  freenode) for help if you need to.

  commit f711ad78c5c0af44318c6234957590c91592b984
  Author: IWAMOTO Toshihiro
  Date: Tue Aug 22 12:55:32 2017 +0900

      Validate security group rules for port ranges

      Port range validation had been done only for TCP and UDP. Use
      the same validation logic for DCCP, SCTP and UDP-Lite, too.

      APIImpact
      DocImpact
      Change-Id: Ife90be597d1a59a634d5474dad543dc1803e8242

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716081/+subscriptions
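The fix above extends the same range check from TCP/UDP to DCCP, SCTP and UDP-Lite. A simplified sketch of the idea (names and bounds are illustrative, not neutron's actual code):

```python
# Security-group rules carry port ranges only for protocols that have
# ports; originally just TCP and UDP were range-checked, and the fix
# adds DCCP, SCTP and UDP-Lite to the set.
PORT_RANGE_PROTOCOLS = {"tcp", "udp", "dccp", "sctp", "udplite"}

def validate_port_range(protocol, port_min, port_max):
    if protocol not in PORT_RANGE_PROTOCOLS:
        return  # other protocols have no port semantics to validate
    for port in (port_min, port_max):
        # exact bounds follow neutron's validators; 1-65535 assumed here
        if port is not None and not 1 <= port <= 65535:
            raise ValueError("invalid port: %s" % port)
    if port_min is not None and port_max is not None and port_min > port_max:
        raise ValueError("port range min greater than max")

validate_port_range("sctp", 80, 443)  # now validated, passes
```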
[Yahoo-eng-team] [Bug 1575935] Re: Rebuild should also accept a configdrive
Reviewed: https://review.openstack.org/501761
Committed: https://git.openstack.org/cgit/openstack/ironic/commit/?id=6a8b38a2caafe89ebd73fda8c72b8f4fbbdc9f6b
Submitter: Zuul
Branch: master

commit 6a8b38a2caafe89ebd73fda8c72b8f4fbbdc9f6b
Author: Mathieu Gagné
Date: Thu Sep 7 10:34:45 2017 -0400

    Add ability to provide configdrive when rebuilding

    Previously, the configdrive could only be set when setting the
    node's provisioning state to "active". When rebuilding, the old
    configdrive was used and therefore was never updated with the
    latest content. This change introduces API microversion 1.35,
    which allows a configdrive to be provided when setting the node's
    provisioning state to "rebuild".

    Closes-bug: #1575935
    Change-Id: I9a5529f9fa796c75621e9f4354886bf3032cc248

** Changed in: ironic
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575935

Title:
  Rebuild should also accept a configdrive

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Users desire the ability to rebuild pre-existing hosts and update the
  configuration drive, especially in CI environments.

  https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L518
  presently does not pass a submitted configuration drive; compare with
  line 516. That being said, the logic further down in the deployment
  process (both legacy iSCSI deployment and full disk deployment)
  should be checked to ensure that nothing else is broken. However,
  this is the standard behavior at present, because this is how nova
  submits requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1575935/+subscriptions
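With microversion 1.35, the provision-state request can carry a configdrive. A hedged sketch of the PUT body (the helper and the placeholder configdrive value are illustrative, not ironicclient code; the endpoint and keys follow the bare-metal API):

```python
import json

def build_rebuild_request(configdrive):
    """Sketch of the PUT /v1/nodes/{node}/states/provision request
    that accepts a configdrive when rebuilding."""
    headers = {
        # configdrive-on-rebuild requires API microversion >= 1.35
        "X-OpenStack-Ironic-API-Version": "1.35",
        "Content-Type": "application/json",
    }
    body = {
        "target": "rebuild",
        # base64-encoded gzipped ISO image (placeholder shown here)
        "configdrive": configdrive,
    }
    return headers, json.dumps(body)

headers, payload = build_rebuild_request("H4sIAAAA...")
print(payload)
```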
[Yahoo-eng-team] [Bug 1729449] [NEW] Horizon keystone api doesn't support include_names for role assignment list
Public bug reported:

Since keystone API version 3.6 the role assignment list operation has
supported include_names; see
https://developer.openstack.org/api-ref/identity/v3/#what-s-new-in-version-3-6

I'm writing a third-party plugin and need to use this option. Instead
of me having to replicate the whole keystone API in horizon, it would
be good if it could be supported natively.

** Affects: horizon
   Importance: Undecided
   Assignee: Sam Morrison (sorrison)
   Status: In Progress

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1729449

Title:
  Horizon keystone api doesn't support include_names for role
  assignment list

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Since keystone API version 3.6 the role assignment list operation has
  supported include_names; see
  https://developer.openstack.org/api-ref/identity/v3/#what-s-new-in-version-3-6

  I'm writing a third-party plugin and need to use this option. Instead
  of me having to replicate the whole keystone API in horizon, it would
  be good if it could be supported natively.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1729449/+subscriptions
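At the HTTP level, include_names is just a query parameter on the role-assignment list call. A minimal sketch of the raw request URL (the base endpoint and helper name are hypothetical; the path and parameter are keystone's v3 API):

```python
from urllib.parse import urlencode

def role_assignments_url(base="http://keystone:5000/v3", include_names=True):
    # GET /v3/role_assignments?include_names=true returns entity names
    # alongside IDs for user/project/role (keystone API >= 3.6).
    qs = urlencode({"include_names": str(include_names).lower()})
    return "%s/role_assignments?%s" % (base, qs)

print(role_assignments_url())
# → http://keystone:5000/v3/role_assignments?include_names=true
```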
[Yahoo-eng-team] [Bug 1719492] Re: admin_token_auth not found
Yeah, this looks like a duplicate of the bug that Craig posted.

** Changed in: keystone
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1719492

Title:
  admin_token_auth not found

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  In /etc/keystone/keystone-paste.ini, admin_token_auth is not found.
  Please remove it from the documentation.

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including
        example: input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

  - Ask OpenStack: http://ask.openstack.org
  - The mailing list: http://lists.openstack.org
  - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
  SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
  Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-verify-ubuntu.rst
  URL: https://docs.openstack.org/keystone/pike/install/keystone-verify-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1719492/+subscriptions
[Yahoo-eng-team] [Bug 1729445] [NEW] Potential IndexError if using the CachingScheduler and not getting alternates
Public bug reported:

If we're using the CachingScheduler and we're not getting alternates,
maybe because conductor is old, we'll get an IndexError because we're
not returning a list of lists of selected hosts, we're just returning a
flat list of selected hosts here:

https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/scheduler/filter_scheduler.py#L342

And the IndexError would happen here:

https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/scheduler/filter_scheduler.py#L120

We obviously don't have a test covering this scenario.

** Affects: nova
   Importance: Medium
   Status: Triaged

** Tags: scheduler

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1729445

Title:
  Potential IndexError if using the CachingScheduler and not getting
  alternates

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  If we're using the CachingScheduler and we're not getting alternates,
  maybe because conductor is old, we'll get an IndexError because we're
  not returning a list of lists of selected hosts, we're just returning
  a flat list of selected hosts, and the IndexError happens when the
  caller indexes into the result. We obviously don't have a test
  covering this scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1729445/+subscriptions
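The shape mismatch is easy to demonstrate with a toy stand-in (this is not nova's code; `HostState` and the shapes are illustrative):

```python
class HostState:
    """Stand-in for a scheduler host object."""
    def __init__(self, name):
        self.name = name

hosts = [HostState("compute1"), HostState("compute2")]

# Expected shape: one inner list per requested instance, where each
# inner list is [selected_host, alternate1, ...].
nested = [[h] for h in hosts]
chosen = [row[0] for row in nested]          # selections[i][0] works
assert [h.name for h in chosen] == ["compute1", "compute2"]

# Buggy shape: a flat list of hosts. The same selections[i][0] access
# now indexes into a host object instead of an inner list and blows up
# (an IndexError in nova's case, a TypeError in this toy).
flat = list(hosts)
try:
    flat[0][0]
except TypeError:
    pass
```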
[Yahoo-eng-team] [Bug 1728722] Re: Resize test fails in conductor during migration/instance allocation swap: "Unable to replace resource claim on source host"
Reviewed: https://review.openstack.org/516708
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=62d35009755a45c39a0b1cdc4c69791f469e469e
Submitter: Zuul
Branch: master

commit 62d35009755a45c39a0b1cdc4c69791f469e469e
Author: Dan Smith
Date: Tue Oct 31 07:46:36 2017 -0700

    Make put_allocations() retry on concurrent update

    This adds a retries decorator to the scheduler report client and
    modifies put_allocations() so that it will detect a concurrent
    update, raising the Retry exception to trigger the decorator. This
    should be usable by other methods in the client easily, but this
    patch only modifies put_allocations() to fix the bug.

    Change-Id: Ic32a54678dd413668f02e77d5e6c4195664ac24c
    Closes-Bug: #1728722

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728722

Title:
  Resize test fails in conductor during migration/instance allocation
  swap: "Unable to replace resource claim on source host"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Resize tests are intermittently failing in the gate:

  http://logs.openstack.org/96/516396/1/check/legacy-tempest-dsvm-py35/ecb9db4/logs/screen-n-super-cond.txt.gz?level=TRACE#_Oct_30_18_01_18_003148

  Oct 30 18:01:18.003148 ubuntu-xenial-inap-mtl01-586035 nova-conductor[22452]:
  ERROR nova.conductor.tasks.migrate [None req-2818e7b7-6881-4cfb-ae79-1816cb948748
  tempest-ListImageFiltersTestJSON-1403553182 tempest-ListImageFiltersTestJSON-1403553182]
  [instance: f5aec132-8a62-47a5-a967-8e5d18a9c6f8] Unable to replace resource claim
  on source host ubuntu-xenial-inap-mtl01-586035 node ubuntu-xenial-inap-mtl01-586035
  for instance

  The request in the placement logs starts here:

  http://logs.openstack.org/96/516396/1/check/legacy-tempest-dsvm-py35/ecb9db4/logs/screen-placement-api.txt.gz#_Oct_30_18_01_16_940644

  Oct 30 18:01:17.993287 ubuntu-xenial-inap-mtl01-586035 devstack@placement-api.service[15936]:
  DEBUG nova.api.openstack.placement.wsgi_wrapper [None req-7eec8dd2-f65c-43fa-b3df-cdf7a236aa03
  service placement] Placement API returning an error response: Inventory changed while
  attempting to allocate: Another thread concurrently updated the data. Please retry
  your update {{(pid=15938) call_func /opt/stack/new/nova/nova/api/openstack/placement/wsgi_wrapper.py:31}}

  Oct 30 18:01:17.994558 ubuntu-xenial-inap-mtl01-586035 devstack@placement-api.service[15936]:
  INFO nova.api.openstack.placement.requestlog [None req-7eec8dd2-f65c-43fa-b3df-cdf7a236aa03
  service placement] 198.72.124.85 "PUT /placement/allocations/52b215a6-0d60-4fcc-8389-2645ffb22562"
  status: 409 len: 305 microversion: 1.8

  The error from placement is a bit misleading. It's probably not that
  inventory has changed, but that allocations have changed in the
  meantime. Since this is a single-node environment, capacity changed
  and conductor needs to retry, just like the scheduler does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728722/+subscriptions
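The fix wraps the call in a retry decorator that re-runs it when a concurrent-update conflict (the 409 above) is detected. A generic sketch of that pattern (the names `Retry`, `retries` and the attempt count are illustrative, not nova's actual implementation):

```python
import functools

class Retry(Exception):
    """Raised by the wrapped call to request another attempt."""

def retries(max_attempts=3):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Retry:
                    if attempt == max_attempts - 1:
                        raise  # give up after the last attempt
        return wrapper
    return decorator

calls = {"n": 0}

@retries(max_attempts=3)
def put_allocations():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Retry  # simulate placement returning 409 (concurrent update)
    return True

assert put_allocations() is True
assert calls["n"] == 3
```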
[Yahoo-eng-team] [Bug 1702454] Re: Transforming the RequestSpec object into legacy dicts doesn't support the requested_destination field
Reviewed: https://review.openstack.org/481116
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=7317afd3201c4f9f21d3f37dab08423dae236b17
Submitter: Zuul
Branch: master

commit 7317afd3201c4f9f21d3f37dab08423dae236b17
Author: Sylvain Bauza
Date: Thu Jul 6 17:17:31 2017 +0200

    Pass requested_destination in filter_properties

    When we added the requested_destination field to the RequestSpec
    object in Newton, we forgot to pass it to the legacy dictionary
    when using scheduler methods that don't yet support the NovaObject.
    As a consequence, when we transformed the RequestSpec object into a
    tuple of (request_spec, filter_props) dicts and then rehydrated a
    new RequestSpec object from those dicts, the newly created object
    did not keep the requested_destination field from the original.

    Change-Id: Iba0b88172e9a3bfd4f216dd364d70f7e01c60ee2
    Closes-Bug: #1702454

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1702454

Title:
  Transforming the RequestSpec object into legacy dicts doesn't support
  the requested_destination field

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  We added a new field to the RequestSpec object called
  'requested_destination' and began using it for evacuations in
  https://review.openstack.org/#/c/315572/ (Newton).

  That object was transformed into legacy dictionaries (called
  "filter_properties" and "request_spec") before being rehydrated for
  the rebuild_instance() method in the conductor service. When
  transforming, however, we forgot the 'requested_destination' field,
  so when we called the scheduler we never used that field.

  That bug was fixed implicitly by
  https://review.openstack.org/#/c/469037/, which is now merged in
  master, but the issue is still there in the stable branches, and if
  you need to use the legacy methods, you won't have it. As a
  consequence, the feature to pass a destination for evacuation does
  not work in Newton and Ocata. Fortunately, since we didn't transform
  the object into dicts before calling the scheduler for
  live-migrations, it does work for that action.

  A proper resolution would be to make sure that we pass the
  requested_destination field into 'filter_properties' so that when
  transforming back into an object, we set the field again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1702454/+subscriptions
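The failure mode is a lossy serialize/deserialize round trip. A minimal illustration with plain dicts (the helper names are hypothetical; only the field name mirrors the bug):

```python
def to_legacy_filter_properties(spec):
    """Serialize a spec-like dict into legacy filter_properties."""
    props = {"scheduler_hints": spec.get("scheduler_hints", {})}
    # Before the fix, requested_destination was simply never copied out
    # here, so the rehydrated object silently lost it. The fix carries
    # the field through the legacy dict:
    if spec.get("requested_destination"):
        props["requested_destination"] = spec["requested_destination"]
    return props

def from_legacy(props):
    """Rehydrate a spec-like dict from legacy filter_properties."""
    return {
        "scheduler_hints": props.get("scheduler_hints", {}),
        "requested_destination": props.get("requested_destination"),
    }

spec = {"scheduler_hints": {}, "requested_destination": {"host": "cmp-1"}}
roundtripped = from_legacy(to_legacy_filter_properties(spec))
assert roundtripped["requested_destination"] == {"host": "cmp-1"}
```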
[Yahoo-eng-team] [Bug 1671815] Re: Can not use custom network interfaces with stable/newton Ironic
Newton is EOL now; I guess we can close this.

** Changed in: ironic
   Status: In Progress => Won't Fix

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1671815

Title:
  Can not use custom network interfaces with stable/newton Ironic

Status in Ironic:
  Won't Fix
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When using network interfaces in Ironic, nova shouldn't bind the port
  that it creates, so that the Ironic network interface can do it later
  in the process. Nova should only bind the port for the flat network
  interface, for backwards compatibility with stable/mitaka. The logic
  for this in the Ironic virt driver is incorrect and will bind the
  port for any Ironic network interface except the neutron one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1671815/+subscriptions
[Yahoo-eng-team] [Bug 1729363] Re: undercloud deployment fails due to database table exists
Looking again, I found an earlier issue:

2017-11-01 13:34:23,197 INFO: + puppet apply --detailed-exitcodes /etc/puppet/manifests/puppet-stack-config.pp
2017-11-01 13:34:26,677 INFO: Warning: Facter: Fact resolution fact='systemd_internal_services', resolution='' resolved to an invalid value: Expected disabled to be one of [Integer, Float, TrueClass, FalseClass, NilClass, String, Array, Hash], but was Symbol
2017-11-01 13:34:26,699 INFO: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
2017-11-01 13:34:26,862 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend
2017-11-01 13:34:26,942 INFO: Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
2017-11-01 13:34:26,942 INFO: (file & line not available)
2017-11-01 13:34:27,158 INFO: Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend
2017-11-01 13:34:27,360 INFO: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/rabbitmq/manifests/install/rabbitmqadmin.pp", 37]:["/etc/puppet/modules/rabbitmq/manifests/init.pp", 314]
2017-11-01 13:34:27,360 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:25:in `deprecation')
2017-11-01 13:34:27,479 INFO: Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.
2017-11-01 13:34:27,571 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp", 97]:["/etc/puppet/manifests/puppet-stack-config.pp", 73]
2017-11-01 13:34:27,571 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:25:in `deprecation')
2017-11-01 13:34:27,635 INFO: Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
2017-11-01 13:34:27,635 INFO: (file & line not available)
2017-11-01 13:34:27,825 INFO: Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
2017-11-01 13:34:27,825 INFO: (file & line not available)
2017-11-01 13:34:28,194 INFO: Warning: This method is deprecated, please use the stdlib validate_legacy function, with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/keystone/manifests/db/mysql.pp", 63]:
2017-11-01 13:34:28,194 INFO: (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:25:in `deprecation')
2017-11-01 13:34:28,213 INFO: Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules

With that, I believe it doesn't affect neutron anymore; it looks like a puppet issue.

** Changed in: neutron
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1729363

Title:
  undercloud deployment fails due to database table exists

Status in neutron:
  Invalid
Status in tripleo:
  New

Bug description:
  openstack-neutron-common-12.0.0-0.20171031133005.c4ab792.el7.centos.noarch
  openstack-tripleo-heat-templates-8.0.0-0.20171101050604.d6a2160.el7.centos.noarch

  I ran: bash devmode.sh --no-gate --ovb -d -w /var/tmp/tripleo_local

  It fails with this error:

  Wednesday 01 November 2017 14:34:04 +0100 (0:00:08.612) 1:01:58.659
  fatal: [undercloud]: FAILED! => {"changed": true, "cmd": "set -o pipefail && /home/stack/undercloud-install.sh 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S |\"), $0; fflush(); }' > /home/stack/undercloud_install.log", "delta": "0:54:11.019567", "end": "2017-11-01 14:28:15.963100", "failed": true, "rc": 1, "start": "2017-11-01 13:34:04.943533", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}

  PLAY RECAP *
  localhost  : ok=28 changed=19 unreachable=0 failed=0
  undercloud : ok=57 changed=34 unreachable=0 failed=1

  Logs were not
[Yahoo-eng-team] [Bug 1724520] Re: nova-consoleauth failed after restart jujud-machine-0
** Also affects: juju
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724520

Title:
  nova-consoleauth failed after restart jujud-machine-0

Status in juju:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  On an HA environment, the 3x nova-consoleauth services failed after
  restarting the juju agent jujud-machine-0. There is not much useful
  information in /var/log/nova/nova-consoleauth.log: no ERRORs and no
  failures.

  I checked the status of the nova-consoleauth services:

  $ sudo systemctl status nova-consoleauth.service
  http://pastebin.ubuntu.com/25764904/

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju/+bug/1724520/+subscriptions
[Yahoo-eng-team] [Bug 1728900] Re: Some link urls about gerrit is invalid
As Colin pointed out, this isn't a code bug; the gerrit topic URLs in
the whiteboard can be updated in the blueprints in Launchpad if someone
cares to update them.

** Changed in: nova
   Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728900

Title:
  Some link urls about gerrit is invalid

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In the blueprint web page, the "Gerrit topic" link URL pattern is
  'https://review.openstack.org/#q,topic:\1,n,z', which is no longer
  valid. The current valid pattern seems to be
  'https://review.openstack.org/#/q/topic:\1'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728900/+subscriptions
[Yahoo-eng-team] [Bug 1729294] Re: debug message not correct in inject_data
This looks correct to me; I don't really understand why you think this
is a bug:

    LOG.debug('Checking root disk injection %(info)s',
              info=str(injection_info), instance=instance)

** Changed in: nova
   Status: New => Opinion
** Changed in: nova
   Importance: Undecided => Low

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1729294

Title:
  debug message not correct in inject_data

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When creating an instance with files to inject, nova uses the
  _inject_data method, which produces this incorrect debug message in
  nova-compute.log:

  2017-11-01 17:38:15.083 26293 DEBUG nova.virt.libvirt.driver [req-278573ea-702c-4c4a-8b9d-ff25dd2af764 admin admin] [instance: 20e9bd26-028e-4051-bd99-ef30cb39228d] Checking root disk injection %(info)s _inject_data /opt/stack/nova/nova/virt/libvirt/driver.py:3160

  I guess it attempts to show all injection info, such as network_info,
  admin_pass and files, so it should be fixed in _inject_data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1729294/+subscriptions
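For what it's worth, the literal %(info)s in the logged line is exactly what %-style interpolation produces when the mapping is passed as a keyword instead of positionally (nova uses oslo.log, which tolerates extra keywords like instance=, so the mistake is silent there; the substitution rule itself is shown below with plain string formatting):

```python
# %-style log messages substitute from a positional mapping argument;
# a keyword like info=... is never used for substitution.
msg = 'Checking root disk injection %(info)s'

injection_info = {'files': 1}

# Right: mapping passed positionally, so %(info)s is substituted.
print(msg % {'info': str(injection_info)})

# The buggy call is equivalent to emitting the format string untouched,
# which is why the literal '%(info)s' shows up in nova-compute.log.
print(msg)
```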
[Yahoo-eng-team] [Bug 1729371] Re: ResourceTracker races to delete instance allocations before instance is mapped to a cell
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Unable%20to%20find%20existing%20allocations%20for%20instance%5C%22%20AND%20tags%3A%5C%22screen-n-super-cond.txt%5C%22=7d ** Also affects: nova/pike Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1729371 Title: ResourceTracker races to delete instance allocations before instance is mapped to a cell Status in OpenStack Compute (nova): Triaged Status in OpenStack Compute (nova) pike series: Triaged Bug description: We hit this in queens CI where we go to swap instance allocations to the migration uuid during a resize and the instance allocations on the source node are not found, which shouldn't happen: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm-py35/4d8d6a3/logs/screen-n-super-cond.txt.gz#_Oct_31_23_18_04_391235 Oct 31 23:18:04.391235 ubuntu-xenial-rax-iad-635032 nova-conductor[22376]: ERROR nova.conductor.tasks.migrate [None req-2bd7178b-307e-4342-a156-f9645b7f75a5 tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] [instance: 87329b8a-0fa5-4467-b69e-6c43d4633f54] Unable to find existing allocations for instance Oct 31 23:18:04.421480 ubuntu-xenial-rax-iad-635032 nova-conductor[22376]: WARNING nova.scheduler.utils [None req-2bd7178b-307e-4342-a156-f9645b7f75a5 tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] Failed to compute_task_migrate_server: Instance 87329b8a-0fa5-4467-b69e-6c43d4633f54 is unacceptable: Instance has no source node allocation: nova.exception.InstanceUnacceptable: Instance 87329b8a-0fa5-4467-b69e-6c43d4633f54 is unacceptable: Instance has no source node allocation Looking in Placement for that instance, its allocations are created by the scheduler here: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- 
py35/4d8d6a3/logs/screen-placement-api.txt.gz#_Oct_31_23_18_00_805083 Oct 31 23:18:00.637846 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.requestlog [None req-d285a5a5-704b-4f46-98d5-f6e5ae7f6b4f service placement] Starting request: 104.239.175.193 "PUT /placement/allocations/87329b8a-0fa5-4467-b69e-6c43d4633f54" {{(pid=15780) __call__ /opt/stack/new/nova/nova/api/openstack/placement/requestlog.py:38}} Oct 31 23:18:00.684152 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.handlers.allocation [None req-d285a5a5-704b-4f46-98d5-f6e5ae7f6b4f service placement] Successfully wrote allocations AllocationList[Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=23,project_id='d349ceecbff744a2943294be5bb7e427',resource_class='MEMORY_MB',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=64,user_id='d44d813424704df8996b7d77840283c9'), Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=24,project_id='d349ceecbff744a2943294be5bb7e427',resource_class='VCPU',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=1,user_id='d44d813424704df8996b7d77840283c9')] {{(pid=15780) _set_allocations /opt/stack/new/nova/nova/api/openstack/placement/handlers/allocation.py:249}} And shortly after that we see them deleted: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- py35/4d8d6a3/logs/screen-placement-api.txt.gz#_Oct_31_23_18_00_805083 Oct 31 23:18:00.805083 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.requestlog [None req-1fc99e13-67ca-4180-9a7f-d3a01a219b15 service placement] Starting request: 104.239.175.193 "DELETE /placement/allocations/87329b8a-0fa5-4467-b69e-6c43d4633f54" {{(pid=15780) __call__ /opt/stack/new/nova/nova/api/openstack/placement/requestlog.py:38}} Oct 31 23:18:00.814342 ubuntu-xenial-rax-iad-635032 
devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.handlers.allocation [None req-1fc99e13-67ca-4180-9a7f-d3a01a219b15 service placement] Successfully deleted allocations AllocationList[Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=,project_id=,resource_class='MEMORY_MB',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=64,user_id=), Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=,project_id=,resource_class='VCPU',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=1,user_id=)] {{(pid=15780) delete_allocations /opt/stack/new/nova/nova/api/openstack/placement/handlers/allocation.py:307}} Looking at the compute service logs, it looks like what is happening is the update_available_resource periodic task on the compute is running in between the time that the allocations are
[Yahoo-eng-team] [Bug 1729371] [NEW] ResourceTracker races to delete instance allocations before instance is mapped to a cell
Public bug reported: We hit this in queens CI where we go to swap instance allocations to the migration uuid during a resize and the instance allocations on the source node are not found, which shouldn't happen: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm-py35/4d8d6a3/logs/screen-n-super-cond.txt.gz#_Oct_31_23_18_04_391235 Oct 31 23:18:04.391235 ubuntu-xenial-rax-iad-635032 nova-conductor[22376]: ERROR nova.conductor.tasks.migrate [None req-2bd7178b-307e-4342-a156-f9645b7f75a5 tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] [instance: 87329b8a-0fa5-4467-b69e-6c43d4633f54] Unable to find existing allocations for instance Oct 31 23:18:04.421480 ubuntu-xenial-rax-iad-635032 nova-conductor[22376]: WARNING nova.scheduler.utils [None req-2bd7178b-307e-4342-a156-f9645b7f75a5 tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] Failed to compute_task_migrate_server: Instance 87329b8a-0fa5-4467-b69e-6c43d4633f54 is unacceptable: Instance has no source node allocation: nova.exception.InstanceUnacceptable: Instance 87329b8a-0fa5-4467-b69e-6c43d4633f54 is unacceptable: Instance has no source node allocation Looking in Placement for that instance, its allocations are created by the scheduler here: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm-py35/4d8d6a3/logs/screen-placement-api.txt.gz#_Oct_31_23_18_00_805083 Oct 31 23:18:00.637846 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.requestlog [None req-d285a5a5-704b-4f46-98d5-f6e5ae7f6b4f service placement] Starting request: 104.239.175.193 "PUT /placement/allocations/87329b8a-0fa5-4467-b69e-6c43d4633f54" {{(pid=15780) __call__ /opt/stack/new/nova/nova/api/openstack/placement/requestlog.py:38}} Oct 31 23:18:00.684152 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.handlers.allocation [None req-d285a5a5-704b-4f46-98d5-f6e5ae7f6b4f 
service placement] Successfully wrote allocations AllocationList[Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=23,project_id='d349ceecbff744a2943294be5bb7e427',resource_class='MEMORY_MB',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=64,user_id='d44d813424704df8996b7d77840283c9'), Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=24,project_id='d349ceecbff744a2943294be5bb7e427',resource_class='VCPU',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=1,user_id='d44d813424704df8996b7d77840283c9')] {{(pid=15780) _set_allocations /opt/stack/new/nova/nova/api/openstack/placement/handlers/allocation.py:249}} And shortly after that we see them deleted: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- py35/4d8d6a3/logs/screen-placement-api.txt.gz#_Oct_31_23_18_00_805083 Oct 31 23:18:00.805083 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.requestlog [None req-1fc99e13-67ca-4180-9a7f-d3a01a219b15 service placement] Starting request: 104.239.175.193 "DELETE /placement/allocations/87329b8a-0fa5-4467-b69e-6c43d4633f54" {{(pid=15780) __call__ /opt/stack/new/nova/nova/api/openstack/placement/requestlog.py:38}} Oct 31 23:18:00.814342 ubuntu-xenial-rax-iad-635032 devstack@placement-api.service[15777]: DEBUG nova.api.openstack.placement.handlers.allocation [None req-1fc99e13-67ca-4180-9a7f-d3a01a219b15 service placement] Successfully deleted allocations AllocationList[Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=,project_id=,resource_class='MEMORY_MB',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=64,user_id=), Allocation(consumer_id=87329b8a-0fa5-4467-b69e-6c43d4633f54,id=,project_id=,resource_class='VCPU',resource_provider=ResourceProvider(ec18595d-7007-47c9-bf13-38c8cf8a8bb0),used=1,user_id=)] {{(pid=15780) delete_allocations 
/opt/stack/new/nova/nova/api/openstack/placement/handlers/allocation.py:307}} Looking at the compute service logs, it looks like what is happening is the update_available_resource periodic task on the compute is running in between the time that the allocations are created on the compute node via the scheduler and before the instance is created in the cell database: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- py35/4d8d6a3/logs/screen-n-cpu.txt.gz#_Oct_31_23_18_00_165850 http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- py35/4d8d6a3/logs/screen-n-cpu.txt.gz#_Oct_31_23_18_00_780729 Oct 31 23:18:00.780729 ubuntu-xenial-rax-iad-635032 nova-compute[23251]: DEBUG nova.compute.resource_tracker [None req-9743bef5-318d-4e7e-a71b-26c62cd0af2d tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] Instance 87329b8a-0fa5-4467-b69e-6c43d4633f54 has been deleted (perhaps locally). Deleting allocations that
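The race described above is a classic check-then-act window. The sketch below is entirely hypothetical stand-in code (dicts and sets in place of placement, the cell database, and the compute's `update_available_resource` periodic task), showing how "instance not in my cell DB" gets misread as "instance deleted" while scheduling is still in flight:

```python
# Minimal stand-ins: placement allocations and the cell database.
allocations = {}   # consumer uuid -> allocation
cell_db = set()    # instance uuids already created in the cell

def scheduler_claim(uuid):
    # Step 1: the scheduler claims resources in placement.
    allocations[uuid] = {"VCPU": 1, "MEMORY_MB": 64}

def map_to_cell(uuid):
    # Step 2: conductor creates the instance in the cell database.
    cell_db.add(uuid)

def periodic_cleanup():
    # The compute's periodic task: deletes allocations for instances it
    # cannot find locally -- wrong during the window between steps 1 and 2.
    for consumer in list(allocations):
        if consumer not in cell_db:
            del allocations[consumer]  # the scheduler's claim is lost

uuid = "87329b8a-0fa5-4467-b69e-6c43d4633f54"
scheduler_claim(uuid)
periodic_cleanup()   # fires in between, as in the CI logs above
map_to_cell(uuid)
print(allocations.get(uuid))  # prints: None -> later "no source node allocation"
```

The fix has to make the cleanup distinguish "never existed / deleted" from "not yet mapped", e.g. by consulting the instance mapping rather than only the local cell view.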
[Yahoo-eng-team] [Bug 1729363] [NEW] udercloud deployment fails due to database table exists
Public bug reported: openstack-neutron-common-12.0.0-0.20171031133005.c4ab792.el7.centos.noarch openstack-tripleo-heat-templates-8.0.0-0.20171101050604.d6a2160.el7.centos.noarch I ran bash devmode.sh --no-gate --ovb -d -w /var/tmp/tripleo_local It fails with this error: Wednesday 01 November 2017 14:34:04 +0100 (0:00:08.612) 1:01:58.659 fatal: [undercloud]: FAILED! => {"changed": true, "cmd": "set -o pipefail && /home/stack/undercloud-install.sh 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S |\"), $0; fflush(); }' > /home/stack/undercloud_install.log", "delta": "0:54:11.019567", "end": "2017-11-01 14:28:15.963100", "failed": true, "rc": 1, "start": "2017-11-01 13:34:04.943533", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []} PLAY RECAP * localhost : ok=28 changed=19 unreachable=0failed=0 undercloud : ok=57 changed=34 unreachable=0failed=1 Logs were not really helpful, pointing me at logs pointing to the same for more info. Finally: oslo_db.exception.DBError: (pymysql.err.InternalError) (1050, u"Table 'qos_policies_default' already exists") [SQL: u'\nCREATE TABLE qos_policies_default (\n\tqos_policy_id VARCHAR(36) NOT NULL, \n\tproject_id VARCHAR(255) NOT NULL, \n\tPRIMARY KEY (project_id), \n\tFOREIGN KEY(qos_policy_id) REFERENCES qos_policies (id) ON DELETE CASCADE\n)ENGINE=InnoDB\n\n'] ** Affects: neutron Importance: Undecided Status: New ** Affects: tripleo Importance: High Status: New ** Changed in: tripleo Importance: Undecided => High ** Changed in: tripleo Milestone: None => queens-2 ** Also affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. 
https://bugs.launchpad.net/bugs/1729363 Title: udercloud deployment fails due to database table exists Status in neutron: New Status in tripleo: New Bug description: openstack-neutron-common-12.0.0-0.20171031133005.c4ab792.el7.centos.noarch openstack-tripleo-heat-templates-8.0.0-0.20171101050604.d6a2160.el7.centos.noarch I ran bash devmode.sh --no-gate --ovb -d -w /var/tmp/tripleo_local It fails with this error: Wednesday 01 November 2017 14:34:04 +0100 (0:00:08.612) 1:01:58.659 fatal: [undercloud]: FAILED! => {"changed": true, "cmd": "set -o pipefail && /home/stack/undercloud-install.sh 2>&1 | awk '{ print strftime(\"%Y-%m-%d %H:%M:%S |\"), $0; fflush(); }' > /home/stack/undercloud_install.log", "delta": "0:54:11.019567", "end": "2017-11-01 14:28:15.963100", "failed": true, "rc": 1, "start": "2017-11-01 13:34:04.943533", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []} PLAY RECAP * localhost : ok=28 changed=19 unreachable=0failed=0 undercloud : ok=57 changed=34 unreachable=0failed=1 Logs were not really helpful, pointing me at logs pointing to the same for more info. Finally: oslo_db.exception.DBError: (pymysql.err.InternalError) (1050, u"Table 'qos_policies_default' already exists") [SQL: u'\nCREATE TABLE qos_policies_default (\n\tqos_policy_id VARCHAR(36) NOT NULL, \n\tproject_id VARCHAR(255) NOT NULL, \n\tPRIMARY KEY (project_id), \n\tFOREIGN KEY(qos_policy_id) REFERENCES qos_policies (id) ON DELETE CASCADE\n)ENGINE=InnoDB\n\n'] To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1729363/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
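The failure above is a non-idempotent CREATE TABLE being executed against a database that already has the table. A minimal sketch of the idempotent-DDL guard using stdlib sqlite3 (the real neutron migration is alembic-managed, and the FOREIGN KEY from the error message is omitted here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# IF NOT EXISTS makes the DDL safe to run twice; without it, the second
# execution raises "table qos_policies_default already exists", which is
# the 1050 error the undercloud install hit.
ddl = """CREATE TABLE IF NOT EXISTS qos_policies_default (
    qos_policy_id VARCHAR(36) NOT NULL,
    project_id VARCHAR(255) NOT NULL,
    PRIMARY KEY (project_id)
)"""
conn.execute(ddl)
conn.execute(ddl)  # second run is a no-op instead of an error

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
```

In an alembic-managed schema the usual fix is not to guard the DDL but to ensure the migration only runs once per database, so the guard above is illustration, not the proposed patch.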
[Yahoo-eng-team] [Bug 1729356] [NEW] queens: resize fails with "Instance image invalid" 400 error when actually migration failed to swap allocations in placement
Public bug reported: This failed in a non-obvious way: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm-py35/4d8d6a3/logs/screen-n-super-cond.txt.gz#_Oct_31_23_18_04_391235 Oct 31 23:18:04.391235 ubuntu-xenial-rax-iad-635032 nova-conductor[22376]: ERROR nova.conductor.tasks.migrate [None req-2bd7178b-307e-4342-a156-f9645b7f75a5 tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] [instance: 87329b8a-0fa5-4467-b69e-6c43d4633f54] Unable to find existing allocations for instance That raises InstanceUnacceptable which extends Invalid which is handled in the REST API controller here: https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/api/openstack/compute/servers.py#L818 And you get this "Instance invalid image" message which is misleading: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm-py35/4d8d6a3/logs/screen-n-api.txt.gz#_Oct_31_23_18_04_588849 Oct 31 23:18:04.588849 ubuntu-xenial-rax-iad-635032 devstack@n-api.service[14381]: INFO nova.api.openstack.wsgi [None req-2bd7178b-307e-4342-a156-f9645b7f75a5 tempest-MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] HTTP exception thrown: Invalid instance image. Why the allocations are not found is a separate issue. This bug needs to deal with the misleading message coming back out of the API - it should be a 500 error because the user can't do anything about this (unless the instance is concurrently deleted or something, but that's not what happened here). ** Affects: nova Importance: Medium Status: New ** Tags: api resize -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). 
https://bugs.launchpad.net/bugs/1729356 Title: queens: resize fails with "Instance image invalid" 400 error when actually migration failed to swap allocations in placement Status in OpenStack Compute (nova): New Bug description: This failed in a non-obvious way: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- py35/4d8d6a3/logs/screen-n-super-cond.txt.gz#_Oct_31_23_18_04_391235 Oct 31 23:18:04.391235 ubuntu-xenial-rax-iad-635032 nova- conductor[22376]: ERROR nova.conductor.tasks.migrate [None req- 2bd7178b-307e-4342-a156-f9645b7f75a5 tempest- MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] [instance: 87329b8a-0fa5-4467-b69e-6c43d4633f54] Unable to find existing allocations for instance That raises InstanceUnacceptable which extends Invalid which is handled in the REST API controller here: https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/api/openstack/compute/servers.py#L818 And you get this "Instance invalid image" message which is misleading: http://logs.openstack.org/08/516708/3/gate/legacy-tempest-dsvm- py35/4d8d6a3/logs/screen-n-api.txt.gz#_Oct_31_23_18_04_588849 Oct 31 23:18:04.588849 ubuntu-xenial-rax-iad-635032 devstack@n-api.service[14381]: INFO nova.api.openstack.wsgi [None req- 2bd7178b-307e-4342-a156-f9645b7f75a5 tempest- MigrationsAdminTest-834796780 tempest-MigrationsAdminTest-834796780] HTTP exception thrown: Invalid instance image. Why the allocations are not found is a separate issue. This bug needs to deal with the misleading message coming back out of the API - it should be a 500 error because the user can't do anything about this (unless the instance is concurrently deleted or something, but that's not what happened here). 
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1729356/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
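The mapping problem can be sketched with toy exception classes; `resize_handler` and the returned tuples below are hypothetical stand-ins for the REST API controller's except clauses, not nova code. The point is that a broad `except Invalid` flattens every subclass, including `InstanceUnacceptable`, into the same misleading 400, so the specific case has to be caught first:

```python
class Invalid(Exception):
    pass

class InstanceUnacceptable(Invalid):
    pass

def resize_handler(error):
    # Toy stand-in for the controller's exception mapping.
    try:
        raise error
    except InstanceUnacceptable:
        # Specific first: the user cannot fix a lost source-node allocation,
        # so (as the report argues) this should surface as a server error.
        return (500, "Internal error during resize")
    except Invalid:
        # The catch-all that currently produces the misleading message.
        return (400, "Invalid instance image.")

print(resize_handler(InstanceUnacceptable("no source node allocation")))
print(resize_handler(Invalid("actually a bad image")))
```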
[Yahoo-eng-team] [Bug 1611171] Re: re-runs self via sudo
Reviewed: https://review.openstack.org/513665 Committed: https://git.openstack.org/cgit/openstack/designate/commit/?id=440a67cab18e3ab725383d01b4ed26fa3b1d3da0 Submitter: Zuul Branch: master commit 440a67cab18e3ab725383d01b4ed26fa3b1d3da0 Author: Jens Harbott Date: Fri Oct 20 08:34:18 2017 + Don't attempt to escalate designate-manage privileges Remove code which allowed designate-manage to attempt to escalate privileges so that configuration files can be read by users who normally wouldn't have access, but do have sudo access. Simpler version of [1]. [1] I03063d2af14015e6506f1b6e958f5ff219aa4a87 Closes-Bug: 1611171 Change-Id: I013754da27e9dd13493bee1abfada3fbc2a004c0 ** Changed in: designate Status: In Progress => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1611171 Title: re-runs self via sudo Status in Cinder: Fix Released Status in Designate: Fix Released Status in ec2-api: Fix Released Status in gce-api: Fix Released Status in Manila: In Progress Status in masakari: Fix Released Status in OpenStack Compute (nova): Fix Released Status in OpenStack Compute (nova) newton series: Fix Committed Status in OpenStack Security Advisory: Won't Fix Status in Rally: Fix Released Bug description: Hello, I'm looking through Designate source code to determine if it is appropriate to include in Ubuntu Main. This isn't a full security audit. This looks like trouble, in ./designate/cmd/manage.py:

    def main():
        CONF.register_cli_opt(category_opt)
        try:
            utils.read_config('designate', sys.argv)
            logging.setup(CONF, 'designate')
        except cfg.ConfigFilesNotFoundError:
            cfgfile = CONF.config_file[-1] if CONF.config_file else None
            if cfgfile and not os.access(cfgfile, os.R_OK):
                st = os.stat(cfgfile)
                print(_("Could not read %s. Re-running with sudo") % cfgfile)
                try:
                    os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + sys.argv)
                except Exception:
                    print(_('sudo failed, continuing as if nothing happened'))
            print(_('Please re-run designate-manage as root.'))
            sys.exit(2)

This is an interesting decision -- if the configuration file is _not_ readable by the user in question, give the executing user complete privileges of the user that owns the unreadable file. I'm not a fan of hiding privilege escalation / modifications in programs -- if a user had recently used sudo and thus had the authentication token already stored for their terminal, this 'hidden' use of sudo may be unexpected and unwelcome, especially since it appears that argv from the first call leaks through to the sudo call. Is this intentional OpenStack style? Or unexpected for you guys too? (Feel free to make this public at your convenience.) Thanks To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1611171/+subscriptions
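Stripped of the designate specifics, the snippet quoted in the report boils down to the following pattern. This is a sketch with hypothetical helper names, contrasting the escalation branch the report objects to with the report-and-exit behaviour the merged fix keeps:

```python
import os
import sys

def read_config_or_escalate(cfgfile, argv):
    # The pattern the report objects to: if the config file exists but is
    # not readable, re-exec the *entire* command line under sudo as the
    # file's owner. A cached sudo token makes this silent escalation, and
    # argv (attacker-influenced in the worst case) leaks into the re-exec.
    if not os.access(cfgfile, os.R_OK):
        st = os.stat(cfgfile)
        os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + argv)

def read_config_or_fail(cfgfile):
    # The behaviour after the fix: report the problem and exit, leaving
    # any escalation as an explicit decision by the operator.
    if not os.access(cfgfile, os.R_OK):
        print('Could not read %s. Please re-run designate-manage as root.'
              % cfgfile)
        sys.exit(2)
```

Calling `read_config_or_fail` on an unreadable (or missing) path exits with status 2 instead of re-executing under sudo.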
[Yahoo-eng-team] [Bug 1729294] Re: debug message not correct in inject_data
** Changed in: fuel Status: New => Invalid ** Project changed: fuel => nova ** Changed in: nova Status: Invalid => New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1729294 Title: debug message not correct in inject_data Status in OpenStack Compute (nova): New Bug description: When creating an instance with files to inject, nova uses the _inject_data method, and an incorrect debug message then appears in nova-compute.log: 2017-11-01 17:38:15.083 26293 DEBUG nova.virt.libvirt.driver [req-278573ea-702c-4c4a-8b9d-ff25dd2af764 admin admin] [instance: 20e9bd26-028e-4051-bd99-ef30cb39228d] Checking root disk injection %(info)s _inject_data /opt/stack/nova/nova/virt/libvirt/driver.py:3160 I guess that is meant to show all the injection info, such as network_info, admin_pass and files, so it should be fixed in _inject_data. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1729294/+subscriptions
[Yahoo-eng-team] [Bug 1729294] [NEW] debug message not correct in inject_data
You have been subscribed to a public bug: When creating an instance with files to inject, nova uses the _inject_data method, and an incorrect debug message then appears in nova-compute.log: 2017-11-01 17:38:15.083 26293 DEBUG nova.virt.libvirt.driver [req-278573ea-702c-4c4a-8b9d-ff25dd2af764 admin admin] [instance: 20e9bd26-028e-4051-bd99-ef30cb39228d] Checking root disk injection %(info)s _inject_data /opt/stack/nova/nova/virt/libvirt/driver.py:3160 I guess that is meant to show all the injection info, such as network_info, admin_pass and files, so it should be fixed in _inject_data. ** Affects: nova Importance: Undecided Assignee: guanzy (guanzy) Status: Invalid -- debug message not correct in inject_data https://bugs.launchpad.net/bugs/1729294 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
[Yahoo-eng-team] [Bug 1729224] [NEW] dsid_missing_source: off
Public bug reported: dsid_missing_source: off ** Affects: cloud-init Importance: Undecided Status: New ** Tags: dsid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1729224 Title: dsid_missing_source: off Status in cloud-init: New Bug description: dsid_missing_source: off To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1729224/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1729229] [NEW] router lost ports when creating VM
Public bug reported: In a Pike all-in-one env (created using kolla), every time I create a VM I see OVS reconnecting with neutron-ovs-agent; after the reconnection, the gateway port of a virtual router created beforehand is lost. These are the router's ports before creating the VM.

[root@travel daisy]# ip netns exec qrouter-297ae7fb-6c5c-47e0-8c79-08f526b840e6 ifconfig
lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 1 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qg-2e8399f4-4c: flags=323 mtu 1500 inet 192.168.1.210 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::f816:3eff:fed2:f36e prefixlen 64 scopeid 0x20 ether fa:16:3e:d2:f3:6e txqueuelen 1000 (Ethernet) RX packets 223 bytes 13418 (13.1 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 13 bytes 910 (910.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
qr-843934a0-e1: flags=323 mtu 1450 inet 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255 inet6 fe80::f816:3eff:fe92:ef64 prefixlen 64 scopeid 0x20 ether fa:16:3e:92:ef:64 txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 13 bytes 910 (910.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

These are the router's ports after creating the VM. 
[root@travel daisy]# ip netns exec qrouter-297ae7fb-6c5c-47e0-8c79-08f526b840e6 ifconfig lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 1 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Related ovs-vswitchd log: (port lost at 2017-11-01T07:18:50 ~ 2017-11-01T07:18:51) [root@travel kolla-ansible-pike]# cat /var/lib/docker/volumes/kolla_logs/_data/openvswitch/ovs-vswitchd.log 2017-11-01T07:08:08.610Z|1|vlog|INFO|opened log file /var/log/kolla/openvswitch/ovs-vswitchd.log 2017-11-01T07:08:08.616Z|2|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0 2017-11-01T07:08:08.616Z|3|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 1 2017-11-01T07:08:08.616Z|4|ovs_numa|INFO|Discovered 2 NUMA nodes and 24 CPU cores 2017-11-01T07:08:08.616Z|5|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... 2017-11-01T07:08:08.616Z|6|reconnect|INFO|unix:/run/openvswitch/db.sock: connected 2017-11-01T07:08:08.619Z|7|dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable 2017-11-01T07:08:08.621Z|8|ofproto_dpif|INFO|system@ovs-system: Datapath supports recirculation 2017-11-01T07:08:08.621Z|9|ofproto_dpif|INFO|system@ovs-system: MPLS label stack length probed as 1 2017-11-01T07:08:08.621Z|00010|ofproto_dpif|INFO|system@ovs-system: Datapath does not support truncate action 2017-11-01T07:08:08.621Z|00011|ofproto_dpif|INFO|system@ovs-system: Datapath supports unique flow ids 2017-11-01T07:08:08.621Z|00012|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state 2017-11-01T07:08:08.621Z|00013|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_zone 2017-11-01T07:08:08.621Z|00014|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_mark 2017-11-01T07:08:08.621Z|00015|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_label 
2017-11-01T07:08:08.621Z|00016|ofproto_dpif|INFO|system@ovs-system: Datapath supports ct_state_nat 2017-11-01T07:08:08.694Z|1|ofproto_dpif_upcall(handler1)|INFO|received packet on unassociated datapath port 0 2017-11-01T07:08:08.695Z|00017|bridge|INFO|bridge br-ex: added interface p1p1 on port 1 2017-11-01T07:08:08.699Z|00018|bridge|INFO|bridge br-ex: added interface br-ex on port 65534 2017-11-01T07:08:08.700Z|00019|bridge|INFO|bridge br-ex: using datapath ID 001b213655fc 2017-11-01T07:08:08.700Z|00020|connmgr|INFO|br-ex: added service controller "punix:/var/run/openvswitch/br-ex.mgmt" 2017-11-01T07:08:08.756Z|00021|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.7.2 2017-11-01T07:08:18.757Z|00022|memory|INFO|221336 kB peak resident set size after 10.1 seconds 2017-11-01T07:08:18.757Z|00023|memory|INFO|handlers:17 ports:2 revalidators:7 rules:5 udpif keys:4 2017-11-01T07:10:02.376Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation 2017-11-01T07:10:02.376Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3 2017-11-01T07:10:02.376Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath
[Yahoo-eng-team] [Bug 1729213] [NEW] neutron-ovs-agnet report error when batch delete instances
Public bug reported: I created some instances with the same network and the same security group; the security group's remote security group is set to itself. When I batch delete these instances and the first instance's port is deleted, the other neutron-ovs-agents are notified so they can update the filter rules for the other instances' ports. But the variable "defer_refresh_firewall" is set to true, so the update is not executed immediately. It is triggered during the subsequent instance deletions, by which time those instances' ports are already deleted, so the agent reports an error. 2017-11-01 09:30:29.695 9779 ERROR neutron.agent.linux.openvswitch_firewall.firewall [req-610bc095-2445-4f2e-8a46-181a82087348 - - - - -] Initializing unfiltered port 2c8f784d-1e73-4e4d-874d-65dd32de4e22 that does not exist in ovsdb: Port 2c8f784d-1e73-4e4d-874d-65dd32de4e22 is not managed by this agent..: OVSFWPortNotFound: Port 2c8f784d-1e73-4e4d-874d-65dd32de4e22 is not managed by this agent. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1729213 Title: neutron-ovs-agnet report error when batch delete instances Status in neutron: New Bug description: I created some instances with the same network and the same security group; the security group's remote security group is set to itself. When I batch delete these instances and the first instance's port is deleted, the other neutron-ovs-agents are notified so they can update the filter rules for the other instances' ports. But the variable "defer_refresh_firewall" is set to true, so the update is not executed immediately. It is triggered during the subsequent instance deletions, by which time those instances' ports are already deleted, so the agent reports an error. 
2017-11-01 09:30:29.695 9779 ERROR neutron.agent.linux.openvswitch_firewall.firewall [req-610bc095-2445 -4f2e-8a46-181a82087348 - - - - -] Initializing unfiltered port 2c8f784d-1e73-4e4d-874d-65dd32de4e22 that does not exist in ovsdb: Port 2c8f784d-1e73-4e4d-874d-65dd32de4e22 is not managed by this agent..: OVSFWPortNotFound: Port 2c8f784d-1e73-4e4d-874d-65dd32de4e22 is not managed by this agent. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1729213/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
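A minimal sketch of the deferred-refresh race and the tolerant handling it suggests; all names here are hypothetical stand-ins for the agent's firewall driver, not neutron code. Because remote-group updates are queued (defer_refresh_firewall=true) and only processed on a later agent loop, some queued ports may be gone by then, and "port vanished" should be a normal outcome rather than an ERROR:

```python
class OVSFWPortNotFound(Exception):
    """Stand-in for the exception in the reported traceback."""

ovsdb_ports = {"port-a"}                 # port-b was deleted before the loop ran
deferred_refresh = ["port-a", "port-b"]  # queued by the remote-group notification

def refresh_port(port):
    # Stand-in for rebuilding a port's firewall rules from ovsdb state.
    if port not in ovsdb_ports:
        raise OVSFWPortNotFound(port)
    return "refreshed"

results = {}
for port in deferred_refresh:
    try:
        results[port] = refresh_port(port)
    except OVSFWPortNotFound:
        # Expected during batch deletes: log at debug and move on,
        # instead of the ERROR the agent currently emits.
        results[port] = "skipped (already deleted)"

print(results)
```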