[Yahoo-eng-team] [Bug 1798224] [NEW] DeprecationWarning: The behavior of .best_match for the Accept classes is currently being maintained for backward compatibility, but the method will be deprecated in the future
Public bug reported:

When executing 'tox -e py35', the following deprecation warning is shown. It should be fixed.

2018-10-16 03:36:49.117553 | ubuntu-xenial | {5} nova.tests.unit.api.openstack.compute.test_disk_config.DiskConfigTestCaseV21.test_update_server_override_auto [0.544275s] ... ok
2018-10-16 03:36:49.117626 | ubuntu-xenial |
2018-10-16 03:36:49.117666 | ubuntu-xenial | Captured stderr:
2018-10-16 03:36:49.117703 | ubuntu-xenial | (snipped...)
2018-10-16 03:36:49.118228 | ubuntu-xenial | b'/home/zuul/src/git.openstack.org/openstack/nova/.tox/py35/lib/python3.5/site-packages/webob/acceptparse.py:1379: DeprecationWarning: The behavior of .best_match for the Accept classes is currently being maintained for backward compatibility, but the method will be deprecated in the future, as its behavior is not specified in (and currently does not conform to) RFC 7231.'
2018-10-16 03:36:49.118288 | ubuntu-xenial | b'  DeprecationWarning,'
2018-10-16 03:36:49.118319 | ubuntu-xenial | b''

** Affects: nova
   Importance: Undecided
   Assignee: Takashi NATSUME (natsume-takashi)
   Status: In Progress

** Tags: testing

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1798224

Title: DeprecationWarning: The behavior of .best_match for the Accept classes is currently being maintained for backward compatibility, but the method will be deprecated in the future

Status in OpenStack Compute (nova): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798224/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
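A minimal sketch of the fix direction, assuming the code moves off the deprecated best_match(): WebOb's newer Accept API (WebOb >= 1.8) exposes acceptable_offers(), which ranks offers per RFC 7231. The helper below only mimics that behavior with the stdlib so the example runs without WebOb installed; the function name and the simplified parsing (q parameter only, no wildcard specificity tie-breaking) are illustrative.

```python
# Stdlib-only sketch of RFC 7231 proactive content negotiation -- the
# behavior WebOb's Accept.acceptable_offers() provides as a replacement
# for the deprecated best_match().
def acceptable_offers(accept_header, offers):
    # Parse "type/subtype;q=0.9, type2/subtype2" into (media_type, q).
    prefs = []
    for part in accept_header.split(','):
        fields = [f.strip() for f in part.split(';')]
        media_type, q = fields[0], 1.0
        for field in fields[1:]:
            if field.startswith('q='):
                q = float(field[2:])
        prefs.append((media_type, q))

    def quality(offer):
        best = 0.0
        for media_type, q in prefs:
            if media_type in (offer, '*/*') or (
                    media_type.endswith('/*')
                    and offer.startswith(media_type[:-1])):
                best = max(best, q)
        return best

    # Keep acceptable offers only, highest quality first.
    ranked = [(offer, quality(offer)) for offer in offers]
    return sorted([r for r in ranked if r[1] > 0], key=lambda r: -r[1])

print(acceptable_offers('application/json;q=0.9, text/html',
                        ['application/json', 'text/html']))
# → [('text/html', 1.0), ('application/json', 0.9)]
```

In nova's case the change would be to call req.accept.acceptable_offers([...]) and take the first element (if any) wherever the warning points at a best_match() call.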
[Yahoo-eng-team] [Bug 1794364] Re: 'nova-manage db online_data_migrations' count fail
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Assignee: (unassigned) => iain MacDonnell (imacdonn)

https://bugs.launchpad.net/bugs/1794364

Title: 'nova-manage db online_data_migrations' count fail

Status in Cinder: New
Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) ocata series: Fix Committed
Status in OpenStack Compute (nova) pike series: Fix Committed
Status in OpenStack Compute (nova) queens series: Fix Committed
Status in OpenStack Compute (nova) rocky series: Fix Committed

Bug description:

'nova-manage db online_data_migrations' attempts to display summary counts of migrations "Needed" and "Completed" in a pretty table at the end, but fails to accumulate the totals between successive invocations of _run_migration(), and ends up reporting zeroes.

# nova-manage db online_data_migrations
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Running batches of 50 until complete
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (3090, u"Changing sql mode 'NO_AUTO_CREATE_USER' is deprecated. It will be removed in a future release.")
  result = self._query(query)
2 rows matched query migrate_instances_add_request_spec, 0 migrated
13 rows matched query migrate_quota_limits_to_api_db, 13 migrated
37 rows matched query populate_uuids, 37 migrated
50 rows matched query populate_uuids, 50 migrated
(snipped: "50 rows matched query populate_uuids, 50 migrated" repeated for each batch)
21 rows matched query populate_uuids, 21 migrated
+---------------------------------------------+--------------+-----------+
| Migration                                   | Total Needed | Completed |
+---------------------------------------------+--------------+-----------+
| delete_build_requests_with_no_instance_uuid | 0            | 0         |
| migrate_aggregate_reset_autoincrement       | 0            | 0         |
| migrate_aggregates                          | 0            | 0         |
| migrate_instance_groups_to_api_db           | 0            | 0         |
| migrate_instances_add_request_spec          | 0            | 0         |
| migrate_keypairs_to_api_db                  | 0            | 0         |
| migrate_quota_classes_to_api_db             | 0            | 0         |
| migrate_quota_limits_to_api_db              |
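The counting failure described above is a plain accumulation bug: the per-batch (found, done) results must be added to running totals rather than overwrite them. A stand-alone sketch of the correct accumulation follows; the names (run_migrations, migration_funcs) are illustrative, not nova's actual code, though each callable returns (found, done) for one batch like nova's online data migration functions do.

```python
# Sketch: accumulate per-batch migration counts so the summary table
# reflects the whole run, not just the final (empty) batch.
def run_migrations(migration_funcs, batch_size=50):
    totals = {name: (0, 0) for name in migration_funcs}
    while True:
        ran_any = False
        for name, func in migration_funcs.items():
            found, done = func(batch_size)
            prev_found, prev_done = totals[name]
            # The bug: overwriting totals here instead of accumulating
            # leaves them at the last batch's counts, i.e. zero.
            totals[name] = (prev_found + found, prev_done + done)
            if done:
                ran_any = True
        if not ran_any:
            return totals
```

With this, a migration that processes 50 + 21 rows across two batches reports (71, 71) instead of (0, 0).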
[Yahoo-eng-team] [Bug 1798188] Re: VNC stops working in rolling upgrade by default
One suggestion in IRC today was that we could add a "nova-status upgrade check" which iterates the cell DBs looking for any non-deleted/disabled nova-consoleauth services table records. If a cell has such records, no console_auth_tokens entries in that DB, and [workarounds]/enable_consoleauth is False, the status check should fail (or warn?), because it means that cell is using nova-consoleauth but doesn't have tokens in the DB, so it needs to keep running the consoleauth service.

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => High

https://bugs.launchpad.net/bugs/1798188

Title: VNC stops working in rolling upgrade by default

Status in OpenStack Compute (nova): Confirmed
Status in OpenStack Compute (nova) rocky series: Confirmed

Bug description:

During a rolling upgrade, once the control plane is upgraded and running on Rocky (but computes are still on Queens), the consoles will stop working. It is not obvious why, but it seems that the following is missing:

```
[workarounds]
enable_consoleauth = True
```

There isn't a really obvious document explaining this, leaving the user confused.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798188/+subscriptions
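The suggested check could look roughly like the sketch below. Every helper here (the cell objects, query_services, count_rows) is hypothetical scaffolding standing in for nova's real cell/DB APIs, so this only illustrates the decision logic proposed in IRC, not an actual "nova-status upgrade check" implementation.

```python
# Sketch of the proposed consoleauth upgrade check (illustrative APIs).
def check_consoleauth(cells, enable_consoleauth_workaround):
    """Return ('ok'|'warn', detail) per the logic suggested in the bug."""
    for cell in cells:
        # Any live (non-deleted, non-disabled) nova-consoleauth services?
        services = cell.query_services(binary='nova-consoleauth',
                                       deleted=False, disabled=False)
        if not services:
            continue
        # Cell runs consoleauth; without DB-backed tokens and without the
        # workaround flag, consoles would break mid-upgrade.
        if (cell.count_rows('console_auth_tokens') == 0
                and not enable_consoleauth_workaround):
            return ('warn',
                    'cell %s is using nova-consoleauth but has no '
                    'console_auth_tokens records; keep running the '
                    'consoleauth service or set '
                    '[workarounds]/enable_consoleauth=True' % cell.name)
    return ('ok', '')
```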
[Yahoo-eng-team] [Bug 1798188] [NEW] VNC stops working in rolling upgrade by default
Public bug reported:

During a rolling upgrade, once the control plane is upgraded and running on Rocky (but computes are still on Queens), the consoles will stop working. It is not obvious why, but it seems that the following is missing:

```
[workarounds]
enable_consoleauth = True
```

There isn't a really obvious document explaining this, leaving the user confused.

** Affects: nova
   Importance: High
   Assignee: melanie witt (melwitt)
   Status: Confirmed

** Tags: console upgrade

https://bugs.launchpad.net/bugs/1798188

Title: VNC stops working in rolling upgrade by default

Status in OpenStack Compute (nova): Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798188/+subscriptions
[Yahoo-eng-team] [Bug 1798189] [NEW] cloud-init query: /run/cloud/instance-data.json not regenerated on upgrade
Public bug reported:

/run/cloud-init/instance-data.json & instance-data-sensitive.json are not regenerated on upgrade.

Between cloud-init 18.3-9 -> 18.4.0, cloud-init transitioned from a single sensitive /run/cloud-init/instance-data.json that was read-only root to two separate files: /run/cloud-init/instance-data-sensitive.json (root readable) and /run/cloud-init/instance-data.json (world readable).

The cloud-init query subcommand attempts to read instance-data.json when getuid is non-root, and instance-data-sensitive.json when getuid is root. Since /run/cloud-init/instance-data*.json is only regenerated on reboot, "cloud-init query" after an upgrade emits the following errors:

# as non-root
ubuntu@mybox $ cloud-init query --all
ERROR: Missing instance-data.json file: /run/cloud-init/instance-data.json

# as root user
ubuntu@mybox $ sudo cloud-init query --all
ERROR: Missing instance-data.json file: /run/cloud-init/instance-data-sensitive.json

** Affects: cloud-init
   Importance: Medium
   Status: Confirmed

** Description changed (snipped to the changed lines):
- ubuntu@mybox $ cloud-init query
+ ubuntu@mybox $ cloud-init query --all
- ubuntu@mybox $ sudo cloud-init query
+ ubuntu@mybox $ sudo cloud-init query --all

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: New => Confirmed

** Summary changed:
- cloud-init query: /run/cloud/instance-data.json wrong perms on upgrade
+ cloud-init query: /run/cloud/instance-data.json no regenerated on upgrade

https://bugs.launchpad.net/bugs/1798189

Title: cloud-init query: /run/cloud/instance-data.json not regenerated on upgrade

Status in cloud-init: Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1798189/+subscriptions
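The uid-based file selection the report describes, plus one possible mitigation for the upgrade window, can be sketched as follows. The file paths and error message come from the report; the root-falls-back-to-the-old-file policy is an illustrative assumption, not cloud-init's actual fix.

```python
import os

def instance_data_path(uid, run_dir='/run/cloud-init'):
    """Pick the instance-data file the way `cloud-init query` does."""
    name = ('instance-data-sensitive.json' if uid == 0
            else 'instance-data.json')
    primary = os.path.join(run_dir, name)
    if os.path.exists(primary):
        return primary
    # Upgrade window: only the pre-18.4 single instance-data.json may
    # exist until reboot; let root fall back to it rather than erroring.
    fallback = os.path.join(run_dir, 'instance-data.json')
    if uid == 0 and os.path.exists(fallback):
        return fallback
    raise FileNotFoundError('Missing instance-data.json file: %s' % primary)
```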
[Yahoo-eng-team] [Bug 1798184] [NEW] PY3: python3-ldap does not allow bytes for DN/RDN/field names
Public bug reported:

Under Python 2, python-ldap uses bytes by default. Under Python 3 this is removed and bytes aren't allowed for DN/RDN/field names.

More details are here: http://www.python-ldap.org/en/latest/bytes_mode.html#bytes-mode
and here: https://github.com/python-ldap/python-ldap/blob/python-ldap-3.1.0/Lib/ldap/ldapobject.py#L111

== initial traceback ==

Here's the initial traceback from the failure: https://paste.ubuntu.com/p/67THZb2m5m/

The last bit of the error is:

  File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 314, in _ldap_call
    result = func(*args,**kwargs)
TypeError: simple_bind() argument 1 must be str or None, not bytes

A closer look at func shows:

func=
args=(b'cn=admin,dc=test,dc=com', b'crapper', None, None)

== keystone ldap backend use of python-ldap ==

In simple_bind_s() of keystone's ldap backend, who and cred are encoded as byte strings: https://github.com/openstack/keystone/blob/14.0.0/keystone/identity/backends/ldap/common.py#L885 but that appears to no longer be valid use of python-ldap for py3.

** Affects: keystone
   Importance: Undecided
   Status: New

** Summary changed:
- PY3: python3-ldap does not allow bytes for no bytes for DN/RDN/field names
+ PY3: python3-ldap does not allow bytes for DN/RDN/field names

https://bugs.launchpad.net/bugs/1798184

Title: PY3: python3-ldap does not allow bytes for DN/RDN/field names

Status in OpenStack Identity (keystone): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1798184/+subscriptions
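A sketch of the py3-safe calling pattern: decode the DN and credentials to text before handing them to python-ldap 3.x under Python 3. `ensure_text` is an illustrative helper (keystone has its own encoding utilities), and `conn` stands in for an initialized ldap connection object.

```python
# Decode bytes to str before calling python-ldap under Python 3.
def ensure_text(value, encoding='utf-8'):
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

def bind(conn, who, cred):
    # With bytes arguments this is where python-ldap raises:
    #   TypeError: simple_bind() argument 1 must be str or None, not bytes
    return conn.simple_bind_s(ensure_text(who), ensure_text(cred))
```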
[Yahoo-eng-team] [Bug 1798172] Re: Ironic driver tries to update the compute_node's UUID which of course fails in case of existing compute_nodes
** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => High

** Tags added: upgrade

https://bugs.launchpad.net/bugs/1798172

Title: Ironic driver tries to update the compute_node's UUID which of course fails in case of existing compute_nodes

Status in OpenStack Compute (nova): Triaged
Status in OpenStack Compute (nova) rocky series: Confirmed

Bug description:

The patch https://review.openstack.org/#/c/571535 was introduced with the aim of keeping the same uuid value for ironic nodes and their corresponding compute_node records. This works fine when new nodes are created. However, upon restart/periodic updates from the ironic driver, it tries to update the uuid of an existing compute_node record based on the resource update from the ironic driver, and this fails since once a uuid is set it cannot be changed/updated.

Error traceback:

2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager [req-62cf93be-023b-41ca-8971-d3dbab4324f8 - - - - -] Error updating resources for node e3ef5531-3c39-458a-99f0-fd44592ae1ae.: ReadOnlyFieldError: Cannot modify readonly field uuid
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager Traceback (most recent call last):
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7337, in update_available_resource_for_node
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 680, in update_available_resource
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 704, in _update_available_resource
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 559, in _init_compute_node
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     self._copy_resources(cn, resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 617, in _copy_resources
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     compute_node.update_from_virt_driver(resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 353, in update_from_virt_driver
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     setattr(self, key, resources[key])
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 77, in setter
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     raise exception.ReadOnlyFieldError(field=name)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager ReadOnlyFieldError: Cannot modify readonly field uuid

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798172/+subscriptions
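The failure reduces to re-assigning a read-only field on an existing versioned object. The toy classes below are simplified stand-ins for nova's ComputeNode and the resource tracker's _copy_resources(), not nova's real code; they reproduce the error and show the obvious guard of skipping the uuid key when the record already has one.

```python
# Toy reproduction of the read-only uuid failure, plus the guard.
class ReadOnlyFieldError(Exception):
    pass

class ComputeNode:
    def __init__(self, uuid=None):
        self._uuid = uuid

    @property
    def uuid(self):
        return self._uuid

    @uuid.setter
    def uuid(self, value):
        if self._uuid is not None:
            raise ReadOnlyFieldError('Cannot modify readonly field uuid')
        self._uuid = value

def copy_resources(node, resources):
    for key, value in resources.items():
        # Existing record: never re-set the read-only uuid, even to the
        # same value -- this is what the periodic ironic update tripped.
        if key == 'uuid' and node.uuid is not None:
            continue
        setattr(node, key, value)
```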
[Yahoo-eng-team] [Bug 1798172] [NEW] Ironic driver tries to update the compute_node's UUID which of course fails in case of existing compute_nodes
Public bug reported:

The patch https://review.openstack.org/#/c/571535 was introduced with the aim of keeping the same uuid value for ironic nodes and their corresponding compute_node records. This works fine when new nodes are created. However, upon restart/periodic updates from the ironic driver, it tries to update the uuid of an existing compute_node record based on the resource update from the ironic driver, and this fails since once a uuid is set it cannot be changed/updated.

Error traceback:

2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager [req-62cf93be-023b-41ca-8971-d3dbab4324f8 - - - - -] Error updating resources for node e3ef5531-3c39-458a-99f0-fd44592ae1ae.: ReadOnlyFieldError: Cannot modify readonly field uuid
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager Traceback (most recent call last):
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7337, in update_available_resource_for_node
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 680, in update_available_resource
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     return f(*args, **kwargs)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 704, in _update_available_resource
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 559, in _init_compute_node
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     self._copy_resources(cn, resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 617, in _copy_resources
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     compute_node.update_from_virt_driver(resources)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 353, in update_from_virt_driver
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     setattr(self, key, resources[key])
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 77, in setter
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager     raise exception.ReadOnlyFieldError(field=name)
2018-10-11 02:30:34.142 21850 ERROR nova.compute.manager ReadOnlyFieldError: Cannot modify readonly field uuid

** Affects: nova
   Importance: High
   Status: Triaged

** Affects: nova/rocky
   Importance: High
   Status: Confirmed

** Tags: compute ironic upgrade

https://bugs.launchpad.net/bugs/1798172

Title: Ironic driver tries to update the compute_node's UUID which of course fails in case of existing compute_nodes

Status in OpenStack Compute (nova): Triaged
Status in OpenStack Compute (nova) rocky series: Confirmed
[Yahoo-eng-team] [Bug 1798158] Re: Non-templated transport_url will fail if not defined in config
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Status: New => Triaged

** Tags added: cells

https://bugs.launchpad.net/bugs/1798158

Title: Non-templated transport_url will fail if not defined in config

Status in OpenStack Compute (nova): In Progress
Status in OpenStack Compute (nova) rocky series: Triaged

Bug description:

If transport_url is not defined in the config, we will fail to format a non-templated transport_url in the database like this:

ERROR nova.objects.cell_mapping [None req-34831485-adf4-4a0d-bb20-e1736d93a451 None None] Failed to parse [DEFAULT]/transport_url to format cell mapping: AttributeError: 'NoneType' object has no attribute 'find'
ERROR nova.objects.cell_mapping Traceback (most recent call last):
ERROR nova.objects.cell_mapping   File "/opt/stack/nova/nova/objects/cell_mapping.py", line 150, in _format_mq_url
ERROR nova.objects.cell_mapping     return CellMapping._format_url(url, CONF.transport_url)
ERROR nova.objects.cell_mapping   File "/opt/stack/nova/nova/objects/cell_mapping.py", line 101, in _format_url
ERROR nova.objects.cell_mapping     default_url = urlparse.urlparse(default)
ERROR nova.objects.cell_mapping   File "/usr/lib/python2.7/urlparse.py", line 143, in urlparse
ERROR nova.objects.cell_mapping     tuple = urlsplit(url, scheme, allow_fragments)
ERROR nova.objects.cell_mapping   File "/usr/lib/python2.7/urlparse.py", line 182, in urlsplit
ERROR nova.objects.cell_mapping     i = url.find(':')
ERROR nova.objects.cell_mapping AttributeError: 'NoneType' object has no attribute 'find'
ERROR nova.objects.cell_mapping

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798158/+subscriptions
[Yahoo-eng-team] [Bug 1798158] [NEW] Non-templated transport_url will fail if not defined in config
Public bug reported:

If transport_url is not defined in the config, we will fail to format a non-templated transport_url in the database like this:

ERROR nova.objects.cell_mapping [None req-34831485-adf4-4a0d-bb20-e1736d93a451 None None] Failed to parse [DEFAULT]/transport_url to format cell mapping: AttributeError: 'NoneType' object has no attribute 'find'
ERROR nova.objects.cell_mapping Traceback (most recent call last):
ERROR nova.objects.cell_mapping   File "/opt/stack/nova/nova/objects/cell_mapping.py", line 150, in _format_mq_url
ERROR nova.objects.cell_mapping     return CellMapping._format_url(url, CONF.transport_url)
ERROR nova.objects.cell_mapping   File "/opt/stack/nova/nova/objects/cell_mapping.py", line 101, in _format_url
ERROR nova.objects.cell_mapping     default_url = urlparse.urlparse(default)
ERROR nova.objects.cell_mapping   File "/usr/lib/python2.7/urlparse.py", line 143, in urlparse
ERROR nova.objects.cell_mapping     tuple = urlsplit(url, scheme, allow_fragments)
ERROR nova.objects.cell_mapping   File "/usr/lib/python2.7/urlparse.py", line 182, in urlsplit
ERROR nova.objects.cell_mapping     i = url.find(':')
ERROR nova.objects.cell_mapping AttributeError: 'NoneType' object has no attribute 'find'
ERROR nova.objects.cell_mapping

** Affects: nova
   Importance: Undecided
   Assignee: Dan Smith (danms)
   Status: In Progress

https://bugs.launchpad.net/bugs/1798158

Title: Non-templated transport_url will fail if not defined in config

Status in OpenStack Compute (nova): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1798158/+subscriptions
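A minimal sketch of the guard implied by the traceback: the real code is CellMapping._format_url in nova/objects/cell_mapping.py; the stand-in below only shows the None handling, and the actual template-merging step is elided.

```python
# Guard against urlparse(None) when no [DEFAULT]/transport_url is set.
from urllib.parse import urlparse

def format_url(url, default):
    if default is None:
        # Nothing configured under [DEFAULT]/transport_url: return the
        # stored (non-templated) URL untouched instead of crashing in
        # urlparse(None) with "'NoneType' object has no attribute 'find'".
        return url
    default_url = urlparse(default)
    # ... fill any templated fields of url from default_url here ...
    return url
```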
[Yahoo-eng-team] [Bug 1798117] [NEW] juju sends "network" top level key to user.network-config in lxd containers
Public bug reported:

== Short summary ==
In lxd containers launched by juju,
/var/lib/cloud/seed/nocloud-net/network-config has:

  network:
    config: disabled

That is invalid content. Cloud-init assumes content in 'network-config'
is already namespaced to 'network'. The correct content would be:

  config: disabled

== Easy recreate ==
$ lxc launch ubuntu-daily:bionic \
    "--config=user.network-config={'network': {'config': {'disabled'}}}"

== Longer Info ==
When looking at bug 1651497, I see containers that run cloud-init have
errors in a container's cloud-init log
(http://paste.ubuntu.com/p/5mKXC8pMwH/) like:

  AttributeError: 'NoneType' object has no attribute 'iter_interfaces'

and

  Failed to rename devices: Failed to apply network config names.
  Found bad network config version: None

After some digging I realized that juju must be attempting to disable
cloud-init's network configuration by sending the content shown above
into the nocloud seed (/var/lib/cloud/seed/nocloud-net/network-config)
via 'user.network-config'. Cloud-init can clearly handle this better,
but juju should not be sending invalid configuration.

Related bugs:
 * bug 1651497: iscsid.service fails to start in container, results in
   failed dist-upgrade later on

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: cloud-init 18.3-9-g2e62cb8a-0ubuntu1~18.04.2
ProcVersionSignature: Ubuntu 4.18.0-8.9-generic 4.18.7
Uname: Linux 4.18.0-8-generic x86_64
ApportVersion: 2.20.9-0ubuntu7.4
Architecture: amd64
CloudName: NoCloud
Date: Tue Oct 16 14:33:12 2018
PackageArchitecture: all
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
SourcePackage: cloud-init
UpgradeStatus: No upgrade log present (probably fresh install)
cloud-init-log-warnings:
 2018-10-16 14:32:01,706 - stages.py[WARNING]: Failed to rename devices: Failed to apply network config names. Found bad network config version: None
 2018-10-16 14:32:01,707 - util.py[WARNING]: failed stage init-local AttributeError: 'NoneType' object has no attribute 'version'
 2018-10-16 14:32:02,366 - stages.py[WARNING]: Failed to rename devices: Failed to apply network config names. Found bad network config version: None
user_data.txt:
 #cloud-config
 {}

** Affects: cloud-init
   Importance: Medium
       Status: Confirmed

** Affects: juju
   Importance: Undecided
       Status: New

** Affects: cloud-init (Ubuntu)
   Importance: Medium
       Status: Confirmed

** Tags: amd64 apport-bug bionic uec-images

** Also affects: cloud-init
   Importance: Undecided
       Status: New

** Changed in: cloud-init
       Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
       Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: juju
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1798117

Title:
  juju sends "network" top level key to user.network-config in lxd
  containers

Status in cloud-init:
  Confirmed
Status in juju:
  New
Status in cloud-init package in Ubuntu:
  Confirmed
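The namespacing rule described above can be demonstrated with plain dicts. The helper below is an illustrative sketch of the leniency the report says cloud-init could add (the function name is invented; this is not cloud-init's actual code):

```python
def normalize_network_config(cfg):
    """Unwrap a redundant top-level 'network' key if present.

    The seed file /var/lib/cloud/seed/nocloud-net/network-config is
    already treated as the *value* of the 'network' key, so wrapping
    the content in another 'network:' layer (as juju does) yields
    config cloud-init cannot interpret.
    """
    if isinstance(cfg, dict) and set(cfg) == {"network"}:
        return cfg["network"]
    return cfg


# What juju writes into the seed (invalid for this file):
sent = {"network": {"config": "disabled"}}
# What cloud-init expects to find there:
expected = {"config": "disabled"}

assert normalize_network_config(sent) == expected
assert normalize_network_config(expected) == expected
```

Even with such leniency on the cloud-init side, the report's point stands: juju should emit the un-namespaced form in the first place.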
[Yahoo-eng-team] [Bug 1798085] [NEW] [Fullstack] L3 agent should be run in host namespace, together with L2 agent
Public bug reported:

In fullstack tests the L3 agent is currently not run in the host
namespace. That works fine with e.g. the Linuxbridge agent, which is
spawned in a host-XXX namespace: when a test creates and connects a
FakeVM, that is always done in this host namespace, so the agent can
see it. However, this does not work with the L3 agent, because the L3
agent creates a veth connection between the qrouter- namespace and the
global scope (no namespace), so the Linuxbridge agent cannot see such a
tap device and does not create the proper network configuration for
such router interfaces.

** Affects: neutron
   Importance: Medium
     Assignee: Slawek Kaplonski (slaweq)
       Status: Confirmed

** Tags: fullstack

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1798085

Title:
  [Fullstack] L3 agent should be run in host namespace, together with
  L2 agent

Status in neutron:
  Confirmed

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1798085/+subscriptions
[Yahoo-eng-team] [Bug 1798055] [NEW] Cloud init rsyslog module will not start on FreeBSD
Public bug reported:

Cloud provider: OpenStack
Cloud-init configuration is embedded in the attached archive.
Cloud-init platform: FreeBSD
FreeBSD image built using CLOUDWARE.

The cloud-init port (net/cloud-init) on FreeBSD will not load the
rsyslog module because it is not listed in the configuration file.

** Affects: cloud-init
   Importance: Undecided
       Status: New

** Attachment added: "Cloud init config file and logs using cloud-init collect-logs"
   https://bugs.launchpad.net/bugs/1798055/+attachment/5201618/+files/cloud-init.tar.gz

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1798055

Title:
  Cloud init rsyslog module will not start on FreeBSD

Status in cloud-init:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1798055/+subscriptions
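If the FreeBSD port's shipped cloud-init config simply omits the module, the usual workaround is to add it to the module list in cloud.cfg. The fragment below is a hedged illustration only; the exact file path and the surrounding module entries in the net/cloud-init port's config may differ, so verify against the shipped file before editing:

```yaml
# /etc/cloud/cloud.cfg -- illustrative fragment, not the port's actual file.
# cloud-init only runs modules named in these lists; if 'rsyslog' is
# absent here, the module is never loaded.
cloud_config_modules:
  - rsyslog        # reported missing from the FreeBSD port's default config
  # ... keep the other modules already listed in the shipped file ...
```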
[Yahoo-eng-team] [Bug 1798048] Re: cant modify project quotas
** Also affects: centos
   Importance: Undecided
       Status: New

** No longer affects: centos

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1798048

Title:
  cant modify project quotas

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  error_log (httpd)

  [Tue Oct 16 04:42:47.473489 2018] [:error] [pid 4688] Traceback (most recent call last):
  [Tue Oct 16 04:42:47.473635 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
  [Tue Oct 16 04:42:47.473780 2018] [:error] [pid 4688]     msg = self.format(record)
  [Tue Oct 16 04:42:47.473817 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
  [Tue Oct 16 04:42:47.473867 2018] [:error] [pid 4688]     return fmt.format(record)
  [Tue Oct 16 04:42:47.473896 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
  [Tue Oct 16 04:42:47.473943 2018] [:error] [pid 4688]     record.message = record.getMessage()
  [Tue Oct 16 04:42:47.473971 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
  [Tue Oct 16 04:42:47.474014 2018] [:error] [pid 4688]     msg = msg % self.args
  [Tue Oct 16 04:42:47.474084 2018] [:error] [pid 4688] TypeError: not all arguments converted during string formatting
  [Tue Oct 16 04:42:47.474109 2018] [:error] [pid 4688] Logged from file quotas.py, line 242
  [Tue Oct 16 04:42:47.474415 2018] [:error] [pid 4688] UnhashableKeyWarning: The key of openstack_dashboard.usage.quotas tenant_quota_usages is not hashable and cannot be memoized: ((,), (('targets', ('share_snapshots', 'share_gigabytes', 'share_snapshot_gigabytes', 'shares', 'share_networks')), ('tenant_id', u'81cc9c2be8a9476fbf81f9fbe4d2c86b')))
  [Tue Oct 16 04:42:47.474589 2018] [:error] [pid 4688] Traceback (most recent call last):
  [Tue Oct 16 04:42:47.474635 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
  [Tue Oct 16 04:42:47.474685 2018] [:error] [pid 4688]     msg = self.format(record)
  [Tue Oct 16 04:42:47.474713 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
  [Tue Oct 16 04:42:47.474752 2018] [:error] [pid 4688]     return fmt.format(record)
  [Tue Oct 16 04:42:47.474778 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
  [Tue Oct 16 04:42:47.474827 2018] [:error] [pid 4688]     record.message = record.getMessage()
  [Tue Oct 16 04:42:47.474856 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
  [Tue Oct 16 04:42:47.474895 2018] [:error] [pid 4688]     msg = msg % self.args
  [Tue Oct 16 04:42:47.474931 2018] [:error] [pid 4688] TypeError: not all arguments converted during string formatting
  [Tue Oct 16 04:42:47.474951 2018] [:error] [pid 4688] Logged from file quotas.py, line 242
  [Tue Oct 16 04:42:47.476646 2018] [:error] [pid 4688] ERROR django.request Internal Server Error: /dashboard/identity/81cc9c2be8a9476fbf81f9fbe4d2c86b/update_quotas/
  [Tue Oct 16 04:42:47.476673 2018] [:error] [pid 4688] Traceback (most recent call last):
  [Tue Oct 16 04:42:47.476684 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
  [Tue Oct 16 04:42:47.476694 2018] [:error] [pid 4688]     response = get_response(request)
  [Tue Oct 16 04:42:47.476703 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
  [Tue Oct 16 04:42:47.476712 2018] [:error] [pid 4688]     response = self.process_exception_by_middleware(e, request)
  [Tue Oct 16 04:42:47.476721 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
  [Tue Oct 16 04:42:47.476731 2018] [:error] [pid 4688]     response = wrapped_callback(request, *callback_args, **callback_kwargs)
  [Tue Oct 16 04:42:47.476740 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
  [Tue Oct 16 04:42:47.476749 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
  [Tue Oct 16 04:42:47.476758 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
  [Tue Oct 16 04:42:47.476767 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
  [Tue Oct 16 04:42:47.476785 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
  [Tue Oct 16 04:42:47.476796 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
  [Tue Oct 16 04:42:47.476805 2018] [:error] [pid 4688]   File
[Yahoo-eng-team] [Bug 1798048] [NEW] cant modify project quotas
Public bug reported:

error_log (httpd)

[Tue Oct 16 04:42:47.473489 2018] [:error] [pid 4688] Traceback (most recent call last):
[Tue Oct 16 04:42:47.473635 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
[Tue Oct 16 04:42:47.473780 2018] [:error] [pid 4688]     msg = self.format(record)
[Tue Oct 16 04:42:47.473817 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
[Tue Oct 16 04:42:47.473867 2018] [:error] [pid 4688]     return fmt.format(record)
[Tue Oct 16 04:42:47.473896 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
[Tue Oct 16 04:42:47.473943 2018] [:error] [pid 4688]     record.message = record.getMessage()
[Tue Oct 16 04:42:47.473971 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
[Tue Oct 16 04:42:47.474014 2018] [:error] [pid 4688]     msg = msg % self.args
[Tue Oct 16 04:42:47.474084 2018] [:error] [pid 4688] TypeError: not all arguments converted during string formatting
[Tue Oct 16 04:42:47.474109 2018] [:error] [pid 4688] Logged from file quotas.py, line 242
[Tue Oct 16 04:42:47.474415 2018] [:error] [pid 4688] UnhashableKeyWarning: The key of openstack_dashboard.usage.quotas tenant_quota_usages is not hashable and cannot be memoized: ((,), (('targets', ('share_snapshots', 'share_gigabytes', 'share_snapshot_gigabytes', 'shares', 'share_networks')), ('tenant_id', u'81cc9c2be8a9476fbf81f9fbe4d2c86b')))
[Tue Oct 16 04:42:47.474589 2018] [:error] [pid 4688] Traceback (most recent call last):
[Tue Oct 16 04:42:47.474635 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
[Tue Oct 16 04:42:47.474685 2018] [:error] [pid 4688]     msg = self.format(record)
[Tue Oct 16 04:42:47.474713 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
[Tue Oct 16 04:42:47.474752 2018] [:error] [pid 4688]     return fmt.format(record)
[Tue Oct 16 04:42:47.474778 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 464, in format
[Tue Oct 16 04:42:47.474827 2018] [:error] [pid 4688]     record.message = record.getMessage()
[Tue Oct 16 04:42:47.474856 2018] [:error] [pid 4688]   File "/usr/lib64/python2.7/logging/__init__.py", line 328, in getMessage
[Tue Oct 16 04:42:47.474895 2018] [:error] [pid 4688]     msg = msg % self.args
[Tue Oct 16 04:42:47.474931 2018] [:error] [pid 4688] TypeError: not all arguments converted during string formatting
[Tue Oct 16 04:42:47.474951 2018] [:error] [pid 4688] Logged from file quotas.py, line 242
[Tue Oct 16 04:42:47.476646 2018] [:error] [pid 4688] ERROR django.request Internal Server Error: /dashboard/identity/81cc9c2be8a9476fbf81f9fbe4d2c86b/update_quotas/
[Tue Oct 16 04:42:47.476673 2018] [:error] [pid 4688] Traceback (most recent call last):
[Tue Oct 16 04:42:47.476684 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
[Tue Oct 16 04:42:47.476694 2018] [:error] [pid 4688]     response = get_response(request)
[Tue Oct 16 04:42:47.476703 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
[Tue Oct 16 04:42:47.476712 2018] [:error] [pid 4688]     response = self.process_exception_by_middleware(e, request)
[Tue Oct 16 04:42:47.476721 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
[Tue Oct 16 04:42:47.476731 2018] [:error] [pid 4688]     response = wrapped_callback(request, *callback_args, **callback_kwargs)
[Tue Oct 16 04:42:47.476740 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
[Tue Oct 16 04:42:47.476749 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
[Tue Oct 16 04:42:47.476758 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
[Tue Oct 16 04:42:47.476767 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
[Tue Oct 16 04:42:47.476785 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
[Tue Oct 16 04:42:47.476796 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
[Tue Oct 16 04:42:47.476805 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 113, in dec
[Tue Oct 16 04:42:47.476814 2018] [:error] [pid 4688]     return view_func(request, *args, **kwargs)
[Tue Oct 16 04:42:47.476822 2018] [:error] [pid 4688]   File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
[Tue Oct 16 04:42:47.476831 2018] [:error] [pid 4688]     return self.dispatch(request, *args, **kwargs)
[Tue Oct 16 04:42:47.476840 2018]
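The repeated "TypeError: not all arguments converted during string formatting ... Logged from file quotas.py, line 242" is the classic symptom of passing an extra argument to a logging call whose format string has no matching placeholder; the stdlib logging module catches the error during emit and prints the traceback to stderr rather than raising. A hedged reproduction follows; the message text and logger name are invented for illustration, not the actual quotas.py call:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("openstack_dashboard.usage.quotas")

targets = ("share_snapshots", "share_gigabytes", "shares")

# Buggy pattern: an argument is supplied but the format string has no
# %s placeholder. logging formats lazily, so the TypeError surfaces
# inside Handler.emit() and is printed as "Logged from file ..." in
# the error log instead of propagating to the caller.
LOG.debug("Unable to retrieve share limits.", targets)

# Fixed pattern: give the argument a placeholder (or drop the argument).
LOG.debug("Unable to retrieve share limits for %s.", targets)
```

Running the snippet shows the first call producing a traceback on stderr while the second logs cleanly, matching the error_log output above.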