[Yahoo-eng-team] [Bug 1718356] Re: Include default config files in python wheel
Reviewed: https://review.openstack.org/554819
Committed: https://git.openstack.org/cgit/openstack/karbor/commit/?id=c20e61d04221695bfd316417254649733e7ae692
Submitter: Zuul
Branch: master

commit c20e61d04221695bfd316417254649733e7ae692
Author: Nguyen Van Trung
Date: Wed Mar 21 14:40:21 2018 +0700

    Add default configuration files to data_files

    In order to make it simpler to use the default configuration files
    when deploying services from source, the files are added to pbr's
    data_files section so that the files are included in the built wheels
    and therefore deployed with the code. Packaging and deployment tools
    can then more easily use the default files if they wish to.

    This pattern is already established with similar files for neutron
    and the glance metadefs, as has been mentioned in the related bug
    report.

    Change-Id: I81b5c00ace7b6d5fab4f982f403149a87b3afbd0
    Closes-Bug: #1718356

** Changed in: karbor
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718356

Title: Include default config files in python wheel

Status in Barbican: Fix Released
Status in Cinder: Fix Released
Status in Cyborg: Fix Released
Status in Designate: Fix Released
Status in Fuxi: New
Status in Glance: Fix Released
Status in OpenStack Heat: In Progress
Status in Ironic: Fix Released
Status in Karbor: Fix Released
Status in OpenStack Identity (keystone): Fix Released
Status in kuryr-libnetwork: New
Status in Magnum: In Progress
Status in neutron: Fix Released
Status in OpenStack Compute (nova): In Progress
Status in octavia: Invalid
Status in openstack-ansible: Confirmed
Status in Sahara: Fix Released
Status in OpenStack DBaaS (Trove): Fix Released
Status in Zun: In Progress

Bug description:
  The projects which deploy OpenStack from source or using python wheels currently have to either carry templates for api-paste, policy and rootwrap files or source them from git during deployment. This results in some rather complex mechanisms which could be radically simplified by simply ensuring that all the same files are included in the built wheel.

  A precedent for this has already been set in neutron [1], glance [2] and designate [3] through the use of the data_files option in the files section of setup.cfg.

  [1] https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
  [2] https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
  [3] https://github.com/openstack/designate/blob/25eb143db04554d65efe2e5d60ad3afa6b51d73a/setup.cfg#L30-L37

  This bug will be used for a cross-project implementation of patches to normalise the implementation across the OpenStack projects. Hopefully the result will be a consistent implementation across all the major projects.
A mailing list thread corresponding to this standard setting was begun: http://lists.openstack.org/pipermail/openstack-dev/2017-September/122794.html

To manage notifications about this bug go to: https://bugs.launchpad.net/barbican/+bug/1718356/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
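For reference, the data_files mechanism cited in [1]-[3] looks roughly like this in a project's setup.cfg. This is a minimal sketch: the `etc/myservice` destination and source file names are illustrative, not taken from any of the linked projects.

```ini
[files]
data_files =
    etc/myservice =
        etc/api-paste.ini
        etc/rootwrap.conf
    etc/myservice/rootwrap.d = etc/rootwrap.d/*
```

pbr expands each `destination = source` group, so installing the built wheel places the listed files under the environment's data prefix (e.g. the virtualenv root), which is what lets deployment tools pick up the defaults without fetching them from git.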
[Yahoo-eng-team] [Bug 1765828] [NEW] FIP attached to fixed-ip remains even when port is updated with other fixed-ips
Public bug reported:

When we create a port on a network with, let's say, fixed-ip IP1, and then attach a FIP to the port, and then update the port with fixed-ips other than IP1, the port gets updated but the FIP is not: the FIP is still attached to the port's original fixed-ip IP1. Neutron should block updating the port if a FIP is already attached to the port via that fixed-ip.

Step-by-step reproduction:

neutron router-create router
neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private_subnet
neutron router-interface-add router private_subnet
neutron net-create public --router:external=True
neutron subnet-create public 192.124.0.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=192.124.0.5,end=192.124.0.250 --gateway=192.124.0.1
neutron router-gateway-set router public
neutron port-create private --fixed-ip subnet_id=,ip_address=10.0.0.10 --name port1
neutron floatingip-create public --name fip
neutron floatingip-associate fip port1
neutron port-update port1 --fixed-ip subnet_id=,ip_address=10.0.0.11 --fixed-ip subnet_id=,ip_address=10.0.0.12 --fixed-ip subnet_id=,ip_address=10.0.0.13
neutron floatingip-show fip

The last command shows the FIP still associated with the port via fixed-ip 10.0.0.10, which no longer exists on the port.

Expected output: the port update should fail when the new fixed-ips do not contain the original fixed-ip and a FIP is attached to it.

Actual output: log traces at https://pastebin.com/J2Yv947c

Version:
  * stable/ocata, stable/pike, etc.
  * Ubuntu 16.04
  * DevStack

Perceived severity: Medium

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: doc

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1765828

Title: FIP attached to fixed-ip remains even when port is updated with other fixed-ips

Status in neutron: New

Bug description:
  When we create a port on a network with, let's say, fixed-ip IP1, and then attach a FIP to the port, and then update the port with fixed-ips other than IP1, the port gets updated but the FIP is not: the FIP is still attached to the port's original fixed-ip IP1. Neutron should block updating the port if a FIP is already attached to the port via that fixed-ip.

  Step-by-step reproduction:

  neutron router-create router
  neutron net-create private
  neutron subnet-create private 10.0.0.0/24 --name private_subnet
  neutron router-interface-add router private_subnet
  neutron net-create public --router:external=True
  neutron subnet-create public 192.124.0.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=192.124.0.5,end=192.124.0.250 --gateway=192.124.0.1
  neutron router-gateway-set router public
  neutron port-create private --fixed-ip subnet_id=,ip_address=10.0.0.10 --name port1
  neutron floatingip-create public --name fip
  neutron floatingip-associate fip port1
  neutron port-update port1 --fixed-ip subnet_id=,ip_address=10.0.0.11 --fixed-ip subnet_id=,ip_address=10.0.0.12 --fixed-ip subnet_id=,ip_address=10.0.0.13
  neutron floatingip-show fip

  The last command shows the FIP still associated with the port via fixed-ip 10.0.0.10, which no longer exists on the port.

  Expected output: the port update should fail when the new fixed-ips do not contain the original fixed-ip and a FIP is attached to it.

  Actual output: log traces at https://pastebin.com/J2Yv947c

  Version:
    * stable/ocata, stable/pike, etc.
    * Ubuntu 16.04
    * DevStack

  Perceived severity: Medium

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1765828/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
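The check the reporter proposes could look roughly like this. This is purely illustrative logic with hypothetical names, not neutron's actual plugin code:

```python
# Hedged sketch of the proposed validation: reject a port update whose new
# fixed_ips drop an address that a floating IP is still bound to.
def validate_port_update(current_fixed_ips, new_fixed_ips, fip_fixed_ip):
    """Raise if the update would orphan a floating IP association.

    current_fixed_ips / new_fixed_ips: sets of IP address strings.
    fip_fixed_ip: the fixed IP a floating IP is associated with, or None.
    """
    if fip_fixed_ip is None:
        return  # no floating IP associated, nothing to check
    if fip_fixed_ip in current_fixed_ips and fip_fixed_ip not in new_fixed_ips:
        raise ValueError(
            'cannot remove %s: a floating IP is still associated with it'
            % fip_fixed_ip)
```

Matching the reproduction above, an update from {10.0.0.10} to {10.0.0.11, 10.0.0.12, 10.0.0.13} while the FIP is bound to 10.0.0.10 would be rejected instead of silently leaving the FIP pointing at a vanished address.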
[Yahoo-eng-team] [Bug 1765830] [NEW] migration from osp 10 -> 11 -> 12 fails because migration script 22 is not idempotent
Public bug reported:

This is described in https://bugzilla.redhat.com/show_bug.cgi?id=1569605

Basically, the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1541142 was backported to pike and queens and included migration script #22 (renamed to #17 and #5). On upgrade, we attempt to run the script again and fail because the script is not idempotent.

** Affects: keystone
   Importance: Undecided
   Assignee: Ade Lee (alee-3)
   Status: New

** Changed in: keystone
   Assignee: (unassigned) => Ade Lee (alee-3)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).

https://bugs.launchpad.net/bugs/1765830

Title: migration from osp 10 -> 11 -> 12 fails because migration script 22 is not idempotent

Status in OpenStack Identity (keystone): New

Bug description:
  This is described in https://bugzilla.redhat.com/show_bug.cgi?id=1569605

  Basically, the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1541142 was backported to pike and queens and included migration script #22 (renamed to #17 and #5). On upgrade, we attempt to run the script again and fail because the script is not idempotent.

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1765830/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
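What "not idempotent" means here can be sketched with a toy migration. The column name and the schema model below are hypothetical stand-ins, not keystone's actual script 22:

```python
# Toy schema: a table with a set of column names. The naive upgrade adds a
# column unconditionally, so re-running it (as the 10 -> 11 -> 12 upgrade path
# does after the renamed backported copies already ran) fails. The guarded
# version is a no-op on re-run.
class Table:
    def __init__(self):
        self.columns = {'id', 'name'}

    def add_column(self, name):
        if name in self.columns:
            raise RuntimeError('duplicate column %s' % name)
        self.columns.add(name)


def naive_upgrade(table):
    table.add_column('expires_at')          # hypothetical column; fails on rerun


def idempotent_upgrade(table):
    if 'expires_at' not in table.columns:   # guard makes re-running safe
        table.add_column('expires_at')
```

The same guard-before-change pattern (check whether the column/index/constraint already exists) is the usual way to make a real SQL migration safe against being applied twice.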
[Yahoo-eng-team] [Bug 1765801] [NEW] network should be optionally reconfigured on every boot
Public bug reported: LP#1571004 made it so that networking is applied on the first boot of an instance. This makes sense in some cases, but not in others. In the case of Joyent's cloud, there is a need to support network reconfiguration on reboot. The proposed approach is to add a new metadata key 'maintain_network'. This will default to False. When set to True, network settings will be applied PER_ALWAYS. SmartOS will begin to support the sdc:maintain_network key in its metadata service. The SmartOS change is being tracked at https://smartos.org/bugview/OS-6902 . ** Affects: cloud-init Importance: Undecided Assignee: Mike Gerdts (mgerdts) Status: Confirmed ** Changed in: cloud-init Status: New => Confirmed ** Changed in: cloud-init Assignee: (unassigned) => Mike Gerdts (mgerdts) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1765801 Title: network should be optionally reconfigured on every boot Status in cloud-init: Confirmed Bug description: LP#1571004 made it so that networking is applied on the first boot of an instance. This makes sense in some cases, but not in others. In the case of Joyent's cloud, there is a need to support network reconfiguration on reboot. The proposed approach is to add a new metadata key 'maintain_network'. This will default to False. When set to True, network settings will be applied PER_ALWAYS. SmartOS will begin to support the sdc:maintain_network key in its metadata service. The SmartOS change is being tracked at https://smartos.org/bugview/OS-6902 . To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1765801/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
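The proposed semantics can be sketched as follows. PER_INSTANCE/PER_ALWAYS mirror cloud-init's frequency constants, but the datasource plumbing is elided and the helper function name is hypothetical:

```python
# Hedged sketch: a datasource metadata flag chooses whether network config is
# applied once per instance or on every boot.
PER_INSTANCE = 'once-per-instance'
PER_ALWAYS = 'always'


def network_config_frequency(metadata):
    """Pick the apply-frequency for network configuration.

    'maintain_network' is the key proposed in the report; it defaults to
    False, preserving the existing apply-on-first-boot behaviour.
    """
    if metadata.get('maintain_network', False):
        return PER_ALWAYS
    return PER_INSTANCE
```

With the flag absent or False, behaviour is unchanged; a SmartOS guest exposing sdc:maintain_network=true would get its network settings re-applied on every reboot.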
[Yahoo-eng-team] [Bug 1679750] Re: Allocations are not cleaned up in placement for instance 'local delete' case
Reviewed: https://review.openstack.org/560706
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ea9d0af31395fbe1686fa681cd91226ee580796e
Submitter: Zuul
Branch: master

commit ea9d0af31395fbe1686fa681cd91226ee580796e
Author: Matt Riedemann
Date: Wed Apr 11 21:24:43 2018 -0400

    Delete allocations from API if nova-compute is down

    When performing a "local delete" of an instance, we need to delete the
    allocations that the instance has against any resource providers in
    Placement.

    It should be noted that without this change, restarting the
    nova-compute service will delete the allocations for its compute node
    (assuming the compute node UUID is the same as before the instance
    was deleted). That is shown in the existing functional test modified
    here.

    The more important reason for this change is that in order to fix bug
    1756179, we need to make sure the resource provider allocations for a
    given compute node are gone by the time the compute service is
    deleted.

    This adds a new functional test and a release note for the new
    behavior and need to configure nova-api for talking to placement,
    which is idempotent if not configured thanks to the @safe_connect
    decorator used in SchedulerReportClient.

    Closes-Bug: #1679750
    Related-Bug: #1756179
    Change-Id: If507e23f0b7e5fa417041c3870d77786498f741d

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).

https://bugs.launchpad.net/bugs/1679750

Title: Allocations are not cleaned up in placement for instance 'local delete' case

Status in OpenStack Compute (nova): Fix Released
Status in OpenStack Compute (nova) pike series: Confirmed
Status in OpenStack Compute (nova) queens series: Confirmed

Bug description:
  This is semi-related to bug 1661312 for evacuate. This is the case:

  1. Create an instance on host A successfully.
There are allocation records in the placement API for the instance (consumer for the allocation records) and host A (resource provider).

  2. Host A goes down.

  3. Delete the instance. This triggers the local delete flow in the compute API where we can't RPC cast to the compute to delete the instance because the nova-compute service is down. So we do the delete in the database from the compute API (local to compute API, hence local delete).

  The problem is in #3 we don't remove the allocations for the instance from the host A resource provider during the local delete flow. Maybe this doesn't matter while host A is down, since the scheduler can't schedule to it anyway. But if host A comes back up, it will have allocations tied to it for deleted instances.

  On init_host in the compute service we call _complete_partial_deletion but that's only for instances with a vm_state of 'deleted' but aren't actually deleted in the database. I don't think that's going to cover this case because the local delete code in the compute API calls instance.destroy() which deletes the instance from the database (updates instances.deleted != 0 in the DB so it's "soft" deleted).

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1679750/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
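The two delete paths described above can be sketched as follows. All names and data shapes are illustrative stand-ins for nova's real objects; the fix corresponds to the allocation cleanup on the local-delete branch:

```python
# Hedged sketch of the delete flow: when the instance's compute service is up,
# the API RPC-casts to nova-compute; when it is down, the API does the DB
# delete itself ("local delete") and, with the fix, also removes the
# instance's placement allocations so the resource provider is not left with
# allocations for a deleted instance.
def delete_instance(instance, compute_service_up, db, allocations):
    if compute_service_up:
        return 'rpc_cast_to_compute'  # normal path: compute deletes + cleans up
    # local delete path: API deletes the instance record directly...
    db.discard(instance)
    # ...and (the fix) deletes its allocations against any resource providers
    allocations.pop(instance, None)
    return 'local_delete'
```

Without the `allocations.pop(...)` step, host A would come back up still "consuming" VCPU/RAM/disk inventory for instances that no longer exist, which is exactly the stale-allocation problem the report describes.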
[Yahoo-eng-team] [Bug 1765122] Re: qemu-img execute not mocked in unit tests
Reviewed: https://review.openstack.org/562339
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=815b11f70a3fd31888fb3e3e7b38e3a697bf6244
Submitter: Zuul
Branch: master

commit 815b11f70a3fd31888fb3e3e7b38e3a697bf6244
Author: Jay Pipes
Date: Wed Apr 18 12:39:35 2018 -0400

    mock utils.execute() in qemu-img unit test

    There was a unit test that was not mocking utils.execute() and relying
    on qemu-img being installed on the machine running the tests. This
    fixes the test to mock out utils.execute() and raise a
    ProcessExecutionError that simulates the expected behaviour from
    qemu-img.

    Fixes bug: #1765122
    Change-Id: Ia6fc089fce0cc0ba1fb8d4d4ffbf7f47968a0507

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).

https://bugs.launchpad.net/bugs/1765122

Title: qemu-img execute not mocked in unit tests

Status in OpenStack Compute (nova): Fix Released

Bug description:
  nova.tests.unit.virt.test_images.QemuTestCase.test_qemu_info_with_errors is failing in both py27 and py36 tox environments due to a missing mock.
This system does not have qemu(-img) installed and running the unit tests returns the following:

==============================
Failed 1 tests - output below:
==============================

nova.tests.unit.virt.test_images.QemuTestCase.test_qemu_info_with_errors

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/nova/virt/images.py", line 73, in qemu_img_info'
b'    out, err = utils.execute(*cmd, prlimit=QEMU_IMG_LIMITS)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/nova/utils.py", line 231, in execute'
b'    return processutils.execute(*cmd, **kwargs)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 424, in execute'
b'    cmd=sanitized_cmd)'
b'oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.'
b'Command: /home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /fake/path'
b'Exit code: 127'
b"Stdout: ''"
b"Stderr: '/usr/bin/env: \xe2\x80\x98qemu-img\xe2\x80\x99: No such file or directory\\n'"
b''
b'During handling of the above exception, another exception occurred:'
b''
b'Traceback (most recent call last):'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 1305, in patched'
b'    return func(*args, **keywargs)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/nova/tests/unit/virt/test_images.py", line 37, in test_qemu_info_with_errors'
b"    '/fake/path')"
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", line 485, in assertRaises'
b'    self.assertThat(our_callable, matcher)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", line 496, in assertThat'
b'    mismatch_error = self._matchHelper(matchee, matcher, message, verbose)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", line 547, in _matchHelper'
b'    mismatch = matcher.match(matchee)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/matchers/_exception.py", line 108, in match'
b'    mismatch = self.exception_matcher.match(exc_info)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/matchers/_higherorder.py", line 62, in match'
b'    mismatch = matcher.match(matchee)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", line 475, in match'
b'    reraise(*matchee)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/_compat3x.py", line 16, in reraise'
b'    raise exc_obj.with_traceback(exc_tb)'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/matchers/_exception.py", line 101, in match'
b'    result = matchee()'
b'  File "/home/jaypipes/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", lin
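The fix pattern named in the commit message, mock out the process-execution helper so the test never shells out to qemu-img, can be sketched self-contained. Names mirror nova's (execute, qemu_img_info, ProcessExecutionError) but this is a toy, not nova's actual code:

```python
from unittest import mock


class ProcessExecutionError(Exception):
    """Stand-in for oslo_concurrency.processutils.ProcessExecutionError."""


class Utils:
    @staticmethod
    def execute(*cmd, **kwargs):
        # A real implementation would spawn qemu-img; a correctly mocked
        # unit test must never reach this.
        raise AssertionError('unit tests must not shell out to qemu-img')


utils = Utils()


def qemu_img_info(path):
    """Toy version of the code under test: runs qemu-img and wraps errors."""
    try:
        out, err = utils.execute('qemu-img', 'info', path)
    except ProcessExecutionError as exc:
        raise RuntimeError('qemu-img info failed: %s' % exc)
    return out


def test_qemu_info_with_errors():
    # The mock simulates qemu-img failing, so the test passes regardless of
    # whether qemu-img is installed on the machine running the tests.
    with mock.patch.object(Utils, 'execute',
                           side_effect=ProcessExecutionError('boom')):
        try:
            qemu_img_info('/fake/path')
        except RuntimeError as exc:
            return str(exc)
    raise AssertionError('expected RuntimeError')
```

Without the patch, the real test fell through to the actual `qemu-img` binary, which is exactly how the exit-code-127 traceback above was produced.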
[Yahoo-eng-team] [Bug 1765748] Re: webob-1.8.1 breaks projects
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).

https://bugs.launchpad.net/bugs/1765748

Title: webob-1.8.1 breaks projects

Status in Glance: New
Status in OpenStack Identity (keystone): New
Status in OpenStack Compute (nova): New
Status in OpenStack Global Requirements: In Progress

Bug description:
  Stuff like this:

  ft2.2: glance.tests.unit.common.test_wsgi.ResourceTest.test_translate_exception_StringException: Traceback (most recent call last):
    File "/home/zuul/src/git.openstack.org/openstack/glance/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1297, in patched
      arg = patching.__enter__()
    File "/home/zuul/src/git.openstack.org/openstack/glance/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1369, in __enter__
      original, local = self.get_original()
    File "/home/zuul/src/git.openstack.org/openstack/glance/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1343, in get_original
      "%s does not have the attribute %r" % (target, name)
  AttributeError: does not have the attribute 'best_match'

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1765748/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1765748] Re: webob-1.8.1 breaks projects
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: openstack-requirements
   Assignee: (unassigned) => Matthew Thode (prometheanfire)

** Changed in: openstack-requirements
   Status: New => In Progress

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.

https://bugs.launchpad.net/bugs/1765748

Title: webob-1.8.1 breaks projects

Status in Glance: New
Status in OpenStack Identity (keystone): New
Status in OpenStack Compute (nova): New
Status in OpenStack Global Requirements: In Progress

Bug description:
  Stuff like this:

  ft2.2: glance.tests.unit.common.test_wsgi.ResourceTest.test_translate_exception_StringException: Traceback (most recent call last):
    File "/home/zuul/src/git.openstack.org/openstack/glance/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1297, in patched
      arg = patching.__enter__()
    File "/home/zuul/src/git.openstack.org/openstack/glance/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1369, in __enter__
      original, local = self.get_original()
    File "/home/zuul/src/git.openstack.org/openstack/glance/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1343, in get_original
      "%s does not have the attribute %r" % (target, name)
  AttributeError: does not have the attribute 'best_match'

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1765748/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
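The AttributeError in the traceback comes from mock refusing to patch an attribute that the newer webob no longer defines: the failure happens at patch time, before the test body even runs. A self-contained toy (the Accept class here is a stand-in, not webob's real class) reproduces the failure mode:

```python
from unittest import mock


class Accept:
    """Stand-in for a class from which an attribute (best_match) was removed."""


def try_patch_best_match():
    """Attempt the kind of mock.patch the failing unit tests perform."""
    try:
        # mock.patch refuses (without create=True) to patch an attribute
        # that does not exist on the target, raising AttributeError from
        # get_original() just like the traceback above.
        mock.patch.object(Accept, 'best_match').start()
    except AttributeError as exc:
        return str(exc)
    return None
```

This is why a library removing a public attribute breaks even tests that never call it for real: any test that patches it dies in mock's `get_original()`.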
[Yahoo-eng-team] [Bug 1752896] Re: novncproxy in Newton uses outdated novnc 0.5 which breaks Nova noVNC consoles
From a tripleo standpoint, it's a packaging issue in RDO. Additionally newton is basically EOL so closing that out as won't fix.

** Changed in: tripleo
   Status: New => Won't Fix

** Changed in: tripleo
   Importance: Undecided => Medium

** Changed in: tripleo
   Milestone: None => rocky-1

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).

https://bugs.launchpad.net/bugs/1752896

Title: novncproxy in Newton uses outdated novnc 0.5 which breaks Nova noVNC consoles

Status in OpenStack Compute (nova): In Progress
Status in tripleo: Won't Fix

Bug description:
  Delorean Newton (CentOS 7) ships with noVNC 0.5.2 in the Overcloud images. Even building an Overcloud image (DIB) produces an image with noVNC 0.5.2. The problem seems to be that CentOS 7 does not ship anything newer than 0.5.2. However, Red Hat Enterprise Linux 7 does indeed ship noVNC 0.6. In any case, Nova noVNC consoles in Newton don't work with noVNC 0.5.2. My workaround was to customize the Overcloud base image and replace the 0.5.2 RPM with a 0.6.2 RPM that I downloaded from some CentOS CI repository.

  Steps to reproduce
  ==================
  Follow instructions from https://docs.openstack.org/tripleo-docs/latest/install/installation/installing.html to install an OpenStack Undercloud using Newton, and either download the Overcloud base images from https://images.rdoproject.org/newton/delorean/consistent/testing/ or build them yourself directly from the Undercloud. In any case, the Overcloud base image ships with noVNC 0.5.2-1 instead of 0.6.*.

  Expected result
  ===============
  A newer version of noVNC that does not break the Nova noVNC console.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1752896/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1765742] [NEW] virt_driver.detach_volume rollback calls in attach_volume failures uses wrong method signature
Public bug reported: Seen here: https://github.com/openstack/nova/blob/64e503d82fe5966cb50b650b3309a7fc36c63cda/nova/virt/block_device.py#L473 and here: https://github.com/openstack/nova/blob/64e503d82fe5966cb50b650b3309a7fc36c63cda/nova/virt/block_device.py#L570 Those are using the wrong method signature since 'context' was added to the driver detach_volume method signature in change: https://review.openstack.org/#/c/549411/ ** Affects: nova Importance: High Assignee: Matt Riedemann (mriedem) Status: Triaged ** Affects: nova/queens Importance: High Status: Confirmed ** Tags: volumes ** Also affects: nova/queens Importance: Undecided Status: New ** Changed in: nova/queens Status: New => Confirmed ** Changed in: nova/queens Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1765742 Title: virt_driver.detach_volume rollback calls in attach_volume failures uses wrong method signature Status in OpenStack Compute (nova): Triaged Status in OpenStack Compute (nova) queens series: Confirmed Bug description: Seen here: https://github.com/openstack/nova/blob/64e503d82fe5966cb50b650b3309a7fc36c63cda/nova/virt/block_device.py#L473 and here: https://github.com/openstack/nova/blob/64e503d82fe5966cb50b650b3309a7fc36c63cda/nova/virt/block_device.py#L570 Those are using the wrong method signature since 'context' was added to the driver detach_volume method signature in change: https://review.openstack.org/#/c/549411/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1765742/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
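A minimal toy (not nova's actual classes) shows why an un-updated rollback call site breaks once a 'context' parameter is added to the driver method's signature:

```python
class FakeDriver:
    """Toy driver with the new-style signature ('context' added first)."""

    def detach_volume(self, context, connection_info, instance, mountpoint):
        return mountpoint


def old_style_rollback(driver):
    """Rollback call site that kept the old argument list."""
    try:
        # Old-style call without context: every argument shifts one slot
        # left and the call is one positional argument short -> TypeError.
        driver.detach_volume({'data': {}}, 'instance-1', '/dev/vdb')
    except TypeError:
        return 'rollback-crashed'
    return 'rollback-ok'
```

The failure is doubly nasty because it only fires on the error path (an attach failure), which is exactly when a crash in the rollback masks the original error.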
[Yahoo-eng-team] [Bug 1765737] [NEW] Feature Classification in nova - bad links for admin-guide
Public bug reported:

- [x] This doc is inaccurate in this way:

Since the docs were moved into the nova repo in pike, the admin-guide links in this page are wrong. For example, the 'Volume operations' section links to http://docs.openstack.org/admin-guide/blockstorage-manage-volumes.html which no longer exists, and is now: https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html

Some of these also just don't make sense, like the "Custom neutron configurations on boot" section links to http://docs.openstack.org/admin-guide/compute-manage-volumes.html which has nothing to do with networking.

---
Release: 17.0.0.0rc2.dev795 on 2018-04-20 06:01
SHA: 8a407bd288bb3116e50ea3b29a125caa036204cb
Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/feature-classification.rst
URL: https://docs.openstack.org/nova/latest/user/feature-classification.html

** Affects: nova
   Importance: Low
   Status: Triaged

** Affects: nova/pike
   Importance: Low
   Status: Confirmed

** Affects: nova/queens
   Importance: Low
   Status: Confirmed

** Tags: docs

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Low

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Low

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1765737

Title: Feature Classification in nova - bad links for admin-guide

Status in OpenStack Compute (nova): Triaged
Status in OpenStack Compute (nova) pike series: Confirmed
Status in OpenStack Compute (nova) queens series: Confirmed

Bug description:
  - [x] This doc is inaccurate in this way:

  Since the docs were moved into the nova repo in pike, the admin-guide links in this page are wrong. For example, the 'Volume operations' section links to http://docs.openstack.org/admin-guide/blockstorage-manage-volumes.html which no longer exists, and is now: https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html

  Some of these also just don't make sense, like the "Custom neutron configurations on boot" section links to http://docs.openstack.org/admin-guide/compute-manage-volumes.html which has nothing to do with networking.

  ---
  Release: 17.0.0.0rc2.dev795 on 2018-04-20 06:01
  SHA: 8a407bd288bb3116e50ea3b29a125caa036204cb
  Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/feature-classification.rst
  URL: https://docs.openstack.org/nova/latest/user/feature-classification.html

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1765737/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1660612] Re: Tempest full jobs time out on execution
As there have been no hits in Neutron jobs in the last 30 days I will remove Neutron from the affected projects for now. If we hit it again, we can add it back here.

** No longer affects: neutron

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.

https://bugs.launchpad.net/bugs/1660612

Title: Tempest full jobs time out on execution

Status in tempest: New

Bug description:
  It took a bit above 1 hour to run tempest with the linux bridge agent and then the job was terminated: http://logs.openstack.org/47/416647/4/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7d1d4e5/console.html#_2017-01-30_17_51_50_343819

  e-r-q: http://logstash.openstack.org/#dashboard/file/logstash.json?query=build_name%3Agate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial%20AND%20message%3A%5C%22Killed%5C%22%20AND%20message%3A%5C%22timeout%20-s%209%5C%22%20AND%20tags%3Aconsole

To manage notifications about this bug go to: https://bugs.launchpad.net/tempest/+bug/1660612/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1765691] [NEW] OVN vlan networks use geneve tunneling for SNAT traffic
Public bug reported:

In the OVN driver, traffic from a vlan (or any other) tenant network to an external network uses a geneve tunnel between the compute node and the gateway node. So the MTU for VLAN networks needs to account for geneve tunnel overhead. This doc [1] explains OVN vlan networks, the current issue and future enhancements. There is an ovs-discuss mailing list thread [2] discussing the surprising geneve tunnel usage.

[1] https://docs.google.com/document/d/1JecGIXPH0RAqfGvD0nmtBdEU1zflHACp8WSRnKCFSgg/edit#heading=h.st3xgs77kfx4
[2] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html

** Affects: neutron
   Importance: Undecided
   Assignee: venkata anil (anil-venkata)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => venkata anil (anil-venkata)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.

https://bugs.launchpad.net/bugs/1765691

Title: OVN vlan networks use geneve tunneling for SNAT traffic

Status in neutron: New

Bug description:
  In the OVN driver, traffic from a vlan (or any other) tenant network to an external network uses a geneve tunnel between the compute node and the gateway node. So the MTU for VLAN networks needs to account for geneve tunnel overhead. This doc [1] explains OVN vlan networks, the current issue and future enhancements. There is an ovs-discuss mailing list thread [2] discussing the surprising geneve tunnel usage.

  [1] https://docs.google.com/document/d/1JecGIXPH0RAqfGvD0nmtBdEU1zflHACp8WSRnKCFSgg/edit#heading=h.st3xgs77kfx4
  [2] https://mail.openvswitch.org/pipermail/ovs-discuss/2018-April/046543.html

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1765691/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
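A back-of-the-envelope sketch of the MTU accounting implied above. The 58-byte geneve overhead used here is an assumed illustrative figure (outer IPv4 20 + UDP 8 + geneve base header 8 + assumed 22 bytes of TLV options); the real overhead depends on the underlay IP version and the geneve options OVN emits, and is not a number from this report:

```python
# If SNAT traffic from a VLAN tenant network is encapsulated in geneve between
# the compute and gateway nodes, the tenant network MTU must leave headroom
# for the encapsulation, or packets at full MTU get fragmented or dropped.
ASSUMED_GENEVE_OVERHEAD = 58  # illustrative, see lead-in


def tenant_mtu(physical_mtu, overhead=ASSUMED_GENEVE_OVERHEAD):
    """Largest MTU a tenant network can advertise so tunnelled frames fit."""
    return physical_mtu - overhead
```

With a standard 1500-byte physical MTU and the assumed overhead, the VLAN tenant network would have to advertise an MTU of 1442, which is the surprising part of the bug: operators expect VLAN networks not to pay any tunnel tax at all.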
[Yahoo-eng-team] [Bug 1331913] Re: tempest.api.volume.test_volumes_actions.VolumesActionsTestXML.test_volume_upload fails
This looks unrelated to glance and has never occurred again. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1331913 Title: tempest.api.volume.test_volumes_actions.VolumesActionsTestXML.test_volume_upload fails Status in Glance: Invalid Status in tempest: Invalid Bug description: See: http://logs.openstack.org/07/81707/7/check/check-tempest-dsvm-full/8b1ee80/console.html 2014-06-19 03:37:29.394 | tempest.api.volume.test_volumes_actions.VolumesActionsTestXML.test_volume_upload[gate,image] 2014-06-19 03:37:29.394 | 2014-06-19 03:37:29.395 | 2014-06-19 03:37:29.395 | Captured traceback: 2014-06-19 03:37:29.395 | ~~~ 2014-06-19 03:37:29.395 | Traceback (most recent call last): 2014-06-19 03:37:29.395 | File "tempest/test.py", line 126, in wrapper 2014-06-19 03:37:29.395 | return f(self, *func_args, **func_kwargs) 2014-06-19 03:37:29.395 | File "tempest/api/volume/test_volumes_actions.py", line 107, in test_volume_upload 2014-06-19 03:37:29.395 | self.image_client.wait_for_image_status(image_id, 'active') 2014-06-19 03:37:29.395 | File "tempest/services/image/v1/json/image_client.py", line 289, in wait_for_image_status 2014-06-19 03:37:29.395 | status=status) 2014-06-19 03:37:29.395 | ImageKilledException: Image ecd98deb-ca3d-4207-b6c9-49ae6434e765 'killed' while waiting for 'active' To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1331913/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1338567] Re: delete the image using v2 api when we upload a image using v1 api, glance don't delete the image data after finishing the uploading.
V1 is deprecated and will be removed in Rocky; also, per the comments, it looks like this is not reproducible any more. ** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1338567 Title: delete the image using v2 api when we upload a image using v1 api, glance don't delete the image data after finishing the uploading. Status in Glance: Won't Fix Bug description: First, I use the glance CLI to upload an image: glance image-create --name myimage --disk-format=raw --container-format=bare --file /path/to/file.img At the same time, I use the v2 api to delete the image: curl -i -X DELETE -H 'X-Auth-Token: $TOKEN_ID' -H 'Content-Type: application/json' http://localhost:9292/v2/images/$IMAGE_ID. After the uploading is finished, the response shows that the image status is active and the image is deleted, but the image data that was uploaded has not been removed from the glance store backend. The right response should be "Image could not be found after upload. The image may have been deleted during the upload." as we see when we upload an image using the v1 api and delete using the v1 api, or upload an image using the v2 api and delete using the v2 api. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1338567/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1327775] Re: tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_create_delete_image timed out
This looks unrelated to glance. ** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1327775 Title: tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_create_delete_image timed out Status in Glance: Won't Fix Status in OpenStack Object Storage (swift): Invalid Bug description: http://logs.openstack.org/44/98044/1/gate/gate-tempest-dsvm-neutron/525dcba/ 2014-06-08 00:45:51.509 | Captured traceback-1: 2014-06-08 00:45:51.509 | ~ 2014-06-08 00:45:51.509 | Traceback (most recent call last): 2014-06-08 00:45:51.509 | File "tempest/api/compute/images/test_images_oneserver.py", line 31, in tearDown 2014-06-08 00:45:51.510 | self.server_check_teardown() 2014-06-08 00:45:51.510 | File "tempest/api/compute/base.py", line 161, in server_check_teardown 2014-06-08 00:45:51.510 | 'ACTIVE') 2014-06-08 00:45:51.510 | File "tempest/services/compute/xml/servers_client.py", line 388, in wait_for_server_status 2014-06-08 00:45:51.510 | raise_on_error=raise_on_error) 2014-06-08 00:45:51.510 | File "tempest/common/waiters.py", line 93, in wait_for_server_status 2014-06-08 00:45:51.510 | raise exceptions.TimeoutException(message) 2014-06-08 00:45:51.510 | TimeoutException: Request timed out 2014-06-08 00:45:51.510 | Details: (ImagesOneServerTestXML:tearDown) Server 72897dd4-cd42-4e0b-af15-3eec5b677d0b failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: ACTIVE. Current task state: image_snapshot. 
2014-06-08 00:45:51.510 | 2014-06-08 00:45:51.510 | 2014-06-08 00:45:51.511 | Captured traceback: 2014-06-08 00:45:51.511 | ~~~ 2014-06-08 00:45:51.511 | Traceback (most recent call last): 2014-06-08 00:45:51.511 | File "tempest/api/compute/images/test_images_oneserver.py", line 77, in test_create_delete_image 2014-06-08 00:45:51.511 | self.client.wait_for_image_status(image_id, 'ACTIVE') 2014-06-08 00:45:51.511 | File "tempest/services/compute/xml/images_client.py", line 140, in wait_for_image_status 2014-06-08 00:45:51.511 | waiters.wait_for_image_status(self, image_id, status) 2014-06-08 00:45:51.511 | File "tempest/common/waiters.py", line 129, in wait_for_image_status 2014-06-08 00:45:51.511 | raise exceptions.TimeoutException(message) 2014-06-08 00:45:51.511 | TimeoutException: Request timed out 2014-06-08 00:45:51.511 | Details: (ImagesOneServerTestXML:test_create_delete_image) Image fbe2b95d-7126-444d-be5a-e4104ec7d799 failed to reach ACTIVE status within the required time (196 s). Current status: SAVING. logstash query: http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIEFORCBtZXNzYWdlOlwiRGV0YWlsczogKEltYWdlc09uZVNlcnZlclRlc3RYTUw6dGVzdF9jcmVhdGVfZGVsZXRlX2ltYWdlKSBJbWFnZVwiIEFORCBtZXNzYWdlOlwiIGZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzIHdpdGhpbiB0aGUgcmVxdWlyZWQgdGltZSAoMTk2IHMpLiBDdXJyZW50IHN0YXR1czogU0FWSU5HLlwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAyMjMzOTQwMjA1fQ== To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1327775/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1325425] Re: Serverfault on API tests
Pretty old and not encountered again by anyone else. ** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1325425 Title: Serverfault on API tests Status in Glance: Won't Fix Status in tempest: Invalid Bug description: One of the test runs failed in several unrelated categories to the tested patch with an error like the one below: ft15.1: setUpClass (tempest.api.compute.v3.servers.test_server_addresses.ServerAddressesV3Test)_StringException: Traceback (most recent call last): File "tempest/api/compute/v3/servers/test_server_addresses.py", line 32, in setUpClass resp, cls.server = cls.create_test_server(wait_until='ACTIVE') File "tempest/api/compute/base.py", line 247, in create_test_server raise ex ServerFault: Got server fault Details: The server has either erred or is incapable of performing the requested operation. Full test results: http://logs.openstack.org/57/96957/1/check/check-grenade-dsvm-neutron/e15aebd/logs/testr_results.html.gz To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1325425/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1326899] Re: FAIL: glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized
V1 tests will be removed in Rocky so no need to keep this open. ** Changed in: glance Status: Triaged => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1326899 Title: FAIL: glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized Status in Glance: Won't Fix Bug description: 2014-05-27 22:27:18.106 | ${PYTHON:-python} -m subunit.run discover -t ./ ./glance/tests 2014-05-27 22:27:18.106 | == 2014-05-27 22:27:18.106 | FAIL: glance.tests.unit.v1.test_api.TestGlanceAPI.test_add_copy_from_image_authorized_upload_image_authorized 2014-05-27 22:27:18.106 | tags: worker-0 2014-05-27 22:27:18.106 | -- 2014-05-27 22:27:18.106 | Traceback (most recent call last): 2014-05-27 22:27:18.106 | File "glance/tests/unit/v1/test_api.py", line 904, in test_add_copy_from_image_authorized_upload_image_authorized 2014-05-27 22:27:18.107 | self.assertEqual(res.status_int, 201) 2014-05-27 22:27:18.107 | File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 321, in assertEqual 2014-05-27 22:27:18.107 | self.assertThat(observed, matcher, message) 2014-05-27 22:27:18.107 | File "/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 406, in assertThat 2014-05-27 22:27:18.107 | raise mismatch_error 2014-05-27 22:27:18.107 | MismatchError: 400 != 201 2014-05-27 22:27:18.107 | == 2014-05-27 22:27:18.107 | FAIL: process-returncode 2014-05-27 22:27:18.107 | tags: worker-0 2014-05-27 22:27:18.107 | -- 2014-05-27 22:27:18.107 | Binary content: 2014-05-27 22:27:18.108 | traceback (test/plain; charset="utf8") 2014-05-27 22:27:18.108 | Ran 2249 tests in 777.846s 2014-05-27 22:27:18.108 | FAILED (id=0, failures=2, skips=33) 2014-05-27 22:27:18.108 | error: testr failed (1) 2014-05-27 22:27:18.190 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python26/.tox/py26/bin/python -m glance.openstack.common.lockutils python setup.py test --slowest --testr-args=--concurrency 1 ' 2014-05-27 22:27:18.190 | ___ summary 2014-05-27 22:27:18.190 | ERROR: py26: commands failed To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1326899/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1378510] Re: creating snapshot
Not sure if this is the case anymore, and no one has hit this issue ever since. Safe to mark as won't fix. ** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1378510 Title: creating snapshot Status in Glance: Won't Fix Bug description: If this needs to be under NOVA please advise. Running ice house on ubuntu 14.04 When snapshot is created by a user in a project, the snapshot is not visible. Determined snapshot gets listed only to admin system panel, and must be made public for anyone else to see it. Should be listed to users project only, that created the snapshot. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1378510/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1346463] Re: Glance registry needs notifications config after using oslo.messaging
** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1346463 Title: Glance registry needs notifications config after using oslo.messaging Status in Glance: Won't Fix Bug description: A good example of this use case is https://review.openstack.org/#/c/107594 where the notifications need to be added to the sample config file provided in Glance so that, the patch would work with devstack. However, g-reg does not send any notifications so, we need to either find a way to remove the need for them or merge the g-api and g-reg configs to avoid this situation. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1346463/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1351846] Re: notification image.update/image.upload meter does not works with qpid
** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1351846 Title: notification image.update/image.upload meter does not works with qpid Status in Ceilometer: Invalid Status in Glance: Invalid Bug description: The following tempest test fails when I select 'qpid' instead of 'rabbit' ENABLED_SERVICES in the devstack local.conf: tempest.api.telemetry.test_telemetry_notification_api.TelemetryNotificationAPITestXML.test_check_glance_v1_notifications[gate,image,smoke] tempest.api.telemetry.test_telemetry_notification_api.TelemetryNotificationAPITestXML.test_check_glance_v2_notifications[gate,image,smoke] tempest.api.telemetry.test_telemetry_notification_api.TelemetryNotificationAPITestJSON.test_check_glance_v1_notifications[gate,image,smoke] tempest.api.telemetry.test_telemetry_notification_api.TelemetryNotificationAPITestJSON.test_check_glance_v2_notifications[gate,image,smoke] https://github.com/openstack/tempest/blob/master/tempest/api/telemetry/test_telemetry_notification_api.py After a $ glance image-create --file /etc/passwd --name passwd --container-format bare --disk-format raw the $ ceilometer meter-list does not shows 'image.update' or 'image.upload' meter. To manage notifications about this bug go to: https://bugs.launchpad.net/ceilometer/+bug/1351846/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1379774] Re: 017_quote_encrypted_swift_credentials is a NOOP
Not really an issue, just an INFO message for the admin. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1379774 Title: 017_quote_encrypted_swift_credentials is a NOOP Status in Glance: Invalid Bug description: After running glance-manage db_sync I saw the following entry in the log messages. I am not sure if this is intended. I have not configured Swift as a storage adapter, but I think that should not change the database schema? 2014-10-10 12:16:10.362 12396 INFO 017_quote_encrypted_swift_credentials [-] 'metadata_encryption_key' was not specified in the config file or a config file was not specified. This means that this migration is a NOOP. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1379774/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1381365] Re: SSL Version and cipher selection not possible
** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1381365 Title: SSL Version and cipher selection not possible Status in Cinder: Won't Fix Status in Glance: Won't Fix Status in OpenStack Identity (keystone): Won't Fix Status in OpenStack Compute (nova): Won't Fix Status in OpenStack Security Advisory: Won't Fix Bug description: We configure keystone to always use SSL. Due to the POODLE issue, I was trying to configure keystone to disable SSLv3 completely. http://googleonlinesecurity.blogspot.fi/2014/10/this-poodle-bites-exploiting-ssl-30.html https://www.openssl.org/~bodo/ssl-poodle.pdf It seems that keystone has no support for configuring SSL versions or ciphers. If I'm not mistaken, the relevant code is in the start function in common/environment/eventlet_server.py. It calls eventlet.wrap_ssl but with no SSL version or cipher options. Since the interface is identical, I assume it uses ssl.wrap_socket. The default here seems to be PROTOCOL_SSLv23 (SSL2 disabled), which would make this vulnerable to the POODLE issue. SSL configs should probably be settable in the config file (with sane defaults), so that current and newly detected weak ciphers can be disabled without code changes. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1381365/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1404082] Re: rbd_store_chunk_size Sets to Bytes instead of kB
Fixed in glance_store with patch https://review.openstack.org/#/c/121992/ ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1404082 Title: rbd_store_chunk_size Sets to Bytes instead of kB Status in Glance: Invalid Bug description: In the Juno release I am experiencing rbd_store_chunk_size setting the image object size in bytes rather than kB, resulting in many more objects being created. Icehouse install: root@node-1:~# rbd info images/62e06da9-2d39-4c7f-a2d8-d869953b9996@snap rbd image '62e06da9-2d39-4c7f-a2d8-d869953b9996': size 4096 MB in 512 objects order 23 (8192 kB objects) block_name_prefix: rbd_data.6a282e18d096 format: 2 features: layering protected: True Juno install: [root@hvm003 ~]# rbd info images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap rbd image '136dd921-f6a2-432f-b4d6-e9902f71baa6': size 4096 MB in 524288 objects order 13 (8192 bytes objects) block_name_prefix: rbd_data.10d73ac85fb6 format: 2 features: layering protected: True Either the documentation needs updating or rbd_store_chunk_size needs to be changed to set the object size in kB. Currently the workaround to get back to an 8MB object size is 'rbd_store_chunk_size = 8192' To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1404082/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
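The unit mixup is easy to see once you know that RBD's "order" is log2 of the object size in bytes, which is why the two rbd info outputs above differ by exactly a factor of 1024. A sketch with illustrative function names (not the actual glance_store code):

```python
import math

def rbd_order(chunk_size_mb):
    # Correct behaviour: treat the configured value as megabytes,
    # convert to bytes, then take log2 to get the RBD "order".
    return int(math.log2(chunk_size_mb * 1024 * 1024))

def rbd_order_buggy(chunk_size):
    # The reported behaviour: the configured number used directly as bytes.
    return int(math.log2(chunk_size))

print(rbd_order(8))           # order 23 -> 8 MB objects, as on Icehouse
print(rbd_order_buggy(8192))  # order 13 -> 8192-byte objects, as on Juno
```

It also explains the workaround: under the bytes interpretation, `rbd_store_chunk_size = 8192` happens to land on the intended 8 MB when the correct multiplier is later restored, since 8192 kB = 8 MB.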
[Yahoo-eng-team] [Bug 1389110] Re: 'glance member-list' CLI lists information with wrong tenant-id
Glance does not validate the tenant-id, so for an invalid tenant-id it returns the available tenants. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1389110 Title: 'glance member-list' CLI lists information with wrong tenant-id Status in Glance: Invalid Bug description: The "glance member-list" CLI with a wrong tenant-id returns "200 OK" and generates information, while a 404 Not Found is expected. Below are the command execution logs: $ keystone tenant-list +--+--+-+ |id| name | enabled | +--+--+-+ | a1c37cc595024369aa2124b50adaa0b8 | admin | True | | 31dd5bdca08e4ce0b208ef618142875b | cephtest | True | | 944ffc3c82f088eb7f61bc77bef0 | demo | True | | ed34d901e2314ab6a93e01ebad44e445 | service | True | +--+--+-+ $ glance --debug member-list --tenant-id 31dd5bd curl -i -X GET -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'Accept-Encoding: gzip, deflate, compress' -H 'Accept: */*' -H 'X-Auth-Token: {SHA1}bf02401aa8b7a79b29e5db2c015bee9d111ea600' http://controller:9292/v1/shared-images/31dd5bd HTTP/1.1 200 OK date: Tue, 04 Nov 2014 07:44:49 GMT content-length: 21 content-type: application/json; charset=UTF-8 x-openstack-request-id: req-398f3434-d5d1-4d22-a126-c4edcd6b7cfd {"shared_images": []} +--+---+---+ | Image ID | Member ID | Can Share | +--+---+---+ +--+---+---+ To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1389110/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1388912] Re: Snapshot with image size 0 bits
Not encountered by anyone else since reported. ** Changed in: glance Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1388912 Title: Snapshot with image size 0 bits Status in Glance: Invalid Bug description: Hello, we have a problem when we generate a snapshot. Information: - nova --version 2.17.0 - cinder --version 1.0.8 - glance --version 0.12.0 Our problem: To create a snapshot, we use the CLI via nova image-create. To verify the snapshot, we go to the Images tab and click on the image of the snapshot, but in the specification we have a size of 0 bits: Spécifications Taille 0 octet Format du Conteneur AUCUN Format du Disque AUCUN (i.e. size 0 bytes, container format NONE, disk format NONE). With the CLI, we see the LVM snapshot but we can't find any image of the snapshot (qcow I presume). With glance image-list or cinder list, we can't see the snapshot either. To back up our VM, we use cinder in the CLI with cinder upload-image; that works. I activated debug/verbose for nova/cinder/glance but we don't have any explicit error in the log files. When we created a snapshot, I took some logs of nova, cinder and glance. Do you have an idea of the problem? 
Logs for Nova: 2014-10-31 16:40:52.533 3777 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:666 2014-10-31 16:40:52.534 3777 DEBUG keystoneclient.middleware.auth_token [-] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:725 2014-10-31 16:40:52.534 3777 DEBUG keystoneclient.middleware.auth_token [-] Returning cached token _cache_get /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:1124 2014-10-31 16:40:52.535 3777 DEBUG keystoneclient.middleware.auth_token [-] Storing token in cache _cache_put /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:1234 2014-10-31 16:40:52.535 3777 DEBUG keystoneclient.middleware.auth_token [-] Received request from user: 1e7cb96667d049ccbcbd4aac84ca71b2 with project_id : 003048198aa94a0bb07b00802931d332 and roles: _member_ _build_user_headers /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:1021 2014-10-31 16:40:52.537 3777 DEBUG routes.middleware [-] Matched GET /003048198aa94a0bb07b00802931d332/servers/5749c7ac-7567-4b4d-a3ec-9bf248e4c450 __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:100 2014-10-31 16:40:52.537 3777 DEBUG routes.middleware [-] Route path: '/{project_id}/servers/:(id)', defaults: {'action': u'show', 'controller': } __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:102 2014-10-31 16:40:52.537 3777 DEBUG routes.middleware [-] Match dict: {'action': u'show', 'controller': , 'project_id': u'003048198aa94a0bb07b00802931d332', 'id': u'5749c7ac-7567-4b4d-a3ec-9bf248e4c450'} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103 2014-10-31 16:40:52.538 3777 DEBUG nova.api.openstack.wsgi [req-06cc551d-72f8-4fa7-b65d-022fa03b656e 1e7cb96667d049ccbcbd4aac84ca71b2 003048198aa94a0bb07b00802931d332] Calling method '>' (Content-type='None', Accept='application/json') _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:945 2014-10-31 16:40:52.595 3777 INFO nova.osapi_compute.wsgi.server [req-06cc551d-72f8-4fa7-b65d-022fa03b656e 1e7cb96667d049ccbcbd4aac84ca71b2 003048198aa94a0bb07b00802931d332] 62.210.200.76 "GET /v2/003048198aa94a0bb07b00802931d332/servers/5749c7ac-7567-4b4d-a3ec-9bf248e4c450 HTTP/1.1" status: 200 len: 1920 time: 0.0621860 2014-10-31 16:40:57.563 3777 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:666 2014-10-31 16:40:57.564 3777 DEBUG keystoneclient.middleware.auth_token [-] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:725 2014-10-31 16:40:57.564 3777 DEBUG keystoneclient.middleware.auth_token [-] Returning cached token _cache_get /usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py:1124 2014-10-31 16:40:57.565 3777 DEBUG keystoneclient.middleware.auth_token [-] Stori
[Yahoo-eng-team] [Bug 1307696] Re: deprecated method in sqlalchemy causes 500 errors in glance
Again related to v1 and no one else has encountered this issue, safe to mark as won't fix. ** Changed in: glance Status: New => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1307696 Title: deprecated method in sqlalchemy causes 500 errors in glance Status in Glance: Won't Fix Bug description: I'm working on setting up Havana on CentOS 6.5, bare metal. I'm using http://docs.openstack.org/havana/install-guide/install/yum/content /glance-verify.html for my install doc. [root@fw1 ~]# glance --version 0.12.0 When I tried to load the test image into glance: glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 \ --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img I get a 500 error: http://pastie.org/private/l9hzbuacftyt9ashgt7ang I poked around, and discovered that I also get the same 500 error when I do a glance image-list: [root@box images]# glance image-list Request returned failure status. HTTPInternalServerError (HTTP 500) Doing a curl: curl -i -X GET -H 'X-Auth-Token: blahblah' -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' http://192.168.20.1:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20 returns: HTTP/1.1 500 Internal Server Error Content-Type: text/plain Content-Length: 0 Date: Sun, 13 Apr 2014 19:44:51 GMT Connection: close So, I bumped up to debug output for both the registry and api, and this is what I've got: Doing a glance image-list, registry log: http://pastie.org/private/cshdglpjxasmitl480ljq Doing a glance image-list, api log: http://pastie.org/private/wkceq0bdrunsvvm9qemdow Doing a glance image-create blahblahblah, registry log: http://pastie.org/private/xxetsfl0nopcaszvsiemw Doing a glance image-create blahblahblah, api log: http://pastie.org/private/jmsa5y30xn6xpay77weea So, here's the registry log with sqlalchemy debug added into the mix. 
* during a "glance image-list": http://pastie.org/private/gwkvnzh6n3o0koka5ohw * and during a "glance image-create blah blah < blah.img" http://pastie.org/private/yqzjb7lg2odnevnzof67zg Also, I tried to drop and recreate the database using: openstack-db --drop --service glance openstack-db --init --service glance --password glance (yeah, simple passwords, it's internal use only) and I got the following error output: http://hastebin.com/xanutudafa.md I verified the mysql connectivity works fine and i can write to the image store dir: http://pastie.org/private/vsnijulitckjhkg1tkwlnq After discussion with some devs, it seems likely that there's something inconsistent with the way that package was built: the call is being made to a deprecated method in sqlalchemy and the migration (db_sync) does not seem to be happening successfully, causing the 500's. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1307696/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1319150] Re: v1 API: image data downloads for images in pending_delete fails
This bug is reported for v1 api which is deprecated and will be removed during rocky cycle, so marking this as won't fix. ** Changed in: glance Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1319150 Title: v1 API: image data downloads for images in pending_delete fails Status in Glance: Won't Fix Status in OpenStack Compute (nova): Opinion Bug description: Use case: user spawns an instance, then deletes the boot image, to prevent new instances from being spawned with it. Later, the user wants to live migrate the instance to a new hypervisor. As part of this, nova will attempt to download all the images used by the instance on the target node to populate its image cache. Because the image used is no longer in an explicit "active" state, the Glance v1 API returns a 404 Not Found. According to the docs, pending_delete is supposed to be recoverable, which could arguably imply that downloading and re-uploading the image would be allowed; so this smells like a bug. I will not let the fact that Mark just told me in as many words that it is not dissuade me from this point of view. ;-) To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1319150/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1309597] Re: wrong swift acl format when setting swift_store_admin_tenant
** Changed in: glance Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1309597 Title: wrong swift acl format when setting swift_store_admin_tenant Status in Glance: Invalid Bug description: Hi, I'm testing on icehouse (git >> 2c4bd695652a628758eb56cb36394940a855d696) Here is how to reproduce: http://paste.openstack.org/show/61570/ I have three problems here: * The admin tenant list seems to be applied only on the write ACL. * Whatever the value of the admin_tenants list, the string ":*" is added to the list, which results in a bad format of the ACL string; Swift accepts this string format but ends up with an error when the container is accessed by glance, making the image unusable in some cases. * Also, write ACL values are not correct when the set ACL function is called several times, because write_tenant has [] as a default parameter (a known Python problem when a mutable object is passed as a default parameter). I checked the code and we can easily see the problem here, in glance.store.swift: http://paste.openstack.org/show/61574/ To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1309597/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1319903] Re: preallocated snapshot images should not be determined by format
Not sure if this is still reproducible. Marking this as won't fix, as it is four years old and a lot of water has flowed under the bridge for both nova and glance.

** Changed in: glance Status: New => Won't Fix

https://bugs.launchpad.net/bugs/1319903
Title: preallocated snapshot images should not be determined by format
Status in Glance: Won't Fix

Bug description: In nova, we can create a raw preallocated disk and a qcow2 preallocated disk - both would be preallocated although the format would change.

nova.conf configured with preallocation + qcow2:

[root@orange-vdsf _base(keystone_admin)]# qemu-img info /var/lib/nova/instances/e143d1e7-d665-4129-80fd-729c2df63262/disk
image: /var/lib/nova/instances/e143d1e7-d665-4129-80fd-729c2df63262/disk
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 10G
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/8dc2949154674d316d255dc55d411867614e6eb4
Format specific information:
    compat: 1.1
    lazy refcounts: false

nova.conf configured with preallocation and raw:

[root@orange-vdsf _base(keystone_admin)]# qemu-img info /var/lib/nova/instances/1ed7becd-4635-4ba5-a9dd-2a6c5f6f58d3/disk
image: /var/lib/nova/instances/1ed7becd-4635-4ba5-a9dd-2a6c5f6f58d3/disk
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 10G

This is a snapshot image created from a qcow2 + preallocated instance:

image: /var/lib/glance/images/90983f5a-08a8-48b0-a209-926e6ccfcd4f
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 18M
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

This is a snapshot image created from a raw + preallocated instance:

[root@orange-vdsf _base(keystone_admin)]# qemu-img info /var/lib/glance/images/9fbfb9b0-f008-48b7-8684-c91eeb69406c
image: /var/lib/glance/images/9fbfb9b0-f008-48b7-8684-c91eeb69406c
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 10G

[root@orange-vdsf _base(keystone_admin)]# glance image-show 90983f5a-08a8-48b0-a209-926e6ccfcd4f
| Property                               | Value                                |
| Property 'base_image_ref'              | 566a54a6-ff91-4546-bbf5-ad3f240e29f0 |
| Property 'image_location'              | snapshot                             |
| Property 'image_state'                 | available                            |
| Property 'image_type'                  | snapshot                             |
| Property 'instance_type_ephemeral_gb'  | 0                                    |
| Property 'instance_type_flavorid'      | 961b00cd-d199-416b-886a-e9f5ecafebd0 |
| Property 'instance_type_id'            | 6                                    |
| Property 'instance_type_memory_mb'     | 512                                  |
| Property 'instance_type_name'          | Dafna                                |
| Property 'instance_type_root_gb'       | 10                                   |
| Property 'instance_type_rxtx_factor'   | 1.0                                  |
| Property 'instance_type_swap'          | 0                                    |
| Property 'instance_type_vcpus'         | 1                                    |
| Property 'instance_uuid'               | e143d1e7-d665-4129-80fd-729c2df63262 |
| Property 'network_allocated'           | True                                 |
| Property 'owner_id'                    | 4ad766166539403189f2caca1ba306aa     |
| Property 'user_id'                     | c9062d562d9f41e4a1fdce36a4f176f6     |
| checksum                               | 4b92737f3e2b8b0fa239b10f895644ca     |
| container_format                       | bare                                 |
| created_at                             | 2014-05-15T15:02:44                  |
| deleted                                | False                                |
| disk_format                            | qcow2                                |
| id                                     | 90983f5a-08a8-48b0-a209-926e6ccfcd4f |
| is_public                              | False                                |
| min_disk                               | 10                                   |
| min_ram                                | 0                                    |
| name                                   | qcow_preallocated                    |
| owner                                  | 4ad766166539403189f2caca1ba306aa     |
| protected                              | False                                |
| size                                   | 19333120
[Yahoo-eng-team] [Bug 1747848] Re: qcow2 format image uploaded to raw format, failed to start the virtual machine with ceph as backend storage
Glance does not, by default, inspect or modify the image data. It is the user's responsibility to provide metadata that correctly describes the image data they are uploading, in order to produce a bootable image.

** Changed in: glance Status: Incomplete => Opinion

https://bugs.launchpad.net/bugs/1747848
Title: qcow2 format image uploaded to raw format, failed to start the virtual machine with ceph as backend storage
Status in Glance: Opinion

Bug description: In the glance-api.conf configuration (ceph is the backend store):

[DEFAULT]
..
show_image_direct_url = True
show_multiple_locations = True
..
[glance_store]
filesystem_store_datadir = /opt/stack/data/glance/images/
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = admin
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
rbd_secret_uuid = 08bf86f1-09c0-4f03-90e6-ae361d520c57

If a qcow2 image is uploaded as raw format and a virtual machine is launched from it, the console shows "No boot device" once creation completes; the virtual machine cannot boot from this image.
[Yahoo-eng-team] [Bug 1765309] Re: 500 error on image-download if image is not present at any location
If you look at the code that you modified to introduce this bug, you are bypassing the try..except block that records the exception to be raised on failure, so the code ends up trying to raise None, which obviously fails.

** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1765309
Title: 500 error on image-download if image is not present at any location
Status in Glance: Invalid

Bug description: As of now, downloading an image iterates through all the locations; if the image data is not present at any of them, it logs the error message "Glance tried all active locations to get data for image but all have failed." and then fails with a 500 error. This issue can occur either if the image is not available at any location or if any error occurred during the download process.

NOTE: To reproduce this issue I manually added a 'continue' statement at [1] to simulate the image being unavailable at every location.

[1] https://github.com/openstack/glance/blob/master/glance/location.py#L470

Steps to reproduce:
1. Modify glance/location.py as mentioned in the NOTE.
2. Restart the glance-api service.
3. Download the image using: $ glance image-download --file downloaded_image

Expected result: the image data is downloaded and saved in the 'downloaded_image' file.

Actual result:
$ glance image-download 780ffe26-b95a-4b6f-b2c0-c38803bed73a --file abcd
Unable to download image '780ffe26-b95a-4b6f-b2c0-c38803bed73a'.
(502 Proxy Error: The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /image/v2/images/780ffe26-b95a-4b6f-b2c0-c38803bed73a/file. Reason: Error reading from remote server. Apache/2.4.18 (Ubuntu) Server at 192.168.0.6 Port 80 (HTTP 502))

Glance API logs:

Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance.location [None req-cfca70d4-d9a7-495f-9762-0fb2a8eee769 admin admin] Glance tried all active locations to get data for image 780ffe26-b95a-4b6f-b2c0-c38803bed73a but all have failed.
Apr 19 06:37:58 signature devstack@g-api.service[5419]: CRITICAL glance [None req-cfca70d4-d9a7-495f-9762-0fb2a8eee769 admin admin] Unhandled error: TypeError: 'ImageProxy' object is not callable
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance Traceback (most recent call last):
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     resp = self.call_func(req, *args, **self.kwargs)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 196, in call_func
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     return self.func(req, *args, **kwargs)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/base.py", line 131, in __call__
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     response = req.get_response(self.application)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1327, in send
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     application, catch_exc_info=False)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1291, in call_application
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     app_iter = application(self.environ, start_response)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     resp = self.call_func(req, *args, **self.kwargs)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 196, in call_func
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     return self.func(req, *args, **kwargs)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/oslo_middleware/base.py", line 131, in __call__
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance     response = req.get_response(self.application)
Apr 19 06:37:58 signature devstack@g-api.service[5419]: ERROR glance   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1327, in send
Apr 19
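The failure mode described in the closing comment is generic Python: when a loop records the last exception in a variable for a final re-raise, any code path that skips the `except` block leaves that variable as None, and `raise None` fails with a TypeError. A minimal standalone sketch (function and data are illustrative, not Glance's actual location code):

```python
def download_from_locations(locations, skip_all=False):
    error = None
    for loc in locations:
        if skip_all:
            # Mimics the manually added 'continue' from the NOTE: the
            # try/except below never runs, so 'error' stays None.
            continue
        try:
            raise IOError("data missing at %s" % loc)
        except IOError as exc:
            error = exc  # remembered so it can be re-raised after the loop
    # With skip_all=True this is effectively 'raise None', which itself
    # fails with TypeError: exceptions must derive from BaseException.
    raise error
```

So the 500 the reporter saw came from the injected bypass, not from the normal all-locations-failed path, which is why the report was closed as Invalid.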
[Yahoo-eng-team] [Bug 1264639] Re: Glance v1 unit test code can do with some refactoring
V1 tests have been removed in the Rocky cycle; no need to act on this anymore.

** Changed in: glance Status: In Progress => Invalid
** Changed in: glance Status: Invalid => Won't Fix

https://bugs.launchpad.net/bugs/1264639
Title: Glance v1 unit test code can do with some refactoring
Status in Glance: Won't Fix

Bug description: The Glance v1 unit test code (tests/unit/v1/test_api.py) could do with some refactoring - there are lots of repeated patterns throughout the code. Refactoring it would help with understanding and maintaining the code and also make adding future tests simpler. This bug is an action item to track the review comment on https://review.openstack.org/#/c/64079/4 raised by Steve Kowalik.
[Yahoo-eng-team] [Bug 1272464] Re: changing image owner is broken in swift multi tenant mode (v1)
This bug is reported against the v1 API, which is deprecated and will be removed during the Rocky cycle, so marking this as invalid.

** Changed in: glance Status: In Progress => Invalid
** Changed in: glance Status: Invalid => Won't Fix

https://bugs.launchpad.net/bugs/1272464
Title: changing image owner is broken in swift multi tenant mode (v1)
Status in Glance: Won't Fix

Bug description: Hi, in v1, when using swift (multi-tenant) as the backend, if we change the image owner, the new owner is not granted a read ACL on the swift container holding the image. As a result, the new owner is able to list the image but unable to use it. Example here: http://paste.openstack.org/show/61841/
[Yahoo-eng-team] [Bug 1269634] Re: Glance reports NO ERROR for failed upload
This bug is reported against the v1 API, which is deprecated and will be removed during the Rocky cycle, so marking this as invalid.

** Changed in: glance Status: New => Invalid
** Changed in: glance Status: Invalid => Won't Fix

https://bugs.launchpad.net/bugs/1269634
Title: Glance reports NO ERROR for failed upload
Status in Glance: Won't Fix

Bug description: With a 0.12.0 client, attempt to upload an image to glance:

$ glance -d -v image-create --name=Test --disk-format=qcow2 --container-format=bare --file test.img
Error communicating with http://example.com:9292 [Errno 35] Resource temporarily unavailable

In debug mode:

curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 13147648' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: tokentoken' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: Test' -d '' http://example.com:9292/v1/images

Relevant glance logs: http://paste.openstack.org/show/61317/

The bug is that no errors are reported for the failure in glance-api or glance-registry. This should at least raise a warning and return a non-200 status back to the client.
[Yahoo-eng-team] [Bug 1273171] Re: On Glance API, "changes-since" parameter filters out images which have been updated at the same date as the specified timestamp
This bug is reported against the v1 API, which is deprecated and will be removed during the Rocky cycle, so marking this as won't fix.

** Changed in: glance Status: In Progress => Won't Fix

https://bugs.launchpad.net/bugs/1273171
Title: On Glance API, "changes-since" parameter filters out images which have been updated at the same date as the specified timestamp
Status in Glance: Won't Fix

Bug description:
Environment: OpenStack deployed by devstack.

Steps to reproduce:
1. Check the "images" table.
2. Request the glance API with the "changes-since" parameter, whose value is the same as the "updated_at" of an image in the "images" table.
3. The image whose "updated_at" equals the "changes-since" parameter is filtered out.

Expected result: in step 3, the image is not filtered out.

Remark: this is similar to https://review.openstack.org/#/c/60157/.

Example:

$ mysql -u root glance
$ select * from images;
| id                                   | ... | updated_at          |
| 1d88c716-ecd8-4ca1-9fc5-3bda1cf5affc | ... | 2014-01-24 17:18:23 |
| b7bc3608-f19e-4eb1-a178-f3c59af2ba22 | ... | 2014-01-24 17:18:24 |
| ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9 | ... | 2014-01-24 17:18:25 |

$ curl *** http://192.168.0.10:9292/v1/images/detail?changes-since=2014-01-24T17:18:25
HTTP/1.1 200 OK
***
{"images": []}

Actual result: the image whose id is "ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9" is filtered out. This image shouldn't be filtered out, given the parameter's name, which includes "since".
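The boundary behaviour is easy to see in isolation: the reported result corresponds to a strict `>` comparison, while the name "changes-since" suggests an inclusive `>=`. A minimal sketch with the timestamps from the report (the filter functions are illustrative, not Glance's actual query code):

```python
from datetime import datetime

images = [
    {"id": "1d88c716-ecd8-4ca1-9fc5-3bda1cf5affc", "updated_at": datetime(2014, 1, 24, 17, 18, 23)},
    {"id": "b7bc3608-f19e-4eb1-a178-f3c59af2ba22", "updated_at": datetime(2014, 1, 24, 17, 18, 24)},
    {"id": "ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9", "updated_at": datetime(2014, 1, 24, 17, 18, 25)},
]

def changes_since_strict(images, since):
    # Behaviour reported in the bug: an image updated exactly at the
    # requested timestamp is excluded, so the query returns nothing.
    return [img for img in images if img["updated_at"] > since]

def changes_since_inclusive(images, since):
    # Behaviour the reporter expects from the name "changes-since".
    return [img for img in images if img["updated_at"] >= since]

ts = datetime(2014, 1, 24, 17, 18, 25)
```

With `ts` equal to the last image's `updated_at`, the strict filter returns an empty list (matching the `{"images": []}` response above) while the inclusive filter returns that image.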
[Yahoo-eng-team] [Bug 1744494] Re: Swift backend does not support insecure Keystone v3 with SSL
** Also affects: glance-store Importance: Undecided Status: New
** Changed in: glance Importance: Undecided => Low
** Changed in: glance-store Importance: Undecided => Low

https://bugs.launchpad.net/bugs/1744494
Title: Swift backend does not support insecure Keystone v3 with SSL
Status in Glance: New
Status in glance_store: New

Bug description: The swift glance_store client does not create an insecure auth client when using Keystone v3 with an unsigned cert delivering Swift service endpoints. With keystone authtoken insecure=true and swift_store_auth_insecure=true, Glance returns the following error when uploading a new image: http://paste.openstack.org/show/648868/

glance-api_1 | 2018-01-20 19:50:43.409 208 ERROR glance.common.wsgi BackendException: Cannot find swift service endpoint : Unable to establish connection to https://192.168.1.44:35357/v3/auth/tokens: HTTPSConnectionPool(host='192.168.1.44', port=35357): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
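For context on what the reporter expected: in Python TLS clients, an "insecure" flag typically maps to disabling certificate (and hostname) verification when the connection is set up, which is what swift_store_auth_insecure=true should request of the auth client. A hedged sketch of that mapping using the standard library (`make_tls_context` is a hypothetical helper, not glance_store's actual code):

```python
import ssl

def make_tls_context(insecure=False, cacert=None):
    # Default: verify certificates against the given CA bundle (or the
    # system default when cacert is None).
    ctx = ssl.create_default_context(cafile=cacert)
    if insecure:
        # What an "insecure" option is expected to do: skip hostname
        # checks and certificate verification entirely. check_hostname
        # must be disabled before verify_mode can be set to CERT_NONE.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The bug report is essentially that the swift driver never applies the insecure branch to the Keystone v3 auth connection, so the self-signed cert fails verification.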
[Yahoo-eng-team] [Bug 1747727] Re: Unit test test_download_service_unavailable fails behind proxy
By the looks of it, this affected only v1 tests.

** Changed in: glance Status: New => Won't Fix

https://bugs.launchpad.net/bugs/1747727
Title: Unit test test_download_service_unavailable fails behind proxy
Status in Glance: Won't Fix

Bug description: A patch was submitted some time back to allow some tests to run behind an http proxy [0]. It's unclear to me why '0.0.0.0:1' was used rather than something like '127.0.0.1', which will commonly be in an environment's $no_proxy variable, whereas the former will not. Unless there's some special reason this value was used, I propose switching instances of 0.0.0.0:1 to 127.0.0.1.

[0] https://review.openstack.org/#/c/316965/
[Yahoo-eng-team] [Bug 1256593] Re: image size should not be overwritten when using --copy-from
This bug is reported against the v1 API, which is deprecated and will be removed during the Rocky cycle, so marking this as invalid.

** Changed in: glance Status: In Progress => Invalid

https://bugs.launchpad.net/bugs/1256593
Title: image size should not be overwritten when using --copy-from
Status in Glance: Won't Fix

Bug description: When uploading a new image using either stdin or --file, we return 500 if the size passed in the request does not match what we gather from the backing store. The --copy-from option behaves inconsistently, as it overwrites the size passed by the uploader with the value from the store, so we don't enforce the aforementioned check.
[Yahoo-eng-team] [Bug 1665851] Re: Newton: Heat not validating images
Ditto.

** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1665851
Title: Newton: Heat not validating images
Status in Glance: Invalid
Status in OpenStack Heat: Invalid

Bug description: I'm seeing this error when Heat validates the existence of an image:

2017-02-18 00:44:03.777 7906 INFO heat.engine.resource [req-593e2bad-c87f-4308-8fe8-fe8652286201 - - - - -] Validating Server "server"
2017-02-18 00:44:03.779 7906 DEBUG heat.engine.stack [req-593e2bad-c87f-4308-8fe8-fe8652286201 - - - - -] Property error: resources.server.properties.image: "cirros" does not validate glance.image (constraint not found) validate /usr/lib/python2.7/dist-packages/heat/engine/stack.py:825
2017-02-18 00:44:03.783 7906 DEBUG oslo_messaging.rpc.server [req-593e2bad-c87f-4308-8fe8-fe8652286201 - - - - -] Expected exception during message handling () _process_incoming /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py:158

The image, however, exists and is public:

os@controller:~$ openstack image list
| ID                                   | Name   | Status |
| 7ab5d7aa-0d0d-4a38-bf05-03089f49d2d7 | cirros | active |

I have been updating some components related to Tacker and OpenStackClient, so I think one of those updates triggered the bug. Please let me know which information to collect. Thanks.
[Yahoo-eng-team] [Bug 1764200] Re: Glance Cinder backed images & multiple regions
** Also affects: glance-store Importance: Undecided Status: New

https://bugs.launchpad.net/bugs/1764200
Title: Glance Cinder backed images & multiple regions
Status in Glance: New
Status in glance_store: New

Bug description: When using cinder-backed images as per https://docs.openstack.org/cinder/latest/admin/blockstorage-volume-backed-image.html

We have multiple locations; glance is configured as:

/etc/glance/glance-api.conf
[glance_store]
stores = swift, cinder
default_store = swift
-snip-
cinder_store_auth_address = https://hostname:5000/v3
cinder_os_region_name = Region
cinder_store_user_name = glance
cinder_store_password = Password
cinder_store_project_name = cinder-images
cinder_catalog_info = volume:cinder:internalURL

Cinder clones the volume correctly, then talks to glance to add the cinder:// location. Glance then talks to cinder to validate the volume id; however, this step uses the wrong cinder endpoint and checks the other region.

From /usr/lib/python2.7/site-packages/glance_store/_drivers/cinder.py it appears the region name is only used when not passing in the project/user/password. Passing the os_region_name to the cinderclient.Client call on line 351 appears to fix this, i.e.:

c = cinderclient.Client(username, password, project, auth_url=url,
                        region_name=glance_store.cinder_os_region_name,
                        insecure=glance_store.cinder_api_insecure,
                        retries=glance_store.cinder_http_retries,
                        cacert=glance_store.cinder_ca_certificates_file)
[Yahoo-eng-team] [Bug 1658590] Re: Windows2012 r2 OS deployment getting Blue Screen death using .qcow2
Good to hear this got fixed. It was never really a Glance bug and slipped under our radar.

** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1658590
Title: Windows2012 r2 OS deployment getting Blue Screen death using .qcow2
Status in Glance: Invalid
Status in CentOS: Unknown

Bug description: I tried to deploy the Windows 2012 operating system as an instance in my OpenStack setup (Newton). To create the instance I followed the URL below: http://docs.openstack.org/image-guide/windows-image.html

I also tried deploying a directly downloaded qcow2, using the following URL: https://cloudbase.it/windows-cloud-images/

In all these cases we get a Blue Screen of Death with the following error message: "DRIVER_IRQL_NOT_LESS_OR_EQUAL". Please find the attached screenshot for reference.
[Yahoo-eng-team] [Bug 1640706] Re: glance image-list throws 'An auth plugin is required to fetch a token' error
This might have been a valid bug; we just missed flagging its fix. Anyway, it seems this is not a problem anymore.

** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1640706
Title: glance image-list throws 'An auth plugin is required to fetch a token' error
Status in Glance: Invalid

Bug description: The following variables are defined via environment variables:

OS_PROJECT_NAME
OS_PASSWORD
OS_AUTH_URL
OS_USERNAME
OS_TENANT_NAME
OS_PROJECT_DOMAIN_NAME

But while trying to get the image list, glance throws the error 'An auth plugin is required to fetch a token':

# glance image-list
An auth plugin is required to fetch a token
[Yahoo-eng-team] [Bug 1651336] Re: Not able to add image via API
** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1651336
Title: Not able to add image via API
Status in Glance: Invalid

Bug description: When hitting the API to add a glance image, the image is not added, although the template is created. If I create it via the dashboard, I am able to launch it successfully.
[Yahoo-eng-team] [Bug 1633561] Re: Downloaded latest devstack. Did stack.sh and getting glance error
Closing this as we haven't seen this behavior nor heard back for over a year.

** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1633561
Title: Downloaded latest devstack. Did stack.sh and getting glance error
Status in Glance: Invalid

Bug description:
2016-10-14 16:09:36.506 | 2016-10-14 12:09:36.506 INFO migrate.versioning.api [-] done
2016-10-14 16:09:36.506 | 2016-10-14 12:09:36.506 INFO migrate.versioning.api [-] 6 -> 7...
2016-10-14 16:09:37.325 | 2016-10-14 12:09:37.325 INFO migrate.versioning.api [-] done
2016-10-14 16:09:37.325 | 2016-10-14 12:09:37.325 INFO migrate.versioning.api [-] 7 -> 8...
2016-10-14 16:09:37.349 | 2016-10-14 12:09:37.334 INFO glance.db.sqlalchemy.migrate_repo.schema [-] creating table image_members
2016-10-14 16:09:39.162 | 2016-10-14 12:09:39.162 INFO migrate.versioning.api [-] done
2016-10-14 16:09:39.162 | 2016-10-14 12:09:39.162 INFO migrate.versioning.api [-] 8 -> 9...
2016-10-14 16:09:40.765 | 2016-10-14 12:09:40.765 INFO migrate.versioning.api [-] done
2016-10-14 16:09:40.765 | 2016-10-14 12:09:40.765 INFO migrate.versioning.api [-] 9 -> 10...
2016-10-14 16:09:40.830 | 2016-10-14 12:09:40.830 INFO migrate.versioning.api [-] done
2016-10-14 16:09:40.831 | 2016-10-14 12:09:40.830 INFO migrate.versioning.api [-] 10 -> 11...
2016-10-14 16:09:42.522 | 2016-10-14 12:09:42.522 INFO migrate.versioning.api [-] done
2016-10-14 16:09:42.522 | 2016-10-14 12:09:42.522 INFO migrate.versioning.api [-] 11 -> 12...
2016-10-14 16:09:42.580 | 2016-10-14 12:09:42.524 CRITICAL glance [-] File "/opt/stack/glance/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py", line 123 2016-10-14 16:09:42.580 | SyntaxError: Non-ASCII character '\x94' in file /opt/stack/glance/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py on line 123, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details 2016-10-14 16:09:42.580 | 2016-10-14 16:09:42.580 | 2016-10-14 12:09:42.524 TRACE glance Traceback (most recent call last): 2016-10-14 16:09:42.580 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/bin/glance-manage", line 10, in 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance sys.exit(main()) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/opt/stack/glance/glance/cmd/manage.py", line 330, in main 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance return CONF.command.action_fn() 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/opt/stack/glance/glance/cmd/manage.py", line 190, in sync 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance CONF.command.current_version) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/opt/stack/glance/glance/cmd/manage.py", line 108, in sync 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance version) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/migration.py", line 78, in db_sync 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance migration = versioning_api.upgrade(engine, repository, version) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, in upgrade 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance return _migrate(url, repository, version, upgrade=True, err=err, **opts) 2016-10-14 16:09:42.581 | 2016-10-14 
12:09:42.524 TRACE glance File "", line 2, in _migrate 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 160, in with_engine 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance return f(*a, **kw) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 366, in _migrate 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance schema.runchange(ver, change, changeset.step) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 93, in runchange 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance change.run(self.engine, step) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/script/py.py", line 141, in run 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance script_func = self._func(funcname) 2016-10-14 16:09:42.581 | 2016-10-14 12:09:42.524 TRACE glance File "/usr/local/lib/python2.7/dist-p
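The SyntaxError in the migration above is Python 2 defaulting to ASCII for source files: a module containing non-ASCII bytes must declare its encoding in its first two lines per PEP 263 (e.g. `# -*- coding: utf-8 -*-`), which is the usual fix. A small sketch of how such a declaration is recognised, using a simplified version of the PEP 263 pattern (the helper name is illustrative):

```python
import re

# Simplified form of the PEP 263 pattern; the declaration is only honoured
# when it appears in the first or second line of the file.
CODING_RE = re.compile(r"coding[:=]\s*([-\w.]+)")

def declared_encoding(lines):
    """Return the declared source encoding, or None (Python 2 then assumes ASCII)."""
    for line in lines[:2]:
        match = CODING_RE.search(line)
        if match:
            return match.group(1)
    return None
```

With no declaration, any byte like '\x94' in the file triggers exactly the "Non-ASCII character ... but no encoding declared" error shown in the traceback.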
[Yahoo-eng-team] [Bug 1668604] Re: Uploading image from Horizon into Glance fails with xmlrequest blocked
** Changed in: glance Status: New => Invalid

https://bugs.launchpad.net/bugs/1668604
Title: Uploading image from Horizon into Glance fails with xmlrequest blocked
Status in Glance: Invalid
Status in OpenStack Dashboard (Horizon): Invalid

Bug description: Hi there, when set up and running, I can upload images via glance image-create. However, through Horizon I get "xmlhttprequest blocked" by Chrome, because my dashboard is configured with SSL (https://) but the response from the glance-api has the "upload_url" as non-SSL (http://). I have tried to change the URL protocol coming back but haven't found any option, so I suspect this must be a bug. Thanks, Karl.