Re: [openstack-dev] [tripleo][openstack-ansible][nova][placement] Owners needed for placement extraction upgrade deployment tooling
On 30-10-18 14:29:12, Emilien Macchi wrote:
> On the TripleO side, it sounds like Lee Yarwood is taking the lead with a
> first commit in puppet-placement:
> https://review.openstack.org/#/c/604182/
>
> Lee, can you confirm that you and your team are working on it for Stein
> cycle?

ACK, just getting back online after being out for three weeks but still
planning on getting everything in place by the original M2 goal we agreed
to at PTG. I'll try to post more details by the end of the week.

Cheers,

Lee

> On Thu, Oct 25, 2018 at 1:34 PM Matt Riedemann wrote:
>
> > Hello OSA/TripleO people,
> >
> > A plan/checklist was put in place at the Stein PTG for extracting
> > placement from nova [1]. The first item in that list is done in grenade
> > [2], which is the devstack-based upgrade project in the integrated gate.
> > That should serve as a template for the necessary upgrade steps in
> > deployment projects. The related devstack change for extracted placement
> > on the master branch (Stein) is [3]. Note that change has some
> > dependencies.
> >
> > The second point in the plan from the PTG was getting extracted
> > placement upgrade tooling support in a deployment project, notably
> > TripleO (and/or OpenStackAnsible).
> >
> > Given the grenade change is done and passing tests, TripleO/OSA should
> > be able to start coding up and testing an upgrade step when going from
> > Rocky to Stein. My question is who can we name as an owner in either
> > project to start this work? Because we really need to be starting this
> > as soon as possible to flush out any issues before they are too late to
> > correct in Stein.
> >
> > So if we have volunteers or better yet potential patches that I'm just
> > not aware of, please speak up here so we know who to contact about
> > status updates and if there are any questions with the upgrade.
> > > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134541.html > > [2] https://review.openstack.org/#/c/604454/ > > [3] https://review.openstack.org/#/c/600162/ -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [puppet] [placement]
On 17-09-18 08:48:01, Emilien Macchi wrote:
> On Mon, Sep 17, 2018 at 5:29 AM Lee Yarwood wrote:
>
> > FWIW I've also started work on the RDO packaging front [1] and would be
> > happy to help with this puppet extraction.
>
> Good to know, thanks.
> Once we have the repo in place, here is a plan proposal:
>
> * Populate the repo with cookiecutter & adjust to Placement service
> * cp code from nova::placement (and nova::wsgi::apache_placement)
> * package placement and puppet-placement in RDO
> * start testing puppet-placement in puppet-openstack-integration
> * switch tripleo-common / THT to deploy placement in nova_placement container
> * switch tripleo to use puppet-placement (in THT)
> * probably rename nova_placement container/service into placement or something generic
>
> Feedback is welcome,

Thanks Emilien,

The only thing I'd add would be TripleO/THT powered upgrades, after
switching to puppet-placement. We discussed this in both the Nova and
Upgrades SIG rooms and the end goal was to have TripleO able to extract
placement during an upgrade to S by M2. I appreciate this is an optimistic
goal for upgrades but I think it's just about possible given the extended
cycle.

Cheers,

-- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
Re: [openstack-dev] [puppet] [placement]
On 15-09-18 11:42:37, Emilien Macchi wrote: > I'm currently taking care of creating puppet-placement: > https://review.openstack.org/#/c/602870/ > https://review.openstack.org/#/c/602871/ > https://review.openstack.org/#/c/602869/ > > Once these merge, we'll use cookiecutter, and move things from puppet-nova. > We'll also find a way to call puppet-placement from nova::placement class, > eventually. > Hopefully we can make the switch to new placement during Stein! Thanks Emilien, FWIW I've also started work on the RDO packaging front [1] and would be happy to help with this puppet extraction. Cheers, Lee [1] https://gitlab.com/lyarwood/placement-distgit -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable][nova] Nominating melwitt for nova stable core
On 28-08-18 15:26:02, Matt Riedemann wrote: > I hereby nominate Melanie Witt for nova stable core. Mel has shown that she > knows the stable branch policy and is also an active reviewer of nova stable > changes. > > +1/-1 comes from the stable-maint-core team [1] and then after a week with > no negative votes I think it's a done deal. Of course +1/-1 from existing > nova-stable-maint [2] is also good feedback. > > [1] https://review.openstack.org/#/admin/groups/530,members > [2] https://review.openstack.org/#/admin/groups/540,members +1 from me FWIW. Thanks, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)
On 20-08-18 16:29:52, Matthew Booth wrote:
> For those who aren't familiar with it, nova's volume-update (also
> called swap volume by nova devs) is the nova part of the
> implementation of cinder's live migration (also called retype).
> Volume-update is essentially an internal cinder<->nova api, but as
> that's not a thing it's also unfortunately exposed to users. Some
> users have found it and are using it, but because it's essentially an
> internal cinder<->nova api it breaks pretty easily if you don't treat
> it like a special snowflake. It looks like we've finally found a way
> it's broken for non-cinder callers that we can't fix, even with a
> dirty hack.
>
> volume-update essentially does a live copy of the data on <old volume>
> to <new volume>, then seamlessly swaps the attachment to <instance>
> from <old volume> to <new volume>. The guest OS on <instance> will not
> notice anything at all as the hypervisor swaps the storage backing an
> attached volume underneath it.
>
> When called by cinder, as intended, cinder does some post-operation
> cleanup such that <old volume> is deleted and <new volume> inherits
> the same volume_id; that is, <new volume> effectively becomes <old
> volume>. When called any other way, however, this cleanup doesn't
> happen, which breaks a bunch of assumptions. One of these is that a
> disk's serial number is the same as the attached volume_id. Disk
> serial number, in KVM at least, is immutable, so can't be updated
> during volume-update. This is fine if we were called via cinder,
> because the cinder cleanup means the volume_id stays the same. If
> called any other way, however, they no longer match, at least until a
> hard reboot when it will be reset to the new volume_id. It turns out
> this breaks live migration, but probably other things too. We can't
> think of a workaround.
>
> I wondered why users would want to do this anyway.
> It turns out that sometimes cinder won't let you migrate a volume, but
> nova volume-update doesn't do those checks (as they're specific to
> cinder internals, none of nova's business, and duplicating them would
> be fragile, so we're not adding them!). Specifically we know that
> cinder won't let you migrate a volume with snapshots. There may be
> other reasons. If cinder won't let you migrate your volume, you can
> still move your data by using nova's volume-update, even though you'll
> end up with a new volume on the destination, and a slightly broken
> instance. Apparently the former is a trade-off worth making, but the
> latter has been reported as a bug.
>
> I'd like to make it very clear that nova's volume-update isn't
> expected to work correctly except when called by cinder. Specifically
> there was a proposal that we disable volume-update from non-cinder
> callers in some way, possibly by asserting volume state that can only
> be set by cinder. However, I'm also very aware that users are calling
> volume-update because it fills a need, and we don't want to trap data
> that wasn't previously trapped.
>
> Firstly, is anybody aware of any other reasons to use nova's
> volume-update directly?
>
> Secondly, is there any reason why we shouldn't just document that you
> have to delete snapshots before doing a volume migration? Hopefully
> some cinder folks or operators can chime in to let me know how to back
> them up or somehow make them independent before doing this, at which
> point the volume itself should be migratable?
>
> If we can establish that there's an acceptable alternative to calling
> volume-update directly for all use-cases we're aware of, I'm going to
> propose heading off this class of bug by disabling it for non-cinder
> callers.

I'm definitely in favor of hiding this from users eventually but
wouldn't this require some form of deprecation cycle?
Warnings within the API documentation would also be useful and even something we could backport to stable to highlight just how fragile this API is ahead of any policy change. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
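The serial-number mismatch described in the thread above can be illustrated with a toy model (illustrative names only, not nova code): the disk serial is copied from the volume_id at attach time and is immutable in KVM, so a swap performed without cinder's cleanup leaves the two out of sync until a hard reboot.

```python
# Toy model of the mismatch described above -- illustrative only, not
# nova code. The serial is fixed from the volume_id at attach time
# (immutable in KVM), while a non-cinder swap changes the attached
# volume_id underneath it.

class AttachedDisk:
    def __init__(self, volume_id):
        self.serial = volume_id      # set once, at attach time
        self.volume_id = volume_id

    def swap_volume(self, new_volume_id):
        # What nova's volume-update does without cinder's cleanup: the
        # attachment now points at the new volume, but the serial cannot
        # be updated until a hard reboot.
        self.volume_id = new_volume_id

    def consistent(self):
        return self.serial == self.volume_id
```

When cinder drives the operation, its cleanup keeps the volume_id stable, so the invariant holds; called directly, it breaks.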
Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach
-- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
Re: [openstack-dev] minimum libvirt version for nova-compute
On 20-06-18 13:54:29, Lee Yarwood wrote: > On 20-06-18 07:32:08, Matt Riedemann wrote: > > On 6/20/2018 6:54 AM, Lee Yarwood wrote: > > > We can bump the minimum here but then we have to play a game of working > > > out the oldest version the above fix was backported to across the > > > various distros. I'd rather see this address by the Libvirt maintainers > > > in Debian if I'm honest. > > > > Just a thought, but in nova we could at least do: > > > > 1. Add a 'known issues' release note about the issue and link to the libvirt > > patch. > > ACK > > > and/or > > > > 2. Handle libvirtError in that case, check for the "Incorrect number of > > padding bytes" string in the error, and log something with a breadcrumb to > > the libvirt fix - that would be for people that miss the release note, or > > hit the issue past rocky and wouldn't have found the release note because > > they're on Stein+ now. > > Yeah that's fair, I'll submit something for both of the above today. libvirt: Log breadcrumb for known encryption bug https://review.openstack.org/577164 Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] minimum libvirt version for nova-compute
On 20-06-18 07:32:08, Matt Riedemann wrote: > On 6/20/2018 6:54 AM, Lee Yarwood wrote: > > We can bump the minimum here but then we have to play a game of working > > out the oldest version the above fix was backported to across the > > various distros. I'd rather see this address by the Libvirt maintainers > > in Debian if I'm honest. > > Just a thought, but in nova we could at least do: > > 1. Add a 'known issues' release note about the issue and link to the libvirt > patch. ACK > and/or > > 2. Handle libvirtError in that case, check for the "Incorrect number of > padding bytes" string in the error, and log something with a breadcrumb to > the libvirt fix - that would be for people that miss the release note, or > hit the issue past rocky and wouldn't have found the release note because > they're on Stein+ now. Yeah that's fair, I'll submit something for both of the above today. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
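A minimal sketch of option 2 above (hypothetical helper and injected dependencies, not the code in review 577164): catch the libvirt error, match the known message fragment, and log a breadcrumb pointing at the upstream libvirt fix before re-raising.

```python
# Hedged sketch of the proposed breadcrumb handling. The error message
# fragment and the fix URL come from the thread; the function shape and
# the injected 'log'/'libvirt_error_cls' parameters are illustrative.

PADDING_BYTES_MSG = 'Incorrect number of padding bytes'
LIBVIRT_FIX_URL = ('https://www.redhat.com/archives/libvir-list/'
                   '2017-May/msg00030.html')

def attach_with_breadcrumb(log, domain, device_xml, flags, libvirt_error_cls):
    """Attach a device, logging a breadcrumb if the known libvirt bug hits."""
    try:
        domain.attachDeviceFlags(device_xml, flags=flags)
    except libvirt_error_cls as exc:
        if PADDING_BYTES_MSG in str(exc):
            log.append('Probable libvirt LUKS padding bug; see '
                       + LIBVIRT_FIX_URL)
        raise
```

This keeps the failure visible (the exception is re-raised) while giving operators who never saw the release note a pointer to the real cause.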
Re: [openstack-dev] minimum libvirt version for nova-compute
On 20-06-18 11:23:24, Thomas Goirand wrote:
> Hi,
>
> Trying to get puppet-openstack to validate with Debian, I got surprised
> that mounting encrypted volume didn't work for me, here's the stack dump
> with libvirt 3.0.0 from Debian Stretch:
>
>   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1463, in attach_volume
>     guest.attach_device(conf, persistent=True, live=live)
>   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 303, in attach_device
>     self._domain.attachDeviceFlags(device_xml, flags=flags)
>   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 186, in doit
>     result = proxy_call(self._autowrap, f, *args, **kwargs)
>   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 144, in proxy_call
>     rv = execute(f, *args, **kwargs)
>   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 125, in execute
>     six.reraise(c, e, tb)
>   File "/usr/lib/python3/dist-packages/eventlet/support/six.py", line 625, in reraise
>     raise value
>   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker
>     rv = meth(*args, **kwargs)
>   File "/usr/lib/python3/dist-packages/libvirt.py", line 585, in attachDeviceFlags
>     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
> libvirt.libvirtError: internal error: unable to execute QEMU command
> 'object-add': Incorrect number of padding bytes (57) found on decrypted data

That's actually a bug and not a lack of support in the version of libvirt
you're using:

Unable to use LUKS passphrase that is exactly 16 bytes long
https://bugzilla.redhat.com/show_bug.cgi?id=1447297

[libvirt] [PATCH] Fix padding of encrypted data
https://www.redhat.com/archives/libvir-list/2017-May/msg00030.html

> After switching to libvirt 4.3.0 (my own backport from Debian Testing),
> it does work. So, while the minimum version of libvirt seems to be
> enough for normal operation, it isn't for encrypted volumes.
> Therefore, I wonder if Nova shouldn't declare a minimum version of
> libvirt higher than it claims at the moment. I'm stating that,
> especially because we had this topic a few weeks ago.

We can bump the minimum here but then we have to play a game of working
out the oldest version the above fix was backported to across the various
distros. I'd rather see this addressed by the Libvirt maintainers in
Debian if I'm honest.

Cheers,

-- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
Re: [openstack-dev] Reminder to add "nova-status upgrade check" to deployment tooling
On 13-06-18 10:14:32, Matt Riedemann wrote: > I was going through some recently reported nova bugs and came across [1] > which I opened at the Summit during one of the FFU sessions where I realized > the nova upgrade docs don't mention the nova-status upgrade check CLI [2] > (added in Ocata). > > As a result, I was wondering how many deployment tools out there support > upgrades and from those, which are actually integrating that upgrade status > check command. TripleO doesn't at present but like OSA it looks trivial to add: https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/nova-api.yaml I've created the following bug to track this: https://bugs.launchpad.net/tripleo/+bug/1777060 > I'm not really familiar with most of them, but I've dabbled in OSA enough to > know where the code lived for nova upgrades, so I posted a patch [3]. > > I'm hoping this can serve as a template for other deployment projects to > integrate similar checks into their upgrade (and install verification) > flows. > > [1] https://bugs.launchpad.net/nova/+bug/1772973 > [2] https://docs.openstack.org/nova/latest/cli/nova-status.html > [3] https://review.openstack.org/#/c/575125/ Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
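For deployment tools wiring this in, the integration largely amounts to running the command and acting on its exit code. A sketch, assuming the exit-code semantics documented for nova-status (0 success, 1 warnings, 2 failures); the function name and decision strings are illustrative, not from TripleO or OSA:

```python
# Hedged sketch of gating an upgrade on "nova-status upgrade check".
# Assumed exit codes (per the nova-status docs linked above):
#   0 - all checks succeeded, 1 - warnings only, anything else - failure.

def interpret_upgrade_check(returncode):
    """Map a nova-status exit code to a deploy-tool decision."""
    if returncode == 0:
        return 'proceed'
    if returncode == 1:
        return 'proceed-with-warnings'
    return 'abort'

# A tool would do something like:
#   rc = subprocess.call(['nova-status', 'upgrade', 'check'])
#   decision = interpret_upgrade_check(rc)
```

Treating warnings as non-fatal but logged matches how the grenade integration uses the check.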
[openstack-dev] [nova][cinder] Concurrent requests to attach the same non-multiattach volume to multiple instances can succeed
Hello all,

I just wanted to draw some attention to the following bug I stumbled
across yesterday when sending concurrent requests to attach a
non-multiattach volume to multiple instances:

https://bugs.launchpad.net/cinder/+bug/1762687

Scanning over the v3 API code in Cinder suggests that this could be due
to a complete lack of locking when creating the initial attachment, but I
might be missing something here.

I've marked this as impacting both Nova and Cinder for now, but if I'm
honest this strikes me as something we need to resolve in c-api alone.

Cheers,

-- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
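The race is a classic check-then-act window: two concurrent requests both see zero attachments before either records one. A sketch of the shape of a fix -- purely illustrative, not cinder's actual API or data model -- serializing attachment creation per volume so the check and the insert are atomic:

```python
# Illustrative sketch only: per-volume locking around attachment creation.
# All names (attachment_create, the volume dict layout) are hypothetical,
# not the cinder v3 API.
import threading

_volume_locks = {}
_locks_guard = threading.Lock()

def _lock_for(volume_id):
    # Lazily create one lock per volume id.
    with _locks_guard:
        return _volume_locks.setdefault(volume_id, threading.Lock())

def attachment_create(volume, instance_id):
    # Hold the volume's lock across the check AND the insert, closing the
    # window in which two concurrent requests both pass the check.
    with _lock_for(volume['id']):
        if volume['attachments'] and not volume['multiattach']:
            raise ValueError('volume is already attached')
        volume['attachments'].append(instance_id)
```

In a multi-API-node deployment an in-process lock would of course not suffice; the same atomicity would have to come from the database layer.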
[openstack-dev] [ffu][upgrades] Dublin PTG room and agenda
Hello all, A very late mail to highlight that there will once again be a 1 day track/room dedicated to talking about Fast-forward upgrades at the upcoming PTG in Dublin. The etherpad for which is listed below: https://etherpad.openstack.org/p/ffu-ptg-rocky Please feel free to add items to the pad, I'd really like to see some concrete action items finally come from these discussions ahead of R. Thanks in advance and see you in Dublin! -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading?
On 15-02-18 09:28:09, Thierry Carrez wrote: > Luo, Lujin wrote: > > Can someone be nice enough to point me to the Rocky Fast Forward Upgrading > > etherpad page? > > > > I am seeing Fast Forward Upgrading scheduled on Monday [1], but the > > etherpad for it is not listed in [2]. > > Indeed, the etherpad is missing, and I realize we don't have anyone > signed up yet to clearly lead that track... > > Is anyone interested in leading that track ? I did sign up a while ago, I've just failed to follow up during the last few weeks. I'll try to get things moving today. Regards, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading?
On 15-02-18 02:19:48, Luo, Lujin wrote: > Hello everyone, > > Can someone be nice enough to point me to the Rocky Fast Forward Upgrading > etherpad page? > > I am seeing Fast Forward Upgrading scheduled on Monday [1], but the etherpad > for it is not listed in [2]. > > Thanks in advance. > > [1] https://www.openstack.org/ptg/#tab_schedule > [2] https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads Hello Lujin, My apologies, I created this a while ago and forgot to add it to the list and ask for input on the ML: https://etherpad.openstack.org/p/ffu-ptg-rocky I'll get this added to the list now and will send a separate note to the ML later today seeking additional input on the agenda. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
On 23-01-18 13:44:49, Lee Yarwood wrote: > A breif progress update in-line below. > > On 22-01-18 14:22:12, Lee Yarwood wrote: > > Hello, > > > > With M3 and FF rapidly approaching this week I wanted to post a brief > > overview of the QEMU native LUKS series. > > > > The full series is available on the following topic, I'll go into more > > detail on each of the changes below: > > > > https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open > > > > libvirt: Collocate encryptor and volume driver calls > > https://review.openstack.org/#/c/460243/ (Missing final +2 and +W) > > > > This refactor of the Libvirt driver connect and disconnect volume code > > has the added benefit of also correcting a number of bugs around the > > attaching and detaching of os-brick encryptors. IMHO this would be > > useful in Queens even if the rest of the series doesn't land. > > > > libvirt: Introduce disk encryption config classes > > https://review.openstack.org/#/c/464008/ (Missing final +2 and +W) > > > > This is the most straight forward change of the series and simply > > introduces the required config classes to wire up native LUKS decryption > > within the domain XML of an instance. Hopefully nothing controversial. > > Both of these have landed, my thanks to jaypipes for his reviews! > > > libvirt: QEMU native LUKS decryption for encrypted volumes > > https://review.openstack.org/#/c/523958/ (Missing both +2s and +W) > > > > This change carries the bulk of the implementation, wiring up encrypted > > volumes during their initial attachment. The commit message has a > > detailed run down of the various upgrade and LM corner cases we attempt > > to handle here, such as LM from a P to Q compute, detaching a P attached > > encrypted volume after upgrading to Q etc. > > Thanks to melwitt and mdbooth for your reviews! I've respun to address > the various nits and typos pointed out in this change. Ready and waiting > to respin again if any others crop up. 
My thanks again to melwitt for another review on this final patch. I'm going to be offline for most of Thursday ahead of the FF deadline so if any non-RH core reviewers are able to look at this today I'll do my best to address any nits, concerns, facepalms etc ASAP. Cheers, Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
On 23-01-18 16:52:30, Corey Bryant wrote: > On Tue, Jan 23, 2018 at 8:44 AM, Lee Yarwood <lyarw...@redhat.com> wrote: >> grenade-dsvm-neutron-multinode-live-migration is currently failing due >> to our use of the Ocata UCA on stable/pike leading to the following >> issue with the libvirt 2.5.0 build it provides: >> >> libvirt 2.5.0-3ubuntu5.6~cloud0 appears to be compiled without gnutls >> https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1744758 >> > Hey Lee, > > We have a new version of libvirt in ocata-proposed now that should fix your > issue and is ready for testing. Thanks for your work on this and for > opening the bug. Thanks Corey, as reported in the bug this WORKSFORME. Thanks for the quick turn around with this, it's really appreciated! >> I've cherry-picked the following devstack change back to stable/pike and >> pulled it into the test change above for Nova, hopefully working around >> these failures: >> >> Update to using pike cloud-archive >> https://review.openstack.org/#/c/536798/ FWIW I still think we should enable the Pike UCA for our stable/pike jobs. As noted in the stable review, testing the Ocata UCA with stable/pike strikes me as pointless as no one will ever use that combination of UCA and stable/pike bits in a real world deployment. Cheers, Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
A brief progress update in-line below.

On 22-01-18 14:22:12, Lee Yarwood wrote:
> Hello,
>
> With M3 and FF rapidly approaching this week I wanted to post a brief
> overview of the QEMU native LUKS series.
>
> The full series is available on the following topic, I'll go into more
> detail on each of the changes below:
>
> https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open
>
> libvirt: Collocate encryptor and volume driver calls
> https://review.openstack.org/#/c/460243/ (Missing final +2 and +W)
>
> This refactor of the Libvirt driver connect and disconnect volume code
> has the added benefit of also correcting a number of bugs around the
> attaching and detaching of os-brick encryptors. IMHO this would be
> useful in Queens even if the rest of the series doesn't land.
>
> libvirt: Introduce disk encryption config classes
> https://review.openstack.org/#/c/464008/ (Missing final +2 and +W)
>
> This is the most straight forward change of the series and simply
> introduces the required config classes to wire up native LUKS decryption
> within the domain XML of an instance. Hopefully nothing controversial.

Both of these have landed, my thanks to jaypipes for his reviews!

> libvirt: QEMU native LUKS decryption for encrypted volumes
> https://review.openstack.org/#/c/523958/ (Missing both +2s and +W)
>
> This change carries the bulk of the implementation, wiring up encrypted
> volumes during their initial attachment. The commit message has a
> detailed run down of the various upgrade and LM corner cases we attempt
> to handle here, such as LM from a P to Q compute, detaching a P attached
> encrypted volume after upgrading to Q etc.

Thanks to melwitt and mdbooth for your reviews! I've respun to address
the various nits and typos pointed out in this change. Ready and waiting
to respin again if any others crop up.
> Upgrade and LM testing is enabled by the following changes: > > fixed_key: Use a single hardcoded key across devstack deployments > https://review.openstack.org/#/c/536343/ > > compute: Introduce an encrypted volume LM test > https://review.openstack.org/#/c/536177/ > > This is being tested by tempest-dsvm-multinode-live-migration and > grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova > change, enabling volume backed LM tests: > > DNM: Test LM with encrypted volumes > https://review.openstack.org/#/c/536350/ > > Hopefully that covers everything but please feel free to ping if you > would like more detail, background etc. Thanks in advance, grenade-dsvm-neutron-multinode-live-migration is currently failing due to our use of the Ocata UCA on stable/pike leading to the following issue with the libvirt 2.5.0 build it provides: libvirt 2.5.0-3ubuntu5.6~cloud0 appears to be compiled without gnutls https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1744758 I've cherry-picked the following devstack change back to stable/pike and pulled it into the test change above for Nova, hopefully working around these failures: Update to using pike cloud-archive https://review.openstack.org/#/c/536798/ tempest-dsvm-multinode-live-migration is also failing but AFAICT they are unrelated to this overall series and appear to be more generic volume backed live migration failures. Thanks again! Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 signature.asc Description: PGP signature __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova] Native QEMU LUKS decryption review overview ahead of FF
Hello, With M3 and FF rapidly approaching this week I wanted to post a brief overview of the QEMU native LUKS series. The full series is available on the following topic, I'll go into more detail on each of the changes below: https://review.openstack.org/#/q/topic:bp/libvirt-qemu-native-luks+status:open libvirt: Collocate encryptor and volume driver calls https://review.openstack.org/#/c/460243/ (Missing final +2 and +W) This refactor of the Libvirt driver connect and disconnect volume code has the added benefit of also correcting a number of bugs around the attaching and detaching of os-brick encryptors. IMHO this would be useful in Queens even if the rest of the series doesn't land. libvirt: Introduce disk encryption config classes https://review.openstack.org/#/c/464008/ (Missing final +2 and +W) This is the most straight forward change of the series and simply introduces the required config classes to wire up native LUKS decryption within the domain XML of an instance. Hopefully nothing controversial. libvirt: QEMU native LUKS decryption for encrypted volumes https://review.openstack.org/#/c/523958/ (Missing both +2s and +W) This change carries the bulk of the implementation, wiring up encrypted volumes during their initial attachment. The commit message has a detailed run down of the various upgrade and LM corner cases we attempt to handle here, such as LM from a P to Q compute, detaching a P attached encrypted volume after upgrading to Q etc. 
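For context, the config-class change above amounts to rendering libvirt's <encryption> disk sub-element for a LUKS volume. A rough sketch of the shape of that XML (the function name is illustrative and this is not the actual nova config class; only the element layout follows libvirt's documented schema):

```python
# Rough sketch of the libvirt <encryption> element used for QEMU native
# LUKS decryption: the disk references a libvirt secret holding the
# passphrase. Illustrative only -- not nova's LibvirtConfig* classes.

def render_luks_encryption_xml(secret_uuid):
    return ("<encryption format='luks'>"
            "<secret type='passphrase' uuid='%s'/>"
            "</encryption>" % secret_uuid)
```

With this in the domain XML, QEMU decrypts the volume natively instead of nova attaching an os-brick encryptor on the host.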
Upgrade and LM testing is enabled by the following changes:

fixed_key: Use a single hardcoded key across devstack deployments
https://review.openstack.org/#/c/536343/

compute: Introduce an encrypted volume LM test
https://review.openstack.org/#/c/536177/

This is being tested by tempest-dsvm-multinode-live-migration and grenade-dsvm-neutron-multinode-live-migration in the following DNM Nova change, enabling volume-backed LM tests:

DNM: Test LM with encrypted volumes
https://review.openstack.org/#/c/536350/

Hopefully that covers everything but please feel free to ping if you would like more detail, background etc.

Thanks in advance,

Lee
Re: [openstack-dev] [nova][stable] What nova needs to get to newton end of life
On 14-12-17 09:15:18, Matt Riedemann wrote:
> I'm not sure how many other projects still have an active stable/newton
> branch, but I know nova is one of them.
>
> At this point, these are I think the things that need to get done to end
> of life the newton branch for nova:
>
> 1. We have a set of existing stable/newton backports that need to get
> merged:
>
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton
>
> 3 of those are related to a CVE errata, and the other is an API
> regression introduced in Newton (trivial low-risk fix).
>
> Those can't merge until the corresponding Ocata backports are merged
> first. I'll start pinging people for reviews on the Ocata backports.

The Ocata changes have merged and the remaining Newton changes are approved. I'll keep an eye on these during the day to ensure they land.

> 2. Fix and backport https://bugs.launchpad.net/nova/+bug/1738094
>
> This came up just yesterday but it's an upgrade impact introduced in
> Newton, so while we have the branch available I think we should get a fix
> there before EOL. There are going to be at least two fixes for this bug:
>
> a) Don't store all of the instance group (members and policies) in the
> request_specs table. I think this is a correct fix but I also think,
> because of how instance group and request spec code tends to surprise you
> with funny bugs in funny ways, it's high risk to backport this to newton.
> Dan has a patch started though: https://review.openstack.org/#/c/527799/3

This merged into master so I went ahead and posted the stable backports:

https://review.openstack.org/#/q/topic:bug/1738094+(status:open+OR+status:merged)

> b) Alter the request_specs.spec column from TEXT to MEDIUMTEXT, just like
> the build_requests.instance column was increased for similar reasons
> (instance.user_data alone is a MEDIUMTEXT column). This is a
> straightforward schema migration and I think is low risk to backport all
> the way to Newton.
FWIW this is the master change - https://review.openstack.org/#/c/528012/

Cheers,

Lee
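To see why the (b) migration above matters: MySQL's TEXT type caps out at 2^16-1 bytes while MEDIUMTEXT allows 2^24-1, and a request spec that inlines every member of a large server group can blow past the former. A back-of-the-envelope sketch (the spec shape below is invented for illustration, not Nova's actual schema):

```python
import json

# MySQL column capacities in bytes.
TEXT_MAX = 2**16 - 1         # 65,535
MEDIUMTEXT_MAX = 2**24 - 1   # 16,777,215

# Invented stand-in for a serialised request spec that inlines all the
# members of a large server group.
spec = {"instance_group": {"members": ["instance-%05d" % n for n in range(4000)]}}
blob = json.dumps(spec)

print(len(blob) > TEXT_MAX)         # True: the blob no longer fits in TEXT
print(len(blob) <= MEDIUMTEXT_MAX)  # True: it easily fits in MEDIUMTEXT
```

Hence widening the column is a mechanical, low-risk fix, while (a) changes what gets stored in the first place.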
[openstack-dev] [tripleo][ffu] Fast-forward upgrades M2 progress report
Hello all,

This is a brief progress report from the Upgrades squad for the fast-forward upgrades (FFU) feature in TripleO, covering N to Q upgrades.

tl;dr Good initial progress; missed the M2 goal of non-voting CI jobs; pushing on to M3.

Overview

For anyone unfamiliar with the concept of fast-forward upgrades, the following sentence from the spec gives a brief high-level introduction:

> Fast-forward upgrades are upgrades that move an environment from release `N`
> to `N+X` in a single step, where `X` is greater than `1` and for fast-forward
> upgrades is typically `3`.

The spec itself obviously goes into more detail, and I'd recommend that anyone wanting to know more about our approach to FFU in TripleO start there:

https://specs.openstack.org/openstack/tripleo-specs/specs/queens/fast-forward-upgrades.html

Note that the spec is being updated at present by the following change, introducing more details on the FFU task layout, ordering, the dependency on the ongoing major upgrade rework in Q, canary compute validation etc:

WIP ffu: Spec update for M2
https://review.openstack.org/#/c/526353/

M2 Status

The original goal for Queens M2 was to have one or more non-voting FFU jobs deployed *somewhere*, able to run through the basic undercloud and overcloud upgrade workflows, exercising as many compute service dependencies as we could up to and including Nova.
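As a toy illustration of the `N` to `N+X` definition quoted above (release names only; nothing here reflects the actual TripleO task layout or ordering):

```python
# Sketch of the release arithmetic behind fast-forward upgrades (FFU);
# purely illustrative, not TripleO code.
RELEASES = ["newton", "ocata", "pike", "queens"]

def linear_steps(start, target):
    """Conventional upgrades: one N -> N+1 hop per intermediate release."""
    i, j = RELEASES.index(start), RELEASES.index(target)
    return [(RELEASES[k], RELEASES[k + 1]) for k in range(i, j)]

def fast_forward_steps(start, target):
    """FFU: a single N -> N+X hop, where X is typically 3."""
    return [(start, target)]

print(linear_steps("newton", "queens"))        # three separate hops
print(fast_forward_steps("newton", "queens"))  # one hop
```

The whole point of the feature is collapsing the first list into the second while still running the intermediate data migrations under the hood.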
Unfortunately, while Sofer has made some great progress with this, we do not have any running FFU jobs at present:

http://lists.openstack.org/pipermail/openstack-dev/2017-December/125316.html

We do however have documented demos that cover FFU for some limited overcloud environments from Newton to Queens:

OpenStack TripleO FFU Keystone Demo N to Q
https://blog.yarwood.me.uk/2017/11/16/openstack_fastforward_tripleo_keystone/

OpenStack TripleO FFU Nova Demo N to Q
https://blog.yarwood.me.uk/2017/12/01/openstack_fastforward_tripleo_nova/

These demos currently use a stack of changes against THT, with the first ~4 or so changes introducing the FFU framework:

https://review.openstack.org/#/q/status:open+project:openstack/tripleo-heat-templates+branch:master+topic:bp/fast-forward-upgrades

FWIW getting these initial changes merged would help avoid the current change storm every time this series is rebased to pick up upgrade or deploy related bug fixes.

Also note that the demos currently use the raw Ansible playbook stack outputs to run through the FFU tasks, upgrade tasks and deploy tasks. This is by no means what the final UX will be, with python-tripleoclient and workflow work to be completed ahead of M3.

M3 Goals

The squad will be focusing on the following goals for M3:

- Non-voting RDO CI jobs defined and running
- FFU THT changes tested by the above jobs and merged
- python-tripleoclient & required Mistral workflows merged
- Use of ceph-ansible for Ceph upgrades
- Draft developer and user docs under review

FFU squad

Finally, a quick note to highlight that this report marks the end of my own personal involvement with the FFU feature in TripleO. I'm not going far, returning to work on Nova, and am happy to make time to talk about and review FFU related changes etc.
The members of the upgrade squad taking this forward, and your main points of contact for FFU in TripleO, will be:

- Sofer (chem)
- Lukas (social)
- Marios (marios)

My thanks again to Sofer, Lukas, Marios, the rest of the upgrade squad and the wider TripleO community for your guidance and patience when putting up with my constant inane questioning regarding FFU over the past few months!

Cheers,

Lee
Re: [openstack-dev] [tripleo][quickstart] Trying to create a release config for a Master UC and Newton OC
On 10-11-17 09:25:14, Lee Yarwood wrote:
> On 01-11-17 18:50:23, Lee Yarwood wrote:
> > Hello all,
> >
> > I'm attempting to save future contributors to the fast forward upgrades
> > feature some time by introducing a quickstart release config that
> > deploys a Master UC and Newton OC:
> >
> > config: Provide a Master UC and Newton OC release config
> > https://review.openstack.org/#/c/511464/
> >
> > [snip]
> >
> > The weird thing is that the repo-setup role doesn't appear to run at
> > all with the above config. Something is obviously changing the repos
> > and running `yum update -y` prior to the overcloud instances being
> > provisioned, but I can't seem to track it down. Any suggestions would
> > be really appreciated!
>
> I've not made any progress on this since my original post. I still can't
> understand why the overcloud-full.qcow2 image uploaded to the undercloud
> and then used by the overcloud has been modified. I've attempted to use
> inject_images as suggested on #tripleo without any success.
>
> If anyone does have any insight into this then please let me know;
> deploying a master UC with Newton OC will save so much time for anyone
> looking to work on FFU tasks for Queens!

Thanks to Yolanda for pointing out that you also need to set `use_ext
Re: [openstack-dev] [tripleo][quickstart] Trying to create a release config for a Master UC and Newton OC
On 01-11-17 18:50:23, Lee Yarwood wrote:
> Hello all,
>
> I'm attempting to save future contributors to the fast forward upgrades
> feature some time by introducing a quickstart release config that deploys
> a Master UC and Newton OC:
>
> config: Provide a Master UC and Newton OC release config
> https://review.openstack.org/#/c/511464/
>
> [snip]
>
> The weird thing is that the repo-setup role doesn't appear to run at all
> with the above config. Something is obviously changing the repos and
> running `yum update -y` prior to the overcloud instances being
> provisioned, but I can't seem to track it down. Any suggestions would be
> really appreciated!

I've not made any progress on this since my original post. I still can't understand why the overcloud-full.qcow2 image uploaded to the undercloud and then used by the overcloud has been modified. I've attempted to use inject_images as suggested on #tripleo without any success.

If anyone does have any insight into this then please let me know; deploying a master UC with Newton OC will save so much time for anyone looking to work on FFU tasks for Queens!

Thanks again in advance,

Lee
[openstack-dev] [tripleo][quickstart] Trying to create a release config for a Master UC and Newton OC
Hello all,

I'm attempting to save future contributors to the fast forward upgrades feature some time by introducing a quickstart release config that deploys a Master UC and Newton OC:

config: Provide a Master UC and Newton OC release config
https://review.openstack.org/#/c/511464/

This initial attempt did appear to work when I created the review some weeks ago but now results in a Pike OC. I'm now trying to avoid this by adding repos, repo_cmd_before and repo_cmd_after as in other release configs, without any luck:

$ cat $WD/config/release/master-undercloud-newton-overcloud.yml
release: master
overcloud_release: newton
undercloud_image_url: https://images.rdoproject.org/master/delorean/current-tripleo/stable/undercloud.qcow2
ipa_image_url: https://images.rdoproject.org/master/delorean/current-tripleo/stable/ironic-python-agent.tar
overcloud_image_url: https://images.rdoproject.org/newton/delorean/consistent/stable/overcloud-full.tar
images:
  - name: undercloud
    url: "{{ undercloud_image_url }}"
    type: qcow2
  - name: overcloud-full
    url: "{{ overcloud_image_url }}"
    type: tar
  - name: ipa_images
    url: "{{ ipa_image_url }}"
    type: tar

repos:
  - type: file
    filename: delorean.repo
    down_url: https://trunk.rdoproject.org/centos7-newton/current/delorean.repo

  - type: file
    filename: delorean-deps.repo
    down_url: http://trunk.rdoproject.org/centos7-newton/delorean-deps.repo

repo_cmd_before: |
  sudo yum clean all;
  sudo yum-config-manager --disable "*"
  sudo rm -rf /etc/yum.repos.d/delorean*;
  sudo rm -rf /etc/yum.repos.d/*.rpmsave;

repo_cmd_after: |
  sudo yum repolist;
  sudo yum update -y

This still results in a Pike OC, even though the original overcloud-full image on the virthost uses the Newton repos:

$ bash quickstart.sh -w $WD \
    -t all \
    -c config/general_config/minimal-keystone-only.yml \
    -R master-undercloud-newton-overcloud \
    -N config/nodes/1ctlr_keystone.yml $VIRTHOST
[..]
$ ssh -F $WD/ssh.config.ansible virthost
[..]
$ virt-cat -a overcloud-full.qcow2 /etc/yum.repos.d/delorean.repo
[delorean]
name=delorean-instack-undercloud-61e201bd3cf65e931cc865a1018cf9441e50dab8
baseurl=https://trunk.rdoproject.org/centos7-newton/61/e2/61e201bd3cf65e931cc865a1018cf9441e50dab8_be559bb4
enabled=1
gpgcheck=0

$ ssh -F $WD/ssh.config.ansible undercloud
[..]
$ virt-cat -a overcloud-full.qcow2 /etc/yum.repos.d/delorean.repo
[delorean]
name=delorean
baseurl=https://trunk.rdoproject.org/centos7-master/f4/42/f442a3aa35981c3d6d7e312599dde2a1b1d202c9_0468cca4/
gpgcheck=0
enabled=1
priority=20

$ ssh -F $WD/ssh.config.ansible overcloud-controller-0
[..]
$ cat /etc/yum.repos.d/delorean.repo
[delorean]
name=delorean
baseurl=https://trunk.rdoproject.org/centos7-master/f4/42/f442a3aa35981c3d6d7e312599dde2a1b1d202c9_0468cca4/
gpgcheck=0
enabled=1
priority=20
$ grep keystone /var/log/yum.log
$

The weird thing is that the repo-setup role doesn't appear to run at all with the above config. Something is obviously changing the repos and running `yum update -y` prior to the overcloud instances being provisioned, but I can't seem to track it down. Any suggestions would be really appreciated!

Thanks in advance,

Lee
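For anyone reproducing this, the tell-tale difference between the virthost and undercloud copies of the image is which trunk branch the delorean.repo baseurl points at. A quick hypothetical helper (not part of quickstart; the repo texts below are trimmed copies of the output above) that extracts the branch:

```python
import configparser

def trunk_branch(repo_text):
    """Return the RDO trunk branch a delorean.repo baseurl points at,
    e.g. 'newton' or 'master', based on the centos7-<branch> path part."""
    cfg = configparser.ConfigParser()
    cfg.read_string(repo_text)
    baseurl = cfg["delorean"]["baseurl"]
    for part in baseurl.split("/"):
        if part.startswith("centos7-"):
            return part[len("centos7-"):]
    return None

# Trimmed copies of the two repo files shown in the session above.
virthost_repo = """\
[delorean]
name=delorean-instack-undercloud-61e201bd
baseurl=https://trunk.rdoproject.org/centos7-newton/61/e2/61e201bd_be559bb4
enabled=1
gpgcheck=0
"""

undercloud_repo = """\
[delorean]
name=delorean
baseurl=https://trunk.rdoproject.org/centos7-master/f4/42/f442a3aa_0468cca4/
gpgcheck=0
enabled=1
priority=20
"""

print(trunk_branch(virthost_repo))    # newton
print(trunk_branch(undercloud_repo))  # master
```

Running this against the two images makes it obvious the repo file was rewritten to master somewhere between image download and overcloud provisioning.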
Re: [openstack-dev] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens
On 05-10-17 09:57:26, Lee Yarwood wrote:
> On 05-10-17 09:33:29, Steven Hardy wrote:
>> This sounds reasonable to me, but note another option for testing
>> fast-forward overcloud upgrades would be to deploy a trunk/pike
>> undercloud, then use it to deploy a newton overcloud (use the newton
>> overcloud-full image and tripleo-heat-templates).
>>
>> We already do a mixed version deploy like this in the upgrade CI jobs,
>> although those are only deploying the N-1 release not N-3, but I think
>> in theory it should work the same.
>
> Thanks Steven, great point. I wanted to run through the full upgrade
> somewhere in our RDO CI, but for devs working on this, deploying a Newton
> overcloud from a master undercloud would be the quickest way to get up
> and running.
>
> I've just had a look through the current upgrade jobs and have copied
> their approach of using config/release/*.yml to control this. I'll
> report back if/when I've managed to successfully deploy this.

Apologies, I forgot to follow up here. I've submitted the following tripleo-quickstart change to provide a very basic master undercloud, newton overcloud release config:

config: Provide a Master UC and Newton OC release config
https://review.openstack.org/511464

I'm working on another change to have tripleo-quickstart snapshot this initial deployment now, hopefully further reducing the pain of working on fast-forward upgrades during Queens.

Thanks again,

Lee
Re: [openstack-dev] [rdo-list] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens
On 05-10-17 10:48:35, Javier Pena wrote:
> > Adding rdo-list in an attempt to get more feedback regarding this
> > proposal, tl;dr can we ship python-virtualbmc in Newton?
>
> Given the background, I think it's reasonable to add it to Newton, even
> though it is close to EOL.
>
> Could you open a review to rdoinfo and add the required tag after
> https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml#L4736-L4737 ?
> We could iron out any details in the review.

Thanks Javier, I've created the following review for this:

https://review.rdoproject.org/r/9981 Add python-virtualbmc to Newton

Thanks!

Lee
Re: [openstack-dev] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens
On 05-10-17 09:33:29, Steven Hardy wrote:
> On Wed, Oct 4, 2017 at 1:08 PM, Lee Yarwood <lyarw...@redhat.com> wrote:
> > Hello all,
> >
> > I'm currently working to get the tripleo-spec for fast-forward upgrades
> > out of WIP and merged ahead of the Queens M-1 milestone next week.
> >
> > [snip]
> >
> > Would anyone be able to confirm *if* we could ship python-virtualbmc in
> > the Newton relevant repos?
>
> This sounds reasonable to me, but note another option for testing
> fast-forward overcloud upgrades would be to deploy a trunk/pike
> undercloud, then use it to deploy a newton overcloud (use the newton
> overcloud-full image and tripleo-heat-templates).
>
> We already do a mixed version deploy like this in the upgrade CI jobs,
> although those are only deploying the N-1 release not N-3, but I think
> in theory it should work the same.

Thanks Steven, great point. I wanted to run through the full upgrade somewhere in our RDO CI, but for devs working on this, deploying a Newton overcloud from a master undercloud would be the quickest way to get up and running.

I've just had a look through the current upgrade jobs and have copied their approach of using config/release/*.yml to control this. I'll report back if/when I've managed to successfully deploy this.

Cheers,

Lee
Re: [openstack-dev] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens
Adding rdo-list in an attempt to get more feedback regarding this proposal, tl;dr can we ship python-virtualbmc in Newton?

On 04-10-17 12:08:25, Lee Yarwood wrote:
> Hello all,
>
> I'm currently working to get the tripleo-spec for fast-forward upgrades
> out of WIP and merged ahead of the Queens M-1 milestone next week.
>
> [snip]
>
> Would anyone be able to confirm *if* we could ship python-virtualbmc in
> the Newton relevant repos?
>
> Thanks in advance,
>
> Lee
Re: [openstack-dev] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens
On 04-10-17 12:17:30, Dmitry Tantsur wrote:
> Hi!
>
> The only issue I can think of is the python-pyghmi version. I think the
> one in Newton is too old and has to be bumped to at least the one in
> Ocata. But if you say you've deployed successfully, it was probably
> already bumped for some reason.

Yeah, we appear to be shipping python-pyghmi-1.0.12-1.el7.noarch in newton-testing and that meets the requirements of python-virtualbmc, allowing me to install it directly from the Ocata repos.

Lee
[openstack-dev] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens
Hello all,

I'm currently working to get the tripleo-spec for fast-forward upgrades out of WIP and merged ahead of the Queens M-1 milestone next week. One of the documented pre-requisite steps for fast-forward upgrades is for an operator to linearly upgrade the undercloud from Newton (N) to Queens (N+3):

https://review.openstack.org/#/c/497257/

This is not possible at present with tripleo-quickstart deployed virtual environments thanks to our use of the pxe_ssh Ironic driver in Newton, which has now been removed in Pike:

https://docs.openstack.org/releasenotes/ironic/pike.html#id14

I briefly looked into migrating between pxe_ssh and the new default of vbmc during the Ocata to Pike undercloud upgrade, but I'd much rather just deploy Newton using vbmc. AFAICT the only issue here is packaging, with the python-virtualbmc package not present in the Newton repos.

With that in mind I've submitted the following changes that remove the various conditionals in tripleo-quickstart that block the use of vbmc in Newton, and verified that this works by using the Ocata python-virtualbmc package:

https://review.openstack.org/#/q/topic:allow_vbmc_newton+(status:open+OR+status:merged)

FWIW I can deploy successfully on Newton with these changes and then upgrade the undercloud to Pike just fine.

Would anyone be able to confirm *if* we could ship python-virtualbmc in the Newton relevant repos?

Thanks in advance,

Lee
Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary
On 29-09-17 11:40:21, Saverio Proto wrote:
> Hello,
>
> sorry I could not make it to the PTG.
>
> I have an idea that I want to share with the community. I hope this is a
> good place to start the discussion.
>
> After years of OpenStack operations, upgrading releases from Icehouse to
> Newton, the feeling is that the control plane upgrade is doable.
>
> But it is also a lot of pain to upgrade all the compute nodes. This
> really causes downtime to the VMs that are running. I can't always make
> live migrations, sometimes the VMs are just too big or too busy.
>
> It would be nice to guarantee the ability to run an updated control
> plane with compute nodes up to the N-3 release.
>
> This way even if we have to upgrade the control plane every 6 months, we
> can keep a longer lifetime for compute nodes. Basically we can never
> upgrade them until we decommission the hardware.
>
> If there are new features that require updated compute nodes, we can
> always organize our datacenter in availability zones, not scheduling new
> VMs to those compute nodes.
>
> To my understanding this means having compatibility at least for the
> nova-compute agent and the neutron-agents running on the compute node.
>
> Is it a very bad idea?
>
> Do other people feel like me that upgrading all the compute nodes is
> also a big part of the burden regarding the upgrade?

Yeah, I don't think the Nova community would ever be able or willing to verify and maintain that level of backward compatibility.

Ultimately there's nothing stopping you from upgrading Nova on the computes while also keeping instances running. You only run into issues with kernel, OVS and QEMU (for n-cpu with libvirt) etc upgrades that require reboots or instances to be restarted (either hard or via live migration). If you're unable or just unwilling to take downtime for instances that can't be moved when these components require an update, then you have bigger problems IMHO.
Regards, Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary
On 21-09-17 15:10:52, Thierry Carrez wrote: > Sean Dague wrote: > > Agreed. We're already at 5 upgrade tags now? > > > > I think honestly we're going to need a picture to explain the > > differences between them. Based on the confusion that kept seeming to > > come during discussions at the PTG, I think we need to circle around and > > figure out if there are different ways to explain this to have greater > > clarity. > > In the TC/SWG room we reviewed the tags, and someone suggested that any > tag that doesn't even have one project to apply it to should probably be > removed. > > That would get us rid of 3 of them: supports-accessible-upgrade, > supports-zero-downtime-upgrade, and supports-zero-impact-upgrade (+ > supports-api-interoperability which has had little support so far). > > They can always be resurrected when a project reaches new heights? I've added some brief comments to the following change looking to remove the `supports-accessible-upgrade` tag: Remove assert:supports-accessible-upgrade tag https://review.openstack.org/#/c/506263/ Grenade already verifies that some resources are accessible once services are offline at the start of an upgrade[1][2] for a number of projects such as nova[3] and cinder[4]. I think that's enough to keep the tag around and to also associate any such project with this tag. 
[1] https://github.com/openstack-dev/grenade#basic-flow [2] https://github.com/openstack-dev/grenade/blob/03de9e0fc7f4fc50a00db5d547413e26cf0780dd/grenade.sh#L315-L317 [3] https://github.com/openstack-dev/grenade/blob/master/projects/60_nova/resources.sh#L134-L137 [4] https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L230-L243 Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary
On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote: > Lee, > I can chair meeting in Sydney. > Thanks, > Arkady Thanks Arkady! FYI I see that emccormickva has created the following Forum session to discuss FF upgrades: http://forumtopics.openstack.org/cfp/details/19 You might want to reach out to him to help craft the agenda for the session based on our discussions in Denver. Thanks again, Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary
an Ops lead for the documentation effort I failed to take down the names of some of the operators who were talking this through at the time. If they or anyone else is still interested in helping here please let me know! - Find or create a relevant SIG for this effort As discussed above this could be as part of the lifecycle SIG or an independent upgrades SIG. Expect a separate mail to the SIG list regarding this shortly. - Identify a room chair for Sydney Unfortunately I will not be present in Sydney to lead a similar session. If anyone is interested in helping please feel free to respond here or reach out to me directly! My thanks again to everyone who attended the track, I had a blast leading the room and hope that the attendees found both the track and some of the outcomes listed above useful. Cheers, Lee [1] https://twitter.com/lyarwood_/status/907310970229415937 [2] https://review.openstack.org/#/q/topic:ironic-offline-migration [3] https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html [4] https://governance.openstack.org/tc/reference/tags/assert_supports-accessible-upgrade.html [5] https://github.com/NguyenHoaiNam/Jump-Over-Release/blob/test_dynamic_section/README.md -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [skip-level-upgrades][upgrades] Denver PTG room & etherpad
On 21-08-17 15:56:53, Lee Yarwood wrote: > Hello all, > > This is a brief announcement to highlight that there will be a skip > level upgrades room again at the PTG in Denver. I'll be chairing the > room and have seeded the etherpad below with a few goal and topic ideas. > I'd really welcome additional input from others, especially if you were > present at the previous discussions in Boston! > > https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades As suggested on the pad I've associated each topic with a timeslot leaving as much time as I could for people to attend other tracks over the two days. At present we have the following: Monday 10:00 - 10:30 - Retrospective of what was discussed in Boston 10:30 - 11:00 - Have operator requirements changed since Boston? 14:00 - 16:00 - What efforts (if any) are underway to enable skip level upgrades within the community? Tuesday 10:30 - 11:00 - NFV considerations 11:00 - 11:30 - API versions control 14:00 - 16:00 - How can we collaborate and share tools for skip level upgrades within the community? 16:00 - 18:00 - Should we think about a different way of releasing? Can I ask anyone who had previously added a topic to the pad to revisit and add additional details to these suggestions prior to the PTG. Obviously this is all subject to change and so I'd be happy to see more suggestions from community before Monday! Thanks again, Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [skip-level-upgrades][upgrades] Denver PTG room & etherpad
Hello all, This is a brief announcement to highlight that there will be a skip level upgrades room again at the PTG in Denver. I'll be chairing the room and have seeded the etherpad below with a few goal and topic ideas. I'd really welcome additional input from others, especially if you were present at the previous discussions in Boston! https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades Thanks in advance and see you in Denver! Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?
On 31-05-17 20:06:01, Farr, Kaitlin M. wrote: >> IMHO for now we are better off storing a secret passphrase in Barbican >> for use with these encrypted volumes, would there be any objections to >> this? Are there actual plans to use a symmetric key stored in Barbican >> to directly encrypt and decrypt volumes? > > It sounds like you're thinking that using a key manager object with the > type > "passphrase" is closer to how the encryptors are using the bytes than using > the > "symmetric key" type, but if you switch over to using passphrases, > where are you going to generate the random bytes? Would you prefer the > user to input their own passphrase? The benefit of continuing to use > symmetric > keys as "passphrases" is that the key manager can randomly generate the bytes. > Key generation is a standard feature of key managers, but password generation > Is not. Thanks for responding Kaitlin, I'd be happy to have the key manager generate a random passphrase of a given length as defined by the volume encryption type. I don't think we would want any user input here as ultimately the encryption is transparent to them. > On a side note, I thought the latest QEMU encryption feature was supposed to > have support for passing in key material directly to the encryptors? Perhaps > this is not true and I am misremembering. That isn't the case, with the native LUKS support in QEMU we can now skip the use of the front-end encryptors entirely. We simply provide the passphrase via a libvirt secret associated with the volume that is then passed to QEMU in a secure fashion [1] to unlock the LUKS volume. [1] https://www.berrange.com/posts/2016/04/01/improving-qemu-security-part-3-securely-passing-in-credentials/ -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
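As a rough illustration of what I have in mind for having the key manager generate the passphrase, something along these lines would do (a Python sketch using only the stdlib secrets module; the function name and alphabet are illustrative, not Castellan's actual API):

```python
import secrets
import string

def generate_passphrase(length=64):
    """Return a randomly generated alphanumeric passphrase.

    Illustrative only - Castellan/Barbican do not expose this helper.
    """
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

# The length would come from the configured volume encryption type.
passphrase = generate_passphrase(64)
```

The point being that password generation doesn't have to be any weaker than key generation provided a CSPRNG such as secrets is used as the source of randomness.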
Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?
On 26-05-17 17:25:15, Duncan Thomas wrote: > On 25 May 2017 12:33 pm, "Lee Yarwood" <lyarw...@redhat.com> wrote: > > On 25-05-17 11:38:44, Duncan Thomas wrote: > > On 25 May 2017 at 11:00, Lee Yarwood <lyarw...@redhat.com> wrote: > > > This has also reminded me that the plain (dm-crypt) format really needs > > > to be deprecated this cycle. I posted to the dev and ops ML [2] last > > > year about this but received no feedback. Assuming there are no last > > > minute objections I'm going to move forward with deprecating this format > > > in os-brick this cycle. > > > > What is the reasoning for this? There are plenty of people using it, and > > you're going to break them going forward if you remove it. > > I didn't receive any feedback indicating that we had any users of plain > when I initially posted to the ML. That said there obviously can be > users out there and my intention isn't to pull support for this format > immediately without any migration path to LUKS etc. > > > Ok, after a few emails, of the users I knew about, one is happy with luks > and the others are no longer running openstack. Apologies for the mis-steer No problem, thanks for replying! Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?
On 25-05-17 11:00:26, Lee Yarwood wrote: > Hello all, > > I'm currently working on enabling QEMU's native LUKS support within Nova > [1]. While testing this work with Barbican I noticed that Cinder is > creating symmetric keys for use with encrypted volumes : > > https://github.com/openstack/cinder/blob/63433278a485b65ae6ed1998e7bc83933ceee167/cinder/volume/flows/api/create_volume.py#L385 > https://github.com/openstack/castellan/blob/64207e303529b7fceb3b8b0f0a65f8f49b3f9b26/castellan/key_manager/barbican_key_manager.py#L206 > > However the only supported disk encryption formats on the front-end at > present are plain (dm-crypt) and LUKS, neither of which use the supplied > key to directly encrypt or decrypt data. Plain derives a fixed length > master key from the provided key / passphrase and LUKS uses PBKDF2 to > derive a key from the key / passphrase that unlocks a separate master > key. > > https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions > - 2.4 What is the difference between "plain" and LUKS format? > > I also can't find any evidence of these keys being used directly on the > backend for any direct encryption of volumes within c-vol. Happy to be > corrected here if there are out-of-tree drivers etc that do this. > > IMHO for now we are better off storing a secret passphrase in Barbican > for use with these encrypted volumes, would there be any objections to > this? Are there actual plans to use a symmetric key stored in Barbican > to directly encrypt and decrypt volumes? 
I've documented this as a cinder bug below, still happy to discuss this here on the ML if anyone from Cinder or Barbican disagrees with my suggestion of passphrases over symmetric keys : Cinder creating and associating symmetric keys with encrypted volumes when used with Barbican https://bugs.launchpad.net/cinder/+bug/1693840 Thanks again, Lee > [1] > https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/libvirt-qemu-native-luks.html > [2] > http://lists.openstack.org/pipermail/openstack-dev/2016-November/106956.html -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?
On 25-05-17 11:38:44, Duncan Thomas wrote: > On 25 May 2017 at 11:00, Lee Yarwood <lyarw...@redhat.com> wrote: > > This has also reminded me that the plain (dm-crypt) format really needs > > to be deprecated this cycle. I posted to the dev and ops ML [2] last > > year about this but received no feedback. Assuming there are no last > > minute objections I'm going to move forward with deprecating this format > > in os-brick this cycle. > > What is the reasoning for this? There are plenty of people using it, and > you're going to break them going forward if you remove it. I didn't receive any feedback indicating that we had any users of plain when I initially posted to the ML. That said there obviously can be users out there and my intention isn't to pull support for this format immediately without any migration path to LUKS etc. As for the reasoning, the main issue I've seen reported against plain is that there's always a potential for data loss if an incorrect passphrase or options are provided when opening the device [1]. There are further reasons for choosing LUKS over plain documented in various places [2][3][4] that all seem to suggest that it is a better and safer choice. Lee [1] https://bugs.launchpad.net/nova/+bug/1639221 [2] https://security.stackexchange.com/questions/90468/why-is-plain-dm-crypt-only-recommended-for-experts [3] https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions [4] https://wiki.archlinux.org/index.php/Disk_encryption#Block_device_encryption -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?
Hello all, I'm currently working on enabling QEMU's native LUKS support within Nova [1]. While testing this work with Barbican I noticed that Cinder is creating symmetric keys for use with encrypted volumes : https://github.com/openstack/cinder/blob/63433278a485b65ae6ed1998e7bc83933ceee167/cinder/volume/flows/api/create_volume.py#L385 https://github.com/openstack/castellan/blob/64207e303529b7fceb3b8b0f0a65f8f49b3f9b26/castellan/key_manager/barbican_key_manager.py#L206 However the only supported disk encryption formats on the front-end at present are plain (dm-crypt) and LUKS, neither of which use the supplied key to directly encrypt or decrypt data. Plain derives a fixed length master key from the provided key / passphrase and LUKS uses PBKDF2 to derive a key from the key / passphrase that unlocks a separate master key. https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions - 2.4 What is the difference between "plain" and LUKS format? I also can't find any evidence of these keys being used directly on the backend for any direct encryption of volumes within c-vol. Happy to be corrected here if there are out-of-tree drivers etc that do this. IMHO for now we are better off storing a secret passphrase in Barbican for use with these encrypted volumes, would there be any objections to this? Are there actual plans to use a symmetric key stored in Barbican to directly encrypt and decrypt volumes? This has also reminded me that the plain (dm-crypt) format really needs to be deprecated this cycle. I posted to the dev and ops ML [2] last year about this but received no feedback. Assuming there are no last minute objections I'm going to move forward with deprecating this format in os-brick this cycle. 
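To make the distinction above concrete: whatever secret the key manager hands over is only ever an input to a key derivation function, never the key that encrypts data. A rough Python sketch of the LUKS-style PBKDF2 step (the salt size, iteration count and derived key length below are purely illustrative, not the actual LUKS header parameters):

```python
import hashlib
import os

# The secret retrieved from the key manager; LUKS treats it purely as
# a passphrase, regardless of whether Barbican stored it as a "key".
passphrase = b"secret-from-key-manager"

# LUKS stores a per-volume salt in its header and derives an unlock
# key from the passphrase via PBKDF2 (parameters illustrative).
salt = os.urandom(32)
unlock_key = hashlib.pbkdf2_hmac('sha256', passphrase, salt, 100000, dklen=32)

# This derived key only decrypts the separate master key held in the
# LUKS header; the master key is what actually encrypts the volume.
```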
Thanks in advance, Lee [1] https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/libvirt-qemu-native-luks.html [2] http://lists.openstack.org/pipermail/openstack-dev/2016-November/106956.html -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] Do we have users of CryptsetupEncryptor and if so why?
On 07-11-16 17:42:02, Lee Yarwood wrote: > Hello all, > > The following bug was recently discovered where encrypted volumes > created prior to Newton use a slightly mangled passphrase : > > The passphrase used to encrypt or decrypt volumes was mangled prior to Newton > https://launchpad.net/bugs/1633518 > > This is currently being resolved for LUKS based volumes in the following > change with the incorrect passphrase being removed and replaced : > > encryptors: Workaround mangled passphrases > https://review.openstack.org/#/c/386670/ > > Unfortunately we can't do the same for volumes using the plain format > provided by the CryptsetupEncryptor class. While the above change does > include a workaround it would be better if we could deprecate this > format and encryptor for new volumes ASAP and move everyone to LUKS etc. > > Before deprecating CryptsetupEncryptor I wanted to ask this list if we > have any active users of this encryptor and if so why is it being used? > Is there a specific use case where plain is better than LUKS and thus > needs to stay around? > > Thanks in advance, > > Lee CC'ing openstack-dev for some additional feedback. -- Lee Yarwood Senior Software Engineer Red Hat PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
On 02-11-16 08:55:08, Carlton, Paul (Cloud Services) wrote: > Lee > > I see this in a multiple node devstack without shared storage, although that > shouldn't be relevant > > I do a live migration of an instance > > I then hard reboot it > > I you are not seeing the same outcome I'll look at this again Apologies if I'm not being clear here Paul but I'm asking if we can't fix the hard reboot issue directly instead of reverting the serial console fix. Given that you actually need the serial console fix to avoid calling connect_volume multiple times on the destination host. Lee > From: Lee Yarwood <lyarw...@redhat.com> > Sent: 02 November 2016 08:17:35 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of > instances with encrypted volumes > > On 01-11-16 15:22:57, Carlton, Paul (Cloud Services) wrote: > > Lee > > > > That change is in my test version or was till I reverted it with > > https://review.openstack.org/#/c/391418, > > > > If you live migrate with the change you mentioned the instance goes to > > error when you try to hard reboot > > Hey Paul, > > I can't see a bug referenced by the revert above, have you looked into > why this is happening and if a full revert is really required? It might > be easier to fix this corner case, leaving the new method of fetching > the domain XML in post_live_migration_at_destination and thus working > around your issue. > > Lee > > > From: Lee Yarwood <lyarw...@redhat.com> > > Sent: 01 November 2016 14:58:58 > > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of > > instances with encrypted volumes > > > > On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote: > > > Daniel > > > > > > Yes, thanks, but the thing is this does not occur with regular volumes! > > > The process seems to be you need to connect the volume then the encryptor. 
> > > In pre migration at the destination I connect the volume and then setup > > > the encryptor and that works fine, but in post migration > > > at destination it rebuilds the instance xml and defines the vm which > > > calls _get_guest_storage_config which does another call to > > > connect_volume. This seems redundant to me, because it is already > > > connected, > > > but it works for normal volumes and if I bypass it for encrypted volumes > > > it just fails with the same error when the same function is > > > called as part of a subsequent hard reboot. > > > > Try rebasing on the following change that reworked > > post_live_migration_at_destination to fetch the domain XML from libvirt > > instead of asking Nova to rebuild it : > > > > libvirt: fix serial console not correctly defined after live-migration > > https://review.openstack.org/#/c/356335/ > > > > I think you've highlighted that this caused issues with hard rebooting > > elsewhere right? > > > > Lee > > > > > From: Daniel P. Berrange <berra...@redhat.com> > > > Sent: 01 November 2016 11:29:51 > > > To: OpenStack Development Mailing List (not for usage questions) > > > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of > > > instances with encrypted volumes > > > > > > On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) > > > wrote: > > > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with > > > > the live migration of > > > > > > > > instances with encrypted volumes. I've submitted a work in progress > > > > version of a patch > > > > > > > > https://review.openstack.org/#/c/389608 but I can't overcome an issue > > > > with an iscsi command > > > > > > > > failure that only occurs for encrypted volumes during the post > > > > migration processing, see > > > > > > > > http://paste.openstack.org/show/587535/ > > > > > > > > > > > > Does anyone have any thoughts on how to proceed with this issue? 
> > > > > > No particular ideas, but I wanted to point out that the scsi_id command > > > shown in that stack trace has a device path that points to the raw > > > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting > > > a failure before you get the encryption part, so encryption might be > > > unrelated.
Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
On 01-11-16 15:22:57, Carlton, Paul (Cloud Services) wrote: > Lee > > That change is in my test version or was till I reverted it with > https://review.openstack.org/#/c/391418, > > If you live migrate with the change you mentioned the instance goes to error > when you try to hard reboot Hey Paul, I can't see a bug referenced by the revert above, have you looked into why this is happening and if a full revert is really required? It might be easier to fix this corner case, leaving the new method of fetching the domain XML in post_live_migration_at_destination and thus working around your issue. Lee > From: Lee Yarwood <lyarw...@redhat.com> > Sent: 01 November 2016 14:58:58 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of > instances with encrypted volumes > > On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote: > > Daniel > > > > Yes, thanks, but the thing is this does not occur with regular volumes! > > The process seems to be you need to connect the volume then the encryptor. > > In pre migration at the destination I connect the volume and then setup the > > encryptor and that works fine, but in post migration > > at destination it rebuilds the instance xml and defines the vm which calls > > _get_guest_storage_config which does another call to > > connect_volume. This seems redundant to me, because it is already > > connected, > > but it works for normal volumes and if I bypass it for encrypted volumes > > it just fails with the same error when the same function is > > called as part of a subsequent hard reboot. 
> > Try rebasing on the following change that reworked > post_live_migration_at_destination to fetch the domain XML from libvirt > instead of asking Nova to rebuild it : > > libvirt: fix serial console not correctly defined after live-migration > https://review.openstack.org/#/c/356335/ > > I think you've highlighted that this caused issues with hard rebooting > elsewhere right? > > Lee > > > From: Daniel P. Berrange <berra...@redhat.com> > > Sent: 01 November 2016 11:29:51 > > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of > > instances with encrypted volumes > > > > On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) > > wrote: > > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with > > > the live migration of > > > > > > instances with encrypted volumes. I've submitted a work in progress > > > version of a patch > > > > > > https://review.openstack.org/#/c/389608 but I can't overcome an issue > > > with an iscsi command > > > > > > failure that only occurs for encrypted volumes during the post migration > > > processing, see > > > > > > http://paste.openstack.org/show/587535/ > > > > > > > > > Does anyone have any thoughts on how to proceed with this issue? > > > > No particular ideas, but I wanted to point out that the scsi_id command > > shown in that stack trace has a device path that points to the raw > > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting > > a failure before you get the encryption part, so encryption might be > > unrelated. -- Lee Yarwood Senior Software Engineer Red Hat PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote: > Daniel > > Yes, thanks, but the thing is this does not occur with regular volumes! > The process seems to be you need to connect the volume then the encryptor. > In pre migration at the destination I connect the volume and then setup the > encryptor and that works fine, but in post migration > at destination it rebuilds the instance xml and defines the vm which calls > _get_guest_storage_config which does another call to > connect_volume. This seems redundant to me, because it is already connected, > but it works for normal volumes and if I bypass it for encrypted volumes > it just fails with the same error when the same function is > called as part of a subsequent hard reboot. Try rebasing on the following change that reworked post_live_migration_at_destination to fetch the domain XML from libvirt instead of asking Nova to rebuild it : libvirt: fix serial console not correctly defined after live-migration https://review.openstack.org/#/c/356335/ I think you've highlighted that this caused issues with hard rebooting elsewhere right? Lee > From: Daniel P. Berrange <berra...@redhat.com> > Sent: 01 November 2016 11:29:51 > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of > instances with encrypted volumes > > On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) > wrote: > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with the > > live migration of > > > > instances with encrypted volumes. I've submitted a work in progress version > > of a patch > > > > https://review.openstack.org/#/c/389608 but I can't overcome an issue with > > an iscsi command > > > > failure that only occurs for encrypted volumes during the post migration > > processing, see > > > > http://paste.openstack.org/show/587535/ > > > > > > Does anyone have any thoughts on how to proceed with this issue? 
> > No particular ideas, but I wanted to point out that the scsi_id command > shown in that stack trace has a device path that points to the raw > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting > a failure before you get the encryption part, so encryption might be > unrelated. -- Lee Yarwood Senior Software Engineer Red Hat PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova][cinder] Addressing mangled LUKS passphrases (bug#1633518)
Hello, I documented bug#1633518 [1] last week in which volumes encrypted prior to Ib563b0ea [2] used a slightly mangled passphrase instead of the original passphrase provided by the configured key manager. My first attempt at resolving this [3] prompted an alternative suggestion from mdbooth of adding the correct passphrase to the LUKS device when we detect the use of a mangled passphrase. I'm slightly wary of this option given the changing of passphrases so I'd really appreciate input from the wider Nova and Cinder groups on your preference for resolving this : 1. Keep the mangled passphrase in place and attempt to use it after getting a permission denied error during luksOpen. 2. Add the correct passphrase and remove the mangled passphrase from the LUKS device with luksChangeKey when we detect the use of the mangled passphrase. 3. An alternative suggestion. FYI, as os-brick has now copied the encryptor classes from Nova into their own tree any fix will be cherry-picked across shortly after landing in Nova. I'm also looking into dropping these classes from Nova for Ocata so we can avoid duplicating effort like this in future. Thanks in advance, Lee [1] https://launchpad.net/bugs/1633518 [2] https://review.openstack.org/#/c/309614/ [3] https://review.openstack.org/#/c/386670/ -- Lee Yarwood Senior Software Engineer Red Hat PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
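For anyone unfamiliar with the bug, the sketch below shows the kind of divergence involved: the same key material encoded two different ways yields two different passphrase strings, both of which cryptsetup happily accepts, which is why we have to detect the mangled form rather than rely on an error. This is only a hypothetical illustration; the exact pre-Newton mangling is described in the bug report [1]:

```python
import binascii

# Hypothetical key material, for illustration only.
key = bytes([0x01, 0xab, 0x0f, 0x5c])

# Zero-padded hex of every byte.
padded = binascii.hexlify(key).decode()    # '01ab0f5c'

# Per-byte hex with leading zeros dropped - a subtly different string
# derived from the same key, silently usable as a LUKS passphrase.
unpadded = ''.join('%x' % b for b in key)  # '1abf5c'
```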
Re: [openstack-dev] [nova] focused review pipeline of bug fix changes?
On 12-07-16 09:59:12, Markus Zoeller wrote: > After closing the old (>18months) bug reports nobody is working on a few > days ago [1], it became clear that the "in progress" reports are the > majority [2]. After asking Gerrit how long it usually takes to get a fix > merged [3], this is the result: > > number of merged bug fixes within the last 365 days: 689 > merged within ~1m : 409 (~59%) > merged within ~2m : 102 (~14%) > merged within ~3m : 57 (~ 8%) > merged > 3month : 121 (~17%) > > Note: This doesn't reflect the time a patch might be marked as > "WIP". It also doesn't add up to 100% as I rounded down the > percentages. > > This made me thinking about ways to increase the review throughput of > bug fix changes, especially the bug fixes in the "~2m" and "~3m" area. I > *assume* that the fixes in the ">3m" area had inherent problems or > waited for basic structural changes, but that's just guesswork. > > The proposal is this: > * have a TBD list with max 10 items on it (see list possibilities below) > * add new items during nova meeting if slots are free > * Change owners must propose their changes as meeting agenda items > * drop change from list in nova meeting if progress is not satisfying > > List possibilities: > 1) etherpad of doom? maintained by (?|me) > + easy to add/remove from everyone > - hard to query > 2) gerrit: starred by (?|me) > + easy to add/remove from the list maintainer > + easy to query > - No additions/removals when the list maintainer is on vacation > 3) gerrit: add a comment flag TOP10BUGFIX and DROPTOP10BUGFIX > + easy to add/remove from everyone > + easy to query (comment:TOP10BUGFIX not comment:DROPTOP10BUGFIX) > - once removed with a comment "DROPTOP10BUGFIX", a repeated > addition is not practical anymore. > 4) gerrit: tag a change > + easy to add/remove from everyone > + easy to query > - not yet available in our setup > > Personally I prefer 3, as it doesn't rely on a single person and the > tooling is ready for that. 
> It could be sufficient until one of the next infra Gerrit updates
> brings us 4. I'd like to avoid 1 and 2.
>
> My hope is that a focused list helps us get a (few) things done faster
> and increases the overall velocity. Is this a feasible proposal from
> your point of view? What concerns do you have?
>
> References:
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-July/098792.html
> [2] http://45.55.105.55:3000/dashboard/db/openstack-bugs
> [3] https://github.com/markuszoeller/openstack/blob/master/scripts/gerrit/bug_fix_histogram.py

Thanks for bringing this up, Markus! IMHO tags against either
in-progress launchpad bugs and/or gerrit reviews would work best here.

Cheers,

Lee
--
Lee Yarwood
Senior Software Engineer
Red Hat
PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
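As a rough illustration of the histogram Markus quotes above (this is a
sketch, not the actual bug_fix_histogram.py script from [3]), the
one-month bucketing of merge times could look something like this; the
sample durations at the bottom are made up:

```python
from collections import Counter


def bucket_merge_times(days_to_merge):
    """Bucket per-change merge durations (in days) into rough
    one-month bins, as in the histogram quoted above."""
    buckets = Counter()
    for days in days_to_merge:
        if days <= 30:
            buckets["~1m"] += 1
        elif days <= 60:
            buckets["~2m"] += 1
        elif days <= 90:
            buckets["~3m"] += 1
        else:
            buckets[">3m"] += 1
    return buckets


# Made-up sample: four changes merged after 10, 45, 80 and 200 days.
print(bucket_merge_times([10, 45, 80, 200]))
```

The real script would feed this from Gerrit query results (e.g. the
createdOn/lastUpdated timestamps of merged changes) rather than a
hand-written list.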
Re: [openstack-dev] [nova] Can not rebuild boot-from-volume instance
On 12-07-16 15:12:24, zehua wrote:
> Hi, all
>
> I booted a new instance from a volume and then attached another volume
> to it, then created an image of the instance whose block_device_mapping
> looks like this:
>
> | block_device_mapping | [{"guest_format": null, "boot_index": 0,
>     "delete_on_termination": false, "no_device": null,
>     "snapshot_id": "61c18329-420d-4765-ab5a-c626d9b1ebcd",
>     "device_name": "/dev/vda", "disk_bus": "virtio", "image_id": null,
>     "source_type": "snapshot", "device_type": "disk",
>     "volume_id": null, "destination_type": "volume", "volume_size": 20},
>    {"guest_format": null, "boot_index": null,
>     "delete_on_termination": false, "no_device": null,
>     "snapshot_id": "438cd325-3fcd-4769-a3e9-c0a9aeaa2437",
>     "device_name": "/dev/vdb", "disk_bus": null, "image_id": null,
>     "source_type": "snapshot", "device_type": null,
>     "volume_id": null, "destination_type": "volume", "volume_size": 10}]
>
> There's no problem when using the snapshot image to boot a new
> instance; that merges the block_device_mapping in the image with the
> one provided manually. But rebuilding an instance from the image
> ignores the block_device_mapping attribute. Should we replace all the
> original volumes with new volumes provided by the image's
> block_device_mapping attribute, matching by device name?

I *think* the following change is attempting to do this, however I'm
still not sure that this is the correct behaviour during a rebuild:

    Replace root volume during rebuild
    https://review.openstack.org/#/c/305079/

Lee
--
Lee Yarwood
Senior Software Engineer
Red Hat
PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
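To make zehua's suggestion concrete, here is a small sketch (purely
illustrative, not nova's actual implementation, and `match_by_device_name`
is a hypothetical helper) of what matching the image's
block_device_mapping entries against an instance's current mappings by
device_name might look like:

```python
def match_by_device_name(instance_bdms, image_bdms):
    """Return (device_name, current_entry, image_entry) triples for
    devices present in both the instance's current block device
    mappings and the image's block_device_mapping.

    In the rebuild scenario described above, each matched current
    volume would be a candidate for replacement by a volume built
    from the image entry's snapshot.
    """
    current = {bdm["device_name"]: bdm for bdm in instance_bdms}
    return [
        (bdm["device_name"], current[bdm["device_name"]], bdm)
        for bdm in image_bdms
        if bdm["device_name"] in current
    ]
```

Unmatched image entries (a device_name the instance no longer has)
would still need a policy decision — attach as new, or ignore — which
is part of why the rebuild semantics are unclear.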
[openstack-dev] [nova] Stable disk device instance rescue reviews for libvirt and possible implementation by other virt drivers
Hello all,

https://review.openstack.org/#/q/topic:bp/virt-rescue-stable-disk-devices

I've been aimlessly pushing my patches around for this spec for a while
now and would really appreciate reviews from the community. Tempest and
devstack patches are also included in the above topic; reviews of these
would again be really appreciated.

I'd also like to ask whether any other virt driver maintainers are
looking to implement this spec for their backends in Newton. The spec
itself is pretty straightforward, but I'd be happy to help if there are
questions or concerns around implementing it outside of libvirt.

Thanks in advance,

Lee
--
Lee Yarwood
Red Hat
PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
[openstack-dev] [nova][stable/liberty] Are release tarballs still provided via Launchpad?
Hello all,

$subject, for example with Kilo:

https://launchpad.net/nova/kilo/2015.1.2/+download/nova-2015.1.2.tar.gz

If not, should we be using http://tarballs.openstack.org/nova/ directly
for these stable releases?

Thanks in advance,

Lee
--
Lee Yarwood
Senior Software Engineer
Red Hat
PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76