Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
Lee

I see this in a multi-node devstack without shared storage, although that shouldn't be relevant. I do a live migration of an instance, then hard reboot it. If you are not seeing the same outcome I'll look at this again.

Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242 Longdown Avenue, Stoke Gifford, Bristol BS34 8QZ
Office: +44 (0) 1173 162189
Mobile: +44 (0)7768 994283
Email: paul.carl...@hpe.com

Hewlett-Packard Enterprise Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN. Registered No: 690597 England. The contents of this message and any attachments to it are confidential and may be legally privileged. If you have received this message in error, you should delete it from your system immediately and advise the sender. To any recipient of this message within HP, unless otherwise stated you should consider this message and attachments as "HP CONFIDENTIAL".

From: Lee Yarwood <lyarw...@redhat.com>
Sent: 02 November 2016 08:17:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

On 01-11-16 15:22:57, Carlton, Paul (Cloud Services) wrote:
> Lee
>
> That change is in my test version, or was till I reverted it with
> https://review.openstack.org/#/c/391418
>
> If you live migrate with the change you mentioned, the instance goes to
> error when you try to hard reboot.

Hey Paul,

I can't see a bug referenced by the revert above; have you looked into why this is happening and whether a full revert is really required? It might be easier to fix this corner case, leaving the new method of fetching the domain XML in post_live_migration_at_destination and thus working around your issue.

Lee

> From: Lee Yarwood <lyarw...@redhat.com>
> Sent: 01 November 2016 14:58:58
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
>
> On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote:
> > Daniel
> >
> > Yes, thanks, but the thing is this does not occur with regular volumes!
> > The process seems to be that you need to connect the volume and then
> > the encryptor. In pre-migration at the destination I connect the volume
> > and then set up the encryptor, and that works fine, but post-migration
> > at the destination rebuilds the instance XML and defines the VM, which
> > calls _get_guest_storage_config, which makes another call to
> > connect_volume. This seems redundant to me, because it is already
> > connected, but it works for normal volumes, and if I bypass it for
> > encrypted volumes it just fails with the same error when the same
> > function is called as part of a subsequent hard reboot.
>
> Try rebasing on the following change, which reworked
> post_live_migration_at_destination to fetch the domain XML from libvirt
> instead of asking Nova to rebuild it:
>
> libvirt: fix serial console not correctly defined after live-migration
> https://review.openstack.org/#/c/356335/
>
> I think you've highlighted that this caused issues with hard rebooting
> elsewhere, right?
>
> Lee
>
> > From: Daniel P. Berrange <berra...@redhat.com>
> > Sent: 01 November 2016 11:29:51
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
> >
> > On Tue, Nov 01, 2016 at 11:22:25AM +0000, Carlton, Paul (Cloud Services) wrote:
> > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033
> > > with the live migration of instances with encrypted volumes. I've
> > > submitted a work-in-progress version of a patch,
> > > https://review.openstack.org/#/c/389608, but I can't overcome an
> > > issue with an iscsi command failure that only occurs for encrypted
> > > volumes during the post-migration processing, see
> > > http://paste.openstack.org/show/587535/
> > >
> > > Does anyone have any thoughts on how to proceed with this issue?
> >
> > No particular ideas, but I wanted to point out that the scsi_id command
> > shown in that stack trace has a device path that points to the raw
> > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting
> > a failure before you get to the encryption part, so encryption might be
> > unrelated.

--
Lee Yarwood
Senior Software Engineer
Red Hat
PGP : A5D1 9385
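The change Lee refers to replaces Nova's regeneration of the guest XML in post_live_migration_at_destination with reading the XML libvirt already holds for the migrated domain. As a rough, hypothetical sketch of why that sidesteps a redundant connect_volume (the helper name and sample XML below are illustrative, not Nova code — in the real driver the XML string would come from libvirt's dom.XMLDesc()):

```python
# Hedged sketch: after a live migration, the destination libvirt already
# has a complete domain definition, so disk config can be read back from
# its XML instead of being regenerated via _get_guest_storage_config.
import xml.etree.ElementTree as ET

def disk_sources(domain_xml):
    """Return the disk source paths already defined in the domain XML."""
    root = ET.fromstring(domain_xml)
    return [src.get('dev') or src.get('file')
            for src in root.findall('./devices/disk/source')]

# Illustrative domain XML for a guest whose encrypted volume is attached
# via a dm-crypt overlay (the path is made up for this example).
SAMPLE_XML = """<domain type='kvm'>
  <devices>
    <disk type='block' device='disk'>
      <source dev='/dev/mapper/crypt-volume-1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>"""

print(disk_sources(SAMPLE_XML))  # ['/dev/mapper/crypt-volume-1']
```

The point is that the destination can trust the definition it already has, rather than re-deriving it (and re-connecting volumes) from Nova's side.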
Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
Lee

That change is in my test version, or was till I reverted it with https://review.openstack.org/#/c/391418

If you live migrate with the change you mentioned, the instance goes to error when you try to hard reboot.

Paul Carlton

From: Lee Yarwood <lyarw...@redhat.com>
Sent: 01 November 2016 14:58:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote:
> Daniel
>
> Yes, thanks, but the thing is this does not occur with regular volumes!
> The process seems to be that you need to connect the volume and then the
> encryptor. In pre-migration at the destination I connect the volume and
> then set up the encryptor, and that works fine, but post-migration at
> the destination rebuilds the instance XML and defines the VM, which
> calls _get_guest_storage_config, which makes another call to
> connect_volume. This seems redundant to me, because it is already
> connected, but it works for normal volumes, and if I bypass it for
> encrypted volumes it just fails with the same error when the same
> function is called as part of a subsequent hard reboot.

Try rebasing on the following change, which reworked post_live_migration_at_destination to fetch the domain XML from libvirt instead of asking Nova to rebuild it:

libvirt: fix serial console not correctly defined after live-migration
https://review.openstack.org/#/c/356335/

I think you've highlighted that this caused issues with hard rebooting elsewhere, right?

Lee

> From: Daniel P. Berrange <berra...@redhat.com>
> Sent: 01 November 2016 11:29:51
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
>
> On Tue, Nov 01, 2016 at 11:22:25AM +0000, Carlton, Paul (Cloud Services) wrote:
> > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with
> > the live migration of instances with encrypted volumes. I've submitted
> > a work-in-progress version of a patch,
> > https://review.openstack.org/#/c/389608, but I can't overcome an issue
> > with an iscsi command failure that only occurs for encrypted volumes
> > during the post-migration processing, see
> > http://paste.openstack.org/show/587535/
> >
> > Does anyone have any thoughts on how to proceed with this issue?
>
> No particular ideas, but I wanted to point out that the scsi_id command
> shown in that stack trace has a device path that points to the raw
> iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting
> a failure before you get to the encryption part, so encryption might be
> unrelated.

--
Lee Yarwood
Senior Software Engineer
Red Hat
PGP : A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
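For readers following the sequence Paul describes in the quoted message, here is a toy sketch of the call ordering across the two migration phases. The function and event names are purely illustrative, not the actual Nova libvirt driver API; the sketch only shows why the second connect_volume is redundant:

```python
# Hypothetical sketch of the attach ordering described above.

def pre_live_migration(connections, events):
    # Destination host: connect the raw volume first, then layer the
    # dm-crypt encryptor on top of it.
    for conn in connections:
        events.append(('connect_volume', conn['id']))
        if conn.get('encrypted'):
            events.append(('attach_encryptor', conn['id']))

def post_live_migration_at_destination(connections, events):
    # Rebuilding the guest XML triggers connect_volume again even though
    # the volume is already connected -- harmless for plain volumes,
    # but reportedly problematic for encrypted ones.
    for conn in connections:
        events.append(('connect_volume', conn['id']))

events = []
conns = [{'id': 'vol-1', 'encrypted': True}]
pre_live_migration(conns, events)
post_live_migration_at_destination(conns, events)
print([e[0] for e in events])
# ['connect_volume', 'attach_encryptor', 'connect_volume']
```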
Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
Daniel

Yes, thanks, but the thing is this does not occur with regular volumes! The process seems to be that you need to connect the volume and then the encryptor. In pre-migration at the destination I connect the volume and then set up the encryptor, and that works fine, but post-migration at the destination rebuilds the instance XML and defines the VM, which calls _get_guest_storage_config, which makes another call to connect_volume. This seems redundant to me, because it is already connected, but it works for normal volumes, and if I bypass it for encrypted volumes it just fails with the same error when the same function is called as part of a subsequent hard reboot.

Paul Carlton

From: Daniel P. Berrange <berra...@redhat.com>
Sent: 01 November 2016 11:29:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

On Tue, Nov 01, 2016 at 11:22:25AM +0000, Carlton, Paul (Cloud Services) wrote:
> I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with
> the live migration of instances with encrypted volumes. I've submitted
> a work-in-progress version of a patch,
> https://review.openstack.org/#/c/389608, but I can't overcome an issue
> with an iscsi command failure that only occurs for encrypted volumes
> during the post-migration processing, see
> http://paste.openstack.org/show/587535/
>
> Does anyone have any thoughts on how to proceed with this issue?

No particular ideas, but I wanted to point out that the scsi_id command shown in that stack trace has a device path that points to the raw iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting a failure before you get to the encryption part, so encryption might be unrelated.

Regards,
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
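Daniel's observation is that the failing scsi_id call was handed the raw iSCSI LUN path rather than the dm-crypt overlay device. A small hypothetical sketch of the distinction (the /dev/mapper naming scheme and paths below are illustrative, not what Nova or os-brick actually produce):

```python
# Illustrative only: an encrypted volume has two device paths -- the raw
# iSCSI LUN underneath, and the dm-crypt overlay the guest should use.
# Low-level tools such as scsi_id operate on the raw LUN, which is why a
# failure there can happen before encryption is involved at all.

def attached_device_path(volume_id, raw_lun_path, encrypted):
    # Hypothetical naming for the dm-crypt overlay device.
    if encrypted:
        return '/dev/mapper/crypt-%s' % volume_id
    return raw_lun_path

RAW = ('/dev/disk/by-path/'
       'ip-192.0.2.1:3260-iscsi-iqn.2016-11.org.example:vol1-lun-1')

print(attached_device_path('vol1', RAW, encrypted=True))
# -> /dev/mapper/crypt-vol1
```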
[openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes
I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with the live migration of instances with encrypted volumes. I've submitted a work-in-progress version of a patch, https://review.openstack.org/#/c/389608, but I can't overcome an issue with an iscsi command failure that only occurs for encrypted volumes during the post-migration processing, see http://paste.openstack.org/show/587535/

Does anyone have any thoughts on how to proceed with this issue?

Thanks

Paul Carlton
[openstack-dev] [nova] Functional tests
Hi

I've inherited a series of changes from a co-worker who has moved on, and have rebased them, but now I'm hitting some issues with functional tests which I can't figure out how to resolve. The changes are https://review.openstack.org/#/c/268053 and https://review.openstack.org/#/c/326899. The former causes an existing related test to fail due to a cinder error, and the latter introduces a new API version, and using this seems to break existing functionality. Any suggestions as to how I might debug these issues?

Thanks

Paul Carlton
[openstack-dev] [nova] Setting up vmware workstation 12 vm to have numa nodes and pci devices
Gary/All

I run devstack environments in VMware Workstation and I'd like to create a VM that has multiple NUMA nodes and PCI devices so I can test nova code that utilizes these features. I've tried playing with the settings documented in the VMware documentation, i.e. adding numa.vcpu.maxPerVirtualNode etc. in the configuration file, without success. I wondered if you had any experience of doing this, or could point me at any information that might help?

Thanks

Paul Carlton
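For anyone trying the same thing, the setting Paul mentions lives in the guest's .vmx configuration file. A hedged example follows — the values are illustrative, virtual-NUMA behaviour varies by Workstation version, and this is essentially the configuration Paul reports as not working, so treat it only as a starting point:

```
# Illustrative .vmx fragment: 8 vCPUs split into two virtual NUMA nodes.
numvcpus = "8"
cpuid.coresPerSocket = "4"
numa.vcpu.maxPerVirtualNode = "4"
```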
[openstack-dev] [nova] Seeking info on PCI Passthrough scheduling
Hi

I'm investigating an issue raised by our QA team and trying to locate the documentation or specifications that describe how instances that use PCI-PT devices get scheduled. I'm trying to understand what the expected behaviour is when scheduling an instance that uses a port associated with a PCI device and a flavor that defines NUMA and CPU requirements etc.

Thanks

Paul Carlton
Re: [openstack-dev] Priority Spec for Libvirt Storage Pools
Matt, could you review https://review.openstack.org/#/c/310505 and https://review.openstack.org/#/c/310538/ please? Hoping to get them approved by the end-of-week deadline.

Thanks

Paul Carlton

____
From: Carlton, Paul (Cloud Services)
Sent: 25 July 2016 08:21:41
To: Matthew Booth
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Priority Spec for Libvirt Storage Pools

Matt

With help from Maxim Nestratov of Virtuozzo I made some progress with the issues relating to my libvirt storage pools spec at the mid-cycle last week. Could you take another look at https://review.openstack.org/#/c/310505/ please? I'd like to get this approved so I can land some changes in Newton.

Thanks

Paul Carlton
[openstack-dev] Priority Spec for Libvirt Storage Pools
Matt

With help from Maxim Nestratov of Virtuozzo I made some progress with the issues relating to my libvirt storage pools spec at the mid-cycle last week. Could you take another look at https://review.openstack.org/#/c/310505/ please? I'd like to get this approved so I can land some changes in Newton.

Thanks

Paul Carlton
Re: [openstack-dev] [keystone] Addressing issue of keystone token expiry during long running operations
Jamie

John Garbutt suggested I follow up this issue with you. I understand you may be leading the effort to address the issue of token expiry during a long-running operation. Nova encounters this scenario during image snapshots and live migrations. Is there a keystone blueprint for this issue?

Thanks

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard