Re: [openstack-dev] KVM Forum 2017: Call For Participation

2017-06-06 Thread Daniel P. Berrange
A quick reminder that the deadline for submissions to the KVM Forum
2017 is just 10 days away now, June 15.

On Tue, May 09, 2017 at 01:50:52PM +0100, Daniel P. Berrange wrote:
> 
> KVM Forum 2017: Call For Participation
> October 25-27, 2017 - Hilton Prague - Prague, Czech Republic
> 
> (All submissions must be received before midnight June 15, 2017)
> =
> 
> KVM Forum is an annual event that presents a rare opportunity
> for developers and users to meet, discuss the state of Linux
> virtualization technology, and plan for the challenges ahead. 
> We invite you to lead part of the discussion by submitting a speaking
> proposal for KVM Forum 2017.
> 
> At this highly technical conference, developers driving innovation
> in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
> meet users who depend on KVM as part of their offerings, or to
> power their data centers and clouds.
> 
> KVM Forum will include sessions on the state of the KVM
> virtualization stack, planning for the future, and many
> opportunities for attendees to collaborate. As we celebrate ten years
> of KVM development in the Linux kernel, KVM continues to be a
> critical part of the FOSS cloud infrastructure.
> 
> This year, KVM Forum is joining Open Source Summit in Prague, 
> Czech Republic. Selected talks from KVM Forum will be presented on
> Wednesday October 25 to the full audience of the Open Source Summit.
> Also, attendees of KVM Forum will have access to all of the talks from
> Open Source Summit on Wednesday.
> 
> http://events.linuxfoundation.org/cfp
> 
> Suggested topics:
> * Scaling, latency optimizations, performance tuning, real-time guests
> * Hardening and security
> * New features
> * Testing
> 
> KVM and the Linux kernel:
> * Nested virtualization
> * Resource management (CPU, I/O, memory) and scheduling
> * VFIO: IOMMU, SR-IOV, virtual GPU, etc.
> * Networking: Open vSwitch, XDP, etc.
> * virtio and vhost
> * Architecture ports and new processor features
> 
> QEMU:
> * Management interfaces: QOM and QMP
> * New devices, new boards, new architectures
> * Graphics, desktop virtualization and virtual GPU
> * New storage features
> * High availability, live migration and fault tolerance
> * Emulation and TCG
> * Firmware: ACPI, UEFI, coreboot, U-Boot, etc.
> 
> Management and infrastructure
> * Managing KVM: Libvirt, OpenStack, oVirt, etc.
> * Storage: Ceph, Gluster, SPDK, etc.
> * Network Function Virtualization: DPDK, OPNFV, OVN, etc.
> * Provisioning
> 
> 
> ===
> SUBMITTING YOUR PROPOSAL
> ===
> Abstracts due: June 15, 2017
> 
> Please submit a short abstract (~150 words) describing your presentation
> proposal. Slots vary in length up to 45 minutes. Also include the proposal
> type -- one of:
> - technical talk
> - end-user talk
> 
> Submit your proposal here:
> http://events.linuxfoundation.org/cfp
> Please only use the categories "presentation" and "panel discussion"
> 
> You will receive a notification whether or not your presentation proposal
> was accepted by August 10, 2017.
> 
> Speakers will receive a complimentary pass for the event. In the case
> that your submission has multiple presenters, only the primary speaker
> for a proposal will receive a complimentary event pass. For panel
> discussions, all panelists will receive a complimentary event pass.
> 
> TECHNICAL TALKS
> 
> A good technical talk should not just report on what has happened over
> the last year; it should present a concrete problem and how it impacts
> the user and/or developer community. Whenever applicable, focus on
> work that needs to be done, difficulties that haven't yet been solved,
> and on decisions that other developers should be aware of. Summarizing
> recent developments is okay but it should not be more than a small
> portion of the overall talk.
> 
> END-USER TALKS
> 
> One of the big challenges we face as developers is knowing what, where and how
> people actually use our software. We will reserve a few slots for end
> users talking about their deployment challenges and achievements.
> 
> If you are using KVM in production you are encouraged to submit a speaking
> proposal. Simply mark it as an end-user talk. As an end user, this is a
> unique opportunity to get your input to developers.
> 
> HANDS-ON / BOF SESSIONS
> 
> We will reserve some time for people to get together and discuss
> strategic decisions as well as other topics that are best solved within
> smaller groups.
> 
> These sessions will be announced during the event.

Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-25 Thread Daniel P. Berrange
On Thu, May 25, 2017 at 11:38:44AM +0100, Duncan Thomas wrote:
> On 25 May 2017 at 11:00, Lee Yarwood  wrote:
> > This has also reminded me that the plain (dm-crypt) format really needs
> > to be deprecated this cycle. I posted to the dev and ops ML [2] last
> > year about this but received no feedback. Assuming there are no last
> > minute objections I'm going to move forward with deprecating this format
> > in os-brick this cycle.
> 
> What is the reasoning for this? There are plenty of people using it, and
> you're going to break them going forward if you remove it.

It has bad security management characteristics because the passphrase is
directly used to create the encryption key. Thus there's no way to update
the passphrase without re-encrypting all data in the device. If your passphrase
is compromised all data is compromised until you can do such re-encryption,
or you have to shred all copies of it, including any backups. If you want
to do the encryption in-place, your VMs have to be taken offline too.
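
To illustrate why that matters, here is a conceptual Python sketch - not the
real dm-crypt or LUKS key derivation, just the shape of the difference: in
plain mode the volume key is a direct function of the passphrase, while LUKS
wraps an independent random master key that can be re-wrapped when the
passphrase changes.

  import hashlib

  # Plain dm-crypt (conceptually): the volume key is derived directly from
  # the passphrase, so rotating the passphrase means re-encrypting the data.
  def plain_volume_key(passphrase: bytes) -> bytes:
      return hashlib.sha256(passphrase).digest()

  # LUKS (conceptually): data is encrypted with a random master key; the
  # passphrase only unlocks a keyslot wrapping that key, so it can be
  # changed without touching the encrypted data. (Toy XOR "wrap" only.)
  def luks_unlock(passphrase: bytes, salt: bytes, wrapped_key: bytes) -> bytes:
      kek = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100000)
      return bytes(a ^ b for a, b in zip(wrapped_key, kek))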

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] KVM Forum 2017: Call For Participation

2017-05-09 Thread Daniel P. Berrange

KVM Forum 2017: Call For Participation
October 25-27, 2017 - Hilton Prague - Prague, Czech Republic

(All submissions must be received before midnight June 15, 2017)
=

KVM Forum is an annual event that presents a rare opportunity
for developers and users to meet, discuss the state of Linux
virtualization technology, and plan for the challenges ahead. 
We invite you to lead part of the discussion by submitting a speaking
proposal for KVM Forum 2017.

At this highly technical conference, developers driving innovation
in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
meet users who depend on KVM as part of their offerings, or to
power their data centers and clouds.

KVM Forum will include sessions on the state of the KVM
virtualization stack, planning for the future, and many
opportunities for attendees to collaborate. As we celebrate ten years
of KVM development in the Linux kernel, KVM continues to be a
critical part of the FOSS cloud infrastructure.

This year, KVM Forum is joining Open Source Summit in Prague, 
Czech Republic. Selected talks from KVM Forum will be presented on
Wednesday October 25 to the full audience of the Open Source Summit.
Also, attendees of KVM Forum will have access to all of the talks from
Open Source Summit on Wednesday.

http://events.linuxfoundation.org/cfp

Suggested topics:
* Scaling, latency optimizations, performance tuning, real-time guests
* Hardening and security
* New features
* Testing

KVM and the Linux kernel:
* Nested virtualization
* Resource management (CPU, I/O, memory) and scheduling
* VFIO: IOMMU, SR-IOV, virtual GPU, etc.
* Networking: Open vSwitch, XDP, etc.
* virtio and vhost
* Architecture ports and new processor features

QEMU:
* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* Graphics, desktop virtualization and virtual GPU
* New storage features
* High availability, live migration and fault tolerance
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, U-Boot, etc.

Management and infrastructure
* Managing KVM: Libvirt, OpenStack, oVirt, etc.
* Storage: Ceph, Gluster, SPDK, etc.
* Network Function Virtualization: DPDK, OPNFV, OVN, etc.
* Provisioning


===
SUBMITTING YOUR PROPOSAL
===
Abstracts due: June 15, 2017

Please submit a short abstract (~150 words) describing your presentation
proposal. Slots vary in length up to 45 minutes. Also include the proposal
type -- one of:
- technical talk
- end-user talk

Submit your proposal here:
http://events.linuxfoundation.org/cfp
Please only use the categories "presentation" and "panel discussion"

You will receive a notification whether or not your presentation proposal
was accepted by August 10, 2017.

Speakers will receive a complimentary pass for the event. In the case
that your submission has multiple presenters, only the primary speaker
for a proposal will receive a complimentary event pass. For panel
discussions, all panelists will receive a complimentary event pass.

TECHNICAL TALKS

A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community. Whenever applicable, focus on
work that needs to be done, difficulties that haven't yet been solved,
and on decisions that other developers should be aware of. Summarizing
recent developments is okay but it should not be more than a small
portion of the overall talk.

END-USER TALKS

One of the big challenges we face as developers is knowing what, where and how
people actually use our software. We will reserve a few slots for end
users talking about their deployment challenges and achievements.

If you are using KVM in production you are encouraged to submit a speaking
proposal. Simply mark it as an end-user talk. As an end user, this is a
unique opportunity to get your input to developers.

HANDS-ON / BOF SESSIONS

We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups.

These sessions will be announced during the event. If you are interested
in organizing such a session, please add it to the list at

  http://www.linux-kvm.org/page/KVM_Forum_2017_BOF

Let people who you think might be interested know about your BOF, and encourage
them to add their names to the wiki page as well. Please try to
add your ideas to the list before KVM Forum starts.


PANEL DISCUSSIONS

If you are proposing a panel discussion, please make sure that you list
all of your potential panelists in your abstract. We will request full
biographies if a panel is accepted.


===
HOTEL / TRAVEL
===

This year's event will take place at the Hilton Prague.
For information on discounted room rates for conference attendees
and on other hotels close to the 

Re: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

2017-05-08 Thread Daniel P. Berrange
On Fri, Apr 28, 2017 at 09:38:38AM +0100, sfinu...@redhat.com wrote:
> On Fri, 2017-04-28 at 13:23 +0900, TETSURO NAKAMURA wrote:
> > Hi Nova team,
> > 
> > I'm writing this e-mail because I'd like to have a discussion about
> > DPDK support at OpenStack Summit in Boston.
> > 
> > We have developed a dpdk-based patch panel named SPP[1], and we'd
> > like to start working on Openstack (ML2 driver) to develop
> > "networking-spp".
> > 
> > Especially, we'd like to use DPDK-ivshmem that was used to be used
> > to create "dpdkr" interface in ovs-dpdk[2].
> 
> To the best of my knowledge, IVSHMEM ports are no longer supported in
> upstream. The documentation for this feature was recently removed from
> OVS [1] stating:
> 
>   - The ivshmem library has been removed in DPDK since DPDK 16.11.
>   - The instructions/scheme provided will not work with current
>     supported and future DPDK versions.
>   - The linked patch needed to enable support in QEMU has never
>     been upstreamed and does not apply to the last 4 QEMU releases.
>   - Userspace vhost has become the defacto OVS-DPDK path to the guest.
> 
> Note: I worked on DPDK vSwitch [2] way back when, and there were severe
> security implications with sharing a chunk of host memory between
> multiple guests (which is how IVSHMEM works). I'm not at all surprised
> the feature was killed.

Security is only one of the issues. Upstream QEMU maintainers considered
the ivshmem device to have a seriously flawed design and discourage anyone
from using it. For anything network related QEMU maintainers strongly
recommend using vhost-user.

IIUC, there is some experimental work to create a virtio based replacement
for ivshmem, for non-network related vm-2-vm communications, but that is
not going to be something usable for a while yet. This however just
reinforces the point that ivshmem is considered obsolete / flawed
technology by QEMU maintainers.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching gate testing to use Ubuntu Cloud Archive

2017-04-04 Thread Daniel P. Berrange
On Tue, Apr 04, 2017 at 09:21:27AM -0700, Clark Boylan wrote:
> On Tue, Apr 4, 2017, at 06:36 AM, Daniel P. Berrange wrote:
> > On Mon, Apr 03, 2017 at 04:06:53PM -0700, Clark Boylan wrote:
> > > I have pushed a change to devstack [3] to enable using UCA which pulls
> > > in new Libvirt and mostly seems to work. I think we should consider
> > > switching to UCA as this may fix our Libvirt problems and if it doesn't,
> > > we will be closer to a version of Libvirt that upstream should be
> > > willing to fix.
> > > 
> > > This isn't the most straightforward switch as UCA has a different repo
> > > for each OpenStack release. libvirt-python is sensitive to the underlying
> > > library changing; it is backward compatible but libvirt-python built
> > > against older libvirt won't work against new libvirt. The result is a
> > > libvirt-python wheel built on our wheel mirror does not work with UCA.
> > 
> > I'm surprised about that - could you elaborate on what's broken for you?
> > The libvirt.so provides a stable public API, and the standalone python
> > binding only uses public APIs from libvirt.so.  IOW you should be able
> > to build libvirt-python against 1.3.0 and then use it against 2.5.0 with
> > no problems.
> > 
> > NB, *before* libvirt-python lived on pypi, it used some non-public
> > libvirt.so symbols, so was tied to the exact libvirt.so it was built
> > against. Assuming you're using the pypi version this should no longer
> > apply.
> 
> The specific issue was "AttributeError: 'module' object has no attribute
> 'VIR_MIGRATE_POSTCOPY'" where module here is libvirt (full log and
> traceback at [0]). The libvirt-python module here was built against
> Libvirt 1.3.1 turned into a wheel and copied into our wheel mirror. Then
> when running against Libvirt 2.5.0 Nova seems to have detected that
> newer features should be present that are not reflected in the compiled
> libvirt-python resulting in the error. This crashed nova compute.
> 
> Problem was easily corrected by preventing devstack from using our wheel
> mirror for libvirt-python which resulted in a new installation built
> against Libvirt 2.5.0.
> 
> It seems like the API is stable enough for backward compatibility but
> not forward compatibility. Its also possible that Nova is doing version
> checking in a buggy way and should be checking what the libvirt-python
> version is and what it supports?

Ok, so yeah your last sentence is the correct interpretation. You've built
libvirt-python against libvirt v1.3.1, so it only includes support for
constants & methods that exist in that version. The VIR_MIGRATE_POSTCOPY
constant was introduced in v1.3.3, so it will not be included in the
libvirt-python you built.

When checking features Nova calls a libvirt API that returns the version
of the libvirtd daemon, which is v2.5.0, and then just blindly assumes
libvirt-python has the same version.

Unfortunately there is no way for Nova to determine what libvirt version
the python binding was built against, so it can't improve its version
check in this respect. To deal with this, Nova would have two options:

 - Provide a nova.conf parameter to force it to assume an older libvirt
   version, thus disabling the features regardless of what libvirtd
   supports
 - Make nova check for existence of the python constants / APIs it is
   trying to use, in addition to checking the libvirt version

The first option is pretty trivial to do if needed. The second option would
be the more correct approach, but a much bigger maint burden, so I'm not
convinced it is worth it.
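
As a rough illustration of the second option (a sketch only, not actual Nova
code), a feature probe combining both checks might look something like this,
using VIR_MIGRATE_POSTCOPY as the example:

  import libvirt

  def host_supports_postcopy(conn):
      # The python binding only defines constants for APIs present in the
      # libvirt it was built against, so probe the module first...
      if not hasattr(libvirt, "VIR_MIGRATE_POSTCOPY"):
          return False
      # ...and still honour the version reported by the libvirtd daemon
      # (encoded as major * 1000000 + minor * 1000 + micro).
      return conn.getLibVersion() >= 1003003   # 1.3.3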

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching gate testing to use Ubuntu Cloud Archive

2017-04-04 Thread Daniel P. Berrange
On Tue, Apr 04, 2017 at 07:16:51AM -0400, Sean Dague wrote:
> This is definitely a trade off, I know the last time we tried UCA we had
> such a high failure rate we had to revert. But, that was a much younger
> libvirt that was only just starting to get heavy testing in OpenStack.
> So it feels like it's worth a shot. It will at least be interesting to
> see if it makes things better.
> 
> The libvirt bump will bring in libvirtd and live migration postcopy for
> testability on the Nova side, both of which would be good things.

NB, you'd need a corresponding QEMU bump too for post-copy, but IIUC the
UCA contains that, so it'd be fine.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching gate testing to use Ubuntu Cloud Archive

2017-04-04 Thread Daniel P. Berrange
On Mon, Apr 03, 2017 at 04:06:53PM -0700, Clark Boylan wrote:
> Hello,
> 
> One of the major sets of issues currently affecting gate testing is
> Libvirt stability. Elastic-recheck is tracking Libvirt crashes for us
> and they happen frequently [0][1][2]. These issues appear to only affect
> Ubuntu Xenial (and not Trusty or CentOS or Fedora) and after talking in
> #openstack-nova it is clear that Libvirt isn't interested in debugging
> such an old version of Libvirt (1.3.1). And while it isn't entirely
> clear to me which exact version would be acceptable to them the Ubuntu
> Cloud Archive (UCA) does publish a much newer Libvirt (2.5.0).

If going to the libvirt upstream community for help, we'd generally want
to see the latest upstream release being used. Ideally along with willingness
to test git master if investigating a troublesome issue, but we understand
using git master is not practical for many people.

If using an old version provided by an OS distro, then we would generally
expect the OS distro maintainers to lead the investigation, and take the
responsibility for reproducing on latest upstream. Upstream libvirt simply
doesn't have bandwidth to do the OS distro maintainers job for them when
using old distro versions.

> I have pushed a change to devstack [3] to enable using UCA which pulls
> in new Libvirt and mostly seems to work. I think we should consider
> switching to UCA as this may fix our Libvirt problems and if it doesn't,
> we will be closer to a version of Libvirt that upstream should be
> willing to fix.
> 
> This isn't the most straightforward switch as UCA has a different repo
> for each OpenStack release. libvirt-python is sensitive to the underlying
> library changing; it is backward compatible but libvirt-python built
> against older libvirt won't work against new libvirt. The result is a
> libvirt-python wheel built on our wheel mirror does not work with UCA.

I'm surprised about that - could you elaborate on what's broken for you?
The libvirt.so provides a stable public API, and the standalone python
binding only uses public APIs from libvirt.so.  IOW you should be able
to build libvirt-python against 1.3.0 and then use it against 2.5.0 with
no problems.

NB, *before* libvirt-python lived on pypi, it used some non-public
libvirt.so symbols, so was tied to the exact libvirt.so it was built
against. Assuming you're using the pypi version this should no longer
apply.

> Now its entirely possible that newer Libvirt will be worse than current
> (old) Libvirt; however, being closer to upstream should make getting
> fixes easier. Would be great if those with a better understanding of
> Libvirt could chime in on this if I am completely wrong here.

As a general rule your expectation is right - newer libvirt should
generally be better. There is always the chance of screwups, but we issue
maint releases where needed - the only question mark would be whether UCA
pulls in any maint releases. I would like to think that if such a problem
happened, openstack would be able to escalate it to a Canonical maintainer
to get a maint release / patch into UCA, since presumably any such bug
would be important to Canonical customers using OpenStack too.

> Finally it is worth noting that we will get newer packages of other
> software as well, most notably openvswitch will be version 2.6.1 instead
> of 2.5.0.

IIUC, you'd get newer QEMU/KVM too, which is arguably just as desirable
as getting newer libvirt.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Core team changes

2017-03-10 Thread Daniel P. Berrange
On Thu, Mar 09, 2017 at 05:14:08PM -0600, Matt Riedemann wrote:
> I wanted to let everyone know of some various core team changes within Nova.
> 
> nova-core
> -
> 
> I've discussed it with and removed Daniel Berrange (danpb) and Michael Still
> (mikal) from the nova-core team. Both are busy working on other projects and
> have been for awhile now, and I wanted to have the list reflect that
> reality. I'm sure both would have a short on-ramp to get back in should the
> situation change.
> 
> nova-specs-core
> ---
> 
> I've also removed Dan and Michael from nova-specs-core for the same reasons.
> 
> I've added Jay Pipes (jaypipes) and Sylvain Bauza (bauzas) to the
> nova-specs-core team. This was probably a long time coming. Both are very
> influential in the project and the direction and priorities from release to
> release.
> 
> nova-stable-maint
> -
> 
> During the PTG I added Sylvain to the nova-stable-maint core team. Sylvain
> knows the rules about the stable branch support phases and has a keen eye
> for what's appropriate and what's not for a backport.
> 
> --
> 
> Thank you to Daniel and Michael for everything they've done for Nova over
> the years and I hope them the best in their current work.  And thank you to
> Jay and Sylvain for the continuing work that you're doing to keep moving
> Nova forward.

FYI, I am also going to remove myself from os-vif core for the same reasons.
There are still seven other os-vif core members who are doing a fine job
at dealing with ongoing work there.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed: Removal of legacy per-project vanity domain redirects

2017-03-08 Thread Daniel P. Berrange
On Wed, Mar 08, 2017 at 09:12:59AM -0600, Monty Taylor wrote:
> Hey all,
> 
> We have a set of old vanity redirect URLs from back when we made a URL
> for each project:
> 
> cinder.openstack.org
> glance.openstack.org
> horizon.openstack.org
> keystone.openstack.org
> nova.openstack.org
> qa.openstack.org
> swift.openstack.org
> 
> They are being served from an old server we'd like to retire. Obviously,
> moving a set of http redirects is trivial, but these domains have been
> deprecated for about 4 now, so we figured we'd clean house if we can.
> 
> We know that the swift team has previously expressed that there are
> links out in the wild pointing to swift.o.o/content that still work and
> that they don't want to break anyone, which is fine. (although if the
> swift team has changed their minds, that's also welcome)
> 
> for the rest of you, can we kill these rather than transfer them?

Does the server have any access log that could provide stats on whether
any of the subdomains are receiving a meaningful amount of traffic?
Easy to justify removing them if they're not seeing any real traffic.

If there are any referrer logs present, that might highlight which places
still have outdated links that need updating to kill off remaining
traffic.
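
If the logs are in the usual Apache combined format, even a crude script
gives a rough picture - e.g. the following sketch, which assumes one log
file per vhost/subdomain and simply counts hits and the most common
referrers (the field index would need adjusting for other formats):

  import collections, sys

  hits = 0
  referrers = collections.Counter()
  for line in sys.stdin:
      # combined format: host ident user [time] "request" status size "referer" "agent"
      fields = line.split('"')
      if len(fields) >= 7:
          hits += 1
          referrers[fields[3]] += 1

  print("total hits:", hits)
  for ref, count in referrers.most_common(10):
      print(count, ref)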

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party][ci] openstack CI VM template

2017-02-28 Thread Daniel P. Berrange
On Tue, Feb 28, 2017 at 07:49:01AM -0600, Mikhail Medvedev wrote:
> On Tue, Feb 28, 2017 at 2:52 AM, Guo, Ruijing  wrote:
> > Hi, CI Team,
> >
> >
> >
> > I’d like to know if openstack CI VM support nested virtualization.
> >
> 
> OpenStack CI infrastructure is using nested virtualization inside of
> devstack VMs to perform tempest testing. But at the moment accel=tcg
> is used (emulation) for second level virt. IIRC it is done because
> some of the provider clouds had problems with KVM acceleration.

FYI, the QEMU & KVM maintainers still recommend *against* use of
nested-KVM in any production deployment, since they are not confident
of the security at this time, i.e. the risk that a level-2 guest could potentially
break out into either the level-1 guest or the physical host. This is
why the kvm kernel module requires an explicit opt-in to enable nested-KVM
on a host.
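
For reference, whether that opt-in has been made can be checked with a small
sketch like the one below; the sysfs paths are the standard parameter
locations for the Intel and AMD KVM modules, and older kernels report Y/N
rather than 1/0:

  # Module parameters exposed by kvm_intel / kvm_amd.
  NESTED_PARAMS = [
      "/sys/module/kvm_intel/parameters/nested",
      "/sys/module/kvm_amd/parameters/nested",
  ]

  def nested_kvm_enabled():
      for path in NESTED_PARAMS:
          try:
              with open(path) as f:
                  return f.read().strip() in ("1", "Y", "y")
          except FileNotFoundError:
              continue
      return False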

nested-KVM is improving, but there's no target date when it will
be considered ready for production use.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-27 Thread Daniel P. Berrange
On Mon, Feb 27, 2017 at 10:30:33AM -0500, Artom Lifshitz wrote:
> >  - virtio-vsock - think of this as UNIX domain sockets between the host and
> >guest.  This is to deal with the valid use case of people wanting to use
> >a network protocol, but not wanting a real NIC exposed to the guest/host
> >for security concerns. As such I think it'd be useful to run the metadata
> >service over virtio-vsock as an option. It'd likely address at least some
> >people's security concerns wrt metadata service. It would also fix the
> >ability to use the metadata service in IPv6-only environments, as we would
> >not be using IP at all :-)
> 
> Is this currently exposed by libvirt? I had a look at [1] and couldn't
> find any mention of 'vsock' or anything that resembles what you've
> described.

Not yet. The basic QEMU feature merged in 2.8.0, but we're still wiring
up various bits of userspace, e.g. selinux-policy, libvirt, the NFS server,
and so on, to understand vsock.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 12:46:09PM -0500, Artom Lifshitz wrote:
> > But before doing that though, I think it'd be worth understanding whether
> > metadata-over-vsock support would be acceptable to people who refuse
> > to deploy metadata-over-TCPIP today.
> 
> Sure, although I'm still concerned that it'll effectively make tagged
> hotplug libvirt-only.

Well there's still the option of accessing the metadata server the
traditional way over IP which is fully portable.  If some deployments
choose to opt-out of this facility I don't necessarily think we need
to continue to invent further mechanisms. At some point you have to
say what's there is good enough and if people choose to trade off
features against some other criteria so be it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 05:07:53PM +, Tim Bell wrote:
> Is there cloud-init support for this mode or do we still need to mount
> as a config drive?

I don't think it particularly makes sense to expose the config drive
via NVDIMM - it wouldn't solve any of the problems that config drive
has today and it'd be less portable wrt guest OS.

Rather I was suggesting we should consider NVDIMM as a transport for
the role device tagging metadata standalone, as that could provide us
a way to live-update the metadata on the fly, which is impractical /
impossible when the metadata is hidden inside the config drive.

But before doing that though, I think it'd be worth understanding whether
metadata-over-vsock support would be acceptable to people who refuse
to deploy metadata-over-TCPIP today.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 10:46:09AM -0500, Artom Lifshitz wrote:
> I don't think we're trying to re-invent configuration management in
> Nova. We have this problem where we want to communicate to the guest,
> from the host, a bunch of dynamic metadata that can change throughout
> the guest's lifetime. We currently have two possible avenues for this
> already in place, and both have problems:
> 
> 1. The metadata service isn't universally deployed by operators for
> security and other reasons.
> 2. The config drive was never designed for dynamic metadata.
> 
> So far in this thread we've mostly been discussing ways to shoehorn a
> solution into the config drive avenue, but that's going to be ugly no
> matter what because it was never designed for what we're trying to do
> in the first place.
> 
> Some folks are saying that we should admit that the config drive is only for
> static information and metadata that is known at boot time, and work
> on a third way to communicate dynamic metadata to the guest. I can get
> behind that 100%. I like the virtio-vsock option, but that's only
> supported by libvirt IIUC. We've got device tagging support in hyper-v
> as well, and xenapi hopefully on the way soon [1], so we need
> something a bit more universal. How about fixing up the metadata
> service to be more deployable, both in terms of security, and IPv6
> support?

FYI, virtio-vsock is not actually libvirt specific. the VSOCK sockets
transport was in fact invented by VMWare and first merged into Linux
in 2013 as a vmware guest driver.

A mapping of the VSOCK protocol over virtio was later defined to enable
VSOCK to be used with QEMU, KVM and Xen all of which support virtio.
The intention was explicitly that applications consuming VSOCK in the
guest would be portable between KVM & VMWare.

That said I don't think it is available via XenAPI, and doubt hyperv
will support it any time soon, but it is none the less a portable
standard if HVs decide they want such a feature.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Mon, Feb 20, 2017 at 02:24:12PM +, Jeremy Stanley wrote:
> On 2017-02-20 13:38:31 + (+), Daniel P. Berrange wrote:
> [...]
> >Rather than mounting as a filesystem, you can also use NVDIMM directly
> >as a raw memory block, in which case it can contain whatever data format
> >you want - not merely a filesystem. With the right design, you could come
> >up with a format that let you store the role device metadata in a NVDIMM
> >and be able to update its contents on the fly for the guest during hotplug.
> [...]
> 
> Maybe it's just me, but this begs for a (likely fairly trivial?)
> kernel module exposing that data under /sys or /proc (at least for
> *nix guests).

The data is exposed either as a block device or as a character device
in Linux - which one depends on how the NVDIMM is configured. Once you
have opened the right device you can simply mmap() the FD and read the
data. So exposing it as a file under sysfs doesn't really buy you
anything better.
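
As a rough guest-side sketch of that (the device path is hypothetical and
depends on how the NVDIMM was configured, e.g. /dev/pmem0 when exposed as a
block device):

  import mmap, os

  DEV = "/dev/pmem0"   # hypothetical; depends on NVDIMM configuration
  LENGTH = 4096        # map just the first page of data

  fd = os.open(DEV, os.O_RDONLY)
  try:
      buf = mmap.mmap(fd, LENGTH, prot=mmap.PROT_READ)
      data = bytes(buf[:LENGTH])
      buf.close()
  finally:
      os.close(fd)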

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Sat, Feb 18, 2017 at 01:54:11PM -0500, Artom Lifshitz wrote:
> A few good points were made:
> 
> * the config drive could be VFAT, in which case we can't trust what's
> on it because the guest has write access
> * if the config drive is ISO9660, we can't selectively write to it, we
> need to regenerate the whole thing - but in this case it's actually
> safe to read from (right?)
> * the point about the precedent being set that the config drive
> doesn't change... I'm not sure I 100% agree. There's definitely a
> precedent that information on the config drive will remain present for
> the entire instance lifetime (so the admin_pass won't disappear after
> a reboot, even if using that "feature" in a workflow seems ludicrous),
> but we've made no promises that the information itself will remain
> constant. For example, nothing says the device metadata must remain
> unchanged after a reboot.
> 
> Based on that here's what I propose:
> 
> If the config drive is vfat, we can just update the information on it
> that we need to update. In the device metadata case, we write a new
> JSON file, overwriting the old one.
> 
> If the config drive is ISO9660, we can safely read from it to fill in
> what information isn't persisted anywhere else, then update it with
> the new stuff we want to change. Then write out the new image.

Neither of these really copes with dynamically updating the role device
metadata for a *running* guest during a disk/nic hotplug, for example.
You can't re-write the FS data while it is in use by a running guest.

For the CDROM based config drive, you would have to eject the virtual
media and insert new media.

IMHO, I'd just declare config drive readonly no matter what and anything
which requires dynamic data must use a different mechanism. Trying to
make config drive at all dynamic just opens a can of worms.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Daniel P. Berrange
On Sat, Feb 18, 2017 at 08:11:10AM -0500, Artom Lifshitz wrote:
> In reply to Michael:
> 
> > We have had this discussion several times in the past for other reasons. The
> > reality is that some people will never deploy the metadata API, so I feel
> > like we need a better solution than what we have now.
> 
> Aha, that's definitely a good reason to continue making the config
> drive a first-class citizen.

FYI, there are a variety of other options available in QEMU for exposing
metadata from the host to the guest that may be a better option than either
config drive or network metadata service, that we should consider.

 - NVDIMM - this is an arbitrary block of data mapped into the guest OS
   memory space. As the name suggests, from a physical hardware POV this
   is non-volatile RAM, but in the virt space we have much more flexibility.
   It is possible to back an NVDIMM in the guest with a plain file in the
   host, or with volatile ram in the host.

   In the guest, the NVDIMM can be mapped as a block device, and from there
   mounted as a filesystem. Now this isn't actually more useful than config
   drive really, since guest filesystem drivers get upset if the host changes
   the filesystem config behind its back. So this wouldn't magically make it
   possible to dynamically update role device metadata at hotplug time.

   Rather than mounting as a filesystem, you can also use NVDIMM directly
   as a raw memory block, in which case it can contain whatever data format
   you want - not merely a filesystem. With the right design, you could come
   up with a format that let you store the role device metadata in a NVDIMM
   and be able to update its contents on the fly for the guest during hotplug.

 - virtio-vsock - think of this as UNIX domain sockets between the host and
   guest.  This is to deal with the valid use case of people wanting to use
   a network protocol, but not wanting a real NIC exposed to the guest/host
   for security concerns. As such I think it'd be useful to run the metadata
   service over virtio-vsock as an option. It'd likely address at least some
   people's security concerns wrt metadata service. It would also fix the
   ability to use the metadata service in IPv6-only environments, as we would
   not be using IP at all :-)


Both of these are pretty new features only recently added to qemu/libvirt,
so they're not going to immediately obsolete the config drive / IPv4 metadata
service, but they're things to consider IMHO. It would be valid to say
the config drive role device tagging metadata will always be readonly,
and if you want dynamic data you must use the metadata service over IPv4
or virtio-vsock.
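
To give an idea of what a metadata-over-vsock client could look like from
inside the guest, here is a hedged sketch - no such service exists today, so
the port and request path are purely hypothetical (AF_VSOCK support landed
in Python 3.7 on Linux):

  import socket

  VMADDR_CID_HOST = 2     # well-known CID of the host
  METADATA_PORT = 8775    # hypothetical port for such a service

  s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
  s.connect((VMADDR_CID_HOST, METADATA_PORT))
  s.sendall(b"GET /openstack/latest/meta_data.json HTTP/1.0\r\n\r\n")
  response = b""
  while True:
      chunk = s.recv(4096)
      if not chunk:
          break
      response += chunk
  s.close()
  print(response.decode())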

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Next minimum libvirt version

2017-02-10 Thread Daniel P. Berrange
On Thu, Feb 09, 2017 at 05:29:22PM -0600, Matt Riedemann wrote:
> Since danpb hasn't been around I've sort of forgotten about this, but we
> should talk about bumping the minimum required libvirt version in nova.
> 
> Currently it's 1.2.1 and the next was set to 1.2.9.
> 
> On master we're gating on ubuntu 16.04 which has libvirt 1.3.1 (14.04 had
> 1.2.2).
> 
> If we move to require 1.2.9 that effectively kills 14.04 support for
> devstack + libvirt on master, which is probably OK.
> 
> There is also the distro support wiki [1] which hasn't been updated in
> awhile.
> 
> I'm wondering if 1.2.9 is a safe move for the next required minimum version
> and if so, does anyone have ideas on the next required version after that?

I think libvirt 1.2.9 is absolutely fine as a next version. It is still
ancient history comparatively speaking.

The more difficult question is what happens after that. To go further than
that effectively requires dropping Debian as a supportable platform since
AFAIK, they never rebase libvirt & the next Debian major release is still
unannounced.  So the question is whether "stock" Debian is something the
project cares about targeting or will the answer be that Debian users
are required to pull in newer libvirt from elsewhere.

Also, it is just as important to consider minimum QEMU versions at the
same time, though it could just be set to the lowest common denominator
across distros that remain, after choosing the libvirt version.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Automatically disabling compute service on RBD EMFILE failures

2017-01-09 Thread Daniel P. Berrange
On Sat, Jan 07, 2017 at 12:04:25PM -0600, Matt Riedemann wrote:
> A few weeks ago someone in the operators channel was talking about issues
> with ceph-backed nova-compute and OSErrors for too many open files causing
> issues.
> 
> We have a bug reported that's very similar sounding:
> 
> https://bugs.launchpad.net/nova/+bug/1651526
> 
> During the periodic update_available_resource audit, the call to RBD to get
> disk usage fails with the EMFILE OSError. Since this is in a periodic it
> doesn't cause any direct operations to fail, but it will cause issues with
> scheduling as that host is really down, however, nothing sets the service to
> down (disabled).
> 
> I had proposed a solution in the bug report that we could automatically
> disable the service for that host when this happens, and then automatically
> enable the service again if/when the next periodic task run is successful.
> Disabling the service would take that host out of contention for scheduling
> and may also trigger an alarm for the operator to investigate the failure
> (although if there are EMFILE errors from the ceph cluster I'm guessing
> alarms should already be going off).
> 
> Anyway, I wanted to see how hacky of an idea this is. We already
> automatically enable/disable the service from the libvirt driver when the
> connection to libvirt itself drops via an event callback. This would be
> similar albeit less sophisticated as it's not using an event listening
> mechanism, we'd have to maintain some local state in memory to know if we
> need to enable/disable the service again. And it seems pretty
> hacky/one-offish to handle this just for the RBD failure, but maybe we just
> generically handle it for any EMFILE error when collecting disk usage in the
> resource audit?

Presumably this deployment was using the default Linux file limits
which are at a ridiculously low value of 1024. Ceph with 900 OSDs
will potentially need 900 files, not really leaving any slack for
Nova to do other work. I'd be willing to bet there are other scenarios
in which Nova would hit the 1024 FD limit under high usage, not merely
Ceph. So perhaps regardless of whether Ceph is used, we should just
recommend that you always run Nova with 4096 fds, and check that in
initialize() on startup and log a warning if the num files is lower
than this.

With pretty much all distros using systemd, it would be nice if Nova
shipped a standard systemd unit file, which could then also contain
the recommended higher FD limit so people get sane limits out of the
box.
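
A minimal sketch of that startup check (not actual Nova code; 4096 is just
the figure suggested above):

  import logging, resource

  LOG = logging.getLogger(__name__)
  RECOMMENDED_NOFILE = 4096

  def check_fd_limit():
      soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
      if soft < RECOMMENDED_NOFILE:
          LOG.warning("Open file limit (%d) is below the recommended %d; "
                      "hosts talking to many Ceph OSDs may hit EMFILE",
                      soft, RECOMMENDED_NOFILE)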

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Daniel P. Berrange
On Fri, Dec 02, 2016 at 09:22:54AM -0600, Matt Riedemann wrote:
> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer, my
> ability to tell time in nova has gotten fuzzy over the years. Regardless,
> he's always been eager to contribute and over the last several months has
> done a lot of reviews, as can be seen here:
> 
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
> 
> http://stackalytics.com/report/contribution/nova/180
> 
> Stephen has been a main contributor and mover for the config option cleanup
> series that last few cycles, and he's a go-to person for a lot of the
> NFV/performance features in Nova like NUMA, CPU pinning, huge pages, etc.
> 
> I think Stephen does quality reviews, leaves thoughtful comments, knows when
> to hold a +1 for a patch that needs work, and when to hold a -1 from a patch
> that just has some nits, and helps others in the project move their changes
> forward, which are all qualities I look for in a nova-core member.
> 
> I'd like to see Stephen get a bit more vocal / visible, but we all handle
> that differently and I think it's something Stephen can grow into the more
> involved he is.
> 
> So with all that said, I need a vote from the core team on this nomination.
> I honestly don't care to look up the rules too much on number of votes or
> timeline, I think it's pretty obvious once the replies roll in which way
> this goes.

+1


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-02 Thread Daniel P. Berrange
On Fri, Dec 02, 2016 at 11:35:05AM +0100, Thierry Carrez wrote:
> Hi everyone,
> 
> There has been a bit of tension lately around creating IRC meetings.
> I've been busy[1] cleaning up unused slots and defragmenting biweekly
> ones to open up possibilities, but truth is, even with those changes
> approved, there will still be a number of time slots that are full:
> 
> Tuesday 14utc -- only biweekly available
> Tuesday 16utc -- full
> Wednesday 15utc -- only biweekly available
> Wednesday 16utc -- full
> Thursday 14utc -- only biweekly available
> Thursday 17utc -- only biweekly available
> 
> [1] https://review.openstack.org/#/q/topic:dec2016-cleanup
> 
> Historically, we maintained a limited number of meeting rooms in order
> to encourage teams to spread around and limit conflicts. This worked for
> a time, but those days I feel like team members don't have that much
> flexibility in picking a time that works for everyone. If the miracle
> slot that works for everyone is not available on the calendar, they tend
> to move the meeting elsewhere (private IRC channel, Slack, Hangouts)
> rather than change time to use a less-busy slot.
> 
> So I'm now wondering how much that artificial scarcity policy is hurting
> us more than it helps us. I'm still convinced it's very valuable to have
> a number of "meetings rooms" that you can lurk in and be available for
> pings, without having to join hundreds of channels where meetings might
> happen. But I'm not sure anymore that maintaining an artificial scarcity
> is helpful in limiting conflicts, and I can definitely see that it
> pushes some meetings away from the meeting channels, defeating their
> main purpose.
> TL;DR:
> - is it time for us to add #openstack-meeting-5 ?
> - should we more proactively add meeting channels in the future ?

Do we have any real data on just how many contributors really do
lurk in the meeting rooms permanently, as opposed to merely joining
rooms at the start of the meeting & leaving immediately thereafter?

Likewise, is there any data on how many contributors actively participate
in meetings across different projects, vs being siloed just in their own
project?

If the latter is in the clear majority, then you might as well just
have #openstack-meeting-$PROJECT and thus mostly avoid the problem
of conflicting demands for a limited set of channels.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] away from Nova development for forseeable future

2016-11-28 Thread Daniel P. Berrange
Hi Folks,

I was recently tasked with spending some months working full time on
another project unrelated to OpenStack Nova. As such I am not likely
to be participating in any Nova related work for at least the Ocata
development cycle. At this time, I don't know whether I'll be returning
to Nova in the Pike cycle or not. I hope that other Red Hat folks will
be able to take over involvement in any work I was responsible for (eg in
particular os-vif related stuff or any libvirt driver work). Since I
won't be on IRC, if there's some show stopper that needs my help, I'd
encourage people to ping other Red Hat or Nova team members who should
have enough knowledge to help, or failing that, email me.

I've not resigned from Nova core, but realistically I'm not going to be
doing any reviews for 3-4 months at a minimum, so I'll leave it up to
the PTL to decide what action to take, if any. Regardless, I think that
Nova needs to look at core membership more broadly given the recent loss of
Andrew from the team too.

Thus I'd encourage the project to either promote more community members
to the Nova core team to increase bandwidth, and/or consider alternative
strategies to reduce the core bottleneck, such as an intermediate layer
of people who have +2, but not +A.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] More file injection woes

2016-11-14 Thread Daniel P. Berrange
On Fri, Nov 11, 2016 at 07:11:51PM -0600, Matt Riedemann wrote:
> Chris Friesen reported a bug [1] where injected files on a server aren't in
> the guest after it's evacuated to another compute host. This is because the
> injected files aren't persisted in the nova database at all. Evacuate and
> rebuild use similar code paths, but rebuild is a user operation and the
> command line is similar to boot, but evacuate is an admin operation and the
> admin doesn't have the original injected files.
> 
> We've talked about issues with file injection before [2] - in that case not
> being able to tell if it can be honored and it just silently doesn't inject
> the files but the server build doesn't fail. We could eventually resolve
> that with capabilities discovery in the API.
> 
> There are other issues with file injection, like potential security issues,
> and we've talked about getting rid of it for years because you can use the
> config drive.
> 
> The metadata service is not a replacement, as noted in the code [3], because
> the files aren't persisted in nova so they can't be served up later.
> 
> I'm sure we've talked about this before, but if we were to seriously
> consider deprecating file injection, what does that look like?  Thoughts off
> the top of my head are:
> 
> 1. Add a microversion to the server create and rebuild REST APIs such that
> the personality files aren't accepted unless:
> 
> a) you're also building the server with a config drive
> b) or CONF.force_config_drive is True
> c) or the image has the 'img_config_drive=mandatory' property
> 
> 2. Deprecate VFSLocalFS in Ocata for removal in Pike. That means libguestfs
> is required. We'd do this because I think VFSLocalFS is the one with
> potential security issues.

Yes, VFSLocalFS is the dangerous one if used with untrustworthy disk images
(essentially all public cloud images are untrustworthy) because malicious
images could be used to exploit bugs in the host kernels' filesystem drivers.
This isn't theoretical - we've seen bugs in popular linux filesystems (i.e.
ext3) mistakenly lie unfixed for years: https://lwn.net/Articles/538898/

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Live migrations: Post-Copy and Auto-Converge features research

2016-11-08 Thread Daniel P. Berrange
On Mon, Nov 07, 2016 at 10:34:39PM +0200, Vlad Nykytiuk wrote:
> Hi,
> 
> As you may know, currently QEMU supports several features that help live
> migrations to operate more predictively. These features are: auto-converge
> and post-copy. 
> I made a research on performance characteristics of these two features, you
> can find it by the following link:
> https://rk4n.github.io/2016/08/10/qemu-post-copy-and-auto-converge-features/
> 

Thanks for the report; it appears to confirm the results I got previously,
which show post-copy as the clear winner, and auto-converge as a viable
alternative if post-copy is not available.

I've got a few suggestions if you want to do further investigation

 - Look at larger guests - a 1 vCPU guest with 2 GB of RAM is not
   particularly difficult to migrate when you have 10 Gig-E networking,
   or even 1 Gig-E networking. A 4 vCPU guest with 8 GB of RAM, with 4
   guest workers dirtying all 8 GB of RAM, is a hard test. Even with
   auto-converge such guests may not successfully complete in < 5 minutes.

 - Measure the guest CPU performance, eg time to write to 1 GB of RAM
   (a rough sketch of such a probe is below). While auto-converge can
   ensure completion, it has a really high and prolonged impact on guest
   CPU performance, much worse than is seen with post-copy. For example,
   time to write to 1 GB will degrade from 400 ms/GB to as much as
   7000 ms/GB during auto-converge, and this hit may last many minutes.
   For post-copy, there will be small spikes at the start of each
   iteration of migration (400 ms/GB -> 1000 ms/GB), and a big spike at
   the switch over (400 ms/GB -> 7000 ms/GB), but the duration of the
   spikes is very short (less than a second), so post-copy is a clear
   winner over auto-converge, where the guest CPU performance hit lasts
   many minutes.

 - Measure the overall CPU utilization of QEMU as a whole. This will
   show the impact of using compression, whose main effect is to burn
   massive amounts of CPU time in the QEMU migration thread.
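
To make the ms/GB figures above concrete, here is a minimal sketch of the
kind of guest-side probe I mean (assuming Python 3 inside the guest; this
is purely illustrative, not the test framework mentioned below):

  # Hypothetical guest-side probe: repeatedly time how long it takes to
  # dirty 1 GB of RAM, touching one byte per page.
  import time

  GB = 1024 * 1024 * 1024
  PAGE = 4096
  buf = bytearray(GB)

  def time_dirty_1gb():
      start = time.monotonic()
      for off in range(0, GB, PAGE):
          buf[off] = (buf[off] + 1) & 0xFF
      return (time.monotonic() - start) * 1000.0   # ms per GB

  while True:
      print("%.0f ms/GB" % time_dirty_1gb())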

I've published my previous results here:

  
https://www.berrange.com/posts/2016/05/12/analysis-of-techniques-for-ensuring-migration-completion-with-kvm/

and the framework I used to collect all this data is now distributed in
the QEMU git repo.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

2016-11-02 Thread Daniel P. Berrange
On Wed, Nov 02, 2016 at 10:43:44AM +, Lee Yarwood wrote:
> On 02-11-16 08:55:08, Carlton, Paul (Cloud Services) wrote:
> > Lee
> > 
> > I see this in a multiple node devstack without shared storage, although 
> > that shouldn't be relevant
> > 
> > I do a live migration of an instance
> > 
> > I then hard reboot it
> > 
> > If you are not seeing the same outcome I'll look at this again
> 
> Apologies if I'm not being clear here Paul but I'm asking if we can't
> fix the hard reboot issue directly instead of reverting the serial
> console fix. Given that you actually need the serial console fix to
> avoid calling connect_volume multiple times on the destination host.

Agreed, we should diagnose the hard reboot issue rather than just
blindly revert. Based on the bug info - which points to a failure
in neutron port binding - I'm not even convinced that the serial
console fix is the ultimate cause - it may just have exposed a
different latent bug.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

2016-11-01 Thread Daniel P. Berrange
On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) wrote:
> I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with the 
> live migration of
> 
> instances with encrypted volumes. I've submitted a work in progress version 
> of a patch
> 
> https://review.openstack.org/#/c/389608 but I can't overcome an issue with an 
> iscsi command
> 
> failure that only occurs for encrypted volumes during the post migration 
> processing, see
> 
> http://paste.openstack.org/show/587535/
> 
> 
> Does anyone have any thoughts on how to proceed with this issue?

No particular ideas, but I wanted to point out that the scsi_id command
shown in that stack trace has a device path that points to the raw
iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting
a failure before you even get to the encryption part, which means the
encryption itself might be unrelated.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Cannot bind instance to generic vhostuser interface

2016-10-28 Thread Daniel P. Berrange
On Fri, Oct 28, 2016 at 10:34:50AM +, Tomas Cechvala -X (tcechval - 
PANTHEON TECHNOLOGIES at Cisco) wrote:
> Hi nova devs,
> 
> I'm trying to bind nova instances to generic vhostuser interface (created by 
> VPP).Based on the log output it seems that vhostuser vif_type is not 
> recognized by python's libvirt.
> http://pastebin.com/raw/C3NLsYfP
> 
> Is this a bug or have I misconfigured something?

That suggests your libvirt version is too old - the vhostuser vif type needs
libvirt >= 1.2.7.
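
If you want to double check what the host libvirt actually reports, a quick
sketch assuming libvirt-python (the version is encoded as
major*1000000 + minor*1000 + micro, so 1.2.7 is 1002007):

  import libvirt

  conn = libvirt.open("qemu:///system")
  ver = conn.getLibVersion()
  print("libvirt %d.%d.%d" % (ver // 1000000, (ver // 1000) % 1000, ver % 1000))
  print("new enough for vhostuser" if ver >= 1002007 else "too old for vhostuser")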


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Addressing mangled LUKS passphrases (bug#1633518)

2016-10-23 Thread Daniel P. Berrange
On Fri, Oct 21, 2016 at 12:07:08PM +0100, Lee Yarwood wrote:
> Hello,
> 
> I documented bug#1633518 [1] last week in which volumes encrypted prior
> to Ib563b0ea [2] used a slightly mangled passphrase instead of the
> original passphrase provided by the configured key manager.
> 
> My first attempt at resolving this [3] prompted an alternative
> suggestion from mdbooth of adding the correct passphrase to the LUKS
> device when we detect the use of a mangled passphrase.
> 
> I'm slightly wary of this option given the changing of passphrases so
> I'd really appreciate input from the wider Nova and Cinder groups on
> your preference for resolving this :
> 
> 1. Keep the mangled passphrase in place and attempt to use it after
> getting a permission denied error during luksOpen. 

This is going to be painful when we switch to using QEMU for LUKS,
because it is going to amount to starting QEMU, watching it fail
to open disks and then trying to start QEMU again. IMHO we need to
fix the broken passphrases globally asap.

> 2. Add the correct passphrase and remove the mangled passphrase from the
> LUKS device with luksChangeKey when we detect the use of the mangled
> passphrase.

Yes we should be doing this to fix up the broken devices.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-10-03 Thread Daniel P. Berrange
On Fri, Sep 30, 2016 at 04:36:31PM +, Murray, Paul (HP Cloud) wrote:
> 
> We have a problem migrating rescued instances that has a fix in progress based
> on regenerating the xml on unrescue, see:
> 
>  https://blueprints.launchpad.net/nova/+spec/live-migrate-rescued-instances
> 
> That might be another case for generating the xml.
> 
> I thought it was a counter-argument (unless I've misunderstood). If you 
> migrate the instance as-is without modification, you don't need to worry 
> about whether it's currently a rescue instance. This problem goes away.
> 
> The major complication I can think of is things which genuinely must change 
> during a migration, for example cpu pinning.
> 
> Rescue currently saves the libvirt xml in a separate file and replaces it
> with new xml to define a vm with a rescue boot disk; to unrescue it replaces
> the libvirt xml used for the rescue with the saved file to go back to the
> original state. On migration that saved xml would be lost because its an
> arbitrary file that is not handled in the migration. The spec above relies
> on the fact that we do not need to save it or copy it because we can recreate
> it from nova. So yes, the migration just works for the rescued vm, but when
> it is unrescued the original libvirt xml would be regenerated.

During rescue, nova should really not be touching the XML on disk at all. That
should have been left to reflect the "normal" XML of the guest. Instead nova
should have just called the 'createXML' method, to boot the guest with a
one-time different XML config. There is no reason to define the XML on disk
with the custom rescue config.
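
As a rough sketch of that approach (assuming libvirt-python; this is not
actual Nova code, the guest name is a placeholder, and the rescue XML is
assumed to be prepared elsewhere with its own name/UUID so it does not clash
with the persistent definition):

  import libvirt

  conn = libvirt.open("qemu:///system")

  dom = conn.lookupByName("instance-00000001")   # hypothetical guest name
  dom.destroy()                                  # stop the normal guest

  rescue_xml = open("/tmp/rescue.xml").read()    # one-time rescue config
  rescue_dom = conn.createXML(rescue_xml, 0)     # transient: persistent XML untouched

  # Unrescue: drop the transient domain, restart the untouched persistent one
  rescue_dom.destroy()
  dom.create()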

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-10-03 Thread Daniel P. Berrange
On Mon, Oct 03, 2016 at 10:11:34AM +0300, Timofei Durakov wrote:
> Hi team,
> 
> I agree that it's kind of strange thing that nova dumps xml definition to
> the disk but doesn't use it(at least I do not aware of it).
> How the proposed changed would be aligned with other drivers? The worst
> case I could imagine here is that libvirt has an xml as a source of truth,
> while others use nova for the same purposes. Taking that into account, the
> question here would be:  why not to store all required information(e.g.
> boot order) in DB instead?

That is duplicating information already stored in libvirt - any time you
change the guest you have the job of updating the DB copy to mirror that
change. This gets particularly fun (aka error prone) during migration:
if there's a failure partway through, you can get the two copies out
of sync (eg if libvirt completes, but something causes an exception in
nova's post-migration logic).

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] RFC: "next" min libvirt/qemu requirement for Pike release

2016-09-27 Thread Daniel P. Berrange
In the Newton release we increased the min required libvirt to 1.2.1
and min QEMU to 1.5.3. We did not set any "next" versions for Ocata,
so Ocata will not be changing them.

I think we should consider increasing min versions in the Pike release
though to let us cut out more back-compatibility code for versions that
will be pretty obsolete by the time Pike is released.

I've put up this proposed change:

  https://review.openstack.org/#/c/377923/

Using this as the guide:

   https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

It proposes min libvirt 1.2.9 and min QEMU 2.1.0. These are the versions
present in Debian Jessie.

Out of the major distros currently supported by Ocata, this would eliminate
support for the following in Pike:

  - Ubuntu Trusty. Workaround: enable the "Cloud Archive" add-on
repository, or upgrade to Ubuntu Xenial
  - SLES 12. Workaround: upgrade to 12SP1
  - RHEL 7.1. Workaround: upgrade to 7.2 or newer

There is one extra complication in that a lot of upstream CI jobs currently
use Trusty VMs, although things are increasingly migrating to Xenial based
images. Clearly if we drop Trusty support in Nova for Pike, then the CI jobs
for Nova have to be fully migrated to Xenial by that time.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-09-27 Thread Daniel P. Berrange
On Tue, Sep 27, 2016 at 10:40:34AM -0600, Chris Friesen wrote:
> On 09/27/2016 10:17 AM, Matthew Booth wrote:
> 
> > I think we should be able to create a domain, but once created we should 
> > never
> > redefine a domain. We can do adding and removing devices dynamically using
> > libvirt's apis, secure in the knowledge that libvirt will persist this for 
> > us.
> > When we upgrade the host, libvirt can ensure we don't break guests which 
> > are on
> > it. Evacuate should be pretty much the only reason to start again.
> 
> Sounds interesting.  How would you handle live migration?
> 
> Currently we regenerate the XML file on the destination from the nova DB.  I
> guess in your proposal we'd need some way of copying the XML file from the
> source to the dest, and then modifying the appropriate segments to adjust
> things like CPU/NUMA pinning?

Use the flag VIR_MIGRATE_PERSIST_XML and libvirt will write out the
new persistent XML on the target host automatically.
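
A rough sketch of what that looks like at the libvirt-python level (the flag
asking libvirt to persist the config on the destination is exposed there as
VIR_MIGRATE_PERSIST_DEST; in newer libvirt a modified persistent config can
also be supplied explicitly via the "persistent_xml" typed parameter; the
guest name below is a placeholder):

  import libvirt

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("instance-00000001")   # hypothetical guest name

  flags = (libvirt.VIR_MIGRATE_LIVE |
           libvirt.VIR_MIGRATE_PEER2PEER |
           libvirt.VIR_MIGRATE_PERSIST_DEST)

  params = {}   # or {"persistent_xml": updated_xml} if eg pinning must change
  dom.migrateToURI3("qemu+tcp://dest-host/system", params, flags)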

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://entangle-photo.org   -o-http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-09-27 Thread Daniel P. Berrange
On Tue, Sep 27, 2016 at 05:17:29PM +0100, Matthew Booth wrote:
> Currently the libvirt driver (mostly) considers the nova db canonical. That
> is, we can throw away libvirt's domain XML at any time and recreate it from
> Nova. Anywhere that doesn't assume this is a bug, because whatever
> direction we choose we don't need 2 different sources of truth. The
> thinking behind this is that we should always know what we told libvirt,
> and if we lose that information then that's a bug.
> 
> This is true to a degree, and it's the reason I proposed the persistent
> instance storage metadata spec: we lose track of how we configured an
> instance's storage. I realised recently that this isn't the whole story,
> though. Libvirt also automatically creates a bunch of state for us which we
> didn't specify explicitly. We lose this every time we drop it and recreate.
> For example, consider device addressing and ordering:
> 
> $ nova boot ...
> 
> We tell libvirt to give us a root disk, config disk, and a memballoon
> device (amongst other things).
> 
> Libvirt assigns pci addresses to all of these things.
> 
> $ nova volume-attach ...
> 
> We tell libvirt to create a new disk attached to the given volume.
> 
> Libvirt assigns it a pci address.
> 
> $ nova reboot
> 
> We throw away libvirt's domain xml and create a new one from scratch.
> 
> Libvirt assigns new addresses for all of these devices.
> 
> Before reboot, the device order was: root disk, config disk, memballoon,
> volume. After reboot the device order is: root disk, volume, config disk,
> memballoon. Not only have all our devices changed address, which makes
> Windows sad and paranoid about its licensing, and causes it to offline
> volumes under certain circumstances, but our disks have been reordered.

It is worth pointing out that we do have the device metadata role
tagging support now, which lets the guest OS identify devices automatically
at startup. In theory you could say guests should rely on using that
on *every* boot, not merely the first boot after provisioning.

I think there is a reasonable case to be made, however, that we should
maintain a stable device configuration for an instance after its
initial boot attempt. Arbitrarily changing hardware config on every
reboot is being gratuitously nasty to guest admins. The example about
causing Windows to require license reactivation is, on its own, enough
of a reason to ensure stable hardware once initial provisioning is
done.


> This isn't all we've thrown away, though. Libvirt also gave us a default
> machine type. When we create a new domain we'll get a new default machine
> type. If libvirt has been upgraded, eg during host maintenance, this isn't
> necessarily what it was before. Again, this can make our guests sad. Same
> goes for CPU model, default devices, and probably many more things I
> haven't thought of.

Yes indeed.

> Also... we lost the storage configuration of the guest: the information I
> propose to persist in persistent instance storage metadata.
> 
> We could store all of this information in Nova, but with the possible
> exception of storage metadata it really isn't at the level of 'management':
> it's the minutia of the hypervisor. In order to persist all of these things
> in Nova we'd have to implement them explicitly, and when libvirt/kvm grows
> more stuff we'll have to do that too. We'll need to mirror the
> functionality of libvirt in Nova, feature for feature. This is a red flag
> for me, and I think it means we should switch to libvirt being canonical.
> 
> I think we should be able to create a domain, but once created we should
> never redefine a domain. We can do adding and removing devices dynamically
> using libvirt's apis, secure in the knowledge that libvirt will persist
> this for us. When we upgrade the host, libvirt can ensure we don't break
> guests which are on it. Evacuate should be pretty much the only reason to
> start again.

And in fact we do persist the guest XML with libvirt already. We sadly
never use that info though - we just blindly overwrite it every time
with newly generated XML.

Fixing this should not be technically difficult for the most part.
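
For instance, the config libvirt already persists is one API call away
(sketch assuming libvirt-python; the guest name is a placeholder):

  import libvirt

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("instance-00000001")   # hypothetical guest name
  # The persistent (inactive) config libvirt stores on disk for the guest,
  # which could be reused instead of regenerating XML from the Nova DB.
  persistent_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)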

> I raised this in the live migration sub-team meeting, and the immediate
> response was understandably conservative. I think this solves more problems
> than it creates, though, and it would result in Nova's libvirt driver
> getting a bit smaller and a bit simpler. That's a big win in my book.

I don't think it'll get significantly smaller/simpler, but it will
definitely be more intelligent and user-friendly to do this IMHO.
As mentioned above, I think the Windows license reactivation issue
alone is enough of a reason to do this.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-26 Thread Daniel P. Berrange
On Mon, Sep 26, 2016 at 09:31:39PM +0800, Alex Xu wrote:
> 2016-09-23 20:38 GMT+08:00 Daniel P. Berrange <berra...@redhat.com>:
> 
> > On Fri, Sep 23, 2016 at 07:32:36AM -0500, Matt Riedemann wrote:
> > > On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:
> > > > On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> > > > > Sergey is working on a spec to use the standardized virt driver
> > instance
> > > > > diagnostics in the os-diagnostics API. A question came up during
> > review of
> > > > > the spec about how to define a disk 'id':
> > > > >
> > > > > https://review.openstack.org/#/c/357884/2/specs/ocata/
> > approved/restore-vm-diagnostics.rst@140
> > > > >
> > > > > The existing diagnostics code doesn't set a disk id in the list of
> > disk
> > > > > dicts, but I think with at least libvirt we can set that to the
> > target
> > > > > device from the disk device xml.
> > > > >
> > > > > The xenapi code for getting this info is a bit confusing for me at
> > least,
> > > > > but it looks like it's possible to get the disks, but the id might
> > need to
> > > > > be parsed out (as a side note, it looks like the cpu/memory/disk
> > diagnostics
> > > > > are not even populated in the get_instance_diagnostics method for
> > xen).
> > > > >
> > > > > vmware is in the same boat as xen, it's not fully implemented:
> > > > >
> > > > > https://github.com/openstack/nova/blob/
> > 64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/
> > vmwareapi/vmops.py#L1561
> > > > >
> > > > > Hyper-v and Ironic virt drivers haven't implemented
> > get_instance_diagnostics
> > > > > yet.
> > > >
> > > > The key value of this field (which we should call "device_name", not
> > "id"),
> > > > is to allow the stats data to be correlated with the entries in the
> > block
> > > > device mapping list used to configure storage when bootin the VM. As
> > such
> > > > we should declare its value to match the corresponding field in BDM.
> > > >
> > > > Regards,
> > > > Daniel
> > > >
> > >
> > > Well, except that we don't want people specifying a device name in the
> > block
> > > device list when creating a server, and the libvirt driver ignores that
> > > altogether. In fact, I think Dan Smith was planning on adding a
> > microversion
> > > in Ocata to remove that field from the server create request since we
> > can't
> > > guarantee it's what you'll end up with for all virt drivers.
> >
> > We don't want people specifying it, but we should report the auto-allocated
> > names back when you query the data after instance creation, don't we ? If
> > we don't, then there's no way for users to correlate the disks that they
> > requested with the instance diagnostic stats, which severely limits their
> > usefulness.
> >
> 
> So what use-case for this API? I thought it is used by admin user to
> diagnose the cloud. If that is the right use-case, we can expose the disk
> image path in the API for admin user to correlate the disks. In the
> libvirt, it would looks like
> "/opt/stack/data/nova/instances/cbc7985c-434d-4ec3-8d96-d99ad6afb618/disk".
> As this is admin-only API, and for diagnostics, this info is safe to expose
> in this API.

You can't assume that all disks have a local path in the filesystem.
Any disks using a QEMU built-in network client (eg rbd) will not
appear there.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-23 Thread Daniel P. Berrange
On Fri, Sep 23, 2016 at 07:32:36AM -0500, Matt Riedemann wrote:
> On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:
> > On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> > > Sergey is working on a spec to use the standardized virt driver instance
> > > diagnostics in the os-diagnostics API. A question came up during review of
> > > the spec about how to define a disk 'id':
> > > 
> > > https://review.openstack.org/#/c/357884/2/specs/ocata/approved/restore-vm-diagnostics.rst@140
> > > 
> > > The existing diagnostics code doesn't set a disk id in the list of disk
> > > dicts, but I think with at least libvirt we can set that to the target
> > > device from the disk device xml.
> > > 
> > > The xenapi code for getting this info is a bit confusing for me at least,
> > > but it looks like it's possible to get the disks, but the id might need to
> > > be parsed out (as a side note, it looks like the cpu/memory/disk 
> > > diagnostics
> > > are not even populated in the get_instance_diagnostics method for xen).
> > > 
> > > vmware is in the same boat as xen, it's not fully implemented:
> > > 
> > > https://github.com/openstack/nova/blob/64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/vmwareapi/vmops.py#L1561
> > > 
> > > Hyper-v and Ironic virt drivers haven't implemented 
> > > get_instance_diagnostics
> > > yet.
> > 
> > The key value of this field (which we should call "device_name", not "id"),
> > is to allow the stats data to be correlated with the entries in the block
> > device mapping list used to configure storage when bootin the VM. As such
> > we should declare its value to match the corresponding field in BDM.
> > 
> > Regards,
> > Daniel
> > 
> 
> Well, except that we don't want people specifying a device name in the block
> device list when creating a server, and the libvirt driver ignores that
> altogether. In fact, I think Dan Smith was planning on adding a microversion
> in Ocata to remove that field from the server create request since we can't
> guarantee it's what you'll end up with for all virt drivers.

We don't want people specifying it, but we should report the auto-allocated
names back when you query the data after instance creation, shouldn't we? If
we don't, then there's no way for users to correlate the disks that they
requested with the instance diagnostic stats, which severely limits their
usefulness.

> I'm fine with calling the field device_name though.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-23 Thread Daniel P. Berrange
On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> Sergey is working on a spec to use the standardized virt driver instance
> diagnostics in the os-diagnostics API. A question came up during review of
> the spec about how to define a disk 'id':
> 
> https://review.openstack.org/#/c/357884/2/specs/ocata/approved/restore-vm-diagnostics.rst@140
> 
> The existing diagnostics code doesn't set a disk id in the list of disk
> dicts, but I think with at least libvirt we can set that to the target
> device from the disk device xml.
> 
> The xenapi code for getting this info is a bit confusing for me at least,
> but it looks like it's possible to get the disks, but the id might need to
> be parsed out (as a side note, it looks like the cpu/memory/disk diagnostics
> are not even populated in the get_instance_diagnostics method for xen).
> 
> vmware is in the same boat as xen, it's not fully implemented:
> 
> https://github.com/openstack/nova/blob/64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/vmwareapi/vmops.py#L1561
> 
> Hyper-v and Ironic virt drivers haven't implemented get_instance_diagnostics
> yet.

The key value of this field (which we should call "device_name", not "id")
is to allow the stats data to be correlated with the entries in the block
device mapping list used to configure storage when booting the VM. As such
we should declare its value to match the corresponding field in the BDM.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 11:36:29AM -0400, Sean Dague wrote:
> On 09/20/2016 11:20 AM, Daniel P. Berrange wrote:
> > On Tue, Sep 20, 2016 at 11:01:23AM -0400, Sean Dague wrote:
> >> On 09/20/2016 10:38 AM, Daniel P. Berrange wrote:
> >>> On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
> >>>> This is a bit delayed due to the release rush, finally getting back to
> >>>> writing up my experiences at the Ops Meetup.
> >>>>
> >>>> Nova Feedback Session
> >>>> =
> >>>>
> >>>> We had a double session for Feedback for Nova from Operators, raw
> >>>> etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
> >>>>
> >>>> The median release people were on in the room was Kilo. Some were
> >>>> upgrading to Liberty, many had older than Kilo clouds. Remembering
> >>>> these are the larger ops environments that are engaged enough with the
> >>>> community to send people to the Ops Meetup.
> >>>>
> >>>>
> >>>> Performance Bottlenecks
> >>>> ---
> >>>>
> >>>> * scheduling issues with Ironic - (this is a bug we got through during
> >>>>   the week after the session)
> >>>> * live snapshots actually end up performance issue for people
> >>>>
> >>>> The workarounds config group was not well known, and everyone in the
> >>>> room wished we advertised that a bit more. The solution for snapshot
> >>>> performance is in there
> >>>>
> >>>> There were also general questions about what scale cells should be
> >>>> considered at.
> >>>>
> >>>> ACTION: we should make sure workarounds are advertised better
> >>>
> >>> Workarounds ought to be something that admins are rarely, if
> >>> ever, having to deal with.
> >>>
> >>> If the lack of live snapshot is such a major performance problem
> >>> for ops, this tends to suggest that our default behaviour is wrong,
> >>> rather than a need to publicise that operators should set this
> >>> workaround.
> >>>
> >>> eg, instead of optimizing for the case of a broken live snapshot
> >>> support by default, we should optimize for the case of working
> >>> live snapshot by default. The broken live snapshot stuff was so
> >>> rare that no one has ever reproduced it outside of the gate
> >>> AFAIK.
> >>>
> >>> IOW, rather than hardcoding disable_live_snapshot=True in nova,
> >>> we should just set it in the gate CI configs, and leave it set
> >>> to False in Nova, so operators get good performance out of the
> >>> box.
> >>>
> >>> Also it has been a while since we added the workaround, and IIRC,
> >>> we've got newer Ubuntu available on at least some of the gate
> >>> hosts now, so we have the ability to test to see if it still
> >>> hits newer Ubuntu. 
> >>
> >> Here is my reconstruction of the snapshot issue from what I can remember
> >> of the conversation.
> >>
> >> Nova defaults to live snapshots. This uses the libvirt facility which
> >> dumps both memory and disk. And then we throw away the memory. For large
> >> memory guests (especially volume backed ones that might have a fast path
> >> for the disk) this leads to a lot of overhead for no gain. The
> >> workaround got them past it.
> > 
> > I think you've got it backwards there.
> > 
> > Nova defaults to *not* using live snapshots:
> > 
> > cfg.BoolOpt(
> > 'disable_libvirt_livesnapshot',
> > default=True,
> > help="""
> > Disable live snapshots when using the libvirt driver.
> > ...""")
> > 
> > 
> > When live snapshot is disabled like this, the snapshot code is unable
> > to guarantee a consistent disk state. So the libvirt nova driver will
> > stop the guest by doing a managed save (this saves all memory to
> > disk), then does the disk snapshot, then restores the managed saved
> > (which loads all memory from disk).
> > 
> > This is terrible for multiple reasons
> > 
> >   1. the guest workload stops running while snapshot is taken
> >   2. we churn disk I/O saving & loading VM memory
> >   3. you can't do it at all if host PCI devices are at

Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 11:01:23AM -0400, Sean Dague wrote:
> On 09/20/2016 10:38 AM, Daniel P. Berrange wrote:
> > On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
> >> This is a bit delayed due to the release rush, finally getting back to
> >> writing up my experiences at the Ops Meetup.
> >>
> >> Nova Feedback Session
> >> =
> >>
> >> We had a double session for Feedback for Nova from Operators, raw
> >> etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
> >>
> >> The median release people were on in the room was Kilo. Some were
> >> upgrading to Liberty, many had older than Kilo clouds. Remembering
> >> these are the larger ops environments that are engaged enough with the
> >> community to send people to the Ops Meetup.
> >>
> >>
> >> Performance Bottlenecks
> >> ---
> >>
> >> * scheduling issues with Ironic - (this is a bug we got through during
> >>   the week after the session)
> >> * live snapshots actually end up performance issue for people
> >>
> >> The workarounds config group was not well known, and everyone in the
> >> room wished we advertised that a bit more. The solution for snapshot
> >> performance is in there
> >>
> >> There were also general questions about what scale cells should be
> >> considered at.
> >>
> >> ACTION: we should make sure workarounds are advertised better
> > 
> > Workarounds ought to be something that admins are rarely, if
> > ever, having to deal with.
> > 
> > If the lack of live snapshot is such a major performance problem
> > for ops, this tends to suggest that our default behaviour is wrong,
> > rather than a need to publicise that operators should set this
> > workaround.
> > 
> > eg, instead of optimizing for the case of a broken live snapshot
> > support by default, we should optimize for the case of working
> > live snapshot by default. The broken live snapshot stuff was so
> > rare that no one has ever reproduced it outside of the gate
> > AFAIK.
> > 
> > IOW, rather than hardcoding disable_live_snapshot=True in nova,
> > we should just set it in the gate CI configs, and leave it set
> > to False in Nova, so operators get good performance out of the
> > box.
> > 
> > Also it has been a while since we added the workaround, and IIRC,
> > we've got newer Ubuntu available on at least some of the gate
> > hosts now, so we have the ability to test to see if it still
> > hits newer Ubuntu. 
> 
> Here is my reconstruction of the snapshot issue from what I can remember
> of the conversation.
> 
> Nova defaults to live snapshots. This uses the libvirt facility which
> dumps both memory and disk. And then we throw away the memory. For large
> memory guests (especially volume backed ones that might have a fast path
> for the disk) this leads to a lot of overhead for no gain. The
> workaround got them past it.

I think you've got it backwards there.

Nova defaults to *not* using live snapshots:

cfg.BoolOpt(
    'disable_libvirt_livesnapshot',
    default=True,
    help="""
Disable live snapshots when using the libvirt driver.
...""")


When live snapshot is disabled like this, the snapshot code is unable
to guarantee a consistent disk state. So the libvirt nova driver will
stop the guest by doing a managed save (this saves all memory to
disk), then does the disk snapshot, then restores the managed save
(which loads all memory from disk).

This is terrible for multiple reasons:

  1. the guest workload stops running while the snapshot is taken
  2. we churn disk I/O saving & loading VM memory
  3. you can't do it at all if host PCI devices are attached to
     the VM

Enabling live snapshots by default fixes all these problems, at the
risk of hitting the live snapshot bug we saw in the gate CI but never
anywhere else.
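
For anyone who wants the better behaviour today, the override is a one line
nova.conf change (the option lives in the workarounds group quoted above):

  [workarounds]
  # Re-enable libvirt live snapshots; the upstream default currently
  # leaves them disabled.
  disable_libvirt_livesnapshot = False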

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 09:20:15AM -0400, Sean Dague wrote:
> This is a bit delayed due to the release rush, finally getting back to
> writing up my experiences at the Ops Meetup.
> 
> Nova Feedback Session
> =
> 
> We had a double session for Feedback for Nova from Operators, raw
> etherpad here - https://etherpad.openstack.org/p/NYC-ops-Nova.
> 
> The median release people were on in the room was Kilo. Some were
> upgrading to Liberty, many had older than Kilo clouds. Remembering
> these are the larger ops environments that are engaged enough with the
> community to send people to the Ops Meetup.
> 
> 
> Performance Bottlenecks
> ---
> 
> * scheduling issues with Ironic - (this is a bug we got through during
>   the week after the session)
> * live snapshots actually end up performance issue for people
> 
> The workarounds config group was not well known, and everyone in the
> room wished we advertised that a bit more. The solution for snapshot
> performance is in there
> 
> There were also general questions about what scale cells should be
> considered at.
> 
> ACTION: we should make sure workarounds are advertised better

Workarounds ought to be something that admins rarely, if
ever, have to deal with.

If the lack of live snapshot is such a major performance problem
for ops, this tends to suggest that our default behaviour is wrong,
rather than a need to publicise that operators should set this
workaround.

eg, instead of optimizing by default for the case of broken live
snapshot support, we should optimize by default for the case of
working live snapshot support. The broken live snapshot behaviour was
so rare that no one has ever reproduced it outside of the gate
AFAIK.

IOW, rather than hardcoding disable_live_snapshot=True in nova,
we should just set it in the gate CI configs, and leave it set
to False in Nova, so operators get good performance out of the
box.

Also it has been a while since we added the workaround, and IIRC,
we've got newer Ubuntu available on at least some of the gate
hosts now, so we have the ability to test to see if it still
hits newer Ubuntu. 


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-20 Thread Daniel P. Berrange
On Tue, Sep 20, 2016 at 12:48:49PM +0200, Kashyap Chamarthy wrote:
> The said patch in question fixes a CVE[x] in stable/liberty.
> 
> We currently have two options, both of them have caused an impasse with
> the Nova upstream / stable maintainers.  We've had two-ish months to
> mull over this.  I'd prefer to get this out of a limbo, & bring this to
> a logical conclusion.
> 
> The two options at hand:
> 
> (1) Nova backport from master (that also adds a check for the presence
> of 'ProcessLimits' attribute which is only present in
> oslo.concurrency>=2.6.1; and a conditional check for 'prlimit'
> parameter in qemu_img_info() method.)
> 
> https://review.openstack.org/#/c/327624/ -- "virt: set address space
> & CPU time limits when running qemu-img"
> 
> (2) Or bump global-requirements for 'oslo.concurrency'
> 
> https://review.openstack.org/#/c/337277/5 -- Bump
> 'global-requirements' for 'oslo.concurrency' to 2.6.1

Actually we have 3 options

  (3) Do nothing, leave the bug unfixed in stable/liberty

While this is a security bug, it is one that has existed in every single
openstack release ever, and it is not a particularly severe bug. Even if
we fixed it in liberty, it would still remain unfixed in every release before
liberty. We're on the verge of releasing Newton, at which point liberty
becomes less relevant. So I question whether it is worth spending more
effort on dealing with this in liberty upstream.  Downstream vendors
still have the option to do either (1) or (2) in their own private
branches if they so desire, regardless of whether we fix it upstream.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-08 Thread Daniel P. Berrange
On Thu, Sep 08, 2016 at 10:24:09AM +0200, Thierry Carrez wrote:
> Avishay Traeger wrote:
> > There are a number of drivers that require closed-source tools to
> > communicate with the storage.  3 others that I've come across recently:
> > 
> >   * EMC VNX: requires Navisphere CLI v7.32 or higher
> >   * Hitachi storage volume driver: requires RAID Manager Ver 01-32-03/01
> > or later for VSP G1000/VSP/HUS VM, Hitachi Storage Navigator Modular
> > 2 (HSNM2) Ver 27.50 or later for HUS 100 Family
> >   * Infortrend driver: requires raidcmd ESDS10
> 
> If those proprietary dependencies are required for operation, those
> would probably violate our licensing policy[1] and should probably be
> removed:
> 
> "In order to be acceptable as dependencies of OpenStack projects,
> external libraries (produced and published by 3rd-party developers) must
> be licensed under an OSI-approved license that does not restrict
> distribution of the consuming project. The list of acceptable licenses
> includes ASLv2, BSD (both forms), MIT, PSF, LGPL, ISC, and MPL. Licenses
> considered incompatible with this requirement include GPLv2, GPLv3, and
> AGPL."

That policy is referring to libraries (ie, python modules that we'd
actually "import" at the python level), while the list above seems to be
referring to external command line tools that we merely invoke from the
python code. From a license compatibility POV there's no problem, as there's
a boundary between the open source openstack code and the closed source
external program. Talking to a closed source external command over stdio
is conceptually no different from talking to a closed source server over
some remote API.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Interface in PEER2PEER live migration

2016-08-25 Thread Daniel P. Berrange
On Thu, Aug 25, 2016 at 12:01:40PM +0200, Alberto Planas Dominguez wrote:
> On Wed, 2016-08-24 at 11:18 -0400, Daniel P. Berrange wrote:
> > On Wed, Aug 24, 2016 at 05:07:50PM +0200, Alberto Planas Dominguez
> > wrote:
> 
> Daniel, thanks for the fast reply!!
> 
> > > Unfortunately was closed as invalid, and the solution provided is
> > > completely unrelated. The solution suggested is based on
> > > `live_migration_inbound_addr`, that is related with the libvirtd
> > > URI,
> > > not the qmeu one. I tested several times and yes, this solution is
> > > not
> > > related with the problem.
> > 
> > The "live_migration_inbound_addr" configuration parameters was
> > intended
> > to affect both libvirt & QEMU traffic. If that is not working
> > correctly,
> > then we should be fixing that, nto adding yet another parameter.
> 
> The code in libvirt is very clear: if uri_in is NULL will ask to the
> hostname to the other side. I checked the code in 1.2.18:
> 
> https://github.com/libvirt/libvirt/blob/v1.2.18-maint/src/qemu/qemu_mig
> ration.c#L3601
> 
> https://github.com/libvirt/libvirt/blob/v1.2.18-maint/src/qemu/qemu_mig
> ration.c#L3615
> 
> The same logic is in master:
> 
> https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu_migration.
> c#L4013
> 
> But we can go back to 0.9.12:
> 
> https://github.com/libvirt/libvirt/blob/v0.9.12-maint/src/qemu/qemu_mig
> ration.c#L1472
> 
> Nova set migration_uri parameter to None, that this means that uri_in
> is NULL.
> 
> How can I affect the the QEMU part? The code path AAIU is: if we do not
> set miguri (migrateToURI2) or migrate_uri (migrateToURI3), is a
> uri_in=NULL.
> 
> I am not familiar with libvirt code, please, help me to find how I can
> affect this uri_in parameter to have a value different from the
> hostname of the other node, without setting the correct value in
> migrateToURI[23] in the Nova side.

I think where the confusion is coming from is that libvirt will work in two
different ways with P2P migration. If the TUNNELLED flag is set, then the
migration data will go over the libvirtd <-> libvirtd connection, which is
influenced by the live_migration_inbound_addr parameter. If the TUNNELLED
flag is not set, the data goes QEMU <-> QEMU directly, and that needs the
extra URI set.

What we need to do is fix the Nova code so that when the TUNNELLED flag
is *not* set, we also provide the extra URI, using the hostname/IP addr
listed in live_migration_inbound_addr, falling back to the compute hostname
if live_migration_inbound_addr is not set.
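
A sketch of what that means at the libvirt-python level (illustrative only,
not the actual Nova patch; "migrate_uri" is the typed parameter name libvirt
uses for the QEMU-level migration URI):

  import libvirt

  def migrate(dom, dest_hostname, inbound_addr=None, tunnelled=False):
      flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
      params = {}
      if tunnelled:
          # migration data rides the libvirtd <-> libvirtd connection
          flags |= libvirt.VIR_MIGRATE_TUNNELLED
      else:
          # QEMU <-> QEMU channel: honour the configured inbound address,
          # falling back to the destination compute hostname
          params["migrate_uri"] = "tcp://%s" % (inbound_addr or dest_hostname)
      dom.migrateToURI3("qemu+tcp://%s/system" % dest_hostname, params, flags)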


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Interface in PEER2PEER live migration

2016-08-24 Thread Daniel P. Berrange
On Wed, Aug 24, 2016 at 05:07:50PM +0200, Alberto Planas Dominguez wrote:
> Unfortunately was closed as invalid, and the solution provided is
> completely unrelated. The solution suggested is based on
> `live_migration_inbound_addr`, that is related with the libvirtd URI,
> not the qmeu one. I tested several times and yes, this solution is not
> related with the problem.

The "live_migration_inbound_addr" configuration parameters was intended
to affect both libvirt & QEMU traffic. If that is not working correctly,
then we should be fixing that, nto adding yet another parameter.

> 
> I worked in a patch for mater here:
> 
> https://review.openstack.org/#/c/356558/
> 
> This patch worked as expected. This create a second URI, based on the
> hostname given in live_migration_uri parameter, to build a second one
> that will be used by qemu/kvm for the second connection.
> 
> So, for example if:
> 
> live_migration_uri=qemu+tcp://fast.%s/system
> 
> this patch will create a second uri:
> 
> migrate_uri=tcp://fast.%s/

While you can do that hack, the fact that it works is simply luck - it
certainly was not designed with this kind of usage in mind. We would
in fact like to remove the live_migration_uri config parameter entirely
and have the libvirt driver automatically use the correct URI. As such,
adding further URI config parameters is not a direction we want to go
in.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Daniel P. Berrange
On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
> Hi Novas and anyone interested in how to represent capabilities in a
> consistent fashion.
> 
> I spent an hour creating a new os-capabilities Python library this evening:
> 
> http://github.com/jaypipes/os-capabilities
> 
> Please see the README for examples of how the library works and how I'm
> thinking of structuring these capability strings and symbols. I intend
> os-capabilities to be the place where the OpenStack community catalogs and
> collates standardized features for hardware, devices, networks, storage,
> hypervisors, etc.
> 
> Let me know what you think about the structure of the library and whether
> you would be interested in owning additions to the library of constants in
> your area of expertise.

How are you expecting these constants to be used? It seems unlikely
that, say, the nova code is going to be explicitly accessing any of the
individual CPU flag constants. It should surely just be entirely metadata
driven - eg the libvirt driver would just parse the libvirt capabilities XML,
extract all the CPU flag strings & simply export them. It would be very
undesirable to have to add new code to os-capabilities every time that
Intel/AMD create new CPU flags for new features, and force users to upgrade
openstack to be able to express requirements on those CPU flags.
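
To illustrate the metadata-driven approach (a sketch assuming libvirt-python;
os-capabilities may of course end up structuring this differently):

  import xml.etree.ElementTree as ET
  import libvirt

  conn = libvirt.open("qemu:///system")
  caps = ET.fromstring(conn.getCapabilities())
  # Export whatever CPU feature names the host reports, with no per-flag
  # constants hardcoded anywhere.
  cpu_flags = sorted(f.get("name") for f in caps.findall("./host/cpu/feature"))
  print(cpu_flags)   # eg ['aes', 'avx', 'vmx', ...] depending on the host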

> Next steps for the library include:
> 
> * Bringing in other top-level namespaces like disk: or net: and working with
> contributors to fill in the capability strings and symbols.
> * Adding constraints functionality to the library. For instance, building in
> information to the os-capabilities interface that would allow a set of
> capabilities to be cross-checked for set violations. As an example, a
> resource provider having DISK_GB inventory cannot have *both* the disk:ssd
> *and* the disk:hdd capability strings associated with it -- clearly the disk
> storage is either SSD or spinning disk.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] test strategy for the serial console feature

2016-08-11 Thread Daniel P. Berrange
On Thu, Aug 11, 2016 at 04:46:02PM +0200, Markus Zoeller wrote:
> On 11.08.2016 13:31, Daniel P. Berrange wrote:
> > On Thu, Aug 11, 2016 at 07:19:42AM -0400, Sean Dague wrote:
> >> On 08/11/2016 05:45 AM, Markus Zoeller wrote:
> >>> On 26.07.2016 12:16, Jordan Pittier wrote:
> >>>> Hi Markus
> >>>> You don"t really need a whole new job for this. Just turn that flag to 
> >>>> True
> >>>> on existing jobs.
> >>>>
> >>>> 30/40 seconds is acceptable. But I am surprised considering a VM usually
> >>>> boots in 5 sec or so. Any idea of where that slowdown comes from ?
> >>>>
> >>>> On Tue, Jul 26, 2016 at 11:50 AM, Markus Zoeller <
> >>>> mzoel...@linux.vnet.ibm.com> wrote:
> >>
> >> We just had a big chat about this in the #openstack-nova IRC channel. To
> >> summarize:
> >>
> >> The class of bugs that are really problematic are:
> >>
> >>  * https://bugs.launchpad.net/nova/+bug/1455252 - Launchpad bug 1455252
> >> in OpenStack Compute (nova) "enabling serial console breaks live
> >> migration" [High,In progress] - Assigned to sahid (sahid-ferdjaoui)
> >>
> >> * https://bugs.launchpad.net/nova/+bug/1595962 - Launchpad bug 1595962
> >> in OpenStack Compute (nova) "live migration with disabled vnc/spice not
> >> possible" [Undecided,In progress] - Assigned to Markus Zoeller
> >> (markus_z) (mzoeller)
> >>
> >> Which are both in the category of serial console breaking live
> >> migration. It's the serial device vs. live migration that's most
> >> problematic. Serial consoles themselves haven't broken badly recently.
> >> Given that we don't do live migration testing in most normal jobs, the
> >> Tempest jobs aren't really going to help here.
> >>
> >> The dedicated live-migration job is being targeted.
> >>
> >> Serial console support is currently a function at the compute level.
> >> Which is actually a little odd. Because it means that all guests on a
> >> compute must be serial console, or must not. Imagine a compute running
> >> Linux, Windows, FreeBSD guests. It's highly unlikely that you want to
> >> force serial console one way or another on all of those the same way.
> >> This is probably something that makes sense to add as an image
> >> attribute, because images will need guest configuration to support
> >> serial consoles. As an image attribute this would also help on testing
> >> because we could mix / match in a single run.
> > 
There are actually image properties for this, but the way it is all
implemented right now is just insane.
> 
> You're talking about "hw_serial_port_count" I assume? I'm not aware of
> any other property for that. Sean was talking about enabling the serial
> console per image/flavor property IIUC.

If hw_serial_port_count allowed a value of '0', and we fixed the
problem of the extra random serial port with type=pty that we create,
then we'd get the full per-image serial console support Sean
wants when nova.conf is set to serial_console.enabled=True.

We could also discuss / notify users that we're intending to change
that nova.conf option to default to True in a future release, and
then eventually delete the nova.conf setting entirely.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] test strategy for the serial console feature

2016-08-11 Thread Daniel P. Berrange
On Thu, Aug 11, 2016 at 07:19:42AM -0400, Sean Dague wrote:
> On 08/11/2016 05:45 AM, Markus Zoeller wrote:
> > On 26.07.2016 12:16, Jordan Pittier wrote:
> >> Hi Markus
> >> You don"t really need a whole new job for this. Just turn that flag to True
> >> on existing jobs.
> >>
> >> 30/40 seconds is acceptable. But I am surprised considering a VM usually
> >> boots in 5 sec or so. Any idea of where that slowdown comes from ?
> >>
> >> On Tue, Jul 26, 2016 at 11:50 AM, Markus Zoeller <
> >> mzoel...@linux.vnet.ibm.com> wrote:
> 
> We just had a big chat about this in the #openstack-nova IRC channel. To
> summarize:
> 
> The class of bugs that are really problematic are:
> 
>  * https://bugs.launchpad.net/nova/+bug/1455252 - Launchpad bug 1455252
> in OpenStack Compute (nova) "enabling serial console breaks live
> migration" [High,In progress] - Assigned to sahid (sahid-ferdjaoui)
> 
> * https://bugs.launchpad.net/nova/+bug/1595962 - Launchpad bug 1595962
> in OpenStack Compute (nova) "live migration with disabled vnc/spice not
> possible" [Undecided,In progress] - Assigned to Markus Zoeller
> (markus_z) (mzoeller)
> 
> Which are both in the category of serial console breaking live
> migration. It's the serial device vs. live migration that's most
> problematic. Serial consoles themselves haven't broken badly recently.
> Given that we don't do live migration testing in most normal jobs, the
> Tempest jobs aren't really going to help here.
> 
> The dedicated live-migration job is being targeted.
> 
> Serial console support is currently a function at the compute level.
> Which is actually a little odd. Because it means that all guests on a
> compute must be serial console, or must not. Imagine a compute running
> Linux, Windows, FreeBSD guests. It's highly unlikely that you want to
> force serial console one way or another on all of those the same way.
> This is probably something that makes sense to add as an image
> attribute, because images will need guest configuration to support
> serial consoles. As an image attribute this would also help on testing
> because we could mix / match in a single run.

There are actually image properties for this, but the way it is all
implemented right now is just insane.

For QEMU/KVM (on x86) currently, by default you get

 - a serial port which is connected to a file
 - a serial port which is connected to a pty

If you turn on the serial_console option in nova.conf you instead get

 - one or more serial ports connected to a tcp port
 - a serial port which is connected to a pty

The number of serial ports is based on an image property
(hw_serial_port_count), but strangely the code doesn't honour a
value of 0 for that. In addition, the last serial port connected
to a pty should really not even exist at that point.

We should aim to get to a place where we have 'serial_console.enabled'
default to True in nova.conf and hw_serial_port_count setting how many
are created, with 0 being a valid number. Never create any other serial
ports that were not requested.
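
As a rough sketch of that end state, using the option & property names
above (the exact glance CLI syntax here is an assumption, for
illustration only):

  # nova.conf on the compute node - serial consoles enabled by default
  [serial_console]
  enabled = True

  # per-image control over how many TCP-backed serial ports get created;
  # 0 would mean "no serial ports at all" once that value is honoured
  $ glance image-update --property hw_serial_port_count=2 WINDOWS_IMAGE_UUID
  $ glance image-update --property hw_serial_port_count=0 LINUX_IMAGE_UUID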

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Nova mascot

2016-08-09 Thread Daniel P. Berrange
On Tue, Aug 09, 2016 at 11:26:21AM -0500, Matt Riedemann wrote:
> On 8/8/2016 4:10 PM, Clint Byrum wrote:
> > Excerpts from Matt Riedemann's message of 2016-08-08 14:35:12 -0500:
> > > Not to be a major curmudgeon but I think we'd basically decided at the
> > > midcycle (actually weeks before) that Nova wasn't doing the mascot thing.
> > > 
> > 
> > Could you maybe summarize the reason for this decision?
> > 
> > Seems like everybody else is taking this moment to look inward and
> > think about how they want to be seen. Why wouldn't Nova want to take an
> > opportunity to do the same?
> > 
> 
> idk, I'm open to it I guess if people are really passionate about picking a
> mascot, but for the most part when this has come up we've basically had
> jokes about horses asses and such.
> 
> Personally this feels like mandatory fun and I'm usually not interested in
> stuff like that.

It also ends up creating new problems that we then have to spend time on for
no obviously clear benefit. eg we're going to have to collect a list of proposed
mascots, check them with legal to make sure they don't clash with mascots
used by other software companies with squadrons of attack lawyers, then
arrange voting on them. Even after all that we'll probably find out later
that in some culture the mascot we've chosen has negative connotations
associated with it. All this will do nothing to improve life for people
who actually deploy and use nova, so it's all rather a waste of time IMHO.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

2016-08-03 Thread Daniel P. Berrange
On Tue, Aug 02, 2016 at 02:36:32PM +, Koniszewski, Pawel wrote:
> In Mitaka development cycle 'live_migration_flag' and 'block_migration_flag'
> have been marked as deprecated for removal. I'm working on a patch [1] to
> remove both of them and want to ask what we should do with the
> live_migration_tunnelled logic.
> 
> The default configuration of both flags contains the VIR_MIGRATE_TUNNELLED
> option. It is there to avoid the need to configure the network to allow
> direct communication between hypervisors. However, the tradeoff is that it
> slows down all migrations by up to 80% due to the increased number of memory
> copies and the single-threaded encryption mechanism in Libvirt. By 80% here
> I mean that transfer between source and destination node is around 2Gb/s on
> a 10Gb network. I believe that this is a configuration issue and people
> deploying OpenStack are not aware that live migrations with this flag will
> not work. I'm not sure that this is something we wanted to achieve. AFAIK
> most operators are turning it OFF in order to make live migration usable.

FYI, when you have post-copy migration active, live migration *will* still work.

> Going to a new flag that is there to keep possibility to turn tunneling on -
> Live_migration_tunnelled [2] which is a tri-state boolean - None, False, True:
> 
> * True - means that live migrations will be tunneled through libvirt.
> * False - no tunneling, native hypervisor transport.
> * None - nova will choose default based on, e.g., the availability of native
>   encryption support in the hypervisor. (Default value)
> 
> Right now we don't have any logic implemented for the None value, which is
> the default. So the question here is: should I implement logic so that if
> live_migration_tunnelled=None it will still use VIR_MIGRATE_TUNNELLED if
> native encryption is not available? Given the impact of this flag I'm not
> sure that we really want to keep it there. Another option is to change the
> default value of live_migration_tunnelled to be True. In both cases we will
> again end up with slower LM and people complaining that LM does not work at
> all in OpenStack.

FWIW, I have compared libvirt tunnelled migration with TLS against native QEMU
TLS encryption and the performance is approximately the same. In both cases the
bottleneck is how fast the CPU can perform AES and we're maxing out a single
thread for that. IOW, there's no getting away from the fact that encryption is
going to have a performance impact on migration when you get into range of
10-Gig networking.

So the real question is whether we want to default to a secure or an insecure
configuration. If we default to secure config then, in future with native QEMU
TLS, this will effectively force those deploying nova to deploy x509 certs for
QEMU before they can use live migration. This would be akin to having our
default deployment of the public REST API mandate HTTPS and not listen on
HTTP out of the box. IIUC, we default to HTTP for REST APIs out of the box,
which would suggest doing the same for migration and defaulting to
non-encrypted. This would mean we do *not* need to set TUNNELLED by default.

Second, with some versions of QEMU, it is *not* possible to use tunnelled
migration in combination with block migration. We don't want to have normal
live migration and block live migration use different settings. This strongly
suggests *not* defaulting to tunnelled.

So all three points (performance, x509 deployment requirements, and block
migration limitations) point to not having TUNNELLED in the default flags,
and leaving it as an opt-in.
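
For completeness, a minimal sketch of what the opt-in looks like in
nova.conf (using the tri-state flag discussed above):

  [libvirt]
  # opt in to tunnelling the migration stream through libvirtd: slower
  # due to the extra memory copies, but encrypted today and no extra
  # hypervisor-to-hypervisor ports to open
  live_migration_tunnelled = True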

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Daniel P. Berrange
On Mon, Jul 25, 2016 at 08:22:52AM -0400, Sean Dague wrote:
> On 07/25/2016 08:05 AM, Daniel P. Berrange wrote:
> > On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
> >> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
> >>> On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
> >>>> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> >>>>
> >> I agree that it's not a bug. I also agree that it helps in some specific
> >> types of tests which are doing some kind of input validation (like the
> >> patch you've proposed) or are simply iterating over some list of values
> >> (status values on a server instance for example).
> >>
> >> Using DDT in Nova has come up before and one of the concerns was hiding
> >> details in how the tests are run with a library, and if there would be a
> >> learning curve. Depending on the usage, I personally don't have a problem
> >> with it. When I used it in manila it took a little getting used to but I
> >> was basically just looking at existing tests and figuring out what they
> >> were doing when adding new ones.
> >>>
> >>> I don't think there's significant learning curve there - the way it
> >>> lets you annotate the test methods is pretty easy to understand and
> >>> the ddt docs spell it out clearly for newbies. We've far worse things
> >>> in our code that create a hard learning curve which people will hit
> >>> first :-)
> >>>
> >>> People have essentially been re-inventing ddt in nova tests already
> >>> by defining one helper method and then having multiple test methods
> >>> all calling the same helper with a different dataset. So ddt is just
> >>> formalizing what we're already doing in many places, with less code
> >>> and greater clarity.
> >>>
> >>>> I definitely think DDT is easier to use/understand than something like
> >>>> testscenarios, which we're already using in Nova.
> >>>
> >>> Yeah, testscenarios feels a little over-engineered for what we want most
> >>> of the time.
> >>
> >> Except, DDT is way less clear (and deterministic) about what's going on
> >> with the test name munging. Which means failures are harder to track
> >> back to individual tests and data load. So debugging the failures is 
> >> harder.
> > 
> > I'm not sure what you think is unclear - given an annotated test:
> > 
> >@ddt.data({"foo": "test", "availability_zone": "nova1"},
> >   {"name": "  test  ", "availability_zone": "nova1"},
> >   {"name": "", "availability_zone": "nova1"},
> >   {"name": "x" * 256, "availability_zone": "nova1"},
> >   {"name": "test", "availability_zone": "x" * 256},
> >   {"name": "test", "availability_zone": "  nova1  "},
> >   {"name": "test", "availability_zone": ""},
> >   {"name": "test", "availability_zone": "nova1", "foo": "bar"})
> > def test_create_invalid_create_aggregate_data(self, value):
> > 
> > It generates one test for each data item:
> > 
> >  test_create_invalid_create_aggregate_data_1
> >  test_create_invalid_create_aggregate_data_2
> >  test_create_invalid_create_aggregate_data_3
> >  test_create_invalid_create_aggregate_data_4
> >  test_create_invalid_create_aggregate_data_5
> >  test_create_invalid_create_aggregate_data_6
> >  test_create_invalid_create_aggregate_data_7
> >  test_create_invalid_create_aggregate_data_8
> > 
> > This seems about as obvious as you can possibly get
> 
> At least when this was attempted to be introduced into Tempest, the
> naming was a lot less clear, maybe it got better. But I still think
> milestone 3 isn't the time to start a thing like this.

Historically we've allowed patches that improve / adapt unit tests
to be merged at any time that we're not in final bug-fix-only freeze
periods. So on this basis, I'm happy to see this accepted now, especially
since the module is already in global requirements, so it is not a new thing
from an openstack POV.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-25 Thread Daniel P. Berrange
On Mon, Jul 25, 2016 at 07:57:08AM -0400, Sean Dague wrote:
> On 07/22/2016 11:30 AM, Daniel P. Berrange wrote:
> > On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
> >> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> >>
> >> I agree that it's not a bug. I also agree that it helps in some specific
> >> types of tests which are doing some kind of input validation (like the
> >> patch you've proposed) or are simply iterating over some list of values
> >> (status values on a server instance for example).
> >>
> >> Using DDT in Nova has come up before and one of the concerns was hiding
> >> details in how the tests are run with a library, and if there would be a
> >> learning curve. Depending on the usage, I personally don't have a problem
> >> with it. When I used it in manila it took a little getting used to but I
> >> was basically just looking at existing tests and figuring out what they
> >> were doing when adding new ones.
> > 
> > I don't think there's significant learning curve there - the way it
> > lets you annotate the test methods is pretty easy to understand and
> > the ddt docs spell it out clearly for newbies. We've far worse things
> > in our code that create a hard learning curve which people will hit
> > first :-)
> > 
> > People have essentially been re-inventing ddt in nova tests already
> > by defining one helper method and then having multiple test methods
> > all calling the same helper with a different dataset. So ddt is just
> > formalizing what we're already doing in many places, with less code
> > and greater clarity.
> > 
> >> I definitely think DDT is easier to use/understand than something like
> >> testscenarios, which we're already using in Nova.
> > 
> > Yeah, testscenarios feels a little over-engineered for what we want most
> > of the time.
> 
> Except, DDT is way less clear (and deterministic) about what's going on
> with the test name munging. Which means failures are harder to track
> back to individual tests and data load. So debugging the failures is harder.

I'm not sure what you think is unclear - given an annotated test:

   @ddt.data({"foo": "test", "availability_zone": "nova1"},
  {"name": "  test  ", "availability_zone": "nova1"},
  {"name": "", "availability_zone": "nova1"},
  {"name": "x" * 256, "availability_zone": "nova1"},
  {"name": "test", "availability_zone": "x" * 256},
  {"name": "test", "availability_zone": "  nova1  "},
  {"name": "test", "availability_zone": ""},
  {"name": "test", "availability_zone": "nova1", "foo": "bar"})
def test_create_invalid_create_aggregate_data(self, value):

It generates one test for each data item:

 test_create_invalid_create_aggregate_data_1
 test_create_invalid_create_aggregate_data_2
 test_create_invalid_create_aggregate_data_3
 test_create_invalid_create_aggregate_data_4
 test_create_invalid_create_aggregate_data_5
 test_create_invalid_create_aggregate_data_6
 test_create_invalid_create_aggregate_data_7
 test_create_invalid_create_aggregate_data_8

This seems about as obvious as you can possibly get
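
For anyone who hasn't used it, a fully self-contained sketch of the
pattern (the enclosing class just needs the @ddt.ddt decorator; the test
data and the assertion here are purely illustrative):

    import unittest

    import ddt

    @ddt.ddt
    class AggregateValidationTest(unittest.TestCase):

        @ddt.data({"name": "", "availability_zone": "nova1"},
                  {"name": "test", "availability_zone": ""})
        def test_create_invalid_aggregate_data(self, value):
            # each dict above becomes its own test case, named with the
            # _1, _2, ... suffixes shown in the list above
            self.assertIn("availability_zone", value)

    if __name__ == "__main__":
        unittest.main()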

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] Remove duplicate code using Data Driven Tests (DDT)

2016-07-22 Thread Daniel P. Berrange
On Thu, Jul 21, 2016 at 07:03:53AM -0700, Matt Riedemann wrote:
> On 7/21/2016 2:03 AM, Bhor, Dinesh wrote:
> > Hi Nova Devs,
> > 
> > Many times there are a number of data sets that we have to run the same
> > tests on, and creating a different test for each data set value is
> > time-consuming and inefficient.
> > 
> > Data Driven Testing [1] overcomes this issue. Data-driven testing (DDT)
> > is taking a test, parameterizing it and then running that test with
> > varying data. This allows you to run the same test case with many varying
> > inputs, therefore increasing coverage from a single test, reducing code
> > duplication, and easing error tracing as well.
> > 
> > DDT is a third party library that needs to be installed separately, and
> > the module is invoked when writing the tests. At present DDT is used in
> > cinder and rally.
> 
> There are several projects using it:
> 
> http://codesearch.openstack.org/?q=ddt%3E%3D1.0.1=nope==
> 
> I first came across it when working a little in manila.
> 
> > 
> > To start with, I have reported this as a bug [2] and added an initial
> > patch [3] for the same, but a couple of reviewers have suggested
> > discussing this on the ML as it is not a real bug. IMO this is not a
> > feature implementation, it's just an effort to simplify our tests, so
> > a blueprint will be sufficient to track its progress.
> > 
> > So please let me know whether I can file a new blueprint or nova-specs
> > to proceed with this.
> > 
> > [1] http://ddt.readthedocs.io/en/latest/index.html
> > [2] https://bugs.launchpad.net/nova/+bug/1604798
> > [3] https://review.openstack.org/#/c/344820/
> > 
> > Thank you,
> > 
> > Dinesh Bhor
> > 
> > 
> 
> I agree that it's not a bug. I also agree that it helps in some specific
> types of tests which are doing some kind of input validation (like the patch
> you've proposed) or are simply iterating over some list of values (status
> values on a server instance for example).
> 
> Using DDT in Nova has come up before and one of the concerns was hiding
> details in how the tests are run with a library, and if there would be a
> learning curve. Depending on the usage, I personally don't have a problem
> with it. When I used it in manila it took a little getting used to but I was
> basically just looking at existing tests and figuring out what they were
> doing when adding new ones.

I don't think there's significant learning curve there - the way it
lets you annotate the test methods is pretty easy to understand and
the ddt docs spell it out clearly for newbies. We've far worse things
in our code that create a hard learning curve which people will hit
first :-)

People have essentially been re-inventing ddt in nova tests already
by defining one helper method and then having multiple test methods
all calling the same helper with a different dataset. So ddt is just
formalizing what we're already doing in many places, with less code
and greater clarity.
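
To make that concrete, the hand-rolled pattern ddt replaces looks roughly
like this (class name, helper and the check inside it are invented purely
for illustration):

    import unittest

    class AggregateHelperStyleTest(unittest.TestCase):

        def _check_invalid_aggregate(self, body):
            # one shared helper carrying the real test logic; in nova this
            # would be the bit that calls the API / validator under test
            self.assertFalse(body.get("name") and body.get("availability_zone"))

        def test_create_with_blank_name(self):
            self._check_invalid_aggregate({"name": "",
                                           "availability_zone": "nova1"})

        def test_create_with_blank_az(self):
            self._check_invalid_aggregate({"name": "test",
                                           "availability_zone": ""})

    if __name__ == "__main__":
        unittest.main()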

> I definitely think DDT is easier to use/understand than something like
> testscenarios, which we're already using in Nova.

Yeah, testscenarios feels a little over-engineered for what we want most
of the time.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] FPGA as a dynamic nested resources

2016-07-21 Thread Daniel P. Berrange
On Thu, Jul 21, 2016 at 07:54:48AM +0200, Roman Dobosz wrote:
> On Wed, 20 Jul 2016 10:07:12 +0100
> "Daniel P. Berrange" <berra...@redhat.com> wrote:
> 
> Hey Daniel, thanks for the feedback.
> 
> > > Thoughts?
> > 
> > I'd suggest you'll increase your chances of success with nova design
> > approval if you focus on implementing a really simple usage scheme for
> > FPGA as the first step in Nova.
> 
> This. Maybe I'm wrong, but for me the minimal use case for FPGA would
> be the ability to schedule a VM which needs a certain accelerator, chosen
> from multiple potential ones, on an available FPGA/fixed slot. How insane
> does that sound?
> 
> Providing a fixed accelerator resource, prepared earlier by the DC
> administrator, doesn't bring much value beyond what we already have in
> Nova, since PCI/SR-IOV passthrough might be used for accelerators which
> expose their functionality via VFs.

IIUC, there are plenty of FPGAs which are not SRIOV based, so there's
still scope for Nova enhancement in this area.

The fact that some FPGAs are SRIOV & some are not, though, is also
why I'm suggesting that any work related to FPGA should be based around
refactoring of the existing PCI device assignment model to form a more
generic "Hardware device assignment" model.  If we end up having a
completely distinct data model for FPGAs that is a failure. We need to
have a generalized hardware assignment model that can be used for generic
PCI devices, NICs, FPGAs, TPMs, GPUs, etc regardless of whether they
are backed by SRIOV, or their own non-PCI virtual functions. Personally
I'll reject any spec proposal that ignores the existing PCI framework and
introduces a separate model for FPGA.

> > All the threads I've seen go well off into the weeds about trying to
> > solve everybody's niche/edge cases perfectly and as a result get
> > very complicated.
> 
> The topic is complicated :)

Which is why I'm advising not to try to solve the perfect case and instead
focus on getting something simple & good enough for the common case.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] FPGA as a dynamic nested resources

2016-07-20 Thread Daniel P. Berrange
On Tue, Jul 19, 2016 at 08:03:28PM +0200, Roman Dobosz wrote:
> Hi all,
> 
> Some time ago Jay Pipes published an etherpad [1] with ideas around
> modelling nested resources, taking NUMA as an example. I was also
> encouraged ;) to start this thread at the last Nova scheduler meeting.
> 
> I read the mentioned etherpad and what hit me was that the described
> scenario with NUMA cells resembles, to some extent, the way FPGAs can be
> managed.
> 
> A NUMA cell can be treated as a vessel for memory cells, and it is
> expressed as a number of MB. So it is possible to extract the
> information from existing data and add another level of aggregation
> using only a cleverly prepared SQL query.
> 
> I think the problem might be broader than using an existing, slightly
> tweaked model. If we take a look at the resources which an FPGA may
> expose, there can be a couple of levels, and each of them can be treated
> as a resource.
> 
> Three levels of FPGA resources can be identified, which can be nested one
> on the other:
> 
> 1. Whole FPGA. If a discrete FPGA is used, then even today it might be
>    passed through to the VM.
> 
> 2. Region in FPGA. Some of the FPGA models can be divided into regions
>    or slots. Also, for some models it is possible to (re)program such a
>    region individually - in this case there is a possibility to pass an
>    entire slot to the VM, so that it might be possible to reprogram
>    that slot and utilize the algorithm within the VM.
> 
> 3. Accelerator in region/FPGA. If there is an accelerator programmed
>    in the slot, it is possible that such an accelerator provides us with
>    Virtual Functions (similar to SR-IOV); then every available VF
>    can be treated as a resource.
> 
> 4. It might also be necessary to track every VF individually, although
>    I didn't assume it will be needed; nevertheless, with nested
>    resources it should be easy to handle.
> 
> The correlation between such resources is a bit different from NUMA -
> while in the NUMA case there is a possibility to either schedule a VM
> with some memory specified, or request memory within a NUMA cell, with
> FPGA, if a slot is taken or an accelerator is already programmed and
> used, there is no way to offer the FPGA as a whole to the tenant until
> all accelerators and slots are free.
> 
> I've followed Jay's idea about nested resources and, with the
> blueprint [2] regarding dynamic resources in mind, I've prepared how it
> fits in.

[snip lots of complicated modelling]

> Thoughts?

I'd suggest you'll increase your chances of success with nova design
approval if you focus on implementing a really simple usage scheme for
FPGA as the first step in Nova. All the threads I've seen go well off
into the weeds about trying to solve everybody's niche/edge cases
perfectly and as a result get very complicated.

For both NUMA and PCI dev assignment we got initial success by cutting
back scope and focusing on doing the minimum possible to satisfy
the 90% common use cases, and ignoring the less common 10% initially.
Yes this is not optimal, but it is good enough to keep most people
happy without introducing massive complexity into the designs & impl.

For FPGA, I'd like to see an initial proposal that assumes the FPGA
is pre-programmed & pre-divided into a fixed number of slots and simply
deals with this. This is similar to how we dealt with PCI SR-IOV initially
where we assumed the dev is in VF-mode only. Only later did we start to
add cleverness around switching VF vs PF mode. For FPGA I think any kind
of dynamic re-allocation/re-configuration is better done as a stage 2.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [glance][nova] Globally disabling hw_qemu_guest_agent support

2016-07-19 Thread Daniel P. Berrange
On Tue, Jul 19, 2016 at 12:51:07AM +, Daniel Russell wrote:
> Hi Erno,
> 
> For the size of team I am in I think it would work well but it feels like
> I am putting the security of Nova in the hands of Glance.

Yep, from an architectural pov it is not very good. Particularly in a
multi-hypervisor compute deployment you can have the situation where you
want to allow a property for one type of hypervisor but forbid it for another.

What we really need is the exact same image property security restrictions
implemented by nova-compute, so we can set up compute nodes to blacklist
certain properties.

> 
> What I was more after was a setting in Nova that says 'this hypervisor
> does not allow guest sockets and will ignore any attempt to create them',
> 'this hypervisor always creates guest sockets regardless of your choice',
> 'this hypervisor will respect whatever you throw in hw_qemu_guest_agent
> with a default of no', or 'this hypervisor will respect whatever you throw
> in hw_qemu_guest_agent with a default of yes'.  It feels like a more
> appropriate place to control and manage that kind of configuration.

Nope, there's no such facility right now - glance property protection
is the only real option. I'd be very much against adding a lockdown
which was specific to the guest agent too - if we did anything it would
be to have a generic property protection model in nova that mirrors what
glance supports.
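
For reference, the glance side of this is driven by its property
protections config file; a rough sketch (assuming the roles-based rule
format) that locks hw_qemu_guest_agent down to admins would look
something like:

  # /etc/glance/property-protections.conf
  # (enabled via property_protection_file in glance-api.conf)
  [hw_qemu_guest_agent]
  create = admin
  read = admin,_member_
  update = admin
  delete = admin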

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Openstack] Naming polls - and some issues

2016-07-12 Thread Daniel P. Berrange
On Tue, Jul 12, 2016 at 05:40:07PM +0800, Monty Taylor wrote:
> Hey all!
> 
> The poll emails for the P and Q naming have started to go out - and
> we're experiencing some difficulties. Not sure at the moment what's
> going on ... but we'll keep working on the issues and get ballots to
> everyone as soon as we can.

You'll need to re-send at least some emails, because the link I received
is wrong - the site just reports

  "Your voter key is invalid. You should have received a correct URL by email."

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova][SR-IOV][pci-passthrough] Reporting pci devices in hypervisor-show

2016-07-11 Thread Daniel P. Berrange
On Fri, Jul 08, 2016 at 03:45:10PM -0400, Jay Pipes wrote:
> On 07/08/2016 12:10 PM, Beliveau, Ludovic wrote:
> > I see a lot of values in having something like this for inventory
> > purposes and troubleshooting.
> > 
> > IMHO the information should be provided in two ways.
> > 
> > 1. Show PCI pools status per compute.  Currently the pools only have
> > information about how many devices are allocated in a pool ("count").
> > We should also derive from the pci_devices db table the number of PCI
> > devices that are available per pool (not just the number of allocated).
> > This information could be included in the hypervisor-show (or a new REST
> > API if this is found to be too noisy).
> > 
> > 2. More detailed information about each individual PCI devices (like you
> > are suggesting: parent device relationships, etc.).  This could be in a
> > separate REST API call.
> > 
> > We could even think about a third option where we could be showing
> > global PCI pools information for a whole region.
> > 
> > For discussions purposes, here's what pci_stats for a compute looks like
> > today:
> > {"count": 1, "numa_node": 0, "vendor_id": "8086", "product_id": "10fb",
> > "tags": {"dev_type": "type-PF", "physical_network": "default"}},
> > "nova_object.namespace": "nova"}
> > {"count": 3, "numa_node": 0, "vendor_id": "8086", "product_id": "10ed",
> > "tags": {"dev_type": "type-VF", "physical_network": "default"}},
> > "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}
> > 
> > Is there an intention to write a blueprint for this feature ?  If there
> > are interests, I don't mind working on it.
> 
> Personally, I hate the PCI device pool code and the whole concept of storing
> this aggregate "pool" information in the database (where it can easily
> become out of sync with the underlying PCI device records).

Yep, I really think we should avoid exposing this concept in our API,
at all costs. Aside from the issue you mention, there's a second
issue that our PCI device code is almost certainly going to have to
be generalized into host device code, since in order to support
TPMs, vGPUs and FPGAs, we're going to need to start tracking many
host devices which are not PCI. We should bear this in mind when
considering any public API exposure of PCI devices, as we don't want
to add an API that is immediately broken by the need to add non-PCI
devices.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all][oslo] Disable option value interpolation in oslo.config

2016-07-08 Thread Daniel P. Berrange
On Fri, Jul 08, 2016 at 10:14:20PM +0800, ChangBo Guo wrote:
> Hi ALL,
> 
> I have been working a bug [1] about option value interpolation in
> oslo.config[2], in short, option value interpolation can't handle password
> containing special characters from environment variable and I  proposed a
> fix of provide way to forbid option value interpolation explicitly[3].
> 
> copy of Doug Hellmann's coments:
> 
> "The problem is that the end user who is setting the value of the option
> cannot control whether the option will do interpolation or not. So the
> programmer who defines the option has to make that choice, and then we
> can't change it because that would break existing deployments. The result
> is that end users won't know for any given option whether interpolation
> works or not, and if not why (did they do something wrong, or is it not
> supported).
> 
> I've set -2 on this patch because I think it's a bad approach. I see 2
> other ways we could solve the problem you describe (and I agree that it's
> an issue we should help with).
> 
> 1. We could have an option that turns off interpolation globally, and let
> the user control that by setting the flag in their configuration file. I'm
> not sure I like this, but it does give you what you're looking for at the
> risk of breaking applications that are relying on interpolation, like the
> nova example.
> 
> 2. We could disable interpolation when we get values from environment
> variables. That would be a big behavioral change, so we would need to think
> about how to roll it out carefully. For example, do we provide a helper
> function to give to application developers who are setting default values
> to environment variables so the variable value can be escaped to avoid
> interpolation? Or do we build it into the Opt class somehow? I think I like
> the helper function approach but we should give it more thought."
> I would like to collect more suggestions to decide the direction for fixing
> similar bugs. Any thoughts?

I don't see a compelling need to change oslo behaviour wrt interpolation
at all, given all the options suggested here break compatibility in some
manner or another.

The current behaviour should just have its error reporting fixed, so that
it explicitly tells the user that the env var it tried to interpolate
contains invalid characters, instead of printing an incomprehensible
stack trace.
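
To illustrate the failure mode: oslo.config treats '$' in a value as a
reference to another option, so a password with a '$' in it needs
escaping; IIRC the escape is simply to double it, e.g.:

  [keystone_authtoken]
  # a literal '$' has to be written as '$$', otherwise oslo.config tries
  # to interpolate '$ecret' as the value of another option
  password = sup3r$$ecret     # resolves to the literal string "sup3r$ecret"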


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][nova-docker] Retiring nova-docker project

2016-07-08 Thread Daniel P. Berrange
On Fri, Jul 08, 2016 at 10:11:59AM +0200, Thierry Carrez wrote:
> Matt Riedemann wrote:
> > [...]
> > Expand the numbers to 6 months and you'll see only 13 commits.
> > 
> > It's surprisingly high in the user survey (page 39):
> > 
> > https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> > 
> > So I suspect most users/deployments are just running their own forks.
> 
> Why ? Is it completely unusable as it stands ? 13 commits in 6 months sounds
> like enough activity to keep something usable (if it was usable in the first
> place). We have a lot of (official) projects and libraries with less
> activity than that :)
> 
> I'm not sure we should be retiring an unofficial project if it's usable,
> doesn't have critical security issues and is used by a number of people...
> Now, if it's unusable and abandoned, that's another story.

Nova explicitly provides *zero* stable APIs for out of tree drivers to
use. Changes to Nova internals will reliably break out of tree drivers
at least once during a development cycle, often more. So you really do
need someone committed to updating out of tree drivers to cope with the
fact that they're using an explicitly unstable API. We actively intend
to keep breaking out of tree drivers as often as suits Nova's best
interests.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-04 Thread Daniel P. Berrange
On Sun, Jul 03, 2016 at 10:08:04AM -0500, Matt Riedemann wrote:
> I want to use the gate-tempest-dsvm-neutron-full-ssh in nova since it runs
> ssh validation + neutron + config drive + metadata service, which will test
> the virtual device tagging 2.32 microversion API (added last week).
> 
> The job has a file injection test that fails consistently which is keeping
> it from being voting.
> 
> After debugging, the problem is the files to inject are silently ignored
> because n-cpu is configured with libvirt.inject_partition=-2 by default.
> That disables file injection:
> 
> https://github.com/openstack/nova/blob/faf50a747e03873c3741dac89263a80112da915a/nova/virt/libvirt/driver.py#L3030
> 
> We don't even log a warning if the user requested files to inject and we
> can't honor it. If I were a user and tried to inject files when creating a
> server but they didn't show up in the guest, I'd open a support ticket
> against my cloud provider. So I don't think a warning (that only the admin
> sees) is sufficient here. This isn't something that's discoverable from the
> API either, it's really host configuration / capability (something we still
> need to tackle).

Won't the user-provided files also get made available by the config drive /
metadata service?  I think that's the primary reason for file injection not
being a fatal problem. Oh, that and the fact that we've wanted to kill it for
at least 3 years now :-)
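
i.e. the same content should already be reachable via the config drive /
metadata route when the server is booted with something along these lines
(CLI flags from memory, treat the exact syntax as an assumption):

  $ nova boot --image cirros --flavor m1.tiny \
      --file /etc/motd=/tmp/motd.txt \
      --config-drive true injected-file-test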

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Daniel P. Berrange
On Fri, Jul 01, 2016 at 02:13:46PM +, Jeremy Stanley wrote:
> Have you considered just writing a throwaway devstack-gate change
> which overrides the gate_hook to run that one suspect Tempest test,
> say, a thousand times in a loop? Would be far more efficient if you
> don't need to be concerned with all the environment setup/teardown
> overhead.

Mmm, that's a possibility for initial reproducibility. We've now seen
that this looks like some kind of kernel / iscsi problem, so in this
particular case I think we really do need to set up / tear down fresh
machines to ensure a "sane" initial kernel state.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-07-01 Thread Daniel P. Berrange
On Fri, Jul 01, 2016 at 02:35:34PM +, Jeremy Stanley wrote:
> On 2016-07-01 15:39:10 +0200 (+0200), Kashyap Chamarthy wrote:
> > [Snip description of some nice debugging.]
> > 
> > > I'd really love it if there was
> > > 
> > >  1. the ability to request checking of just specific jobs eg
> > > 
> > >   "recheck gate-tempest-dsvm-multinode-full"
> > 
> > Yes, this would really be desirable.  I recall once asking this exact
> > question on #openstack-infra, but can't find Infra team's response to
> > that.
> 
> The challenge here is that you want to make sure it can't be used to
> recheck individual jobs until you have them all passing (like
> picking a pin and tumbler lock). The temptation to recheck-spam
> nondeterministically failing changes is already present, but this
> would make it considerably easier still for people to introduce new
> nondeterministic failures in projects. Maybe if it were tied to a
> special pipeline type, and then we set it only for the experimental
> pipeline or something?

If we don't want it to interfere with "normal" testing, then
perhaps just don't hook it under 'recheck'. Have a completely
separate command ('run-job blah') to trigger it that has no influence
on the normal check status applied to a changeset, and reports it
separately too.

> > >  2. the ability to request this recheck to run multiple
> > > times in parallel. eg if i just repeat the 'recheck'
> > > command many times on the same patchset # without
> > > waiting for results
> > 
> > Yes, this too, would be _very_ useful for all the reasons you described.
> [...]
> 
> In the past we've discussed the option of having an "idle pipeline"
> which repeatedly runs specified jobs only when there are unused
> resources available, so that it doesn't significantly cut into our
> resource pool when we're under high demand but still allows to
> automatically collect a large amount of statistical data.

Yep, that could work as long as the 'idle pipeline' did have some
kind of minimal throughput. Debugging some of these things can
be time critical, so we don't necessarily want to wait for a
fully idle time period. IOW a 'mostly-idle pipeline' which would
run jobs at any time, but rate limit them to prevent them swamping
out the normal jobs.

> Anyway, hopefully James Blair can weigh in on this, since Zuul is
> basically in a feature freeze for a while to limit the number of
> significant changes we'll need to forward-port into the v3 branch.
> We'd want to discuss these new features in the context of Zuul v3
> instead.

Sure, that's no problem - I got lucky and reproduced the problem
this time around after a few rechecks. I just wanted to raise this
as a general request, since we've hit this scenario several times
in the past, so it'd be useful to have a more general solution in
the future, whenever that's practical.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [nova][infra][ci] bulk repeating a test job on a single review in parallel ?

2016-06-30 Thread Daniel P. Berrange
A bunch of people in the Nova and upstream QEMU teams are trying to investigate
a long-standing bug in live migration [1]. Unfortunately the bug is rather
non-deterministic - eg on the multinode-live-migration tempest job it has
hit 4 times in 7 days, while on the multinode-full tempest job it has hit
~70 times in 7 days.

I have a test patch which hacks nova to download & install a special QEMU
build with extra debugging output[2]. Because of the non-determinism I need
to then run the multinode-live-migration & multinode-full tempest jobs
many times to try and catch the bug.  Doing this by just entering 'recheck'
is rather tedious because you have to wait for the 1+ hour turnaround time
between each recheck.

To get around this limitation I created a chain of 10 commits [3] which just
toggled some whitespace and uploaded them all, so I can get 10 CI runs
going in parallel. This worked remarkably well - at least enough to
reproduce the more common failure of multinode-full, but not enough for
the much rarer multinode-live-migration job.
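
(For the record, the hack is roughly this, assuming the gerrit commit-msg
hook is installed so each commit gets its own Change-Id; the file touched
is arbitrary:)

  $ for i in $(seq 1 10); do
        echo >> README.rst          # whitespace-only change to a tracked file
        git commit -a -m "DNM: CI fan-out commit $i"
    done
  $ git review                      # pushes the whole chain, one CI run each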

I could expand this hack and upload 100 dummy changes to get more jobs
running to increase chances of hitting the multinode-live-migration
failure. Out of the 16 jobs run on every Nova change, I only care about
running 2 of them. So to get 100 runs of the 2 live migration jobs I want,
I'd be creating 1600 CI jobs in total which is not too nice for our CI
resource pool :-(

I'd really love it if there was

 1. the ability to request checking of just specific jobs eg

  "recheck gate-tempest-dsvm-multinode-full"

 2. the ability to request this recheck to run multiple
times in parallel. eg if i just repeat the 'recheck'
command many times on the same patchset # without
waiting for results

Anyone got any other tips for debugging highly non-deterministic
bugs like this which only hit perhaps 1 time in 100, without wasting
huge amounts of CI resource as I'm doing right now?

No one has ever been able to reproduce these failures outside of
the gate CI infra; indeed certain CI hosting providers seem worse
affected by the bug than others, so running tempest locally is not
an option.

Regards,
Daniel

[1] https://bugs.launchpad.net/nova/+bug/1524898
[2] https://review.openstack.org/#/c/335549/5
[3] https://review.openstack.org/#/q/topic:mig-debug
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-24 Thread Daniel P. Berrange
On Fri, Jun 24, 2016 at 08:05:38AM -0600, John Griffith wrote:
> On Fri, Jun 24, 2016 at 2:19 AM, Daniel P. Berrange <berra...@redhat.com>
> wrote:
> 
> > On Thu, Jun 23, 2016 at 09:09:44AM -0700, Walter A. Boring IV wrote:
> > >
> > > volumes connected to QEMU instances eventually become directly connected?
> > >
> > > > Our long term goal is that 100% of all network storage will be
> > connected
> >
> Oh, didn't know this at all.  Is this something Nova has been working on
> for a while?  I'd love to hear more about the reasoning, the plan etc.  It
> would also be really neat to have an opportunity to participate.

There's no currently open Nova blueprint around this. The last time we
really discussed this in the context of a spec was in relation to the request
to add LUKS support over RBD, which would have involved switching away
from using QEMU and back to in-kernel RBD.

Out of this came work on QEMU over the last 6 months which added native
QEMU support for LUKS in QEMU 2.6. Libvirt is now integrating this and
when that's done we'll look to using it in Nova. That's likely Ocata blueprint
material.


> This all sounds like it could be a good direction to go in.  I'd love to
> see more info on the plan, how it works, and how to test it out a bit.
> Didn't find a spec, any links, reviews or config info available?

The iSCSI stuff was originally added a several releases back:

commit f987bf1a641ffc8b26c06c920a32b8556c18e845
Author: Akira Yoshiyama <akirayoshiy...@gmail.com>
Date:   Mon Jan 12 12:11:56 2015 +

libvirt: add QEMU built-in iSCSI initiator support

This patch allows nova-compute to use QEMU iSCSI built-in
initiator for Cinder iSCSI volume drivers. It doesn't provide
iSCSI multipath capability, but host OS doesn't have to handle
iSCSI connection for volume because QEMU does it directly with
libiscsi.

To use this, you have to write a parameter at nova.conf:

  volume_drivers = 
iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver,...

or just

  volume_drivers = iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver

Note that qemu-system-x86 in Ubuntu has no iSCSI built-in
initiator support because libiscsi isn't in main repository
but universe. I've tested qemu-system-x86 built with libiscsi2
package of Debian on Ubuntu 14.04.

Change-Id: Ieb9a03d308495be4e8c54b5c6c0ff781ea7f0559
Implements: blueprint qemu-built-in-iscsi-initiator



Note that since that time though, I see we've lost the ability to enable this,
because we removed the "volume_drivers" config parameter. We ought to re-add
an explicit config param to turn this on, rather than doing it indirectly
via the volume driver class choice.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-24 Thread Daniel P. Berrange
On Fri, Jun 24, 2016 at 11:12:27AM +0200, Thierry Carrez wrote:
> Angus Lees wrote:
> > [...]
> > None of these are great, but:
> > 
> > Possibility 1:  Backdoor rootwrap
> > 
> > However if we assume rootwrap already exists then we _could_ rollout a
> > new version of oslo.rootwrap that contains a backdoor that allows
> > privsep-helper to be run as root for any context, without the need to
> > install a new rootwrap filter.
> > 
> > Disclaimers:
> > 
> > - It wouldn't work for virtualenvs, because the "privsep-helper"
> > executable won't be in sudo's usual PATH.
> > 
> > - Retro-fitting something like that to rootwrap feels like it's skirting
> > close to some sort of ethical contract we've made with admins regarding
> > rootwrap's featureset.  Not saying we shouldn't do it, just that we
> > should think about how an operator is going to feel about that.
> > 
> > 
> > Possibility 2: Wider rootwrap filter
> > 
> > In the past, I've been proposing rootwrap filters that match only
> > specific privsep "privileged contexts" by name.  On further reflection,
> > if we're assuming the existing python modules installed into root's
> > python path are already trustworthy (and we _are_ assuming that), then
> > it might also be reasonable to trust *any* privsep entrypoint declared
> > within that module path.  This gives a larger attack surface to think
> > about (particularly if python libraries including privsep decorators
> > were installed for some reason other than providing privsep entry
> > points), but there's no reason why this is _necessarily_ an issue.
> > 
> > This allows us to get to a single rootwrap filter per-project (or
> > rather, "per-rootwrap") since projects use separate rootwrap config
> > directories - so we would still have to do a thing once per project.
> > 
> > 
> > Possibility 3: Skip rootwrap, use just sudo
> > 
> > sudoers isn't very expressive - but we could install a new rootwrap-like
> > wrapper into sudoers once system-wide, which includes some sort of logic
> > to start privsep-helpers.  This could be as simple as a small shell
> > script.  The advantage this has over rootwrap is that it would contain
> > some sort of system-wide config, rather than per-project.
> > 
> > Downsides
> > 
> > - Would still need to be installed once system-wide.
> > 
> > - Would need to be configured per-virtualenv, since otherwise we have no
> > way to know which virtualenvs should be given root powers.
> > 
> > 
> > Possibility 4: Run as root initially
> > 
> > Another option would be to follow the usual Unix daemon model: Start the
> > process with all required privileges, and avoid sudo/rootwrap entirely.
> > 
> > In this version, we take a once-off hit to tell everyone to start
> > running their OpenStack agents as root (probably from init/systemd), and
> > right at the top of main() we fork() the privsep-helper and then drop to
> > a regular uid.  No sudo or rootwrap ever (although the unprivileged code
> > could continue to use it while we clean up all the legacy code).
> > 
> > A glorious future, but still a big per-project deployment change that
> > has to be managed somehow.
> 
> I'm adding Possibility (0): change Grenade so that rootwrap filters from N+1
> are put in place before you upgrade.
> 
> No perfect answer here... I'm hesitating between (0), (1) and (4). (4) is
> IMHO the right solution, but it's a larger change for downstream. (1) is a
> bit of a hack, where we basically hardcode in rootwrap that it's being
> transitioned to privsep. That's fine, but only if we get rid of rootwrap
> soon. So only if we have a plan for (4) anyway. Option (0) is a bit of a
> hard sell for upgrade procedures -- if we need to take a hit in that area,
> let's do (4) directly...
> 
> In summary, I think the choice is between (1)+(4) and doing (4) directly.
> How doable is (4) in the timeframe we have ? Do we all agree that (4) is the
> endgame ?

We've already merged a change to privsep to allow nova/cinder/etc to
initialize the default helper command to use rootwrap:

  
https://github.com/openstack/oslo.privsep/commit/9bf606327d156de52c9418d5784cd7f29e243487

So we just need a new release of privsep & to add code to nova to initialize
it, and we're sorted.

We can then also revert the changes we made to devstack that updated
nova.conf for privsep.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][pci-passthrough] definitely specify VFIO driver as the host PCI driver for passthrough

2016-06-24 Thread Daniel P. Berrange
On Fri, Jun 24, 2016 at 04:52:31PM +0800, Chen Fan wrote:
> 
> 
> On 2016-06-24 16:20, Daniel P. Berrange wrote:
> > On Fri, Jun 24, 2016 at 12:27:57PM +0800, Chen Fan wrote:
> > > hi all,
> > >   in OpenStack we can use the PCI passthrough feature now, see
> > >   https://wiki.openstack.org/wiki/Pci_passthrough
> > >   but we can't explicitly specify whether the host PCI driver is
> > >   LEGACY_KVM or the newer VFIO.
> > >   The VFIO driver is a safer, higher-performance user-space driver than
> > >   the legacy KVM driver (pci-stub); the benefits relative to the KVM
> > >   assignment driver are described in http://lwn.net/Articles/474088/.
> > >   In addition, the VFIO driver supports GPU passthrough as the primary
> > >   card, which I think will be useful for further GPU passthrough
> > >   support in OpenStack.
> > > 
> > >   OpenStack relies on the libvirt nodedev device configuration to do
> > >   PCI passthrough; with managed mode, the configured device is
> > >   automatically detached and re-attached with the KVM or VFIO driver,
> > >   depending on the host driver module configuration. So right now we
> > >   can't force the driver to VFIO mode from OpenStack, and I think we
> > >   should add this feature to make PCI passthrough more flexible.
> > > 
> > >   a simple idea is to add an option in nova.conf, HOST_PCI_MODEL =
> > >   VFIO/KVM, to specify that the PCI passthrough device driver should
> > >   be VFIO.
> > >   any comments are welcome. :)
> > I don't see any reason to add a configuration option. If the host is
> > capable of doing VFIO, libvirt will always aim to use VFIO in preference
> > to the legacy system.
> Hi Daniel,
> 
> sorry, I was referring directly to the implementation of nodedev in
> libvirt, in the function virHostdevPreparePCIDevices:
> 
>     if (pcisrc->backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_VFIO)
>         virPCIDeviceSetStubDriver(pci, VIR_PCI_STUB_DRIVER_VFIO);
>     else if (pcisrc->backend == VIR_DOMAIN_HOSTDEV_PCI_BACKEND_XEN)
>         virPCIDeviceSetStubDriver(pci, VIR_PCI_STUB_DRIVER_XEN);
>     else
>         virPCIDeviceSetStubDriver(pci, VIR_PCI_STUB_DRIVER_KVM);
> 
> IIUC, the stub driver defaults to the legacy KVM one, so do we need to
> change the stub driver to VIR_PCI_STUB_DRIVER_VFIO by default?

If libvirt uses VFIO, then it'll use the VFIO stub driver, which is fine.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-24 Thread Daniel P. Berrange
On Thu, Jun 23, 2016 at 10:08:22AM +0200, Sylvain Bauza wrote:
> 
> 
> On 23/06/2016 02:42, Tony Breeds wrote:
> > On Wed, Jun 22, 2016 at 12:13:21PM +0200, Victor Stinner wrote:
> > > On 22/06/2016 10:49, Thomas Goirand wrote:
> > > > Do you think it is possible to have Nova ported to Py3 during
> > > > the Newton cycle?
> > > It doesn't depend on me: I'm sending patches, and then I have to wait for
> > > reviews. The question is more how to accelerate reviews.
> > Clearly I'm far from authoritative, but given how close we are to R-14, which is
> > the Nova non-priority feature freeze[1], and the python3 port isn't listed as
> > a priority[2], I'd guess that this won't land in Newton.
> > 
> > [1] 
> > http://releases.openstack.org/newton/schedule.html#nova-non-priority-feature-freeze
> > [2] 
> > http://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
> 
> Well, IIRC we discussed last year that some of those blueprints
> (including the Py3 effort) are not really features (rather refactoring
> items) and shouldn't be hit by the non-priority feature freeze.
> That doesn't mean we could merge them at any time of course, but I don't think
> we would procedurally -2 them.

Certainly anything which is merely fixing unit tests is valid to be merged
pretty much any time. Stuff which touches actual functional code can be
evaluated on a case by case basis to decide whether it is reasonable to
merge at the particular point in the release process we're at. IOW, I see
no reason to arbitrarily block Py3 work on non-prio freeze - we'll just
carry on with it as part of our natural review process - it'll just have
to take a back seat to reviews for priority features.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-24 Thread Daniel P. Berrange
On Thu, Jun 23, 2016 at 11:04:28PM +0200, Thomas Goirand wrote:
> On 06/23/2016 06:11 PM, Doug Hellmann wrote:
> > I'd like for the community to set a goal for Ocata to have Python
> > 3 functional tests running for all projects.
> > 
> > As Tony points out, it's a bit late to have this as a priority for
> > Newton, though work can and should continue. But given how close
> > we are to having the initial phase of the port done (thanks Victor!),
> > and how far we are from discussions of priorities for Ocata, it
> > seems very reasonable to set a community-wide goal for our next
> > release cycle.
> > 
> > Thoughts?
> > 
> > Doug
> 
> +1
> 
> Just think about it for a while. If we get Nova to work with Py3, and
> everything else is working, including all functional tests in Tempest,
> then, after Ocata, we could even start to *REMOVE* Py2 support in
> Ocata+1. That would be really awesome: we could stop all the compat layer
> madness and use the new features available in Py3.

Please let's not derail discussions about completing Py3 support by
opening up the can of worms wrt dropping Py2.

Let's get the Py3 support completed and in the hands of users, and
proven acceptable before we talk about dropping support for the python
platform that every single deployment runs on today.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][pci-passthrough] definitely specify VFIO driver as the host PCI driver for passthrough

2016-06-24 Thread Daniel P. Berrange
On Fri, Jun 24, 2016 at 12:27:57PM +0800, Chen Fan wrote:
> hi all,
>  in OpenStack we can use the PCI passthrough feature now, see
>  https://wiki.openstack.org/wiki/Pci_passthrough
>  but we can't explicitly specify whether the host PCI driver is LEGACY_KVM
>  or the newer VFIO.
>  The VFIO driver is a safer, higher-performance user-space driver than the
>  legacy KVM driver (pci-stub); the benefits relative to the KVM assignment
>  driver are described in http://lwn.net/Articles/474088/.
>  In addition, the VFIO driver supports GPU passthrough as the primary card,
>  which I think will be useful for further GPU passthrough support in
>  OpenStack.
> 
>  OpenStack relies on the libvirt nodedev device configuration to do PCI
>  passthrough; with managed mode, the configured device is automatically
>  detached and re-attached with the KVM or VFIO driver, depending on the
>  host driver module configuration. So right now we can't force the driver
>  to VFIO mode from OpenStack, and I think we should add this feature to
>  make PCI passthrough more flexible.
> 
>  a simple idea is to add an option in nova.conf, HOST_PCI_MODEL = VFIO/KVM,
>  to specify that the PCI passthrough device driver should be VFIO.
>  any comments are welcome. :)

I don't see any reason to add a configuration option. If the host is
capable of doing VFIO, libvirt will always aim to use VFIO in preference
to the legacy system.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-24 Thread Daniel P. Berrange
On Thu, Jun 23, 2016 at 09:09:44AM -0700, Walter A. Boring IV wrote:
> 
> volumes connected to QEMU instances eventually become directly connected?
> 
> > Our long term goal is that 100% of all network storage will be connected
> > to directly by QEMU. We already have the ability to partially do this with
> > iSCSI, but it is lacking support for multipath. As & when that gap is
> > addressed though, we'll stop using the host OS for any iSCSI stuff.
> > 
> > So if you're requiring access to host iSCSI volumes, it'll work in the
> > short-medium term, but in the medium-long term we're not going to use
> > that so plan accordingly.
> 
> What is the benefit of this largely monolithic approach?  It seems that
> moving everything into QEMU is diametrically opposed to the unix model
> itself and
> is just a re-implementation of what already exists in the linux world
> outside of QEMU.

There are many benefits to having it inside QEMU. First it gives us
improved isolation between VMs, because we can control the network
I/O directly against the VM using cgroup resource controls. It gives
us improved security, particularly in combination with LUKS encryption
since the unencrypted block device is not directly visible / accessible
to any other process. It gives us improved reliability / manageability,
since we avoid having to spawn the iSCSI client tools, which have poor
error reporting and have been frequent sources of instability in our
infrastructure (e.g. see how we have to blindly re-run the same command
many times over because it randomly times out). It will give us improved
I/O performance because of a shorter I/O path to get requests from QEMU
out to the network.

NB, this is not just about iSCSI, the same is all true for RBD where
we've also stopped using in-kernel RBD client and do it all in QEMU.

> Does QEMU support hardware initiators? iSER?

No, this is only for the case where you're doing pure software-based
iSCSI client connections. If we're relying on local hardware, that's
a different story.

> 
> We regularly fix issues with iSCSI attaches in the release cycles of
> OpenStack,
> because it's all done in python using existing linux packages.  How often

This is a great example of the benefit that the in-QEMU client gives us. The
Linux iSCSI client tools have proved very unreliable when used by OpenStack.
This is a reflection of the architectural approach itself: we have individual
resources needed by distinct VMs, but we're having to manage them as a
host-wide resource, and that creates unnecessary complexity and has a
poor effect on our reliability overall.

> are QEMU
> releases done and upgraded on customer deployments vs. python packages
> (os-brick)?

We're removing the entire layer of instability by removing the need to
deal with any command line tools, and thus greatly simplifying our
setup on compute nodes. No matter what we might do in os-brick it'll
never give us a simple or reliable system - we're just papering over
the flaws by doing stuff like blindly re-trying iscsi commands upon
failure.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] libvirt driver: who should create the libvirt.xml file?

2016-06-23 Thread Daniel P. Berrange
On Mon, Jun 20, 2016 at 05:47:57PM +0200, Markus Zoeller wrote:
> While working on the change series to implement the virtlogd feature I
> got feedback [1] to move code which creates parts of the libvirt.xml
> file from the "driver" module into the "guest" module. I'm a bit
> hesitant to do so as the responsibility of creating a valid libvirt.xml
> file is then spread across 3 modules:
> * driver.py
> * guest.py
> * designer.py
> I'm only looking for a guideline here (The "driver" module is humongous
> and I think it would be a good thing to have the "libvirt.xml" creation
> code outside of it). Thoughts?

The designer.py file was created as a place which would ultimately hold
all the XML generator logic.

Ultimately the "_get_guest_xml" (and everything it calls) from driver.py
would move into the designer.py module. Before we could do that though, we
needed to create the host.py + guest.py classes to isolate the libvirt
API logic.

Now that the guest.py conversion/move is mostly done, we should be able
to start moving the XML generation out of driver.py and into designer.py.

I would definitely *not* put XML generation code into guest.py

In terms of your immediate patch, I'd suggest just following current
practice and putting your new code in driver.py.  We'll move everything
over to designer.py at the same time, later on.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-23 Thread Daniel P. Berrange
On Wed, Jun 15, 2016 at 04:59:39PM -0700, Preston L. Bannister wrote:
> QEMU has the ability to directly connect to iSCSI volumes. Running the
> iSCSI connections through the nova-compute host *seems* somewhat
> inefficient.
> 
> There is a spec/blueprint and implementation that landed in Kilo:
> 
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
> https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator
> 
> From looking at the OpenStack Nova sources ... I am not entirely clear on
> when this behavior is invoked (just for Ceph?), and how it might change in
> future.
> 
> Looking for a general sense where this is headed. (If anyone knows...)
> 
> If there is some problem with QEMU and directly attached iSCSI volumes,
> that would explain why this is not the default. Or is this simple inertia?
> 
> 
> I have a concrete concern. I work for a company (EMC) that offers backup
> products, and we now have backup for instances in OpenStack. To make this
> efficient, we need to collect changed-block information from instances.
> 
> 1)  We could put an intercept in the Linux kernel of the nova-compute host
> to track writes at the block layer. This has the merit of working for
> containers, and potentially bare-metal instance deployments. But it is not
> guaranteed for instances, if the iSCSI volumes are directly attached to
> QEMU.
> 
> 2)  We could use the QEMU support for incremental backup (first bit landed
> in QEMU 2.4). This has the merit of working with any storage, but only for
> virtual machines under QEMU.
> 
> As our customers are (so far) only asking about virtual machine backup. I
> long ago settled on (2) as most promising.
> 
> What I cannot clearly determine is where (1) will fail. Will all iSCSI
> volumes connected to QEMU instances eventually become directly connected?

Our long term goal is that 100% of all network storage will be connected
to directly by QEMU. We already have the ability to partially do this with
iSCSI, but it is lacking support for multipath. As & when that gap is
addressed though, we'll stop using the host OS for any iSCSI stuff.

So if you're requiring access to host iSCSI volumes, it'll work in the
short-medium term, but in the medium-long term we're not going to use
that so plan accordingly.

> Xiao's unanswered query (below) presents another question. Is this a
> site-choice? Could I require my customers to configure their OpenStack
> clouds to always route iSCSI connections through the nova-compute host? (I
> am not a fan of this approach, but I have to ask.)

In the short term that'll work, but long term we're not intending to
support that once QEMU gains multi-path. There's no timeframe on when
that will happen though.



Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Daniel P. Berrange
On Tue, Jun 14, 2016 at 07:49:54AM -0400, Sean Dague wrote:

[snip]

> The crux of the problem is that os-brick 1.4 and privsep can't be used
> without a config file change during the upgrade. Which violates our
> policy, because it breaks rolling upgrades.

os-vif support is going to face exactly the same problem. We just followed
os-brick's lead by adding a change to devstack to explicitly set the
required config options in nova.conf to change privsep to use rootwrap
instead of plain sudo.

Basically every single user of privsep is likely to face the same
problem.

> So... we have a few options:
> 
> 1) make an exception here with release notes, because it's the only way
> to move forward.

That's quite user hostile I think.

> 2) have some way for os-brick to use either mode for a transition period
> (depending on whether privsep is configured to work)

I'm not sure that's viable - at least for os-vif we started from
a clean slate to assume use of privsep, so we won't be able to have
any optional fallback to non-privsep mode.

> 3) Something else ?

3) Add an API to oslo.privsep that lets us configure the default
   command to launch the helper. Nova would invoke this on startup

  privsep.set_default_helper("sudo nova-rootwrap ")

4) Have oslo.privsep install a sudo rule that grants permission
   to run privsep-helper, without needing rootwrap.

5) Have each user of privsep install a sudo rule that grants
   permission to run privsep-helper with just their specific
   entry point context, without needing rootwrap

Any of 3/4/5 work out of the box, but I'm probably favouring
option 4, then 5, then 3.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-14 Thread Daniel P. Berrange
On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote:
> In strategy 2 we just pass 1 bridge name to Nova. That's the one that it
> ensures is created and plumbs the VM into. Since it's not responsible for
> patch ports it doesn't need to know anything about the other bridge.

Ok, so we're already passing that bridge name - all we need to change is
to make sure it is actually created if it doesn't already exist? If so,
that sounds simple enough to add to os-vif - we already have exactly
the same logic in the linux_bridge plugin.
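
To illustrate the kind of "ensure the bridge exists" logic involved (a
sketch only, not the actual os-vif plugin code, which goes through its
privileged helpers rather than calling the CLI directly like this):

  import subprocess

  def ensure_ovs_bridge(bridge_name):
      # "--may-exist" makes the call idempotent: the bridge is only
      # created if it is not already present.
      subprocess.check_call(
          ["ovs-vsctl", "--may-exist", "add-br", bridge_name])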


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-14 Thread Daniel P. Berrange
On Tue, Jun 14, 2016 at 02:10:52AM -0700, Kevin Benton wrote:
> Strategy 1 is being pitched to make it easier to implement with the current
> internals of the Neutron OVS agent (using integration bridge plugging
> events). I'm not sure that's better architecturally long term because the
> OVS agent has to have logic to wire up patch ports for the sub-interfaces
> anyway, so having the logic to make it wire up patch port for the parent
> interface is not out of place.
> 
> Also consider that we will now have to tell os-vif two bridges to use if we
> go with strategy 1. One bridge to create and attach the VM to, and another
> for the other half of the patch port. This means that we are going to have
> to leak more details of what Neutron is doing into the VIF details of the
> neutron port data model and relay that to Nova...

It sounds like strategy 2 also requires you to pass a second bridge
name to nova/os-vif, unless I'm misunderstanding the description
below.


> On Tue, Jun 14, 2016 at 1:29 AM, Daniel P. Berrange <berra...@redhat.com>
> wrote:
> 
> > On Mon, Jun 13, 2016 at 11:35:17PM +, Peters, Rawlin wrote:
> > > That said, there are currently a couple of vif-plugging strategies
> > > we could go with for wiring trunk ports for OVS, each of them
> > > requiring varying levels of os-vif augmentation:
> > >
> > > Strategy 1) When Nova is plugging a trunk port, it creates the OVS
> > > trunk bridge, attaches the tap to it, and creates one patch port
> > > pair from the trunk bridge to br-int.
> > >
> > > Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this
> > > bridge name to create the OVS trunk bridge and attach the tap to it
> > > (no patch port pair plugging into br-int).
> >
> > [snip]
> >
> > > If neither of these strategies would be in danger of not making it
> > > into the Newton release, then I think we should definitely opt for
> > > Strategy 1 because it leads to a simpler overall solution. If only
> > > Strategy 2 is feasible enough to make it into os-vif for Newton,
> > > then we need to know ASAP so that we can start implementing the
> > > required functionality for the OVS agent to monitor for dynamic trunk
> > > bridge creation/deletion.
> >
> > IMHO the answer should always be to go for the right long term
> > architectural
> > solution, not take short cuts just to meet some arbitrary deadline, because
> > that will compromise the code over the long term. From what you are saying
> > it sounds like strategy 1 is the optimal long term solution, so that should
> > be where effort is focused regardless.
> >
> > Regards,
> > Daniel

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-14 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 11:35:17PM +, Peters, Rawlin wrote:
> That said, there are currently a couple of vif-plugging strategies
> we could go with for wiring trunk ports for OVS, each of them
> requiring varying levels of os-vif augmentation:
>
> Strategy 1) When Nova is plugging a trunk port, it creates the OVS
> trunk bridge, attaches the tap to it, and creates one patch port
> pair from the trunk bridge to br-int.
>
> Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this
> bridge name to create the OVS trunk bridge and attach the tap to it
> (no patch port pair plugging into br-int).

[snip]

> If neither of these strategies would be in danger of not making it
> into the Newton release, then I think we should definitely opt for
> Strategy 1 because it leads to a simpler overall solution. If only
> Strategy 2 is feasible enough to make it into os-vif for Newton,
> then we need to know ASAP so that we can start implementing the
> required functionality for the OVS agent to monitor for dynamic trunk
> bridge creation/deletion.

IMHO the answer should always be to go for the right long term architectural
solution, not take short cuts just to meet some arbitrary deadline, because
that will compromise the code over the long term. From what you are saying
it sounds like strategy 1 is the optimal long term solution, so that should
be where effort is focused regardless.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-14 Thread Daniel P. Berrange
On Fri, Jun 10, 2016 at 09:51:03AM +1000, Tony Breeds wrote:
> On Fri, Jun 10, 2016 at 08:24:34AM +1000, Michael Still wrote:
> > On Fri, Jun 10, 2016 at 7:18 AM, Tony Breeds 
> > wrote:
> > 
> > > On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:
> > >
> > > > Agreed, but it's the worked example part that we don't have yet,
> > > > chicken/egg. So we can drop the hammer on all new things until someone
> > > does
> > > > it, which sucks, or hope that someone volunteers to work the first
> > > example.
> > >
> > > I'll work with gus to find a good example in nova and have patches up
> > > before
> > > the mid-cycle.  We can discuss next steps then.
> > >
> > 
> > Sorry to be a pain, but I'd really like that example to be non-trivial if
> > possible. One of the advantages of privsep is that we can push the logic
> > down closer to the privileged code, instead of just doing something "close"
> > and then parsing. I think reinforcing that idea in the sample code is
> > important.
> 
> I think *any* change will show that.  I wanted to pick something achievable in
> the short timeframe.
> 
> The example I'm thinking of is nova/virt/libvirt/utils.py:update_mtime()
> 
>  * It will provide a lot of the boilerplate
>  * Show that we can now replace an exec with pure python code.
>  * Show how you need to retrieve data from a trusted source on the
>    privileged side
>  * Migrate testing
>  * Remove an entry from compute.filters
> 
> Once that's in place, chown() in the same file is probably a quick fix.
> 
> Is it super helpful? Does it have a measurable impact on performance or
> security?
> The answer is probably "no"
> 
> I still think it has value.
> 
> Handling qemu-img is probably best done by creating os-qemu (or similar) and
> designing it from the ground up with privsep in mind.  Glance and Cinder would
> benefit from that also.  That however is way too big for this cycle.

Personally I'd stay away from anything related to libvirt disk file
management / qemu-img / etc. That code is in the middle of being
refactored to use libvirt storage pools, so it is going to change
in structure a lot and partially eliminate the need for privileged
command execution. IOW, I don't think it'd make a good example
long term, and the code used for your example may well disappear
real soon.
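
Setting aside exactly which code it ends up applying to, a minimal sketch of
the general exec-to-privsep conversion pattern Tony describes might look
like this (the context name, config section and capability choices are
illustrative assumptions, not an actual nova patch):

  # Sketch only: names below are placeholders, not real nova code.
  import os

  from oslo_privsep import capabilities
  from oslo_privsep import priv_context

  # A privileged context: the helper daemon it spawns runs with only
  # these Linux capabilities rather than full root.
  sys_admin_pctxt = priv_context.PrivContext(
      'example',
      cfg_section='example_privsep',
      pypath=__name__ + '.sys_admin_pctxt',
      capabilities=[capabilities.CAP_DAC_OVERRIDE,
                    capabilities.CAP_FOWNER])

  @sys_admin_pctxt.entrypoint
  def update_mtime(path):
      # Runs inside the privileged helper: plain python instead of
      # spawning "touch" via rootwrap, so no compute.filters entry
      # is needed.
      os.utime(path, None)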

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 07:39:29AM -0400, Assaf Muller wrote:
> On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange <berra...@redhat.com> 
> wrote:
> > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> >> Hi,
> >>
> >> You may or may not be aware of the vlan-aware-vms effort [1] in
> >> Neutron.  If not, there is a spec and a fair number of patches in
> >> progress for this.  Essentially, the goal is to allow a VM to connect
> >> to multiple Neutron networks by tagging traffic on a single port with
> >> VLAN tags.
> >>
> >> This effort will have some effect on vif plugging because the datapath
> >> will include some changes that will affect how vif plugging is done
> >> today.
> >>
> >> The design proposal for trunk ports with OVS adds a new bridge for
> >> each trunk port.  This bridge will demux the traffic and then connect
> >> to br-int with patch ports for each of the networks.  Rawlin Peters
> >> has some ideas for expanding the vif capability to include this
> >> wiring.
> >>
> >> There is also a proposal for connecting to linux bridges by using
> >> kernel vlan interfaces.
> >>
> >> This effort is pretty important to Neutron in the Newton timeframe.  I
> >> wanted to send this out to start rounding up the reviewers and other
> >> participants we need to see how we can start putting together a plan
> >> for nova integration of this feature (via os-vif?).
> >
> > I've not taken a look at the proposal, but on the timing side of things
> > it is really way too late to start this email thread asking for design
> > input from os-vif or nova. We're way past the spec proposal deadline
> > for Nova in the Newton cycle, so nothing is going to happen until the
> > Ocata cycle no matter what Neutron wants in Newton. For os-vif our
> > focus right now is exclusively on getting existing functionality ported
> > over, and integrated into Nova in Newton. So again we're not really looking
> > to spend time on further os-vif design work right now.
> >
> > In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
> > let it directly serialize VIF objects and send them over to Nova, instead
> > of using the ad-hoc port-binding dicts.  From the Nova side, we're not
> > likely to want to support any new functionality that affects port-binding
> > data until after Neutron is converted to os-vif. So Ocata at the earliest,
> > but probably more like P, unless the Neutron conversion to os-vif gets
> > completed unexpectedly quickly.
> 
> In light of this feature being requested by the NFV, container and
> baremetal communities, and that Neutron's os-vif integration work
> hasn't begun, does it make sense to block Nova VIF work? Are we
> comfortable, from a wider OpenStack perspective, to wait until
> possibly the P release? I think it's our collective responsibility as
> developers to find creative ways to meet deadlines, not serializing
> work on features and letting processes block us.

Everyone has their own set of features that are their personal
priority items. Nova evaluates all the competing demands and decides
what the project's priorities are for the given cycle. For Newton, Nova's
priority is to convert existing VIF functionality to use os-vif. Anything
else vif-related takes a back seat to this project priority. This formal
modelling of VIFs and development of a plugin facility has already been strung
out over at least 3 release cycles now. We're finally in a position to get
it completed, and we're not going to divert attention away from this to
other new feature requests until it's done, as that'll increase the chances
of it getting strung out for yet another release, which is in no one's
interest.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> On 13 June 2016 at 10:35, Daniel P. Berrange <berra...@redhat.com> wrote:
> 
> > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > Hi,
> > >
> > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > Neutron.  If not, there is a spec and a fair number of patches in
> > > progress for this.  Essentially, the goal is to allow a VM to connect
> > > to multiple Neutron networks by tagging traffic on a single port with
> > > VLAN tags.
> > >
> > > This effort will have some effect on vif plugging because the datapath
> > > will include some changes that will affect how vif plugging is done
> > > today.
> > >
> > > The design proposal for trunk ports with OVS adds a new bridge for
> > > each trunk port.  This bridge will demux the traffic and then connect
> > > to br-int with patch ports for each of the networks.  Rawlin Peters
> > > has some ideas for expanding the vif capability to include this
> > > wiring.
> > >
> > > There is also a proposal for connecting to linux bridges by using
> > > kernel vlan interfaces.
> > >
> > > This effort is pretty important to Neutron in the Newton timeframe.  I
> > > wanted to send this out to start rounding up the reviewers and other
> > > participants we need to see how we can start putting together a plan
> > > for nova integration of this feature (via os-vif?).
> >
> > I've not taken a look at the proposal, but on the timing side of things
> > it is really way too late to start this email thread asking for design
> > input from os-vif or nova. We're way past the spec proposal deadline
> > for Nova in the Newton cycle, so nothing is going to happen until the
> > Ocata cycle no matter what Neutron wants in Newton.
> 
> 
> For sake of clarity, does this mean that the management of the os-vif
> project matches exactly Nova's, e.g. same deadlines and processes apply,
> even though the core team and its release model are different from Nova's?
> I may have erroneously implied that it wasn't, also from past talks I had
> with johnthetubaguy.

No, we don't intend to force ourselves to only release at milestones
like nova does. We'll release the os-vif library whenever there is new
functionality in its code that we need to make available to nova/neutron.
This could be as frequently as once every few weeks.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-13 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will
> be around July 11 (probably July 6 - 8, but to be determined very soon).

The newton-2 milestone release date is July 15th, so you certainly *don't*
want the event during that week. IOW, the 8th July is the latest you should
schedule it - don't let it slip into the next week starting July 11th, as
during the week of the n-2 milestone the teams' focus will be almost
exclusively on prep for that release, to the detriment of any bug smash
event.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Daniel P. Berrange
On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> Hi,
> 
> You may or may not be aware of the vlan-aware-vms effort [1] in
> Neutron.  If not, there is a spec and a fair number of patches in
> progress for this.  Essentially, the goal is to allow a VM to connect
> to multiple Neutron networks by tagging traffic on a single port with
> VLAN tags.
> 
> This effort will have some effect on vif plugging because the datapath
> will include some changes that will affect how vif plugging is done
> today.
> 
> The design proposal for trunk ports with OVS adds a new bridge for
> each trunk port.  This bridge will demux the traffic and then connect
> to br-int with patch ports for each of the networks.  Rawlin Peters
> has some ideas for expanding the vif capability to include this
> wiring.
> 
> There is also a proposal for connecting to linux bridges by using
> kernel vlan interfaces.
> 
> This effort is pretty important to Neutron in the Newton timeframe.  I
> wanted to send this out to start rounding up the reviewers and other
> participants we need to see how we can start putting together a plan
> for nova integration of this feature (via os-vif?).

I've not taken a look at the proposal, but on the timing side of things
it is really way too late to start this email thread asking for design
input from os-vif or nova. We're way past the spec proposal deadline
for Nova in the Newton cycle, so nothing is going to happen until the
Ocata cycle no matter what Neutron wants in Newton. For os-vif our
focus right now is exclusively on getting existing functionality ported
over, and integrated into Nova in Newton. So again we're not really looking
to spend time on further os-vif design work right now.

In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
let it directly serialize VIF objects and send them over to Nova, instead
of using the ad-hoc port-binding dicts.  From the Nova side, we're not
likely to want to support any new functionality that affects port-binding
data until after Neutron is converted to os-vif. So Ocata at the earliest,
but probably more like P, unless the Neutron conversion to os-vif gets
completed unexpectedly quickly.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-10 Thread Daniel P. Berrange
On Thu, Jun 09, 2016 at 12:35:06PM -0600, Chris Friesen wrote:
> On 06/09/2016 05:15 AM, Paul Michali wrote:
> > 1) On the host, I was seeing 32768 huge pages, of 2MB size.
> 
> Please check the number of huge pages _per host numa node_.
> 
> > 2) I changed mem_page_size from 1024 to 2048 in the flavor, and then when 
> > VMs
> > were created, they were being evenly assigned to the two NUMA nodes. Each 
> > using
> > 1024 huge pages. At this point I could create more than half, but when there
> > were 1945 pages left, it failed to create a VM. Did it fail because the
> > mem_page_size was 2048 and the available pages were 1945, even though we 
> > were
> > only requesting 1024 pages?
> 
> I do not think that "1024" is a valid page size (at least for x86).

Correct, 4k, 2M and 1GB are valid page sizes.

> Valid mem_page_size values are determined by the host CPU.  You do not need
> a larger page size for flavors with larger memory sizes.

Though note that the flavour memory size should be a multiple of the page
size unless you want to waste memory, e.g. if you have a flavour with 750MB
RAM, then you probably don't want to use 1GB pages as that wastes 250MB.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-07 Thread Daniel P. Berrange
On Tue, Jun 07, 2016 at 09:37:25AM -0400, Jim Rollenhagen wrote:
> On Tue, Jun 07, 2016 at 08:31:35AM +1000, Michael Still wrote:
> > On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:
> > 
> > > Hello all,
> > >
> > > At Rackspace we're running into an interesting problem: Consider a user
> > > who boots an instance in Nova with an image which only supports SSH
> > > public-key authentication, but the user doesn't provide a public key in
> > > the boot request. As far as I understand it, today Nova will happily
> > > boot that image and it may take the user some time to realize their
> > > mistake when they can't login to the instance.
> > >
> > 
> > What about images where the authentication information is inside the image?
> > For example, there's just a standard account baked in that everyone knows
> > about? In that case Nova doesn't need to inject anything into the instance,
> > and therefore the metadata doesn't need to supply anything.
> 
> Right, so that's a third case. How I'd see this working is maybe an
> image property called "auth_requires" that could be one of ["none",
> "ssh_key", "x509_cert", "password"]. Or maybe it could be multiple
> values that are OR'd, so for example an image could require an ssh key
> or an x509 cert. If the "auth_requires" property isn't found, default to
> "none" to maintain compatibility, I guess.

NB, even if you have an image that requires an SSH key to be provided in
order to enable login, it is sometimes valid to not provide one. Not least
during development, I'm often testing images which would ordinarily require
an SSH key, but I don't actually need the ability to login, so I don't bother
to provide one.

So if we provided this ability to tag images as needing an ssh key, and then
enforced that, we would then also need to extend the API to provide a way to
tell nova to explicitly ignore this and not bother enforcing it, despite what
the image metadata says.

I'm not particularly convinced the original problem is serious enough to
warrant building such a solution. It feels like the kind of mistake that
people would make once, and then learn from thereafter. IOW the
consequences of the mistake don't seem particularly severe really.

> The bigger question here is around hitting the images API synchronously
> during a boot request, and where/how/if to cache the metadata that's
> returned so we don't have to do it so often. I don't have a good answer
> for that, though.

Nova already uses image metadata for countless things during the VM boot
request, so there's nothing new in this respect. We only query glance
once, thereafter the image metadata is cached by Nova in the DB on a per
instance basis, because we need to be isolated from later changes to the
metadata in glance after the VM boots.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Daniel P. Berrange
On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> Hi!
> 
> I've been playing with Liberty code a bit and had some questions that I'm
> hoping Nova folks may be able to provide guidance on...
> 
> If I set up a flavor with hw:mem_page_size=2048, and I'm creating (Cirros)
> VMs with size 1024, will the scheduling use the minimum of the number of

1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?

> huge pages available and the size requested for the VM, or will it base
> scheduling only on the number of huge pages?
> 
> It seems to be doing the latter, where I had 1945 huge pages free, and
> tried to create another VM (1024) and Nova rejected the request with "no
> hosts available".

From this I'm guessing you mean 1024 huge pages, aka 2 GB, earlier.

Anyway, when you request huge pages to be used for a flavour, the
entire guest RAM must be able to be allocated from huge pages.
ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
of huge pages available. It is not possible for a VM to use
1.5 GB of huge pages and 500 MB of normal sized pages.

> Is this still the same for Mitaka?

Yep, this use of huge pages has not changed.

> Where could I look in the code to see how the scheduling is determined?

Most logic related to huge pages is in nova/virt/hardware.py

> If I use mem_page_size=large (what I originally had), should it evenly
> assign huge pages from the available NUMA nodes (there are two in my case)?
> 
> It looks like it was assigning all VMs to the same NUMA node (0) in this
> case. Is the right way to change to 2048, like I did above?

Nova will always avoid spreading your VM across 2 host NUMA nodes,
since that gives bad performance characteristics. IOW, it will always
allocate huge pages from the NUMA node that the guest will run on. If
you explicitly want your VM to spread across 2 host NUMA nodes, then
you must tell nova to create 2 *guest* NUMA nodes for the VM. Nova
will then place each guest NUMA node on a separate host NUMA node
and allocate huge pages from that node to match. This is done using
the hw:numa_nodes=2 parameter on the flavour.
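
To make the two settings discussed in this thread concrete, here are the
flavour extra specs as key/value pairs (illustrative only; set them via
whatever mechanism you normally use to manage flavours):

  extra_specs = {
      # Back all guest RAM with 2MB huge pages.
      "hw:mem_page_size": "2048",
      # Create 2 guest NUMA nodes, so nova places each one on a separate
      # host NUMA node and allocates huge pages from the matching node.
      "hw:numa_nodes": "2",
  }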

> Again, has this changed at all in Mitaka?

Nope. Well aside from random bug fixes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-31 Thread Daniel P. Berrange
On Tue, May 31, 2016 at 08:19:33AM -0400, Sean Dague wrote:
> On 05/30/2016 06:25 AM, Kashyap Chamarthy wrote:
> > On Thu, May 26, 2016 at 10:55:47AM -0400, Sean Dague wrote:
> >> On 05/26/2016 05:38 AM, Kashyap Chamarthy wrote:
> >>> On Wed, May 25, 2016 at 05:42:04PM +0200, Kashyap Chamarthy wrote:
> >>>
> >>> [...]
> >>>
>  So, in short, the central issue seems to be this: the custom 'gate64'
>  model is not being translated by libvirt into a model that QEMU can
>  recognize.
> >>>
> >>> An update:
> >>>
> >>> Upstream libvirt points out that this turns out to be a regression, and
> >>> bisected it to commit (in libvirt Git): 1.2.9-31-g445a09b -- "qemu:
> >>> Don't compare CPU against host for TCG".
> >>>
> >>> So, I expect there's going to be a fix pretty soon in upstream libvirt.
> >>
> >> Which is good... I wonder how long we'll be waiting for that back in our
> >> distro packages though.
> > 
> > Yeah, until the fix lands, our current options seem to be:
> > 
> >   (a) Revert to a known good version of libvirt
> 
> Downgrading libvirt so dramatically isn't a thing we'll be able to do.
> 
> >   (b) Use nested virt (i.e. ) -- I doubt it is possible
> >   on RAX environment, which is using Xen, last I know.
> 
> We turned off nested virt even where it was enabled, because it locks up
> at a non-trivial rate. So not really an option.

Hmm, if the guest is using 'qemu' and not 'kvm', then there should be
no dependency between the host CPU and guest CPU whatsoever, i.e. we can
present an arbitrary CPU to the guest, whether the host CPU has matching
features or not.

I wonder if there is a bug in Nova where it is trying to do a host/guest
CPU compatibility check even for 'qemu' guests, when it should only do
it for 'kvm' guests.

If we can avoid the CPU compatibility check with qemu guest, then the
fact that there's a libvirt bug here should be irrelevant, and we could
avoid needing to invent a gate64 CPU model too.
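
As a sketch of the guard being suggested (hypothetical helper, not actual
nova code): TCG ('qemu') guests are fully emulated, so only 'kvm' guests
would need their CPU model checked against the host.

  def needs_host_cpu_compat_check(virt_type):
      # Hypothetical: skip the host/guest CPU comparison entirely for
      # pure-emulation guests.
      return virt_type == 'kvm'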


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-31 Thread Daniel P. Berrange
On Tue, May 24, 2016 at 01:59:17PM -0400, Sean Dague wrote:
> The team working on live migration testing started with an experimental
> job on Ubuntu 16.04 to try to be using the latest and greatest libvirt +
> qemu under the assumption that a set of issues we were seeing are
> solved. The short answer is, it doesn't look like this is going to work.
> 
> We run tests on a bunch of different clouds. Those clouds expose
> different cpu flags to us. These are not standard things that map to
> "Haswell". It means live migration in the multinode cases can hit cpus
> with different flags. So we found the requirement was to come up with a
> least common denominator of cpu flags, which we call gate64, and push
> that into the libvirt cpu_map.xml in devstack, and set whenever we are
> in a multinode scenario.
> (https://github.com/openstack-dev/devstack/blob/master/tools/cpu_map_update.py)
>  Not ideal, but with libvirt 1.2.2 it works fine.
> 
> It turns out it works fine because libvirt *actually* seems to take the
> data from cpu_map.xml and do a translation to what it believes qemu will
> understand. On these systems apparently this turns into "-cpu
> Opteron_G1,-pse36"
> (http://logs.openstack.org/29/42529/24/check/gate-tempest-dsvm-multinode-full/5f504c5/logs/libvirt/qemu/instance-000b.txt.gz)
> 
> At some point between libvirt 1.2.2 and 1.3.1, this changed. Now libvirt
> seems to be passing our cpu_model directly to qemu, and assumes that as
> a user you will be responsible for writing all the  stanzas to
> add/remove yourself. When libvirt sends 'gate64' to qemu, this explodes,
> as qemu has no idea what we are talking about.
> http://logs.openstack.org/34/319934/2/experimental/gate-tempest-dsvm-multinode-live-migration/b87d689/logs/screen-n-cpu.txt.gz#_2016-05-24_15_59_12_531
> 
> Unlike libvirt, which has a text file (xml) that configures the cpus
> that could exist in the world, qemu builds this in statically at compile
> time:
> http://git.qemu.org/?p=qemu.git;a=blob;f=target-i386/cpu.c;h=895a386d3b7a94e363ca1bb98821d3251e70c0e0;hb=HEAD#l694
> 
> 
> So, the existing cpu_map.xml workaround for our testing situation will
> no longer work.
> 
> So, we have a number of open questions:
> 
> * Have our cloud providers standardized enough that we might get away
> without this custom cpu model? (Have some of them done it and only use
> those for multinode?)
> * Is there any way to get this feature back in libvirt to do the cpu
> computation?
> * Would we have to build a whole nova feature around setting libvirt xml
>  to be able to test live migration in our clouds?
> * Other options?
> * Do we give up and go herd goats?

Rather than try to define our own custom CPU models, we can probably
just use one of the standard CPU models and then explicitly tell
libvirt which flags to turn off in order to get compatibility with
our cloud environments.

This is not currently possible with Nova, since our nova.conf option
only allows us to specify a bare CPU model. We would have to extend
nova.conf to allow us to specify a list of CPU features to add or
remove. Libvirt should then correctly pass these changes through
to QEMU.
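
As a purely hypothetical sketch of the kind of nova.conf extension being
described (no such option exists today, so the name and semantics below are
assumptions, not real nova configuration):

  from oslo_config import cfg

  libvirt_opts = [
      # Hypothetical option: feature flags to enable, or (with a "-"
      # prefix) disable, on top of the configured cpu_model,
      # e.g. ["-pse36"] to match the least-capable gate cloud.
      cfg.ListOpt('cpu_model_extra_flags', default=[],
                  help='CPU feature flags to add to or remove from the '
                       'named CPU model.'),
  ]

  cfg.CONF.register_opts(libvirt_opts, group='libvirt')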


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova]Libvirt error code for failing volume in nova

2016-05-19 Thread Daniel P. Berrange
On Wed, May 18, 2016 at 07:57:17PM +, Radhakrishnan, Siva wrote:
> Hi All!
> Currently I am working on this bug
> https://bugs.launchpad.net/nova/+bug/1168011 which says we have to change
> the error message displayed when attaching a volume fails. Currently it
> catches all operation errors that libvirt can raise and assumes that all of
> them are caused by the device being busy. You can find the source of this
> code here:
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1160
> I have a few questions about this bug:
>  
> 1. What kind of error message and other required info should we include in
> the exception to make it more generalized than the current one?
>
> 2. Should we raise a separate exception for "Device is Busy", or would a
> single general exception work fine?
>
> 3. If we need a separate exception for the device being busy, what would be
> the equivalent libvirt error code for that?

There is no specific libvirt error code for this situation that you can
detect, which is why the code catches the general libvirt error.
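
As a rough sketch (not Nova's actual code), the most the caller can do is
catch the generic libvirtError and inspect the code and message it carries:

import libvirt

def attach_volume(dom, device_xml):
    try:
        dom.attachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    except libvirt.libvirtError as ex:
        code = ex.get_error_code()
        msg = ex.get_error_message() or ""
        # Heuristic only - libvirt has no dedicated "device busy" error code
        if "busy" in msg.lower():
            raise RuntimeError("Volume attach failed: device is busy") from ex
        raise RuntimeError("Volume attach failed (libvirt error %s): %s"
                           % (code, msg)) from ex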

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thoughts on deprecating the legacy bdm v1 API support

2016-05-19 Thread Daniel P. Berrange
On Tue, May 17, 2016 at 09:48:02AM -0500, Matt Riedemann wrote:
> In the live migration meeting today mdbooth and I were chatting about how
> hard it is to follow the various BDM code through nova, because you have the
> three block_device modules:
> 
> * nova.block_device - dict that does some translation magic
> * nova.objects.block_device - contains the BDM(List) objects for RPC and DB
> access
> * nova.virt.block_device - dict that wraps a BDM object, used for attaching
> volumes to instances, updates the BDM.connection_info field in the DB via
> the wrapper on the BDM object. This module also has translation logic in it.
> 
> The BDM v1 extension translates that type of request to the BDM v2 model
> before it gets to server create, and it is then passed down to
> nova.compute.api. But there is still a lot of legacy BDM v1 translation
> logic spread through the code.
> 
> So I'd like to propose that we deprecate the v1 BDM API in the same vein
> that we're deprecating other untested things, like agent-builds, cloudpipe,
> certificates, and the proxy APIs. We can't remove the code, but we can
> signal to users to not use the API and eventually when we raise the minimum
> required microversion >= the deprecation, we can drop that code. Since
> that's a long ways off, the earlier we start a deprecation clock on this the
> better - if we're going to do it.

Given that actual deletion of the code is a long way off regardless, can we
at least figure out how to isolate the BDM v1 support so that it only exists
in the very topmost API entrypoint, and gets immediately converted to v2?
That way 95% of Nova would not have to know or care about it, even if we
don't finally drop v1 for 10 more years.
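
To make the suggestion concrete, I mean something of this shape (the helper
names are purely illustrative - the real translation covers many more fields):

def normalise_bdms(requested_bdms, is_legacy):
    # Done once, at the API entrypoint; everything below only ever sees v2.
    if is_legacy:
        requested_bdms = [legacy_bdm_to_v2(bdm) for bdm in requested_bdms]
    return requested_bdms

def legacy_bdm_to_v2(bdm):
    # Illustrative translation only.
    return {
        "source_type": "volume" if bdm.get("volume_id") else "blank",
        "uuid": bdm.get("volume_id"),
        "destination_type": "volume",
        "device_name": bdm.get("device_name"),
        "delete_on_termination": bdm.get("delete_on_termination", False),
    }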

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Daniel P. Berrange
On Tue, May 10, 2016 at 12:59:41PM -0400, Anita Kuno wrote:
> On 05/10/2016 12:48 PM, Dan Smith wrote:
> >>> Hmm... that's unfortunate, as we were trying to get some of our less
> >>> ephemeral items out of random etherpads and into the wiki (which has the
> >>> value of being google indexed).
> > 
> > Yeah, I'm kinda surprised anyone would consider a wiki-less world. I'm
> > definitely bummed at the thought of losing it.
> > 
> >> The Google indexing is also what makes the wiki so painful... After 6
> >> years most of the content there is inaccurate or outdated. It's a
> >> massive effort to clean it up without breaking the Google juice, and
> >> nobody has the universal knowledge to determine if pages are still
> >> accurate or not. We are bitten every day by newcomers finding wrong
> >> information on the wiki and acting on it. It's getting worse every
> >> day we keep on using it.
> > 
> > Sure, I think we all feel the pain of the stale information on the wiki.
> > What if we were to do what we do for bug or review purges and make a
> > list of pages, in reverse order of how recently they've been updated?
> > Then we can have a few sprints to tag obviously outdated things to
> > purge, and perhaps some things that just need some freshening.
> > 
> > There are a lot of nova-related things on the wiki that are the
> > prehistory equivalent of specs, most of which are very misleading to
> > people about the current state of things. I would think we could purge a
> > ton of stuff like that pretty quickly. I'll volunteer to review such a
> > list from the nova perspective.
> > 
> >> * Deprecate the current wiki and start over with another wiki (with
> >> stronger ACL support ?)
> > 
> > I'm somewhat surprised that this is an issue, because I thought that the
> > wiki requires an ubuntu login. Are spammers really getting ubuntu logins
> > so they can come over and deface our wiki?
> 
> Yes.

Rather than blocking all new accounts, can we simply restrict new wiki accounts
to people who've signed the CLA ? That would at least allow all people who
have taken the decision to become project contributors to continue to get
access to the wiki. We surely won't have large numbers of spammers signing
the CLA ??

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] License for specs repo

2016-05-05 Thread Daniel P. Berrange
On Thu, May 05, 2016 at 12:03:38PM -0400, Ben Swartzlander wrote:
> It appears that many of the existing specs repos contain a confusing mixture
> of Apache 2.0 licensed code and Creative Commons licensed docs.
> 
> The official cookie-cutter for creating new specs repos [1] appears to also
> contain a mixture of the two licenses, although it's even more confusing
> because it seems an attempt was made to change the license from Apache to
> Creative Commons [2] yet there are still several [3] places [4] where Apache
> is clearly specified.
> 
> I personally have no opinion on what license should be used, but I'd like to
> clearly specify the license for the newly-created manila-specs repo, and I'm
> happy with whatever the TC is currently recommending.

Content in the specs is often used as the basis for writing official
documentation later, so license compatibility with docs is an important
consideration. IIUC the official OpenStack manuals are Apache licensed,
while other open source 3rd party docs are often CC licensed.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Live Migration: Austin summit update

2016-05-04 Thread Daniel P. Berrange
On Tue, May 03, 2016 at 04:16:43PM -0600, Chris Friesen wrote:
> On 05/03/2016 03:14 AM, Daniel P. Berrange wrote:
> 
> >There are currently many options for live migration with QEMU that can
> >assist in completion
> 
> 
> 
> >Given this I've spent the last week creating an automated test harness
> >for QEMU upstream which triggers migration with an extreme guest CPU
> >load and measures the performance impact of these features on the guest,
> >and whether the migration actually completes.
> >
> >I hope to be able to publish the results of this investigation this week
> >which should facilitate us in deciding which is best to use for OpenStack.
> >The spoiler though is that all the options are pretty terrible, except for
> >post-copy.
> 
> Just to be clear, it's not really CPU load that's the issue though, right?
> 
> Presumably it would be more accurate to say that the issue is the rate at
> which unique memory pages are being dirtied and the total number of dirty
> pages relative to your copy bandwidth.
> 
> This probably doesn't change the results though...at a high enough dirty
> rate you either pause the VM to keep it from dirtying more memory or you
> post-copy migrate and dirty the memory on the destination.

Yes that's correct - I should have been more explicit. A high rate of
dirtying memory implies high CPU load, but high CPU load does not imply
high rate of dirtying memory. My stress test used for benchmarking is
producing a high rate of dirtying memory.
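
For reference, the kind of workload I mean is essentially this trivial
sketch (not the actual harness) - it touches every page of a large buffer
in a tight loop, so pre-copy never catches up:

def dirty_memory(size_mb=1024, page_size=4096):
    buf = bytearray(size_mb * 1024 * 1024)
    counter = 0
    while True:
        # Touch one byte per page so the whole buffer is re-dirtied each pass
        for offset in range(0, len(buf), page_size):
            buf[offset] = counter & 0xFF
        counter += 1

if __name__ == "__main__":
    dirty_memory()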

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Libvirt version requirement

2016-05-03 Thread Daniel P. Berrange
On Mon, May 02, 2016 at 11:27:01AM +0800, ZhiQiang Fan wrote:
> Hi Nova cores,
> 
> There is a spec[1] submitted to the Telemetry project for the Newton release
> which mentions that a new feature requires libvirt >= 1.3.4. I'm not sure if
> this will have a bad impact on the Nova service, so I am opening this thread
> to wait for your opinions.
> 
> [1]: https://review.openstack.org/#/c/311655/

Nova's policy is that we pick a minimum required libvirt version that
everyone must have; this is shown in this table:

  
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Nova_release_min_version

Nova will accept code changes which use features from a newer libvirt,
as long as they don't cause breakage for people with older libvirt.
Generally this means we'll use newer libvirt features only for functionality
that is new to Nova - we don't change existing Nova code to use new libvirt
features, since that would cause a regression.

IOW, I don't see any problem with you using a newer libvirt version,
provided you fall back gracefully, without error, when run against the
current minimum libvirt version Nova declares.
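
i.e. something like this pattern (an illustrative sketch, not Nova's
actual code):

import libvirt

# 1.3.4 encoded the way libvirt reports versions:
# major*1000000 + minor*1000 + release
MIN_LIBVIRT_FOR_NEW_FEATURE = 1 * 1000000 + 3 * 1000 + 4

def collect_extra_stats(conn, dom):
    if conn.getLibVersion() < MIN_LIBVIRT_FOR_NEW_FEATURE:
        # Older libvirt: silently skip the new feature rather than erroring
        return None
    # The call below is just a placeholder for whatever the new API is
    return dom.memoryStats()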

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Live Migration: Austin summit update

2016-05-03 Thread Daniel P. Berrange
On Fri, Apr 29, 2016 at 10:32:09PM +, Murray, Paul (HP Cloud) wrote:
> The following summarizes the status of the main topics relating to live migration
> after the Newton design summit. Please feel free to correct any inaccuracies
> or add additional information.

> Post copy
> 
> The spec to add post copy migration support in the libvirt driver was
> discussed in the live migration session. Post copy guarantees completion
> of a migration in linear time without needing to pause the VM. This can
> be used as an alternative to pausing in live-migration-force-complete.
> Pause or complete could also be invoked automatically under some
> circumstances. The issue slowing these specs is how to decide which
> method to use given they provide a different user experience but we
> don't want to expose virt specific features in the API. Two additional
> specs listed below suggest possible generic ways to address the issue.
> 
> There were no conclusions reached in the session, so the debate will
> continue on the specs. The first below is the main spec for the feature.
> 
> https://review.openstack.org/#/c/301509 : Adds post-copy live migration 
> support to Nova
> https://review.openstack.org/#/c/305425 : Define instance availability 
> profiles
> https://review.openstack.org/#/c/306561 : Automatic Live Migration Completion

There are currently many options for live migration with QEMU that can
assist in completion:

 - Pause the VM
 - Auto-converge
 - XBZRLE compression
 - Multi-thread compression
 - Post-copy

These are combined with tunables such as max-bandwidth and max-downtime. It is
absolutely clear as mud which of these work best for ensuring completion,
and what kind of impact they have on the guest performance.
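
For reference, at the libvirt level the knobs look roughly like this (an
illustrative sketch using the libvirt python bindings, not the Nova code):

import libvirt

def start_tuned_migration(dom, dest_uri):
    dom.migrateSetMaxSpeed(100)         # cap the migration stream at 100 MiB/s
    dom.migrateSetMaxDowntime(500, 0)   # aim for no more than 500ms of pause
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_AUTO_CONVERGE)
    # Post-copy would instead add VIR_MIGRATE_POSTCOPY here, and later call
    # dom.migrateStartPostCopy() once the first pre-copy pass has run.
    dom.migrateToURI3(dest_uri, {}, flags)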

Given this I've spent the last week creating an automated test harness
for QEMU upstream which triggers migration with an extreme guest CPU
load and measures the performance impact of these features on the guest,
and whether the migration actually completes.

I hope to be able to publish the results of this investigation this week
which should facilitate us in deciding which is best to use for OpenStack.
The spoiler though is that all the options are pretty terrible, except for
post-copy.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] next min libvirt?

2016-05-03 Thread Daniel P. Berrange
On Sat, Apr 30, 2016 at 10:28:23AM -0500, Thomas Bechtold wrote:
> Hi,
> 
> On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> > We've just landed the libvirt min to bump us up to 1.2.1 required. It's
> > probably a good time to consider the appropriate bump for Ocata.
> > 
> > By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> > (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
> >
> > My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> > that NUMA support in libvirt (excepting the blacklists) and huge page
> > support is assumed on x86_64.
> 
> Works also for SUSE which has 1.2.18 already in SLE 12 SP1.

Is there any public site where I can find details of what RPM versions
are present in SLES releases ? I was trying to find details last week
but was not able to find any info. If there's no public reference, could
you update the wiki with RPM details for libvirt, kvm and libguestfs:

https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] next min libvirt?

2016-05-03 Thread Daniel P. Berrange
On Fri, Apr 29, 2016 at 03:16:56PM -0500, Matt Riedemann wrote:
> On 4/29/2016 10:28 AM, Daniel P. Berrange wrote:
> >On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> >>We've just landed the libvirt min to bump us up to 1.2.1 required. It's
> >>probably a good time to consider the appropriate bump for Ocata.
> >>
> >>By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> >>(1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
> >
> >By the time Ocata is released, I think it'll be valid to ignore
> >RHEL-7.1, as we'll already be onto 7.3 at that time.
> >
> >>My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> >>that NUMA support in libvirt (excepting the blacklists) and huge page
> >>support is assumed on x86_64.
> >
> >If we ignore RHEL 7.1, we could go to 1.2.9 which is the min in Jessie.
> 
> Is there a simple reason why ignoring RHEL 7.1 is OK? Honestly I can't
> remember which OpenStack release came out around that time, was it Kilo?

By the time Ocata comes out, we'll be on RHEL-7.3 as the latest update,
so people really shouldn't be continuing to deploy on RHEL-7.1. IOW,
I think we should be aiming to target the $current & $current-1 RHEL-7
update releases.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] next min libvirt?

2016-04-29 Thread Daniel P. Berrange
On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> We've just landed the libvirt min to bump us up to 1.2.1 required. It's
> probably a good time to consider the appropriate bump for Ocata.
> 
> By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.

By the time Ocata is released, I think it'll be valid to ignore
RHEL-7.1, as we'll already be onto 7.3 at that time.

> My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> that NUMA support in libvirt (excepting the blacklists) and huge page
> support is assumed on x86_64.

If we ignore RHEL 7.1, we could go to 1.2.9 which is the min in Jessie.


We should also now consider our minimum QEMU version. Jessie will have
QEMU 2.1.0, 16.04 LTS will have 2.5.0 and RHEL 7.2 will have 2.3.0.

So that'd suggest a valid minimum QEMU/KVM version of 2.1.0, vs our
current 1.5.3.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] KVM Forum 2016: Call For Participation

2016-04-27 Thread Daniel P. Berrange
This is just a followup to remind people that the KVM Forum
CFP deadline of May 1st is rapidly approaching.

All the CFP information is here:

  http://events.linuxfoundation.org/events/kvm-forum/program/cfp

Regards,
Daniel (on behalf of the KVM Forum 2016 Program Committee)

On Thu, Mar 10, 2016 at 06:19:36PM +, Daniel P. Berrange wrote:
> =
> KVM Forum 2016: Call For Participation
> August 24-26, 2016 - Westin Harbor Castle - Toronto, Canada
> 
> (All submissions must be received before midnight May 1, 2016)
> =
> 
> KVM Forum is an annual event that presents a rare opportunity
> for developers and users to meet, discuss the state of Linux
> virtualization technology, and plan for the challenges ahead. 
> We invite you to lead part of the discussion by submitting a speaking
> proposal for KVM Forum 2016.
> 
> At this highly technical conference, developers driving innovation
> in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
> meet users who depend on KVM as part of their offerings, or to
> power their data centers and clouds.
> 
> KVM Forum will include sessions on the state of the KVM
> virtualization stack, planning for the future, and many
> opportunities for attendees to collaborate. As we celebrate ten years
> of KVM development in the Linux kernel, KVM continues to be a
> critical part of the FOSS cloud infrastructure.
> 
> This year, KVM Forum is joining LinuxCon and ContainerCon in Toronto, 
> Canada. Selected talks from KVM Forum will be presented on Wednesday
> August 24 to the full audience of LinuxCon and ContainerCon. Also,
> attendees of KVM Forum will have access to all of the LinuxCon and
> ContainerCon talks on Wednesday.
> 
> http://events.linuxfoundation.org/cfp
> 
> Suggested topics:
> 
> KVM and Linux
> * Scaling and optimizations
> * Nested virtualization
> * Linux kernel performance improvements
> * Resource management (CPU, I/O, memory)
> * Hardening and security
> * VFIO: SR-IOV, GPU, platform device assignment
> * Architecture ports
> 
> QEMU
> * Management interfaces: QOM and QMP
> * New devices, new boards, new architectures
> * Scaling and optimizations
> * Desktop virtualization and SPICE
> * Virtual GPU
> * virtio and vhost, including non-Linux or non-virtualized uses
> * Hardening and security
> * New storage features
> * Live migration and fault tolerance
> * High availability and continuous backup
> * Real-time guest support
> * Emulation and TCG
> * Firmware: ACPI, UEFI, coreboot, u-Boot, etc.
> * Testing
> 
> Management and infrastructure
> * Managing KVM: Libvirt, OpenStack, oVirt, etc.
> * Storage: glusterfs, Ceph, etc.
> * Software defined networking: Open vSwitch, OpenDaylight, etc.
> * Network Function Virtualization
> * Security
> * Provisioning
> * Performance tuning
> 
> 
> ===
> SUBMITTING YOUR PROPOSAL
> ===
> Abstracts due: May 1, 2016
> 
> Please submit a short abstract (~150 words) describing your presentation
> proposal. Slots vary in length up to 45 minutes. Also include the proposal
> type -- one of:
> - technical talk
> - end-user talk
> 
> Submit your proposal here:
> http://events.linuxfoundation.org/cfp
> Please only use the categories "presentation" and "panel discussion"
> 
> You will receive a notification whether or not your presentation proposal
> was accepted by May 27, 2016.
> 
> Speakers will receive a complimentary pass for the event. In the instance
> that your submission has multiple presenters, only the primary speaker for a
> proposal will receive a complimentary event pass. For panel discussions, all
> panelists will receive a complimentary event pass.
> 
> TECHNICAL TALKS
> 
> A good technical talk should not just report on what has happened over
> the last year; it should present a concrete problem and how it impacts
> the user and/or developer community. Whenever applicable, focus on
> work that needs to be done, difficulties that haven't yet been solved,
> and on decisions that other developers should be aware of. Summarizing
> recent developments is okay but it should not be more than a small
> portion of the overall talk.
> 
> END-USER TALKS
> 
> One of the big challenges as developers is to know what, where and how
> people actually use our software. We will reserve a few slots for end
> users talking about their deployment challenges and achievements.
> 
> If you are using KVM in production you are encouraged to submit a speaking
> proposal. Simply mark it as an end-user talk. As an end user, this is a
> unique

Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-04-26 Thread Daniel P. Berrange
On Tue, Apr 26, 2016 at 04:24:52PM +0200, Jordan Pittier wrote:
> On Tue, Apr 26, 2016 at 3:32 PM, Daniel P. Berrange <berra...@redhat.com>
> wrote:
> 
> > On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
> > > Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
> > > > Hello, oslo team
> > > >
> > > > For now, some sensitive options like password or token are configured
> > > > as plaintext, anyone who has the privilege to read the config file
> > > > can get the real password, this may be a security problem that is
> > > > unacceptable for some people.
> >
> It's not a security problem if your config files have the proper
> permissions.

Permissions on disk are only one of many problems with storing passwords
in config files. When people report bugs to upstream or vendors they
frequently have to provide their configuration files as attachments to
the bug. This easily compromises their passwords unless they remember
to scrub them before attaching them to the bug, which experience shows most
people forget to do. We've had countless issues with code inside OpenStack
logging variables which contain passwords, causing us to come up with
stupid hacks to try to scrub passwords before logging. If you want to
change your database password, you are now forced to update the config
files on 100s or 1000s of nodes. Sure, mgmt tools can automate this,
but it would be better if the problem didn't exist in the first place.

> > > > So the first solution that comes to my mind is to encrypt these options
> > > > when configuring them and decrypt them when reading them in oslo.config.
> > > > This is a bit like what apache/openldap do, but the difference is that
> > > > those programs apply a salted hash to the password - a one-way encryption
> > > > that can't be decrypted - and they can recognize the hashed value. But if
> > > > we do this work in oslo.config, for example the admin_password in the
> > > > keystone_middleware section, we must feed keystone with the plaintext
> > > > password, which will be hashed in keystone to compare with the stored
> > > > hashed password, thus the encrypted value in oslo.config must be decrypted
> > > > to plaintext. So we should encrypt these options using a symmetric or
> > > > asymmetric method with a key, put the key in a well secured place, and
> > > > decrypt them using the same key when reading them.
> >
> The issue here is to find a "well secured place". We should not just move
> the problem somewhere else.

There is already barbican which could potentially fill that role:

  "Barbican is a REST API designed for the secure storage, provisioning
   and management of secrets such as passwords, encryption keys and X.509
   Certificates." [1]

On startup a process, such as nova, could contact barbican to retrieve
the credentials it should use for authenticating with each other service
that requires a password.
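
e.g. a rough sketch of what that lookup could look like with
python-barbicanclient (illustrative only - and of course the service still
needs some bootstrap credential to authenticate with keystone in the first
place):

from barbicanclient import client
from keystoneauth1 import session
from keystoneauth1.identity import v3

def fetch_password(secret_ref, auth_url, username, password, project_name):
    auth = v3.Password(auth_url=auth_url, username=username, password=password,
                       project_name=project_name,
                       user_domain_name="Default",
                       project_domain_name="Default")
    barbican = client.Client(session=session.Session(auth=auth))
    return barbican.secrets.get(secret_ref).payload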

> > > >
> > > > Of course, this feature should be disabled by default. Any ideas?
> > >
> > > Managing the encryption keys has always been the issue blocking
> > > implementing this feature when it has come up in the past. We can't have
> > > oslo.config rely on a separate OpenStack service for key management,
> > > because presumably that service would want to use oslo.config and then
> > > we have a dependency cycle.
> > >
> > > So, we need a design that lets us securely manage those encryption keys
> > > before we consider adding encryption. If we solve that, it's then
> > > probably simpler to encrypt an entire config file instead of worrying
> > > about encrypting individual values (something like how ansible vault
> > > works).
> >
> > IMHO encrypting oslo config files is addressing the wrong problem.
> > Rather than having sensitive passwords stored in the main config
> > files, we should have them stored completely separately by a secure
> > password manager of some kind. The config file would then merely
> > contain the name or uuid of an entry in the password manager. The
> > service (eg nova-compute) would then query that password manager
> > to get the actual sensitive password data it requires. At this point
> > oslo.config does not need to know/care about encryption of its data
> > as there's no longer sensitive data stored.
>
> This looks complicated. I like text files that I can quickly view and edit,
> if I am authorized to (through good old plain Linux permissions).


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-04-26 Thread Daniel P. Berrange
On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
> Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
> > Hello, oslo team
> > 
> > For now, some sensitive options like password or token are configured as
> > plaintext, anyone who has the privilege to read the config file can get
> > the real password, this may be a security problem that is
> > unacceptable for some people.
> > 
> > So the first solution that comes to my mind is to encrypt these options
> > when configuring them and decrypt them when reading them in oslo.config.
> > This is a bit like what apache/openldap do, but the difference is that
> > those programs apply a salted hash to the password - a one-way encryption
> > that can't be decrypted - and they can recognize the hashed value. But if
> > we do this work in oslo.config, for example the admin_password in the
> > keystone_middleware section, we must feed keystone with the plaintext
> > password, which will be hashed in keystone to compare with the stored
> > hashed password, thus the encrypted value in oslo.config must be decrypted
> > to plaintext. So we should encrypt these options using a symmetric or
> > asymmetric method with a key, put the key in a well secured place, and
> > decrypt them using the same key when reading them.
> > 
> > Of course, this feature should be disabled by default. Any ideas?
> 
> Managing the encryption keys has always been the issue blocking
> implementing this feature when it has come up in the past. We can't have
> oslo.config rely on a separate OpenStack service for key management,
> because presumably that service would want to use oslo.config and then
> we have a dependency cycle.
> 
> So, we need a design that lets us securely manage those encryption keys
> before we consider adding encryption. If we solve that, it's then
> probably simpler to encrypt an entire config file instead of worrying
> about encrypting individual values (something like how ansible vault
> works).

IMHO encrypting oslo config files is addressing the wrong problem.
Rather than having sensitive passwords stored in the main config
files, we should have them stored completely separately by a secure
password manager of some kind. The config file would then merely
contain the name or uuid of an entry in the password manager. The
service (eg nova-compute) would then query that password manager
to get the actual sensitive password data it requires. At this point
oslo.config does not need to know/care about encryption of its data
as there's no longer sensitive data stored.
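
On the config side that would mean something like this (the option name is
purely illustrative, nothing that exists today):

from oslo_config import cfg

opts = [
    cfg.StrOpt('password_secret_ref',
               help='Reference to the database password held in an external '
                    'secret store; the password itself never appears here'),
]
CONF = cfg.CONF
CONF.register_opts(opts, group='database')

def database_password(secret_store):
    # secret_store is whatever password manager backend is chosen
    return secret_store.lookup(CONF.database.password_secret_ref)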

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Encrypted Ephemeral Storage

2016-04-25 Thread Daniel P. Berrange
On Mon, Apr 25, 2016 at 04:28:17PM +, Coffman, Joel M. wrote:
> Based on the comments to the RBD encryption change [1], it looks
> like there will be a new direction for ephemeral disk encryption
> (embedding it in QEMU directly). I assume LVM will work the same
> way when the time comes. Will there be a migration path for the
> existing ephemeral disk encryption support for LVM to the new
> model?
> 
> [1] https://review.openstack.org/#/c/239798/
> 
> Yes, as I understand it, the long-term goal is to provide encryption
> support directly in QEMU and have a unified interface for LVM, RBD,
> and file-based backends. I do not yet know what the potential
> migration path will look like.

The forthcoming QEMU 2.6 release will include native support for the
LUKS data format. There is a test suite with QEMU to prove that this
is interoperable with the kernel dm-crypt/cryptsetup tools. So there
will be no data migration required. Nova will merely need to change the
guest configuration to point QEMU directly at the encrypted LVM volume,
instead of creating a dm-crypt volume wrapper. QEMU will then decrypt
the LVM volume directly.
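
Roughly speaking, the guest disk would then be described along these lines
(an illustrative sketch of where the libvirt support is heading, not the
XML Nova generates today):

DISK_XML_TEMPLATE = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='{lvm_path}'/>
  <target dev='vda' bus='virtio'/>
  <encryption format='luks'>
    <secret type='passphrase' uuid='{secret_uuid}'/>
  </encryption>
</disk>
"""

def encrypted_disk_xml(lvm_path, secret_uuid):
    # QEMU opens the LVM volume directly and does the LUKS decryption itself,
    # so no dm-crypt wrapper device is needed on the host.
    return DISK_XML_TEMPLATE.format(lvm_path=lvm_path, secret_uuid=secret_uuid)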

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] os-vif status report

2016-04-22 Thread Daniel P. Berrange
On Fri, Apr 22, 2016 at 04:25:54AM +, Angus Lees wrote:
> In case it wasn't already assumed, anyone is welcome to contact me directly
> (irc: gus, email, or in Austin) if they have questions or want help with
> privsep integration work.  It's early days still and the docs aren't
> extensive (ahem).
> 
> os-brick privsep change just recently merged (yay), and I have the bulk of
> the neutron ip_lib conversion almost ready for review, so os-vif is a good
> candidate to focus on for this cycle.

FYI, privsep support merged in os-vif last week and is working nicely.
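
For anyone curious, the integration pattern looks roughly like this
(illustrative names, not the exact os-vif code):

import subprocess

from oslo_privsep import capabilities
from oslo_privsep import priv_context

vif_plug = priv_context.PrivContext(
    "vif_plug_example",
    cfg_section="vif_plug_example_privileged",
    pypath=__name__ + ".vif_plug",
    capabilities=[capabilities.CAP_NET_ADMIN],
)

@vif_plug.entrypoint
def set_device_up(dev_name):
    # Runs with CAP_NET_ADMIN inside the privsep helper process
    subprocess.check_call(["ip", "link", "set", dev_name, "up"])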


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

