Re: [Openstack-operators] Inverted drive letters on block devices that use virtio-scsi

2018-01-26 Thread melanie witt
> On Jan 25, 2018, at 21:23, Logan V. wrote: > There is a small patch in the bug which resolves the config drive ordering. Without that patch I don't know of any workaround. The config drive will always end up first in the boot order and the instance will always fail to boot in that situa

Re: [Openstack-operators] Inverted drive letters on block devices that use virtio-scsi

2018-01-26 Thread Jay Pipes
The bug in question doesn't have anything to do with that. I've pushed a fix and a test case up here: https://review.openstack.org/538310 Best, -jay On 01/26/2018 12:16 PM, Blake Covarrubias wrote: The inconsistency in device naming is documented in https://docs.openstack.org/nova/pike/user/b

Re: [Openstack-operators] Inverted drive letters on block devices that use virtio-scsi

2018-01-26 Thread Blake Covarrubias
The inconsistency in device naming is documented in https://docs.openstack.org/nova/pike/user/block-device-mapping.html#intermezzo-problem-with-device-names . Similar to Tim's suggested approach, you can also mount the device by its UUID. A while back I wrote a small, relatively untested, Python s
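The Python script mentioned above is cut off by the archive. Purely as an illustration of the mount-by-UUID idea (this is not the poster's script), a minimal sketch for a Linux guest, relying on udev's /dev/disk/by-uuid symlinks; the UUID and mount point are made-up placeholders:

```python
#!/usr/bin/env python
# Sketch only: resolve a filesystem UUID to its current device node and
# mount it, so the mount survives virtio-scsi reordering of /dev/sdX names.
import os
import subprocess
import sys

def device_for_uuid(uuid):
    """Return the real device node behind /dev/disk/by-uuid/<uuid>."""
    link = os.path.join('/dev/disk/by-uuid', uuid)
    if not os.path.exists(link):
        raise RuntimeError('no filesystem with UUID %s found' % uuid)
    return os.path.realpath(link)

def mount_by_uuid(uuid, mountpoint):
    dev = device_for_uuid(uuid)
    os.makedirs(mountpoint, exist_ok=True)
    # Equivalent to an fstab entry keyed on UUID=<uuid> rather than /dev/sdX.
    subprocess.check_call(['mount', dev, mountpoint])
    return dev

if __name__ == '__main__':
    # Usage: mount_by_uuid.py <filesystem-uuid> <mountpoint>
    print(mount_by_uuid(sys.argv[1], sys.argv[2]))
```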

Re: [Openstack-operators] Passing additional parameters to KVM for a single instance

2018-01-26 Thread Flint WALRUS
I would rather suggest dealing with flavor/image metadata and host aggregates for such segregation of CPU capacity and versioning. If someone has another technique I'm pretty curious about it too. On Fri, Jan 26, 2018 at 17:00, Gary Molenkamp wrote: > I'm trying to import a Solaris10 image i
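For readers unfamiliar with that approach, the usual pattern is to tag a host aggregate with a property, enable nova's AggregateInstanceExtraSpecsFilter scheduler filter, and mirror the property in the flavor's extra specs. A rough sketch driving the standard openstack CLI from Python; the aggregate name, flavor name, host names and the cpu_gen key are illustrative placeholders, not from the thread:

```python
#!/usr/bin/env python
# Sketch: pin a flavor to hosts in a "westmere" aggregate via
# AggregateInstanceExtraSpecsFilter.
import subprocess

def openstack(*args):
    subprocess.check_call(['openstack'] + list(args))

# 1. Create an aggregate and tag it with an arbitrary property.
openstack('aggregate', 'create', '--property', 'cpu_gen=westmere',
          'westmere-hosts')

# 2. Add the compute hosts that expose the required CPU model.
for host in ['compute-01', 'compute-02']:
    openstack('aggregate', 'add', 'host', 'westmere-hosts', host)

# 3. Mirror the property on the flavor so the scheduler only places
#    instances of this flavor on hosts in the aggregate (requires
#    AggregateInstanceExtraSpecsFilter in nova's enabled filters).
openstack('flavor', 'set', '--property',
          'aggregate_instance_extra_specs:cpu_gen=westmere', 'm1.westmere')
```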

[Openstack-operators] Passing additional parameters to KVM for a single instance

2018-01-26 Thread Gary Molenkamp
I'm trying to import a Solaris10 image into Ocata that is working under libvirt/KVM on a Fedora workstation. However, in order for the KVM instance to work, it needs a few additional parameters to qemu that I use in the libvirt XML file: [XML elements stripped by the archive; only the CPU model name "Westmere" survives] For the first parameter,
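Since the XML did not survive the archive, a hedged aside for anyone hitting the same problem: nova generally does not expose arbitrary per-instance qemu arguments, but the CPU model part can be set host-wide through nova's standard [libvirt] options. This is based on the standard Ocata options, not something stated in the thread:

```ini
# Host-wide alternative (affects every instance on that compute node),
# set in /etc/nova/nova.conf on the compute host.
[libvirt]
cpu_mode = custom
cpu_model = Westmere
```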

[Openstack-operators] [charms] Migrating HA control plane by scaling up and down

2018-01-26 Thread Sandor Zeestraten
Hey OpenStack Charmers, We have a Newton deployment on MAAS with 3 controller machines running all the usual OpenStack controller services in 3x HA with the hacluster charm in LXD containers. Now we'd like to migrate some of those OpenStack services to 3 larger controller machines. Downtime of the se

Re: [Openstack-operators] Openstack AIO in production

2018-01-26 Thread Shake Chen
You can try kolla and kolla-ansible. First you deploy one node as master; in the future you can extend the master to three or five, it is no problem. On Fri, Jan 26, 2018 at 7:44 PM, Debabrata Das wrote: > Hi, We are a small shop and have our customers host our solution in their data centers. We
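For context on that suggestion, kolla-ansible scales by re-running the same playbooks against a grown inventory (the new controllers are added to the control group of the inventory file and the commands are run again). A rough Python outline based on the standard kolla-ansible CLI; the inventory path and step list are assumptions, not quoted from the thread:

```python
#!/usr/bin/env python
# Sketch of the flow behind "deploy one controller now, extend to three
# later": the same commands are re-run after growing the inventory.
import subprocess

INVENTORY = '/etc/kolla/multinode'  # placeholder path

def kolla(action):
    subprocess.check_call(['kolla-ansible', '-i', INVENTORY, action])

for step in ('bootstrap-servers', 'prechecks', 'deploy'):
    kolla(step)
```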

Re: [Openstack-operators] [neutron] [os-vif] VF overcommitting and performance in SR-IOV

2018-01-26 Thread Maciej Kucia
Appreciate the feedback. It seems the conclusion is that generally one can safely enable a large number of VFs, with the exception of some limited hardware configurations which might require reducing the number of VFs due to BIOS limitations. Thanks & Regards, Maciej 2018-01-23 3:39 GMT+01:00 Blair Bethwaite
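For reference, on Linux the VF count is typically raised through the PF's sriov_numvfs sysfs attribute, bounded by sriov_totalvfs. A minimal sketch, assuming a PF named eno1 (a placeholder), root privileges, and SR-IOV enabled in firmware; this is not part of Maciej's message:

```python
#!/usr/bin/env python
# Sketch: enable the maximum number of VFs advertised by a PF via sysfs.
import sys

def enable_max_vfs(pf='eno1'):
    base = '/sys/class/net/%s/device' % pf
    with open(base + '/sriov_totalvfs') as fh:
        total = int(fh.read())
    # Drivers require passing through 0 before setting a new non-zero count.
    with open(base + '/sriov_numvfs', 'w') as fh:
        fh.write('0')
    with open(base + '/sriov_numvfs', 'w') as fh:
        fh.write(str(total))
    return total

if __name__ == '__main__':
    pf = sys.argv[1] if len(sys.argv) > 1 else 'eno1'
    print('%s: enabled %d VFs' % (pf, enable_max_vfs(pf)))
```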

[Openstack-operators] Openstack AIO in production

2018-01-26 Thread Debabrata Das
Hi, We are a small shop and have our customers host our solution in their data centers. We plan to use OpenStack to automate our delivery but are challenged by the minimum hardware required to have an HA system. Most of our customers have 2-4 servers and a dedicated HA is practical for u