At: Apr 18 2016 10:34:24
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
nova.conf?
On 04/18/2016 10:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> Requiring users to remember to pass specific userdata through to their
> instance at every launch in order to r[...]
[...] would pursue.
What is the rationale for desiring to remove this functionality?
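For context, the option being deprecated let operators point nova.conf at a driver class whose output Nova serves to every instance as vendor_data.json via the metadata service. A rough, standalone sketch of that driver shape — the base class here is a stand-in for nova.api.metadata.vendordata.VendorDataDriver so it runs without a Nova install, and the subclass and its data are purely illustrative:

```python
# Standalone stand-in for the legacy vendordata driver interface: a class
# with a get() method returning a dict. In a real deployment the driver
# would subclass nova.api.metadata.vendordata.VendorDataDriver and be named
# in nova.conf via the (now deprecated) vendordata_driver option.

class VendorDataDriver:
    """Stand-in mimicking nova.api.metadata.vendordata.VendorDataDriver."""

    def __init__(self, *args, **kwargs):
        self._data = {}

    def get(self):
        # Nova serves this dict to instances as vendor_data.json.
        return self._data


class SiteVendorData(VendorDataDriver):
    """Hypothetical driver injecting site-wide settings into every instance."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._data = {
            "ntp_servers": ["10.0.0.1", "10.0.0.2"],  # example values
            "puppet_master": "puppet.example.com",    # example value
        }


if __name__ == "__main__":
    print(SiteVendorData().get())
```

The appeal over userdata, as the thread suggests, is that this data reaches every instance without tenants having to remember to pass anything at launch.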
From: jaypi...@gmail.com
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
nova.conf?
On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> I noticed while reading through Mit[...]
I noticed while reading through Mitaka release notes that vendordata_driver has
been deprecated in Mitaka (https://review.openstack.org/#/c/288107/) and is
slated for removal at some point. This came as somewhat of a surprise to me - I
searched openstack-dev for vendordata-related subject lines [...]
We have a situation where tenant A is trying to launch large numbers of
instances from a single RBD volume snapshot in Cinder (e.g., 40 instances at
once). We made an unrelated change recently to enable
rbd_flatten_volume_from_snapshot by default in order to save tenants who create
large [...]
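The option mentioned above is a Cinder volume-driver setting. A sketch of how it might be set, assuming an RBD backend section named [ceph] (the section name is deployment-specific):

```ini
# cinder.conf (on cinder-volume nodes); backend section name is illustrative
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Flatten volumes created from a snapshot so they don't remain
# copy-on-write clones chained to the parent snapshot:
rbd_flatten_volume_from_snapshot = true
```

The trade-off is that flattening copies the data up front, so creating many volumes from one snapshot at once becomes heavier on the Ceph cluster.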
Hey Saverio,
We currently implement it by setting images_type=lvm under [libvirt] in
nova.conf on hypervisors that have the LVM+RAID0 and then providing different
flavors (e1.* versus the default m1.* flavors) that launch instances on a host
aggregate for the LVM-hosting hypervisors. I suspect [...]
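The hypervisor-side piece of that setup is a couple of [libvirt] options in nova.conf; the volume group name below is illustrative:

```ini
# nova.conf on the LVM-backed hypervisors (VG name is illustrative)
[libvirt]
images_type = lvm
images_volume_group = nova_vg  # VG built on top of the RAID 0 set
```

Instances scheduled to these hosts (via the e1.* flavors tied to the host aggregate) then get their disks as LVs in that VG instead of qcow2 files or RBD volumes.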
We're in mostly the same boat: using nova-network with VLAN segmentation and
looking at a Neutron migration (though ours may take a more drastic path and
take us to Neutron+Calico). One question I have for you: the largest issue and
conceptual leap we had when initially prototyping [...]
The subject says it all - does anyone know of a method by which quota can be
enforced on storage provisioned via Nova rather than Cinder? Googling around
appears to indicate that this is not possible out of the box (e.g.,
https://ask.openstack.org/en/question/8518/disk-quota-for-projects/).
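Out of the box there is no quota on Nova-side (flavor-defined) disk, but since that disk is a function of each instance's flavor, usage can at least be tallied and policed externally. A rough sketch of that accounting — the instance and quota data here are fabricated; a real version would pull servers and flavors from the Nova API:

```python
# Tally Nova-provisioned disk (flavor root + ephemeral) per tenant and flag
# tenants exceeding a locally enforced limit. Data is hardcoded for
# illustration; a real version would query the Nova API instead.
from collections import defaultdict

# (tenant_id, flavor_root_gb, flavor_ephemeral_gb) per instance -- fabricated
INSTANCES = [
    ("tenant-a", 40, 0),
    ("tenant-a", 40, 20),
    ("tenant-b", 80, 0),
]

# Hypothetical local policy, not a real Nova/Cinder quota
DISK_QUOTA_GB = {"tenant-a": 100, "tenant-b": 50}


def disk_usage_gb(instances):
    """Sum flavor-defined disk per tenant, in GB."""
    usage = defaultdict(int)
    for tenant, root_gb, ephemeral_gb in instances:
        usage[tenant] += root_gb + ephemeral_gb
    return dict(usage)


def over_quota(usage, quotas):
    """Return tenants whose tallied usage exceeds their local limit."""
    return {t: used for t, used in usage.items() if used > quotas.get(t, 0)}


if __name__ == "__main__":
    usage = disk_usage_gb(INSTANCES)
    print(usage)                             # {'tenant-a': 100, 'tenant-b': 80}
    print(over_quota(usage, DISK_QUOTA_GB))  # {'tenant-b': 80}
```

This only reports after the fact, of course; actually blocking launches would need something like a scheduler filter or an external policy check, which is outside what Nova offers natively here.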
Thanks Neil, very helpful.
From: neil.jer...@metaswitch.com
Subject: Re: [Openstack-operators] Anyone using Project Calico for tenant
networking?
Hi Ned,
Sorry for the delay in following up here.
On 06/02/16 14:40, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> Thanks. Having read the documentat[...]
In our environments, we offer two types of storage. Tenants can either use
Ceph/RBD and trade speed/latency for reliability and protection against
physical disk failures, or they can launch instances that are realized as LVs
on an LVM VG that we create on top of a RAID 0 spanning all but the OS [...]
EDMUND RHUDY, openstack-operators@lists.openstack.org
At: 05-Feb-2016 14:11:34
On 05/02/16 19:03, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> I meant in a general sense of the networking technology that you're
> using for instance networking, not in the sense of per-tenant networks,
> th[...]
Hello operators,
We're continuing to investigate different cloud networking technologies as part
of our project to migrate from nova-network to Neutron, and one that's come up
frequently is Project Calico (http://www.projectcalico.org/). However, we can't
actually find anything in the way of [...]
[...] we think we could probably do
without that.
From: neil.jer...@metaswitch.com
Subject: Re: [Openstack-operators] Anyone using Project Calico for tenant
networking?
On 05/02/16 18:19, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> Hello operators,
>
> We're continuing to investigate differ[...]