Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-04-18 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Okay, if I propose something upstream what is it expected to look like? There 
is apparently a high level of opinionation around exposed class loaders that I 
wasn't aware of, and I don't think there's any one-size-fits-all solution here. 
If I suggested something like adding additional instance metadata under 
/openstack/latest/meta_data.json, that might be suitable for us as private 
cloud operators but pose security risks to public cloud operators. I don't want 
to propose something that sucks and has Bloomberg pathologies all over it but 
gets jackhammered in anyway.
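
For concreteness, here is a minimal sketch of the sort of thing being discussed, 
assuming the Mitaka-era driver shape (a class whose get() returns a dict that 
Nova serves as vendor_data.json); the class name, constructor arguments, and 
keys below are illustrative assumptions, not a concrete proposal:

# Hypothetical sketch only -- not an upstream interface proposal.
# Assumes the Mitaka-era vendordata contract: a driver class whose get()
# returns a dict that Nova serves to the guest as vendor_data.json.

class DynamicVendorData(object):
    def __init__(self, **kwargs):
        # Nova passes request context here; exactly what is available is
        # version-dependent, so treat this as an assumption.
        self.instance = kwargs.get('instance')

    def get(self):
        # Build per-instance JSON instead of reading a static file. A real
        # driver might call out to an internal inventory or config service;
        # the values below are illustrative.
        hostname = getattr(self.instance, 'hostname', 'unknown')
        return {
            'bloomberg': {            # illustrative namespace
                'role': 'generic',    # would be looked up per instance
                'hostname': hostname,
            }
        }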

From: s...@dague.net At: Apr 18 2016 10:34:24
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in 
nova.conf?

On 04/18/2016 10:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> Requiring users to remember to pass specific userdata through to their
> instance at every launch in order to replace functionality that
> currently works invisibly to them would be a step backwards. It's an
> alternative, yes, but it's an alternative that adds burden to our users
> and is not one we would pursue.
> 
> What is the rationale for desiring to remove this functionality?

The Nova team would like to remove every config option that specifies an
arbitrary out-of-tree class file at a function point. This has been the
sentiment for a while, and we did a wave of deprecations at the end of
Mitaka to signal this more broadly, because as an arbitrary class loader
it is completely impossible to even understand who might be using it and how.

These interfaces are not considered stable or contractual, so exposing
them as a raw class loader is something that we want to stop doing, as
we're going to horribly break people at some point. It's fine if there
are multiple implementations for these things; however, those should all
be upstream, and selected by a symbolic name CONF option.

One of the alternatives is to propose your solution upstream.

  -Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-04-18 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Requiring users to remember to pass specific userdata through to their instance 
at every launch in order to replace functionality that currently works 
invisibly to them would be a step backwards. It's an alternative, yes, but it's 
an alternative that adds burden to our users and is not one we would pursue.

What is the rationale for desiring to remove this functionality?

From: jaypi...@gmail.com 
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in 
nova.conf?

On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> I noticed while reading through Mitaka release notes that
> vendordata_driver has been deprecated in Mitaka
> (https://review.openstack.org/#/c/288107/) and is slated for removal at
> some point. This came as somewhat of a surprise to me - I searched
> openstack-dev for vendordata-related subject lines going back to January
> and found no discussion on the matter (IRC logs, while available on
> eavesdrop, are not trivially searchable without a little scripting to
> fetch them first, so I didn't check there yet).
>
> We at Bloomberg make heavy use of this particular feature to inject
> dynamically generated JSON into the metadata service of instances; the
> content of the JSON differs depending on the instance making the request
> to the metadata service. The functionality that adds the contents of a
> static JSON file, while remaining around, is not suitable for our use case.
>
> Please let me know if you use vendordata_driver so that I/we can present
> an organized case for why this option or equivalent functionality needs
> to remain around. The alternative is that we end up patching the
> vendordata driver directly in Nova when we move to Mitaka, which I'd
> like to avoid; as a matter of principle I would rather see more
> classloader overrides, not fewer.

Wouldn't an alternative be to use something like Chef, Puppet, Ansible, 
Saltstack, etc and their associated config variable storage services 
like Hiera or something similar to publish custom metadata? That way, 
all you need to pass to your instance (via userdata) is a URI or 
connection string and some auth details for your config storage service 
and the instance can grab whatever you need.
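
A rough sketch of the instance-side half of that suggestion, assuming the user 
data is a small JSON pointer blob and the config store speaks plain HTTPS (the 
URL layout, key names, and auth header are made up for illustration):

# In-guest bootstrap sketch: read a pointer out of user_data, then fetch the
# real config from an external store. Assumes user_data is a small JSON blob
# such as {"config_uri": "https://config.example.com/v1/me", "token": "..."}
# (shape, URL, and auth header are illustrative) and that the 'requests'
# library is available in the guest image.
import requests

USER_DATA_URL = 'http://169.254.169.254/openstack/latest/user_data'

userdata = requests.get(USER_DATA_URL, timeout=10).json()
node_config = requests.get(userdata['config_uri'],
                           headers={'X-Auth-Token': userdata['token']},
                           timeout=10).json()

# node_config now carries whatever the config service published for this
# node; hand it to Chef/Puppet/Salt/Ansible from here.
print(node_config)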

Thoughts?
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-04-18 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
I noticed while reading through Mitaka release notes that vendordata_driver has 
been deprecated in Mitaka (https://review.openstack.org/#/c/288107/) and is 
slated for removal at some point. This came as somewhat of a surprise to me - I 
searched openstack-dev for vendordata-related subject lines going back to 
January and found no discussion on the matter (IRC logs, while available on 
eavesdrop, are not trivially searchable without a little scripting to fetch 
them first, so I didn't check there yet).

We at Bloomberg make heavy use of this particular feature to inject dynamically 
generated JSON into the metadata service of instances; the content of the JSON 
differs depending on the instance making the request to the metadata service. 
The functionality that adds the contents of a static JSON file, while remaining 
around, is not suitable for our use case.
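
For context, the dynamically generated JSON surfaces to the guest at the 
standard vendordata path in the metadata service; a guest reads it back with 
something like this trimmed sketch (no error handling):

# Trimmed sketch: how a guest reads back vendordata that a driver produced.
# vendor_data.json is the standard OpenStack metadata path; 'requests' is
# assumed to be available in the guest image.
import requests

VENDORDATA_URL = 'http://169.254.169.254/openstack/latest/vendor_data.json'

vendor_data = requests.get(VENDORDATA_URL, timeout=10).json()
print(vendor_data)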

Please let me know if you use vendordata_driver so that I/we can present an 
organized case for why this option or equivalent functionality needs to remain 
around. The alternative is that we end up patching the vendordata driver 
directly in Nova when we move to Mitaka, which I'd like to avoid; as a matter 
of principle I would rather see more classloader overrides, not fewer.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Avoiding Cinder stampedes with RBD? (Kilo)

2016-03-22 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
We have a situation where tenant A is trying to launch large numbers of 
instances from a single RBD volume snapshot in Cinder (e.g., 40 instances at 
once). We made an unrelated change recently to enable 
rbd_flatten_volume_from_snapshot by default in order to save tenants who create 
large chains of volumes and snapshots until they run out of quota and then 
can't figure out how to unwind the chain, because the relationships are not 
trivially traceable. This change appears to be causing tenant A significant 
heartache now, because when he launches some large number of instances at once, 
most of his instance launches time out, presumably because Cinder is stampeding 
onto Ceph and trying to create flattened RBD images for every single instance 
simultaneously.
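
One client-side mitigation, sketched below for illustration only, would be to 
cap how many boot-from-snapshot requests are in flight at once; boot_one() here 
is a hypothetical stand-in for the volume-create/boot calls, and the limit of 4 
is an arbitrary assumption:

# Sketch of client-side throttling: launch N instances from the same
# snapshot, but keep only a few flatten-heavy creations in flight at once.
# boot_one() is a hypothetical stand-in for "create volume from snapshot,
# wait for it to become available, then boot an instance from it".
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 4  # arbitrary; tune to what the Ceph cluster tolerates


def boot_one(index, snapshot_id):
    # ... cinderclient/novaclient calls would go here ...
    return 'instance-%d-from-%s' % (index, snapshot_id)


def boot_many(count, snapshot_id):
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        futures = [pool.submit(boot_one, i, snapshot_id)
                   for i in range(count)]
        return [f.result() for f in futures]


if __name__ == '__main__':
    print(boot_many(40, 'example-snapshot-id'))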

My question is, is there anything I can do to stop this stampeding? A review of 
cinder.conf options for Kilo didn't point out any obvious setting that could be 
adjusted to force Cinder to serialize its operations here, but maybe I missed 
something.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RAID / stripe block storage volumes

2016-03-07 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Hey Saverio,

We currently implement it by setting images_type=lvm under [libvirt] in 
nova.conf on the hypervisors that have the LVM+RAID 0 setup, and then providing 
different flavors (e1.* versus the default m1.* flavors) that launch instances 
on a host aggregate for the LVM-hosting hypervisors. I suspect this system is 
similar to what you use.
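
For concreteness, the flavor/aggregate wiring amounts to roughly the sketch 
below; it assumes AggregateInstanceExtraSpecsFilter is enabled in 
scheduler_default_filters, the caller supplies an authenticated python-novaclient 
client, and the names (lvm-raid0, e1.medium) and the storage=lvm key are purely 
illustrative:

# Rough sketch of tying an e1.* flavor to an LVM-backed host aggregate.
# Assumes AggregateInstanceExtraSpecsFilter is enabled in
# scheduler_default_filters; names and the storage=lvm key are illustrative.


def setup_lvm_flavor(nova, hosts):
    # Host aggregate grouping the LVM-hosting hypervisors.
    agg = nova.aggregates.create('lvm-raid0', None)
    nova.aggregates.set_metadata(agg.id, {'storage': 'lvm'})
    for host in hosts:
        nova.aggregates.add_host(agg.id, host)

    # Flavor whose extra spec restricts it to hosts in that aggregate.
    flavor = nova.flavors.create('e1.medium', ram=4096, vcpus=2, disk=80)
    flavor.set_keys({'aggregate_instance_extra_specs:storage': 'lvm'})
    return flavor

# Usage, assuming an admin-scoped novaclient instance:
#   setup_lvm_flavor(nova, ['compute-lvm-01', 'compute-lvm-02'])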

The advantage is that it was very simple to implement and that it guarantees 
the volume will be on the same hypervisor as the instance. The disadvantages 
are probably things you've also experienced:

- no quota management because Nova considers it local storage (Warren Wang and 
I had complained about this in separate postings to this ML)
- can't create additional volumes on the LVM after instance launch because 
they're not managed by Cinder

Our users like it because they've figured out these LVM volumes are exempt from 
quota management, and because it's fast; our most active hypervisors on any 
given cluster are invariably the LVM ones. So far users have also gotten lucky: 
not a single RAID 0 has failed in the six months since we began deploying this 
solution, so there's probably a bit of a gap between the reliability users 
currently perceive and what they should actually expect.

I have begun thinking about ways of improving this system so as to bring these 
volumes under the control of Cinder, but have not come up with anything that I 
think would actually work. We discarded implementing iSCSI because of 
administrative overhead (who really wants to manage iSCSI?) and because it 
would negate the automatic forced locality; the whole point of the design was 
to provide maximum possible block storage speed, and if we have iSCSI traffic 
going over the storage network and competing with Ceph traffic, you get latency 
from the network, Ceph performance is degraded, and nobody's happy. I could 
possibly add cinder-volume to all the LVM hypervisors and register each one as 
a Cinder AZ, but I'm not sure if Nova would create the volume in the right AZ 
when scheduling an instance, and it would also break the fourth wall on users 
knowing what hypervisor is hosting their instance.

From: ziopr...@gmail.com 
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

> In our environments, we offer two types of storage. Tenants can either use
> Ceph/RBD and trade speed/latency for reliability and protection against
> physical disk failures, or they can launch instances that are realized as
> LVs on an LVM VG that we create on top of a RAID 0 spanning all but the OS
> disk on the hypervisor. This lets the users elect to go all-in on speed and
[..CUT..]

Hello Ned,

how do you implement this? What is the user experience of having
two types of storage like?

We generally have Ceph/RBD as storage backend, however we have a use
case where we need LVM because latency is important.

To cope with our use case we have different flavors, where setting a
flavor-key to a specific flavor you can force the VM to be scheduled
to a specific host-aggregate. Then we have a host-aggregate for
hypervisors supporting the LVM storage and another host-aggregate for
hypervisors running the default Ceph/RBD backend.

However, let's say the user just creates a Cinder volume in Horizon.
In this case the volume is created on Ceph/RBD. Is there a solution to
support multiple storage backends at the same time and let the user
decide in Horizon which one to use?

Thanks.

Saverio
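
(For what it's worth, the usual answer to Saverio's last question, for 
Cinder-managed backends, is Cinder multi-backend plus volume types, roughly as 
sketched below; the config values and type names are illustrative, and 
get_cinder_client() is a hypothetical helper returning an authenticated 
python-cinderclient instance.)

# Sketch of Cinder multi-backend plus volume types. cinder.conf would carry
# something like (values illustrative):
#   [DEFAULT]
#   enabled_backends = rbd-1,lvm-1
#   [rbd-1]
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   volume_backend_name = CEPH
#   [lvm-1]
#   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#   volume_backend_name = LVM
#
# Volume types then map a user-visible name to a backend, and Horizon offers
# the type in its volume-create dialog.

def create_backend_types(cinder):
    for type_name, backend in (('ceph', 'CEPH'), ('lvm-fast', 'LVM')):
        vtype = cinder.volume_types.create(type_name)
        vtype.set_keys({'volume_backend_name': backend})

# Usage, assuming an authenticated client:
#   create_backend_types(get_cinder_client())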


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
We're in mostly the same boat; using nova-network with VLAN segmentation and 
looking at a Neutron migration (though ours may take a more drastic path and 
take us to Neutron+Calico). One question I have for you: the largest issue and 
conceptual leap we had when initially prototyping Neutron+linuxbridge was that 
our current model only has controllers and worker nodes, with no provisions for 
dedicated network nodes to route in/out of the cluster. All our worker nodes can 
route by themselves, which would have steered us towards a DVR model, but that 
seems to have its own issues as well as mandating OVS.

Since your branch indicates you're using linuxbridge on Icehouse, are you 
provisioning network nodes as part of your migration, or are you avoiding the 
need to provision network nodes in some other fashion?

From: kevin...@cisco.com 
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Definitely, I can work on that. I need to get the migration done first, but 
once I do I plan to open source our plays and whatever else to help people 
perform the migration themselves. At that point I can work on adding some stuff 
to the networking guide as well. Probably will be a few months from now, though.


On 2/17/16, 9:29 AM, "Matt Kassawara"  wrote:

>Cool! I'd like to see this stuff in the networking guide... or at least a link 
>to it for now.
>
>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
> wrote:
>
>Hey All!
>
>I wanted to follow up on this. We've successfully migrated Icehouse 
>with per-tenant networks (non-overlapping, obviously) and L3 services from 
>nova-network to Neutron in the lab. I'm working on the automation bits, but 
>once that is done we'll start migrating real workloads.
>
>I forked Sam's stuff and modified it to work in Icehouse with tenant nets: 
>https://github.com/kevinbringard/novanet2neutron/tree/icehouse. I need to 
>update the README to succinctly reflect the steps, but the code is there (I'm 
>going to work on the README today).
>
>If this is something folks are interested in I proposed a talk to go over the 
>process and our various use cases in Austin:
>
>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
> 
>
>
>-- Kevin
>
>
>
>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)"  wrote:
>
>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>verified it won't work on earlier versions because, specifically, in the 
>>migrate-secgroups.py it inserts into the default_security_group table, which 
>>was introduced in Kilo.
>>
>>I'm working on modifying it. If I manage to get it working properly I'll 
>>commit my changes to my fork and send it out.
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 10:00 AM, "Edgar Magana"  wrote:
>>
>>>I did not, but more advanced could mean a lot of things for Neutron. There 
>>>are so many possible scenarios that expecting to have a “script” to cover 
>>>all of them is a whole new project. Not sure we want to explore that. In the 
>>>past we were recommending to make the migration in multiple steps; maybe we 
>>>could use this as a good step 0.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: "Kris G. Lindgren"
>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>Cc: OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>>nova-network to the same dhcp/provider networks in Neutron? Did anyone test 
>>>to see if it also works for more advanced nova-network configs?
>>>
>>>
>>>___
>>>Kris Lindgren
>>>Senior Linux Systems Engineer
>>>GoDaddy
>>>
>>>
>>>
>>>
>>>
>>>
>>>From: Edgar Magana 
>>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>>To: Matt Kassawara , "Kevin Bringard (kevinbri)" 
>>>
>>>Cc: OpenStack Operators 
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Yes! We should, but with a huge caveat: it is not officially supported 
>>>by the OpenStack community. At least the author wants to make a move with 
>>>the Neutron team to make it part of the tree.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: Matt Kassawara
>>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>>To: "Kevin Bringard (kevinbri)"
>>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Anyone think we 

[Openstack-operators] Managing quota for Nova local storage?

2016-02-17 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
The subject says it all - does anyone know of a method by which quota can be 
enforced on storage provisioned via Nova rather than Cinder? Googling around 
appears to indicate that this is not possible out of the box (e.g., 
https://ask.openstack.org/en/question/8518/disk-quota-for-projects/).

The rationale is we offer two types of storage, RBD that goes via Cinder and 
LVM that goes directly via the libvirt driver in Nova. Users know they can 
escape the constraints of their volume quotas by using the LVM-backed 
instances, which were designed to provide a fast-but-unreliable RAID 0-backed 
alternative to slower-but-reliable RBD volumes. Eventually users will hit their 
max quota in some other dimension (CPU or memory), but we'd like to be able to 
limit based directly on how much local storage is used in a tenancy.
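
One rough, audit-only approach (no enforcement) would be to tally the 
flavor-defined local disk per tenant from the API, along the lines of this 
sketch; get_nova_client() is a hypothetical helper, and the attribute names 
follow python-novaclient conventions of this era:

# Audit-only sketch: sum flavor root+ephemeral disk per tenant so local
# (Nova-provisioned) storage use can at least be reported, even if not
# enforced.
import collections


def local_disk_by_tenant(nova):
    flavors = {f.id: f for f in nova.flavors.list(is_public=None)}
    usage = collections.defaultdict(int)
    for server in nova.servers.list(search_opts={'all_tenants': 1}):
        flavor = flavors.get(server.flavor['id'])
        if flavor is None:
            continue
        ephemeral = getattr(flavor, 'OS-FLV-EXT-DATA:ephemeral', 0)
        usage[server.tenant_id] += flavor.disk + ephemeral
    return dict(usage)  # tenant_id -> GB of flavor-defined local disk

# Usage, assuming an admin-scoped client:
#   print(local_disk_by_tenant(get_nova_client()))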

Does anyone have a solution they've already built to handle this scenario? We 
have a few ideas already for things we could do, but maybe somebody's already 
come up with something. (Social engineering on our user base by occasionally 
destroying a random RAID 0 to remind people how unsafe it is, while tempting, 
is probably not a viable candidate solution.)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone using Project Calico for tenant networking?

2016-02-10 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Thanks Neil, very helpful.

From: neil.jer...@metaswitch.com 
Subject: Re: [Openstack-operators] Anyone using Project Calico for tenant 
networking?

Hi Ned,

Sorry for the delay in following up here.

On 06/02/16 14:40, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> Thanks. Having read the documentation, I have one question about the
> network design. Basically, our use case specifies that instances be able
> to have a stable IP across terminations; effectively what we'd like to
> do is have a setup where both the fixed and floating IPs are routable
> outside the cluster. Any given instance should get a routable IP when it
> launches, but additionally be able to take a floating IP that would act
> as a stable endpoint for other things to reference.
>
> The Calico docs specify that you can create public/private IPv4 networks
> in Neutron, both with DHCP enabled. Is it possible to accomplish what
> I'm talking about by creating what are two public IPv4 subnets, one with
> DHCP enabled and one with DHCP disabled that would be used as the float
> pool? Or is this not possible?

For the fixed IPs, yes.  For the float pool, no, I'm afraid we don't 
have that in Calico yet, and I'm not sure if it will take precisely that 
form when we do have floating IP support.

There is work in progress on Calico support for floating IPs, and the 
code for this can be seen at https://review.openstack.org/#/c/253634/ 
and https://github.com/projectcalico/calico/pull/848.  I can't yet say 
when this will land, though.

In terms of how floating IPs are represented in the Neutron data model: 
currently they require a relationship between an external Network, a 
Router and a tenant Network.  The floating IP pool is defined as a 
subnet on the external Network; each allocated floating IP maps onto one 
of the fixed IPs of the tenant network; and the agent that implements 
the Router does the inbound DNAT between those two.
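
(For readers less familiar with that model, it maps onto the Neutron API 
roughly as in the sketch below; the names and CIDRs are illustrative, and 
'neutron' is assumed to be an already-authenticated python-neutronclient 
Client.)

# Sketch of the object relationships described above: an external network
# carrying the floating IP pool, a router gatewayed onto it, and a tenant
# network whose fixed IPs the floating IPs map onto.

def build_floating_ip_topology(neutron):
    ext_net = neutron.create_network(
        {'network': {'name': 'ext-net', 'router:external': True}})['network']
    neutron.create_subnet({'subnet': {
        'network_id': ext_net['id'], 'ip_version': 4,
        'cidr': '203.0.113.0/24', 'enable_dhcp': False}})  # the float pool

    tenant_net = neutron.create_network(
        {'network': {'name': 'tenant-net'}})['network']
    tenant_subnet = neutron.create_subnet({'subnet': {
        'network_id': tenant_net['id'], 'ip_version': 4,
        'cidr': '10.0.0.0/24'}})['subnet']

    router = neutron.create_router({'router': {'name': 'r1'}})['router']
    neutron.add_gateway_router(router['id'], {'network_id': ext_net['id']})
    neutron.add_interface_router(router['id'],
                                 {'subnet_id': tenant_subnet['id']})

    # A floating IP allocated from ext-net; associating it with a port is
    # what triggers the inbound DNAT on the router.
    return neutron.create_floatingip(
        {'floatingip': {'floating_network_id': ext_net['id']}})['floatingip']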

As you've written, floating IPs are interesting for external or provider 
networks too, so we'd be interested in an enhancement to the Neutron 
model to allow that, and I believe there are other interested parties 
too.  But that will take time to agree, and it isn't one of my own 
priorities at the moment.

Hope that's useful.  Best wishes,

  Neil

>
> - Original Message -
> From: Neil Jerram <neil.jer...@metaswitch.com>
> To: EDMUND RHUDY, openstack-operators@lists.openstack.org
> At: 05-Feb-2016 14:11:34
>
> On 05/02/16 19:03, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> > I meant in a general sense of the networking technology that you're
> > using for instance networking, not in the sense of per-tenant networks,
> > though my wording was ambiguous. Part of our larger question centers
> > around the viability of tying instances directly to a provider network.
> > Being that we only operate a private cloud for internal consumption,
> > doing so would have some attractive upsides; tenants clamor for the IP
> > inside their instance to be the same as the floating IP that the outside
> > world sees, but nobody's ever asked us about the ability to roll their
> > own network topology, so we think we could probably do without that.
>
> Cool, IMO that's a good match for what Calico provides.
>


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] RAID / stripe block storage volumes

2016-02-08 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
In our environments, we offer two types of storage. Tenants can either use 
Ceph/RBD and trade speed/latency for reliability and protection against 
physical disk failures, or they can launch instances that are realized as LVs 
on an LVM VG that we create on top of a RAID 0 spanning all but the OS disk on 
the hypervisor. This lets the users elect to go all-in on speed and sacrifice 
reliability for applications where replication/HA is handled at the app level, 
if the data on the instance is sourced from elsewhere, or if they just don't 
care much about the data.

There are some further changes to our approach that we would like to make down 
the road, but in general our users seem to like the current system and being 
able to forgo reliability or speed as their circumstances demand.

From: j...@topjian.net 
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes

Hi Robert,

Can you elaborate on "multiple underlying storage services"?

The reason I asked the initial question is because historically we've made our 
block storage service resilient to failure. Historically we also made our 
compute environment resilient to failure, but over time we've seen users 
become better educated about coping with compute failure. As a result, we've 
been able to become more lenient with regard to building resilient compute 
environments.

We've been discussing how possible it would be to translate that same idea to 
block storage. Rather than have a large HA storage cluster (whether Ceph, 
Gluster, NetApp, etc.), is it possible to offer simple single-LVM volume servers 
and push the failure handling onto the user?

Of course, this doesn't work for all types of use cases and environments. We 
still have projects which require the cloud to own more of the responsibility 
for failure than the users.

But for environments where we offer general purpose / best effort compute and 
storage, what methods are available to help the user be resilient to block 
storage failures?

Joe

On Mon, Feb 8, 2016 at 12:09 PM, Robert Starmer  wrote:

I've always recommended providing multiple underlying storage services to 
provide this rather than adding the overhead to the VM.  So, not in any of my 
systems or any I've worked with.

R


On Fri, Feb 5, 2016 at 5:56 PM, Joe Topjian  wrote:

Hello,

Does anyone have users RAID'ing or striping multiple block storage volumes from 
within an instance?

If so, what was the experience? Good, bad, possible but with caveats?

Thanks,
Joe 
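
(As a data point on the mechanics rather than the experience: in-guest striping 
of two attached volumes is typically just mdadm, roughly as in the sketch below; 
the device names, filesystem, and mountpoint are assumptions, and it must run as 
root inside the instance.)

# Sketch of the in-guest mechanics: stripe two Cinder volumes with mdadm.
# Device names (/dev/vdb, /dev/vdc), the filesystem, and the mountpoint are
# assumptions.
import subprocess

DEVICES = ['/dev/vdb', '/dev/vdc']  # the two attached block storage volumes


def stripe_and_mount(devices, md='/dev/md0', mountpoint='/mnt/striped'):
    subprocess.check_call(
        ['mdadm', '--create', md, '--level=0',
         '--raid-devices=%d' % len(devices)] + devices)
    subprocess.check_call(['mkfs.ext4', md])
    subprocess.check_call(['mkdir', '-p', mountpoint])
    subprocess.check_call(['mount', md, mountpoint])


if __name__ == '__main__':
    stripe_and_mount(DEVICES)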

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone using Project Calico for tenant networking?

2016-02-06 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Thanks. Having read the documentation, I have one question about the network 
design. Basically, our use case specifies that instances be able to have a 
stable IP across terminations; effectively what we'd like to do is have a setup 
where both the fixed and floating IPs are routable outside the cluster. Any 
given instance should get a routable IP when it launches, but additionally be 
able to take a floating IP that would act as a stable endpoint for other things 
to reference. 

The Calico docs specify that you can create public/private IPv4 networks in 
Neutron, both with DHCP enabled. Is it possible to accomplish what I'm talking 
about by creating what are two public IPv4 subnets, one with DHCP enabled and 
one with DHCP disabled that would be used as the float pool? Or is this not 
possible?

- Original Message -
From: Neil Jerram <neil.jer...@metaswitch.com>
To: EDMUND RHUDY, openstack-operators@lists.openstack.org
At: 05-Feb-2016 14:11:34


On 05/02/16 19:03, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> I meant in a general sense of the networking technology that you're
> using for instance networking, not in the sense of per-tenant networks,
> though my wording was ambiguous. Part of our larger question centers
> around the viability of tying instances directly to a provider network.
> Being that we only operate a private cloud for internal consumption,
> doing so would have some attractive upsides; tenants clamor for the IP
> inside their instance to be the same as the floating IP that the outside
> world sees, but nobody's ever asked us about the ability to roll their
> own network topology, so we think we could probably do without that.

Cool, IMO that's a good match for what Calico provides.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Anyone using Project Calico for tenant networking?

2016-02-05 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
Hello operators,

We're continuing to investigate different cloud networking technologies as part 
of our project to migrate from nova-network to Neutron, and one that's come up 
frequently is Project Calico (http://www.projectcalico.org/). However, we can't 
actually find anything in the way of operator testimonials from people using 
Calico. On paper it looks interesting, but it would be good to hear from 
operators who are actually using it with live users.

If you do use Calico, how do you feel about it? Does it deliver on its 
scalability promises, or has it brought only heartbreak?
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone using Project Calico for tenant networking?

2016-02-05 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
I meant in a general sense of the networking technology that you're using for 
instance networking, not in the sense of per-tenant networks, though my wording 
was ambiguous. Part of our larger question centers around the viability of 
tying instances directly to a provider network. Being that we only operate a 
private cloud for internal consumption, doing so would have some attractive 
upsides; tenants clamor for the IP inside their instance to be the same as the 
floating IP that the outside world sees, but nobody's ever asked us about the 
ability to roll their own network topology, so we think we could probably do 
without that.

From: neil.jer...@metaswitch.com 
Subject: Re: [Openstack-operators] Anyone using Project Calico for tenant 
networking?

On 05/02/16 18:19, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> Hello operators,
>
> We're continuing to investigate different cloud networking technologies
> as part of our project to migrate from nova-network to Neutron, and one
> that's come up frequently is Project Calico
> (http://www.projectcalico.org/). However, we can't actually find
> anything in the way of operator testimonials from people using Calico.
> On paper it looks interesting, but it would be good to hear from
> operators who are actually using it with live users.
>
> If you do use Calico, how do you feel about it? Does it deliver on its
> scalability promises, or has it brought only heartbreak?

Hi Ned,

I'm a Calico team member, not an operator, but I wanted to check if you 
really meant tenant networking as in per-tenant networks.  Because 
Calico as implemented so far with OpenStack is really more a provider 
network technology.  For example, we don't support overlapping IPs.

Thanks,
 Neil

PS. Thanks for asking that question!  I'm also very interested in the 
answers, of course. :-)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators