[Openstack-operators] [app-catalog] App Catalog IRC Meeting CANCELLED this week

2016-01-27 Thread Christopher Aedo
Due to scheduling conflicts and a very light agenda, there will be no Community App Catalog IRC meeting this week. Our next meeting is scheduled for February 4th; the agenda can be found here: https://wiki.openstack.org/wiki/Meetings/app-catalog One thing on the agenda for the 2/4/2016 meeting

Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Fox, Kevin M
But there already is a second external address, the FIP address that's NATing. Is there a double NAT? I'm a little confused. Thanks, Kevin From: Robert Starmer [rob...@kumul.us] Sent: Wednesday, January 27, 2016 3:20 PM To: Carl Baldwin Cc: OpenStack Operators;

Re: [Openstack-operators] DVR and public IP consumption

2016-01-27 Thread Robert Starmer
I think I've created a bit of confusion, because I forgot that DVR still does SNAT (generic NAT not tied to a floating IP) on a central network node, just like in the non-DVR model. The extra address that is consumed is allocated to a FIP-specific namespace when a DVR is made responsible for supporting
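For anyone following along, a rough way to see where each piece lives on a DVR deployment (the namespace prefixes below are the standard ones; the IDs shown by ip netns will be your own network/router UUIDs):

    # on a compute node hosting a VM with a floating IP
    ip netns list | grep '^fip-'      # one fip- namespace per external network (this holds the extra address)
    ip netns list | grep '^qrouter-'  # the local copy of the distributed router

    # on the network node
    ip netns list | grep '^snat-'     # centralized SNAT for VMs without floating IPs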

Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Fox, Kevin M
Ceph would work pretty well for that use case too. We've run a Ceph cluster with two OSDs, with the replication set to 2, to back both Cinder and Glance for HA. Nothing complicated needed to get it working. Less complicated than DRBD, I think. You can then also easily scale it out as needed. Thanks,
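For reference, a minimal sketch of the kind of setup described above; the pool name, PG count and cephx user are assumptions, not something from Kevin's mail:

    # create the image pool with replication set to 2
    ceph osd pool create images 128
    ceph osd pool set images size 2

    # a client key for Glance, restricted to that pool
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images'

    # then point glance-api.conf at the RBD store, e.g.:
    #   [glance_store]
    #   default_store = rbd
    #   stores = rbd
    #   rbd_store_pool = images
    #   rbd_store_user = glance
    #   rbd_store_ceph_conf = /etc/ceph/ceph.conf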

[Openstack-operators] User Committee Changes

2016-01-27 Thread Shilla Saebi
Hi Everyone, We have an update to the UC. Edgar Magana has been approved to be the board representative to the User Committee and is replacing Subbu Allamaraju. Welcome Edgar and we look forward to working with you! Shilla

Re: [Openstack-operators] User Committee Changes

2016-01-27 Thread Edgar Magana
Hello All, Thank you so much Shilla and Jon for the support and confidence. I am really looking forward to working with you as well. This is a great opportunity and I am very excited about it. I will do my best to provide meaningful feedback to the Foundation based on my experience as

Re: [Openstack-operators] User Committee Changes

2016-01-27 Thread Robert Starmer
Congratulations Edgar! Robert On Wed, Jan 27, 2016 at 9:28 AM, Edgar Magana wrote: > Hello All, > > Thank you so much Shilla and Jon for the support and confidence I am > really looking forward to working with you as well. > > This is a great opportunity and I am very

Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Robert Starmer
A GlusterFS backend works great for shared Glance, and can be configured for a bit of redundancy at the disk level (unlike non-distributed NFS, which needs the NFS server to be present), much like the Ceph model Kevin suggests. Is your database also resilient (e.g. some form of MySQL
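A rough sketch of the Gluster variant, assuming a replicated volume named glance-vol served from a host gluster1 (the names are placeholders):

    # mount the Gluster volume where Glance's default file store looks
    mount -t glusterfs gluster1:/glance-vol /var/lib/glance/images

    # glance-api.conf stays on the plain file store, e.g.:
    #   [glance_store]
    #   default_store = file
    #   filesystem_store_datadir = /var/lib/glance/images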

Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-27 Thread Saverio Proto
> We have an image promotion process that does this for us. The command I use > to get images from a specific tenant is: > > glance --os-image-api-version 1 image-list --owner= > > I'm sure using the v1 API will make some cringe, but I haven't found > anything similar in the v2 API. > I used
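For what it's worth, the Images v2 API does seem to accept an owner filter on the list call; a hedged sketch using curl (the token, endpoint and tenant ID are placeholders):

    curl -s -H "X-Auth-Token: $TOKEN" \
      "$GLANCE_URL/v2/images?owner=<tenant_id>"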

[Openstack-operators] Storage backend for glance

2016-01-27 Thread Sławek Kapłoński
Hello, I want to install OpenStack with at least two Glance nodes (to have HA) but with a local filesystem as Glance storage. Is it possible to use something like that in a setup with two Glance nodes? Maybe some of you already have something like that? I'm asking because AFAIK if an image will be

Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Hauke Bruno Wollentin
Hi Slawek, we use a shared NFS export to store Glance images. That enables HA in (imho) the simplest way. For your setup you could use something like an hourly/daily/whenever rsync job and set the 'second' Glance node to passive/standby in the load balancer. Also it will be possible to
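A rough sketch of the rsync approach, assuming glance2 is the passive node and both nodes use the default image path (the schedule and hostname are placeholders):

    # crontab entry on the active Glance node: sync images hourly
    0 * * * * rsync -a --delete /var/lib/glance/images/ glance2:/var/lib/glance/images/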

Re: [Openstack-operators] Storage backend for glance

2016-01-27 Thread Joe Topjian
Yup, it's definitely possible. All Glance nodes will need to share the same database as well as the same file system. Common ways of sharing the file system are to mount /var/lib/glance/images either from NFS (like you mentioned) or Gluster. I've done both in the past with no issues. The usual
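A minimal sketch of that layout, assuming an NFS export nfs1:/glance and a shared database behind db-vip (both names are placeholders):

    # /etc/fstab on every Glance node
    #   nfs1:/glance   /var/lib/glance/images   nfs   defaults,_netdev   0 0
    mount /var/lib/glance/images

    # and every node points at the same database in glance-api.conf, e.g.:
    #   [database]
    #   connection = mysql+pymysql://glance:<password>@db-vip/glance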