Re: [Openstack-operators] Snapshots taking long time

2016-03-02 Thread Andreas Vallin
Hello Saverio, Thanks for your answer. In that case the problem is that I thought the patch you are referring to had already been made in Kilo. Doing snapshots directly from Ceph is fast: [root@ceph01: ~] # time rbd -p volumes snap create volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f@mytestsnap
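For reference, a minimal sketch of the comparison being described; the volume UUID is the one quoted above, and the exact Cinder flag spelling may vary by python-cinderclient version:

  # Snapshot taken directly in Ceph (copy-on-write, returns almost instantly):
  time rbd -p volumes snap create volume-fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f@mytestsnap

  # The same volume snapshotted through Cinder, for comparison:
  time cinder snapshot-create --name mytestsnap fecc8258-e6d8-4d3c-9ac2-fe98b5dbbc2f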

Re: [Openstack-operators] Snapshots taking long time

2016-03-02 Thread Saverio Proto
Hello Andreas, what kind of snapshot are you doing? 1) Snapshot of an instance running on an ephemeral volume? 2) Snapshot of an instance booted from a volume? 3) Snapshot of a volume? In case 1 the ephemeral volume is in the volume pool with the name _disk; when you snapshot, this must be read
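A minimal sketch of locating the ephemeral disk for case 1, assuming Nova's RBD pool is called "vms" (the pool name and instance UUID here are placeholders, not taken from the thread):

  # List the ephemeral disk image Nova created for an instance:
  INSTANCE_UUID=11111111-2222-3333-4444-555555555555
  rbd -p vms ls | grep "${INSTANCE_UUID}_disk"

  # Confirm its size and parent before snapshotting:
  rbd -p vms info "${INSTANCE_UUID}_disk"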

[Openstack-operators] Snapshots taking long time

2016-03-02 Thread Andreas Vallin
We are currently installing a new OpenStack cluster (Liberty) with openstack-ansible and an already existing Ceph cluster. We have both images and volumes located in Ceph with RBD. My current problem is that snapshots take a very long time and I can see that snapshots are temporarily created
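One quick way to confirm that image data is leaving Ceph during the snapshot is to watch the staging directory on the compute node; a minimal sketch, assuming the default snapshots_directory under /var/lib/nova/instances (check nova.conf if it was changed):

  # Run on the compute node hosting the instance while the snapshot is in progress:
  watch -n 1 'du -sh /var/lib/nova/instances/snapshots/*'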

Re: [Openstack-operators] [openstack-community] Recognising Ops contributions

2016-03-02 Thread David Medberry
On Wed, Mar 2, 2016 at 3:37 PM, Edgar Magana wrote: > We want to make this a reality by gathering a list of criteria that we as a community feel show that someone has demonstrated technical contributions, using their skills as Ops. Our current ideas are as

Re: [Openstack-operators] Setting affinity based on instance type

2016-03-02 Thread Mathieu Gagné
What would prevent the next user from having workloadB collocated with another user's workloadA if that's the only capacity available? Unless aggregates are used, it will be hard to guarantee that workloadA and workloadB (from any users) are never collocated. You could probably play with custom
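A minimal sketch of the aggregate-based approach mentioned above, using flavor extra specs matched by the AggregateInstanceExtraSpecsFilter (aggregate, host, and flavor names are placeholders):

  # Pin a flavor to an aggregate; requires AggregateInstanceExtraSpecsFilter
  # in scheduler_default_filters in nova.conf:
  nova aggregate-create agg-workloadA
  nova aggregate-add-host agg-workloadA compute01
  nova aggregate-set-metadata agg-workloadA workload=typeA
  nova flavor-key m1.workloadA set aggregate_instance_extra_specs:workload=typeA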

[Openstack-operators] [app-catalog] IRC Meeting Thursday March 3rd at 17:00UTC

2016-03-02 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for March 3rd at 17:00UTC in #openstack-meeting-3. The agenda can be found here, and please add to it if you want to get something on the agenda: https://wiki.openstack.org/wiki/Meetings/app-catalog Looking forward to seeing everyone there tomorrow!

Re: [Openstack-operators] Setting affinity based on instance type

2016-03-02 Thread Adam Lawson
Hi Kris, When using aggregates as an example, anyone can assign workloadA<>aggregateA and workloadB<>aggregateB. That's easy. But if we have outstanding requests for workloadB and have a glut of capacity in aggregateA, workloadB won't be able to use those hosts, so we have spare capacity and no

Re: [Openstack-operators] Workload Management (post-instantiation)

2016-03-02 Thread Adam Lawson
Okay, that is pretty much aligned with what I was thinking - custom monitor/trigger/action. An ask coming out of one of our design discussions today was what products exist that do (or attempt to) address what is normally handled by workload management toolkits such as VMware DRS. //adam

Re: [Openstack-operators] Setting affinity based on instance type

2016-03-02 Thread Adam Lawson
We're looking at two workloads with different usage patterns: Type A follows a typical cyclical performance pattern (high/low, day/night). Type B represents a consistent pattern (constant/predictable). We want a way to ensure Type A instances will have an affinity to stick together, and Type B

Re: [Openstack-operators] Workload Management (post-instantiation)

2016-03-02 Thread Kris G. Lindgren
We would love to have something like that as well. However, to do it in OpenStack would mean that something would have to gather/monitor the health of the HVs and not only disable new provisioning but also kick off/monitor the migrations off the host and onto the newly chosen destinations. Also, due

Re: [Openstack-operators] Configuration tool for Openstack

2016-03-02 Thread Edgar Magana
Hello Chris, Please consider moving your code to the OSOps repos: https://wiki.openstack.org/wiki/Osops Let me know if you need some assistance, Edgar From: "chrishul...@gmail.com" Date: Wednesday, March 2, 2016

Re: [Openstack-operators] Workload Management (post-instantiation)

2016-03-02 Thread Edgar Magana
We have done it with Nagios checks and custom Ruby code. Edgar From: Adam Lawson Date: Wednesday, March 2, 2016 at 1:48 PM To: "openstack-operators@lists.openstack.org"

[Openstack-operators] Setting affinity based on instance type

2016-03-02 Thread Adam Lawson
I'm sure this is possible but I'm trying to find the info I need in the docs so I figured I'd pitch this to you guys while I continue looking: Is it possible to set an affinity/anti-affinity policy to ensure instance Type A is weighted for/against co-location on the same physical host with
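Per-request affinity can be expressed today with server groups and the ServerGroupAffinityFilter / ServerGroupAntiAffinityFilter; a minimal sketch with placeholder names (note this groups the instances named at boot time rather than an instance type, which is the harder part of the question):

  # Create an affinity group and boot an instance into it:
  nova server-group-create typeA-group affinity
  nova boot --flavor m1.medium --image cirros \
      --hint group=<typeA-group-uuid> typeA-instance-1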

[Openstack-operators] Recognising Ops contributions

2016-03-02 Thread Edgar Magana
Dear Users and Operators, The Foundation User Committee [1] has received multiple requests to enable a formal recognition of your contributions to the OpenStack community. This email is our approach to formalizing this recognition and making sure that we all feel, and are, part of the community.

Re: [Openstack-operators] Cloud Upgrade Strategies

2016-03-02 Thread Silence Dogood
- In-place Full Release upgrades (upgrade an entire cloud from Icehouse to Kilo, for instance) This tends to be the most likely scenario, with CI/CD being almost impossible for anyone using supported OpenStack components (such as SDN / NAS / other hardware integration pieces). That's not

[Openstack-operators] Workload Management (post-instantiation)

2016-03-02 Thread Adam Lawson
Hello fellow Ops-minded stackers! I understand OpenStack uses scheduler logic to place a VM on a host to ensure the load is balanced across hosts. My 64 million dollar question is: Has anyone identified a way to monitor capacity across all hosts on an ongoing basis and automatically live migrate
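For context, a minimal sketch of the building block such automation would wrap: inspecting host usage and live-migrating an instance off a loaded hypervisor. The monitoring and decision logic, which is the actual question here, is not shown; host and instance names are placeholders:

  # Per-cloud and per-host usage, then move one instance off a busy host
  # (--block-migrate shown for hosts without shared instance storage):
  nova hypervisor-stats
  nova hypervisor-servers compute03
  nova live-migration --block-migrate <instance-uuid> compute07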

[Openstack-operators] Cloud Upgrade Strategies

2016-03-02 Thread Adam Lawson
Hey all, So I've been discussing cloud design with the team and of course the topic comes up about how upgrades will be handled. Handling OpenStack code updates generally follows one of three paths in my experience: - CI/CD (continuous incremental upgrades) - In-place Full Release upgrades

Re: [Openstack-operators] Configuration tool for Openstack

2016-03-02 Thread Silence Dogood
sounds fun. I might give it a go afterwards and see if it explodes =P On Wed, Mar 2, 2016 at 4:12 PM, wrote: > I'm going to use Pluto to do a basic Liberty install. In a couple of days I'll have a full set of the basic six install files instead of just the one I

Re: [Openstack-operators] Configuration tool for Openstack

2016-03-02 Thread chrishull42
I'm going to use Pluto to do a basic Liberty install. In a couple of days I'll have a full set of the basic six install files instead of just the one I included for glance. They will be geared around a one-box install at first. I'll update the site. Chris

Re: [Openstack-operators] Configuration tool for Openstack

2016-03-02 Thread chrishull42
Not yet. I'm totally open to suggestions. - Chris.

Re: [Openstack-operators] Configuration tool for Openstack

2016-03-02 Thread Silence Dogood
This is neat man. Any support for versioning?

[Openstack-operators] Configuration tool for Openstack

2016-03-02 Thread chrishull42
Hi all; I'm still a bit new to the world of stacking, but like many of you I have suffered through the process of manual OpenStack installation. I've been a developer for decades, so please excuse me for "productizing" a simple tool. I hope this is useful. Feedback much appreciated.

[Openstack-operators] [neutron] Default security group discussion on openstack-dev

2016-03-02 Thread Sean M. Collins
Hi Operators, There is a discussion going on right now on the openstack-dev mailing list which I think is important for operators to weigh in on. The short summary is that some believe the default security group and rules (which allow only outbound by default, and no inbound) should be changed.
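For reference, a minimal sketch of inspecting the current defaults and adding an inbound rule the way tenants must do today (the SSH rule is just an example):

  # List the rules of the tenant's default security group
  # (egress is open; ingress is only allowed from members of the same group):
  neutron security-group-rule-list

  # Explicitly allow inbound SSH, since outside ingress is blocked by default:
  neutron security-group-rule-create --direction ingress \
      --protocol tcp --port-range-min 22 --port-range-max 22 default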

Re: [Openstack-operators] Taking Scientific WG Ops Meetup Feedback back to Ceilometer

2016-03-02 Thread Tim Bell
We’re starting to have a look at gnocchi in order to address the large storage volumes. We plan on using the Ceph backend for storage. One part of the documentation set that we were missing was a guide on how to migrate from ceilometer to a ceilometer/gnocchi combination (which I understand
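A minimal sketch of pointing Gnocchi's metric storage at Ceph, assuming a dedicated pool and cephx user already exist; option names follow the Gnocchi documentation of that era and may differ between releases:

  # gnocchi.conf storage section, edited here with crudini for brevity:
  crudini --set /etc/gnocchi/gnocchi.conf storage driver ceph
  crudini --set /etc/gnocchi/gnocchi.conf storage ceph_pool gnocchi
  crudini --set /etc/gnocchi/gnocchi.conf storage ceph_username gnocchi
  crudini --set /etc/gnocchi/gnocchi.conf storage ceph_keyring /etc/ceph/ceph.client.gnocchi.keyring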

Re: [Openstack-operators] [nova][neutron] What are your cells networking use cases?

2016-03-02 Thread Ricardo Rocha
Hi Carl. Adding some numbers from CERN to the discussion, we currently have: * ~40 cells * ~185 segments (called clusters in our doc linked in the etherpad) * ~185*10 subnets, although it's not easy to compute the actual number today in our nova-network setup. With the move to neutron (new

Re: [Openstack-operators] [neutron] network issue with separate subnets under single public-network

2016-03-02 Thread Ihar Hrachyshka
Rahul Sharma wrote: Hi All, I am trying to fix a network issue in our environment and would like some suggestions on how I can achieve it. Here is the issue: I have two subnets (10.10.10.0/25 and 10.10.10.128/26) with separate gateways for each subnet
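For context, a minimal sketch of how two subnets with their own gateways can sit under a single Neutron network; the CIDRs are the ones from the thread, while the network name and gateway addresses are placeholders:

  neutron subnet-create --name subnet-a --gateway 10.10.10.1   public-net 10.10.10.0/25
  neutron subnet-create --name subnet-b --gateway 10.10.10.129 public-net 10.10.10.128/26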

[Openstack-operators] [Neutron][LBaaS] LBaaS Doesn't Work - Linux Bridge + VXLAN

2016-03-02 Thread Ludwig Tirazona
Hello Everyone, I have two network nodes, each running neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent, neutron-metadata-agent, and neutron-plugin-linuxbridge-agent. I have it set up for VXLAN + Linux Bridge. The LB doesn't work. When I set it up (LB pool, members, monitor)
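One common thing to check with Linux Bridge LBaaS is that the agent uses the Bridge interface driver rather than the OVS default; a minimal sketch of the relevant lbaas_agent.ini settings (a guess at a frequent misconfiguration, not a confirmed diagnosis of this report):

  # /etc/neutron/lbaas_agent.ini, edited with crudini for brevity:
  crudini --set /etc/neutron/lbaas_agent.ini DEFAULT interface_driver \
      neutron.agent.linux.interface.BridgeInterfaceDriver
  crudini --set /etc/neutron/lbaas_agent.ini DEFAULT device_driver \
      neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver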

Re: [Openstack-operators] [openstack-community] We are OpenStack, but who is We?

2016-03-02 Thread Marton Kiss
Thank you Lauren and Edgar for the positive feedback! Cheers, Marton Kiss OpenStack Ambassador On Wed, Mar 2, 2016 at 1:15 AM Edgar Magana wrote: > Dear Pierre and Ops Community, My name is Edgar Magana and I am one of the three members of the User Committee