[openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2014-10-25 Thread Daniel Comnea
Hi all,


Unfortunately I couldn't find any resource - blueprint/document/examples/
presentations - about my use case below, hence the question I'm raising now
(if this is not the best place to ask, please let me know).


Having a group of 5 instances, I'd like to always maintain a minimum of 2
instances by using the Heat autoscaling feature and Ceilometer.

I've seen the Wordpress autoscaling examples based on the cpu_util metric, but
my use case is more about the number of instances.


Cheers,
Dani
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Automatic evacuate

2014-10-25 Thread Géza Gémes


On 10/22/2014 10:05 PM, David Vossel wrote:


- Original Message -

On 10/21/2014 07:53 PM, David Vossel wrote:

- Original Message -

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: October 21, 2014 15:07
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Automatic evacuate

On 10/21/2014 06:44 AM, Balázs Gibizer wrote:

Hi,

Sorry for the top posting but it was hard to fit my complete view
inline.

I'm also thinking about a possible solution for automatic server
evacuation. I see two separate sub-problems here:
1) compute node monitoring and fencing, 2) automatic server evacuation

Compute node monitoring is currently implemented in the servicegroup
module of nova. As far as I understand, pacemaker is the proposed
solution in this thread to solve both monitoring and fencing, but we
tried and found out that pacemaker_remote on baremetal does not work
together with fencing (yet), see [1]. So if we need fencing, then either
we have to go for normal pacemaker instead of pacemaker_remote (but that
solution doesn't scale), or we configure and call stonith directly when
pacemaker detects the compute node failure.

I didn't get the same conclusion from the link you reference.  It says:

"That is not to say however that fencing of a baremetal node works any
differently than that of a normal cluster-node. The Pacemaker policy
engine
understands how to fence baremetal remote-nodes. As long as a fencing
device exists, the cluster is capable of ensuring baremetal nodes are
fenced
in the exact same way as normal cluster-nodes are fenced."

So, it sounds like the core pacemaker cluster can fence the node to me.
I CC'd David Vossel, a pacemaker developer, to see if he can help
clarify.

It seems there is a contradiction between chapters 1.5 and 7.2 in [1],
as 7.2 states:
" There are some complications involved with understanding a bare-metal
node's state that virtual nodes don't have. Once this logic is complete,
pacemaker will be able to integrate bare-metal nodes in the same way
virtual
remote-nodes currently are. Some special considerations for fencing will
need to be addressed. "
Let's wait for David's statement on this.

Hey, That's me!

I can definitely clear all this up.

First off, this document is out of sync with the current state upstream. We're
already past Pacemaker v1.1.12. Section 7.2 of the document being referenced is
still talking about v1.1.11 features as if they were in the future.

I'll make it simple. If the document references anything that needs to be
done
in the future, it's already done.  Pacemaker remote is feature complete at
this
point. I've accomplished everything I originally set out to do. I see one
change
though. In 7.1 I talk about wanting pacemaker to be able to manage
resources in
containers. I mention something about libvirt sandbox. I scrapped whatever
I was
doing there. Pacemaker now has docker support.
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/docker

I've known this document is out of date. It's on my giant list of things to
do.
Sorry for any confusion.

As far as pacemaker remote and fencing goes, remote-nodes are fenced the
exact
same way as cluster-nodes. The only consideration that needs to be made is
that
the cluster-nodes (nodes running the full pacemaker+corosync stack) are the
only
nodes allowed to initiate fencing. All you have to do is make sure the
fencing
devices you want to use to fence remote-nodes are accessible to the
cluster-nodes.
From there you are good to go.
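For anyone prototyping this, a rough sketch of the kind of configuration
being described (device names, addresses and credentials below are made up,
and fence_ipmilan parameter names vary between versions - check your agent's
metadata):

```shell
# On a cluster-node (full pacemaker+corosync stack): create a fencing
# device that covers the compute node, then manage the compute node as
# a pacemaker remote-node. Fencing is always initiated by cluster-nodes.
pcs stonith create fence-compute-1 fence_ipmilan \
    pcmk_host_list="compute-1" ipaddr=10.0.0.11 login=admin passwd=secret
pcs resource create compute-1 ocf:pacemaker:remote server=compute-1.example.com
```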

Let me know if there's anything else I can clear up. Pacemaker remote was
designed
to be the solution for the exact scenario you all are discussing here.
Compute nodes
and pacemaker remote are made for one another :D

If anyone is interested in prototyping pacemaker remote for this compute
node use
case, make sure to include me. I have done quite a bit research into how to
maximize
pacemaker's ability to scale horizontally. As part of that research I've
made a few
changes that are directly related to all of this that are not yet in an
official
pacemaker release.  Come to me for the latest rpms and you'll have a less
painful
experience setting all this up :)

-- Vossel



Hi Vossel,

Could you send us a link to the source RPMs please? We have tested on
CentOS 7; it might need a recompile.

Yes, CentOS 7.0 isn't going to have the rpms you need to test this.

There are a couple of things you can do.

1. I put the rhel7 related rpms I test with in this repo.
http://davidvossel.com/repo/os/el7/

*disclaimer* I only maintain this repo for myself. I'm not committed to keeping
it active or up-to-date. It just happens to be updated right now for my own use.

That will give you test rpms for the pacemaker version I'm currently using plus
the latest libqb. If you're going to do any sort of performance metrics you'll
need the latest libqb, v0.17.1.

2. Build an srpm from the latest code on GitHub. Right now master is relatively
stable.
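If you go the build-from-source route, the rough shape is the following
(assuming pacemaker's usual rpm make targets - check the repo's build
documentation for the exact target names on your branch):

```shell
git clone https://github.com/ClusterLabs/pacemaker.git
cd pacemaker
make srpm     # then rebuild the resulting .src.rpm with rpmbuild or mock
```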

Re: [openstack-dev] [NFV] NFV BoF session for OpenStack Summit Paris

2014-10-25 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> - Original Message -
> > From: "Steve Gordon" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > - Original Message -
> > > From: "Steve Gordon" 
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > 
> > > Hi all,
> > > 
> > > I took an action item in one of the meetings to try and find a
> > > date/time/space to do another NFV BoF session for Paris to take advantage
> > > of
> > > the fact that many of us will be in attendance for a face to face
> > > session.
> > > 
> > > To try and avoid clashing with the general and design summit sessions I
> > > am
> > > proposing that we meet either before the sessions start one morning,
> > > during
> > > the lunch break, or after the sessions finish for the day. For the lunch
> > > sessions the meeting would be shorter to ensure people actually have time
> > > to
> > > grab lunch beforehand.
> > > 
> > > I've put together a form here, please register your preferred date/time
> > > if
> > > you would be interested in attending an NFV BoF session:
> > > 
> > > http://doodle.com/qchvmn4sw5x39cps
> > > 
> > > I will try and work out the *where* once we have a clear picture of the
> > > preferences for the above. We can discuss further in the weekly meeting.
> > > 
> > > Thanks!
> > > 
> > > Steve
> > > 
> > > [1]
> > > https://openstacksummitnovember2014paris.sched.org/event/f5bcb6033064494390342031e48747e3#.VEWEIOKmhkM
> > 
> > Hi all,
> > 
> > I have just noticed an update on a conversation I had been following on the
> > community list:
> > 
> > http://lists.openstack.org/pipermail/community/2014-October/000921.html
> > 
> > It seems like after hours use of the venue will not be an option in Paris,
> > though there may be some space available for BoF style activities on
> > Wednesday. I also noticed this "Win the telco BoF" session on the summit
> > schedule for the creation of a *new* working group:
> > 
> > 
> > https://openstacksummitnovember2014paris.sched.org/event/f5bcb6033064494390342031e48747e3#.VEbRkOKmhkM
> > 
> > Does anyone know anything about this? It's unclear if this is the
> > appropriate
> > place to discuss the planning and development activities we've been working
> > on. Let's discuss further in the meeting tomorrow.
> > 
> > Thanks,
> > 
> > Steve
> 
> Ok, it looks like there is a user-committee email on this topic now:
> 
> 
> http://lists.openstack.org/pipermail/user-committee/2014-October/000320.html
> 
> I did reach out to Carol to highlight the existing efforts before the above
> was sent, but it seems it still contains quite a bit of overlap.
> 
> -Steve

Hi all,

There is some more detail on the above here for those who don't subscribe to 
user-committee:

http://lists.openstack.org/pipermail/user-committee/2014-October/000322.html

The updated session entry is here:

http://kilodesignsummit.sched.org/event/b3ccf1464e335b703fc126f068142792

I would also like to highlight that the Nova design track has a specific 
session allocated to discuss NFV related specifications:

http://kilodesignsummit.sched.org/event/aa14a2bd2a4c1afa1aa24a60c3131fcc

Let's work this week to make sure that the relevant specification submissions 
on the Nova side are in good shape to ensure a productive discussion.

Thanks,

Steve




Re: [openstack-dev] [heat][ceilometer]: scale up/ down based on number of instances in a group

2014-10-25 Thread Qiming Teng
On Sat, Oct 25, 2014 at 07:58:28AM +0100, Daniel Comnea wrote:
> Hi all,
> 
> 
> Unfortunately I couldn't find any resource - blueprint/document/examples/
> presentations - about my use case below, hence the question I'm raising now
> (if this is not the best place to ask, please let me know).
> 
> 
> Having a group of 5 instances, I'd like to always maintain a minimum of 2
> instances by using the Heat autoscaling feature and Ceilometer.

Did you try setting the 'min_size' property of your auto-scaling group?

Qiming
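For reference, a minimal HOT sketch of what this looks like (not from the
original thread; the image/flavor values here are placeholders):

```yaml
heat_template_version: 2013-05-23
resources:
  scaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2        # the group is never scaled below 2 instances
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros-0.3.2-x86_64
          flavor: m1.small
```

Scale-down policies and Ceilometer alarms can then remove instances, but the
group never drops below min_size.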

> I've seen the Wordpress autoscaling examples based on the cpu_util metric, but
> my use case is more about the number of instances.
> 
> 
> Cheers,
> Dani





Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-25 Thread Erik Moe

Proposal C, VLAN-aware VMs, is about integrating VLAN traffic from VMs with 
Neutron in a tighter fashion.

It terminates the VLAN at the port connected to the VM. It does not bring the 
VLAN concept further into Neutron. This is done by mapping each VLAN from the 
VM to a Neutron network. After all, VLANs and Neutron networks are very much 
alike.

The modelling reuses the current port structure: there is one port on each 
network, and the port still contains information relevant to that network.

By doing these things it's possible to utilize the rest of the features in 
Neutron; only features implemented close to the VM have to be overlooked when 
implementing this. Other features that have attributes on a VM port but are 
realized remotely work fine, for example DHCP (including extra_dhcp_opts) and 
mechanism drivers that use portbindings to do network plumbing on a switch.

After the Icehouse summit where we discussed the L2-gateway solution, I started 
to implement an L2-gateway. The idea was to have a VM with a trunk port 
connected to a trunk network carrying tagged traffic. The network would then be 
connected to an L2-gateway for breaking out a single VLAN and connecting it to 
a normal Neutron network. Following are some of the issues I encountered.

Currently a Neutron port/network contains attributes related to one broadcast 
domain. A trunk network requires that many of those attributes be per broadcast 
domain. This would require a bigger refactoring of Neutron port/network and 
affect all services using the ports/networks.
Due to this I dropped the idea of tight integration with trunk networks.

Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect it to normal Neutron networks. One issue is that the L2-gateway 
will bridge the networks, but the services in the network you bridge to are 
unaware of your existence. This IMO is OK when bridging a Neutron network to 
some remote network, but if you have a Neutron VM and want to utilize various 
resources in another Neutron network (since the one you sit on does not have 
any resources), things get, let's say, non-streamlined.

Another issue with trunk networks is that they put new requirements on the 
infrastructure: it needs to be able to handle VLAN-tagged frames. For a VLAN 
based network it would be QinQ.

My requirements were to have low/no extra cost for VMs using VLAN trunks 
compared to normal ports, and no new bottlenecks or single points of failure. 
Due to this and the previous issues I implemented the L2-gateway in a 
distributed fashion, and since trunk networks could not be realized in reality 
I only had them in the model and optimized them away. But the L2-gateway + 
trunk network has a flexible API; what if someone connects two VMs to one 
trunk network? Well, that's hard to optimize away.

Anyway, due to these and other issues, I limited my scope and switched to the 
current trunk port/subport model.

The code that is up for review is functional: you can boot a VM with a trunk 
port + subports (each subport maps to a VLAN). The VM can send/receive VLAN 
traffic. You can add/remove subports on a running VM. You can specify an IP 
address per subport and use DHCP to retrieve them, etc.

Thanks,
Erik
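Purely as illustration (not from the original thread): the trunk port +
subport model Erik describes maps onto CLI calls shaped like the trunk API
that later landed in Neutron. All network/port names and VLAN IDs below are
made up:

```shell
# Parent port carries untagged traffic plus the tagged subport traffic.
openstack port create --network trunk-net parent-port
openstack network trunk create --parent-port parent-port mytrunk

# Each subport maps one VM-side VLAN to a normal Neutron network.
openstack port create --network tenant-net-100 sub-port-100
openstack network trunk set mytrunk \
    --subport port=sub-port-100,segmentation-type=vlan,segmentation-id=100

# Boot the VM with the parent port; frames tagged VLAN 100 reach tenant-net-100.
openstack server create --image cirros --flavor m1.small \
    --port parent-port vm-with-trunk
```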



From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: 24 October 2014 20:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

What scares me a bit about the "let's find a common solution for both external 
devices and VMs" approach is the challenge of reaching an agreement. I remember 
a rather long discussion in the dev lounge in Hong Kong about trunking support 
that ended up going in all kinds of directions.

I work on implementing services in VMs, so my opinion is definitely colored by 
that. Personally, proposal C is the most appealing to me for the following 
reasons: it is "good enough"; a trunk port notion is semantically easy to take 
in (at least to me); by doing it all within the port resource the Nova 
implications are minimal; it seemingly can handle multiple network types (VLAN, 
GRE, VXLAN, ... they are all mapped to different trunk-port-local VLAN tags); 
DHCP should work for the trunk ports and their subports (unless I overlook 
something); the spec already elaborates a lot on details; and there is also 
already code available that can be inspected.

Thanks,
Bob

From: Ian Wells <ijw.ubu...@cack.org.uk>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday 23 October 2014 23:58
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

There are two categories of problems:
1. some networks don't pass VLAN tagged traffic, and it's impossible to detect 
this from the API
2. it's not possible to pas

Re: [openstack-dev] [Heat] Convergence prototyping

2014-10-25 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-10-23 11:10:29 -0700:
> Hi folks,
> I've been looking at the convergence stuff, and become a bit concerned 
> that we're more or less flying blind (or at least I have been) in trying 
> to figure out the design, and also that some of the first implementation 
> efforts seem to be around the stuff that is _most_ expensive to change 
> (e.g. database schemata).
> 
> What we really want is to experiment on stuff that is cheap to change 
> with a view to figuring out the big picture without having to iterate on 
> the expensive stuff. To that end, I started last week to write a little 
> prototype system to demonstrate the concepts of convergence. (Note that 
> none of this code is intended to end up in Heat!) You can find the code 
> here:
> 
> https://github.com/zaneb/heat-convergence-prototype
> 
> Note that this is a *very* early prototype. At the moment it can create 
> resources, and not much else. I plan to continue working on it to 
> implement updates and so forth. My hope is that we can develop a test 
> framework and scenarios around this that can eventually be transplanted 
> into Heat's functional tests. So the prototype code is throwaway, but 
> the tests we might write against it in future should be useful.
> 
> I'd like to encourage anyone who needs to figure out any part of the 
> design of convergence to fork the repo and try out some alternatives - 
> it should be very lightweight to do so. I will also entertain pull 
> requests (though I see my branch primarily as a vehicle for my own 
> learning at this early stage, so if you want to go in a different 
> direction it may be best to do so on your own branch), and the issue 
> tracker is enabled if there is something you want to track.
> 
> I have learned a bunch of stuff already:
> 
> * The proposed spec for persisting the dependency graph 
> (https://review.openstack.org/#/c/123749/1) is really well done. Kudos 
> to Anant and the other folks who had input to it. I have left comments 
> based on what I learned so far from trying it out.
> 
> 
> * We should isolate the problem of merging two branches of execution 
> (i.e. knowing when to trigger a check on one resource that depends on 
> multiple others). Either in a library (like taskflow) or just a separate 
> database table (like my current prototype). Baking it into the 
> orchestration algorithms (e.g. by marking nodes in the dependency graph) 
> would be a colossal mistake IMHO.
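(Editor's illustration, not Heat code.) The "separate database table" approach
to merging branches can be sketched in a few lines: a sync-point records which
requirements of a node have completed, and the check on the node fires only
once the set is complete. All names here are invented for the sketch:

```python
from collections import defaultdict

class SyncPoints:
    """Tracks satisfied dependencies per node, like a table keyed by node."""

    def __init__(self, graph):
        # graph: {node: set of nodes it depends on}
        self.required = {n: set(reqs) for n, reqs in graph.items()}
        self.satisfied = defaultdict(set)

    def notify(self, node, completed_req):
        """Record that one requirement finished; True means node is ready."""
        self.satisfied[node].add(completed_req)
        return self.satisfied[node] >= self.required[node]

# A depends on both B and C; only the second notification triggers the check,
# and the orchestration algorithm itself never marks graph nodes.
sp = SyncPoints({'A': {'B', 'C'}})
ready_after_b = sp.notify('A', 'B')   # still waiting on C
ready_after_c = sp.notify('A', 'C')   # both branches have merged
```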
> 
> 
> * Our overarching plan is backwards.
> 
> There are two quite separable parts to this architecture - the worker 
> and the observer. Up until now, we have been assuming that implementing 
> the observer would be the first step. Originally we thought that this 
> would give us the best incremental benefits. At the mid-cycle meetup we 
> came to the conclusion that there were actually no real incremental 
> benefits to be had until everything was close to completion. I am now of 
> the opinion that we had it exactly backwards - the observer 
> implementation should come last. That will allow us to deliver 
> incremental benefits sooner.
> 
> The problem with the observer is that it requires new plugins. (That 
> sucks BTW, because a lot of the value of Heat is in having all of these 
> tested, working plugins. I'd love it if we could take the opportunity to 
> design a plugin framework such that plugins would require much less 
> custom code, but it looks like a really hard job.) Basically this means 
> that convergence would be stalled until we could rewrite all the 
> plugins. I think it's much better to implement a first stage that can 
> work with existing plugins *or* the new ones we'll eventually have with 
> the observer. That allows us to get some benefits soon and further 
> incremental benefits as we convert plugins one at a time. It should also 
> mean a transition period (possibly with a performance penalty) for 
> existing plugin authors, and for things like HARestarter (can we please 
> please deprecate it now?).
> 
> So the two phases I'm proposing are:
>   1. (Workers) Distribute tasks for individual resources among workers; 
> implement update-during-update (no more locking).
>   2. (Observers) Compare against real-world values instead of template 
> values to determine when updates are needed. Make use of notifications 
> and such.
> 
> I believe it's quite realistic to aim to get #1 done for Kilo. There 
> could also be a phase 1.5, where we use the existing stack-check 
> mechanism to detect the most egregious divergences between template and 
> reality (e.g. whole resource is missing should be easy-ish). I think 
> this means that we could have a feasible Autoscaling API for Kilo if 
> folks step up to work on it - and in any case now is the time to start 
> on that to avoid it being delayed more than it needs to be based purely 
> on the availability of underlying features. That's why I proposed a 
> session on Autoscalin

Re: [openstack-dev] [neutron] [oslo.db] model_query() future and neutron specifics

2014-10-25 Thread Mike Bayer

> On Oct 23, 2014, at 11:27 AM, Kyle Mestery  wrote:
> 
> Mike, first, thanks for sending out this detailed analysis. I'm hoping
> that some of the DB experts from the Neutron side have read this.
> Would it make sense to add this to our weekly meeting [1] for next
> week and discuss it during there? At least we could give it some
> airtime. I'm also wondering if it makes sense to grab some time in
> Paris on Friday to discuss in person. Let me know your thoughts.
> 
> Thanks,
> Kyle
> 
> [1] https://wiki.openstack.org/wiki/Network/Meetings
> 

hey Kyle -

Both good ideas, though I'm missing the summit this year due to a new addition 
to our family, and overall I'm not around too much the next couple of weeks. I 
think I'll be able to circle back to this issue more fully after the summit.

- mike


