Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-09 Thread Alex Glikson
Good summary. I would also add that in A1 the schedulers (e.g., in Nova 
and Cinder) could talk to each other to coordinate. Besides defining the 
policy and the user-facing APIs, I think we should also outline those 
cross-component APIs (we need to think whether they have to be 
user-visible, or can be admin-only).

Regards,
Alex




From:   Mike Spreitzer mspre...@us.ibm.com
To: Yathiraj Udupi (yudupi) yud...@cisco.com, 
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date:   09/10/2013 08:46 AM
Subject:Re: [openstack-dev] [scheduler] APIs for Smart Resource 
Placement - Updated Instance Group Model and API extension model - WIP 
Draft



Thanks for the clue about where the request/response bodies are 
documented.  Is there any convenient way to view built documentation for 
Havana right now? 

You speak repeatedly of the desire for clean interfaces, and nobody 
could disagree with such words.  I characterize my desire that way too. It 
might help me if you elaborate a little on what clean means to you.  To 
me it is about minimizing the number of interactions between different 
modules/agents and the amount of information in those interactions.  In 
short, it is about making narrow interfaces - a form of simplicity. 

To me the most frustrating aspect of this challenge is the need for the 
client to directly mediate the dependencies between resources; this is 
really what is driving us to do ugly things.  As I mentioned before, I am 
coming from a setting that does not have this problem.  So I am thinking 
about two alternatives: (A1) how clean can we make a system in which the 
client continues to directly mediate dependencies between resources, and 
(A2) how easily and cleanly can we make that problem go away. 

For A1, we need the client to make a distinct activation call for each 
resource.  You have said that we should start the roadmap without joint 
scheduling; in this case, the scheduling can continue to be done 
independently for each resource and can be bundled with the activation 
call.  That can be the call we know and love today, the one that creates a 
resource, except that it needs to be augmented to also carry some pointer 
that points into the policy data so that the relevant policy data can be 
taken into account when making the scheduling decision.  Ergo, the client 
needs to know this pointer value for each resource.  The simplest approach 
would be to let that pointer be the combination of (p1) a VRT's UUID and 
(p2) the local name for the resource within the VRT.  Other alternatives 
are possible, but require more bookkeeping by the client. 

I think that at the first step of the roadmap for A1, the client/service 
interaction for CREATE can be in just two phases.  In the first phase the 
client presents a topology (top-level InstanceGroup in your terminology), 
including resource definitions, to the new API for registration; the 
response is a UUID for that registered top-level group.  In the second 
phase the client creates the resources as is done today, except that 
each creation call is augmented to carry the aforementioned pointer into 
the policy information.  Each resource scheduler (just nova, at first) can 
use that pointer to access the relevant policy information and take it 
into account when scheduling.  The client/service interaction for UPDATE 
would be in the same two phases: first update the policy/resource 
definitions at the new API, then do the individual resource updates in 
dependency order. 
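
(To make the augmented creation call concrete, here is a minimal sketch 
using python-novaclient; the hint keys 'group_uuid' and 'resource_name' 
are hypothetical, invented only to illustrate the (p1)/(p2) pointer, not 
an agreed API.)

    # Sketch only: the hint keys below are illustrative, not an agreed API.
    from novaclient.v1_1 import client

    nova = client.Client('demo', 'secret', 'demo',
                         'http://keystone:5000/v2.0')

    # Phase 1 (not shown): register the topology; the new API returns
    # a UUID for the registered top-level group.
    group_uuid = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'

    # Phase 2: create each resource as today, augmented with the pointer
    # into the registered policy data.
    server = nova.servers.create(
        name='web-1',
        image=nova.images.find(name='cirros'),
        flavor=nova.flavors.find(name='m1.small'),
        scheduler_hints={
            'group_uuid': group_uuid,   # (p1) UUID of the registered VRT
            'resource_name': 'web-1',   # (p2) local name within the VRT
        })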

I suppose the second step in the roadmap is to have Nova do joint 
scheduling.  The client/service interaction pattern can stay the same. The 
only difference is that Nova makes the scheduling decisions in the first 
phase rather than the second.  But that is not a detail exposed to the 
clients. 

Maybe the third step is to generalize beyond nova? 

For A2, the first question is how to remove user-level create-time 
dependencies between resources.  We are only concerned with the 
user-level create-time dependencies here because it is only they that 
drive intimate client interactions.  There are also create-time 
dependencies due to the nature of the resource APIs; for example, you can 
not attach a volume to a VM until after both have been created.  But 
handling those kinds of create-time dependencies does not require intimate 
interactions with the client.  I know of two software orchestration 
technologies developed in IBM, and both have the property that there are 
no user-level create-time dependencies between resources; rather, the 
startup code (userdata) that each VM runs handles dependencies (using a 
library for cross-VM communication and synchronization).  This can even be 
done in plain CFN, using wait conditions and handles (albeit somewhat 
clunkily), right?  So I think there are ways to get this nice property 
already.  The next question is how best to exploit it to make cleaner 
APIs.  I think we can have a one-step 

Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Bob Melander (bmelande)
For use case 2, the ability to pin an admin/operator-owned VM to a particular 
tenant can be useful.
I.e., the service VMs are owned by the operator but a particular service VM 
will only allow service instances from a single tenant.

Thanks,
Bob

From: Regnier, Greg J 
greg.j.regn...@intel.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Tuesday, 8 October 2013 23:48
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Before going into more detail on the mechanics, would like to nail down use 
cases.

Based on input and feedback, here is what I see so far.



Assumptions:



- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)
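
(Purely as an illustration of the assumptions above, they could be 
modelled like this; the classes are ours, not part of the blueprint.)

    # Hypothetical data model for the assumptions listed above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ServiceInstance:
        data_ports: List[str]     # Neutron ports plugged into tenant networks
        service_mgmt_if: str      # service management i/f (e.g. FW rules)
        vm_mgmt_if: str           # VM management i/f (e.g. health monitor)

    @dataclass
    class ServiceVM:
        owner: str                # tenant, or admin/operator when shared
        instances: List[ServiceInstance] = field(default_factory=list)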



Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only plug into network(s) owned by tenant



Use case 2: Shared Service VM

Owned by admin/operator

VM hosts multiple service instances

The ports of each service instance plug into one tenant's network(s)

Service instance provides isolation from other service instances within VM



Use case 3: Multi-Service VM

Either Private or Shared Service VM

Support multiple service types (e.g. FW, LB, …)


-  Greg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-09 Thread Debojyoti Dutta
Mike, I agree we could have a cleaner API, but I am not sure how
cleanly it will integrate with current Nova, which IMO is the test
we should pass (assuming we do cross-service scheduling later).

On Tue, Oct 8, 2013 at 10:39 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
 Thanks for the clue about where the request/response bodies are documented.
 Is there any convenient way to view built documentation for Havana right
 now?

 You speak repeatedly of the desire for clean interfaces, and nobody could
 disagree with such words.  I characterize my desire that way too.  It might
 help me if you elaborate a little on what clean means to you.  To me it is
 about minimizing the number of interactions between different modules/agents
 and the amount of information in those interactions.  In short, it is about
 making narrow interfaces - a form of simplicity.


I think the word clean can be overloaded. For me, a clean API uses
minimal nouns and specifies the policies, the resources we would like to
request, and the extra metadata that we might want to pass. Hence the
three components.
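
(A minimal sketch of what such a three-component request body might look 
like; the field names are illustrative, not the draft API.)

    # Hypothetical request body: resources (nouns), policies, metadata.
    instance_group_request = {
        "name": "web-tier",
        "members": [  # the resources we would like to request
            {"name": "web-1", "type": "compute", "flavor": "m1.small"},
            {"name": "web-2", "type": "compute", "flavor": "m1.small"},
        ],
        "policies": [  # placement policies over those members
            {"type": "anti-affinity", "members": ["web-1", "web-2"]},
        ],
        "metadata": {"owner": "demo"},  # extra pass-through data
    }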

 To me the most frustrating aspect of this challenge is the need for the
 client to directly mediate the dependencies between resources; this is
 really what is driving us to do ugly things.  As I mentioned before, I am
 coming from a setting that does not have this problem.  So I am thinking
 about two alternatives: (A1) how clean can we make a system in which the
 client continues to directly mediate dependencies between resources, and
 (A2) how easily and cleanly can we make that problem go away.

I am a little confused - how is the API dictating either A1 or A2? Isn't
that a function of the implementation of the API? For a moment let us
assume that the black-box implementation will be awesome and address
your concerns. The question is this - does the current API help
specify what we want, assuming we will be able to extend the notion of
nodes, edges, policies and metadata?

debo


 For A1, we need the client to make a distinct activation call for each
 resource.  You have said that we should start the roadmap without joint
 scheduling; in this case, the scheduling can continue to be done
 independently for each resource and can be bundled with the activation call.
 That can be the call we know and love today, the one that creates a
 resource, except that it needs to be augmented to also carry some pointer
 that points into the policy data so that the relevant policy data can be
 taken into account when making the scheduling decision.  Ergo, the client
 needs to know this pointer value for each resource.  The simplest approach
 would be to let that pointer be the combination of (p1) a VRT's UUID and
 (p2) the local name for the resource within the VRT.  Other alternatives are
 possible, but require more bookkeeping by the client.

 I think that at the first step of the roadmap for A1, the client/service
 interaction for CREATE can be in just two phases.  In the first phase the
 client presents a topology (top-level InstanceGroup in your terminology),
 including resource definitions, to the new API for registration; the
 response is a UUID for that registered top-level group.  In the second phase
 the client creates the resources as is done today, except that each
 creation call is augmented to carry the aforementioned pointer into the
 policy information.  Each resource scheduler (just nova, at first) can use
 that pointer to access the relevant policy information and take it into
 account when scheduling.  The client/service interaction for UPDATE would be
 in the same two phases: first update the policy/resource definitions at the
 new API, then do the individual resource updates in dependency order.

 I suppose the second step in the roadmap is to have Nova do joint
 scheduling.  The client/service interaction pattern can stay the same.  The
 only difference is that Nova makes the scheduling decisions in the first
 phase rather than the second.  But that is not a detail exposed to the
 clients.

 Maybe the third step is to generalize beyond nova?

 For A2, the first question is how to remove user-level create-time
 dependencies between resources.  We are only concerned with the user-level
 create-time dependencies here because it is only they that drive intimate
 client interactions.  There are also create-time dependencies due to the
 nature of the resource APIs; for example, you can not attach a volume to a
 VM until after both have been created.  But handling those kinds of
 create-time dependencies does not require intimate interactions with the
 client.  I know of two software orchestration technologies developed in IBM,
 and both have the property that there are no user-level create-time
 dependencies between resources; rather, the startup code (userdata) that
 each VM runs handles dependencies (using a library for cross-VM
 communication and synchronization).  This can even be done in plain CFN,
 using wait conditions and handles 

Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Endre Karlson
What about also allowing a specific service to request a port to be created
on a requested server for an arbitrary service like a physical machine?

I think we should think more in terms of s/VM/Instance, where an instance can
really be either a VM or a physical host, since it really doesn't matter.

Endre


2013/10/9 Bob Melander (bmelande) bmela...@cisco.com

  For use case 2, the ability to pin an admin/operator-owned VM to a
 particular tenant can be useful.
 I.e., the service VMs are owned by the operator but a particular service
 VM will only allow service instances from a single tenant.

  Thanks,
 Bob

   From: Regnier, Greg J greg.j.regn...@intel.com
 Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date: Tuesday, 8 October 2013 23:48
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases

   Hi,


 Re: blueprint:
 https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

 Before going into more detail on the mechanics, would like to nail down
 use cases.

 Based on input and feedback, here is what I see so far.


 Assumptions:

  

 - a 'Service VM' hosts one or more 'Service Instances'

 - each Service Instance has one or more Data Ports that plug into Neutron
 networks

 - each Service Instance has a Service Management i/f for Service
 management (e.g. FW rules)

 - each Service Instance has a VM Management i/f for VM management (e.g.
 health monitor)

  

 Use case 1: Private Service VM

 Owned by tenant

 VM hosts one or more service instances

 Ports of each service instance only plug into network(s) owned by tenant

  

 Use case 2: Shared Service VM

 Owned by admin/operator

 VM hosts multiple service instances

 The ports of each service instance plug into one tenant's network(s)

 Service instance provides isolation from other service instances within VM
 

  

 Use case 3: Multi-Service VM

 Either Private or Shared Service VM

 Support multiple service types (e.g. FW, LB, …)


 -  Greg

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Thomas Spatzier
Excerpts from Clint Byrum's message

 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 09.10.2013 03:54
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 Excerpts from Stan Lagun's message of 2013-10-08 13:53:45 -0700:
  Hello,
 
 
  That is why it is necessary to have some central coordination service
which
  would handle deployment workflow and perform specific actions (create
VMs
  and other OpenStack resources, do something on that VM) on each stage
  according to that workflow. We think that Heat is the best place for
such
  service.
 

 I'm not so sure. Heat is part of the Orchestration program, not workflow.


I agree. HOT so far was thought to be a format for describing templates in
a structural, declarative way. Adding workflows would stretch it quite a
bit. Maybe we should see what aspects make sense to be added to HOT, and
then how to do workflow-like orchestration in a layer on top.

  Our idea is to extend HOT DSL by adding  workflow definition
capabilities
  as an explicit list of resources, components’ states and actions.
States
  may depend on each other so that you can reach state X only after
you’ve
  reached states Y and Z that the X depends on. The goal is from initial
  state to reach some final state “Deployed”.
 

We also would like to add some mechanisms to HOT for declaratively doing
software component orchestration in Heat, e.g. saying that one component
depends on another one, or needs input from another one once it has been
deployed etc. (I BTW started to write a wiki page, which is admittedly far
from complete, but I would be happy to work on it with interested folks -
https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider).
However, we must be careful not to make such features so complicated that
nobody will be able to use them any more. That said, I believe we could make
HOT cover some levels of complexity, but not all. And then maybe
workflow-based orchestration on top is needed.
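
(To make this concrete, here is a purely hypothetical HOT-like fragment, 
written as a Python dict, declaring that one software component depends on 
another and consumes its output; none of these keys exist in HOT today.)

    # Hypothetical declarative component dependencies (not current HOT).
    template = {
        "resources": {
            "db_config": {
                "type": "Example::SoftwareConfig",  # illustrative type name
                "properties": {"script": "install_db.sh"},
            },
            "app_config": {
                "type": "Example::SoftwareConfig",
                "depends_on": "db_config",          # declarative ordering
                "properties": {
                    "script": "install_app.sh",
                    # input from the other component once it is deployed
                    "inputs": {"db_host": {"get_attr": ["db_config", "host"]}},
                },
            },
        },
    }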


 Orchestration is not workflow, and HOT is an orchestration templating
 language, not a workflow language. Extending it would just complect two
 very different (though certainly related) tasks.

 I think the appropriate thing to do is actually to join up with the
 TaskFlow project and consider building it into a workflow service or
tools
 (it is just a library right now).

  There is such state graph for each of our deployment entities (service,
  VMs, other things). There is also an action that must be performed on
each
  state.

 Heat does its own translation of the orchestration template into a
 workflow right now, but we have already discussed using TaskFlow to
 break up the orchestration graph into distributable jobs. As we get more
 sophisticated on updates (rolling/canary for instance) we'll need to
 be able to reason about the process without having to glue all the
 pieces together.

  We propose to extend HOT DSL with workflow definition capabilities
where
  you can describe step by step instruction to install service and
properly
  handle errors on each step.
 
  We already have an experience in implementation of the DSL, workflow
  description and processing mechanism for complex deployments and
believe
  we’ll all benefit by re-using this experience and existing code, having
  properly discussed and agreed on abstraction layers and distribution of
  responsibilities between OS components. There is an idea of
implementing
  part of workflow processing mechanism as a part of Convection proposal,
  which would allow other OS projects to benefit by using this.
 
  We would like to discuss if such design could become a part of future
Heat
  version as well as other possible contributions from Murano team.
 

 Thanks really for thinking this through. Windows servers are not unique
in
this regard. Puppet and Chef are pretty decent at expressing single-node
 workflows but they are awkward for deferring control and resuming on
other
 nodes, so I do think there is a need for a general purpose distributed
 workflow definition tool.

 I'm not, however, convinced that extending HOT would yield a better
 experience for users. I'd prefer to see HOT have a well defined interface
 for where to defer to external workflows. Wait conditions are actually
 decent at that, and I'm sure with a little more thought we can make them
 more comfortable to use.

Good discussion to have, i.e. what extensions we would need in HOT for
making HOT alone more capable, and what we would need to hook it up with
other orchestration like workflows.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-09 Thread Jaromir Coufal

On 2013/08/10 23:53, Robert Collins wrote:

On 9 October 2013 07:24, Jiří Stránský ji...@redhat.com wrote:

Clint and Monty,

thank you for such good responses. I am new to the TripleO team indeed and I
was mostly concerned by the line in the sand. Your responses shed some more
light on the issue for me and I hope we'll be heading the right way :)

Sorry for getting folk concerned! I'm really glad some folk jumped in
to clarify. Let me offer some more thoughts on top of this..
I was taking some concepts as a given - they are part of the OpenStack
culture - when I wrote my mail about TripleO reviewer status:

* That what we need is a bunch of folk actively engaged in thinking
about the structure, performance and features of the component
projects in TripleO, who *apply* that knowledge to every code review.
And we need to grow that collection of reviewers to keep up with a
growing contributor base.

* That the more reviewers we have, the less burden any one reviewer
has to carry : I'd be very happy if we normalised on everyone in -core
doing just one careful and effective review a day, *if* that's
sufficient to carry the load. I doubt it will be, because developers
can produce way more than one patch a day each, which implies 2*
developer count reviews per day *at minimum*, and even if every ATC
was a -core reviewer, we'd still need two reviews per -core per day.

* How much knowledge is needed to be a -core? And how many reviews?
There isn't a magic number of reviews IMO: we need 'lots' of reviews
and 'over a substantial period of time' : it's very hard to review
effectively in a new project, but after 3 months, if someone has been
regularly reviewing they will have had lots of mentoring taking place,
and we (-core membership is voted on by -core members) are likely to
be reasonably happy that they will do a good job.

* And finally that the job of -core is to sacrifice their own
productivity in exchange for team productivity : while there are
limits to this - reviewer fatigue, personal/company goals, etc etc, at
the heart of it it's a volunteer role which is crucial for keeping
velocity up: every time a patch lingers without feedback the developer
writing it is stalled, which is a waste (in the Lean sense).



So with those 'givens' in place, I was trying to just report in that
context.. the metric of reviews being done is a *lower bound* - it is
necessary, but not sufficient, to be -core. Dropping below it for an
extended period of time - and I've set a pretty arbitrary initial
value of approximately one per day - is a solid sign that the person
is not keeping up with evolution of the code base.

Being -core means being on top of the evolution of the program and the
state of the code, and being a regular, effective, reviewer is the one
sure fire way to do that. I'm certainly open to folk who want to focus
on just the CLI doing so, but that isn't enough to keep up to date on
the overall structure/needs - the client is part of the overall
story! So the big thing for me is - if someone no longer has time to
offer doing reviews, that's fine, we should recognise that and release
them from the burden of -core: their reviews will still be valued and
thought deeply about, and if they contribute more time for a while
then we can ask them to shoulder -core again.

HTH,
-Rob


Hey Rob, Clint and Monty,

thanks for clarification, I was not aware of these details before. I 
hope that it will work well.


Thanks
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Questions and comments

2013-10-09 Thread Patrick Petit

On 10/9/13 6:53 AM, Mike Spreitzer wrote:
Yes, that helps.  Please, guys, do not interpret my questions as 
hostility, I really am just trying to understand.  I think there is 
some overlap between your concerns and mine, and I hope we can work 
together.
No probs at all. Don't see a sign of hostility at all. Potential 
collaboration and understanding is really how we perceive your questions...


Sticking to the physical reservations for the moment, let me ask for a 
little more explicit detail.  In your outline below, late in the game 
you write "the actual reservation is performed by the lease manager 
plugin".  Is that the point in time when something (the lease manager 
plugin, in fact) decides which hosts will be used to satisfy the 
reservation?
Yes. The reservation service should return only the uuid of a Pcloud that 
is initially empty. The description of host capabilities and extra-specs 
is only defined as metadata of the Pcloud at this point.
Or is that decided up-front when the reservation is made?  I do not 
understand how the lease manager plugin can make this decision on its 
own, isn't the nova scheduler also deciding how to use hosts?  Why 
isn't there a problem due to two independent allocators making 
allocations of the same resources (the system's hosts)?
The way we are designing it excludes race conditions between the Nova 
scheduler and the lease manager plugin for host reservations, because the 
lease manager plugin will use a private pool of hosts for reservation 
(the reservation pool) that is not shared with the Nova scheduler. In our 
view, this is not a convenience design artifact but deliberate. It is 
because what we'd like to achieve really is energy-efficiency management 
based on a reservation backlog and possibly dynamic management of host 
resources between the reservation pool and the multi-tenant pool. A 
Climate scheduler filter in Nova will do the triage, filtering out those 
hosts that belong to the reservation pool and hosts that are reserved in 
an active lease. Another (longer-term) goal behind this (actually the 
primary justification for the reservation pool) is that the lease manager 
plugin could turn machines off to save electricity when the reservation 
backlog allows it, and consequently turn them back on when a lease kicks 
in, if needed. We anticipate that the resource management algorithms / 
heuristics behind that behavior are non-trivial, but we believe it would 
be hardly achievable without a reservation backlog and some form of 
capacity management capabilities left open to the provider. In 
particular, things become much trickier when it comes to deciding what 
to do with the reserved hosts when a lease ends. We foresee a few options:


1) Forcibly kill the instances running on reserved hosts and move them 
back to the reservation pool for the next lease to come
2) Keep the instances running on the reserved hosts and move them to an 
intermediary recycling pool until all the instances die, at which point 
the hosts that are released from duty can return to the reservation 
pool. Cases 1 and 2 could optionally be augmented by a grace period.
3) Keep the instances running on the reserved hosts and move them to the 
multi-tenant pool. Then, it'll be up to the operator to repopulate the 
reservation pool using free hosts. That would require administrative tasks 
like disabling hosts, instance migrations, ... in other words, certainly 
a pain if not fully automated.


So, you noticed that all this relies very much on manipulating host 
aggregates, metadata and filtering behind the scenes. That's one way of 
implementing the whole-host-reservation feature based on the tools we 
have at our disposal today. Could/should a substantial refactoring of 
Nova and its scheduler be a better way to go? Is it worth it? We don't 
know. We anyway have zero visibility on that.
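
(A rough sketch of the aggregate manipulation this implies, using 
python-novaclient's admin aggregate API; the pool names and metadata key 
are ours, for illustration only.)

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')

    # Two host aggregates model the two pools (names are illustrative).
    reserved = nova.aggregates.create('reservation-pool', None)
    shared = nova.aggregates.create('multi-tenant-pool', None)

    # Metadata a Climate scheduler filter could match on to triage hosts.
    nova.aggregates.set_metadata(reserved,
                                 {'climate:reservation_pool': 'true'})

    # e.g. option 3 above: when a lease ends, hand a host back to the
    # multi-tenant pool (disabling and migrations not shown).
    nova.aggregates.remove_host(reserved, 'compute-07')
    nova.aggregates.add_host(shared, 'compute-07')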


HTH,
Patrick


Thanks,
Mike

Patrick Petit patrick.pe...@bull.net wrote on 10/07/2013 07:02:36 AM:

 Hi Mike,

 There are actually more facets to this. Sorry if it's a little
 confusing :-( Climate's original blueprint https://
 wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api
 was about physical host reservation only. The typical use case
 being: I want to reserve x number of hosts that match the
 capabilities expressed in the reservation request. The lease is
 populated with reservations which at this point are only capacity
 descriptors. The reservation becomes active only when the lease
 starts at a specified time and for a specified duration. The lease
 manager plugin in charge of the physical reservation has a planning
 of reservations that allows Climate to grant a lease only if the
 requested capacity is available at that time. Once the lease becomes
 active, the user can request instances to be created on the reserved
 hosts using a lease handle as a Nova's scheduler hint. That's
 basically it. We do not assume or enforce how and by whom (Nova,
 Heat ,...) a resource 

[openstack-dev] [Swift] Havana RC1 (1.10.0-rc1) available

2013-10-09 Thread Thierry Carrez
Hello everyone,

The havana release cycle for Swift already saw the releases of the 1.9.0
and 1.9.1 versions. The final coordinated release for the Havana cycle
shall include Swift 1.10.0. We now have a Swift release candidate for this:

https://launchpad.net/swift/havana/1.10.0-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 1.10.0
(havana) final version on October 17. You are therefore strongly
encouraged to test and validate this tarball.

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/swift/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/swift/+filebug

and tag it *havana-rc-potential* to bring it to the release crew's
attention.

Note that the master branch of Swift is now open for Icehouse
development.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Steven Hardy
On Wed, Oct 09, 2013 at 12:53:45AM +0400, Stan Lagun wrote:
 Hello,
 
 I’m one of the engineers working on the Murano project. Recently we started a
 discussion about Murano and Heat Software orchestration and I want to
 continue this discussion with more technical details.

Thanks, we're certainly interested in Murano, and are keen to discuss your
roadmap, and where requirements and integration opportunities exist.

 In our project we do deployment of complex multi-instance Windows services.
 Those services usually require specific multi-VM orchestration that is
 currently impossible, or at least not that easy to achieve, with Heat. As you
 are currently doing HOT software orchestration design, we would like to
 participate in that design and contribute to it, so
 that Heat could address use-cases which we believe are very common.
 
 For example here is how deployment of a SQL Server cluster goes:
 
1.
 
Allocate Windows VMs for SQL Server cluster

Heat can already do this, you just define either OS::Nova::Server or
AWS::EC2::Instance resource in your template, or possibly a group of
instances via OS::Heat::InstanceGroup if the configuration is the same for
all VMs

2.
Enable secondary IP address from user input on all SQL Windows instances

So this again is already possible, via several resource types,
AWS::EC2::NetworkInterface, AWS::EC2::EIP, OS::Neutron::Port etc..

I suggest using the Neutron resources where possible, if you don't care
about CloudFormation portability.

3.
Install SQL Server prerequisites on each node

So Heat is already able to do this, via a couple of methods, for Linux VMs,
so we just need the in-instance agent support for windows (cloud-init,
optionally combined with agents like cfn-init from heat-cfntools)

Can you clarify what you're using for in-instance agents currently,
cloudbase-init, and/or some bespoke tools?

4.
Choose a master node and install Failover Cluster on it
5.
Configure all nodes so that they know which one of them is the master

I'm not sure what's involved in these steps, but it seems like there are
serialization requirements, which can be handled via WaitConditions.

One thing I think we do need to look at is ways to enable expression of
serialization requirements via HOT, which don't require use of the
AWS-compatible WaitCondition resources.

So I think we already have the required functionality, we just need to
build out better native interfaces to it.

Configure all nodes so that they know which one of them is the master
6.
 
Install SQL Server on all nodes
7.
 
Initialize AlwaysOn on all nodes except for the master
8.
 
Initialize Primary replica
9.
 
Initialize secondary replicas
 
 
 All of the steps must take place in appropriate order depending on the
 state of other nodes. Some steps require an output from previous steps and
 all of them require some input parameters. SQL Server requires an Active
 Directory service in order to use Failover mechanism and installation of
 Active Directory with primary and secondary controllers is a complex
 workflow of its own.

So all of this seems possible right now using WaitConditions, but as
mentioned above we should look at ways to provide a better and more
flexible native interface to similar functionality.
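
(For reference, a minimal sketch of the AWS-compatible serialization 
pattern discussed above, expressed as a Python dict for brevity; the 
resource names and property values are ours.)

    import json

    # Replica is created only after Master signals readiness via the handle.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "Master": {
                "Type": "AWS::EC2::Instance",
                # Its UserData would run setup, then signal MasterReadyHandle.
                "Properties": {"ImageId": "windows-sql",
                               "InstanceType": "m1.large"},
            },
            "MasterReadyHandle": {
                "Type": "AWS::CloudFormation::WaitConditionHandle"
            },
            "MasterReady": {
                "Type": "AWS::CloudFormation::WaitCondition",
                "DependsOn": "Master",
                "Properties": {"Handle": {"Ref": "MasterReadyHandle"},
                               "Timeout": "600"},
            },
            "Replica": {
                "Type": "AWS::EC2::Instance",
                "DependsOn": "MasterReady",
                "Properties": {"ImageId": "windows-sql",
                               "InstanceType": "m1.large"},
            },
        },
    }
    print(json.dumps(template, indent=2))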

 That is why it is necessary to have some central coordination service which
 would handle deployment workflow and perform specific actions (create VMs
 and other OpenStack resources, do something on that VM) on each stage
 according to that workflow. We think that Heat is the best place for such
 service.

Yep, we already do coordination of VM deployment and other openstack
resources, by managing implicit and explicit dependencies between those
resources.

 Our idea is to extend HOT DSL by adding  workflow definition capabilities
 as an explicit list of resources, components’ states and actions. States
 may depend on each other so that you can reach state X only after you’ve
 reached states Y and Z that the X depends on. The goal is from initial
 state to reach some final state “Deployed”.

IMHO there isn't a real need to provide explicit control of the workflow
implied by the resource dependencies for the sort of use-case you describe.

What I think is needed is simply a better native interface to serialization
primitives/resources.

 There is such state graph for each of our deployment entities (service,
 VMs, other things). There is also an action that must be performed on each
 state.
 For example states graph from example above would look like this:
 
 The goal is to reach Service_Done state which depends on VM1_Done and
 VM2_Done states and so on from initial Service_Start state.
 
 We propose to extend HOT DSL with workflow definition capabilities where
 you can describe step by step instruction to install service and properly
 handle errors on each step.

So as has already been mentioned, Heat defines 

Re: [openstack-dev] Candidate proposals for TC (Technical Committee) positions are now open

2013-10-09 Thread Thierry Carrez
Anita Kuno wrote:
 Candidate proposals for the Technical Committee positions (11 positions)
 are now open and will remain open until 23:59 UTC October 10, 2013.

Reminder: You have until tomorrow Thursday, 23:59 UTC to announce your
candidacy to the TC. We currently have 17 candidates for 11 positions (6
one-year seats and 5 six-month seats), and more candidacies is always
better (thanks to Condorcet !).

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-10-09 Thread Thierry Carrez
Gareth wrote:
 it seems that we didn't log this channel in
 here: http://eavesdrop.openstack.org/meetings/openstack-meeting/2013/

Meetings are logged per-meeting. This one in particular is logged at
http://eavesdrop.openstack.org/meetings/project/

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Questions and comments

2013-10-09 Thread Dina Belova
Mike, I'll try to describe the reservation process for virtual
reservations. I'll use the Nova project as an example.

As I said, this Nova workflow is only an example that may, and certainly
will, be modified for other 'virtual' projects.

1) The user goes to Nova via the CLI/Dashboard and performs all the usual
actions, as if he/she wants to boot an instance. The only difference is that
the user passes reservation-related hints to Nova. In the CLI this request
may look like the following:

nova boot --flavor 1 --image bb3979c2-b2e1-4836-abbc-2ee510064718 --hint
reserved=True --hint lease_params='{"name": "lease1", "start": "now",
"end": "2013-12-1 16:07"}' vm1

If the scheduling process went OK, we'll see the following from the
'nova list' command:

+--------------------------------------+------+----------+------------+-------------+------------------+
| ID                                   | Name | Status   | Task State | Power State | Networks         |
+--------------------------------------+------+----------+------------+-------------+------------------+
| a7ac3b2e-dca5-4d21-ab37-cd019a813636 | vm1  | RESERVED | None       | NOSTATE     | private=10.0.0.3 |
+--------------------------------------+------+----------+------------+-------------+------------------+

2) The request passes up to the Compute Manager, where the scheduling
process is already done. If the Manager finds reservation-related hints, it
uses the Climate client to create a lease, using the params passed to Nova
and the id of the VM to be reserved. Nova also changes the status of the VM
in its DB to 'RESERVED'. If there are no reservation-related hints in the
filter properties, Nova just spawns the instance as usual.

3) The lease creation request goes to the Climate Lease API via the Climate
client. The Climate Lease API will be mostly used by other services (like
Nova in this example) and by admin users to manage leases as 'contracts'.

4) The Climate Lease API passes the lease creation request to the Climate
Manager service via RPC. The Climate Manager is the service that
communicates with all resource plugins and the Climate DB. The Climate
Manager creates the lease record in the DB, all reservation records (for
the instance in this case) and all event records. Even if the user passes
no additional events (like notifications in the future), at least two
events for the lease are created - 'start' and 'end' events.

5) One more function the Manager performs is periodic DB polling to find out
if there is any 'UNDONE' event to be processed. If there is such an event
(for example, the start event for the lease just saved in the DB), the
Manager begins to process it. That means the Manager sets the event status
to 'IN_PROGRESS' and, for every reservation in the lease, commits the
'on_start' actions for that reservation. Currently there is a one-to-one
relationship between lease and reservation, but we suppose there may be
cases for a one-to-many relationship. 'On_start' actions are defined in the
resource plugin responsible for this resource type ('virtual:instance' in
this example). Plugins are loaded using stevedore, and the needed ones are
defined in the climate.conf file.

6) The virtual:instance plugin commits the on_start actions. For a VM this
may be a 'wake_up' action that wakes the reserved instance up through the
Nova API. This may be implemented using the Nova extensions mechanism. The
wake-up action actually spawns the instance.

7) If everything is OK, the Manager sets the event status to 'DONE' or
'COMPLETED'.

8) Almost the same process takes place when the Manager gets the 'end'
event for the lease from the DB.
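
(To illustrate steps 5-8, here is a minimal sketch of what a resource
plugin might look like; the base interface and the 'wake_up' call are
guesses for illustration, not Climate's actual code.)

    class VirtualInstancePlugin(object):
        """Hypothetical Climate resource plugin for reserved VMs."""

        resource_type = 'virtual:instance'

        def __init__(self, nova_client):
            self.nova = nova_client

        def on_start(self, reservation):
            # Step 6: wake the RESERVED instance up, which actually
            # spawns it ('wake_up' stands in for a Nova extension).
            self.nova.servers.wake_up(reservation['instance_id'])

        def on_end(self, reservation):
            # Lease expired; the action here is policy-dependent,
            # e.g. snapshot and then delete the instance.
            self.nova.servers.delete(reservation['instance_id'])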

Thank you for the attention.

Dina


On Wed, Oct 9, 2013 at 1:01 PM, Patrick Petit patrick.pe...@bull.netwrote:

  On 10/9/13 6:53 AM, Mike Spreitzer wrote:

 Yes, that helps.  Please, guys, do not interpret my questions as
 hostility, I really am just trying to understand.  I think there is some
 overlap between your concerns and mine, and I hope we can work together.

 No probs at all. Don't see a sign of hostility at all. Potential
 collaboration and understanding is really how we perceive your
 questions...


 Sticking to the physical reservations for the moment, let me ask for a
 little more explicit detail.  In your outline below, late in the game you
 write "the actual reservation is performed by the lease manager plugin".
  Is that the point in time when something (the lease manager plugin, in
 fact) decides which hosts will be used to satisfy the reservation?

 Yes. The reservation service should return only a Pcloud uuid that is
 empty. The description of host capabilities and extra-specs is only
 defined as metadata of the Pcloud at this point.

 Or is that decided up-front when the reservation is made?  I do not
 understand how the lease manager plugin can make this decision on its own,
 isn't the nova scheduler also deciding how to use hosts?  Why isn't there a
 problem due to two independent allocators making allocations of the same
 resources (the system's hosts)?

 The way we are designing it excludes race conditions between Nova
 scheduler and the lease manager plugin for host reservations because the
 lease manager plugin will use a private pool of 

[openstack-dev] [Ceilometer] Meeting agenda for Wed Oct 9th at 2100 UTC

2013-10-09 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Wed Oct 9th at 2100 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Release python-ceilometerclient? 
* Removal of core membership for inactive contributors
  * http://russellbryant.net/openstack-stats/ceilometer-reviewers-90.txt 
  * John Tran 
  * jiang, yunhong
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-09 Thread Petr Blaho
On Tue, Oct 08, 2013 at 02:31:34PM +0200, Jaromir Coufal wrote:
 Hi Chris,
 
 On 2013/08/10 13:13, Chris Jones wrote:
 
 Hi
 
 On 8 October 2013 11:59, Jaromir Coufal jcou...@redhat.com wrote:
 
     * Example: It doesn't make sense, that someone who is 
 core-reviewer
 based on image-builder is able to give +2 on UI or CLI code and
 vice-versa.
 
 
 I'm not sure this is a technical problem as much as a social problem - if
 someone isn't able to give a good review (be it -1/+1 or +2) on a
 particular change, they should just not review it, regardless of which 
 part
 of the project it relates to.
 
 I completely agree on this point. It depends on people's judgement.
 
 The question is whether we will depend only on this judgment or whether we
 help it by splitting reviewers based on projects. I believe that the split
 can help us. Anyway, it is just a proposal; it depends what others think
 about it.
 
 
 I'm a tripleo core reviewer, but I have been ignoring the tuskar reviews
 until I have had some time to play with it and get a feel for the code. 
 You
 can argue that I therefore shouldn't even have the power to give a +2 on
 tuskar code, but I would note that before Robert added me to core he 
 wasn't
 simply watching the quantity of my reviews, he was also giving me feedback
 on areas I was going wrong. I would imagine that if I was wildly throwing
 around inappropriate reviews on code I wasn't qualified to review, he 
 would
 give me feedback on that too and ultimately remove me as a reviewer.
 
 Well, it depends on the approach, whether we think the first or the second
 way. I might argue that you shouldn't have the +2 power for Tuskar until you
 have a bigger contribution on Tuskar code (reviews or patches or ...). To me
 it sounds logical, because you are not that close to it and you are not
 familiar with all the background there.
 
 If somebody contributes regularly there, he can become a core reviewer
 on that project as well.
 
 If you did bad code reviews on Tuskar and your 'core' status was removed,
 you could still do an excellent job on other TripleO projects, so why lose
 it on all of them?
 
 Let me give one example:
 There is tuskar-client, which is a very important project but does not have
 as much activity as the other projects. There are people who actually wrote
 the whole code, yet based on the amount of work (reviews) they might never
 get among the core reviewers. In the future, if they need to move forward or
 quickly fix something, they would need to ask some core reviewer who is not
 familiar with that code, just to approve it.
 
 You see where I am going?
 
 
 Perhaps this is something that won't scale well, but I have a great deal 
 of
 faith in Robert's judgement on who is or isn't reviewing effectively.
 
 I have no experience with Rob's distribution of core members, and I believe
 that he does it in the best faith.
 
 I am just suggesting a more project-based approach, since the whole program
 expanded into more projects. It doesn't have to be a strict project-based
 metric; it can be combined with 'across-projects contribution', so we ensure
 that people are aware of the whole effort. But I believe that the project
 focus should stay the primary metric.
 
 
 
 --
 Cheers,
 
 Chris
 
 
 Thanks
 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I generally agree with Jarda w/r/t a more project-based approach.


I am concerned about the case when core reviewers become overloaded with
review demands.

Of course, if this happens we can just add another core
reviewer to the group, but I would suggest doing it the other way - let's
have a broader core group at first and gradually lower the number of core
members (using metrics, discussion, need, common agreement from
contributors...) by X every Y weeks or so.

This way the core reviewers group will shrink until its members feel that
they have just enough reviews on their agenda that it does not hinder the
quality of their work.

This will not eliminate any competition for core membership, but it
will eliminate the immediate impact on projects' review process and on
reviewers' workload, and will help gradually decide whether a project needs
a core member even if that person is not that active a reviewer but can
ensure that patches will not grow old for that project.

That is my 2 cents.

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Today's XenAPI meeting

2013-10-09 Thread Bob Ball
Just a quick note to say that we're skipping the XenAPI meeting this week as a 
couple of key participants have other commitments.

Normal service will resume next week.

Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Stan Lagun
 Thanks, we're certainly interested in Murano, and are keen to discuss your
 roadmap, and where requirements and integration opportunities exist
Glad to hear it. The same is true on the Murano side.

On the sample SQL workflow: that was just an example. I didn't want to bother
you with SQL Server deployment details as they're really not that important.
What I've tried to say is that deployments consist of many steps, and the
steps vary depending on the instance's role, user input and so on. A step on
one machine often requires some other machine to already be in some state,
or some output from a deployment step that happened elsewhere.

I do understand that it is doable using Heat alone. Actually, we do use Heat
for some parts of the workflow. We do not talk to Nova or Neutron directly.
The special use case of Murano is that there is no HOT template author.
Heat is more a tool for an administrator who knows how to write HOT templates
and wants to deal with low-level configuration aspects. But Murano is quite
different. In Murano the developers of workflows/scripts/metadata/etc. are
not end-users. The user is not doing any sort of programming. He is given a
UI dashboard where he can compose the desired environment from available
building blocks (services). Services may depend on each other, and the UI
guides him in fulfilling all the requirements. The user also configures the
services' (VMs' etc.) settings. The number of instances in a SQL Server
cluster, and which one of them is going to be the master, are such settings.

Because we do not know in advance all the instances and resources that
would be required for the services the user has chosen, and the deployment
process strongly depends on user input, we cannot just have some
hardcoded HOT template. So what we do is dynamically generate HOT
templates by parameterizing and merging several simpler templates together.
Then we use our orchestration engine to send commands to the Murano Agents
on the VMs to perform the deployment steps in the correct order with all the
needed input. Probably we could do it without orchestration, but then we
would need to dynamically generate all those WaitConditions and
waiting/signaling scripts etc. - something that would be error-prone and
hard to manage at large scale.
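
(A toy sketch of the kind of template merging described above -
combining sections from simpler fragments; it assumes the fragments
never define conflicting keys.)

    def merge_templates(base, fragment):
        """Merge two HOT-like template dicts (toy version, no conflicts)."""
        merged = dict(base)
        for section in ('parameters', 'resources', 'outputs'):
            combined = dict(base.get(section, {}))
            combined.update(fragment.get(section, {}))
            if combined:
                merged[section] = combined
        return merged

    vm_fragment = {'resources': {'sql_vm_1': {'type': 'OS::Nova::Server'}}}
    port_fragment = {'resources': {'sql_port_1': {'type': 'OS::Neutron::Port'}}}
    template = merge_templates(vm_fragment, port_fragment)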

So we do believe that some external state orchestration would be very
helpful for the complex software services we deal with. Although Murano
currently has such an engine, it is far from perfect. As we thought about a
cleaner and more explicit approach to state orchestration, we came to a
vision of how to implement it on top of a task-orchestration engine like
TaskFlow. And then we came up with the idea that we can go even further and
implement a TaskFlow-as-a-Service service, with its own REST API etc., that
could handle abstract task orchestration while leaving everything
deployment-related out of the scope of such a service. That opens many
additional opportunities for integration that would not be available if we
just used the TaskFlow library in-process.

But we do believe that it would be better for all of us if it were Heat,
and not Murano, that provides such orchestration capabilities.
And yes, I completely agree with you that it should not be a part of HOT
templates but something external to them. To my understanding, external
orchestration needs to be capable of:
1. Processing some object (JSON) model describing what
services/resources/etc. need to be deployed
2. During orchestration, invoking some HOT templates (creating nested
stacks?), passing required attributes from that object model as inputs for
the HOT template
3. Sending commands (bash scripts, puppet manifests, chef recipes,
etc.) to VMs in the correct order, with parameters taken from the object
model and HOT outputs. See https://wiki.openstack.org/wiki/Murano/UnifiedAgent
for how this is done in Murano

We are currently communicating with the TaskFlow team on possible
contributions to the Convection implementation, and would be glad to
participate in the software orchestration part on the Heat side. I don't
pretend that I know all the answers, or even that our design is good, but in
Murano we gained much experience in software orchestration that might be
useful for the Heat team, and we would definitely like to share our ideas. I
also believe that now is the time for that, as at the summit it may be too
late because all the principal decisions will already have been made.

On Wed, Oct 9, 2013 at 1:24 PM, Steven Hardy sha...@redhat.com wrote:

 On Wed, Oct 09, 2013 at 12:53:45AM +0400, Stan Lagun wrote:
  Hello,
 
  I’m one of the engineer working on Murano project. Recently we started a
  discussion about Murano and Heat Software orchestration and I want to
  continue this discussion with more technical details.

 Thanks, we're certainly interested in Murano, and are keen to discuss your
 roadmap, and where requirements and integration opportunities exist.

  In our project we do deployment of complex multi-instance Windows
 services.
  Those services usually require specific multi-VM orchestration that is
  currently 

Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-09 Thread Sylvain Bauza

Hi Yathi,

Thanks for taking the time to explain your vision.

Climate is about reservations, i.e. preempting resource capacity and 
guaranteeing a user that he will actually get exclusive access to a certain 
set of resources he asks for, for a certain period of time.
The resource placement decisions are the core of the added value of 
Climate, as historically we found that we need some efficiency in them. 
In other words, we will need to implement a Climate scheduler for 
picking the right hosts best fitting the user requirements.


In other words, provided a user (or a service) hits the Climate Host 
Reservation API asking for X hosts with certain capabilities (which 
could/should include network bandwidth or host architecture), Climate 
will create a host group (we call it a pcloud) on lease creation 
with no hosts in it, and after a certain period of time (based on 
efficiency criteria - as of Climate v1, at lease start), Climate will 
take the user requirements, elect the hosts and put them in the pcloud.
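
(Illustration only: the kind of request body such a host reservation 
lease might carry; all the field names here are hypothetical.)

    # Hypothetical lease-creation body: "X hosts with these capabilities".
    lease_request = {
        "name": "hadoop-batch",
        "start": "2013-12-01T16:00:00Z",
        "end": "2013-12-02T16:00:00Z",
        "reservations": [{
            "resource_type": "physical:host",
            "min": 4, "max": 4,                       # X = 4 hosts
            "hypervisor_properties": "memory_mb >= 65536",  # illustrative
            "resource_properties": '{"arch": "x86_64"}',
        }],
    }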



That said, this is still a bit unclear to me, but I see two points 
where your efforts and our efforts could be joined:
 1/ Climate could be seen as a broker for managing the states of the 
Instance Group, by offering a backend system implementing the needed 
reservation system
 2/ Climate could also use the Smart Resource Placement holder as a 
scheduler helping to decide which hosts are the best opportunity 
in terms of efficiency



What do you think about it?
-Sylvain



On 09/10/2013 01:51, Yathiraj Udupi (yudupi) wrote:

Hi Sylvain,
Thanks for your comments.  I can see that Climate is aiming to provide 
a reservation service for physical, and now also virtual, resources, like 
you mention.


The Instance Group [a][b] effort (proposed during the last summit, 
and good progress has been made so far) attempts to address the 
tenant-facing API aspects of the bigger Smart Resource Placement 
puzzle [c].
The idea is to be able to represent an entire topology (a group of 
resources) that is requested by the tenant, containing members or 
sub-groups, their connections, their associated policies and other 
metadata.


The first part is to be able to persist this group, and use the group 
to create/schedule the resources together as a whole group, so that 
intelligent decisions can be made together considering all the 
requirements and constraints (policies).


In the ongoing discussions in the Nova scheduler sub-team, we do agree 
that we need additional support to achieve the creation of the group 
as a whole.  It will involve reservations too.


Once the Instance group is registered and persisted, we can trigger 
the creation/boot-up of the instances, which will involve arriving at 
the resource placement decisions and then the actual creation.  So one 
of the ideas is to provide clear APIs so that an external component 
(such as Climate, Heat, or some other module) can take the placement 
decision results and do the actual creation of the resources.


As described in [c], we will also need the support of a global state 
repository to make all the resource states from across services 
available to the smart placement decision engine.


As part of the plan for [c],  the first step is to tackle the 
representation and API for these InstanceGroups, and that is this 
ongoing effort within the Nova Scheduler sub-team.


Our idea is to separate the phases of this grand-scale scheduling of 
resources, and keep the interfaces clean.  If we have to interface 
with Climate for the final creation (i.e., once the smart placement 
decisions have been made), we should be able to do that; at least that 
is the vision.



References
[a] Instance Group Model and API extension doc - 
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing
[b] Instance group blueprint - 
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
[c] Smart Resource Placement 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit 



Thanks,
Yathi.





From: Sylvain Bauza sylvain.ba...@bull.net
Date: Tuesday, October 8, 2013 12:40 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Cc: Yathiraj Udupi yud...@cisco.com
Subject: Re: [openstack-dev] [scheduler] APIs for Smart Resource 
Placement - Updated Instance Group Model and API extension model - WIP 
Draft


Hi Yathi,

On 08/10/2013 05:10, Yathiraj Udupi (yudupi) wrote:

Hi,

Based on the discussions we have had in the past few scheduler 
sub-team meetings,  I am sharing a document that proposes an 
updated Instance Group Model and API extension model.
This is a work-in-progress draft version, but sharing it for early 
feedback.

Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-09 Thread Tim Bell

There are also times when I know a hypervisor needs to be failed even if Nova 
has not detected it. Typical examples would be an intervention on a network 
cable or retirement of a rack.

The problem of VM Zombies does need to be addressed too. Not simple to solve.

Thus, I feel a shared effort in this area is needed rather than each deployment 
having its own scripts...

Tim

From: Alex Glikson [mailto:glik...@il.ibm.com]
Sent: 09 October 2013 14:00
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] automatically evacuate instances on compute 
failure

 Hypervisor failure detection is also a more or less solved problem in Nova [2]. 
 There are other candidates for that task as well, like Ceilometer's hardware 
 agent [3] (still WIP to my knowledge).

The problem is that in some cases you want to be *really* sure that the 
hypervisor is down before running 'evacuate' (otherwise it could lead to an 
application crash). And you want to do it at scale. So, polling and traditional 
monitoring might not be good enough for a fully-automated service (e.g., you 
may need to do 'fencing' to ensure that the node will not suddenly come back 
with all the VMs still running).
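
As a sketch of what "really sure" could mean operationally: confirm, 
fence, then evacuate. All of the helper names below are hypothetical 
(e.g. the fence step might wrap an IPMI power-off); this is an outline, 
not an existing service.

import time

def is_hypervisor_down(host):
    # Assumption: some monitoring check; polling alone is not enough.
    raise NotImplementedError

def fence(host):
    # Assumption: e.g. an IPMI 'chassis power off' wrapper, so the node
    # cannot suddenly come back with all the VMs still running.
    raise NotImplementedError

def evacuate_all(host):
    # Assumption: per-instance 'nova evacuate' calls for that host.
    raise NotImplementedError

def handle_failure(host, confirm_checks=3, interval=10):
    # Re-check several times to avoid acting on a transient blip.
    for _ in range(confirm_checks):
        if not is_hypervisor_down(host):
            return  # it recovered; do nothing
        time.sleep(interval)
    fence(host)          # guarantee the node stays down
    evacuate_all(host)   # only now is rebuilding elsewhere safe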

Regards,
Alex




From: Oleg Gelbukh ogelb...@mirantis.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org,
Date: 09/10/2013 02:09 PM
Subject:Re: [openstack-dev] [nova] automatically evacuate instances on 
compute failure




Hello,

We have much interest in this discussion (with focus on second scenario 
outlined by Tim), and working on its design at the moment. Thanks to everyone 
for valuable insights in this thread.

It looks like the external orchestration daemon problem is already partially 
solved by Heat with the HARestarter resource [1].

Hypervisor failure detection is also a more or less solved problem in Nova [2]. 
There are other candidates for that task as well, like Ceilometer's hardware 
agent [3] (still WIP to my knowledge).

[1] 
https://github.com/openstack/heat/blob/stable/grizzly/heat/engine/resources/instance.py#L35
[2] 
http://docs.openstack.org/developer/nova/api/nova.api.openstack.compute.contrib.hypervisors.html#module-nova.api.openstack.compute.contrib.hypervisors
[3] 
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Wed, Oct 9, 2013 at 9:26 AM, Tim Bell 
tim.b...@cern.ch wrote:
I have proposed the summit design session for Hong Kong 
(http://summit.openstack.org/cfp/details/103) to discuss exactly these sort of 
points. We have the low level Nova commands but need a service to automate the 
process.

I see two scenarios

- A hardware intervention needs to be scheduled, please rebalance this workload 
elsewhere before it fails completely
- A hypervisor has failed, please recover what you can using shared storage and 
give me a policy on what to do with the other VMs (restart, leave down till 
repair etc.)

Most OpenStack production sites have some sort of script doing this sort of 
thing now. However, each one will be implementing the logic for migration 
differently, so there is no agreed best-practice approach.

Tim

 -Original Message-
 From: Chris Friesen 
 [mailto:chris.frie...@windriver.com]
 Sent: 09 October 2013 00:48
 To: 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] automatically evacuate instances on 
 compute failure

 On 10/08/2013 03:20 PM, Alex Glikson wrote:
  Seems that this can be broken into 3 incremental pieces. First, would
  be great if the ability to schedule a single 'evacuate' would be
  finally merged
  (https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance).

 Agreed.

  Then, it would make sense to have the logic that evacuates an entire
  host
  (https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host).
  The reasoning behind suggesting that this should not necessarily be in
  Nova is, perhaps, that it *can* be implemented outside Nova using the
  indvidual 'evacuate' API.

 This actually more-or-less exists already in the nova 
 host-evacuate command.  One major issue with this however is that it
 requires the caller to specify whether all the instances are on shared or 
 local storage, and so it can't handle a mix of local and shared
 storage for the instances.   If any of them boot off block storage, for
 instance, you need to move them first and then do the remaining ones as a 
 group.

 It would be nice to embed the knowledge of whether or not an instance is on 
 shared storage in the instance itself at creation time.  I
 envision specifying this in the config file for the compute manager along 
 with the instance storage location, and the compute manager
 
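
A sketch of the mixed-storage handling Chris describes, splitting the 
instance list before evacuating. Hedged: on_shared_storage() is a 
hypothetical helper (exactly the knowledge that is missing today), and 
passing host=None assumes the find-host-and-evacuate-instance blueprint 
above; the novaclient calls otherwise follow the API of the time.

def on_shared_storage(server):
    # Hypothetical: would read a per-instance flag recorded at creation
    # time, as suggested above; no such attribute exists today.
    raise NotImplementedError

def evacuate_host(client, host):
    # client is a python-novaclient Client; list everything on the host.
    instances = client.servers.list(
        search_opts={'host': host, 'all_tenants': 1})
    shared = [s for s in instances if on_shared_storage(s)]
    local = [s for s in instances if not on_shared_storage(s)]
    # Evacuate each group with the right flag instead of forcing the
    # caller to assert one storage type for the whole host.
    for server in local:
        client.servers.evacuate(server, host=None, on_shared_storage=False)
    for server in shared:
        client.servers.evacuate(server, host=None, on_shared_storage=True)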

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Alex Rudenko
Hi everyone,

I've read this thread and I'd like to share some thoughts. In my opinion,
workflows (which run on VMs) can be integrated with heat templates as
follows:

   1. workflow definitions should be defined separately and processed by
   stand-alone workflow engines (chef, puppet etc).
   2. the HOT resources should reference workflows which they require,
   specifying a type of workflow and the way to access a workflow definition.
   The workflow definition might be provided along with HOT.
   3. Heat should treat the orchestration templates as transactions (i.e.
   Heat should be able to rollback in two cases: 1) if something goes wrong
   during processing of an orchestration workflow 2) when a stand-alone
   workflow engine reports an error during processing of a workflow associated
   with a resource)
   4. Heat should expose an API which enables basic communication between
   running workflows. Additionally, Heat should provide an API to workflows
   that allows workflows to specify whether they completed successfully or
   not. The reference to these APIs should be passed to the workflow engine
   that is responsible for executing workflows on VMs.

Pros of each point:
1 & 2 - keep Heat simple and give a possibility to choose the best
workflows and engines among the available ones.
3 - adds some kind of all-or-nothing semantics improving the control and
awareness of what's going on inside VMs.
4 - allows workflow synchronization and communication through Heat API.
Provides the error reporting mechanism for workflows. If a workflow does
not need this functionality, it can ignore it.

Cons:
- Changes to existing workflows, making them aware of Heat's existence, are
required.

These thoughts might show some gaps in my understanding of how Heat works,
but I would like to share them anyway.

Best regards,
Oleksii Rudenko


On Wed, Oct 9, 2013 at 5:37 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,

 In addition I want to add a couple of words about flexibility and debugging
 capabilities. I believe it is quite important for the HOT template engine to
 control all aspects of deployment process execution, including software
 components. Right now I believe Heat lacks control of what is going on
 on the VM side.  In my opinion, a HOT template user should be able to define
 what steps are necessary to deploy a complex environment and, more important,
 he should be able to provide hints to the engine on how to deal with errors
 during deployment. Centralized orchestration sees the whole picture of the
 environment status while scripts on a VM usually have quite a limited view.
 A workflow specification can have on_error actions, and centralized
 orchestration will be capable of making smart decisions on how to handle
 errors during deployment.

 I think we need to have a design discussion about the architecture of this
 centralized orchestration. On the one side, this orchestration should
 have the whole information about the environment state, and as Heat has full
 exposure to the environment it sounds reasonable to have such orchestration
 as a part of Heat. On the other side, a HOT template should be quite simple
 to be useful, so additional workflow concepts might overload the DSL syntax;
 an additional independent orchestration level also sounds quite reasonable,
 and this is what we have now as the Murano project.

 It will be nice to have some initial live discussion before the summit, as
 not all developers will be at the summit. What do you think about a Google
 Hangout session at the end of this week or next week?

 Thanks
 Gosha









 On Wed, Oct 9, 2013 at 7:52 AM, Stan Lagun sla...@mirantis.com wrote:

  Thanks, we're certainly interested in Murano, and are keen to discuss
 your
  roadmap, and where requirements and integration opportunities exist
 Glad to hear it. The same is true from the Murano side.

 On the sample SQL workflow: that was just an example. I didn't want to bother
 you with SQL Server deployment details as they're really not that important.
 What I've tried to say is that deployments consist of many steps, and the
 steps vary depending on an instance's role, user input and so on. The step on
 one machine
 often requires some other machine to already be in some state, or some output
 from a deployment step that happened elsewhere.

 I do understand that it is doable using Heat alone. Actually we do use
 Heat for some parts of the workflow. We do not talk to Nova or Neutron directly.
 The special use case of Murano is that there is no HOT template author.
 Heat is more a tool for an administrator who knows how to write HOT templates
 and wants to deal with low-level configuration aspects. But Murano is quite
 different. In Murano the developers of workflows/scripts/metadata/etc. are
 not end-users. The user is not doing any sort of programming. He is given a
 UI dashboard where he can compose the desired environment from available
 building blocks (services). Services may depend on each other and the UI guides
 him how to fulfill 

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Mike Spreitzer
I favor separation of concerns.  I think (4), at least, has got nothing to 
do with infrastructure orchestration, the primary concern of today's heat 
engine.  I advocate (4), but as separate functionality.

Regards,
Mike

Alex Rudenko alexei.rude...@gmail.com wrote on 10/09/2013 12:59:22 PM:

 From: Alex Rudenko alexei.rude...@gmail.com
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
 Date: 10/09/2013 01:03 PM
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration 
 proposal for workflows
 
 Hi everyone,
 
 I've read this thread and I'd like to share some thoughts. In my 
 opinion, workflows (which run on VMs) can be integrated with heat 
 templates as follows:
 1. workflow definitions should be defined separately and processed 
 by stand-alone workflow engines (chef, puppet etc). 
 2. the HOT resources should reference workflows which they require, 
 specifying a type of workflow and the way to access a workflow 
 definition. The workflow definition might be provided along with HOT.
 3. Heat should treat the orchestration templates as transactions 
 (i.e. Heat should be able to rollback in two cases: 1) if something 
 goes wrong during processing of an orchestration workflow 2) when a 
 stand-alone workflow engine reports an error during processing of a 
 workflow associated with a resource)
 4. Heat should expose an API which enables basic communication 
 between running workflows. Additionally, Heat should provide an API 
 to workflows that allows workflows to specify whether they completed
 successfully or not. The reference to these APIs should be passed to
 the workflow engine that is responsible for executing workflows on VMs.
 Pros of each point:
 1 & 2 - keep Heat simple and give a possibility to choose the best
 workflows and engines among the available ones.
 3 - adds some kind of all-or-nothing semantics improving the control
 and awareness of what's going on inside VMs.
 4 - allows workflow synchronization and communication through Heat 
 API. Provides the error reporting mechanism for workflows. If a 
 workflow does not need this functionality, it can ignore it.
 
 Cons:
 - Changes to existing workflows making them aware of Heat existence 
 are required.
 
 These thoughts might show some gaps in my understanding of how Heat 
 works, but I would like to share them anyway.
 
 Best regards,
 Oleksii Rudenko
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Clint Byrum
Excerpts from Georgy Okrokvertskhov's message of 2013-10-09 08:37:36 -0700:
 Hi,
 
 In addition I want to add a couple of words about flexibility and debugging
 capabilities. I believe it is quite important for the HOT template engine to
 control all aspects of deployment process execution, including software
 components. Right now I believe Heat lacks control of what is going on
 on the VM side.  In my opinion, a HOT template user should be able to define
 what steps are necessary to deploy a complex environment and, more important,
 he should be able to provide hints to the engine on how to deal with errors
 during deployment. Centralized orchestration sees the whole picture of the
 environment status while scripts on a VM usually have quite a limited view.
 A workflow specification can have on_error actions, and centralized
 orchestration will be capable of making smart decisions on how to handle
 errors during deployment.
 

What you have described above is some of what I'd like to see in HOT.
It is an evolution beyond the limitations of the waitcondition that
keeps things simple. Basically, orchestration providing a hook point
at which a portion of the workflow is deferred to some other tool. The
tool signals back when it is done, or has an error. We have that now,
but currently, the errors just halt the process. We definitely need a way to
say something like this:

on_error_code: {default: {rebuild_resources: [ Instance1, Loadbalancer1]}}

The OS::Heat::HARestarter was sort of an attempt at some of this.
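
For reference, a minimal sketch of the "tool signals back" hook as it 
already works with CFN-style wait conditions, assuming a presigned 
signal URL handed to the instance via template metadata (the URL and 
payload values here are placeholders):

# In-instance tool reporting completion back to the orchestrator.
import json
import urllib2

SIGNAL_URL = 'https://heat.example.com/waitcondition/PRESIGNED'

payload = {
    'Status': 'SUCCESS',   # or 'FAILURE' to halt / trigger rollback
    'Reason': 'software configuration finished',
    'UniqueId': 'node-1',
    'Data': 'anything the deferred workflow wants to pass back',
}

req = urllib2.Request(SIGNAL_URL, json.dumps(payload),
                      {'Content-Type': 'application/json'})
urllib2.urlopen(req)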

I take issue with a tool that wants to control everything. That may be
the easy way out, however I believe that it will lead to a very large,
very complex tool that users will be suspicious of.

For me, I'd like to know where my orchestration ends, and my software
configuration, installation, and state management all begin. The interface
can be such that Heat doesn't have to be omniscient and omnipotent.
It just has to help simplify the task of orchestration and get out of
workflow/config/installation/etc.'s way.

 I think we need to have a design discussion about the architecture of this
 centralized orchestration. On the one side, this orchestration should
 have the whole information about the environment state, and as Heat has full
 exposure to the environment it sounds reasonable to have such orchestration
 as a part of Heat. On the other side, a HOT template should be quite simple
 to be useful, so additional workflow concepts might overload the DSL syntax;
 an additional independent orchestration level also sounds quite reasonable,
 and this is what we have now as the Murano project.


It sounds like you have learned a lot on this journey and it would
definitely be valuable to collaborate with you so that we can make sure
Heat accommodates the use cases you have uncovered.

 It will be nice to have some initial live discussion before the summit, as
 not all developers will be at the summit. What do you think about a Google
 Hangout session at the end of this week or next week?
 

I find that summit sessions are most useful when we are at a point where
there are just a few decision points to get through. If we come with too
much already done, then group-think will take over and less-well-formed
ideas will get squelched. If we come without clearly defined decisions
to make, then we'll bike-shed for the full 40 minutes.

So, given that, I think a brief pre-summit discussion is a good idea to
help us figure out where we may have conflicting views and then we can
come ready to hash those out in a high bandwidth environment.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/nova[master]: Moved headroom calculations into quota_reserve and modified ...

2013-10-09 Thread Ben Nemec


I'm copying openstack-dev for posterity and because smarter people than me hang out there and might be able to answer any questions I can't. :-)
There are a couple of other dependencies you will need to install before the test cases will run successfully (I always forget because my development environments already have them). On Ubuntu you will need to install at least libxslt-dev and libmysqlclient-dev; the Fedora equivalents would be libxslt-devel and mariadb-devel. That should take care of your first problem.
For the second problem I would double-check that the test is taking the code path you expect. A lot of Nova code is called from multiple places, so it's possible it is not getting to your new code in the way you expect. It looks like the dependency is set up correctly so it should be taking into account the first change. Being able to run tests locally should help with this. :-)
-Ben
On 2013-10-09 11:59, Michael Bright wrote:


Thanks Ben,

I had tried running run_tests.sh this morning before doing my first git review but there was an error in the setup - so I just relied on my own testing.
I got the same thing just now with tox:
  sh: 1: mysql_config: not found
I tried putting a symlink as suggested somewhere on StackOverflow, but that wouldn't have been OpenStack/DevStack specific.
I'll shoot off a question about this on the "Ask question" page.

However, I'd really like your advice on git-review.

I'm down to a functional error with my last "review": https://review.openstack.org/#/c/50647/
but it looks to me like the problem which is occurring in compute/api.py (see traceback below)
is due to my db/sqlalchemy/api.py changes not having been taken into account.

Yet, on the above page I see that the changes are in the Patch Set 2.
Any idea what I'm missing here?

Thanks in advance,
Mike.

P.S. I promise I'll read the Gerrit Workflow (!!)










Patch Set 2 (commit 02e2f033beaef5fd18dc60d7c793f8a28fb8161a)
Author/Committer: mjbrightopenst...@mjbright.net, Oct 9, 2013
Parent: e6fe472e96667327ae21c4afc7a804fbcd573634 - "Moved headroom
calculations into quota_reserve and modified headroom calculations to
take into account -ve quota limits (unlimited) on cores and ram."
Changed file: M nova/db/sqlalchemy/api.py (+12, -9)
Checkout: git fetch https://review.openstack.org/openstack/nova refs/changes/47/50647/2 && git checkout FETCH_HEAD

2013-10-09 15:14:21.972 | Traceback (most recent call last):
2013-10-09 15:14:21.972 |   File "nova/tests/api/openstack/compute/plugins/v3/test_servers.py", line 2256, in test_create_instance_above_quota_ram
2013-10-09 15:14:21.973 | self._do_test_create_instance_above_quota('ram', 2048, 10 * 1024, msg)
2013-10-09 15:14:21.973 |   File "nova/tests/api/openstack/compute/plugins/v3/test_servers.py", line 2243, in _do_test_create_instance_above_quota
2013-10-09 15:14:21.973 | server = self.controller.create(self.req, self.body).obj['server']
2013-10-09 15:14:21.974 |   File "nova/api/openstack/compute/plugins/v3/servers.py", line 799, in create
2013-10-09 15:14:21.974 | **create_kwargs)
2013-10-09 15:14:21.974 |   File "nova/hooks.py", line 105, in inner
2013-10-09 15:14:21.974 | rv = f(*args, **kwargs)
2013-10-09 15:14:21.975 |   File "nova/compute/api.py", line 1217, in create
2013-10-09 15:14:21.975 | legacy_bdm=legacy_bdm)
2013-10-09 15:14:21.975 |   File "nova/compute/api.py", line 866, in _create_instance
2013-10-09 15:14:21.975 | block_device_mapping)
2013-10-09 15:14:21.975 |   File "nova/compute/api.py", line 742, in _provision_instances
2013-10-09 15:14:21.976 | context, instance_type, min_count, max_count)
2013-10-09 15:14:21.976 |   File "nova/compute/api.py", line 333, in _check_num_instances_quota
2013-10-09 15:14:21.976 | headroom = exc.kwargs['headroom']
2013-10-09 15:14:21.976 | KeyError: 'headroom'
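
The KeyError above is consistent with the OverQuota being raised 
somewhere that does not put 'headroom' into the exception kwargs. A 
minimal, self-contained sketch of that failure mode, with made-up names 
rather than the real Nova classes:

# Nova-style exceptions stash their format kwargs on the instance, so a
# raiser that omits 'headroom' breaks any reader of exc.kwargs['headroom'].
class OverQuota(Exception):
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        super(OverQuota, self).__init__(str(kwargs))

def raise_with_headroom():
    raise OverQuota(overs=['ram'], headroom={'ram': 1024})

def raise_without_headroom():
    raise OverQuota(overs=['ram'])  # e.g. a call site not yet updated

for raiser in (raise_with_headroom, raise_without_headroom):
    try:
        raiser()
    except OverQuota as exc:
        # compute/api.py-style consumer; .get() would be the defensive fix
        print(exc.kwargs.get('headroom', 'KeyError waiting to happen'))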





On 9 October 2013 17:57, Ben Nemec (Code Review) rev...@openstack.org wrote:

Ben Nemec has posted comments on this change. Change subject: Moved headroom calculations into quota_reserve and modified headroom calculations to take into account -ve quota limits (unlimited) on cores and ram. .. Patch Set 2:
No problem, we all have to start somewhere. :-) Also, you can run the unit tests locally by just running "tox" in the root of the Nova source code. You might have to install tox (I recommend using pip install to make sure you get the latest version), but after that it should take care of everything.

 -- To view, visit https://review.openstack.org/50610 To unsubscribe, visit https://review.openstack.org/settings 

[openstack-dev] Extraroute and router extensions

2013-10-09 Thread Rudra Rugge
Hi All,

Is the extra route extension always tied to the router extension, or 
can it live in a separate route-table container? If extra-route routes 
are available in a separate container, then sharing of such
containers across networks is possible.

Another reason to remove the dependency would be to have
next hops that are not CIDRs. Next hops should be allowed to be an
interface or a VM instance, such as a NAT instance. This would
make the extra route extension more generic. 

This way an extra-route container can be attached/bound to
either a router extension or to a network as well. Many plugins
do not need a separate router entity for most of the inter-network
routing.

Thanks,
Rudra


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting agenda for Wed Oct 9th at 2000 UTC

2013-10-09 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed Oct 9th at 2000 UTC

Current topics for discussion:
* Review last week's actions
* RC2 bug status
* https://wiki.openstack.org/wiki/ReleaseNotes/Havana
* Summit Session Proposals
* Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2013-10-09 Thread Joshua Harlow
Howdy y'all!

==Who am I==

I'd like to also put myself up for the Technical Committee candidate position, 
via one of the seats that are being made available.
I have been active with OpenStack since ~around~ diablo and have helped lead 
the effort to 'marry' OpenStack and Y! in a way that benefits both the 
community and Yahoo.

==What I have done==

I have been/am an active contributor to nova, glance, and cinder.
 - History @ https://review.openstack.org/#/q/owner:harlowja,n,z

I have also helped create the following (along with others in the community):
 - https://launchpad.net/anvil (a tool like devstack, that automatically builds 
OpenStack code & dependencies into packages)
 - https://launchpad.net/taskflow (my active project, that has big plans/goals)

I also am a major contributor to: https://launchpad.net/cloud-init

==What else I do==

In my spare time I code more than I should, mountain bike, ski, and rock climb.

==Background and Experience==

I work at Yahoo! as one of the technical leads on the OpenStack team where we 
have been working to get better involved in the OpenStack community and 
establishing OpenStack internally. We are focused on scale (tens of thousands 
of servers), reliability, security, and making the best software that is 
humanly possible (who doesn't want that)!

A few examples of projects that I have been on:
 - Sponsored search stack (~8000 hosts across ~5 datacenters)
 - Frontpage stack [www.yahoo.com] (millions of page views, huge scale)
 - OpenStack (many users, lots of hypervisors, lots of vms, 4+ datacenters)

==What I think I can bring==

I have been on various engineering teams at Yahoo! for the last 6 years. I have 
designed/architected and implemented code that runs on http://www.yahoo.com, 
the ad systems, the social network backends. Each project has required 
understanding how scale and reliability can be achieved, so that it’s possible 
to maximize uptime (thus getting more customers).

Currently I have been working on establishing OpenStack in Yahoo! and making 
sure Yahoo! keeps on being an active and innovative contributor. I believe I 
can help out in scale (how far can eventlet go...), architectural decisions 
(more services or less??) and help OpenStack be as reliable and manageable as 
possible (taskflow I think has a great potential for helping here).

I also believe that we as a community need to continue encouraging the growth 
of innovative projects and continue building OpenStack as a platform that 
drives the infrastructure of many (if not all) of the companies in the world 
(small and big). I believe the TC can help guide OpenStack into this direction 
(and continue guiding it) and I hope with myself on the TC (if voted in) that 
my unique experiences at Y! (ranging from deploying OpenStack, supporting it 
and developing future features for it) will be useful in guiding the general 
direction.

Thanks for taking me into consideration!

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Extraroute and router extensions

2013-10-09 Thread Rudra Rugge
Updated the subject [neutron]

Hi All,

Is the extra route extension always tied to the router extension, or 
can it live in a separate route-table container? If extra-route routes 
are available in a separate container, then sharing of such
containers across networks is possible.

Another reason to remove the dependency would be to have
next hops that are not CIDRs. Next hops should be allowed to be an
interface or a VM instance, such as a NAT instance. This would
make the extra route extension more generic. 

This way an extra-route container can be attached/bound to
either a router extension or to a network as well. Many plugins
do not need a separate router entity for most of the inter-network
routing.

Thanks,
Rudra


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Lakshminaraya Renganarayana

Steven Hardy sha...@redhat.com wrote on 10/09/2013 05:24:38 AM:


 So as has already been mentioned, Heat defines an internal workflow,
based
 on the declarative model defined in the template.

 The model should define dependencies, and Heat should convert those
 dependencies into a workflow internally.  IMO if the user also needs to
 describe a workflow explicitly in the template, then we've probably
failed
 to provide the right template interfaces for describing dependencies.

I agree with Steven here: models should define the dependencies and Heat
should realize/enforce them. An important design issue is the granularity
at which dependencies are defined and enforced. I am aware of the
wait-condition and signal constructs in Heat, but I find them a bit
low-level, as they are prone to the classic deadlock and race condition
problems. I would like to have higher-level constructs that support
finer-granularity dependencies, which are needed for software
orchestration. Reading through the various discussions on this topic in
this mailing list, I see that many would like to have such higher-level
constructs for coordination.

In our experience with software orchestration using our own DSL and also
with
some extensions to Heat, we found that the granularity of VMs or Resources
to be
too coarse for defining dependencies for software orchestration. For
example, consider
a two VM app, with VMs vmA, vmB, and a set of software components (ai's and
bi's)
to be installed on them:

vmA = base-vmA + a1 + a2 + a3
vmB = base-vmB + b1 + b2 + b3

let us say that software component b1 of vmB, requires a config value
produced by
software component a1 of vmA. How to declaratively model this dependence?
Clearly,
modeling a dependence between just base-vmA and base-vmB is not enough.
However,
defining a dependence between the whole of vmA and vmB is too coarse. It
would be ideal
to be able to define a dependence at the granularity of software
components, i.e.,
vmB.b1 depends on vmA.a1. Of course, it would also be good to capture what
value
is passed between vmB.b1 and vmA.a1, so that the communication can be
facilitated
by the orchestration engine.

We found that such finer granular modeling of the dependencies provides two
valuable benefits:

1. Faster total (resources + software setup) deployment time. For the
example described
above, a coarse-granularity dependence enforcer would start the deployment
of base-vmB after
vmA + a1 + a2 + a3 is completed, but a fine-granularity dependence enforcer
would start base-vmA
and base-vmB concurrently, and then suspend the execution of vmB.b1 until
vmA.a1 is complete and then
let the rest of deployment proceed concurrently, resulting in a faster
completion.

2. More flexible dependencies. For example, mutual dependencies between
resources,
which can be satisfied when orchestrated at a finer granularity. Using the
example described
above, fine-granularity would allow vmB.b1 depends_on vmA.a1 and also
vmA.a3 depends_on vmB.b2,
but coarse-granularity model would flag this as a cyclic dependence.

There are two aspects that need support:

1. Heat/HOT template level constructs to support declarative expression of
such fine-granularity
dependencies and the values communicated / passed for the dependence.
2. Support from Heat engine / analyzer in supporting the runtime ordering,
coordination between
resources, and also the communication of the values.
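
A toy sketch of the fine-granularity ordering described above, with 
plain Python threads standing in for the Heat engine; the names follow 
the vmA/vmB example and nothing here is proposed syntax:

# base-vmA and base-vmB start concurrently; only b1 blocks, and only
# on a1's output value, not on the whole of vmA.
import threading

a1_done = threading.Event()
shared = {}

def deploy_vm_a():
    print('base-vmA up')
    shared['a1.config'] = 'db://example'  # a1 produces the config value
    a1_done.set()
    print('a2, a3 installed')

def deploy_vm_b():
    print('base-vmB up')                  # no waiting to boot the base VM
    a1_done.wait()                        # b1 waits on vmA.a1 specifically
    print('b1 installed with %s' % shared['a1.config'])
    print('b2, b3 installed')

threads = [threading.Thread(target=f) for f in (deploy_vm_a, deploy_vm_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()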

What are your thoughts?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Extraroute and router extensions

2013-10-09 Thread Salvatore Orlando
Hi Rudra,

Some comments inline.

Regards,
Salvatore

On 09/Oct/2013 19:27, Rudra Rugge rru...@juniper.net wrote:

 Updated the subject [neutron]

 Hi All,

 Is the extra route extension always tied to the router extension, or
 can it live in a separate route-table container? If extra-route routes
 are available in a separate container, then sharing of such
 containers across networks is possible.

At this stage it is just an attribute of the router resource even if
they're then implemented in their own database model. Making them reusable
across routers (or networks as you suggest) is feasible, provided that we
also have a solution to ensure backwards compatibility.


 Another reason to remove the dependency would be to have
 next hops that are not CIDRs. Next hops should be allowed to be an
 interface or a VM instance, such as a NAT instance. This would
 make the extra route extension more generic.

It should not be excessively hard generalizing the nexthop attribute
without breaking compatibility. I reckon this can be done independently
from splitting extra routes into a first level resource.

 This way an extra-route container can be attached/bound to
 either a router extension or to a network as well. Many plugins
 do not need a separate router entity for most of the inter-network
 routing.

Indeed. As you surely are already aware, the subnet resource has a similar
attribute for routes to be distributed to instances via DHCP.
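
For concreteness, a sketch of today's shape of both attributes via 
python-neutronclient; the IDs and credential values are placeholders:

# Extra routes live on the router resource (the extraroute extension),
# while host_routes on the subnet are pushed to instances via DHCP.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://identity:5000/v2.0')

neutron.update_router('ROUTER_ID', {'router': {
    'routes': [{'destination': '172.16.0.0/24',
                'nexthop': '10.0.0.10'}]}})

neutron.update_subnet('SUBNET_ID', {'subnet': {
    'host_routes': [{'destination': '172.16.0.0/24',
                     'nexthop': '10.0.0.10'}]}})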

 Thanks,
 Rudra


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting / design talks discussion October 10 1800 UTC

2013-10-09 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt 
channel.

*I would like to see all Savanna contributors discuss talks for the Design 
Summit at this meeting.*

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_October.2C_3

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20131003T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Georgy Okrokvertskhov
Hi Lakshminaraya,

Thank you for bringing your use case and your thoughts here. That is exactly
what we tried to achieve in the Murano project.
There are important aspects you highlighted. Sometimes the resource model is
too high-level to describe the deployment process. If you start to use a more
granular approach with defined steps of deployment, you will end up with a
workflow approach where you have fine control of the deployment process but
the description will be quite complex.

I think the HOT approach is to provide a simple way to describe your
deployment, which consists of solid bricks (resources). If you are using
standard resources you can easily create a simple HOT template for your
deployment. If you need some custom resource you basically have two options:
create a new resource class and hide all complexity inside the code, or use
some workflow language to describe all the steps required. The first approach
is currently supported by Heat. We have experience of creating new
custom resources for orchestrating deployment to a specific IT infrastructure
with specific hardware and software.

Right now we are trying to figure out the possibility of adding workflows
to HOT. It looks like adding a workflow language directly might harm HOT's
simplicity by overloading the DSL syntax and structures.

I actually see the value in Steve's idea to have a specific resource or
resource set to call workflow execution on an external engine. In this case
the HOT template will still be pretty simple, as all workflow details will be
hidden, yet manageable without writing code.

Thanks
Gosha


On Wed, Oct 9, 2013 at 11:31 AM, Lakshminaraya Renganarayana 
lren...@us.ibm.com wrote:

 Steven Hardy sha...@redhat.com wrote on 10/09/2013 05:24:38 AM:


 
  So as has already been mentioned, Heat defines an internal workflow,
 based
  on the declarative model defined in the template.
 
  The model should define dependencies, and Heat should convert those
  dependencies into a workflow internally.  IMO if the user also needs to
  describe a workflow explicitly in the template, then we've probably
 failed
  to provide the right template interfaces for describing dependencies.

 I agree with Steven here: models should define the dependencies and Heat
 should realize/enforce them. An important design issue is the granularity
 at which dependencies are defined and enforced. I am aware of the
 wait-condition and signal constructs in Heat, but I find them a bit
 low-level, as they are prone to the classic deadlock and race condition
 problems. I would like to have higher-level constructs that support
 finer-granularity dependencies, which are needed for software
 orchestration. Reading through the various discussions on this topic in
 this mailing list, I see that many would like to have such higher-level
 constructs for coordination.

 In our experience with software orchestration using our own DSL and also
 with
 some extensions to Heat, we found that the granularity of VMs or Resources
 to be
 too coarse for defining dependencies for software orchestration. For
 example, consider
 a two VM app, with VMs vmA, vmB, and a set of software components (ai's
 and bi's)
 to be installed on them:

 vmA = base-vmA + a1 + a2 + a3
 vmB = base-vmB + b1 + b2 + b3

 let us say that software component b1 of vmB, requires a config value
 produced by
 software component a1 of vmA. How to declaratively model this dependence?
 Clearly,
 modeling a dependence between just base-vmA and base-vmB is not enough.
 However,
 defining a dependence between the whole of vmA and vmB is too coarse. It
 would be ideal
 to be able to define a dependence at the granularity of software
 components, i.e.,
 vmB.b1 depends on vmA.a1. Of course, it would also be good to capture what
 value
 is passed between vmB.b1 and vmA.a1, so that the communication can be
 facilitated
 by the orchestration engine.

 We found that such finer granular modeling of the dependencies provides
 two valuable benefits:

 1. Faster total (resources + software setup) deployment time. For the
 example described
 above, a coarse-granularity dependence enforcer would start the deployment
 of base-vmB after
 vmA + a1 + a2 + a3 is completed, but a fine-granularity dependence
 enforcer would start base-vmA
 and base-vmB concurrently, and then suspend the execution of vmB.b1 until
 vmA.a1 is complete and then
 let the rest of deployment proceed concurrently, resulting in a faster
 completion.

 2. More flexible dependencies. For example, mutual dependencies between
 resources,
 which can be satisfied when orchestrated at a finer granularity. Using the
 example described
 above, fine-granularity would allow vmB.b1 depends_on vmA.a1 and also
 vmA.a3 depends_on vmB.b2,
 but coarse-granularity model would flag this as a cyclic dependence.

 There are two aspects that need support:

 1. Heat/HOT template level constructs to support declarative expression of
 such fine-granularity
 dependencies and the values communicated / passed for the 

Re: [openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-09 Thread Edgar Magana
Hello all,

Is anyone working on NATaaS?
I know we have some developers working on Router as a Service, and they
probably want to include NAT functionality, but I have some interest in
having NAT as a Service.

Please respond if somebody is interested in having some discussions about
it.  

Thanks,

Edgar

From:  Sumit Naiksatam sumitnaiksa...@gmail.com
Reply-To:  OpenStack List openstack-dev@lists.openstack.org
Date:  Tuesday, October 8, 2013 8:30 PM
To:  OpenStack List openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [Neutron] Common requirements for services'
discussion

Hi All,

We had a VPNaaS meeting yesterday and it was felt that we should have a
separate meeting to discuss the topics common to all services. So, in
preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
aspects related to the FWaaS, LBaaS, and VPNaaS.

We will begin with service insertion and chaining discussion, and I hope we
can collect requirements for other common aspects such as service agents,
services instances, etc. as well.

Etherpad for service insertion  chaining can be found here:
https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining

Hope you all can join.

Thanks,
~Sumit.


___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] team meeting / design talks discussion October 10 1800 UTC

2013-10-09 Thread Sergey Lukjanov
Correct links

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_October.2C_10

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20131010T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Oct 9, 2013, at 23:32, Sergey Lukjanov slukja...@mirantis.com wrote:

 Hi folks,
 
 We'll be having the Savanna team meeting as usual in #openstack-meeting-alt 
 channel.
 
 *I would like to see all Savanna contributors discuss talks for the Design 
 Summit at this meeting.*
 
 Agenda: 
 https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_October.2C_3
 
 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20131003T18
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Joshua Harlow
Your example sounds a lot like what taskflow is built for doing.

https://github.com/stackforge/taskflow/blob/master/taskflow/examples/calculate_in_parallel.py
 is a decent example.

In that one, tasks are created and input/output dependencies are specified 
(provides, rebind, and the execute function arguments themselves).

This is combined into the taskflow concept of a flow; one of those flow types 
is a dependency graph.

Using a parallel engine (similar in concept to a heat engine) we can run all 
non-dependent tasks in parallel.

An example that I just created shows this (and shows it running) and 
more closely matches your example.

Program (this will work against the current taskflow codebase): 
http://paste.openstack.org/show/48156/

Output @ http://paste.openstack.org/show/48157/
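
A condensed sketch in the same provides/rebind style Josh describes; 
hedged, since taskflow's engine-selection kwarg has varied across 
releases:

from taskflow import engines
from taskflow import task
from taskflow.patterns import graph_flow

class ProduceConfig(task.Task):
    default_provides = 'a1_config'   # output other tasks can depend on
    def execute(self):
        return 'db://host/schema'

class ConsumeConfig(task.Task):
    def execute(self, a1_config):    # argument name == dependency edge
        print('b1 configured with %s' % a1_config)

flow = graph_flow.Flow('deploy').add(ProduceConfig('a1'),
                                     ConsumeConfig('b1'))
# Independent tasks in the graph run concurrently under a parallel engine.
engines.run(flow, engine='parallel')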

-Josh

From: Lakshminaraya Renganarayana 
lren...@us.ibm.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Wednesday, October 9, 2013 11:31 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] HOT Software orchestration proposal for 
workflows


Steven Hardy sha...@redhat.com wrote on 10/09/2013 
05:24:38 AM:


 So as has already been mentioned, Heat defines an internal workflow, based
 on the declarative model defined in the template.

 The model should define dependencies, and Heat should convert those
 dependencies into a workflow internally.  IMO if the user also needs to
 describe a workflow explicitly in the template, then we've probably failed
 to provide the right template interfaces for describing dependencies.

I agree with Steven here: models should define the dependencies and Heat
should realize/enforce them. An important design issue is the granularity
at which dependencies are defined and enforced. I am aware of the
wait-condition and signal constructs in Heat, but I find them a bit
low-level, as they are prone to the classic deadlock and race condition
problems. I would like to have higher-level constructs that support
finer-granularity dependencies, which are needed for software
orchestration. Reading through the various discussions on this topic in
this mailing list, I see that many would like to have such higher-level
constructs for coordination.

In our experience with software orchestration using our own DSL and also with
some extensions to Heat, we found that the granularity of VMs or Resources to be
too coarse for defining dependencies for software orchestration. For example, 
consider
a two VM app, with VMs vmA, vmB, and a set of software components (ai's and 
bi's)
to be installed on them:

vmA = base-vmA + a1 + a2 + a3
vmB = base-vmB + b1 + b2 + b3

let us say that software component b1 of vmB, requires a config value produced 
by
software component a1 of vmA. How to declaratively model this dependence? 
Clearly,
modeling a dependence between just base-vmA and base-vmB is not enough. However,
defining a dependence between the whole of vmA and vmB is too coarse. It would 
be ideal
to be able to define a dependence at the granularity of software components, 
i.e.,
vmB.b1 depends on vmA.a1. Of course, it would also be good to capture what value
is passed between vmB.b1 and vmA.a1, so that the communication can be 
facilitated
by the orchestration engine.

We found that such finer granular modeling of the dependencies provides two 
valuable benefits:

1. Faster total (resources + software setup) deployment time. For the example 
described
above, a coarse-granularity dependence enforcer would start the deployment of 
base-vmB after
vmA + a1 + a2 + a3 is completed, but a fine-granularity dependence enforcer 
would start base-vmA
and base-vmB concurrently, and then suspend the execution of vmB.b1 until 
vmA.a1 is complete and then
let the rest of deployment proceed concurrently, resulting in a faster 
completion.

2. More flexible dependencies. For example, mutual dependencies between 
resources,
which can be satisfied when orchestrated at a finer granularity. Using the 
example described
above, fine-granularity would allow vmB.b1 depends_on vmA.a1 and also vmA.a3 
depends_on vmB.b2,
but coarse-granularity model would flag this as a cyclic dependence.

There are two aspects that need support:

1. Heat/HOT template level constructs to support declarative expression of such 
fine-granularity
dependencies and the values communicated / passed for the dependence.
2. Support from Heat engine / analyzer in supporting the runtime ordering, 
coordination between
resources, and also the communication of the values.

What are your thoughts?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Stan Lagun
It seems to me that something is missing in our discussion.

If something depends on something else, there must be a definition of that
something. It is clear that it is not the case that one instance depends on
another, but that one application depends on another application. But there is
no such thing as an application (service, whatever) in HOT templates. Only
low-level resources. And resources cannot even be grouped into some
application scope, because a typical HOT template has resources that are
shared between several applications (network, security groups etc.). It is
also possible to have several applications sharing a single VM instance.
That brings us to the conclusion that applications and resources cannot be
mixed in the same template on the same level of abstraction.

Now suppose we did somehow establish the dependency between two
applications. But this dependency is out of the scope of a particular HOT
template. That's because a HOT template says what the user wishes to install.
But a dependency between applications is an attribute of the applications
themselves, not of the particular deployment. For example, WordPress requires
a database. It always does. Not that it requires it within this particular
template, but as a universal rule. In Murano we call it data vs. metadata
separation. If there is metadata that says WordPress requires a DB, then
you not only don't have to repeat it in each template, but you cannot
even ask the system to deploy WordPress without a DB.

So the question is: maybe we need to think about applications/services and
their metadata before going into workflow orchestration? Otherwise the
whole orchestration will be reinvented time and time again with each new
HOT template.
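
A toy illustration of that data vs. metadata split, with made-up 
application names; the point is only that the requirement lives on the 
application definition, not in each deployment template:

# 'wordpress requires a database' is stated once, as a universal rule.
APPLICATIONS = {
    'wordpress': {'requires': ['database']},
    'mysql': {'provides': ['database']},
}

def validate(deployment):
    provided = set()
    for app in deployment:
        provided.update(APPLICATIONS[app].get('provides', []))
    for app in deployment:
        for need in APPLICATIONS[app].get('requires', []):
            if need not in provided:
                raise ValueError('%s requires a %s' % (app, need))

validate(['wordpress', 'mysql'])  # fine
# validate(['wordpress'])        # would raise: wordpress requires a database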

What are your thoughts on this?


On Wed, Oct 9, 2013 at 11:37 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi Lakshminaraya,

 Thank you for bringing your use case and your thought here. That is
 exactly tried to achieve in Murano project.
 There are important  aspects you highlighted. Sometime resource model is
 two high level to describe deployment process. If you start to use more
 granular approach to have defined steps of deployment you will finish with
 workflow approach where you have fine control of deployment process but
 description will be quite complex.

 I think the HOT approach is to provide a simple way do describe you
 deployment which consists of solid bricks (resources). If you are using
 standard resources you can easily create a simple HOT template for your
 deployment. If you need some custom resource you basically have two options
 - create new resource class and hide all complexity inside the code or use
 some workflows language to describe all steps required. The first approach
 is currently supported by Heat. We have an experience of creating new
 custom resources for orchestration deployment to specific IT infrastructure
 with specific hardware and software.

 Right now we are trying to figure out the possibility of adding workflows
 to HOT. It looks like adding workflows language directly might harm HOT
 simplicity by overloaded DSL syntax and structures.

 I actually see the value in Steve's idea to have specific resource or
 resource set to call workflows execution on external engine. In this case
 HOT template will be still pretty simple as all workflow details will be
 hidden, but still manageable without code writing.

 Thanks
 Gosha


 On Wed, Oct 9, 2013 at 11:31 AM, Lakshminaraya Renganarayana 
 lren...@us.ibm.com wrote:

 Steven Hardy sha...@redhat.com wrote on 10/09/2013 05:24:38 AM:


 
  So as has already been mentioned, Heat defines an internal workflow,
 based
  on the declarative model defined in the template.
 
  The model should define dependencies, and Heat should convert those
  dependencies into a workflow internally.  IMO if the user also needs to
  describe a workflow explicitly in the template, then we've probably
 failed
  to provide the right template interfaces for describing dependencies.

 I agree with Steven here: models should define the dependencies and Heat
 should realize/enforce them. An important design issue is the granularity
 at which dependencies are defined and enforced. I am aware of the
 wait-condition and signal constructs in Heat, but I find them a bit
 low-level, as they are prone to the classic deadlock and race condition
 problems. I would like to have higher-level constructs that support
 finer-granularity dependencies, which are needed for software
 orchestration. Reading through the various discussions on this topic in
 this mailing list, I see that many would like to have such higher-level
 constructs for coordination.

 In our experience with software orchestration using our own DSL and also
 with
 some extensions to Heat, we found that the granularity of VMs or
 Resources to be
 too coarse for defining dependencies for software orchestration. For
 example, consider
 a two VM app, with VMs vmA, vmB, and a set of software components (ai's
 and 

Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy

2013-10-09 Thread Dolph Mathews
On Tue, Oct 8, 2013 at 3:20 PM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:

 Hello,

 I am attempting to test the Havana v3  OS-EP-FILTER extension with the
 latest RC1 bits and I get a 404 error response.

 The documentation actually shows 2 different URIs for this API:

 - GET /OS-EP-FILTER/projects/{project_id}/endpoints and
 http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints

 I have tried both OS-EP-FILTER and OS-FILTER with the same result.
 Does anyone have information as to what I am missing?


Apologies for being late to the party, but it looks like you've already got
this worked out. On behalf of people from the future, thanks for following
up :)

Regarding the self-contradicting documentation, a fix merged last night
(thanks Steve!) to get that straightened out as OS-EP-FILTER:


https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-ep-filter-ext.md



 Regards,

 Mark Miller

 -

 From the online documentation:

 List Associations for Project: GET
 /OS-EP-FILTER/projects/{project_id}/endpoints

 Returns all the endpoints that are currently associated with a specific
 project.

 Response:
 Status: 200 OK
 {
     "endpoints": [
         {
             "id": "--endpoint-id--",
             "interface": "public",
             "url": "http://identity:35357/",
             "region": "north",
             "links": {
                 "self": "http://identity:35357/v3/endpoints/--endpoint-id--"
             },
             "service_id": "--service-id--"
         },
         {
             "id": "--endpoint-id--",
             "interface": "internal",
             "region": "south",
             "url": "http://identity:35357/",
             "links": {
                 "self": "http://identity:35357/v3/endpoints/--endpoint-id--"
             },
             "service_id": "--service-id--"
         }
     ],
     "links": {
         "self": "http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints",
         "previous": null,
         "next": null
     }
 }


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Sumit Naiksatam
Hi Rudra,

We tried to separate policy from mechanism for this blueprint, and are
trying to address the latter. I believe the logic for scaling, and or
clustering multiple service VMs to map to a logical service instance would
lie in the service plugin which realizes the logical service instance.

Can you please elaborate on the service/use-case where you would need to
plug the VM into different networks?

Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 3:48 PM, Rudra Rugge rru...@juniper.net wrote:

  Hi Greg,

  Is there any discussion so far on the scaling of VMs, as in launching
 multiple VMs
 for the same service? It would also have an impact on the VIF scheme.

  How can we plug these services into different networks - is that still
 being worked
 on?

  Thanks,
 Rudra

  On Oct 8, 2013, at 2:48 PM, Regnier, Greg J greg.j.regn...@intel.com
 wrote:

   Hi,

   Re: blueprint:
  https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
   Before going into more detail on the mechanics, would like to nail down
  use cases.
   Based on input and feedback, here is what I see so far.

   Assumptions:

  - a 'Service VM' hosts one or more 'Service Instances'
  - each Service Instance has one or more Data Ports that plug into Neutron
  networks
  - each Service Instance has a Service Management i/f for Service
  management (e.g. FW rules)
  - each Service Instance has a VM Management i/f for VM management (e.g.
  health monitor)

   Use case 1: Private Service VM
  Owned by tenant
  VM hosts one or more service instances
  Ports of each service instance only plug into network(s) owned by tenant

   Use case 2: Shared Service VM
  Owned by admin/operator
  VM hosts multiple service instances
  The ports of each service instance plug into one tenant's network(s)
  Service instance provides isolation from other service instances within VM

   Use case 3: Multi-Service VM
  Either Private or Shared Service VM
  Support multiple service types (e.g. FW, LB, ...)

   - Greg
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy

2013-10-09 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Adam,

Thank you for the reply. The extension document is pretty good. The 
configuration instructions, on the other hand, need some help, and I had to 
combine information from multiple sources to get OS-EP-FILTER filtering up and 
running.  Following are the final steps that I used.

Mark

---

To enable the endpoint filter extension:

1. Add the new filter driver to the [catalog] section of keystone.conf.

Example:
[catalog]
driver = 
keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCatalog

2. Add the new [endpoint_filter] section to ``keystone.conf``.

Example:

 [endpoint_filter]
# extension for creating associations between project and endpoints in order to 
# provide a tailored catalog for project-scoped token requests.
driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
# return_all_endpoints_if_no_filter = True

Optional: uncomment and set ``return_all_endpoints_if_no_filter``.

3. Add the ``endpoint_filter_extension`` filter to the ``api_v3`` pipeline in 
``keystone-paste.ini``.

Example:

[filter:endpoint_filter_extension]
paste.filter_factory = 
keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.factory

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth 
xml_body json_body ec2_extension s3_extension endpoint_filter_extension 
service_v3

4. Create the endpoint filter extension tables if using the provided sql 
backend.

Example::
./bin/keystone-manage db_sync --extension endpoint_filter

5. Once you have made the changes, restart the Keystone server to apply them.
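
For illustration, here is a minimal sketch of exercising the extension once
it is enabled, using python-requests against the v3 admin endpoint. The
host, token, and IDs below are placeholders; the resource paths follow the
extension document:

import json
import requests

KEYSTONE = 'http://identity:35357/v3'   # placeholder host
HEADERS = {'X-Auth-Token': 'ADMIN'}     # placeholder admin token
project_id = 'PROJECT_ID'               # placeholder project ID
endpoint_id = 'ENDPOINT_ID'             # placeholder endpoint ID

# Associate an endpoint with a project so that project-scoped tokens
# receive a catalog filtered down to the associated endpoints.
url = '%s/OS-EP-FILTER/projects/%s/endpoints/%s' % (
    KEYSTONE, project_id, endpoint_id)
resp = requests.put(url, headers=HEADERS)
print(resp.status_code)  # expect 204 No Content on success

# List the endpoints now associated with the project.
resp = requests.get('%s/OS-EP-FILTER/projects/%s/endpoints'
                    % (KEYSTONE, project_id), headers=HEADERS)
print(json.dumps(resp.json(), indent=2))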




 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: Wednesday, October 09, 2013 1:35 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy
 
 We have improved the extension enumeration in Keystone.  If you go to
 http://hostname:35357/v3 you should see a listing of the extensions that are
 enabled for that Keystone server.
 
 
 On 10/08/2013 07:07 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis)
 wrote:
  Sorry to send this out again, but I wrote too soon. I was missing one driver
 entry in keystone.conf. Here are my working settings:
 
  File keystone.conf:
 
  [catalog]
  # dynamic, sql-based backend (supports API/CLI-based management
  commands) #driver = keystone.catalog.backends.sql.Catalog
  driver =
  keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCa
  talog
 
  # static, file-based backend (does *NOT* support any management
  commands) # driver =
  keystone.catalog.backends.templated.TemplatedCatalog
 
  template_file = default_catalog.templates
 
  [endpoint_filter]
  # extension for creating associations between project and endpoints in
  order to # provide a tailored catalog for project-scoped token requests.
  driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
  return_all_endpoints_if_no_filter = False
 
 
  File keystone-paste.ini:
 
  [filter:endpoint_filter_extension]
  paste.filter_factory =
  keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.facto
  ry
 
  and
 
  [pipeline:api_v3]
  pipeline = access_log sizelimit url_normalize token_auth
  admin_token_auth xml_body json_body ec2_extension s3_extension
  oauth1_extension endpoint_filter_extension service_v3
 
 
 
 
  -Original Message-
  From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
  Sent: Tuesday, October 08, 2013 1:51 PM
  To: OpenStack Development Mailing List
  Subject: Re: [openstack-dev] Keystone OS-EP-FILTER 

Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Sumit Naiksatam
Hi Harshad,

I agree with you that the service instance terminology might be a little
confusing here. The way it was phrased in the original email, I believe it
was meant to suggest an association with the corresponding Neutron logical
service (the XaaS to be precise).

That said (and to your point on service templates, which I do agree is a
helpful feature), we are not trying to introduce new tenant-facing
abstractions in this blueprint. The work in this blueprint was envisioned
to be a library module that will help service plugins to realize the
service on VMs.

Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 4:16 PM, Harshad Nakil hna...@contrailsystems.com wrote:

 Hello Greg,

 The blueprint you have put together is very much in line with what we have
  done in the openContrail virtual services implementation.

  One thing that we have done is that a Service instance is a single type of
  service provided by a virtual appliance,
  e.g. firewall or load-balancer etc.
  A Service instance can itself be made up of one or more virtual machines.
  This will usually be the case when you need to scale out services for
  performance reasons.

  Another thing that we have done is introduced the concept of a service
  template. A service template describes how the service can be deployed. The
  image specified in the template can also be a snapshot of a VM with
  cookie-cutter configuration.

  Service templates can be created by admins. Service instances are created
  by tenants (if allowed) using a service template.

  So a single firewall instance from a vendor can be packaged as a transparent
  L2 firewall in one template and as an in-network L3 firewall in another
  template.

 Regards
 -Harshad



 On Tue, Oct 8, 2013 at 2:48 PM, Regnier, Greg J 
 greg.j.regn...@intel.com wrote:



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Sumit Naiksatam
Thanks Bob, I agree this is an important aspect of the implementation.
However, apart from being able to specify which network(s) the VM has
interfaces on, what more needs to be done specifically in the proposed
library to achieve the tenant level isolation?

Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 11:34 PM, Bob Melander (bmelande) bmela...@cisco.com
 wrote:

  For use case 2, the ability to pin an admin/operator-owned VM to a
 particular tenant can be useful.
 I.e., the service VMs are owned by the operator but a particular service
 VM will only allow service instances from a single tenant.

  Thanks,
 Bob

   From: Regnier, Greg J greg.j.regn...@intel.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: tisdag 8 oktober 2013 23:48
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [Neutron] Service VM discussion - Use Cases



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Caitlin Bestler

On 10/9/2013 12:55 PM, Joshua Harlow wrote:

Your example sounds a lot like what taskflow is built for doing.



I'm not that familiar with Heat, so I wanted to bounce this off of
you before doing a public foot-in-mouth on the mailing list.

Is the real issue here the difference between *building* a set of
servers (Heat) versus performing a specific *task* on a set of servers
that was already built (taskflow)?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Rudra Rugge
Hi Sumit,

I also got confused by the service VM and service instance definitions. I
assumed both were the same, hence the networks question.

Rudra

On Oct 9, 2013, at 1:54 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

Hi Rudra,

We tried to separate policy from mechanism for this blueprint, and are trying 
to address the latter. I believe the logic for scaling and/or clustering 
multiple service VMs to map to a logical service instance would lie in the 
service plugin which realizes the logical service instance.

Can you please elaborate on the service/use-case where you would need to plug 
the VM into different networks?

Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 3:48 PM, Rudra Rugge rru...@juniper.net wrote:
Hi Greg,

Is there any discussion so far on the scaling of VMs as in launching multiple 
VMs
for the same service. It would also have impact on the VIF scheme.

How can we plug these services into different networks - is that still being 
worked
on?

Thanks,
Rudra

On Oct 8, 2013, at 2:48 PM, Regnier, Greg J greg.j.regn...@intel.com wrote:


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Rudra Rugge
Hi Sumit,

Please see inline.


On Oct 9, 2013, at 2:03 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

Hi Harshad,

I agree with you that the service instance terminology might be a little 
confusing here. The way it was phrased in the original email, I believe it was 
meant to suggest an association with the corresponding Neutron logical service 
(the XaaS to be precise).

That said (and to your point on service templates, which I do agree is a 
helpful feature), we are not trying to introduce new tenant-facing abstractions 
in this blueprint. The work in this blueprint was envisioned to be a library 
module that will help service plugins to realize the service on VMs.

[Rudra] How do we handle interdependency between services within a service VM?
Since services are usually chained in the same order in most deployments
(inbound firewall/VPN, LB, gateway, outbound FW), it would be better if the
template specified most of this information. Services in the pipeline may be
turned on or off.

Based on the blueprint we already have the insertion modes: L2, routed, tap,
etc. The interface count and interface connections to networks should be
specified here. In addition, if a service plugin needs scaling, it is not
convenient for the plugin to launch another service VM and manage the
networking aspects.

Hence a template model can contain most of the information (image info,
services offered, service ordering, interface count and names, scaling,
insertion mode, etc.); see the sketch below. Launching of service VMs
(containing services) is then associated with the template definition.
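
For illustration only, a hypothetical template structure along these lines
(every field name here is invented for the sketch, not taken from the
blueprint):

# Hypothetical service template sketch; all field names are illustrative.
service_template = {
    'name': 'edge-services-v1',
    'image': 'vendor-appliance-snapshot',   # may be a cookie-cutter VM snapshot
    'services': ['vpn', 'firewall', 'lb'],  # services offered
    'service_ordering': ['vpn', 'firewall', 'lb'],  # pipeline order
    'insertion_mode': 'routed',             # L2 | routed | tap
    'interfaces': [
        {'name': 'inside', 'role': 'data'},
        {'name': 'outside', 'role': 'data'},
        {'name': 'mgmt', 'role': 'service-management'},
    ],
    'scaling': {'min_instances': 1, 'max_instances': 4},
}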

Thanks,
Rudra


Thanks,
~Sumit.


On Tue, Oct 8, 2013 at 4:16 PM, Harshad Nakil hna...@contrailsystems.com wrote:

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-09 Thread Regnier, Greg J
Hi,

The original use cases I called out include multiple service instances within a 
single VM, but not your use case of a single logical service spread across 
multiple VMs for scale-out.  Have you identified requirements for these VMs 
that might be specified within the scope of this blueprint?

I agree the terminology can be confusing.
 I intended the term 'Service VM' to mean the virtual machine that hosts one or 
more 'Service Instances', which, as Sumit points out, is distinguished from the 
Neutron Logical (XaaS) Service.  So a Neutron Logical Service may schedule a 
Service Instance on a new (or existing) Service VM.

Greg




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2013-10-09 Thread Chris Behrens
Hi all,

I'd like to announce my candidacy for a seat on the OpenStack
Technical Committee.

- General background -

I have over 15 years of experience designing and building distributed
systems.  I am currently a Principal Engineer at Rackspace, where
I have been for a little over 3 years now.  Most of my time at
Rackspace has been spent working on OpenStack as both a developer
and a technical leader.  My first week at Rackspace was spent at
the very first OpenStack Design Summit in Austin where the project
was announced.

Prior to working at Rackspace, I held various roles over 14 years
at Concentric Network Corporation/XO Communications including Senior
Software Architect and eventually Director of Engineering.  My main
focus there was on an award winning web/email hosting platform which
we'd built to be extremely scalable and fault tolerant.  While my
name is not on this patent, I was heavily involved with the development
and design that led to US6611861.

- Why am I interested? -

This is my 3rd time running and I don't want to be considered a failure!

But seriously, as I have mentioned in the past, I have strong
feelings for OpenStack and I want to help as much as possible to
take it to the next level.  I have a lot of technical knowledge and
experience building scalable distributed systems.  I would like to
use this knowledge for good, not evil.

- OpenStack contributions -

As I mentioned above, I was at the very first design summit, so
I've been involved with the project from the beginning.  I started
the initial work for nova-scheduler shortly after the project was
opened.  I also implemented the RPC support for kombu, making sure
to properly support reconnecting and so forth which didn't work
quite so well with the carrot code.  I've contributed a number of
improvements designed to make nova-api more performant.  I've worked
on the filter scheduler as well as designing and implementing the
first version of the Zones replacement that we named 'Cells'.  And
most recently, I was involved in the design and implementation of
the unified objects code in nova.

During Icehouse, I'm hoping to focus on performance and stabilization
while also helping to finish objects conversion.

- Summary -

I feel my years of experience contributing to and leading large scale
technical projects along with my knowledge of the OpenStack projects
will provide a good foundation for technical leadership.

Thanks,

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Lakshminaraya Renganarayana

Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote on 10/09/2013
03:37:01 PM:

 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
 Date: 10/09/2013 03:41 PM
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows


 Thank you for bringing your use case and your thoughts here. That is
 exactly what we tried to achieve in the Murano project.
 There are important aspects you highlighted. Sometimes the resource
 model is too high-level to describe the deployment process. If you start
 to use a more granular approach with defined steps of deployment,
 you will end up with a workflow approach where you have fine control
 of the deployment process, but the description will be quite complex.

IMHO workflow approaches tend to be heavy-weight. So, I am hoping
for more light-weight data-flow constructs and mechanisms that
can help with the coordination scenarios I have outlined. Data-flow
constructs and mechanisms have had lots of success in other domains,
and I am wondering why we (the Heat community) can't leverage the related
theory and tools!

Thanks,
LN

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Joshua Harlow
Thanks for the clarification.

I'm not sure how much of heat has to change to support what you are aiming for. 
Maybe heat should use taskflow ;)

From: Lakshminaraya Renganarayana lren...@us.ibm.com
Date: Wednesday, October 9, 2013 4:34 PM
To: Joshua Harlow harlo...@yahoo-inc.com
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] HOT Software orchestration proposal for 
workflows


Hi Joshua,

I agree that there is an element of taskflow in what I described. But, I am 
aiming for something much more lightweight which can be naturally blended with 
HOT constructs and Heat engine. To be a bit more specific, Heat already has 
dependencies and coordination mechanisms. So, I am aiming for may be just one 
additional construct in Heat/HOT and some logic in Heat that would support 
coordination.

Thanks,
LN

_
Lakshminarayanan Renganarayana
Research Staff Member
IBM T.J. Watson Research Center
http://researcher.ibm.com/person/us-lrengan


Joshua Harlow harlo...@yahoo-inc.com wrote on 
10/09/2013 03:55:00 PM:

 From: Joshua Harlow harlo...@yahoo-inc.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Lakshminaraya Renganarayana/Watson/IBM@IBMUS
 Date: 10/09/2013 03:55 PM
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 Your example sounds a lot like what taskflow is built for doing.

 https://github.com/stackforge/taskflow/blob/master/taskflow/
 examples/calculate_in_parallel.py is a decent example.

 In that one, tasks are created and input/output dependencies are
 specified (provides, rebind, and the execute function arguments itself).

 This is combined into the taskflow concept of a flow, one of those
 flows types is a dependency graph.

 Using a parallel engine (similar in concept to a heat engine) we can
 run all non-dependent tasks in parallel.

 An example that I just created that shows this (and shows it
 running) that closer matches your example.

 Program (this will work against the current taskflow codebase):
 http://paste.openstack.org/show/48156/

 Output @ http://paste.openstack.org/show/48157/

 -Josh

 From: Lakshminaraya Renganarayana lren...@us.ibm.com
 Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Date: Wednesday, October 9, 2013 11:31 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
 proposal for workflows

 Steven Hardy sha...@redhat.com wrote on 
 10/09/2013 05:24:38 AM:

 
  So as has already been mentioned, Heat defines an internal workflow, based
  on the declarative model defined in the template.
 
  The model should define dependencies, and Heat should convert those
  dependencies into a workflow internally.  IMO if the user also needs to
  describe a workflow explicitly in the template, then we've probably failed
  to provide the right template interfaces for describing depenendencies.

 I agree with Steven here, models should define the dependencies and Heat
 should realize/enforce them. An important design issue is the granularity at
 which dependencies are defined and enforced. I am aware of the wait-condition
 and signal constructs in Heat, but I find them a bit low-level, as they are
 prone to the classic deadlock and race condition problems.  I would like to
 have higher-level constructs that support the finer-granularity dependences
 needed for software orchestration. Reading through the various discussions
 on this topic in this mailing list, I see that many would like to have such
 higher-level constructs for coordination.

 In our experience with software orchestration using our own DSL and also with
 some extensions to Heat, we found the granularity of VMs or Resources to be
 too coarse for defining dependencies for software orchestration. For example,
 consider a two-VM app, with VMs vmA, vmB, and a set of software components
 (ai's and bi's) to be installed on them:

 vmA = base-vmA + a1 + a2 + a3
 vmB = base-vmB + b1 + b2 + b3

 let us say that software component b1 of vmB requires a config value
 produced by software component a1 of vmA. How do we declaratively model this
 dependence? Clearly, modeling a dependence between just base-vmA and base-vmB
 is not enough. However, defining a dependence between the whole of vmA and
 vmB is too coarse. It would be ideal to be able to define a dependence at the
 granularity of software components, i.e., vmB.b1 depends on vmA.a1. Of
 course, it would also be good to capture what value is passed between 
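
 (For concreteness, a minimal taskflow-style sketch of the vmB.b1-on-vmA.a1
 data-flow dependence, assuming graph_flow links tasks via their
 provides/requires symbols; all names are illustrative:)

 from taskflow import engines, task
 from taskflow.patterns import graph_flow

 class InstallA1(task.Task):
     default_provides = 'a1_config'  # the config value a1 produces

     def execute(self):
         # ... install a1 on vmA, then publish its config value ...
         return 'db://10.0.0.5:3306'

 class InstallB1(task.Task):
     def execute(self, a1_config):
         # b1 on vmB is configured with the value produced by a1; the
         # engine will not run this task until a1_config is available.
         print('configuring b1 with %s' % a1_config)

 flow = graph_flow.Flow('two-vm-app').add(InstallA1(), InstallB1())
 engines.run(flow)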

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Lakshminaraya Renganarayana


Stan Lagun sla...@mirantis.com wrote on 10/09/2013 04:07:33 PM:

It seems to me that something is missing in our discussion.

 If something depends on something else there must be a definition of
 that something. It is clear that it is not the case that one
 instance depends on another, but that one application depends on another
 application. But there is no such thing as an application (service,
 whatever) in HOT templates. Only low-level resources. And even
 resources cannot be grouped into some application scope because
 a typical HOT template has resources that are shared between several
 applications (network, security groups etc.). It is also possible to
 have several applications sharing a single VM instance. That brings
 us to the conclusion that applications and resources cannot be mixed
 in the same template on the same level of abstraction.

Good point on the levels of abstraction.

 Now suppose we did somehow establish the dependency between two
 applications. But this dependency is out of the scope of a particular HOT
 template. That's because the HOT template says what the user wishes to
 install. But a dependency between applications is an attribute of the
 applications themselves, not the particular deployment. For example,
 WordPress requires a database. It always does. Not that it requires it
 within this particular template, but as a universal rule. In Murano we
 call it data vs. metadata separation. If there is metadata that
 says WordPress requires a DB, then not only do you not have to
 repeat it in each template, you cannot even ask the system to
 deploy WordPress without a DB.

I think the kind of dependency you have outlined above is more about the
software component requirements of an application. These kinds of
semantic dependencies are important and are probably outside the
scope of Heat.  The kind of dependencies I referred to are of the
nature of data-flow between software components: for example, a
tomcat application server needs (and hence, depends on) the
DB's username/password to set up its configuration. How do we
model such a data-flow dependence and how do we facilitate the
communication of such values from the DB to the tomcat component?
IMHO, such questions are related to Heat.

 So the question is maybe we need to think about applications/
 services and their metadata before going into workflow
 orchestration? Otherwise the whole orchestration would be reinvented
 time and time again with each new HOT template.

 What are your thoughts on this?

I find your separation of metadata vs. data useful. In my opinion,
the kind of metadata you are trying to capture would be best
modeled by a DSL that sits on top of HOT/Heat.

Thanks,
LN




 On Wed, Oct 9, 2013 at 11:37 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:
 Hi Lakshminaraya,

 Thank you for bringing your use case and your thoughts here. That is
 exactly what we tried to achieve in the Murano project.
 There are important aspects you highlighted. Sometimes the resource
 model is too high-level to describe the deployment process. If you start
 to use a more granular approach with defined steps of deployment,
 you will end up with a workflow approach where you have fine control
 of the deployment process, but the description will be quite complex.

 I think the HOT approach is to provide a simple way to describe your
 deployment, which consists of solid bricks (resources). If you are
 using standard resources you can easily create a simple HOT template
 for your deployment. If you need some custom resource you basically
 have two options - create a new resource class and hide all complexity
 inside the code, or use some workflow language to describe all the steps
 required. The first approach is currently supported by Heat. We have
 experience creating new custom resources for orchestrating
 deployment to specific IT infrastructure with specific hardware and
 software.

 Right now we are trying to figure out the possibility of adding
 workflows to HOT. It looks like adding a workflow language directly
 might harm HOT's simplicity by overloading the DSL syntax and structures.

 I actually see the value in Steve's idea to have a specific resource
 or resource set to call workflow execution on an external engine. In
 this case the HOT template will still be pretty simple, as all workflow
 details will be hidden but still manageable without writing code.

 Thanks
 Gosha


 On Wed, Oct 9, 2013 at 11:31 AM, Lakshminaraya Renganarayana 
 lren...@us.ibm.com wrote:
 Steven Hardy sha...@redhat.com wrote on 10/09/2013 05:24:38 AM:


 
  So as has already been mentioned, Heat defines an internal workflow,
based
  on the declarative model defined in the template.
 
  The model should define dependencies, and Heat should convert those
  dependencies into a workflow internally.  IMO if the user also needs to
  describe a workflow explicitly in the template, then we've probably
failed
  to provide the right template interfaces for describing depenendencies.

 I agree with Steven here, models should define 

[openstack-dev] Etherpad Upgrade

2013-10-09 Thread Clark Boylan
In an effort to better support the upcoming design summit the Infra
team will be upgrading etherpad.openstack.org at 1600 UTC on Sunday,
October 13. There will be a short time period where etherpads are
inaccessible as we update DNS records and replicate databases. You
might also need to clear your browser cache if you try to access
etherpads before the etherpad TTLs have expired.

Now for technical details. We are moving etherpad to a new cloud
server. The new server comes with a newer version of node.js and an
up to date etherpad-lite and is using a database provided by Trove.
A new etherpad-dev.openstack.org has also been spun up to provide
a platform for load testing (and other etherpad items that folks may want
to test). Using this server we have identified some scale problems in the
old setup that should be corrected on the new server.
Making this move now should give us plenty of time to make sure the
service is ready for the summit.

If you have any questions let us know. Thanks,

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow meeting at 2000 UTC

2013-10-09 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays at 2000 UTC. The next meeting is tomorrow, 
2013-10-10!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Resumption is in, celebrate!
-   0.1 release soon? Any blockers for this?
- Discuss icehouse integration strategies (who is interested in doing what)
- Continue discussion on HK summit
-   https://etherpad.openstack.org/TaskflowHKIdeas
- Brainstorm flow control strategies
-   https://etherpad.openstack.org/BrainstormFlowConditions
- Discuss about any other ideas, problems, open-reviews, issues, solutions, 
questions (and more).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-09 Thread Angus Salkeld

On 09/10/13 19:31 +0100, Steven Hardy wrote:

On Wed, Oct 09, 2013 at 06:59:22PM +0200, Alex Rudenko wrote:

Hi everyone,

I've read this thread and I'd like to share some thoughts. In my opinion,
workflows (which run on VMs) can be integrated with heat templates as
follows:

   1. workflow definitions should be defined separately and processed by
   stand-alone workflow engines (chef, puppet etc).


I agree, and I think this is the direction we're headed with the
software-config blueprints - essentially we should end up with some new
Heat *resources* which encapsulate software configuration.


Exactly.

I think we need a software-configuration-as-a-service sub-project that knows
how to take puppet/chef/salt/... config and deploy it. Then Heat just
has Resources for these (OS::SoftwareConfig::Puppet). 

We should even move our WaitConditions and Metadata over to that 
yet-to-be-made service so that Heat is totally clean of software config.


How would this solve ordering issues:

resources:
 config1:
   type: OS::SoftwareConfig::Puppet
   hosted_on: server1
   ...
 config2:
   type: OS::SoftwareConfig::Puppet
   hosted_on: server1
   depends_on: config3
   ...
 config3:
   type: OS::SoftwareConfig::Puppet
   hosted_on: server2
   depends_on: config1
   ...
 server1:
   type: OS::Nova::Server
   ...
 server2:
   type: OS::Nova::Server
   ...


Heat knows all about ordering:
It starts the resources:
server1, server2
config1
config3
config2

There is the normal contract in the client:
we post the config to the software-config service
and we wait for the state == ACTIVE (when the config is applied)
before progressing to a resource that is dependent on it.
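
(A rough client-side sketch of that contract; the software-config client
and its get_config() call are entirely hypothetical stand-ins for the
yet-to-be-made service:)

import time

def wait_for_config_active(client, config_id, timeout=600, interval=5):
    # Block until the posted config is applied (state == ACTIVE).
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = client.get_config(config_id)['state']  # assumed API
        if state == 'ACTIVE':
            return
        if state == 'FAILED':
            raise RuntimeError('config %s failed to apply' % config_id)
        time.sleep(interval)
    raise RuntimeError('timed out waiting on config %s' % config_id)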

-Angus



IMO there is some confusion around the scope of HOT; we should not be
adding functionality to it which already exists in established config
management tools. Instead we should focus on better integration with
existing tools at the resource level, and on identifying template interfaces
which require more flexibility (for example serialization primitives)


   2. the HOT resources should reference workflows which they require,
   specifying a type of workflow and the way to access a workflow definition.
   The workflow definition might be provided along with HOT.


So again, I think this actually has very little to do with HOT.  The
*Heat* resources may define software configuration, or possibly some sort
of workflow, which is acted upon by $thing which is not Heat.

So in the example provided by the OP, maybe you'd have a Murano resource,
which knows how to define the input to the Murano API, which might trigger
workflow-type actions to happen in the Murano service.


   3. Heat should treat the orchestration templates as transactions (i.e.
   Heat should be able to rollback in two cases: 1) if something goes wrong
   during processing of an orchestration workflow 2) when a stand-alone
   workflow engine reports an error during processing of a workflow associated
   with a resource)


So we already have the capability for resources to receive signals, which
would allow (2) in the asynchronous case.  But it seems to me that this is
still a serialization problem, i.e. a synchronous case, therefore (2) is just
part of (1).

E.g

- Heat stack create starts
- Murano resource created (CREATE IN_PROGRESS state)
- Murano workflow stuff happens, signals Heat with success/failure
- Murano resource transitions to either COMPLETE or FAILED state
- If a FAILED state happened, e.g on update, we can roll back to the
 previous stack definition (this is already possible in Heat)


   4. Heat should expose an API which enables basic communication between
   running workflows. Additionally, Heat should provide an API to workflows
   that allows workflows to specify whether they completed successfully or
   not. The reference to these APIs should be passed to the workflow engine
   that is responsible for executing workflows on VMs.


I personally don't think this is in scope for Heat.  We already have an API
which exposes the status of stacks and resources.  Exposing some different
API which describes a workflow implemented by a specific subset of resource
types makes no sense to me.



Pros of each point:
1  2 - keeps Heat simple and gives a possibility to choose the best
workflows and engines among available ones.
3 - adds some kind of all-or-nothing semantics improving the control and
awareness of what's going on inside VMs.
4 - allows workflow synchronization and communication through Heat API.
Provides the error reporting mechanism for workflows. If a workflow does
not need this functionality, it can ignore it.


IMHO (4) is very much a step too far, and is not well aligned with the
current interfaces provided by Heat.

I'm really keen to further discuss the use-cases here, but if possible, it
would be helpful if folks can describe their requirements in less abstract
terms, and with reference to our existing interfaces and template model.


These thoughts might show some 

Re: [openstack-dev] addCleanup vs. tearDown

2013-10-09 Thread Lingxian Kong
+1. Thanks Monty, very good clarification and suggestion!


2013/10/9 Nachi Ueno na...@ntti3.com

 +1

 2013/10/8 Monty Taylor mord...@inaugust.com:
  Hey!
 
  Got a question on IRC which seemed fair game for a quick mailing list
 post:
 
  Q: I see both addCleanup and tearDown in nova's test suite - which one
  should I use for new code?
 
  A: addCleanup
 
  All new code should 100% of the time use addCleanup and not tearDown -
  this is because addCleanups are all guaranteed to run, even if one of
  them fails, whereas a failure inside of a tearDown can leave the rest of
  the tearDown un-executed, which can leave stale state lying around.
 
  Eventually, as we get to it, tearDown should be 100% eradicated from
  OpenStack. However, we don't really need more patch churn, so I
  recommend only working on it as you happen to be in related code.
 
  Thanks!
  Monty
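
(A minimal, self-contained sketch of the pattern for anyone unfamiliar with
it; each cleanup registered with addCleanup runs even if an earlier one
fails:)

import shutil
import tempfile
import unittest

class ExampleTest(unittest.TestCase):
    def setUp(self):
        super(ExampleTest, self).setUp()
        self.tmpdir = tempfile.mkdtemp()
        # Register the cleanup immediately after acquiring the resource,
        # so it is guaranteed to run no matter what happens later.
        self.addCleanup(shutil.rmtree, self.tmpdir, ignore_errors=True)

    def test_something(self):
        self.assertTrue(self.tmpdir)

if __name__ == '__main__':
    unittest.main()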
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
**
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2013-10-09 Thread Robert Collins
I'm interested in serving on the OpenStack TC.

# About me

I've been working on OpenStack for only a year now, since joining
Monty's merry gang of reprobates <wink/> at HP. However I've been
entirely focused on networking and distributed systems since ~2000 -
having as highlights -core membership in the squid HTTP cache team,
one of the founders of the Bazaar DVCS project, and a huge mix of
testing and development efficiency thrown into the mix :). Earlier
this year I was privileged to become a Python Software Foundation
member, and I'm keen to see us collaborating more with upstream,
particularly around testing.

I live in New Zealand, giving me overlap with the US and with a lot of
Asia, but talking with Europe requires planning :)

# Platform

At the recent TripleO sprint in Seattle I was told I should apply for
the TC; after some soul searching, I think yes, I should :).

Three key things occurred to me:

All of our joint hard work to develop OpenStack is wasted if users
can't actually obtain and deploy it. This is why we're working on
making deployment a systematic, rigorous and repeatable upstream
activity: we need to know as part of the CI gate that what we're
developing is usable, in real world scenarios. This is a
multi-component problem: we can't bolt 'be deployable' on after all
the code is written: and that's why during the last two cycles I've
been talking about the problems deploying from trunk at the summits,
and will continue to do so. This cross-program, cross-project effort
ties into the core of what we do, and it's imperative we have folk on
the TC that are actually deploying OpenStack (TripleO is running a
live cloud - https://wiki.openstack.org/wiki/TripleO/TripleOCloud - all
TripleO devs are helping deploy a production cloud).

I have a -lot- of testing experience, and ongoing unit and functional
testing evolution will continue to play a significant role in
OpenStack quality; the TC can help advise across all projects about
automated testing; I'd be delighted to assist with that.

Finally, and I'm going to quote Monty here: As a TC member, I will
place OpenStack's interests over the interests of any individual
project if a conflict between the project and OpenStack, or a project
with another project should arise. - I think this is a key attitude
we should all hold: we're building an industry changing platform, and
we need to think of the success of the whole platform as being *the*
primary thing to aim for.

Thank you for your consideration,
Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] what is the code organization of nova

2013-10-09 Thread Noorul Islam K M
Aparna Datt aparna.cl...@gmail.com writes:

 Hi, I was going through the code of nova on GitHub, but there are no README
  files available regarding the code organization of nova. Can anyone provide me
  with a link from where I can begin reading the code? Or can anyone give me
  pointers on which files/folders nova begins its processing from?


Apart from what others pointed, I suggest to take a look at this.

http://www.youtube.com/watch?v=P4OU-rOQ4as

Thanks and Regards
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-09 Thread Mike Spreitzer
Yes, there is more than the northbound API to discuss.  Gary started us 
there in the Scheduler chat on Oct 1, when he broke the issues down like 
this:

11:12:22 AM garyk: 1. a user facing API
11:12:41 AM garyk: 2. understanding which resources need to be tracked
11:12:48 AM garyk: 3. backend implementation

The full transcript is at 
http://eavesdrop.openstack.org/meetings/scheduling/2013/scheduling.2013-10-01-15.08.log.html

Alex Glikson glik...@il.ibm.com wrote on 10/09/2013 02:14:03 AM:
 
 Good summary. I would also add that in A1 the schedulers (e.g., in 
 Nova and Cinder) could talk to each other to coordinate. Besides 
 defining the policy, and the user-facing APIs, I think we should 
 also outline those cross-component APIs (need to think whether they 
 have to be user-visible, or can be admin). 
 
 Regards, 
 Alex 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-09 Thread Mike Spreitzer
Debojyoti Dutta ddu...@gmail.com wrote on 10/09/2013 02:48:26 AM:

 Mike, I agree we could have a cleaner API but I am not sure how
 cleanly it will integrate with current nova, which IMO should be the test
 we should pass (assuming we do cross services later)

I think the cleaner APIs integrate with Nova as well as the three-phase 
API you suggested.  Am I missing some obvious impediment?

 ...
  To me the most frustrating aspect of this challenge is the need for 
the
  client to directly mediate the dependencies between resources; this is
  really what is driving us to do ugly things.  As I mentioned before, I 
am
  coming from a setting that does not have this problem.  So I am 
thinking
  about two alternatives: (A1) how clean can we make a system in which 
the
  client continues to directly mediate dependencies between resources, 
and
  (A2) how easily and cleanly can we make that problem go away.
 
 Am a little confused - How is the API dictating either A1 or A2? Isn't
 that a function of the implementation of the API?  For a moment let us
 assume that the black box implementation will be awesome and address
 your concerns.

I am talking about the client/service interface; it is not (just) a 
matter of service implementation.

My complaint is that the software orchestration technique commonly used 
prevents us from having a one-phase API for holistic infrastructure 
scheduling.  The commonly used software orchestration technique requires 
some serialization of the resource creation calls.  For example, if one VM 
instance runs a database and another VM instance runs a web server that 
needs to be configured with the private IP address of the database, the 
common technique is for the client to first create the database VM 
instance, then take the private IP address from that VM instance and use 
it to compose the userdata that is passed in the Nova call that creates 
the web server VM instance.  That client cannot present all at once a 
fully concrete and literal specification of both VM instances, because the 
userdata for one is not knowable until the other has been created.  The 
client has to be able to make create-like calls in some particular order 
rather than ask for all creation at once.  If the client could ask for all 
creation at once then we could use a one-phase API: it simply takes a 
specification of the resources along with their policies and 
relationships.
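
(To make the serialization concrete, a hedged sketch of that client-mediated
flow with 2013-era python-novaclient; the credentials, image/flavor IDs, and
polling helper are placeholders:)

import time
from novaclient.v1_1 import client

# All credential and ID values below are hypothetical placeholders.
nova = client.Client('user', 'password', 'tenant',
                     'http://keystone:5000/v2.0')

def wait_for_active(server):
    # Naive polling helper, illustrative only.
    while server.status != 'ACTIVE':
        time.sleep(5)
        server = nova.servers.get(server.id)
    return server

db_vm = wait_for_active(nova.servers.create('db', 'IMAGE_ID', 'FLAVOR_ID'))
db_ip = db_vm.networks['private'][0]   # private IP of the database

# The web server's userdata cannot be composed until db_ip is known,
# which is exactly what forces this second call to come after the first.
web_vm = nova.servers.create('web', 'IMAGE_ID', 'FLAVOR_ID',
                             userdata='DB_HOST=%s\n' % db_ip)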

Of course, there is another way out.  We do have in OpenStack a technology 
by which a client can present all at once a specification of many VM 
instances where the userdata of some depend on the results of creating 
others.  If we were willing to use this technology, we could follow A2. 
The CREATE flow would go like this: (i) the client presents the 
specification of resources (including the computations that link some), 
with grouping, relationships, and policies, to our new API; (ii) our new 
service registers the new topology and (once we advance this far on the 
development roadmap) does holistic scheduling; (iii) our new service 
updates the resource specifications to include pointers into the policy 
data; (iv) our new service passes the enhanced resource specifications to 
that other service that can do the creation calls linked by the prescribed 
computations; (v) that other service does its thing, causing a series 
(maybe with some allowed parallelism) of creation calls, each augmented by 
the relevant pointer into the policy information; (vi) the service 
implementing a creation call gets what it normally does plus the policy 
pointer, which it follows to get the relevant policy information (at the 
first step in the development roadmap) or the scheduling decision (in the 
second step of the development roadmap).  But I am getting ahead of myself 
here and discussing backend implementation; I think we are still working 
on the user-facing API.

 The question is this - does the current API help
 specify what we want, assuming we will be able to extend the notion of
 nodes, edges, policies and metadata?

I am not sure I understand that remark.  Of course the API you proposed is 
about enabling the client to express the policy information that we both 
advocate.  I am not sure I understand why you add the qualifier of 
"assuming we will be able to extend the notion of ...".  I do not think we 
(yet) have a policy type catalog set in stone, if that is the concern.  I 
think there is an interesting discussion to have about defining that 
catalog.

BTW, note that the class you called InstanceGroupPolicy is not just a 
reference to a policy, it also specifies one place where that policy is 
being applied.  That is really the class of policy applications (or 
uses).

I think some types of policies have parameters.  A relationship policy 
about limiting the number of network hops takes a parameter that is the 
hop count limit.  A policy about anti-collocation takes a physical 
hierarchy level as a parameter, to put a lower