[openstack-dev] Re-initializing or dynamically configuring cinder driver

2013-12-13 Thread iKhan
Hi All,

At present cinder driver can be only configured with adding entries in conf
file. Once these driver related entries are modified or added in conf file,
we need to restart cinder-volume service to validate the conf entries and
create a child process that runs in background.

I am thinking of a way to re-initialize or dynamically configure cinder
driver. So that I can accept the configuration from user on fly and perform
operations. I think solution lies somewhere around "oslo.config.cfg", but I
am still unclear about how re-initializing can be achieved.

Let know if anyone here is aware of any approach to re-initialize or
dynamically configure a driver.

-- 
Thanks,
IK
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-12-13 Thread Krishna Raman
On Dec 13, 2013 12:20 PM, "Eric Windisch"  wrote:
>
> On Fri, Dec 13, 2013 at 1:19 PM, Chuck Short 
wrote:
>>
>> Hi,
>>
>> I have definitely seen a drop off in the proposed Container-Service API
discussion
>
>
> There was only one action item from the meeting, which was a compilation
of use-cases from Krishna.
>
> Krishna, have you made progress on the use-cases? Is there a wiki page?

I have some and will post them along with a new doodle poll for another
meeting soon.

Thanks for the reminder :)

-kr

>
> Regards,
> Eric Windisch
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Tzu-Mainn Chen
> Thanks Mainn, comments inline :)
> 
> On Fri, 2013-12-13 at 19:31 -0500, Tzu-Mainn Chen wrote:
> > Thanks for the reply!  Let me try and address one particular section for
> > now,
> > since it seems to be the part causing the most confusion:
> > 
> > > >  * SERVICE CLASS - a further categorization within a service role
> > > >  for a
> > > >  particular deployment.
> > > > 
> > > >   * NODE PROFILE - a set of requirements that specify what
> > > >   attributes a node must have in order to be mapped to
> > > >a service class
> > > 
> > > I think I still need some more information about the above two. They
> > > sound vaguely like Cobbler's system profiles... :)
> > 
> > I admit that this concept was a bit fuzzy for me as well, but after a few
> > IRC
> > chats, I think I have a better handle on this.  Let me begin with my
> > understanding of what
> > happens in Heat, using Heat terminology:
> > 
> > A Heat stack template defines RESOURCES.  When a STACK is deployed using
> > that template,
> > the resource information in the template is used to instantiate an INSTANCE
> > of that
> > resource on a NODE.  Heat can pass a FLAVOR (flavors?) to nova-scheduler in
> > order to
> > filter for appropriate nodes.
> > 
> > So: based on that explanation, here's what Tuskar has been calling the
> > above:
> > 
> > HEAT TERM == TUSKAR TERM
> > 
> > NODE == NODE
> > STACK == DEPLOYMENT
> > INSTANCE == INSTANCE
> > RESOURCE == SERVICE CLASS (at the very least, it's a one-to-one
> > correspondence)
> > FLAVOR == NODE PROFILE
> > ???== ROLE
> > 
> > The ??? is because ROLE is entirely a Tuskar concept, based on what TripleO
> > views
> > as the fundamental kinds of building blocks for an overcloud: Compute,
> > Controller,
> > Object Storage, Block Storage.  A ROLE essentially categorizes
> > RESOURCES/SERVICE CLASSES;
> > for example, the Control ROLE might contain a control-db resource,
> > control-secure resource,
> > control-api resource, etc.
> 
> So, based on your explanation above, perhaps it makes sense to just
> ditch the concept of roles entirely? Is the concept useful more than
> just being a list of workloads that a node is running?

I think it's still useful from a UI perspective, especially for large 
deployments
with lots of running instances.  Quickly separating out control/compute/storage
instances seems like a good feature.

> > Heat cares only about the RESOURCE and not the ROLE; if the roles were
> > named Foo1, Foo2, Foo3,
> > and Barney, Heat would not care.  Also, if the UI miscategorized, say, the
> > control-db resource
> > under the Block Storage category - Heat would again not care, and the
> > deploy action would work.
> > 
> > From that perspective, I think the above terminology should either *be* the
> > Heat term, or be
> > a word that closely corresponds to the intended purpose.  For example, I
> > think DEPLOYMENT reasonably
> > describes a STACK, but I don't think SERVICE CLASS works for RESOURCE.  I
> > also think ROLE should be
> > RESOURCE CATEGORY, since that seems to me to be the most straightforward
> > description of its purpose.
> 
> I agree with you that either the Tuskar terminology should match the
> Heat terminology, or there should be a good reason for it not to match
> the Heat terminology.
> 
> With regards to "stack" vs. "deployment", perhaps it's best to just
> stick with "stack".
> 
> For "service class", "node profile", and "role", perhaps it may be
> useful to scrap those terms entirely and use a term that the Solum
> project has adopted for describing an application deployment: the
> "plan".
> 
> In Solum-land, the "plan" is simply the instructions for deploying the
> application. In Tuskar-land, the "plan" would simply be the instructions
> for setting up the undercloud.
> 
> So, instead of "SIZE THE ROLES", you would just be defining the "plan"
> in Tuskar.

I'm not against the use of the word "plan" - it's accurate and generic,
which are both pluses. But even if we use that term, we still need to name
the internals of the plan, which would then have several components -
"sizing the roles" is just one step the user needs to perform.  And we still
need terms for the objects within the plan - the resources/service classes and
flavors/node profiles - because the UI and API still need to manipulate them.

Mainn


> Thoughts?
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-13 Thread Robert Collins
On 14 December 2013 09:50, Jay Pipes  wrote:

>
> Is this set in stone? In other words, is it a given that in order to
> create the seed undercloud, that you need to use DIB to do it? Instead
> of an image that is pre-constructed and virsh'd into, what about
> constructing one or more LXC templates, starting a set of LXC containers
> for the various undercloud support services (db, mq, OpenStack services,
> etc), installing those support services using
> config-mgmt-flavor-du-jour? Has this been considered as an option to
> DIB? (sorry if I'm late to the discussion!) :)

Any no-frills spin-up-an-instance technology will work. We use virsh
and full images because that lets folk on Mac and Windows
administrator consoles bootstrap a datacentre without manually
installing a Linux machine Just Because.

I'd be entirely open to any patches needed to make running this via
LXC/Docker etc. Note that you cannot mount iscsi volumes from within
LXC so Nova BareMetal (and Ironic) cannot deploy from within LXC -
you'd need to do some plumbing to permit that to work. [The block
device API needed to mount the SCSI target isn't namespaced...].

As far as building the seed via DIB - we have no alternative codepaths
today, but again, open to patches. The reason we use DIB is because
thats how we build the Golden Images for the undercloud and then the
overcloud, so we get to reuse all the work that goes into that - the
only difference is that rather than using Heat as a metadata source we
 provide a handcrafted JSON file which we insert into the image at
build time. This makes debugging a seed extremely close to debugging a
regular undercloud node (and since the migration path is to scale one
up and then remove it - having them be stamped from the same cloth is
extremely attractive). I'd want to keep that consanginuity I think -
building a seed in a fundamentally different way is more likely than
not going to lead to migration issues.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-13 Thread Isaku Yamahata
On Fri, Dec 06, 2013 at 04:30:17PM +0900,
Maru Newby  wrote:

> 
> On Dec 5, 2013, at 5:21 PM, Isaku Yamahata  wrote:
> 
> > On Wed, Dec 04, 2013 at 12:37:19PM +0900,
> > Maru Newby  wrote:
> > 
> >> In the current architecture, the Neutron service handles RPC and WSGI with 
> >> a single process and is prone to being overloaded such that agent 
> >> heartbeats can be delayed beyond the limit for the agent being declared 
> >> 'down'.  Even if we increased the agent timeout as Yongsheg suggests, 
> >> there is no guarantee that we can accurately detect whether an agent is 
> >> 'live' with the current architecture.  Given that amqp can ensure eventual 
> >> delivery - it is a queue - is sending a notification blind such a bad 
> >> idea?  In the best case the agent isn't really down and can process the 
> >> notification.  In the worst case, the agent really is down but will be 
> >> brought up eventually by a deployment's monitoring solution and process 
> >> the notification when it returns.  What am I missing? 
> >> 
> > 
> > Do you mean overload of neutron server? Not neutron agent.
> > So event agent sends periodic 'live' report, the reports are piled up
> > unprocessed by server.
> > When server sends notification, it considers agent dead wrongly.
> > Not because agent didn't send live reports due to overload of agent.
> > Is this understanding correct?
> 
> Your interpretation is likely correct.  The demands on the service are going 
> to be much higher by virtue of having to field RPC requests from all the 
> agents to interact with the database on their behalf.

Is this strongly indicating thread-starvation. i.e. too much unfair
thread scheduling.
Given that eventlet is cooperative threading, should sleep(0) to 
hogging thread?
-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [keystone] do not approve stable/grizzly changes until you have a working docs job on stable/grizzly

2013-12-13 Thread Sean Dague
I've now pulled out jobs from the gate queue on stable/grizzly for both
heat and keystone. The docs jobs are still broken in stable/grizzly on
both of those trees, which means that pushing a commit to the Gate will
100% fail.

But worse than that, what will actually happen is it will fail. Get
pulled to the side, then *if* a change ahead of it fails, it will go
back into the queue, because zuul assumes the change ahead of it was the
bad one, so give the job another shot. Where it will 100% fail again.

Each of these changes bouncing back and forth probably adds 2 - 4 hours
to the gate duration.

As a general rule, if a job doesn't have valid Jenkins test results
within the last 72 hrs, do not push it to the gate. This is one of the
times where "recheck no bug" is very valid, to ensure the job actually
has any chance of running in the gate.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Jay Pipes
Thanks Mainn, comments inline :)

On Fri, 2013-12-13 at 19:31 -0500, Tzu-Mainn Chen wrote:
> Thanks for the reply!  Let me try and address one particular section for now,
> since it seems to be the part causing the most confusion:
> 
> > >  * SERVICE CLASS - a further categorization within a service role for 
> > > a
> > >  particular deployment.
> > > 
> > >   * NODE PROFILE - a set of requirements that specify what
> > >   attributes a node must have in order to be mapped to
> > >a service class
> > 
> > I think I still need some more information about the above two. They
> > sound vaguely like Cobbler's system profiles... :)
> 
> I admit that this concept was a bit fuzzy for me as well, but after a few IRC
> chats, I think I have a better handle on this.  Let me begin with my 
> understanding of what
> happens in Heat, using Heat terminology:
> 
> A Heat stack template defines RESOURCES.  When a STACK is deployed using that 
> template,
> the resource information in the template is used to instantiate an INSTANCE 
> of that
> resource on a NODE.  Heat can pass a FLAVOR (flavors?) to nova-scheduler in 
> order to
> filter for appropriate nodes.
> 
> So: based on that explanation, here's what Tuskar has been calling the above:
> 
> HEAT TERM == TUSKAR TERM
> 
> NODE == NODE
> STACK == DEPLOYMENT
> INSTANCE == INSTANCE
> RESOURCE == SERVICE CLASS (at the very least, it's a one-to-one 
> correspondence)
> FLAVOR == NODE PROFILE
> ???== ROLE
> 
> The ??? is because ROLE is entirely a Tuskar concept, based on what TripleO 
> views
> as the fundamental kinds of building blocks for an overcloud: Compute, 
> Controller,
> Object Storage, Block Storage.  A ROLE essentially categorizes 
> RESOURCES/SERVICE CLASSES;
> for example, the Control ROLE might contain a control-db resource, 
> control-secure resource,
> control-api resource, etc.

So, based on your explanation above, perhaps it makes sense to just
ditch the concept of roles entirely? Is the concept useful more than
just being a list of workloads that a node is running?

> Heat cares only about the RESOURCE and not the ROLE; if the roles were named 
> Foo1, Foo2, Foo3,
> and Barney, Heat would not care.  Also, if the UI miscategorized, say, the 
> control-db resource
> under the Block Storage category - Heat would again not care, and the deploy 
> action would work.
> 
> From that perspective, I think the above terminology should either *be* the 
> Heat term, or be
> a word that closely corresponds to the intended purpose.  For example, I 
> think DEPLOYMENT reasonably
> describes a STACK, but I don't think SERVICE CLASS works for RESOURCE.  I 
> also think ROLE should be
> RESOURCE CATEGORY, since that seems to me to be the most straightforward 
> description of its purpose.

I agree with you that either the Tuskar terminology should match the
Heat terminology, or there should be a good reason for it not to match
the Heat terminology.

With regards to "stack" vs. "deployment", perhaps it's best to just
stick with "stack".

For "service class", "node profile", and "role", perhaps it may be
useful to scrap those terms entirely and use a term that the Solum
project has adopted for describing an application deployment: the
"plan".

In Solum-land, the "plan" is simply the instructions for deploying the
application. In Tuskar-land, the "plan" would simply be the instructions
for setting up the undercloud. 

So, instead of "SIZE THE ROLES", you would just be defining the "plan"
in Tuskar.

Thoughts?
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bugs] definition of triaged

2013-12-13 Thread Robert Collins
On 14 December 2013 03:07, Russell Bryant  wrote:
> On 12/12/2013 04:46 PM, Robert Collins wrote:
>> Hi, I'm trying to overhaul the bug triage process for nova (initially)
>> to make it much lighter and more effective.
>>
>> I'll be sending a more comprehensive mail shortly
>
> before you do, let's agree what we're trying to solve.  Perhaps you were
> going to cover that in your later message, but it wouldn't hurt
> discussing it now.

Sure.

> I actually didn't think our process was that broken.  It's more that I
> feel we need a person leading a small team that is working on it reguarly.

Yup, I agree. I wanted to get some data and check that the process is
actually straightforward before trying to build a team: it's easier to
build a group if the thing being built is straight foward.

http://webnumbr.com/.join%28nova-confirmed-undecided.all,untouched-nova-bugs.all%29.graph

 - that sawtooth pattern says to me that we're letting it build up,
then tackling it, then rinse and repeat. So either there isn't a team
around it, or folk burn out.

https://wiki.openstack.org/wiki/BugTriage is a 10 step process: thats
not something we can sensible do over coffee in the morning. It also
includes

In particular, the current definition has some significant
boil-the-ocean aspects such as:
"Review all medium and low bugs" - there are 340 open medium/low bugs,
230 bugs with no priority (both new and confirmed), and 500 high bugs.

We've fix-released 4000 bugs, and closed invalid 1200, opinion 46 and
have 1100 open That means we've fixed 4000/6300 bugs, or roughly
2/3rds. Thats *much* better than many other projects. OTOH the
practice we have of filing bugs when we put up patches may be
artificially inflating that: our bug database may not actually reflect
user visible issues.

We have 300 'confirmed', 200 'triaged' and 300 'in progress' bugs, 175
of which have been touched in the 2 months. By the current definition
of 'triaged', most of things we know actually are bugs still need
further handling. It may be that the current definition is part of the
very significant success we've had with actually fixing bugs... but
see above :)

> The idea with the tagging approach was to break up the triage problem
> into smaller work queues.  I haven't kept up with the tagging part and
> would really like to hand that off.  Then some of the work queues aren't
> getting triaged as regularly as they need to.  I'd like to see a small
> team making this a high priority with some of their time each week.

Yep. So I get the idea of the tagging approach, but it just shifts
work around AFAICT, and the total volume of work isn't all that high -
long as we keep on top of it.

> With all of that said, if you think an overhaul of the process is
> necessary to get to the end goal of a more well triaged bug queue, then
> I'm happy to entertain it.

I think we need to split out:
 - what the world would call triage - identify that it is an issue and
how severe - and thus important - it is for the project
   - requires reasonable understanding of nova at a deployer perspective
 - preparing bugs for developers to run with it - moving to the
OpenStack triaged state
   - essentially this is design review so requires reviewer - probably
-core experienced folk - partiticipation
 - the repeated boil-the-ocean stuff : for now this shouldn't be part
of the daily loop. Perhaps something for bug days, or once-a-cycle
review.

So essentially, I want to take this:

1 Task 1: Confirm new bugs (anyone)
2 Task 2: Prioritize confirmed bugs (bug supervisors)
3 Task 3: Solve inconsistencies (anyone)
3.1 New bugs with a priority set
3.2 In progress bugs without an assignee
4 Task 4: Review incomplete bugs (anyone)
5 Task 5: Review stale In Progress bugs (anyone)
6 Task 6: Review bugs with a patch (bug supervisors)
7 Task 7: Review Critical/High bugs (bug supervisors)
8 Task 8: Review Medium/Low bugs (bug supervisors)
9 Task 9: Deprecate old wishlist bugs (bug supervisors)
10 Task 10: Celebrate!

And turn it into something like this:
Daily tasks - first- layer - need to be broadly familiar with Nova but
doesn't require -core knowhow
1: Prioritize unprioritized bugs (bug supervisors)
  1.1 set bugs to Invalid if questions
  1.2 set bugs to Incomplete if questions asked
  1.4 patches are immediately directed to Gerrit
  1.5 subject area tags are applied
  1.3 output is Confirmed or Triaged + Priority set
2: Review Incomplete-with-response bugs
3: Review: In progress bugs without an assignee

Daily tasks - second layer - -core current and previous members
1. Assess the proposed approach in Confirmed+High[1] bugs
1.1. If good, move to Triaged
1.2  If not, suggest what would make the approach good[2]
2. If appropriate add low-hanging-fruit tag

Per-release:
1: Review stale In Progress bugs (anyone)
2: Review Critical/High bugs (bug supervisors)
3: Review Medium/Low bugs (bug supervisors)
4: Deprecate old wishlist bugs (bug supervisors)



1: I don't believe in putting a lot 

Re: [openstack-dev] [TripleO][Tuskar][nova] Terminology

2013-12-13 Thread Steve Baker
On 12/14/2013 01:31 PM, Tzu-Mainn Chen wrote:
> Thanks for the reply!  Let me try and address one particular section for now,
> since it seems to be the part causing the most confusion:
>
>>>  * SERVICE CLASS - a further categorization within a service role for a
>>>  particular deployment.
>>>
>>>   * NODE PROFILE - a set of requirements that specify what
>>>   attributes a node must have in order to be mapped to
>>>a service class
>> I think I still need some more information about the above two. They
>> sound vaguely like Cobbler's system profiles... :)
> I admit that this concept was a bit fuzzy for me as well, but after a few IRC
> chats, I think I have a better handle on this.  Let me begin with my 
> understanding of what
> happens in Heat, using Heat terminology:
>
> A Heat stack template defines RESOURCES.  When a STACK is deployed using that 
> template,
> the resource information in the template is used to instantiate an INSTANCE 
> of that
> resource on a NODE.  Heat can pass a FLAVOR (flavors?) to nova-scheduler in 
> order to
> filter for appropriate nodes.
>
> So: based on that explanation, here's what Tuskar has been calling the above:
>
> HEAT TERM == TUSKAR TERM
> 
...
> INSTANCE == INSTANCE
Actually the native nova resource is OS::Nova::Server, and properties
that need server references are called server_id.

I *thought* that nova's official term for a single compute resource was
Server, and that Instance was a leftover from the AWS API, but looking
through the nova v3 API there is mentions of instances. Some
clarification and historic context from nova would help here.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Tzu-Mainn Chen
Thanks for the reply!  Let me try and address one particular section for now,
since it seems to be the part causing the most confusion:

> >  * SERVICE CLASS - a further categorization within a service role for a
> >  particular deployment.
> > 
> >   * NODE PROFILE - a set of requirements that specify what
> >   attributes a node must have in order to be mapped to
> >a service class
> 
> I think I still need some more information about the above two. They
> sound vaguely like Cobbler's system profiles... :)

I admit that this concept was a bit fuzzy for me as well, but after a few IRC
chats, I think I have a better handle on this.  Let me begin with my 
understanding of what
happens in Heat, using Heat terminology:

A Heat stack template defines RESOURCES.  When a STACK is deployed using that 
template,
the resource information in the template is used to instantiate an INSTANCE of 
that
resource on a NODE.  Heat can pass a FLAVOR (flavors?) to nova-scheduler in 
order to
filter for appropriate nodes.

So: based on that explanation, here's what Tuskar has been calling the above:

HEAT TERM == TUSKAR TERM

NODE == NODE
STACK == DEPLOYMENT
INSTANCE == INSTANCE
RESOURCE == SERVICE CLASS (at the very least, it's a one-to-one correspondence)
FLAVOR == NODE PROFILE
???== ROLE

The ??? is because ROLE is entirely a Tuskar concept, based on what TripleO 
views
as the fundamental kinds of building blocks for an overcloud: Compute, 
Controller,
Object Storage, Block Storage.  A ROLE essentially categorizes 
RESOURCES/SERVICE CLASSES;
for example, the Control ROLE might contain a control-db resource, 
control-secure resource,
control-api resource, etc.

Heat cares only about the RESOURCE and not the ROLE; if the roles were named 
Foo1, Foo2, Foo3,
and Barney, Heat would not care.  Also, if the UI miscategorized, say, the 
control-db resource
under the Block Storage category - Heat would again not care, and the deploy 
action would work.

>From that perspective, I think the above terminology should either *be* the 
>Heat term, or be
a word that closely corresponds to the intended purpose.  For example, I think 
DEPLOYMENT reasonably
describes a STACK, but I don't think SERVICE CLASS works for RESOURCE.  I also 
think ROLE should be
RESOURCE CATEGORY, since that seems to me to be the most straightforward 
description of its purpose.

People with more experience in Heat, please correct any of my misunderstandings!


Mainn

> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Sofware Config progress

2013-12-13 Thread Steve Baker
I've been working on a POC in heat for resources which perform software
configuration, with the aim of implementing this spec
https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-spec

The code to date is here:
https://review.openstack.org/#/q/topic:bp/hot-software-config,n,z

What would be helpful now is reviews which give the architectural
approach enough of a blessing to justify fleshing this POC out into a
ready to merge changeset.

Currently it is possible to:
- create templates containing OS::Heat::SoftwareConfig and
OS::Heat::SoftwareDeployment resources
- deploy configs to OS::Nova::Server, where the deployment resource
remains in an IN_PROGRESS state until it is signalled with the output values
- write configs which execute shell scripts and report back with output
values that other resources can have access to.

What follows is an overview of the architecture and implementation to
help with your reviews.

REST API

Like many heat resources, OS::Heat::SoftwareConfig and
OS::Heat::SoftwareDeployment are backed by "real" resources that are
invoked via a REST API. However in this case, the API that is called is
heat itself.

The REST API for these resources really just act as structured storage
for config and deployments, and the entities are managed via the REST
paths /{tenant_id}/software_configs and /{tenant_id}/software_deployments:
https://review.openstack.org/#/c/58878/
RPC layer of REST API:
https://review.openstack.org/#/c/58877/
DB layer of REST API:
https://review.openstack.org/#/c/58876
heatclient lib access to REST API:
https://review.openstack.org/#/c/58885

This data could be stored in a less structured datastore like swift, but
this API has a couple of important implementation details which I think
justify it existing:
- SoftwareConfig resources are immutable once created. There is no
update API to modify an existing config. This gives confidence that a
config can have a long lifecycle without changing, and a certainty of
what exactly is deployed on a server with a given config.
- Fetching all the deployments and configs for a given server is an
operation done repeatedly throughout the lifecycle of the stack, so is
optimized to be able to do in a single operation. This is called by
using the deployments index API call,
/{tenant_id}/software_deployments?server_id=. The resulting
list of deployments include the their associated config data[1].

OS::Heat::SoftwareConfig resource
=
OS::Heat::SoftwareConfig can be used directly in a template, but it may
end be more frequently used in a resource provider template which
provides a resource aimed at a particular configuration management tool.
http://docs-draft.openstack.org/79/58879/7/check/gate-heat-docs/911a250/doc/build/html/template_guide/openstack.html#OS::Heat::SoftwareConfig
The contents of the config property will depend on the CM tool being
used, but at least one value in the config map will be the actual script
that the CM tool invokes.  An inputs and outputs schema is also defined
here. The group property is used when the deployments data is actually
delivered to the server (more on that later).

Since a config is immutable, any changes to a OS::Heat::SoftwareConfig
on stack update result in replacement.

OS::Heat::SoftwareDeployment resource
=
OS::Heat::SoftwareDeployment joins a OS::Heat::SoftwareConfig resource
with a OS::Nova::Server resource. It allows server-specific input values
to be specified that map to the OS::Heat::SoftwareConfig inputs schema.
Output values that are signaled to the deployment resource are exposed
as resource attributes, using the names specified in the outputs schema.
The OS::Heat::SoftwareDeployment resource remains in an IN_PROGRESS
state until it receives a signal (containing any outputs) from the server.
http://docs-draft.openstack.org/79/58879/7/check/gate-heat-docs/911a250/doc/build/html/template_guide/openstack.html#OS::Heat::SoftwareDeployment

A deployment has its own actions and statuses that are specific to what
a deployment does, and OS::Heat::SoftwareDeployment maps this to heat
resource statuses and actions:
actions:
DEPLOY -> CREATE
REDEPLOY -> UPDATE
UNDEPLOY -> DELETE

status (these could use some bikeshedding):
WAITING -> IN_PROGRESS
RECEIVED -> COMPLETE
FAILED -> FAILED

In the config outputs schema there is a special flag for error_output.
If the signal response contains any value for any of these error_output
outputs then the deployment resource is put into the FAILED state.

The SoftwareDeployment class subclasses SignalResponder which means that
a SoftwareDeployment creates an associated user and ec2 keypair. Since
the SoftwareDeployment needs to use the resource_id for the deployment
resource uuid, the user_id needs to be stored in resource-date instead.
This non-wip change enables that:
https://review.openstack.org

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Flavio Percoco

On 13/12/13 22:40 +, Kurt Griffiths wrote:

FWIW, Marconi can easily deliver sub-second latency even with lots of
clients fast-polling. We are also considering a long-polling feature that
will reduce latency further for HTTP clients.


And there's already some work going on the `websocket` side[0]. It's
still pretty much an experiment since its completion depends on some
features that are under development. Targeting i-2.

Cheers,
FF

[0] https://github.com/FlaPer87/marconi-websocket



On 12/13/13, 2:41 PM, "Fox, Kevin M"  wrote:


A second or two of latency perhaps shaved off by using something like
AMQP or STOMP might not be justified for its added complexity.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


pgpenpe0mypSS.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Jay Pipes
On Wed, 2013-12-11 at 14:15 -0500, Tzu-Mainn Chen wrote:
> Hi,
> 
> I'm trying to clarify the terminology being used for Tuskar, which may be 
> helpful so that we're sure
> that we're all talking about the same thing :)  I'm copying responses from 
> the requirements thread
> and combining them with current requirements to try and create a unified 
> view.  Hopefully, we can come
> to a reasonably rapid consensus on any desired changes; once that's done, the 
> requirements can be
> updated.
> 
> * NODE a physical general purpose machine capable of running in many roles. 
> Some nodes may have hardware layout that is particularly
>useful for a given role.

Must a node be a physical machine? What about a container?

>  * ROLE - a specific workload we want to map onto one or more nodes. 
> Examples include 'undercloud control plane', 'overcloud control
>plane', 'overcloud storage', 'overcloud compute' etc.

OK, so "role" has implications in both Keystone-land as well as various
configuration management systems (Chef, Ansible). What about just using
the term "workload"?

>  * MANAGEMENT NODE - a node that has been mapped with an undercloud 
> role
>  * SERVICE NODE - a node that has been mapped with an overcloud role

I like the above breakdowns, but instead of "mapped with an undercloud
role", I'd say "runs an undercloud workload" or similar.

> * COMPUTE NODE - a service node that has been mapped to an 
> overcloud compute role
> * CONTROLLER NODE - a service node that has been mapped to an 
> overcloud controller role
> * OBJECT STORAGE NODE - a service node that has been mapped to an 
> overcloud object storage role
> * BLOCK STORAGE NODE - a service node that has been mapped to an 
> overcloud block storage role
>  * UNDEPLOYED NODE - a node that has not been mapped with a role
>   * another option - UNALLOCATED NODE - a node that has not been 
> allocated through nova scheduler (?)
>- (after reading lifeless's explanation, I 
> agree that "allocation" may be a
>   misleading term under TripleO, so I 
> personally vote for UNDEPLOYED)

"FREE NODE" would be my preference here.

>  * INSTANCE - A role deployed on a node - this is where work actually 
> happens.

If it's something that is deployed (eventually) using a Nova call, it
should be called an "instance", yes. ++

> * DEPLOYMENT

Did you want to add a definition to the term "DEPLOYMENT"? How about
this?

DEPLOYMENT - A collection of nodes that comprise both the under and
overcloud

>  * SIZE THE ROLES - the act of deciding how many nodes will need to be 
> assigned to each role
>* another option - DISTRIBUTE NODES (?)
>  - (I think the former is more accurate, but 
> perhaps there's a better way to say it?)

Yeah, I agree with some others that "SIZE THE ROLES" sounds a bit odd.
How about "SIZE THE DEPLOYMENT" or "DETERMINE DEPLOYMENT SCOPE"?

>  * SCHEDULING - the process of deciding which role is deployed on which 
> node

Agree with others that this isn't really scheduling in the sense that
it's not a temporal activity. How about "ASSIGN THE WORKLOADS"?

>  * SERVICE CLASS - a further categorization within a service role for a 
> particular deployment.
> 
>   * NODE PROFILE - a set of requirements that specify what attributes 
> a node must have in order to be mapped to
>a service class

I think I still need some more information about the above two. They
sound vaguely like Cobbler's system profiles... :)

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Kurt Griffiths
FWIW, Marconi can easily deliver sub-second latency even with lots of
clients fast-polling. We are also considering a long-polling feature that
will reduce latency further for HTTP clients.

On 12/13/13, 2:41 PM, "Fox, Kevin M"  wrote:

>A second or two of latency perhaps shaved off by using something like
>AMQP or STOMP might not be justified for its added complexity.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-13 Thread Doug Hellmann
That covers routes. What about the properties of the inputs and outputs?


On Fri, Dec 13, 2013 at 4:43 PM, Ryan Petrello
wrote:

> Unless there’s some other trickiness going on that I’m unaware of, the
> routes for the WSGI app are defined at application startup time (by methods
> called in the WSGI app’s __init__).
>
> ---
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
>
> On Dec 13, 2013, at 12:56 PM, Doug Hellmann 
> wrote:
>
> >
> >
> >
> > On Thu, Dec 12, 2013 at 9:22 PM, Christopher Yeoh 
> wrote:
> > On Fri, Dec 13, 2013 at 4:12 AM, Jay Pipes  wrote:
> > On 12/11/2013 11:47 PM, Mike Perez wrote:
> > On 10:06 Thu 12 Dec , Christopher Yeoh wrote:
> > On Thu, Dec 12, 2013 at 8:59 AM, Doug Hellmann
> >  > >wrote:
> >
> >
> >
> >
> > On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello <
> > ryan.petre...@dreamhost.com
> > >
> > wrote:
> >
> > Hello,
> >
> > I’ve spent the past week experimenting with using Pecan for
> > Nova’s
> > API
> > and have opened an experimental review:
> >
> >
> > https://review.openstack.org/#/c/61303/6
> >
> > …which implements the `versions` v3 endpoint using pecan (and
> > paves the
> > way for other extensions to use pecan).  This is a *potential*
> >
> > approach
> > I've considered for gradually moving the V3 API, but I’m open
> > to other suggestions (and feedback on this approach).  I’ve
> > also got a few open questions/general observations:
> >
> > 1.  It looks like the Nova v3 API is composed *entirely* of
> > extensions (including “core” API calls), and that extensions
> > and their routes are discoverable and extensible via installed
> > software that registers
> > itself
> > via stevedore.  This seems to lead to an API that’s composed of
> >
> > installed
> > software, which in my opinion, makes it fairly hard to map out
> > the
> > API (as
> > opposed to how routes are manually defined in other WSGI
> > frameworks).  I
> > assume at this time, this design decision has already been
> > solidified for
> > v3?
> >
> >
> > Yeah, I brought this up at the summit. I am still having some
> > trouble understanding how we are going to express a stable core
> > API for compatibility testing if the behavior of the API can be
> > varied so significantly by deployment decisions. Will we just
> > list each
> > "required"
> > extension, and forbid any extras for a compliant cloud?
> >
> >
> > Maybe the issue is caused by me misunderstanding the term
> > "extension," which (to me) implies an optional component but is
> > perhaps reflecting a technical implementation detail instead?
> >
> >
> > Yes and no :-) As Ryan mentions, all API code is a plugin in the V3
> > API. However, some must be loaded or the V3 API refuses to start
> > up. In nova/api/openstack/__init__.py we have
> > API_V3_CORE_EXTENSIONS which hard codes which extensions must be
> > loaded and there is no config option to override this (blacklisting
> > a core plugin will result in the V3 API not starting up).
> >
> > So for compatibility testing I think what will probably happen is
> > that we'll be defining a minimum set (API_V3_CORE_EXTENSIONS) that
> > must be implemented and clients can rely on that always being
> > present
> > on a compliant cloud. But clients can also then query through
> > /extensions what other functionality (which is backwards compatible
> > with respect to core) may also be present on that specific cloud.
> >
> > This really seems similar to the idea of having a router class, some
> > controllers and you map them. From my observation at the summit,
> > calling everything an extension creates confusion. An extension
> > "extends" something. For example, Chrome has extensions, and they
> > extend the idea of the core features of a browser. If you want to do
> > more than back/forward, go to an address, stop, etc, that's an
> > extension. If you want it to play an audio clip "stop, hammer time"
> > after clicking the stop button, that's an example of an extension.
> >
> > In OpenStack, we use extensions to extend core. Core are the
> > essential feature(s) of the project. In Cinder for example, core is
> > volume. In core you can create a volume, delete a volume, attach a
> > volume, detach a volume, etc. If you want to go beyond that, that's
> > an extension. If you want to do volume encryption, that's an example
> > of an extension.
> >
> > I'm worried by the discrepancies this will create among the programs.
> > You mentioned maintainability being a plus for this. I don't think
> > it'll be great from the deployers perspective when you have one
> > program that thinks everything is an extension and some of them have
> > to be enabled that the deployer has to be mindful of, while the rest
> > of the programs consider all extensions to be optional.
> >
> > +1. I agree with most of what Mike says above. The idea that there are
> core "extensions" in Nova's v3 API doesn't make a whole lot of

Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-13 Thread Jordan OMara

On 13/12/13 16:20 +1300, Robert Collins wrote:

On 12 December 2013 21:59, Jaromir Coufal  wrote:

On 2013/12/12 01:21, Robert Collins wrote:




Avoiding cloud - ack.

However, on instance - 'instance' is a very well defined term in Nova
and thus OpenStack: Nova boot gets you an instance, nova delete gets
rid of an instance, nova rebuild recreates it, etc. Instances run
[virtual|baremetal] machines managed by a hypervisor. So
nova-scheduler is not ever going to be confused with instance in the
OpenStack space IMO. But it brings up a broader question, which is -
what should we do when terms that are well defined in OpenStack - like
Node, Instance, Flavor - are not so well defined for new users? We
could use different terms, but that may confuse 'stackers, and will
mean that our UI needs it's own dedicated terminology to map back to
e.g. the manuals for Nova and Ironic. I'm inclined to suggest that as
a principle, where there is a well defined OpenStack concept, that we
use it, even if it is not ideal, because the consistency will be
valuable.


I think this is a really important point. I think the consistency is a
powerful tool for teaching new users how they should expect
tripleo/tuskar to work and should lessen the learning curve, as long
they've used openstack before.

--
Jordan O'Mara 
Red Hat Engineering, Raleigh 


pgpvsMEMEL94Z.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Language pack attributes schema

2013-12-13 Thread Clayton Coleman


> On Dec 13, 2013, at 4:25 PM, Georgy Okrokvertskhov 
>  wrote:
> 
> Hi,
> 
> I will change format to YAML. It should be pretty straightforward.
> 
> I also like an idea of versioning for LP. I personally don't like name 
> "Stream" but we can figure out naming later.

Version is almost always too specific - remember that some operators may wish 
to support a single stream of java that may change for users over time.  
Someone using a java language pack may only want to support one java runtime at 
a time, but that version # might change.  It might be major version, it might 
be minor version, or the minor version might end up having major incompatible 
changes and thus a new stream has to be forked.

The operator needs a way to define a label for users to select a stream ("1.6") 
that may change over time ("1.6.5.X"), but with a unique id that links apps 
created against "1.6" to those created later.  The operator may also need to 
supply metadata for these streams, like "obsolete", "will be end of lifed 
soon".  An individual version of a lp in a stream will inherit / reference 
metadata at some point in time (created at x, has exactly version 1.6.5 build).

The user definitely needs to be able to choose a version - but that version is 
a choice that exists for a limited period of time.

> 
> Thanks
> Georgy
> 
> 
> 
> 
>> On Fri, Dec 13, 2013 at 12:40 PM, Clayton Coleman  
>> wrote:
>> I added some comments to the bottom specifically about lessons learned from 
>> operating things like language pack - I discuss the concept of version 
>> streams and how an operator spoon feeds base images  for the language pack 
>> out to applications.  
>> 
>>> On Dec 9, 2013, at 4:05 PM, Georgy Okrokvertskhov 
>>>  wrote:
>>> 
>>> Hi,
>>> 
>>> As a part of Language pack workgroup session we created an etherpad for 
>>> language pack attributes definition. Please find a first draft of language 
>>> pack attributes here: 
>>> https://etherpad.openstack.org/p/Solum-Language-pack-json-format
>>> 
>>> We have identified a minimal list of attributes which should be supported 
>>> by language pack API.
>>> 
>>> Please, provide your feedback and\or ideas in this etherpad. Once it is 
>>> reviewed we can use this as a basis for language packs in PoC.
>>> 
>>> Thanks
>>> Georgy
>>> ___
>>> 
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> Georgy Okrokvertskhov
> Technical Program Manager,
> Cloud and Infrastructure Services,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Dmitry Mescheryakov
2013/12/13 Alessandro Pilotti 

> Hi guys,
>
> This seems to become a pretty long thread with quite a lot of ideas. What
> do you think about setting up a meeting on IRC to talk about what direction
> to take?
> IMO this has the potential of becoming a completely separated project to
> be hosted on stackforge or similar.
>
> Generally speaking, we already use Cloudbase-Init, which beside being the
> de facto standard Windows "Cloud-Init type feature” (Apache 2 licensed)
> has been recently used as a base to provide the same functionality on
> FreeBSD.
>
> For reference: https://github.com/cloudbase/cloudbase-init and
> http://www.cloudbase.it/cloud-init-for-windows-instances/
>
> We’re seriously thinking if we should transform Cloudbase-init into an
> agent or if we should keep it on line with the current “init only, let the
> guest to the rest” approach which fits pretty
> well with the most common deployment approaches (Heat, Puppet / Chef,
> Salt, etc). Last time I spoke with Scott about this agent stuff for
> cloud-init, the general intention was
> to keep the init approach as well (please correct me if I missed something
> in the meantime).
>
> The limitations that we see, independently from which direction and tool
> will be adopted for the agent, are mainly in the metadata services and the
> way OpenStack users employ them to
> communicate with Nova, Heat and the rest of the pack as orchestration
> requirements complexity increases:
>
> 1) We need a way to post back small amounts of data (e.g. like we already
> do for the encrypted Windows password) for status updates,
> so that the users know how things are going and can be properly notified
> in case of post-boot errors. This might be irrelevant as long as you just
> create a user and deploy some SSH keys,
> but becomes very important for most orchestration templates.
>
> 2) The HTTP metadata service accessible from the guest with its magic
> number is IMO quite far from an optimal solution. Since every hypervisor
> commonly
> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest /
> host communication services, we could define a common abstraction layer
> which will
> include a guest side (to be included in cloud-init, cloudbase-init, etc)
> and a hypervisor side, to be implemented for each hypervisor and included
> in the related Nova drivers.
> This has already been proposed / implemented in various third party
> scenarios, but never under the OpenStack umbrella for multiple hypervisors.
>
> Metadata info can be at that point retrieved and posted by the Nova driver
> in a secure way and proxied to / from the guest whithout needing to expose
> the metadata
> service to the guest itself. This would also simplify Neutron, as we could
> get rid of the complexity of the Neutron metadata proxy.
>
>
The idea was discussed in the thread with name 'hypervisor-dependent
agent'. A couple existing agents were proposed: Rackspace agent for Xen
[1][2] and oVirt agent for Qemu [3].

Many people prefer the idea of hypervisor independent agent which will
communicate over network (network agent). The main disadvantage of
hypervisor-dependent agent is obviously the number of implementations need
to be made for different hypervisors/OSes. Also it needs a daemon (in fact
- another agent) running on each Compute host.

IMHO these are very strong arguments for network-based agent. If we start
with hypervisor-dependent agent, it will just take too much time to do
enough implementations. On the other hand, these two types of agents can
share some code. So if need arise, people can write hypervisor-dependent
agent based on network one, or behaving the same way. AFAIK, that is how
Trove is deployed in Rackspace. Trove has network-based agent, and
Rackspace replaces it with their own implementation.


[1] https://github.com/rackerlabs/openstack-guest-agents-unix
[2] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver
[3] https://github.com/oVirt/ovirt-guest-agent


>
>
> Alessandro
>
>
> On 13 Dec 2013, at 16:28 , Scott Moser  wrote:
>
> > On Tue, 10 Dec 2013, Ian Wells wrote:
> >
> >> On 10 December 2013 20:55, Clint Byrum  wrote:
> >>
> >>> If it is just a network API, it works the same for everybody. This
> >>> makes it simpler, and thus easier to scale out independently of compute
> >>> hosts. It is also something we already support and can very easily
> expand
> >>> by just adding a tiny bit of functionality to neutron-metadata-agent.
> >>>
> >>> In fact we can even push routes via DHCP to send agent traffic through
> >>> a different neutron-metadata-agent, so I don't see any issue where we
> >>> are piling anything on top of an overstressed single resource. We can
> >>> have neutron route this traffic directly to the Heat API which hosts
> it,
> >>> and that can be load balanced and etc. etc. What is the exact scenario
> >>> you're trying to avoid?
> >>>
> >>
> >> You may be making even this harder than it needs to be.  You can cre

Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-13 Thread Ryan Petrello
Unless there’s some other trickiness going on that I’m unaware of, the routes 
for the WSGI app are defined at application startup time (by methods called in 
the WSGI app’s __init__).

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com

On Dec 13, 2013, at 12:56 PM, Doug Hellmann  wrote:

> 
> 
> 
> On Thu, Dec 12, 2013 at 9:22 PM, Christopher Yeoh  wrote:
> On Fri, Dec 13, 2013 at 4:12 AM, Jay Pipes  wrote:
> On 12/11/2013 11:47 PM, Mike Perez wrote:
> On 10:06 Thu 12 Dec , Christopher Yeoh wrote:
> On Thu, Dec 12, 2013 at 8:59 AM, Doug Hellmann
>  >wrote:
> 
> 
> 
> 
> On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello <
> ryan.petre...@dreamhost.com
> >
> wrote:
> 
> Hello,
> 
> I’ve spent the past week experimenting with using Pecan for
> Nova’s
> API
> and have opened an experimental review:
> 
> 
> https://review.openstack.org/#/c/61303/6
> 
> …which implements the `versions` v3 endpoint using pecan (and
> paves the
> way for other extensions to use pecan).  This is a *potential*
> 
> approach
> I've considered for gradually moving the V3 API, but I’m open
> to other suggestions (and feedback on this approach).  I’ve
> also got a few open questions/general observations:
> 
> 1.  It looks like the Nova v3 API is composed *entirely* of
> extensions (including “core” API calls), and that extensions
> and their routes are discoverable and extensible via installed
> software that registers
> itself
> via stevedore.  This seems to lead to an API that’s composed of
> 
> installed
> software, which in my opinion, makes it fairly hard to map out
> the
> API (as
> opposed to how routes are manually defined in other WSGI
> frameworks).  I
> assume at this time, this design decision has already been
> solidified for
> v3?
> 
> 
> Yeah, I brought this up at the summit. I am still having some
> trouble understanding how we are going to express a stable core
> API for compatibility testing if the behavior of the API can be
> varied so significantly by deployment decisions. Will we just
> list each
> "required"
> extension, and forbid any extras for a compliant cloud?
> 
> 
> Maybe the issue is caused by me misunderstanding the term
> "extension," which (to me) implies an optional component but is
> perhaps reflecting a technical implementation detail instead?
> 
> 
> Yes and no :-) As Ryan mentions, all API code is a plugin in the V3
> API. However, some must be loaded or the V3 API refuses to start
> up. In nova/api/openstack/__init__.py we have
> API_V3_CORE_EXTENSIONS which hard codes which extensions must be
> loaded and there is no config option to override this (blacklisting
> a core plugin will result in the V3 API not starting up).
> 
> So for compatibility testing I think what will probably happen is
> that we'll be defining a minimum set (API_V3_CORE_EXTENSIONS) that
> must be implemented and clients can rely on that always being
> present
> on a compliant cloud. But clients can also then query through
> /extensions what other functionality (which is backwards compatible
> with respect to core) may also be present on that specific cloud.
> 
> This really seems similar to the idea of having a router class, some
> controllers and you map them. From my observation at the summit,
> calling everything an extension creates confusion. An extension
> "extends" something. For example, Chrome has extensions, and they
> extend the idea of the core features of a browser. If you want to do
> more than back/forward, go to an address, stop, etc, that's an
> extension. If you want it to play an audio clip "stop, hammer time"
> after clicking the stop button, that's an example of an extension.
> 
> In OpenStack, we use extensions to extend core. Core are the
> essential feature(s) of the project. In Cinder for example, core is
> volume. In core you can create a volume, delete a volume, attach a
> volume, detach a volume, etc. If you want to go beyond that, that's
> an extension. If you want to do volume encryption, that's an example
> of an extension.
> 
> I'm worried by the discrepancies this will create among the programs.
> You mentioned maintainability being a plus for this. I don't think
> it'll be great from the deployers perspective when you have one
> program that thinks everything is an extension and some of them have
> to be enabled that the deployer has to be mindful of, while the rest
> of the programs consider all extensions to be optional.
> 
> +1. I agree with most of what Mike says above. The idea that there are core 
> "extensions" in Nova's v3 API doesn't make a whole lot of sense to me.
> 
> 
> So would it help if we used the term "plugin" to talk about the framework 
> that the API is implemented with,
> and extensions when talking about things which extend the core API? So the 
> whole of the API is implemented
> using plugins, while the core plugins are not considered to be extensions.
> 
> That distinctio

Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-13 Thread Jay Pipes
On Fri, 2013-12-13 at 12:52 +1030, Christopher Yeoh wrote:
> On Fri, Dec 13, 2013 at 4:12 AM, Jay Pipes  wrote:

> 
> +1. I agree with most of what Mike says above. The idea that
> there are core "extensions" in Nova's v3 API doesn't make a
> whole lot of sense to me.
> 
> So would it help if we used the term "plugin" to talk about the
> framework that the API is implemented with,
> and extensions when talking about things which extend the core API? So
> the whole of the API is implemented
> using plugins, while the core plugins are not considered to be
> extensions.

Yeah, that makes more sense :)

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Neutron] How do we know a host is ready to have servers scheduled onto it?

2013-12-13 Thread Jay Pipes
On Thu, 2013-12-12 at 21:01 +, Joshua Harlow wrote:
> Maybe time to revive something like:
> 
> https://review.openstack.org/#/c/12759/
> 
> 
> From experience, all sites (and those internal to yahoo) provide a /status
> (or equivalent) that is used for all sorts of things (from basic
> load-balancing up/down) to other things like actually introspecting the
> state of the process (or to get basics about what the process is doing).
> Typically this is not exposed to the public (its why
> http://www.yahoo.com/status works for me but not for u). It seems like
> something like that could help (but of course not completely solve) the
> type of response jay mentioned.

>From reading through the review above, it looks like markmc had two main
objections:

a) The status/healthcheck middleware should not be in Oslo unless all
OpenStack projects have an interest in using it

b) Standardizing on a HEAD request to the root resource seemed like a
better idea

Mark, has any of your thinking changed on the above? Regarding using
HEAD, that limits the returned result to HTTP headers, versus perhaps
returning a list of dependent services that this service is waiting on
in order to move into a "healthy" status.

Personally, I believe OpenStack has moved into a phase where having this
kind of standardized status/healthcheck middleware would be very useful.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Language pack attributes schema

2013-12-13 Thread Georgy Okrokvertskhov
Hi,

I will change format to YAML. It should be pretty straightforward.

I also like an idea of versioning for LP. I personally don't like name
"Stream" but we can figure out naming later.

Thanks
Georgy




On Fri, Dec 13, 2013 at 12:40 PM, Clayton Coleman wrote:

> I added some comments to the bottom specifically about lessons learned
> from operating things like language pack - I discuss the concept of version
> streams and how an operator spoon feeds base images  for the language pack
> out to applications.
>
> On Dec 9, 2013, at 4:05 PM, Georgy Okrokvertskhov <
> gokrokvertsk...@mirantis.com> wrote:
>
> Hi,
>
> As a part of Language pack workgroup session we created an etherpad for
> language pack attributes definition. Please find a first draft of language
> pack attributes here:
> https://etherpad.openstack.org/p/Solum-Language-pack-json-format
>
> We have identified a minimal list of attributes which should be supported
> by language pack API.
>
> Please, provide your feedback and\or ideas in this etherpad. Once it is
> reviewed we can use this as a basis for language packs in PoC.
>
> Thanks
> Georgy
>
> ___
>
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][heat] ec2tokens, v3 credentials and request signing

2013-12-13 Thread Jay Pipes
On Tue, 2013-12-10 at 15:13 +, Steven Hardy wrote:
> I'm just thinking it would be really great (from a user-of-keystone
> perspective) if we could avoid further fragmentation and just have one type
> of shared secret (a keystone token), which can be configured flexibly
> enough to satisfy the various use-cases?

Amen. No offense to those Keystone contributors who enjoy reading arcane
academic texts and RFCs about x.509, Kerberos, and PKI, but *users and
deployers* of OpenStack (and therefore users of Keystone) don't give a
hoot about any of that stuff, nor should deployers and users *have to
know* about the arcane underbelly of security semantics in order to use
OpenStack.

All deployers want is a simple, easy-to-understand authentication
mechanism that *seamlessly* integrates with other OpenStack projects.

AWS authentication works because it's simple and does its job without
making life unnecessarily difficult for its users.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-13 Thread Jay Pipes
On Tue, 2013-12-10 at 09:40 +1300, Robert Collins wrote:
> On 6 December 2013 14:11, Fox, Kevin M  wrote:
> > I think the security issue can be handled by not actually giving the 
> > underlying resource to the user in the first place.
> >
> > So, for example, if I wanted a bare metal node's worth of resource for my 
> > own containering, I'd ask for a bare metal node and use a "blessed" image 
> > that contains docker+nova bits that would hook back to the cloud. I 
> > wouldn't be able to login to it, but containers started on it would be able 
> > to access my tenant's networks. All access to it would have to be through 
> > nova suballocations. The bare resource would count against my quotas, but 
> > nothing run under it.
> >
> > Come to think of it, this sounds somewhat similar to what is planned for 
> > Neutron service vm's. They count against the user's quota on one level but 
> > not all access is directly given to the user. Maybe some of the same 
> > implementation bits could be used.
> 
> This is a super interesting discussion - thanks for kicking it off.

Indeed. A very enlightening conversation :)

> I think it would be fantastic to be able to use containers for
> deploying the cloud rather than full images while still running
> entirely OpenStack control up and down the stack.
> 
> Briefly, what we need to be able to do that is:
> 
>  - the ability to bring up an all in one node with everything on it to
> 'seed' the environment.
> - we currently do that by building a disk image, and manually
> running virsh to start it

Is this set in stone? In other words, is it a given that in order to
create the seed undercloud, that you need to use DIB to do it? Instead
of an image that is pre-constructed and virsh'd into, what about
constructing one or more LXC templates, starting a set of LXC containers
for the various undercloud support services (db, mq, OpenStack services,
etc), installing those support services using
config-mgmt-flavor-du-jour? Has this been considered as an option to
DIB? (sorry if I'm late to the discussion!) :)

>  - the ability to reboot a machine *with no other machines running* -
> we need to be able to power off and on a datacentre - and have the
> containers on it come up correctly configured, networking working,
> running etc.

I think a container-based seed would work just fine here, yes?

>  - we explicitly want to be just using OpenStack APIs for all the
> deployment operations after the seed is up; so no direct use of lxc or
> docker or whathaveyou.

Agreed. I'm talking about changing the thinking of the construction of
the seed undercloud, not the overcloud.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OK to Use Flufl.enum

2013-12-13 Thread Adam Young

On 12/13/2013 05:17 AM, Yuriy Taraday wrote:

Hello, Adam.

On Tue, Dec 10, 2013 at 6:55 PM, Adam Young > wrote:


With only a change to the import and requirements, it builds and
runs, but raises:


Traceback (most recent call last):
  File "keystone/tests/test_revoke.py", line 65, in
test_list_is_sorted
valid_until=valid_until))
  File "keystone/contrib/revoke/core.py", line 74, in __init__
setattr(self, k, v)
  File "keystone/contrib/revoke/core.py", line 82, in scope_type
self._scope_type = ScopeType[value]
  File
"/opt/stack/keystone/.venv/lib/python2.7/site-packages/enum/__init__.py",
line 352, in __getitem__
return cls._member_map_[name]
KeyError: 1


Looks like you're doing this the wrong way. Python 3.4's enums work 
either as EnumClass(value) or as EnumClass[name], not as 
EnumClass[value] as it seems your test is doing and flufl is allowing 
it to.


--

Kind regards, Yuriy.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Yep,  figured that out, but thanks for the pointer.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Fox, Kevin M
I think the Marconi idea mentioned earlier would work very similar over http 
but provide a simple to implement solution. I think we need to make the agent 
as simple as possible on the vm. an http client is easy as its likely already 
in the vm.  AMQP or STOMP or whichever client is not as easy.

As for performance, I believe the use case is to send configuration events to 
the vm. like "perform new backup of table foo" , "create user x", etc. A second 
or two of latency perhaps shaved off by using something like AMQP or STOMP 
might not be justified for its added complexity. But maybe I'm wrong. Just 
brainstorming at this point.

Thanks,
Kevin

From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Friday, December 13, 2013 12:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

2013/12/13 Fox, Kevin M mailto:kevin@pnnl.gov>>
Yeah, I think the extra nic is unnecessary too. There already is a working 
route to 169.254.169.254, and a metadata proxy -> server running on it.

So... lets brainstorm for a minute and see if there are enough pieces already 
to do most of the work.

We already have:
  * An http channel out from private vm's, past network namespaces all the way 
to the node running the neutron-metadata-agent.

We need:
  * Some way to send a command, plus arguments to the vm to execute some action 
and get a response back.

OpenStack has focused on REST api's for most things and I think that is a great 
tradition to continue. This allows the custom agent plugins to be written in 
any language that can speak http (All of them?) on any platform.

A REST api running in the vm wouldn't be accessible from the outside though on 
a private network.

Random thought, can some glue "unified guest agent" be written to bridge the 
gap?

How about something like the following:

The "unified guest agent" starts up, makes an http request to 
169.254.169.254/unified-agent//connect
If at any time the connection returns, it will auto reconnect.
It will block as long as possible and the data returned will be an http 
request. The request will have a special header with a request id.
The http request will be forwarded to localhost: and 
the response will be posted to 
169.254.169.254/unified-agent/cnc_type/response/

The neutron-proxy-server would need to be modified slightly so that, if it sees 
a /unified-agent//* request it:
looks in its config file, unified-agent section, and finds the ip/port to 
contact for a given ', and forwards the request to that server, 
instead of the regular metadata one.

Once this is in place, savana or trove can have their webapi registered with 
the proxy as the server for the "savana" or "trove" cnc_type. They will be 
contacted by the clients as they come up, and will be able to make web requests 
to them, an get responses back.

What do you think?

Thanks,
Kevin

Kevin, frankly that sound like a _big_ overkill and wheel re-invention. The 
idea you propose is similar to HTTP long polling. It actually works in the 
browsers. But I think people use it not because it is very scalable, 
easy-implemented or something else. It is simply one of the few technologies 
available when you need to implement server push in the web.

In our use-case we don't have a limitation 'that must work with a bare browser 
on client side' and hence we can use technologies which much better suite to 
message passing like AMQP, STOMP or others.




From: Ian Wells [ijw.ubu...@cack.org.uk]
Sent: Thursday, December 12, 2013 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 12 December 2013 19:48, Clint Byrum 
mailto:cl...@fewbar.com>>>
 wrote:
Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
> On 12/10/2013 03:49 PM, Ian Wells wrote:
> > On 10 December 2013 20:55, Clint Byrum 
> > mailto:cl...@fewbar.com>>
> >  >  wrote:
> I've read through this email thread with quite a bit of curiosity, and I
> have to say what Ian says above makes a lot of sense to me. If Neutron
> can handle the creation of a "management vNIC" that has some associated
> iptables rules governing it that provides a level of security for guest
> <-> host and guest <-> $OpenStackService, then the transport problem
> domain is essentially solved, and Neutron can be happily ignorant (as it
> should be) of any guest agent communication with anything else.
>

Indeed I think it could work, however I think the NIC is unnecessary.

Seems likely even with a second N

Re: [openstack-dev] [Solum] Language pack attributes schema

2013-12-13 Thread Clayton Coleman
I added some comments to the bottom specifically about lessons learned from 
operating things like language pack - I discuss the concept of version streams 
and how an operator spoon feeds base images  for the language pack out to 
applications.  

> On Dec 9, 2013, at 4:05 PM, Georgy Okrokvertskhov 
>  wrote:
> 
> Hi,
> 
> As a part of Language pack workgroup session we created an etherpad for 
> language pack attributes definition. Please find a first draft of language 
> pack attributes here: 
> https://etherpad.openstack.org/p/Solum-Language-pack-json-format
> 
> We have identified a minimal list of attributes which should be supported by 
> language pack API.
> 
> Please, provide your feedback and\or ideas in this etherpad. Once it is 
> reviewed we can use this as a basis for language packs in PoC.
> 
> Thanks
> Georgy
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Dmitry Mescheryakov
2013/12/13 Sylvain Bauza 

> Why the notifications couldn't be handled by Marconi ?
>
> That would be up to Marconi's team to handle security issues while it is
> part of their mission statement to deliver a messaging service in between
> VMs.
>

Sylvain, very interesting suggestion! Marconi definitely can complement
transports provided by oslo.messaging. Together they will provide a good
diversity in OpenStack deployment options.



>  Le 12 déc. 2013 22:09, "Fox, Kevin M"  a écrit :
>
> Yeah, I think the extra nic is unnecessary too. There already is a working
>> route to 169.254.169.254, and a metadata proxy -> server running on it.
>>
>> So... lets brainstorm for a minute and see if there are enough pieces
>> already to do most of the work.
>>
>> We already have:
>>   * An http channel out from private vm's, past network namespaces all
>> the way to the node running the neutron-metadata-agent.
>>
>> We need:
>>   * Some way to send a command, plus arguments to the vm to execute some
>> action and get a response back.
>>
>> OpenStack has focused on REST api's for most things and I think that is a
>> great tradition to continue. This allows the custom agent plugins to be
>> written in any language that can speak http (All of them?) on any platform.
>>
>> A REST api running in the vm wouldn't be accessible from the outside
>> though on a private network.
>>
>> Random thought, can some glue "unified guest agent" be written to bridge
>> the gap?
>>
>> How about something like the following:
>>
>> The "unified guest agent" starts up, makes an http request to
>> 169.254.169.254/unified-agent//connect
>> If at any time the connection returns, it will auto reconnect.
>> It will block as long as possible and the data returned will be an http
>> request. The request will have a special header with a request id.
>> The http request will be forwarded to localhost:
>> and the response will be posted to
>> 169.254.169.254/unified-agent/cnc_type/response/
>>
>> The neutron-proxy-server would need to be modified slightly so that, if
>> it sees a /unified-agent//* request it:
>> looks in its config file, unified-agent section, and finds the ip/port to
>> contact for a given ', and forwards the request to that server,
>> instead of the regular metadata one.
>>
>> Once this is in place, savana or trove can have their webapi registered
>> with the proxy as the server for the "savana" or "trove" cnc_type. They
>> will be contacted by the clients as they come up, and will be able to make
>> web requests to them, an get responses back.
>>
>> What do you think?
>>
>> Thanks,
>> Kevin
>>
>> 
>> From: Ian Wells [ijw.ubu...@cack.org.uk]
>> Sent: Thursday, December 12, 2013 11:02 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] Unified Guest Agent proposal
>>
>> On 12 December 2013 19:48, Clint Byrum > cl...@fewbar.com>> wrote:
>> Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
>> > On 12/10/2013 03:49 PM, Ian Wells wrote:
>> > > On 10 December 2013 20:55, Clint Byrum > cl...@fewbar.com>
>> > > >> wrote:
>> > I've read through this email thread with quite a bit of curiosity, and I
>> > have to say what Ian says above makes a lot of sense to me. If Neutron
>> > can handle the creation of a "management vNIC" that has some associated
>> > iptables rules governing it that provides a level of security for guest
>> > <-> host and guest <-> $OpenStackService, then the transport problem
>> > domain is essentially solved, and Neutron can be happily ignorant (as it
>> > should be) of any guest agent communication with anything else.
>> >
>>
>> Indeed I think it could work, however I think the NIC is unnecessary.
>>
>> Seems likely even with a second NIC that said address will be something
>> like 169.254.169.254 (or the ipv6 equivalent?).
>>
>> There *is* no ipv6 equivalent, which is one standing problem.  Another is
>> that (and admittedly you can quibble about this problem's significance) you
>> need a router on a network to be able to get to 169.254.169.254 - I raise
>> that because the obvious use case for multiple networks is to have a net
>> which is *not* attached to the outside world so that you can layer e.g. a
>> private DB service behind your app servers.
>>
>> Neither of these are criticisms of your suggestion as much as they are
>> standing issues with the current architecture.
>> --
>> Ian.
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack

Re: [openstack-dev] [Solum] Language pack attributes schema

2013-12-13 Thread Jay Pipes
On Mon, 2013-12-09 at 13:02 -0800, Georgy Okrokvertskhov wrote:
> Hi,
> 
> 
> As a part of Language pack workgroup session we created an etherpad
> for language pack attributes definition. Please find a first draft of
> language pack attributes
> here: https://etherpad.openstack.org/p/Solum-Language-pack-json-format
> 
> 
> We have identified a minimal list of attributes which should be
> supported by language pack API.
> 
> Please, provide your feedback and\or ideas in this etherpad. Once it
> is reviewed we can use this as a basis for language packs in PoC.

Hi Georgy,

Would it be possible to use YAML instead of JSON as the default format
for these description documents? I ask this only because HOT is YAML,
and it would be good to align with Heat in that manner since much of
what Solum aims to do is construct HOT templates, yes?

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Dmitry Mescheryakov
2013/12/13 Fox, Kevin M 

> Yeah, I think the extra nic is unnecessary too. There already is a working
> route to 169.254.169.254, and a metadata proxy -> server running on it.
>
> So... lets brainstorm for a minute and see if there are enough pieces
> already to do most of the work.
>
> We already have:
>   * An http channel out from private vm's, past network namespaces all the
> way to the node running the neutron-metadata-agent.
>
> We need:
>   * Some way to send a command, plus arguments to the vm to execute some
> action and get a response back.
>
> OpenStack has focused on REST api's for most things and I think that is a
> great tradition to continue. This allows the custom agent plugins to be
> written in any language that can speak http (All of them?) on any platform.
>
> A REST api running in the vm wouldn't be accessible from the outside
> though on a private network.
>
> Random thought, can some glue "unified guest agent" be written to bridge
> the gap?
>
> How about something like the following:
>
> The "unified guest agent" starts up, makes an http request to
> 169.254.169.254/unified-agent//connect
> If at any time the connection returns, it will auto reconnect.
> It will block as long as possible and the data returned will be an http
> request. The request will have a special header with a request id.
> The http request will be forwarded to localhost:
> and the response will be posted to
> 169.254.169.254/unified-agent/cnc_type/response/
>
> The neutron-proxy-server would need to be modified slightly so that, if it
> sees a /unified-agent//* request it:
> looks in its config file, unified-agent section, and finds the ip/port to
> contact for a given ', and forwards the request to that server,
> instead of the regular metadata one.
>
> Once this is in place, savana or trove can have their webapi registered
> with the proxy as the server for the "savana" or "trove" cnc_type. They
> will be contacted by the clients as they come up, and will be able to make
> web requests to them, an get responses back.
>
> What do you think?
>
> Thanks,
> Kevin
>

Kevin, frankly that sound like a _big_ overkill and wheel re-invention. The
idea you propose is similar to HTTP long polling. It actually works in the
browsers. But I think people use it not because it is very scalable,
easy-implemented or something else. It is simply one of the few
technologies available when you need to implement server push in the web.

In our use-case we don't have a limitation 'that must work with a bare
browser on client side' and hence we can use technologies which much better
suite to message passing like AMQP, STOMP or others.




> 
> From: Ian Wells [ijw.ubu...@cack.org.uk]
> Sent: Thursday, December 12, 2013 11:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Unified Guest Agent proposal
>
> On 12 December 2013 19:48, Clint Byrum  cl...@fewbar.com>> wrote:
> Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
> > On 12/10/2013 03:49 PM, Ian Wells wrote:
> > > On 10 December 2013 20:55, Clint Byrum  cl...@fewbar.com>
> > > >> wrote:
> > I've read through this email thread with quite a bit of curiosity, and I
> > have to say what Ian says above makes a lot of sense to me. If Neutron
> > can handle the creation of a "management vNIC" that has some associated
> > iptables rules governing it that provides a level of security for guest
> > <-> host and guest <-> $OpenStackService, then the transport problem
> > domain is essentially solved, and Neutron can be happily ignorant (as it
> > should be) of any guest agent communication with anything else.
> >
>
> Indeed I think it could work, however I think the NIC is unnecessary.
>
> Seems likely even with a second NIC that said address will be something
> like 169.254.169.254 (or the ipv6 equivalent?).
>
> There *is* no ipv6 equivalent, which is one standing problem.  Another is
> that (and admittedly you can quibble about this problem's significance) you
> need a router on a network to be able to get to 169.254.169.254 - I raise
> that because the obvious use case for multiple networks is to have a net
> which is *not* attached to the outside world so that you can layer e.g. a
> private DB service behind your app servers.
>
> Neither of these are criticisms of your suggestion as much as they are
> standing issues with the current architecture.
> --
> Ian.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-12-13 Thread Eric Windisch
On Fri, Dec 13, 2013 at 1:19 PM, Chuck Short wrote:

> Hi,
>
> I have definitely seen a drop off in the proposed Container-Service API
> discussion
>

There was only one action item from the meeting, which was a compilation of
use-cases from Krishna.

Krishna, have you made progress on the use-cases? Is there a wiki page?

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Dmitry Mescheryakov's message of 2013-12-13 12:01:01 -0800:
> Still, what about one more server process users will have to run? I see
> unified agent as library which can be easily adopted by both exiting and
> new OpenStack projects. The need to configure and maintain Salt server
> process is big burden for end users. That idea will definitely scare off
> adoption of the agent. And at the same time what are the gains of having
> that server process? I don't really see to many of them.
> 

The Salt devs already mentioned that we can more or less just import
salt's master code and run that inside the existing server processes. So
Savanna would have a salt master capability, and so would Heat Engine.

If it isn't eventlet friendly we can just fork it off and run it as its
own child. Still better than inventing our own.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Fox, Kevin M
Ah, good point.

So, disabling the route wouldn't work if you wanted to use the metadata proxy 
for ongoing events for the guest agent.

But the nonce option would would work.

Another option would be to add a 169.254.169.254/metadata/disable url that 
disables that part of the proxy after cloud-init runs. Then no further metadata 
can be read until the next boot.

Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, December 13, 2013 11:46 AM
To: openstack-dev
Subject: Re: [openstack-dev] Unified Guest Agent proposal

Excerpts from Fox, Kevin M's message of 2013-12-13 11:32:01 -0800:
> Hmm.. so If I understand right, the concern you started is something like:
>  * You start up a vm
>  * You make it available to your users to ssh into
>  * They could grab the machine's metadata
>
> I hadn't thought about that use case, but that does sound like it would be a 
> problem.
>
> Ok, so... the problem there is that you need a secrets passed to the vm but 
> the network trick isn't secure enough to pass the secret, hence the config 
> drive like trick since only root/admin can read the data.
>
> Now, that does not sound like it excludes the possibility of using the 
> metadata server idea in combination with cloud drive to make things secure. 
> You could use cloud drive to pass a cert, and then have the metadata server 
> require that cert in order to ensure only the vm itself can pull any 
> additional metadata.
>
> The unified guest agent could use the same cert/server to establish trust too.
>
> Does that address the issue?
>

There is still no need for cloud drive.

cloud-init can drop the route to the metadata network once it has fetched
this data, but before it has enabled SSHD by generating host keys.

Or you can just treat everything in that metadata as compromised already,
and just use a nonce to create a trust relationship with newly fetched
secrets stored as root on the box.

This is already a solved problem.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] [Glance]Metadata repository initiative discussion for Glance

2013-12-13 Thread Georgy Okrokvertskhov
Hi,

It looks like I forgot to add Glance. Fixing this now. I am sorry for
duplicating the thread.

Thanks
Georgy


On Fri, Dec 13, 2013 at 12:02 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Yes. It is a Pacific Standard Time.
>
> Thanks
> Georgy
>
>
> On Fri, Dec 13, 2013 at 12:01 PM, Keith Bray wrote:
>
>>  PT as in Pacific Standard Time?
>>
>> -Keith
>> On Dec 13, 2013 1:56 PM, Georgy Okrokvertskhov <
>> gokrokvertsk...@mirantis.com> wrote:
>>  Hi,
>>
>>  It is PT. I will add this info to the doodle pool.
>>
>>  Thanks
>> Georgy
>>
>>
>> On Fri, Dec 13, 2013 at 11:50 AM, Keith Bray wrote:
>>
>>>  What timezone is the poll in?   It doesn't say on the Doodle page.
>>>
>>>  Thanks,
>>> -Keith
>>>
>>>   From: Georgy Okrokvertskhov 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> Date: Friday, December 13, 2013 12:21 PM
>>> To: OpenStack Development Mailing List <
>>> openstack-dev@lists.openstack.org>
>>> Subject: [openstack-dev] [Heat] [Murano] [Solum] Metadata repository
>>> initiative discussion for Glance
>>>
>>>   Hi,
>>>
>>>  Recently a Heater proposal was announced in openstack-dev mailing
>>> list. This discussion lead to a decision to add unified metadata service \
>>> catalog capabilities into Glance.
>>>
>>>  On the Glance weekly meeting this initiative was discussed and Glance
>>> team agreed to take a look onto BPs and API documents for metadata
>>> repository\catalog, in order to understand what can be done during Icehouse
>>> release and how to organize this work in general.
>>>
>>>  There will be a separate meeting devoted to this initiative on Tuesday
>>> 12/17 in #openstack-glance channel. Exact time is not defined yet and I
>>> need time preferences from all parties. Here is a link to a doodle poll
>>> http://doodle.com/9f2vxrftizda9pun . Please select time slot which will
>>> be suitable for you.
>>>
>>>  The agenda for this meeting is the following:
>>> 1. Define project goals in general
>>> 2. Discuss API for this service and find out what can be implemented
>>> during IceHouse release.
>>> 3. Define organizational stuff like how this initiative should be
>>> developed (branch of Glance or separate project within Glance program)
>>>
>>>  Here is an etherpad
>>> https://etherpad.openstack.org/p/MetadataRepository-API for initial API
>>> version for this service.
>>>
>>>  All project which are interested in metadata repository are welcome to
>>> discuss API and service itself.
>>>
>>>  Currently there are several possible use cases for this service:
>>> 1. Heat template catalog
>>> 2. HOT Software orchestration scripts\recipes storage
>>> 3. Murano Application Catalog object storage
>>> 4. Solum assets storage
>>>
>>>  Thanks
>>> Georgy
>>>
>>>
>>
>>
>>  --
>> Georgy Okrokvertskhov
>> Technical Program Manager,
>> Cloud and Infrastructure Services,
>> Mirantis
>> http://www.mirantis.com
>> Tel. +1 650 963 9828
>> Mob. +1 650 996 3284
>>
>
>
>
> --
> Georgy Okrokvertskhov
> Technical Program Manager,
> Cloud and Infrastructure Services,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>



-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Scott Moser
On Fri, 13 Dec 2013, Fox, Kevin M wrote:

> Hmm.. so If I understand right, the concern you started is something like:
>  * You start up a vm
>  * You make it available to your users to ssh into
>  * They could grab the machine's metadata
>
> I hadn't thought about that use case, but that does sound like it would be a 
> problem.
>
> Ok, so... the problem there is that you need a secrets passed to the vm
> but the network trick isn't secure enough to pass the secret, hence the
> config drive like trick since only root/admin can read the data.
>
> Now, that does not sound like it excludes the possibility of using the
> metadata server idea in combination with cloud drive to make things
> secure. You could use cloud drive to pass a cert, and then have the
> metadata server require that cert in order to ensure only the vm itself
> can pull any additional metadata.
>
> The unified guest agent could use the same cert/server to establish trust too.

For what its worth, the same general problem is solved by just putting a
null route to the metadata service. cloud-init has a config option for
doing this.  After route has put such a route in place, you should
effectively be done.

  
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
  # remove access to the ec2 metadata service early in boot via null route
  #  the null route can be removed (by root) with:
  #route del -host 169.254.169.254 reject
  # default: false (service available)
  disable_ec2_metadata: true

I've also considered before that it might be useful for the instance to
make a request to the metadata service that its done and that the data can
now be deleted.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Dmitry Mescheryakov
Still, what about one more server process users will have to run? I see
unified agent as library which can be easily adopted by both exiting and
new OpenStack projects. The need to configure and maintain Salt server
process is big burden for end users. That idea will definitely scare off
adoption of the agent. And at the same time what are the gains of having
that server process? I don't really see to many of them.


2013/12/12 Clint Byrum 

> Excerpts from Dmitry Mescheryakov's message of 2013-12-12 09:24:13 -0800:
> > Clint, Kevin,
> >
> > Thanks for reassuring me :-) I just wanted to make sure that having
> direct
> > access from VMs to a single facility is not a dead end in terms of
> security
> > and extensibility. And since it is not, I agree it is much simpler (and
> > hence better) than hypervisor-dependent design.
> >
> >
> > Then returning to two major suggestions made:
> >  * Salt
> >  * Custom solution specific to our needs
> >
> > The custom solution could be made on top of oslo.messaging. That gives us
> > RPC working on different messaging systems. And that is what we really
> need
> > - an RPC into guest supporting various transports. What it lacks at the
> > moment is security - it has neither authentication nor ACL.
> >
>
> I bet salt would be super open to modularizing their RPC. Since
> oslo.messaging includes ZeroMQ, and is a library now, I see no reason to
> avoid opening that subject with our fine friends in the Salt community.
> Perhaps a few of them are even paying attention right here. :)
>
> The benefit there is that we get everything except the plugins we want
> to write already done. And we could start now with the ZeroMQ-only
> salt agent if we could at least get an agreement on principle that Salt
> wouldn't mind using an abstraction layer for RPC.
>
> That does make the "poke a hole out of private networks" conversation
> _slightly_ more complex. It is one thing to just let ZeroMQ out, another
> to let all of oslo.messaging's backends out. But I think in general
> they'll all share the same thing: you want an address+port to be routed
> intelligently out of the private network into something running under
> the cloud.
>
> Next steps (all can be done in parallel, as all are interdependent):
>
> * Ask Salt if oslo.messaging is a path they'll walk with us
> * Experiment with communicating with salt agents from an existing
>   OpenStack service (Savanna, Trove, Heat, etc)
> * Deep-dive into Salt to see if it is feasible
>
> As I have no cycles for this, I can't promise to do any, but I will
> try to offer assistance if I can.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2013-12-13 11:32:01 -0800:
> Hmm.. so If I understand right, the concern you started is something like:
>  * You start up a vm
>  * You make it available to your users to ssh into
>  * They could grab the machine's metadata
> 
> I hadn't thought about that use case, but that does sound like it would be a 
> problem.
> 
> Ok, so... the problem there is that you need a secrets passed to the vm but 
> the network trick isn't secure enough to pass the secret, hence the config 
> drive like trick since only root/admin can read the data.
> 
> Now, that does not sound like it excludes the possibility of using the 
> metadata server idea in combination with cloud drive to make things secure. 
> You could use cloud drive to pass a cert, and then have the metadata server 
> require that cert in order to ensure only the vm itself can pull any 
> additional metadata.
> 
> The unified guest agent could use the same cert/server to establish trust too.
> 
> Does that address the issue?
> 

There is still no need for cloud drive.

cloud-init can drop the route to the metadata network once it has fetched
this data, but before it has enabled SSHD by generating host keys.

Or you can just treat everything in that metadata as compromised already,
and just use a nonce to create a trust relationship with newly fetched
secrets stored as root on the box.

This is already a solved problem.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Fox, Kevin M
Hmm.. so If I understand right, the concern you started is something like:
 * You start up a vm
 * You make it available to your users to ssh into
 * They could grab the machine's metadata

I hadn't thought about that use case, but that does sound like it would be a 
problem.

Ok, so... the problem there is that you need a secrets passed to the vm but the 
network trick isn't secure enough to pass the secret, hence the config drive 
like trick since only root/admin can read the data.

Now, that does not sound like it excludes the possibility of using the metadata 
server idea in combination with cloud drive to make things secure. You could 
use cloud drive to pass a cert, and then have the metadata server require that 
cert in order to ensure only the vm itself can pull any additional metadata.

The unified guest agent could use the same cert/server to establish trust too.

Does that address the issue?

Thanks,
Kevin

From: Alessandro Pilotti [apilo...@cloudbasesolutions.com]
Sent: Friday, December 13, 2013 10:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

18:39 , Clint Byrum  wrote:

> Excerpts from Alessandro Pilotti's message of 2013-12-13 07:13:01 -0800:
>> Hi guys,
>>
>> This seems to become a pretty long thread with quite a lot of ideas. What do 
>> you think about setting up a meeting on IRC to talk about what direction to 
>> take?
>> IMO this has the potential of becoming a completely separated project to be 
>> hosted on stackforge or similar.
>>
>> Generally speaking, we already use Cloudbase-Init, which beside being the de 
>> facto standard Windows "Cloud-Init type feature” (Apache 2 licensed)
>> has been recently used as a base to provide the same functionality on 
>> FreeBSD.
>>
>> For reference: https://github.com/cloudbase/cloudbase-init and 
>> http://www.cloudbase.it/cloud-init-for-windows-instances/
>>
>> We’re seriously thinking if we should transform Cloudbase-init into an agent 
>> or if we should keep it on line with the current “init only, let the guest 
>> to the rest” approach which fits pretty
>> well with the most common deployment approaches (Heat, Puppet / Chef, Salt, 
>> etc). Last time I spoke with Scott about this agent stuff for cloud-init, 
>> the general intention was
>> to keep the init approach as well (please correct me if I missed something 
>> in the meantime).
>>
>> The limitations that we see, independently from which direction and tool 
>> will be adopted for the agent, are mainly in the metadata services and the 
>> way OpenStack users employ them to
>> communicate with Nova, Heat and the rest of the pack as orchestration 
>> requirements complexity increases:
>>
>
> Hi, Allessandro. Really interesting thoughts. Most of what you have
> described that is not about agent transport is what we discussed
> at the Icehouse summit under the topic of the hot-software-config
> blueprint. There is definitely a need for better workflow integration
> in Heat, and that work is happening now.
>

This is great news. I was aware about this effort but didn’t know that it’s 
already in such an advanced stage. Looking forward to check it out these days!

>> 1) We need a way to post back small amounts of data (e.g. like we already do 
>> for the encrypted Windows password) for status updates,
>> so that the users know how things are going and can be properly notified in 
>> case of post-boot errors. This might be irrelevant as long as you just 
>> create a user and deploy some SSH keys,
>> but becomes very important for most orchestration templates.
>>
>
> Heat already has this via wait conditions. hot-software-config will
> improve upon this. I believe once a unified guest agent protocol is
> agreed upon we will make Heat use that for wait condition signalling.
>
>> 2) The HTTP metadata service accessible from the guest with its magic number 
>> is IMO quite far from an optimal solution. Since every hypervisor commonly
>> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host 
>> communication services, we could define a common abstraction layer which will
>> include a guest side (to be included in cloud-init, cloudbase-init, etc) and 
>> a hypervisor side, to be implemented for each hypervisor and included in the 
>> related Nova drivers.
>> This has already been proposed / implemented in various third party 
>> scenarios, but never under the OpenStack umbrella for multiple hypervisors.
>>
>> Metadata info can be at that point retrieved and posted by the Nova driver 
>> in a secure way and proxied to / from the guest whithout needing to expose 
>> the metadata
>> service to the guest itself. This would also simplify Neutron, as we could 
>> get rid of the complexity of the Neutron metadata proxy.
>>
>
> The neutron metadata proxy is actually relatively simple. Have a look at
> it. The basic way it works in pseudo code is:
>
> port = loo

Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-13 Thread Nathani, Sreedhar (APS)
Hello All,

Update with my testing.

I have installed one more VM as neutron-server host and configured under the 
Load Balancer.
Currently I have 2 VMs running neutron-server process (one is Controller and 
other is dedicated neutron-server VM)

With this configuration during the batch instance deployment with a batch size 
of 30 and sleep time of 20min,
180 instances could get an IP during the first boot. During 181-210 instance 
creation some instances could not get an IP.

This is much better than when running with single neutron server where only 120 
instances could get an IP during the first boot in Havana.

When the instances are getting created, parent neutron-server process spending 
close to 90% of the cpu time on both the servers,
While rest of the neutron-server process (APIs) are spending very low CPU 
utilization.

I think it's good idea to expand the current multiple neutron-server api 
process to support rpc messages as well.

Even with current setup (multiple neutron-server hosts), we still see rpc 
timeouts in DHCP, L2 agents
and dnsmasq process is getting restarted due to SIGKILL though.

Thanks & Regards,
Sreedhar Nathani

From: Nathani, Sreedhar (APS)
Sent: Friday, December 13, 2013 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

Hello Salvatore,

Thanks for your feedback. Does the patch 
https://review.openstack.org/#/c/57420/ which you are working on bug 
https://bugs.launchpad.net/neutron/+bug/1253993
will help to correct the OVS agent loop slowdown issue?
Does this patch address the DHCP agent updating the host file once in a minute 
and finally sending SIGKILL to dnsmasq process?

I have tested with Marun's patch https://review.openstack.org/#/c/61168/ 
regarding 'Send DHCP notifications regardless of agent status' but this patch
Also observed the same behavior.


Thanks & Regards,
Sreedhar Nathani

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Thursday, December 12, 2013 6:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly


I believe your analysis is correct and inline with the findings reported in the 
bug concerning OVS agent loop slowdown.

The issue has become even more prominent with the ML2 plugin due to an 
increased number of notifications sent.

Another issue which makes delays on the DHCP agent worse is that instances send 
a discover message once a minute.

Salvatore
Il 11/dic/2013 11:50 "Nathani, Sreedhar (APS)" 
mailto:sreedhar.nath...@hp.com>> ha scritto:
Hello Peter,

Here are the tests I have done. Already have 240 instances active across all 
the 16 compute nodes. To make the tests and data collection easy,
I have done the tests on single compute node

First Test -
*   240 instances already active,  16 instances on the compute node where I 
am going to do the tests
*   deploy 10 instances concurrently using nova boot command with 
num-instances option in single compute node
*   All the instances could get IP during the instance boot time.

-   Instances are created at  2013-12-10 13:41:01
-   From the compute host, DHCP requests are sent from 13:41:20 but those 
are not reaching the DHCP server
Reply from the DHCP server got at 13:43:08 (A delay of 108 seconds)
-   DHCP agent updated the host file from 13:41:06 till 13:42:54. Dnsmasq 
process got SIGHUP message every time the hosts file is updated
-   In compute node tap devices are created between 13:41:08 and 13:41:18
Security group rules are received between 13:41:45 and 13:42:56
IP table rules were updated between 13:41:50 and 13:43:04

Second Test -
*   Deleted the newly created 10 instances.
*   240 instances already active,  16 instances on the compute node where I 
am going to do the tests
*   Deploy 30 instances concurrently using nova boot command with 
num-instances option in single compute node
*   None  of the instances could get the IP during the instance boot.


-   Instances are created at  2013-12-10 14:13:50

-   From the compute host, DHCP Requests are sent from  14:14:14 but those 
are not reaching the DHCP Server
(don't see any DHCP requests are reaching the DHCP server 
from the tcpdump on the network node)

-   Reply from the DHCP server only got at 14:22:10 ( A delay of 636 
seconds)

-   From the strace of the DHCP agent process, it first updated the hosts 
file at 14:14:05, after this there is a gap of close to 60 min for
Updating next instance address, it repeated till 7th 
instance which was updated at 14:19:50.  30th instance updated at 14:20:00

-   During the 30 instance creation, dnsmasq process got SIGHUP after the 
host file is updated, but at 14:19:52 it got SIGKILL and new process

Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread Tzu-Mainn Chen
> Quick note - I want to keep this discussion a bit high-level and not to
> get into big implementation details. For everyone, please, let's agree
> in this thread on the direction and approach and we can start follow-up
> threads with bigger details of how to get those things done.

I'm not sure how the items listed below are implementation details; they seem
like scoping the requirements to me.

> On 2013/13/12 12:04, Tzu-Mainn Chen wrote:
> >> *VERSION 0*
> >> ===
> >> Enable user to deploy OpenStack with the simpliest TripleO way, no
> >> difference between hardware.
> >>
> >> Target:
> >> - end of icehouse-2
> >
> > My impression was that some of these features required features to be
> > developed in other
> > OpenStack services - if so, should we call those out so that we can see if
> > they'll be
> > available in the icehouse-2 timeframe?
> As for below listed features for v0 - it is the smallest set of what we
> have to have in the UI - if there is some delay in other services, we
> have to put attention there as well. But I don't think there is anything
> blocking us at the moment.
> 
> >> Features we need to get in:
> >> - Enable manual nodes registration (Ironic)
> >> - Get images available for user (Glance)
> >
> > Are we still providing the Heat template?  If so, are there image
> > requirements that we
> > need to take into account?
> I am not aware of any special requirements, but I will let experts to
> answer here...
> 
> >
> >> - Node roles (hardcode): Controller, Compute, Object Storage, Block
> >> Storage
> >> - Design deployment (number of nodes per role)
> >
> > We're only allowing a single deployment, right?
> Correct. For the whole Icehouse. I don't think we can get multiple
> deployments in time, there are much more important features.
> 
> >> - Deploy (Heat + Nova)
> >
> > What parameters are we passing in for deploy?  Is it limited to the # of
> > nodes/role, or
> > are we also passing in the image?
> I think it is # nodes/role and image as well. Though images might be
> hardcoded for the very first iteration. Soon we should be able to let
> user assign images to roles.
> 
> > Do we also need the following?
> >
> > * unregister a node in Ironic
> > * update a deployment (add or destroy instances)
> > * destroy a deployment
> > * view information about management node (instance?)
> > * list nodes/instances by role
> > * view deployment configuration
> > * view status of deployment as it's being deployed
> Some of that is part of above mentioned, some a bit later down the road
> (not far away though). We need all of that, but let's enable user to
> deploy first and we can add next features after we get something working
> then.

Well, these are requirements that I previously pulled from your wireframes
that aren't listed anywhere, so I don't know if they were forgotten, descoped,
or assumed to be part of release 0.  If it's assumed, I think it's important
that we call it out; otherwise, I'm not sure how we can appropriately evaluate
whether the feature list fits into the icehouse-2 timeframe.

I'm also not sure what we consider "needed".  To me, it seems like:

a) NEEDED
   * list nodes/instances by role

b) STANDARD (but possibly not needed?)
   * unregister a node
   * update a deployment
   * destroy a deployment
   * view status of deployment as it's being deployed

c) UNSURE
   * view information about a management node
   * view deployment configuration - I vaguely recall someone saying that it 
was 
important for the user to view the options being used when creating the 
overcloud,
even if those options were uneditable defaults.


Regarding the split in features between icehouse-2 and icehouse-3 - I'm not 
sure it makes sense.
We're re-architecting all of tuskar, and as such, I think it's more important 
to call out the
features we want for *all* of icehouse.  Otherwise, it's possible that we'll 
create an architecture
that works for icehouse-2 but which needs to be significantly reworked for 
icehouse-3.

For that reason, I think it might make more sense to work towards a single 
deadline within icehouse.


Mainn


> -- Jarda
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-13 Thread Tzu-Mainn Chen
> On 2013/13/12 11:20, Tzu-Mainn Chen wrote:
> > These look good!  Quick question - can you explain the purpose of Node
> > Tags?  Are they
> > an additional way to filter nodes through nova-scheduler (is that even
> > possible?), or
> > are they there solely for display in the UI?
> >
> > Mainn
> 
> We start easy, so that's solely for UI needs of filtering and monitoring
> (grouping of nodes). It is already in Ironic, so there is no reason why
> not to take advantage of it.
> -- Jarda

Okay, great.  Just for further clarification, are you expecting this UI 
filtering
to be present in release 0?  I don't think Ironic natively supports filtering
by node tag, so that would be further work that would have to be done.

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Murano] [Solum] Metadata repository initiative discussion for Glance

2013-12-13 Thread Georgy Okrokvertskhov
Hi,

Recently a Heater proposal was announced in openstack-dev mailing list.
This discussion lead to a decision to add unified metadata service \
catalog capabilities into Glance.

On the Glance weekly meeting this initiative was discussed and Glance team
agreed to take a look onto BPs and API documents for metadata
repository\catalog, in order to understand what can be done during Icehouse
release and how to organize this work in general.

There will be a separate meeting devoted to this initiative on Tuesday
12/17 in #openstack-glance channel. Exact time is not defined yet and I
need time preferences from all parties. Here is a link to a doodle poll
http://doodle.com/9f2vxrftizda9pun . Please select time slot which will be
suitable for you.

The agenda for this meeting is the following:
1. Define project goals in general
2. Discuss API for this service and find out what can be implemented during
IceHouse release.
3. Define organizational stuff like how this initiative should be developed
(branch of Glance or separate project within Glance program)

Here is an etherpad
https://etherpad.openstack.org/p/MetadataRepository-APIfor initial API
version for this service.

All project which are interested in metadata repository are welcome to
discuss API and service itself.

Currently there are several possible use cases for this service:
1. Heat template catalog
2. HOT Software orchestration scripts\recipes storage
3. Murano Application Catalog object storage
4. Solum assets storage

Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the new OpenStack service for Containers

2013-12-13 Thread Chuck Short
Hi,

I have definitely seen a drop off in the proposed Container-Service API
discussion. I think peple are still mauling over the ideas that were
presented so far. However with looking at the discussion so far, and
possibly trying to get the discussion going again, I don't think we are at
the point where a totally separate Container-Service API is needed yet.

Regards
chuck


On Thu, Dec 12, 2013 at 12:59 PM, Rick Harris wrote:

> Hi all,
>
> Was wondering if there's been any more work done on the proposed
> Container-Service (Capsule?) API?
>
> Haven't seen much on the ML on this, so just want to make sure the current
> plan is still to have a draft of the Capsule API, compare the delta to the
> existing Nova API, and determine whether a separate service still makes
> sense for the current use-cases.
>
> Thanks!
>
> Rick
>
>
> On Fri, Nov 22, 2013 at 2:35 PM, Russell Bryant wrote:
>
>> On 11/22/2013 02:29 PM, Krishna Raman wrote:
>> >
>> > On Nov 22, 2013, at 10:26 AM, Eric Windisch > > > wrote:
>> >
>> >> On Fri, Nov 22, 2013 at 11:49 AM, Krishna Raman > >> > wrote:
>> >>> Reminder: We are meting in about 15 minutes on #openstack-meeting
>> >>> channel.
>> >>
>> >> I wasn't able to make it. Was meeting-bot triggered? Is there a log of
>> >> today's discussion?
>> >
>> > Yes. Logs are
>> > here:
>> http://eavesdrop.openstack.org/meetings/nova/2013/nova.2013-11-22-17.01.log.html
>>
>> Yep, I used the 'nova' meeting topic for this one.  If the meeting turns
>> in to a regular thing, we should probably switch it to some sort of
>> sub-team type name ... like nova-containers.
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Alessandro Pilotti
18:39 , Clint Byrum  wrote:

> Excerpts from Alessandro Pilotti's message of 2013-12-13 07:13:01 -0800:
>> Hi guys,
>> 
>> This seems to become a pretty long thread with quite a lot of ideas. What do 
>> you think about setting up a meeting on IRC to talk about what direction to 
>> take?
>> IMO this has the potential of becoming a completely separated project to be 
>> hosted on stackforge or similar.
>> 
>> Generally speaking, we already use Cloudbase-Init, which beside being the de 
>> facto standard Windows "Cloud-Init type feature” (Apache 2 licensed) 
>> has been recently used as a base to provide the same functionality on 
>> FreeBSD.
>> 
>> For reference: https://github.com/cloudbase/cloudbase-init and 
>> http://www.cloudbase.it/cloud-init-for-windows-instances/
>> 
>> We’re seriously thinking if we should transform Cloudbase-init into an agent 
>> or if we should keep it on line with the current “init only, let the guest 
>> to the rest” approach which fits pretty
>> well with the most common deployment approaches (Heat, Puppet / Chef, Salt, 
>> etc). Last time I spoke with Scott about this agent stuff for cloud-init, 
>> the general intention was
>> to keep the init approach as well (please correct me if I missed something 
>> in the meantime).
>> 
>> The limitations that we see, independently from which direction and tool 
>> will be adopted for the agent, are mainly in the metadata services and the 
>> way OpenStack users employ them to 
>> communicate with Nova, Heat and the rest of the pack as orchestration 
>> requirements complexity increases:
>> 
> 
> Hi, Allessandro. Really interesting thoughts. Most of what you have
> described that is not about agent transport is what we discussed
> at the Icehouse summit under the topic of the hot-software-config
> blueprint. There is definitely a need for better workflow integration
> in Heat, and that work is happening now.
> 

This is great news. I was aware about this effort but didn’t know that it’s 
already in such an advanced stage. Looking forward to check it out these days!

>> 1) We need a way to post back small amounts of data (e.g. like we already do 
>> for the encrypted Windows password) for status updates,
>> so that the users know how things are going and can be properly notified in 
>> case of post-boot errors. This might be irrelevant as long as you just 
>> create a user and deploy some SSH keys,
>> but becomes very important for most orchestration templates.
>> 
> 
> Heat already has this via wait conditions. hot-software-config will
> improve upon this. I believe once a unified guest agent protocol is
> agreed upon we will make Heat use that for wait condition signalling.
> 
>> 2) The HTTP metadata service accessible from the guest with its magic number 
>> is IMO quite far from an optimal solution. Since every hypervisor commonly 
>> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host 
>> communication services, we could define a common abstraction layer which 
>> will 
>> include a guest side (to be included in cloud-init, cloudbase-init, etc) and 
>> a hypervisor side, to be implemented for each hypervisor and included in the 
>> related Nova drivers.
>> This has already been proposed / implemented in various third party 
>> scenarios, but never under the OpenStack umbrella for multiple hypervisors.
>> 
>> Metadata info can be at that point retrieved and posted by the Nova driver 
>> in a secure way and proxied to / from the guest whithout needing to expose 
>> the metadata 
>> service to the guest itself. This would also simplify Neutron, as we could 
>> get rid of the complexity of the Neutron metadata proxy. 
>> 
> 
> The neutron metadata proxy is actually relatively simple. Have a look at
> it. The basic way it works in pseudo code is:
> 
> port = lookup_requesting_ip_port(remote_ip)
> instance_id = lookup_port_instance_id(port)
> response = forward_and_sign_request_to_nova(REQUEST, instance_id, 
> conf.nova_metadata_ip)
> return response
> 

Heh, I’m quite familiar with the Neutron metadata agent, as we had to patch it 
to get metadata POST working for the Windows password generation. :-)

IMO, metadata exposed to guests via HTTP suffers from security issues due to 
direct exposure to guests (think DOS in the best case) and requires additional 
complexity for fault tolerance 
and high availability just to name a few issues.
Beside that, folks that embraced ConfigDrive for this or other reasons are cut 
out from the metadata POST option, as by definition a CDROM drive is read only.

I was sure that this was going to ge a bit of a hot topic ;). There are IMHO 
valid arguments on both sides, I don’t even see it as a mandatory alternative 
choice,
just one additional option which is being discussed since a while. 

The design and implementation IMO would be fairly easy, with the big advantage 
that it would remove most of the complexity from the deployers.

> Furthermore, if we have to embrace some com

Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-13 Thread Doug Hellmann
On Thu, Dec 12, 2013 at 9:22 PM, Christopher Yeoh  wrote:

> On Fri, Dec 13, 2013 at 4:12 AM, Jay Pipes  wrote:
>
>> On 12/11/2013 11:47 PM, Mike Perez wrote:
>>
>>> On 10:06 Thu 12 Dec , Christopher Yeoh wrote:
>>>
 On Thu, Dec 12, 2013 at 8:59 AM, Doug Hellmann
 >>> >wrote:


>
>
>  On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello <
>> ryan.petre...@dreamhost.com
>> >
>>
> wrote:
>>>

>  Hello,
>>
>> I’ve spent the past week experimenting with using Pecan for
>> Nova’s
>>
> API
>>>
 and have opened an experimental review:
>>
>>
>> https://review.openstack.org/#/c/61303/6
>>
>> …which implements the `versions` v3 endpoint using pecan (and
>>
> paves the
>>>
 way for other extensions to use pecan).  This is a *potential*
>>
>>  approach
>>>
 I've considered for gradually moving the V3 API, but I’m open
>> to other suggestions (and feedback on this approach).  I’ve
>> also got a few open questions/general observations:
>>
>> 1.  It looks like the Nova v3 API is composed *entirely* of
>> extensions (including “core” API calls), and that extensions
>> and their routes are discoverable and extensible via installed
>> software that registers
>>
> itself
>>>
 via stevedore.  This seems to lead to an API that’s composed of
>>
>>  installed
>>>
 software, which in my opinion, makes it fairly hard to map out
>> the
>>
> API (as
>>>
 opposed to how routes are manually defined in other WSGI
>>
> frameworks).  I
>>>
 assume at this time, this design decision has already been
>>
> solidified for
>>>
 v3?
>>
>>
> Yeah, I brought this up at the summit. I am still having some
> trouble understanding how we are going to express a stable core
> API for compatibility testing if the behavior of the API can be
> varied so significantly by deployment decisions. Will we just
> list each
>
 "required"
>>>
 extension, and forbid any extras for a compliant cloud?
>
>
  Maybe the issue is caused by me misunderstanding the term
> "extension," which (to me) implies an optional component but is
> perhaps reflecting a technical implementation detail instead?
>
>
>  Yes and no :-) As Ryan mentions, all API code is a plugin in the V3
 API. However, some must be loaded or the V3 API refuses to start
 up. In nova/api/openstack/__init__.py we have
 API_V3_CORE_EXTENSIONS which hard codes which extensions must be
 loaded and there is no config option to override this (blacklisting
 a core plugin will result in the V3 API not starting up).

 So for compatibility testing I think what will probably happen is
 that we'll be defining a minimum set (API_V3_CORE_EXTENSIONS) that
 must be implemented and clients can rely on that always being

>>> present
>>>
 on a compliant cloud. But clients can also then query through
 /extensions what other functionality (which is backwards compatible
 with respect to core) may also be present on that specific cloud.

>>>
>>> This really seems similar to the idea of having a router class, some
>>> controllers and you map them. From my observation at the summit,
>>> calling everything an extension creates confusion. An extension
>>> "extends" something. For example, Chrome has extensions, and they
>>> extend the idea of the core features of a browser. If you want to do
>>> more than back/forward, go to an address, stop, etc, that's an
>>> extension. If you want it to play an audio clip "stop, hammer time"
>>> after clicking the stop button, that's an example of an extension.
>>>
>>> In OpenStack, we use extensions to extend core. Core are the
>>> essential feature(s) of the project. In Cinder for example, core is
>>> volume. In core you can create a volume, delete a volume, attach a
>>> volume, detach a volume, etc. If you want to go beyond that, that's
>>> an extension. If you want to do volume encryption, that's an example
>>> of an extension.
>>>
>>> I'm worried by the discrepancies this will create among the programs.
>>> You mentioned maintainability being a plus for this. I don't think
>>> it'll be great from the deployers perspective when you have one
>>> program that thinks everything is an extension and some of them have
>>> to be enabled that the deployer has to be mindful of, while the rest
>>> of the programs consider all extensions to be optional.
>>>
>>
>> +1. I agree with most of what Mike says above. The idea that there are
>> core "extensions" in Nova's v3 API doesn't make a whole lot of sense to me.
>>
>>
> So would it help if we used the term "plugin" to talk about the framework
> that the API is implemented with,
> and extensions when talking about things which extend the core API? So the

Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint

2013-12-13 Thread Krishna Raman

On Dec 13, 2013, at 9:32 AM, Georgy Okrokvertskhov 
 wrote:

> Hi,
> 
> After reading the etherpad for Solum\Zuul integration I feel that I need more 
> clarity on this. First of all, what is missed is a positioning of Zuul in 
> overall Solum architecture. Let me explain a bit why I have this question 
> about positioning:
> 
> 1. I don't see how Solum entities (Application, Plan, Components) related to 
> Zuul workflows.

Applications/Plans have multiple DU’s. Each DU can come from 1 of:
- User provided DU
- Built from user provided binaries
- Built from user provided source
- from git
- from tar etc.

The build of the DU is what the Zuul workflow will handle. No more. After the 
DU is built, the rest of Solum workflow will take it form there.

> 
> 2. The document describes steps starting from git commit event. It is not 
> clear how workflow appears in Zuul configuration, what are steps which should 
> be performed by user? During F2F discussion we agreed that user will pass 
> some parameters required for build process and deployment process. It is not 
> clear how these parameters will appear in Zuul workflow. 

Each DU will have its own Zuul configuration. Which can be built based on infer 
provided by the user about that DU.
This does not conflict with what we discussed during F2F.

> 
> 3. From a security perspective it is not clear how Solum and Zuul will obtain 
> user authentication information if entry point will be a git commit. Should 
> user invoke Solum API somehow before git commit? Should Solum be an entry 
> point? If Zuul will invoke Solum API for actual steps it should pass user 
> authentication parameters too.

Zuul would not be exposed to the user. It will be hidden behind Solum APIs. 
Solum will take care of user auth and can ask Zuul for info it will display 
back to user.
For M1, we had decided that the git repo being accessed would be a publicly 
visible repo with no auth requirements. But for future flows, I can see Solum 
gathering
the auth info and passing it to Zuul to retrieve the repo as needed.

> 
> 4. If we have multiple users with multiple Application does that mean that we 
> will have multiple Zuul instances, or we will have multiple workflows 
> configured in Zuul? If it is a single instance will config change trig Zuul 
> service restart? 

Single Zuul instance with multiple workflows registered. One for each DU. Zuul 
already supports dynamic configuration changes.

HTH
—Kr

> 
> Thanks
> Georgy
> 
> 
> On Fri, Dec 13, 2013 at 8:56 AM, devdatta kulkarni 
>  wrote:
> -Original Message-
> From: "Krishna Raman" 
> Sent: Friday, December 13, 2013 9:44am
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
> 
> On Dec 12, 2013, at 1:39 PM, devdatta kulkarni 
>  wrote:
> 
> > We followed on the Zuul question in this week's git-integration working 
> > group meeting.
> >
> > mordred has created an etherpad with a high-level description of Zuul and 
> > how it might
> > fit with Solum't git integration workflow
> >
> > https://etherpad.openstack.org/p/ZuulSolum
> >
> > The working group seemed to be coming to the consensus that we want to use 
> > a single workflow
> > engine, as far as possible, for all of Solum's workflow needs.
> > This brought up the question about, what are really Solum's workflow 
> > requirements.
> 
> Hi
> 
> I had a long conversation with Monty yesterday and we flushed out a few 
> things I would like to run by the group.
> I have also included answers to the questions below.
> 
> >
> > At a high-level, I think that Solum has three different kinds of workflows.
> >
> > 1) Workflow around getting user code into Solum
> >   - This is the git integration piece being worked out in the 
> > git-integration
> > working group.
> 
> This is possible using the Zuul workflows. Would potentially require a little 
> work in Zuul.
> 
> >
> > 2) Workflow around creating language pack(s).
> >   - The main workflow requirement here involves ability to run tests before 
> > creating a language pack.
> > There was some discussion in language-pack working group about this 
> > requirement.
> 
> This is also possible using Zuul and in-fact would benefit Solum by providing 
> config file based build workflows
> that could be customized by ops personelle. For e.g.. one DU might require 
> SVN, another might require git
> and a jenkins CI based unit test before triggering Langpack, other DUs might 
> wish to leverage gerrit etc.
> This would be possible through Zuul without having to reinvent it on the 
> other workflow engine.
> 
> >
> > 3) Workflow around deploying created language pack(s) in order to 
> > instantiate an assembly.
> >   - The deployment may potentially contain several steps, some of which may 
> > be long running, such as
> >   populating a database. Further, there may be a need to c

Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint

2013-12-13 Thread Krishna Raman

On Dec 13, 2013, at 8:56 AM, devdatta kulkarni 
 wrote:

> -Original Message-
> From: "Krishna Raman" 
> Sent: Friday, December 13, 2013 9:44am
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
> 
> On Dec 12, 2013, at 1:39 PM, devdatta kulkarni 
>  wrote:
> 
>> We followed on the Zuul question in this week's git-integration working 
>> group meeting.
>> 
>> mordred has created an etherpad with a high-level description of Zuul and 
>> how it might
>> fit with Solum't git integration workflow
>> 
>> https://etherpad.openstack.org/p/ZuulSolum
>> 
>> The working group seemed to be coming to the consensus that we want to use a 
>> single workflow
>> engine, as far as possible, for all of Solum's workflow needs.
>> This brought up the question about, what are really Solum's workflow 
>> requirements. 
> 
> Hi
> 
> I had a long conversation with Monty yesterday and we flushed out a few 
> things I would like to run by the group.
> I have also included answers to the questions below.
> 
>> 
>> At a high-level, I think that Solum has three different kinds of workflows.
>> 
>> 1) Workflow around getting user code into Solum
>>  - This is the git integration piece being worked out in the git-integration
>>working group.
> 
> This is possible using the Zuul workflows. Would potentially require a little 
> work in Zuul.
> 
>> 
>> 2) Workflow around creating language pack(s).
>>  - The main workflow requirement here involves ability to run tests before 
>> creating a language pack.
>>There was some discussion in language-pack working group about this 
>> requirement.
> 
> This is also possible using Zuul and in-fact would benefit Solum by providing 
> config file based build workflows
> that could be customized by ops personelle. For e.g.. one DU might require 
> SVN, another might require git 
> and a jenkins CI based unit test before triggering Langpack, other DUs might 
> wish to leverage gerrit etc.
> This would be possible through Zuul without having to reinvent it on the 
> other workflow engine.
> 
>> 
>> 3) Workflow around deploying created language pack(s) in order to 
>> instantiate an assembly.
>>  - The deployment may potentially contain several steps, some of which may 
>> be long running, such as
>>  populating a database. Further, there may be a need to checkpoint 
>> intermediate steps
>>  and retry the workflow from the failed point.
> 
> This is probably not a very good fit for Zuul. It can handle simple workflow 
> but won’t be able to do the
> complex checkpointing, rollback, retry logic etc.
> 
>> 
>> 
>> mordred mentioned that #1 can be achieved by Zuul (both, push-to-solum and 
>> pull-by-solum)
>> We want to know if #2 and #3 can also be achieved by Zuul.
>> If not, we want to know what are the available options.
>> 
>> mordred, thanks for the etherpad; looking forward to the digram :)
> 
> 
> Zuul is workflow engine capable of running simple workflows. It is probably 
> not suitable for all of Solum but would
> manage the source -> DU flow quite nicely. Initially my thoughts were that I 
> wanted to avoid having 2 workflow
> engines in Solum but there is another way to look at it…
> 
> During out F2F, we had said that we should have a Solum API where we could 
> just post DU images. This would
> allow someone to build the DU outside Solum and just provide it. We could use 
> this same API as a clean interface to
> separated out the DU build flow from the DU deploy flow. Once this is done, 
> the DU build flow (#1, #2 above)
> could be cleanly handled by Zuul and the DU deploy flow by whatever complex 
> engine the rest of Solum would
> use.
> 
>>> I think this makes sense.
> 
> If I were to tie this discussion back to the various working groups and 
> blueprints, I think
> the git-integration and language-pack working groups are targeting the "DU 
> build flow" (#1 and #2).
> On the other hand, the work being done as part of 'specify-lang-pack' 
> blueprint and 'pluggable-template-generation'
> are targeting parts of #3. There would be additional blueprints for other 
> aspects of #3.

+1

> 
> - Devdatta
> 
> 
> This approach has a few advantages:
>   * Re-uses what Openstack already uses but its build & CI process (and 
> potentially makes it better)
>   * Allows operations who deploy Solum to customize their build process 
> without having to change Solum
>   * Allows us to leverage the Zuul/OpenStack-infra team to help us solve 
> the DU build flow instead of having 
> to go alone
> 
> —Krishna
> 
>> 
>> 
>> thanks,
>> devkulkarni
>> 
>> 
>> -Original Message-
>> From: "Roshan Agrawal" 
>> Sent: Monday, December 9, 2013 10:57am
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
>> 
>> 
>>> -Original Message-
>>> From: Krishna Raman [mailto:kr

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Sylvain Bauza
That's exactly why I proposed Marconi :
 - Notifications ('Marconi') is already an incubated Openstack program and
consequently we need to envisage any already existing solutions in the
Openstack ecosystem before writing a new one (aka. "silos"...)
 - Salt and other any other solutions are good but not perfect as we would
then have only one broker for a solution, with all the disagreements it
could raise (releases roll-out and backwards compatibility, vendor lock-in,
light integration with Openstack...)

Is there an Etherpad for discussing this btw ? Meetings are great but
pretty useless if we need to discuss such of design keypoints right there.




2013/12/13 Fox, Kevin M 

> That's a great idea. How about the proposal below be changed such that the
> metadata-proxy forwards the /connect like calls to marconi queue A, and the
> response like ur's go to queue B.
>
> The agent wouldn't need to know which queue's in marconi its talking to
> then, and could always talk to it.
>
> Any of the servers (savana/trove) that wanted to control the agents would
> then just have to push marconi into queue A and get responses from queue B.
>
> http is then used all the way through the process, which should make
> things easy to implement and scale.
>
> Thanks,
> Kevin
>
> 
> From: Sylvain Bauza [sylvain.ba...@gmail.com]
> Sent: Thursday, December 12, 2013 11:43 PM
> To: OpenStack Development Mailing List, (not for usage questions)
> Subject: Re: [openstack-dev] Unified Guest Agent proposal
>
> Why the notifications couldn't be handled by Marconi ?
>
> That would be up to Marconi's team to handle security issues while it is
> part of their mission statement to deliver a messaging service in between
> VMs.
>
> Le 12 déc. 2013 22:09, "Fox, Kevin M"  kevin@pnnl.gov>> a écrit :
> Yeah, I think the extra nic is unnecessary too. There already is a working
> route to 169.254.169.254, and a metadata proxy -> server running on it.
>
> So... lets brainstorm for a minute and see if there are enough pieces
> already to do most of the work.
>
> We already have:
>   * An http channel out from private vm's, past network namespaces all the
> way to the node running the neutron-metadata-agent.
>
> We need:
>   * Some way to send a command, plus arguments to the vm to execute some
> action and get a response back.
>
> OpenStack has focused on REST api's for most things and I think that is a
> great tradition to continue. This allows the custom agent plugins to be
> written in any language that can speak http (All of them?) on any platform.
>
> A REST api running in the vm wouldn't be accessible from the outside
> though on a private network.
>
> Random thought, can some glue "unified guest agent" be written to bridge
> the gap?
>
> How about something like the following:
>
> The "unified guest agent" starts up, makes an http request to
> 169.254.169.254/unified-agent/ >/connect
> If at any time the connection returns, it will auto reconnect.
> It will block as long as possible and the data returned will be an http
> request. The request will have a special header with a request id.
> The http request will be forwarded to localhost:
> and the response will be posted to
> 169.254.169.254/unified-agent/cnc_type/response/<
> http://169.254.169.254/unified-agent/cnc_type/response/>
>
> The neutron-proxy-server would need to be modified slightly so that, if it
> sees a /unified-agent//* request it:
> looks in its config file, unified-agent section, and finds the ip/port to
> contact for a given ', and forwards the request to that server,
> instead of the regular metadata one.
>
> Once this is in place, savana or trove can have their webapi registered
> with the proxy as the server for the "savana" or "trove" cnc_type. They
> will be contacted by the clients as they come up, and will be able to make
> web requests to them, an get responses back.
>
> What do you think?
>
> Thanks,
> Kevin
>
> 
> From: Ian Wells [ijw.ubu...@cack.org.uk]
> Sent: Thursday, December 12, 2013 11:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Unified Guest Agent proposal
>
> On 12 December 2013 19:48, Clint Byrum  cl...@fewbar.com>>>
> wrote:
> Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
> > On 12/10/2013 03:49 PM, Ian Wells wrote:
> > > On 10 December 2013 20:55, Clint Byrum  cl...@fewbar.com>>
> > >  cl...@fewbar.com > I've read through this email thread with quite a bit of curiosity, and I
> > have to say what Ian says above makes a lot of sense to me. If Neutron
> > can handle the creation of a "management vNIC" that has some associated
> > iptables rules 

Re: [openstack-dev] [oslo][glance] Oslo.cfg resets not really resetting the CONF

2013-12-13 Thread Ben Nemec
 

On 2013-12-13 02:44, Amala Basha Alungal wrote: 

> Hi, 
> 
> I stumbled into a situation today where in I had to write few tests that 
> modifies the oslo.config.cfg and in turn resets the values back in a tear 
> down. Acc to the docs, oslo.cfg reset() "_Clears the object state and unsets 
> overrides and defaults." _but, it doesn't seem to be happening, as the 
> subsequent tests that are run retains these modified values and tests behave 
> abnormally. The patch has been submitted for review here [1]. Am I missing 
> something obvious?

I didn't look very closely at why your tests weren't working, but this
is why we have the config fixture for tests. It handles all the
resetting for you. If _that_ doesn't work then we need to look closer.
:-) 

I left a link on the review to the appropriate file. 

-Ben 

 

Links:
--
[1] https://review.openstack.org/#/c/60188/1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint

2013-12-13 Thread Georgy Okrokvertskhov
Hi,

After reading the etherpad for Solum\Zuul integration I feel that I need
more clarity on this. First of all, what is missed is a positioning of Zuul
in overall Solum architecture. Let me explain a bit why I have this
question about positioning:

1. I don't see how Solum entities (Application, Plan, Components) related
to Zuul workflows.

2. The document describes steps starting from git commit event. It is not
clear how workflow appears in Zuul configuration, what are steps which
should be performed by user? During F2F discussion we agreed that user will
pass some parameters required for build process and deployment process. It
is not clear how these parameters will appear in Zuul workflow.

3. From a security perspective it is not clear how Solum and Zuul will
obtain user authentication information if entry point will be a git commit.
Should user invoke Solum API somehow before git commit? Should Solum be an
entry point? If Zuul will invoke Solum API for actual steps it should pass
user authentication parameters too.

4. If we have multiple users with multiple Application does that mean that
we will have multiple Zuul instances, or we will have multiple workflows
configured in Zuul? If it is a single instance will config change trig Zuul
service restart?

Thanks
Georgy


On Fri, Dec 13, 2013 at 8:56 AM, devdatta kulkarni <
devdatta.kulka...@rackspace.com> wrote:

> -Original Message-
> From: "Krishna Raman" 
> Sent: Friday, December 13, 2013 9:44am
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
>
> On Dec 12, 2013, at 1:39 PM, devdatta kulkarni <
> devdatta.kulka...@rackspace.com> wrote:
>
> > We followed on the Zuul question in this week's git-integration working
> group meeting.
> >
> > mordred has created an etherpad with a high-level description of Zuul
> and how it might
> > fit with Solum't git integration workflow
> >
> > https://etherpad.openstack.org/p/ZuulSolum
> >
> > The working group seemed to be coming to the consensus that we want to
> use a single workflow
> > engine, as far as possible, for all of Solum's workflow needs.
> > This brought up the question about, what are really Solum's workflow
> requirements.
>
> Hi
>
> I had a long conversation with Monty yesterday and we flushed out a few
> things I would like to run by the group.
> I have also included answers to the questions below.
>
> >
> > At a high-level, I think that Solum has three different kinds of
> workflows.
> >
> > 1) Workflow around getting user code into Solum
> >   - This is the git integration piece being worked out in the
> git-integration
> > working group.
>
> This is possible using the Zuul workflows. Would potentially require a
> little work in Zuul.
>
> >
> > 2) Workflow around creating language pack(s).
> >   - The main workflow requirement here involves ability to run tests
> before creating a language pack.
> > There was some discussion in language-pack working group about this
> requirement.
>
> This is also possible using Zuul and in-fact would benefit Solum by
> providing config file based build workflows
> that could be customized by ops personelle. For e.g.. one DU might require
> SVN, another might require git
> and a jenkins CI based unit test before triggering Langpack, other DUs
> might wish to leverage gerrit etc.
> This would be possible through Zuul without having to reinvent it on the
> other workflow engine.
>
> >
> > 3) Workflow around deploying created language pack(s) in order to
> instantiate an assembly.
> >   - The deployment may potentially contain several steps, some of which
> may be long running, such as
> >   populating a database. Further, there may be a need to checkpoint
> intermediate steps
> >   and retry the workflow from the failed point.
>
> This is probably not a very good fit for Zuul. It can handle simple
> workflow but won’t be able to do the
> complex checkpointing, rollback, retry logic etc.
>
> >
> >
> > mordred mentioned that #1 can be achieved by Zuul (both, push-to-solum
> and pull-by-solum)
> > We want to know if #2 and #3 can also be achieved by Zuul.
> > If not, we want to know what are the available options.
> >
> > mordred, thanks for the etherpad; looking forward to the digram :)
>
>
> Zuul is workflow engine capable of running simple workflows. It is
> probably not suitable for all of Solum but would
> manage the source -> DU flow quite nicely. Initially my thoughts were that
> I wanted to avoid having 2 workflow
> engines in Solum but there is another way to look at it…
>
> During out F2F, we had said that we should have a Solum API where we could
> just post DU images. This would
> allow someone to build the DU outside Solum and just provide it. We could
> use this same API as a clean interface to
> separated out the DU build flow from the DU deploy flow. Once this is
> done, the DU build flow (#1, #2 abo

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread David Boucha
I have some follow up information regarding the part of this discussion
about the possibility of leveraging
the Salt Minion for an agent.

I discussed this with Tom Hatch and he said that it very feasible to make
this work with Salt.
That could entail using the Salt Master, or even subclassing the Salt
Minion's base class to use
a different communication protocol.

We'd be willing to have some engineers participate in the proposed meeting
if that is welcomed.

Dave Boucha


-- 
Dave Boucha  |  Sr. Engineer

Join us at SaltConf, Jan. 28-30, 2014 in Salt Lake City. www.saltconf.com


5272 South College Drive, Suite 301 | Murray, UT 84123
*office* 801-305-3563
d...@saltstack.com | www.saltstack.com 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread Jordan OMara

On 13/12/13 11:36 +0100, Jaromir Coufal wrote:

*VERSION 0*
===
Enable user to deploy OpenStack with the simpliest TripleO way, no  
difference between hardware.


Target:
- end of icehouse-2

Features we need to get in:
- Enable manual nodes registration (Ironic)
- Get images available for user (Glance)
- Node roles (hardcode): Controller, Compute, Object Storage, Block Storage
- Design deployment (number of nodes per role)
- Deploy (Heat + Nova)


Thanks for summarizing this Jarda!

I noticed one thing missing from the V0 list that we had talked about
earlier that seemed important. Copied below from an earlier doc:

retrieve node lists (Ironic + Nova + Heat?)
   management node(s) (awareness of the node)
   resource nodes, broken down by role
   unallocated nodes

This seems also important to include in v0.
--
Jordan O'Mara 
Red Hat Engineering, Raleigh 


pgpgJlPGEy1Tc.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Fox, Kevin M
That's a great idea. How about the proposal below be changed such that the 
metadata-proxy forwards the /connect like calls to marconi queue A, and the 
response like ur's go to queue B.

The agent wouldn't need to know which queue's in marconi its talking to then, 
and could always talk to it.

Any of the servers (savana/trove) that wanted to control the agents would then 
just have to push marconi into queue A and get responses from queue B.

http is then used all the way through the process, which should make things 
easy to implement and scale.

Thanks,
Kevin


From: Sylvain Bauza [sylvain.ba...@gmail.com]
Sent: Thursday, December 12, 2013 11:43 PM
To: OpenStack Development Mailing List, (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

Why the notifications couldn't be handled by Marconi ?

That would be up to Marconi's team to handle security issues while it is part 
of their mission statement to deliver a messaging service in between VMs.

Le 12 déc. 2013 22:09, "Fox, Kevin M" 
mailto:kevin@pnnl.gov>> a écrit :
Yeah, I think the extra nic is unnecessary too. There already is a working 
route to 169.254.169.254, and a metadata proxy -> server running on it.

So... lets brainstorm for a minute and see if there are enough pieces already 
to do most of the work.

We already have:
  * An http channel out from private vm's, past network namespaces all the way 
to the node running the neutron-metadata-agent.

We need:
  * Some way to send a command, plus arguments to the vm to execute some action 
and get a response back.

OpenStack has focused on REST api's for most things and I think that is a great 
tradition to continue. This allows the custom agent plugins to be written in 
any language that can speak http (All of them?) on any platform.

A REST api running in the vm wouldn't be accessible from the outside though on 
a private network.

Random thought, can some glue "unified guest agent" be written to bridge the 
gap?

How about something like the following:

The "unified guest agent" starts up, makes an http request to 
169.254.169.254/unified-agent//connect
If at any time the connection returns, it will auto reconnect.
It will block as long as possible and the data returned will be an http 
request. The request will have a special header with a request id.
The http request will be forwarded to localhost: and 
the response will be posted to 
169.254.169.254/unified-agent/cnc_type/response/

The neutron-proxy-server would need to be modified slightly so that, if it sees 
a /unified-agent//* request it:
looks in its config file, unified-agent section, and finds the ip/port to 
contact for a given ', and forwards the request to that server, 
instead of the regular metadata one.

Once this is in place, savana or trove can have their webapi registered with 
the proxy as the server for the "savana" or "trove" cnc_type. They will be 
contacted by the clients as they come up, and will be able to make web requests 
to them, an get responses back.

What do you think?

Thanks,
Kevin


From: Ian Wells [ijw.ubu...@cack.org.uk]
Sent: Thursday, December 12, 2013 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Unified Guest Agent proposal

On 12 December 2013 19:48, Clint Byrum 
mailto:cl...@fewbar.com>>>
 wrote:
Excerpts from Jay Pipes's message of 2013-12-12 10:15:13 -0800:
> On 12/10/2013 03:49 PM, Ian Wells wrote:
> > On 10 December 2013 20:55, Clint Byrum 
> > mailto:cl...@fewbar.com>>
> >  >  wrote:
> I've read through this email thread with quite a bit of curiosity, and I
> have to say what Ian says above makes a lot of sense to me. If Neutron
> can handle the creation of a "management vNIC" that has some associated
> iptables rules governing it that provides a level of security for guest
> <-> host and guest <-> $OpenStackService, then the transport problem
> domain is essentially solved, and Neutron can be happily ignorant (as it
> should be) of any guest agent communication with anything else.
>

Indeed I think it could work, however I think the NIC is unnecessary.

Seems likely even with a second NIC that said address will be something
like 169.254.169.254 (or the ipv6 equivalent?).

There *is* no ipv6 equivalent, which is one standing problem.  Another is that 
(and admittedly you can quibble about this problem's significance) you need a 
router on a network to be able to get to 169.254.169.254 - I raise that because 
the obvious use case for multiple networks is to have a net which is *not* 
attached to the out

Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint

2013-12-13 Thread devdatta kulkarni
-Original Message-
From: "Krishna Raman" 
Sent: Friday, December 13, 2013 9:44am
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint

On Dec 12, 2013, at 1:39 PM, devdatta kulkarni 
 wrote:

> We followed on the Zuul question in this week's git-integration working group 
> meeting.
> 
> mordred has created an etherpad with a high-level description of Zuul and how 
> it might
> fit with Solum't git integration workflow
> 
> https://etherpad.openstack.org/p/ZuulSolum
> 
> The working group seemed to be coming to the consensus that we want to use a 
> single workflow
> engine, as far as possible, for all of Solum's workflow needs.
> This brought up the question about, what are really Solum's workflow 
> requirements. 

Hi

I had a long conversation with Monty yesterday and we flushed out a few things 
I would like to run by the group.
I have also included answers to the questions below.

> 
> At a high-level, I think that Solum has three different kinds of workflows.
> 
> 1) Workflow around getting user code into Solum
>   - This is the git integration piece being worked out in the git-integration
> working group.

This is possible using the Zuul workflows. Would potentially require a little 
work in Zuul.

> 
> 2) Workflow around creating language pack(s).
>   - The main workflow requirement here involves ability to run tests before 
> creating a language pack.
> There was some discussion in language-pack working group about this 
> requirement.

This is also possible using Zuul and in-fact would benefit Solum by providing 
config file based build workflows
that could be customized by ops personelle. For e.g.. one DU might require SVN, 
another might require git 
and a jenkins CI based unit test before triggering Langpack, other DUs might 
wish to leverage gerrit etc.
This would be possible through Zuul without having to reinvent it on the other 
workflow engine.

> 
> 3) Workflow around deploying created language pack(s) in order to instantiate 
> an assembly.
>   - The deployment may potentially contain several steps, some of which may 
> be long running, such as
>   populating a database. Further, there may be a need to checkpoint 
> intermediate steps
>   and retry the workflow from the failed point.

This is probably not a very good fit for Zuul. It can handle simple workflow 
but won’t be able to do the
complex checkpointing, rollback, retry logic etc.

> 
> 
> mordred mentioned that #1 can be achieved by Zuul (both, push-to-solum and 
> pull-by-solum)
> We want to know if #2 and #3 can also be achieved by Zuul.
> If not, we want to know what are the available options.
> 
> mordred, thanks for the etherpad; looking forward to the digram :)


Zuul is workflow engine capable of running simple workflows. It is probably not 
suitable for all of Solum but would
manage the source -> DU flow quite nicely. Initially my thoughts were that I 
wanted to avoid having 2 workflow
engines in Solum but there is another way to look at it…

During out F2F, we had said that we should have a Solum API where we could just 
post DU images. This would
allow someone to build the DU outside Solum and just provide it. We could use 
this same API as a clean interface to
separated out the DU build flow from the DU deploy flow. Once this is done, the 
DU build flow (#1, #2 above)
could be cleanly handled by Zuul and the DU deploy flow by whatever complex 
engine the rest of Solum would
use.

>> I think this makes sense.

If I were to tie this discussion back to the various working groups and 
blueprints, I think
the git-integration and language-pack working groups are targeting the "DU 
build flow" (#1 and #2).
On the other hand, the work being done as part of 'specify-lang-pack' blueprint 
and 'pluggable-template-generation'
are targeting parts of #3. There would be additional blueprints for other 
aspects of #3.

- Devdatta


This approach has a few advantages:
* Re-uses what Openstack already uses but its build & CI process (and 
potentially makes it better)
* Allows operations who deploy Solum to customize their build process 
without having to change Solum
* Allows us to leverage the Zuul/OpenStack-infra team to help us solve 
the DU build flow instead of having 
  to go alone

—Krishna

> 
> 
> thanks,
> devkulkarni
> 
> 
> -Original Message-
> From: "Roshan Agrawal" 
> Sent: Monday, December 9, 2013 10:57am
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
> 
> 
>> -Original Message-
>> From: Krishna Raman [mailto:kra...@gmail.com]
>> Sent: Sunday, December 08, 2013 11:24 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
>> 
>> Hi all,
>> 
>> We had a very good meeting last week aro

Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Sylvain Bauza
Apologies for the miss, I just double-checked and Nova does have its own
mission statement :
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml

Thanks,
-Sylvain


2013/12/13 Sylvain Bauza 

> Thanks Thierry.
>
> AFAIK, Compute ("Nova") is not having yet its own mission statement, so I
> guess any project with different people than regular Nova ATCs should
> consider an request for new Program if they feel there is difference in
> terms of feature delivery ?
>
> -Sylvain
>
>
> 2013/12/13 Thierry Carrez 
>
>> Sylvain Bauza wrote:
>> > While I agree with most of what Thierry said, I need clarifications
>> > though, on what a Program is,
>>
>> A "team" is a group of people working on a given mission. They can be
>> freely created. They apply to become an "OpenStack Program" if they feel
>> their (well-established) mission is essential to the production of
>> "OpenStack" and would like to place themselves under the authority of
>> the Technical Committee.
>>
>> > and what is the key point where an idea
>> > should get its own Program instead of being headed by an already
>> > existing Program.
>>
>> Depends on who is involved, and if the proposed mission is overlapping
>> with an existing Program's mission. If those are two different sets of
>> people, or the missions diverge completely, then it makes sense to make
>> a new program. If those teams share a lot of contributors and their
>> missions can be aligned, making it a single program would probably be
>> beneficial. And then there are all the shades of grey in between.
>>
>> --
>> Thierry Carrez (ttx)
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Sergey Lukjanov's message of 2013-12-13 07:46:34 -0800:
> Hi Alessandro,
> 
> it's a good idea to setup an IRC meeting for the unified agents. IMO it'll
> seriously speedup discussion. The first one could be used to determine the
> correct direction, then we can use them to discuss details and coordinate
> efforts, it will be necessary regardless of the approach.
> 

I'd like for those who are going to do the actual work to stand up and
be counted before an IRC meeting. This is starting to feel bike-sheddy
and the answer to bike-shedding is not more meetings.

I am keenly interested in this, but have limited cycles to spare for it
at this time. So I do not count myself as one of those people.

I believe that a few individuals who are involved with already working
specialized agents will be doing the work to consolidate them and to fix
the bug that they all share (Heat shares this too) which is that private
networks cannot reach their respective agent endpoints. I think those
individuals should review the original spec given the new information,
revise it, and present it here in a new thread. If there are enough of
them that they feel they should have a meeting, I suggest they organize
one. But I do not think we need more discussion on a broad scale.

Speaking of that, before I run out and report a bug that affects
Savanna Heat and Trove, is there already a bug titled something like
"Guests cannot reach [Heat/Savanna/Trove] endpoints from inside private
networks." ?

(BTW, paint it yellow!)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Sylvain Bauza
Thanks Thierry.

AFAIK, Compute ("Nova") is not having yet its own mission statement, so I
guess any project with different people than regular Nova ATCs should
consider an request for new Program if they feel there is difference in
terms of feature delivery ?

-Sylvain


2013/12/13 Thierry Carrez 

> Sylvain Bauza wrote:
> > While I agree with most of what Thierry said, I need clarifications
> > though, on what a Program is,
>
> A "team" is a group of people working on a given mission. They can be
> freely created. They apply to become an "OpenStack Program" if they feel
> their (well-established) mission is essential to the production of
> "OpenStack" and would like to place themselves under the authority of
> the Technical Committee.
>
> > and what is the key point where an idea
> > should get its own Program instead of being headed by an already
> > existing Program.
>
> Depends on who is involved, and if the proposed mission is overlapping
> with an existing Program's mission. If those are two different sets of
> people, or the missions diverge completely, then it makes sense to make
> a new program. If those teams share a lot of contributors and their
> missions can be aligned, making it a single program would probably be
> beneficial. And then there are all the shades of grey in between.
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Flavio Percoco

On 13/12/13 10:44 -0500, Russell Bryant wrote:

On 12/13/2013 10:37 AM, Flavio Percoco wrote:

On 13/12/13 15:53 +0100, Thierry Carrez wrote:

Hi everyone,

TL;DR: Incubation is getting harder, why not ask efforts to apply
for a new program first to get the visibility they need to grow.

Long version:

Last cycle we introduced the concept of "Programs" to replace
the concept of "Official projects" which was no longer working
that well for us. This was recognizing the work of existing
teams, organized around a common mission, as an integral part of
"delivering OpenStack". Contributors to programs become ATCs, so
they get to vote in Technical Committee (TC) elections. In
return, those teams place themselves under the authority of the
TC.

This created an interesting corner case. Projects applying for
incubation would actually request two concurrent things: be
considered a new "Program", and give "incubated" status to a code
repository under that program.

Over the last months we significantly raised the bar for
accepting new projects in incubation, learning from past
integration and QA mistakes. The end result is that a number of
promising projects applied for incubation but got rejected on
maturity, team size, team diversity, or current integration level
grounds.

At that point I called for some specific label, like "Emerging
Technology" that the TC could grant to promising projects that
just need more visibility, more collaboration, more
crystallization before they can make good candidates to be made
part of our integrated releases.

However, at the last TC meeting it became apparent we could
leverage "Programs" to achieve the same result. Promising efforts
would first get their mission, scope and existing results blessed
and recognized as something we'd really like to see in OpenStack
one day. Then when they are ready, they could have one of their
deliveries apply for incubation if that makes sense.

The consequences would be that the effort would place itself
under the authority of the TC. Their contributors would be ATCs
and would vote in TC elections, even if their deliveries never
make it to incubation. They would get (some) space at Design
Summits. So it's not "free", we still need to be pretty
conservative about accepting them, but it's probably manageable.

I'm still weighing the consequences, but I think it's globally
nicer than introducing another status. As long as the TC feels
free to revoke Programs that do not deliver the expected results
(or that no longer make sense in the new world order) I think
this approach would be fine.

Comments, thoughts ?




My first thought while reading this email was:

What happens if that "Emerging Technology" doesn't move forward?


Thierry addressed that at the very end of his message:

 As long as the TC feels free to revoke  Programs that do not deliver
 the expected results (or that no longer make sense in the new world
 order) I think this approach would be fine.


Yup, I just meant to say this was my first concern and that it needs
more clarification than just 'being able to revoke it'.




Will a Program with actual projects exist? (I personally think
this will create some confusion).

I guess the same thing would happen with incubated projects that
never graduate to integrated. However, the probability this would
happen are way lower. You also make a good point w.r.t ATCs and the
rights to vote.

-1 from me. I'd be even in favor to not calling any Program
official until there's an integrated *team* - not project - working
on it. Notice that I'm using the term 'team' and not projects.
Programs like 'Documentation' have an integrated team working on it
and are part of every release cycle, the same thing applies for the
"Release Cycle Management" program, etc.


We wouldn't create a program without an existing team doing some work
already.  We even have rules around now programs along side the rules
for incubating/graduating projects:

http://git.openstack.org/cgit/openstack/governance/tree/reference/new-programs-requirements


That is exactly why I'm bringing this up. Programs play an important
role in OpenStack. More important than just saying: 'Hey, someone is
working on this area', which is why I think they shouldn't be
considered official unless there's an 'integrated' team working on them.

In other words, if a project applying for incubation doesn't fit into
one of the existing programs, we have to request it to create a
program and make it part of the incubation application, which is what
we do today.

Hopefully, I'm not missing the real benefits of this proposal. If I
am, then please, let me know. :)

Cheers,
FF

--
@flaper87
Flavio Percoco


pgpl4TVdTz1UN.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-13 Thread Matt Wagner
On Mon Dec  9 15:22:04 2013, Robert Collins wrote:
> On 9 December 2013 23:56, Jaromir Coufal  wrote:
>>
>> Ironic today will want IPMI address + MAC for each NIC + disk/cpu/memory
>> stats
>>
>> For registration it is just Management MAC address which is needed right? Or
>> does Ironic need also IP? I think that MAC address might be enough, we can
>> display IP in details of node later on.
>
> Ironic needs all the details I listed today. Management MAC is not
> currently used at all, but would be needed in future when we tackle
> IPMI IP managed by Neutron.

I think what happened here is that two separate things we need got
conflated.

We need the IP address of the management (IPMI) interface, for power
control, etc.

We also need the MAC of the host system (*not* its IPMI/management
interface) for PXE to serve it the appropriate content.


-- 
Matt Wagner
Software Engineer, Red Hat



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Alessandro Pilotti's message of 2013-12-13 07:13:01 -0800:
> Hi guys,
> 
> This seems to become a pretty long thread with quite a lot of ideas. What do 
> you think about setting up a meeting on IRC to talk about what direction to 
> take?
> IMO this has the potential of becoming a completely separated project to be 
> hosted on stackforge or similar.
> 
> Generally speaking, we already use Cloudbase-Init, which beside being the de 
> facto standard Windows "Cloud-Init type feature” (Apache 2 licensed) 
> has been recently used as a base to provide the same functionality on FreeBSD.
> 
> For reference: https://github.com/cloudbase/cloudbase-init and 
> http://www.cloudbase.it/cloud-init-for-windows-instances/
> 
> We’re seriously thinking if we should transform Cloudbase-init into an agent 
> or if we should keep it on line with the current “init only, let the guest to 
> the rest” approach which fits pretty
> well with the most common deployment approaches (Heat, Puppet / Chef, Salt, 
> etc). Last time I spoke with Scott about this agent stuff for cloud-init, the 
> general intention was
> to keep the init approach as well (please correct me if I missed something in 
> the meantime).
> 
> The limitations that we see, independently from which direction and tool will 
> be adopted for the agent, are mainly in the metadata services and the way 
> OpenStack users employ them to 
> communicate with Nova, Heat and the rest of the pack as orchestration 
> requirements complexity increases:
> 

Hi, Allessandro. Really interesting thoughts. Most of what you have
described that is not about agent transport is what we discussed
at the Icehouse summit under the topic of the hot-software-config
blueprint. There is definitely a need for better workflow integration
in Heat, and that work is happening now.

> 1) We need a way to post back small amounts of data (e.g. like we already do 
> for the encrypted Windows password) for status updates,
> so that the users know how things are going and can be properly notified in 
> case of post-boot errors. This might be irrelevant as long as you just create 
> a user and deploy some SSH keys,
> but becomes very important for most orchestration templates.
>

Heat already has this via wait conditions. hot-software-config will
improve upon this. I believe once a unified guest agent protocol is
agreed upon we will make Heat use that for wait condition signalling.

> 2) The HTTP metadata service accessible from the guest with its magic number 
> is IMO quite far from an optimal solution. Since every hypervisor commonly 
> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host 
> communication services, we could define a common abstraction layer which will 
> include a guest side (to be included in cloud-init, cloudbase-init, etc) and 
> a hypervisor side, to be implemented for each hypervisor and included in the 
> related Nova drivers.
> This has already been proposed / implemented in various third party 
> scenarios, but never under the OpenStack umbrella for multiple hypervisors.
> 
> Metadata info can be at that point retrieved and posted by the Nova driver in 
> a secure way and proxied to / from the guest whithout needing to expose the 
> metadata 
> service to the guest itself. This would also simplify Neutron, as we could 
> get rid of the complexity of the Neutron metadata proxy. 
> 

The neutron metadata proxy is actually relatively simple. Have a look at
it. The basic way it works in pseudo code is:

port = lookup_requesting_ip_port(remote_ip)
instance_id = lookup_port_instance_id(port)
response = forward_and_sign_request_to_nova(REQUEST, instance_id, 
conf.nova_metadata_ip)
return response

Furthermore, if we have to embrace some complexity, I would rather do so
inside Neutron than in an agent that users must install and make work
on every guest OS.

The dumber an agent is, the better it will scale and more resilient it
will be. I would credit this principle with the success of cloud-init
(sorry, you know I love you Scott! ;). What we're talking about now is
having an equally dumb, but differently focused agent.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Thierry Carrez
Sylvain Bauza wrote:
> While I agree with most of what Thierry said, I need clarifications
> though, on what a Program is,

A "team" is a group of people working on a given mission. They can be
freely created. They apply to become an "OpenStack Program" if they feel
their (well-established) mission is essential to the production of
"OpenStack" and would like to place themselves under the authority of
the Technical Committee.

> and what is the key point where an idea
> should get its own Program instead of being headed by an already
> existing Program.

Depends on who is involved, and if the proposed mission is overlapping
with an existing Program's mission. If those are two different sets of
people, or the missions diverge completely, then it makes sense to make
a new program. If those teams share a lot of contributors and their
missions can be aligned, making it a single program would probably be
beneficial. And then there are all the shades of grey in between.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread James Slagle
On Fri, Dec 13, 2013 at 03:04:09PM +0100, Imre Farkas wrote:
> On 12/13/2013 11:36 AM, Jaromir Coufal wrote:
> >
> >*VERSION 0*
> >===
> >Enable user to deploy OpenStack with the simpliest TripleO way, no
> >difference between hardware.
> >
> >Target:
> >- end of icehouse-2
> >
> >Features we need to get in:
> >- Enable manual nodes registration (Ironic)
> >- Get images available for user (Glance)
> >- Node roles (hardcode): Controller, Compute, Object Storage, Block Storage
> >- Design deployment (number of nodes per role)
> >- Deploy (Heat + Nova)
> 
> One note to deploy: It's not done only by Heat and Nova. If we
> expect a fully functional OpenStack installation as a result, we are
> missing a few steps like creating users, initializing and
> registering the service endpoints with Keystone. In TripleO this is
> done by the init-keystone and setup-endpoints scripts. Check devtest
> for more details: 
> http://docs.openstack.org/developer/tripleo-incubator/devtest_undercloud.html

Excellent point Imre, as the deployment isn't really useable until those steps
are done.  The link to the overcloud setup steps is actually:
http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html
Very similar to what is done for the undercloud.

I think most of that logic could be reimplemented to be done via direct calls
to the API using the client libs vs using a CLI.  Not sure about
"keystone-manage pki_setup" though, would need to look into that.

--
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Sylvain Bauza
While I agree with most of what Thierry said, I need clarifications though,
on what a Program is, and what is the key point where an idea should get
its own Program instead of being headed by an already existing Program.

For example, take Barbican which is providing extra features to Keystone,
or Docker which will provide its own API for managing VMs. Are they both
eligible for new Programs, or should they stick to the existing programs
(respectively Identity and Compute) ?

If the answer is "they can be part of the already existing Programs", then
how can we leverage the difference in between the projects (ie. Keystone is
Openstack, Barbican is Stackforge), and how could at some point the code
getting incubated ?

That said, Climate (Reservations as a Service) does have the same concern,
while we're not yet planning to ask for incubation until a certain point,
which needs to be discussed internally.

Thanks,
-Sylvain




2013/12/13 Russell Bryant 

> On 12/13/2013 10:37 AM, Flavio Percoco wrote:
> > On 13/12/13 15:53 +0100, Thierry Carrez wrote:
> >> Hi everyone,
> >>
> >> TL;DR: Incubation is getting harder, why not ask efforts to apply
> >> for a new program first to get the visibility they need to grow.
> >>
> >> Long version:
> >>
> >> Last cycle we introduced the concept of "Programs" to replace
> >> the concept of "Official projects" which was no longer working
> >> that well for us. This was recognizing the work of existing
> >> teams, organized around a common mission, as an integral part of
> >> "delivering OpenStack". Contributors to programs become ATCs, so
> >> they get to vote in Technical Committee (TC) elections. In
> >> return, those teams place themselves under the authority of the
> >> TC.
> >>
> >> This created an interesting corner case. Projects applying for
> >> incubation would actually request two concurrent things: be
> >> considered a new "Program", and give "incubated" status to a code
> >> repository under that program.
> >>
> >> Over the last months we significantly raised the bar for
> >> accepting new projects in incubation, learning from past
> >> integration and QA mistakes. The end result is that a number of
> >> promising projects applied for incubation but got rejected on
> >> maturity, team size, team diversity, or current integration level
> >> grounds.
> >>
> >> At that point I called for some specific label, like "Emerging
> >> Technology" that the TC could grant to promising projects that
> >> just need more visibility, more collaboration, more
> >> crystallization before they can make good candidates to be made
> >> part of our integrated releases.
> >>
> >> However, at the last TC meeting it became apparent we could
> >> leverage "Programs" to achieve the same result. Promising efforts
> >> would first get their mission, scope and existing results blessed
> >> and recognized as something we'd really like to see in OpenStack
> >> one day. Then when they are ready, they could have one of their
> >> deliveries apply for incubation if that makes sense.
> >>
> >> The consequences would be that the effort would place itself
> >> under the authority of the TC. Their contributors would be ATCs
> >> and would vote in TC elections, even if their deliveries never
> >> make it to incubation. They would get (some) space at Design
> >> Summits. So it's not "free", we still need to be pretty
> >> conservative about accepting them, but it's probably manageable.
> >>
> >> I'm still weighing the consequences, but I think it's globally
> >> nicer than introducing another status. As long as the TC feels
> >> free to revoke Programs that do not deliver the expected results
> >> (or that no longer make sense in the new world order) I think
> >> this approach would be fine.
> >>
> >> Comments, thoughts ?
> >>
> >
> >
> > My first thought while reading this email was:
> >
> > What happens if that "Emerging Technology" doesn't move forward?
>
> Thierry addressed that at the very end of his message:
>
>   As long as the TC feels free to revoke  Programs that do not deliver
>   the expected results (or that no longer make sense in the new world
>   order) I think this approach would be fine.
>
> > Will a Program with actual projects exist? (I personally think
> > this will create some confusion).
> >
> > I guess the same thing would happen with incubated projects that
> > never graduate to integrated. However, the probability this would
> > happen are way lower. You also make a good point w.r.t ATCs and the
> > rights to vote.
> >
> > -1 from me. I'd be even in favor to not calling any Program
> > official until there's an integrated *team* - not project - working
> > on it. Notice that I'm using the term 'team' and not projects.
> > Programs like 'Documentation' have an integrated team working on it
> > and are part of every release cycle, the same thing applies for the
> > "Release Cycle Management" program, etc.
>
> We wouldn't create a program without an existing team doing some work

[openstack-dev] [Neutron] blueprint ovs-firewall-driver follow-up meeting

2013-12-13 Thread Amir Sadoughi
Hello all,

On Wednesday, at the ML2 meeting we had an agenda item[1] to discuss the 
blueprint ovs-firewall-driver’s progress and technical challenges. We didn’t 
have time to discuss everything, so at the suggestion of Bob K. I am scheduling 
a meeting for Monday.

Looking at the calendar of OpenStack meetings[2], Monday at 2000 UTC (right 
before Neutron meeting) is open for #openstack-meeting. Looking forward to 
continue the discussion there.

Thanks,

Amir Sadoughi

[1] https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_Dec_11.2C_2013
[2] https://wiki.openstack.org/wiki/Meetings
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Ian Wells
On 13 December 2013 16:13, Alessandro Pilotti <
apilo...@cloudbasesolutions.com> wrote:

> 2) The HTTP metadata service accessible from the guest with its magic
> number is IMO quite far from an optimal solution. Since every hypervisor
> commonly
> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest /
> host communication services, we could define a common abstraction layer
> which will
> include a guest side (to be included in cloud-init, cloudbase-init, etc)
> and a hypervisor side, to be implemented for each hypervisor and included
> in the related Nova drivers.
> This has already been proposed / implemented in various third party
> scenarios, but never under the OpenStack umbrella for multiple hypervisors.
>

Firstly, what's wrong with the single anycast IP address mechanism that
makes it 'not an optimal solution'?

While I agree we could, theoretically, make KVM, Xen, Docker, Hyper-V,
VMWare and so on all implement the same backdoor mechanism - unlikely as
that seems - and then implement a userspace mechanism to match in every
cloud-init service in Windows, Linux, *BSD (and we then have a problem with
niche OSes, too, so this mechanism had better be easy to implement, and
it's likely to involve the kernel) it's hard.  And we still come unstuck
when we get to bare metal, because these interfaces just can't be added
there.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Russell Bryant
On 12/13/2013 10:37 AM, Flavio Percoco wrote:
> On 13/12/13 15:53 +0100, Thierry Carrez wrote:
>> Hi everyone,
>> 
>> TL;DR: Incubation is getting harder, why not ask efforts to apply
>> for a new program first to get the visibility they need to grow.
>> 
>> Long version:
>> 
>> Last cycle we introduced the concept of "Programs" to replace
>> the concept of "Official projects" which was no longer working
>> that well for us. This was recognizing the work of existing
>> teams, organized around a common mission, as an integral part of
>> "delivering OpenStack". Contributors to programs become ATCs, so
>> they get to vote in Technical Committee (TC) elections. In
>> return, those teams place themselves under the authority of the
>> TC.
>> 
>> This created an interesting corner case. Projects applying for 
>> incubation would actually request two concurrent things: be
>> considered a new "Program", and give "incubated" status to a code
>> repository under that program.
>> 
>> Over the last months we significantly raised the bar for
>> accepting new projects in incubation, learning from past
>> integration and QA mistakes. The end result is that a number of
>> promising projects applied for incubation but got rejected on
>> maturity, team size, team diversity, or current integration level
>> grounds.
>> 
>> At that point I called for some specific label, like "Emerging 
>> Technology" that the TC could grant to promising projects that
>> just need more visibility, more collaboration, more
>> crystallization before they can make good candidates to be made
>> part of our integrated releases.
>> 
>> However, at the last TC meeting it became apparent we could
>> leverage "Programs" to achieve the same result. Promising efforts
>> would first get their mission, scope and existing results blessed
>> and recognized as something we'd really like to see in OpenStack
>> one day. Then when they are ready, they could have one of their
>> deliveries apply for incubation if that makes sense.
>> 
>> The consequences would be that the effort would place itself
>> under the authority of the TC. Their contributors would be ATCs
>> and would vote in TC elections, even if their deliveries never
>> make it to incubation. They would get (some) space at Design
>> Summits. So it's not "free", we still need to be pretty
>> conservative about accepting them, but it's probably manageable.
>> 
>> I'm still weighing the consequences, but I think it's globally
>> nicer than introducing another status. As long as the TC feels
>> free to revoke Programs that do not deliver the expected results
>> (or that no longer make sense in the new world order) I think
>> this approach would be fine.
>> 
>> Comments, thoughts ?
>> 
> 
> 
> My first thought while reading this email was:
> 
> What happens if that "Emerging Technology" doesn't move forward?

Thierry addressed that at the very end of his message:

  As long as the TC feels free to revoke  Programs that do not deliver
  the expected results (or that no longer make sense in the new world
  order) I think this approach would be fine.

> Will a Program with actual projects exist? (I personally think
> this will create some confusion).
> 
> I guess the same thing would happen with incubated projects that
> never graduate to integrated. However, the probability this would
> happen are way lower. You also make a good point w.r.t ATCs and the
> rights to vote.
> 
> -1 from me. I'd be even in favor to not calling any Program
> official until there's an integrated *team* - not project - working
> on it. Notice that I'm using the term 'team' and not projects.
> Programs like 'Documentation' have an integrated team working on it
> and are part of every release cycle, the same thing applies for the
> "Release Cycle Management" program, etc.

We wouldn't create a program without an existing team doing some work
already.  We even have rules around now programs along side the rules
for incubating/graduating projects:

http://git.openstack.org/cgit/openstack/governance/tree/reference/new-programs-requirements

> With the above, I'm basically saying that a Queuing ;) program 
> shouldn't exist until there's an integrated team of folks working
> on queuing. Incubation doesn't guarantees integration and
> "emerging technology" doesn't guarantees incubation. Both stages
> mean there's interest about that technology and that we're looking
> forward to see it being part of OpenStack, period. Each stage
> probably means a bit more than that but, IMHO, that's the
> 'community' point of view of those stages.
> 
> What if we have a TC-managed* Program incubation period? The
> Program won't be managed by the team working on the emerging
> technology, nor the team working on the incubated project. Until
> those projects don't graduate, the program won't be official nor
> will have the 'rights' of other programs. And if the project fits
> into another program, then it won't be officially part of it until
> it graduates.

Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint

2013-12-13 Thread Krishna Raman
On Dec 12, 2013, at 1:39 PM, devdatta kulkarni 
 wrote:

> We followed on the Zuul question in this week's git-integration working group 
> meeting.
> 
> mordred has created an etherpad with a high-level description of Zuul and how 
> it might
> fit with Solum't git integration workflow
> 
> https://etherpad.openstack.org/p/ZuulSolum
> 
> The working group seemed to be coming to the consensus that we want to use a 
> single workflow
> engine, as far as possible, for all of Solum's workflow needs.
> This brought up the question about, what are really Solum's workflow 
> requirements. 

Hi

I had a long conversation with Monty yesterday and we flushed out a few things 
I would like to run by the group.
I have also included answers to the questions below.

> 
> At a high-level, I think that Solum has three different kinds of workflows.
> 
> 1) Workflow around getting user code into Solum
>   - This is the git integration piece being worked out in the git-integration
> working group.

This is possible using the Zuul workflows. Would potentially require a little 
work in Zuul.

> 
> 2) Workflow around creating language pack(s).
>   - The main workflow requirement here involves ability to run tests before 
> creating a language pack.
> There was some discussion in language-pack working group about this 
> requirement.

This is also possible using Zuul and in-fact would benefit Solum by providing 
config file based build workflows
that could be customized by ops personelle. For e.g.. one DU might require SVN, 
another might require git 
and a jenkins CI based unit test before triggering Langpack, other DUs might 
wish to leverage gerrit etc.
This would be possible through Zuul without having to reinvent it on the other 
workflow engine.

> 
> 3) Workflow around deploying created language pack(s) in order to instantiate 
> an assembly.
>   - The deployment may potentially contain several steps, some of which may 
> be long running, such as
>   populating a database. Further, there may be a need to checkpoint 
> intermediate steps
>   and retry the workflow from the failed point.

This is probably not a very good fit for Zuul. It can handle simple workflow 
but won’t be able to do the
complex checkpointing, rollback, retry logic etc.

> 
> 
> mordred mentioned that #1 can be achieved by Zuul (both, push-to-solum and 
> pull-by-solum)
> We want to know if #2 and #3 can also be achieved by Zuul.
> If not, we want to know what are the available options.
> 
> mordred, thanks for the etherpad; looking forward to the digram :)


Zuul is workflow engine capable of running simple workflows. It is probably not 
suitable for all of Solum but would
manage the source -> DU flow quite nicely. Initially my thoughts were that I 
wanted to avoid having 2 workflow
engines in Solum but there is another way to look at it…

During out F2F, we had said that we should have a Solum API where we could just 
post DU images. This would
allow someone to build the DU outside Solum and just provide it. We could use 
this same API as a clean interface to
separated out the DU build flow from the DU deploy flow. Once this is done, the 
DU build flow (#1, #2 above)
could be cleanly handled by Zuul and the DU deploy flow by whatever complex 
engine the rest of Solum would
use.

This approach has a few advantages:
* Re-uses what Openstack already uses but its build & CI process (and 
potentially makes it better)
* Allows operations who deploy Solum to customize their build process 
without having to change Solum
* Allows us to leverage the Zuul/OpenStack-infra team to help us solve 
the DU build flow instead of having 
  to go alone

—Krishna

> 
> 
> thanks,
> devkulkarni
> 
> 
> -Original Message-
> From: "Roshan Agrawal" 
> Sent: Monday, December 9, 2013 10:57am
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
> 
> 
>> -Original Message-
>> From: Krishna Raman [mailto:kra...@gmail.com]
>> Sent: Sunday, December 08, 2013 11:24 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Solum] Using Zuul in the Git-pull blueprint
>> 
>> Hi all,
>> 
>> We had a very good meeting last week around the git-pull blueprint. During
>> the discussion, Monty suggested using Zuul to manage the git repository
>> access and workflow.
>> While he is working on sending the group a diagram and description of what
>> he has in mind, I had a couple of other questions which I am hoping the
>> extended group will be able to answer.
>> 
>> 1) Zuul is currently an infrastructure project.
>>  - Is there anything that prevents us from using it in Solum?
>>  - Does it need to be moved to a normal OpenStack project?
>> 
>> 2) Zuul provides a sort of workflow engine. This workflow engine could
>> potentially be used to initiate and manage: API Post -> git flow -> lang pack

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Sergey Lukjanov
Hi Alessandro,

it's a good idea to setup an IRC meeting for the unified agents. IMO it'll
seriously speedup discussion. The first one could be used to determine the
correct direction, then we can use them to discuss details and coordinate
efforts, it will be necessary regardless of the approach.

Thanks.


On Fri, Dec 13, 2013 at 7:13 PM, Alessandro Pilotti <
apilo...@cloudbasesolutions.com> wrote:

> Hi guys,
>
> This seems to become a pretty long thread with quite a lot of ideas. What
> do you think about setting up a meeting on IRC to talk about what direction
> to take?
> IMO this has the potential of becoming a completely separated project to
> be hosted on stackforge or similar.
>
> Generally speaking, we already use Cloudbase-Init, which beside being the
> de facto standard Windows "Cloud-Init type feature” (Apache 2 licensed)
> has been recently used as a base to provide the same functionality on
> FreeBSD.
>
> For reference: https://github.com/cloudbase/cloudbase-init and
> http://www.cloudbase.it/cloud-init-for-windows-instances/
>
> We’re seriously thinking if we should transform Cloudbase-init into an
> agent or if we should keep it on line with the current “init only, let the
> guest to the rest” approach which fits pretty
> well with the most common deployment approaches (Heat, Puppet / Chef,
> Salt, etc). Last time I spoke with Scott about this agent stuff for
> cloud-init, the general intention was
> to keep the init approach as well (please correct me if I missed something
> in the meantime).
>
> The limitations that we see, independently from which direction and tool
> will be adopted for the agent, are mainly in the metadata services and the
> way OpenStack users employ them to
> communicate with Nova, Heat and the rest of the pack as orchestration
> requirements complexity increases:
>
> 1) We need a way to post back small amounts of data (e.g. like we already
> do for the encrypted Windows password) for status updates,
> so that the users know how things are going and can be properly notified
> in case of post-boot errors. This might be irrelevant as long as you just
> create a user and deploy some SSH keys,
> but becomes very important for most orchestration templates.
>
> 2) The HTTP metadata service accessible from the guest with its magic
> number is IMO quite far from an optimal solution. Since every hypervisor
> commonly
> used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest /
> host communication services, we could define a common abstraction layer
> which will
> include a guest side (to be included in cloud-init, cloudbase-init, etc)
> and a hypervisor side, to be implemented for each hypervisor and included
> in the related Nova drivers.
> This has already been proposed / implemented in various third party
> scenarios, but never under the OpenStack umbrella for multiple hypervisors.
>
> Metadata info can be at that point retrieved and posted by the Nova driver
> in a secure way and proxied to / from the guest whithout needing to expose
> the metadata
> service to the guest itself. This would also simplify Neutron, as we could
> get rid of the complexity of the Neutron metadata proxy.
>
>
>
> Alessandro
>
>
> On 13 Dec 2013, at 16:28 , Scott Moser  wrote:
>
> > On Tue, 10 Dec 2013, Ian Wells wrote:
> >
> >> On 10 December 2013 20:55, Clint Byrum  wrote:
> >>
> >>> If it is just a network API, it works the same for everybody. This
> >>> makes it simpler, and thus easier to scale out independently of compute
> >>> hosts. It is also something we already support and can very easily
> expand
> >>> by just adding a tiny bit of functionality to neutron-metadata-agent.
> >>>
> >>> In fact we can even push routes via DHCP to send agent traffic through
> >>> a different neutron-metadata-agent, so I don't see any issue where we
> >>> are piling anything on top of an overstressed single resource. We can
> >>> have neutron route this traffic directly to the Heat API which hosts
> it,
> >>> and that can be load balanced and etc. etc. What is the exact scenario
> >>> you're trying to avoid?
> >>>
> >>
> >> You may be making even this harder than it needs to be.  You can create
> >> multiple networks and attach machines to multiple networks.  Every
> point so
> >> far has been 'why don't we use  as a backdoor into our VM without
> >> affecting the VM in any other way' - why can't that just be one more
> >> network interface set aside for whatever management  instructions are
> >> appropriate?  And then what needs pushing into Neutron is nothing more
> >> complex than strong port firewalling to prevent the slaves/minions
> talking
> >> to each other.  If you absolutely must make the communication come from
> a
> >
> > +1
> >
> > tcp/ip works *really* well as a communication mechanism.  I'm planning on
> > using it to send this email.
> >
> > For controlled guests, simply don't break your networking.  Anything that
> > could break networking can break /dev/ also.
> >
> > Fwiw, 

[openstack-dev] [Neutron] Cores - Prioritize merging migration fixes after tox change merges

2013-12-13 Thread Maru Newby
As per Anita's email, we're not to approve anything until the following tox fix 
merges:  https://review.openstack.org/#/c/60825

Please keep an eye on the change, and once it merges, make sure that the 
following patches merge before regular approval rules resume:

https://review.openstack.org/#/c/61677 
https://review.openstack.org/#/c/61663

Without these migration patches, devstack is broken for neutron.

Thanks!


Maru


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Flavio Percoco

On 13/12/13 15:53 +0100, Thierry Carrez wrote:

Hi everyone,

TL;DR:
Incubation is getting harder, why not ask efforts to apply for a new
program first to get the visibility they need to grow.

Long version:

Last cycle we introduced the concept of "Programs" to replace the
concept of "Official projects" which was no longer working that well for
us. This was recognizing the work of existing teams, organized around a
common mission, as an integral part of "delivering OpenStack".
Contributors to programs become ATCs, so they get to vote in Technical
Committee (TC) elections. In return, those teams place themselves under
the authority of the TC.

This created an interesting corner case. Projects applying for
incubation would actually request two concurrent things: be considered a
new "Program", and give "incubated" status to a code repository under
that program.

Over the last months we significantly raised the bar for accepting new
projects in incubation, learning from past integration and QA mistakes.
The end result is that a number of promising projects applied for
incubation but got rejected on maturity, team size, team diversity, or
current integration level grounds.

At that point I called for some specific label, like "Emerging
Technology" that the TC could grant to promising projects that just need
more visibility, more collaboration, more crystallization before they
can make good candidates to be made part of our integrated releases.

However, at the last TC meeting it became apparent we could leverage
"Programs" to achieve the same result. Promising efforts would first get
their mission, scope and existing results blessed and recognized as
something we'd really like to see in OpenStack one day. Then when they
are ready, they could have one of their deliveries apply for incubation
if that makes sense.

The consequences would be that the effort would place itself under the
authority of the TC. Their contributors would be ATCs and would vote in
TC elections, even if their deliveries never make it to incubation. They
would get (some) space at Design Summits. So it's not "free", we still
need to be pretty conservative about accepting them, but it's probably
manageable.

I'm still weighing the consequences, but I think it's globally nicer
than introducing another status. As long as the TC feels free to revoke
Programs that do not deliver the expected results (or that no longer
make sense in the new world order) I think this approach would be fine.

Comments, thoughts ?




My first thought while reading this email was:

What happens if that "Emerging Technology" doesn't move forward?

Will a Program with actual projects exist? (I personally think this
will create some confusion).

I guess the same thing would happen with incubated projects that never
graduate to integrated. However, the probability this would happen are
way lower. You also make a good point w.r.t ATCs and the rights to vote.

-1 from me. I'd be even in favor to not calling any Program official
until there's an integrated *team* - not project - working on it.
Notice that I'm using the term 'team' and not projects. Programs like
'Documentation' have an integrated team working on it and are part of
every release cycle, the same thing applies for the "Release Cycle
Management" program, etc.

With the above, I'm basically saying that a Queuing ;) program
shouldn't exist until there's an integrated team of folks working on
queuing. Incubation doesn't guarantees integration and "emerging
technology" doesn't guarantees incubation. Both stages mean there's
interest about that technology and that we're looking forward to see
it being part of OpenStack, period. Each stage probably means a bit
more than that but, IMHO, that's the 'community' point of view of
those stages.

What if we have a TC-managed* Program incubation period? The Program
won't be managed by the team working on the emerging technology, nor
the team working on the incubated project. Until those projects don't
graduate, the program won't be official nor will have the 'rights' of
other programs. And if the project fits into another program, then it
won't be officially part of it until it graduates.

Unless I'm completely wrong about what a program is / should be, I'm
leaning towards -1.

* I'm sorry, I couldn't come up with a better term for this. :)

Cheers,
FF


--
@flaper87
Flavio Percoco


pgp7UZvRUQkcr.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Alessandro Pilotti
Hi guys,

This seems to become a pretty long thread with quite a lot of ideas. What do 
you think about setting up a meeting on IRC to talk about what direction to 
take?
IMO this has the potential of becoming a completely separated project to be 
hosted on stackforge or similar.

Generally speaking, we already use Cloudbase-Init, which beside being the de 
facto standard Windows "Cloud-Init type feature” (Apache 2 licensed) 
has been recently used as a base to provide the same functionality on FreeBSD.

For reference: https://github.com/cloudbase/cloudbase-init and 
http://www.cloudbase.it/cloud-init-for-windows-instances/

We’re seriously thinking if we should transform Cloudbase-init into an agent or 
if we should keep it on line with the current “init only, let the guest to the 
rest” approach which fits pretty
well with the most common deployment approaches (Heat, Puppet / Chef, Salt, 
etc). Last time I spoke with Scott about this agent stuff for cloud-init, the 
general intention was
to keep the init approach as well (please correct me if I missed something in 
the meantime).

The limitations that we see, independently from which direction and tool will 
be adopted for the agent, are mainly in the metadata services and the way 
OpenStack users employ them to 
communicate with Nova, Heat and the rest of the pack as orchestration 
requirements complexity increases:

1) We need a way to post back small amounts of data (e.g. like we already do 
for the encrypted Windows password) for status updates,
so that the users know how things are going and can be properly notified in 
case of post-boot errors. This might be irrelevant as long as you just create a 
user and deploy some SSH keys,
but becomes very important for most orchestration templates.

2) The HTTP metadata service accessible from the guest with its magic number is 
IMO quite far from an optimal solution. Since every hypervisor commonly 
used in OpenStack (e.g. KVM, XenServer, Hyper-V, ESXi) provides guest / host 
communication services, we could define a common abstraction layer which will 
include a guest side (to be included in cloud-init, cloudbase-init, etc) and a 
hypervisor side, to be implemented for each hypervisor and included in the 
related Nova drivers.
This has already been proposed / implemented in various third party scenarios, 
but never under the OpenStack umbrella for multiple hypervisors.

Metadata info can be at that point retrieved and posted by the Nova driver in a 
secure way and proxied to / from the guest whithout needing to expose the 
metadata 
service to the guest itself. This would also simplify Neutron, as we could get 
rid of the complexity of the Neutron metadata proxy. 



Alessandro


On 13 Dec 2013, at 16:28 , Scott Moser  wrote:

> On Tue, 10 Dec 2013, Ian Wells wrote:
> 
>> On 10 December 2013 20:55, Clint Byrum  wrote:
>> 
>>> If it is just a network API, it works the same for everybody. This
>>> makes it simpler, and thus easier to scale out independently of compute
>>> hosts. It is also something we already support and can very easily expand
>>> by just adding a tiny bit of functionality to neutron-metadata-agent.
>>> 
>>> In fact we can even push routes via DHCP to send agent traffic through
>>> a different neutron-metadata-agent, so I don't see any issue where we
>>> are piling anything on top of an overstressed single resource. We can
>>> have neutron route this traffic directly to the Heat API which hosts it,
>>> and that can be load balanced and etc. etc. What is the exact scenario
>>> you're trying to avoid?
>>> 
>> 
>> You may be making even this harder than it needs to be.  You can create
>> multiple networks and attach machines to multiple networks.  Every point so
>> far has been 'why don't we use  as a backdoor into our VM without
>> affecting the VM in any other way' - why can't that just be one more
>> network interface set aside for whatever management  instructions are
>> appropriate?  And then what needs pushing into Neutron is nothing more
>> complex than strong port firewalling to prevent the slaves/minions talking
>> to each other.  If you absolutely must make the communication come from a
> 
> +1
> 
> tcp/ip works *really* well as a communication mechanism.  I'm planning on
> using it to send this email.
> 
> For controlled guests, simply don't break your networking.  Anything that
> could break networking can break /dev/ also.
> 
> Fwiw, we already have an extremely functional "agent" in just about every
> [linux] node in sshd.  Its capable of marshalling just about anything in
> and out of the node. (note, i fully realize there are good reasons for
> more specific agent, lots of them exist).
> 
> I've really never understood "we don't want to rely on networking as a
> transport".
> 
>> system agent and go to a VM, then that can be done by attaching the system
>> agent to the administrative network - from within the system agent, which
>> is the thing that needs this, rather than with

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Scott Moser's message of 2013-12-13 06:28:08 -0800:
> On Tue, 10 Dec 2013, Ian Wells wrote:
> 
> > On 10 December 2013 20:55, Clint Byrum  wrote:
> >
> > > If it is just a network API, it works the same for everybody. This
> > > makes it simpler, and thus easier to scale out independently of compute
> > > hosts. It is also something we already support and can very easily expand
> > > by just adding a tiny bit of functionality to neutron-metadata-agent.
> > >
> > > In fact we can even push routes via DHCP to send agent traffic through
> > > a different neutron-metadata-agent, so I don't see any issue where we
> > > are piling anything on top of an overstressed single resource. We can
> > > have neutron route this traffic directly to the Heat API which hosts it,
> > > and that can be load balanced and etc. etc. What is the exact scenario
> > > you're trying to avoid?
> > >
> >
> > You may be making even this harder than it needs to be.  You can create
> > multiple networks and attach machines to multiple networks.  Every point so
> > far has been 'why don't we use  as a backdoor into our VM without
> > affecting the VM in any other way' - why can't that just be one more
> > network interface set aside for whatever management  instructions are
> > appropriate?  And then what needs pushing into Neutron is nothing more
> > complex than strong port firewalling to prevent the slaves/minions talking
> > to each other.  If you absolutely must make the communication come from a
> 
> +1
> 
> tcp/ip works *really* well as a communication mechanism.  I'm planning on
> using it to send this email.
> 
> For controlled guests, simply don't break your networking.  Anything that
> could break networking can break /dev/ also.
> 

Who discussed breaking networking?

> Fwiw, we already have an extremely functional "agent" in just about every
> [linux] node in sshd.  Its capable of marshalling just about anything in
> and out of the node. (note, i fully realize there are good reasons for
> more specific agent, lots of them exist).
> 

This was already covered way back in the thread. sshd is a backdoor
agent, and thus undesirable for this purpose. Locking it down is more
effort than adopting an agent which is meant to be limited to specific
tasks.

Also SSH is a push agent, so Savanna/Heat/Trove would have to find the
VM, and reach into it to do things. A pull agent scales well because you
only have to tell the nodes where to pull things from, and then you can
add more things to pull from behind that endpoint without having to
update the nodes.

> I've really never understood "we don't want to rely on networking as a
> transport".
> 

You may have gone to plaid with this one. Not sure what you mean. AFAICT
the direct-to-hypervisor tricks are not exactly popular in this thread.
Were you referring to something else?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] configuration groups and datastores type/versions

2013-12-13 Thread Daniel Morris
Good point...

In this case however, couldn't you solve this by simply allowing the user
to specify a list of multiple id's for both the datastore-id's and
datastore-version-id's?  That way the user can directly control which
configurations apply to different types and versions (choosing to apply
0,1, or many).  I am not sure how the provider would be able to directly
manage those on behalf of the user as they would not know which options
actually apply across the different types and versions (unless that too
was maintained).  I could be misunderstanding your proposal though.

Daniel







On 12/12/13 6:02 PM, "McReynolds, Auston"  wrote:

>Another Example:
>
>  Datastore Type | Version
>  -
>  MySQL 5.5  | 5.5.35
>  MySQL 5.5  | 5.5.20
>  MySQL 5.6  | 5.6.15
>  
>
>A user creates a MySQL 5.5 configuration-group that merely consists
>of a innodb_buffer_pool_size override. The innodb_buffer_pool_size
>parameter is still featured in MySQL 5.6, so arguably the
>configuration-group should work with MySQL 5.6 as well. If a
>configuration-group can only be tied to a single datastore type
>and/or a single datastore-version, this will not work.
>
>To support all possible permutations, a "compatibility" list of sorts
>has to be introduced.
>
>Table: configuration_datastore_compatibility
>
>  Name| Description
>  --
>  id| PrimaryKey, Generated UUID
>  from_version_id | ForeignKey(datastore_version.id)
>  to_version_id  | ForeignKey(datastore_version.id)
>
>The cloud provider can then be responsible for updating the
>compatibility table (via trove-manage) whenever a new version of a
>datastore is introduced and has a strict superset of configuration
>parameters as compared to previous versions.
>
>On a related note, it would probably behoove us to consider how to
>handle datastore migrations in relation to configuration-groups.
>A rough-draft blueprint/gist for datastore migrations is located at
>https://gist.github.com/amcrn/dfd493200fcdfdb61a23.
>
>
>Auston
>
>---
>
>From:  Craig Vyvial 
>Reply-To:  "OpenStack Development Mailing List (not for usage questions)"
>
>Date:  Wednesday, December 11, 2013 8:52 AM
>To:  OpenStack Development Mailing List
>
>Subject:  [openstack-dev] [trove] configuration groups and
>datastores type/versions
>
>
>Configuration Groups is currently developed to associate the datastore
>version with a configuration that is created. If a datastore version is
>not presented it will use the default similar to the way instances are
>created now. This looks like
> a way of associating the configuration with a datastore because an
>instance has this same association.
>
>Depending on how you setup your datastore types and versions this might
>not be ideal.
>Example:
>Datastore Type | Version
>-
>Mysql  | 5.1
>Mysql  | 5.5
>
>Percona| 5.5
>-
>
>Configuration  | datastore_version
>---
>mysql-5.5-config   | mysql 5.5
>
>percona-5.5-config | percona 5.5
>
>---
>
>or 
>
>Datastore Type | Version
>-
>Mysql 5.1  | 5.1.12
>Mysql 5.1  | 5.1.13
>
>Mysql  | 5.5.32
>
>Percona| 5.5.44
>-
>
>
>Configuration  | datastore_version
>---
>mysql-5.1-config   | mysql 5.5
>
>percona-5.5-config | percona 5.5
>
>---
>
>
>
>Notice that if you associate the configuration with a datastore version
>then in the latter example you will not be able to use the same
>configurations that you created with different minor versions of the
>datastore. 
>
>Something that we should consider is allowing a configuration to be
>associated with a just a datastore type (eg. Mysql 5.1) so that any
>versions of 5.1 should allow the same configuration to be applied.
>
>I do not view this as a change that needs to happen before the current
>code is merged but more as an additive feature of configurations.
>
>
>*snippet from Morris and I talking about this*
>Given the nature of how the datastore / types code has been implemented in
>that it is highly configurable, I believe that we we need to adjust the
>way in which we are associating configuration groups with datastore types
>and versions.  The main
> use case that I am considering here is that as a user of the API, I want
>to be able to associate configurations with a specific datastore type so
>that I can easily return a list of the configurations that are valid for
>that database type (Example: Get me a
> list of configurations for MySQL 5.6).   We know that configurations will
>vary across types (MySQL vs. Redis) as well as across major versions
>(MySQL 5.1

Re: [openstack-dev] Generic question: Any tips for 'keeping up' with the mailing lists?

2013-12-13 Thread Flavio Percoco

On 12/12/13 17:52 +0100, Thierry Carrez wrote:

Russell Bryant wrote:

On 12/12/2013 11:23 AM, Justin Hammond wrote:

I am a developer who is currently having troubles keeping up with the
mailing list due to volume, and my inability to organize it in my client.
I am nearly forced to use Outlook 2011 for Mac and I have read and
attempted to implement
https://wiki.openstack.org/wiki/MailingListEtiquette but it is still a lot
to deal with. I read once a topic or wiki page on using X-Topics but I
have no idea how to set that in outlook (google has told me that the
feature was removed).

I'm not sure if this is a valid place for this question, but I *am* having
difficulty as a developer.

Thank you for anyone who takes the time to read this.


The trick is defining what "keeping up" means for you.  I doubt anyone
reads everything.  I certainly don't.

First, I filter all of openstack-dev into its own folder.  I'm sure
others filter more aggressively based on topic, but I don't since I know
I may be interested in threads in any of the topics.  Figure out what
filtering works for you.

I scan subjects for the threads I'd probably be most interested in.
While I'm scanning, I'm first looking for topic tags, like [Nova], then
I read the subject and decide whether I want to dive in and read the
rest.  It happens very quickly, but that's roughly my thought process.

With whatever is left over: mark all as read.  :-)


I used to have headaches keeping up with openstack-dev, but now I follow
something very similar to what Russell describes. In addition I use
starring to mark threads I want to follow more closely, for quick retrieval.

The most useful tip I can give you: accept that you can't be reading
everything, and that there are things that may happen in OpenStack that
you can't control. I've been involved with OpenStack since the
beginning, and part of my job was to be aware of everything. With the
explosive growth of the project, that doesn't scale that well. Since I
started ignoring stuff (and "marking thread read" and "marking folder
read" as necessary) I end up being able to start doing some useful work
mid-morning (rather than mid-afternoon).



FWIW, I do the exact same thing mentioned above!

1- Open the openstack-dev folder
2- Filter by the projects I'm contributing the most
3- Then filter by TC / ALL / OSSN etc
4- The quick look at other subjects
5- Finally, mark all as read.

I'm pretty sure you can do this with most of the known email clients
out there. FWIW, I'm using mutt+offlineimap+notmuch

Cheers,
FF

--
@flaper87
Flavio Percoco


pgpfR90Utq2wt.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Clint Byrum
Excerpts from Sylvain Bauza's message of 2013-12-12 23:43:40 -0800:
> Why the notifications couldn't be handled by Marconi ?
> 
> That would be up to Marconi's team to handle security issues while it is
> part of their mission statement to deliver a messaging service in between
> VMs.

Fantastic point. :) Though we don't want communication between VMs,
we want communication between VMs and the cloud provider. Still it does
feel like Marconi would facilitate this.

I'm not up to speed on Marconi though, is it all HTTP for the
communication? If so how is the overhead of polling addressed?

We could always teach Salt to speak over Marconi as easily as we could
teach Salt to speak oslo.messaging. That would also make Salt more useful
in any OpenStack cloud that had Marconi. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Russell Bryant
On 12/13/2013 09:53 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> TL;DR:
> Incubation is getting harder, why not ask efforts to apply for a new
> program first to get the visibility they need to grow.
> 
> Long version:
> 
> Last cycle we introduced the concept of "Programs" to replace the
> concept of "Official projects" which was no longer working that well for
> us. This was recognizing the work of existing teams, organized around a
> common mission, as an integral part of "delivering OpenStack".
> Contributors to programs become ATCs, so they get to vote in Technical
> Committee (TC) elections. In return, those teams place themselves under
> the authority of the TC.
> 
> This created an interesting corner case. Projects applying for
> incubation would actually request two concurrent things: be considered a
> new "Program", and give "incubated" status to a code repository under
> that program.
> 
> Over the last months we significantly raised the bar for accepting new
> projects in incubation, learning from past integration and QA mistakes.
> The end result is that a number of promising projects applied for
> incubation but got rejected on maturity, team size, team diversity, or
> current integration level grounds.
> 
> At that point I called for some specific label, like "Emerging
> Technology" that the TC could grant to promising projects that just need
> more visibility, more collaboration, more crystallization before they
> can make good candidates to be made part of our integrated releases.
> 
> However, at the last TC meeting it became apparent we could leverage
> "Programs" to achieve the same result. Promising efforts would first get
> their mission, scope and existing results blessed and recognized as
> something we'd really like to see in OpenStack one day. Then when they
> are ready, they could have one of their deliveries apply for incubation
> if that makes sense.
> 
> The consequences would be that the effort would place itself under the
> authority of the TC. Their contributors would be ATCs and would vote in
> TC elections, even if their deliveries never make it to incubation. They
> would get (some) space at Design Summits. So it's not "free", we still
> need to be pretty conservative about accepting them, but it's probably
> manageable.
> 
> I'm still weighing the consequences, but I think it's globally nicer
> than introducing another status. As long as the TC feels free to revoke
> Programs that do not deliver the expected results (or that no longer
> make sense in the new world order) I think this approach would be fine.
> 
> Comments, thoughts ?
> 

I don't have much to add right now beyond +1.

I think the need for being able to bless an emerging project by
acknowledging that its mission and scope are a compliment to OpenStack
is clear and this seems like a good way to accomplish that.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [governance] Becoming a Program, before applying for incubation

2013-12-13 Thread Thierry Carrez
Hi everyone,

TL;DR:
Incubation is getting harder, why not ask efforts to apply for a new
program first to get the visibility they need to grow.

Long version:

Last cycle we introduced the concept of "Programs" to replace the
concept of "Official projects" which was no longer working that well for
us. This was recognizing the work of existing teams, organized around a
common mission, as an integral part of "delivering OpenStack".
Contributors to programs become ATCs, so they get to vote in Technical
Committee (TC) elections. In return, those teams place themselves under
the authority of the TC.

This created an interesting corner case. Projects applying for
incubation would actually request two concurrent things: be considered a
new "Program", and give "incubated" status to a code repository under
that program.

Over the last months we significantly raised the bar for accepting new
projects in incubation, learning from past integration and QA mistakes.
The end result is that a number of promising projects applied for
incubation but got rejected on maturity, team size, team diversity, or
current integration level grounds.

At that point I called for some specific label, like "Emerging
Technology" that the TC could grant to promising projects that just need
more visibility, more collaboration, more crystallization before they
can make good candidates to be made part of our integrated releases.

However, at the last TC meeting it became apparent we could leverage
"Programs" to achieve the same result. Promising efforts would first get
their mission, scope and existing results blessed and recognized as
something we'd really like to see in OpenStack one day. Then when they
are ready, they could have one of their deliveries apply for incubation
if that makes sense.

The consequences would be that the effort would place itself under the
authority of the TC. Their contributors would be ATCs and would vote in
TC elections, even if their deliveries never make it to incubation. They
would get (some) space at Design Summits. So it's not "free", we still
need to be pretty conservative about accepting them, but it's probably
manageable.

I'm still weighing the consequences, but I think it's globally nicer
than introducing another status. As long as the TC feels free to revoke
Programs that do not deliver the expected results (or that no longer
make sense in the new world order) I think this approach would be fine.

Comments, thoughts ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest][Production] Tempest / the gate / real world load

2013-12-13 Thread Salvatore Orlando
Robert,
As you've deliberately picked on me I feel compelled to reply!

Jokes apart, I am going to retire that patch and push the new default in
neutron. Regardless of considerations on real loads vs gate loads, I think
it is correct to assume the default configuration should be one that will
allow gate tests to pass. A sort of maximum common denominator, if you want.
I think however that the discussion on whether our gate tests are
representative of real world deployment is outside the scope of this
thread, even if very interesting.

On the specific matter of this patch we've been noticing the CPU on the
gate tests with neutron easily reaching 100%; this is not because of (b). I
can indeed replicate the same behaviour on any other VM, even with twice as
much vCPUs. Never tried baremetal though.
However, because of the fact that 'just' the gate tests send the cpu on the
single host to 100% should let us think that deployers might easily end up
facing the same problem in real environment (your (a) point) regardless of
how the components are split.

Thankfully, Armando found out a related issue with the DHCP agent which was
causing it to use a lot of cpu as well as terribly stressing ovsdbserver,
and fixed it. Since then we're seeing a lot less timeout errors on the gate.

Salvatore






On 12 December 2013 20:23, Robert Collins  wrote:

> A few times now we've run into patches for devstack-gate / devstack
> that change default configuration to handle 'tempest load'.
>
> For instance - https://review.openstack.org/61137 (Sorry Salvatore I'm
> not picking on you really!)
>
> So there appears to be a meme that the gate is particularly stressful
> - a bad environment - and that real world situations have less load.
>
> This could happen a few ways: (a) deployers might separate out
> components more; (b) they might have faster machines; (c) they might
> have less concurrent activity.
>
> (a) - unlikely! Deployers will cram stuff together as much as they can
> to save overheads. Big clouds will have components split out - yes,
> but they will also have correspondingly more load to drive that split
> out.
>
> (b) Perhaps, but not orders of magnitude faster, the clouds we run on
> are running on fairly recent hardware, and by using big instances we
> don't get crammed it with that many other tenants.
>
> (c) Almost certainly not. Tempest currently does a maximum of four
> concurrent requests. A small business cloud could easily have 5 or 6
> people making concurrent requests from time to time, and bigger but
> not huge clouds will certainly have that. Their /average/ rate of API
> requests may be much lower, but when they point service orchestration
> tools at it -- particularly tools that walk their dependencies in
> parallel - load is going to be much much higher than what we generate
> with Tempest.
>
> tl;dr : if we need to change a config file setting in devstack-gate or
> devstack *other than* setting up the specific scenario, think thrice -
> should it be a production default and set in the relevant projects
> default config setting.
>
> Cheers,
> Rob
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-13 Thread Jay Dobies

* ability to 'preview' changes going to the scheduler


What does this give you? How detailed a preview do you need? What
information is critical there? Have you seen the proposed designs for
a heat template preview feature - would that be sufficient?


Will will probably have a better answer to this, but I feel like at very 
least this goes back to the psychology point raised earlier (I think in 
this thread, but if not, definitely one of the TripleO ones).


A weird parallel is whenever I do a new install of Fedora. I never 
accept their default disk partitioning without electing to review/modify 
it. Even if I didn't expect to change anything, I want to see what they 
are going to give me. And then I compulsively review the summary of what 
actual changes will be applied in the follow up screen that's displayed 
after I say I'm happy with the layout.


Perhaps that's more a commentary on my own OCD and cynicism that I feel 
dirty accepting the magic defaults blindly. I love the idea of anaconda 
doing the heavy lifting of figuring out sane defaults for home/root/swap 
and so on (similarly, I love the idea of Nova scheduler rationing out 
where instances are deployed), but I at least want to know I've seen it 
before it happens.


I fully admit to not knowing how common that sort of thing is. I suspect 
I'm in the majority of geeks and tame by sys admin standards, but I 
honestly don't know. So I acknowledge that my entire argument for the 
preview here is based on my own personality.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Nomination of Sandy Walsh to core team

2013-12-13 Thread Nicolas Barcet
+1 in support of Sandy.  He is a proven contributor and reviewer and he
brings a great business vision and experience to the team.

Cheers,
Nick


On Wed, Dec 11, 2013 at 8:18 PM, Gordon Chung  wrote:

> > To that end, I would like to nominate Sandy Walsh from Rackspace to
> > ceilometer-core. Sandy is one of the original authors of StackTach, and
> > spearheaded the original stacktach-ceilometer integration. He has been
> > instrumental in many of my codes reviews, and has contributed much of the
> > existing event storage and querying code.
>
> +1 in support of Sandy.  the Event work he's led in Ceilometer has been an
> important feature and i think he has some valuable ideas.
>
> cheers,
> gordon chung
> openstack, ibm software standards
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Nicolas Barcet 
a.k.a. nijaba, nick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-13 Thread Scott Moser
On Tue, 10 Dec 2013, Ian Wells wrote:

> On 10 December 2013 20:55, Clint Byrum  wrote:
>
> > If it is just a network API, it works the same for everybody. This
> > makes it simpler, and thus easier to scale out independently of compute
> > hosts. It is also something we already support and can very easily expand
> > by just adding a tiny bit of functionality to neutron-metadata-agent.
> >
> > In fact we can even push routes via DHCP to send agent traffic through
> > a different neutron-metadata-agent, so I don't see any issue where we
> > are piling anything on top of an overstressed single resource. We can
> > have neutron route this traffic directly to the Heat API which hosts it,
> > and that can be load balanced and etc. etc. What is the exact scenario
> > you're trying to avoid?
> >
>
> You may be making even this harder than it needs to be.  You can create
> multiple networks and attach machines to multiple networks.  Every point so
> far has been 'why don't we use  as a backdoor into our VM without
> affecting the VM in any other way' - why can't that just be one more
> network interface set aside for whatever management  instructions are
> appropriate?  And then what needs pushing into Neutron is nothing more
> complex than strong port firewalling to prevent the slaves/minions talking
> to each other.  If you absolutely must make the communication come from a

+1

tcp/ip works *really* well as a communication mechanism.  I'm planning on
using it to send this email.

For controlled guests, simply don't break your networking.  Anything that
could break networking can break /dev/ also.

Fwiw, we already have an extremely functional "agent" in just about every
[linux] node in sshd.  Its capable of marshalling just about anything in
and out of the node. (note, i fully realize there are good reasons for
more specific agent, lots of them exist).

I've really never understood "we don't want to rely on networking as a
transport".

> system agent and go to a VM, then that can be done by attaching the system
> agent to the administrative network - from within the system agent, which
> is the thing that needs this, rather than within Neutron, which doesn't
> really care how you use its networks.  I prefer solutions where other tools
> don't have to make you a special case.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Fuel

2013-12-13 Thread Liz Blanchard

On Dec 13, 2013, at 8:04 AM, Jaromir Coufal  wrote:

> On 2013/12/12 15:31, Mike Scherbakov wrote:
>> Folks,
>> 
>> 
>> Most of you by now have heard of Fuel, which we’ve been working on as a
>> related OpenStack project for a period of time
>> -see
>> https://launchpad.net/fueland https://wiki.openstack.org/wiki/Fuel. The
>> aim of the project is to provide a distribution agnostic and plug-in
>> agnostic engine for preparing, configuring and ultimately deploying
>> various “flavors” of OpenStack in production. We’ve also used Fuel in
>> most of our customer engagements to stand up an OpenStack cloud.
> ...
>> We’d love to open discussion on this and hear everybody’s thoughts on
>> this direction.
> 
> Hey Mike,
> 
> it sounds all great. I'll be very happy to discuss all the UX efforts going 
> on in TripleO/Tuskar UI together with intentions and future steps of Fuel.
> 
+1. The Fuel wizard has some great UX ideas to bring to our thoughts around 
deployment in the Tuskar UI!

Great to hear these will be brought together,
Liz

> Cheers
> -- Jarda
> 
> --- Jaromir Coufal (jcoufal)
> --- OpenStack User Experience
> --- IRC: #openstack-ux (at FreeNode)
> --- Forum: http://ask-openstackux.rhcloud.com
> --- Wiki: https://wiki.openstack.org/wiki/UX
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bugs] definition of triaged

2013-12-13 Thread Russell Bryant
On 12/12/2013 04:46 PM, Robert Collins wrote:
> Hi, I'm trying to overhaul the bug triage process for nova (initially)
> to make it much lighter and more effective.
> 
> I'll be sending a more comprehensive mail shortly

before you do, let's agree what we're trying to solve.  Perhaps you were
going to cover that in your later message, but it wouldn't hurt
discussing it now.

I actually didn't think our process was that broken.  It's more that I
feel we need a person leading a small team that is working on it reguarly.

The idea with the tagging approach was to break up the triage problem
into smaller work queues.  I haven't kept up with the tagging part and
would really like to hand that off.  Then some of the work queues aren't
getting triaged as regularly as they need to.  I'd like to see a small
team making this a high priority with some of their time each week.

With all of that said, if you think an overhaul of the process is
necessary to get to the end goal of a more well triaged bug queue, then
I'm happy to entertain it.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-13 Thread Ladislav Smola

Horizoners,

As discussed in TripleO and Horizon meetings, we are proposing to move 
Tuskar UI under the Horizon umbrella. Since we are building our UI 
solution on top of Horizon, we think this is a good fit. It will allow 
us to get feedback and reviews from the appropriate group of developers.


Tuskar UI is a user interface for the design, deployment, monitoring, 
and management of OpenStack. The code is built on the Horizon framework 
and facilitates the TripleO approach to deployment.  We work closely 
with the TripleO team and will continue to do so. The Tuskar UI itself 
is implemented as a new tab, headed "Infrastructure", which is added as 
a dashboard to OpenStack Horizon. For more information about the TripleO 
project, check out the project wiki: 
https://wiki.openstack.org/wiki/TripleO.


The following is a proposal on how the Tuskar UI project could be 
integrated:
- Create a new codebase for the Tuskar-UI under the horizon umbrella, 
with its own core team
- As an exception to the usual contribution process, commits to 
Tuskar-UI codebase. may be pushed, +2 and approved by one company. This 
is intended to make the development process faster. We are currently 
developing the Tuskar-UI at a fast pace and there are not yet many 
contributors who aren't employed by Red Hat  that are familiar with the 
code. As the code stabilises, and attracts users and developers, this 
exception can be removed.
- The Tuskar-UI cores would be cores of Tuskar-UI codebase only. Horizon 
cores would be cores of the whole Horizon program.



What does it mean for Horizon?
- There will be more developers, reviewers and patches coming to Horizon 
(as a program).
- Horizon contributors will have time to get familiar with the Tuskar-UI 
code, before we decide to merge it into the Horizon codebase.


If you have any questions, please ask!

Thanks,
Tuskar UI team
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread Imre Farkas

On 12/13/2013 11:36 AM, Jaromir Coufal wrote:


*VERSION 0*
===
Enable user to deploy OpenStack with the simpliest TripleO way, no
difference between hardware.

Target:
- end of icehouse-2

Features we need to get in:
- Enable manual nodes registration (Ironic)
- Get images available for user (Glance)
- Node roles (hardcode): Controller, Compute, Object Storage, Block Storage
- Design deployment (number of nodes per role)
- Deploy (Heat + Nova)


One note to deploy: It's not done only by Heat and Nova. If we expect a 
fully functional OpenStack installation as a result, we are missing a 
few steps like creating users, initializing and registering the service 
endpoints with Keystone. In TripleO this is done by the init-keystone 
and setup-endpoints scripts. Check devtest for more details: 
http://docs.openstack.org/developer/tripleo-incubator/devtest_undercloud.html


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-13 Thread Russell Bryant
On 12/12/2013 05:26 PM, Dolph Mathews wrote:
> My reason for keeping them separate is more practical:  the Keystone
> team is already somewhat overloaded.  I know that a couple of us
> have interest in contributing to Barbican, the question is time and
> prioritization. 

I don't think that's a very good reason.  Barbican has a team already.
It's not a whole new project being completely placed on an existing team.

Closer collaboration could result in getting *more* help with Keystone.

> Unless there is some benefit to having both projects in the same
> program with essentially different teams, I think Barbican should
> proceed as is.  I personally plan on contributing to Barbican.

There may be...

> 
> /me puts PTL hat on
> 
> ++ I don't want Russel's job.

Harsh!  ;-)

> Keystone has a fairly narrow mission statement in my mind (come to think
> of it, I need to propose it to governance..), and that's basically to
> abstract away the problem of authenticating and authorizing the API
> users of other openstack services. Everything else, including identity
> management, key management, key distribution, quotas, etc, is just
> secondary fodder that we tend to help with along the way... but they
> should be first class problems in someone else's mind.
> 
> If we rolled everything together that kind of looks related to keystone
> under a big keystone program for the sake of organizational tidiness, I
> know I would be less effective as a "PTL" and that's a bit
> disheartening. That said, I'm always happy to help where I can.

I get the arguments that there is not overlap right now, necessarily.  I
do worry a bit about silos where they shouldn't exist, though.  I think
some things to consider are:

1) Are each of the items you mention big enough to have a sustainable
team that can exist as its own program?

2) Would there be a benefit of *changing* the scope and mission of the
Identity program to accomodate a larger problem space?  "Security"
sounds too broad ... but I'm sure you see what I'm getting at.

When we're talking about authentication, authorization, identity
management, key management, key distribution ... these things really
*do* seem related enough that it would be *really* nice if a group was
looking at all of them and how they fit into the bigger OpenStack
picture.  I really don't want to see silos for each of these things.

So, would OpenStack benefit from a tighter relationship between these
projects?  I think this may be the case, personally.

Could this tighter relationship happen between separate programs?  It
could, but I think a single program better expresses the intent if
that's really what is best.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-13 Thread Russell Bryant
On 12/13/2013 05:50 AM, Thierry Carrez wrote:
> Russell Bryant wrote:
>> $ git shortlog -s -e | sort -n -r
>>172   John Wood 
>>150   jfwood 
>> 65   Douglas Mendizabal 
>> 39   Jarret Raim 
>> 17   Malini K. Bhandaru 
>> 10   Paul Kehrer 
>> 10   Jenkins 
>>  8   jqxin2006 
>>  7   Arash Ghoreyshi 
>>  5   Chad Lung 
>>  3   Dolph Mathews 
>>  2   John Vrbanac 
>>  1   Steven Gonzales 
>>  1   Russell Bryant 
>>  1   Bryan D. Payne 
>>
>> It appears to be an effort done by a group, and not an individual.  Most
>> commits by far are from Rackspace, but there is at least one non-trivial
>> contributor (Malini) from another company (Intel), so I think this is OK.
> 
> If you remove Jenkins and attach Paul Kehrer, jqxin2006 (Michael Xin),
> Arash Ghoreyshi, Chad Lung and Steven Gonzales to Rackspace, then the
> picture is:
> 
> 67% of commits come from a single person (John Wood)
> 96% of commits come from a single company (Rackspace)
> 
> I think that's a bit brittle: if John Wood or Rackspace were to decide
> to place their bets elsewhere, the project would probably die instantly.
> I would feel more comfortable if a single individual didn't author more
> than 50% of the changes, and a single company didn't sponsor more than
> 80% of the changes.
> 
> Personally I think that's a large enough group to make up a Program and
> gain visibility, but a bit too fragile to enter incubation just now.
> 

There are some other unresolved technical issues making incubation
premature based on our new incubation requirements.  They've made some
nice progress on them already, though.  There's a list here [1].

We've seen in the past that denying incubation didn't do much to help
with visibility and participation.  I think creating a program is a nice
compromise.  It lets us officially bless a mission and creates a place
for people helping accomplish that mission to come together.  Hopefully
this would give other groups more confidence to jump in and start
participating.

[1] https://wiki.openstack.org/wiki/Barbican/Incubation#Tasks_for_Incubation

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Questions on configuration options.

2013-12-13 Thread Steven Hardy
On Fri, Dec 13, 2013 at 08:12:30PM +0800, Qi Ming Teng wrote:
> 
> Hi,
> 
>Just noticed the following configuration options in heat.conf(.sample),
> I'm wondering if some of them are not so relevant now.  Leaving these
> options
> there may cause some confusion, especially for new comers like me.

I think your analysis is incorrect, and that most of these are used, by
oslo modules and keystone auth_token middleware.

>   Some of the options may be place holders for future extension, some could
> be already deprecated but not cleaned away.
> 
>Examples:
> 
>  Option   Default Status
>---  --  ---
>instance_driver  heat.engine.novaParsed; used nowhere.

This one possibly is unused.

>sqlite_dbheat.sqlite Option not parsed; not used except
> for a test case which calls
> 'session.set_defaults' directly.

This is used by oslo sqlalchemy/session.py I think.

>log_config Not parsed; not used.

This is used by oslo log.py AFAICS

>clients_x. Mostly not parsed; not used.
>cert_file
>clients_x. Mostly not parsed; not used.
>  key_file

These were added as part of the clients-ssl-options blueprint, and allow
users to configure the client libraries heat uses to connect to other
openstack services.

> [keystone_authtoken]
>auth_admin_prefixNot parsed; not used.
>auth_host127.0.0.1   Not parsed; not used.
>auth_port35357   Not parsed; not used.
>auth_protocolhttps   Not parsed; not used.
>auth_version   Not parsed; not used.
>delay_auth_decision falseNot parsed; not used.
>http_connect_timeout   Not parsed; not used.
>http_request_max_retries 3   Not parsed; not used.
>http_handler   Not parsed; not used.
>cache  Not parsed; not used.
>certfile   Not parsed; not used.
>keyfileNot parsed; not used.
>cafile Not parsed; not used.
>signing_dirNot parsed; not used.
>memcached_servers  Not parsed; not used.
>token_cache_time 300 Not parsed; not used.
>revocation_cache_time 1  Not parsed; not used.
>memcache_security_strategy Not parsed; not used.
>memcache_secret_keyNot parsed; not used.

These are all options for keystone auth_token, not all of them are
mandatory but they could be configured if desired.

See tools/config/oslo.config.generator.rc for why we get these options.

Hope that helps.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Weekly topic - Horizon Navigation

2013-12-13 Thread Jaromir Coufal


On 2013/10/12 17:49, Lyle, David wrote:

This topic has been available on ux-askbot and ux Google+ forum before
that for months.  Based on that alone, I don’t believe the vote was
limited.  All input was taken and considered.  In the end, there are
strong opinions on both sides.

What the final outcome was…  We’ll implement the vertical navigation by
default and write the html such that skinning to make the navigation
horizontal should be as easy as writing the less and potentially
additional js for it (thinking of screen width overflow).  I’m not sure
we want to take on the burden of supporting an implementation of both
layout directions in horizon, but there may be workable solutions there.

As other icehouse blueprints depend on the navigation upgrade, I’d
rather not keep rehashing this topic.

-David Lyle


OK, that make sense.

In that case, there is the latest proposal on Vertical Navigation from 
Dec 3, which is waiting for some feedback (mostly domain selector approach).


I am looking forward to see your comments here:

http://ask-openstackux.rhcloud.com/question/2/openstack-dashboard-navigation-redesign/?answer=99#post-id-99

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Fuel

2013-12-13 Thread Jaromir Coufal

On 2013/12/12 15:31, Mike Scherbakov wrote:

Folks,


Most of you by now have heard of Fuel, which we’ve been working on as a
related OpenStack project for a period of time
-see
https://launchpad.net/fueland https://wiki.openstack.org/wiki/Fuel. The
aim of the project is to provide a distribution agnostic and plug-in
agnostic engine for preparing, configuring and ultimately deploying
various “flavors” of OpenStack in production. We’ve also used Fuel in
most of our customer engagements to stand up an OpenStack cloud.

...

We’d love to open discussion on this and hear everybody’s thoughts on
this direction.


Hey Mike,

it sounds all great. I'll be very happy to discuss all the UX efforts 
going on in TripleO/Tuskar UI together with intentions and future steps 
of Fuel.


Cheers
-- Jarda

--- Jaromir Coufal (jcoufal)
--- OpenStack User Experience
--- IRC: #openstack-ux (at FreeNode)
--- Forum: http://ask-openstackux.rhcloud.com
--- Wiki: https://wiki.openstack.org/wiki/UX

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bugs] definition of triaged

2013-12-13 Thread Sean Dague
On 12/13/2013 06:13 AM, Thierry Carrez wrote:
> Robert Collins wrote:
>> "
>> Confirmed The bug was reproduced or confirmed as a genuine bug
>> Triaged The bug comments contain a full analysis on how to properly
>> fix the issue
>> "
>>
>> From wiki.openstack.org/wiki/Bugs
>>
>> Putting aside the difficulty of complete reproduction sometimes, I
>> don't understand the use of Triaged here.
>>
>> In LP they mean:
>>
>> Confirmed Verified by someone other than the reporter.
>> Triaged Verified by the bug supervisor.
>>
>> So our meaning is very divergent. I'd like us to consolidate on the
>> standard meaning - which is that the relative priority of having a
>> doctor [developer] attack the problem has been assessed.
> 
> I'm the one who established, a long time ago, this divergence between
> Launchpad's classic use of those statuses and OpenStack's. In my
> experience NOBODY ever uses "Confirmed" with the LP meaning, so I
> figured we should use it for something more useful: to describe how
> advanced you are in the resolution of the issue.
> 
> This is why I proposed (and we used) the following distinction, as
> described in https://wiki.openstack.org/wiki/BugTriage :
> 
> "Confirmed" bugs are genuine bugs but nobody really looked into the best
> way to fix them yet. They are confirmed, priority-assigned, tagged
> 
> "Triaged" bugs are bugs which are analyzed and have a clear way forward
> to resolve them, just missing someone to actually write the patch
> 
> That way developers could pick "ready to fix" bugs by searching
> "Triaged" bugs rather than "Confirmed" ones.
> 
>> Specifically:
>>  - we should use Triaged to indicate that:
>> - we have assigned a priority
>> - we believe it's a genuine bug
>> - we have routed[tagged] it to what is probably the right place
>> [vendor driver/low-hanging-fruit etc]
>>  - we should use Incomplete if we aren't sure that its a bug and need
>> the reporter to tell us more to be sure
>>  - triagers shouldn't ever set 'confirmed' - thats reserved solely for
>> end users to tell us that more than one user is encountering the
>> problem.
> 
> I can see how that works for Ubuntu. But did you ever see, in OpenStack,
> an end user tell us that they /also encountered/ the problem ?
> 
> The end result of your proposal is that we stop using "Confirmed" and
> use "Triaged" instead (to describe the exact same thing). We lose the
> ability to use "Triaged" to indicate a "more analyzed" state. I'm not
> sure that's a net win, or worth asking anyone to change their habits, or
> worth changing the BugTriage wikipage (and/or any other page that
> repeated it).
> 
> I'll gladly admit that my meaning of "Triaged" was not used that much
> and that it could be replaced by something more useful. But merely using
> "triaged" for our old meaning of "confirmed" (and stop using
> "confirmed") sounds like change for the sake of change.

Agree with Thierry.

Yesterday was our giant bug triage day in Tempest, going from 276 to 74
open bugs in the process. And the things I learned in the process:

People are terrible at finding bugs that actually affect them. Honestly,
Launchpad search is terrible. I spent way too much time yesterday doing
dup squashing for bugs I knew I'd just seen, then taking 15 minutes to
find the base bug, or finally give up and just mark the bug as invalid
because I knew we were already tracking it, or it was already fixed.

So I like the current model, we just need to use it more. New goes to
Confirmed if someone is sure it's actually a problem, Incomplete if you
need more info from the person.

Triaged is for a bug that's got enough details in it that someone should
be able to just pick it up and run with it. Added bonus, if the
implementation notes are detailed enough, it can be put as
low-hanging-fruit so acts as a good on ramp task.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mirantis OpenStack 4.0 will support Ceilometer service

2013-12-13 Thread Sean Dague
On 12/13/2013 07:48 AM, Roman Sokolkov wrote:
> Hello everyone,
> 
> I would like to announce that Mirantis OpenStack 4.0 will have support
> for Ceilometer service. 
> 
> Here is the short demonstration
> 
>  of Ceilometer
> deployment by Mirantis OpenStack 4.0.
> 
> In this screencast development version is used, preliminary plan for
> Mirantis OpenStack release is the next week.
> 
> Best regards, Roman Sokolkov

This is a development list, not a PR list. Please keep on topic and
don't polute the list with non development threads.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mirantis OpenStack 4.0 will support Ceilometer service

2013-12-13 Thread Roman Sokolkov
Hello everyone,

I would like to announce that Mirantis OpenStack 4.0 will have support for
Ceilometer service.

Here is the short
demonstration
of Ceilometer
deployment by Mirantis OpenStack 4.0.

In this screencast development version is used, preliminary plan for
Mirantis OpenStack release is the next week.

Best regards, Roman Sokolkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] policy has no effect because of hard coded assert_admin?

2013-12-13 Thread Dolph Mathews
On Thu, Dec 12, 2013 at 11:03 PM, Qiu Yu  wrote:

> On Fri, Dec 13, 2013 at 2:40 AM, Morgan Fainberg  wrote:
>
>> As Dolph stated, V3 is where the policy file protects.  This is one of
>> the many reasons why I would encourage movement to using V3 Keystone over
>> V2.
>>
>> The V2 API is officially deprecated in the Icehouse cycle, I think that
>> moving the decorator potentially could cause more issues than not as stated
>> for compatibility.  I would be very concerned about breaking compatibility
>> with deployments and maintaining the security behavior with the
>> encouragement to move from V2 to V3.  I am also not convinced passing the
>> context down to the manager level is the right approach.  Making a move on
>> where the protection occurs likely warrants a deeper discussion (perhaps in
>> Atlanta?).
>>
>>
> Thanks for the background info. However, after a quick go-through keystone
> V3 API and existing BPs. Two questions still confuse me regarding policy
> enforcement.
>
> #1 Seems V3 policy api [1] has nothing to do with the policy rules. It
> seems to be dealing with access / secret keys only. So it might be used for
> access key authentication and related control in my understanding.
>
> Is there any use case / example regarding V3 policy api? Does it even
> related to policy rules in json file?
>

The v3 policy API is intended to replace policy.json by centralizing the
persistence and management of policy rule sets.


>
> #2 Found this slides[2] online by Adam Young. And in page 27, he mentioned
> "isAdmin" currently in nova belongs to keystone actually.
>
> Would be really appreciated for some pointers. ML discussion or bp (I
> don't find any so far), etc.
>
> [1] http://api.openstack.org/api-ref-identity.html#Policy_Calls
> [2] http://www.slideshare.net/kamesh001/openstack-keystone
>
> Thanks,
> --
> Qiu Yu
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Fuel

2013-12-13 Thread Mark McLoughlin
Hi Mike,

On Thu, 2013-12-12 at 18:31 +0400, Mike Scherbakov wrote:
> Folks,
> 
> Most of you by now have heard of Fuel, which we’ve been working on as a
> related OpenStack project for a period of time -
> see https://launchpad.net/fuel and
> https://wiki.openstack.org/wiki/Fuel. The aim of the project is to provide
> a distribution agnostic and plug-in agnostic engine for preparing,
> configuring and ultimately deploying various “flavors” of OpenStack in
> production. We’ve also used Fuel in most of our customer engagements to
> stand up an OpenStack cloud.
> 
>  At the same time, we’ve been actively involved with TripleO, which we
> believe to be a great effort in simplifying deployment, operations, scaling
> (and eventually upgrading) of OpenStack.
> 
> Per our discussions with core TripleO team during the Icehouse summit,
> we’ve uncovered that while there are certain areas of collision, most of
> the functionality in TripleO and Fuel is complementary. In general, Fuel
> helps solve many problems around “step zero” of setting up an OpenStack
> environment, such as auto-discovery and inventory of bare metal hardware,
> pre-deployment & post-deployment environment  checks, and wizard-driven
> web-based configuration of OpenStack flavors. At the same time, TripleO has
> made great progress in deployment, scaling and operations (with Tuskar).
> 
> We’d like to propose an effort for community consideration to bring the two
> initiatives closer together to eventually arrive at a distribution
> agnostic, community supported framework covering the entire spectrum of
> deployment, management and upgrades; from “step zero” to a fully functional
> and manageable production-grade OpenStack environment.

Great!

> To that effect, we propose the following high-level roadmap plans for this
> effort:
> 
> 
>-
> 
>Keep and continue to evolve bare-metal discovery and inventory module of
>Fuel, tightly integrating it with Ironic.
>-
> 
>Keep and continue to evolve Fuel’s wizard-driven OpenStack flavor
>configurator. In the near term we’ll work with the UX team to unify the
>user experience across Fuel, TripleO and Tuskar. We are also thinking about
>leveraging diskimagebuilder.
>-
> 
>Continue to evolve Fuel’s pre-deployment (DHCP, L2 connectivity checks)
>and post-deployment validation checks in collaboration with the TripleO and
>Tempest teams.
>-
> 
>Eventually replace Fuel’s current orchestration engine
>https://github.com/stackforge/fuel-astute/ with Heat

This all sounds great to me.

I'd especially like to see some more in-depth discussion about how your
ideas for a configuration wizard like this:

  
http://software.mirantis.com/wp-content/uploads/2013/10/New_Fuel_3.2_Wizard_1-of-3.png
  
http://software.mirantis.com/wp-content/uploads/2013/10/New_Fuel_3.2_Wizard_3-of-3.png

fits into the UX discussions around initial deployment with TripleO
going on, for example:

  
http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
  
http://lists.openstack.org/pipermail/openstack-dev/2013-December/thread.html#21388
  
http://lists.openstack.org/pipermail/openstack-dev/2013-December/thread.html#20944

Thanks!
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-13 Thread Tzu-Mainn Chen
> On 12.12.2013 17:10, Mark McLoughlin wrote:
> > On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:
> >> Hi all,
> >>
> >> TL;DR: I believe that "As an infrastructure administrator, Anna wants a
> >> CLI for managing the deployment providing the same fundamental features
> >> as UI." With the planned architecture changes (making tuskar-api thinner
> >> and getting rid of proxying to other services), there's not an obvious
> >> way to achieve that. We need to figure this out. I present a few options
> >> and look forward for feedback.
> > ..
> >
> >> 1) Make a thicker python-tuskarclient and put the business logic there.
> >> Make it consume other python-*clients. (This is an unusual approach
> >> though, i'm not aware of any python-*client that would consume and
> >> integrate other python-*clients.)
> >>
> >> 2) Make a thicker tuskar-api and put the business logic there. (This is
> >> the original approach with consuming other services from tuskar-api. The
> >> feedback on this approach was mostly negative though.)
> >
> > FWIW, I think these are the two most plausible options right now.
> >
> > My instinct is that tuskar could be a stateless service which merely
> > contains the business logic between the UI/CLI and the various OpenStack
> > services.
> >
> > That would be a first (i.e. an OpenStack service which doesn't have a
> > DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
> > as a purely client-side library used by the UI/CLI (i.e. 1) as far it
> > can go until we hit actual cases where we need (2).
> 
> For the features that we identified for Icehouse, we probably don't need
> to store any data necessarily. But going forward, it's not entirely
> sure. We had a chat and identified some data that is probably not suited
> for storing in any of the other services (at least in their current state):
> 
> * Roles (like Compute, Controller, Object Storage, Block Storage) - for
> Icehouse we'll have these 4 roles hardcoded. Going forward, it's
> probable that we'll want to let admins define their own roles. (Is there
> an existing OpenStack concept that we could map Roles onto? Something
> similar to using Flavors as hardware profiles? I'm not aware of any.)
> 
> * Links to Flavors to use with the roles - to define on what hardware
> can a particular Role be deployed. For Icehouse we assume homogenous
> hardware.
> 
> * Links to Images for use with the Role/Flavor pairs - we'll have
> hardcoded Image names for those hardcoded Roles in Icehouse. Going
> forward, having multiple undercloud Flavors associated with a Role,
> maybe each [Role-Flavor] pair should have it's own image link defined -
> some hardware types (Flavors) might require special drivers in the image.
> 
> * Overcloud heat template - for Icehouse it's quite possible it might be
> hardcoded as well and we could just just use heat params to set it up,
> though i'm not 100% sure about that. Going forward, assuming dynamic
> Roles, we'll need to generate it.


One more (possible) item to this list: "# of nodes per role in a deployment" -
we'll need this if we want to stage the deployment, although that could
potentially be done on the client-side UI/CLI.


> ^ So all these things could probably be hardcoded for Icehouse, but not
> in the future. Guys suggested that if we'll be storing them eventually
> anyway, we might build these things into Tuskar API right now (and
> return hardcoded values for now, allow modification post-Icehouse). That
> seems ok to me. The other approach of having all this hardcoding
> initially done in a library seems ok to me too.
> 
> I'm not 100% sure that we cannot store some of this info in existing
> APIs, but it didn't seem so to me (to us). We've talked briefly about
> using Swift for it, but looking back on the list i wrote, it doesn't
> seem as very Swift-suited data.
> 
> >
> > One example worth thinking through though - clicking "deploy my
> > overcloud" will generate a Heat template and sent to the Heat API.
> >
> > The Heat template will be fairly closely tied to the overcloud images
> > (i.e. the actual image contents) we're deploying - e.g. the template
> > will have metadata which is specific to what's in the images.
> >
> > With the UI, you can see that working fine - the user is just using a UI
> > that was deployed with the undercloud.
> >
> > With the CLI, it is probably not running on undercloud machines. Perhaps
> > your undercloud was deployed a while ago and you've just installed the
> > latest TripleO client-side CLI from PyPI. With other OpenStack clients
> > we say that newer versions of the CLI should support all/most older
> > versions of the REST APIs.
> >
> > Having the template generation behind a (stateless) REST API could allow
> > us to define an API which expresses "deploy my overcloud" and not have
> > the client so tied to a specific undercloud version.
> 
> Yeah i see that advantage of making it an API, Dean pointed this out
> too. The combination of 

Re: [openstack-dev] [OpenStack-dev] How to modify a bug across multiple repo?

2013-12-13 Thread Christopher Yeoh
Hi wu jiang,


On Wed, Dec 11, 2013 at 12:26 PM, wu jiang  wrote:

> Hi Chris & Rob,
>
>
> Thanks for your reply ,and sorry for my late response..
>
> - I tested again. The modification won't effect tempest test because it's
> an optional argument, so I can commit it later in tempest. Lucky.
>
> - But the bug in Cinder exists at the API layer, so the modification will
> effect the CinderClient behavior..  :(
>
> So, that's quite a complex problem.. Any ideas?
>
>
To help I think I'd need more details on the exact bug. Does the change
that you need to make comply with
the guidelines here https://wiki.openstack.org/wiki/APIChangeGuidelines ?

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Questions on configuration options.

2013-12-13 Thread Qi Ming Teng

Hi,

   Just noticed the following configuration options in heat.conf(.sample),
I'm wondering if some of them are not so relevant now.  Leaving these
options
there may cause some confusion, especially for new comers like me.

  Some of the options may be place holders for future extension, some could
be already deprecated but not cleaned away.

   Examples:

 Option   Default Status
   ---  --  ---
   instance_driver  heat.engine.novaParsed; used nowhere.
   sqlite_dbheat.sqlite Option not parsed; not used except
for a test case which calls
'session.set_defaults' directly.
   log_config Not parsed; not used.
   clients_x. Mostly not parsed; not used.
   cert_file
   clients_x. Mostly not parsed; not used.
 key_file
[keystone_authtoken]
   auth_admin_prefixNot parsed; not used.
   auth_host127.0.0.1   Not parsed; not used.
   auth_port35357   Not parsed; not used.
   auth_protocolhttps   Not parsed; not used.
   auth_version   Not parsed; not used.
   delay_auth_decision falseNot parsed; not used.
   http_connect_timeout   Not parsed; not used.
   http_request_max_retries 3   Not parsed; not used.
   http_handler   Not parsed; not used.
   cache  Not parsed; not used.
   certfile   Not parsed; not used.
   keyfileNot parsed; not used.
   cafile Not parsed; not used.
   signing_dirNot parsed; not used.
   memcached_servers  Not parsed; not used.
   token_cache_time 300 Not parsed; not used.
   revocation_cache_time 1  Not parsed; not used.
   memcache_security_strategy Not parsed; not used.
   memcache_secret_keyNot parsed; not used.


Regards,
  - Qiming

-
Qi Ming Teng, PhD.
Research Staff Member
IBM Research - China___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-13 Thread Jiří Stránský

On 12.12.2013 17:10, Mark McLoughlin wrote:

On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.

..


1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though, i'm not aware of any python-*client that would consume and
integrate other python-*clients.)

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


FWIW, I think these are the two most plausible options right now.

My instinct is that tuskar could be a stateless service which merely
contains the business logic between the UI/CLI and the various OpenStack
services.

That would be a first (i.e. an OpenStack service which doesn't have a
DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
as a purely client-side library used by the UI/CLI (i.e. 1) as far it
can go until we hit actual cases where we need (2).


For the features that we identified for Icehouse, we probably don't need 
to store any data necessarily. But going forward, it's not entirely 
sure. We had a chat and identified some data that is probably not suited 
for storing in any of the other services (at least in their current state):


* Roles (like Compute, Controller, Object Storage, Block Storage) - for 
Icehouse we'll have these 4 roles hardcoded. Going forward, it's 
probable that we'll want to let admins define their own roles. (Is there 
an existing OpenStack concept that we could map Roles onto? Something 
similar to using Flavors as hardware profiles? I'm not aware of any.)


* Links to Flavors to use with the roles - to define on what hardware 
can a particular Role be deployed. For Icehouse we assume homogenous 
hardware.


* Links to Images for use with the Role/Flavor pairs - we'll have 
hardcoded Image names for those hardcoded Roles in Icehouse. Going 
forward, having multiple undercloud Flavors associated with a Role, 
maybe each [Role-Flavor] pair should have it's own image link defined - 
some hardware types (Flavors) might require special drivers in the image.


* Overcloud heat template - for Icehouse it's quite possible it might be 
hardcoded as well and we could just just use heat params to set it up, 
though i'm not 100% sure about that. Going forward, assuming dynamic 
Roles, we'll need to generate it.


^ So all these things could probably be hardcoded for Icehouse, but not 
in the future. Guys suggested that if we'll be storing them eventually 
anyway, we might build these things into Tuskar API right now (and 
return hardcoded values for now, allow modification post-Icehouse). That 
seems ok to me. The other approach of having all this hardcoding 
initially done in a library seems ok to me too.


I'm not 100% sure that we cannot store some of this info in existing 
APIs, but it didn't seem so to me (to us). We've talked briefly about 
using Swift for it, but looking back on the list i wrote, it doesn't 
seem as very Swift-suited data.




One example worth thinking through though - clicking "deploy my
overcloud" will generate a Heat template and sent to the Heat API.

The Heat template will be fairly closely tied to the overcloud images
(i.e. the actual image contents) we're deploying - e.g. the template
will have metadata which is specific to what's in the images.

With the UI, you can see that working fine - the user is just using a UI
that was deployed with the undercloud.

With the CLI, it is probably not running on undercloud machines. Perhaps
your undercloud was deployed a while ago and you've just installed the
latest TripleO client-side CLI from PyPI. With other OpenStack clients
we say that newer versions of the CLI should support all/most older
versions of the REST APIs.

Having the template generation behind a (stateless) REST API could allow
us to define an API which expresses "deploy my overcloud" and not have
the client so tied to a specific undercloud version.


Yeah i see that advantage of making it an API, Dean pointed this out 
too. The combination of this and the fact that we'll need to store the 
Roles and related data eventually anyway might be the tipping point.



Thanks! :)

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UI] Icehouse Requirements - Summary, Milestones

2013-12-13 Thread Jaromir Coufal
Quick note - I want to keep this discussion a bit high-level and not to 
get into big implementation details. For everyone, please, let's agree 
in this thread on the direction and approach and we can start follow-up 
threads with bigger details of how to get those things done.


On 2013/13/12 12:04, Tzu-Mainn Chen wrote:

*VERSION 0*
===
Enable user to deploy OpenStack with the simpliest TripleO way, no
difference between hardware.

Target:
- end of icehouse-2


My impression was that some of these features required features to be developed 
in other
OpenStack services - if so, should we call those out so that we can see if 
they'll be
available in the icehouse-2 timeframe?
As for below listed features for v0 - it is the smallest set of what we 
have to have in the UI - if there is some delay in other services, we 
have to put attention there as well. But I don't think there is anything 
blocking us at the moment.



Features we need to get in:
- Enable manual nodes registration (Ironic)
- Get images available for user (Glance)


Are we still providing the Heat template?  If so, are there image requirements 
that we
need to take into account?
I am not aware of any special requirements, but I will let experts to 
answer here...





- Node roles (hardcode): Controller, Compute, Object Storage, Block Storage
- Design deployment (number of nodes per role)


We're only allowing a single deployment, right?
Correct. For the whole Icehouse. I don't think we can get multiple 
deployments in time, there are much more important features.



- Deploy (Heat + Nova)


What parameters are we passing in for deploy?  Is it limited to the # of 
nodes/role, or
are we also passing in the image?
I think it is # nodes/role and image as well. Though images might be 
hardcoded for the very first iteration. Soon we should be able to let 
user assign images to roles.



Do we also need the following?

* unregister a node in Ironic
* update a deployment (add or destroy instances)
* destroy a deployment
* view information about management node (instance?)
* list nodes/instances by role
* view deployment configuration
* view status of deployment as it's being deployed
Some of that is part of above mentioned, some a bit later down the road 
(not far away though). We need all of that, but let's enable user to 
deploy first and we can add next features after we get something working 
then.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >