Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-07 Thread Jay Dobies

All,

We've got some requirements around adding some interfaces to the heat
environment file format, for example:

1. Now that we support passing un-merged environment files to heat, it'd be
good to support an optional description key for environments,


I've never understood why the environment file doesn't have a 
description field itself. Templates have descriptions, and IMO it makes 
sense for an environment to describe what its particular additions to 
the parameters/registry do.


I'd be happy to write that patch, but I wanted to first double check 
that there wasn't a big philosophical reason why it shouldn't have a 
description.
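
For illustration, here's a minimal sketch of what that could look like (the
description key is hypothetical until such a patch lands; the registry and
parameter names are invented):

  # my-env.yaml
  description: >
    Enables the Foo backend by mapping its resource type
    and setting sane defaults for it.
  resource_registry:
    jdob::Resource1: templates/foo-backend.yaml
  parameter_defaults:
    FooBackendName: tripleo_foo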



such that we
could add an API (in addition to the one added by jdob to retrieve the
merged environment for a running stack) that can retrieve
all-the-environments and we can easily tell which one does what (e.g to
display in a UI perhaps)


I'm not sure I follow. Are you saying the API would return the list of 
descriptions, or the actual contents of each environment file that was 
passed in?


Currently, the environment is merged before we do anything with it. We'd 
have to change that to store... I'm not entirely sure. Multiple 
environments in the DB per stack? Is there a raw_environment in the DB 
that we would leverage?




2. We've got requirements around merge strategies for multiple environments
with potentially colliding keys, similar to how the cloud-init merge
strategy[1] works.  Basically it should be possible to include multiple
environments and then have heat e.g append to a list parameter_default instead
of just last-one-wins.
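
For illustration only -- today the last environment's value silently wins; a
cloud-init-style merge annotation might look something like this (the
merge_strategy key is invented here, not an agreed interface):

  # a.yaml
  parameter_defaults:
    ExtraConfigList: [alpha]

  # b.yaml
  parameter_defaults:
    ExtraConfigList: [beta]
  merge_strategy:
    ExtraConfigList: append    # result: [alpha, beta] rather than [beta]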

Both of these will likely require some optional additions to the
environment file format - can we handle them just like e.g event_sinks and
just add them?

Clearly since the environment format isn't versioned this poses a
compatibility problem if "new" environments are used on an old heat, but to
be fair we have done this before (with both parameter_defaults and
event_sinks)

What do folks think, can we add at least the description, and what
interface makes sense for the merge strategy (annotation in the environment
vs data passed to the API along with the environment files list?)

Any thoughts on the above would be great :)

Thanks,

Steve

[1] http://cloudinit.readthedocs.io/en/latest/topics/merging.html
[2] 
https://github.com/openstack/python-heatclient/blob/master/heatclient/common/environment_format.py#L22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [heat] api-ref gate job is active

2016-05-18 Thread Jay Dobies
Just a quick note that there is a new job active called 
gate-heat-api-ref. Our API documentation has been pulled into our tree 
[1] and you can run it locally with `tox -e api-ref`.


For now, it's a direct port of our existing API docs, but I'm planning 
on taking a pass over them to double check that they are still valid. 
Feel free to ping me if you have any questions/issues.


[1] https://review.openstack.org/#/c/312712/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Summit Friday Night Dinner

2016-04-29 Thread Jay Dobies

Torchy's Tacos
1311 S 1st St
Austin, TX 78704

It's about a 20-minute walk from the Radisson (pretty much straight down 
Congress Ave) and then over a block.


Meeting in the Radisson lobby at 7:10 to walk over. Apologies in advance 
if we see another snake and I jump on someone's shoulders (Zane, you're 
tall, so you're likely my target).


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat] Summit session clashes

2016-04-20 Thread Jay Dobies

[snip]


I need to be at both of those Heat ones anyway, so this doesn't really
help me. I'd rather have the DLM session in this slot instead. (The only
sessions I can really skip are the Release Model, Functional Tests and
DLM.) That would give us:

             Heat                     TripleO

  Wed 3:30   Release Model
  Wed 4:30   HOT Parser
  Wed 5:20   Functional Tests

  Thu 1:30   DLM                      Upgrades
  Thu 2:20   Convergence switchover   Containers
  Thu 3:10   Convergence cleanup      Composable Roles
  Thu 4:10   Performance              API
  Thu 5:00   Validation               CI


+1 from me, this will let me bounce between the two as well.


I think that way Steve and I could probably both cover upgrades, and he
could cover the rest.

I'd like to get to the composable roles and containers sessions too, but
we'd have to rejig basically every Heat session and I think it's too
late to be doing that.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat]informal meetup during summit

2016-04-20 Thread Jay Dobies



On 4/20/16 1:00 PM, Rico Lin wrote:

Hi team
Let's plan for some informal meetup (relaxing) time! All Heaters and anyone
from other projects can have fun and a chance for technical discussions
together.

After discussing it in the meeting, we will have a pre-meetup-meetup on Friday
morning for a cup of coffee or some food. Would like to ask if anyone
knows any nice place for this meetup? :)

Also open to other chances for all to go out for a nice dinner and
beer. Right now it seems Monday or Friday night could be the best
candidates for this wonderful task; what does everyone think? :)


I really like both of these ideas. I haven't met most of you and it'll 
be good to see everyone in a non-Heat light.


I'm available both Monday and Friday nights. I haven't looked at the 
schedule for Monday night to see what else is planned, but that's my 
vote since I suspect people may be leaving on Friday night.





--
May The Force of OpenStack Be With You,

Rico Lin
Chief OpenStack Technologist, inwinSTACK
irc: ricolin




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat] Issue with validation and preview due to get_attr==None

2016-03-23 Thread Jay Dobies
This is the same issue I ran into a few months ago regarding the nested 
parameter validation. Since it resolves to None at that time, there's no 
hook in our current nested parameters implementation to show that it 
will have a value passed in from the parent template.


Unfortunately, I don't have much to offer in terms of a solution, but 
I'm very interested in where this conversation goes :)


On 3/23/16 1:14 PM, Steven Hardy wrote:

Hi all,

I'm looking for some help and additional input on this bug:

https://bugs.launchpad.net/heat/+bug/1559807

Basically, we have multiple issues due to the fact that we consider
get_attr to resolve to None at any point before a resource is actually
instantiated.

It's due to this:

https://github.com/openstack/heat/blob/master/heat/engine/hot/functions.py#L163

This then causes problems during validation of several intrinsic functions,
because if they reference get_attr, they have to contain hacks and
special-cases to work around the validate-time None value (or, as reported
in the bug, fail to validate when all would be fine at runtime).

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1333

I started digging into fixes, and there are probably a few possible
approaches, e.g setting stack.Stack.strict_validate always to False, or
reworking the intrinsic function validation to always work with the
temporary None value.

However, it's a more widespread issue than just validation - this affects
any action which happens before the actual stack gets created, so things
like preview updates are also broken, e.g consider this:

resources:
  random:
    type: OS::Heat::RandomString

  config:
    type: OS::Heat::StructuredConfig
    properties:
      group: script
      config:
        foo: {get_attr: [random, value]}

  deployment:
    type: OS::Heat::StructuredDeployment
    properties:
      config: {get_resource: config}
      server: "dummy"

On update, nothing is replaced, but if you do e.g:

   heat stack-update -x --dry-run

You see this:

| replaced  | config| OS::Heat::StructuredConfig |

Which occurs due to the false comparison between the current value of
"random" and the None value we get from get_attr in the temporary stack
used for preview comparison:

https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L528

after_props.get(key) returns None, which makes us falsely declare the
"config" resource gets replaced :(

I'm looking for ideas on how we solve this - it's clearly a major issue
which completely invalidates the results of validate and preview operations
in many cases.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] propose trown for core

2016-03-21 Thread Jay Dobies



On 03/20/2016 02:32 PM, Dan Prince wrote:

I'd like to propose that we add John Trowbridge to the TripleO core
review team. John has become one of the goto guys in helping to chase
down upstream trunk issues. He has contributed a lot to helping keep
general CI running and has been involved with several new features over
the past year around node introspection, etc. His involvement with the
RDO team also gives him a healthy perspective about sane release
practices, etc.

John doesn't have the highest TripleO review stats ATM but I expect his
stats to continue to climb. Especially with his work on upcoming
improvements like tripleo-quickstart, etc. Having John on board the
core team would help drive these projects and it would also be great to
have him able to land fixes related to trunk chasing, etc. I expect
he'll gradually jump into helping with other TripleO projects as well.

If you agree please +1. If there is no negative feedback I'll add him
next Monday.

Dan


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] propose EmilienM for core

2016-03-21 Thread Jay Dobies



On 03/20/2016 02:22 PM, Dan Prince wrote:

I'd like to propose that we add Emilien Macchi to the TripleO core
review team. Emilien has been getting more involved with TripleO during
this last release. In addition to helping with various Puppet things he
also has experience in building OpenStack installation tooling and
upgrades, and would bring a valuable perspective to the core team. He
has also added several new features around monitoring into instack-
undercloud.

Emilien is currently acting as the Puppet PTL. Adding him to the
TripleO core review team could help us move faster towards some of the
upcoming features like composable services, etc.

If you agree please +1. If there is no negative feedback I'll add him
next Monday.

Dan


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] propose ejuaso for core

2016-03-15 Thread Jay Dobies



On 03/14/2016 10:38 AM, Dan Prince wrote:

http://russellbryant.net/openstack-stats/tripleo-reviewers-180.txt

Our top reviewer over the last half a year is ejuaso (goes by Ozz for
Osorio, or jaosorior on IRC). His reviews seem consistent, he
consistently attends the meetings and he chimes in on lots of things.
I'd like to propose we add him to our core team (probably long overdue
now too).

If you agree please +1. If there is no negative feedback I'll add him
next Monday.


+1


Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [tripleo] Logo for TripleO

2016-03-11 Thread Jay Dobies



On 03/11/2016 12:32 AM, Jason Rist wrote:

Hey everyone -
We've been working on a UI for TripleO for a few months now and we're
just about to beg to be a part of upstream... and we're in need of a
logo for the login page and header.

In my evenings, I've come up with a logo.

It's a take on the work that Dan has already done on the Owl idea:
http://wixagrid.com/tripleo/tripleo_svg.html

I think it'd be cool if it were used on the CI page and maybe even
tripleo.org - I ran it by the guys on #tripleo and they seem to be
pretty warm on the idea, so I thought I'd run it by here if you missed
the conversation.

It's SVG so we can change the colors pretty easily as I have in the two
attached screenshots.  It also doesn't need to be loaded as a separate
asset.  Additionally, it scales well since it's basically vector instead
of rasterized.

What do you guys think?


Damn dude, really well done, +1 from me.


Can we use it?

I can do a patch for tripleo.org and the ci and wherever else it's in use.

-J



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-09 Thread Jay Dobies



On 3/9/16 4:39 PM, Zane Bitter wrote:

On 09/03/16 05:42, Sergey Kraynev wrote:

Hi Gary,


First of all, you don't need to use "depends_on", because using
"get_attr" already creates an implicit dependency on rg_a.
About getting null instead of the real IP address:
It sounds like a bug, but IMO it's expected behavior, because I
suppose it happens due to:
  - you create some Server in rg_a and it probably goes to the active
state before the IP address becomes available for get_attr. It is
necessary to check, but if you try to add a wait condition for this
resource, then rg_a will be created with fully available resources and
I suppose the IP will be available.


I would have expected the IP address to be available before the server
becomes CREATE_COMPLETE. If it isn't then I'd consider that a bug too -
as you pointed out, people are relying on the dependency created by
get_attr to ensure that they can actually get the attribute.

cheers,
Zane.


On 9 March 2016 at 13:14, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
 wrote:

Hi,



I have 3 Heat templates using ResourceGroup. There are 2 resource
groups (rg_a and rg_b); rg_b depends on rg_a and requires the IP
address of rg_a as a parameter. I use "rg_a_public_ip: {get_attr:
[rg_a, rg_a_public_ip]}" to get the IP address of rg_a both in the
section of rg_b parameters (rg_b/properties/resource_def/properties)
and in the outputs section.

As per my observation, rg_a_public_ip shows "null" in the parameter
section of rg_b, while rg_a_public_ip shows the correct IP address in
the outputs section of the yaml file.


My questions are:

1)  Is this behavior expected as designed, or is this a bug?

2)  If this behavior is expected, what is the alternative solution for
the above case (the user wants to get the run-time information of the
instance when creating the second resource group)?



--- a.yaml ---

resources:
  rg_a:
    type: OS::Heat::ResourceGroup
    properties:
      count: 1


Is this still an issue when you remove the resource group and create the 
resource directly? The count of 1 might just be for testing purposes, 
but if that's the end goal you should be able to drop the group entirely.
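
In other words, something like this sketch (reusing the names from Gary's
quoted templates; the elided properties are whatever b.yaml takes today):

  resources:
    rg_a:
      type: b.yaml    # instantiate the nested template directly;
      properties:     # no OS::Heat::ResourceGroup wrapper needed for count: 1
        ...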




      resource_def:
        type: b.yaml
        properties:
          …

  rg_b:
    type: OS::Heat::ResourceGroup
    depends_on:
      - rg_a
    properties:
      count: 2
      resource_def:
        type: c.yaml
        properties:
          rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}  # <- the value is “null”
          …

outputs:
  rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}  # <- the value is correct
--



--- b.yaml ---

…

resources:
  rg_a:
    type: OS::Nova::Server
    properties:
      …

outputs:
  rg_a_public_ip:
    value: {get_attr: [rg_a, networks, public, 0]}
--



--- c.yaml ---

parameters:
  rg_a_public_ip:
    type: string
    description: IP of rg_a

…

resources:
  rg_b:
    type: OS::Nova::Server
    properties:
      …

outputs:
  …
---



Regards,

Gary












__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] core members for tripleo-ui

2016-02-29 Thread Jay Dobies

+1

On 02/29/2016 10:27 AM, Dan Prince wrote:

There is a new project for the UI called tripleo-ui. As most of the
existing TripleO core members aren't going to be reviewing UI-specific
patches, it seems reasonable that we might add a few review candidates
who can focus specifically on UI patches.

I'd like to propose we add Jiri Tomasek and Ana Krivokapic as core
candidates who will focus primarily on the UI. They would be added to
tripleo core but would agree to only +2 patches within the UI for now,
or at least until they are re-nominated for more general TripleO core,
etc.

Core members if you could please vote on this so we can add these
members at the close of this week. Thanks,

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat] Questions on template-validate

2016-02-24 Thread Jay Dobies



On 02/24/2016 02:18 AM, Anant Patil wrote:

On 23-Feb-16 20:34, Jay Dobies wrote:

I am going to bring this up in the team meeting tomorrow, but I figured
I'd send it out here as well. Rather than retype the issue, please look at:

https://bugs.launchpad.net/heat/+bug/1548856

My question is what the desired behavior of template-validate should be,
at least from a historical standpoint of what we've honored in the past.
Before I propose/implement a fix, I want to make sure I'm not violating
any unwritten expectations on how it should work.

On a related note -- and this is going to sound really stupid that I
don't know this answer -- but are there any docs on actually using Heat?
I was looking for docs that may explain what the expectation of
template-validate was but I couldn't really find any.

The wiki links to a number of developer-centric docs (HOT guide,
developer process, etc.). I found the (what I believe to be current)
REST API docs [1] but the only real description is "Validates a template."

Thanks  :D


[1] http://developer.openstack.org/api-ref-orchestration-v1.html




Sometime back, I too went through this, but got adjusted to the thought
that the template validation is really for validating the syntax and
structure of a template. Whether the values provided are valid or not
will be decided when the stack is validated.


Sorry, one more question. If this is the case, then why does 
template-validate accept -P arguments in the CLI?



The values that depend on
resource plugins to fetch data from other services are not validated,
and to me it makes sense. It helps user to quickly test-develop
templates that are syntactically and structurally valid and they don't
have to depend on resource plugins and services availability. IMO, it
would be better to document the way template validate works, than to
make it a heavy weight request that depends on plugins and services.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat] Questions on template-validate

2016-02-24 Thread Jay Dobies



On 02/24/2016 02:18 AM, Anant Patil wrote:

On 23-Feb-16 20:34, Jay Dobies wrote:

I am going to bring this up in the team meeting tomorrow, but I figured
I'd send it out here as well. Rather than retype the issue, please look at:

https://bugs.launchpad.net/heat/+bug/1548856

My question is what the desired behavior of template-validate should be,
at least from a historical standpoint of what we've honored in the past.
Before I propose/implement a fix, I want to make sure I'm not violating
any unwritten expectations on how it should work.

On a related note -- and this is going to sound really stupid that I
don't know this answer -- but are there any docs on actually using Heat?
I was looking for docs that may explain what the expectation of
template-validate was but I couldn't really find any.

The wiki links to a number of developer-centric docs (HOT guide,
developer process, etc.). I found the (what I believe to be current)
REST API docs [1] but the only real description is "Validates a template."

Thanks  :D


[1] http://developer.openstack.org/api-ref-orchestration-v1.html




Sometime back, I too went through this, but got adjusted to the thought
that the template validation is really for validating the syntax and
structure of a template. Whether the values provided are valid or not
will be decided when the stack is validated. The values that depend on
resource plugins to fetch data from other services are not validated,
and to me it makes sense. It helps user to quickly test-develop
templates that are syntactically and structurally valid and they don't
have to depend on resource plugins and services availability. IMO, it
would be better to document the way template validate works, than to
make it a heavy weight request that depends on plugins and services.


Everything you're saying makes sense. I like the idea of it as a syntax 
validation of the structure of the template alone. I also like that it's 
lightweight.


My only concern is the inclusion of the value in the returned template. 
That's the part that feels weird to me, and is especially misleading if 
we don't have docs around it.


I'm with you on the idea of fleshing out those docs. I'll ask in the 
meeting today about the best way to pursue that. I know I've seen patches 
related to updating the docs in code for resource plugins, but I'm not 
sure if that covers the external API docs. If it doesn't, I'll file a 
blueprint for that so we can track it as an across-the-board API docs 
enhancement.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [heat] Questions on template-validate

2016-02-23 Thread Jay Dobies
I am going to bring this up in the team meeting tomorrow, but I figured 
I'd send it out here as well. Rather than retype the issue, please look at:


https://bugs.launchpad.net/heat/+bug/1548856

My question is what the desired behavior of template-validate should be, 
at least from a historical standpoint of what we've honored in the past. 
Before I propose/implement a fix, I want to make sure I'm not violating 
any unwritten expectations on how it should work.


On a related note -- and this is going to sound really stupid that I 
don't know this answer -- but are there any docs on actually using Heat? 
I was looking for docs that may explain what the expectation of 
template-validate was but I couldn't really find any.


The wiki links to a number of developer-centric docs (HOT guide, 
developer process, etc.). I found the (what I believe to be current) 
REST API docs [1] but the only real description is "Validates a template."


Thanks  :D


[1] http://developer.openstack.org/api-ref-orchestration-v1.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] spec-lite for simple feature requests

2016-01-22 Thread Jay Dobies



On 01/20/2016 10:21 AM, Rabi Mishra wrote:

Hi All,

As discussed in the team meeting, below is the proposed spec-lite process for 
simple feature requests. This is already being used in the Glance project. 
Feedback/comments/concerns are welcome before we update the contributor docs 
with this. :)


tl;dr - a spec-lite is a simple feature request created as a bug with enough 
details and with a `spec-lite` tag. Once triaged with status 'Triaged' and 
importance changed to 'Wishlist', it's approved. Status 'Won’t fix' signifies 
the request is rejected and 'Invalid' means it would require a full spec.


Heat Spec Lite
--------------

Lite specs are small feature requests tracked as Launchpad bugs, with status 
'Wishlist' and tagged with the 'spec-lite' tag. They allow for submission and 
review of feature requests before code is submitted.

These can be used for simple features that don’t warrant a detailed spec to be 
proposed, evaluated, and worked on. The team evaluates these requests as it 
evaluates specs. Once a bug has been approved as a Request for Enhancement 
(RFE), it’ll be targeted for a release.


The workflow for the life of a spec-lite in Launchpad is as follows:

1. File a bug with a small summary of what the requested change is and tag it 
as spec-lite.
2. The bug is triaged and its importance changed to Wishlist.
3. The bug is evaluated and marked as Triaged to announce approval, Won’t fix 
to announce rejection, or Invalid to request a full spec.
4. The bug is moved to In Progress once the code is up and ready to review.
5. The bug is moved to Fix Committed once the patch lands.

In summary the states are:

New:        This is where a spec-lite starts, as filed by the community.
Triaged:    Drivers - Move to this state to mean "you can start working on it".
Won’t Fix:  Drivers - Move to this state to reject a lite-spec.
Invalid:    Drivers - Move to this state to request a full spec for this request.

Lite spec Submission Guidelines
-------------------------------

When a bug is submitted, there are two fields that must be filled: ‘summary’ 
and ‘further information’. The ‘summary’ must be brief enough to fit in one 
line.

The ‘further information’ section must be a description of what you would like 
to see implemented in heat. The description should provide enough details for a 
knowledgeable developer to understand the existing problem and the proposed 
solution.

Add the spec-lite tag to the bug.


Thanks,
Rabi


I think the concept is a really good idea. I like the idea of a 
lightweight verification that something makes sense before beginning to code.

One question: when are bugs triaged?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-22 Thread Jay Dobies
I fall very much in the same mentality as Ben. I'm +1 to all of his 
points, with a few comments inline.


On 01/22/2016 12:24 PM, Ben Nemec wrote:

So I haven't weighed in on this yet, in part because I was on vacation
when it was first proposed and missed a lot of the initial discussion,
and also because I wanted to take some time to order my thoughts on it.
  Also because my initial reaction...was not conducive to calm and
rational discussion. ;-)

The tldr is that I don't like it.  To explain why, I'm going to make a
list (everyone loves lists, right? Top $NUMBER reasons we should stop
expecting other people to write our API for us):

1) We've been down this road before.  Except last time it was with Heat.
  I'm being somewhat tongue-in-cheek here, but expecting a general
service to provide us a user-friendly API for our specific use case just
doesn't make sense to me.


I think it's important to think about outside integrations here. The 
current model is to tell other clients to manipulate Heat environments 
and understand how to parse/inspect templates*. Now it will be to 
understand/parse/manipulate Mistral workflows. Neither of those are 
conducive to the types of UI wireframes we've proposed in the past, much 
less friendly to completely outside integrators.


* I realize some of that inspection is moving into Heat, but it's still 
at the mechanical template level rather than providing insight into how 
to actually use them.



2) The TripleO API is not a workflow API.  I also largely missed this
discussion, but the TripleO API is a _Deployment_ API.  In some cases
there also happens to be a workflow going on behind the scenes, but
honestly that's not something I want our users to have to care about.


I'm glad Ben mentioned this, because I always viewed the workflow 
aspects as a subset of what actually needs to be done.



3) It ties us 100% to a given implementation.  If Mistral proves to be a
poor choice for some reason, or insufficient for a particular use case,
we have no alternative.  If we have an API and decide to change our
implementation, nobody has to know or care.  This is kind of the whole
point of having an API - it shields users from all the nasty
implementation details under the surface.


I strongly agree with this one. It's not even generic speculation; we've 
told people in the past to deal with Heat templates and now we're 
telling them to deal with workflows. We already have a history of the 
backend changing and an API would give us much more flexibility (and 
less annoyed users).



4) It raises the bar even further for both new deployers and developers.
  You already need to have a pretty firm grasp of Puppet and Heat
templates to understand how our stuff works, not to mention a decent
understanding of quite a number of OpenStack services.

This presents a big chicken and egg problem for people new to OpenStack.
  It's great that we're based on OpenStack and that allows people to peek
under the hood and do some tinkering, but it can't be required for
everyone.  A lot of our deployers are going to have little to no
OpenStack experience, and TripleO is already a daunting task for those
people (hell, it's daunting for people who _are_ experienced).

5) What does reimplementing all of our tested, well-understood Python
into a new YAML format gain us?  This is maybe the biggest thing I'm
missing from this whole discussion.  We lose a bunch of things (ease of
transition from other Python projects, excellent existing testing
framework, etc.), but what are we actually gaining other than the
ability to say that we use N + 1 OpenStack services?  Because we're way
past the point where "It's OpenStack deploying OpenStack" is sufficient
reason for people to pay attention to us.  We need less "Ooh, neat" and
more "Ooh, that's easy to use and works well."  It's still not clear to
me that Mistral helps in any way with the latter.

6) On the testing note, how do we test these workflows?  Do we know what
happens when step X fails?  How do we test that they handle it properly
in an automated and repeatable way?  In Python these are largely easy
questions to answer: unit tests.  How do you unit test YAML?  This is a
big reason I'm not even crazy about having Mistral on the back end of a
TripleO API.  We'd be going from code that we can test and prove works
in a variety of scenarios, to YAML that is tested and proven to work in
exactly the three scenarios we run in CI.  This is basically the same
situation we had with tripleo-incubator, and it was bad there too.

I dunno.  Maybe I'm too late to this party to have any impact on the
discussion, but I very much do not like the direction we're going and I
would be remiss if I didn't at least point out my concerns with it.

-Ben

On 01/13/2016 03:41 AM, Tzu-Mainn Chen wrote:

Hey all,

I realize now from the title of the other TripleO/Mistral thread [1] that
the discussion there may have gotten confused.  I think using Mistral for
TripleO 

Re: [openstack-dev] [heat] Client checking of server version

2016-01-06 Thread Jay Dobies

I ran into an issue in a review about moving environment resolution from
client to server [1]. It revolves around clients being able to access
older versions of servers (that's a pretty simplistic description; see
[2] for the spec).

Before the holiday, Steve Hardy and I were talking about the
complications involved. In my case, there's no good way to differentiate
an older server from a legitimate error.


Hmmm, it's true that you'll likely just get a 400 error, but I'd hope
that the error message is at least somewhat unique.


Unfortunately, it's not, but I don't think it's due to a Heat problem so 
much as just the nature of the issue. Here's what's happening.


New Client: doesn't do client-side environment resolution before sending 
it to the server.


Old Server: expects the environment to be fully populated and ignores 
the environment file(s) in the files dict.


The result is the server spits back an error saying that, in my 
scenario, there is no type mapping for jdob::Resource1.


The problem is, I get the exact same result for New Client + New Server 
+ incomplete environment files.


The reason I was looking for some sort of version checking is to avoid 
having logic that just says "Maybe it's because it's an old server, 
lemme resolve the environments and send the request again." It feels 
really wrong to trigger two create requests when it's the templates 
themselves that are wrong.



Since the API isn't versioned to the extent that we can leverage that


I mean... it totally is but so far we've chosen not to bump that
version. And we mostly got away with it because we were only adding
functionality. So far.


value, I was looking into using the template versions call. Something
along the lines of:

   supported_versions = hc.template_versions.list()
   version_nums = [i.to_dict()['version'].split('.')[1] for i in
supported_versions]
   mitaka_or_newer = [i for i in version_nums if i >= '2016-04-08']

Yes, I'm planning on cleaning that up before submitting it :)

What I'm wondering is if I should make this into some sort of
generalized utility method in the client, under the assumption that
we'll need this sort of check in the future for the same backward
compatibility requirements.

So a few questions:

1. Does anyone strongly disagree to checking supported template versions
as a way of determining the specifics of the server API.


Yes.

Template versions are supposed to be pluggable, and are explicitly under
control of the operator. We shouldn't be systematically inferring
anything about the server version based on this; in general there's no
causal relationship.


2. Does anything like this already exist that I can use?


Not really; there's the "heat build-info" command, but that is also
explicitly under the control of the operator (and is empty by default).


3. If not, any suggestions on where I should put it? I see a
heat.common.utils module but I'm not sure if there is a convention
against that module (or common in general) making live server calls.

Thanks :D


[1] https://review.openstack.org/#/c/239504/
[2] https://review.openstack.org/#/c/226157/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat] Client checking of server version

2016-01-06 Thread Jay Dobies



On 01/05/2016 04:37 PM, Steven Hardy wrote:

On Mon, Jan 04, 2016 at 03:53:07PM -0500, Jay Dobies wrote:

I ran into an issue in a review about moving environment resolution from
client to server [1]. It revolves around clients being able to access older
versions of servers (that's a pretty simplistic description; see [2] for the
spec).

Before the holiday, Steve Hardy and I were talking about the complications
involved. In my case, there's no good way to differentiate an older server
from a legitimate error.

Since the API isn't versioned to the extent that we can leverage that value,
I was looking into using the template versions call. Something along the
lines of:

   supported_versions = hc.template_versions.list()
   version_nums = [i.to_dict()['version'].split('.')[1] for i in
supported_versions]
   mitaka_or_newer = [i for i in version_nums if i >= '2016-04-08']

Yes, I'm planning on cleaning that up before submitting it :)

What I'm wondering is if I should make this into some sort of generalized
utility method in the client, under the assumption that we'll need this sort
of check in the future for the same backward compatibility requirements.

So a few questions:

1. Does anyone strongly disagree to checking supported template versions as
a way of determining the specifics of the server API.


Ok, so some valid concerns have been raised over deriving things using the
HOT version (although I do still wonder if the environment itself should be
versioned, just like the templates, then we could rev the environment
version and say it supports a list, vs changing anything in the API, but
that's probably a separate discussion).

Taking a step back for a moment, the original discussion was around
providing transparent access to the new interface via heatclient, but that
isn't actually a hard requirement - the old interface works fine for many
users, so we could just introduce a new interface (which would eventually
become the default, after all non-EOL heat versions released support the
new API argument):

Currently we do:

heat stack-create foo -f foo.yaml -e a.yaml -e b.yaml

And this implies some client-side resolution of the multiple -e arguments.

-e is short for "--environment-file", but we could introduce a new format,
e.g "-E", short for "--environment-files":

heat stack-create foo -f foo.yaml -E a.yaml -E b.yaml

This option would work the same way as the current interface, but it would
pass the files unmodified for resolution inside heat (by using the new API
format), and as it's opt-in, it's leaving all the current heatclient
interfaces alone without any internal fallback logic?


+1

My only concern is that the default isn't to exercise the "preferred" 
approach.


However, perhaps I'm viewing things wrong with that as being preferred 
instead of just an alternate for non-heatclient. IIRC, the code is 
largely the same, just being called from two separate places (client v. 
server), so it's not an issue of duplication or the actual logic growing 
stale. And it shouldn't really be an issue of the server-side path 
accidentally breaking since there is CI around it. So maybe my concerns 
are overblown.


It does feel weird to have to document something like that, trying to 
describe the differences between -e and -E, but I suppose if we mark -e 
as deprecated it should be understandable enough.


This also has the benefit of letting this code land without having to do 
a major implementation of micro-versions, so that's a plus :)




Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [heat] Client checking of server version

2016-01-06 Thread Jay Dobies



On 01/06/2016 12:05 PM, Zane Bitter wrote:

On 05/01/16 16:37, Steven Hardy wrote:

On Mon, Jan 04, 2016 at 03:53:07PM -0500, Jay Dobies wrote:

I ran into an issue in a review about moving environment resolution from
client to server [1]. It revolves around clients being able to access
older
versions of servers (that's a pretty simplistic description; see [2]
for the
spec).

Before the holiday, Steve Hardy and I were talking about the
complications
involved. In my case, there's no good way to differentiate an older
server
from a legitimate error.

Since the API isn't versioned to the extent that we can leverage that
value,
I was looking into using the template versions call. Something along the
lines of:

   supported_versions = hc.template_versions.list()
   version_nums = [i.to_dict()['version'].split('.')[1] for i in
supported_versions]
   mitaka_or_newer = [i for i in version_nums if i >= '2016-04-08']

Yes, I'm planning on cleaning that up before submitting it :)

What I'm wondering is if I should make this into some sort of
generalized
utility method in the client, under the assumption that we'll need
this sort
of check in the future for the same backward compatibility requirements.

So a few questions:

1. Does anyone strongly disagree to checking supported template
versions as
a way of determining the specifics of the server API.


Ok, so some valid concerns have been raised over deriving things using
the HOT version (although I do still wonder if the environment itself
should be versioned, just like the templates, then we could rev the
environment version and say it supports a list, vs changing anything in
the API, but that's probably a separate discussion).

Taking a step back for a moment, the original discussion was around
providing transparent access to the new interface via heatclient, but
that
isn't actually a hard requirement - the old interface works fine for many
users, so we could just introduce a new interface (which would eventually
become the default, after all non-EOL heat versions released support the
new API argument):

Currently we do:

heat stack-create foo -f foo.yaml -e a.yaml -e b.yaml

And this implies some client-side resolution of the multiple -e
arguments.

-e is short for "--environment-file", but we could introduce a new
format,
e.g "-E", short for "--environment-files":

heat stack-create foo -f foo.yaml -E a.yaml -E b.yaml

This option would work the same way as the current interface, but it
would
pass the files unmodified for resolution inside heat (by using the new
API
format), and as it's opt-in, it's leaving all the current heatclient
interfaces alone without any internal fallback logic?


That would certainly work, but it sounds like a usability/support
nightmare :(

Is there a reason we wouldn't consider bumping the API version to 1.1
for this? We'll have to figure out how to do it some time.


I started to look into the Nova specs on how they handle micro versions. 
I have a few other things on my plate I want to finish up this week, but 
I should be able to take a stab at a POC for it.



cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [heat] Client checking of server version

2016-01-04 Thread Jay Dobies
I ran into an issue in a review about moving environment resolution from 
client to server [1]. It revolves around clients being able to access 
older versions of servers (that's a pretty simplistic description; see 
[2] for the spec).


Before the holiday, Steve Hardy and I were talking about the 
complications involved. In my case, there's no good way to differentiate 
an older server from a legitimate error.


Since the API isn't versioned to the extent that we can leverage that 
value, I was looking into using the template versions call. Something 
along the lines of:


  supported_versions = hc.template_versions.list()
  version_nums = [i.to_dict()['version'].split('.')[1] for i in 
supported_versions]

  mitaka_or_newer = [i for i in version_nums if i >= '2016-04-08']

Yes, I'm planning on cleaning that up before submitting it :)

What I'm wondering is if I should make this into some sort of 
generalized utility method in the client, under the assumption that 
we'll need this sort of check in the future for the same backward 
compatibility requirements.


So a few questions:

1. Does anyone strongly disagree to checking supported template versions 
as a way of determining the specifics of the server API.


2. Does anything like this already exist that I can use?

3. If not, any suggestions on where I should put it? I see a 
heat.common.utils module but I'm not sure if there is a convention 
against that module (or common in general) making live server calls.


Thanks :D


[1] https://review.openstack.org/#/c/239504/
[2] https://review.openstack.org/#/c/226157/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-25 Thread Jay Dobies

I think at the same time we add a mechanism to distinguish between
internal and external parameters, we need to add something to indicate
required v. optional.

With a nested stack, anything that's not part of the top-level parameter
contract is defaulted. The problem is that it loses information on what
is a valid default v. what's simply defaulted to pass validation.


I thought the nested validation spec was supposed to handle that though?
  To me, required vs. optional should be as simple as "Does the parameter
definition have a 'default' key?  If yes, then it's optional, if no,
then it's required for the user to pass a value via a parameter or
parameter_default".  I realize we may not have been following that up to
now for various reasons, but it seems like Heat is already providing a
pretty explicit mechanism for marking params as required, so we ought to
use it.


Ya, I was mistaken here. Taking a look at the cinder-netapp.yaml, it 
looks like we're using this correctly:


...
  CinderNetappBackendName:
    type: string
    default: 'tripleo_netapp'
  CinderNetappLogin:
    type: string
  CinderNetappPassword:
    type: string
    hidden: true
...


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-23 Thread Jay Dobies

On 11/20/2015 07:05 PM, Ben Nemec wrote:

Thinking about this some more makes me wonder if we need a sample config
generator like oslo.config.  It would work off something similar to the
capabilities map, where you would say

SSL:
  templates:
    - puppet/extraconfig/tls/tls-cert-inject.yaml
  output:
    - environments/enable-ssl.yaml

And the tool would look at that, read all the params from
tls-cert-inject.yaml and generate the sample env file.  We'd have to be
able to do a few new things with the params in order for this to work:

- Need to specify whether a param is intended to be set as a top-level
param, parameter_defaults (which we informally do today with the "Can be
overridden by parameter_defaults" comment), or internal, to define params
that shouldn't be exposed in the sample config and are only intended as
an interface between templates.  There wouldn't be any enforcement of
the internal type, but Python relies on convention for its private
members so there's precedent. :-)


There is new functionality in Heat that will let you pass in a series of 
templates and environments and it will return:


- The list of top-level parameters, the same way template-validate 
always did

- A list of all nested parameters, keyed by resource.

Take a look at 
https://github.com/openstack/heat-specs/blob/master/specs/liberty/nested-validation.rst 
for details and an example.
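
Roughly, the output is shaped like this (an abridged sketch from memory, with
invented resource/parameter names -- see the spec for the authoritative
format):

  Description: The top-level template
  Parameters:
    ControllerCount: {...}
  NestedParameters:
    netapp_backend:             # keyed by resource name
      Type: cinder-netapp.yaml
      Parameters:
        CinderNetappLogin: {...}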


That's not entirely what you're getting at, I realize that. I'm glad to 
see you suggest a convention-based approach because I think that's the 
only way we're going to be able to convey some of this information.


I think at the same time we add a mechanism to distinguish between 
internal and external parameters, we need to add something to indicate 
required v. optional.


With a nested stack, anything that's not part of the top-level parameter 
contract is defaulted. The problem is that it loses information on what 
is a valid default v. what's simply defaulted to pass validation.


I've been noticing this more and more on the vendor integrations. They 
have parameters that are required (such as a username) and others that 
are less likely to be changed (I can't think of an example, but I think 
everyone can see where I'm going with this).


So I think there are two sorts of things (at least, I'm also thinking 
off the top of my head) we'd like this tool/sample file to convey:


- Parameters a user would want to change, as compared to those used for 
internal data shuffling
- Information on if the user must supply a value, as compared to 
parameters with an actual default
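
Purely as a strawman, a convention covering both axes might look like this
(the parameter_type comments are an invented convention, not an existing Heat
feature; EndpointMap is just an example of an internal param):

  parameters:
    CinderNetappLogin:
      type: string                # no default: the user must supply a value
      # parameter_type: external
    CinderNetappBackendName:
      type: string
      default: 'tripleo_netapp'   # a real default: safe to leave unset
      # parameter_type: external
    EndpointMap:
      type: json
      default: {}                 # defaulted only to pass validation
      # parameter_type: internal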


All that said, I dig this idea of a tool that would generate a skeleton 
environment file.



- There would have to be some way to pick out only certain params from a
template, since I think there are almost certainly features that are
configured using a subset of say puppet/controller.yaml which obviously
can't just take the params from an entire file.  Although maybe this is
an indication that we could/should refactor the templates to move some
of these optional params into their own separate files (at this point I
think I should take a moment to mention that this is somewhat of a brain
dump, so I haven't thought through all of the implications yet and I'm
not sure it all makes sense).



The nice thing about generating these programmatically is we would
formalize the interface of the templates somewhat, and it would be
easier to keep sample envs in sync with the actual implementation.


You could go so far as to put CI on top of it like we do with the oslo 
config stuff, which would be neat.



You'd never have to worry about someone adding a param to a file but
forgetting to update the env (or at least it would be easy to catch and
fix when they did, just run "tox -e genconfig").

I'm not saying this is a simple or short-term solution, but I'm curious
what people think about setting this as a longer-term goal, because as I
think our discussion in Tokyo exposed, we're probably going to have a
bit of an explosion of sample envs soon and we're going to need some way
to keep them sane.

Some more comments inline.

On 11/19/2015 10:16 AM, Steven Hardy wrote:

On Mon, Nov 16, 2015 at 08:15:48PM +0100, Giulio Fidente wrote:

On 11/16/2015 04:25 PM, Steven Hardy wrote:

Hi all,

I wanted to start some discussion re $subject, because it's been apparrent
that we have a lack of clarity on this issue (and have done ever since we
started using parameter_defaults).


[...]


How do people feel about this example, and others like it, where we're
enabling common, but not mandatory functionality?


At first I was thinking about something as simple as: "don't use top-level
params for resources which the registry doesn't enable by default".

It seems to be somewhat what we tried to do with the existing pluggable
resources.

Also, not to hijack the thread but I wanted to add another question related
to a similar issue:

   Is there a reason to prefer use of parameters: instead 

Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-19 Thread Jay Dobies

My personal preference is to say:

1. Any templates which are included in the default environment (e.g
overcloud-resource-registry-puppet.yaml), must expose their parameters
via overcloud-without-mergepy.yaml

2. Any templates which are included in the default environment, but via a
"noop" implementation *may* expose their parameters provided they are
common and not implementation/vendor specific.


I think this makes sense. The combination of these two represents what 
TripleO views as the common* API for deploying.

* "common" in the sense that you may not use all of them every time, but 
they are part of the expected deployments.
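
As a quick refresher, the two mechanisms side by side (parameter names
invented for the sketch):

  # 1) Top-level parameter: declared in overcloud-without-mergepy.yaml and
  #    explicitly wired down into each ResourceGroup that needs it.
  parameters:
    ControllerExtraConfig:
      type: json
      default: {}

  # 2) parameter_default in an environment: globally visible to every
  #    nested stack, with no wiring through the parent required.
  parameter_defaults:
    MyVendorApiKey: some-secret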


I still have concerns with us not treating this strongly enough as a 
versioned API, but we can discuss that in a different thread. That's 
more of a "when" we change the parameters v. the conventions on "how" we 
do it.



3. Any templates exposing vendor specific interfaces (e.g at least anything
related to the OS::TripleO::*ExtraConfig* interfaces) must not expose any
parameters via the top level template.

How does this sound?

This does mean we suffer some template bloat from (1) and (2), but it makes
the job of any UI or other tool requiring user input much easier, I think?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-16 Thread Jay Dobies

Hi all,

I wanted to start some discussion re $subject, because it's been apparent
that we have a lack of clarity on this issue (and have done ever since we
started using parameter_defaults).

Some context:

- Historically TripleO has provided a fairly comprehensive "top level"
   parameters interface, where many per-role and common options are
   specified, then passed in to the respective ResourceGroups on deployment

https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-without-mergepy.yaml#n14

The nice thing about this approach is it gives a consistent API to the
operator, e.g the parameters schema for the main overcloud template defines
most of the expected inputs to the deployment.

The main disadvantage is a degree of template bloat, where we wire dozens
of parameters into each ResourceGroup, and from there into whatever nested
templates consume them.
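
As a sketch of that wiring pattern (the resource and parameter names are
illustrative, not lifted verbatim from the templates):

parameters:
  ControllerFlavor:
    type: string
    default: baremetal

resources:
  Controller:
    type: OS::Heat::ResourceGroup
    properties:
      count: 1
      resource_def:
        type: OS::TripleO::Controller  # resolved via the resource_registry
        properties:
          Flavor: {get_param: ControllerFlavor}

Every input the nested template needs has to be declared at the top and
passed down explicitly, the way Flavor is here.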

- When we started adding interfaces (such as all the OS::TripleO::*ExtraConfig*
   interfaces), there was a need to enable passing arbitrary additional
   values to nested templates, with no way of knowing what they are (e.g to
   enable wiring in third-party pieces we have no knowledge of, or which
   require implementation-specific arguments which don't make sense for all
   deployments).

To do this, we made use of the heat parameter_defaults interface, which
(unlike normal parameters) have global scope (visible to all nested stacks,
without explicitly wiring in the values from the parent):

http://docs.openstack.org/developer/heat/template_guide/environment.html#define-defaults-to-parameters

The nice thing about this approach is its flexibility, any arbitrary
values can be provided without affecting the parent templates, and it can
allow for a terser implementation because you only specify the parameter
definition where it's actually used.
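
A minimal environment file showing the difference (parameter names
illustrative):

parameters:
  # only satisfies a parameter declared by the top-level template
  ControllerFlavor: baremetal
parameter_defaults:
  # picked up by any nested stack declaring a parameter with this name,
  # with no wiring through the parent templates
  rhel_reg_activation_key: my-key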

The main disadvantage of this approach is it becomes very much harder to
discover an API surface for the operator, e.g the parameters that must be
provided on deployment by any CLI/UI tools etc.  This has been partially
addressed by the new-for-liberty nested validation heat feature, but
there's still a bunch of unsolved complexity around how to actually consume
that data and build a coherent consolidated API for user interaction:

https://github.com/openstack/heat-specs/blob/master/specs/liberty/nested-validation.rst

My question is, where do we draw the line on when to use each interface?

My position has always been that we should only use parameter_defaults for
the ExtraConfig interfaces, where we cannot know what reasonable parameters
are.  And for all other "core" functionality, we should accept the increased
template verbosity and wire arguments in from overcloud-without-mergepy.

However we've got some patches which fall into a grey area, e.g this SSL
enablement patch:

https://review.openstack.org/#/c/231930/46/overcloud-without-mergepy.yaml

Here we're actually removing some existing (non functional) top-level
parameters, and moving them to parameter_defaults.

I can see the logic behind it, it does make the templates a bit cleaner,
but at the expense of discoverability of those (probably not
implementation dependent) parameters.

How do people feel about this example, and others like it, where we're
enabling common, but not mandatory functionality?

In particular I'm keen to hear from Mainn and others interested in building
UIs on top of TripleO as to which is best from that perspective, and how
such arguments may be handled relative to the capabilities mapping proposed
here:

https://review.openstack.org/#/c/242439/

Thanks!

Steve



(in re-reading this, I realize I'm not so much providing an answer to 
how I feel about the example as adding some more thoughts in general; 
apologies for that)


I see there being a few issues with the current approach:

- I'll get this one out of the way first, even though it's not the 
biggest issue. The name 'parameter_defaults' tends to confuse new 3rd 
party integrators since we're not using it as a default per se. I 
understand from Heat's point of view it's defaulting a parameter value, 
but from the user's standpoint, they are setting an actual value to be 
used. Perhaps this can be solved with better docs, and I largely mention 
it because I can see most of what you wrote in here turning into 
documentation, so it'll be good to have it mentioned.


- Back to the point of your e-mail, there are two ways to view it.

-- If you define too few parameters at the top level, you end up with a 
lot of comments like the following inside of nested templates:


ControlPlaneSubnetCidr: # Override this via parameter_defaults

or

# To be defined via a local or global environment in parameter_defaults
  rhel_reg_activation_key:
    type: string

There are other examples too, but the thing to note is that we've so far 
been pretty good about adding those comments. It's not a programmatic 
marker, which may be a problem for a UX, but at least the 

Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-16 Thread Jay Dobies



On 11/10/2015 10:08 AM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1

If the exercise is to move the logic behind an API, let's not have two 
avenues into that logic.



b) Put it in its own repo, tripleo-api


The first option made a lot of sense to people on IRC, as the proposed
API is a very thin layer that's bound closely to the code in tripleo-
common.  The major objection is that renaming is not trivial; however
it was mentioned that renaming might not be *too* bad... as long as
it's done sooner rather than later.

What do people think?


Thanks,
Tzu-Mainn Chen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Shared code between server and client

2015-10-27 Thread Jay Dobies

On 23/10/15 05:35, Robert Collins wrote:

My 2c - if it's a stable API in the client, and can be kept stable,
there's no problem.


+1


Ok, forgive me for sounding dumb here (and changing the topic of the 
thread somewhat), but what do we consider a stable client API? Is it as 
broad as any methods in heatclient that aren't prefixed with _? Or do we 
scope it only to specific areas (e.g. common, RPC client) and anything 
else is considered "use at your own risk because we may need to change it"?


My guess is that it's the former given other sentiments I've heard 
around OpenStack, but I wanted to explicitly ask anyway.


Thanks :)



-Rob

On 23 October 2015 at 08:49, Jay Dobies <jason.dob...@redhat.com> wrote:

I'm working on moving the functionality for merging environments from
the
client into the server [1]. I've only superficially looked at
template_utils.py (in heatclient) but I'm guessing there is stuff in
there I
will want to use server-side.

The server has a requirement on heatclient, but I'm not sure what the
convention is for using code in it. Can I directly call into a module in
heatclient/common from the server or is the client dependency only
intended
to be used through the client-facing APIs?

[1] https://blueprints.launchpad.net/heat/+spec/multi-environments

Thanks :)

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Shared code between server and client

2015-10-27 Thread Jay Dobies



On 10/27/2015 11:09 AM, Jay Dobies wrote:

On 23/10/15 05:35, Robert Collins wrote:

My 2c - if its a stable API in the client, and can be kept stable,
there's no problem.


+1


Ok, forgive me for sounding dumb here (and changing the topic of the
thread somewhat), but what do we consider a stable client API? Is it as
broad as any methods in heatclient that aren't prefixed with _? Or do we




scope it only to specific areas (e.g. common, RPC client)


Sorry, I merged two concepts in there. I was referring to common in the 
client and rpc_client in the server. It's possible the same answer 
doesn't apply to both; I'm not sure if we'd want to support someone 
using the server's RPC client code.



else is considered "use at your own risk because we may need to change it"?

My guess is that it's the former given other sentiments I've heard
around OpenStack, but I wanted to explicitly ask anyway.

Thanks :)



-Rob

On 23 October 2015 at 08:49, Jay Dobies <jason.dob...@redhat.com> wrote:

I'm working on moving the functionality for merging environments from
the
client into the server [1]. I've only superficially looked at
template_utils.py (in heatclient) but I'm guessing there is stuff in
there I
will want to use server-side.

The server has a requirement on heatclient, but I'm not sure what the
convention is for using code in it. Can I directly call into a
module in
heatclient/common from the server or is the client dependency only
intended
to be used through the client-facing APIs?

[1] https://blueprints.launchpad.net/heat/+spec/multi-environments

Thanks :)

__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Shared code between server and client

2015-10-22 Thread Jay Dobies
I'm working on moving the functionality for merging environments from 
the client into the server [1]. I've only superficially looked at 
template_utils.py (in heatclient) but I'm guessing there is stuff in 
there I will want to use server-side.


The server has a requirement on heatclient, but I'm not sure what the 
convention is for using code in it. Can I directly call into a module in 
heatclient/common from the server or is the client dependency only 
intended to be used through the client-facing APIs?


[1] https://blueprints.launchpad.net/heat/+spec/multi-environments

Thanks :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] resource_registry base_url

2015-10-22 Thread Jay Dobies
In looking through the environment file loading code, I keep seeing a 
check for base_url in the resource registry. It looks like a way to have 
the registry entries only be the filenames (I suppose relative filenames 
work as well) instead of having to enter the full path every time. The 
base_url would be used as the root URL for those filenames when loading 
them.


Thing is, I can't find any reference to that in the docs. I did a search 
for resource_registry, but the only thing I can find is [1] which 
doesn't talk about base_url.


Is this something that's still supported or was it "turned off" (so to 
speak) by removing the docs about it so users didn't know to use it? Is 
the syntax to just sit it side by side with the resource definitions, 
similar to:


resource_registry:
  "base_url": /home/jdob/my_templates
  "OS::Nova::Server": my_nova.yaml

Or am I just totally missing where in the docs it's talked about (which 
is also terribly possible)?


[1] 
http://docs.openstack.org/developer/heat/template_guide/environment.html?highlight=resource_registry


Thanks :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Jay Dobies
I forget where we left things at the last meeting with regard to whether 
or not there should be a blueprint on this. I was going to work on some 
during some downtime but I wanted to make sure I wasn't overlapping with 
what others may be converting (it's more time consuming than I anticipated).


Any thoughts on how to track it?

Thanks :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Jay Dobies
This sounds good, I was hoping it'd be acceptable to use etherpad. I 
filed a blueprint [1] and an etherpad [2]; I'm anticipating using the 
etherpad much more regularly to track which files are being worked on 
or completed.


[1] https://blueprints.launchpad.net/heat/+spec/mox-to-mock-conversion
[2] https://etherpad.openstack.org/p/heat-mox-to-mock

Thanks for the guidance :)

On 10/09/2015 12:42 PM, Steven Hardy wrote:

On Fri, Oct 09, 2015 at 09:06:57AM -0400, Jay Dobies wrote:

I forget where we left things at the last meeting with regard to whether or
not there should be a blueprint on this. I was going to work on some during
some downtime but I wanted to make sure I wasn't overlapping with what
others may be converting (it's more time consuming than I anticipated).

Any thoughts on how to track it?


I'd probably suggest raising either a bug or a blueprint (not spec), then
link from that to an etherpad where you can track all the tests requiring
rework, and who's working on them.

"it's more time consuming than I anticipated" is pretty much my default
response for anything to do with heat unit tests btw, good luck! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Core reviewers for python-tripleoclient and tripleo-common

2015-09-10 Thread Jay Dobies

On 09/10/2015 10:06 AM, James Slagle wrote:

TripleO has added a few new repositories, one of which is
python-tripleoclient[1], the former python-rdomanager-oscplugin.

With the additional repositories, there is an additional review burden
on our core reviewers. There is also the fact that folks who have been
working on the client code for a while when it was only part of RDO
are not TripleO core reviewers.

I think we could help with the additional burden of reviews if we made
two of those people core on python-tripleoclient and tripleo-common
now.

Specifically, the folks I'm proposing are:
Brad P. Crochet 
Dougal Matthews 


+1 to both. I've seen a lot of Dougal's reviews and his Python knowledge 
is excellent.



The options I see are:
- keep just 1 tripleo acl, and add additional folks there, with a good
faith agreement not to +/-2,+A code that is not from the 2 client
repos.


+1 to this. I feel like it encourages cross pollination into other 
tripleo repos (we could use the eyes on THT) without having to jump 
through extra hoops as their involvement with them increases.



- create a new gerrit acl in project-config for just these 2 client
repos, and add folks there as needed. the new acl would also contain
the existing acl for tripleo core reviewers
- neither of the above options - don't add these individuals to any
TripleO core team at this time.

The first is what was more or less done when Tuskar was brought under
the TripleO umbrella to avoid splitting the core teams, and it's the
option I'd prefer.

TripleO cores, please reply here with your vote from the above
options. Or, if you have other ideas, you can share those as well :)

[1] https://review.openstack.org/#/c/215186/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-08 Thread Jay Dobies
I like where this is going. I've been asked a number of times where to 
put things and we never had a solid convention. I like the idea of 
having that docced somewhere.


I like either of the proposed solutions. My biggest concern is that they 
don't capture how you actually use them. I know that was the point of 
your e-mail; we don't yet have the Heat constructs in place for the 
templates to convey that information.


What about if we adopt the directory structure model and strongly 
request a README.md file in there? It's similar to the image elements 
model. We could offer a template to fill out or leave it open ended, but 
the purpose would be to specify:


- Installation instructions (e.g. "set the resource registry namespace 
for Blah to point to this file" or "use the corresponding environment 
file foo.yaml")
- Parameters that can/should be specified via parameter_defaults. I'm 
not saying we add a ton of documentation in there that would be 
duplicate of the actual parameter definitions, but perhaps just a list 
of the parameter names. That way, a user can have an idea of what 
specifically to look for in the template parameter list itself.


That should be all of the info that we'd like Heat to eventually provide 
and hold us over until those discussions are finished.


On 09/08/2015 08:20 AM, Jiří Stránský wrote:

On 8.9.2015 13:47, Jiří Stránský wrote:

Apart from "cinder" and "neutron-ml2" directories, we could also have a
"combined" (or sth similar) directory for env files which combine
multiple other env files. The use case which i see is for extra
pre-deployment configs which would be commonly used together. E.g.
combining Neutron and Horizon extensions of a single vendor [4].


Ah i mixed up two things in this paragraph -- env files vs. extraconfig
nested stacks. Not sure if we want to start namespacing the extraconfig
bits in a parallel manner. E.g.
"puppet/extraconfig/pre_deploy/controller/cinder",
"puppet/extraconfig/pre_deploy/controller/neutron-ml2". It would be
nice, especially if we're sort of able to map the extraconfig categories
to env file categories most of the time. OTOH the directory nesting is
getting quite deep there :)


That was my thought too, that the nesting is getting a bit deep. I also 
don't think we should enforce the role in the directory structure as 
we've already seen instances of things that have to happen on both 
controller and compute.




J.


[4]
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Encapsulating logic and state in the client

2015-08-25 Thread Jay Dobies

Thinking about this further, the interesting question to me is how much
logic we aim to encapsulate behind an API. For example, one of the simpler
CLI commands we have in RDO-Manager (which is moving upstream[1]) is to
run introspection on all of the Ironic nodes. This involves a series of
commands that need to be run in order and it can take upwards of 20
minutes depending how many nodes you have. However, this does just
communicate with Ironic (and ironic inspector) so is it worth hiding
behind an API? I am inclined to say that it is so we can make the end
result as easy to consume as possible but I think it might be difficult
to draw the line in some cases.

The question then arises: what would this API look like? Generally
speaking I feel like it looks like a workflow API; it shouldn't offer
many (or any?) unique features, rather it manages the process of
performing a series of operations across multiple APIs. There have been
attempts at doing this within OpenStack before in a more general case;
I wonder what we can learn from those.


This is where my head is too. The OpenStack on OpenStack thing means we 
get to leverage the existing tools and users can leverage their existing 
knowledge of the products.


But what I think an API will provide is guidance on how to achieve that 
(the big argument there being whether this should be done in an API or 
through documentation). It coaches new users and integrations on how to 
make all of the underlying pieces play together to accomplish certain 
things.


To your question on that ironic call, I'm split on how I feel.

On one hand, I really like the idea of the TripleO API being able to 
support an OpenStack deployment entirely on its own. You may want to go 
directly to some undercloud tools for certain edge cases, but for the 
most part you should be able to accomplish the goal of deploying 
OpenStack through the TripleO APIs.


But that's not necessarily what TripleO wants to be. I've seen the 
sentiment of it only being tools for deploying OpenStack, in which case 
a single API isn't really what it's looking to do. I still think we need 
some sort of documentation to guide integrators instead of saying "look 
at the REST API docs for these 5 projects", but that documentation is 
lighter weight than having pass-through calls in a TripleO API.





Unfortunately, as undesirable as these are, they're sometimes necessary
in the world we currently live in. The only long-term solution to this
is to put all of the logic and state behind a ReST API where it can be
accessed from any language, and where any state can be stored
appropriately, possibly in a database. In principle that could be
accomplished either by creating a tripleo-specific ReST API, or by
finding native OpenStack undercloud APIs to do everything we need. My
guess is that we'll find a use for the former before everything is ready
for the latter, but that's a discussion for another day. We're not there
yet, but there are things we can do to keep our options open to make
that transition in the future, and this is where tripleo-common comes in.

I submit that anything that adds logic or state to the client should be
implemented in the tripleo-common library instead of the client plugin.
This offers a couple of advantages:

- It provides a defined boundary between code that is CLI-specific and
code that is shared between the CLI and GUI, which could become the
model for a future ReST API once it has stabilised and we're ready to
take that step.
- It allows for an orderly transition when that happens - we can have a
deprecation period during which the tripleo-common library is imported
into both the client and the (future, hypothetical) ReST API.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[1]: https://review.openstack.org/#/c/215186/3/gerrit/projects.yaml,cm

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities

2015-06-29 Thread Jay Dobies



FWIW, I liked what you were proposing in the other thread. In thinking
about the deployment flow in the Tuskar-UI, I think it would enable
exposing and setting the nested stack parameters easily (you choose
various resources as displayed in a widget, click a reload/refresh
button, and new parameters are exposed).


I agree, I was thinking something similar too. There's a step to pick 
the larger decisions (implementations of resource types) and then a 
refresh that will ask Heat to recalculate the full set of parameters.



What might also be neat is if something like heatclient then had
support to automatically generate stub yaml environment files based on
the output of the template-validate. So it could spit out a yaml file
that had a parameter_defaults: section with all the expected
parameters and their default values, that way the user could then just
edit that stub to complete the required inputs.


This is similar to what Tuskar API was looking to do. I think it'd be 
awesome to see Heat support it natively.
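
For instance, a stub generated from the template-validate output might
look like this (a sketch only; names and comments are illustrative):

# stub generated via `heat template-validate ... --show-nested`
parameter_defaults:
  # Json: Child ExtraConfig
  ChildConfig: []
  # String: no default, a value must be supplied
  ControllerFlavor: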



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities

2015-06-29 Thread Jay Dobies

We could do likewise in the environment:

resource_registry:
   OS::TripleO::ControllerConfig: puppet/controller-config.yaml
   ...
   constraints:
     OS::TripleO::ControllerConfig:
       - allowed_values: [puppet/controller-config.yaml,
                          foo/other-config.yaml]

These constraints would be enforced at stack validation time such that the
environment would be rejected if the optional constraints were not met.


I like this approach.

Originally, I was thinking it might be cleaner to encode the
relationship in the opposite direction. Something like this in
puppet/controller-config.yaml:

implements:
   OS::TripleO::ControllerConfig

But then, you leave it up to the external tools (a UI, etc) to know
how to discover these implementing templates. If they're explicitly
listed in a list as in your example, that helps UI's / API's more
easily present these choices. Maybe it could work both ways.


Yeah the strict interface definition is basically the TOSCA approach
referenced by Thomas in my validation thread, and while I'm not opposed to
that, it just feels like overkill for this particular problem.

I don't see any mutually exclusive logic here, we could probably consider
adding resource_registry constraints and still add interfaces later if it
becomes apparent we really need them - atm I'm just slightly wary of adding
more complexity to already complex templates, and also on relying on deep
introspection to match up interfaces (when we've got no deep validation
capabilities at all in heat atm) vs some simple rules in the environment.

Sounds like we've got enough consensus on this idea to be worth raising a
spec, I'll do that next week.


I had originally been thinking of it like slagle describes, from the 
child up to the parent as well. What I like about that approach is that 
it achieves a more pluggable model when you think about extensions that 
aren't accepted or applicable in TripleO upstream.


If someone comes along and adds a new ControllerConfig to your above 
example, they have to edit whatever environment you're talking about 
that defines the constraints (I'll call it overcloud-something.yaml for now).


This becomes a problem from a packaging point of view, especially when 
you factor in non-TripleO integrators (without revealing too much inside 
baseball, think partner integrations). How do I add in an extra package 
(RPM, DEB, whatever) that provides that ControllerConfig and have it 
picked up as a valid option?


We don't want to be editing the overcloud-something.yaml because it's 
owned by another package and there's the potential for conflicts if 
multiple extra implementations start stepping on each other.


An interface/discovery sort of mechanism, which I agree is more complex, 
would be easier to work with in those cases.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities

2015-06-29 Thread Jay Dobies

I had originally been thinking of it like slagle describes, from the
child up to the parent as well. What I like about that approach is that
it achieves a more pluggable model when you think about extensions that
aren't accepted or applicable in TripleO upstream.

If someone comes along and adds a new ControllerConfig to your above
example, they have to edit whatever environment you're talking about
that defines the constraints (I'll call it overcloud-something.yaml for
now).

This becomes a problem from a packaging point of view, especially when
you factor in non-TripleO integrators (without revealing too much inside
baseball, think partner integrations). How do I add in an extra package
(RPM, DEB, whatever) that provides that ControllerConfig and have it
picked up as a valid option?

We don't want to be editing the overcloud-something.yaml because it's
owned by another package and there's the potential for conflicts if
multiple extra implementations start stepping on each other.

An interface/discovery sort of mechanism, which I agree is more complex,
would be easier to work with in those cases.


I'm effectively replying to my own e-mail here, but I've expressed these 
thoughts on the spec and it'd probably be better to continue this train 
of thought there:


https://review.openstack.org/#/c/196656/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Heat] Tuskar v. Heat responsibilities

2015-06-23 Thread Jay Dobies
I didn't want to hijack Steve Hardy's thread about the recursive 
validation, but I wanted to summarize the needs that Tuskar and the UI 
have been trying to answer and some of the problems we ran into.


I think it's fairly common knowledge now that Tuskar and the THT 
templates diverged over the past few months, so I won't rehash it. If 
you need a summary of what happened, look here: 
https://jdob.fedorapeople.org/tuskar-heat.jpg


Below are some of the needs that the Tuskar UI in general has when 
working with the TripleO Heat Templates. I'm hoping we can come up with 
a decent list and use that to help drive what belongs in Heat v. what 
belongs elsewhere, and ultimately what that elsewhere actually is.



= Choosing Component Implementations =

== Background ==

I'm already off to a bad start, since the word "component" isn't 
actually a term in this context. What I'm referring to is the fact that 
we are starting to see what is almost a plugin model in the THT templates.


Previously, we had assumed that all of the overcloud configuration would 
be done through parameters. This is no longer the case as the 
resource_registry is used to add certain functionality.


For example, in overcloud-resource-registry-puppet.yaml, we see:

 # set to controller-config-pacemaker.yaml to enable pacemaker
 OS::TripleO::ControllerConfig: puppet/controller-config.yaml

That's a major overcloud configuration setting, but that choice isn't 
made through a parameter. It's in a different location and a different 
mechanism entirely.
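
So to enable pacemaker, the operator passes an environment file that
remaps the resource, something like this (illustrative):

resource_registry:
  OS::TripleO::ControllerConfig: puppet/controller-config-pacemaker.yaml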


Similarly, enabling a Netapp backend for Cinder is done by setting a 
resource_registry entry to change the CinderBackend template [1]. This 
is a slightly different case conceptually than HA since the original 
template being overridden is a noop [2], but the mechanics of how to set 
it are the same.
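
Mechanically, that is another registry remap along these lines (the
resource name and path are illustrative; the actual entries in
tripleo-heat-templates may differ):

resource_registry:
  # replaces the noop default with the Netapp-specific implementation
  OS::TripleO::ControllerExtraConfigPre: puppet/extraconfig/pre_deploy/controller/cinder-netapp.yaml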


There are also a number of pre and post hooks that exist in the 
overcloud template that we are seeing more and more implementations of. 
RHEL registration is implemented as such a hook [3].


I'm drawing a difference here between fundamental configuration changes 
(HA v. non-HA) and optional additions (RHEL registration). Again, 
mechanically they are implemented as resource_registry substitutions, 
though from a UI standpoint we'd likely want to treat them differently. 
Whether or not that difference is actually captured by the templates 
themselves or is purely in the UI is open to debate.


== Usage in TripleO ==

All of the examples I mentioned above have landed upstream and the Heat 
features necessary to facilitate them all exist.


What doesn't exist is a way to manipulate the resource_registry. Tuskar 
doesn't have APIs for that level of changes; it assumed all 
configuration changes would be through parameters and hasn't yet had 
time to add in support for dorking with the registry in this fashion.


While, technically, all of the resource_registry entries can be 
overridden, there are only a few that would make sense for a user to 
want to configure (I'm not talking about advanced users writing their 
own templates).


On top of that, only certain templates can be used to fulfill certain 
resource types. For instance, you can't point CinderBackend to 
rhel-registration.yaml. That information isn't explicitly captured by 
Heat templates. I suppose you could inspect usages of a resource type in 
overcloud to determine the API of that type and then compare that to 
possible implementation templates' parameter lists to figure out what is 
compatible, but that seems like a heavy-weight approach.


I mention that because part of the user experience would be knowing 
which resource types can have a template substitution made and what 
possible templates can fulfill it.


== Responsibility ==

Where should that be implemented? That's a good question.

The idea of resolving resource type uses against candidate template 
parameter lists could fall under the model Steve Hardy is proposing of 
having Heat do it (he suggested the validate call, but this may be 
leading us more towards template inspection sorts of APIs supported by Heat).


It is also possibly an addition to HOT, to somehow convey an interface 
so that we can more easily programmatically look at a series of templates 
and understand how they play together. We used to be able to use the 
resource_registry to understand those relationships, but that's not 
going to work if we're trying to find substitutions into the registry.


Alternatively, if Heat/HOT has no interest in any of this, this is 
something that Tuskar (or a Tuskar-like substitute) will need to solve 
going forward.



= Consolidated Parameter List =

== Background ==

This is what Steve was getting at in his e-mail. I'll rehash the issue 
briefly.


We used to be able to look at the parameters list in the overcloud 
template and know all of the parameters that need to be specified to 
configure the overcloud.


The parameter passing is pretty strict, so if overcloud 

Re: [openstack-dev] [heat][tripleo]Recursive validation for easier composability

2015-06-22 Thread Jay Dobies



On 06/22/2015 12:19 PM, Steven Hardy wrote:

Hi all,

Lately I've been giving some thought to how we might enable easier
composability, and in particular how we can make it easier for folks to
plug in deeply nested optional extra logic, then pass data in via
parameter_defaults to that nested template.

Here's an example of the use-case I'm describing:

https://review.openstack.org/#/c/193143/5/environments/cinder-netapp-config.yaml

Here, we want to allow someone to easily turn on an optional configuration
or feature, in this case a netapp backend for cinder.


I think the actual desired goal is bigger than just optional 
configuration. I think it revolves more around choosing a nested stack 
implementation for a resource type and how to manage custom parameters 
for that implementation. We're getting into the territory here of having 
a parent stack defining an API that nested stacks can plug into. I'd 
like to have some sort of way of deriving that information instead of 
having it be completely relegated to outside documentation (but I'm 
getting off topic; at the end I mention how I want to do a better write 
up of the issues Tuskar has faced and I'll elaborate more there).



The parameters specific to this feature/configuration only exist in the
nested cinder-netapp-config.yaml template, then parameter_defaults are used
to wire in the implementation specific data without having to pass the
values through every parent template (potentially multiple layers of
nesting).

This approach is working out OK, but we're missing an interface which makes
the schema for parameters over the whole tree available.



This is obviously
a problem, particularly for UI's, where you really need a clearly defined
interface for what data is required, what type it is, and what valid values
may be chosen.


I think this is going to be an awesome addition to Heat. As you alluded 
to, we've struggled with this in TripleO. The parameter_defaults works 
to circumvent the parameter passing, but it's rough from a user 
experience point of view since getting the unified list of what's 
configurable is difficult.



I'm considering an optional additional flag to our template-validate API
which allows recursive validation of a tree of templates, with the data
returned on success to include a tree of parameters, e.g:

heat template-validate -f parent.yaml -e env.yaml --show-nested
{
  "Description": "The Parent",
  "Parameters": {
    "ParentConfig": {
      "Default": [],
      "Type": "Json",
      "NoEcho": "false",
      "Description": "",
      "Label": "ExtraConfig"
    },
    "ControllerFlavor": {
      "Type": "String",
      "NoEcho": "false",
      "Description": "",
      "Label": "ControllerFlavor"
    }
  },
  "NestedParameters": {
    "child.yaml": {
      "Parameters": {
        "ChildConfig": {
          "Default": [],
          "Type": "Json",
          "NoEcho": "false",
          "Description": "",
          "Label": "Child ExtraConfig"
        }
      }
    }
  }
}


Are you intending on resolving parameters passed into a nested stack 
from the parent against what's defined in the nested stack's parameter 
list? I'd want NestedParameters to only list things that aren't already 
being specified to the parent.
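
For example (a sketch reusing the names above), if parent.yaml already
wires the child's input itself:

resources:
  child:
    type: child.yaml
    properties:
      ChildConfig: {get_param: ParentConfig}

then ChildConfig is already satisfied by the parent and shouldn't show
up again under NestedParameters as something the user must supply.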


Specifically with regard to the TripleO Heat templates, there is still a 
lot of logic that needs to be applied to properly divide out parameters. 
For example, there are some things passed in from the parents to the 
nested stacks that are kinda namespaced by convention, but it's not a 
hard convention. So to try to group the parameters by service, we'd have 
to look at a particular NestedParameters section and then also add in 
anything from the parent that applies to that service. I don't believe 
we can use parameter groups to correlate them (we might be able to, or 
that might be its own improvement).


I realize that's less of a Heat issue and more of a THT issue, but I 
figured I'd bring it up anyway.



This implies that we also need to pass the files map to the validate API,
like we currently do for create (we already pass the environment, although
it's not really doing much beyond providing parameters for the parent stack
AFAICT, we completely skip validating TemplateResources because the files
aren't passed):

https://github.com/openstack/heat/blob/master/heat/engine/service.py#L873

Before I go ahead and spend time writing a spec/code for this, what do
folks think about enhancing validate like this?  Is there an alternative,
for example adding a parameters schema output to stack-preview?


For what it's worth, I'd rather see this as a spec before code. There 
are a lot of complications we hit in Tuskar in trying to make 
configuring the overcloud through THT user-friendly. This is one part of 
it, but there are others. I'd like to have them all talked out and see 
what the larger group of changes are.


For example, take the cinder-netapp-config example you mentioned. That 
can only be used to fulfill a specific resource type in the 

Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Jay Dobies



On 05/07/2015 06:01 AM, Giulio Fidente wrote:

On 05/07/2015 11:15 AM, marios wrote:

On 07/05/15 05:32, Dan Prince wrote:


[..]


Something like this:

https://review.openstack.org/#/c/180833/


+1 I like this as an idea. Given we've already got quite a few reviews
in flight making changes to overcloud_controller.pp (we're still working
out how to, and enabling services) I'd be happier to let those land and
have the tidy up once it settles (early next week at the latest) -
especially since there's some working out+refactoring to do still,


+1 on not blocking ongoing work

as of today a split would cause the two .pp to have a lot of duplicated
data, making them not better than one with the ifs IMHO


I'm with Giulio here. I'm not as strong on my puppet as everyone else, 
but I don't see the current approach as duplication, it's just passing 
in different configurations.



we should probably move out of the existing .pp the duplicated parts
first (see my other email on the matter)


My bigger concern is Tuskar. It has the ability to set parameters. It 
hasn't moved to a model where you're configuring the overcloud through 
selecting entries in the resource registry. I can see that making sense 
in the future, but that's going to require API changes.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Jay Dobies

Something like this:

https://review.openstack.org/#/c/180833/


I'm not convinced this is a good user experience though. You have 
configuration effectively in two places. If you want to enable Galera, 
or enable ceph storage, it's a parameter. But not pacemaker. To enable 
that, you have to look in the resource registry instead.





I have two mild concerns about this approach:

1) We'd duplicate the logic (or at least the inclusion logic) for the
common parts in two places, making it prone for the two .pp variants to
get out of sync. The default switches from "if I want to make a
difference between the two variants, I need to put in a conditional" to
"if I want to *not* make a difference between the two variants, I need
to put this / include this in two places".


The goal for these manifests is that we would just be doing 'include's
for various stackforge puppet modules. If we have
'include ::glance::api' in two places that doesn't really bother me.
Agree that it isn't ideal but I don't think it bothers me too much. And
the benefit is we can get rid of pacemaker conditionals for all the
things.



2) If we see some other bit emerging in the future, which would be
optional but at the same time omnipresent in a similar way as
Pacemaker is, we'll see the same if/else pattern popping up. Using the
same solution would mean we'd have 4 .pp files (a 2x2 matrix) doing the
same thing to cover all scenarios. This is a somewhat hypothetical
concern at this point, but it might become real in the future (?).


Sure. It could happen. But again maintaining all of those in a single
file could be quite a mess too. And if we are striving to set all or our
Hiera data in Heat (avoiding use of some of the puppet functions we now
make use of like split, etc) this would further de-duplicate it I think.

Again having duplication that includes just the raw puppet classes
doesn't bother me too much.





If we were to split out the controller into two separate templates I
think it might be appropriate to move a few things into puppet-tripleo
to de-duplicate a bit. Things like the database creation for example.
But probably not all of the services... because we are trying as much as
possible to use the stackforge puppet modules directly (and not our own
composition layer).


I think our restraint from having a composition layer (extracting things
into puppet-tripleo) is what's behind my concern no. 1 above. I know one
of the arguments against having a composition layer is that it makes
things less hackable, but if we could amend puppet modules without
rebuilding or altering the image, it should mitigate the problem a bit
[1]. (It's almost a matter that would deserve a separate thread though :) )



I think this split is a good compromise and would probably even speed up
the implementation of the remaining pacemaker features too. And removing
all the pacemaker conditionals we have from the non-pacemaker version
puts us back in a reasonably clean state I think.

Dan



An alternative approach could be something like:

if hiera('step') >= 2 {
  include ::tripleo::mongodb
}

and move all the mongodb related logic to that class and let it deal
with both pacemaker and non-pacemaker use cases. This would reduce the
stress on the top-level .pp significantly, and we'd keep things
contained in logical units. The extracted bits will still have
conditionals but it's going to be more manageable because the bits will
be a lot smaller. So this would mean splitting up the manifest per
service rather than based on pacemaker on/off status. This would require
more extraction into puppet-tripleo though, so it kinda goes against the
idea of not having a composition layer. It would also probably consume a
bit more time to implement initially and be more disruptive to the
current state of things.

At this point i don't lean strongly towards one or the other solution, i
just want us to have an option to discuss and consider benefits and
drawbacks of both, so that we can take an informed decision. I think i
need to let this sink in a bit more myself.


Cheers

Jirka

[1] https://review.openstack.org/#/c/179177/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Core reviewer update proposal

2015-05-05 Thread Jay Dobies



On 05/05/2015 07:57 AM, James Slagle wrote:

Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO Core.

Giulio has been an active member of our community for a while. He
worked on the HA implementation in the elements and recently has been
making a lot of valuable contributions and reviews related to puppet
in the manifests, heat templates, ceph, and HA.

Steve Hardy has been instrumental in providing a lot of Heat domain
knowledge to TripleO and his reviews and guidance have been very
beneficial to a lot of the template refactoring. He's also been
reviewing and contributing in other TripleO projects besides just the
templates, and has shown a solid understanding of TripleO overall.

180 day stats:
| gfidente | 208   0  42 166   0   0   79.8% | 16 (  7.7%) |
|  shardy  | 206   0  27 179   0   0   86.9% | 16 (  7.8%) |

TripleO cores, please respond with +1/-1 votes and any
comments/objections within 1 week.


+1

They've both been huge in the development of the THT templates and the 
puppet integration over the past few months.



Giulio and Steve, also please do let me know if you'd like to serve on
the TripleO core team if there are no objections.

I'd also like to give a heads-up to the following folks whose review
activity is very low for the last 90 days:
|   tomas-8c8 **   |   8   0   0   0   8   2   100.0% |   0 (  0.0%) |
|      lsmola **   |   6   0   0   0   6   5   100.0% |   0 (  0.0%) |
|        cmsj **   |   6   0   2   0   4   2    66.7% |   0 (  0.0%) |
|    jprovazn **   |   1   0   1   0   0   0     0.0% |   0 (  0.0%) |
|   jonpaul-sullivan **|  no activity |

Helping out with reviewing contributions is one of the best ways we
can make good forward progress in TripleO. All of the above folks are
valued reviewers and we'd love to see you review more submissions. If
you feel that your focus has shifted away from TripleO and you'd no
longer like to serve on the core team, please let me know.

I also plan to remove Alexis Lee from core, who previously has
expressed that he'd be stepping away from TripleO for a while. Alexis,
thank you for reviews and contributions!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Building images separation and moving images into right place at right time

2015-04-17 Thread Jay Dobies
Have you seen Dan's first steps towards splitting the overcloud image 
building out of devtest_overcloud? It's not the same thing that you're 
talking about, but it might be a step in that direction.


https://review.openstack.org/#/c/173645/

On 04/17/2015 09:50 AM, Jaromir Coufal wrote:

Hi All,

at the moment we are building discovery, deploy and overcloud images all
at once. Then we make the user deal with uploading all the images in one step.

The user should not be exposed to discovery/deploy images. This should
happen automatically for the user during undercloud installation as a
post-config step, so that the undercloud is usable.

Once the user installs the undercloud (and has discovery & deploy images in
place) he should be able to build / download / create overcloud
images (by overcloud images I mean overcloud-full.*). This is what the user
should deal with.

For this we will need to separate the building process for discovery+deploy
images and for overcloud images. Is that possible?

-- Jarda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] nominating James Polley for tripleo-core

2015-01-14 Thread Jay Dobies

+1

On 01/14/2015 02:26 PM, Gregory Haynes wrote:

Excerpts from Clint Byrum's message of 2015-01-14 18:14:45 +:

Hello! It has been a while since we expanded our review team. The
numbers aren't easy to read with recent dips caused by the summit and
holidays. However, I believe James has demonstrated superb review skills
and a commitment to the project that shows broad awareness of the
project.

Below are the results of a meta-review I did, selecting recent reviews
by James with comments and a final score. I didn't find any reviews by
James that I objected to.

https://review.openstack.org/#/c/133554/ -- Took charge and provided
valuable feedback. +2
https://review.openstack.org/#/c/114360/ -- Good -1 asking for better
commit message and then timely follow-up +1 with positive comments for
more improvement. +2
https://review.openstack.org/#/c/138947/ -- Simpler review, +1'd on Dec.
19 and no follow-up since. Allowing 2 weeks for holiday vacation, this
is only really about 7 - 10 working days and acceptable. +2
https://review.openstack.org/#/c/146731/ -- Very thoughtful -1 review of
recent change with alternatives to the approach submitted as patches.
https://review.openstack.org/#/c/139876/ -- Simpler review, +1'd in
agreement with everyone else. +1
https://review.openstack.org/#/c/142621/ -- Thoughtful +1 with
consideration for other reviewers. +2
https://review.openstack.org/#/c/113983/ -- Thorough spec review with
grammar pedantry noted as something that would not prevent a positive
review score. +2

All current tripleo-core members are invited to vote at this time. Thank
you!



Definite +1.

-Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Do we want to remove Nova-bm support?

2014-12-04 Thread Jay Dobies

+1, FWIW.


Alexis


+1

This is similar to the no merge.py discussion. If something isn't 
covered by CI, it's going to grow stale pretty quickly.




Re: [openstack-dev] [TripleO] Meeting purpose

2014-12-04 Thread Jay Dobies

As an example of something that I think doesn't add much value in the
meeting - DerekH has already been giving semi-regular CI/CD status
reports via email. I'd like to make these weekly update emails
regular, and take the update off the meeting agenda. I'm offering to
share the load with him to make this easier to achieve.


The Tuskar item is the same way. Not sure how that was added as an 
explicit agenda item, but I don't see why we'd call out one 
particular project within TripleO. Anything we'd need eyes on should be 
covered when we chime in about specs or reviews needing eyes.



Are there other things on our regular agenda that you feel aren't
offering much value?


I'd propose we axe the regular agenda entirely and let people promote
things in open discussion if they need to. In fact the regular agenda
often seems like a bunch of motions we go through... to the extent that
while the TripleO meeting was going on we've actually discussed what was
in my opinion the most important things in the normal #tripleo IRC
channel. Is getting through our review stats really that important!?


I think the review stats would be better handled in e-mail format like 
Derek's CI status e-mails. We don't want the reviews to get out of hand, 
but the time spent pasting in the links and everyone looking at the 
stats during the meeting itself are wasteful. I could see bringing it up 
if it's becoming a problem, but the number crunching doesn't need to be 
part of the meeting.



  Are there things you'd like to see moved onto, or off, the agenda?


Perhaps a streamlined agenda like this would work better:

  * Bugs


This one is valuable and I like the idea of keeping it.


  * Projects needing releases


Is this even needed as well? It feels like for months now the answer is 
always "Yes, release the world."


I think our cadence on those release can be slowed down as well (the 
last few releases I've done have had minimal churn at best), but I'm not 
trying to thread jack into that discussion. I bring it up because we 
could remove that from the meeting and do an entirely new model where we 
get the release volunteer through other means on a (potentially) less 
frequent release basis.



  * Open Discussion (including important SPECs, CI, or anything needing
attention). ** Leader might have to drive this **


I like the idea of a specific Specs/Reviews section. It should be quick, 
but a specific point in time where people can #info a review they need 
eyes on. I think it appeals to my OCD to have this more structured than 
interspersed with other topics in open discussion.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar] Puppet module

2014-10-29 Thread Jay Dobies
Nope, there isn't a puppet module for deploying Tuskar, but starting one 
makes sense.


On 10/28/2014 06:04 PM, Emilien Macchi wrote:

Hi,

I was looking at deploying Tuskar API with Puppet and I was wondering if
you guys have already worked on a Puppet module.

If not, I think we could start something in stackforge like we already
did for other OpenStack components.

Thanks,



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar][tripleo] Tuskar/TripleO on Devstack

2014-10-28 Thread Jay Dobies

5. API: You can't create or modify roles via the API, or even view the
content of the role after creating it


None of that is in place yet, mostly due to time. The tuskar-load-roles 
was a short-term solution to getting a base set of roles in. 
Conceptually you're on target with what I want to see in the coming releases.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-19 Thread Jay Dobies

On 2014-09-18 15:21:20 -0400 (-0400), Jay Dobies wrote:

How many of the reviews that we WIP-1 will actually be revisited?

I'm sure there will be cases where a current developer forgets
they had started on something, sees the e-mail about the WIP-1,
and then abandons the change.

But what about developers who have moved off the project entirely?
Is this only masking the problem of stale reviews from our review
stats and leaving the review queue to bloat?

[...]

What is "review queue bloat" in this scenario? How is a change
indefinitely left in Gerrit with "workflow -1" set any different
from a change indefinitely left in Gerrit with "abandoned" set? It's
not like we go through and purge changes from Gerrit based on these,
and they take up just as much space and other resources in either
state.


Ah, ok. I assumed the abandoned ones were reaped over time. Perhaps it's 
just a matter of me writing different searches when I want to ignore the 
workflow -1s.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-18 Thread Jay Dobies

How many of the reviews that we WIP-1 will actually be revisited?

I'm sure there will be cases where a current developer forgets they 
had started on something, sees the e-mail about the WIP-1, and then 
abandons the change.


But what about developers who have moved off the project entirely? Is 
this only masking the problem of stale reviews from our review stats and 
leaving the review queue to bloat?


I honestly don't know; those are real questions, not rhetorical ones 
trying to prove a point. I'd guess the longer-running OpenStack projects 
have had to deal with this as well, and perhaps I'm overestimating just 
how many of these perpetually in limbo reviews there are.



On 09/18/2014 03:26 AM, mar...@redhat.com wrote:

On 18/09/14 00:29, James Polley wrote:



On Wed, Sep 17, 2014 at 6:26 PM, mar...@redhat.com
mandr...@redhat.com wrote:

 Hi,

 as part of general housekeeping on our reviews, it was discussed at last
 week's meeting [1] that we should set workflow -1 for stale reviews
 (like gerrit used to do when I were a lad).

 The specific criteria discussed was 'items that have a -1 from a core
 but no response from author for 14 days'. This topic came up again
 during today's meeting and it wasn't clear if the intention was for
 cores to start enforcing this? So:

 Do we start setting WIP/workflow -1 for those reviews that have a -1
 from a core but no response from author for 14 days


I'm in favour of doing this; as long as we make it clear that we're
doing it to help us focus review effort on things that are under active
development - it doesn't mean we think the patch shouldn't land, it just
means we know it's not ready yet so we don't want reviewers to be
looking at it until it moves forward.

For the sake of making sure new developers don't get put off, I'd like
to see us leaving a comment explaining why we're WIPing the change and
noting that uploading a new revision will remove the WIP automatically



+1 - indeed, I'd say as part of this discussion, or if/when it comes up
as a motion for a vote in the weekly meeting, we should also put out and
agree on the 'standard' text to be used for this and stick it on the
wiki (regardless of whether this is to be implemented manually at first
and perhaps automated later),

thanks, marios

"Setting workflow -1 as this review has been inactive for two weeks
following a negative review. Please see the wiki @ foo for more
information. Note that once you upload a new revision the workflow is
expected to be reset (feel free to shout on freenode/#tripleo if it isn't)."





 thanks, marios

 [1]
 
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-09-09-19.04.log.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-10 Thread Jay Dobies

+1

On 09/09/2014 02:32 PM, Gregory Haynes wrote:

Hello everyone!

I have been working on a meta-review of StevenK's reviews and I would
like to propose him as a new member of our core team.

As I'm sure many have noticed, he has been above our stats requirements
for several months now. More importantly, he has been reviewing a wide
breadth of topics and seems to have a strong understanding of our code
base. He also seems to be doing a great job at providing valuable
feedback and being attentive to responses on his reviews.

As such, I think he would make a great addition to our core team. Can
the other core team members please reply with your votes if you agree or
disagree.

Thanks!
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Review metrics - what do we want to measure?

2014-09-04 Thread Jay Dobies

It can, by running your own... but again it seems far better for
core reviewers to decide if a change has potential or needs to be
abandoned--that way there's an accountable human making that
deliberate choice rather than the review team hiding behind an
automated process so that no one is to blame for hurt feelings
besides the infra operators who are enforcing this draconian measure
for you.


The thing is that it's also pushing more work onto already overloaded
core review teams.  Maybe submitters don't like auto-abandon, but I bet
they like having a core reviewer spending time cleaning up dead reviews
instead of reviewing their change even less.

TBH, if someone's offended by the bot then I can't imagine how incensed
they must be when a human does the same thing.  The bot clearly isn't
making it personal, and even if the human isn't either it's much easier
to have misunderstandings (see also every over-reaction to a -1 ever).

I suppose it makes it easier for cores to ignore reviews, but from the
other discussions I've read that hasn't gone away just because
auto-abandon did, so I'm not convinced that's a solution anyway.


+1, I don't think it'll come as much of a shock if a -1 review gets 
closed due to time without progress.



/2 cents




To make the whole process a little friendlier we could increase
the time frame from 1 week to 2.


<snark>How about just automatically abandon any new change as soon
as it's published, and if the contributor really feels it's
important they'll unabandon it.</snark>




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Nova] Specs and approvals

2014-08-25 Thread Jay Dobies
I was on vacation last week and am late to the discussion, but I'm +1 
for the idea.


On 08/19/2014 02:08 PM, Joe Gordon wrote:




On Tue, Aug 19, 2014 at 8:23 AM, Russell Bryant rbry...@redhat.com wrote:

On 08/19/2014 05:31 AM, Robert Collins wrote:
  Hey everybody - https://wiki.openstack.org/wiki/TripleO/SpecReviews
  seems pretty sane as we discussed at the last TripleO IRC meeting.
 
  I'd like to propose that we adopt it with the following tweak:
 
  19:46:34 lifeless so I propose that +2 on a spec is a commitment to
  review it over-and-above the core review responsibilities
  19:47:05 lifeless if its not important enough for a reviewer to do
  that thats a pretty strong signal
  19:47:06 dprince lifeless: +1, I thought we already agreed to that
  at the meetup
  19:47:17 slagle yea, sounds fine to me
  19:47:20 bnemec +1
  19:47:30 lifeless dprince: it wasn't clear whether it was
  part-of-responsibility, or additive, I'm proposing we make it clearly
  additive
  19:47:52 lifeless and separately I think we need to make surfacing
  reviews-for-themes a lot better
 
  That is - +1 on a spec review is 'sure, I like it', +2 is
specifically
  I will review this *over and above* my core commitment - the goal
  here is to have some very gentle choke on concurrent WIP without
  needing the transition to a managed pull workflow that Nova are
  discussing - which we didn't have much support for during the
meeting.
 
  Obviously, any core can -2 for any of the usual reasons - this motion
  is about opening up +A to the whole Tripleo core team on specs.
 
  Reviewers, and other interested kibbitzers, please +1 / -1 as you
feel fit :)

+1

I really like this.  In fact, I like it a lot more than the current
proposal for Nova.  I think the Nova team should consider this, as well.


Nova and tripleo are at different points in their lifecycles; just look at
tripleo-specs [0] vs nova-specs [1]. TripleO has 11 specs and nova has
80+, TripleO has 22 cores and nova has 21 cores.  AFAIK none of the
tripleo specs are vendor specific, while a good chunk of nova ones are.
I don't think there is a one size fits all solution here.


[0] http://specs.openstack.org/openstack/tripleo-specs/
[1] http://specs.openstack.org/openstack/nova-specs/


It still rate limits code reviews by making core reviewers explicitly
commit to reviewing things.  This is like our previous attempt at
sponsoring blueprints, but the use of gerrit I think would make it more
successful.

It also addresses my primary concerns with the tensions between group
will and small groups no longer being able to self organize and push
things to completion without having to haggle through yet another
process.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Spec Minimum Review Proposal

2014-07-22 Thread Jay Dobies
At the meetup today, the topic of our spec process came up. The general 
sentiment is that the process is still young and the hiccups are 
expected, but we do need to get better about making sure we're staying 
on top of them.


As a first step, it was proposed to add 1 spec review a week to the 
existing 3 reviews per day requirement for cores.


Additionally, we're going to start to capture and review the metrics on 
spec patches specifically during the weekly meeting. That should help 
bring to light how long reviews are sitting in the queue without being 
touched.


What are everyone's feelings on adding a 1 spec review per week 
requirement for cores?


Not surprisingly, I'm +1 for it  :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-10 Thread Jay Dobies

FWIW, I'm a firm believer in progress over perfection and although I
comment on the form, I try to score on the function.


I really like this phrase, "comment on the form, score on the function."

Lately I've been trying to be very specific about things I'm pointing 
out that are potentially a learning experience ("This could be shortened 
into self.foo = foo or None"), things that aren't a problem but the 
author might want to take into account ("Consider..."), or those that 
are actually problematic and would warrant a -1.


I've found in the past that it's good to step back every so often and 
reorient myself. Thanks Tomas for the write up.



I'll get better at
commenting to this effect, especially so if my nitpicking gains the
weight of core.

I love English and believe careful use is a great benefit, particularly
in dense technical documents. You're entirely correct that this
shouldn't be allowed to noticeably impede progress though.


Alexis



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] dib-utils Release Question

2014-06-24 Thread Jay Dobies
Ahh, ok. I had just assumed it was a Python library, but I admittedly 
didn't look too closely at it. Thanks :)
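
For anyone retracing this later, the lookup Steve walks through below 
boils down to roughly the following (log server layout assumed from his 
example URL; treat it as a sketch):

  # Resolve the tag to the commit it points at
  SHA=$(git rev-list -n 1 0.0.1)
  # Post-job logs live under the first two characters of the SHA
  echo "http://logs.openstack.org/${SHA:0:2}/${SHA}"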


On 06/23/2014 09:32 PM, Steve Kowalik wrote:

On 24/06/14 06:31, Jay Dobies wrote:

I finished the releases for all of our existing projects and after
poking around tarballs.openstack.org and pypi, it looks like they built
successfully. Yay me \o/

However, it doesn't look like dib-utils build worked. I don't see it
listed on tarballs.openstack.org. It was the first release for that
project, but I didn't take any extra steps (I just followed the
instructions on the releases wiki and set it to version 0.0.1).

I saw the build for it appear in zuul but I'm not sure how to go back
and view the results of a build once it disappears off the main page.

Can someone with experience releasing a new project offer me any insight?


\o/

I've been dealing with releases of new projects from the os-cloud-config
side recently, so let's see.

dib-utils has a post job of dib-utils-branch-tarball, so the job does
exist, as you pointed out, but it doesn't hurt to double check.

The object the tag points to is commit
45b7cf44bc939ef08afc6b1cb1d855e0a85710ad, so logs can be found at
http://logs.openstack.org/45/45b7cf44bc939ef08afc6b1cb1d855e0a85710ad

And from the log a few levels deep at the above URL, we see:

2014-06-16 07:17:13.122 | + tox -evenv python setup.py sdist
2014-06-16 07:17:13.199 | ERROR: toxini file 'tox.ini' not found
2014-06-16 07:17:13.503 | Build step 'Execute shell' marked build as failure

Since it's not a Python project, no tarball or pypi upload.
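
For reference, a minimal tox.ini that would satisfy that tox call looks 
roughly like this - a sketch only, and it presumes the project grows a 
setup.py, which may not be desirable for a shell-only project:

  [tox]
  envlist = venv

  [testenv:venv]
  commands = {posargs}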

Cheers,



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] dib-utils Release Question

2014-06-23 Thread Jay Dobies
I finished the releases for all of our existing projects and after 
poking around tarballs.openstack.org and pypi, it looks like they built 
successfully. Yay me \o/


However, it doesn't look like dib-utils build worked. I don't see it 
listed on tarballs.openstack.org. It was the first release for that 
project, but I didn't take any extra steps (I just followed the 
instructions on the releases wiki and set it to version 0.0.1).


I saw the build for it appear in zuul but I'm not sure how to go back 
and view the results of a build once it disappears off the main page.


Can someone with experience releasing a new project offer me any insight?

Thanks  :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Ironic] [Ceilometer] [Horizon] [TripleO] Nodes Management UI - designs

2014-06-02 Thread Jay Dobies
Very nicely done, seeing this stuff laid out is really useful. A few 
comments:



= Page 3 =

* Nit: The rocker switch for power is a bit odd to me since it looks 
like it can be toggled.


* Can you show an example of a non-healthy node? Is it just an X instead 
of a check or are there different degrees/forms of unhealthy that can be 
discerned at this level?


* I didn't realize this until the next page and the nodes with bells on 
them, but there's no indication in this table of which node may have an 
alarm associated with it. Is there no way of viewing the node-alarm 
association from this view?



= Page 4 =

* I'm not trying to be a pain in the ass about the counts in the summary 
section, but they are kinda confusing me as I try to read this page 
without guidance.


** I see 26 nodes but it says 28. That's largely a test data nit that 
doesn't affect my understanding.


** It says 0 alarms, but I see three alarm bells. That one is a bit more 
than test data anal-retentiveness since it's making me wonder if I'm 
interpretting the bells correctly as alarms.


** It looks like this is a grid view, so I might be expecting too much, 
but is there any sorting available based on status? I'm guessing the 
columns in the previous view can be sorted (which will be very useful) 
but without something similar here, I question its effectiveness if I 
can't group the alarmed or non-running machines together.



= Page 5 =

* I retract my previous statement about the sorting, the Group By 
example is what I was getting at. Can I drill into a particular group 
and see just those nodes?



= Page 6 =

* This is a cool idea, showing at the summary level why a node is 
unhealthy. What happens if it passes multiple thresholds? Do we just 
show one of the problematic values (assuming there's a priority to the 
metrics so we show the most important one)?



= Page 10 =

* Nit: The tags seem to take up prime screen real estate for something 
I'm not sure is terribly important on this page. Perhaps the intended 
use for them is more important than I'm giving credit for.


* Is Flavors Consumption always displayed, or is that just the result of 
an alarm? If it was unhealthy due to CPU usage, would that appear 
instead/in addition to?



= Page 11 =

* In this view, will we know about configured thresholds? I'm wondering 
if we can color or otherwise highlight more at-risk metrics to 
immediately grab the user's attention.



On 05/28/2014 05:18 PM, Jaromir Coufal wrote:

Hi All,

There is a lot of tags in the subject of this e-mail but believe me that
all listed projects (and even more) are relevant for the designs which I
am sending out.

Nodes management section in Horizon is being expected for a while and
finally I am sharing the results of designing around it.

http://people.redhat.com/~jcoufal/openstack/horizon/nodes/2014-05-28_nodes-ui.pdf


These views are based on modular approach and combination of multiple
services together; for example:
* Ironic - HW details and management
* Ceilometer - Monitoring graphs
* TripleO/Tuskar - Deployment Roles
etc.

Whenever some service is missing, that particular functionality should
be disabled and not displayed to a user.

I am sharing this without any bigger description so that I can get
feedback whether people can get oriented in the UI without hints. Of
course you cannot get each and every detail without exploring, having
tooltips, etc. But the goal for each view is to manage to express at
least the main purpose without explanation. If it does not, it needs to
be fixed.

Next week I will organize a recorded broadcast where I will walk you
through the designs, explain high-level vision, details and I will try
to answer questions if you have any. So feel free to comment anything or
ask whatever comes to your mind here in this thread, so that I can cover
your concerns. Any feedback is very welcome - positive so that I know
what you think that works, as well as negative so that we can improve
the result before implementation.

Thank you all
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-22 Thread Jay Dobies

Merging a few of the replies into a single response:

 I like all of this plan, except for the name Overview. To me, 
Overview suggests a high-level summary rather than being one of the 
beefier sections of a spec. Something like Detail or Detailed 
overview (because the low-level detail will come in the changes that 
implement the spec, not in the spec) seem like better descriptions of 
what we intend to have there.


I didn't put much thought into the name, so Overview, Summary, Detail, 
etc. doesn't matter to me. If we agree to go down the route of a holder 
section here (as compared to loosening the validation), I'll poll for a 
better name.



I'm a bit ambivalent to be honest, but adding a section for Overview
doesn't really do much IMO.  Just give an overview in the first couple
of sentences under Proposed Change. If I go back and add an Overview
section to my spec in review, I'm just going to slap everything in
Proposed Change into one Overview section :).  To me, Work Items is
where more of the details goes (which does support aribtrary
subsections with ^^^).


That's actually my expectation, that everything currently in place gets 
slapped under Overview. The change is pretty much only to support being 
able to further break down that section while still leaving the existing 
level of validation in place. It's not so much organizational as it is 
to make sphinx happy.



In general though I think that the unit tests are too rigid and
pedantic. Plus, having to go back and update old specs when we make
changes to unit tests seems strange. No biggie right now, but we do
have a couple of specs in review. Unless we write the unit tests to be
backwards compatible. This just feels a bit like engineering just for
the sake of it.  Maybe we need a spec on it :).


I agree that it's possible I'll be back here in the next few days 
complaining that my problem description is too large and would benefit 
from subsections, which I couldn't currently add because they'd be 
second-level sections which are strictly enforced.



I was a bit surprised to see that we don't have the Data Model section
in our specs, and when I had one, unit tests failed. We actually do
have data model stuff in Tuskar and our json structures in tripleo.


You can blame me for that; when I created the repository I took the nova
template and removed the sections I thought were not relevant. Perhaps I
was a little too aggressive. I've got no problem if we want to add any of
them back in.

Looks like these are the sections I removed:
Data model impact
REST API impact
Notifications impact

I'd obviously forgotten about Tuskar, sorry.



 We just landed a change to permit the third level subsections, but the
intent AIUI of requiring exact titles is to constrain the expression
space in the interests of clarity. We can (and should) add more
standard sections as needed.

I do like the idea of having these look consistent. I can work within 
the structure fine given that third-level subsections are permitted, but 
my issue is still that I have been treating the first section under 
Proposed Change as the meaty part of the change, which due to the lack 
of a second-level subsection doesn't let me add my own subsections.



Given the feedback, there are a few approaches we can take:

1. Add a second-level subsection at the start of Proposed Change. This 
subsection will be the description of the actual change and adding in 
this will allow custom subsections to be permitted by the existing unit 
tests.


2. Reduce the validation to only enforce required sections but not barf 
on the addition of new ones.



Somewhat tangential (but to address Slagle's concern) is the question of 
whether or not we need some sort of template version number to prevent 
having to update X many existing specs when changing the structure in 
the future. I feel like this is overkill and it's probably much simpler 
to settle on a Juno template in the very near future (selfishly, I say 
near to allow my own issue here to be addressed) and then only change 
the templates at new versions. Again, I'm probably overthinking things 
at this point, but just throwing it out there.



Personally, my vote is for #1. Existing specs are simple to update, just 
slap the existing change under the new subsection and move on. For the 
naming of it, I'm fine with James P's suggestion of Detail.


Then for K, we make any changes to the template based on our usage of it 
in Juno. It's like a scrum post mortem task for a giant 6 month sprint :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-21 Thread Jay Dobies

Currently, there is the following in the template:



Proposed change
===============

[snip]

Alternatives
------------

[snip]

Security impact
---------------



The unit tests assert the top and second level sections are standard, so 
if I add a section at the same level as Alternatives under Proposed 
Change, the tests will fail. If I add a third level section using ^, 
they pass.


The problem is that you can't add a ^ section under Proposed Change. 
Sphinx complains about a title level inconsistency since I'm skipping 
the second level and jumping to the third. But I can't add a 
second-level section directly under Proposed Change because it will 
break the unit tests that validate the structure.


The proposed change is going to be one of the beefier sections of a 
spec, so not being able to subdivide it is going to make the 
documentation messy and removes the ability to link directly to a 
portion of a proposed change.


I propose we add a section at the top of Proposed Change called Overview 
that will hold the change itself. That will allow us to use third level 
sections in the change itself while still having the first- and 
second-level section structure validated by the tests.
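
To make this concrete, the resulting skeleton would look something like 
this (with Overview being the placeholder name under discussion):

Proposed change
===============

Overview
--------

The meat of the proposed change goes here, and can now be subdivided.

Some custom subsection
^^^^^^^^^^^^^^^^^^^^^^

Alternatives
------------

[snip]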


I have no problem making the change to the templates, unit tests, and 
any existing specs (I don't think we have any yet), but before I go 
through that, I wanted to make sure there wasn't a major disagreement.


Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-28 Thread Jay Dobies

We may want to consider making use of Heat outputs for this.


This was my first thought as well. stack-show returns a JSON document 
that would be easy enough to parse through instead of having it in two 
places.
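
For example, a rough sketch of pulling an output back out with 
python-heatclient (the endpoint/token plumbing here is assumed, and the 
output name is only illustrative):

from heatclient.client import Client

# Auth details assumed to come from keystone elsewhere.
heat = Client('1', endpoint=HEAT_API_URL, token=AUTH_TOKEN)
stack = heat.stacks.get('overcloud')

# stack.outputs is a list of {'output_key': ..., 'output_value': ...} dicts.
outputs = {o['output_key']: o['output_value'] for o in stack.outputs}
print(outputs['keystone_endpoint'])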



Rather than assuming hard coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
    Value:
      Fn::Join:
        - ''
        - - "http://"
          - {Fn::GetAtt: [haproxy_node, first_ip]}  # fn select and yada
          - ":"
          - {Ref: KeystoneEndpointPort}  # that's a parameter
          - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.



2) do Keystone setup from inside Overcloud:
Extend keystone element, steps done in init-keystone script would be
done in keystone's os-refresh-config script. This script would have to
be called only on one of nodes in cluster and only once (though we
already do similar check for other services - mysql/rabbitmq, so I don't
think this is a problem). Then this script can easily get list of
haproxy ports from heat metadata. This looks like more attractive option
to me - it eliminates an extra post-create config step.


Things that can be done from outside the cloud, should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.



Related to Keystone setup is also the plan around keys/cert setup
described here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain same no matter which of the options
above would be used.


What do you think?

Jan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default paths in os-*-config projects

2014-04-15 Thread Jay Dobies



On 04/14/2014 09:30 PM, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2014-04-14 15:41:23 -0700:

Right now the os-*-config projects default to looking for their files in
/opt/stack, with an override env var provided for other locations.  For
packaging purposes it would be nice if they defaulted to a more
FHS-compliant location like /var/lib.  For devtest we could either
override the env var or simply install the appropriate files to /var/lib.

This was discussed briefly in IRC and everyone seemed to be onboard with
the change, but Robert wanted to run it by the list before we make any
changes.  If anyone objects to changing the default, please reply here.
   I'll take silence as agreement with the move. :-)



+1 from me for doing FHS compliance. :)

/var/lib is not actually FHS compliant as it is for Variable state
information. os-collect-config does have such things, and does use
/var/lib. But os-refresh-config reads executables and os-apply-config
reads templates, neither of which will ever be variable state
information.

/usr/share would be the right place, as it is Architecture independent
data. I suppose if somebody wants to compile a C program as an o-r-c
script we could rethink that, but I'd just suggest they drop it in a bin
dir and exec it from a one line shell script in the /usr/share.

So anyway, I suggest:

/usr/share/os-apply-config/templates
/usr/share/os-refresh-config/scripts


+1

This would have been my suggestion too if we were moving out of /opt. 
I've gotten yelled at in the past for not using this in these sorts of 
cases :)



With the usual hierarchy underneath.

We'll need to continue to support the non-FHS paths for at least a few
releases as well.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Jay Dobies
+1 to using bash, the argument about not keeping POSIX compliance for 
the sake of it makes sense to me.


On 04/15/2014 07:31 AM, Ghe Rivero wrote:

+1 to use bash as the default shell. So far, all major distros use bash
as the default one (except Debian which uses dash).
And about rewriting the code in Python: I agree that shell is complicated
for large programs, but writing anything command oriented in other than
shell is a nightmare. But there are some parts that can benefit from that.

Ghe Rivero

On 04/15/2014 11:05 AM, Chris Jones wrote:

Hi

On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com wrote:

I supose that rewriting the code to be in Python is out of the
question ?  IMHO shell is just a terrible language for doing any
program that is remotely complicated (ie longer than 10 lines of


I don't think it's out of the question - where something makes sense
to switch to Python, that would seem like a worthwhile thing to be
doing. I do think it's a different question though - we can quickly
flip things from /bin/sh to /bin/bash without affecting their
suitability for replacement with python.

--
Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-15 Thread Jay Dobies

+1, I think it's a better medium for conversations than blueprints or wikis.

I'm also +1 to a tripleo-specs repo, but that's less me having a problem 
with using incubator and more my OCD.


On 04/15/2014 03:43 PM, Monty Taylor wrote:

On 04/15/2014 11:44 AM, Robert Collins wrote:

I've been watching the nova process, and I think its working out well
- it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle..


++


I'm thinking we can just add docs to incubator, since thats already a
repository separate to our production code - what do folk think?


In the current nova-specs thread on the ML, Tim Bell says:

I think that there is also a need to verify the user story aspect. One
of the great things with the ability to subscribe to nova-specs is that
the community can give input early, when we can check on the need and
the approach. I know from the CERN team how the requirements need to be
reviewed early, not after the code has been written.

Which is great. I'm mentioning it because he calls out the ability to
subscribe to nova-specs.

I think if you put them in incubator, then people who are wanting to
fill a role like Tim - subscribing as an operator and validating user
stories - might be a bit muddied by patches to other things. (although
thanks for having a thought about less repos :) )

So I'd just vote, for whatever my vote is worth, for a tripleo-specs repo.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-10 Thread Jay Dobies

On 04/10/2014 01:40 PM, Nachi Ueno wrote:

Hi Jarda

Congratulations
This release and the demo is super awesome!!
Do you have any instruction to install this one?


I'd like to see this too. I asked a few times and never got an answer on 
whether or not there was a documented way of demoing this without a ton 
of baremetal lying around.






2014-04-10 1:32 GMT-07:00 Jaromir Coufal jcou...@redhat.com:

Dear Stackers,

I am happy to announce that yesterday Tuskar UI (TripleO UI) tagged
branch 0.1.0 for the Icehouse release [0].

I put together a narrated demo of all included features [1].

You can find one manual part in the whole workflow - cloud initialization.
There is ongoing work on automatic os-cloud-config, but for the release we
had to include the manual way. Automation should be added soon though.

I want to thank all contributors for hard work to make this happen. It has
been pleasure to cooperate with all of you guys and I am looking forward to
bringing new features [2] in.


-- Jarda


[0] 0.1.0 Icehouse Release of the UI:
https://github.com/openstack/tuskar-ui/releases/tag/0.1.0

[1] Narrated demo of TripleO UI 0.1.0:
https://www.youtube.com/watch?v=-6whFIqCqLU

[2] Juno Planning for Tuskar:
https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Jay Dobies

I'm very wary of trying to make the decision in TripleO of what should and 
shouldn't be configurable in some other project. For sure the number of 
config options in Nova is a problem, and one that's been discussed many times 
at summits. However I think you could also make the case/assumption for any 
service that the debate about having a config option has already been held 
within that service as part of the review that merged that option in the code - 
re-running the debate about whether something should be configurable via 
TripleO feels like some sort of policing function on configurability above and 
beyond what the experts in that service have already considered, and that 
doesn't feel right to me.


My general feeling is that I agree with this sentiment. In my experience 
on management tools, there's always someone who wants to turn the one 
knob I forgot to expose. And that's been on significantly simpler 
projects than OpenStack; the complexity and scale of the features means 
there's potentially a ton of tweaking to be done.


More generally, this starts to drift into the bigger question of what 
TripleO is. The notion of defaults or limiting configuration exposure is 
for prescriptive purposes. You can change X because we think it's going 
to have a major impact. If we don't expose Y, it's because we're 
driving the user to not want to change it.


I've always assumed TripleO is very low-level. Put another way, 
non-prescriptive. It's not going to push an agenda that says you should 
be doing things a certain way, but rather gives you more than enough 
rope to hang yourself (just makes it easier).


The question of how to make things easier to grok for a new user lies in 
a different area. Either documentation (basic v. advanced user guide 
sort of thing) or potentially in the Tuskar GUI. More configuration 
options means Tuskar's life is more difficult, but to me, that's where 
we add in the notion of You almost definitely want to configure these 
things, but if you're really insane you can look at this other set of 
stuff to configure.


So I think we need to have a way of specifying everything. And we need 
to have that way not kill us in the process. I like the proposed idea of 
an open-ended config area. It's us acknowledging that we're sitting on 
top of a dozen other projects. Admittedly, I don't fully understand 
Slagle's proposal, but the idea of pulling in samples from other 
projects and not making us acknowledge every configuration option is 
also appealing.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] heat is not present in keystone service-list

2014-04-08 Thread Jay Dobies
For what it's worth, I have a fresh devstack installation from about a 
week ago and I have two Heat services registered without any extra steps.
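
If anyone does hit missing entries, a rough sketch of registering heat by 
hand (the port and flags are assumed from the usual heat install docs; 
devstack normally does all of this for you):

keystone service-create --name heat --type orchestration \
    --description "Heat Orchestration Service"
keystone endpoint-create --region RegionOne \
    --service-id $(keystone service-list | awk '/ heat / {print $2}') \
    --publicurl "http://$HOST_IP:8004/v1/%(tenant_id)s" \
    --adminurl "http://$HOST_IP:8004/v1/%(tenant_id)s" \
    --internalurl "http://$HOST_IP:8004/v1/%(tenant_id)s"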


On 04/08/2014 11:44 AM, Steven Dake wrote:

On 04/08/2014 07:00 AM, Peeyush Gupta wrote:

Hi all,

I have been trying to install heat with devstack. As shown here
http://docs.openstack.org/developer/heat/getting_started/on_devstack.html

I added the IMAGE_URLS to the localrc file. Then I ran unstack.sh and
then stack.sh. Now, when I run heat stack-list, I get the following error:

$ heat stack-list
publicURL endpoint for orchestration not found

I found that some people got this error because of wrong endpoint in
keystone service-list, but in my output there is no heat!


My guess is your devstack is older to the point of not having heat
enabled by default.  You can add the following to your localrc:

# heat
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
IMAGE_URLS+=,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F18-x86_64-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F18-i386-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F19-i386-cfntools.qcow2,http://fedorapeople.org/groups/heat/prebuilt-jeos-images/F19-x86_64-cfntools.qcow2,http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2



$ keystone service-list
+----------------------------------+----------+-----------+----------------------------+
|                id                |   name   |    type   |        description         |
+----------------------------------+----------+-----------+----------------------------+
| 808b93d2008c48f69d42ae7555c27b6f |  cinder  |   volume  |   Cinder Volume Service    |
| f57c596db43443d7975d890d9f0f4941 | cinderv2 |  volumev2 |  Cinder Volume Service V2  |
| d8567205287a4072a489a89959801629 |   ec2    |    ec2    |  EC2 Compatibility Layer   |
| 9064dc9d626045179887186d0b3647d0 |  glance  |   image   |    Glance Image Service    |
| 70cf29f8ceed48d0a39ba7e29481636d | keystone |  identity | Keystone Identity Service  |
| b6cca1393f814637bbb8f95f658ff70a |   nova   |  compute  |    Nova Compute Service    |
| 0af6de1208a14d259006f86000d33f0d |  novav3  | computev3 |  Nova Compute Service V3   |
| b170b6b212ae4843b3a6987c546bc640 |    s3    |     s3    |             S3             |
+----------------------------------+----------+-----------+----------------------------+

Please help me resolve this error.
Thanks,
~Peeyush Gupta


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Jay Dobies

On 04/07/2014 07:50 PM, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdob


I'm comfortable with committing to at least 3 reviews a day and promise 
to wield the awesome power of +2 responsibly. I appreciate being 
nominated :)




On 4 April 2014 08:55, Chris Jones c...@tenshu.net wrote:

Hi

+1 for your proposed -core changes.

Re your question about whether we should retroactively apply the 3-a-day
rule to the 3 month review stats, my suggestion would be a qualified no.

I think we've established an agile approach to the member list of -core, so
if there are a one or two people who we would have added to -core before the
goalposts moved, I'd say look at their review quality. If they're showing
the right stuff, let's get them in and helping. If they don't feel our new
goalposts are achievable with their workload, they'll fall out again
naturally before long.


So I've actioned the prior vote.

I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
already, but..."

So... looking at a few things - long period of reviews:
60 days:
| reviewer   | total  -2  -1  +1  +2  +A   +/- % | disagreements |
| greghaynes |   121   0  22  99   0   0   81.8% |  14 ( 11.6%)  |
| bnemec     |   116   0  38  78   0   0   67.2% |  10 (  8.6%)  |
| jdob       |    87   0  15  72   0   0   82.8% |   4 (  4.6%)  |

90 days:

| reviewer   | total  -2  -1  +1  +2  +A   +/- % | disagreements |
| bnemec     |   145   0  40 105   0   0   72.4% |  17 ( 11.7%)  |
| greghaynes |   142   0  23 119   0   0   83.8% |  22 ( 15.5%)  |
| jdob       |   106   0  17  89   0   0   84.0% |   7 (  6.6%)  |

Ben's reviews are thorough, he reviews across all contributors, he
shows good depth of knowledge and awareness across tripleo, and is
sensitive to the pragmatic balance between 'right' and 'good enough'.
I'm delighted to support him for core now.

Greg is very active, reviewing across all contributors with pretty
good knowledge and awareness. I'd like to see a little more contextual
awareness though - theres a few (but not many) reviews where looking
at how the big picture of things fitting together more would have been
beneficial. *however*, I think that's a room-to-improve issue vs
not-good-enough-for-core - to me it makes sense to propose him for
core too.

Jay's reviews are also very good and consistent, somewhere between
Greg and Ben in terms of bigger-context awareness - so another
definite +1 from me.

-Rob






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Jay Dobies

It might be good to do a similar thing as Keystone does. We could keep
python-tuskarclient focused only on Python bindings for Tuskar (but keep
whatever CLI we already implemented there, for backwards compatibility),
and implement CLI as a plugin to OpenStackClient. E.g. when you want to
access Keystone v3 API features (e.g. domains resource), then
python-keystoneclient provides only Python bindings, it no longer
provides CLI.


+1

I've always liked the idea of separating out the bindings from the CLI 
itself.
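
If we went the OpenStackClient route, the wiring is mostly just entry 
points. A sketch of what that might look like in setup.py (the entry 
point groups follow OSC's plugin convention, but the module and command 
names here are entirely made up):

from setuptools import setup

setup(
    name='python-tuskarclient',
    entry_points={
        # Tells OSC to load our plugin module
        'openstack.cli.extension': [
            'tuskar = tuskarclient.osc.plugin',
        ],
        # Maps a CLI command to a cliff command class
        'openstack.tuskar.v1': [
            'overcloud_role_list = tuskarclient.osc.v1.role:ListRole',
        ],
    },
)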




I think this is a nice approach because it allows the python-*client to
stay thin for including within Python apps, and there's a common
pluggable CLI for all projects (one top level command for the user). At
the same time it would solve our naming problems (tuskarclient would
stay, because it would be focused on Tuskar only) and we could reuse the
already implemented other OpenStackClient plugins for anything on
undercloud.

We previously raised that OpenStackClient has more plugins (subcommands)
that we need on undercloud and that could confuse users, but i'd say it
might not be as troublesome to justify avoiding the OpenStackClient way.
(Even if we decide that this is a big problem after all and OSC plugin
is not enough, we should still probably aim for separating TripleO CLI
and Tuskarclient in the future.)

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Reviewers] Please add openstack/os-cloud-config to your tripleo-repositories-to-review

2014-03-03 Thread Jay Dobies
I updated https://wiki.openstack.org/wiki/TripleO and the "All TripleO 
Reviews" link at the bottom to include it.


On 03/02/2014 12:07 AM, Robert Collins wrote:

This is a new repository to provide common code for tuskar and the
seed initialisation logic - the post heat completion initial
configuration of a cloud.

Cheers,
Rob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jay Dobies

Yeah. This is a double bladed axe but i'm leaning towards naming flavors
consistently a bit more too. Here's an attempt at +/- summary:


node profile

+ a bit more descriptive for a newcomer imho

- CLI renaming/reimplementing mentioned before

- inconsistency dangers lurking in the deep - e.g. if an error message
bubbles up from Nova all the way to the user, it might mention flavors,
and if we talk 99% of time about node profiles, then user will not know
what is meant in the error message. I'm a bit worried that we'll keep
hitting things like this in the long run.


While I agree with all of your points, this is the one that resonates 
with me the most. We won't be able to be 100% consistent with a rename 
(exceptions are a great example). It's already irritating to the user to 
have to see an error, having to then see it in terms they aren't 
familiar with is an added headache.



- developers still often call them flavors, because that's what Nova
calls them


flavor

+ fits with the rest, does not cause communication or development problems

- not so descriptive (but i agree with you - OpenStack admins will
already be familiar what flavor means in the overcloud, and i think
they'd be able to infer what it means in the undercloud)


I'm CCing Jarda as this affects his work quite a lot and i think he'll
have some insight+opinion (he's on PTO now so it might take some time
before he gets to this).





One other thing, I've looked at my own examples so far, so I didn't
really think about this but seeing it written down, I've realised the
way we specify the roles in the Tuskar CLI really bugs me.

  --roles 1=1 \
  --roles 2=1

I know what this means, but even reading it now I think: One equals
one? Two equals one? What? I think we should probably change the arg
name and also refer to roles by name.

  --role-count compute=10

and a shorter option

  -R compute=10


Yeah this is https://bugs.launchpad.net/tuskar/+bug/1281051

I agree with you on the solution (rename long option, support lookup by
names, add a short option).


Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] JSON output values from Tuskar API

2014-02-26 Thread Jay Dobies
This is a new concept to me in JSON; I've never heard of a wrapper 
element like that being called a "namespace".


My first impression is that it looks like cruft. If there's nothing else 
at the root of the JSON document besides the namespace, all it means is 
that every time I go to access relevant data I have an extra layer of 
indirection. Something like:


volume_wrapper = get_volume(url)
volume = volume_wrapper['volume']

or

volume = get_volume(url)
name = volume['volume']['name']

If we ever foresee an aggregate API, I can see some value in it. For 
instance, a single call that aggregates a volume with some relevant 
metrics from ceilometer. In that case, I could see leaving both distinct 
data sets separate at the root with some form of namespace rather than 
attempting to merge the data.


Even in that case, I think it'd be up to the aggregate API to introduce 
that.


Looking at api.openstack.org, there doesn't appear to be any high level 
resource get that would aggregate the different subcollections.


For instance, {tenant_id}/volumes stuffs everything inside of an element 
called "volumes". {tenant_id}/types stuffs everything inside of an 
element called "volume_types". If a call to {tenant_id} aggregated both of 
those, then I can see leaving the namespace in on the single ID look ups 
for consistency (even if it's redundant). However, the API doesn't 
appear to support that, so just looking at the examples given it looks 
like an added layer of depth that carries no extra information and makes 
using the returned result a bit awkward IMO.



On 02/26/2014 01:38 PM, Petr Blaho wrote:

Hi,

I am wondering what is the OpenStack way of returning json from
apiclient.

I have got 2 different JSON response examples from http://api.openstack.org/:

json output with namespace:
{
    "volume": {
        "status": "available",
        "availability_zone": "nova",
        "id": "5aa119a8-d25b-45a7-8d1b-88e127885635",
        "name": "vol-002",
        "volume_type": "None",
        "metadata": {
            "contents": "not junk"
        }
    }
}
(example for GET 'v2/{tenant_id}/volumes/{volume_id}' of Block Storage API v2.0 
taken from
http://api.openstack.org/api-ref-blockstorage.html [most values ommited])

json output without namespace:
{
    "alarm_actions": [
        "http://site:8000/alarm"
    ],
    "alarm_id": null,
    "combination_rule": null,
    "description": "An alarm",
    "enabled": true,
    "type": "threshold",
    "user_id": "c96c887c216949acbdfbd8b494863567"
}
(example for GET 'v2/alarms/{alarm_id}' of Telemetry API v2.0 taken from
http://api.openstack.org/api-ref-telemetry.html [most values ommited])

Tuskar API now uses without namespace variant.

By looking at API docs at http://api.openstack.org/ I can say that
projects use both ways, although what I would describe as the nicer APIs
use the namespaced variant.

So, returning to my question, does OpenStack have some rules about what
format of JSON (namespaced or not) APIs should return?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-26 Thread Jay Dobies

Hello,

i went through the CLI way of deploying overcloud, so if you're
interested what's the workflow, here it is:

https://gist.github.com/jistr/9228638


This is excellent to see it all laid out like this, thanks for writing 
it up.



I'd say it's still an open question whether we'll want to give better UX
than that ^^ and at what cost (this is very much tied to the benefits
and drawbacks of various solutions we discussed in December [1]). All in
all it's not as bad as i expected it to be back then [1]. The fact that
we keep Tuskar API as a layer in front of Heat means that CLI user
doesn't care about calling merge.py and creating Heat stack manually,
which is great.


I agree that it's great that Heat is abstracted away. I also agree that 
it's not as bad as I too expected it to be.


But generally speaking, I think it's not an ideal user experience. A few 
things jump out at me:


* We currently have glance, nova, and tuskar represented. We'll likely 
need something for ceilometer as well for gathering metrics and 
configuring notifications (I assume the notifications will fall under 
that, but come with me on it).


That's a lot for an end user to comprehend and remember, which concerns 
me for both adoption and long term usage. Even in the interim when a 
user remembers nova is related to node stuff, doing a --help on nova is 
huge.


That's going to put a lot of stress on our ability to document our 
prescribed path. It will be tricky for us to keep track of the relevant 
commands and still point to the other project client documentation so as 
to not duplicate it all.


* Even at this level, it exposes the underlying guts. There are calls to 
nova baremetal listed in there, but eventually those will turn into 
ironic calls. It doesn't give us a ton of flexibility in terms of 
underlying technology if that knowledge bubbles up to the end user that way.


* This is a good view into what third-party integrators are going to 
face if they choose to skip our UIs and go directly to the REST APIs.



I like the notion of OpenStackClient. I'll talk ideals for a second. If 
we had a standard framework and each project provided a command 
abstraction that plugged in, we could pick and choose what we included 
under the Tuskar umbrella. Advanced users with particular needs could go 
directly to the project clients if needed.


I think this could go beyond usefulness for Tuskar as well. On a 
previous project, I wrote a pluggable client framework, allowing the end 
user to add their own commands that put a custom spin on what data was 
returned or how it was rendered. That's a level between being locked 
into what we decide the UX should be and having to go directly to the 
REST APIs themselves.


That said, I know that's a huge undertaking to get OpenStack in general 
to buy into. I'll leave it more that I think it is a lesser UX (not even 
saying bad, just not great) to have so much for the end user to digest 
to attempt to even play with it. I'm more of the mentality of a unified 
TripleO CLI that would be catered towards handling TripleO stuffs. Short 
of OpenStackClient, I realize I'm not exactly in the majority here, but 
figured it didn't hurt to spell out my opinion  :)




In general the CLI workflow is on the same conceptual level as Tuskar
UI, so that's fine, we just need to use more commands than tuskar.

There's one naming mismatch though -- Tuskar UI doesn't use Horizon's
Flavor management, but implements its own and calls it Node Profiles.
I'm a bit hesitant to do the same thing on CLI -- the most obvious
option would be to make python-tuskarclient depend on python-novaclient
and use a renamed Flavor management CLI. But that's wrong and high cost
given that it's only about naming :)

The above issue is once again a manifestation of the fact that Tuskar
UI, despite its name, is not a UI for Tuskar alone; it is a UI for several
more services. If this becomes a greater problem, or if we want a top-notch
CLI experience despite reimplementing bits that can be already done
(just not in a super-friendly way), we could start thinking about
building something like OpenStackClient CLI [2], but directed
specifically at Undercloud/Tuskar needs and using undercloud naming.

Another option would be to get Tuskar UI a bit closer back to the fact
that Undercloud is OpenStack too, and keep the name Flavors instead of
changing it to Node Profiles. I wonder if that would be unwelcome to
the Tuskar UI UX, though.


Jirka


[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021919.html

[2] https://wiki.openstack.org/wiki/OpenStackClient



Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Jay Dobies

Just to throw this out there, is this something we need for Icehouse?

Yes, I fully acknowledge that it's an ugly security hole. But what's our 
story for how stable/clean Tuskar will be for Icehouse? I don't believe 
the intention is for people to use this in a production environment yet, 
so it will be people trying things out in a test environment. I don't 
think it's absurd to document that we haven't finished hardening the 
security yet and to not use super-sensitive passwords.


If there was a simple answer, I likely wouldn't even suggest this. But 
there's some real design and thought that needs to take place and, 
frankly, we're running out of time. Keeping in mind the intended usage 
of the Icehouse release of Tuskar, it might make sense to shelve this 
for now and file a big fat bug that we address in Juno.


On 02/20/2014 08:47 AM, Radomir Dopieralski wrote:

On 20/02/14 14:10, Jiří Stránský wrote:

On 20.2.2014 12:18, Radomir Dopieralski wrote:



Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user either by tuskar-ui, or by
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of deployment (to register
the services at the end), but that's it.


This would be possible, but it would damage the user experience quite a
bit. Afaik other deployment tools solve password storage the same way we
do now.


I don't think it would damage the user experience so much. All you need
is an additional configuration option in Tuskar-UI and Tuskar-client,
the encryption key.

That key would be used to encrypt the passwords when they are first sent
to Tuskar-API, and also added to the (re)deployment calls.

This way, if the database leaks due to a security hole in MySQL or bad
engineering practices administering the database, the passwords are
still inaccessible. To get them, the attacker would need to get
*both* the database and the config files from the host on which Tuskar-UI runs.

With the tuskar-client it's a little bit more obnoxious, because you
would need to configure it on every host from which you want to use it,
but you already need to do some configuration to point it at the
tuskar-api and authenticate it, so it's not so bad.

I agree that this complicates the whole process a little, and adds
another potential failure point though.
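
To make the scheme concrete, here is a minimal sketch using symmetric
encryption (the Fernet recipe from the Python cryptography library is
just one possible choice; none of these names are actual Tuskar code):

    from cryptography.fernet import Fernet

    # Generated once and stored in the Tuskar-UI / tuskar-client config.
    key = Fernet.generate_key()

    # The client encrypts the password before sending it to Tuskar-API,
    # which stores only the ciphertext in its database.
    token = Fernet(key).encrypt(b"overcloud-service-password")

    # The key travels with each (re)deploy call, so Tuskar-API can decrypt
    # only for the duration of the deployment.
    assert Fernet(key).decrypt(token) == b"overcloud-service-password"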





Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Jay Dobies

On Fri, Feb 14, 2014 at 10:27:20AM +1300, Robert Collins wrote:

So progressing with the 'and folk that want to use packages can' arc,
we're running into some friction.

I've copied -operators in on this because it's very relevant IMO to operators :)

So far:
  - some packages use different usernames
  - some put things in different places (and all of them use different
places from the bare metal ephemeral device layout, which requires
/mnt/).
  - possibly more in future.

Now, obviously its a 'small matter of code' to deal with this, but the
impact on ops folk isn't so small. There are basically two routes that
I can see:

# A
  - we have a reference layout - install from OpenStack git / pypi
releases; this is what we will gate on, and can document.
  - and then each distro (both flavor of Linux and also possibly things
like Fuel that distribution OpenStack) is different - install on X,
get some delta vs reference.
  - we need multiple manuals describing how to operate and diagnose
issues in such a deployment, which is a matrix that overlays platform
differences the user selects like 'Fedora' and 'Xen'.

I agree with what James already said here. It's probably not TripleO's job to
document all that.  Good documentation for the reference layout should be the
goal.

And currently the differences aren't all that big, I think. And for some of them
we already have good solutions (like e.g. the os-svc-* tools). There is room
for improvement in handling the username differences, though :)


# B
  - we have one layout, with one set of install paths, usernames
  - package installs vs source installs make no difference - we coerce
the package into reference upstream shape as part of installing it.

Unless I am completely misunderstanding your proposal, I think this would void
many of the reasons why people would choose to install from packages in the
first place.


  - documentation is then identical for all TripleO installs, except
the platform differences (as above - systemd on Fedora, upstart on
Ubuntu, Xen vs KVM)

B seems much more useful to our ops users - less subtly wrong docs, we
avoid bugs where tools we write upstream make bad assumptions,
experience operating a TripleO deployed OpenStack is more widely
applicable (applies to all such installs, not just those that happened
to use the same package source).

I am probably repeating much of what James already said. But I think an
operator who makes the decision to do a package-based TripleO installation does
so e.g. because he is familiar with the tools and conventions of the specific
distro/provider of the packages he chose. And he probably wants TripleO to be
consistent with that. And yes, with the decision for packages, he decides to
differ from the reference layout.


I agree with the notions that admins have come to expect differences 
from distro to distro. It's the case for any number of services.


I'd go beyond that and say you're going to have problems getting the 
packages accepted/certified if they break the typical distro 
conventions. There are guidelines that say where things like Python code 
must live and packages may not even be accepted if they violate those.


The same is likely for the admins themselves, taking issue if the 
packages don't match their expectation criteria for the distro.



I see this much like the way Nova abstracts out trivial Hypervisor
differences to let you 'nova boot' anywhere, that we should be hiding
these incidental (vs fundamental capability) differences.

What say ye all?

-Rob






Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Jay Dobies

First, I don't think RollingUpdatePattern and CanaryUpdatePattern should be 2 
different entities. The second just looks like a parametrization of the first 
(growth_factor=1?).


Perhaps they can just be one. Until I find parameters which would need
to mean something different, I'll just use UpdatePattern.


I wondered about this too. Maybe I'm just not as familiar with the 
terminology, but since we're stopping on all failures both function as a 
canary in testing the waters before doing the update. The only 
difference is the potential for acceleration.
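
For what it's worth, the relationship is easy to express in code. A toy
sketch of the batching (names invented for illustration; stop-on-failure
is assumed to be handled by the caller):

    def update_batches(instances, initial_batch=1, growth_factor=2):
        # growth_factor=1 keeps the batch size constant (plain rolling);
        # growth_factor>1 accelerates after each successful batch.
        start, size = 0, initial_batch
        while start < len(instances):
            yield instances[start:start + size]
            start += size
            size *= growth_factor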


As for an example of an entirely different strategy, what about the idea 
of standing up new instances with the updates and then killing off the 
old ones? It may come down to me not fully understanding the scale of 
when you say "updating configuration", but it may be desirable to not 
scale down your capacity while the update is executing and instead have 
a quick changeover (for instance, in the floating IPs or a load 
balancer).



I then feel that using (abusing?) depends_on for update pattern is a bit weird. 
Maybe I'm influenced by the CFN design, but the separate UpdatePolicy attribute 
feels better (although I would probably use a property). I guess my main 
question is around the meaning of using the update pattern on a server 
instance. I think I see what you want to do for the group, where child_updating 
would return a number, but I have no idea what it means for a single resource. 
Could you detail the operation a bit more in the document?



I would be o-k with adding another keyword. The idea in abusing depends_on
is that it changes the core language less. Properties is definitely out
for the reasons Christopher brought up; properties are really meant to
describe the resource's end target only.


I think depends_on would be a clever use of the existing language if we 
weren't in a position to influence its evolution. A resource's update 
policy is a first-class concept IMO, so adding that notion directly into 
the definition feels cleaner.
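
As a strawman for what "directly into the definition" might look like
(hypothetical syntax, not the spec's actual proposal):

    resources:
      compute_group:
        type: OS::Heat::InstanceGroup
        update_policy:      # first-class, rather than overloading depends_on
          pattern: rolling
          initial_batch: 1
          growth_factor: 2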


[snip]



Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Jay Dobies

Wouldn't lying about the hardware specs when registering nodes be
problematic for upgrades?  Users would have
to re-register their nodes.


This was my first impression too, the line basically being "lie about the 
hardware specs when enrolling them." It feels more wrong to have the 
user provide false data than it does to ignore that data for Icehouse. 
I'd rather have the data correct now and ignore it than tell users when 
they upgrade to Juno they have to re-enter all of their node data.


It's not heterogeneous v. homogeneous support. It's whether or not we use 
the data. We can capture it now and not provide the user the ability to 
differentiate what something is deployed on. That's a heterogeneous 
environment, but just a lack of fine-grained control over where the 
instances fall.


And all of this is simply for the time constraints of Icehouse's first 
pass. A known, temporary limitation.




One reason why a custom filter feels attractive is that it provides us
with a clear upgrade path:

Icehouse
   * nodes are registered with correct attributes
   * create a custom scheduler filter that allows any node to match (see
the sketch after this list)
   * users are informed that for this release, Tuskar will not
differentiate between heterogeneous hardware

J-Release
   * implement the proper use of flavors within Tuskar, allowing Tuskar
to work with heterogeneous hardware
   * work with nova regarding scheduler filters (if needed)
   * remove the custom scheduler filter
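
Such a pass-through filter would be tiny. A sketch against the
Icehouse-era nova filter interface (illustrative, not an actual proposed
patch):

    from nova.scheduler import filters

    class MatchAnyFilter(filters.BaseHostFilter):
        """Temporary filter: let any enrolled node match any flavor."""

        def host_passes(self, host_state, filter_properties):
            # Deliberately ignore hardware specs until Tuskar can
            # differentiate heterogeneous hardware (J-release).
            return True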


Mainn



As far as nova-scheduler and Ironic go, I believe this is a solved
problem. Steps are:
- enroll hardware with proper specs (CPU, RAM, disk, etc)
- create flavors based on hardware specs
- scheduler filter matches requests exactly

There are, I suspect, three areas where this would fall short today:
- exposing to the user when certain flavors shouldn't be picked,
because there is no more hardware available which could match it
- ensuring that hardware is enrolled with the proper specs //
trouble-shooting when it is not
- a UI that does these well

If I understand your proposal correctly, you're suggesting that we
introduce non-deterministic behavior. If the scheduler filter falls
back to $flavor when $flavor is not available, even if the search
is in ascending order and upper-bounded by some percentage, the user
is still likely to get something other than what they requested.
 From a utilization and inventory-management standpoint, this would
be a headache, and from a user standpoint, it would be awkward.
Also, your proposal is only addressing the case where hardware
variance is small; it doesn't include a solution for deployments
with substantially different hardware.

I don't think introducing a non-deterministic hack when the
underlying services already work, just to provide a temporary UI
solution, is appropriate. But that's just my opinion.

Here's an alternate proposal to support same-arch but different
cpu/ram/disk hardware environments:
- keep the scheduler filter doing an exact match
- have the UI only allow the user to define one flavor, and have
that be the lowest common denominator of available hardware
- assign that flavor's properties to all nodes -- basically lie
about the hardware specs when enrolling them
- inform the user that, if they have heterogeneous hardware, they
will get randomly chosen nodes from their pool, and that scheduling
on heterogeneous hardware will be added in a future UI release

This will allow folks who are using TripleO at the commandline to
take advantage of their heterogeneous hardware, instead of crippling
already-existing functionality, while also allowing users who have
slightly (or wildly) different hardware specs to still use the UI.
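
The lowest-common-denominator flavor in that proposal is straightforward
to compute. A sketch with invented field names:

    def lcd_flavor(nodes):
        # Smallest cpu/ram/disk across the enrolled pool; every node can
        # then satisfy the single flavor the UI exposes.
        return {
            'vcpus':   min(n['cpus'] for n in nodes),
            'ram_mb':  min(n['ram_mb'] for n in nodes),
            'disk_gb': min(n['disk_gb'] for n in nodes),
        }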


Regards,
Devananda



On Thu, Jan 30, 2014 at 7:14 AM, Tomas Sedovic tsedo...@redhat.com
mailto:tsedo...@redhat.com wrote:

On 30/01/14 15:53, Matt Wagner wrote:

On 1/30/14, 5:26 AM, Tomas Sedovic wrote:

Hi all,

I've seen some confusion regarding the homogeneous hardware support as
the first step for the tripleo UI. I think it's time to make sure we're
all on the same page.

Here's what I think is not controversial:

1. Build the UI and everything underneath to work with homogeneous
hardware in the Icehouse timeframe
2. Figure out how to support heterogeneous hardware and do that (may or
may not happen within Icehouse)

The first option implies having a single nova flavour that will match
all the boxes we want to work with. It may or may not be surfaced in the

Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Jay Dobies



On 01/28/2014 11:42 AM, Jay Pipes wrote:

On Tue, 2014-01-28 at 10:02 -0500, Tzu-Mainn Chen wrote:

Yep, although the reason why - that no end-user will know what these terms
mean - has never been entirely convincing to me.


Well, tenants would never see any of the Tuskar UI, so I don't think we
need worry about them. And if a deployer is enabling Tuskar -- and using
Tuskar/Triple-O for undercloud deployment -- then I would think that the
deployer would understand the concept/terminology of undercloud and
overcloud, since it's an essential concept in deploying with
Triple-O. :)

So, in short, I don't see a problem with using the terms undercloud and
overcloud.

Best,
-jay


+1, I was going to say the same thing. Someone installing and using 
Tuskar will have to be sold on the concept of it, and I'm not sure how 
we'd describe what it does without using those terms.










Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-17 Thread Jay Dobies



On 01/17/2014 03:28 AM, mar...@redhat.com wrote:

On 16/01/14 00:28, Clint Byrum wrote:

Excerpts from James Slagle's message of 2014-01-15 05:07:08 -0800:

I'll start by laying out how I see editing or updating nodes working
in TripleO without Tuskar:

To do my initial deployment:
1.  I build a set of images for my deployment for different roles. The
images are different based on their role, and only contain the needed
software components to accomplish the role they intend to be deployed.
2.  I load the images into glance
3.  I create the Heat template for my deployment, likely from
fragments that are already available. Set quantities, indicate which
images (via image uuid) are for which resources in heat.
4.  heat stack-create with my template to do the deployment

To update my deployment:
1.  If I need to edit a role (or create a new one), I create a new image.
2.  I load the new image(s) into glance
3.  I edit my Heat template, update any quantities, update any image uuids, etc.
4.  heat stack-update my deployment

In both cases above, I see the role of Tuskar being around steps 3 and 4.



Agreed!



+1 ...


review  /#/c/52045/ is about generating the overcloud template using
merge.py **. Having recently picked this up again and following latest
wireframes and UI design, it seems like most of the current Tuskar code is
going away. After the initial panic I saw Jay has actually already started
that with /#/c/66062/

Jay: I think at some point (asap) my /#/c/52045/ will be rebased on your
  /#/c/66062/. Currently my code creates templates from the Tuskar
representations, i.e. ResourceClasses. For now I will assume that I'll
be getting something along the lines of:

{
'resource_categories': { 'controller': 1, 'compute': 4, 'object': 1,
'block': 2}
}

i.e. just resource categories and number of instances for each (plus any
other user supplied config/auth info). Will there be controllers (do we
need them, apart from a way to create, update, delete)? Let's talk some
more on irc later. I'll update the commit message on my review to point
to yours for now,

thanks! marios

** merge.py is going to be binned but it is the best thing we've got
_today_ and within the Icehouse timeframe.


My stuff got merged in today. You should be able to use db's api.py to 
grab everything you need. Ping me (jdob) if you have any questions on it 
or need some different queries.





Steps 1 and 2 are really CI's responsibility in a CD cloud. The end of
the testing phase is new images in glance! For a stable release cloud,
a tool for pulling new released images from elsewhere into Glance would
be really useful, but worst case an admin downloads the new images and
loads them manually.


I may be misinterpreting, but let me say that I don't think Tuskar
should be building images. There's been a fair amount of discussion
around a Nova native image building service [1][2]. I'm actually not
sure what the status/consensus on that is, but maybe longer term,
Tuskar might call an API to kick off an image build.



Tuskar should just deploy what it has available. I definitely could
see value in having an image updating service separate from Tuskar,
but I think there are many different answers for "how do images arrive
in Glance?".


Ok, so given that frame of reference, I'll reply inline:

On Mon, Jan 13, 2014 at 11:18 AM, Jay Dobies jason.dob...@redhat.com wrote:

I'm pulling this particular discussion point out of the Wireframes thread so
it doesn't get lost in the replies.

= Background =

It started with my first bulletpoint:

- When a role is edited, if it has existing nodes deployed with the old
version, are they automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?


I would think Roles need to be versioned, and the deployed version
recorded as Heat metadata/attribute. When you make a change to a Role,
it's a new version. That way you could easily see what's been
deployed, and if there's a newer version of the Role to deploy.



Could Tuskar version the whole deployment, but only when you want to
make it so? If it gets too granular, it becomes pervasive to try and
keep track of or to roll back. I think that would work well with a goal
I've always hoped Tuskar would work toward which would be to mostly just
maintain deployment as a Heat stack that nests the real stack with the
parameters realized. With Glance growing Heat template storage capability,
you could just store these versions in Glance.
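
A sketch of that nesting, using Heat's provider resource mechanism (the
template and parameter names are hypothetical, and the store-in-Glance
part is aspirational):

    heat_template_version: 2013-05-23
    resources:
      overcloud:
        # The versioned "real" template, with this deployment version's
        # parameters pinned.
        type: overcloud-v3.yaml
        properties:
          compute_count: 20
          compute_image: <glance image uuid>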


Replies:


I know you quoted the below, but I'll reply here since we're in a new thread.


I would expect any Role change to be applied immediately. If there is some
change where I want to keep older nodes as they are set up and apply new
settings only to newly added nodes, I would create a new Role then.


-1 to applying immediately.

When you edit a Role, it gets a new version. But nodes that are
deployed

Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-15 Thread Jay Dobies

I don't necessarily disagree with this assertion, but what this could
lead to is a proliferation of a bunch of very similar images.  Templatizing
some of the attributes (e.g., this package is enabled, that one isn't)
can reduce the potential explosion of images stored in glance.  If that's
a concern, then it needs to be addressed.  Note that this is true whether
tuskar does/helps with the image building or not.


We have quite a proliferation of services already:

http://docs.openstack.org/training-guides/content/figures/5/figures/image31.jpg

Realistically, the number of individual services on that diagram (I
rapidly counted 39.. I bet I'm off) is the maximum number of image
contents we should ever have in a given deployment. Well, plus heat, which
is two more. And maybe Trove.. and Designate.. OK, so let's say 50.

Of course, users might recombine every service with every other service
over the life-cycle of their deployment, but realistically, that might
lead to _100_ individual image definitions in service at one time while
new topologies are being rolled out.

I'm o-k with that kind of proliferation. It is measurable and
controllable. Also it is crazy. Realistically we're going to see maybe
10 definitions.. controllers.. computes.. blocks.. swifts.. and some
supporting things.

The positive trade there is that we don't have to wonder how a box has
changed since it was deployed. It is always running all of the software
we deployed to it, with the config we defined now, or it is in an
error state.


+1 to this last point. Customizing services outside of the image is 
going to complicate knowing what is running on each node. It's much 
easier to know the node was provisioned with image X and then knowing 
what's in X.





Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-15 Thread Jay Dobies

Thanks for the feedback, really good stuff in here :)

On 01/15/2014 08:07 AM, James Slagle wrote:

I'll start by laying out how I see editing or updating nodes working
in TripleO without Tuskar:

To do my initial deployment:
1.  I build a set of images for my deployment for different roles. The
images are different based on their role, and only contain the needed
software components to accomplish the role they intend to be deployed.
2.  I load the images into glance
3.  I create the Heat template for my deployment, likely from
fragments that are already available. Set quantities, indicate which
images (via image uuid) are for which resources in heat.
4.  heat stack-create with my template to do the deployment

To update my deployment:
1.  If I need to edit a role (or create a new one), I create a new image.
2.  I load the new image(s) into glance
3.  I edit my Heat template, update any quantities, update any image uuids, etc.
4.  heat stack-update my deployment

In both cases above, I see the role of Tuskar being around steps 3 and 4.

I may be misinterpreting, but let me say that I don't think Tuskar
should be building images. There's been a fair amount of discussion
around a Nova native image building service [1][2]. I'm actually not
sure what the status/consensus on that is, but maybe longer term,
Tuskar might call an API to kick off an image build.


I didn't mean to imply that Tuskar would be building images, just 
kicking them off.


As for whether or not it should, that's an interesting question. You and 
I are both on the same page on not having a generic image and having the 
services be configured outside of that, so I'll ignore that idea for now.


I've always thought of Tuskar as providing the user with everything 
they'd need. My gut reaction is that I don't like the idea of saying 
they have to go through a separate step of creating the image and then 
configuring the resource category in Tuskar and attaching the image to it.


That said, I suspect my gut is wrong, or at very least not in line with 
the OpenStack way of thinking.



Ok, so given that frame of reference, I'll reply inline:

On Mon, Jan 13, 2014 at 11:18 AM, Jay Dobies jason.dob...@redhat.com wrote:

I'm pulling this particular discussion point out of the Wireframes thread so
it doesn't get lost in the replies.

= Background =

It started with my first bulletpoint:

- When a role is edited, if it has existing nodes deployed with the old
version, are they automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?


I would think Roles need to be versioned, and the deployed version
recorded as Heat metadata/attribute. When you make a change to a Role,
it's a new version. That way you could easily see what's been
deployed, and if there's a newer version of the Role to deploy.


+1, the more I've been thinking about this, the more I like it. We can't 
assume changes will be immediately applied to all provisioned instances, 
so we need some sort of record of what an instance was actually built 
against.
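
With such a record in place, flagging drift becomes a simple comparison.
A sketch with invented field names (neither Tuskar nor Heat expose these
as such today):

    def out_of_sync(instances, roles):
        # Instances built against an older Role version than the current.
        current = dict((r['name'], r['version']) for r in roles)
        return [i for i in instances
                if i['role_version'] < current[i['role_name']]]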



Replies:


I know you quoted the below, but I'll reply here since we're in a new thread.


I would expect any Role change to be applied immediately. If there is some
change where I want to keep older nodes as they are set up and apply new
settings only to newly added nodes, I would create a new Role then.


-1 to applying immediately.


Agreed. At large scales, there are a number of problems with this.


When you edit a Role, it gets a new version. But nodes that are
deployed with the older version are not automatically updated.


We will probably have to store image metadata in tuskar that would map to
glance, once the image is generated. I would say we need to store the list
of the elements and probably the commit hashes (because elements can
change). It should also be versioned, as the images in glance will be
versioned as well.


I'm not sure why this image metadata would be in Tuskar. I definitely
like the idea of knowing the versions/commit hashes of the software
components in your images, but that should probably be in Glance.


+1


We probably can't store it in Glance, because we will first store the
metadata, then generate the image. Right?


I'm not sure I follow this point. But, mainly, I don't think Tuskar
should be automatically generating images.


Then we could see whether the image was created from the metadata and whether
that image was used in the heat template. With versions we could also see
what has changed.


We'll be able to tell what image was used in the heat template, and
thus the deployment, based on its UUID.

I love the idea of seeing differences between images, especially
installed software versions, but I'm not sure that belongs in Tuskar.
That sort of utility functionality seems like it could apply to any
image you might want to launch in OpenStack, not just to do a
deployment.  So

Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-13 Thread Jay Dobies

Excellent write up Jay.

I don't actually know the answer. I'm not 100% bought into the idea that 
Tuskar isn't going to store any information about the deployment and 
will rely entirely on Heat/Ironic as the data store there. Losing this 
extra physical information may be a strong reason why we need to 
capture additional data beyond what is or will be utilized by Ironic.


For now, I think the answer is that this is the first pass for Icehouse. 
We're still a ways off from being able to do what you described 
regardless of where the model lives. There are ideas around how to 
partition things as you're suggesting (configuring profiles for the 
nodes; I forget the exact term but there was a big thread about manual 
v. automatic node allocation that had an idea) but there's nothing in 
the wireframes to account for it yet.


So not a very helpful reply on my part :) But your feedback was 
described well which will help keep those concerns in mind post-Icehouse.



Hmm, so this is a bit disappointing, though I may be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks, and the attributes of
those racks -- location, power supply, etc -- are a critical part of the
overall picture.

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that the tooling will be used in enterprise deployments are diminished.

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling fits the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay




Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jay Dobies



On 01/13/2014 05:43 AM, Jaromir Coufal wrote:

On 2014/10/01 21:17, Jay Dobies wrote:

Another question:

- A Role (sounds like we're moving away from that so I'll call it
Resource Category) can have multiple Node Profiles defined (assuming I'm
interpretting the + and the tabs in the Create a Role wireframe
correctly). But I don't see anywhere where a profile is selected when
scaling the Resource Category. Is the idea behind the profiles that you
can select how much power you want to provide in addition to how many
nodes?


Yes, that is correct, Jay. I mentioned that in the walkthrough and in the
wireframes with the note "More views needed (for deploying, scaling,
managing roles)".

I would say there might be two approaches - one is to specify which node
profile you want to scale in order to select how much power you want to
add.

The other approach is just to scale the number of nodes in a role and
let the system decide the best match (which node profile is chosen will
probably be decided by best fit).

I lean towards the first approach, where you specify which role and which
node profile you want to use for scaling. However, this is just an
introduction of the idea and I believe we can get answers as we get closer
to that step.

Any preferences for one of above mentioned approaches?


I lean towards the former as well. See the Domain Model Locations thread 
and Jay Pipes' response for an admin's use case that backs it up.


A few weeks ago, there was the giant thread that turned into manual v. 
automatic allocation[1]. The conversation used as an example a system 
that was heavily geared towards disk IO being specifically used for the 
storage-related roles.


Where I'm going with this is that I'm not sure it'll be enough to simply 
use some values for a node profile. I think we're going to need some way 
of identifying nodes as having a particular set of characteristics 
(totally running out of words here) and then saying that the new 
allocation should come from that type of node.


That's a long way of saying that I think an explicit step to say more 
about what we're adding is not only necessary, but potentially 
invalidates some of the wireframes as they exist today. I think over 
time, that is going to be much more complex than incrementing some numbers.


Don't get me wrong. I fully appreciate that we're still very early on 
and scoped to Icehouse for now. Need to start somewhere :)



[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022163.html



-- Jarda



[openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-13 Thread Jay Dobies
I'm pulling this particular discussion point out of the Wireframes 
thread so it doesn't get lost in the replies.


= Background =

It started with my first bulletpoint:

- When a role is edited, if it has existing nodes deployed with the old 
version, are they automatically/immediately updated? If not, how do we 
reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?


Replies:

I would expect any Role change to be applied immediately. If there is 
some change where I want to keep older nodes as they are set up and 
apply new settings only to newly added nodes, I would create a new Role then.


We will probably have to store image metadata in tuskar that would map 
to glance, once the image is generated. I would say we need to store the 
list of the elements and probably the commit hashes (because elements 
can change). It should also be versioned, as the images in glance will 
be versioned as well.
We probably can't store it in Glance, because we will first store the 
metadata, then generate the image. Right?


Then we could see whether the image was created from the metadata and 
whether that image was used in the heat template. With versions we could 
also see what has changed.


But there was also idea that there will be some generic image, 
containing all services, we would just configure which services to 
start. In that case we would need to version also this.



= New Comments =

My comments on this train of thought:

- I'm afraid of the idea of applying changes immediately for the same 
reasons I'm worried about a few other things. Very little of what we do 
will actually finish executing immediately and will instead be long 
running operations. If I edit a few roles in a row, we're looking at a 
lot of outstanding operations executing against other OpenStack pieces 
(namely Heat).


The idea of "immediately" also suffers from a sort of "oh shit, that's not 
what I meant" when hitting save. There's no way for the user to review 
what the larger picture is before deciding to make it so.


- Also falling into this category is the image creation. This is not 
something that finishes immediately, so there's a period between when 
the resource category is saved and the new image exists.


If the image is immediately created, what happens if the user tries to 
change the resource category counts while it's still being generated? 
That question applies both if we automatically update existing nodes as 
well as if we don't and the user is just quick moving around the UI.


What do we do with old images from previous configurations of the 
resource category? If we don't clean them up, they can grow out of hand. 
If we automatically delete them when the new one is generated, what 
happens if there is an existing deployment in process and the image is 
deleted while it runs?


We need some sort of task tracking that prevents overlapping operations 
from executing at the same time. Tuskar needs to know what's happening 
instead of simply having the UI fire off into other OpenStack components 
when the user presses a button.


To rehash an earlier argument, this is why I advocate for having the 
business logic in the API itself instead of at the UI. Even if it's just 
a queue to make sure they don't execute concurrently (that's not enough 
IMO, but for example), the server is where that sort of orchestration 
should take place and be able to understand the differences between the 
configured state in Tuskar and the actual deployed state.
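
Even the minimal queue/lock version is worth spelling out. A toy sketch
of serializing long-running operations in the API (illustrative only,
not a proposed Tuskar interface):

    import threading

    _operation_lock = threading.Lock()

    def run_exclusive(operation):
        # Reject a new long-running operation while another is in flight.
        if not _operation_lock.acquire(False):
            raise RuntimeError('another deployment operation is in progress')
        try:
            return operation()
        finally:
            _operation_lock.release()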


I'm off topic a bit though. Rather than talk about how we pull it off, 
I'd like to come to an agreement on what the actual policy should be. My 
concerns focus around the time to create the image and get it into 
Glance where it's available to actually be deployed. When do we bite 
that time off and how do we let the user know it is or isn't ready yet?


- Editing a node is going to run us into versioning complications. So 
far, all we've entertained are ways to map a node back to the resource 
category it was created under. If the configuration of that category 
changes, we have no way of indicating that the node is out of sync.


We could store versioned resource categories in the Tuskar DB and have 
the version information also find its way to the nodes (note: the idea 
is to use the metadata field on a Heat resource to store the res-cat 
information, so including version is possible). I'm less concerned with 
eventual reaping of old versions here since it's just DB data, though we 
still hit the question of when to delete old images.


- For the comment on a generic image with service configuration, the 
first thing that came to mind was the thread on creating images from 
packages [1]. It's not the exact same problem, but see Clint Byrum's 
comments in there about drift. My gut feeling is that having specific 
images for each res-cat will be easier to manage than trying to edit 
what services are running on 

Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Jay Dobies

Thanks for the feedback  :)


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple overclouds.


Yes, absolutely. I should have added For Icehouse like I did in other 
places. Good catch.



There's few pieces of concepts which I think is missing from the list:
- overclouds: after Heat successfully created the stack, Tuskar needs to
keep track whether it applied the post configuration steps (Keystone
initialization, registering services, etc) or not. It also needs to know
the name of the stack (only 1 stack named 'overcloud' for Icehouse).


I assumed this sort of thing was captured by the resource status, though 
I'm far from a Heat expert. Is it not enough to assume that if the 
resource started successfully, all of that took place?



- service endpoints of an overcloud: eg. Tuskar-ui in the undercloud
will need the url of the overcloud Horizon. The overcloud Keystone owns
the information about this (after post configuration is done) and Heat
owns the information about the overcloud Keystone.



- user credentials for an overcloud: it will be used by Heat during
stack creation, by Tuskar during post configuration, by Tuskar-ui
querying various information (eg. running vms on a node) and finally by
the user logging in to the overcloud Horizon. Now it can be found in the
Tuskar-ui settings file [1].


Both of these are really good points that I haven't seen discussed yet. 
The wireframes cover the allocation of nodes and displaying basic 
details of what's created (even that is still placeholder) but not much 
beyond that.


I'd like to break that into a separate thread. I'm not saying it's 
unrelated, but since it's not even wireframed out I'd like to have a 
dedicated discussion about what it might look like. I'll start that 
thread up as soon as I collect my thoughts.



Imre

[1]
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351




Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Jay Dobies

As much as the Tuskar Chassis model is lacking compared to the Tuskar
Rack model, the opposite problem exists for each project's model of
Node. In Tuskar, the Node model is pretty bare and useless, whereas
Ironic's Node model is much richer.


Thanks for looking that deeply into it :)


So, it's not as simple as it may initially seem :)


Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.


That was my understanding as well. The existing Tuskar domain model was 
largely placeholder/proof of concept and didn't necessarily reflect 
exactly what was desired/expected.



Mainn


Best,
-jay

[1]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
[2]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83





Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Jay Dobies

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will 
Ceilometer provide the averages for the role or is that calculated by 
Tuskar?


- When on the change deployments screen, after making a change but not 
yet applying it, how are the projected capacity changes calculated?


- For editing a role, does it make a new image with the changes to what 
services are deployed each time it's saved?


- When a role is edited, if it has existing nodes deployed with the old 
version, are they automatically/immediately updated? If not, how do we 
reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?


- I don't see any indication that the role scaling process is taking 
place. That's a potentially medium/long running operation, we should 
have some sort of way to inform the user it's running and if any errors 
took place.


That last point is a bit of a concern for me. I like the simplicity of 
what the UI presents, but the nature of what we're doing doesn't really 
fit with that. I can click the count button to add 20 nodes in a few 
seconds, but the execution of that is a long running, asynchronous 
operation. We have no means of reflecting that it's running, nor finding 
any feedback on it as it runs or completes.


Related question. If I have 20 instances and I press the button to scale 
it out to 50, and I immediately return to the My Deployment screen, what 
do I see? 20, 50, or the current count as they are stood up?


It could all be written off as a future feature, but I think we should 
at least start to account for it in the wireframes. The initial user 
experience could be off-putting if it's hard to discern the difference 
between what I told the UI to do and when it's actually finished being done.


It's also likely to influence the ultimate design as we figure out who 
keeps track of the running operations and their results (for both simple 
display purposes to the user and auditing reasons).



On 01/10/2014 09:58 AM, Jaromir Coufal wrote:

Hi everybody,

there is first stab of Deployment Management section with future
direction (note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This
will help us to focus on one functionality at a time and if we will be
in time pressure for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf


Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus most on our initial steps (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda



Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-10 Thread Jay Dobies

Another question:

- A Role (sounds like we're moving away from that so I'll call it 
Resource Category) can have multiple Node Profiles defined (assuming I'm 
interpreting the + and the tabs in the Create a Role wireframe 
correctly). But I don't see anywhere where a profile is selected when 
scaling the Resource Category. Is the idea behind the profiles that you 
can select how much power you want to provide in addition to how many nodes?



On 01/10/2014 09:58 AM, Jaromir Coufal wrote:

Hi everybody,

there is first stab of Deployment Management section with future
direction (note that it was discussed as a scope for Icehouse).

I tried to add functionality in time and break it down to steps. This
will help us to focus on one functionality at a time and if we will be
in time pressure for Icehouse release, we can cut off last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf


Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus most on our initial steps (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda



[openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Dobies
I'm trying to hash out where data will live for Tuskar (both long term 
and for its Icehouse deliverables). Based on the expectations for 
Icehouse (a combination of the wireframes and what's in Tuskar client's 
api.py), we have the following concepts:



= Nodes =
A node is a baremetal machine on which the overcloud resources will be 
deployed. The ownership of this information lies with Ironic. The Tuskar 
UI will accept the needed information to create them and pass it to 
Ironic. Ironic is consulted directly when information on a specific node 
or the list of available nodes is needed.



= Resource Categories =
A specific type of thing that will be deployed into the overcloud. 
These are static definitions that describe the entities the user will 
want to add to the overcloud and are owned by Tuskar. For Icehouse, the 
categories themselves are added during installation for the four types 
listed in the wireframes.


Since this is a new model (as compared to other things that live in 
Ironic or Heat), I'll go into some more detail. Each Resource Category 
has the following information:


== Metadata ==
My intention here is that we do things in such a way that if we change 
one of the original 4 categories, or more importantly add more or allow 
users to add more, the information about the category is centralized and 
not reliant on the UI to tell the user what it is.


ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category is desired. 
This is stored in Tuskar's domain model for the category and is used when 
generating the template to pass to Heat to make it happen.


These counts are what is displayed to the user in the Tuskar UI for each 
category. The staging concept has been removed for Icehouse. In other 
words, the wireframes that cover the "waiting to be deployed" state aren't 
relevant for now.


== Image ==
For Icehouse, each category will have one image associated with it. Last 
I remember, there was discussion on whether or not we need to support 
multiple images for a category, but for Icehouse we'll limit it to 1 and 
deal with it later.


Metadata for each Resource Category is owned by the Tuskar API. The 
images themselves are managed by Glance, with each Resource Category 
keeping track of just the UUID for its image.



= Stack =
There is a single stack in Tuskar, the overcloud. The Heat template 
for the stack is generated by the Tuskar API based on the Resource 
Category data (image, count, etc.). The template is handed to Heat to 
execute.


Heat owns information about running instances and is queried directly 
when the Tuskar UI needs to access that information.
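
To make the generation step concrete, a sketch of expanding Resource
Category data into a template (invented names, not the actual Tuskar
API; real code would build on the existing template fragments):

    def build_overcloud_template(categories):
        resources = {}
        for cat in categories:
            for n in range(cat['count']):
                resources['%s%d' % (cat['name'], n)] = {
                    'type': 'OS::Nova::Server',
                    'metadata': {'resource_category': cat['name']},
                    'properties': {'image': cat['image_uuid'],
                                   'flavor': 'baremetal'},
                }
        return {'heat_template_version': '2013-05-23',
                'resources': resources}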


--

Next steps for me are to start to work on the Tuskar APIs around 
Resource Category CRUD and their conversion into a Heat template. 
There's some discussion to be had there as well, but I don't want to put 
too much into one e-mail.



Thoughts?



Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Dobies

The UI will also need to be able to look at the Heat resources running
within the overcloud stack and classify them according to a resource
category.  How do you envision that working?


There's a way in a Heat template to specify arbitrary metadata on a 
resource. We can add flags in there and key off of those.
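
For example, a fragment along these lines (the flag name is
hypothetical):

    resources:
      controller0:
        type: OS::Nova::Server
        metadata:
          tuskar_resource_category: controller
        properties:
          image: { get_param: controller_image }
          flavor: baremetal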



Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.



I'm looking forward to seeing the API specification, as Resource Category
CRUD is currently a big unknown in the tuskar-ui api.py file.


Mainn




Thoughts?



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Jay Dobies
There were so many places in this thread that I wanted to jump in on as 
I caught up that it makes sense to just summarize things in one place 
instead of a half dozen quoted replies.


I agree with the sentiments about flexibility. Regardless of my personal 
preference on source v. packages, it's been my experience that the 
general mindset of production deployment is that new ideas move slowly. 
Admins are set in their ways and policies are in place on how things are 
consumed.


Maybe the newness of all things cloud-related and image-based management 
for scale is a good time to shift the mentality out of packages (again, 
I'm not suggesting whether or not it should be shifted). But I worry 
about adoption if we don't provide an option for people to use blessed 
distro packages, either because of company policy or years of habit and 
bias. If done correctly, there's no difference between a package and a 
particular tag in a source repository, but there is a psychological 
component there that I think we need to account for, assuming someone is 
willing to bite off the implementation costs (which it sounds like there 
is).





Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Jay Dobies



On 12/20/2013 08:40 AM, Ladislav Smola wrote:

On 12/20/2013 02:06 PM, Radomir Dopieralski wrote:

On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.



Hehe. :-)

Ok after the last meeting we are ready to say what goes to Tuskar-API.

Who wants to start that thread? :-)


I'm writing something up, but I won't have anything worth showing until 
after the New Year (sounds so far away when I say it that way; it's 
simply that I'm on vacation starting today until the 6th).







Re: [openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-16 Thread Jay Dobies



On 12/13/2013 01:53 PM, Tzu-Mainn Chen wrote:

On 2013/13/12 11:20, Tzu-Mainn Chen wrote:

These look good!  Quick question - can you explain the purpose of Node
Tags?  Are they
an additional way to filter nodes through nova-scheduler (is that even
possible?), or
are they there solely for display in the UI?

Mainn


We start easy, so that's solely for UI needs of filtering and monitoring
(grouping of nodes). It is already in Ironic, so there is no reason not
to take advantage of it.
-- Jarda


Okay, great.  Just for further clarification, are you expecting this UI 
filtering
to be present in release 0?  I don't think Ironic natively supports filtering
by node tag, so that would be further work that would have to be done.

Mainn


I might be getting ahead of things, but will the tags be free-form 
entered by the user, pre-entered in a separate settings screen and selectable 
at node register/update time, or locked into a select few that we specify?




Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-13 Thread Jay Dobies

* ability to 'preview' changes going to the scheduler


What does this give you? How detailed a preview do you need? What
information is critical there? Have you seen the proposed designs for
a heat template preview feature - would that be sufficient?


Will will probably have a better answer to this, but I feel like at very 
least this goes back to the psychology point raised earlier (I think in 
this thread, but if not, definitely one of the TripleO ones).


A weird parallel is whenever I do a new install of Fedora. I never 
accept their default disk partitioning without electing to review/modify 
it. Even if I didn't expect to change anything, I want to see what they 
are going to give me. And then I compulsively review the summary of what 
actual changes will be applied in the follow-up screen that's displayed 
after I say I'm happy with the layout.


Perhaps that's more a commentary on my own OCD and cynicism that I feel 
dirty accepting the magic defaults blindly. I love the idea of anaconda 
doing the heavy lifting of figuring out sane defaults for home/root/swap 
and so on (similarly, I love the idea of Nova scheduler rationing out 
where instances are deployed), but I at least want to know I've seen it 
before it happens.


I fully admit to not knowing how common that sort of thing is. I suspect 
I'm in the majority of geeks and tame by sys admin standards, but I 
honestly don't know. So I acknowledge that my entire argument for the 
preview here is based on my own personality.




Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-12 Thread Jay Dobies



On 12/12/2013 04:25 PM, Keith Basil wrote:

On Dec 12, 2013, at 4:05 PM, Jay Dobies wrote:


Maybe this is a valid use case?

Cloud operator has several core service nodes of differing configuration
types.

[node1]  -- balanced mix of disk/cpu/ram for general core services
[node2]  -- lots of disks for Ceilometer data storage
[node3]  -- low-end appliance like box for a specialized/custom core service
 (SIEM box for example)

All nodes [1,2,3] are in the same deployment grouping (core services). As 
such, this is a heterogeneous deployment grouping, with heterogeneity in 
this case defined by differing roles and hardware configurations.

This is a real use case.

How do we handle this?


This is the sort of thing I had been concerned with, but I think this is just a 
variation on Robert's GPU example. Rather than butcher it by paraphrasing, I'll 
just include the relevant part:


The basic stuff we're talking about so far is just about saying each
role can run on some set of undercloud flavors. If that new bit of kit
has the same coarse metadata as other kit, Nova can't tell it apart.
So the way to solve the problem is:
- a) teach Ironic about the specialness of the node (e.g. a tag 'GPU')
- b) teach Nova that there is a flavor that maps to the presence of
that specialness, and
- c) teach Nova that other flavors may not map to that specialness

then in Tuskar whatever Nova configuration is needed to use that GPU
is a special role ('GPU compute' for instance) and only that role
would be given that flavor to use. That special config probably means
being in a host aggregate, with an overcloud flavor that specifies
that aggregate, which means at the TripleO level we need to put the
aggregate in the config metadata for that role, and the admin does a
one-time setup in the Nova Horizon UI to configure their GPU compute
flavor.
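
To make that one-time setup concrete, the Nova side might look roughly 
like the following with python-novaclient. This is only a sketch: the 
flavor and aggregate names, the 'special=gpu' metadata key, and the 
credentials are invented for the example, and truly keeping *other* 
flavors off those hosts (step c) further depends on which scheduler 
filters are enabled.

# Rough sketch of Robert's steps (b) and (c); all names here are invented.
from novaclient import client

USERNAME, PASSWORD, TENANT_NAME = 'admin', 'secret', 'admin'  # placeholders
AUTH_URL = 'http://undercloud:5000/v2.0'                      # placeholder

nova = client.Client('2', USERNAME, PASSWORD, TENANT_NAME, AUTH_URL)

# A host aggregate groups the special nodes; its metadata is what the
# AggregateInstanceExtraSpecsFilter matches flavor extra specs against.
agg = nova.aggregates.create('gpu-nodes', None)
nova.aggregates.set_metadata(agg.id, {'special': 'gpu'})
nova.aggregates.add_host(agg.id, 'gpu-hypervisor-01')

# b) a flavor that maps to the presence of the specialness
gpu = nova.flavors.create('gpu-compute', ram=32768, vcpus=8, disk=200)
gpu.set_keys({'aggregate_instance_extra_specs:special': 'gpu'})

# c) other flavors simply don't carry that extra spec; actually excluding
# them from the GPU hosts takes additional filter configuration.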



Yes, the core services example is a variation on the above.  The idea
of _undercloud_ flavor assignment (flavor to role mapping) escaped me
when I read that earlier.

It appears to be very elegant and provides another attribute for Tuskar's
notion of resource classes.  So +1 here.



You mention three specific nodes, but what you're describing is more likely 
three concepts:
- Balanced Nodes
- High Disk I/O Nodes
- Low-End Appliance Nodes

They may have one node in each, but I think your example of three nodes is 
potentially *too* simplified to be considered a proper sample size. I'd guess 
there are commonly more than three in play, in which case the breakdown 
into concepts starts to be more appealing.


Correct - definitely more than three, I just wanted to illustrate the use case.


I'm not sure I explained what I was getting at properly. I wasn't implying 
you thought it was limited to just three. I do the same thing, simplify 
down for discussion purposes (I've done so in my head about this very 
topic).


But I think this may be a rare case where simplifying actually masks the 
concept rather than exposes it. Manual feels a bit more desirable in 
small sample groups but when looking at larger sets of nodes, the flavor 
concept feels less odd than it does when defining a flavor for a single 
machine.


That's all. :) Maybe that was clear already, but I wanted to make sure I 
didn't come off as attacking your example. It certainly wasn't my 
intention. The balanced v. disk machine thing is the sort of thing I'd 
been thinking about for a while but hadn't found a good way to make concrete.



I think the disk flavor in particular has quite a few use cases, especially until SSDs 
are ubiquitous. I'd want to flag those (in Jay's terminology, the disk hotness) 
as hosting the data-intensive portions, but where I had previously been viewing that as 
manual allocation, it sounds like the approach is to properly categorize them for what 
they are and teach Nova how to use them.

Robert - Please correct me if I misread any of what your intention was, I don't 
want to drive people down the wrong path if I'm misinterpreting anything.


-k








Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jay Dobies
Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm 
new to the project. I only mention it again because it's relevant in 
that I missed any of the discussion on why proxying from tuskar API to 
other APIs is looked down upon. Jiri and I had been talking yesterday 
and he mentioned it to me when I started to ask these same sorts of 
questions.


On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as the UI." With the planned architecture changes (making tuskar-api
thinner and getting rid of proxying to other services), there's not an
obvious way to achieve that. We need to figure this out. I present a few
options and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the 
individual APIs directly put a lot of knowledge into the clients that 
had to be replicated across clients. At the best case, that's simply 
knowing where to look for data. But I suspect it's bigger than that and 
there are workflows that will be implemented for tuskar needs. If the 
tuskar API can't call out to other APIs, that workflow implementation 
needs to be done at a higher layer, which means in each client.


Something I'm going to talk about later in this e-mail but I'll mention 
here so that the diagrams sit side-by-side is the potential for a facade 
layer that hides away the multiple APIs. Lemme see if I can do this in 
ASCII:


tuskar-ui -+   +-tuskar-api
   |   |
   +-client-facade-+-nova-api
   |   |
tuskar-cli-+   +-heat-api

The facade layer runs client-side and contains the business logic that 
calls across APIs and adds in the tuskar magic. That keeps the tuskar 
API from calling into other APIs* but keeps all of the API call logic 
abstracted away from the UX pieces.


* Again, I'm not 100% up to speed with the API discussion, so I'm going 
off the assumption that we want to avoid API to API calls. If that isn't 
as strict of a design principle as I'm understanding it to be, then the 
above picture probably looks kinda silly, so keep in mind the context 
I'm going from.
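
To make the facade idea a bit less abstract, here is the rough shape I 
have in my head, sketched in Python. Everything below is hypothetical - 
the class, its methods, and the calls it makes on the injected clients 
are invented to show the layering, not real bindings:

# Hypothetical sketch of the client-side facade from the diagram above.
class DeploymentFacade(object):
    """Owns one binding per service API; UI and CLI both sit on top."""

    def __init__(self, tuskar, nova, heat):
        self.tuskar = tuskar  # tuskar-api binding
        self.nova = nova      # nova-api binding
        self.heat = heat      # heat-api binding

    def unallocated_nodes(self):
        # Cross-API business logic lives here, not in the UI or CLI:
        # everything registered minus everything already deployed.
        registered = set(n.id for n in self.tuskar.nodes.list())
        deployed = set(r.physical_resource_id
                       for r in self.heat.resources.list('overcloud'))
        return registered - deployed

    def scale_role(self, role_name, count):
        # The "tuskar magic": turn a role count into a stack update.
        template = self.tuskar.templates.build(role_name, count)
        self.heat.stacks.update('overcloud', template=template)

Both tuskar-ui and tuskar-cli would then consume this one layer, which is 
what keeps them at feature parity without the API proxying.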


For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
   |
   +-tuskar-api-+-nova-api
   ||
tuskar-cli-++-heat-api

Where a tuskar client talked to the tuskar API to do tuskar things. 
Whatever was needed to do anything tuskar-y was hidden away behind the 
tuskar API.



This meant that the integration logic of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.

Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.


When you say python-...clients, is there a distinction between the CLI and 
a bindings library that invokes the server-side APIs? In other words, 
the CLI is packaged as CLI+bindings and the UI as GUI+bindings?



We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If i were bold i'd also think about integrating Tuskar with other
software which would be prevented too if we keep the business logic in
UI, but i'm not absolutely positive about use cases here).


I see your point about preventing its use from the CLI, but more 
disconcerting IMO is that it just doesn't belong in the UI. That sort of 
logic, the Heat things behind the scenes, sounds like the jurisdiction 
of the API (if I'm reading into what that entails correctly).
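
To put a rough shape on it, the "Heat things behind the scenes" might 
boil down to something like this sketch against python-heatclient; the 
endpoint, token, template file, and its compute_count parameter are all 
placeholders I've made up:

# Hedged sketch: turning "user picked 3 compute nodes" into a Heat call.
from heatclient.client import Client

heat = Client('1', 'http://heat-api:8004/v1/TENANT_ID',  # placeholder
              token='AUTH_TOKEN')                        # placeholder

with open('overcloud.yaml') as f:  # placeholder template
    template = f.read()

heat.stacks.create(stack_name='overcloud',
                   template=template,
                   parameters={'compute_count': 3})  # the user's pick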



Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am i wrong that Anna the infrastructure
administrator would want that?)


To reiterate my point above, I see the idea of getting the CLI on par, 
but I also see it as striving for a cleaner design as well.



Here are some options i 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jay Dobies
 I will take it a little sideways. I think we should be asking why we
 have needed the tuskar-api. It has done some more complex logic (e.g.
 building a heat template) or stored additional info not supported by
 the services we use (like rack associations).

 That is a perfectly fine use case for introducing tuskar-api.

 Although now, when everything is shifting to the services themselves,
 we don't need tuskar-api for that kind of stuff. Can you please list
 what complex operations are left that should be done in tuskar? I
 think discussing concrete stuff would be best.


This is a good call to circle back on; I'm not sure of it either. 
The wireframes I've seen so far largely revolve around node listing and 
allocation, but I 100% know I'm oversimplifying it and missing something 
bigger there.



Also, as I have been talking with rdopieralsky, there have been some
problems in the past with tuskar doing multiple steps in one, like
creating a rack and registering new nodes at the same time. As those are
separate API calls and there is no transaction handling, we should not
do this kind of thing in the first place. If we have actions that depend
on each other, they should go from the UI one by one. Otherwise we will
be showing messages like, "The rack has not been created, but 5 of the 8
nodes have been added. We have tried to delete those added nodes, but 2
of the 5 deletions failed. Please figure this out, then you can run this
awesome action that calls multiple dependent APIs without real rollback
again." (or something like that, depending on what gets created first)


This is what I expected to see as the primary argument against it, the 
lack of a good transactional model for calling the dependent APIs. And 
it's certainly valid.


But what you're describing is the exact same problem regardless if you 
go from the UI or from the Tuskar API. If we're going to do any sort of 
higher level automation of things for the user that spans APIs, we're 
going to run into it. The question is whether the client(s) handle it or the 
API. The alternative is to not have the awesome action in the first 
place, in which case we're not really giving the user as much value as 
an application.
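
For what it's worth, without real transactions the standard answer is 
explicit compensation, wherever the workflow ends up living. A sketch - 
every call below is hypothetical:

# Hypothetical sketch: a cross-API action with best-effort rollback.
def create_rack_with_nodes(tuskar, ironic, rack_spec, node_specs):
    rack = tuskar.racks.create(rack_spec)        # invented call
    registered = []
    try:
        for spec in node_specs:
            registered.append(ironic.node.create(**spec))
    except Exception:
        # Compensation is itself fallible -- this is exactly the
        # "2 of the 5 deletions failed" message problem from above.
        for node in registered:
            try:
                ironic.node.delete(node.uuid)
            except Exception:
                pass  # would need to surface partial state to the user
        tuskar.racks.delete(rack.id)             # invented call
        raise
    return rack, registered

It doesn't make the failure modes go away, but it keeps them in one place 
instead of reimplemented in every client.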



I am not saying we should not have tuskar-api. Just put the things there
that belong there; don't proxy everything.



btw. the real path of the diagram is

tuskar-ui -> tuskarclient -> tuskar-api -> heatclient -> heat-api|ironic|etc.

My conclusion
--

I say if it can be tuskar-ui -> heatclient -> heat-api, let's keep it
that way.


I'm still fuzzy on what OpenStack means when it says *client. Is that 
just a bindings library that invokes a remote API or does it also 
contain the CLI bits?



If we realize we are putting some business logic into the UI that also
needs to be done in the CLI, or we need to store some additional data
that doesn't belong anywhere else, let's put it in Tuskar-API.

Kind Regards,
Ladislav


Thanks for the feedback  :)





Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Jay Dobies


So glad we're hashing this out now. This will save a bunch of headaches 
in the future. Good call pushing this forward.


On 12/11/2013 02:15 PM, Tzu-Mainn Chen wrote:

Hi,

I'm trying to clarify the terminology being used for Tuskar, which may be 
helpful so that we're sure
that we're all talking about the same thing :)  I'm copying responses from the 
requirements thread
and combining them with current requirements to try and create a unified view.  
Hopefully, we can come
to a reasonably rapid consensus on any desired changes; once that's done, the 
requirements can be
updated.

* NODE - a physical general-purpose machine capable of running in many roles. 
Some nodes may have a hardware layout that is particularly
useful for a given role.


Do we ever need to distinguish between undercloud and overcloud nodes?


  * REGISTRATION - the act of creating a node in Ironic


DISCOVERY - The act of having nodes found auto-magically and added to 
Ironic with minimal user intervention.




  * ROLE - a specific workload we want to map onto one or more nodes. 
Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.

  * MANAGEMENT NODE - a node that has been mapped with an undercloud 
role
  * SERVICE NODE - a node that has been mapped with an overcloud role
 * COMPUTE NODE - a service node that has been mapped to an 
overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped to an 
overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been mapped to an 
overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been mapped to an 
overcloud block storage role

  * UNDEPLOYED NODE - a node that has not been mapped with a role
   * another option - UNALLOCATED NODE - a node that has not been 
allocated through nova scheduler (?)
- (after reading lifeless's explanation, I agree that 
allocation may be a
   misleading term under TripleO, so I 
personally vote for UNDEPLOYED)


Undeployed still sounds a bit odd to me when paired with the word role. 
I could see deploying a workload bundle or something, but a role 
doesn't feel like a tangible thing that is pushed out somewhere.


Unassigned? As in, it hasn't been assigned a role yet.


  * INSTANCE - A role deployed on a node - this is where work actually 
happens.


I'm fine with instance, but the phrasing "a role deployed on a node" 
feels odd to me in the same way undeployed does. Maybe a slight change 
to "a node that has been assigned a role", but that also may be me being 
entirely too nit-picky.


To put it in context, on a scale of 1-10, my objection to this and 
undeployed is around a 2, so don't let me come off as strenuously 
objecting.



* DEPLOYMENT

  * SIZE THE ROLES - the act of deciding how many nodes will need to be 
assigned to each role
* another option - DISTRIBUTE NODES (?)
  - (I think the former is more accurate, but 
perhaps there's a better way to say it?)

  * SCHEDULING - the process of deciding which role is deployed on which 
node


I know this derives from a Nova term, but to me, the idea of 
scheduling carries a time-in-the-future connotation. The 
interesting part of what goes on here is the assignment of which roles 
go to which instances.



  * SERVICE CLASS - a further categorization within a service role for a 
particular deployment.


I don't understand this one; can you add a few examples?


   * NODE PROFILE - a set of requirements that specify what attributes 
a node must have in order to be mapped to
a service class


Even without knowing what service class is, I like this one.  :)




Does this seem accurate?  All feedback is appreciated!

Mainn


Thanks again  :D







Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-10 Thread Jay Dobies

Thanks for the explanation!

I'm going to claim that the thread revolves around two main areas of 
disagreement.  Then I'm going
to propose a way through:

a) Manual Node Assignment

I think that everyone agrees that automated node assignment through 
nova-scheduler is by far the ideal case; there's no disagreement there.

The disagreement comes from whether we need manual node assignment or not.  I 
would argue that we
need to step back and take a look at the real use case: heterogeneous nodes.  
If there are literally
no characteristics that differentiate nodes A and B, then why do we care which 
gets used for what?  Why
do we need to manually assign one?


This is a better way of verbalizing my concerns. I suspect there are 
going to be quite a few heterogeneous environments built from legacy 
pieces in the near term and fewer built from the ground up with all new 
matching hotness.


On the other side of it, instead of handling legacy hardware I was 
worried about the new hotness (not sure why I keep using that term) 
specialized for a purpose. This is exactly what Robert described in his 
GPU example. I think his explanation of how to use the scheduler to 
accommodate that makes a lot of sense, so I'm much less behind the idea 
of a strict manual assignment than I previously was.



If we can agree on that, then I think it would be sufficient to say that we 
want a mechanism to allow
UI users to deal with heterogeneous nodes, and that mechanism must use 
nova-scheduler.  In my mind,
that's what resource classes and node profiles are intended for.

One possible objection might be: nova scheduler doesn't have the appropriate 
filter that we need to
separate out two nodes.  In that case, I would say that needs to be taken up 
with nova developers.
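
(Worth noting: the bar for that is fairly low, since a scheduler filter 
is a small class. A rough, Icehouse-era sketch against the filter 
interface; the 'special_hardware' key is invented for the example:)

# Rough sketch of a custom Nova scheduler filter (circa Icehouse).
# The 'special_hardware' extra-spec/stat key is invented.
from nova.scheduler import filters


class SpecialHardwareFilter(filters.BaseHostFilter):
    """Pass only hosts whose reported stats match the flavor's request."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        wanted = instance_type.get('extra_specs', {}).get('special_hardware')
        if not wanted:
            return True  # the flavor doesn't care; any host passes
        return host_state.stats.get('special_hardware') == wanted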


b) Terminology

It feels a bit like some of the disagreement comes from people using different 
words for the same thing.
For example, the wireframes already detail a UI where Robert's roles come 
first, but I think that message
was confused because I mentioned node types in the requirements.

So could we come to some agreement on what the most exact terminology would be? 
 I've listed some examples below,
but I'm sure there are more.

node type | role
management node | ?
resource node | ?
unallocated | available | undeployed
create a node distribution | size the deployment
resource classes | ?
node profiles | ?

Mainn

- Original Message -

On 10 December 2013 09:55, Tzu-Mainn Chen tzuma...@redhat.com wrote:

* created as part of undercloud install process



By that note I meant that Nodes are not resources; Resource instances
run on Nodes. Nodes are the generic pool of hardware we can deploy
things onto.


I don't think resource nodes is intended to imply that nodes are
resources; rather, it's supposed to
indicate that it's a node where a resource instance runs.  It's supposed to
separate it from management node
and unallocated node.


So the question is are we looking at /nodes/ that have a /current
role/, or are we looking at /roles/ that have some /current nodes/.

My contention is that the role is the interesting thing, and the nodes
is the incidental thing. That is, as a sysadmin, my hierarchy of
concerns is something like:
  A: are all services running
  B: are any of them in a degraded state where I need to take prompt
action to prevent a service outage [might mean many things: software
update/disk space criticals/a machine failed and we need to scale the
cluster back up/too much load]
  C: are there any planned changes I need to make [new software deploy,
feature request from user, replacing a faulty machine]
  D: are there long term issues sneaking up on me [capacity planning,
machine obsolescence]

If we take /nodes/ as the interesting thing, and what they are doing
right now as the incidental thing, it's much harder to map that onto
the sysadmin concerns. If we start with /roles/ then can answer:
  A: by showing the list of roles and the summary stats (how many
machines, service status aggregate), role level alerts (e.g. nova-api
is not responding)
  B: by showing the list of roles and more detailed stats (overall
load, response times of services, tickets against services)
  and a list of in-trouble instances in each role - instances with
alerts against them - low disk, overload, failed service,
early-detection alerts from hardware
  C: probably out of our remit for now in the general case, but we need
to enable some things here like replacing faulty machines
  D: by looking at trend graphs for roles (not machines), but also by
looking at the hardware in aggregate - breakdown by age of machines,
summary data for tickets filed against instances that were deployed to
a particular machine

C: and D: are (F) category work, but for all but the very last thing,
it seems clear how to approach this from a roles perspective.

I've tried to approach this using /nodes/ as the starting point, and
after two terrible drafts 
