Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Tomas Sedovic

On 04/06/2017 11:53 AM, Martin André wrote:

Hellooo,

I'd like to propose we extend Florian Fuchs +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, tripleo technically, so there are no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor in the project [1]. He has built
good expertise over the last few months and I think it's time he
officially has the right to approve changes in tripleo-validations.

Consider this my +1 vote.


Yes please! +1 from me.



Martin

[1] 
http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike



Re: [openstack-dev] [validations][ui][tripleo] resetting state of validations

2017-01-19 Thread Tomas Sedovic

On 01/18/2017 10:14 PM, Dan Trainor wrote:

Hi -

Is there a way to reset the state of all the validations that have
previously run, back to the original state they were in prior to running?

Using the UI, for example, some validations (by design) run as soon as
you log in.  Others run after different actions are completed.  But
there's a state at which none of the validations have been run, prior to
logging in to the UI.  I want to re-run those validations as if I had
logged in to the UI for the first time, for testing purposes.

Thanks!
-dant



(adding tripleo to the subject)

I don't believe there is a Mistral action that would let you do a reset 
like that, but a potential workaround would be to clone an existing 
plan. When you switch to the clone in the UI, it should be in the state 
you're asking for.


I don't have a tripleo env handy so I can't verify this will work, but I 
do seem to remember it behaving that way.


Tomas






Re: [openstack-dev] [TripleO] Newton release this week

2016-11-07 Thread Tomas Sedovic

On 11/06/2016 05:24 PM, James Slagle wrote:

TripleO plans to do an updated Newton release this upcoming week to
pick up the critical fixes that have been backported to stable/newton
since the original Newton release.

My plan as of now is to request the release on Wednesday November 9th.
I'll use this initial patch[1] from Emilien when I do, and update the
respective hashes in that review to be the latest on the newton branch
from each repository.

If there are any concerns, or something that needs to be included in
this release but you don't feel it will have merged by November 9th,
please let me know asap.


Hey James,

This is a fairly important bug for the post-deployment validations:

https://bugs.launchpad.net/tripleo/+bug/1635226

fixed by:

https://review.openstack.org/#/c/391093/
https://review.openstack.org/#/c/390854/

The patches have (imho) been good to go for a few days, but we couldn't 
merge them due to the CI issues.


It would be great if we could get them in the release, too. I'll create 
the backports as soon as they merge in master.


Tomas




[1] https://review.openstack.org/#/c/391799/






[openstack-dev] [tripleo] How To Review tripleo-validations

2016-10-10 Thread Tomas Sedovic
the tripleo-common readme, but all of this should be 
exposed in tripleo.org.


I'll try to get to that in the coming weeks.

* We do not test any of the validations in the CI. That needs to change.

* If possible, the deployers shouldn't have to run the validations 
manually at the right time. They should be run as part of the tripleo 
workflows (a rough example of today's manual invocation follows this list).


* The validation bugs are stored in the tripleo launchpad[4] with the 
`validations` tag.
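
For reference, a manual run currently looks something like the sketch 
below (a sketch only; the inventory script location and the playbook name 
are from memory, so adjust as needed):

  git clone https://git.openstack.org/openstack/tripleo-validations
  cd tripleo-validations
  ansible-playbook -i scripts/tripleo-ansible-inventory validations/undercloud-ram.yaml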


If you have any questions or suggestions, don't hesitate to reply here 
or talk to me (shadower) on IRC.


Cheers,
Tomas Sedovic

[1]: https://bugs.launchpad.net/tripleo/+bug/1620617
[2]: http://docs.ansible.com/ansible/playbooks_variables.html
[3]: http://docs.ansible.com/ansible/list_of_all_modules.html
[4]: https://bugs.launchpad.net/tripleo/




[openstack-dev] [TripleO] Status of the validations and priority patches

2016-08-16 Thread Tomas Sedovic

Hey folks,

Martin André and I have been working on adding validations into the 
TripleO workflow. And since the time for landing things in Newton is 
getting closer, here are a few patches I'd like to call your attention to.


This patchset adds the Mistral workflows the gui and cli need for 
running validations:


https://review.openstack.org/#/c/313632/

This patch adds a couple of libraries for reading undercloud.conf 
and hiera data, which a few other validations need:


https://review.openstack.org/#/c/329385


A lot of our other patches depend on these. Other than that, anything in 
tripleo-validations is fair game (most patches are individual 
validations):


https://review.openstack.org/#/q/project:openstack/tripleo-validations+status:open


Cheers,
Tomas Sedovic



Re: [openstack-dev] [E] [TripleO] scripts to do post deployment analysis of an overcloud

2016-08-08 Thread Tomas Sedovic

On 08/08/2016 11:05 AM, Hugh Brock wrote:

On Wed, Aug 3, 2016 at 4:49 PM, Joe Talerico  wrote:

On Wed, Jul 27, 2016 at 2:04 AM, Hugh Brock  wrote:

On Jul 26, 2016 8:08 PM, "Gordon, Kent" 
wrote:








-Original Message-
From: Gonéri Le Bouder [mailto:gon...@lebouder.net]
Sent: Tuesday, July 26, 2016 12:24 PM
To: openstack-dev@lists.openstack.org
Subject: [E] [openstack-dev] [TripleO] scripts to do post deployment
analysis
of an overcloud

Hi all,

For the Distributed-CI[0] project, we wrote two scripts[1] that we use to
extract


Links not included in message


information from an overcloud.
We use this information to improve the readability of the deployment
logs.
I attached an example to show how we use the extracted stack
information.

Now my question: do you know of any other tools that we can use to do this
kind of analysis?
--
Gonéri Le Bouder






Tomaš, this seems like it might be useful for the deploy validations,
and also for improving the quality of error reporting -- would you
mind taking a look?


Yeah we may end up using the heat stack dump tool for some validations 
since it puts all the Heat data in one place.


However, this seems like a great thing to be included in the openstack 
overcloud deploy command and/or the mistral workflow.


I.e. after each deployment, we could run list_nodes_status to verify 
that the overcloud nodes are indeed running and accessible (before 
tempest) and dump the o-c-c logs to a known directory.
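
Something in this spirit would probably do as a first pass (not the DCI 
scripts themselves, just a rough sketch; it assumes it runs on the 
undercloud with stackrc sourced and the default heat-admin user):

  source ~/stackrc
  for ip in $(openstack server list -f value -c Networks | sed 's/ctlplane=//'); do
    if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "heat-admin@$ip" true; then
      echo "$ip reachable"
    else
      echo "$ip UNREACHABLE"
    fi
  done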




--Hugh







Re: [openstack-dev] [TripleO] Consolidating TripleO validations with Browbeat validations

2016-06-21 Thread Tomas Sedovic

On 06/20/2016 06:37 PM, Joe Talerico wrote:

Hello - It would seem there is a little bit of overlap with TripleO
validations ( clapper validations ) and Browbeat *Checks*. I would
like to see these two come together, and I wanted to get some feedback
on this.

For reference here are the Browbeat checks :
https://github.com/openstack/browbeat/tree/master/ansible/check

We check for common deployment mistakes, possible deployment
performance issues and some bugs that could impact the scale and
performance of your cloud... At the end we build a report of found
issues with the cloud, like :
https://github.com/openstack/browbeat/blob/master/ansible/check/browbeat-example-bug_report.log

We eventually wanted to take these findings and push them to
ElasticSearch as metadata for our result data (just so we would be
aware of any BZs or possibly missed tuning).

Anyhoo, I just would like to get feedback on consolidating these
checks into TripleO Validations if that makes sense. If this does make
sense, who could I work with to see that this happens?


Hey!

I'd love to have a single place for all the validations if it's at all 
possible.


Our repos are structured a little differently, but I hope we can come up 
with a solution that works for both uses.


The primary idea behind the tripleo-validations (née Clapper) repo is to 
have checks at the various stages of the deployment to find 
potential hardware issues or incorrect configuration early.


We want to have these run by the CLI and the web UI automatically, which 
is why we opted for the approach of one validation (playbook) per file, 
with extra metadata embedded in each playbook.
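
A minimal sketch of that per-file format (field names quoted from memory, 
so treat it as illustrative rather than authoritative):

  - hosts: undercloud
    vars:
      metadata:
        name: Verify the undercloud has enough RAM
        description: Fail early when the undercloud node is under-provisioned.
        groups:
          - pre-deployment
    tasks:
      - name: Fail if there is less than 8 GB of RAM
        fail: msg="The undercloud needs at least 8 GB of RAM"
        when: ansible_memtotal_mb < 8000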


There are two people working on these right now: myself and Martin 
André. We're in #tripleo (mandre & shadower). Feel free to talk to 
either of us, though I suspect I may have more time for this.


We're both in the CE(S)T timezone so there should be some overlap for real-time chat.

Tomas




Thanks
Joe



Re: [openstack-dev] [TripleO] Consolidating TripleO validations with Browbeat validations

2016-06-21 Thread Tomas Sedovic

On 06/20/2016 08:58 PM, Assaf Muller wrote:

On Mon, Jun 20, 2016 at 12:43 PM, Joe Talerico  wrote:

On Mon, Jun 20, 2016 at 12:41 PM, Ihar Hrachyshka  wrote:



On 20 Jun 2016, at 18:37, Joe Talerico  wrote:

Hello - It would seem there is a little bit of overlap with TripleO
validations ( clapper validations ) and Browbeat *Checks*. I would
like to see these two come together, and I wanted to get some feedback
on this.

For reference here are the Browbeat checks :
https://github.com/openstack/browbeat/tree/master/ansible/check

We check for common deployment mistakes, possible deployment
performance issues and some bugs that could impact the scale and
performance of your cloud... At the end we build a report of found
issues with the cloud, like :
https://github.com/openstack/browbeat/blob/master/ansible/check/browbeat-example-bug_report.log

We eventually wanted to take these findings and push them to
ElasticSearch as metadata for our result data (just so we would be
aware of any BZs or possibly missed tuning).

Anyhoo, I just would like to get feedback on consolidating these
checks into TripleO Validations if that makes sense. If this does make
sense, who could I work with to see that this happens?


Sorry for hijacking the thread somewhat, but it seems that neutron-sanity-check 
would cover some common deployment issues if utilized by projects like 
browbeat. Has anyone considered the tool?

http://docs.openstack.org/cli-reference/neutron-sanity-check.html

If there are projects that are interested in integrating checks that are 
implemented by the neutron community, we would be glad to give some guidance.

Ihar


Hey Ihar - the TripleO validations are using this :
https://github.com/rthallisey/clapper/blob/0881300a815f8b801a38d117b8d01b42a00c7f7b/ansible-tests/validations/neutron-sanity-check.yaml


Oops, that's missing a bunch of configuration files. Here are the
configuration values it expects:
https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity_check.py#L32

And here's how the tool uses them:
https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity_check.py#L272

It runs specific checks according to configuration file values. Is
ipset enabled in the OVS agent configuration file? Great, let's check
we can use it and report back if there are any errors.
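
For anyone who hasn't run it, a typical invocation looks something like 
this (the config file paths are common defaults and will differ per 
deployment):

  neutron-sanity-check --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini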


Aah, thanks for letting us know! I'll update the validation.

Tomas








Re: [openstack-dev] [tripleo] composable roles team

2016-05-03 Thread Tomas Sedovic

On 05/02/2016 07:47 PM, Brent Eagles wrote:


On 05/01/2016 08:01 PM, Emilien Macchi wrote:




If a feature can't land without disruption, then why not use a
special branch to be merged once the feature is complete?


The problem is that during our work, some people will update the
manifests, and it will affect us, since we're copy/pasting the
code somewhere else (in puppet-tripleo). That's why we might need
some outstanding help from the team to converge to the new model.
I know asking that is tough, but if we want to converge quickly,
we need to make the adoption accepted by everyone. One thing we can
do is ask our reviewer team to track the patches that will need
some work, and the composable team can help in the review process.

The composable roles is a feature that we all wait, having the
help from our contributors will really save us time.


s/wait/want/ I expect.

Well said. I understand the reservations on the -1 for non-composable
role patches. It *does* feel a bit strong, but in the end I think it's
just being honest. A patch on tht landing "as is" prior to the merge
of the related composable role changes seems really unlikely. I, for
one, am willing to do what I can to help anyone who has had their
patch pre-empted during this period get their patch refactored/ported
once the comp-roles thing has settled down.


There is a precedent for this: back when we were using merge.py to 
generate the TripleO Heat templates, the patch that moved us to pure 
Heat was taking a long time to merge due to all the other t-h-t patches 
coming in and causing conflicts.


So we decided to -2 all the other t-h-t patches until this one got merged.

On the other hand, that only lasted for a few days. I'm not sure how long 
it will take to get the composable roles landed.


Tomas




Cheers,

Brent




Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2015-12-23 Thread Tomas Sedovic

On 12/23/2015 01:01 PM, Steven Hardy wrote:

On Tue, Dec 22, 2015 at 03:36:02PM +, Dougal Matthews wrote:

Hi all,

This topic came up in the 2015-12-15 meeting[1], and again briefly today.
After working with the code that came out of the deployment library
spec[2] I
had some concerns with how we are storing the templates.

Simply put, when we are dealing with 100+ files from
tripleo-heat-templates
how can we ensure consistency in Swift without any atomicity or
transactions.
I think this is best explained with a couple of examples.

 - When we create a new deployment plan (upload all the templates to swift)
   how do we handle the case where there is an error? For example, if we are
   uploading 10 files - what do we do if the 5th one fails for some reason?
   There is a patch to do a manual rollback[3], but I have concerns about
   doing this in Python. If Swift is completely inaccessible for a short
   period the rollback won't work either.


How does using a different data store help fix this error path?

Regardless of the Swift/DB/Git choice, you have to detect something failed,
and mark the plan as in a failed state.


 - When deploying to Heat, we need to download all the YAML files from Swift.
   This can take a couple of seconds. What happens if somebody starts to
   upload a new version of the plan in the middle? We could end up trying to
   deploy half old and half new files. We wouldn't have a consistent view of
   the database.


If this happens, the API design is wrong - we should have a plan reference
a unique identifier, including a version (e.g through swift versioned
objects, git tags/branches or whatever), then on deployment heat should
download exactly one version of those artefacts (e.g via a swift tempurl or
a git URL, or whatever).

FYI heatclient already supports downloading template objects directly from
swift, and heat/heatclient already support downloading from http URLs, so
all we have to do AFAICS is generate a URL pointing to a consistent
version of the templates.
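
To make the "consistent URL" idea concrete, a rough sketch using the plain 
swift and heat CLIs (the account, container and object names here are just 
placeholders, not anything TripleO defines):

  swift post -m "Temp-URL-Key:plansecret"
  swift tempurl GET 3600 /v1/AUTH_<tenant>/overcloud-plan/overcloud.yaml plansecret
  heat stack-create overcloud \
      --template-url "https://<object-store>/v1/AUTH_<tenant>/overcloud-plan/overcloud.yaml?temp_url_sig=<sig>&temp_url_expires=<ts>"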


We had a few suggestions in the meeting:

 - Add a locking mechanism. I would be concerned about deadlocks or having to
   lock for the full duration of a deploy.


I think we need to create a new version of a plan every time it's modified,
then on deployment the concern around locking is much reduced, or even
eliminated?


 - Store the files in a tarball (so we only deal with one file). I think we
   could still hit issues with multiple operations unless we guarantee that
   only one instance of the API is ever running.

I think these could potentially be made to work, but at this point are we
using the best tool for the job?

For alternatives, I can think of a couple of options:

- Use a database, like we did for Tuskar and most OpenStack API's do.


Storing template files in a database with Tuskar was a mistake - we should
*not* do that again IMO - it was a very opaque interface and caused a lot
of confusion in use.

Storing code (in this case yaml files) in some sort of versioned repository
is *such* a solved problem - it'll be total wheel-reinventing if we
implement template revision control inside a bespoke DB IMHO.

I think swift is probably a workable solution (using versioning and
tempurl's), but we should really consider making the template store
pluggable, as having the API backed by an (operator visible) git repo is
likely to be a nicer and more familiar solution.

The question around validation data I'm less clear on - we already store
discovery data in swift, which seems to work OK - how is the validation
data different, in that it warrants a bespoke DB data store?


Each validation result is quite small (basically passed/failed, 
date of the run, and which stage and validation it corresponds to).


We'll want to be able to query them per stage, validation and plan, sort 
them by time and possibly select validations within a certain 
time span.


We can just store them in Swift, but then we'd have to write the code 
that maintains the indexes and filters the objects based on a given 
query. The validations can run in parallel, so we need to make sure the 
indexes are updated atomically (or not use indexes at all and always 
load all the results).


All that is certainly feasible, but it's also something that a database 
does out of the box. We're talking about a single table and 3-4 SELECT 
statements (or the equivalent in Mongo, etc.).
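
Roughly this, just for illustration (the table and column names here are 
made up, not taken from any actual patch):

  CREATE TABLE validation_results (
      id         INTEGER PRIMARY KEY,
      plan       VARCHAR(255) NOT NULL,
      stage      VARCHAR(255) NOT NULL,
      validation VARCHAR(255) NOT NULL,
      passed     BOOLEAN NOT NULL,
      run_at     TIMESTAMP NOT NULL
  );

  -- per plan/stage, per validation, and within a time span
  SELECT * FROM validation_results WHERE plan = 'overcloud' AND stage = 'pre-deployment';
  SELECT * FROM validation_results WHERE validation = 'undercloud-ram' ORDER BY run_at DESC;
  SELECT * FROM validation_results WHERE run_at BETWEEN '2015-12-01' AND '2015-12-23';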




Steve






Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2015-12-22 Thread Tomas Sedovic

On 12/22/2015 04:36 PM, Dougal Matthews wrote:

Hi all,

This topic came up in the 2015-12-15 meeting[1], and again briefly today.
After working with the code that came out of the deployment library
spec[2] I
had some concerns with how we are storing the templates.





For alternatives, I can think of a couple of options:

- Use a database, like we did for Tuskar and most OpenStack API's do.
- Invest time in building something on Swift.
- Glance was transitioning to be an Artifact store. I don't know the status of
   this or if it would meet our needs.

Any input, ideas or suggestions would be great!


This is probably worth mentioning:

https://review.openstack.org/#/c/255792/

If this extension to the API goes through, it may add some weight to the 
database option.


At minimum, we'd have to store the validation results somewhere and be 
able to query them per stage, validation and (optionally) plan. That 
could be done in Swift or filesystem/git as well, but we'd have to write 
code to handle indexing and querying and at that point using a DB seems 
easier.


Tomas Sedovic




Thanks,
Dougal


[1]:
http://eavesdrop.openstack.org/meetings/tripleo/2015/tripleo.2015-12-15-14.03.log.html#l-89
[2]:
https://specs.openstack.org/openstack/tripleo-specs/specs/mitaka/tripleo-overcloud-deployment-library.html
[3]: https://review.openstack.org/#/c/257481/




Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-22 Thread Tomas Sedovic

On 01/14/2015 01:47 AM, Ton Ngo wrote:

Hi Tomas,
  I think your patch is a great start so we can prototype quickly;  I am
trying it out right now.  We can break up the implementation into several
parts that can be updated more or less independently based on the feedback.
Ton,


snip





Hey everyone,

I've sent another revision, this time something that I think is 
actually mergeable (sans the missing tests, I'll add some tomorrow).


There are two patches now:

https://review.openstack.org/#/c/146123/ (heat-engine)

https://review.openstack.org/#/c/149319/ (python-heatclient)

They're using the environment to determine which resources should have 
breakpoints (python-heatclient converts the convenient CLI invocation to 
appropriate environment entries).


Both stack-create and stack-update support breakpoints and you can run 
stack-update on a stack that's waiting on a breakpoint and things should 
just work.


Breakpoints on resources in nested stacks are supported. The spec 
mentioned prefixing such resources with the nested template name but 
that's not sufficient to resolve all the conflicts so we check the 
absolute path up the nested stacks instead. Both heatclient and the 
environment offer a bit of syntactic sugar to make this less tedious.


You can clear active breakpoints by calling `heat clear-breakpoint 
mystack resource_1 resource_2 ...`.



Feedback is much appreciated!

Tomas



Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-13 Thread Tomas Sedovic

On 01/12/2015 07:05 PM, Steven Hardy wrote:

On Mon, Jan 12, 2015 at 04:29:15PM +0100, Tomas Sedovic wrote:

Hey folks,

I did a quick proof of concept for a part of the Stack Breakpoint spec[1]
and I put the "does this resource have a breakpoint" flag into the metadata
of the resource:

https://review.openstack.org/#/c/146123/

I'm not sure where this info really belongs, though. It does sound like
metadata to me (plus we don't have to change the database schema that way),
but can we use it for breakpoints etc., too? Or is metadata strictly for
Heat users and not for engine-specific stuff?


Metadata is supposed to be for template defined metadata (with the notable
exception of server resources where we merge SoftwareDeployment metadata in
to that defined in the template).

So if we're going to use the metadata template interface as a way to define
the breakpoint, this is OK, but do we want to mix the definition of the
stack with this flow control data? (I personally think probably not).

I can think of a couple of alternatives:

1. Use resource_data, which is intended for per-resource internal data, and
set it based on API data passed on create/update (see Resource.data_set)

2. Store the breakpoint metadata in the environment


Ooh, I forgot about resource_data! That sounds perfect, actually.



I think the environment may be the best option, but we'll have to work out
how to best represent a tree of nested stacks (something the spec interface
description doesn't consider AFAICS).


I think we have two orthogonal questions here:

1. How do end users set up and clear breakpoints
2. How does the engine store breakpoint-related data

As per the spec (and it makes perfect sense to me), users will declare 
breakpoints via the environment and through CLI (which as you say can be 
translated to the environment).


But we can then read that and just store has_breakpoint in each 
resource's data.


The spec does mention breakpoints on nested stacks briefly:

 For nested stack, the breakpoint would be prefixed with
 the name of the nested template.

I'm assuming we'll need some sort of separator, but the general idea 
sounds okay to me. Something like this, perhaps:


nested_stack/nested_template.yaml/SomeResource




If we use the environment, then no additional API interfaces are needed,
just supporting a new key in the existing data, and python-heatclient can
take care of translating any CLI --breakpoint argument into environment
data.


I also had a chat with Steve Hardy and he suggested adding a STOPPED state
to the stack (this isn't in the spec). While not strictly necessary to
implement the spec, this would help people figure out that the stack has
reached a breakpoint instead of just waiting on a resource that takes a long
time to finish (the heat-engine log and event-list still show that a
breakpoint was reached but I'd like to have it in stack-list and
resource-list, too).

It makes more sense to me to call it PAUSED (we're not completely stopping
the stack creation after all, just pausing it for a bit), I'll let Steve
explain why that's not the right choice :-).


So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update

My argument is that (3) is always a stack update, either a PUT or PATCH
update, e.g we _are_ completely stopping stack creation, then a user can
choose to re-start it (either with the same or a different definition).

So, it _is_ really an end state, as a user might never choose to update
from the stopped state, in which case *_STOPPED makes more sense.

Paused implies the same action as the PATCH update, only we trigger
continuation of the operation from the point we reached via some sort of
user signal.

If we actually pause an in-progress action via the scheduler, we'd have to
start worrying about stuff like token expiry, hitting timeouts, resilience
to engine restarts, etc, etc.  So forcing an explicit update seems simpler
to me.

Steve



Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-13 Thread Tomas Sedovic

On 01/13/2015 01:15 AM, Ton Ngo wrote:

 I was also thinking of using the environment to hold the breakpoint,
similarly to parameters.  The CLI and API would process it just like
parameters.

As for the state of a stack hitting the breakpoint, leveraging the
FAILED state seems to be sufficient, we just need to add enough information
to differentiate between a failed resource and a resource at a breakpoint.
Something like emitting an event or a message should be enough to make that
distinction.   Debuggers for native programs typically do the same thing,
leveraging the exception handling in the OS by inserting an artificial
error at the breakpoint to force a program to stop.  Then the debugger
would just remember the address of these artificial errors to decode the
state of the stopped program.

As for the workflow, instead of spinning in the scheduler waiting for a
signal, I was thinking of moving the stack off the engine as a failed
stack. So this would be an end-state for the stack as Steve suggested, but
without adding a new stack state.   Again, this is similar to how a program
being debugged is handled:  they are moved off the ready queue and their
context is preserved for examination.  This seems to keep the
implementation simple and we don't have to worry about timeout,
performance, etc.  Continuing from the breakpoint then should be similar to
stack-update on a failed stack.  We do need some additional handling, such
as allowing in-progress resources to run to completion instead of aborting.

 For the parallel paths in a template, I am thinking about these
alternatives:
1. Stop after all the current in-progress resources complete, but do not
start any new resources even if there is no dependency.  This should be
easier to implement, but the state of the stack would be non-deterministic.
2. Stop only the paths with the breakpoint, continue all other parallel
paths to completion.  This seems harder to implement, but the stack would
be in a deterministic state and easier for the user to reason with.

To be compatible with convergence, I had suggested to Clint earlier to
add a mode where the convergence engine does not attempt to retry so the
user can debug, and I believe this was added to the blueprint.

Ton,



Regarding the spinning scheduler, I get the token expiry and stuff, but 
it is *super simple* to implement.


Literally a while loop that yields. Two lines of code.

And we don't have to change anything in the scheduler or the way we 
handle stacks or whatever. Heat already knows how to handle this situation.


Can we start with that implementation (because it's simple and correct) 
and then take it from there? Assuming we can stick to the same API/UI, 
we should be able to change it later when we've documented issues with 
the current approach.



As for parallel execution, I definitely prefer the deterministic 
approach: stop on the breakpoint and everything that depends on it, but 
resolve everything else that you can.


Again, this is trivially handled by Heat already (my patch has no 
special handling for this case). If you want to pause everything, you 
can always set up more breakpoints and advance them either manually or 
all at once with the (to be implemented) stepping functionality.







From:   Steven Hardy sha...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
Date:   01/12/2015 02:40 PM
Subject:Re: [openstack-dev] [Heat] Where to keep data about stack
 breakpoints?



On Mon, Jan 12, 2015 at 05:10:47PM -0500, Zane Bitter wrote:

On 12/01/15 13:05, Steven Hardy wrote:

I also had a chat with Steve Hardy and he suggested adding a STOPPED state
to the stack (this isn't in the spec). While not strictly necessary to
implement the spec, this would help people figure out that the stack has
reached a breakpoint instead of just waiting on a resource that takes a long
time to finish (the heat-engine log and event-list still show that a
breakpoint was reached but I'd like to have it in stack-list and
resource-list, too).

It makes more sense to me to call it PAUSED (we're not completely stopping
the stack creation after all, just pausing it for a bit), I'll let Steve
explain why that's not the right choice :-).

So, I've not got strong opinions on the name, it's more the workflow:

1. User triggers a stack create/update
2. Heat walks the graph, hits a breakpoint and stops.
3. Heat explicitly triggers continuation of the create/update


Did you mean the user rather than Heat for (3)?


Oops, yes I did.


My argument is that (3) is always a stack update, either a PUT or PATCH
update, e.g. we _are_ completely stopping stack creation, then a user can
choose to re-start it (either with the same or a different definition).


Hmmm, ok that's interesting. I have not been thinking of it that way. I've
always thought of it like this:




Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-13 Thread Tomas Sedovic

On 01/12/2015 10:36 PM, Zane Bitter wrote:

On 12/01/15 10:49, Ryan Brown wrote:

On 01/12/2015 10:29 AM, Tomas Sedovic wrote:

Hey folks,

I did a quick proof of concept for a part of the Stack Breakpoint
spec[1] and I put the "does this resource have a breakpoint" flag into
the metadata of the resource:

https://review.openstack.org/#/c/146123/

I'm not sure where this info really belongs, though. It does sound like
metadata to me (plus we don't have to change the database schema that
way), but can we use it for breakpoints etc., too? Or is metadata
strictly for Heat users and not for engine-specific stuff?


I'd rather not store it in metadata so we don't mix user metadata with
implementation-specific-and-also-subject-to-change runtime metadata. I
think this is a big enough feature to warrant a schema update (and I
can't think of another place I'd want to put the breakpoint info).


+1

I'm actually not convinced it should be in the template at all. Steve's
suggestion of putting it in the environment might be a good one, or maybe
it should even just be an extra parameter to the stack create/update
APIs (like e.g. the timeout is)?


Absolutely. I've used metadata as the fastest way to play with breakpoints.

The spec talks about setting breakpoints via environment and via `heat 
stack-create --breakpoint MyResource`. And that absolutely makes sense 
to me.





I also had a chat with Steve Hardy and he suggested adding a STOPPED
state to the stack (this isn't in the spec). While not strictly
necessary to implement the spec, this would help people figure out that
the stack has reached a breakpoint instead of just waiting on a resource
that takes a long time to finish (the heat-engine log and event-list
still show that a breakpoint was reached but I'd like to have it in
stack-list and resource-list, too).

It makes more sense to me to call it PAUSED (we're not completely
stopping the stack creation after all, just pausing it for a bit), I'll
let Steve explain why that's not the right choice :-).


+1 to PAUSED. To me, STOPPED implies an end state (which a breakpoint is
not).


I agree we need an easy way for the user to see why nothing is
happening, but adding additional states to the stack is a pretty
dangerous change that risks creating regressions all over the place. If
we can find _any_ other way to surface the information, it would be
preferable IMHO.


Would adding a new state to resources be similarly tricky, or could we 
do that instead? That way you'd see what's going on in `resource-list`, 
which should be good enough.


The patch is already emitting an event saying that a breakpoint has been 
reached so we're not completely silent on this. But when debugging a 
stack, I always look at resource-list first since it's easier to read 
and only if I need the timing info do I reach for event-list.


Dunno how representative that is.



cheers,
Zane.


For sublime end user confusion, we could use BROKEN. ;)


Haha, that's brilliant!




Tomas

[1]:
http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html





[openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Tomas Sedovic

Hey folks,

I did a quick proof of concept for a part of the Stack Breakpoint 
spec[1] and I put the "does this resource have a breakpoint" flag into 
the metadata of the resource:


https://review.openstack.org/#/c/146123/

I'm not sure where this info really belongs, though. It does sound like 
metadata to me (plus we don't have to change the database schema that 
way), but can we use it for breakpoints etc., too? Or is metadata 
strictly for Heat users and not for engine-specific stuff?


I also had a chat with Steve Hardy and he suggested adding a STOPPED 
state to the stack (this isn't in the spec). While not strictly 
necessary to implement the spec, this would help people figure out that 
the stack has reached a breakpoint instead of just waiting on a resource 
that takes a long time to finish (the heat-engine log and event-list 
still show that a breakpoint was reached but I'd like to have it in 
stack-list and resource-list, too).


It makes more sense to me to call it PAUSED (we're not completely 
stopping the stack creation after all, just pausing it for a bit), I'll 
let Steve explain why that's not the right choice :-).


Tomas

[1]: 
http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html




Re: [openstack-dev] [tripleo] Managing no-mergepy template duplication

2014-12-03 Thread Tomas Sedovic

On 12/03/2014 11:11 AM, Steven Hardy wrote:

Hi all,

Lately I've been spending more time looking at tripleo and doing some
reviews. I'm particularly interested in helping the no-mergepy and
subsequent puppet-software-config implementations mature (as well as
improving overcloud updates via heat).

Since Tomas's patch landed[1] to enable --no-mergepy in
tripleo-heat-templates, it's become apparent that frequently patches are
submitted which only update overcloud-source.yaml, so I've been trying to
catch these and ask for a corresponding change to e.g controller.yaml.



You beat me to this. Thanks for writing it up!


This raises the following questions:

1. Is it reasonable to -1 a patch and ask folks to update in both places?


I'm in favour.


2. How are we going to handle this duplication and divergence?


I'm not sure we can. get_file doesn't handle structured data and I don't 
know what else we can do. Maybe we could split out all SoftwareConfig 
resources to separate files (like Dan did in [nova config])? But the 
SoftwareDeployments, nova servers, etc. have a different structure.


[nova config] https://review.openstack.org/#/c/130303/


3. What's the status of getting gating CI on the --no-mergepy templates?


Derek, can we add a job that's identical to 
check-tripleo-ironic-overcloud-{f20,precise}-nonha except it passes 
--no-mergepy to devtest.sh?



4. What barriers exist (now that I've implemented[2] the eliding functionality
requested[3] for ResourceGroup) to moving to the --no-mergepy
implementation by default?


I'm about to post a patch that moves us from ResourceGroup to 
AutoScalingGroup (for rolling updates), which is going to complicate 
this a bit.


Barring that, I think you've identified all the requirements: CI job, 
parity between the merge/non-merge templates and a process that 
maintains it going forward (or puts the old ones in a maintenance-only 
mode).


Anyone have anything else that's missing?



Thanks for any clarification you can provide! :)

Steve

[1] https://review.openstack.org/#/c/123100/
[2] https://review.openstack.org/#/c/128365/
[3] https://review.openstack.org/#/c/123713/



[openstack-dev] [TripleO] Strategy for testing and merging the merge.py-free templates

2014-10-14 Thread Tomas Sedovic

Hi everyone,

As outlined in the Remove merge.py[1] spec, Peter Belanyi and I have 
built the templates for controller, nova compute, swift and cinder nodes 
that can be deployed directly to Heat (i.e. no merge.py pass is necessary).


The patches:

https://review.openstack.org/#/c/123100/
https://review.openstack.org/#/c/123713/

I'd like to talk about testing and merging them.

Both Peter and myself have successfully run them through devtest 
multiple times. The Tuskar and TripleO UI folks have managed to hook 
them up to the UI and make things work, too.


That said, there are a number of limitations which don't warrant making 
them the new default just yet:


* Juno Heat is required
* python-heatclient version 0.2.11 is required to talk to Heat
* There is currently no way in Heat to drop specific nodes from a 
ResourceGroup (say because of a hardware failure) so the elision 
feature from merge.py is not supported yet (see the sketch after this list)
* I haven't looked into this yet, but I'm not very optimistic about an 
upgrade path from the merge.py-based templates to the heat-native ones


On the other hand, it would be great if we could add them to devtest as 
an alternative and to have them exercised by the CI. It would make it 
easier to keep them in sync and to iron out any inconsistencies.



James Slagle proposed something like this when I talked to him on IRC:

1. teach devtest about the new templates, driven by an 
OVERCLOUD_USE_MERGE_PY switch (defaulting to the merge.py-based templates; 
see the sketch after this list)

2. Do a CI run of the new template patches, merge them
3. Add an (initially non-voting?) job to test the heat-only templates
4. When we've resolved all the issues stopping us from making the switch, make 
the native templates the default and deprecate the merge.py ones
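
Purely as an illustration of item 1 (the switch name is the one proposed 
above; the devtest invocation is from memory and may differ):

  export OVERCLOUD_USE_MERGE_PY=0    # opt in to the Heat-native templates
  ./scripts/devtest.sh --trash-my-machine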



This makes sense to me. Any objections/ideas?

Thanks,
Tomas



Re: [openstack-dev] [TripleO] Strategy for testing and merging the merge.py-free templates

2014-10-14 Thread Tomas Sedovic

On 10/14/2014 12:43 PM, Steven Hardy wrote:

On Tue, Oct 14, 2014 at 10:55:30AM +0200, Tomas Sedovic wrote:

Hi everyone,

As outlined in the Remove merge.py[1] spec, Peter Belanyi and I have built
the templates for controller, nova compute, swift and cinder nodes that can
be deploying directly to Heat (i.e. no merge.py pass is necessary).

The patches:

https://review.openstack.org/#/c/123100/
https://review.openstack.org/#/c/123713/

I'd like to talk about testing and merging them.

Both Peter and myself have successfully run them through devtest multiple
times. The Tuskar and TripleO UI folks have managed to hook them up to the
UI and make things work, too.

That said, there is a number of limitations which don't warrant making them
the new default just yet:

* Juno Heat is required
* python-heatclient version 0.2.11 is required to talk to Heat
* There is currently no way in Heat to drop specific nodes from a
ResourceGroup (say because of a hardware failure) so the ellision feature
from merge.py is not supported yet


FYI, I saw that comment from Clint in 123713, and have been looking
into ways to add this feature to Heat - hopefully will have some code to
post soon.


Oh, cool! It was on my todo list of things to tackle next. If I can help 
in any way (e.g. testing), let me know.




Steve



Re: [openstack-dev] [Heat] stack-update with existing parameters

2014-09-24 Thread Tomas Sedovic
On 24/09/14 13:50, Dimitri Mazmanov wrote:
 TL;DR Is there any reason why stack-update doesn't reuse the existing
 parameters when I extend my stack definition with a resource that uses
 them?

Hey Dimitri,

There is an open bug for this feature:

https://bugs.launchpad.net/heat/+bug/1224828

and it seems to be being worked on.

 
 I have created a stack from the hello_world.yaml template
 (https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml)
 It has the following parameters: key_name, image, flavor, admin_pass,
 db_port.
 
 heat stack-create hello_world -P
 key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1
 -f hello_world.yaml
 
 Then I have added one more nova server resource with new name(server1),
 rest all the details are untouched.
 
 I get the following when I use this new template without mentioning any of
 the parameter values.
 
 heat --debug stack-update hello_world -f hello_world_modified.yaml
 
 On debugging it throws the below exception.
 The resource was found at
 http://localhost:8004/v1/7faee9dd37074d3e8896957dc4a52e22/stacks/hello_world/85a0bc2c-1a20-45c4-a8a9-7be727db6a6d;
 you should be redirected automatically.
 DEBUG (session) RESP: [400] CaseInsensitiveDict({'date': 'Wed, 24 Sep 2014
 10:08:08 GMT', 'content-length': '961', 'content-type': 'application/json;
 charset=UTF-8'})
 RESP BODY: {explanation: The server could not comply with the request
 since it is either malformed or otherwise incorrect., code: 400,
 error: {message: The Parameter (admin_pass) was not provided.,
 traceback: Traceback (most recent call last):\n\n  File
 \/opt/stack/heat/heat/engine/service.py\, line 63, in wrapped\n
 return func(self, ctx, *args, **kwargs)\n\n  File
 \/opt/stack/heat/heat/engine/service.py\, line 576, in update_stack\n
 env, **common_params)\n\n  File \/opt/stack/heat/heat/engine/parser.py\,
 line 109, in __init__\ncontext=context)\n\n  File
 \/opt/stack/heat/heat/engine/parameters.py\, line 403, in validate\n
 param.validate(validate_value, context)\n\n  File
 \/opt/stack/heat/heat/engine/parameters.py\, line 215, in validate\n
 raise 
 exception.UserParameterMissing(key=self.name)\n\nUserParameterMissing: The
 Parameter (admin_pass) was not provided.\n, type:
 UserParameterMissing}, title: Bad Request}
 
 When I mention all the parameters then it updates the stack properly
 
 heat --debug stack-update hello_world -P
 key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1
 -f hello_world_modified.yaml
 
 Any reason why I can't reuse the existing parameters during the
 stack-update if I don't want to specify them again?
 
 -
 Dimitri
 
 
 


Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-10 Thread Tomas Sedovic
On 09/09/14 20:32, Gregory Haynes wrote:
 Hello everyone!
 
 I have been working on a meta-review of StevenK's reviews and I would
 like to propose him as a new member of our core team.
 
 As I'm sure many have noticed, he has been above our stats requirements
 for several months now. More importantly, he has been reviewing a wide
 breadth of topics and seems to have a strong understanding of our code
 base. He also seems to be doing a great job at providing valuable
 feedback and being attentive to responses on his reviews.
 
 As such, I think he would make a great addition to our core team. Can
 the other core team members please reply with your votes if you agree or
 disagree.

+1

 
 Thanks!
 Greg
 


[openstack-dev] [tripleo] Meeting time update

2014-09-03 Thread Tomas Sedovic
As you all (hopefully) know, our meetings alternate between Tuesdays
19:00 UTC and Wednesdays 7:00 UTC.

Because of the whining^W weekly-expressed preferences[1] of the
Europe-based folks, the latter meetings are going to be moved by +1 hour.

So the new meeting times are:

* Tuesdays at 19:00 UTC (unchanged)
* Wednesdays at 8:00 UTC (1 hour later)

The first new EU-friendly meeting will take place on Wednesday 17th
September.

The wiki page has been updated accordingly:

https://wiki.openstack.org/wiki/Meetings/TripleO

but I don't know how to reflect the change in the iCal feed. Anyone
willing to do that, please?

[1]:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/043544.html



Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-20 Thread Tomas Sedovic
On 20/08/14 05:15, Derek Higgins wrote:
 On 24/05/14 01:21, James Polley wrote:
 Following a lengthy discussion under the subject Alternating meeting
 tmie for more TZ friendliness, the TripleO meeting now alternates
 between Tuesday 1900UTC (the former time) and Wednesday 0700UTC, for
 better coverage across Australia, India, China, Japan, and the other
 parts of the world that found it impossible to get to our previous
 meeting time.
 
 Raising a point that came up on this morning's irc meeting
 
 A lot (most?) of the people at this morning's meeting were based in
 western Europe, getting up earlier than usual for the meeting (me
 included). When daylight saving kicks in it might push them past the
 threshold; would an hour later (0800 UTC) work better for people or is
 the current time what fits best?
 
 I'll try to make the meeting regardless of whether it's moved or not, but an
 hour later would certainly make it a little more palatable.

Same here

 

 https://wiki.openstack.org/wiki/Meetings/TripleO#Weekly_TripleO_team_meeting
 has been updated with a link to the iCal feed so you can figure out
 which time we're using each week.

 The coming meeting will be our first Wednesday 0700UTC meeting. We look
 forward to seeing some fresh faces (well, fresh nicks at least)!




Re: [openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-13 Thread Tomas Sedovic
On 12/08/14 01:06, Steve Baker wrote:
 On 09/08/14 11:15, Zane Bitter wrote:
 On 08/08/14 11:07, Tomas Sedovic wrote:
 On 08/08/14 00:53, Zane Bitter wrote:
 On 07/08/14 13:22, Tomas Sedovic wrote:
 Hi all,

 I have a ResourceGroup which wraps a custom resource defined in
 another
 template:

   servers:
 type: OS::Heat::ResourceGroup
 properties:
   count: 10
   resource_def:
 type: my_custom_server
 properties:
   prop_1: ...
   prop_2: ...
   ...

 And a corresponding provider template and environment file.

 Now I can get say the list of IP addresses or any custom value of each
 server from the ResourceGroup by using `{get_attr: [servers,
 ip_address]}` and outputs defined in the provider template.

 But I can't figure out how to pass that list back to each server in
 the
 group.

 This is something we use in TripleO for things like building a MySQL
 cluster, where each node in the cluster (the ResourceGroup) needs the
 addresses of all the other nodes.

 Yeah, this is kind of the perpetual problem with clusters. I've been
 hoping that DNSaaS will show up in OpenStack soon and that that will be
 a way to fix this issue.

 The other option is to have the cluster members discover each other
 somehow (mDNS?), but people seem loath to do that.

 Right now, we have the servers ungrouped in the top-level template
 so we
 can build this list manually. But if we move to ResourceGroups (or any
 other scaling mechanism, I think), this is no longer possible.

 So I believe the current solution is to abuse a Launch Config resource
 as a store for the data, and then later retrieve it somehow? Possibly
 you could do something along similar lines, but it's unclear how the
 'later retrieval' part would work... presumably it would have to
 involve
 something outside of Heat closing the loop :(

 Do you mean AWS::AutoScaling::LaunchConfiguration? I'm having trouble
figuring out how that would work. LaunchConfig represents an instance,
 right?


 We can't pass the list to ResourceGroup's `resource_def` section
 because
 that causes a circular dependency.

 And I'm not aware of a way to attach a SoftwareConfig to a
 ResourceGroup. SoftwareDeployment only allows attaching a config to a
 single server.

 Yeah, and that would be a tricky thing to implement well, because a
 resource group may not be a group of servers (but in many cases it may
 be a group of nested stacks that each contain one or more servers, and
 you'd want to be able to handle that too).

 Yeah, I worried about that, too :-(.

 Here's a proposal that might actually work, though:

 The provider resource exposes the reference to its inner instance by
 declaring it as one of its outputs. A SoftwareDeployment would learn to
 accept a list of Nova servers, too.

 Provider template:

  resources:
my_server:
  type: OS::Nova::Server
  properties:
...

... (some other resource hidden in the provider template)

  outputs:
inner_server:
  value: {get_resource: my_server}
ip_address:
      value: {get_attr: [my_server, networks, private, 0]}

 Based on my limited testing, this already makes it possible to use the
 inner server with a SoftwareDeployment from another template that uses
 my_server as a provider resource.

 E.g.:

  a_cluster_of_my_servers:
type: OS::Heat::ResourceGroup
properties:
  count: 10
  resource_def:
type: custom::my_server
...

  some_deploy:
type: OS::Heat::StructuredDeployment
properties:
  server: {get_attr: [a_cluster_of_my_servers,
 resource.0.inner_server]}
  config: {get_resource: some_config}


 So what if we allowed SoftwareDeployment to accept a list of servers in
 addition to accepting just one server? Or add another resource that does
 that.

 I approve of that in principle. Only Steve Baker can tell us for sure
 if there are any technical roadblocks in the way of that, but I don't
 see any.

 Maybe if we had a new resource type that was internally implemented as
 a nested stack... that might give us a way of tracking the individual
 deployment statuses for free.

 cheers,
 Zane.

 Then we could do:

  mysql_cluster_deployment:
type: OS::Heat::StructuredDeployment
properties:
  server_list: {get_attr: [a_cluster_of_my_servers,
 inner_server]}
  config: {get_resource: mysql_cluster_config}
  input_values:
cluster_ip_addresses: {get_attr: [a_cluster_of_my_servers,
 ip_address]}

 This isn't that different from having a SoftwareDeployment accepting a
 single server and doesn't have any of the problems of allowing a
 ResourceGroup as a SoftwareDeployment target.

 What do you think?
 All the other solutions I can think of will result in circular issues.
 
 I'll start looking at a spec to create a resource

[openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-07 Thread Tomas Sedovic
Hi all,

I have a ResourceGroup which wraps a custom resource defined in another
template:

servers:
  type: OS::Heat::ResourceGroup
  properties:
count: 10
resource_def:
  type: my_custom_server
  properties:
prop_1: ...
prop_2: ...
...

And a corresponding provider template and environment file.

Now I can get say the list of IP addresses or any custom value of each
server from the ResourceGroup by using `{get_attr: [servers,
ip_address]}` and outputs defined in the provider template.

But I can't figure out how to pass that list back to each server in the
group.

This is something we use in TripleO for things like building a MySQL
cluster, where each node in the cluster (the ResourceGroup) needs the
addresses of all the other nodes.

Right now, we have the servers ungrouped in the top-level template so we
can build this list manually. But if we move to ResourceGroups (or any
other scaling mechanism, I think), this is no longer possible.

We can't pass the list to ResourceGroup's `resource_def` section because
that causes a circular dependency.

And I'm not aware of a way to attach a SoftwareConfig to a
ResourceGroup. SoftwareDeployment only allows attaching a config to a
single server.
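
For illustration, the single-server pattern looks roughly like this (the
resource names here are made up, it's just a sketch):

  some_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/sh
        echo "configure one node here"

  some_deploy:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: some_config}
      # a single OS::Nova::Server reference -- there's no way to point
      # this at the whole ResourceGroup and get one deployment per member
      server: {get_resource: my_server}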


Is there a way to do this that I'm missing? And if there isn't, is this
something we could add to Heat? E.g. extending a SoftwareDeployment to
accept ResourceGroups or adding another resource for that purpose.

Thanks,
Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for merging Heat HOT port

2014-08-05 Thread Tomas Sedovic
On 04/08/14 00:50, Steve Baker wrote:
 On 01/08/14 12:19, Steve Baker wrote:
 The changes to port tripleo-heat-templates to HOT have been rebased to
 the current state and are ready to review. They are the next steps in
 blueprint tripleo-juno-remove-mergepy.

 However there is coordination needed to merge since every existing
 tripleo-heat-templates change will need to be rebased and changed
 after the port lands (lucky you!).

 Here is a summary of the important changes in the series:

 https://review.openstack.org/#/c/105327/
 Low risk and plenty of +2s, just needs enough validation from CI for
 an approve

 Merged
 https://review.openstack.org/#/c/105328/
 Scripted conversion to HOT. Converts everything except Fn::Select

 This is now:
 - rebased against 82c50c1 Fix swift memcache and device properties
 - switched to heat_template_version: 2014-10-16 to get list_join
 - is now passing CI
 
 https://review.openstack.org/#/c/105347/
 Manual conversion of Fn::Select to extended get_attr

All three patches are merged now and I've removed the t-h-t -2s.

Sorry for the inconvenience everybody.


 I'd like to suggest the following approach for getting these to land:
 * Any changes which really should land before the above 3 get listed
 in this mail thread (vlan?)
 * Reviews of the above 3 changes, and local testing of change 105347
 * All other tripleo-heat-templates need to be rebased/reworked to be
 after 105347 (and maybe -2 until they are?)

 I'm available for any questions on porting your changes to HOT.
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for merging Heat HOT port

2014-08-01 Thread Tomas Sedovic
On 01/08/14 02:19, Steve Baker wrote:
 The changes to port tripleo-heat-templates to HOT have been rebased to
 the current state and are ready to review. They are the next steps in
 blueprint tripleo-juno-remove-mergepy.
 
 However there is coordination needed to merge since every existing
 tripleo-heat-templates change will need to be rebased and changed after
 the port lands (lucky you!).
 
 Here is a summary of the important changes in the series:
 
 https://review.openstack.org/#/c/105327/
 Low risk and plenty of +2s, just needs enough validation from CI for an
 approve
 
 https://review.openstack.org/#/c/105328/
 Scripted conversion to HOT. Converts everything except Fn::Select
 
 https://review.openstack.org/#/c/105347/
 Manual conversion of Fn::Select to extended get_attr
 
 I'd like to suggest the following approach for getting these to land:
 * Any changes which really should land before the above 3 get listed in
 this mail thread (vlan?)
 * Reviews of the above 3 changes, and local testing of change 105347
 * All other tripleo-heat-templates need to be rebased/reworked to be
 after 105347 (and maybe -2 until they are?)

Agreed to all this. We've done a similar thing for the software config work.

I'll try to do a local run of 105347 today.

 
 I'm available for any questions on porting your changes to HOT.
 
 cheers
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Extended get_attr support for ResourceGroup

2014-07-15 Thread Tomas Sedovic
On 15/07/14 20:01, Zane Bitter wrote:
 On 14/07/14 12:21, Tomas Sedovic wrote:
 On 12/07/14 06:41, Zane Bitter wrote:
 On 11/07/14 09:37, Tomas Sedovic wrote:
 
[snip]
 

 Alternatively, we could extend the ResourceGroup's get_attr behaviour:

   {get_attr: [controller_group, resource.0.networks.ctlplane.0]}

 but the former is a bit cleaner and more generic.

 I wrote a patch that implements this (and also handles (3) above in a
 similar manner), but in the end I decided that this:

{get_attr: [controller_group, resource.0, networks, ctlplane, 0]}

 would be better than either that or the current syntax (which was
 obviously obscure enough that you didn't discover it). My only
 reservation was that it might make things a little weird when we have an
 autoscaling API to get attributes from compared with the dotted syntax
 that you suggest, but I soon got over it ;)

 So now that I understand how this works, I'm not against keeping things
 the way we are. There is a consistency there, we just need to document
 it and perhaps show some examples.
 
 It kind of fell out of the work I was doing on the patches above anyway.
 It would be harder to _not_ implement this (and the existing way still
 works too).

Okay, fine by me :-)

 

 ---
[snip]


 There is one aspect of this that probably doesn't work yet: originally
 outputs and attributes were only allowed to be strings. We changed that
 for attributes, but probably not yet for outputs (even though outputs of
 provider templates become attributes of the facade resource). But that
 should be easy to fix. (And if your data can be returned as a string, it
 should already work.)

 Unless I misunderstood what you're saying, it seems to be working now:

 controller.yaml:

  outputs:
hosts_entry:
  description: An IP address and a hostname of the server
  value:
ip: {get_attr: [controller_server, networks, private, 0]}
name: {get_attr: [controller_server, name]}

 environment.yaml:

  resource_registry:
OS::TripleO::Controller: controller.yaml

 test-resource-group.yaml:

  resources:
servers:
  type: OS::Heat::ResourceGroup
  properties:
count: 3
resource_def:
  type: OS::TripleO::Controller
  properties:
key_name: {get_param: key_name}
image: {get_param: image_id}

  outputs:
hosts:
  description: /etc/hosts entries for each server
  value: {get_attr: [servers, hosts_entry]}

 Heat stack-show test-resource-group:

 {
   output_value: [
 {u'ip': u'10.0.0.4', u'name':
 u'rg-7heh-0-tweejsvubaht-controller_server-mscy33sbtirn'},
 {u'ip': u'10.0.0.3', u'name':
 u'rg-7heh-1-o4szl7lry27d-controller_server-sxpkalgi27ii'},
 {u'ip': u'10.0.0.2', u'name':
 u'rg-7heh-2-l2y6rqxml2fi-controller_server-u4jcjacjdrea'}
   ],
   description: /etc/hosts entries for each server,
   output_key: hosts
 },
 
 It looks like the dicts are being converted to strings by Python, so
 there probably is a small bug here to be fixed. (At the very least, if
 we're converting to strings we should do so using json.dumps(), not
 repr().)

Ooops, you're right. In my excitement, I completely missed that!

 
 [snip]
 

 So this boils down to 4 features proposals:

 1. Support extended attributes in ResourceGroup's members

 Sorted.

 Yep


 2. Allow a way to use a Resource ID (e.g. what you get by {get_attr:
 [ResourceGroup, refs]} or {get_attr: [ResourceGroup, resource.0]}) with
 existing intrinsic functions (get_resource, get_attr)

 No dice, but (1) solves the problem anyway.

 Agreed


 3. A `map` intrinsic function that turns a list of items to another
 list
 by doing operations on each item

 There may be a better solution available to us already, so IMO
 investigate that first. If that turns out not to be the case then we'll
 need to reach a consensus on whether map is something we want.

 You're right. I no longer think map (or anything like it) is necessary.
 
 That's the kind of thing I love to hear :D
 
 4. A `concat_list` intrinsic function that joins multiple lists into
 one.

 Low priority.

 Yeah.


 I think the first two are not controversial. What about the other two?
 I've shown you some examples where we would find a good use in the
 TripleO templates. The lack of `map` actually blocks us from going
 all-Heat.

 Hopefully that's not actually the case.

 The alternative would be to say that this sort of stuff to be done
 inside the instance by os-apply-config et al. It would complicate
 things
 for TripleO, but oh well.

 It seems to me that the alternative is not necessarily to modify
 os-apply-config, but rather to provide a software config with a script
 that converts the data from whatever format Heat can supply to whatever
 format is needed by the application. Although I don't think it's even
 required in the specific case you

Re: [openstack-dev] [Heat] [TripleO] Extended get_attr support for ResourceGroup

2014-07-14 Thread Tomas Sedovic
On 12/07/14 06:41, Zane Bitter wrote:
 On 11/07/14 09:37, Tomas Sedovic wrote:
 Hi all,

 This is a follow-up to Clint Byrum's suggestion to add the `Map`
 intrinsic function[0], Zane Bitter's response[1] and Randall Burt's
 addendum[2].

 Sorry for bringing it up again, but I'd love to reach consensus on this.
 The summary of the previous conversation:
 
 Please keep bringing it up until you get a straight answer ;)
 
 1. TripleO is using some functionality currently not supported by Heat
 around scaled-out resources
 2. Clint proposed a `map` intrinsic function that would solve it
 3. Zane said Heat have historically been against a for-loop functionality
 4. Randall suggested ResourceGroup's attribute passthrough may do what
 we need

 I've looked at the ResourceGroup code and experimented a bit. It does do
 some of what TripleO needs but not all.
 
 Many thanks for putting this together Tomas, this is exactly the kind of
 information that is _incredibly_ helpful in knowing what sort of
 features we need in HOT. Fantastic work :)
 
 Here's what we're doing with our scaled-out resources (what we'd like to
 wrap in a ResourceGroup or similar in the future):


 1. Building a comma-separated list of RabbitMQ nodes:

 https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L642


 This one is easy with ResourceGroup's inner attribute support:

  list_join:
  - ", "
  - {get_attr: [controller_group, name]}

 (controller_group is a ResourceGroup of Nova servers)


 2. Get the name of the first Controller node:

 https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L339


 Possible today:

  {get_attr: [controller_group, resource.0.name]}


 3. List of IP addresses of all controllers:

 https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L405


 We cannot do this, because resource group doesn't support extended
 attributes.

 Would need something like:

  {get_attr: [controller_group, networks, ctlplane, 0]}

 (ctlplane is the network controller_group servers are on)
 
 I was going to give an explanation of how we could implement this, but
 then I realised a patch was going to be easier:
 
 https://review.openstack.org/#/c/106541/
 https://review.openstack.org/#/c/106542/

Thanks, that looks great.

 
 4. IP address of the first node in the resource group:

 https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/swift-deploy.yaml#L29


 Can't do: extended attributes are not supported for the n-th node for
 the group either.
 
 I believe this is possible today using:
 
   {get_attr: [controller_group, resource.0.networks, ctlplane, 0]}

Yeah, I've missed this. I have actually checked the ResourceGroup's
GetAtt method but didn't realise the connection with the GetAtt function
so I hadn't tried it before.

 
 This can be solved by `get_resource` working with resource IDs:

 get_attr:
 - {get_attr: [controller_group, resource.0]}
 - [networks, ctlplane, 0]

 (i.e. we get the server's ID from the ResourceGroup and change
 `get_attr` to work with the ID's too. Would also work if `get_resource`
 understood IDs).
 
 This is never going to happen.
 
 Think of get_resource as returning an object whose string representation
 is the UUID of the named resource (get_attr is similar, but returning
 attributes instead). It doesn't mean that having the UUID of a resource
 is the same as having the resource itself; the UUID could have come from
 anywhere. What you're talking about is a radical departure from the
 existing, very simple but extremely effective, model toward something
 that's extremely difficult to analyse with lots of nasty edge cases.
 It's common for people to think they want this, but it always turns out
 there's a better way to achieve their goal within the existing data model.

Right, that makes sense. I don't think I fully grasped the existing
model so this felt like a nice quick fix.

 
 Alternatively, we could extend the ResourceGroup's get_attr behaviour:

  {get_attr: [controller_group, resource.0.networks.ctlplane.0]}

 but the former is a bit cleaner and more generic.
 
 I wrote a patch that implements this (and also handles (3) above in a
 similar manner), but in the end I decided that this:
 
   {get_attr: [controller_group, resource.0, networks, ctlplane, 0]}
 
 would be better than either that or the current syntax (which was
 obviously obscure enough that you didn't discover it). My only
 reservation was that it might make things a little weird when we have an
 autoscaling API to get attributes from compared with the dotted syntax
 that you suggest, but I soon got over it ;)

So now that I understand how this works, I'm not against keeping things
the way we are. There is a consistency there, we just need to document

[openstack-dev] [Heat] [TripleO] Extended get_attr support for ResourceGroup

2014-07-11 Thread Tomas Sedovic
Hi all,

This is a follow-up to Clint Byrum's suggestion to add the `Map`
intrinsic function[0], Zane Bitter's response[1] and Randall Burt's
addendum[2].

Sorry for bringing it up again, but I'd love to reach consensus on this.
The summary of the previous conversation:

1. TripleO is using some functionality currently not supported by Heat
around scaled-out resources
2. Clint proposed a `map` intrinsic function that would solve it
3. Zane said Heat have historically been against a for-loop functionality
4. Randall suggested ResourceGroup's attribute passthrough may do what
we need

I've looked at the ResourceGroup code and experimented a bit. It does do
some of what TripleO needs but not all.

Here's what we're doing with our scaled-out resources (what we'd like to
wrap in a ResourceGroup or similar in the future):


1. Building a comma-separated list of RabbitMQ nodes:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L642

This one is easy with ResourceGroup's inner attribute support:

list_join:
- ", "
- {get_attr: [controller_group, name]}

(controller_group is a ResourceGroup of Nova servers)


2. Get the name of the first Controller node:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L339

Possible today:

{get_attr: [controller_group, resource.0.name]}


3. List of IP addresses of all controllers:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L405

We cannot do this, because resource group doesn't support extended
attributes.

Would need something like:

{get_attr: [controller_group, networks, ctlplane, 0]}

(ctlplane is the network controller_group servers are on)


4. IP address of the first node in the resource group:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/swift-deploy.yaml#L29

Can't do: extended attributes are not supported for the n-th node for
the group either.

This can be solved by `get_resource` working with resource IDs:

   get_attr:
   - {get_attr: [controller_group, resource.0]}
   - [networks, ctlplane, 0]

(i.e. we get the server's ID from the ResourceGroup and change
`get_attr` to work with the ID's too. Would also work if `get_resource`
understood IDs).


Alternatively, we could extend the ResourceGroup's get_attr behaviour:

{get_attr: [controller_group, resource.0.networks.ctlplane.0]}

but the former is a bit cleaner and more generic.


---


That was the easy stuff, where we can get by with the current
functionality (plus a few fixes).

What follows are examples that really need new intrinsic functions (or
seriously complicating the ResourceGroup attribute code and syntax).


5. Building a list of {ip: ..., name: ...} dictionaries to configure
haproxy:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L478

This really calls for a mapping/for-each kind of functionality. Trying
to invent a ResourceGroup syntax for this would be perverse.

Here's what it could look like under Clint's `map` proposal:

map:
- ip: {get_attr: [{get_resource: $1}, networks, ctlplane, 0]}
  name: {get_attr: [{get_resource: $1}, name]}
- {get_attr: [compute_group, refs]}

(this relies on `get_resource` working with resource IDs. Alternatively,
we could have a `resources` attribute for ResourceGroup that returns
objects that can be used with get_attr.)


6. Building the /etc/hosts file

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/overcloud-source.yaml#L585

Same as above, but also joining two lists together.

We can use nested {list_join: ["\n", [...]]} just as we're doing now, but
having a `concat_list` function would make this and some other cases
shorter and clearer.
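
For illustration, the difference would be roughly this (simplified to just
the server names; `concat_list` doesn't exist today, so the second form is
purely a sketch):

  # today: join each group's list into a string, then join the strings
  list_join:
  - "\n"
  - - {list_join: ["\n", {get_attr: [controller_group, name]}]}
    - {list_join: ["\n", {get_attr: [compute_group, name]}]}

  # with a hypothetical concat_list: concatenate the lists, join once
  list_join:
  - "\n"
  - concat_list:
    - {get_attr: [controller_group, name]}
    - {get_attr: [compute_group, name]}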


7. Building the list of Swift devices:

https://github.com/openstack/tripleo-heat-templates/blob/a7f2a2c928e9c78a18defb68feb40da8c7eb95d6/swift-deploy.yaml#L23

In addition to the above, we're adding a single element at the beginning
of a list.

Asking for `cons` support is pushing it, right? ;-)

We could just wrap that in a list and use `concat_list` or keep using
nested `list_join`s as in the /etc/hosts case.





So this boils down to 4 features proposals:

1. Support extended attributes in ResourceGroup's members
2. Allow a way to use a Resource ID (e.g. what you get by {get_attr:
[ResourceGroup, refs]} or {get_attr: [ResourceGroup, resource.0]}) with
existing intrinsic functions (get_resource, get_attr)
3. A `map` intrinsic function that turns a list of items to another list
by doing operations on each item
4. A `concat_list` intrinsic function that joins multiple lists into one.

I think the first two are not controversial. What about the other two?
I've shown you some examples where we would 

Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-10 Thread Tomas Sedovic
On 09/07/14 17:52, Clint Byrum wrote:
 Hello!
 
 I've been looking at the statistics, and doing a bit of review of the
 reviewers, and I think we have an opportunity to expand the core reviewer
 team in TripleO. We absolutely need the help, and I think these two
 individuals are well positioned to do that.
 
 I would like to draw your attention to this page:
 
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
 
 Specifically these two lines:
 
 +-------------------+------------------------------------------+----------------+
 |  Reviewer         | Reviews  -2   -1   +1   +2  +A    +/- %  | Disagreements* |
 +-------------------+------------------------------------------+----------------+
 |  jonpaul-sullivan |   188     0   43  145    0   0    77.1%  |   28 ( 14.9%)  |
 |   lxsli           |   186     0   23  163    0   0    87.6%  |   27 ( 14.5%)  |
 
 Note that they are right at the level we expect, 3 per work day. And
 I've looked through their reviews and code contributions: it is clear
 that they understand what we're trying to do in TripleO, and how it all
 works. I am a little dismayed at the slightly high disagreement rate,
 but looking through the disagreements, most of them were jp and lxsli
 being more demanding of submitters, so I am less dismayed.
 
 So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
 reviewer team.

+1 for both. However, some of the reviews show what I think is a
worrying trend in TripleO core. Specifically, nitpicking and tendency to
bikeshed.

I am absolutely in favour of keeping a clear and unified coding style --
which may require seemingly pointless comments around whitespace, using
more widespread coding idioms or requiring an explanation to a
non-obvious bit of code. Nothing against people pointing that out.

On the other hand, there is a fine line between being demanding of
submitters and slowing people down. I think asking to change the tone of
a sentence (will vs. should), requiring to replace semver (an
abbreviation used in the specification itself) with the full wording, or
to splitting a sentence into two, add little overall benefit and are not
something we ought to bother with.

I've seen this in the code too, but it seems much more prevalent in the
specs and docs. Every comment like this puts unnecessary burden on the
submitter, the reviewers and the CI and can delay good changes from
being merged by days or even weeks.

I know I've been guilty of this too. Can we all agree (the new as well
as the old cores) to read the bikeshedding essay again and keep it in
mind when doing reviews?

http://bikeshed.com/



 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-17 Thread Tomas Sedovic
On 16/06/14 18:51, Clint Byrum wrote:
 Excerpts from Tomas Sedovic's message of 2014-06-16 09:19:40 -0700:
 All,

 After having proposed some changes[1][2] to tripleo-heat-templates[3],
 reviewers suggested adding a deprecation period for the merge.py script.

 While TripleO is an official OpenStack program, none of the projects
 under its umbrella (including tripleo-heat-templates) have gone through
 incubation and integration nor have they been shipped with Icehouse.

 So there is no implicit compatibility guarantee and I have not found
 anything about maintaining backwards compatibility on the
 TripleO wiki page[4], in tripleo-heat-templates' readme[5], or in
 tripleo-incubator's readme[6].

 The Release Management wiki page[7] suggests that we follow Semantic
 Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
 According to that wiki, we are using a stronger guarantee where we do
 promise to bump the minor version on incompatible changes -- but this
 again suggests that we do not promise to maintain backwards
 compatibility -- just that we document whenever we break it.

 
 I think there are no guarantees, and no promises. I also think that we've
 kept tripleo_heat_merge pretty narrow in surface area since making it
 into a module, so I'm not concerned that it will be incredibly difficult
 to keep those features alive for a while.
 
 According to Robert, there are now downstreams that have shipped things
 (with the implication that they don't expect things to change without a
 deprecation period) so there's clearly a disconnect here.

 
 I think it is more of a we will cause them extra work thing. If we
 can make a best effort and deprecate for a few releases (as in, a few
 releases of t-h-t, not OpenStack), they'll likely appreciate that. If
 we can't do it without a lot of effort, we shouldn't bother.

Oh. I did assume we were talking about OpenStack releases, not t-h-t,
sorry. I have nothing against making a new tht release that deprecates
the features we're no longer using and dropping them for good in a later
release.

What do you suggest would be a reasonable waiting period? Say a month or
so? I think it would be good if we could remove all the deprecated stuff
before we start porting our templates to HOT.

 
 If we do promise backwards compatibility, we should document it
 somewhere and if we don't we should probably make that more visible,
 too, so people know what to expect.

 I prefer the latter, because it will make the merge.py cleanup easier
 and every published bit of information I could find suggests that's our
 current stance anyway.

 
 This is more about good will than promising. If it is easy enough to
 just keep the code around and have it complain to us if we accidentally
 resurrect a feature, that should be enough. We could even introduce a
 switch to the CLI like --strict that we can run in our gate and that
 won't allow us to keep using deprecated features.
 
 So I'd like to see us deprecate not because we have to, but because we
 can do it with only a small amount of effort.

Right, that's fair enough. I've thought about adding a strict switch,
too, but I'd like to start removing code from merge.py, not adding more :-).

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Backwards compatibility policy for our projects

2014-06-16 Thread Tomas Sedovic
All,

After having proposed some changes[1][2] to tripleo-heat-templates[3],
reviewers suggested adding a deprecation period for the merge.py script.

While TripleO is an official OpenStack program, none of the projects
under its umbrella (including tripleo-heat-templates) have gone through
incubation and integration nor have they been shipped with Icehouse.

So there is no implicit compatibility guarantee and I have not found
anything about maintaining backwards compatibility on the
TripleO wiki page[4], in tripleo-heat-templates' readme[5], or in
tripleo-incubator's readme[6].

The Release Management wiki page[7] suggests that we follow Semantic
Versioning[8], under which prior to 1.0.0 (t-h-t is ) anything goes.
According to that wiki, we are using a stronger guarantee where we do
promise to bump the minor version on incompatible changes -- but this
again suggests that we do not promise to maintain backwards
compatibility -- just that we document whenever we break it.

According to Robert, there are now downstreams that have shipped things
(with the implication that they don't expect things to change without a
deprecation period) so there's clearly a disconnect here.

If we do promise backwards compatibility, we should document it
somewhere and if we don't we should probably make that more visible,
too, so people know what to expect.

I prefer the latter, because it will make the merge.py cleanup easier
and every published bit of information I could find suggests that's our
current stance anyway.

Tomas

[1]: https://review.openstack.org/#/c/99384/
[2]: https://review.openstack.org/#/c/97939/
[3]: https://github.com/openstack/tripleo-heat-templates
[4]: https://wiki.openstack.org/wiki/TripleO
[5]:
https://github.com/openstack/tripleo-heat-templates/blob/master/README.md
[6]: https://github.com/openstack/tripleo-incubator/blob/master/README.rst
[7]: https://wiki.openstack.org/wiki/TripleO/ReleaseManagement
[8]: http://semver.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-10 Thread Tomas Sedovic
On 10/06/14 10:25, Clint Byrum wrote:
 Excerpts from Jaromir Coufal's message of 2014-06-08 16:44:58 -0700:
 Hi,

 it looks that there is no more activity on the survey for mid-cycle 
 dates so I went forward to evaluate it.

 I created a table view into the etherpad [0] and results are following:
 * option1 (Jul 28 - Aug 1): 27 attendees - collides with Nova/Ironic
 * option2 (Jul 21-25) : 27 attendees
 * option3 (Jul 25-29) : 17 attendees - collides with Nova/Ironic
 * option4 (Aug 11-15) : 13 attendees

 I think that we can remove options 3 and 4 from the consideration, 
 because there is lot of people who can't make it. So we have option1 and 
 option2 left. Since Robert and Devananda (PTLs on the projects) can't 
 make option1, which also conflicts with Nova/Ironic meetup, I think it 
 is pretty straightforward.

 Based on the survey the winning date for the mid-cycle meetup is 
 option2: July 21th - 25th.

 Does anybody have very strong reason why we shouldn't fix the date for 
 option2 and proceed forward with the organization for the meetup?

 
 July 21-25 is also the shortest notice. I will not be able to attend
 as plans have already been made for the summer and I've already been
 travelling quite a bit recently, after all we were all just at the summit
 a few weeks ago.
 
 I question the reasoning that being close to FF is a bad thing, and
 suggest adding much later dates. But I understand since the chosen dates
 are so close, there is a need to make a decision immediately.
 
 Alternatively, I suggest that we split Heat out of this, and aim at
 later dates in August.

Apologies for not participating earlier, I wasn't sure I'd be able to go
until now.

July 21th - 25th doesn't work for me at all (wedding). Any later date
should be okay so I second both of Clint's suggestions.



 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Tomas Sedovic
On 30/05/14 02:08, James Slagle wrote:
 On Thu, May 29, 2014 at 12:25 PM, Anita Kuno ante...@anteaya.info wrote:
 As I was reviewing this patch today:
 https://review.openstack.org/#/c/96160/

 It occurred to me that the tuskar project is part of the tripleo
 program:
 http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247

 I wondered if business, including bots posting to irc for #tuskar is
 best conducted in the #tripleo channel. I spoke with Chris Jones in
 #tripleo and he said the topic hadn't come up before. He asked me if I
 wanted to kick off the email thread, so here we are.

 Should #tuskar business be conducted in the #tripleo channel?
 
 I'd say yes. I don't think the additional traffic would be a large
 distraction at all to normal TripleO business.

Agreed, I don't think the traffic increase would be problematic. Neither
channel seems particularly busy.

And it would probably be beneficial to the TripleO developers who aren't
working on the UI stuff as well as the UI people who aren't necessarily
hacking on the rest of TripleO. A discussion in one area can sometimes
use some input from the other, which is harder when you need to move the
conversation between channels.

 
 I can however see how it might be nice to have #tuskar to talk tuskar
 api and tuskar ui stuff in the same channel. Do folks usually do that?
 Or is tuskar-ui conversation already happening in #openstack-horizon?
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-28 Thread Tomas Sedovic
On 28/05/14 10:05, Julien Danjou wrote:
 On Tue, May 27 2014, Gauvain Pocentek wrote:
 
 So my feeling is that we should work on the tools to convert RST
 (or whatever format, but RST seems to be the norm for openstack
 projects) to docbook, and generate our online documentation from
 there. There are tools that can help us do that, and I don't
 see another solution that would make us move forward.
 
 Anne, you talked about experimenting with the end user guide, and
 after the discussion and the technical info brought by Doug,
 Steve and Steven, I now think it is worth trying.
 
 I think it's a very good idea.
 
 FWIW, AsciiDoc¹ has a nice markup format that can be converted to 
 Docbook. I know it's not RST, but it's still better than writing
 XML IMHO.

I would voice my support for AsciiDoc as well. Conversion to DocBook
was what it was designed for and the two should be semantically
equivalent (i.e. any markup we're using in our DocBook sources should
be available in AsciiDoc as well).


These two articles provide a good quick introduction:

http://asciidoctor.org/docs/asciidoc-writers-guide/

http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/

 
 
 ¹  http://www.methods.co.nz/asciidoc/
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Tomas Sedovic
On 08/04/14 01:50, Robert Collins wrote:
 tl;dr: 3 more core members to propose:
 bnemec
 greghaynes
 jdon

-1, there's a typo in jdob's nick ;-)

In all seriousness, I support all of them being added to core.

 
 
 On 4 April 2014 08:55, Chris Jones c...@tenshu.net wrote:
 Hi

 +1 for your proposed -core changes.

 Re your question about whether we should retroactively apply the 3-a-day
 rule to the 3 month review stats, my suggestion would be a qualified no.

 I think we've established an agile approach to the member list of -core, so
 if there are a one or two people who we would have added to -core before the
 goalposts moved, I'd say look at their review quality. If they're showing
 the right stuff, let's get them in and helping. If they don't feel our new
 goalposts are achievable with their workload, they'll fall out again
 naturally before long.
 
 So I've actioned the prior vote.
 
 I said: Bnemec, jdob, greg etc - good stuff, I value your reviews
 already, but...
 
 So... looking at a few things - long period of reviews:
 60 days:
 |  greghaynes | 121   0  22   99   0   0   81.8% |  14 ( 11.6%)  |
 |  bnemec     | 116   0  38   78   0   0   67.2% |  10 (  8.6%)  |
 |  jdob       |  87   0  15   72   0   0   82.8% |   4 (  4.6%)  |
 
 90 days:
 
 |  bnemec     | 145   0  40  105   0   0   72.4% |  17 ( 11.7%)  |
 |  greghaynes | 142   0  23  119   0   0   83.8% |  22 ( 15.5%)  |
 |  jdob       | 106   0  17   89   0   0   84.0% |   7 (  6.6%)  |
 
 Ben's reviews are thorough, he reviews across all contributors, he
 shows good depth of knowledge and awareness across tripleo, and is
 sensitive to the pragmatic balance between 'right' and 'good enough'.
 I'm delighted to support him for core now.
 
 Greg is very active, reviewing across all contributors with pretty
 good knowledge and awareness. I'd like to see a little more contextual
 awareness though - theres a few (but not many) reviews where looking
 at how the big picture of things fitting together more would have been
 beneficial. *however*, I think that's a room-to-improve issue vs
 not-good-enough-for-core - to me it makes sense to propose him for
 core too.
 
 Jay's reviews are also very good and consistent, somewhere between
 Greg and Ben in terms of bigger-context awareness - so another
 definite +1 from me.
 
 -Rob
 
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-07 Thread Tomas Sedovic
On 06/04/14 23:27, Steve Baker wrote:
 On 05/04/14 04:47, Tomas Sedovic wrote:
 Hi All,

snip

 The maintenance burden of merge.py can be gradually reduced if features
 in it can be deleted when they are no longer needed. At some point in
 this process merge.py will need to accept HOT templates, and risk of
 breakage during this changeover would be reduced the smaller merge.py is.
 
 How about this for the task order?
 1. remove OpenStack::ImageBuilder::Elements support from merge.py
 2. move to software-config based templates
 3. remove the following from merge.py
3.1. merging params and resources
3.2. FileInclude
3.3. OpenStack::Role
 4. port tripleo templates and merge.py to HOT
 5. use some HOT replacement for Merge::Map, delete Merge::Map from tripleo
 6. move to resource providers/scaling groups for scaling
 7. rm -f merge.py

I like this.

Clint's already working on #2. I can tackle #1 and help review  test
the software config changes. We can deal with the rest afterwards.

One note on 3.1: until we switch to provider resources or get_file, we
can't drop the merging params and resources feature.

We can drop FileInclude, OpenStack::Role and deep merge (e.g. joining
`notCompute0Config` from `overcloud-source.yaml` and `swift-source.yaml`
as in the example from my email), but we'll have to keep the functionality of
putting multiple templates together for a bit longer.

That said, I don't think switching to provider resources is going to be
a drastic change.
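
For reference, the provider resource approach boils down to an environment
mapping plus an ordinary resource, something like this (the type and file
names are only illustrative):

  # environment file
  resource_registry:
    OS::TripleO::Controller: controller.yaml

  # top-level template
  resources:
    controller0:
      type: OS::TripleO::Controller
      properties:
        image: {get_param: controllerImage}

so the joining happens as a nested stack instead of merge.py splicing
resources together.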

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Tomas Sedovic
Hi All,

I was wondering if the time has come to document what exactly are we
doing with tripleo-heat-templates and merge.py[1], figure out what needs
to happen to move away and raise the necessary blueprints on Heat and
TripleO side.

(merge.py is a script we use to build the final TripleO Heat templates
from smaller chunks)

There probably isn't an immediate need for us to drop merge.py, but its
existence either indicates deficiencies within Heat or our unfamiliarity
with some of Heat's features (possibly both).

I worry that the longer we stay with merge.py the harder it will be to
move forward. We're still adding new features and fixing bugs in it (at
a slow pace but still).

Below is my understanding of the main merge.py functionality and a rough
plan of what I think might be a good direction to move to. It is almost
certainly incomplete -- please do poke holes in this. I'm hoping we'll
get to a point where everyone's clear on what exactly merge.py does and
why. We can then document that and raise the appropriate blueprints.


## merge.py features ##


1. Merging parameters and resources

Any uniquely-named parameters and resources from multiple templates are
put together into the final template.

If a resource of the same name is in multiple templates, an error is
raised. Unless it's of a whitelisted type (nova server, launch
configuration, etc.) in which case they're all merged into a single
resource.

For example: merge.py overcloud-source.yaml swift-source.yaml

The final template has all the parameters from both. Moreover, these two
resources will be joined together:

 overcloud-source.yaml 

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  ImageId: '0'
  InstanceType: '0'
Metadata:
  admin-password: {Ref: AdminPassword}
  admin-token: {Ref: AdminToken}
  bootstack:
public_interface_ip:
  Ref: NeutronPublicInterfaceIP


 swift-source.yaml 

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Metadata:
  swift:
devices:
  ...
hash: {Ref: SwiftHashSuffix}
service-password: {Ref: SwiftPassword}


The final template will contain:

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  ImageId: '0'
  InstanceType: '0'
Metadata:
  admin-password: {Ref: AdminPassword}
  admin-token: {Ref: AdminToken}
  bootstack:
public_interface_ip:
  Ref: NeutronPublicInterfaceIP
  swift:
devices:
  ...
hash: {Ref: SwiftHashSuffix}
service-password: {Ref: SwiftPassword}


We use this to keep the templates more manageable (instead of having one
huge file) and also to be able to pick the components we want: instead
of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
uses the VirtualPowerManager driver) or `ironic-vm-source`.



2. FileInclude

If you have a pseudo resource with the type of `FileInclude`, we will
look at the specified Path and SubKey and substitute the resulting dictionary in its place:

 overcloud-source.yaml 

  NovaCompute0Config:
Type: FileInclude
Path: nova-compute-instance.yaml
SubKey: Resources.NovaCompute0Config
Parameters:
  NeutronNetworkType: gre
  NeutronEnableTunnelling: True


 nova-compute-instance.yaml 

  NovaCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  InstanceType: '0'
  ImageId: '0'
Metadata:
  keystone:
host: {Ref: KeystoneHost}
  neutron:
host: {Ref: NeutronHost}
  tenant_network_type: {Ref: NeutronNetworkType}
  network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
  bridge_mappings: {Ref: NeutronBridgeMappings}
  enable_tunneling: {Ref: NeutronEnableTunnelling}
  physical_bridge: {Ref: NeutronPhysicalBridge}
  public_interface: {Ref: NeutronPublicInterface}
service-password:
  Ref: NeutronPassword
  admin-password: {Ref: AdminPassword}

The result:

  NovaCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  InstanceType: '0'
  ImageId: '0'
Metadata:
  keystone:
host: {Ref: KeystoneHost}
  neutron:
host: {Ref: NeutronHost}
  tenant_network_type: gre
  network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
  bridge_mappings: {Ref: NeutronBridgeMappings}
  enable_tunneling: True
  physical_bridge: {Ref: NeutronPhysicalBridge}
  public_interface: {Ref: NeutronPublicInterface}
service-password:
  Ref: NeutronPassword
  admin-password: {Ref: AdminPassword}

Note the `NeutronNetworkType` and `NeutronEnableTunnelling` parameter
substitution.

This is useful when you want to pick only bits and pieces of an existing
template. In the example above, `nova-compute-instance.yaml` is a
standalone template you can launch on its own. 

Re: [openstack-dev] [TripleO] reviewer update march

2014-04-03 Thread Tomas Sedovic
On 03/04/14 13:02, Robert Collins wrote:
 Getting back in the swing of things...
 
 Hi,
 like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.
 
 In this months review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core
 
 Existing -core members are eligible to vote - please indicate your
 opinion on each of the three changes above in reply to this email.

+1

 
snip
 
 
 -core that are not keeping up recently... :
 
 |  tomas-8c8 ** |  31   0   4    2  25   8   87.1% |   1 (  3.2%)  |

Duly noted. I've picked up the daily pace again in the last couple of
weeks and will continue doing so.

 |  marios **    |  27   0   1   17   9   7   96.3% |   3 ( 11.1%)  |
 |  tzumainn **  |  27   0   3   23   1   4   88.9% |   0 (  0.0%)  |
 |  pblaho **    |  17   0   0    4  13   4  100.0% |   1 (  5.9%)  |
 |  jomara **    |   0   0   0    0   0   1    0.0% |   0 (  0.0%)  |
 
 
 Please remember - the stats are just an entry point to a more detailed
 discussion about each individual, and I know we all have a bunch of
 work stuff, on an ongoing basis :)
 
 I'm using the fairly simple metric we agreed on - 'average at least
 three reviews a
 day' as a proxy for 'sees enough of the code and enough discussion of
 the code to be an effective reviewer'. The three review a day thing we
 derived based
 on the need for consistent volume of reviews to handle current
 contributors - we may
 lower that once we're ahead (which may happen quickly if we get more cores... 
 :)
 But even so:
  - reading three patches a day is a pretty low commitment to ask for
  - if you don't have time to do that, you will get stale quickly -
 you'll only see under
33% of the code changes going on (we're doing about 10 commits
a day - twice as many since december - and hopefully not slowing down!)
 
 Cheers,
 Rob
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-21 Thread Tomas Sedovic
On 20/02/14 16:24, Imre Farkas wrote:
 On 02/20/2014 03:57 PM, Tomas Sedovic wrote:
 On 20/02/14 15:41, Radomir Dopieralski wrote:
 On 20/02/14 15:00, Tomas Sedovic wrote:

 Are we even sure we need to store the passwords in the first place? All
 this encryption talk seems very premature to me.

 How are you going to redeploy without them?


 What do you mean by redeploy?

 1. Deploy a brand new overcloud, overwriting the old one
 2. Updating the services in the existing overcloud (i.e. image updates)
 3. Adding new machines to the existing overcloud
 4. Autoscaling
 5. Something else
 6. All of the above

 I'd guess each of these have different password workflow requirements.
 
 I am not sure if all these use cases have different password
 requirement. If you check devtest, no matter whether you are creating or
 just updating your overcloud, all the parameters have to be provided for
 the heat template:
 https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125
 
 
 I would rather not require the user to enter 5/10/15 different passwords
 every time Tuskar updates the stack. I think it's much better to
 autogenerate the passwords for the first time, provide an option to
 override them, then save and encrypt them in Tuskar. So +1 for designing
 a proper system for storing the passwords.

Well if that is the case and we can't change the templates/heat to
change that, the secrets should be put in Keystone or at least go
through Keystone. Or use Barbican or whatever.

We shouldn't be implementing crypto in Tuskar.

 
 Imre
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 10:12, Radomir Dopieralski wrote:
 On 19/02/14 18:29, Dougal Matthews wrote:
 The question for me, is what passwords will we have and when do we need
 them? Are any of the passwords required long term.
 
 We will need whatever the Heat template needs to generate all the
 configuration files. That includes passwords for all services that are
 going to be configured, such as, for example, Swift or MySQL.


This is a one-time operation, though, isn't it? You pass those
parameters to Heat when you run stack-create. Heat and os-*-config will
handle the rest.
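
For illustration, those parameters can go into an environment file that is
only handed to heat stack-create (passed once with -e); the parameter names
below are the ones our templates already use, with the values elided.
Whether Heat itself keeps them around for a later stack-update is a separate
question (see below):

  parameters:
    AdminPassword: "..."
    AdminToken: "..."
    SwiftPassword: "..."
    NeutronPassword: "..."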

 
 I'm not sure about the exact mechanisms in Heat, but I would guess that
 we will need all the parameters, including passwords, when the templates
 are re-generated. We could probably generate new passwords every time,
 though.

What do you mean by regenarating the templates? Do you mean when we want
to update the deployment (e.g. using heat stack-update)?

 
 If we do need to store passwords it becomes a somewhat thorny issue, how
 does Tuskar know what a password is? If this is flagged up by the
 UI/client then we are relying on the user to tell us which isn't wise.
 
 All the template parameters that are passwords are marked in the Heat
 parameter list that we get from it as NoEcho: true, so we do have an
 idea about which parts are sensitive.
 

If at all possible, we should not store any passwords or keys whatsoever.

We may have to pass them through from the user to an API (and then
promptly forget them) or possibly hold onto them for a little while (in
RAM), but never persist them anywhere.

Let's go through the specific cases where we do handle passwords and
what to do with them.

Looking at devtest, I can see two places where the user deals with
passwords:

http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html

1) in the step 10. (Deploy an overcloud) we pass the various overcloud
service passwords and keys to Heat (it's things like the Keystone Admin
Token  password, SSL key  cert, nova/heat/cinder/glance service
passwords, etc.).

I'm assuming this could include any database and AMQP passwords in the
future.

2) steps 17 & 18 (Perform admin setup of your overcloud) where we pass some
of the same passwords to Keystone to set up the Overcloud OpenStack
services (compute, metering, orchestration, etc.)

And that's it.

I'd love it if we could eventually push steps 17 & 18 into our Heat
templates; that's where they belong, I think (please correct me if that's
wrong).

Regardless, all the passwords here are user-specified. When you install
OpenStack, you have to come up with a bunch of passwords up front and
use them to set the various services up.

Now Tuskar serves as an intermediary. It should ask for these passwords
and then perform the steps you'd otherwise do manually and then *forget*
the passwords again.

Since we're using the passwords in 2 steps (10 and 17), we can't just
pass them to Heat and immediately forget them. But we can pass them in
step 10, wait for it to finish, pass them to step 17 and forget them then.

So here's the workflow:

1. The user wants to deploy the overcloud through the UI
2. They're asked to fill in all the necessary information (including the
passwords) -- or we autogenerate them, which doesn't change anything
3. Tuskar UI sends a request to Tuskar API including the passwords
3.1. Tuskar UI forgets the passwords (this isn't an explicit action, we
don't store them anywhere)
4. Tuskar API fetches/builds the correct Heat template
5. Tuskar API calls heat stack-create and passes in all the params
(including passwords)
6. Tuskar API waits for heat stack-create to finish
7. Tuskar API issues a bunch of keystone calls to set up the services
(with the specified passwords)
8. Tuskar API forgets the passwords

The asynchronous nature of Heat stack-create may make this a bit more
difficult but the point should still stand -- we should not persist the
passwords. We may have to store them somewhere for a short duration, but
not throughout the entire lifecycle of the overcloud.

I'm not sure if we have to pass the unchanged parameters to Heat again
during stack-update (they may or may not be stored on the metadata
server). If we do, I'd vote we ask the user to re-enter them instead of
storing them somewhere.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-02-20 Thread Tomas Sedovic
On 19/02/14 08:48, Clint Byrum wrote:
 Since picking up Heat and trying to think about how to express clusters
 of things, I've been troubled by how poorly the CFN language supports
 using lists. There has always been the Fn::Select function for
 dereferencing arrays and maps, and recently we added a nice enhancement
 to HOT to allow referencing these directly in get_attr and get_param.
 
 However, this does not help us when we want to do something with all of
 the members of a list.
 
 In many applications I suspect the template authors will want to do what
 we want to do now in TripleO. We have a list of identical servers and
 we'd like to fetch the same attribute from them all, join it with other
 attributes, and return that as a string.
 
 The specific case is that we need to have all of the hosts in a cluster
 of machines addressable in /etc/hosts (please, Designate, save us,
 eventually. ;). The way to do this if we had just explicit resources
 named NovaCompute0, NovaCompute1, would be:
 
   str_join:
 - "\n"
 - - str_join:
 - ' '
 - get_attr:
   - NovaCompute0
   - networks.ctlplane.0
 - get_attr:
   - NovaCompute0
   - name
   - str_join:
 - ' '
 - get_attr:
   - NovaCompute1
   - networks.ctlplane.0
 - get_attr:
   - NovaCompute1
   - name
 
 Now, what I'd really like to do is this:
 
 map:
   - str_join:
 - "\n"
 - - str_join:
   - ' '
   - get_attr:
 - $1
 - networks.ctlplane.0
   - get_attr:
 - $1
 - name
   - - NovaCompute0
 - NovaCompute1
 
 This would be helpful for the instances of resource groups too, as we
 can make sure they return a list. The above then becomes:
 
 
 map:
   - str_join:
 - "\n"
 - - str_join:
   - ' '
   - get_attr:
 - $1
 - networks.ctlplane.0
   - get_attr:
 - $1
 - name
   - get_attr:
   - NovaComputeGroup
   - member_resources
 
 Thoughts on this idea? I will throw together an implementation soon but
 wanted to get this idea out there into the hive mind ASAP.

I think it's missing lambdas and recursion ;-).

Joking aside, I like it. As long as we don't actually turn this into
anything remotely resembling turing-completeness, having useful data
processing primitives is good.

Now onto the bikeshed: could we denote the arguments with something
that more obviously looks like a Heat-specific notation and not a
user-entered string?

E.g. replace $1 with {Arg: 1}

It's a bit uglier but more obvious to spot what's going on.
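
Just to sketch it (neither map nor this notation exists in Heat today, so
this is purely hypothetical), the NovaComputeGroup example would then read:

 map:
 - str_join:
   - " "
   - - {get_attr: [{Arg: 1}, networks.ctlplane.0]}
     - {get_attr: [{Arg: 1}, name]}
 - get_attr:
   - NovaComputeGroup
   - member_resources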

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 14:10, Jiří Stránský wrote:
 On 20.2.2014 12:18, Radomir Dopieralski wrote:
 On 20/02/14 12:02, Radomir Dopieralski wrote:
 Anybody who gets access to Tuskar-API gets the
 passwords, whether we encrypt them or not. Anybody who doesn't have
 access to Tuskar-API doesn't get the passwords, whether we encrypt
 them or not.
 
 Yeah, i think so too.
 
 Thinking about it some more, all the uses of the passwords come as a
 result of an action initiated by the user either by tuskar-ui, or by
 the tuskar command-line client. So maybe we could put the key in their
 configuration and send it with the request to (re)deploy. Tuskar-API
 would still need to keep it for the duration of deployment (to register
 the services at the end), but that's it.
 
 This would be possible, but it would damage the user experience quite a
 bit. Afaik other deployment tools solve password storage the same way we
 do now.
 
 Imho keeping the passwords the way we do now is not among the biggest
 OpenStack security risks. I think we can make the assumption that
 undercloud will not be publicly accessible, so a potential external
 attacker would have to first gain network access to the undercloud
 machines and only then they can start trying to exploit Tuskar API to
 hand out the passwords. Overcloud services (which are meant to be
 publicly accessible) have their service passwords accessible in
 plaintext, e.g. in nova.conf you'll find nova password and neutron
 password -- i think this is comparatively greater security risk.

This to me reads as: we should fix the OpenStack services not to store
passwords in their service.conf, not make the situation worse by
storing them in even more places.

 
 So if we can come up with a solution where the benefits outweigh the
 drawbacks and it makes sense in broader view at OpenStack security, we
 should go for it, but so far i'm not convinced there is such a solution.
 Just my 2 cents :)
 
 Jirka
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 15:41, Radomir Dopieralski wrote:
 On 20/02/14 15:00, Tomas Sedovic wrote:
 
 Are we even sure we need to store the passwords in the first place? All
 this encryption talk seems very premature to me.
 
 How are you going to redeploy without them?
 

What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these have different password workflow requirements.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 16:02, Radomir Dopieralski wrote:
 On 20/02/14 15:57, Tomas Sedovic wrote:
 On 20/02/14 15:41, Radomir Dopieralski wrote:
 On 20/02/14 15:00, Tomas Sedovic wrote:

 Are we even sure we need to store the passwords in the first place? All
 this encryption talk seems very premature to me.

 How are you going to redeploy without them?


 What do you mean by redeploy?

 1. Deploy a brand new overcloud, overwriting the old one
 2. Updating the services in the existing overcloud (i.e. image updates)
 3. Adding new machines to the existing overcloud
 4. Autoscaling
 5. Something else
 6. All of the above
 
 I mean clicking scale in tuskar-ui.
 

Right. So either Heat is able to handle this on its own, or we fix it
so that it can, or we ask for the necessary parameters again.

I really dislike having to do crypto in tuskar.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-02-05 Thread Tomas Sedovic

On 05/02/14 03:58, Jaromir Coufal wrote:

Hi to everybody,

based on the feedback from last week [0] I incorporated changes in the
wireframes so that we keep them up to date with latest decisions:

http://people.redhat.com/~jcoufal/openstack/tripleo/2014-02-05_tripleo-ui-icehouse.pdf


Changes:
* Smaller layout change in Nodes Registration (no rush for update)
* Unifying views for 'deploying' and 'deployed' states of the page for
deployment detail
* Improved workflow for associating node profiles with roles
- showing final state of MVP
- first iteration contains only last row (no node definition link)


Hey Jarda,

Looking good. I've got two questions:

1. Are we doing node tags (page 4) for the first iteration? Where are 
they going to live?


2. There are multiple node profiles per role on pages 11, 12, 17. Is 
that just an oversight or do you intend on keeping those in? I thought 
the consensus was to do 1 node profile per deployment role.


Thanks,
Tomas




-- Jarda

[0] https://www.youtube.com/watch?v=y2fv6vebFhM


On 2014/16/01 01:50, Jaromir Coufal wrote:

Hi folks,

thanks everybody for feedback. Based on that I updated wireframes and
tried to provide a minimum scope for Icehouse timeframe.

http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf



Hopefully we are able to deliver described set of features. But if you
find something what is missing which is critical for the first release
(or that we are implementing a feature which should not have such high
priority), please speak up now.

The wireframes are very close to implementation. In time, there will
appear more views and we will see if we can get them in as well.

Thanks all for participation
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-02-05 Thread Tomas Sedovic

snip

1. Are we doing node tags (page 4) for the first iteration? Where are
they going to live?

Yes, it's very easy to do, already part of Ironic.


Cool!




2. There are multiple node profiles per role on pages 11, 12, 17. Is
that just an oversight or do you intend on keeping those in? I thought
the consensus was to do 1 node profile per deployment role.

I tried to avoid the confusion by the comment:
'- showing final state of MVP
  - first iteration contains only last row (no node definition link)'


I'm sorry, I completely missed that comment. Thanks for the clarification.



Maybe I should be more clear. By last row I meant that in the first
iteration, the form will contain only one row with dropdown to select
only one flavor per role.

I intend to keep multiple roles for Icehouse scope. We will see if we
can get there in time, I am hoping for 'yes'. But I am absolutely
aligned with the consensus that we are starting only one node profile
per role.

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-03 Thread Tomas Sedovic

My apologies for firing this off and then hiding under the FOSDEM rock.

In light of the points raised by Devananda and Robert, I no longer think 
fiddling with the scheduler is the way to go.


Note this was never intended to break/confuse all TripleO users -- I 
considered it a cleaner equivalent to entering incorrect HW specs (i.e. 
instead of doing that you would switch to this other filter in nova.conf).


Regardless, I stand corrected on the distinction between heterogeneous 
hardware all the way and having a flavour per service definition. That 
was a very good point to raise.


I'm fine with both approaches.

So yeah, let's work towards having a single Node Profile (flavor) 
associated with each Deployment Role (pages 12 & 13 of the latest 
mockups[1]), optionally starting with requiring all the Node Profiles to 
be equal.


Once that's working fine, we can look into the harder case of having 
multiple Node Profiles within a Deployment Role.


Is everyone comfortable with that?

Tomas

[1]: 
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-27_tripleo-ui-icehouse.pdf


On 03/02/14 00:21, Robert Collins wrote:

On 3 February 2014 08:45, Jaromir Coufal jcou...@redhat.com wrote:





However, taking a step back, maybe the real answer is:

a) homogeneous nodes
b) document. . .
 - **unsupported** means of demoing Tuskar (set node attributes to
match flavors, hack
   the scheduler, etc)


Why are people calling it 'hack'? It's an additional filter to
nova-scheduler...?


It doesn't properly support the use case; its extra code to write and
test and configure that is precisely identical to mis-registering
nodes.


 - our goals of supporting heterogeneous nodes for the J-release.


I wouldn't talk about J-release. I would talk about next iteration or next
step. Nobody said that we are not able to make it in I-release.


+1




Does this seem reasonable to everyone?

Mainn



Well +1 for a) and it's documentation.

However me and Robert, we look to have different opinions on what
'homogeneous' means in our context. I think we should clarify that.


So I think my point is more this:
  - either this iteration is entirely limited to homogeneous hardware,
in which case, document it, not workarounds or custom schedulers etc.
  - or it isn't limited, in which case we should consider the options:
- flavor per service definition
- custom scheduler
- register nodes wrongly

-Rob




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tomas Sedovic

Hi all,

I've seen some confusion regarding the homogenous hardware support as 
the first step for the tripleo UI. I think it's time to make sure we're 
all on the same page.


Here's what I think is not controversial:

1. Build the UI and everything underneath to work with homogenous 
hardware in the Icehouse timeframe
2. Figure out how to support heterogenous hardware and do that (may or 
may not happen within Icehouse)


The first option implies having a single nova flavour that will match 
all the boxes we want to work with. It may or may not be surfaced in the 
UI (I think that depends on our undercloud installation story).


Now, someone (I don't honestly know who or when) proposed a slight step 
up from point #1 that would allow people to try the UI even if their 
hardware varies slightly:


1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't 
do a strict match on the hardware in Ironic. E.g. if our baremetal 
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB 
ram or 1.5TB disk.


The UI would still assume homogenous hardware and treat it as such. It's 
just that we would allow for small differences.


This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM 
when the flavour says 32. We would treat the flavour as a lowest common 
denominator.


Nor is this an alternative to a full heterogenous hardware support. We 
need to do that eventually anyway. This is just to make the first MVP 
useful to more people.


It's an incremental step that would affect neither point 1. (strict 
homogenous hardware) nor point 2. (full heterogenous hardware support).


If some of these assumptions are incorrect, please let me know. I don't 
think this is an insane U-turn from anything we've already agreed to do, 
but it seems to confuse people.


At any rate, this is not a huge deal and if it's not a good idea, let's 
just drop it.


Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tomas Sedovic

On 30/01/14 15:53, Matt Wagner wrote:

On 1/30/14, 5:26 AM, Tomas Sedovic wrote:

Hi all,

I've seen some confusion regarding the homogenous hardware support as
the first step for the tripleo UI. I think it's time to make sure we're
all on the same page.

Here's what I think is not controversial:

1. Build the UI and everything underneath to work with homogenous
hardware in the Icehouse timeframe
2. Figure out how to support heterogenous hardware and do that (may or
may not happen within Icehouse)

The first option implies having a single nova flavour that will match
all the boxes we want to work with. It may or may not be surfaced in the
UI (I think that depends on our undercloud installation story).

Now, someone (I don't honestly know who or when) proposed a slight step
up from point #1 that would allow people to try the UI even if their
hardware varies slightly:

1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't
do a strict match on the hardware in Ironic. E.g. if our baremetal
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
ram or 1.5TB disk.

The UI would still assume homogenous hardware and treat it as such. It's
just that we would allow for small differences.

This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
when the flavour says 32. We would treat the flavour as a lowest common
denominator.


Does Nova already handle this? Or is it built on exact matches?


It's doing an exact match as far as I know. This would likely involve 
writing a custom filter for nova scheduler and updating nova.conf 
accordingly.
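
For illustration, such a filter could be quite small -- something along
these lines (an untested sketch against the current filter API; the
class name and the exact attributes are assumptions on my part):

    from nova.scheduler import filters


    class MinimumSpecFilter(filters.BaseHostFilter):
        """Pass hosts that meet or exceed the flavour's RAM and disk.

        Treats the baremetal flavour as a lowest common denominator
        rather than requiring an exact match.
        """

        def host_passes(self, host_state, filter_properties):
            instance_type = filter_properties.get('instance_type') or {}
            wanted_ram_mb = instance_type.get('memory_mb', 0)
            wanted_disk_gb = instance_type.get('root_gb', 0)

            # A node passes if it offers at least what the flavour asks
            # for; bigger nodes are fine.
            return (host_state.total_usable_ram_mb >= wanted_ram_mb and
                    host_state.total_usable_disk_gb >= wanted_disk_gb)

nova.conf's scheduler filter list (scheduler_default_filters) would then
be updated accordingly. Again, just a sketch -- and only worth doing
until the proper heterogenous support lands.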




I guess my question is -- what is the benefit of doing this? Is it just
so people can play around with it? Or is there a lasting benefit
long-term? I can see one -- match to the closest, but be willing to give
me more than I asked for if that's all that's available. Is there any
downside to this being permanent behavior?


Absolutely not a long term thing. This is just to let people play around 
with the MVP until we have the proper support for heterogenous hardware in.


It's just an idea that would increase the usefulness of the first 
version and should be trivial to implement and take out.


If neither is the case, or if we do in fact manage to have proper 
heterogenous hardware support early (in Icehouse), it doesn't make any 
sense to do this.




I think the lowest-common-denominator match will be familiar to
sysadmins, too. Want to do RAID striping across a 500GB and a 750GB
disk? You'll get a striped 500GB volume.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] os-*-config in tripleo repositories

2014-01-09 Thread Tomas Sedovic

On 09/01/14 14:13, Derek Higgins wrote:

It looks like we have some duplication and inconsistencies on the 3
os-*-config elements in the tripleo repositories

os-apply-config (duplication) :
We have two elements that install this
  diskimage-builder/elements/config-applier/
  tripleo-image-elements/elements/os-apply-config/

As far as I can tell the version in diskimage-builder isn't used by
tripleo and the upstart file is broke
./dmesg:[   13.336184] init: Failed to spawn config-applier main
process: unable to execute: No such file or directory

To avoid confusion I propose we remove
diskimage-builder/elements/config-applier/ (or deprecated if we have a
suitable process) but would like to call it out here first to see if
anybody is using it or thinks its a bad idea?

inconsistencies
   os-collect-config, os-refresh-config : these are both installed from
git into the global site-packages
   os-apply-config : installed from a released tarball into its own venv

   To be consistent with the other elements all 3 I think should be
installed from git into its own venv, thoughts?


I've no insight into why things are the way they are, but your 
suggestions make sense to me.




If no objections I'll go ahead an do this next week,

thanks,
Derek.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Tomas Sedovic
Just an fyi, tomas-8c8 on the reviewers list is yours truly. That's 
the name I got assigned when I registered to Gerrit and apparently, it 
can't be changed.


Thanks for the heads-up, will be doing more reviews.

T.

On 07/10/13 21:03, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so for folk who are core as
a result of the merge will be retained as core, but November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
| derekh **        |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
| derekh **        |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
| slagle           |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
| ghe.rivero       |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether so you become core, or you keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar and releases

2013-10-02 Thread Tomas Sedovic

On 02/10/13 07:46, Sergey Lukjanov wrote:

In Savanna we're supplying plugin for OpenStack Dashboard - 
https://github.com/stackforge/savanna-dashboard

It doesn't contains any copy-paste and dependencies for Django/Horizon and 
currently compatible with Grizzly and Havana OpenStack Dashboard, it only 
contains our code and could be installed to any other existing dashboard. Here 
you can find info about how to install Savanna dashboard plugin - 
https://savanna.readthedocs.org/en/latest/horizon/installation.guide.html


Yeah, that's how tuskar-ui works, too.



Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Oct 2, 2013, at 4:08, Robert Collins robe...@robertcollins.net wrote:


We'd like to get tuskar projects doing releases sooner rather than
later. For python-tuskarclient, this is pretty much a no-brainer : we
just need to start doing it.

However, for tuskar-ui and tuskar it's more complex.

# tuskar

This is an API service; whats the story for non-integrated projects
and releases? Can we do them, or does the release team do them? Does
it make integration/incubation any harder?

# tuskar-ui

This is a Horizon plugin -
http://git.openstack.org/cgit/stackforge/tuskar-ui/tree/docs/install.rst#n57
- so it's not super clear to me how to make releases of it. Should it
really just be in the Horizon tree?

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-25 Thread Tomas Sedovic

On 09/25/2013 05:15 AM, Robert Collins wrote:

One of the major things Tuskar does is model a datacenter - which is
very useful for error correlation, capacity planning and scheduling.

Long term I'd like this to be held somewhere where it is accessible
for schedulers and ceilometer etc. E.g. network topology + switch
information might be held by neutron where schedulers can rely on it
being available, or possibly held by a unified topology db with
scheduler glued into that, but updated by neutron / nova / cinder.
Obviously this is a) non-trivial and b) not designed yet.

However, the design of Tuskar today needs to accomodate a few things:
  - multiple reference architectures for clouds (unless there really is
one true design)
  - the fact that today we don't have such an integrated vertical scheduler.

So the current Tuskar model has three constructs that tie together to
model the DC:
  - nodes
  - resource classes (grouping different types of nodes into service
offerings - e.g. nodes that offer swift, or those that offer nova).
  - 'racks'

AIUI the initial concept of Rack was to map to a physical rack, but
this rapidly got shifted to be 'Logical Rack' rather than physical
rack, but I think of Rack as really just a special case of a general
modelling problem..


Yeah. Eventually, we settled on Logical Rack meaning a set of nodes on 
the same L2 network (in a setup where you would group nodes into 
isolated L2 segments). Which kind of suggests we come up with a better name.


I agree there's a lot more useful stuff to model than just racks (or 
just L2 node groups).





From a deployment perspective, if you have two disconnected

infrastructures, thats two AZ's, and two underclouds : so we know that
any one undercloud is fully connected (possibly multiple subnets, but
one infrastructure). When would we want to subdivide that?

One case is quick fault aggregation: if a physical rack loses power,
rather than having 16 NOC folk independently investigating the same 16
down hypervisors, one would prefer to identify that the power to the
rack has failed (for non-HA powered racks); likewise if a single
switch fails (for non-HA network topologies) you want to identify that
that switch is down rather than investigating all the cascaded errors
independently.

A second case is scheduling: you may want to put nova instances on the
same switch as the cinder service delivering their block devices, when
possible, or split VM's serving HA tasks apart. (We currently do this
with host aggregates, but being able to do it directly would be much
nicer).

Lastly, if doing physical operations like power maintenance or moving
racks around in a datacentre, being able to identify machines in the
same rack can be super useful for planning, downtime announcements, 
orhttps://plus.google.com/hangouts/_/04919b4400b8c4c5ba706b752610cd433d9acbe1
host evacuation, and being able to find a specific machine in a DC is
also important (e.g. what shelf in the rack, what cartridge in a
chassis).


I agree. However, we should take care not to commit ourselves to 
building a DCIM just yet.




Back to 'Logical Rack' - you can see then that having a single
construct to group machines together doesn't really support these use
cases in a systematic fasion:- Physical rack modelling supports only a
subset of the location/performance/failure use cases, and Logical rack
doesn't support them at all: we're missing all the rich data we need
to aggregate faults rapidly : power, network, air conditioning - and
these things cover both single machine/groups of machines/racks/rows
of racks scale (consider a networked PDU with 10 hosts on it - thats a
fraction of a rack).

So, what I'm suggesting is that we model the failure and performance
domains directly, and include location (which is the incremental data
racks add once failure and performance domains are modelled) too. We
can separately noodle on exactly what failure domain and performance
domain modelling looks like - e.g. the scheduler focus group would be
a good place to have that discussion.


Yeah I think it's pretty clear that the current Tuskar concept where 
Racks are the first-class objects isn't going to fly. We should switch 
our focus to the individual nodes and their grouping and metadata.


I'd like to start with something small and simple that we can improve 
upon, though. How about just going with freeform tags and key/value 
metadata for the nodes?


We can define some well-known tags and keys to begin with (rack, 
l2-network, power, switch, etc.), it would be easy to iterate and once 
we settle on the things we need, we can solidify them more.
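
To illustrate what I have in mind (the key names are completely made
up, this is not an API proposal):

    node = {
        'uuid': '...',
        'tags': ['rack-12', 'ctlplane-a', 'burn-in-done'],
        'metadata': {
            'rack': 'rack-12',
            'l2-network': 'ctlplane-a',
            'power': 'pdu-3',
            'switch': 'sw-12-top',
        },
    }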


In the meantime, we have the API flexible enough to handle whatever 
architectures we end up supporting and the UI can provide the 
appropriate views into the data.


And this would allow people to add their own criteria that we didn't 
consider.




E.g. for any node I should be able to ask:
- what failure domains is this in? [e.g. power-45, 

[openstack-dev] [Tuskar] The last IRC meeting

2013-09-24 Thread Tomas Sedovic
I planned to cancel today's meeting, but I was reminded that we ought to 
finish the naming votes from the last week and it would be good to talk 
a bit more about the Tuskar coming under TripleO.


The time:

Tuesday, 24th September, 2013 at 19:00 UTC

The agenda:

* Discuss merger with TripleO including meeting time moving to this slot.
* Finish last week's voting https://etherpad.openstack.org/tuskar-naming
* Open discussion

https://wiki.openstack.org/wiki/Meetings/Tuskar

This is most likely going to be the last Tuskar-specific weekly meeting, 
going forward we'll just be a part of the TripleO one.


T.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Tomas Sedovic

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO 
developers face to face and discuss the visions and goals of our projects.


Tuskar's ultimate goal is to have a full OpenStack management 
solution: letting the cloud operators try OpenStack, install it, keep it 
running throughout the entire lifecycle (including bringing in new 
hardware, burning it in, decommissioning), help to scale it, secure the 
setup, monitor for failures, project the need for growth and so on.


And to provide a good user interface and API to let the operators 
control and script this easily.


Now, the scope of the OpenStack Deployment program (TripleO) includes 
not just installation, but the entire lifecycle management (from racking 
it up to decommissioning). Among other things they're thinking of are 
issue tracker integration and inventory management, but these could 
potentially be split into a separate program.


That means we do have a lot of goals in common and we've just been going 
at them from different angles: TripleO building the fundamental 
infrastructure while Tuskar focusing more on the end user experience.


We've come to a conclusion that it would be a great opportunity for both 
teams to join forces and build this thing together.


The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make 
it into the *I* one)


TripleO would get a UI and more developers trying it out and helping 
with setup and integration.


This shouldn't even need to derail us much from the rough roadmap we 
planned to follow in the upcoming months:


1. get things stable and robust enough to demo in Hong Kong on real hardware
2. include metrics and monitoring
3. security

What do you think?

Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Tomas Sedovic

On 09/19/2013 04:00 PM, Adam Young wrote:

On 09/19/2013 05:19 AM, Imre Farkas wrote:

On 09/19/2013 10:08 AM, Tomas Sedovic wrote:

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO
developers face to face and discuss the visions and goals of our
projects.

Tuskar's ultimate goal is to have a full OpenStack management
solution: letting the cloud operators try OpenStack, install it, keep it
running throughout the entire lifecycle (including bringing in new
hardware, burning it in, decommissioning), help to scale it, secure the
setup, monitor for failures, project the need for growth and so on.

And to provide a good user interface and API to let the operators
control and script this easily.

Now, the scope of the OpenStack Deployment program (TripleO) includes
not just installation, but the entire lifecycle management (from racking
it up to decommissioning). Among other things they're thinking of are
issue tracker integration and inventory management, but these could
potentially be split into a separate program.

That means we do have a lot of goals in common and we've just been going
at them from different angles: TripleO building the fundamental
infrastructure while Tuskar focusing more on the end user experience.

We've come to a conclusion that it would be a great opportunity for both
teams to join forces and build this thing together.

The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make
it into the *I* one)

TripleO would get a UI and more developers trying it out and helping
with setup and integration.

This shouldn't even need to derail us much from the rough roadmap we
planned to follow in the upcoming months:

1. get things stable and robust enough to demo in Hong Kong on real
hardware
2. include metrics and monitoring
3. security

What do you think?


That is an excellent idea!

Does it mean from the practical point of view that the Tuskar code
will be merged into the TripleO repos and the project will be deleted
from StackForge and Launchpad?


I would recommend against that, and instead have the unified team merge,
but maintain both repos.  Think of how Keystone manages both
python-keystoneclient and keystone server.

And let me be the first to suggest that the unified team be called
Tuskarooo!


My understanding is: we'd mostly keep our repos (tuskar, tuskar-ui, 
python-tuskarclient) for now but probably move them from stackforge to 
openstack (since that's where all the TripleO repos live).


The Tuskar code would probably be integrated a bit later than the 
current TripleO stuff (API in I) and we'll need to meet some integration 
requirements, but I believe that eventually tuskar-ui would merge with 
Horizon just like all the other UIs do. (provided that ends up making sense)


There is some code we currently have in Tuskar that will make more sense 
to move to another project or whatever but that's details.


And yeah, Tuskarooo's great but I wouldn't say no to Tripletusk 
(Triceratops!) or say Gablerstaplerfahrer.






Imre



Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-17 Thread Tomas Sedovic

On 09/17/2013 04:17 AM, Jaromir Coufal wrote:


On 2013/16/09 15:11, Tomas Sedovic wrote:

On 09/16/2013 05:50 PM, Jaromir Coufal wrote:

Hi,

after few days of gathering information, it looks that no more new ideas
appear there, so let's take the last round of voting for names which you
prefer. It's important for us to get on the same page.

https://etherpad.openstack.org/tuskar-naming

Thanks guys
-- Jarda


Thanks Jarda,

I was thinking we could do the voting during the weekly IRC meeting
(the bot has some cool voting capabilities).

Unfortunately, I've fallen ill and chances are I won't be able to
drive the meeting. If you folks want to self-organise and start the
vote, you have my blessing.

Otherwise, shall we do it on the IRC meeting after that?

T.

Sure Thomas, thanks for following the thread. We can try to
self-organize there, I might try to run it, just can you point me to bot
commands?

Get well soon


Thanks.

Here's the manual:

http://meetbot.debian.net/Manual.html

The most important commands are:

#startmeeting tuskar

#topic an item from the agenda

#info an item to be listed in the meeting minutes

#agreed whatever the people agreed to do

#action nick an action the person is supposed to take

#endmeeting

If you scroll to the last section of the Heat meeting page, they have 
some instructions on running the meetings:


https://wiki.openstack.org/wiki/Meetings/HeatAgenda



-- Jarda

[snip]



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-17 Thread Tomas Sedovic

On 09/17/2013 04:53 AM, Mike Spreitzer wrote:


  From: Jaromir Coufal jcou...@redhat.com
  To: openstack-dev@lists.openstack.org,
  Date: 09/16/2013 11:51 AM
  Subject: Re: [openstack-dev] [Tuskar] Tuskar Names Clarification 
Unification
 
  Hi,
 
  after few days of gathering information, it looks that no more new
  ideas appear there, so let's take the last round of voting for names
  which you prefer. It's important for us to get on the same page.

I am concerned that the proposals around the term 'rack' do not
recognize that there might be more than one layer in the organization.

Is it more important to get appropriately abstract and generic terms, or
is the desire to match common concrete terms?



So our thinking here is to work with a grouping of nodes in the same 
physical location, the same subnet and ideally the same hardware 
configuration.


This can be a physical rack, or what I believe you call a chassis (a 
group of servers inside the rack).


Regarding layers, I see two physical layers in play: a rack and a 
chassis. Could there be more?


The idea right now is that the Tuskar admins would choose their 
preferred granularity and just work on that level.


Do you think that's not sufficient?

T.


Regards,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-16 Thread Tomas Sedovic

On 09/16/2013 05:50 PM, Jaromir Coufal wrote:

Hi,

after few days of gathering information, it looks that no more new ideas
appear there, so let's take the last round of voting for names which you
prefer. It's important for us to get on the same page.

https://etherpad.openstack.org/tuskar-naming

Thanks guys
-- Jarda


Thanks Jarda,

I was thinking we could do the voting during the weekly IRC meeting (the 
bot has some cool voting capabilities).


Unfortunately, I've fallen ill and chances are I won't be able to drive 
the meeting. If you folks want to self-organise and start the vote, you 
have my blessing.


Otherwise, shall we do it on the IRC meeting after that?

T.




On 2013/12/09 11:20, Jaromir Coufal wrote:

Hello everybody,

I just started an etherpad with various names of concepts in Tuskar.
It is important to get all of us on the same page, so the usage and
discussions around Tuskar concepts are clear and easy to use (also for
users, not just contributors!).

https://etherpad.openstack.org/tuskar-naming

Keep in mind, that we will use these names in API, CLI and UI as well,
so they should be as descriptive as possible and not very long or
difficult though.

Etherpad is not the best tool for mark up, but I did my best. Each
concept which needs name is bold and is followed with bunch of bullets
- description, suggestion of names, plus discussion under each
suggestion, why yes or not.

Name suggestions are in underlined italic font.

Feel free to add & update & discuss anything in the document, because
I might have forgotten bunch of stuff.

Thank you all and follow the etherpad :)
-- Jarda




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Meeting agenda for Tue 10th September at 19:00 UTC

2013-09-11 Thread Tomas Sedovic

The meeting happened.

You can read the notes:

http://eavesdrop.openstack.org/meetings/tuskar/2013/tuskar.2013-09-10-19.00.html

or the full IRC log if you're so inclined:

http://eavesdrop.openstack.org/meetings/tuskar/2013/tuskar.2013-09-10-19.00.log.html

On 09/09/2013 05:34 PM, Tomas Sedovic wrote:

The Tuskar team holds a meeting in #openstack-meeting-alt, see

https://wiki.openstack.org/wiki/Meetings/Tuskar

The next meeting is on Tuesday 10th September at 19:00 UTC.

Current topics for discussion:

* Documentation
* Simplify development setup
* Tests
* Releases & Milestones
* Open discussion

If you have any other topics to discuss, please add them to the wiki.

Thanks,
shadower

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] Meeting agenda for Tue 10th September at 19:00 UTC

2013-09-09 Thread Tomas Sedovic

The Tuskar team holds a meeting in #openstack-meeting-alt, see

https://wiki.openstack.org/wiki/Meetings/Tuskar

The next meeting is on Tuesday 10th September at 19:00 UTC.

Current topics for discussion:

* Documentation
* Simplify development setup
* Tests
* Releases & Milestones
* Open discussion

If you have any other topics to discuss, please add them to the wiki.

Thanks,
shadower

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] Weekly IRC meetings

2013-09-05 Thread Tomas Sedovic

Hey everyone,

after much wrangling, we've come to something resembling a consensus: 
the weekly IRC meetings will be held on Tuesdays 19:00 UTC. This should 
accommodate the US folks (who always have it easy), and both lifeless 
and devs in Europe.


The details are documented here:

https://wiki.openstack.org/wiki/Meetings#Tuskar_meeting

I'll send out the agenda for the first meeting later (at most 24 hours 
before the meeting starts).


Three cheers for timezones!

--
Tomas Sedovic
Tuskar PTL

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tuskar PTL candidacy

2013-08-22 Thread Tomas Sedovic

On 08/22/2013 03:38 PM, Don Schenck wrote:

Help a newbie: PTL??


PTL = Project Technical Lead.

Every OpenStack project has one. Their responsibilities are mostly to 
make sure the release goes out smoothly, setting milestones for 
blueprints, attending the release meetings, etc.


They're elected after every release from a pool of self-nominees.

https://wiki.openstack.org/wiki/PTLguide



-Original Message-
From: Tomas Sedovic [mailto:tsedo...@redhat.com]
Sent: Thursday, August 22, 2013 8:55 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Tuskar PTL candidacy

I would like to nominate myself for the role of Tuskar PTL.

While not senior by any stretch of imagination, I did QA automation for three 
years and development for another three (professionally; I've been coding on my 
own since I was 13) so I'm familiar with both sides of the table.

I've helped with the development of the Heat project during its first six 
months and got one of the early releases out the door (pre-incubation iirc). 
I'm familiar with the OpenStack processes, I'd attended weekly Heat meetings 
and been helping my colleagues with the transition from a Ruby on Rails github 
workflow to the OpenStack way of development.

I'll freely admit I'm not the most knowledgeable person around. But I'm not 
afraid to ask questions or admit being wrong and I tend to learn quickly.

Apart from more features and stability, I'll push Tuskar towards clear 
documentation and friendliness to newcomers.

--
Tomas Sedovic

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Tuskar project and PTL nominations

2013-08-21 Thread Tomas Sedovic

Hi everyone,

We would like to announce Tuskar, an OpenStack management service.

Our goal is to provide an API and UI to install and manage OpenStack at 
larger scale: where you deal with racks, different hardware classes for 
different purposes (storage, memory vs. cpu-intensive compute), the 
burn-in process, monitoring the HW utilisation, etc.


Some of this will overlap with TripleO, Ceilometer and possibly other 
projects. In that case, we will work with the projects to figure out the 
best place to fix rather than duplicating effort and playing in our own 
sandbox.



Current status:

There's a saying that if you're not embarrassed by your first release, 
you've shipped too late.


I'm happy to say, we are quite embarrassed :-)

We've got a prototype that allows us to define different hardware 
classes and provision the racks with the appropriate images, then add 
new racks and have them provisioned.


We've got a Horizon dashboard plugin that shows the general direction we 
want to follow and we're looking into integrating Ceilometer metrics and 
alarms.


However, we're still tossing around different ideas and things are very 
likely to change.


Our repositories are on Stackforge:

https://github.com/stackforge/tuskar
https://github.com/stackforge/python-tuskarclient
https://github.com/stackforge/tuskar-ui

And we're using Launchpad to manage our bugs and blueprints:

https://launchpad.net/tuskar
https://launchpad.net/tuskar-ui

If you want to talk to us, pop in the #tuskar IRC channel on Freenode or 
send an email to openstack-...@lists.launchpad.net with [Tuskar] in 
the subject.



PTL:

Talking to OpenStack developers, we were advised to elect the PTL early.

Since we're nearing the end of the Havana cycle, we'll elect the PTL for 
a slightly longer term -- the rest of Havana and throughout Icehouse. 
The next election will coincide with those of the official OpenStack 
projects.


If you are a Tuskar developer and want to nominate yourself, please send 
an email to openstack-...@lists.launchpad.net with subject Tuskar PTL 
candidacy.


The self-nomination period will end on Monday, 26th August 2013, 23:59 UTC.


--
Tomas Sedovic

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Tuskar project and PTL nominations

2013-08-21 Thread Tomas Sedovic

On 08/21/2013 02:32 PM, Tomas Sedovic wrote:

Hi everyone,

We would like to announce Tuskar, an OpenStack management service.

Our goal is to provide an API and UI to install and manage OpenStack at
larger scale: where you deal with racks, different hardware classes for
different purposes (storage, memory vs. cpu-intensive compute), the
burn-in process, monitoring the HW utilisation, etc.

Some of this will overlap with TripleO, Ceilometer and possibly other
projects. In that case, we will work with the projects to figure out the
best place to fix rather than duplicating effort and playing in our own
sandbox.


Current status:

There's a saying that if you're not embarrassed by your first release,
you've shipped too late.

I'm happy to say, we are quite embarrassed :-)

We've got a prototype that allows us to define different hardware
classes and provision the racks with the appropriate images, then add
new racks and have them provisioned.

We've got a Horizon dashboard plugin that shows the general direction we
want to follow and we're looking into integrating Ceilometer metrics and
alarms.

However, we're still tossing around different ideas and things are very
likely to change.

Our repositories are on Stackforge:

https://github.com/stackforge/tuskar
https://github.com/stackforge/python-tuskarclient
https://github.com/stackforge/tuskar-ui

And we're using Launchpad to manage our bugs and blueprints:

https://launchpad.net/tuskar
https://launchpad.net/tuskar-ui

If you want to talk to us, pop in the #tuskar IRC channel on Freenode or
send an email to openstack-...@lists.launchpad.net with [Tuskar] in
the subject.


A typo in the mailing list name, sorry. I meant:

openstack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




PTL:

Talking to OpenStack developers, we were advised to elect the PTL early.

Since we're nearing the end of the Havana cycle, we'll elect the PTL for
a slightly longer term -- the rest of Havana and throughout Icehouse.
The next election will coincide with those of the official OpenStack
projects.

If you are a Tuskar developer and want to nominate yourself, please send
an email to openstack-...@lists.launchpad.net with subject Tuskar PTL
candidacy.

The self-nomination period will end on Monday, 26th August 2013, 23:59 UTC.


--
Tomas Sedovic

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Update on the realtime-communication blueprint

2013-06-24 Thread Tomas Sedovic

All,

Presenting a new version of the proof of concept for the 
realtime-communication Horizon blueprint[0]. Here's the code:


https://review.openstack.org/#/q/status:open+project:openstack/horizon+branch:master+topic:bp/realtime-communication,n,z

Following the previous discussion, this iteration listens to the 
OpenStack notifications and passes them using websocket (or one of the 
fallbacks) to the connected browsers. That means adding oslo-incubator 
as a dependency (the RPC bits of it at least).


The code grew a bit more complex, because as far as I can see, the RPC 
helpers in oslo-incubator are deeply intertwined with eventlet, but 
there is no suitable web realtime library built on top of eventlet*.


The available solutions are built on top of gevent or tornado, both of 
which come with their own event loops. In order to both receive 
notifications and push them to the browsers, we need two separate 
threads, one for each event loop.


We're only talking about 150 lines of code and I don't expect it to 
grow much beyond that, so I'm not really worried about that now. But if 
there are better solutions, please share them.


The original PoC was built on top of gevent and gevent-socketio, neither 
of which is likely to be Python3-compatible any time soon. Since that's 
a requirement for new dependencies, I've switched to Tornado[1] and 
SockJS-tornado[2] that are both compatible with Python 3.
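
To give an idea of the browser-facing half, it's roughly this shape (a
stripped-down sketch rather than the actual code -- see the review for
the real thing; the names here are illustrative):

    import json
    import threading

    from sockjs.tornado import SockJSConnection, SockJSRouter
    import tornado.ioloop
    import tornado.web


    class NotificationConnection(SockJSConnection):
        """Push OpenStack notifications to every connected browser."""

        clients = set()

        def on_open(self, info):
            self.clients.add(self)

        def on_close(self):
            self.clients.discard(self)

        @classmethod
        def broadcast(cls, payload):
            # Must be called on the Tornado IOLoop thread.
            for client in cls.clients:
                client.send(json.dumps(payload))


    def start_websocket_server(port=8080):
        router = SockJSRouter(NotificationConnection, '/notifications')
        tornado.web.Application(router.urls).listen(port)
        loop = tornado.ioloop.IOLoop.instance()
        # The eventlet-based notification listener runs in the other
        # thread and hands events over with:
        #   loop.add_callback(NotificationConnection.broadcast, payload)
        threading.Thread(target=loop.start).start()
        return loop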


I didn't add the dependencies to openstack/requirements[3] yet but if 
you are fine with this stack, I shall do so and we can start thinking 
about merging this.


To try it out, please run the server:

./horizon-realtime.py --config-file etc/horizon/horizon-realtime.conf

navigate to `http://localhost:8080/project/instances` (you can verify 
the WebSocket connection in the JavaScript console) and then launch an 
instance out of band (e.g. via the nova command line). The webpage 
should update automatically when you add, suspend, resume or delete the 
instance.


T.

[0]: https://blueprints.launchpad.net/horizon/+spec/realtime-communication
[1]: http://www.tornadoweb.org/en/stable/
[2]: https://github.com/mrjoes/sockjs-tornado
[3]: https://github.com/openstack/requirements

* eventlet ships with a WebSocket implementation, but it lacks the 
fallback transports, etc. that socket.io and SockJS provide.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev