Re: [openstack-dev] [Horizon] Less compiler dependency

2013-12-10 Thread Jiri Tomasek

On 12/09/2013 12:47 PM, Jaromir Coufal wrote:

Hey all Horizoners,

This is the last time I am trying to bring this concern up (well, at least 
the last time for a while :)). But...


Watching the latest progress with updating Bootstrap to v3 and dealing 
with compiling issues, I am more and more concerned about the dependency 
on lesscpy. Currently the library supports only some features of Less. 
It is a very small and very young project, with only one or two 
maintainers. We have already been waiting a couple of months to 
update to Bootstrap 3 because of this dependency. And these 
problems will not disappear once we update Bootstrap, because the 
library will support certain use cases, but as Horizon grows we 
will use more advanced features which the library will not 
cover. And we will be blocked by the same issue over and over again.


So I would like to ask everybody if we can reconsider this dependency 
and find some other alternative. I know we moved away from nodejs because 
it is a packaging nightmare. But honestly, it is better to invest more into 
packaging than to be blocked for months waiting for features we need to 
get in. If we find an alternative that works for both, it is a win for us. But 
the current situation is making me nervous.


Thanks
-- Jarda



Hi all,

So in the IRC discussion we agreed to try three approaches to resolve the 
problem:


1/ Dive into Lesscpy and help make it support Bootstrap 3 
(and gradually all Less features), and subsequently keep it up to date.


2/ Investigate using lessc with a JS engine other than nodejs (e.g. Rhino).

3/ Have separate production and development environments in Horizon, where 
the development environment includes nodejs, and release compiled CSS as well as the Less files. 
Styling customization would then require the user to recompile the 
stylesheets with their changes. On the other hand, we'd have nodejs present 
in the development environment and be able to use other tools that require it.



Jirka


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Jiri Tomasek

+1 for Tatiana Mazur to Horizon Core



On 12/10/2013 09:24 PM, Lyle, David wrote:

I would like to nominate Tatiana Mazur to Horizon Core. Tatiana has been a 
significant code contributor in the last two releases, understands the code 
base well, and has been doing a significant number of reviews for the last two 
milestones.


Additionally, I'd like to remove some members of Horizon-core who have 
been inactive since the early Grizzly release at the latest.
Devin Carlen
Jake Dahn
Jesse Andrews
Joe Heck
John Postlethwait
Paul McMillan
Todd Willey
Tres Henry
paul-tashima
sleepsonthefloor


Please respond with a +1/-1 by this Friday.

-David Lyle






Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-12 Thread Jiri Tomasek

On 12/11/2013 08:54 PM, Jay Dobies wrote:


So glad we're hashing this out now. This will save a bunch of 
headaches in the future. Good call pushing this forward.


On 12/11/2013 02:15 PM, Tzu-Mainn Chen wrote:

Hi,

I'm trying to clarify the terminology being used for Tuskar, which 
may be helpful so that we're sure
that we're all talking about the same thing :)  I'm copying responses 
from the requirements thread
and combining them with current requirements to try and create a 
unified view.  Hopefully, we can come
to a reasonably rapid consensus on any desired changes; once that's 
done, the requirements can be

updated.

* NODE - a physical, general-purpose machine capable of running in many 
roles. Some nodes may have a hardware layout that is particularly 
useful for a given role.


Do we ever need to distinguish between undercloud and overcloud nodes?


  * REGISTRATION - the act of creating a node in Ironic


DISCOVERY - The act of having nodes found auto-magically and added to 
Ironic with minimal user intervention.




  * ROLE - a specific workload we want to map onto one or more 
nodes. Examples include 'undercloud control plane', 'overcloud control

plane', 'overcloud storage', 'overcloud compute' etc.

  * MANAGEMENT NODE - a node that has been mapped with an 
undercloud role
  * SERVICE NODE - a node that has been mapped with an 
overcloud role
 * COMPUTE NODE - a service node that has been mapped to 
an overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped 
to an overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been 
mapped to an overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been 
mapped to an overcloud block storage role


  * UNDEPLOYED NODE - a node that has not been mapped with a 
role
   * another option - UNALLOCATED NODE - a node that has 
not been allocated through nova scheduler (?)
- (after reading lifeless's 
explanation, I agree that allocation may be a
   misleading term under TripleO, 
so I personally vote for UNDEPLOYED)


Undeployed still sounds a bit odd to me when paired with the word 
role. I could see deploying a workload bundle or something, but a 
role doesn't feel like a tangible thing that is pushed out somewhere.


Unassigned? As in, it hasn't been assigned a role yet.

  * INSTANCE - A role deployed on a node - this is where work 
actually happens.


I'm fine with instance, but the phrasing "a role deployed on a 
node" feels odd to me in the same way undeployed does. Maybe a 
slight change to "A node that has been assigned a role", but that also 
may be me being entirely too nit-picky.


To put it in context, on a scale of 1-10, my objection to this and 
undeployed is around a 2, so don't let me come off as strenuously 
objecting.



* DEPLOYMENT

  * SIZE THE ROLES - the act of deciding how many nodes will need 
to be assigned to each role

* another option - DISTRIBUTE NODES (?)
  - (I think the former is more 
accurate, but perhaps there's a better way to say it?)


  * SCHEDULING - the process of deciding which role is deployed 
on which node


I know this derives from a Nova term, but to me, the idea of 
scheduling carries a time-in-the-future connotation to it. The 
interesting part of what goes on here is the assignment of which roles 
go to which instances.


  * SERVICE CLASS - a further categorization within a service 
role for a particular deployment.


I don't understand this one, can you add a few examples?


See wireframes [1], page 19, which shows "Compute Nodes", the default 
service class. The box below, "Create New Compute Class", serves for creating 
a new service class. Nodes in service classes are differentiated by 
node profiles.


[1] http://people.redhat.com/~jcoufal/openstack/tripleo/2013-12-03_tripleo-ui_03-deployment.pdf




   * NODE PROFILE - a set of requirements that specify what 
attributes a node must have in order to be mapped to

a service class


Even without knowing what service class is, I like this one.  :)




Does this seem accurate?  All feedback is appreciated!

Mainn


Thanks again :D






Jirka

Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-16 Thread Jiri Tomasek

On 12/16/2013 03:32 PM, Jaromir Coufal wrote:

On 2013/16/12 14:03, Matthias Runge wrote:

On 12/13/2013 03:08 PM, Ladislav Smola wrote:

Horizoners,

As discussed in TripleO and Horizon meetings, we are proposing to move
Tuskar UI under the Horizon umbrella. Since we are building our UI
solution on top of Horizon, we think this is a good fit. It will allow
us to get feedback and reviews from the appropriate group of 
developers.



I don't think we really disagree here.

My main concern would be more: what do we get if we make up another
project under the umbrella of Horizon? I mean, what does that mean at 
all?


My proposal would be to send patches directly to Horizon. As discussed
in last week's Horizon meeting, Tuskar UI would become integrated in
Horizon, but disabled by default. This would enable a faster integration
in Horizon and would reduce the overhead of creating a separate
repository, installation instructions, packaging, etc.

From the Horizon side: we would get some new contributors (and hopefully
reviewers), which is very much appreciated.

Matthias


This is an important note. From the information architecture and user 
interaction point of view, I don't think it makes sense to keep all 
three tabs visible together (Project, Admin, Infrastructure). 
There are a lot of reasons, but the main points are:


* Infrastructure itself is an undercloud concept running in a different 
instance of Horizon.


* Users dealing with deployment and infrastructure management are not 
the users of the OpenStack UI / Dashboard. It is a different set of users, 
so it doesn't make sense to have a giant application which provides 
each and every possible feature. I think we need to keep focused.


So by default, I would say that either the Project + Admin tabs should exist 
together, or Infrastructure, but never all three together. So when 
Matthias says 'disabled by default', I would mean completely hidden from the 
user; if the user wants to use infrastructure management, they can enable 
it in a different Horizon instance, but it will be the only visible tab 
for them. So it will be a sort of separate application, but still running 
on top of Horizon.


-- Jarda



Thanks for pointing this out. In Horizon you can easily decide which 
dashboards to show, so the infrastructure management Horizon instance 
can have the Project and Admin dashboards disabled.
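
For illustration, this is roughly how a deployment can pick the visible dashboards in its settings (a minimal sketch; the exact setting names depend on the Horizon release, so treat it as an assumption rather than the definitive configuration):

    # local_settings.py sketch for an infrastructure-only Horizon instance
    # (illustrative; exact keys depend on the Horizon release in use)
    HORIZON_CONFIG = {
        # Only the Infrastructure dashboard is registered for this instance;
        # Project and Admin are simply left out of the tuple.
        'dashboards': ('infrastructure', 'settings'),
        'default_dashboard': 'infrastructure',
    }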


I think it has been discussed that some panels of the Admin dashboard 
will be required for infrastructure management. We can solve this by 
adding those selected Admin panels to the Infrastructure dashboard as well.


Jirka



Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Jiri Tomasek

On 12/19/2013 08:58 AM, Matthias Runge wrote:

On 12/18/2013 10:33 PM, Gabriel Hurley wrote:


Adding developers to Horizon Core just for the purpose of reviewing
an incubated umbrella project is not the right way to do things at
all.  If my proposal of two separate groups having the +2 power in
Gerrit isn't technically feasible then a new group should be created
for management of umbrella projects.

Yes, I totally agree.

Having two separate projects with separate cores should be possible
under the umbrella of a program.

Tuskar differs somewhat from other projects to be included in Horizon,
because other projects contributed a view on their specific feature.
Tuskar provides an additional dashboard and talks to several APIs
below. It's something like a separate dashboard to be merged here.

When having both under the Horizon program umbrella, my concern is that
both projects wouldn't be coupled as tightly as I would like.

Esp. I'd love to see an automatic merge of horizon commits to a
(combined) tuskar and horizon repository, thus making sure, tuskar will
work in a fresh (updated) horizon environment.


Please correct me if I am wrong, but I think this is not an issue. 
Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork we 
create a symlink to the local tuskar-ui clone, and to run Horizon with 
Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI runs 
on the latest version of Horizon (if you pull regularly, of course).




Matthias



Jirka



Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 & Sass support)

2014-02-06 Thread Jiri Tomasek

Hey,

Switching to Sass/Compass seems to me like a nice idea. Although, reading 
the Compass docs on using it in Django/Python projects [1], they recommend 
serving compiled CSS as the output for production, so the production 
servers don't have to carry Ruby/Compass gem dependencies.

Also, in Django project development you need to run compass --watch if 
you want SCSS to compile automatically, and developers need to install a 
Ruby environment with the necessary gems.


Switching to sass/compass is a good thing as it resolves the issue with 
nodejs dependency for less and also brings compass goodness into play. I 
think this solution is a bit rough for a python/django developer though.


Independently of whether we choose to stick with Less or change to Sass, 
we'll still need to add a dependency (nodejs or Ruby). What we need to 
consider is whether we want to compile CSS in production or not.


The recently mentioned solution of separating CSS and JS into a separate 
project that outputs compiled JS and CSS comes into play here. The problem I 
see with Sass/Compass in that case is that we'll probably need a nodejs 
dependency for JS tools like Bower, Grunt, JS test suites, etc., and with 
Sass/Compass we'd need an additional Ruby dependency.


Jirka



[1] http://compass-style.org/blog/2011/05/09/compass-django/


On 02/05/2014 08:23 PM, Gabriel Hurley wrote:

I would imagine the downstream distros won't have the same problems with Ruby 
as they did with Node.js from a dependency standpoint, though it still doesn't 
jive with the community's all-Python bias.

My real concern, though, is anyone who may have extended the Horizon 
stylesheets using the capabilities of LESS. There are lots of ways you can 
customize the appearance of Horizon, and some folks may have gone that route.

My recommended course of action would be to think deeply on some recommended ways of 
upgrading from LESS to SASS for existing deployments who may have written 
their own stylesheets. Treat this like a feature deprecation (which is what it is).

Otherwise, if it makes people's lives better to use SASS instead of LESS, it 
sounds good to me.

 - Gabriel


-Original Message-
From: Jason Rist [mailto:jr...@redhat.com]
Sent: Wednesday, February 05, 2014 9:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from
Less to Sass (Bootstrap 3 & Sass support)

On Wed 05 Feb 2014 09:32:54 AM MST, Jaromir Coufal wrote:

Dear Horizoners,

in last days there were couple of interesting discussions about
updating to Bootstrap 3. In this e-mail, I would love to give a small
summary and propose a solution for us.

As Bootstrap was heavily dependent on Less, when we got rid of node.js
we started to use lesscpy. Unfortunately because of this change we
were unable to update to Bootstrap 3. Fixing lesscpy looks problematic
- there are issues with supporting all use-cases and even if we fix
this in some time, we might challenge these issues again in the future.

There is great news for Bootstrap. It started to support Sass [0].
(Thanks Toshi and MaxV for highlighting this news!)

Thanks to this step forward, we might get out of our lesscpy issues by
switching to Sass. I am very happy with this possible change, since
Sass is more powerful than Less and we will be able to update our
libraries without any constraints.

There are few downsides - we will need to change our Horizon Less
files to Sass, but it shouldn't be very big deal as far as we
discussed it with some Horizon folks. We can actually do it as a part
of Bootstrap update [1] (or CSS files restructuring [2]).

Other concern will be with compilers. So far I've found 3 ways:
* rails dependency (how big problem would it be?)
* https://pypi.python.org/pypi/scss/0.7.1
* https://pypi.python.org/pypi/SassPython/0.2.1
* ... (other suggestions?)

Nice benefit of Sass is, that we can use advantage of Compass
framework [3], which will save us a lot of energy when writing (not
just cross-browser) stylesheets thanks to their mixins.

When we discussed on IRC with Horizoners, it looks like this is good
way to go in order to move us forward. So I am here, bringing this
suggestion up to whole community.

My proposal for Horizon is to *switch from Less to Sass*. Then we can
unblock our already existing BPs, get Bootstrap updates and include
Compass framework. I believe this is all doable in Icehouse timeframe
if there are no problems with compilers.

Thoughts?

-- Jarda

[0] http://getbootstrap.com/getting-started/
[1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
[2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
[3] http://compass-style.org/


I think this is a fantastic idea. Having no experience with Less, but seeing that it is troublesome - if 

Re: [openstack-dev] [Horizon] Bootstrap 3 update and problems with lesscpy

2013-10-22 Thread Jiri Tomasek

Hi,

I have updates on my work towards getting Horizon updated to Bootstrap 3.

I have finished the Bootstrap 3 update for Horizon using the old lessc 
compiler to review the work, and I have created two versions:
1. a patch that uses the original lessc library, to be able to review 
Bootstrap 3 in current Horizon [1]
2. a patch that uses Lesscpy but does not compile the CSS properly - for 
reviewing the compilation issues [2]

(see commit messages for details)

I marked these as work in progress because of lesscpy problems but 
please feel free to have a look and give feedback.


So what remains is to get lesscpy up to speed with Bootstrap 3. 
Sascha Peilicke created a fix to support semicolons in mixin 
arguments [3] some time ago, but it still needs to get into the original 
repository as a pull request. The other reported issues are still valid. 
The biggest issue at the moment seems to be an @media declaration including 
a variable [4]. I have tried to have a look at this, but my solution is 
probably not the best... [5]. I have contacted Sascha regarding this and so 
far I am waiting for a response.


Any feedback or help with getting Lesscpy fixed is highly welcome.

[1] https://review.openstack.org/#/c/49710/
[2] https://review.openstack.org/#/c/49712/
[3] https://github.com/saschpe/Lesscpy/commits/master
[4] https://github.com/robotis/Lesscpy/issues/18
[5] https://github.com/jtomasek/Lesscpy/commits/var_in_media

Thanks

Jirka


On 09/19/2013 01:44 AM, Gabriel Hurley wrote:

I'm also strongly against reverting the move to lesscpy. As David said, that 
change was highly-requested by the downstream distros and other folks packaging 
Horizon in various ways.

Since there's no evidence that lesscpy does not intend to support bootstrap 3 
in a reasonable timeframe, reverting the patch in the interim would simply be 
impatience. The better thing to do as a member of the larger open source 
community is to contribute your own energy to lesscpy and to help them improve 
their project in a timely manner. I'm glad to hear that Sasha is already 
working on that. I'm sure they're happy for the assistance and for having their 
work utilized by a significant project like Horizon.

We'll get to bootstrap 3, but not by undoing work we've already done.

Please keep us all updated on the progress upstream, I know I for one look 
forward to seeing the benefits we can derive from the newer bootstrap code.

 - Gabriel


-Original Message-
From: Lyle, David (Cloud Services) [mailto:david.l...@hp.com]
Sent: Wednesday, September 18, 2013 8:44 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Horizon] Bootstrap 3 update and problems
with lesscpy

Right now, master in Horizon is still working toward Havana-rc1.  We are still
likely more than a week away from master moving to Icehouse-1.  As this is
the case, reverting a highly desired Havana change to address a blueprint for
Icehouse that can be addressed properly upstream in lesscpy does not seem
like a good course of action.  I understand the amount of work involved in
updating Bootstrap, but our goal should be to properly resolve the conflict
once we are working on Icehouse.

-David

On Wednesday, September 18, 2013 6:27 AM Jiri Tomasek
[mailto:jtoma...@redhat.com] wrote:


Hi all,
I've started working on updating Bootstrap to version 3 in Horizon.

https://blueprints.launchpad.net/horizon/+spec/bootstrap-update


As I have described in blueprint whiteboard, I am experiencing compile

problems with the new lesscpy compiler that we started using recently. The
compiled css code is incorrect and when running the compilation from
terminal, about 200 syntax errors occur. This is related to certain features of
Less not being supported by lesscpy. I have created a GitHub issue for
lesscpy here: https://github.com/robotis/Lesscpy/issues/22 .


Sasha Peilicke has already started working on updating the lesscpy library to

support all less features needed to compile Bootstrap 3 properly. Although I
think that it will take more than a few weeks before lesscpy is there where
we need it.


I have part of Bootstrap 3 update ready and as it is quite a large patch I

would like to get this in as soon as possible because any rebase to a new
Horizon master is quite tedious process. Also there are another blueprints
that depend on this update (font-icons and css-breakdown, see dependency
tree).


So I would like to propose to revert the patch that introduces lesscpy library

(a0739c9423 Drop NodeJS dependency in favor of pure-python lesscpy) and
use the lessc library for the time being until lesscpy is capable of compiling
Bootstrap 3.


I have revert patch ready together with update of lessc library in

horizon/bin, which I can make part of Bootstrap-update blueprint and send
them right away to gerrit for a review. I have also tested that with this 
setup the Bootstrap 3 updated Horizon less file compiles properly.


When

Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Jiri Tomasek

On 11/07/2013 08:25 AM, Daniel P. Berrange wrote:

On Thu, Nov 07, 2013 at 12:21:38AM +, Day, Phil wrote:

Leaving a mark.
===

You review a change and see that it is mostly fine, but you feel that since you
did so much work reviewing it, you should at least find
*something* wrong. So you find some nitpick and -1 the change just so that
they know you reviewed it.

This is quite obvious. Just don't do it. It's OK to spend an hour reviewing
something and then leave no comments on it, because it's simply fine, or
because we had no means to test something (see the first pattern).



Another one that comes into this category is adding a -1 which just says I 
agree with
the other -1's in here.   If you have some additional perspective and can 
expand on
it then that's fine - otherwise it adds very little and is just review count 
chasing.

I don't think that it is valueless as you describe. If multiple people
add '-1' with the same comments, then as a core reviewer I will
consider that initial -1 to be a much stronger nack than if only one person
had added the -1. So I welcome people adding "I agree with blah" to any
review.


It's an unfortunate consequence of counting and publishing review stats that 
having such a measure will inevitably also drive behaviour.

IMHO what this shows is not people trying to game the stats, but rather the
inadequacy of gerrit. We don't have any way to distinguish a -1 minor nice
to have nitpick from a -1 serious code issue that is a must-fix. Adding
a -2 is really too heavyweight because it is sticky, and implies do not
ever merge this.

It would be nice to be able to use '-2' for a serious must-fix issue without
it being sticky, and have a separate way for core reviewers to put a review
into a "blocked from being merged indefinitely" state - perhaps a new state
would be more useful, e.g. a Blocked state, to add to New, Open, Merged,
Abandoned.

Daniel
The comment describing the -1 should be enough to distinguish between a 
minor nitpick and a serious code issue, IMHO. If it is a serious issue, 
other reviewers also giving a -1 to confirm the issue is probably a good 
thing. (Not with the minor nit, though...)


Jirka



Re: [openstack-dev] [Horizon] Introduction of AngularJS in membership workflow

2013-11-13 Thread Jiri Tomasek

Hi,

I'd like to point out that our main intent should be to use mostly 
AngularJS's directives feature. As Jordan mentions, a directive is a 
self-contained, reusable item that is initialized on an HTML element 
(see line 6 in [2]); you can pass it variables that the Django template 
has available. Then Angular takes over and replaces the HTML element with 
the template that belongs to the directive. The business logic is taken 
care of by a controller that is also assigned to the directive. The 
directive can get data either from the variables passed to the HTML 
element or, better, through a service injected into the controller. 
This service fetches data asynchronously from our API.

In our patch we are getting data using the current membership code, which 
brings data from a hidden form. Maintaining the synchronization between the 
directive and the form involves quite a lot of code. Once we have an API on 
the Django side that serves the data for the membership component as JSON, 
the membership directive code will be reduced by a good amount.


Reading back over yesterday's Horizon meeting, there was some confusion 
about the "compile phase". The compile phase in Angular does not have much 
to do with javascript compilation/minification; it is the phase in AngularJS 
when the compiler parses the template and instantiates directives and 
expressions. 
(http://www.benlesh.com/2013/08/angular-compile-how-it-works-how-to-use.html)


Jirka

On 11/11/2013 08:21 PM, Jordan OMara wrote:

Hello Horizon!

On November 11th, we submitted a patch to introduce AngularJS into
Horizon [1]. We believe AngularJS adds a lot of value to Horizon.

First, AngularJS allows us to write HTML templates for interactive
elements instead of doing jQuery-based DOM manipulation. This allows
the JavaScript layer to focus on business logic, provides easy to
write JavaScript testing that focuses on the concern (e.g. business
logic, template, DOM manipulation), and eases the on-boarding for new
developers working with the JavaScript libraries.
Second, AngularJS is not an all or nothing solution and integrates
with the existing Django templates. For each feature that requires
JavaScript, we can write a self-contained directive to handle the DOM,
a template to define our view and a controller to contain the business
logic. Then, we can add this directive to the existing template. To
see an example in action look at _workflow_step_update_member.html
[2]. It can also be done incrementally - this isn't an all-or-nothing
approach with a massive front-end time investment, as the Angular
components can be introduced over time.

Finally, the initial work to bring AngularJS to Horizon provides a
springboard to remove the DOM Database (i.e. hidden-divs) used on
the membership page (and others). Instead of abusing the DOM, we can
instead expose an API for membership data, add an AngularJS resource
(i.e. reusable representation of API entities) for the API. The data
can then be loaded data asynchronously and allow the HTML to focus on
expressing a semantic representation of the data to the user.
Please give our patch a try! You can find the interactions on
Domains/Groups, Flavors/Access (this form does not seem to work in
current master or on my patch) and Projects/Users & Groups. You should
notice that it behaves... exactly the same!

We look forward to your feedback.

Jordan O'Mara & Jirka Tomasek

[1] [https://review.openstack.org/#/c/55901/] [2] 
[https://github.com/jsomara/horizon/blob/angular2/horizon/templates/horizon/common/_workflow_step_update_members.html]





Re: [openstack-dev] [Horizon] Use icon set instead of instance Action

2013-11-15 Thread Jiri Tomasek

  
  
Hi,

you might also want to bring up this proposal on the OpenStack UX Askbot [1], 
which is agreed to be the place to discuss OpenStack UX issues. 
This particular feature would probably depend on using font icons, 
which is dealt with in this blueprint [2].

[1] http://ask-openstackux.rhcloud.com
[2] https://blueprints.launchpad.net/horizon/+spec/font-icons

Jirka
  
On 11/15/2013 12:41 PM, Garry Chen wrote:

Hi all,

Would you consider changing the drop-down action list of an instance to 
some common button icon set? Some actions like run, pause, restart, 
shutdown, terminate, and even "create Snapshot", could use a button 
instead, like the picture below (screenshot not included).

I think an icon set may give a better user experience.

Regards,
Garry
  
  
  
  



  



[openstack-dev] [TripleO] Horizon based UI Updates in Tuskar-UI

2013-11-19 Thread Jiri Tomasek

Hi,

As Horizon is currently undergoing a set of user interface changes, some 
effort will be needed to get Tuskar-UI up to date with the new Horizon UI 
structure and features.

The changes are coming mostly from the update of the Twitter Bootstrap 
framework to the latest version (3.0) and the introduction of the AngularJS 
javascript frontend framework. These two updates are in the review phase and 
are planned for Icehouse.

To adopt these changes I have identified the following tasks:

Horizon based UI Updates in Tuskar-UI:

- Get up to date with Bootstrap 3:
  - update templates through the application to match Bootstrap 3 markup
  - get Forms to use django-bootstrap-form
  - update custom less code to match Bootstrap 3 styling
  - update javascript to use Bootstrap 3 features

- Start using AngularJS Directives with our javascript components
  - update graphs js code to use AngularJS Directives
  - update UI code as Horizon transfers to AngularJS if needed; most of 
the AngularJS innovations should not require code updates on the Tuskar-UI side

There is also a plan to support icon fonts as a solution for icons across 
Horizon applications. This might require adding a custom icon font if 
Tuskar-UI requires icons that are not present in the basic set that Horizon 
provides.



Related Horizon blueprints:

Update Twitter Bootstrap to version 3
https://blueprints.launchpad.net/horizon/+spec/bootstrap-update

Introduce AngularJS as Frontend Javascript Framework
https://blueprints.launchpad.net/horizon/+spec/angularjs-javascript-frontend-framework

Change current bitmap icons for font icons
https://blueprints.launchpad.net/horizon/+spec/font-icons


Jirka


Re: [openstack-dev] [horizon] Javascript development improvement

2013-11-21 Thread Jiri Tomasek

Hi,

I also don't see an issue with using nodejs in the Horizon development 
environment. Is the problem that Django does not differentiate between 
development and production environments by default?
Could the problem be resolved by having two different environments with 
two requirements files etc., similar to what Rails does?
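
To make the idea more concrete, here is a minimal sketch of the kind of Django settings split I mean (hypothetical module names; Horizon's actual settings layout differs):

    # settings/dev.py - hypothetical development-only settings overlay
    from .base import *  # noqa: assumes shared settings live in settings/base.py

    DEBUG = True

    # Development-only tooling (anything relying on nodejs, extra linters,
    # debug helpers) would be enabled here, keeping the production settings
    # and its requirements file free of those dependencies.
    INSTALLED_APPS = INSTALLED_APPS + ('debug_toolbar',)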


Regarding less, I don't really care what compiler we use as long as it 
works. And if we need to provide uncompiled less for production, then 
let's use Lesscpy.


Jirka

On 11/21/2013 09:21 AM, Ladislav Smola wrote:

Hello,

as long as node won't be Production dependency, it shouldn't be a 
problem, right? I give +1 to that


Regards
Ladislav

On 11/20/2013 05:01 PM, Maxime Vidori wrote:
Hi all, I know it is pretty annoying but I have to resurrect this 
subject.


With the integration of Angularjs into Horizon we will encounter a 
lot of issues with javascript. I ask you to reconsider to bring back 
Nodejs as a development platform. I am not talking about production, 
we all agree that Node is not ready for production, and we do not 
want it as a backend. But the facts are that we need a lot of its 
features, which will increase the tests and the development. 
Currently, we do not have any javascript code quality: jslint is a 
great tool and can be used easily into node. Angularjs also provides 
end-to-end testing based on nodejs again, testing is important 
especially if we start to put more logic into JS. Selenium is used 
just to run qUnit tests, we can bring back these tests into node and 
have a clean unified testing platform. Tests will be easier to perform.


Finally (do not punch me in the face), lessc, which is used for 
Bootstrap, is completely integrated into it. I am afraid that modern 
javascript development cannot be performed without this tool.


Regards

Maxime Vidori




Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Jiri Tomasek

Hi,

As the development of Tuskar-UI has somewhat stagnated recently, I have been 
focusing more on the Horizon project lately to get the features we need for 
Tuskar-UI. I acknowledge that I haven't been paying enough attention to 
reviews in TripleO; the statistics say it all. However, as the 
development of Tuskar-UI is about to pick up rapidly, it would be nice to 
be able to give +2's here. I'll try to get up to speed with TripleO 
along with the upcoming Tuskar-UI changes.


Jirka


On 12/04/2013 08:12 AM, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm going
to throw in some boilerplate here for a few more editions... - I'm
going to talk about stats here, but they
are only part of the picture : folk that aren't really being /felt/ as
effective reviewers won't be asked to take on -core responsibility,
and folk who are less active than needed but still very connected to
the project may still keep them : it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

Our merger with Tuskar has now had plenty of time to bed down; folk
from the Tuskar project who have been reviewing widely within TripleO
for the last three months are not in any way disadvantaged vs previous
core reviewers when merely looking at the stats; and they've had three
months to get familiar with the broad set of codebases we maintain.

90 day active-enough stats:

+------------------+-----------------------------------------+----------------+
| Reviewer         | Reviews   -2   -1   +1   +2   +A  +/- % | Disagreements* |
+------------------+-----------------------------------------+----------------+
| lifeless **      |     521   16  181    6  318  141  62.2% |  16 (  3.1%)   |
| cmsj **          |     416    1   30    1  384  206  92.5% |  22 (  5.3%)   |
| clint-fewbar **  |     379    2   83    0  294  120  77.6% |  11 (  2.9%)   |
| derekh **        |     196    0   36    2  158   78  81.6% |   6 (  3.1%)   |
| slagle **        |     165    0   36   94   35   14  78.2% |  15 (  9.1%)   |
| ghe.rivero       |     150    0   26  124    0    0  82.7% |  17 ( 11.3%)   |
| rpodolyaka       |     142    0   34  108    0    0  76.1% |  21 ( 14.8%)   |
| lsmola **        |     101    1   15   27   58   38  84.2% |   4 (  4.0%)   |
| ifarkas **       |      95    0   10    8   77   25  89.5% |   4 (  4.2%)   |
| jistr **         |      95    1   19   16   59   23  78.9% |   5 (  5.3%)   |
| markmc           |      94    0   35   59    0    0  62.8% |   4 (  4.3%)   |
| pblaho **        |      83    1   13   45   24    9  83.1% |  19 ( 22.9%)   |
| marios **        |      72    0    7   32   33   15  90.3% |   6 (  8.3%)   |
| tzumainn **      |      67    0   17   15   35   15  74.6% |   3 (  4.5%)   |
| dan-prince       |      59    0   10   35   14   10  83.1% |   7 ( 11.9%)   |
| jogo             |      57    0    6   51    0    0  89.5% |   2 (  3.5%)   |
+------------------+-----------------------------------------+----------------+

This is a massive improvement over last month's report. \o/ Yay. The
cutoff line here is pretty arbitrary - I extended a couple of rows
below one-per-work-day because Dan and Joe were basically there - and
there is a somewhat bigger gap to the next most active reviewer below
that.

About half of Ghe's reviews are in the last 30 days, and ~85% in the
last 60 - but he has been doing significant numbers of thoughtful
reviews over the whole three months - I'd like to propose him for
-core.
Roman has very

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2014-06-24 Thread Jiri Tomasek

On 06/20/2014 11:17 PM, Lyle, David wrote:

I would like to nominate Zhenguo Niu and Ana Krivokapic to Horizon core.

Zhenguo has been a prolific reviewer for the past two releases providing
high quality reviews. And providing a significant number of patches over
the past three releases.

Ana has been a significant reviewer in the Icehouse and Juno release
cycles. She has also contributed several patches in this timeframe to both
Horizon and tuskar-ui.

Please feel free to respond in public or private your support or any
concerns.

Thanks,
David




+1 to both, thanks for your hard work!

Jirka



Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-03 Thread Jiri Tomasek

On 05/29/2014 05:30 PM, Musso, Veronica A wrote:

Hello,

During the last Summit the use of AngularJS in Horizon was discussed, and there 
is the intention to make better use of it in the dashboards.
I think this blueprint could help: 
https://blueprints.launchpad.net/horizon/+spec/django-angular-integration, 
since it proposes the integration of Django-Angular 
(http://django-angular.readthedocs.org/en/latest/index.html).
I would like to know the community's opinion about it, as I could start its 
implementation.

Thanks!

Best Regards,
Verónica Musso

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thanks for bringing this up. We have been discussing including this lib 
before, and I think using its features would be beneficial. I'll have a 
broader look at it.




Re: [openstack-dev] [horizon][infra] Plan for the splitting of Horizon into two repositories

2014-06-04 Thread Jiri Tomasek

On 05/31/2014 11:13 PM, Jeremy Stanley wrote:

On 2014-05-29 20:55:01 + (+), Lyle, David wrote:
[...]

There are several more xstatic packages that horizon will pull in that are
maintained outside openstack. The packages added are only those that did
not have existing xstatic packages. These packages will be updated very
sparingly, only when updating say bootstrap or jquery versions.

[...]

I'll admit that my Web development expertise is probably almost 20
years stale at this point, so forgive me if this is a silly
question: what is the reasoning against working with the upstreams
who do not yet distribute needed Javascript library packages to help
them participate in the distribution channels you need? This strikes
me as similar to forking a Python library which doesn't publish to
PyPI, just so you can publish it to PyPI. When some of these
dependencies begin to publish xstatic packages themselves, do the
equivalent repositories in Gerrit get decommissioned at that point?
The standard way to publish javascript libraries these days is to publish 
minified javascript file(s) (usually in the dist part of the repository), and 
the standard way to include them in a project is to use nodejs tools such as 
Bower to list the JS dependencies and have them installed automatically.

In our case it is more convenient to use xstatic packages, which we have 
to create if someone hasn't done so already. I think it might happen 
that some of 'our' packages turn into the official ones.
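
For reference, an xstatic package is just a thin Python wrapper around the static files. Here is a rough sketch of the convention (field names vary slightly between packages and the wrapped library here is hypothetical, so treat this as illustrative only):

    # xstatic/pkg/example_lib/__init__.py - hypothetical wrapper for a JS library
    from os import path

    NAME = __name__.split('.')[-1]        # short name used by the XStatic tooling
    DISPLAY_NAME = 'Example-Lib'          # human-readable name of the wrapped library
    PACKAGE_NAME = 'XStatic-%s' % DISPLAY_NAME
    VERSION = '1.2.3'                     # version of the wrapped JS library
    BASE_DIR = path.join(path.dirname(path.abspath(__file__)), 'data')

    # Optional alternative locations (e.g. a CDN) the files can be served from
    LOCATIONS = {
        ('cdnjs', 'https'):
            'https://cdnjs.cloudflare.com/ajax/libs/example-lib/1.2.3/',
    }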


Jirka



Re: [openstack-dev] [horizon] mocking policy

2014-06-11 Thread Jiri Tomasek

On 06/10/2014 12:01 PM, Maxime Vidori wrote:

+1 for the use of mock.

Is mox3 really needed? Or can we move our tests for python3 to mock, and use 
that library for all tests on python3?

- Original Message -
From: David Lyle david.l...@hp.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, June 10, 2014 5:58:07 AM
Subject: Re: [openstack-dev] [horizon] mocking policy

I have no problem with this proposal.

David

On 6/4/14, 6:41 AM, Radomir Dopieralski openst...@sheep.art.pl wrote:


Hello,

I'd like to start a discussion about the use of mocking libraries in
Horizon's tests, in particular, mox and mock.

As you may know, Mox is the library that has been used so far, and we
have a lot of tests written using it. It is based on a similar Java
library and does very strict checking, although its error reporting may
leave something more to be desired.

Mock is a more pythonic library, included in the stdlib of recent Python
versions, but also available as a separate library for older pythons. It
has a much more relaxed approach, allowing you to only test the things
that you actually care about and to write tests that don't have to be
rewritten after each and every refactoring.

Some OpenStack projects, such as Nova, seem to have adopted an approach
that favors Mock in newly written tests, but allows use of Mox for older
tests, or when it's more suitable for the job.

In Horizon we only use Mox, and Mock is not even in requirements.txt. I
would like to propose to add Mock to requirements.txt and start using it
in new tests where it makes more sense than Mox -- in particular, when
we are writing unit tests only testing small part of the code.

Thoughts?
--
Radomir Dopieralski


+1

Thanks for bringing this up.
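
For what it's worth, here is a sketch of how a new-style Horizon test could look with Mock (illustrative only; openstack_dashboard.api.nova.server_list is an existing call, but the test itself and its assertions are hypothetical):

    # Illustrative Mock-based unit test in the Horizon style, patching the API
    # call instead of recording/replaying every interaction as Mox requires.
    import mock

    from openstack_dashboard import api
    from openstack_dashboard.test import helpers as test


    class InstancesViewTests(test.TestCase):
        @mock.patch.object(api.nova, 'server_list')
        def test_index_with_no_instances(self, mock_server_list):
            # Only the behaviour under test is stubbed; unrelated calls do not
            # have to be scripted up front the way Mox expects.
            mock_server_list.return_value = ([], False)

            res = self.client.get('/project/instances/')

            self.assertEqual(200, res.status_code)
            self.assertTrue(mock_server_list.called)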


Jirka



Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-11 Thread Jiri Tomasek

On 11/10/2014 12:19 PM, Matthias Runge wrote:

On Thu, Oct 30, 2014 at 01:13:48PM +0100, Matthias Runge wrote:

Hi,

tl;dr: how to proceed in separating horizon and openstack_dashboard

About a year ago now we agreed, it makes sense to separate horizon and
openstack_dashboard.

At the past summit, we discussed this again. Currently, our repo
contains two directories: horizon and openstack_dashboard, they both
will need new names.

We discussed a renaming in the past; the former consensus was:
rename horizon to horizon_lib and
rename openstack_dashboard to horizon.

IMHO that doesn't make any sense and will confuse people a lot. I
wouldn't object to rename horizon to horizon_lib, although any other
name, e.g django-horizon should be fine as well.

openstack_dashboard is our official name; people from outside refer to
the Dashboard as Horizon, why not rename to openstack_horizon here?

Thoughts? Opinions? Suggestions?


From what was discussed at the contributors meetup, keeping the names 
'horizon' for the lib (framework) and 'openstack_dashboard' for the 
dashboard seemed most convenient. And I happen to agree with that.


Jirka



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-11 Thread Jiri Tomasek

Hey,

Thanks for writing this up!

I am including some notes and questions inline...

On 11/11/2014 08:02 AM, Richard Jones wrote:

Hi all,

At the summit last week, we developed a plan for moving forward with 
modernising Horizon's UI using AngularJS. If you weren't at that 
meeting and are interested in helping out with this effort please let 
me know!


The relevant etherpad from the meeting:
https://etherpad.openstack.org/p/kilo-horizon-contributors-meetup

TL;DR: piece by piece we will replace Django views in Horizon with 
angular views, and we're going to start with Identity


First up, I'd like to ask the UX folk who raised their hands in that 
meeting to indicate which of the Identity panes we should start with. 
I believe a wizard was mentioned, as a way to exercise the new wizard 
code from Maxime.


At the same time, I'm looking at updating the AngularJS 
recommendations in the wiki. I believe other aspects of the current 
approach to angular code should also be revisited, if we're to scale 
up to the full angular front-end envisaged. I'd appreciate if those 
interested in this aspect in particular could contact me so we can 
sort this out as a team!

I am interested.


I'd like to start the design work for the new REST API layer we'll be 
exposing to the angular application code, but that is also part of the 
broader discussion about the structure of the angular code in the 
Horizon application as mentioned above. Should it be a new blueprint/spec?


I think a spec seems appropriate. Do you think using django-angular would 
be convenient?
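
To make this a bit more concrete, here is a minimal sketch of the kind of thin JSON view such a REST layer could expose to the angular code (hypothetical URL and class names; api.keystone.user_list is an existing Horizon API call, but everything else is an assumption):

    # Hypothetical sketch of a JSON endpoint for the angular Identity panel.
    import json

    from django import http
    from django.views import generic

    from openstack_dashboard import api


    class Users(generic.View):
        """Return the user list as JSON for consumption by angular code."""

        def get(self, request):
            users = api.keystone.user_list(request)
            items = [{'id': u.id, 'name': u.name, 'enabled': u.enabled}
                     for u in users]
            return http.HttpResponse(json.dumps({'items': items}),
                                     content_type='application/json')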




There were some discussions around tooling. We're using xstatic to 
manage 3rd party components, but there's a lot missing from that 
environment. I hesitate to add supporting xstatic components on to the 
already large pile of work we have to do, so would recommend we switch 
to managing those components with bower instead. For reference the 
list of 3rd party components I used in angboard* (which is really only 
a teensy fraction of the total application we'd end up with, so this 
components list is probably reduced):


json3
es5-shim
angular
angular-route
angular-cookies
angular-animate
angular-sanitize
angular-smart-table
angular-local-storage
angular-bootstrap
angular-translate
font-awesome
boot
underscore
ng-websocket

Just looking at PyPI, it looks like only a few of those are in 
xstatic, and those are out of date.


grunt provides a lot of features for developing an angular interface. 
In particular LiveReload accelerates development significantly. 
There's a django-livereload but it uses tiny-lr under the hood, so 
we're still using a node application for LiveReload support... so it 
might make sense to just use grunt. grunt provides many other features 
as well (wiredep integration with bower, build facilities with ngMin, 
test monitoring and reload etc).


There seemed to be agreement to move to jasmine (from qunit) for 
writing the tests. It's not noted in the etherpad, but I recall karma 
was accepted as a given for the test runner. For those not in the 
meeting, angboard uses mocha+chai for test writing, but I agreed that 
jasmine is acceptable, and is already used by Storyboard (see below).


Also, phantomjs so we don't have to fire up a browser for exercising 
(what should hopefully be an extensive) unit test suite.


The Storyboard project has successfully integrated these tools into 
the OpenStack CI environment.


Using javascript tooling (yeoman, grunt, bower, etc.) has the issue of 
being dependent on nodejs, which, if I recall correctly, is causing 
problems for packagers because some versions of these tools require different 
nodejs versions - please, Matthias, correct me if I am wrong. I know this 
discussion has happened here before, but using these tools is necessary for 
effective development, so we need to resolve the problem asap. 
Storyboard does not have this issue as it is an infra thing.


Petr Belanyi has added an optional jshint install for JS linting into 
Horizon, and it installs nodejs since jshint depends on it. Could this 
approach work for our JS tooling needs too? [1]


How hard is it going to be if we need to go the xstatic way? Is that even 
possible?





 Richard

* https://github.com/r1chardj0n3s/angboard




[1] https://review.openstack.org/#/c/97237/


Jiri


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-12 Thread Jiri Tomasek

On 11/12/2014 02:35 PM, Monty Taylor wrote:

On 11/12/2014 02:40 AM, Richard Jones wrote:

On 12 November 2014 18:17, Matthias Runge mru...@redhat.com wrote:


On 11/11/14 10:53, Jiri Tomasek wrote:

Hey,

Thanks for writing this up!

The Storyboard project has successfully integrated these tools into
the OpenStack CI environment.

OpenStack CI and distributors are different, because OpenStack CI does
not distribute software.


Ah, I wasn't clear; my concern was whether the tools chosen would be
compatible with the CI environment. I'm hoping that distribution of the
tools isn't our concern (see below).


Using javascript tooling (yeoman, grunt, bower, etc.) has this issue of
being dependent on nodejs which if I recall correctly is causing
problems for packagers as some versions of these tools require different
nodejs versions - please Mathias correct me if I am wrong. I know this
discussion has been here before, but using these tools is necessary for
effective development. So we need to resolve the problem asap.
Storyboard does not have this issue as it is infra thing.

As far as I know, those tools don't require different nodejs versions.
But: we can not have different node.js versions installed at the same
time. I assume, this is true for all distributions. Creating and
maintaining parallel installable versions just sucks and causes many
issues.


I believe the nodeenv method of installing node solves this, as it's
entirely local to the development environment.


Just for the record, I believe that we should chose the tools that make
sense for making our software, as long as it's not physically impossible
for them to be packaged. This means we should absolutely not use things
that require multiple versions of node to be needed. The nodejs that's
in trusty is new enough to work with all of the modern javascript tool
chain things needed for this, so other than the various javascript tools
and libraries not being packaged in the distros yet, it should be fine.

That a bunch of javascript libraries will need to be distro packaged
should not be a blocker (although I don't think that anyone is saying it
is). That is, after all, the important work the distros do. At this
point, given the popularity of javascript and javascript tooling, I'm
pretty sure the problem is going to have to be solved at some point.

+1, I am really glad this has been said.



I will have to go through all dependencies and do a review, if those are

acceptable for inclusion e.g in Fedora. The same is true for Thomas
Goirand for inclusion in Debian.


Petr Belanyi has added optional jshint install for js linting into
Horizon and it installs nodejs as it depends on it. Could this approach
work for our need of js tooling too? [1]

Sigh, this nonsense doesn't go away? This is the third time the same
issue comes up.

jshint is NOT free software.

https://github.com/jshint/jshint/blob/master/src/jshint.js#L19


They're trying to resolve that https://github.com/jshint/jshint/issues/1234

But regardless, jshint doesn't have to be installed from a Linux
repository; it's usually installed using npm alongside the other node tools.


 Richard





Jiri



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-12 Thread Jiri Tomasek

On 11/11/2014 08:02 AM, Richard Jones wrote:

Hi all,

At the summit last week, we developed a plan for moving forward with 
modernising Horizon's UI using AngularJS. If you weren't at that 
meeting and are interested in helping out with this effort please let 
me know!


The relevant etherpad from the meeting:
https://etherpad.openstack.org/p/kilo-horizon-contributors-meetup

TL;DR: piece by piece we will replace Django views in Horizon with 
angular views, and we're going to start with Identity


First up, I'd like to ask the UX folk who raised their hands in that 
meeting to indicate which of the Identity panes we should start with. 
I believe a wizard was mentioned, as a way to exercise the new wizard 
code from Maxime.


At the same time, I'm looking at updating the AngularJS 
recommendations in the wiki. I believe other aspects of the current 
approach to angular code should also be revisited, if we're to scale 
up to the full angular front-end envisaged. I'd appreciate if those 
interested in this aspect in particular could contact me so we can 
sort this out as a team!


I'd like to start the design work for the new REST API layer we'll be 
exposing to the angular application code, but that is also part of the 
broader discussion about the structure of the angular code in the 
Horizon application as mentioned above. Should it be a new blueprint/spec?


There were some discussions around tooling. We're using xstatic to 
manage 3rd party components, but there's a lot missing from that 
environment. I hesitate to add supporting xstatic components on to the 
already large pile of work we have to do, so would recommend we switch 
to managing those components with bower instead. For reference the 
list of 3rd party components I used in angboard* (which is really only 
a teensy fraction of the total application we'd end up with, so this 
components list is probably reduced):


json3
es5-shim
angular
angular-route
angular-cookies
angular-animate
angular-sanitize
angular-smart-table
angular-local-storage
angular-bootstrap
angular-translate
font-awesome
boot
underscore
ng-websocket

Just looking at PyPI, it looks like only a few of those are in 
xstatic, and those are out of date.


grunt provides a lot of features for developing an angular interface. 
In particular LiveReload accelerates development significantly. 
There's a django-livereload but it uses tiny-lr under the hood, so 
we're still using a node application for LiveReload support... so it 
might make sense to just use grunt. grunt provides many other features 
as well (wiredep integration with bower, build facilities with ngMin, 
test monitoring and reload etc).


There seemed to be agreement to move to jasmine (from qunit) for 
writing the tests. It's not noted in the etherpad, but I recall karma 
was accepted as a given for the test runner. For those not in the 
meeting, angboard uses mocha+chai for test writing, but I agreed that 
jasmine is acceptable, and is already used by Storyboard (see below).


Also, phantomjs so we don't have to fire up a browser for exercising 
(what should hopefully be an extensive) unit test suite.


The Storyboard project has successfully integrated these tools into 
the OpenStack CI environment.



 Richard

* https://github.com/r1chardj0n3s/angboard


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I am going to try to summarize what has been said in emails across this
thread.


As Monty Taylor said, nodejs itself is not a blocker, as multiple
versions of it should not be needed by our tools. (That's also what npm
and bower are taking care of, right?) The only requirement is that all
tools/js libs we want to use will eventually have to be packaged. This
is just a bunch of work for packagers.



Approach on using Xstatic packages vs Js tooling:

As the only problem with using js tooling should be the actual
packaging of it, I think it makes sense to use these tools and make
development simpler rather than going the other way around and using
Xstatic packages - which means devs would have to take care of getting
stuff packaged as xstatic and added to the code, while maintaining
proper versions and making sure that they work ok together. NPM and
Bower do this for us. Common sense tells me packagers should take care
of packaging.
Packaging of these tools will have to get resolved somehow anyway, as
there will be a rise in requirements for using them, not just from Horizon...



Which tools should we use eventually:

Based on the contributions by Maxime, Martin and the others, I think the 
list of tools should end up as follows:


Tooling:
npm
bower
gulp
Jasmine
Karma/Protractor(?)/eslint
...?

Bower and Gulp seem to get along well
(https://github.com/yeoman/generator-gulp-webapp)


Tastypie on the Django side

Angular 

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-12 Thread Jiri Tomasek

On 11/12/2014 05:18 PM, Julie Pichon wrote:

On 12/11/14 15:12, Jiri Tomasek wrote:

Approach on using Xstatic packages vs Js tooling:

As the only problem with using js tooling should be the actual packaging of
it, I think it makes sense to use these tools and make development
simpler rather than going the other way around and using Xstatic packages - which
means devs would have to take care of getting stuff packaged as xstatic
and added to the code, while maintaining proper versions and making sure
that they work ok together. NPM and Bower do this for us. Common sense
tells me packagers should take care of packaging.
Packaging of these tools will have to get resolved somehow anyway, as
there will be a rise in requirements for using them, not just from Horizon...

I can't speak for the rest but that part doesn't seem correct to me. The
XStatic packages are Python packages (as in, dependencies) that the
Horizon team is responsible for (when they don't already exist) and
maintains on stackforge, so we do have to create them and make sure they
all work well together. The later packaging as rpm or deb or others is
left to the distro packagers of course.

There are instructions already on how to create xstatic packages [1],
it's not very complicated and just takes some review time.

Thanks,

Julie

[1]
http://docs.openstack.org/developer/horizon/contributing.html#javascript-and-css-libraries


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I might have expressed myself wrong about XStatic packages. But as you
say as well, to use XStatic packages we most often need to create them
and maintain the correct versions we require in Horizon, and they don't
help packagers either. It makes sense to use them in a Django
application as they can be included in requirements.txt and we don't
have to carry them directly in the code. So I am definitely ok with using
them for the Django dependencies we have.


NPM and Bower do a similar thing on the Angular side, except that we
don't have to create these libraries as they already exist. NPM and
Bower take care of including the right versions of the js libs our dev
env and our application need. They use description files similar to
requirements.txt in Django.


It makes no sense not to use them and to instead try to include js
libraries using XStatic packages listed in requirements.txt, because
that way we don't know which version of a js lib to use etc. NPM and
Bower do this for us.


In both approaches the dependencies need to be packaged in the end,
regardless of whether it is an XStatic package, a js library or an Angular module.


It is about using the right tools for the job.


I see the relation between Nodejs, js libs/tools and an Angular app
defining its dependencies using NPM and Bower as quite similar to Ruby,
Rubygems and a Rails application defining its dependencies in Gemfile.lock.
Rubygems are being packaged in distros, so why shouldn't node packages be?



Jiri


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Jiri Tomasek

On 11/13/2014 04:04 PM, Thomas Goirand wrote:

On 11/13/2014 12:13 PM, Richard Jones wrote:

the npm stuff is all tool chain; tools
that I believe should be packaged as such by packagers.

npm is already in Debian:
https://packages.debian.org/sid/npm

However, just like we can't use CPAN, pear install, pip install and
such when building or installing packages, we won't be able to use NPM.
This means every single dependency that isn't in Debian will need to be
packaged.


Horizon is an incredibly complex application. Just so we're all on the
same page, the components installed by bower for angboard are:

angular
   Because writing an application the size of Horizon without it would be
madness :)
angular-route
   Provides structure to the application through URL routing.
angular-cookies
   Provides management of browser cookies in a way that integrates well
with angular.
angular-sanitize
   Allows direct embedding of HTML into angular templates, with sanitization.
json3
   Compatibility for older browsers so JSON works.
es5-shim
   Compatibility for older browsers so Javascript (ECMAScript 5) works.
angular-smart-table
   Table management (population, sorting, filtering, pagination, etc)
angular-local-storage
Browser local storage with cookie fallback, integrated with angular
mechanisms.
angular-bootstrap
Extensions to angular that leverage bootstrap (modal popups, tabbed
displays, ...)
font-awesome
Additional glyphs to use in the user interface (warning symbol, info
symbol, ...)
boot
Bootstrap for CSS styling (this is the dependency that brings in
jquery and requirejs)
underscore
Javascript utility library providing a ton of features Javascript
lacks but Python programmers expect.
ng-websocket
Angular-friendly interface to using websockets
angular-translate
Support for localization in angular using message catalogs generated
by gettext/transifex.
angular-mocks
Mocking support for unit testing angular code
angular-scenario
More support for angular unit tests

Additionally, angboard vendors term.js because it was very poorly
packaged in the bower ecosystem. +1 for xstatic there I guess :)

So those are the components we needed to create the prototype in a few
weeks. Not using them would have added months (or possibly years) to the
development time. Creating an application of the scale of Horizon
without leveraging all that existing work would be like developing
OpenStack while barring all use of Python 3rd-party packages.

I have no problem with adding dependencies. That's how things work, for
sure, I just want to make sure it doesn't become hell, with so many
components inter-depending on 100s of them, which would become
unmanageable. If we define clear boundaries, then fine! The above seems
reasonable anyway.

Though did you list the dependencies of the above?

Also, if the Horizon project starts using something like NPM (which
again, is already available in Debian, so it has my preference), will we
at least be able to control what version gets in, just like with pip?
Because that's a huge concern for me, and this has been very well and
carefully addressed during the Juno cycle. I would very much appreciate
if the same kind of care was taken again during the Kilo cycle, whatever
path we take. How do I use npm by the way? Any pointer?


NPM and Bower work in a similar way to pip: they maintain files similar
to requirements.txt that list dependencies and their versions.
I think we should bring up a patch that introduces this toolset so we can
discuss the real amount of dependencies and the process.
It would also be nice to introduce something similar to
global-requirements.txt in the OpenStack project, to make sure we have all
deps in one place and get some approval process on the versions used.


Here is an example of a random Angular application's package.json (used by
NPM) and bower.json (used by Bower) files:

http://fpaste.org/150513/89599214/
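
In case that paste expires, here is a rough illustrative sketch of what
such files typically look like (package names and version pins below are
just examples, not the actual content of that paste):

package.json (read by NPM, mostly dev tooling):

{
  "name": "example-app",
  "version": "0.0.1",
  "devDependencies": {
    "gulp": "~3.8.0",
    "karma": "~0.12.0",
    "jasmine-core": "~2.1.0"
  }
}

bower.json (read by Bower, runtime js libraries):

{
  "name": "example-app",
  "dependencies": {
    "angular": "~1.3.0",
    "angular-route": "~1.3.0",
    "bootstrap": "~3.2.0"
  }
}

The version pinning lives in these files much like in requirements.txt,
and 'npm install' / 'bower install' then resolve and fetch the declared
versions into node_modules/ and bower_components/ respectively.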

I'll try to search for a good article that describes how this ecosystem 
works.




Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Bootstrap 3 update and problems with lesscpy

2013-09-18 Thread Jiri Tomasek

Hi all,

I've started working on updating Bootstrap to version 3 in Horizon. 
https://blueprints.launchpad.net/horizon/+spec/bootstrap-update


As I have described in the blueprint whiteboard, I am experiencing compile
problems with the new lesscpy compiler that we started using recently.
The compiled css code is incorrect and when running the compilation from
the terminal, about 200 syntax errors occur. This is related to certain
features of Less not being supported by lesscpy. I have created a GitHub
issue for lesscpy here: https://github.com/robotis/Lesscpy/issues/22 .


Sasha Peilicke has already started working on updating the lesscpy
library to support all less features needed to compile Bootstrap 3
properly, although I think that it will take more than a few weeks
before lesscpy gets to where we need it.


I have part of the Bootstrap 3 update ready and as it is quite a large patch
I would like to get this in as soon as possible, because any rebase to a
new Horizon master is quite a tedious process. Also there are other
blueprints that depend on this update (font-icons and css-breakdown, see
the dependency tree).


So I would like to propose to revert the patch that introduces the lesscpy
library (a0739c9423 Drop NodeJS dependency in favor of pure-python
lesscpy) and use the lessc library for the time being until lesscpy is
capable of compiling Bootstrap 3.


I have a revert patch ready, together with an update of the lessc library in
horizon/bin, which I can make part of the Bootstrap-update blueprint and
send right away to gerrit for review. I have also tested that
with this setup the Bootstrap 3 updated Horizon less file compiles properly.


When lesscpy is ready to support Bootstrap 3, getting back to lesscpy is
then a simple process of just reapplying the reverted commit.


-- Jirka Tomasek



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-12-09 Thread Jiri Tomasek

On 11/25/2015 03:17 PM, Jay Dobies wrote:

I think at the same time we add a mechanism to distinguish between
internal and external parameters, we need to add something to indicate
required v. optional.

With a nested stack, anything that's not part of the top-level parameter
contract is defaulted. The problem is that it loses information on what
is a valid default v. what's simply defaulted to pass validation.


I thought the nested validation spec was supposed to handle that though?
To me, required vs. optional should be as simple as "Does the parameter
definition have a 'default' key?  If yes, then it's optional, if no,
then it's required for the user to pass a value via a parameter or
parameter_default".  I realize we may not have been following that up to
now for various reasons, but it seems like Heat is already providing a
pretty explicit mechanism for marking params as required, so we ought to
use it.


Ya, I was mistaken here. Taking a look at the cinder-netapp.yaml, it 
looks like we're using this correctly:


...
  CinderNetappBackendName:
    type: string
    default: 'tripleo_netapp'
  CinderNetappLogin:
    type: string
  CinderNetappPassword:
    type: string
    hidden: true
...


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I need to read the thread once again, but I'd like to add a few 
observations from the GUI implementation:


The nested validation as it works right now requires that all root
template parameters have 'default' or 'value' set, otherwise the
heat validation fails and no parameters are returned. This is a sort of
a blocker because we need to use this to retrieve the parameters and let
the user set values for them. This means that to be able to list the
parameters, we need to make sure that all root template parameters have
'default' set, which is not optimal.


Another observation (maybe a bit outside of the topic) is that the list of
parameters defined in the root template is huge. It would be nice if the
root template, or more probably the root environment, included a resource
registry only for the roles/templates that are explicitly required for the
minimal deployment (controller, compute) and split the other roles into
separate optional environments.
In the current situation the user is required to set flavors, node counts
etc. for all roles defined in the root template even though he is not going
to use them (he sets the node count to 0).
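
For illustration, even a deployment that only uses controllers and
computes currently has to carry something like this (plus the
corresponding flavor parameters) just to zero out the unused roles - a
sketch, with parameter names as I recall them from current
tripleo-heat-templates:

parameter_defaults:
  BlockStorageCount: 0
  ObjectStorageCount: 0
  CephStorageCount: 0

If the optional roles lived in their own environments, none of this
would need to be set for the minimal case.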



Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2016-01-06 Thread Jiri Tomasek

On 01/06/2016 11:48 AM, Dougal Matthews wrote:



On 5 January 2016 at 17:09, Jiri Tomasek <jtoma...@redhat.com> wrote:


On 12/23/2015 07:40 PM, Steven Hardy wrote:

On Wed, Dec 23, 2015 at 11:05:05AM -0600, Ben Nemec wrote:

On 12/23/2015 10:26 AM, Steven Hardy wrote:

On Wed, Dec 23, 2015 at 09:28:59AM -0600, Ben Nemec wrote:

On 12/23/2015 03:19 AM, Dougal Matthews wrote:


On 22 December 2015 at 17:59, Ben Nemec <openst...@nemebean.com> wrote:

 Can we just do git like I've been
suggesting all along? ;-)

 More serious discussion inline. :-)

 On 12/22/2015 09:36 AM, Dougal Matthews
wrote:
 > Hi all,
 >
 > This topic came up in the 2015-12-15
meeting[1], and again briefly
 today.
 > After working with the code that came
out of the deployment library
 > spec[2] I
 > had some concerns with how we are
storing the templates.
 >
 > Simply put, when we are dealing with
100+ files from
 tripleo-heat-templates
 > how can we ensure consistency in Swift
without any atomicity or
 > transactions.
 > I think this is best explained with a
couple of examples.
 >
 >  - When we create a new deployment plan
(upload all the templates
 to swift)
 >how do we handle the case where
there is an error? For example,
 if we are
 >uploading 10 files - what do we do
if the 5th one fails for
 some reason?
 >There is a patch to do a manual
rollback[3], but I have
 concerns about
 >doing this in Python. If Swift is
completely inaccessible for a
 short
 >period the rollback wont work either.
 >
 >  - When deploying to Heat, we need to
download all the YAML files from
 > Swift.
 >This can take a couple of seconds.
What happens if somebody
 starts to
 >upload a new version of the plan in
the middle? We could end up
 trying to
 >deploy half old and half new files.
We wouldn't have a
 consistent view of
 >the database.
 >
 > We had a few suggestions in the meeting:
 >
 >  - Add a locking mechanism. I would be
concerned about deadlocks or
 > having to
 >lock for the full duration of a deploy.

 There should be no need to lock the plan
for the entire deploy.  It's
 not like we're re-reading the templates
at the end of the deploy today.
  It's a one-shot read and then the plan
could be unlocked, at least as
 far as I know.


Good point. That would be holding the lock for
longer than we need.

 The only option where we wouldn't need
locking at all is the
 read-copy-update model Clint mentions,
which might be a valid option as
 well.  Whatever we do, there are go

Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2016-01-05 Thread Jiri Tomasek

On 12/23/2015 07:40 PM, Steven Hardy wrote:

On Wed, Dec 23, 2015 at 11:05:05AM -0600, Ben Nemec wrote:

On 12/23/2015 10:26 AM, Steven Hardy wrote:

On Wed, Dec 23, 2015 at 09:28:59AM -0600, Ben Nemec wrote:

On 12/23/2015 03:19 AM, Dougal Matthews wrote:


On 22 December 2015 at 17:59, Ben Nemec > wrote:

 Can we just do git like I've been suggesting all along? ;-)

 More serious discussion inline. :-)

 On 12/22/2015 09:36 AM, Dougal Matthews wrote:
 > Hi all,
 >
 > This topic came up in the 2015-12-15 meeting[1], and again briefly
 today.
 > After working with the code that came out of the deployment library
 > spec[2] I
 > had some concerns with how we are storing the templates.
 >
 > Simply put, when we are dealing with 100+ files from
 tripleo-heat-templates
 > how can we ensure consistency in Swift without any atomicity or
 > transactions.
 > I think this is best explained with a couple of examples.
 >
 >  - When we create a new deployment plan (upload all the templates
 to swift)
 >how do we handle the case where there is an error? For example,
 if we are
 >uploading 10 files - what do we do if the 5th one fails for
 some reason?
 >There is a patch to do a manual rollback[3], but I have
 concerns about
 >doing this in Python. If Swift is completely inaccessible for a
 short
 >period the rollback wont work either.
 >
 >  - When deploying to Heat, we need to download all the YAML files from
 > Swift.
 >This can take a couple of seconds. What happens if somebody
 starts to
 >upload a new version of the plan in the middle? We could end up
 trying to
 >deploy half old and half new files. We wouldn't have a
 consistent view of
 >the database.
 >
 > We had a few suggestions in the meeting:
 >
 >  - Add a locking mechanism. I would be concerned about deadlocks or
 > having to
 >lock for the full duration of a deploy.

 There should be no need to lock the plan for the entire deploy.  It's
 not like we're re-reading the templates at the end of the deploy today.
  It's a one-shot read and then the plan could be unlocked, at least as
 far as I know.


Good point. That would be holding the lock for longer than we need.
  


 The only option where we wouldn't need locking at all is the
 read-copy-update model Clint mentions, which might be a valid option as
 well.  Whatever we do, there are going to be concurrency issues though.
  For example, what happens if two users try to make updates to the plan
 at the same time?  If you don't either merge the changes or disallow one
 of them completely then one user's changes might be lost.

 TBH, this is further convincing me that we should just make this git
 backed and let git handle the merging and conflict resolution (never
 mind the fact that it gets us a well-understood version control system
 for "free").  For updates that don't conflict with other changes, git
 can merge them automatically, but for merge conflicts you just return a
 rebase error to the user and make them resolve it.  I have a feeling
 this is the behavior we'll converge on eventually anyway, and rather
 than reimplement git, let's just use the real thing.


I'd be curious to hear more how you would go about doing this with git. I've
never automated git to this level, so I am concerned about what issues we
might hit.

TBH I haven't thought it through to that extent yet.  I'm mostly
suggesting it because it seems like a fit for the template storage
requirements - we know we want version control, we want to be able to
merge changes from multiple sources, and we want some way to handle
merge conflicts.  Git does all of this already.

That said, I'm not sure about everything here.  For example, how would
you expose merge conflicts to the user?  I don't know that I would want
to force a user to learn git in order to use TripleO (although that
would be the devops-y thing to do), but maybe just passing them back the
files with the merge conflict markers and having them resolve those
locally and retry the update would work.  I'm not sure how that would
map to the current version of the API though.  Do we provide any way to
pass templates back to the user?  I feel like that was kind of a one-way
street.

What part of the deployment API workflow could result in merge conflicts?

My understanding was that it's something like:

1. Take copy of reference templates tree
2. Introspect templates, expose required parameters so user can be
prompted for them
3. Create environment files(s) derived from the user input
4. Validate the combination of (1) and (3)
5. Deploy the templates+environments

On update, (1) would be "overwrite existing version of templates"

This update 

Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-20 Thread Jiri Tomasek

On 11/16/2015 04:25 PM, Steven Hardy wrote:

Hi all,

I wanted to start some discussion re $subject, because it's been apparent
that we have a lack of clarity on this issue (and have done ever since we
started using parameter_defaults).

Some context:

- Historically TripleO has provided a fairly comprehensive "top level"
   parameters interface, where many per-role and common options are
   specified, then passed in to the respective ResourceGroups on deployment

https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-without-mergepy.yaml#n14

The nice thing about this approach is it gives a consistent API to the
operator, e.g the parameters schema for the main overcloud template defines
most of the expected inputs to the deployment.

The main disadvantage is a degree of template bloat, where we wire dozens
of parameters into each ResourceGroup, and from there into whatever nested
templates consume them.

- When we started adding interfaces (such as all the OS::TripleO::*ExtraConfig*
   interfaces), there was a need to enable passing arbitrary additional
   values to nested templates, with no way of knowing what they are (e.g. to
   enable wiring in third-party pieces we have no knowledge of, or which
   require implementation-specific arguments which don't make sense for all
   deployments).

To do this, we made use of the heat parameter_defaults interface, which
(unlike normal parameters) have global scope (visible to all nested stacks,
without explicitly wiring in the values from the parent):

http://docs.openstack.org/developer/heat/template_guide/environment.html#define-defaults-to-parameters

The nice thing about this approach is its flexibility, any arbitrary
values can be provided without affecting the parent templates, and it can
allow for a terser implementation because you only specify the parameter
definition where it's actually used.

The main disadvantage of this approach is it becomes very much harder to
discover an API surface for the operator, e.g the parameters that must be
provided on deployment by any CLI/UI tools etc.  This has been partially
addressed by the new-for-liberty nested validation heat feature, but
there's still a bunch of unsolved complexity around how to actually consume
that data and build a coherent consolidated API for user interaction:

https://github.com/openstack/heat-specs/blob/master/specs/liberty/nested-validation.rst

My question is, where do we draw the line on when to use each interface?

My position has always been that we should only use parameter_defaults for
the ExtraConfig interfaces, where we cannot know what reasonable parameters
are.  And for all other "core" functionality, we should accept the increased
template verbosity and wire arguments in from overcloud-without-mergepy.

However we've got some patches which fall into a grey area, e.g this SSL
enablement patch:

https://review.openstack.org/#/c/231930/46/overcloud-without-mergepy.yaml

Here we're actually removing some existing (non functional) top-level
parameters, and moving them to parameter_defaults.

I can see the logic behind it, it does make the templates a bit cleaner,
but at the expense of discoverablility of those (probably not
implementation dependent) parameters.

How do people feel about this example, and others like it, where we're
enabling common, but not mandatory functionality?

In particular I'm keen to hear from Mainn and others interested in building
UIs on top of TripleO as to which is best from that perspective, and how
such arguments may be handled relative to the capabilities mapping proposed
here:

https://review.openstack.org/#/c/242439/

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think I'll try to do a bit of a recap to make sure I understand 
things. It may shift slightly off the topic of this thread but I think 
it is worth it and it will describe what the GUI is able/expecting to 
work with.


A template defines parameters and passes them to child templates via
resource properties.

Root template parameter values are set by (in order of precedence):
1. 'parameters' param in the 'stack create' api call or 'parameters'
section in the environment
2. 'parameter_defaults' section in the environment
3. 'default' in the parameter definition in the template

Non-root template parameter values are set by (in order of precedence):
1. parent resource properties
2. 'parameter_defaults' in environment
3. 'default' in parameter definition in template

The name collisions in parameter_defaults should not be a problem since
the template author should make sure that the parameter names he defines
don't collide with other templates.
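
To make the precedence above concrete, a minimal hypothetical environment
file could combine both sections (parameter names are only illustrative):

parameters:
  # applies only to parameters defined in the root template and wins
  # over parameter_defaults there
  ControllerCount: 3

parameter_defaults:
  # global scope - picked up by any nested template that defines this
  # parameter, unless a parent resource property already sets it
  NeutronPublicInterface: nic2

So for a root template parameter present in both sections the
'parameters' value is used, while a nested template only sees the
'parameter_defaults' value (or whatever its parent passes via resource
properties).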


The GUI's main goal (same as CLI and tripleo-common) is not to hardcode 
anything and use THT (or 

Re: [openstack-dev] [TripleO] Removing unused/deprecated template parameters?

2016-01-13 Thread Jiri Tomasek

On 01/13/2016 10:33 AM, Juan Antonio Osorio wrote:

IIRC some of them were already marked (via a comment) as deprecated.

+1 to cleaning up the parameters. I think it should be done as soon as 
possible, as the existence of some of them makes the usage of the 
templates quite confusing.


BR

On Tue, Jan 12, 2016 at 10:47 PM, Steven Hardy > wrote:


Hi all,

I've noticed that we have a fairly large number of unused parameters in
t-h-t, some of which are marked deprecated, some aren't.

Since we moved tripleoclient to use parameter_defaults everywhere, I think
it should be safe to remove these unused parameters, even in
overcloud.yaml.

See:

https://review.openstack.org/#/c/227057/

Since those, we can pass removed/deprecated parameters from the client and
they will be ignored, even if they're removed from the template (unlike if
you use "parameters", where a validation error would occur).

I'd like to go ahead and clean these up (only on the master branch), is
that reasonable?  We can document the change via a mitaka release note?

Ideally, we'd have user-visible warnings for a deprecation period, but
there's no way to output such warnings atm via heat, so we'd need to wire
them in via tripleoclient or tripleo-common, which seems a bit backwards
given that we can just remove the parameters there instead.

Thoughts?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1 IIUC this will reduce the number of parameters returned by nested heat
validate, which is a good thing as those parameters confuse users.


Jirka
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Jiri Tomasek

On 01/11/2016 04:51 PM, Dan Prince wrote:

Background info:

We've got a problem in TripleO at the moment where many of our
workflows can be driven by the command line only. This causes some
problems for those trying to build a UI around the workflows in that
they have to duplicate deployment logic in potentially multiple places.
There are specs up for review which outline how we might solve this
problem by building what is called TripleO API [1].

Late last year I began experimenting with an OpenStack service called
Mistral which contains a generic workflow API. Mistral supports
defining workflows in YAML and then creating, managing, and executing
them via an OpenStack API. Initially the effort was focused around the
idea of creating a workflow in Mistral which could supplant our
"baremetal introspection" workflow which currently lives in python-
tripleoclient. I create a video presentation which outlines this effort
[2]. This particular workflow seemed to fit nicely within the Mistral
tooling.



More recently I've turned my attention to what it might look like if we
were to use Mistral as a replacement for the TripleO API entirely. This
brings forth the question of would TripleO be better off building out
its own API... or would relying on existing OpenStack APIs be a better
solution?

Some things I like about the Mistral solution:

- The API already exists and is generic.

- Mistral already supports interacting with many of the OpenStack API's
we require [3]. Integration with keystone is baked in. Adding support
for new clients seems straightforward (I've had no issues in adding
support for ironic, inspector, and swift actions).

- Mistral actions are pluggable. We could fairly easily wrap some of
our more complex workflows (perhaps those that aren't easy to replicate
with pure YAML workflows) by creating our own TripleO Mistral actions.
This approach would be similar to creating a custom Heat resource...
something we have avoided with Heat in TripleO but I think it is
perhaps more reasonable with Mistral and would allow us to again build
out our YAML workflows to drive things. This might allow us to build
off some of the tripleo-common consolidation that is already underway
...

- We could achieve a "stable API" by simply maintaining input
parameters for workflows in a stable manner. Or perhaps workflows get
versioned like a normal API would be as well.

- The purist part of me likes Mistral quite a bit. It fits nicely with
the deploy OpenStack with OpenStack. I sort of feel like if we have to
build our own API in TripleO part of this vision has failed and could
even be seen as a massive technical debt which would likely be hard to
build a community around outside of TripleO.

- Some of the proposed validations could perhaps be implemented as new
Mistral actions as well. I'm not convinced we require TripleO API just
to support a validations mechanism yet. Perhaps validations seem hard
because we are simply trying to do them in the wrong places anyway?
(like for example perhaps we should validate network connectivity at
inspection time rather than during provisioning).

- Power users might find a workflow built around a Mistral API more
easy to interact with and expand upon. Perhaps this ends up being
something that gets submitted as a patchset back to the TripleO that we
accept into our upstream "stock" workflow sets.



Last week we landed the last patches [4] to our undercloud to enable
installing Mistral by simply setting: enable_mistral = true in
undercloud.conf. NOTE: you'll need to be using a recent trunk repo from
Delorean so that you have the recently added Mistral packages for this
to work. Although the feature is disable by default this should enable
those wishing to tinker with Mistral as a new TripleO undercloud
service an easy path forwards.

[1] https://review.openstack.org/#/c/230432
[2] https://www.youtube.com/watch?v=bnAT37O-sdw
[3] http://git.openstack.org/cgit/openstack/mistral/tree/mistral/action
s/openstack/mapping.json
[4] https://etherpad.openstack.org/p/tripleo-undercloud-workflow


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi, I have a few questions:

Is a Mistral action able to access/manipulate local files? E.g. access the
templates installed at the undercloud's
/usr/share/openstack-tripleo-heat-templates?


Is a Mistral action able to call either an OpenStack service python client or
an OpenStack service API directly?


What is the response from the Mistral action in the workflow? Let's say
we'd use Mistral to get a list of available environments (we do this in
tripleo-common now). So we call the Mistral API to trigger a workflow that
has a single action which gets the list of environments. Is Mistral able
to provide this list as a response, or it

Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Jiri Tomasek

On 01/12/2016 04:22 PM, Ryan Brown wrote:

On 01/12/2016 06:52 AM, Jiri Tomasek wrote:

On 01/11/2016 04:51 PM, Dan Prince wrote:

Background info:

We've got a problem in TripleO at the moment where many of our
workflows can be driven by the command line only. This causes some
problems for those trying to build a UI around the workflows in that
they have to duplicate deployment logic in potentially multiple places.
There are specs up for review which outline how we might solve this
problem by building what is called TripleO API [1].

Late last year I began experimenting with an OpenStack service called
Mistral which contains a generic workflow API. Mistral supports
defining workflows in YAML and then creating, managing, and executing
them via an OpenStack API. Initially the effort was focused around the
idea of creating a workflow in Mistral which could supplant our
"baremetal introspection" workflow which currently lives in python-
tripleoclient. I create a video presentation which outlines this effort
[2]. This particular workflow seemed to fit nicely within the Mistral
tooling.



More recently I've turned my attention to what it might look like if we
were to use Mistral as a replacement for the TripleO API entirely. This
brings forth the question of would TripleO be better off building out
its own API... or would relying on existing OpenStack APIs be a better
solution?

Some things I like about the Mistral solution:

- The API already exists and is generic.

- Mistral already supports interacting with many of the OpenStack API's
we require [3]. Integration with keystone is baked in. Adding support
for new clients seems straightforward (I've had no issues in adding
support for ironic, inspector, and swift actions).

- Mistral actions are pluggable. We could fairly easily wrap some of
our more complex workflows (perhaps those that aren't easy to replicate
with pure YAML workflows) by creating our own TripleO Mistral actions.
This approach would be similar to creating a custom Heat resource...
something we have avoided with Heat in TripleO but I think it is
perhaps more reasonable with Mistral and would allow us to again build
out our YAML workflows to drive things. This might allow us to build
off some of the tripleo-common consolidation that is already underway
...

- We could achieve a "stable API" by simply maintaining input
parameters for workflows in a stable manner. Or perhaps workflows get
versioned like a normal API would be as well.

- The purist part of me likes Mistral quite a bit. It fits nicely with
the deploy OpenStack with OpenStack. I sort of feel like if we have to
build our own API in TripleO part of this vision has failed and could
even be seen as a massive technical debt which would likely be hard to
build a community around outside of TripleO.

- Some of the proposed validations could perhaps be implemented as new
Mistral actions as well. I'm not convinced we require TripleO API just
to support a validations mechanism yet. Perhaps validations seem hard
because we are simply trying to do them in the wrong places anyway?
(like for example perhaps we should validate network connectivity at
inspection time rather than during provisioning).

- Power users might find a workflow built around a Mistral API more
easy to interact with and expand upon. Perhaps this ends up being
something that gets submitted as a patchset back to the TripleO that we
accept into our upstream "stock" workflow sets.



Last week we landed the last patches [4] to our undercloud to enable
installing Mistral by simply setting: enable_mistral = true in
undercloud.conf. NOTE: you'll need to be using a recent trunk repo from
Delorean so that you have the recently added Mistral packages for this
to work. Although the feature is disable by default this should enable
those wishing to tinker with Mistral as a new TripleO undercloud
service an easy path forwards.

[1] https://review.openstack.org/#/c/230432
[2] https://www.youtube.com/watch?v=bnAT37O-sdw
[3] http://git.openstack.org/cgit/openstack/mistral/tree/mistral/action
s/openstack/mapping.json
[4] https://etherpad.openstack.org/p/tripleo-undercloud-workflow


__ 



OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi, I have a few questions:

Is a Mistral action able to access/manipulate local files? E.g. access the
templates installed at the undercloud's
/usr/share/openstack-tripleo-heat-templates?


I believe with mistral there would be an intermediate step of 
uploading the templates to Swift first. Heat can read templates from 
swift, and any mistral workflows would be able to read the templates 
out, modify them, and save back to swift.


Correct, but from the Mistral usage standpoint, having the fl

Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-08 Thread Jiri Tomasek
On Wed, Jun 8, 2016 at 11:23 AM, Steven Hardy  wrote:

> On Tue, Jun 07, 2016 at 04:53:12PM -0400, Zane Bitter wrote:
> > On 07/06/16 15:57, Jay Dobies wrote:
> > > >
> > > > 1. Now that we support passing un-merged environment files to heat,
> > > > it'd be
> > > > good to support an optional description key for environments,
> > >
> > > I've never understood why the environment file doesn't have a
> > > description field itself. Templates have descriptions, and IMO it makes
> > > sense for an environment to describe what its particular additions to
> > > the parameters/registry do.
> >
> > Just use a comment?
>
> This doesn't work for any of the TripleO use-cases because you can't parse
> a comment.
>
> The requirements are twofold:
>
> 1. Prior to creating the stack, we need a way to present choices to the
> user about which environment files to enable.  This is made much easier if
> you can include a human-readable description about what the environment
> actually does.
>
> 2. After creating the stack, we need a way to easily introspect the stack
> and see what environments were enabled.  Same as above, it'd be
> super-awesome if we could just then strip out the description of what they
> do, so we don't have to maintain hacks like this:
>
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml
>
> The description is one potentially easy-win here, it just makes far more
> sense to keep the description of a thing inside the same file (just like we
> do already with HOT templates).
>
> The next step beyond that is the need to express dependencies between
> things, which is what I was trying to address via the
> https://review.openstack.org/#/c/196656/ spec - that kinda stalled when it
> took 7 months to land so we'll probably need that capabilities_map for that
> unless we can revive that effort.
>
> > > I'd be happy to write that patch, but I wanted to first double check
> > > that there wasn't a big philosophical reason why it shouldn't have a
> > > description.
> >
> > There's not much point unless you're also adding an API to retrieve
> > environment files like Steve mentioned. Comments get stripped when the
> yaml
> > is parsed, but that's fairly academic if you don't have a way to get it
> out
> > again.
>
> Yup, I'm absolutely proposing we add an interface to retrieve the
> environment files (or, in fact, the entire stack files map, and a list of
> environment_files).
>
> Steve
>


Hi, thanks for bringing this topic up. The capabilities map provides several
pieces of information about environments. We definitely need to get rid of it
in favor of having Heat provide this from the environment file metadata. How
much additional work would it be to enable environments to provide more
metadata than just a description?

From the GUI point of view, an information structure such as the following
would be much appreciated:

environments/net-bond-with-vlans.yaml:

meta:
  label: Net Bond with Vlans
  description: >
    Configure each role to use a pair of bonded nics (nic2 and
    nic3) and configures an IP address on each relevant isolated network
    for each role. This option assumes use of Network Isolation.
  requires:
    - environments/network-isolation.yaml
    - overcloud-resource-registry-puppet.yaml
  alternatives:
    - environments/net-single-nic-with-vlans.yaml
  group:
    - network-configuration

Grouping of environments is a bit problematic. We could introduce something
like 'group' which could categorize the environments. The problem is that each
group would eventually require its own entity to cover the group label and
description.


-- Jirka


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-14 Thread Jiri Tomasek

On 06/13/2016 03:51 PM, Ben Nemec wrote:

On 06/08/2016 07:00 AM, Jiri Tomasek wrote:

On Wed, Jun 8, 2016 at 11:23 AM, Steven Hardy <sha...@redhat.com> wrote:

 On Tue, Jun 07, 2016 at 04:53:12PM -0400, Zane Bitter wrote:
 > On 07/06/16 15:57, Jay Dobies wrote:
 > > >
 > > > 1. Now that we support passing un-merged environment files to heat,
 > > > it'd be
 > > > good to support an optional description key for environments,
 > >
 > > I've never understood why the environment file doesn't have a
 > > description field itself. Templates have descriptions, and IMO it makes
 > > sense for an environment to describe what its particular additions to
 > > the parameters/registry do.
 >
 > Just use a comment?

 This doesn't work for any of the TripleO use-cases because you can't
 parse
 a comment.

 The requirements are twofold:

 1. Prior to creating the stack, we need a way to present choices to the
 user about which environment files to enable.  This is made much
 easier if
 you can include a human-readable description about what the environment
 actually does.

 2. After creating the stack, we need a way to easily introspect the
 stack
 and see what environments were enabled.  Same as above, it's be
 super-awesome if we could just then strip out the description of
 what they
 do, so we don't have to maintain hacks like this:

 
https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml

 The description is one potentially easy-win here, it just makes far more
 sense to keep the description of a thing inside the same file (just
 like we
 do already with HOT templates).

 The next step beyond that is the need to express dependencies between
 things, which is what I was trying to address via the
 https://review.openstack.org/#/c/196656/ spec - that kinda stalled
 when it
 took 7 months to land so we'll probably need that capabilities_map
 for that
 unless we can revive that effort.

 > > I'd be happy to write that patch, but I wanted to first double check
 > > that there wasn't a big philosophical reason why it shouldn't have a
 > > description.
 >
 > There's not much point unless you're also adding an API to retrieve
 > environment files like Steve mentioned. Comments get stripped when the 
yaml
 > is parsed, but that's fairly academic if you don't have a way to get it 
out
 > again.

 Yup, I'm absolutely proposing we add an interface to retrieve the
 environment files (or, in fact, the entire stack files map, and a
 list of
 environment_files).

 Steve



Hi, thanks for bringing this topic up. The capabilities map provides several
pieces of information about environments. We definitely need to get rid of it
in favor of having Heat provide this from the environment file metadata.
How much additional work would it be to enable environments to provide more
metadata than just a description?

From the GUI point of view, an information structure such as the following
would be much appreciated:

environments/net-bond-with-vlans.yaml:

meta:
  label: Net Bond with Vlans
  description: >
    Configure each role to use a pair of bonded nics (nic2 and
    nic3) and configures an IP address on each relevant isolated network
    for each role. This option assumes use of Network Isolation.
  requires:
    - environments/network-isolation.yaml
    - overcloud-resource-registry-puppet.yaml
  alternatives:
    - environments/net-single-nic-with-vlans.yaml
  group:
    - network-configuration

Grouping of environments is a bit problematic. We could introduce
something like 'group' which could categorize the environments. The problem
is that each group would eventually require its own entity to cover the group
label and description.

This is why I actually don't think grouping information belongs in the
environment files at all.  I left some related thoughts in a response to
Steve on https://review.openstack.org/#/c/253638/ but mostly it boils
down to the fact that the group metadata is at a different level from
the environments so putting it in the environment is a bad fit.

Note that the same applies to alternatives.  Putting requirements in the
environments makes perfect sense, but making them be aware of all their
siblings too gets messy (consider that if we add a single new network
environment now all of the existing environments would have to be
updated as well).


Yeah, makes perfect sense. An alternative solution could be that we
maintain groups in a separate meta file such as the capabilities map, which is
tied to a repository and maps the files in it. I am aware that it
brings additional maintenance to the repo, but the difference to
cap

[openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Jiri Tomasek

Hi all,

As we are close to merging the initial Nodes Registration workflows and
action [1, 2] using Mistral, which successfully provide the current
registration logic via a common API, I'd like to start a discussion on how
to improve it so it satisfies GUI and CLI requirements. I'd like to try
to describe the clients' goals and define requirements, describe the current
workflow problems and propose a solution. I'd like to record the result
of the discussion in the Blueprint [3] which Ryan already created.



CLI goals and optimal workflow
==============================


The CLI's main benefit is based on the fact that its commands can simply
become part of a script, so it is important that the operation is
idempotent. The optimal CLI workflow is:


The user runs 'openstack baremetal import' and provides an instackenv.json
file which includes all nodes' information. When the registration fails at
some point, the user is notified about the error and re-runs the command
with the same set of nodes. Rinse and repeat until all nodes are
properly registered.
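
For reference, each entry in instackenv.json is roughly of this shape
(a from-memory sketch, the exact set of supported fields may differ):

{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "pm_addr": "192.168.24.10",
      "pm_user": "admin",
      "pm_password": "secret",
      "mac": ["00:11:22:33:44:55"],
      "cpu": "4",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}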



GUI goals and optimal workflow
==============================

The GUI's main goal is to provide a user friendly way to register nodes,
inform the user about the process, handle problems and let the user fix
them. The GUI strives to be responsive and interactive.


The GUI allows the user to add node specifications manually one by one via
the provided form, or (in the same manner as the CLI) to provide the
instackenv.json file which holds the nodes' description. Importing the
file (or adding a node manually) will populate an array of nodes the user
wants to register. The user is able to browse these nodes and make
corrections to their configuration. The GUI provides client-side validations
to verify inputs (node name format, required fields, mac address format,
ip address format etc.)


Then the user triggers the registration. The nodes are moved to the nodes
table as they are being registered. If an error occurs during registration
of any of the nodes, the user is notified about the issue, can fix it in the
registration form and can re-trigger registration for the failed nodes.
Rinse and repeat until all nodes are successfully registered and in the
proper state (manageable).


Such a workflow keeps the GUI interactive; the user does not have to look at
a spinner for several minutes (in case of registering hundreds of
nodes), hoping that nothing goes wrong. The user is constantly
informed about the progress, is able to react to the situation as
he wants, and is able to freely interact with the GUI while
registration is happening in the background. The user is also able to
register nodes in batches.



Current solution
================

The current solution uses the register_or_update_nodes workflow [1], which
takes a nodes_json array and runs the register_or_update_nodes and
set_nodes_managed tasks. When the whole operation completes it sends a
Zaqar message notifying about the result of the registration of the
whole batch of nodes.


register_or_update_nodes runs the tripleo.register_or_update_nodes action
[2], which uses the business logic in tripleo_common/utils/nodes.py


The utils/nodes.py module was originally extracted from tripleoclient
to get the business logic behind the common API. It does the following:


- converts the instackenv.json nodes format to the appropriate ironic driver
format (driver-info fields; see the sketch below)

- sets kernel and ramdisk id defaults if they're not provided
- for each node, it tests whether the node already exists (finds nodes by mac
addresses) and updates it or registers it as new based on the result.
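
For the pxe_ipmitool driver, for example, that conversion roughly means
ending up with ironic node fields like these (a sketch of the mapping,
not the exact code):

driver: pxe_ipmitool
driver_info:
  ipmi_address: 192.168.24.10    # from pm_addr
  ipmi_username: admin           # from pm_user
  ipmi_password: secret          # from pm_password
  deploy_kernel: bm-deploy-kernel-image-id     # defaulted if not provided
  deploy_ramdisk: bm-deploy-ramdisk-image-id   # defaulted if not provided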



Current Problems:
- no zaqar notification is sent for each node
- nodes are registered in a batch; registration fails when an error happens
on a certain node, leaving already registered nodes in an inconsistent state
- the workflow does not notify the user about which nodes have been registered
and which failed; the only thing the user gets is the relevant error message
- when the workflow succeeds, the registered_nodes list sent by the Zaqar
message has outdated information
- when nodes are updated using nodes registration, the workflow ends up
as failed, without any error output, although the nodes are updated
successfully


- utils/nodes.py decides whether the node should be created or updated
based on mac address, which is subject to change. It needs to be done by
UUID, which is fixed.
- utils/nodes.py uses the instackenv.json nodes list format - the conversion
should be done in the client


- instackenv.json uses a nodes list format which is not compatible with
ironic, which forces us to do the format conversions and limits the ironic
driver support



Proposed changes
================

To satisfy the clients' requirements we need to:
- assure the idempotency of running the nodes registration when providing
the instackenv.json

- enable the workflow to track each node's registration separately


The changes can be done in 2 steps:
1. refactor register_or_update_nodes workflow and utils/nodes.py

- register_or_update_nodes workflow calls 

Re: [openstack-dev] [tripleo] Zaqar messages standardization

2016-05-26 Thread Jiri Tomasek

On 05/25/2016 08:08 PM, Thomas Herve wrote:

On Fri, May 20, 2016 at 5:52 PM, Jiri Tomasek <jtoma...@redhat.com> wrote:

Hey all,

I've been recently working on getting the TripleO UI integrated with Zaqar,
so it can receive messages from Mistral workflows and act upon them
without having to do various polling hacks.

Since there is currently quite a large number of new TripleO workflows
coming to tripleo-common, we need to standardize this communication so
clients can consume the messages consistently.

I'll try to outline the requirements as I see them to start the discussion.

Zaqar queues:
To listen to the Zaqar messages, the client has to connect to the Zaqar
WebSocket, send an authenticate message and subscribe to the queue(s) it
wants to listen to. The currently pending workflow patches which send Zaqar
messages [1, 2] expect that the queue is created by the client and its name
is passed as an input to the workflow [3].

From the client perspective, it would IMHO be better if all workflows sent
messages to the same queue and provided means to identify themselves by
carrying the workflow name and execution id. The reason is that if the client
creates a queue, triggers the workflow and then disconnects from the socket
(the user refreshes the browser), it does not know what queues it previously
created and which it should listen to. If there is a single 'tripleo' queue,
then all clients always know that that is where they will get all the
messages from.

Message identification and content:
The client should be able to identify a message by its name so it can act
upon it. The name should probably be relevant to the action or workflow it
reports on.

{
  body: {
    name: 'tripleo.validations.v1.run_validation',
    execution_id: '123123123',
    data: {}
  }
}

Other parts of the message are optional but it would be good to provide
information relevant to the message's purpose, so the client can update
the relevant state and does not have to do any additional API calls. So e.g.
in the case of running a validation, the message includes the validation id.

Hi,

Sorry for not responding earlier, but I have some inputs. In Heat we
publish events on Zaqar queue, and we defined this format:

 {
 'timestamp': $timestamp,
 'version': '0.1',
 'type': 'os.heat.event',
 'id': $uuid,
 'payload': {
 'XXX
 }
 }


Thanks, it totally makes sense. So when I convert my example to your 
usage it looks like this:


{
  body: {
    'timestamp': $timestamp,
    'type': 'tripleo.validations.v1.run_validation',
    'id': $uuid,
    'payload': {
      'execution_id': '123123123',
      'validation_id': '123321'
      ...
    }
  }
}

I am not sure whether to separate the version from the type, as it would 
become complicated to reconstruct the workflow name (at least for 
TripleO workflows).
The most important part is the 'type', as that is the key which we'd like 
to use on the client to identify what action to take.




I don't think we have strong requirements on that, and we can
certainly make some tweaks. If we can converge towards something
simimar that'd be great.

Thanks,



Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zaqar messages standardization

2016-06-17 Thread Jiri Tomasek

On 05/26/2016 12:18 PM, Thomas Herve wrote:

On Thu, May 26, 2016 at 11:48 AM, Jiri Tomasek <jtoma...@redhat.com> wrote:

On 05/25/2016 08:08 PM, Thomas Herve wrote:

Sorry for not responding earlier, but I have some inputs. In Heat we
publish events on Zaqar queue, and we defined this format:

  {
  'timestamp': $timestamp,
  'version': '0.1',
  'type': 'os.heat.event',
  'id': $uuid,
  'payload': {
  'XXX
  }
  }


Thanks, it totally makes sense. So when I convert my example to your usage
it looks like this:

{
 body: {
 'timestamp': $timestamp,
 'type': 'tripleo.validations.v1.run_validation',
 'id': $uuid,
 'payload': {
 execution_id: '123123123',
 validation_id: '123321'
 ...
  }
 }
}

I am not sure whether to separate the version from type as it would become
complicated to reconstruct the workflow name (at least for tripleo
workflows).
The most important is the 'type' as that is the key which we'd like to use
on client to identify what action to take.

Looks great to me, thanks!



So as the workflows start to shape up and we start to use Zaqar messages 
in the clients, this is the Mistral task we use to send a message:


send_message:
  action: zaqar.queue_post
  input:
    queue_name: <% $.queue_name %>
    messages:
      body:
        type: tripleo.baremetal.v1.register_or_update
        execution_id: <% execution().id %>
        payload:
          status: <% $.get('status', 'SUCCESS') %>
          message: <% $.get('message', '') %>
          registered_nodes: <% $.registered_nodes or [] %>

I am coming to the conclusion that instead of passing just the execution 
ID, it would be much more beneficial to send the whole execution object, 
because:
1. the client would otherwise have to fire an additional request to get 
the execution info
2. the execution object itself often includes most of the information 
which this task explicitly includes in the payload
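
A minimal sketch of how the task could look with the whole object passed - 
assuming the result of the Mistral execution() YAQL function can simply be 
serialized into the message body (an illustration only, not a final patch):

send_message:
  action: zaqar.queue_post
  input:
    queue_name: <% $.queue_name %>
    messages:
      body:
        type: tripleo.baremetal.v1.register_or_update
        # whole execution object instead of just the id
        execution: <% execution() %>
        payload:
          status: <% $.get('status', 'SUCCESS') %>
          message: <% $.get('message', '') %>
          registered_nodes: <% $.registered_nodes or [] %>

The client could then read the execution id and the rest of the execution 
data from the message without an extra Mistral API call.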




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-26 Thread Jiri Tomasek

On 01/14/2016 12:54 PM, Steven Hardy wrote:

On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:

Hey all,

I realize now from the title of the other TripleO/Mistral thread [1] that
the discussion there may have gotten confused.  I think using Mistral for
TripleO processes that are obviously workflows - stack deployment, node
registration - makes perfect sense.  That thread is exploring practicalities
for doing that, and I think that's great work.

What I inappropriately started to address in that thread was a somewhat
orthogonal point that Dan asked in his original email, namely:

"what it might look like if we were to use Mistral as a replacement for the
TripleO API entirely"

I'd like to create this thread to talk about that; more of a 'should we'
than 'can we'.  And to do that, I want to indulge in a thought exercise
stemming from an IRC discussion with Dan and others.  All, please correct me
if I've misstated anything.

The IRC discussion revolved around one use case: deploying a Heat stack
directly from a Swift container.  With an updated patch, the Heat CLI can
support this functionality natively.  Then we don't need a TripleO API; we
can use Mistral to access that functionality, and we're done, with no need
for additional code within TripleO.  And, as I understand it, that's the
true motivation for using Mistral instead of a TripleO API: avoiding custom
code within TripleO.

That's definitely a worthy goal... except from my perspective, the story
doesn't quite end there.  A GUI needs additional functionality, which boils
down to: understanding the Heat deployment templates in order to provide
options for a user; and persisting those options within a Heat environment
file.

Right away I think we hit a problem.  Where does the code for 'understanding
options' go?  Much of that understanding comes from the capabilities map
in tripleo-heat-templates [2]; it would make sense to me that responsibility
for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code.  So to give API
access to 'getDeploymentOptions', we can create a Mistral workflow.

   Retrieve Heat templates from Swift -> Parse capabilities map

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between storage and
business logic, a problem that is compounded because 'getDeploymentOptions'
is not the only functionality that accesses the Heat templates and needs
exposure through an API.  And, as has been discussed on a separate TripleO
thread, we're not even sure Swift is sufficient for our needs; one possible
consideration right now is allowing deployment from templates stored in
multiple places, such as the file system or git.

Actually, that whole capabilities map thing is a workaround for a missing
feature in Heat, which I have proposed, but am having a hard time reaching
consensus on within the Heat community:

https://review.openstack.org/#/c/196656/

Given that is a large part of what's anticipated to be provided by the
proposed TripleO API, I'd welcome feedback and collaboration so we can move
that forward, vs solving only for TripleO.


Yes, the original intent was to provide the user with a means to safely 
construct the deployment template tree, which IIUC is what this proposed 
feature provides. Then, in the process of figuring out how to provide the 
user with reasonable choices for designing the deployment, I realized 
that a more natural way is to provide the choices at the environments 
level (which is what we already do in THT - provide the list of 
alternative/complementary environments). The capabilities map provides a 
list of environments which the user is able to choose from when designing 
the deployment, supporting the choice with an environment description and 
information about whether the environment is required, whether it depends 
on the use of another environment, or whether the environment is one of 
several mutually exclusive choices.


So in addition to what the mentioned spec defines, to implement a 
replacement for capabilities_map in Heat, we'd need a way to specify an 
environment description, dependencies on other environments, mutual 
exclusivity with other environments etc. from within the environment 
itself. Heat could then return a response which resembles what 
capabilities_map provides now.


Maybe the current spec is sufficient, and based on the information that 
this spec brings, Heat would already be able to identify the 
inter-environment behavior by parsing the resource_registry in the 
environments? The only thing we'd need to add is a 'description' to the 
environment file.
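
As a purely hypothetical sketch (only the 'description' key is new here, 
the rest is an ordinary environment; the file paths and parameters are 
illustrative), a self-describing environment could look like:

# storage-environment.yaml
description: >
  Enable an external Ceph cluster as the backend for Cinder, Glance
  and Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephExternal: ../puppet/services/ceph-external.yaml
parameter_defaults:
  GlanceBackend: rbd

Heat could read such a key when listing the environments of a stack and 
hand the descriptions back to clients like the UI.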


-- Jirka




Are we going to have duplicate 'getDeploymentOptions' workflows for each
storage mechanism?  If we consolidate the storage code within a TripleO
library, do we really need a *workflow* to call a single function?  Is a
thin TripleO API that contains no additional business logic really so bad
at that point?

Actually, this is an argument 

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Jiri Tomasek

On 01/27/2016 03:36 PM, Dan Prince wrote:

On Wed, 2016-01-27 at 14:32 +0100, Jiri Tomasek wrote:

On 01/26/2016 09:05 PM, Ben Nemec wrote:

On 01/25/2016 04:36 PM, Dan Prince wrote:

On Mon, 2016-01-25 at 15:31 -0600, Ben Nemec wrote:

On 01/22/2016 06:19 PM, Dan Prince wrote:

On Fri, 2016-01-22 at 11:24 -0600, Ben Nemec wrote:

So I haven't weighed in on this yet, in part because I was
on
vacation
when it was first proposed and missed a lot of the initial
discussion,
and also because I wanted to take some time to order my
thoughts
on
it.
   Also because my initial reaction...was not conducive to
calm and
rational discussion. ;-)

The tldr is that I don't like it.  To explain why, I'm
going to
make
a
list (everyone loves lists, right? Top $NUMBER reasons we
should
stop
expecting other people to write our API for us):

1) We've been down this road before.  Except last time it
was
with
Heat.
   I'm being somewhat tongue-in-cheek here, but expecting a
general
service to provide us a user-friendly API for our specific
use
case
just
doesn't make sense to me.

We've been down this road with Heat yes. But we are currently
using
Heat for some things that we arguable should be (a workflows
tool
might
help offload some stuff out of Heat). Also we haven't
implemented
custom Heat resources for TripleO either. There are mixed
opinions
on
this but plugging in your code to a generic API is quite nice
sometimes.

That is the beauty of Mistral I think. Unlike Heat it
actually
encourages you to customize it with custom Python actions.
Anything
we
want in tripleo-common can become our own Mistral action
(these get
registered with stevedore entry points so we'd own the code)
and
the
YAML workflows just tie them together via tasks.

We don't have to go off and build our own proxy deployment
workflow
API. The structure to do just about anything we need already
exists
so
why not go and use it?


2) The TripleO API is not a workflow API.  I also largely
missed
this
discussion, but the TripleO API is a _Deployment_ API.  In
some
cases
there also happens to be a workflow going on behind the
scenes,
but
honestly that's not something I want our users to have to
care
about.

Agree that users don't have to care about this.

Users can get as involved as they want here. Most users I
think
will
use python-tripleoclient to drive the deployment or the new
UI.
They
don't have to interact with Mistral directly unless they
really
want
to. So whether we choose to build our own API or use a
generic one
I
think this point is mute.

Okay, I think this is a very fundamental point, and I believe
it gets
right to the heart of my objection to the proposed change.

When I hear you say that users will use tripleoclient to talk
to
Mistral, it raises a big flag.  Then I look at something like
https://github.com/dprince/python-tripleoclient/commit/77ffd2fa
7b1642
b9f05713ca30b8a27ec4b322b7
and the flag gets bigger.

The thing is that there's a whole bunch of business logic
currently
sitting in the client that shouldn't/can't be there.  There are
historical reasons for it, but the important thing is that the
current
client architecture is terribly flawed.  Business logic should
never
live in the client like it does today.

Totally agree here. In fact I have removed business logic from
python-
tripleoclient in this patch and moved it into a Mistral action.
Which
can then be used via a stable API from anywhere.


Looking at that change, I see a bunch of business logic around
taking
our configuration and passing it to Mistral.  In order for us
to do
something like that and have a sustainable GUI, that code _has_
to
live
behind an API that the GUI and CLI alike can call.  If we ask
the GUI
to
re-implement that code, then we're doomed to divergence between
the
CLI
and GUI code and we'll most likely end up back where we are
with a
GUI
that can't deploy half of our features because they were
implemented
solely with the CLI in mind and made assumptions the GUI can't
meet.

The latest feedback I've gotten from working with the UI
developers on
this was that we should have a workflow to create the
environment. That
would get called via the Mistral API via python-tripleoclient and
any
sort of UI you could imagine and would essentially give us a
stable
environment interface.

Anything that requires tripleoclient means !GUI though.  I know the
current GUI still has a bunch of dependencies on the CLI, but that
seems
like something we need to fix, not a pattern to repeat.  I still
think
any sentence containing "call Mistral via tripleoclient" is
indicative
of a problem in the design.

I am not sure I understand the argument here.

Regardless of which API we use (Mistral API or TripleO API) GUI is
going
to call the API and tripleoclient (CLI) is going to call the API
(through mistralclient - impl. detail).

GUI can't and does not call API through tripleoclient. This is why
the
work on extracting the common business logic to tripleo-common
happened.
So trip

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Jiri Tomasek

On 01/26/2016 09:05 PM, Ben Nemec wrote:

On 01/25/2016 04:36 PM, Dan Prince wrote:

On Mon, 2016-01-25 at 15:31 -0600, Ben Nemec wrote:

On 01/22/2016 06:19 PM, Dan Prince wrote:

On Fri, 2016-01-22 at 11:24 -0600, Ben Nemec wrote:

So I haven't weighed in on this yet, in part because I was on
vacation
when it was first proposed and missed a lot of the initial
discussion,
and also because I wanted to take some time to order my thoughts
on
it.
  Also because my initial reaction...was not conducive to calm and
rational discussion. ;-)

The tldr is that I don't like it.  To explain why, I'm going to
make
a
list (everyone loves lists, right? Top $NUMBER reasons we should
stop
expecting other people to write our API for us):

1) We've been down this road before.  Except last time it was
with
Heat.
  I'm being somewhat tongue-in-cheek here, but expecting a general
service to provide us a user-friendly API for our specific use
case
just
doesn't make sense to me.

We've been down this road with Heat yes. But we are currently using
Heat for some things that we arguable should be (a workflows tool
might
help offload some stuff out of Heat). Also we haven't implemented
custom Heat resources for TripleO either. There are mixed opinions
on
this but plugging in your code to a generic API is quite nice
sometimes.

That is the beauty of Mistral I think. Unlike Heat it actually
encourages you to customize it with custom Python actions. Anything
we
want in tripleo-common can become our own Mistral action (these get
registered with stevedore entry points so we'd own the code) and
the
YAML workflows just tie them together via tasks.

We don't have to go off and build our own proxy deployment workflow
API. The structure to do just about anything we need already exists
so
why not go and use it?


2) The TripleO API is not a workflow API.  I also largely missed
this
discussion, but the TripleO API is a _Deployment_ API.  In some
cases
there also happens to be a workflow going on behind the scenes,
but
honestly that's not something I want our users to have to care
about.

Agree that users don't have to care about this.

Users can get as involved as they want here. Most users I think
will
use python-tripleoclient to drive the deployment or the new UI.
They
don't have to interact with Mistral directly unless they really
want
to. So whether we choose to build our own API or use a generic one
I
think this point is mute.

Okay, I think this is a very fundamental point, and I believe it gets
right to the heart of my objection to the proposed change.

When I hear you say that users will use tripleoclient to talk to
Mistral, it raises a big flag.  Then I look at something like
https://github.com/dprince/python-tripleoclient/commit/77ffd2fa7b1642
b9f05713ca30b8a27ec4b322b7
and the flag gets bigger.

The thing is that there's a whole bunch of business logic currently
sitting in the client that shouldn't/can't be there.  There are
historical reasons for it, but the important thing is that the
current
client architecture is terribly flawed.  Business logic should never
live in the client like it does today.

Totally agree here. In fact I have removed business logic from python-
tripleoclient in this patch and moved it into a Mistral action. Which
can then be used via a stable API from anywhere.


Looking at that change, I see a bunch of business logic around taking
our configuration and passing it to Mistral.  In order for us to do
something like that and have a sustainable GUI, that code _has_ to
live
behind an API that the GUI and CLI alike can call.  If we ask the GUI
to
re-implement that code, then we're doomed to divergence between the
CLI
and GUI code and we'll most likely end up back where we are with a
GUI
that can't deploy half of our features because they were implemented
solely with the CLI in mind and made assumptions the GUI can't meet.

The latest feedback I've gotten from working with the UI developers on
this was that we should have a workflow to create the environment. That
would get called via the Mistral API via python-tripleoclient and any
sort of UI you could imagine and would essentially give us a stable
environment interface.

Anything that requires tripleoclient means !GUI though.  I know the
current GUI still has a bunch of dependencies on the CLI, but that seems
like something we need to fix, not a pattern to repeat.  I still think
any sentence containing "call Mistral via tripleoclient" is indicative
of a problem in the design.


I am not sure I understand the argument here.

Regardless of which API we use (Mistral API or TripleO API), the GUI is 
going to call the API and tripleoclient (the CLI) is going to call the 
API (through mistralclient - an implementation detail).


The GUI can't and does not call the API through tripleoclient. This is why 
the work on extracting the common business logic to tripleo-common 
happened. So tripleo-common is the place which holds the business logic.


The proposed API (in the spec) is supposed only to 

[openstack-dev] [TripleO] TripleO UI Demo

2016-04-22 Thread Jiri Tomasek

Hello all,

I've created a demo video which shows latest changes in TripleO UI. You 
can watch it here: https://youtu.be/1Lc04DKGxCg


-- Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zaqar messages standardization

2016-05-23 Thread Jiri Tomasek

On 05/23/2016 11:51 AM, Dougal Matthews wrote:



On 20 May 2016 at 18:39, Dan Prince <dpri...@redhat.com 
<mailto:dpri...@redhat.com>> wrote:


On Fri, 2016-05-20 at 17:52 +0200, Jiri Tomasek wrote:
> Hey all,
>
> I've been recently working on getting the TripleO UI integrated with
> Zaqar, so it can receive a messages from Mistral workflows and act
> upon them without having to do various polling hacks.
>
> Since there is currently quite a large amount of new TripleO
> workflows comming to tripleo-common, we need to standardize this
> communication so clients can consume the messages consistently.
>
> I'll try to outline the requirements as I see it to start the
> discussion.
>
> Zaqar queues:
> To listen to the Zaqar messages it requires the client to connect to
> Zaqar WebSocket, send authenticate message and subscribe to queue(s)
> which it wants to listen to. The currently pending workflow patches
> which send Zaqar messages [1, 2] expect that the queue is created by
> client and name is passed as an input to the workflow [3].
>
> From the client perspective, it would IMHO be better if all
workflows
> sent messages to the same queue and provide means to identify itself
> by carrying workflow name and execution id. The reason is, that if
> client creates a queue and triggers the workflow and then
disconnects
> from the Socket (user refreshes browser), then it does not know what
> queues it previously created and which it should listen to. If there
> is single 'tripleo' queue, then all clients always know that it is
> where it will get all the messages from.

I think each workflow that supports queue messages (probably most of
them) should probably allow to set your own queue_name that will get
messages posted to it. Then it would simply be a convention that the
client simply pass the same queue name to any concurrent workflows
that
are executed.

The single queue -> multiple workflows use case is however
important to
support for the UI so adding the execution_id and fully qualified
workflow name to each queue message should allow both patterns to work
fine.

And while the queue name is configurable perhaps we default it to
'tripleo' so that you really don't have to set it anywhere unless you
really want to.

If you buy this I can update the patches linked below per the latest
feedback.


+1, I like this approach.

Sounds good to me too. Thanks!



Dan


>
> Messages identification and content:
> The client should be able to identify message by it's name so it can
> act upon it. The name should probably be relevant to the action or
> workflow it reports on.
>
> {
>   body: {
> name: 'tripleo.validations.v1.run_validation,
> execution_id: '123123123'
> data: {}
>   }
> }
>
> Other parts of the message are optional but it would be good to
> provide information relevant to the message's purpose, so the client
> can update relevant state and does not have to do any additional API
> calls. So e.g. in case of running the validation a message includes
> validation id.
>
>
> [1]
https://review.openstack.org/#/c/313953/2/workbooks/deployment.ya
> ml
> [2]
https://review.openstack.org/#/c/313632/8/workbooks/validations.y
> aml
> [3]
https://review.openstack.org/#/c/313957/1/tripleoclient/v1/overcl
> oud_execute.py
>
> -- Jirka
>
_
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubs
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubs>
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- Jirka
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Zaqar messages standardization

2016-05-20 Thread Jiri Tomasek
Hey all,

I've been recently working on getting the TripleO UI integrated with Zaqar,
so it can receive messages from Mistral workflows and act upon them
without having to do various polling hacks.

Since there is currently quite a large number of new TripleO workflows
coming to tripleo-common, we need to standardize this communication so
clients can consume the messages consistently.

I'll try to outline the requirements as I see them to start the discussion.

Zaqar queues:
To listen to the Zaqar messages, the client has to connect to the Zaqar
WebSocket, send an authentication message and subscribe to the queue(s) it
wants to listen to. The currently pending workflow patches which send Zaqar
messages [1, 2] expect that the queue is created by the client and its name
is passed as an input to the workflow [3].

From the client perspective, it would IMHO be better if all workflows sent
messages to the same queue and provided means to identify themselves by
carrying the workflow name and execution id. The reason is that if the
client creates a queue, triggers the workflow and then disconnects from the
socket (user refreshes the browser), it does not know which queues it
previously created and which it should listen to. If there is a single
'tripleo' queue, then all clients always know that that is where they will
get all the messages from.

Messages identification and content:
The client should be able to identify a message by its name so it can act
upon it. The name should probably be relevant to the action or workflow it
reports on.

{
  body: {
    name: 'tripleo.validations.v1.run_validation',
    execution_id: '123123123',
    data: {}
  }
}

Other parts of the message are optional, but it would be good to provide
information relevant to the message's purpose, so the client can update the
relevant state and does not have to do any additional API calls. So e.g. in
the case of running a validation, the message includes the validation id.


[1] https://review.openstack.org/#/c/313953/2/workbooks/deployment.yaml
[2] https://review.openstack.org/#/c/313632/8/workbooks/validations.yaml
[3]
https://review.openstack.org/#/c/313957/1/tripleoclient/v1/overcloud_execute.py

-- Jirka
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-08-01 Thread Jiri Tomasek



On 27.7.2016 15:18, Steven Hardy wrote:

On Wed, Jul 27, 2016 at 08:41:32AM -0300, Honza Pokorny wrote:

Hello folks,

As the tripleo-ui project is quickly maturing, it might be time to start
versioning our code.  As of now, the version is set to 0.0.1 and that
hardly reflects the state of the project.

What do you think?

I would like to see it released as part of the coordinated tripleo release,
e.g tagged each milestone along with all other projects where we assert the
release:cycle-with-intermediary tag:

https://github.com/openstack/governance/blob/master/reference/projects.yaml#L4448

Because tripleo-ui isn't yet fully integrated with TripleO (e.g packaging,
undercloud installation and CI testing), we've not tagged it in the last
two milestone releases, but perhaps we can for the n-3 release?

https://review.openstack.org/#/c/324489/

https://review.openstack.org/#/c/340350/

When we do that, the versioning will align with all other TripleO
deliverables, solving the problem of the 0.0.1 version?

The steps to achieve this are:

1. Get per-commit builds of tripleo-ui working via delorean-current:

https://trunk.rdoproject.org/centos7-master/current/

2. Get the tripleo-ui package installed and configured as part of the
undercloud install (via puppet) - we might want to add a conditional to the
undercloud.conf so it's configurable (enabled by default?)

https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.pp

3. Get the remaining Mistral API pieces landed so it's fully functional

4. Implement some basic CI smoke tests to ensure the UI is at least
accessible.

Does that sequence make sense, or have I missed something?
Makes perfect sense. Here is the launchpad link that tracks undercloud 
integration of GUI 
https://blueprints.launchpad.net/tripleo-ui/+spec/instack-undercloud-ui-config


Jirka



Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Deployment plan management efforts sync up

2017-02-02 Thread Jiri Tomasek

Hello all,

there have been several ongoing efforts in TripleO regarding Deployment 
Plans management and Deployment configuration itself. A lot of this work 
is done to satisfy certain individual requirements, but I think some 
further discussion needs to happen to make sure the solutions we create 
are effective for all parts of TripleO.


There are several goals / features we currently aim for:
- Define and manage custom Roles (GUI/CLI)
- Define and manage networks (CLI/GUI)
- Import/Export Deployment plans so it is possible to reuse them or use 
them as a reference/starting point


Currently the Deployment plan stored in Swift consists of:
- tripleo heat templates
- roles_data.yaml - meta file used as an input to define roles (and 
assign networks to roles [1])

- network_data.yaml - meta file used as an input to define networks [2]
- capabilities-map.yaml - meta file to describe THT environment files' 
capabilities
- mistral environment - JSON structure in Mistral which we use as a 
backend store accessible by Mistral actions and workflows (tripleo-common)


Currently, the only possibility to configure roles and networks is by 
creating or updating the plan with changed meta files. We need to create 
Mistral actions to handle manipulating roles and networks so the GUI (also 
the CLI) can retrieve the current roles/networks configuration and update 
it, which in turn will regenerate the related templates. Now, the question 
arises: do we want to use roles_data.yaml in the Swift container as a 
storage for this information? I thought we agreed on using the Mistral 
environment to store plan-related data. (See here for additional context 
[3].)


This means that on plan creation, we use roles_data.yaml (and 
network_data.yaml etc.) to populate the Mistral environment and generate 
templates using this data. roles_data.yaml (and the others) then needs to 
be discarded because from this point on, the data will be updated in the 
Mistral environment through tripleo-common actions. roles_data.yaml is 
therefore used just as a default which is applied when the plan is created 
(or updated).


Now, plan export comes into play [4]. We want to be able to pull down 
the plan and deploy it using the CLI directly (which creates/updates the 
plan as part of the deploy command). We want to be able to reuse the plan 
in other deployments or use it as a reference architecture for subsequent 
deployments. This means we don't only want to download the contents of the 
Swift container, but also the configuration stored in the Mistral 
environment.

So the plan export action pulls down the files from the Swift container and 
adds the meta files: roles_data.yaml, network_data.yaml... which it 
populates by looking at the appropriate keys in the Mistral environment, 
plus a plan_environment.yaml which includes the remaining data from the 
Mistral environment such as parameter values, environments etc. All of 
those are then returned in a single tarball. A question to consider is 
whether to split all this data into separate files or keep it in a single 
one.
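
To make this more concrete, a rough sketch of what the exported 
plan_environment.yaml could carry - the key names and values are only 
illustrative, the final structure is exactly what we need to agree on:

# plan_environment.yaml (illustrative only)
name: overcloud
template: overcloud.yaml
environments:
  - path: environments/network-isolation.yaml
  - path: environments/storage-environment.yaml
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 2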


Plan Import [5] then allows providing the plan_environment.yaml to enable 
populating parameter values and environment selection during plan creation.



An alternative solution is to store the meta files (roles_data.yaml, 
network_data.yaml...) in the Swift container and use those files as a data 
store, which IMHO does not comply with the decision to use the Mistral 
environment as the plan data store. In that case we should probably get 
rid of using the Mistral environment altogether and use a 
plan_environment.yaml file in the Swift container to store the data which 
we currently store in the Mistral environment. I am quite convinced that 
there have been good reasons not to do this and to use the Mistral 
environment.



[1] https://review.openstack.org/#/c/409920/
[2] https://review.openstack.org/#/c/409921/
[3] https://review.openstack.org/#/c/409921/
[4] 
http://specs.openstack.org/openstack/tripleo-specs/specs/ocata/gui-plan-import-export.html#problem-description
[5] 
https://blueprints.launchpad.net/tripleo/+spec/enhance-plan-creation-with-plan-environment-json



-- Jirka


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Deployment plan management efforts sync up

2017-02-02 Thread Jiri Tomasek



On 2.2.2017 13:57, Ana Krivokapic wrote:



On Thu, Feb 2, 2017 at 1:46 PM, Emilien Macchi <emil...@redhat.com 
<mailto:emil...@redhat.com>> wrote:


On Thu, Feb 2, 2017 at 6:56 AM, Jiri Tomasek <jtoma...@redhat.com
<mailto:jtoma...@redhat.com>> wrote:
> Hello all,
>
> there has been several ongoing efforts in TripleO regarding
Deployment Plans
> management and Deployment configuration itself. A lot of this
work is done
> to satisfy certain individual requirements but I think some further
> discussion needs to happen to make sure the solutions we create are
> effective for all parts of TripleO.
>
> There are several goals / features we currently aim for:
> - Define and manage custom Roles (GUI/CLI)
> - Define and manage networks (CLI/GUI)
> - Import/Export Deployment plans so it is possible to reuse them
or use them
> as a reference/starting point
>
> Currently the Deployment plan stored in Swift consist of:
> - tripleo heat templates
> - roles_data.yaml - meta file used as an input to define roles
(and assign
> networks to roles [1])
> - network_data.yaml - meta file used as input to define networks [2]
> - capabilities-map.yaml - meta file to describe THT environment
files
> capabilities
> - mistral environment - json structure in Mistral which we use
as a backend
> store accessible by mistral actions and workflows (tripleo-common)
>
> Currently, only possibility to configure roles and networks is
by creating
> or updating the plan with changed meta files. We need to create
Mistral
> actions to handle manipulating roles and networks so GUI (also
CLI) can
> retrieve current roles/networks configuration and update it,
which in turn
> will regenerate related templates. Now, the question arises: Do
we want to
> use roles_data.yaml in Swift container as a storage for this
information? I
> thought we agreed on using Mistral environment to store plan
related data.
> (See here for additional context [3] )
>
> This means that on plan creation, we use roles_data.yaml (and
> network_data.yaml etc.) to populate Mistral environment and generate
> templates using this data. roles_data.yaml (and others) then
need to be
> discarded because from this point on, the data will be updated
in Mistral
> environment through tripleo-common actions. roles_data.yaml is
therefore
> used just as a default which is used when plan is created (or
updated).
>
> Now, plan export comes into play [4]. We want to be able to pull
down the
> plan and deploy it using CLI directly (which creates/updates
plan as far as
> during deploy command), We want to be able to reuse the plan in
other
> deployments or use it as a reference architecture for subsequent
> deployments. This means we don't only want to download the
contents of Swift
> container, but also configuration stored in Mistral environment.
>
> So Plan export action pulls down files from Swift container,
adds meta
> files: roles_data.yaml, network_data.yaml... which it populates
by looking
> at appropriate keys in Mistral environment +
plan_environment.yaml which
> includes remaining data from Mistral environment such as
parameter values,
> environments etc. All of those are then returned in a single
tarball.
> Question to consider is whether to split all this data into
separate files
> or keep it in single one.
>
> Plan Import [5] then allows to provide plan_environment.yaml to
enable
> populating parameter values and environments selection during
plan creation.

For which cycle would you target this blueprint?
We need to update it accordingly:

https://blueprints.launchpad.net/tripleo/+spec/enhance-plan-creation-with-plan-environment-json

<https://blueprints.launchpad.net/tripleo/+spec/enhance-plan-creation-with-plan-environment-json>


I think it should be targeted for Pike 1.

+1




>
> Alternative solution is to store meta files (roles_data.yaml,
> network_data.yaml...) in Swift container and use those files as
a data store
> which IMHO does not comply with decision to use Mistral
environment as a
> plan data store. In that case we should probably get rid of
using Mistral
> environment altogether and use plan_environment.yaml file in
Swift container
> to store data which we currently store in Mistral environment. I
am quite
> convinced that there have been good reasons to not to do this
and use
> Mistral environm

Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Jiri Tomasek

+1


On 23.1.2017 20:03, Emilien Macchi wrote:

Greeting folks,

I would like to propose some changes in our core members:

- Remove Jay Dobies who has not been active in TripleO for a while
(thanks Jay for your hard work!).
- Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
docker bits.
- Add Steve Backer on os-collect-config and also docker bits in
tripleo-common and tripleo-heat-templates.

Indeed, both Flavio and Steve have been involved in deploying TripleO
in containers, their contributions are very valuable. I would like to
encourage them to keep doing more reviews in and out container bits.

As usual, core members are welcome to vote on the changes.

Thanks,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui

2017-01-25 Thread Jiri Tomasek

+1, Nice!


On 24.1.2017 14:52, Emilien Macchi wrote:

I have been discussed with TripleO UI core reviewers and it's pretty
clear Honza's work has been valuable so we can propose him part of
Tripleo UI core team.
His quality of code and reviews make him a good candidate and it would
also help the other 2 core reviewers to accelerate the review process
in UI component.

Like usual, this is open for discussion, Tripleo UI core and TripleO
core, please vote.

Thanks,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2017-01-15 Thread Jiri Tomasek



On 8.6.2016 11:15, Steven Hardy wrote:

On Tue, Jun 07, 2016 at 03:57:31PM -0400, Jay Dobies wrote:

All,

We've got some requirements around adding some interfaces to the heat
environment file format, for example:

1. Now that we support passing un-merged environment files to heat, it'd be
good to support an optional description key for environments,

I've never understood why the environment file doesn't have a description
field itself. Templates have descriptions, and IMO it makes sense for an
environment to describe what its particular additions to the
parameters/registry do.

I'd be happy to write that patch, but I wanted to first double check that
there wasn't a big philosophical reason why it shouldn't have a description.

AFAIK there are two reasons:

1. Until your recent work landed, any description would be destroyed by the
client when it merged the environments

2. We've got no way to retrieve the environment descriptions from heat (as
Zane mentioned in his reply).

I'm suggesting we fix (2) as a followup step to your work to add an API
that returns the merged environment, e.g add an API that returns the files
map associated with a stack, and one that can list the environments in use
(not just the resolved/merged environment).


such that we
could add an API (in addition to the one added by jdob to retrieve the
merged environment for a running stack) that can retrieve
all-the-environments and we can easily tell which one does what (e.g to
display in a UI perhaps)

I'm not sure I follow. Are you saying the API would return the list of
descriptions, or the actual contents of each environment file that was
passed in?

The actual contents, either by passing a list of environment filenames, and
providing another API that can return the files map containing the files,
or by having one API call that can return a map of filenames to content for
all environments passed via environment_files.

Basically, I think we should expose all data as passed to create_stack in
it's original form, and (as you already added) in it's post-processed form
e.g the merged environment.


Currently, the environment is merged before we do anything with it. We'd
have to change that to store... I'm not entirely sure. Multiple environments
in the DB per stack? Is there a raw_environment in the DB that we would
leverage?

We just need to store the environment_files list - we already store the
environment files in the files map

https://review.openstack.org/#/c/241662/17/heat/engine/service.py

So, we need to store environment_files as well as the output of
_merge_environments, then add some sort of API to expose both that list and
the files map.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I'd like to revive this discussion. Fairly often an environment is used 
to define a certain feature. It would be really beneficial if the 
environment itself could carry its documentation and it was possible to 
programmatically retrieve it. An optional metadata section would IMHO be a 
good solution. The HOT spec already has such a section in the template 
resources section [1].
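
A hypothetical example of what such a metadata section in an environment 
file might look like - the 'metadata' section and its keys do not exist in 
Heat today and are made up purely for illustration:

# environments/enable-tls.yaml (hypothetical metadata section)
metadata:
  description: Enable TLS on the public endpoints of the overcloud.
  requires:
    - environments/tls-endpoints-public-ip.yaml
resource_registry:
  OS::TripleO::NodeTLSData: ../puppet/extraconfig/tls/tls-cert-inject.yaml

Clients could then list the environments of a stack and show the 
description and dependency information without any out-of-band mapping 
file.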


[1] 
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#resources-section


-- Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][UI] Port number for frontend app

2016-08-24 Thread Jiri Tomasek



On 22.8.2016 08:08, Honza Pokorny wrote:

Hello folks,

We've been using port 3000 for the GUI during development and testing.
Now that we're working on packaging and shipping our code, we're
wondering if port 3000 is still the best choice.

Would 3000 conflict with any other services?  Is there a better option?

Thanks

Honza Pokorny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


It would probably be nice to run it on port 80. Not sure if it will 
collide with anything in undercloud. Horizon maybe?


-- Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] TripleO-UI status for TripleO RC1

2016-09-06 Thread Jiri Tomasek

Hey all,

here is a summary of TripleO-UI related TripleO RC1 work:

Tripleo-common patches required by TripleO-UI:

- roles listing: https://review.openstack.org/#/c/330283/
- validations run_groups workflow fix: 
https://review.openstack.org/#/c/366055/
- Include environments which aren't specified in capabilities-map in 
capabilities output: https://review.openstack.org/#/c/355598/
- wire in jinja templating: https://review.openstack.org/#/c/362465/ + 
any other custom roles patches or patches which clean up the parameters 
defined in overcloud.yaml

- set deployment parameters  https://review.openstack.org/365625

Tripleo-heat-templates patches required by TripleO-UI:

- capabilities map update: https://review.openstack.org/#/c/364842/

Instack-undercloud patches required by TripleO-UI:

- enable_ui patch:  https://review.openstack.org/#/c/344140/ (+ any 
dependent puppet-* patches) - this needs to get tested and verified that 
all requirements specified in 
https://blueprints.launchpad.net/tripleo-ui/+spec/instack-undercloud-ui-config 
are fulfilled


Mistral patches required by TripleO-UI:

- make execution output optionally part of executions listing: 
https://review.openstack.org/#/c/364446/ (MERGED, the Mistral package with 
this patch needs to be installed with TripleO RC1)



TripleO RC1 TripleO-UI work is tracked in the TripleO-UI Launchpad, marked 
as the RC1 series: https://blueprints.launchpad.net/tripleo-ui/rc1 . We're 
making good progress so far, considering that the work on those items 
started on Thursday last week.
Patches waiting for review are available here: 
https://review.openstack.org/#/q/project:openstack/tripleo-ui


We need to get the backend dependencies merged as soon as possible, so 
the work which depends on them can be implemented in a relatively stable 
environment.



Thanks

-- Jirka


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Julie Pichon for tripleo core

2016-11-23 Thread Jiri Tomasek



On 22.11.2016 18:01, Dougal Matthews wrote:

Hi all,

I would like to propose we add Julie (jpich) to the TripleO core team 
for python-tripleoclient and tripleo-common. This nomination is based 
partially on review stats[1] and also my experience with her reviews 
and contributions.


Julie has consistently provided thoughtful and detailed reviews since 
the start of the Newton cycle. She has made a number of contributions 
which improve the CLI and has been extremely helpful with other tasks 
that don't often get enough attention (backports, bug 
triaging/reporting and improving our processes[2]).


I think she will be a valuable addition to the review team

Dougal


[1]: http://stackalytics.com/report/contribution/tripleo-group/90
[2]: https://review.openstack.org/#/c/352852/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1!

-- Jirka
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Network Configuration in TripleO UI

2016-12-08 Thread Jiri Tomasek

Hi all,

I've been investigating how to implement TripleO network configuration 
in TripleO UI. Based on my findings I'd like to propose a solution.


tl;dr proposal: slightly refactor the network environment files to match 
GUI usage, and use Jinja templating to generate the dynamic parts of the 
templates/environments



# Overview

I've used Ben Nemec's amazing Network template generator as a reference 
to help me understand how the network configuration works [1]. In 
general the process of configuring the network in TripleO is:


Define which Networks we intend to use -> Assign Roles to the Networks 
(+ Assign Role Services to the Network) -> Generate NIC config templates 
based on previous information



# Deeper dive into templates

We currently have 2 environment files in THT [2] which define network 
configuration:


network-environment.yaml [3] - holds the information on NIC 
configuration for each Role using the 
OS::TripleO::<Role>::Net::SoftwareConfig resource + related 
parameter configuration


network-isolation.yaml [4]
- defines the list of networks using an OS::TripleO::Network::<Network> 
resource
- defines the ports configuration for each network using 
OS::TripleO::Network::Ports::<Network>VipPort (note that both 
resources point to static templates - those templates don't require 
any manual modification)
- holds the Roles - Networks assignment using 
OS::TripleO::<Role>::Ports::<Network>Port for each role and 
network (again, the templates referenced by those resources don't require 
any modification)


The user is expected to go ahead and modify those environments and provide 
NIC config templates to achieve a network configuration that matches their 
needs.
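
For reference, an illustrative (heavily trimmed) excerpt of what such a 
network-environment.yaml typically contains - the exact template paths, 
parameters and values of course vary per deployment:

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
parameter_defaults:
  InternalApiNetCidr: 172.17.0.0/24
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  InternalApiNetworkVlanID: 20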



# How GUI works

Before proceeding to the proposed changes, I need to briefly describe how 
TripleO UI works. TripleO UI uses THT as the source of truth, which 
means that it tries not to add any additional business logic or 
manipulate templates. Rather, it uses environment files as 'features' 
which the user can enable or disable depending on the needs of the 
deployment. The information about inter-environment relationships is 
tracked in capabilities-map.yaml, which is also part of THT. Based on 
these choices, the UI allows the user to configure parameters for those 
features. The parameter values and the information about which environments 
are selected are stored in the Mistral environment. This approach leaves 
the plan templates intact. A huge benefit of this approach is that the UI 
(or tripleo-common) does not need to hold explicit business logic related 
to certain deployment features, as it is purely driven by THT. Also, adding 
a new feature involves only providing the templates/environments and it 
automatically appears as an option in the UI.


To achieve the best user experience with this approach, the environment 
files need to be defined in a granular manner, so they don't require the 
user to modify them and each describes an isolated 'feature'.


Roles and Network Configuration are exceptions to this concept as they 
require modification/generation of the templates/environments and 
therefore they use Jinja templating to achieve that.



# The proposal

So, having described the above, here is the approach I think we should use 
to achieve network configuration using TripleO UI:


1. Put network definitions into a separate environment for each network:
- this way the GUI can provide a list of networks available to use and let 
the user select which of them they want to use. These environments are not 
dynamic, and if the user wants to add a new network, they do so by creating 
new templates and an environment for it. The UI also provides means to 
configure parameters for each network at this point (if needed).


For example the environment for a Storage Network looks like this:

resource_registry:
  OS::TripleO::Network::Storage: ../network/storage.yaml
  OS::TripleO::Network::Ports::StorageVipPort: ../network/ports/storage.yaml

2. Assign Roles to Networks
Having the Networks selected as well as the Roles defined, TripleO UI 
provides the user with means to assign Roles to Networks. This step 
involves generating the network-isolation.yaml file. So TripleO UI sends 
the mapping of roles to networks in JSON format to tripleo-common, which in 
turn uses the network-isolation.j2.yaml Jinja template to generate the 
environment file. I expect that a pre-defined network-isolation.yaml will 
be included in the default plan so the user does not need to start from 
scratch. Tripleo-common also provides an action to fetch the network-roles 
assignment data by parsing network-isolation.yaml.
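
The exact format of that mapping is still to be defined; a hypothetical 
payload sent from the UI to the tripleo-common action could look roughly 
like this (role and network names are just examples):

Controller:
  - External
  - InternalApi
  - Storage
  - StorageMgmt
  - Tenant
Compute:
  - InternalApi
  - Storage
  - Tenant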


In addition, the user is able to assign individual Role Services to a 
Network. The ServiceNetMap parameter is currently used for this. The GUI 
needs to make sure that it represents the Services-Networks assignment 
grouped by Role, so it is ensured that the user assigns Services only to 
networks where their Role is assigned.
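
For illustration, the kind of ServiceNetMap assignment the GUI would be 
editing looks roughly like this (a small subset of services, values are 
examples only):

parameter_defaults:
  ServiceNetMap:
    NeutronApiNetwork: internal_api
    GlanceApiNetwork: storage
    CephClusterNetwork: storage_mgmt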


3. Generate NIC Config templates
TripleO UI provides means to configure NICs, bonds etc. for each Role, 
using the information from previous steps. It sends 

Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-12 Thread Jiri Tomasek



On 4.1.2017 09:13, Saravanan KR wrote:

Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameter based on
introspection data, with a high level input would be feasible.

Let me brief on the complexity of certain parameters, which are
related to DPDK. Following parameters should be configured for a good
performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs has to be chosen in accordance with the
NUMA node associated with the interface. We are working a spec [2], to
collect the required details from the baremetal via the introspection.
The proposal is to create mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created
simple workbook [3] with what I have in mind (not final, only
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults",  which will be used for the
deployment. I would like to hear from the experts, if there is any
drawbacks with this approach or any other better approach.

This workflow will ease the TripleO UI need to integrate DPDK, as UI
(user) has to choose only the interface for DPDK [and optionally, the
number for CPUs required for PMD and Host]. Of-course, the
introspection should be completed, with which, it will be easy to
deploy a DPDK cluster.

There is a complexity if the cluster contains heterogeneous nodes, for
example a cluster having HP and DELL machines with different CPU
layout, we need to enhance the workflow to take actions based on
roles/nodes, which brings in a requirement of localizing the above
mentioned variables per role. For now, consider this proposal for
homogeneous cluster, if there is a value in this, I will work towards
heterogeneous clusters too.

Please share your thoughts.

Regards,
Saravanan KR


[1] https://krsacme.github.io/blog/post/dpdk-pmd-cpu-list/
[2] https://review.openstack.org/#/c/396147/
[3] https://gist.github.com/krsacme/c5be089d6fa216232d49c85082478419
[4] 
https://review.openstack.org/#/c/411797/6/extraconfig/pre_network/host_config_and_reboot.role.j2.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


We have recently been getting quite a lot of requests such as this - for 
bringing in logic which takes the introspection data and pre-populates the 
parameters with it. This is usable for network configuration, storage etc. 
As it seems there is a real need for such features, the TripleO team should 
discuss a general approach on how this logic should work. A Mistral 
workflow is an obvious choice, we just need to make sure certain 
prerequisites are met.


From the GUI point of view, we probably don't want this type of workflow to 
happen as part of starting the deployment. That's too late. We need to find 
a mechanism which helps us identify when such a workflow can run, and it 
should probably be confirmed by the user. And when it finishes, the user 
needs to be able to review those parameters, confirm that this is the 
configuration they want to deploy, and be able to make changes to it.


Obviously, as this workflow uses introspection data, the user could be 
offered to run it when introspection finishes. The problem is that we need 
to verify that using this workflow is valid for the deployment setup the 
user is creating. For example, if this workflow sets parameters which are 
defined in templates which the user won't deploy, it is wrong.


So I think that the proper way would be to embed this in environment 
selection: environment selection is a step where the user makes high level 
deployment decisions - selects the environments which are going to be used 
for the deployment. We could bring in a mechanism (embedded in the 
environment file or capabilities-map.yaml maybe?) which would allow the GUI 
to say: 'hey, you've just enabled feature Foo, and you have introspection 
data available. Do you wish to pre-configure this feature using this data?' 
On confirmation the workflow is triggered and the configuration is 
populated. The user reviews it and makes tweaks if they want.
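
One purely hypothetical way to express that hint would be an extra key on 
the environment entry in capabilities-map.yaml pointing at the workflow to 
run - the 'derive_parameters_workflow' key and the workflow name below are 
made up for illustration:

- file: environments/neutron-ovs-dpdk.yaml
  title: OVS-DPDK
  description: Enable OVS-DPDK on compute nodes.
  derive_parameters_workflow: tripleo.derive_params.v1.dpdk_derive_params

When the UI sees such a key on a selected environment and introspection 
data is available, it can offer to trigger the workflow and then present 
the resulting parameter values for review.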


I'd love to hear feedback on this.

--Jirka



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Jiri Tomasek
+1

On Thu, Apr 6, 2017 at 12:56 PM, Julie Pichon  wrote:

> On 6 April 2017 at 10:53, Martin André  wrote:
> > Hellooo,
> >
> > I'd like to propose we extend Florian Fuchs +2 powers to the
> > tripleo-validations project. Florian is already core on tripleo-ui
> > (well, tripleo technically so this means there is no changes to make
> > to gerrit groups).
> >
> > Florian took over many of the stalled patches in tripleo-validations
> > and is now the principal contributor in the project [1]. He has built
> > a good expertise over the last months and I think it's time he has
> > officially the right to approve changes in tripleo-validations.
> >
> > Consider this my +1 vote.
> >
> > Martin
> >
> > [1] http://stackalytics.com/?module=tripleo-validations;
> metric=patches=pike
> >
>
> +1!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-18 Thread Jiri Tomasek

+1!


On 6.4.2017 11:53, Martin André wrote:

Hellooo,

I'd like to propose we extend Florian Fuchs +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, tripleo technically so this means there is no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor in the project [1]. He has built
a good expertise over the last months and I think it's time he has
officially the right to approve changes in tripleo-validations.

Consider this my +1 vote.

Martin

[1] 
http://stackalytics.com/?module=tripleo-validations=patches=pike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-13 Thread Jiri Tomasek
Hi, I agree that this new updated logo is a great refinement of what we
currently have. Love it.

Thanks Heidi

On Mon, Mar 13, 2017 at 8:24 PM, Dan Prince  wrote:

> Hi Heidi,
>
> I like this one a good bit better. He might looks a smidge cross-eyed
> to me... but I'd take this one any day over the previous version.
>
> Thanks for trying to capture the spirit of the original logos.
>
> Dan
>
> On Fri, 2017-03-10 at 08:26 -0800, Heidi Joy Tretheway wrote:
> > Hi TripleO team,
> >
> > Here’s an update on your project logo. Our illustrator tried to be as
> > true as possible to your original, while ensuring it matched the line
> > weight, color palette and style of the rest. We also worked to make
> > sure that three Os in the logo are preserved. Thanks for your
> > patience as we worked on this! Feel free to direct feedback to me.
> >
> > _
> > _
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> > cribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Saravanan KR core

2017-07-31 Thread Jiri Tomasek
+1

On Mon, Jul 24, 2017 at 10:32 AM, Dougal Matthews  wrote:

> +1!
>
> On 21 July 2017 at 16:01, Emilien Macchi  wrote:
>
>> Saravanan KR has shown an high level of expertise in some areas of
>> TripleO, and also increased his involvement over the last months:
>> - Major contributor in DPDK integration
>> - Derived parameter works
>> - and a lot of other things like improving UX and enabling new
>> features to improve performances and networking configurations.
>>
>> I would like to propose Saravanan part of TripleO core and we expect
>> his particular focus on t-h-t, os-net-config and tripleoclient for now
>> but we hope to extend it later.
>>
>> As usual, we'll vote :-)
>> Thanks,
>> --
>> Emilien Macchi
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Plan description in the create/update plan form

2017-06-30 Thread Jiri Tomasek



On 29.6.2017 23:28, Ben Nemec wrote:



On 06/29/2017 07:25 AM, Ana Krivokapic wrote:

Resending with the [tripleo] tag, sorry...

On Thu, Jun 29, 2017 at 2:22 PM, Ana Krivokapic > wrote:

Hi TripleO devs,

I am working on adding a description field to the "Create Plan" form
in the TripleO UI [1]. The goal is to make it possible for the user
to specify a plan description using a form field when creating a
plan. As the plan description lives in the plan-environment.yaml
file[2], the idea is to retrieve this value from
plan-environment.yaml when the user uploads the plan, populate the
form field with it, let the user change it, and then save it back to
the file.

I have a WIP patch up [3] which solves the issue in the case of
uploading the plan as a folder. However, I am having a hard time
solving the case of uploading the plan as a tarball. The issue is
obviously with accessing the contents of the tarball. Here are some
possible approaches that come to mind:

1) Use one of the existing third-party JS libraries that can extract
a tarball in the browser. Pros: front-end only solution, no need for
additional API calls, no need for back-end changes. Cons: adding a
new dependency, these libraries don't seem much maintained.

2) Use swift to upload and extract the tarball. Pros: no need for
back-end changes, we can just call the swift API. Cons: splitting
the tarball upload from plan creation, which should really be one
atomic operation.

3) Modify the plan create workflow to accept a plan description as a
parameter. Pros: keeps plan creation atomic. Cons: changes to the
plan create workflow interface needed. Also this way there is no way
to send back the information about the description to the UI, we
would have to just accept the value of the form field, and overwrite
whatever was in the plan-environment.yaml file.

Of course there is also a fourth option:

4) This is not worth the effort to implement and we should just drop
it. :)


So the user can update the description after the initial upload, 
right? I wouldn't have a huge problem with just saying that you don't 
get the description box pre-populated if you upload a binary format 
like tar. It's not quite as ideal, but as long as there is some way to 
set the description at some point in the process it should be fine 
until/unless we decide to pursue a more complicated solution.




My personal opinion is that the cons of 1) and 2) make these
approaches unacceptable. The cons of 3) make it kind of not worth it
- seems like a lot of work for a partial solution. So I'm leaning
towards 4) at the moment.

I'd like to hear your opinions on this, is there a another/better
approach that I'm missing? Jirka, you mentioned we could postpone
this work to the next cycle and there are improvements that we can
work on in the meantime which would make implementation of this
feature easier?

Any and all thoughts, comments, opinions are welcome.



I've taken a look at how the plan creation actually works and here are 
the outcomes:


There are 3 ways to create a plan using plan creation mistral workflow:
1. Create plan from default templates in undercloud machine
2. Create plan from Swift container
3. Create plan from git repository

TripleO UI currently only uses the second option and it lets the user do 
it in two ways:

1. Upload via directory (Chrome browser only)
2. Upload tarball

In both cases, the mistral workflow expects the Swift container to be 
created before the workflow is triggered. TripleO UI now does it as a 
single chain of API calls - call the Swift API to create the container and 
upload files, then call Mistral to run the plan creation workflow. So in 
theory we could split this in the UI into a wizard, where once the files are 
in place in Swift, we could reach for plan-environment.yaml and let the 
user set the description. Once that is done the user can continue with 
triggering the mistral workflow in the wizard.
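
For reference, the description we are talking about lives at the top of
plan-environment.yaml [2], which looks roughly like this (abbreviated):

    version: 1.0

    name: overcloud
    description: >
      Default Deployment plan
    template: overcloud.yaml
    environments:
      - path: overcloud-resource-registry-puppet.yaml

So once the files are in Swift, pre-populating the form field is just a
matter of reading this one key and writing it back on save.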


The problem with this is that this solution would not support options 1. and 
3., which we'd like to integrate in TripleO UI too. So alternatively, we 
can turn the plan creation dialog into a wizard which first creates the 
plan (Swift uploads if needed, run the plan creation workflow) and, when that 
succeeds, offers the user a chance to edit the plan description, which would 
finalize the plan creation wizard.


The same approach could be used for plan update and plan export.

I think this discussion is a nice start for a Queens spec/blueprint which 
introduces additional plan creation options and description editing in 
TripleO UI.


-- Jirka



[1] https://bugs.launchpad.net/tripleo/+bug/1698818

[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/plan-environment.yaml#L4-L5


Re: [openstack-dev] [tripleo][ui] another i18n proposal for heat templates 'description' help strings

2017-05-10 Thread Jiri Tomasek



On 9.5.2017 17:01, Florian Fuchs wrote:

On Mon, May 8, 2017 at 10:20 AM, Peng Wu  wrote:

Hi Julie,

   I generated one example javascript file containing the translatable
strings.
   URL: https://pwu.fedorapeople.org/openstack-i18n/tripleo/tripleo-heat
-templates.js

   And the code to generate the above file is in:
   https://pwu.fedorapeople.org/openstack-i18n/tripleo/

   The generated file need to be copied to tripleo-ui project, and
translate as other javascript files.

   Please review it, thanks!

Thanks for the update Peng! A few comments:

1. A minor thing: The file is missing the import for the
defineMessages function, so that would have to be added here.
2. The UI looks up messages by their object key, not the message id
(random examples: [1]). The current naming would make that quite hard
(description1, description2, descriptionN, ...). Ideally the key would
be created using some reproducible conventions. For instance, the UI
stores EnvironmentGroups by their title, Environments by their file
path and Parameters by their name. If the message keys would reflect
this structure, the UI could look up the objects dynamically, based on
the naming conventions.

Two more things I noticed:

I count roughly 3400 occurrences of the word "description" in current
tripleo-heat-templates. I'm not sure we need them all (your current
example file only lists about 600, which would still be an acceptable
number I guess), but if we do, we should probably think about some
dynamic way to load messages in the UI and not put them all into one
huge file.

We need to handle cases where translatable t-h-t strings don't have a
corresponding message object in the generated js file. That's probably
not a big thing, but something we have to take care of.

[1] 
https://github.com/openstack/tripleo-ui/blob/master/src/js/components/deployment/DeploymentDetail.js#L106
  
https://github.com/openstack/tripleo-ui/blob/master/src/js/components/deployment/DeploymentDetail.js#L59

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am probably a bit late to the discussion, but I think we're missing 
quite an important thing, and that is the fact that TripleO UI is supposed 
to use various plans (template sets), not strictly the tripleo-heat-templates 
repository contents. The tripleo-heat-templates repository is just a default 
plan, but the user can provide their own changed files to the plan, or create 
a new plan which is very different from what the default tripleo-heat-templates 
repository holds.


Also I am quite scared of keeping the GUI-specific file in sync with 
tripleo-heat-templates contents.


IMHO a proper solution is introducing translations as part of the 
tripleo-heat-templates repository - template files hold the keys and 
translations are held in separate files in THT.


-- Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] demo: node auto-discovery with one power button click

2017-05-11 Thread Jiri Tomasek



On 11.5.2017 16:56, Dmitry Tantsur wrote:

Hi all!

While people are enjoying the Forum, I also have something to show.

I've got a lot of questions about auto-discovery, so I've recorded a demo
of it using TripleO Ocata: https://www.youtube.com/watch?v=wJkDxxjL3NQ.

Please let me know what you think!

Dmitry

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi, thanks for the interesting demo! I am wondering how this compares to the 
"Discover nodes, knowing IP range for their BMCs and the default IPMI 
credentials" blueprint [1].


I really like the power of introspection rules. TripleO UI can nicely 
benefit from this, letting the user provide rules to apply on introspection.


Regarding the autodiscovery use case, is it possible to somehow power up 
the machines using an API? What if the user does not know the $IPMI_HOST?


Can we replace enabling the autodiscovery option in the undercloud with 
discovering nodes as defined in [1], which would return JSON resembling 
the instackenv.json file, and register the nodes using this? Does that make 
sense?


Can Ironic-inspector send messages via zaqar to notify subscribers about 
starting the autodiscovery?


[1] https://blueprints.launchpad.net/tripleo/+spec/node-discovery-by-range

Thanks
-- Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Role updates

2017-06-12 Thread Jiri Tomasek



On 12.6.2017 10:55, Dmitry Tantsur wrote:

On 06/09/2017 05:24 PM, Alex Schultz wrote:

Hey folks,

I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed.  With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if you have an update to a role, please update the appropriate
roles/*.yaml file. I have proposed a change[1] to THT with additional
tools to validate that the roles/*.yaml files are updated and that
there are no unaccounted for roles_data.yaml changes. Additionally
this change adds in a new tox target to assist in the generate of
these basic roles data files that we provide.

Ideally I would like to get rid of the roles_data.yaml and
roles_data_undercloud.yaml so that the end user doesn't have to
generate this file at all but that won't happen this cycle.  In the
mean time, additional documentation around how to work with roles has
been added to the roles README[2].


Hi, this is awesome! Do we expect more example roles to be added? E.g. 
I could add a role for a reference Ironic Conductor node.


Hi, thanks for doing great work in this and bringing up the topic!

I'd like to point out one problem which we've been dealing with for 
quite a while now, which is TripleO UI and CLI interoperability. The main 
reason why we introduced the Mistral 'TripleO' API is to consolidate the 
business logic into a single place which will be used by all TripleO 
clients, so all will use the same codebase and not diverge. This has 
been established and agreed on quite a long time ago, but it seems that 
the problem of diverging codebases still creeps in.


The main problem is that the CLI (unlike all other clients) still tends to 
operate on local files rather than a deployment plan stored in Swift. 
The result is that new features which should be implemented in a single place 
(tripleo-common - Mistral Actions/Workflows) are implemented twice - in 
tripleoclient and (usually later, for no real reason) in tripleo-common. 
Roles management is an exact example. There is a great effort being made to 
simplify managing Roles, but only in the CLI, regardless of the fact that other 
clients need to do the same. This leaves us having to maintain 2 
codebases which have the same goal, and increases development time and other 
costs.


So my question is: How much effort would it be to change the CLI workflow to 
operate on the plan in Swift rather than on local files? What are the pros and 
cons? How do we solve the problem of lacking features in tripleo-common?


Recently, changes have been made in tripleo-common which make 
operations on the Swift plan much simpler. All the data about the deployment is 
kept in Swift in templates/environment files and plan-environment.yaml 
(which replaced the mistral environment data structure), so 
importing/exporting a plan is much simpler now. If the CLI leveraged this 
functionality, there would not be any need for the user to store the CLI command 
which was used for deployment. All the data is in plan-environment.yaml.


Let's take a look at the Roles management example. Alex mentions removing 
roles_data.yaml. Yes, there is no need for it. The deployment plan is 
already pre-created with the undercloud install, so a CLI user could list 
available roles and use a command which sets roles (taking a list of role 
names); this calls a Mistral action/workflow which stores this selection 
in plan-environment.yaml in Swift and regenerates/updates the j2 templates. 
Same with anything else (add environment files, add/modify templates, set 
parameters...). Then the user just fires 'openstack overcloud deploy' and is 
done. In case of need, the user can simply export the plan and keep the 
files locally to easily recreate the same deployment elsewhere.


What are the reasons why the CLI could not work this way? Do those outweigh 
having to implement and maintain the business logic in two places?


Thanks,
Jirka





Thanks,
-Alex

[0] https://review.openstack.org/#/c/445687/
[1] https://review.openstack.org/#/c/472731/
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-21 Thread Jiri Tomasek
On Wed, Sep 20, 2017 at 7:37 PM, James Slagle <james.sla...@gmail.com>
wrote:

> On Tue, Sep 19, 2017 at 8:37 AM, Giulio Fidente <gfide...@redhat.com>
> wrote:
> > On 09/18/2017 05:37 PM, James Slagle wrote:
> >> - The entire sequence and flow is driven via Mistral on the Undercloud
> >> by default. This preserves the API layer and provides a clean reusable
> >> interface for the CLI and GUI.
> >
> > I think it's worth saying that we want to move the deployment steps out
> > of heat and in ansible, not in mistral so that mistral will run the
> > workflow only once and let ansible go through the steps
> >
> > I think having the steps in mistral would be a nice option to be able to
> > rerun easily a particular deployment step from the GUI, versus having
> > them in ansible which is instead a better option for CLI users ... but
> > it looks like having them in ansible is the only option which permits us
> > to reuse the same code to deploy an undercloud because having the steps
> > in mistral would require the undercloud installation itself to depend on
> > mistral which we don't want to
> >
> > James, Dan, please comment on the above if I am wrong
>
> That's correct. We don't want to require Mistral to install the
> Undercloud. However, I don't think that necessarily means it has to be
> a single call to ansible-playbook. We could have multiple invocations
> of ansible-playbook. Both Mistral and CLI code for installing the
> undercloud could handle that easily.


The Mistral workflow's input could hold a list of steps that would define which
deploy steps ansible is going to go through. Is that a correct assumption? On
undercloud installation, the list of steps would be provided by the CLI.


>
> You wouldn't be able to interleave an external playbook among the
> deploy steps however. That would have to be done under a single call
> to ansible-playbook (at least how that is written now). We could
> however have hooks that could serve as integration points to call
> external playbooks after each step.


Could an external playbook be triggered as a custom step provided in the
Mistral workflow input I mention above?


>
> >> - It would still be possible to run ansible-playbook directly for
> >> various use cases (dev/test/POC/demos). This preserves the quick
> >> iteration via Ansible that is often desired.
> >>
> >> - The remaining SoftwareDeployment resources in tripleo-heat-templates
> >> need to be supported by config download so that the entire
> >> configuration can be driven with Ansible, not just the deployment
> >> steps. The success criteria for this point would be to illustrate
> >> using an image that does not contain a running os-collect-config.
> >>
> >> - The ceph-ansible implementation done in Pike could be reworked to
> >> use this model. "config download" could generate playbooks that have
> >> hooks for calling external playbooks, or those hooks could be
> >> represented in the templates directly. The result would be the same
> >> either way though in that Heat would no longer be triggering a
> >> separate Mistral workflow just for ceph-ansible.
> >
> > I'd say for ceph-ansible, kubernetes and in general anything else which
> > needs to run with a standard playbook installed on the undercloud and
> > not one generated via the heat templates... these "external" services
> > usually require the inventory file to be in different format, to
> > describe the hosts to use on a per-service basis, not per-role (and I
> > mean tripleo roles here, not ansible roles obviously)
> >
> > About that, we discussed a more long term vision where the playbooks
> > (static data) needd to describe how to deploy/upgrade a given service is
> > in a separate repo (like tripleo-apb) and we "compose" from heat the
> > list of playbooks to be executed based on the roles/enabled services; in
> > this scenario we'd be much closer to what we had to do for ceph-ansible
> > and I feel like that might finally allow us merge back the ceph
> > deployment (or kubernetes deployment) process into the more general
> > approach driven by tripleo
> >
> > James, Dan, comments?
>
> Agreed, I think this is the longer term plan in regards to using
> APB's, where everything consumed is an external playbook/role.
>
> We definitely want to consider this plan in parallel with the POC work
> that Flavio is pulling together and make sure that they are aligned so
> that we're not constantly reworking the framework.
>
> I've not yet had a chance to review the material he sent out t

Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-22 Thread Jiri Tomasek
Will it be possible to send Zaqar messages at each deployment step to make
the deployment process more interactive? In case of driving separate
playbooks from a mistral workflow, that would be absolutely possible. As it
seems we're more keen on driving everything from a wrapping ansible playbook,
is it going to be possible to send Zaqar messages from an ansible playbook
directly?

Being able to properly monitor the progress of the deployment is important, so it
would be good to clarify how that is going to work.

-- Jirka

On Fri, Sep 22, 2017 at 3:17 PM, Jiří Stránský  wrote:

> On 22.9.2017 13:44, Giulio Fidente wrote:
>
>> On 09/21/2017 07:53 PM, Jiří Stránský wrote:
>>
>>> On 21.9.2017 18:04, Marios Andreou wrote:
>>>
 On Thu, Sep 21, 2017 at 3:53 PM, Jiří Stránský 
 wrote:

>>>
>> [...]
>>
>> That way we could run the whole thing end-to-end via
> ansible-playbook, or
> if needed one could execute smaller bits by themselves (steps or nested
> playbook runs) -- that capability is not baked in by default, but i
> think
> we could make it so.
>
> Also the interface for services would be clean and simple -- it's
> always
> the ansible tasks.
>
> And Mistral-less use cases become easier to handle too (= undercloud
> installation when Mistral isn't present yet, or development envs when
> you
> want to tune the playbook directly without being forced to go through
> Mistral).
>
>
 You don't *have* to go through mistral either way I mean you can always
 just run ansible-playbook directly using the generated playbooks if
 that is
 what you need for dev/debug etc.



> Logging becomes a bit more unwieldy in this scenario though, as for the
> nested ansible-playbook execution, all output would go into a task in
> the
> outer playbook, which would be harder to follow and the log of the
> outer
> playbook could be huge.
>
> So this solution is no silver bullet, but from my current point of
> view it
> seems a bit less conceptually foreign than using Mistral to provide
> step
> loop functionality to Ansible, which should be able to handle that on
> its
> own.
>
>
> just saying using mistral to invoke ansible-playbook doesn't imply
 having
 mistral do the looping/step control. I think it was already mentioned
 that
 we can/will have multiple invocations of ansible-playbook. Having the
 loop
 in the playbook then means organising our templates a certain way so
 that
 there is a _single_ parent playbook which we can parameterise to then
 run
 all or some of the steps (which as pointed above is currently the case
 for
 the upgrade and deployment steps playbooks).

>>>
>>> Yup, +1 again :) However, the 1)2)3)4) approach discussed earlier in the
>>> thread suggested to hand over the step loop control to Mistral and keep
>>> using the Mistral workflow_tasks, which would make it impossible to have
>>> a single parent playbook, if i understood correctly. So Mistral would be
>>> a requirement for running all steps via a single command (impacting UC
>>> install and developer workflow).
>>>
>>
>> yes I am not sold (yet?) on the idea of ansible driving the deployment
>> and would like to keep some abstraction before it
>>
>> the additional abstraction will make it possible for example to execute
>> tasks written as mistral actions (eg. python code) in between or during
>> any given deployment step, instead of ansible tasks only ... I guess we
>> could also write ansible actions in python but it's not trivial to ship
>> them from THT and given the project mission we have of being "openstack
>> on openstack" I'd also prefer writing a mistral action vs ansible
>>
>> similarily, the ceph-ansible workflow runs a task to build the ansible
>> inventory; if we make the "external" services integration an
>> ansible->ansible process we'll probably need to ship from THT an heat
>> query (or ansible task) to be executed by the "outer" ansible to create
>> the inventory for the inner ansible
>>
>
> Yea, allowing e2e software deployment with Ansible requires converting the
> current Mistral workflow_tasks into Ansible. In terms of services affected
> by this, there's in-tree ceph-ansible [1] and we have proposed patches for
> Kubernetes and Skydive (that's what i'm aware of).
>
>
>> I supported the introduction of mistral as an API and would prefer to
>> have there more informations there versus moving it away into YACT (yet
>> another configuration tool)
>>
>
> We could mitigate this somewhat by doing what Marios and James suggested
> -- running the global playbook one step at a time when the playbook is
> executed from Mistral. It will not give Mistral 100% of the information
> when compared with the approach you suggested, but it's a bit closer...
>
>
>> depending on mistral for the undercloud 

Re: [openstack-dev] [tripleo] Facilitating automation testing in TripleO UI

2017-09-08 Thread Jiri Tomasek
On Fri, Aug 4, 2017 at 9:33 AM, Honza Pokorny  wrote:

> About 10 years ago, we were promised a fully semantic version of HTML.
> No more nested divs to structure your documents.  However, all we got
> was a few generic, and only marginally useful elements like  and
> .
>
> On 2017-08-03 18:59, Ana Krivokapic wrote:
> > Hi TripleO devs,
> >
> > In our effort to make the TripleO UI code friendlier to automation
> > testing[1], there is an open review[2] for which we seem to have some
> > difficulty reaching the consensus on how best to proceed. There is
> already
> > a discussion happening on the review itself, and I'd like to move it
> here,
> > rather than having it in a Gerrit review.
> >
> > The tricky part is around adding HTML element ids to the Nodes page. This
> > page is generated by looping through the list of registered nodes and
> > displaying complete information about each of them. Because of this, many
> > of the elements are repeating on the page (CPU info, BIOS, power state,
> > etc, for each node). We need to figure out how to make these elements
> easy
> > for the automation testing code to access, both in terms of locating a
> > single group within the page, as well as distinguishing the individual
> > elements of a group from each other. There are several approaches that
> > we've come up so far:
> >
> > 1) Add unique IDs to all the elements. Generate unique `id` html
> attributes
> > by including the node UUID in the value of the `id` attribute. Do this
> for
> > both the higher level elements (divs that hold all the information about
> a
> > single node), as well as the lower level (the ones that hold info about
> > BIOS, CPU, etc). The disadvantage of this approach is cluttering the UI
> > codebase with many `id` attributes that would otherwise not be needed.
>
> While this is useful for addressing a particular element, I think it
> would still require quite a bit of parsing.  You'd find yourself writing
> string-splitting code all over the place.  It would make the code harder
> to read without providing much semantic information --- unless of course
> every single element had some kind of ID.



>
> > 2) Add CSS classes instead of IDs. Pros for this approach: no need to
> > generate the clumsy ids containing the node UUID, since the CSS classes
> > don't need to be unique. Cons: we would be adding even more classes to
> HTML
> > elements, many of which are already cluttered with many classes. Also,
> > these classes would not exist anywhere in CSS or serve any other purpose.
>
> I like this option the best.  It seems to be the most natural way of
> adding semantic information to the bare-bones building blocks of the
> web.  Classes are simple strings that add information about the intended
> use of the element.  Using jQuery-like selectors, this can make for some
> easy-to-understand code.  Do you want to grab the power state of the
> currently expanded node in the list?
>
> $('#node-list div.node.expanded').find('.power-state')
>
> By default, Selenium can query the DOM by id, by class name, and by
> xpath.  It can be extended to use pyquery which is the Python
> implementation of the jQuery css selector.  I think many of the
> automation implementation headaches can be solved by using pyquery.
>
> https://blogs.gnome.org/danni/2012/11/19/extending-selenium-with-jquery/


I agree with this solution. Using IDs for unique elements in the page and
classes for elements which are repeating (list items) seems most natural
to me. In combination with pyquery it should be sufficient IMHO.

Also, to make sure that classes won't repeat and it is simple to identify
the desired element, we should use BEM [2], similarly to what we do with IDs.

Also note that we are incrementally extracting presentational components
into separate building blocks (see [1] for example), which makes the code
much more readable and classnames won't be cluttered any more.
E.g. instead of a generic element carrying a long list of presentational
classnames around the content of an item, we'll have a named presentational
component wrapping the content of the item.
IIUC this is the semanticity Honza is looking for.

[1]
https://github.com/patternfly/patternfly-react/pull/50/files#diff-b2dff316cba6ec51de4d1712eef132d0R76
[2] https://codepen.io/Merri/post/advanced-bem-with-react-components

-- Jirka


>
> Furthermore, I think that classes can be used effectively when
> describing transient state like the expanded/collapsed state of a
> togglable element.  It's easy to implement on the client side, and it
> should be helpful on the automation side.
>
> Relying on patternfly presentational class names won't suffice.



>
> > 3) Add custom HTML attributes. These usually start with the 'data-'
> prefix,
> > and would not need to be unique either. Pros: avoids the problems
> described
> > with both approaches above. Cons: AFAIU, the React framework could have
> > problems with custom attributes (Jirka can probably explain this better).
> > Also, casual readers of the code could be confused about the purpose of
> > 

[openstack-dev] [tripleo] TripleO UI and CLI feature parity

2017-09-12 Thread Jiri Tomasek
Hello all,

As we are in the planning phase for Queens cycle, I'd like to open the
discussion on the topic of CLI (tripleoclient) and GUI (tripleo-ui) feature
parity.

Two years ago, when TripleO UI was started, it was agreed that in order to
provide an API for the GUI and to achieve compatibility between GUI and CLI,
the TripleO business logic would be extracted from tripleoclient into the
tripleo-common library and provided through Mistral actions and
workflows so the GUI and other potential clients can use it.

The problem:

Currently we are facing a recurring problem: when a new feature is
added to TripleO it often gets correctly implemented business logic in the
form of utility functions in tripleo-common, but those are then used
directly by tripleoclient. At this point the feature is considered complete
as it is integrated in the CLI and passes CI tests. The consequences of this
approach are:

- there is no API for the new feature, so the feature is only usable by CLI
- part of the business logic still lives in tripleoclient
- GUI can not support the feature and gets behind CLI capabilities
- GUI contributors need to identify the new feature, raise bugs [1],
feature then gets API support in tripleo-common
- API implementation is not tested in CI
- GUI and CLI diverge in how that feature is operated as business logic is
implemented twice, which has a number of negative effects on TripleO
functionality (backwards compatibility, upgrades...)

The biggest point of divergence between GUI and CLI is that the CLI tends to
generate a set of local files which are then put together when the deploy
command is run, whereas the GUI operates on a Deployment plan which is stored
in Swift and accessed through the API provided by tripleo-common.

The described problem currently affects all of the features which the CLI uses
to generate files which are used in the deploy command (e.g. Roles management,
Container images preparation, Networks management etc.). There is no API for
those features and therefore the GUI can't support them until Mistral actions
and workflows are implemented for them.

Proposed solution:

We should stop considering TripleO as a set of utility scripts used to
construct the 'deploy' command; we should rather consider TripleO as a
deployment application which has its internal state (the Deployment plan in
Swift) which is accessed and modified via an API.
A TripleO feature should be considered complete when an API for it is created.
The CLI should solely use TripleO business logic through Mistral actions and
workflows provided by tripleo-common - same as any other client has to.

Results of this change are:
- tripleoclient is extremely lightweight, containing no tripleo business
logic
- tripleo-common API is tested in CI as it is used by CLI
- tripleoclient and tripleo-ui are perfectly compatible and interoperable, and
their features and capabilities match
- TripleO business logic lives solely in tripleo-common and is operated the
same way by any client
- no new backward compatibility problems are introduced by releasing features
which are not supported by the API
- new features related to Ansible or containers are available to all clients
- upgrades work the same way for deployments deployed via CLI and GUI
- deployment is replicable without the need to keep the deploy command
and generated files around (the exported deployment plan has all the
information)

Note that the argument of the convenience of being able to modify deployment
files locally is less and less relevant as we are incrementally moving away
from forcing the user to modify templates manually (all the jinja templating,
roles_data.yaml, network_data.yaml generation, container images
preparation, derive parameters workflows etc.). In Pike we have made
changes to simplify the way the Deployment plan is stored, and it is extremely
easy to import and export it in case some manual changes are needed.

Proposed action items:
- Document what feature complete means in TripleO and how features should
be accessed by clients
- Identify steps to achieve feature parity between CLI and GUI
(tripleo-common) [2]
- Implement missing plan operations CLI commands to be able to deprecate
commands which generate local files which are used with deploy command

[1] https://bugs.launchpad.net/tripleo/+bug/1715377
[2] https://etherpad.openstack.org/p/tripleo-ui-queens-planning

Thanks
Jirka
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Deployment workflow changes for ui/client

2017-10-24 Thread Jiri Tomasek
On Fri, Oct 20, 2017 at 1:20 PM, Brad P. Crochet  wrote:

> On Thu, Oct 19, 2017 at 4:56 PM James Slagle 
> wrote:
>
>> I've been looking at how we can hook up the deployment changes for
>> config-download[1] with the existing deployment workflows in Mistral.
>>
>> However, it seems we have not sufficiently abstracted the logic to do
>> a "deployment" behind a given workflow(s). The existing things a
>> client (or UI) has to do is:
>>
>> - call tripleo.deployment.v1.deploy_plan
>> - poll for success/failure of that workflow
>> - poll for success/failure of in progress Heat stack (list events, etc)
>> - call tripleo.deployment.v1.overcloudrc
>> (probably more things too)
>>
>> If I want to make some changes to the deployment workflow, such that
>> after the Heat stack operation is complete, we run a config-download
>> action/workflow, then apply the generated ansible via
>> ansible-playbook, I can't really do that without requiring all clients
>> to also get updated to use those new steps (via calling new workflows,
>> etc).
>>
>> As a first attempt, I took a shot at creating a workflow that does every
>> step:
>> https://review.openstack.org/#/c/512876/
>> But even that will require client changes as it necessitates a
>> behavior change in that the workflow has to wait for the stack to be
>> complete as opposed to returning as soon as the stack operation is
>> accepted by Heat.
>>
>>
> Thankfully we already have that capability. :)
>
>
>> I'd like to implement this in a way that minimizes the impact of
>> changes on both python-tripleoclient and tripleo-ui, but it's looking
>> as if some changes would be required to use this new ansible driven
>> approach.
>>
>> Thoughts or feedback on how to proceed? I'm guess I'm also wondering
>> if the existing API exposed by the workflows is easy to consume by the
>> UI, or if it would be better to be wrapped in a single workflow...at
>> least that way we could make logical implementation changes without
>> requiring ui/cilent changes.
>>
>> [1] https://blueprints.launchpad.net/tripleo/+spec/ansible-
>> config-download
>>
>>
> +1 to all of this. I think from the CLI perspective, it should be a
> minimal impact. If anything, it will get rid of a lot of code that doesn't
> really belong. I can't say what the impact to the UI would be. However, one
> thing that we should make sure of is that we send messages back through
> Zaqar to keep the CLI and UI informed of what is occurring. That should
> happen already with most of the existing workflows.
>
> This is a great step in the right direction. The Workflows squad will be
> happy to assist in any way we can. We will start by reviewing what you have
> so far.
>

Having a workflow which wraps the whole deployment sounds great from the UI
side too, as it allows us to simplify the steps you described above. IIRC
the reason the whole deployment did not get wrapped into a single workflow
before is that the workflow/tasks timed out before the deployment could
finish, which caused the workflow to fail.

It should not be problematic to integrate these changes in the GUI. The sooner
we can test it, the better. As Brad noted, it is important to get as many Zaqar
messages as possible so we can track the progress properly.
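
As a very rough sketch of what I mean by that (workflow and task names below
are only my assumptions, not the actual implementation in the patch), the
wrapping workflow could chain the existing deploy workflow with progress
notifications like this:

    ---
    version: '2.0'
    name: tripleo.deployment.v1
    workflows:
      deploy_plan_with_config_download:   # hypothetical name
        input:
          - container: overcloud
          - queue_name: tripleo
        tasks:
          deploy_plan:
            # existing deployment workflow, triggers the Heat stack operation
            workflow: tripleo.deployment.v1.deploy_plan
            input:
              container: <% $.container %>
            on-success: notify_progress
          notify_progress:
            # message consumed by CLI/UI to track progress
            action: zaqar.queue_post
            input:
              queue_name: <% $.queue_name %>
              messages:
                body:
                  type: tripleo.deployment.v1.deploy_plan_with_config_download
                  payload:
                    status: RUNNING
                    message: Heat stack finished, starting config-download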

-- Jirka


>
>
>> --
>> -- James Slagle
>> --
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
> Principal Software Engineer
> (c) 704.236.9385 <(704)%20236-9385>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nominate akrivoka for tripleo-validations core

2017-11-07 Thread Jiri Tomasek
+1, great work Ana!

On Tue, Nov 7, 2017 at 1:33 AM, Honza Pokorny wrote:

> Hello people,
>
> I would like to nominate Ana Krivokapić (akrivoka) for the core team for
> tripleo-validations.  She has really stepped up her game on that project
> in terms of helpful reviews, and great patches.
>
> With Ana's help as a core, we can get more done, and innovate faster.
>
> If there are no objections within a week, we'll proceed with adding Ana
> to the team.
>
> Thanks
>
> Honza Pokorny
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Next steps for pre-deployment workflows (e.g derive parameters)

2017-11-08 Thread Jiri Tomasek
On Wed, Nov 8, 2017 at 6:09 AM, Steven Hardy  wrote:

> Hi all,
>
> Today I had a productive hallway discussion with jtomasek and
> stevebaker re $subject, so I wanted to elaborate here for the benefit
> of those folks not present.  Hopefully we can get feedback on the
> ideas and see if it makes sense to continue and work on some patches:
>
> The problem under discussion is how do we run pre-deployment workflows
> (such as those integrated recently to calculate derived parameters,
> and in future perhaps also those which download container images etc),
> and in particular how do we make these discoverable via the UI
> (including any input parameters).
>
> The idea we came up with has two parts:
>
> 1. Add a new optional section to roles_data for services that require
> pre-deploy workflows
>
> E.g something like this:
>
>  pre_deploy_workflows:
> - derive_params:
>   workflow_name:
> tripleo.derive_params_formulas.v1.dpdk_derive_params
>   inputs:
>   ...
>
> This would allow us to associate a specific mistral workflow with a
> given service template, and also work around the fact that currently
> mistral inputs don't have any schema (only key/value input) as we
> could encode the required type and any constraints in the inputs block
> (clearly this could be removed in future should typed parameters
> become available in mistral).
>
> 2. Add a new workflow that calculates the enabled services and returns
> all pre_deploy_workflows
>
> This would take all enabled environments, then use heat to validate
> the configuration and return the merged resource registry (which will
> require https://review.openstack.org/#/c/509760/), then we would
> iterate over all enabled services in the registry and extract a given
> roles_data key (e.g pre_deploy_workflows)
>
> The result of the workflow would be a list of all pre_deploy_workflows
> for all enabled services, which the UI could then use to run the
> workflows as part of the pre-deploy process.
>

As I think about this more, we may find out that matching a service to a
workflow is not enough, as a workflow may require several services (together
defining a feature). So maybe doing it in a separate file would help. E.g.
pre-deploy-workflows.yaml
- name: my.workflow
  services: a, b, c, d

Maybe there is a better way, maybe this is not even needed. I am not sure.
What do you think?


What I really like about this proposal is that it provides a standard way
to configure deployment features and provides clear means to add additional
such configurations.

The resulting deployment configuration steps in GUI would look following:

1/ Hardware (reg. nodes, introspect etc)

2/ High level deployment configuration (basically selecting additional
environment files)

3/ Roles management (Roles selection, roles -> nodes assignment, roles
configuration - setting roles_data properties)

4/ Network configuration - network configuration wizard (I'll describe
this in a separate email)

5/ Deployment Features configuration (this proposal) - a list of features
to configure; the list is nicely generated from information provided in
previous steps, so the user has all the information to configure those features
at hand and can go through these step by step.

6/ Advanced deployment config - a view providing a way to review
Environment/Roles/Services parameters, search and tweak them if needed.

7/ Deploy.

I believe these steps should cover anything we should need to do for
deployment configuration.

-- Jirka



>
> If this makes sense I can go ahead and push some patches so we can
> iterate on the implementation?
>
> Thanks,
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Next steps for pre-deployment workflows (e.g derive parameters)

2017-11-08 Thread Jiri Tomasek
On Wed, Nov 8, 2017 at 11:09 AM, Saravanan KR  wrote:

> Thanks Steven for the update.
>
> Current CLI flow:
> --
> * User need to add -p parameter for the overcloud deploy command with
> workflows to be invoked [1]
> * Plan will update updated to the swift container
> * Derived parameters workflow is initiated
> - For each role
> * Get the introspection data of first node assigned to the role
> * Find the list features based on the services or parameters
> * If dpdk present, run dpdk formulas workflow
> * if sriov is present, run sriov formulas workfow (under
> development)
> * if sriov or dpdk is present, run host formulas workflow
> * if hci present, run hci formulas workflow
>
> Here the order of the formulas workflow invocation is important. For
> example,  in Compute-DPDK-HCI role, HCI formulas should exclude the
> CPUs allocated for DPDK PMD threads, while calculating cpu allocation
> ratio.
>
> I am trying to understand the proposed changes. Is it for assisting UI
> only or changing the existing CLI flow too? If the idea is to invoke
> the individual formulas workflow, it will not be possible with
> existing implementation, need to be re-worked. We need to introduce
> order for formulas workflow and direct fetching and merging of derived
> parameters in plan.
>

So there are several problems we're trying to solve with this proposal. In
general, the goal is to provide feature-based workflows which will configure
these features, as well as provide means to get the current configuration of
these features and provide sensible information about the input for these
workflows.

I think one of the main problems of the current implementation is that the user
is not able to get any information about the input required to run the
derivation workflows. That information is purely documentation based and
also involves tweaking the deployment plan, which I am convinced is not a good
way to provide the input.

So what we're proposing is to bring in a mechanism for mapping the
derivation workflows to services (roles/environments) so that, as Steven
described, we're able to identify which workflows are possible to run and
provide an extensive input definition so the user can see what he is configuring
and why (input type, description, label).

This also means that there are several parameter derivation workflows rather
than just one, and the input for the workflow is the actual input passed to
mistral (no plan-environment.yaml changes involved). Using this whole
approach means that for each such identified feature, we can provide
input details, a general feature description (mistral workflow description)
and the current configuration (by reaching to the mistral workflow execution if
that was run before).
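
To make the input side more tangible, the inputs block from the example above
could carry a small schema (all names, types and defaults below are
illustrative assumptions, not an existing format):

    pre_deploy_workflows:
      - derive_params:
          workflow_name: tripleo.derive_params_formulas.v1.dpdk_derive_params
          inputs:
            - name: num_phy_cores_per_numa_node_for_pmd
              label: Physical cores per NUMA node for PMD
              description: >
                Number of physical cores to reserve on each NUMA node for
                the OVS-DPDK PMD threads.
              type: number
              default: 2

The GUI can render a form directly from such a definition and the CLI can
validate user input against it before triggering the workflow.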

If, as you're saying, certain workflows depend on each other, those should
probably be in one workflow. On the other hand, I think it is not a good
approach to try to put all the parameter derivation workflows into a single
workflow.


-- Jirka


>
> As per earlier discussion jtomasek, to invoke derived parameters
> workflow (existing) for a plan, UI requires following information:
> * Whether derived parameters should be invoked for this deployment
> (based on roles and enabled services)
> * If yes, list of parameters, its types, and its default values (and
> choices if present), are required
>
> Did I miss anything?
>
> Regards,
> Saravanan KR
>
> [1] https://github.com/openstack/tripleo-heat-templates/blob/
> master/plan-samples/plan-environment-derived-params.yaml
>
> On Wed, Nov 8, 2017 at 2:39 PM, Bogdan Dobrelya 
> wrote:
> > On 11/8/17 6:09 AM, Steven Hardy wrote:
> >>
> >> Hi all,
> >>
> >> Today I had a productive hallway discussion with jtomasek and
> >> stevebaker re $subject, so I wanted to elaborate here for the benefit
> >> of those folks not present.  Hopefully we can get feedback on the
> >> ideas and see if it makes sense to continue and work on some patches:
> >>
> >> The problem under discussion is how do we run pre-deployment workflows
> >> (such as those integrated recently to calculate derived parameters,
> >> and in future perhaps also those which download container images etc),
> >> and in particular how do we make these discoverable via the UI
> >> (including any input parameters).
> >>
> >> The idea we came up with has two parts:
> >>
> >> 1. Add a new optional section to roles_data for services that require
> >> pre-deploy workflows
> >>
> >> E.g something like this:
> >>
> >>   pre_deploy_workflows:
> >>  - derive_params:
> >>workflow_name:
> >> tripleo.derive_params_formulas.v1.dpdk_derive_params
> >>inputs:
> >>...
> >>
> >> This would allow us to associate a specific mistral workflow with a
> >> given service template, and also work around the fact that currently
> >> mistral inputs don't have any schema (only key/value input) as we
> >> could encode the required 

[openstack-dev] [tripleo] FFE Select Roles TripleO-UI

2018-01-08 Thread Jiri Tomasek
Hello,

I’d like to request an FFE to finish GUI work on roles management, specifically 
listing of roles and selection of roles for deployment. This feature is one of 
the main goals of the current cycle. The pending patches are ready to be merged, 
mostly just waiting for tripleo-common patches to land (those already have an FFE).

Blueprints:
https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-select-roles 

https://blueprints.launchpad.net/openstack/?searchtext=roles-crud-ui 


Patches:
https://review.openstack.org/#/q/topic:bp/tripleo-ui-select-roles+(status:open+OR+status:merged)
 


— Jiri Tomasek
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][ui] Network Configuration wizard

2018-02-09 Thread Jiri Tomasek
Hi, all

Full support for network configuration is one of the main goals for TripleO
UI in the Rocky cycle, as it is a missing part which still requires the user to
manually prepare templates and provide them to the deployment plan.

*Step 1. Network Isolation*

In the Queens cycle we started working on adding roles and networks
management Mistral workflows [1], [2] which allow the GUI to provide
composable roles and networks features. The roles management workflows have
landed; the networks management work has most of the patches up for review.
Both roles and networks management are based on a similar concept of having
a roles/networks directory in the deployment plan which consists of the
roles/networks definitions available to be used for deployment. The list of
selected roles/networks which are actually used for deployment, as well as
its configuration, is stored in roles_data.yaml and network_data.yaml, which
are then used for populating jinja templates/environments. TripleO-common
then provides Mistral workflows for listing available roles/networks,
listing currently selected roles/networks, updating roles/networks and
selecting roles/networks.
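
For illustration, a single network definition in network_data.yaml looks
roughly like this (values abbreviated):

    - name: InternalApi
      name_lower: internal_api
      vip: true
      vlan: 20
      ip_subnet: '172.16.2.0/24'
      allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]

The networks management workflows then only need to read and update entries
of this shape.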

This functionality allows us to:
Select roles for deployment and configure them
Select networks used for deployment and configure them
Assign networks to roles

The result of this is a network-isolation.yaml environment file with the correct
templates configured in resource_registry and parameters set according to the
information in network_data.yaml and roles_data.yaml.
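
The generated environment would contain entries along these lines (a short
excerpt, assuming the default template paths):

    resource_registry:
      OS::TripleO::Network::InternalApi: ../network/internal_api.yaml
      OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api.yaml
      OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml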

Work needed to finish:
[tripleo-heat-templates]
Add networks directory https://review.openstack.org/#/c/520634/

[tripleo-common]
Update Networks
https://blueprints.launchpad.net/tripleo/+spec/update-networks-action
Get Available Networks
https://blueprints.launchpad.net/tripleo/+spec/get-networks-action
Select Networks  (will be pretty much the
same as
https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow
)

[tripleo-ui] , Wireframes [6]
Create Network configuration step in deployment plan page
Create network configuration wizard view
Create dialog to select networks used for deployment
Create dialog to configure networks
Create dialog to assign networks to roles
https://blueprints.launchpad.net/tripleo/+spec/networks-roles-assignment-ui

Up to here the direction is pretty well defined.

*Step 2. network-environment -> NIC configs*

The second step of network configuration is the NIC config. For this,
network-environment.yaml is used, which references the NIC config templates
that define network_config in their resources section. The user is currently
required to configure these templates manually. We would like to provide an
interactive view which would allow the user to set up these templates using
TripleO UI. A good example is a standalone tool created by Ben Nemec [3].

There is currently work aimed for Pike to introduce jinja templating for the
network environments and templates [4] (single-nic-with-vlans,
bond-with-vlans) to support composable networks and roles (integrating data
from roles_data.yaml and network_data.yaml). It would be great if we could
move this one step further by using these samples as a starting point and
letting the user specify the full NIC configuration.

Available information at this point:
- list of roles and networks as well as which networks need to be
configured at which role's NIC Config template
- os-net-config schema which defines NIC configuration elements and
relationships [5]
- jinja templated sample NIC templates

Requirements:
- provide feedback to the user about networks which are assigned to a role but
have not been configured in the NIC config yet
- let the user construct the network_config section of the NIC config
templates for each role (bridges/bonds/vlans/interfaces...)
- provide means to assign networks to vlans/interfaces and automatically
construct the network_config section parameter references
- populate parameter definitions in NIC config templates based on the
role/networks assignment
- populate parameter definitions in NIC config templates based on the specific
elements which use them, e.g. BondInterfaceOvsOptions when ovs_bond
is used
- store NIC config templates in the deployment plan and reference them from
network-environment.yaml

Problems to solve:
The biggest problem to solve, as I see it, is defining the logic which would
automatically handle assigning parameters to elements in network_config
based on the network which the user assigns to the element. For example: using
the GUI, the user is creating the network_config for the compute role based on
network/config/multiple-nics/compute.yaml; the user adds an interface and
assigns the interface to the Tenant network. The resulting template should
then automatically populate addresses/ip_netmask: get_param: TenantIpSubnet.
The question is whether all this logic should live in the GUI, or whether the
GUI should pass a simplified format to a Mistral workflow which converts it to
the proper network_config format and populates the template with it.
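
For illustration only (this is not a proposed API; the nic_assignments
structure and the helper below are made up for the example), the kind of
conversion logic in question, wherever it ends up living, is roughly:

    # Simplified input a GUI dialog might produce for one role: which NIC
    # carries which network (made-up structure, for illustration only).
    nic_assignments = [
        {'device': 'nic1', 'network': 'Storage'},
        {'device': 'nic2', 'network': 'Tenant'},
    ]

    def to_network_config(assignments):
        """Expand the simplified form into os-net-config style interface
        entries, wiring in the conventional <Network>IpSubnet parameter
        references (e.g. Tenant results in {'get_param': 'TenantIpSubnet'}).
        """
        config = []
        for item in assignments:
            config.append({
                'type': 'interface',
                'name': item['device'],
                'use_dhcp': False,
                'addresses': [
                    {'ip_netmask':
                        {'get_param': '%sIpSubnet' % item['network']}},
                ],
            })
        return config

    print(to_network_config(nic_assignments))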

I'd really like to hear some ideas or feedback on this so we can figure out
how to define a mechanism for 

Re: [openstack-dev] [TripleO][ui] Network Configuration wizard

2018-02-15 Thread Jiri Tomasek
On Wed, Feb 14, 2018 at 11:16 PM, Ben Nemec <openst...@nemebean.com> wrote:

>
>
> On 02/09/2018 08:49 AM, Jiri Tomasek wrote:
>
>> *Step 2. network-environment -> NIC configs*
>>
>> Second step of network configuration is NIC config. For this
>> network-environment.yaml is used which references NIC config templates
>> which define network_config in their resources section. User is currently
>> required to configure these templates manually. We would like to provide
>> interactive view which would allow user to setup these templates using
>> TripleO UI. A good example is a standalone tool created by Ben Nemec [3].
>>
>> There is currently work aimed for Pike to introduce jinja templating for
>> network environments and templates [4] (single-nic-with-vlans,
>> bond-with-vlans) to support composable networks and roles (integrate data
>> from roles_data.yaml and network_data.yaml) It would be great if we could
>> move this one step further by using these samples as a starting point and
>> let user specify full NIC configuration.
>>
>> Available information at this point:
>> - list of roles and networks as well as which networks need to be
>> configured at which role's NIC Config template
>> - os-net-config schema which defines NIC configuration elements and
>> relationships [5]
>> - jinja templated sample NIC templates
>>
>> Requirements:
>> - provide feedback to the user about networks which are assigned to a role
>> but have not been configured in the NIC config yet
>>
>
> I don't have much to add on this point, but I will note that because my UI
> is standalone and pre-dates composable networks it takes the opposite
> approach.  As a user adds a network to a role, it exposes the configuration
> for that network.  Since you have the networks ahead of time, you can
> obviously expose all of those settings up front and ensure the correct
> networks are configured for each nic-config.
>
> I say this mostly for everyone's awareness so design elements of my tool
> don't get copied where they don't make sense.
>
> - let user construct network_config section of NIC config templates for
>> each role (bridges/bonds/vlans/interfaces...)
>> - provide means to assign network to vlans/interfaces and automatically
>> construct network_config section parameter references
>>
>
> So obviously your UI code is going to differ, but I will point out that
> the code in my tool for generating the actual os-net-config data is
> semi-standalone:
> https://github.com/cybertron/tripleo-scripts/blob/master/net_processing.py
>
> It's also about 600 lines of code and doesn't even handle custom roles or
> networks yet.  I'm not clear whether it ever will at this point given the
> change in my focus.
>
> Unfortunately the input JSON schema isn't formally documented, although
> the unit tests do include a number of examples.
> https://github.com/cybertron/tripleo-scripts/blob/master/test-data/all-the-things/nic-input.json
> covers quite a few different cases.
>
> - populate parameter definitions in NIC config templates based on
>> role/networks assignment
>> - populate parameter definitions in NIC config templates based on
>> specific elements which use them e.g. BondInterfaceOvsOptions in case when
>> ovs_bond is used
>>
>
> I guess there's two ways to handle this - you could use the new jinja
> templating to generate parameters, or you could handle it in the generation
> code.
>
> I'm not sure if there's a chicken-and-egg problem with the UI generating
> jinja templates, but that's probably the simplest option if it works. The
> approach I took with my tool was to just throw all the parameters into all
> the files and if they're unused then oh well.  With jinja templating you
> could do the same thing - just copy a single boilerplate parameter header
> that includes the jinja from the example nic-configs and let the templating
> handle all the logic for you.
>
> It would be cleaner to generate static templates that don't need to be
> templated, but it would require re-implementing all of the custom network
> logic for the UI.  I'm not sure being cleaner is sufficient justification
> for doing that.
>
> - store NIC config templates in deployment plan and reference them from
>> network-environment.yaml
>>
>> Problems to solve:
>> As a biggest problem to solve I see defining logic which would
>> automatically handle assigning parameters to elements in network_config
>> based on Network which user assigns to the element. For example: Using GUI,
>> user is creating network_config for compute role based on
>> network/config/multiple-nic

[openstack-dev] [tripleo] FFE request for config-download-ui

2018-07-26 Thread Jiri Tomasek
Hello,

I would like to request an FFE for [1]. The current status of the TripleO UI
patches is here [2]; the last 2 patches are pending review and currently
depend on [3], which is close to landing.

[1] https://blueprints.launchpad.net/tripleo/+spec/config-download-ui/
[2]
https://review.openstack.org/#/q/project:openstack/tripleo-ui+branch:master+topic:bp/config-download-ui
[3] https://review.openstack.org/#/c/583293/

Thanks
-- Jiri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Patches to speed up plan operations

2018-08-08 Thread Jiri Tomasek
Hello, thanks for bringing this up.

I am going to try to test this patch with TripleO UI tomorrow. Without having
looked at the patch properly yet, the questions I would like to get answered
are:

How is this going to affect the ways to create/update a deployment plan?
Currently the user is able to create a deployment plan by:
- not providing any files - creating deployment plan from default files in
/usr/share/openstack-tripleo-heat-templates
- providing a tarball
- providing a local directory of files to create plan from
- providing a git repository link
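
For the tarball case, my rough mental model of what the patch does (a sketch
only, assuming python-swiftclient; the object name below is just an example,
not necessarily what the patch uses) is to pack the templates into one archive
and store it as a single Swift object instead of hundreds of individual ones:

    import io
    import tarfile

    def upload_plan_tarball(conn, container, templates_dir):
        """Pack a local templates directory and store it as one Swift object.

        conn is an authenticated swiftclient.client.Connection and container
        is the plan name. A single PUT replaces many per-file PUTs, which is
        where most of the plan-upload speedup comes from.
        """
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode='w:gz') as tar:
            tar.add(templates_dir, arcname='.')
        conn.put_container(container)
        conn.put_object(container, 'templates.tar.gz',
                        contents=buf.getvalue())

    # Usage sketch:
    # upload_plan_tarball(conn, 'overcloud',
    #                     '/home/stack/tripleo-heat-templates')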

These changes will have an impact on certain TripleO UI operations where
(in rare cases) we reach directly for a Swift object.

IIUC, it seems we are deciding to treat the deployment plan as a black box
packed in a tarball, which I quite like; we'll need to provide a standard
way to add custom files to the plan.

How is this going to affect the CLI vs GUI workflow? Currently the CLI creates
the plan as part of the deploy command, whereas the GUI starts its workflow
by selecting/creating a deployment plan, the whole configuration is performed
on that plan, and then the plan gets deployed. We are aiming to introduce CLI
commands that consolidate the behaviour of both clients to what the GUI
workflow is currently.

I am going to try to find answers to these questions and identify potential
problems in the next couple of days.

-- Jirka


On Tue, Aug 7, 2018 at 5:34 PM Dan Prince  wrote:

> Thanks for taking this on Ian! I'm fully on board with the effort. I
> like the consolidation and performance improvements. Storing t-h-t
> templates in Swift worked okay 3-4 years ago. Now that we have more
> templates, many of which need .j2 rendering, the storage there has
> become quite a bottleneck.
>
> Additionally, since we'd be sending commands to Heat via local
> filesystem template storage we could consider using softlinks again
> within t-h-t which should help with refactoring and deprecation
> efforts.
>
> Dan
> On Wed, Aug 1, 2018 at 7:35 PM Ian Main  wrote:
> >
> >
> > Hey folks!
> >
> > So I've been working on some patches to speed up plan operations in
> TripleO.  This was originally driven by the UI needing to be able to
> perform a 'plan upload' in something less than several minutes. :)
> >
> > https://review.openstack.org/#/c/581153/
> > https://review.openstack.org/#/c/581141/
> >
> > I have a functioning set of patches, and it actually cuts over 2 minutes
> off the overcloud deployment time.
> >
> > Without patch:
> > + openstack overcloud plan create --templates
> /home/stack/tripleo-heat-templates/ overcloud
> > Creating Swift container to store the plan
> > Creating plan from template files in: /home/stack/tripleo-heat-templates/
> > Plan created.
> > real    3m3.415s
> >
> > With patch:
> > + openstack overcloud plan create --templates
> /home/stack/tripleo-heat-templates/ overcloud
> > Creating Swift container to store the plan
> > Creating plan from template files in: /home/stack/tripleo-heat-templates/
> > Plan created.
> > real    0m44.694s
> >
> > This is on VMs.  On real hardware it now takes something like 15-20
> seconds to do the plan upload which is much more manageable from the UI
> standpoint.
> >
> > Some things about what this patch does:
> >
> > - It makes use of process-templates.py (written for the undercloud) to
> process the jinjafied templates.  This reduces replication with the
> existing version in the code base and is very fast as it's all done on
> local disk.
> > - It stores the bulk of the templates as a tarball in swift.  Any
> individual files in swift take precedence over the contents of the tarball
> so it should be backwards compatible.  This is a great speed up as we're
> not accessing a lot of individual files in swift.
> >
> > There's still some work to do; cleaning up and fixing the unit tests,
> testing upgrades etc.  I just wanted to get some feedback on the general
> idea and hopefully some reviews and/or help - especially with the unit test
> stuff.
> >
> > Thanks everyone!
> >
> > Ian
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)

2018-08-22 Thread Jiri Tomasek
Hi,

thanks for the write-up, James. I am adding a few notes/ideas inline...

On Mon, Aug 20, 2018 at 10:48 PM James Slagle 
wrote:

> As we start looking at how TripleO will address next generation deployment
> needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a
> discussion around how TripleO can evolve and adapt to meet these new
> challenges.
>
> What are these challenges? I think the OpenStack Edge Whitepaper does a
> good
> job summarizing some of them:
>
>
> https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf
>
> They include:
>
> - management of distributed infrastructure
> - massive scale (thousands instead of hundreds)
> - limited network connectivity
> - isolation of distributed sites
> - orchestration of federated services across multiple sites
>
> We already have a lot of ongoing work that directly or indirectly starts to
> address some of these challenges. That work includes things like
> config-download, split-controlplane, metalsmith integration, validations,
> all-in-one, and standalone.
>
> I laid out some initial ideas in a previous message:
>
> http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html
>
> I'll be reviewing some of that here and going into a bit more detail.
>
> These are some of the high level ideas I'd like to see TripleO start to
> address:
>
> - More separation between planning and deploying (likely to be further
> defined
>   in spec discussion). We've had these concepts for a while, but we need
> to do
>   a better job of surfacing them to users as deployments grow in size and
>   complexity.
>

One of the focus points of the UI/CLI and Workflows squads for Stein is
getting the GUI and CLI consolidated so that both clients operate on the
deployment plan via Mistral workflows. We are currently working on identifying
missing CLI commands, which should lead to adopting the same workflow the GUI
uses. This will lead to complete interoperability between the clients and
would make the deployment plan the first-class citizen, as Ben mentioned in
the discussion linked above.

The existing plan import/export functionality makes the deployment plan easily
portable and replicable, as it is possible to export the plan at any point in
time and re-use it (with the ability to still apply some tweaks for each
usage).
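
(As a rough illustration only, not the actual export workflow: assuming
python-swiftclient, a plan export essentially amounts to downloading every
object of the plan container so it can be archived and re-imported elsewhere.)

    import os

    def export_plan(conn, container, dest_dir):
        """Dump every object of a plan container to a local directory.

        conn is an authenticated swiftclient.client.Connection. The resulting
        directory can be tarred up and imported into another undercloud.
        """
        _, objects = conn.get_container(container, full_listing=True)
        for obj in objects:
            _, contents = conn.get_object(container, obj['name'])
            path = os.path.join(dest_dir, obj['name'])
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, 'wb') as f:
                f.write(contents)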

Additionally, Steven's work [1] introduces plan-types, which add the ability
to define multiple starting points for the deployment plan.

[1] https://review.openstack.org/#/c/574753


>
>   With config-download, we can more easily separate the phases of
> rendering,
>   downloading, validating, and applying the configuration. As we increase
> in
>   scale to managing many deployments, we should take advantage of what
> each of
>   those phases offer.
>
>   The separation also makes the deployment more portable, as we should
>   eliminate any restrictions that force the undercloud to be the control
> node
>   applying the configuration.
>
> - Management of multiple deployments from a single undercloud. This is of
>   course already possible today, but we need better docs and polish and
> more
>   testing to flush out any bugs.
>
> - Plan and template management in git.
>
>   This could be an iterative step towards eliminating Swift in the
> undercloud.
>   Swift seemed like a natural choice at the time because it was an existing
>   OpenStack service.  However, I think git would do a better job at
> tracking
>   history and comparing changes and is much more lightweight than Swift.
> We've
>   been managing the config-download directory as a git repo, and I like
> this
>   direction. For now, we are just putting the whole git repo in Swift, but
> I
>   wonder if it makes sense to consider eliminating Swift entirely. We need
> to
>   consider the scale of managing thousands of plans for separate edge
>   deployments.
>
>   I also think this would be a step towards undercloud simplification.
>

+1, we need to identify how much this affects the existing API and the overall
user experience for managing deployment plans. The plan management options we
currently support are:
- create a plan from the default files (/usr/share/tht...)
- create/update a plan from a local directory
- create/update a plan by providing a tarball
- create/update a plan from a remote git repository

Ian has been working on similar efforts towards performance improvements [2].
It would be good to take this a step further and evaluate the possibility of
eliminating Swift entirely.

[2] https://review.openstack.org/#/c/581153/

-- Jirka


>
> - Orchestration between plans. I think there's general agreement around
> scaling
>   up the undercloud to be more effective at managing and deploying multiple
>   plans.
>
>   The plans could be different OpenStack deployments potentially sharing
> some
>   resources. Or, they could be deployments of different software stacks
>   (Kubernetes/OpenShift, Ceph, etc).
>
>   We'll need to develop some common interfaces for some basic orchestration
>   between plans. It could include 

Re: [openstack-dev] [TripleO] Plan management refactoring for Life cycle

2018-09-10 Thread Jiri Tomasek
Hi Mathieu,

Thanks for bringing up the topic. There are several efforts currently in
progress which should lead to solving the problems you're describing. We
are working on introducing CLI commands which would perform the deployment
configuration operations on the deployment plan in Swift. This is the main
step towards finally reaching CLI and GUI compatibility/interoperability. The
CLI will perform actions to configure the deployment (roles, networks,
environment selection, parameter setting etc.) by calling Mistral workflows
which store the information in the deployment plan in Swift. The result is
that all the information which defines the deployment is stored in a central
place, the deployment plan in Swift, and the deploy command is turned into a
simple 'openstack overcloud  deploy'. The deployment plan then has
plan-environment.yaml, which holds the list of environments used and the
customized parameter values, roles_data.yaml, which carries the roles
definition, and network_data.yaml, which carries the networks definition. The
information stored in these files (and the deployment plan in general) can
then be treated as the source of information about the deployment. The
deployment can then be easily exported and reliably replicated.
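
To make that concrete (the values below are purely illustrative, not taken
from a real plan), the plan-environment.yaml stored in the plan ends up
carrying roughly this kind of data, which is enough to reproduce the
deployment configuration:

    import yaml

    # Illustrative plan-environment.yaml content (simplified).
    plan_environment = {
        'name': 'overcloud',
        'template': 'overcloud.yaml',
        'environments': [
            # Applied in this order, mirroring the -e options / GUI choices.
            {'path': 'overcloud-resource-registry-puppet.yaml'},
            {'path': 'environments/network-isolation.yaml'},
        ],
        'parameter_defaults': {
            # Explicitly set parameters; these win over the environments.
            'ControllerCount': 3,
            'NtpServer': 'pool.ntp.org',
        },
    }

    print(yaml.safe_dump(plan_environment, default_flow_style=False))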

Here is the document which we put together to identify the missing pieces
between the GUI, CLI and the Mistral TripleO API. We'll use this to discuss
the topic at the PTG this week and define the work needed to achieve
complete interoperability [1].

There is also a pending patch from Steven Hardy which aims to remove the
CLI-specific environment merging, which should fix the problem with tracking
the environments used in a CLI deployment [2].

[1] https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac
(Apologies for inconvenient format, I'll try to update this to
better/editable format. Original doc:
https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing
)
[2] https://review.openstack.org/#/c/448209/

-- Jirka

On Mon, Sep 10, 2018 at 8:05 AM mathieu bultel  wrote:

> Hi folks,
>
> Last week I wrote a BluePrint and a spec [1] to propose to change the way
> we used and managed the Plan in TripleO for the Deployment and the Life
> cycle (update/upgrade and scale).
>
> While I was working on trying to simplify the implementation of the
> update and upgrade for end user usage, I found it very hard to follow all
> the calls that the TripleO client was doing to the HeatClient and
> SwiftClient.
>
> I traced the calls and found that we can safely and easily decrease the
> number of calls and simplify the way that we are computing & rendering
> the TripleO Heat Templates files.
>
> I did a PoC to see what the problems behind that would be and what we
> could do without breaking the "standard" usage and all the particular
> things that the current code handles (specific deployments and
> configurations & so on).
>
> With this refactoring I'm seeing another gain for the life cycle part of
> TripleO, where we used to try to make things simpler & safer but we
> constantly failed due to this complexity and all the "special cases" that
> we faced during the testing.
>
> The result is that, when a user needs to perform an update/upgrade of their
> deployment, they really have to be careful and pay a lot of attention to all
> the options and -e environment files that they previously used, with the
> risk of making a simple mistake and totally messing up the deployment.
>
> So my goals with this PoC and this BP are to try to address those points
> by:
>
> simplify and reduce the number of calls between the clients,
>
> have a simple way of creating and updating the plan, even by amending the
> plan with only particular files / config or playbooks,
>
> store all the information provided by the user by uploading all the
> files outside of the plan,
>
> keep track of the environment files passed to the CLI,
>
> trace the life cycle story of the deployment.
>
> So feel free to comment, add your concerns or feedback around this.
>
> Cheer,
>
> Mathieu
>
> [1]
>
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-plan-management
>
> https://review.openstack.org/599396
>
> [2]
>
>  https://review.openstack.org/583145
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Posibilities to aggregate/merge configs across templates

2018-09-11 Thread Jiri Tomasek
Hi,

The problems you're describing are close to the discussion we had with
Mathieu Bultel here [1]. Currently, to set some parameter values as the
ultimate source of truth, you need to put them in plan-environment.yaml.
Ignoring the fact that the CLI now merges environments itself (fixed by [2]
and not affecting this behaviour), the Mistral workflows pass the environments
to Heat in the order in which they are provided with the -e option, and then
as the last environment they apply the parameter_defaults from
plan-environment.yaml. The result of the [1] effort is going to be that the
deployment configuration (role settings, network selection, environment
selection and explicit parameter setting) is done the same way by both the CLI
and the GUI, through Mistral workflows which already exist but are currently
used only by the GUI. When you look at plan-environment.yaml in Swift, you can
see the list of environment files in the order in which they're merged, as
well as the parameters which are going to override the values in the
environments in case of collision.
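
In other words (a simplified sketch only; the real logic lives in the Mistral
actions and in Heat itself, and it also covers resource_registry entries), the
effective parameter resolution is a plain last-wins merge in that order:

    def effective_parameters(environments, plan_parameter_defaults):
        """Simplified view of how a parameter value is resolved.

        environments: dicts with 'parameter_defaults', in the order they are
        listed in plan-environment.yaml (i.e. the -e order).
        plan_parameter_defaults: the explicit values from
        plan-environment.yaml, applied last, so they win on collision.
        """
        result = {}
        for env in environments:
            result.update(env.get('parameter_defaults', {}))
        result.update(plan_parameter_defaults)
        return result

    # Example: the plan-level value overrides the one from an environment.
    envs = [{'parameter_defaults': {'NeutronGlobalPhysnetMtu': 1500}}]
    print(effective_parameters(envs, {'NeutronGlobalPhysnetMtu': 9000}))
    # -> {'NeutronGlobalPhysnetMtu': 9000}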

The merging strategy for parameters is an interesting problem; configuring
this in t-h-t looks like a good solution to me. Note that the GUI always
displays the parameter values which it gets from the GetParameters
Mistral action. This action gets the parameter values from Heat by running
a heat validate. This means that it always displays the real parameter values
which are actually going to be applied by Heat as a result of all the
merging. If the user updates a value with the GUI, it will end up being set in
plan-environment.yaml.

-- Jirka




[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134511.html
[2] https://review.openstack.org/#/c/448209/


On Tue, Sep 4, 2018 at 9:54 AM Kamil Sambor  wrote:

> Hi all,
>
> I want to start a discussion on how to solve the issue of merging
> environment values in TripleO.
>
> Description:
> In TripleO we experience some issues related to setting parameters in heat
> templates. First, it isn't possible to set some params as the ultimate
> source of truth (disallowing overwriting the param in other heat templates).
> Second, it isn't possible to merge values from different templates [0][1].
> Both features are implemented in heat and can be easily used in
> templates [2][3].
> This doesn't work in TripleO because we overwrite all values in the template
> in the python client instead of aggregating them etc. or simply letting heat
> do the job [4][5].
>
> Solution:
> Example solutions are: we can fix how the python tripleo client works with
> envs and templates and enable the heat features, or we can write some puppet
> code that works similarly to the firewall code [6] and supports aggregating
> and merging the values that we point out. Both solutions have pros and cons,
> but IMHO the solution which lets heat do the job is preferable. However, the
> solution with merging gives us the possibility to have full control over the
> merging of environments.
>
> Problems:
> Only a few as a start: with both solutions we will have the same problem of
> porting new patches which use these functionalities to older versions of
> RHEL. Also, upgrades to the new version can be really problematic. Changes
> which enable the heat feature will also totally change how templates work;
> we will need to change all templates, change the default behavior (which is
> to merge params) to override behavior, and also add the possibility to
> temporarily run the old behavior.
>
> In the end, I prepared two patch sets with two PoCs in progress. The first
> one merges the env in the tripleo client but uses the heat merging
> functionality: https://review.openstack.org/#/c/599322/ . The second one
> ignores the merged env and moves all the files and adds them into the
> deployment plan environments.
> https://review.openstack.org/#/c/599559/
>
> What do you think about each solution? Which solution should be used
> in TripleO?
>
> Best,
> Kamil Sambor
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1716391
> [1] https://bugs.launchpad.net/heat/+bug/1635409
> [2]
> https://docs.openstack.org/heat/pike/template_guide/environment.html#restrict-update-or-replace-of-a-given-resource
> [3]
> https://docs.openstack.org/heat/pike/template_guide/environment.html#environment-merging
> [4]
> https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/utils.py#L1019
> [5]
> https://github.com/openstack/python-heatclient/blob/f73c2a4177377b710a02577feea38560b00a24bf/heatclient/common/template_utils.py#L191
> [6]
> https://github.com/openstack/puppet-tripleo/tree/master/manifests/firewall
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [tripleo] Workflows Squad changes

2018-11-28 Thread Jiri Tomasek
Hi all,

Recently, the workflows squad has been reorganized and people from the
squad are joining different squads. I would like to discuss how we are
going to adjust to this situation to make sure that tripleo-common
development is not going to be blocked in terms of feature work and reviews.

With this change, most of the tripleo-common maintenance work goes
naturally to the UI & Validations squad, as the CLI and GUI are the consumers
of the API provided by tripleo-common. Adriano Petrich from the Workflows
squad has joined the UI squad to take on this work.

As a possible solution, I would like to propose Adriano as a core reviewer
on tripleo-common and to give tripleo-ui cores the right to +2 tripleo-common
patches.

It would be great to hear opinions, especially from former members of the
Workflows squad and regular contributors to tripleo-common, on these changes
and in general on how to establish regular reviews and maintenance to ensure
that the tripleo-common codebase moves towards converging the CLI and GUI
deployment workflows.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev