Re: [openstack-dev] [TripleO] nominating James Polley for tripleo-core

2015-01-16 Thread Ladislav Smola

+1

On 01/14/2015 07:34 PM, Robert Collins wrote:


+1

On 15 Jan 2015 07:15, Clint Byrum cl...@fewbar.com wrote:


Hello! It has been a while since we expanded our review team. The
numbers aren't easy to read, with recent dips caused by the summit and
holidays. However, I believe James has demonstrated superb review skills
and a commitment that shows broad awareness of the project.

Below are the results of a meta-review I did, selecting recent reviews
by James with comments and a final score. I didn't find any reviews by
James that I objected to.

https://review.openstack.org/#/c/133554/ -- Took charge and provided
valuable feedback. +2
https://review.openstack.org/#/c/114360/ -- Good -1 asking for a better
commit message, then a timely follow-up +1 with positive comments for
further improvement. +2
https://review.openstack.org/#/c/138947/ -- Simpler review, +1'd on Dec. 19
and no follow-up since. Allowing 2 weeks for holiday vacation, this is
really only about 7 - 10 working days and acceptable. +2
https://review.openstack.org/#/c/146731/ -- Very thoughtful -1 review of a
recent change, with alternatives to the approach submitted as patches.
https://review.openstack.org/#/c/139876/ -- Simpler review, +1'd in
agreement with everyone else. +1
https://review.openstack.org/#/c/142621/ -- Thoughtful +1 with
consideration for other reviewers. +2
https://review.openstack.org/#/c/113983/ -- Thorough spec review, with
grammar pedantry noted as something that would not prevent a positive
review score. +2

All current tripleo-core members are invited to vote at this time. Thank you!






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] os-cloud-config

2014-10-30 Thread Ladislav Smola

Hello,

We call it from the UI after the deployment:
https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/infrastructure/overview/forms.py#L222
There should be a conversation at the summit about whether to do the call
from somewhere else (Tuskar, a template, ...).


Kind Regards,
Ladislav


On 10/30/2014 01:47 PM, LeslieWang wrote:

Dear all,

It seems like os-cloud-config is meant to initialise the undercloud or
overcloud after heat orchestration. But I cannot find how it is used in
either Tuskar or Tuskar-UI. So can anyone explain a little bit how it is
used in the TripleO projects? Thanks.


Best Regards
Leslie Wang




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] Release report

2014-10-01 Thread Ladislav Smola

1. os-apply-config: release: 0.1.22 -> 0.1.23
   - https://pypi.python.org/pypi/os-apply-config/0.1.23
   - http://tarballs.openstack.org/os-apply-config/os-apply-config-0.1.23.tar.gz

2. os-refresh-config: release: 0.1.7 -> 0.1.8
   - https://pypi.python.org/pypi/os-refresh-config/0.1.8
   - http://tarballs.openstack.org/os-refresh-config/os-refresh-config-0.1.8.tar.gz

3. os-collect-config: no changes, 0.1.28

4. os-cloud-config: release: 0.1.10 -> 0.1.11
   - https://pypi.python.org/pypi/os-cloud-config/0.1.11
   - http://tarballs.openstack.org/os-cloud-config/os-cloud-config-0.1.11.tar.gz

5. diskimage-builder: release: 0.1.31 -> 0.1.32
   - https://pypi.python.org/pypi/diskimage-builder/0.1.32
   - http://tarballs.openstack.org/diskimage-builder/diskimage-builder-0.1.32.tar.gz

6. dib-utils: release: 0.0.6 -> 0.0.7
   - https://pypi.python.org/pypi/dib-utils/0.0.7
   - http://tarballs.openstack.org/dib-utils/dib-utils-0.0.7.tar.gz

7. tripleo-heat-templates: release: 0.7.7 -> 0.7.8
   - https://pypi.python.org/pypi/tripleo-heat-templates/0.7.8
   - http://tarballs.openstack.org/tripleo-heat-templates/tripleo-heat-templates-0.7.8.tar.gz

8. tripleo-image-elements: release: 0.8.7 -> 0.8.8
   - https://pypi.python.org/pypi/tripleo-image-elements/0.8.8
   - http://tarballs.openstack.org/tripleo-image-elements/tripleo-image-elements-0.8.8.tar.gz

9. tuskar: release: 0.4.12 -> 0.4.13
   - https://pypi.python.org/pypi/tuskar/0.4.13
   - http://tarballs.openstack.org/tuskar/tuskar-0.4.13.tar.gz

10. python-tuskarclient: release: 0.1.12 -> 0.1.13
    - https://pypi.python.org/pypi/python-tuskarclient/0.1.13
    - http://tarballs.openstack.org/python-tuskarclient/python-tuskarclient-0.1.13.tar.gz


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-10 Thread Ladislav Smola

+1

On 09/09/2014 08:32 PM, Gregory Haynes wrote:

Hello everyone!

I have been working on a meta-review of StevenK's reviews and I would
like to propose him as a new member of our core team.

As I'm sure many have noticed, he has been above our stats requirements
for several months now. More importantly, he has been reviewing a wide
breadth of topics and seems to have a strong understanding of our code
base. He also seems to be doing a great job at providing valuable
feedback and being attentive to responses on his reviews.

As such, I think he would make a great addition to our core team. Could
the other core team members please reply with your votes, whether you
agree or disagree?

Thanks!
Greg




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change of meeting time

2014-08-21 Thread Ladislav Smola

On 08/20/2014 11:30 AM, Dougal Matthews wrote:

- Original Message -

From: Derek Higgins der...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, 20 August, 2014 10:15:51 AM
Subject: Re: [openstack-dev] [TripleO] Change of meeting time

On 24/05/14 01:21, James Polley wrote:

Following a lengthy discussion under the subject "Alternating meeting
time for more TZ friendliness", the TripleO meeting now alternates
between Tuesday 1900 UTC (the former time) and Wednesday 0700 UTC, for
better coverage across Australia, India, China, Japan, and the other
parts of the world that found it impossible to get to our previous
meeting time.

Raising a point that came up in this morning's IRC meeting:

A lot (most?) of the people at this morning's meeting were based in
western Europe, getting up earlier than usual for the meeting (me
included). When daylight saving kicks in it might push them past the
threshold. Would an hour later (0800 UTC) work better for people, or is
the current time what fits best?

I'll try to make the meeting regardless of whether it's moved, but an hour
later would certainly make it a little more palatable.

+1, I don't have a strong preference, but an hour later would make it a
bit easier, particularly when DST kicks in.

Dougal


+1





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for merging Heat HOT port

2014-08-01 Thread Ladislav Smola

Hello,

yes, this is a much-needed change, so I'd vote for -2 on everything unless
it's rebased onto 105347.


Thank you for the patches.

Ladislav

On 08/01/2014 02:19 AM, Steve Baker wrote:
The changes to port tripleo-heat-templates to HOT have been rebased to 
the current state and are ready to review. They are the next steps in 
blueprint tripleo-juno-remove-mergepy.


However, coordination is needed to merge, since every existing
tripleo-heat-templates change will need to be rebased and changed
after the port lands (lucky you!).


Here is a summary of the important changes in the series:

https://review.openstack.org/#/c/105327/
Low risk and plenty of +2s, just needs enough validation from CI for 
an approve


https://review.openstack.org/#/c/105328/
Scripted conversion to HOT. Converts everything except Fn::Select

https://review.openstack.org/#/c/105347/
Manual conversion of Fn::Select to extended get_attr

I'd like to suggest the following approach for getting these to land:
* Any changes which really should land before the above 3 get listed 
in this mail thread (vlan?)

* Reviews of the above 3 changes, and local testing of change 105347
* All other tripleo-heat-templates changes need to be rebased/reworked to
land after 105347 (and maybe -2'd until they are?)


I'm available for any questions on porting your changes to HOT.

cheers




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Strategy for recovering crashed nodes in the Overcloud?

2014-07-25 Thread Ladislav Smola

Hi,

I believe you are looking for stack convergence in Heat. It's not fully
implemented yet, AFAIK. You can check it out here:
https://blueprints.launchpad.net/heat/+spec/convergence

Hope it helps.

Ladislav

On 07/23/2014 12:31 PM, Howley, Tom wrote:


(Resending to properly start new thread.)

Hi,

I'm running a HA overcloud configuration and as far as I'm aware, 
there is currently no mechanism in place for restarting failed nodes 
in the cluster. Originally, I had been wondering if we would use a 
corosync/pacemaker cluster across the control plane with STONITH 
resources configured for each node (a STONITH plugin for Ironic could 
be written). This might be fine if a corosync/pacemaker stack is 
already being used for HA of some components, but it seems overkill 
otherwise. The undercloud heat could be in a good position to restart 
the overcloud nodes -- is that the plan or are there other options 
being considered?


Thanks,

Tom





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Feedback on init-keystone spec

2014-04-30 Thread Ladislav Smola

Hello Steve,

the spec looks correct to me. Thanks for picking this up; it's much
needed, especially for Tuskar.

Let me know if you need any help with testing it, or with anything else.


Kind Regards,
Ladislav


On 04/30/2014 09:02 AM, Steve Kowalik wrote:

Hi,

I'm looking at moving init-keystone from tripleo-incubator to 
os-cloud-config, and I've drafted a spec at 
https://etherpad.openstack.org/p/tripleo-init-keystone-os-cloud-config .


Feedback welcome.

Cheers,



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Undercloud Ceilometer

2014-04-23 Thread Ladislav Smola

Hi Neal, thanks for the response.

So I would see it as UNDERCLOUD_USE_UI (the TripleO UI can be placed only
in the undercloud).

And for the overcloud: OVERCLOUD_USE_UI and OVERCLOUD_USE_CEILOMETER,
because in the overcloud users might not want the UI, only the data for
billing. Does that sound reasonable?


On 04/22/2014 06:23 PM, Neal, Phil wrote:

From: Ladislav Smola [mailto:lsm...@redhat.com]
Sent: Wednesday, April 16, 2014 8:37 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] [Tuskar] Undercloud Ceilometer

No response so far, apart from a -1 on the image element for making
Ceilometer optional.

Sorry for the delayed response, Ladislav. It turns out that the mailing list
was filtering out these TripleO mails for me.

Let me add a little context to that -1: given that a TripleO user may not want 
to enable a UI layer at the undercloud level (there's a use case for using the 
undercloud solely for spinning up the overcloud), I think we want to support as 
small a footprint as possible.


OK, so what about having a variable in devtest_variables: USE_TRIPLEO_UI?


I like this approach better... In fact, I will look into adding something
similar to the changes I'm making to enable Ceilometer by default in the
overcloud control node: https://review.openstack.org/#/c/89625/1


It would add the undercloud Ceilometer, Tuskar-UI and Horizon, and the
overcloud SNMPd.

Defaulted to USE_TRIPLEO_UI=1 so we have UI stuff in CI.

How does it sound?


Perhaps specify something like UNDERCLOUD_USE_TRIPLEO_UI, to be more
specific about where this will be deployed.

On 04/14/2014 01:31 PM, Ladislav Smola wrote:

Hello,

I am planning to add Ceilometer to the undercloud by default. Since
Tuskar-UI uses it as the primary source of metering samples, and Tuskar
should be in the undercloud by default, it made sense to me.

So is my assumption correct, or are there reasons not to do this?

Here are the reviews, that are adding working Undercloud Ceilometer:
https://review.openstack.org/#/c/86915/
https://review.openstack.org/#/c/86917/  (depends on the template change)

https://review.openstack.org/#/c/87215/

Configuration for automatically obtaining stats from all overcloud nodes
via SNMP will follow soon.

Thanks,
Ladislav








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-18 Thread Ladislav Smola

On 04/15/2014 08:44 PM, Robert Collins wrote:

I've been watching the nova process, and I think it's working out well
- it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle.

I'm thinking we can just add docs to incubator, since that's already a
repository separate from our production code - what do folk think?

-Rob



+1, and +1 to a separate specs repo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Undercloud Ceilometer

2014-04-16 Thread Ladislav Smola
No response so far, apart from a -1 on the image element for making
Ceilometer optional.

OK, so what about having a variable in devtest_variables: USE_TRIPLEO_UI?

It would add the undercloud Ceilometer, Tuskar-UI and Horizon, and the
overcloud SNMPd.

Defaulted to USE_TRIPLEO_UI=1 so we have UI stuff in CI.

How does it sound?


On 04/14/2014 01:31 PM, Ladislav Smola wrote:

Hello,

I am planning to add Ceilometer to the undercloud by default. Since
Tuskar-UI uses it as the primary source of metering samples, and Tuskar
should be in the undercloud by default, it made sense to me.

So is my assumption correct, or are there reasons not to do this?

Here are the reviews, that are adding working Undercloud Ceilometer:
https://review.openstack.org/#/c/86915/
https://review.openstack.org/#/c/86917/  (depends on the template change)
https://review.openstack.org/#/c/87215/

Configuration for automatically obtaining stats from all overcloud nodes
via SNMP will follow soon.

Thanks,
Ladislav





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Tuskar] Undercloud Ceilometer

2014-04-14 Thread Ladislav Smola

Hello,

I am planning to add Ceilometer to the undercloud by default. Since
Tuskar-UI uses it as the primary source of metering samples, and Tuskar
should be in the undercloud by default, it made sense to me.

So is my assumption correct, or are there reasons not to do this?

Here are the reviews, that are adding working Undercloud Ceilometer:
https://review.openstack.org/#/c/86915/
https://review.openstack.org/#/c/86917/  (depends on the template change)
https://review.openstack.org/#/c/87215/

Configuration for automatically obtaining stats from all overcloud nodes
via SNMP will follow soon.

Thanks,
Ladislav


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Ladislav Smola

Hello,

we have used this list of steps for the demo, on Fedora 20:
https://wiki.openstack.org/wiki/Tuskar/Devtest

The demo is running on one machine with 24GB RAM and a 120GB disk. We are
using virtualized baremetal nodes (bm_poseur) for development.

Kind Regards,
Ladislav


On 04/10/2014 07:40 PM, Nachi Ueno wrote:

Hi Jarda,

Congratulations! This release and the demo are super awesome!!
Do you have any instructions for installing it?




2014-04-10 1:32 GMT-07:00 Jaromir Coufal jcou...@redhat.com:

Dear Stackers,

I am happy to announce that yesterday Tuskar UI (TripleO UI) tagged
branch 0.1.0 for the Icehouse release [0].

I put together a narrated demo of all included features [1].

You can find one manual part in the whole workflow - cloud initialization.
There is ongoing work on automating it in os-cloud-config, but for the
release we had to include the manual way. Automation should be added soon,
though.

I want to thank all contributors for their hard work to make this happen.
It has been a pleasure to cooperate with all of you guys, and I am looking
forward to bringing new features [2] in.


-- Jarda


[0] 0.1.0 Icehouse Release of the UI:
https://github.com/openstack/tuskar-ui/releases/tag/0.1.0

[1] Narrated demo of TripleO UI 0.1.0:
https://www.youtube.com/watch?v=-6whFIqCqLU

[2] Juno Planning for Tuskar:
https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Ladislav Smola

On 04/11/2014 01:03 PM, Thomas Goirand wrote:

On 04/11/2014 06:05 PM, Jaromir Coufal wrote:

On 2014/11/04 10:27, Thomas Goirand wrote:

On 04/10/2014 04:32 PM, Jaromir Coufal wrote:

Dear Stackers,

I am happy to announce that yesterday Tuskar UI (TripleO UI) tagged
branch 0.1.0 for the Icehouse release [0].

I put together a narrated demo of all included features [1].

You can find one manual part in the whole workflow - cloud
initialization. There is ongoing work on automating it in
os-cloud-config, but for the release we had to include the manual way.
Automation should be added soon, though.

I want to thank all contributors for their hard work to make this happen.
It has been a pleasure to cooperate with all of you guys, and I am looking
forward to bringing new features [2] in.


-- Jarda

Are the latest tags of all the needed components enough? In other words,
if I update all of TripleO in Debian, will I have a usable system?

Cheers,

Thomas

Hi Thomas,

I would say yes. The question is what you mean by "usable system". Do you
want to try the Tuskar UI? If yes, here is devtest, which will help you
get the dev setup: https://wiki.openstack.org/wiki/Tuskar/Devtest and
here is the part for Tuskar UI:
https://github.com/openstack/tuskar-ui/blob/master/docs/install.rst

If you want more general info about Tuskar, here is the wiki page:
https://wiki.openstack.org/wiki/Tuskar.

We are also very happy to help on the #tuskar or #tripleo freenode
channels if you experience any trouble.

-- Jarda

Hi Jarda,

Thanks a lot for your reply.

Unfortunately, these instructions aren't very useful if you want to do
an installation based on packages. Something like:

git clone https://git.openstack.org/openstack/tripleo-incubator
$TRIPLEO_ROOT/tripleo-incubator/scripts/devtest.sh --trash-my-machine

is of course a no-go. Stuff with pip install or easy_install, or git
clone, isn't what Debian users should read.

So, I guess the documentation for using packages has to be written
from scratch. I'm not sure where to start... :( Do you have time to help
with this?

Now, yes, I'd be very happy to chat about this on IRC. However, I have
to get my hands on a powerful enough server. I've read a post on this
list by Robert saying that I would need at least 16 GB of RAM. I hope to
get some spare hardware for such tests soon.

Thomas


Hi Thomas,

Yes, devtest is supposed to run on the bleeding edge. :-)

For Fedora:
I think you are looking for this:
https://github.com/agroup/instack-undercloud

It uses packages and pre-created images.

For Debian-like systems, I am afraid nobody has started preparing a
package-based solution.


Kind Regards,
Ladislav






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march [additional cores]

2014-04-08 Thread Ladislav Smola

+1 for the -core changes

jdon sounds like a pretty cool Mafia name, +1 for Don Jay


On 04/08/2014 09:10 AM, Tomas Sedovic wrote:

On 08/04/14 01:50, Robert Collins wrote:

tl;dr: 3 more core members to propose:
bnemec
greghaynes
jdon

-1, there's a typo in jdob's nick ;-)

In all seriousness, I support all of them being added to core.



On 4 April 2014 08:55, Chris Jones c...@tenshu.net wrote:

Hi

+1 for your proposed -core changes.

Re your question about whether we should retroactively apply the 3-a-day
rule to the 3-month review stats, my suggestion would be a qualified no.

I think we've established an agile approach to the member list of -core, so
if there are one or two people who we would have added to -core before the
goalposts moved, I'd say look at their review quality. If they're showing
the right stuff, let's get them in and helping. If they don't feel our new
goalposts are achievable with their workload, they'll fall out again
naturally before long.

So I've actioned the prior vote.

I said: "Bnemec, jdob, greg etc - good stuff, I value your reviews
already, but..."

So... looking at a few things - long period of reviews:
60 days:
| Reviewer   | Reviews  -2  -1   +1  +2  +A  +/- % | Disagreements* |
| greghaynes |     121   0  22   99   0   0  81.8% |  14 ( 11.6%)   |
| bnemec     |     116   0  38   78   0   0  67.2% |  10 (  8.6%)   |
| jdob       |      87   0  15   72   0   0  82.8% |   4 (  4.6%)   |

90 days:
| Reviewer   | Reviews  -2  -1   +1  +2  +A  +/- % | Disagreements* |
| bnemec     |     145   0  40  105   0   0  72.4% |  17 ( 11.7%)   |
| greghaynes |     142   0  23  119   0   0  83.8% |  22 ( 15.5%)   |
| jdob       |     106   0  17   89   0   0  84.0% |   7 (  6.6%)   |

Ben's reviews are thorough, he reviews across all contributors, he
shows good depth of knowledge and awareness across tripleo, and is
sensitive to the pragmatic balance between 'right' and 'good enough'.
I'm delighted to support him for core now.

Greg is very active, reviewing across all contributors with pretty
good knowledge and awareness. I'd like to see a little more contextual
awareness though - there are a few (but not many) reviews where looking
more at the big picture of how things fit together would have been
beneficial. *However*, I think that's a room-to-improve issue vs
not-good-enough-for-core - to me it makes sense to propose him for
core too.

Jay's reviews are also very good and consistent, somewhere between
Greg and Ben in terms of bigger-context awareness - so another
definite +1 from me.

-Rob









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-08 Thread Ladislav Smola

+1 to this:

nova:
  config:
    default.compute_manager: ironic.nova.compute.manager.ClusterComputeManager
    cells.driver: nova.cells.rpc_driver.CellsRPCDriver

Adding a generic mechanism like this and having everything configurable
seems like the best option to me.


On 04/08/2014 01:51 AM, Dan Prince wrote:


- Original Message -

From: Robert Collins robe...@robertcollins.net
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Monday, April 7, 2014 4:00:30 PM
Subject: [openstack-dev] [TripleO] config options, defaults, oh my!

So one interesting thing from the influx of new reviews is lots of
patches exposing all the various plumbing bits of OpenStack. This is
good in some ways (yay, we can configure more stuff), but in some ways
it's kind of odd - like, it's not clear when
https://review.openstack.org/#/c/83122/ is needed.

I'm keen to expose things that are really needed, but I'm not sure
that /all/ options are needed - what do folk think?

I think we can learn much from some of the more mature configuration management 
tools in the community on this front. Using puppet as an example here (although 
I'm sure other tools may do similar things as well)... Take configuration of 
the Nova API server. There is a direct configuration parameter for 
'neutron_metadata_proxy_shared_secret' in the Puppet nova::api class. This 
parameter is exposed in the class (sort of the equivalent of a TripleO element) 
directly because it is convenient and many users may want to customize the 
value. There are however hundreds of Nova config options and most of them 
aren't exposed as parameters in the various Nova puppet classes. For these it 
is possible to define a nova_config resource to configure *any* nova.conf 
parameter in an ad hoc style for your own installation tuning purposes.

I could see us using a similar model in TripleO where our elements support configuring 
common config elements directly, but we also allow people to tune extra 
undocumented options for their own use. There is always going to be a need 
for this as people need to tune things for their own installations with options that may 
not be appropriate for the common set of elements.

Standardizing this mechanism across many of the OpenStack service elements 
would also make a lot of sense. Today we have this for Nova:

nova:
  verbose: False
    - Print more verbose output (set logging level to INFO instead of
      default WARNING level).
  debug: False
    - Print debugging output (set logging level to DEBUG instead of
      default WARNING level).
  baremetal:
    pxe_deploy_timeout: 1200
  ...

I could see us adding a generic mechanism like this to overlay with the 
existing (documented) data structure:

nova:
  config:
    default.compute_manager: ironic.nova.compute.manager.ClusterComputeManager
    cells.driver: nova.cells.rpc_driver.CellsRPCDriver

And in this manner a user might be able to add *any* supported config param to 
the element.



Also, some things really should be higher-order operations - like the
Neutron callback to Nova: that should either be set to the timeout in Nova
and configured in Neutron, *or* set on both sides appropriately, never
one half without the other.

I think we need to sort out our approach here to be systematic quite
quickly to deal with these reviews.

I totally agree. I was also planning to email the list about this very
issue this week :) My email subject was going to be "TripleO templates...
an upstream maintenance problem".

For the existing reviews today I think we should be somewhat selective
about what parameters we expose as top level within the elements. That
said, we are missing some rather fundamental features to allow users to
configure undocumented parameters as well. So we need to solve this
problem quickly, because there are certainly some configuration corner
cases that users will need.

As it is today we are missing some rather fundamental features in
os-apply-config and the elements to be able to pull this off. What we
really need is a generic INI-style template generator. Or perhaps we could
use something like Augeas or even devstack's simple INI editing functions
to pull this off. In any case the idea would be that we allow users to
inject their own undocumented config parameters into the various service
config files. Or perhaps we could auto-generate mustache templates based
off of the upstream sample config files. Many approaches would work here
I think...
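
For illustration, here is a minimal sketch of that generic INI-overlay
idea (this is not an existing os-apply-config feature; the function name
and file path are just assumptions for the example):

    # Apply "section.option" style overrides from the element's config
    # map to an INI-format service config file such as nova.conf.
    import ConfigParser

    def apply_overrides(conf_path, overrides):
        parser = ConfigParser.RawConfigParser()
        parser.read(conf_path)
        for dotted_key, value in overrides.items():
            section, option = dotted_key.split('.', 1)
            if section.lower() == 'default':
                section = 'DEFAULT'  # nova.conf keeps most options here
            if section != 'DEFAULT' and not parser.has_section(section):
                parser.add_section(section)
            parser.set(section, option, str(value))
        with open(conf_path, 'w') as f:
            parser.write(f)

    apply_overrides('/etc/nova/nova.conf', {
        'default.compute_manager':
            'ironic.nova.compute.manager.ClusterComputeManager',
        'cells.driver': 'nova.cells.rpc_driver.CellsRPCDriver',
    })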



Here's an attempt to do so - this could become a developer's guide patch.

Config options in TripleO
=========================

Non-API driven configuration falls into four categories:
A - fixed at buildtime (e.g. ld.so path)
B - cluster state derived
C - local machine derived
D - deployer choices

For A, it should be entirely done within the elements concerned.

For B, the heat template should accept parameters to choose the
desired config (e.g. the Neutron-Nova example above) but then 

Re: [openstack-dev] [Tuskar][TripleO] Tuskar Planning for Juno

2014-04-08 Thread Ladislav Smola
Thanks, Mainn, for putting this together. It looks like a fairly precise
list of the things we need to do in J.

On 04/07/2014 03:36 PM, Tzu-Mainn Chen wrote:

Hi all,

One of the topics of discussion during the TripleO midcycle meetup a few weeks
ago was the direction we'd like to take Tuskar during Juno.  Based on the ideas
presented there, we've created a tentative list of items we'd like to address:

https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning

Please feel free to take a look and question, comment, or criticize!


Thanks,
Tzu-Mainn Chen




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-07 Thread Ladislav Smola

On 04/06/2014 11:27 PM, Steve Baker wrote:

On 05/04/14 04:47, Tomas Sedovic wrote:

Hi All,

I was wondering if the time has come to document what exactly we are
doing with tripleo-heat-templates and merge.py[1], figure out what needs
to happen to move away, and raise the necessary blueprints on the Heat
and TripleO side.

(merge.py is a script we use to build the final TripleO Heat templates
from smaller chunks)

There probably isn't an immediate need for us to drop merge.py, but its
existence either indicates deficiencies within Heat or our unfamiliarity
with some of Heat's features (possibly both).

I worry that the longer we stay with merge.py the harder it will be to
move forward. We're still adding new features and fixing bugs in it (at
a slow pace but still).

Below is my understanding of the main merge.py functionality and a rough
plan of what I think might be a good direction to move in. It is almost
certainly incomplete -- please do poke holes in this. I'm hoping we'll
get to a point where everyone's clear on what exactly merge.py does and
why. We can then document that and raise the appropriate blueprints.


## merge.py features ##


1. Merging parameters and resources

Any uniquely-named parameters and resources from multiple templates are
put together into the final template.

If a resource of the same name is in multiple templates, an error is
raised. Unless it's of a whitelisted type (nova server, launch
configuration, etc.) in which case they're all merged into a single
resource.

For example: merge.py overcloud-source.yaml swift-source.yaml

The final template has all the parameters from both. Moreover, these two
resources will be joined together:

 overcloud-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP


 swift-source.yaml 

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Metadata:
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


The final template will contain:

   notCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   ImageId: '0'
   InstanceType: '0'
 Metadata:
   admin-password: {Ref: AdminPassword}
   admin-token: {Ref: AdminToken}
   bootstack:
 public_interface_ip:
   Ref: NeutronPublicInterfaceIP
   swift:
 devices:
   ...
 hash: {Ref: SwiftHashSuffix}
 service-password: {Ref: SwiftPassword}


We use this to keep the templates more manageable (instead of having one
huge file) and also to be able to pick the components we want: instead
of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
uses the VirtualPowerManager driver) or `ironic-vm-source`.
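
To make the duplicate-name rule concrete, here is a rough sketch of the
merge semantics described above (this is not merge.py's actual code, and
the whitelist contents are illustrative):

    MERGABLE_TYPES = ('AWS::AutoScaling::LaunchConfiguration',
                      'AWS::EC2::Instance')  # illustrative whitelist

    def merge_resources(templates):
        merged = {}
        for template in templates:
            for name, resource in template.get('Resources', {}).items():
                if name not in merged:
                    merged[name] = resource
                elif resource.get('Type') in MERGABLE_TYPES:
                    # Same-named whitelisted resources are folded into
                    # one: their Properties and Metadata are unioned.
                    for key in ('Properties', 'Metadata'):
                        merged[name].setdefault(key, {}).update(
                            resource.get(key, {}))
                else:
                    raise ValueError('Duplicate resource: %s' % name)
        return merged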



2. FileInclude

If you have a pseudo-resource with the type `FileInclude`, we will
look at the specified Path and SubKey and put the resulting dictionary in
its place:

 overcloud-source.yaml 

   NovaCompute0Config:
 Type: FileInclude
 Path: nova-compute-instance.yaml
 SubKey: Resources.NovaCompute0Config
 Parameters:
   NeutronNetworkType: gre
   NeutronEnableTunnelling: True


 nova-compute-instance.yaml 

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: {Ref: NeutronNetworkType}
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: {Ref: NeutronEnableTunnelling}
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

The result:

   NovaCompute0Config:
 Type: AWS::AutoScaling::LaunchConfiguration
 Properties:
   InstanceType: '0'
   ImageId: '0'
 Metadata:
   keystone:
 host: {Ref: KeystoneHost}
   neutron:
 host: {Ref: NeutronHost}
   tenant_network_type: gre
   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
   bridge_mappings: {Ref: NeutronBridgeMappings}
   enable_tunneling: True
   physical_bridge: {Ref: NeutronPhysicalBridge}
   public_interface: {Ref: NeutronPublicInterface}
 service-password:
   Ref: NeutronPassword
   admin-password: {Ref: AdminPassword}

Note the `NeutronNetworkType` and `NeutronEnableTunneling` parameter
substitution.
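
And a similarly rough sketch of FileInclude resolution (again a
simplification for illustration, not merge.py's actual implementation):

    import yaml

    def resolve_file_include(include):
        # include = {'Type': 'FileInclude', 'Path': ..., 'SubKey': ...,
        #            'Parameters': {...}}
        with open(include['Path']) as f:
            source = yaml.safe_load(f)
        node = source
        for key in include['SubKey'].split('.'):
            node = node[key]
        return substitute_refs(node, include.get('Parameters', {}))

    def substitute_refs(node, params):
        # Replace {Ref: Name} with the supplied parameter value, if any.
        if isinstance(node, dict):
            if list(node) == ['Ref'] and node['Ref'] in params:
                return params[node['Ref']]
            return dict((k, substitute_refs(v, params))
                        for k, v in node.items())
        if isinstance(node, list):
            return [substitute_refs(v, params) for v in node]
        return node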

This is useful when 

Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Ladislav Smola

+1
On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm keeping
the boilerplate :) - I'm going to talk about stats here, but they are
only part of the picture: folk that aren't really being /felt/ as
effective reviewers won't be asked to take on -core responsibility, and
folk who are less active than needed but still very connected to the
project may still keep it: it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

90 day active-enough stats:

+------------------+-------------------------------------------+----------------+
| Reviewer         | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
+------------------+-------------------------------------------+----------------+
| slagle **        |     655    0  145    7  503  154    77.9% |  36 (  5.5%)   |
| clint-fewbar **  |     549    4  120   11  414  115    77.4% |  32 (  5.8%)   |
| lifeless **      |     518   34  203    2  279  113    54.2% |  21 (  4.1%)   |
| rbrady           |     453    0   14  439    0    0    96.9% |  60 ( 13.2%)   |
| cmsj **          |     322    0   24    1  297  136    92.5% |  22 (  6.8%)   |
| derekh **        |     261    0   50    1  210   90    80.8% |  12 (  4.6%)   |
| dan-prince       |     257    0   67  157   33   16    73.9% |  15 (  5.8%)   |
| jprovazn **      |     190    0   21    2  167   43    88.9% |  13 (  6.8%)   |
| ifarkas **       |     186    0   28   18  140   82    84.9% |   6 (  3.2%)   |
===
| jistr **         |     177    0   31   16  130   28    82.5% |   4 (  2.3%)   |
| ghe.rivero **    |     176    1   21   25  129   55    87.5% |   7 (  4.0%)   |
| lsmola **        |     172    2   12   55  103   63    91.9% |  21 ( 12.2%)   |
| jdob             |     166    0   31  135    0    0    81.3% |   9 (  5.4%)   |
| bnemec           |     138    0   38  100    0    0    72.5% |  17 ( 12.3%)   |
| greghaynes       |     126    0   21  105    0    0    83.3% |  22 ( 17.5%)   |
| dougal           |     125    0   26   99    0    0    79.2% |  13 ( 10.4%)   |
| tzumainn **      |     119    0   30   69   20   17    74.8% |   2 (  1.7%)   |
| rpodolyaka       |     115    0   15  100    0    0    87.0% |  15 ( 13.0%)   |
| ftcjeff          |     103    0    3  100    0    0    97.1% |   9 (  8.7%)   |
| thesheep         |      93    0   26   31   36   21    72.0% |   3 (  3.2%)   |
| pblaho **        |      88    1    8   37   42   22    89.8% |   3 (  3.4%)   |
| jonpaul-sullivan |      80    0   33   47    0    0    58.8% |  17 ( 21.2%)   |
| tomas-8c8 **     |      78    0   15    4   59   27    80.8% |   4 (  5.1%)   |
| marios **        |      75    0    7   53   15   10    90.7% |  14 ( 18.7%)   |
| stevenk          |      75    0   15   60    0    0    80.0% |   9 ( 12.0%)   |
| rwsu             |      74    0    3   71    0    0    95.9% |  11 ( 14.9%)   |
| mkerrin          |      70    0   14   56    0    0    80.0% |  14 ( 20.0%)   |
+------------------+-------------------------------------------+----------------+

The === line is set at the just-voted-on minimum expected of core: 3
reviews per work day, 60 work days in a 90 day period (64 - fudge for
holidays), 180 reviews.
I cut the full report off at the point we had been previously - with
the commitment to 3 reviews per day, next month's report will have a
much higher minimum. In future reviews, we'll set the 

Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-05 Thread Ladislav Smola

On 03/06/2014 04:47 AM, Jason Rist wrote:

On Wed 05 Mar 2014 03:36:22 PM MST, Lyle, David wrote:

I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
very insightful and more importantly have come to rely on their quality. He has 
contributed to several areas in Horizon and he understands the code base well.  
Radomir is also very active in tuskar-ui both contributing and reviewing.

David


As someone who benefits from his insightful reviews, I second the
nomination.


I agree, Radomir has been doing excellent reviews and patches in both 
projects.



--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-28 Thread Ladislav Smola

On 02/27/2014 05:02 PM, Ana Krivokapic wrote:


On 02/27/2014 04:41 PM, Tzu-Mainn Chen wrote:

Hello,

I think if we are going to use an OpenStack CLI, it has to be something
like this: https://github.com/dtroyer/python-oscplugin.
Otherwise we are not OpenStack on OpenStack.

Btw. abstracting it all into one big CLI will just be more confusing when
people debug issues. So it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?
...

Call 'neutron net-create' and you just know.

Btw. who would actually hire a sysadmin who starts using a CLI with no
idea what he is doing? They need to know what each service does, how to
use them correctly, and how to debug them when something is wrong.


For flavors, just use flavors; we call them flavors in the code too. It
just has a nicer face in the UI.

Actually, don't we call them node_profiles in the UI code?


We do: 
https://github.com/openstack/tuskar-ui/tree/master/tuskar_ui/infrastructure/node_profiles

  Personally,
I'd much prefer that we call them flavors in the code.
I agree, keeping the name flavor makes perfect sense here, IMO. The
only benefit of using node profile seems to be that it is more
descriptive. However, as already mentioned, admins are well used to
the name flavor. It seems to me that this change introduces more
confusion than clarity. In other words, it brings more harm than good.




I see, we have introduced an API flavor wrapper:
https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/api.py#L91

Nevertheless, keeping 'flavor' makes sense.



Mainn





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-28 Thread Ladislav Smola

On 02/27/2014 04:30 PM, Dougal Matthews wrote:

On 27/02/14 15:08, Ladislav Smola wrote:

Hello,

I think if we are going to use an OpenStack CLI, it has to be something
like this: https://github.com/dtroyer/python-oscplugin.
Otherwise we are not OpenStack on OpenStack.

Btw. abstracting it all into one big CLI will just be more confusing when
people debug issues. So it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?


I would at least expect the debug log of the tuskar client to show what
calls it's making on other clients, so following this trail wouldn't be
too hard.



Well sure, this is part of "being done very well".

Though a lot of calls kick off asynchronous jobs, which can result in
errors you will just not see when you call the clients.
So you will need to know where to look depending on what is acting weird.

What I am trying to say is that OpenStack is just complex, there is no way
around it; sysadmins just need to understand what they are doing.

If we are going to simplify that, we would need to build something like
we have in the UI: some abstraction layer that guides the user and won't
let him break things. Though this leads to the limited functionality we
are able to control - which I am not entirely convinced is what CLI users
want.







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Ladislav Smola

Hello,

I think if we are going to use an OpenStack CLI, it has to be something
like this: https://github.com/dtroyer/python-oscplugin.
Otherwise we are not OpenStack on OpenStack.

Btw. abstracting it all into one big CLI will just be more confusing when
people debug issues. So it would have to be done very well.

E.g. calling 'openstack-client net-create' fails.
Where do you find the error log?
Are you using nova-networking or Neutron?
...

Call 'neutron net-create' and you just know.

Btw. who would actually hire a sysadmin who starts using a CLI with no
idea what he is doing? They need to know what each service does, how to
use them correctly, and how to debug them when something is wrong.


For flavors, just use flavors; we call them flavors in the code too. It
just has a nicer face in the UI.


Kind regards,
Ladislav


On 02/26/2014 02:34 PM, Jiří Stránský wrote:

Hello,

I went through the CLI way of deploying the overcloud, so if you're
interested in what the workflow is, here it is:

https://gist.github.com/jistr/9228638


I'd say it's still an open question whether we'll want to give a better
UX than that ^^ and at what cost (this is very much tied to the
benefits and drawbacks of various solutions we discussed in December
[1]). All in all it's not as bad as I expected it to be back then [1].
The fact that we keep Tuskar API as a layer in front of Heat means
that the CLI user doesn't care about calling merge.py and creating the
Heat stack manually, which is great.


In general the CLI workflow is on the same conceptual level as Tuskar
UI, so that's fine; we just need to use more commands than just tuskar.


There's one naming mismatch though -- Tuskar UI doesn't use Horizon's 
Flavor management, but implements its own and calls it Node Profiles. 
I'm a bit hesitant to do the same thing on CLI -- the most obvious 
option would be to make python-tuskarclient depend on 
python-novaclient and use a renamed Flavor management CLI. But that's 
wrong and high cost given that it's only about naming :)


The above issue is once again a manifestation of the fact that Tuskar
UI, despite its name, is not a UI for Tuskar alone; it is a UI for a few
more services. If this becomes a greater problem, or if we want a
top-notch CLI experience despite reimplementing bits that can already be
done (just not in a super-friendly way), we could start thinking about
building something like the OpenStackClient CLI [2], but directed
specifically at Undercloud/Tuskar needs and using undercloud naming.


Another option would be to get Tuskar UI a bit closer back to the fact 
that Undercloud is OpenStack too, and keep the name Flavors instead 
of changing it to Node Profiles. I wonder if that would be unwelcome 
to the Tuskar UI UX, though.



Jirka


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021919.html

[2] https://wiki.openstack.org/wiki/OpenStackClient




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-24 Thread Ladislav Smola

On 02/23/2014 01:16 AM, Clint Byrum wrote:

Excerpts from Imre Farkas's message of 2014-02-20 15:24:17 +:

On 02/20/2014 03:57 PM, Tomas Sedovic wrote:

On 20/02/14 15:41, Radomir Dopieralski wrote:

On 20/02/14 15:00, Tomas Sedovic wrote:


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.

How are you going to redeploy without them?


What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these has different password workflow requirements.

I am not sure if all these use cases have different password
requirements. If you check devtest, no matter whether you are creating or
just updating your overcloud, all the parameters have to be provided to
the heat template:
https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125

I would rather not require the user to enter 5/10/15 different passwords
every time Tuskar updates the stack. I think it's much better to
autogenerate the passwords the first time, provide an option to
override them, then save and encrypt them in Tuskar. So +1 for designing
a proper system for storing the passwords.

Tuskar does not need to reinvent this.

Use OS::Heat::RandomString in the templates.

If any of them need to be exposed to the user, use an output on the
template.

If they need to be regenerated, you can pass a salt parameter.
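
For illustration, a template snippet along the lines Clint describes might
look roughly like this, shown here as the equivalent Python data structure
for consistency with the other sketches in this digest (the resource,
parameter and output names are just assumptions for the example):

    password_snippet = {
        'parameters': {
            # Changing the salt makes Heat generate a new random value.
            'PasswordSalt': {'type': 'string', 'default': ''},
        },
        'resources': {
            'AdminPassword': {
                'type': 'OS::Heat::RandomString',
                'properties': {'length': 32,
                               'salt': {'get_param': 'PasswordSalt'}},
            },
        },
        'outputs': {
            # Expose the generated value to the user as a stack output.
            'admin_password': {
                'value': {'get_attr': ['AdminPassword', 'value']},
            },
        },
    }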


Do we actually need to expose anything other than the AdminPassword to
the user?

We are using tripleo-heat-templates currently, so we will need to make
the change there.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Ladislav Smola

Hello,

I would like to have your opinion on how to deal with passwords in
Tuskar-API.


The background is that Tuskar-API is storing heat template parameters in
its database, as preparation for more complex workflows, where we will
need to store the data before the actual heat stack-create.

So right now, the state is unacceptable: we are storing sensitive
data (all the heat passwords and keys) in raw form in the Tuskar-API
database. That is wrong, right?

So, is anybody aware of reasons why we would need to store the passwords?
Storing them for a short time (rather, in a session) should be fine, so
we can use them for a later init of the stack.

Do we need to store them for heat stack-update? Because heat throws them
away.

If yes, this bug should change to encrypting all the sensitive data,
right? It might be just me, but dealing with sensitive data like this is
the 8th deadly sin.

The second thing is, if users update their passwords, the info in
Tuskar-API will be obsolete and can't be used anyway.

There is a bug filled for it:
https://bugs.launchpad.net/tuskar/+bug/1282066

Thanks for the feedback; it seems the bug is not as straightforward as
I thought.


Kind Regards,
Ladislav
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Ladislav Smola

On 02/19/2014 08:05 PM, Dougal Matthews wrote:

On 19/02/14 18:49, Hugh O. Brock wrote:

On Wed, Feb 19, 2014 at 06:31:47PM +, Dougal Matthews wrote:

On 19/02/14 18:29, Jason Rist wrote:

Would it be possible to create some token for use throughout? Forgive
my naivete.


I don't think so, the token would need to be understood by all the
services that we store passwords for. I may be misunderstanding 
however.



Hmm... isn't this approximately what Keystone does? Accept a password
once from the user and then return session tokens?


Right - but I think the heat template expects passwords, not tokens. I
don't know how easily we can change that.



We most probably can't. Most of the passwords are sent to keystone to
set up services, etc.







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-19 Thread Ladislav Smola

On 02/19/2014 06:29 PM, Dougal Matthews wrote:

On 19/02/14 17:10, Ladislav Smola wrote:

Hello,

I would like to have your opinion on how to deal with passwords in
Tuskar-API.

The background is that Tuskar-API is storing heat template parameters in
its database, as preparation for more complex workflows, where we will
need to store the data before the actual heat stack-create.

So right now, the state is unacceptable: we are storing sensitive
data (all the heat passwords and keys) in raw form in the Tuskar-API
database. That is wrong, right?


I agree, this situation needs to change.

I'm +1 for not storing the passwords if we can avoid it. This would 
apply to all situations and not just Tuskar.


The question for me is: what passwords will we have, and when do we
need them? Are any of the passwords required long-term?




The only password I know we need right now is the AdminPassword, which
will be used for the first sign-in to the overcloud Horizon and e.g. the
CLI. But we should not store that, just display it at some point.

If we do need to store passwords it becomes a somewhat thorny issue:
how does Tuskar know what a password is? If this is flagged up by the
UI/client then we are relying on the user to tell us, which isn't wise.


This is set at the template level by the NoEcho attribute. We are already
using that information.







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-04 Thread Ladislav Smola

Hello,

+1 to "let's work towards having a single Node Profile (flavor)
associated with each Deployment Role (pages 12 & 13 of the latest
mockups[1])".


Good start.

We could also have more flavors per role now; the user would just have to
be advised: "You are using one image for multiple kinds of hardware, so
be sure it's compatible." So it probably makes sense to limit it to one
flavor per role for now.


Regards
Ladislav


On 02/03/2014 12:23 PM, Tomas Sedovic wrote:

My apologies for firing this off and then hiding under the FOSDEM rock.

In light of the points raised by Devananda and Robert, I no longer 
think fiddling with the scheduler is the way to go.


Note this was never intended to break/confuse all TripleO users -- I 
considered it a cleaner equivalent to entering incorrect HW specs 
(i.e. instead of doing that you would switch to this other filter in 
nova.conf).


Regardless, I stand corrected on the distinction between heterogeneous 
hardware all the way and having a flavour per service definition. That 
was a very good point to raise.


I'm fine with both approaches.

So yeah, let's work towards having a single Node Profile (flavor)
associated with each Deployment Role (pages 12 & 13 of the latest
mockups[1]), optionally starting with requiring all the Node Profiles
to be equal.


Once that's working fine, we can look into the harder case of having 
multiple Node Profiles within a Deployment Role.


Is everyone comfortable with that?

Tomas

[1]: 
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-27_tripleo-ui-icehouse.pdf


On 03/02/14 00:21, Robert Collins wrote:

On 3 February 2014 08:45, Jaromir Coufal jcou...@redhat.com wrote:





However, taking a step back, maybe the real answer is:

a) homogeneous nodes
b) document...
   - **unsupported** means of demoing Tuskar (set node attributes to
     match flavors, hack the scheduler, etc)


Why are people calling it 'hack'? It's an additional filter to
nova-scheduler...?


It doesn't properly support the use case; it's extra code to write,
test and configure that is precisely equivalent to mis-registering
nodes.


 - our goals of supporting heterogeneous nodes for the J-release.


I wouldn't talk about J-release. I would talk about 'next iteration' or
'next step'. Nobody said that we are not able to make it in I-release.


+1




Does this seem reasonable to everyone?

Mainn



Well +1 for a) and its documentation.

However, Robert and I seem to have different opinions on what
'homogeneous' means in our context. I think we should clarify that.


So I think my point is more this:
  - either this iteration is entirely limited to homogeneous hardware,
in which case, document it, not workarounds or custom schedulers etc.
  - or it isn't limited, in which case we should consider the options:
- flavor per service definition
- custom scheduler
- register nodes wrongly

-Rob




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogeneous hardware support

2014-01-30 Thread Ladislav Smola

On 01/30/2014 12:39 PM, Jiří Stránský wrote:

On 01/30/2014 11:26 AM, Tomas Sedovic wrote:

1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't
do a strict match on the hardware in Ironic. E.g. if our baremetal
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
ram or 1.5TB disk.

The UI would still assume homogeneous hardware and treat it as such. It's
just that we would allow for small differences.

This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
when the flavour says 32. We would treat the flavour as a lowest common
denominator.

Nor is this an alternative to full heterogeneous hardware support. We
need to do that eventually anyway. This is just to make the first MVP
useful to more people.

It's an incremental step that would affect neither point 1. (strict
homogeneous hardware) nor point 2. (full heterogeneous hardware support).

If some of these assumptions are incorrect, please let me know. I don't
think this is an insane U-turn from anything we've already agreed to do,
but it seems to confuse people.
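
For concreteness, such a non-strict filter could be sketched along
these lines (illustrative only; it assumes nova's BaseHostFilter
interface, and the HostState attribute names may differ between
releases):

    from nova.scheduler import filters

    class LooseBaremetalFilter(filters.BaseHostFilter):
        """Pass hosts that meet or exceed the flavor, instead of
        requiring an exact hardware match."""

        def host_passes(self, host_state, filter_properties):
            flavor = filter_properties.get('instance_type') or {}
            return (host_state.total_usable_ram_mb >=
                    flavor.get('memory_mb', 0) and
                    host_state.total_usable_disk_gb >=
                    flavor.get('root_gb', 0))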


I think having this would allow users with almost-homogeneous hardware 
to use TripleO. If someone already has precisely homogeneous hardware, 
they won't notice a difference.


So i'm +1 for this idea. The condition should be that it's easy to 
implement, because imho it's something that will get dropped when 
support for fully heterogeneous hardware is added.


Jirka



Hello,

I am for implementing support for heterogeneous hardware properly; 
lifeless should post what he recommends soon, so I would rather discuss 
that. We should be able to do a simple version in I.


A lowest common denominator doesn't solve storage vs. compute nodes. If 
we really have similar hardware whose differences we don't care about, 
we can just fill in the nova-baremetal/Ironic specs the same as the 
flavor. Why would we want to see in the UI that the hardware is 
different, when we can't really determine what goes where? And as you 
say, 'assume homogeneous hardware and treat it as such', so showing in 
the UI that the hardware is different doesn't make any sense then.

So the solution for similar hardware is already there.

I don't see this as an incremental step, but as an ugly hack that is 
not placed anywhere on the roadmap.


Regards,
Ladislav


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Ceilometer] [TripleO] adding process/service monitoring

2014-01-28 Thread Ladislav Smola

Hello,

excellent, this is exactly what we need in Tuskar. :-)

It might be good to monitor it via SNMPD, as this daemon will already
be running on each node. And as far as I can see it should be possible,
though not very popular.

Then it would be nice to have the data stored in Ceilometer, as it
provides a generic backend for storing samples and querying them (it
would be nice to have a history of those samples). It should be enough
to send it in the correct format to the notification bus and Ceilometer
will store it.

For now, Tuskar would just grab it from Ceilometer.

The problem here is that every node can have different services
running, so you would have to write some smart inspector that would
know what is running where. We have been talking about exposing this
kind of information in Glance, so it would return the list of services
for an image. Then you would get the list of nodes for each image and
poll them via SNMP. This could probably be an inspector of the central
agent, the same approach as for getting the baremetal metrics.
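
Roughly, the mapping step could look like the following sketch (all
names are hypothetical; in particular it assumes a 'services' property
on the Glance image, which does not exist today):

    # Hypothetical central-agent inspector step: decide which services
    # to poll on which node, based on the image each instance runs.
    def services_per_node(glance, nova):
        mapping = {}
        for server in nova.servers.list():
            image = glance.images.get(server.image['id'])
            services = image.properties.get('services', '')
            mapping[server.name] = [s for s in services.split(',') if s]
        return mapping
    # e.g. {'controller-0': ['nova-api', 'glance-api']}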

Does it sound reasonable? Or you see some critical flaws in this 
approach? :-)


Kind Regards,
Ladislav



On 01/28/2014 02:59 AM, Richard Su wrote:

Hi,

I have been looking into how to add process/service monitoring to
tripleo. Here I want to be able to detect when an openstack dependent
component that is deployed on an instance has failed. And when a failure
has occurred I want to be notified and eventually see it in Tuskar.

Ceilometer doesn't handle this particular use case today. So I have been
doing some research and there are many options out there that provides
process checks: nagios, sensu, zabbix, and monit. I am a bit wary of
pulling one of these options into tripleo. There is some increased
operational and maintenance costs when pulling in each of them. And
physical device monitoring is currently in the works for Ceilometer
lessening the need for some of the other abilities that an another
monitoring tool would provide.

For the particular use case of monitoring processes/services, at a high
level, I am considering writing a simple daemon to perform the check.
Checks and failures are written out as messages to the notification bus.
Interested parties like Tuskar or Ceilometer can subscribe to these
messages.
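
For what it's worth, a minimal sketch of such a daemon, assuming
oslo.messaging for the bus and psutil for the process checks (the
event names and the watched list below are purely illustrative):

    import socket
    import time

    import psutil
    from oslo.config import cfg
    from oslo import messaging

    def is_running(name):
        # psutil exposes name as an attribute (1.x) or a method (2.x)
        for proc in psutil.process_iter():
            try:
                pname = proc.name() if callable(proc.name) else proc.name
            except psutil.NoSuchProcess:
                continue
            if pname == name:
                return True
        return False

    def main():
        transport = messaging.get_transport(cfg.CONF)
        notifier = messaging.Notifier(
            transport, driver='messaging', topic='notifications',
            publisher_id='svc-monitor.%s' % socket.gethostname())
        watched = ['nova-compute', 'neutron-l3-agent']  # illustrative
        while True:
            for name in watched:
                ok = is_running(name)
                notifier.info({},
                              'service.check.ok' if ok
                              else 'service.check.failed',
                              {'service': name, 'running': ok,
                               'host': socket.gethostname()})
            time.sleep(60)

    if __name__ == '__main__':
        main()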

In general does this sound like a reasonable approach?

There is also the question of how to configure or figure out which
processes we are interested in monitoring. I need to do more research
here but I'm considering either looking at the elements listed by
diskimage-builder or by looking at the orc post-configure.d scripts to
find service that are restarted.

I welcome your feedback and suggestions.

- Richard Su

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [horizon] Blueprint decrypt-and-display-vm-generated-password

2014-01-23 Thread Ladislav Smola

Hello,

seems like there is no pushback against decrypting in JavaScript. So 
unless somebody has something against it, I guess it is fine.

I would say it may be best to have both client-side decryption using JS 
and server-side using Nova. We can let the user decide which to use.


Ladislav


On 01/17/2014 08:05 PM, Alessandro Pilotti wrote:

+1

Nova's get-password is currently the only safe way from a security perspective 
to handle guest passwords.

This feature needs to be mirrored in Horizon, otherwise most users will 
continue to resort to unsafe solutions like the clear text admin_pass due to 
lack of practical alternatives.

The design and implementation proposed by Ala is IMO a good one. It provides a 
UX quite similar to what other cloud environments like AWS offer with the 
additional bonus of keeping any sensitive data on the client side.

Alessandro

P.S.: Sorry for replying only now, I somehow skipped the original email in the 
ML!




On 17 Dec 2013, at 13:58 , Ala Rezmerita ala.rezmer...@cloudwatt.com wrote:


Hi all,

I would like to get your opinion/feedback about the implementation of the blueprint 
Decrypt and display VM generated password[1]

Our use case is primarily targeting Windows instances with cloudbase-init, but 
the functionality can also be used on Linux instances.
The general idea of this blueprint is to give the user the ability to retrieve, 
through Horizon, administrative password for his Windows session.

There are two ways for the user to set/get his password on cloudbase-init 
Windows instances:
- The user sets the desired password as an admin_pass key/value in the 
metadata of the new server. Example: https://gist.github.com/arezmerita/8001673. 
In this case the password is visible in the instance description, in the 
metadata section.
- The user does not set his password. In this case cloudbase-init will 
generate a random password, encrypt it with the user-provided public key, 
and send the result to the metadata server. The only way to get the clear 
password is to use the API/nova client and provide the private key. 
Example:  nova get-password  . The novaclient will retrieve the encrypted 
password from Nova and use the private key locally to decrypt the password 
(see the sketch below).
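
For reference, a Python stand-in for what JSEncrypt does in the
browser. This is only a conceptual sketch (PyCrypto, PKCS#1 v1.5; the
key path and ciphertext are placeholders); the point is that the
private key is used locally and never sent anywhere:

    import base64
    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_v1_5

    with open('/home/demo/.ssh/id_rsa') as f:  # placeholder key path
        key = RSA.importKey(f.read())

    encrypted_b64 = '...'  # the encrypted password from the metadata service
    cipher = PKCS1_v1_5.new(key)
    # The second argument is what is returned on decryption failure.
    password = cipher.decrypt(base64.b64decode(encrypted_b64), None)
    print(password)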

Now about our blueprint implementation:
- We add a new action 'Retrieve password' on an instance, which shows a form 
displaying the key pair name used to boot the instance and the encrypted 
password. The user can provide their private key, which will be used ONLY on 
the client side for password decryption using the JSEncrypt library[2].
- We chose not to send the private key over the network (for decryption on 
the server side), because we consider that the user should not be forced to 
share this information with the cloud operator.
Some may argue that the connection is protected and we are already passing 
sensitive data over the network. However, OpenStack user passwords/tokens are 
OpenStack-sensitive data; they are related to the OpenStack user. The user's 
private key, on the other hand, is something personal to the user, not 
OpenStack-related.

What do you think?

Note: On the whiteboard of the blueprint[1] I provided two demos and some 
instructions of how to test this functionality with Linux instances.

Thanks,

References:
[1] 
https://blueprints.launchpad.net/horizon/+spec/decrypt-and-display-vm-generated-password
[2] JSEncrypt library http://travistidwell.com/jsencrypt/


Ala Rezmerita
Software Engineer || Cloudwatt
M: (+33) 06 77 43 23 91
Immeuble Etik
892 rue Yves Kermen
92100 Boulogne-Billancourt – France


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-15 Thread Ladislav Smola

On 01/15/2014 10:53 AM, Jaromir Coufal wrote:

On 2014/13/01 13:15, Ladislav Smola wrote:

The usage of roles is a new metric which doesn't exist. It is the most
consumed HW resource (which means if CPU is consumed by 60 % and RAM
or disk are less, then the role usage is 60 %). It would be great to
have such a metric from Ceilometer. However, I don't know how much
support they will give us. We can get partial metrics (CPU, RAM, disk)
from Ceilometer, but the final role usage is questionable.


We will be able to get the averages of 3 meters in one query, so we
should be able to easily determine which metrics we want to show. As I
am thinking about it, I would like to show which metric we are showing
anyway, because naming 'capacity' as max(CPU, RAM, Disk) might be
confusing.
Those 3 meters are correct. And exposing those 3 metrics on one 
overview page for a quick glance is overwhelming for the user. What is 
important for him is to see how many resources are left for his 
deployment/role. Showing him 3 graphs instead of one is too much 
information if he doesn't want to look at detailed data.


That's why I think that 'role capacity' is important to show. And not 
just actual consumption but also historical data.




Sure, but I would still show capacity(CPU) and capacity(memory), and 
later allow clicking on them to show all meters. Otherwise people will 
ask: what is the capacity in this case? Just saying.


- When a role is edited, if it has existing nodes deployed with the 
old

version, are the automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?

I would expect any Role change to be applied immediately. If there is
some change where I want to keep older nodes how they are set up and
apply new settings only to newly added nodes, I would create a new Role
then.




Hmm, I would rather see a preview page, because it's quite a dangerous
operation. Though that's future talk.
Well, yes. But what we are talking about at the moment is the Icehouse 
timeframe, and we don't have anywhere to store the information about 
changes at the moment. And it would be a pretty expensive operation. So 
that's why I said: focus on allowing changes, applied immediately. Of 
course, the direction should go the way of previewing all the changes 
and applying them all together.




ok




'If there is some change where I want to keep older nodes how they are
set up and apply new settings only to newly added nodes' should never
be possible. All nodes under the Role have to be the same.
That is incorrect. Nodes can be heterogeneous. You can have multiple 
node profiles, and you can have a higher HW spec than the spec of the 
node profile (flavor), so that node can be chosen as well, etc.




I believe TripleO counts on one baremetal flavor for I, so that would 
mean homogeneous hardware. As for 'all nodes under the Role have to be 
the same': within one group they will use the same image and will be 
the same.




I believe Jay was asking about the preview page. So if it won't be
immediately updated, you would store what you want to update. Then you
could even see it all summarized on a preview page before you hit 
'update'.
Nope, I believe it was about keeping nodes with different settings in 
the role. I hope I answered the question before.




So, I am not really sure what you are talking about. :-) The stack 
will always be updated according to the template.





A related question is: when we send a heat change, are the nodes ready
for use immediately once each node is provisioned? Or, when a node is
provisioned, does it wait for the heat template to finish, so they all
go into operation together?



I would say that it depends on the node. E.g. once a compute node is
registered to the overcloud nova scheduler, you can start to use it. So
it should be similar for others. This applies only to stack-update.
I am asking in general. When I am doing my first deployment, do the 
nodes come up ready one by one, or all at the same time once the 
process is finished?

When changing the heat template and the number of nodes, do the nodes 
come up one by one as they are ready? Or do they appear ready only 
after stack-update is finished?


(talking about all roles, not just compute)


When you do stack-create, you need to do initialization afterwards
(step 8 of
http://docs.openstack.org/developer/tripleo-incubator/devtest_undercloud.html),
so you can't use the overcloud until that is done.

With heat stack-update, you should be able to use nodes as they are 
being registered to the overcloud controllers. I believe there is no 
final step that would prevent it from working.





-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Ladislav Smola

Hello,

some answers below:

On 01/10/2014 05:18 PM, Jay Dobies wrote:

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will 
Ceilometer provide the averages for the role or is that calculated by 
Tuskar?


Definitely Ceilometer, though right now it can't do this aggregate 
query. It is in progress, though, and 'should land' early in I3. I am 
not sure whether the backup plan should be computing it in Tuskar.


- When on the change deployments screen, after making a change but not 
yet applying it, how are the projected capacity changes calculated?


I believe we wanted a simple algorithm that assumes that adding a new 
node spreads the load between the nodes equally. Though this depends on 
how the overcloud is set up: it would have to be set up so that after 
adding new hardware, the overcloud would migrate VMs across the nodes 
equally (which is not always achievable).

So I am not really sure about this part.
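
Under that even-spread assumption, the projection itself would be
trivial; a sketch:

    # Naive projection (assumption from this thread: the same total
    # load spreads evenly over the enlarged node set).
    def projected_usage(avg_usage_pct, current_nodes, added_nodes):
        total_load = avg_usage_pct * current_nodes
        return total_load / float(current_nodes + added_nodes)

    # e.g. a role at 60% average over 10 nodes, scaled out by 5 nodes:
    # projected_usage(60.0, 10, 5) -> 40.0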



- For editing a role, does it make a new image with the changes to 
what services are deployed each time it's saved?


I would say no; the image should be created during heat stack 
update/create, right? So we would need to track that it has been 
changed but not yet deployed.




- When a role is edited, if it has existing nodes deployed with the 
old version, are the automatically/immediately updated? If not, how do 
we reflect that there's a difference between how the role is currently 
configured and the nodes that were previously created from it?




We will probably have to store image metadata in Tuskar that would map 
to Glance once the image is generated. I would say we need to store the 
list of the elements and probably their commit hashes (because elements 
can change). It should also be versioned, as the images in Glance will 
be versioned too. We probably can't store it in Glance, because we will 
first store the metadata and only then generate the image. Right?

Then we could see whether an image was created from the metadata and 
whether that image was used in the heat template. With versions we 
could also see what has changed.
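
Something like the following shape (purely hypothetical field names) is
what I have in mind:

    # Hypothetical Tuskar-side record for a built image.
    image_metadata = {
        'name': 'overcloud-compute',
        'version': 3,
        'elements': [
            {'name': 'nova-compute', 'commit': 'a1b2c3d'},
            {'name': 'ntp', 'commit': 'd4e5f6a'},
        ],
        'glance_image_id': None,  # filled in once the image is built
    }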


But there was also the idea that there will be some generic image 
containing all services, and we would just configure which services to 
start. In that case we would need to version this as well.


- I don't see any indication that the role scaling process is taking 
place. That's a potentially medium/long running operation, we should 
have some sort of way to inform the user it's running and if any 
errors took place.


That last point is a bit of a concern for me. I like the simplicity of 
what the UI presents, but the nature of what we're doing doesn't 
really fit with that. I can click the count button to add 20 nodes in 
a few seconds, but the execution of that is a long running, 
asynchronous operation. We have no means of reflecting that it's 
running, nor finding any feedback on it as it runs or completes.


Related question. If I have 20 instances and I press the button to 
scale it out to 50, if I immediately return to the My Deployment 
screen what do I see? 20, 50, or the current count as they are stood up?



Agreed, this is missing. I think right now we are able to show that the 
stack is being deployed and how many nodes are already deployed. A page 
like this will be quite important for I.





It could all be written off as a future feature, but I think we should 
at least start to account for it in the wireframes. The initial user 
experience could be off-putting if it's hard to discern the difference 
between what I told the UI to do and when it's actually finished being 
done.


It's also likely to influence the ultimate design as we figure out who 
keeps track of the running operations and their results (for both 
simple display purposes to the user and auditing reasons).



On 01/10/2014 09:58 AM, Jaromir Coufal wrote:

Hi everybody,

there is a first stab at the Deployment Management section with future
direction (note that it was discussed as the scope for Icehouse).

I tried to add functionality over time and break it down into steps. This
will help us focus on one piece of functionality at a time, and if we are
under time pressure for the Icehouse release, we can cut off the last steps.

Wireframes:
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-10_tripleo-ui_deployment-management.pdf 




Recording of walkthrough:
https://www.youtube.com/watch?v=9ROxyc85IyE

We are about to start with the first step as soon as possible, so please
focus on our initial steps the most (which doesn't mean that we should
neglect the direction).

Every feedback is very welcome, thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Ladislav Smola

Hi,

some answers inline:

On 01/13/2014 10:47 AM, Jaromir Coufal wrote:

Hi Jay,

thanks for your questions, they are great. I am going to answer inline:

On 2014/10/01 17:18, Jay Dobies wrote:

Thanks for recording this. A few questions:

- I'm guessing the capacity metrics will come from Ceilometer. Will
Ceilometer provide the averages for the role or is that calculated by
Tuskar?
The usage of roles is a new metric which doesn't exist. It is the most 
consumed HW resource (which means if CPU is consumed by 60 % and RAM 
or disk are less, then the role usage is 60 %). It would be great to 
have such a metric from Ceilometer. However, I don't know how much 
support they will give us. We can get partial metrics (CPU, RAM, disk) 
from Ceilometer, but the final role usage is questionable.


We will be able to get the averages of 3 meters in one query, so we 
should be able to easily determine which metrics we want to show. As I 
am thinking about it, I would like to show which metric we are showing 
anyway, because naming 'capacity' as max(CPU, RAM, Disk) might be 
confusing.
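
The 'role usage' definition quoted above is at least easy to pin down
in code:

    # Role usage as defined in this thread: the role is as consumed
    # as its most consumed resource.
    def role_usage(cpu_pct, ram_pct, disk_pct):
        return max(cpu_pct, ram_pct, disk_pct)

    # role_usage(60, 40, 20) -> 60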





- When on the change deployments screen, after making a change but not
yet applying it, how are the projected capacity changes calculated?
At the moment I am working with a one-time change. Which means it 
appears in a modal window, and if I don't apply the change, it gets 
canceled. So we don't have to store 'change' values anyhow, at least 
for now.



- For editing a role, does it make a new image with the changes to what
services are deployed each time it's saved?
So, there are two things. One is the provisioning image: we are not 
dealing with an image builder at the moment, so the image already 
contains services, which we should be able to discover (i.e. which 
OpenStack services are included there). And then you go to the services 
tab and enable/disable which services are provided within a role, plus 
their configuration.


I would expect that each time I change some Role settings, they get 
applied (which might mean re-provisioning nodes if needed). However, I 
think that is only the case when you change the provisioning image.



- When a role is edited, if it has existing nodes deployed with the old
version, are the automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?
I would expect any Role change to be applied immediately. If there is 
some change where I want to keep older nodes how they are set up and 
apply new settings only to newly added nodes, I would create a new Role 
then.




Hmm, I would rather see a preview page, because it's quite a dangerous 
operation. Though that's future talk.


'If there is some change where I want to keep older nodes how they are 
set up and apply new settings only to newly added nodes' should never 
be possible. All nodes under the Role have to be the same.


I believe Jay was asking about the preview page. So if it won't be 
immediately updated, you would store what you want to update. Then you 
could even see it all summarized on a preview page before you hit 'update'.



- I don't see any indication that the role scaling process is taking
place. That's a potentially medium/long running operation, we should
have some sort of way to inform the user it's running and if any errors
took place.
That's correct, I didn't provide that view yet. I was focusing more on 
the views with settings and config than on the flow. But I will add 
this view as well. I completely agree it is needed.



That last point is a bit of a concern for me. I like the simplicity of
what the UI presents, but the nature of what we're doing doesn't really
fit with that. I can click the count button to add 20 nodes in a few
seconds, but the execution of that is a long running, asynchronous
operation. We have no means of reflecting that it's running, nor finding
any feedback on it as it runs or completes.

As I mentioned above, yeah, you are right here. I will reflect that.


Related question. If I have 20 instances and I press the button to scale
it out to 50, if I immediately return to the My Deployment screen what
do I see? 20, 50, or the current count as they are stood up?

I'll try to send the screen soon.

A related question is: when we send a heat change, are the nodes ready 
for use immediately once each node is provisioned? Or, when a node is 
provisioned, does it wait for the heat template to finish, so they all 
go into operation together?




I would say that it depends on the node. E.g. once a compute node is 
registered to the overcloud nova scheduler, you can start to use it. So 
it should be similar for others. This applies only to stack-update.



It could all be written off as a future feature, but I think we should
at least start to account for it in the wireframes. The initial user
experience could be off-putting if it's hard to discern the difference
between what I told the UI to do and when it's actually finished 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2014-01-06 Thread Ladislav Smola

On 12/20/2013 05:51 PM, Clint Byrum wrote:

Excerpts from Ladislav Smola's message of 2013-12-20 05:48:40 -0800:

On 12/20/2013 02:37 PM, Imre Farkas wrote:

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
This is locked in the process of the operation, so nobody can mess with
it while it is updating or creating.
Once we will pack all operations that are now aside in this, we should
be alright. And that should be doable in I.
So we should push towards this, rather then building some temporary
locking solution in Tuskar-API.

It's not the issue of locking, but the goal of Tuskar with the
Provision button is not only a single stack creation. After Heat's job
is done, the overcloud needs to be properly configured: Keystone needs
to be initialized, the services need to be registered, etc. I don't
think Horizon wants to add a background worker to handle such operations.


Yes, that is a valid point. I hope we will be able to pack it all into
the Heat template in I. This could be the way:
https://blueprints.launchpad.net/heat/+spec/hot-software-config

Seems like the consensus is: it belongs to Heat. We are just not able
to do it that way now.

So there is a question whether we should try to solve it in Tuskar-API
temporarily, or rather focus on Heat.


Interestingly enough, what Imre has just mentioned isn't necessarily
covered by hot-software-config. That blueprint is specifically about
configuring machines, but not APIs.

I think we actually need multi-cloud to support what Imre is talking
about. These are API operations that need to follow the entire stack
bring-up, but happen in a different cloud (the new one).

Assuming single servers instead of loadbalancers and stuff for simplicity:


resources:
  keystone:
    type: OS::Nova::Server
  glance:
    type: OS::Nova::Server
  nova:
    type: OS::Nova::Server
  cloud-setup:
    type: OS::Heat::Stack
    properties:
      cloud-endpoint: str_join [ 'https://', get_attribute [ 'keystone', 'first_ip' ], ':35357/' ]
      cloud-credentials: get_parameter ['something']
      template:
        keystone-catalog:
          type: OS::Keystone::Catalog
          properties:
            endpoints:
              - type: Compute
                publicUrl: str_join [ 'https://', get_attribute [ 'nova', 'first_ip' ], ':8447/' ]
              - type: Image
                publicUrl: str_join [ 'https://', get_attribute [ 'glance', 'first_ip' ], ':12345/' ]

What I mean is, you want the Heat stack to be done not when the hardware
is up, but when the APIs have been orchestrated.



Thanks for pointing that out; we should discuss it with the Heat guys.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-20 Thread Ladislav Smola

and +1 also from me :-)

Seems like this is the way we want to go. So, what will be the next 
steps? It seems this has to be done through cooperation of the PTLs of 
Horizon and TripleO, and probably ttx?


Thank you,
Ladislav


On 12/19/2013 05:29 PM, Lyle, David wrote:

So after a lot of consideration, my opinion is the two code bases should stay 
in separate repos under the Horizon Program, for a few reasons:
-Adding a large chunk of code for an incubated project is likely going to cause 
the Horizon delivery some grief due to dependencies and packaging issues at the 
distro level.
-The code in Tuskar-UI is currently in a large state of flux/rework.  The 
Tuskar-UI code needs to be able to move quickly and at times drastically; this 
could be detrimental to the stability of Horizon.  And conversely, the 
stability needs of Horizon can be detrimental to the speed at which Tuskar-UI 
can change.
-Horizon Core can review changes in the Tuskar-UI code base and provide 
feedback without the code needing to be integrated in Horizon proper.  
Obviously, with an eye to the code bases merging in the long run.

As far as core group organization, I think the current Tuskar-UI core should 
maintain their +2 for only Tuskar-UI.  Individuals who make significant review 
contributions to Horizon will certainly be considered for Horizon core in time. 
 I agree with Gabriel's suggestion of adding Horizon Core to tuskar-UI core.  
The idea being that Horizon core is looking for compatibility with Horizon 
initially and working toward a deeper understanding of the Tuskar-UI code base. 
 This will help ensure the integration process goes as smoothly as possible 
when Tuskar/TripleO comes out of incubation.

I look forward to being able to merge the two code bases, but I don't think the 
time is right yet and Horizon should stick to only integrating code into 
OpenStack Dashboard that is out of incubation.  We've made exceptions in the 
past, and they tend to have unfortunate consequences.

-David



-Original Message-
From: Jiri Tomasek [mailto:jtoma...@redhat.com]
Sent: Thursday, December 19, 2013 4:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

On 12/19/2013 08:58 AM, Matthias Runge wrote:

On 12/18/2013 10:33 PM, Gabriel Hurley wrote:


Adding developers to Horizon Core just for the purpose of reviewing
an incubated umbrella project is not the right way to do things at
all.  If my proposal of two separate groups having the +2 power in
Gerrit isn't technically feasible then a new group should be created
for management of umbrella projects.

Yes, I totally agree.

Having two separate projects with separate cores should be possible
under the umbrella of a program.

Tuskar differs somewhat from other projects to be included in horizon,
because other projects contributed a view on their specific feature.
Tuskar provides an additional dashboard and is talking with several apis
below. It's a something like a separate dashboard to be merged here.

When having both under the horizon program umbrella, my concern is that
both projects wouldn't be coupled as tightly as I would like.

Esp. I'd love to see an automatic merge of horizon commits into a
(combined) tuskar and horizon repository, thus making sure tuskar will
work in a fresh (updated) horizon environment.

Please correct me if I am wrong, but I think this is not an issue.
Currently Tuskar-UI is run from a Horizon fork: in the local Horizon
fork we create a symlink to the local tuskar-ui clone, and to run
Horizon with Tuskar-UI we simply start the Horizon server. This means
that Tuskar-UI runs on the latest version of Horizon (if you pull
regularly, of course).


Matthias


Jirka





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola
May I propose we keep the conversation Icehouse-related? I don't think 
we can make any sort of locking mechanism in I.

Though it would be worth creating some wiki page that would present it 
all in some consistent manner. I am kind of lost in these emails. :-)

So, what do you think are the biggest issues for the Icehouse tasks we have?

1. GET operations?
I don't think we need to be atomic here. We basically join resources 
from multiple APIs together. I think it's perfectly fine that something 
may be deleted in the process. Even right now we join together only 
things that exist, and we can handle it when something is not there. 
There is no need for locking or retrying here AFAIK.


2. Heat stack create, update
This is locked for the duration of the operation, so nobody can mess 
with it while it is updating or creating. Once we pack all the 
operations that are now done on the side into this, we should be 
alright. And that should be doable in I. So we should push towards 
this, rather than building some temporary locking solution in 
Tuskar-API.


3. Reservation of resources
As we can deploy only one stack now, I think multiple users shouldn't 
be a problem there. When somebody deletes resources from the 'free 
pool' in the process, it will fail with 'Not enough free resources', 
and I guess that is fine.
Also, I am not sure how it works now, but it should be possible to 
deploy smartly, so the stack will keep working even with a smaller 
amount of resources. Then we would just do a heat stack-update with the 
numbers it ended up with, and it would switch to OK status without 
changing anything.
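
In client terms, that last step would be roughly the following (a
sketch with python-heatclient; the endpoint, token, file name and
parameter name are all placeholders):

    from heatclient.v1 import client as heat_client

    heat = heat_client.Client(
        endpoint='http://heat.example.com:8004/v1/TENANT_ID',
        token='AUTH_TOKEN')

    template_body = open('overcloud.yaml').read()  # current template
    # Re-assert the node count the deployment actually reached, so the
    # stack switches back to an OK state without touching resources.
    heat.stacks.update('overcloud',
                       template=template_body,
                       parameters={'compute_count': 18})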

So, are there any other critical sections you see?

I know we did this badly in the previous Tuskar-API, and I think we are 
avoiding that now. And we will avoid it in the future, by simply not 
doing this kind of stuff until there is a proper way to do it.

Thanks,
Ladislav


On 12/20/2013 10:13 AM, Radomir Dopieralski wrote:

On 20/12/13 00:17, Jay Pipes wrote:

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
thinner -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like deploy an undercloud on these nodes) in a way that
can handle failures in a graceful and/or atomic way.

Atomicity of operations can be achieved by intoducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
Pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.
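
To make the two styles concrete, a toy in-process sketch (the thread is
about the distributed case, but the shapes are the same; the
read/commit hooks are placeholders):

    import threading

    lock = threading.Lock()

    # Pessimistic: hold the lock for the whole critical section, so
    # nobody else can enter until we are finished.
    def pessimistic_update(state, change):
        with lock:
            return change(state)

    # Optimistic: work on a snapshot and commit only if nobody changed
    # it in the meantime; on conflict, retry with fresh data.
    def optimistic_update(read_versioned, try_commit, change):
        while True:
            snapshot, version = read_versioned()
            if try_commit(change(snapshot), expected_version=version):
                return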

I'm familiar with the traditional non-distributed software concept of a
mutex (or in Windows world, a critical section). But we aren't dealing
with traditional non-distributed software here. We're dealing with
highly distributed software where components involved in the
transaction may not be running on the same host or have much awareness
of each other at all.

Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.


And, in any case (see below), I don't think that this is a problem that
needs to be solved in Tuskar.


Perhaps you have some other way of making them atomic that I can't
think of?

I should not have used the term atomic above. I actually do not think
that the things that Tuskar/Ironic does should be viewed as an atomic
operation. More below.

OK, no operations performed by Tuskar need to be atomic, noted.


For example, if the construction or installation of one compute worker
failed, adding some retry or retry-after-wait-for-event logic would be
more useful than trying to put locks in a bunch of places to prevent
multiple sysadmins from trying to deploy on the same bare-metal nodes
(since it's just not gonna happen in the real world, and IMO, if it did
happen, the sysadmins/deployers should be punished and have to clean up
their own mess ;)

I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.

The issue I am getting at is that, in the real world, the problem of
multiple users of Tuskar attempting to deploy an undercloud on the exact
same set of bare metal machines is just not going to 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 01:04 PM, Radomir Dopieralski wrote:

On 20/12/13 12:25, Ladislav Smola wrote:

May I propose we keep the conversation Icehouse-related? I don't think
we can make any sort of locking mechanism in I.

By getting rid of tuskar-api and putting all the logic higher up, we are
forfeiting the ability to ever create it. That worries me. I hate to
remove potential solutions from my toolbox, even when the problems they
solve may as well never materialize.



Well, I expect there will be decisions about whether we should not land 
a feature because it's not ready, or whether we should make some 
temporary hack that will make it work.

I am just a little worried about having temporary hacks in a stable 
version, because then the update to the next version will be hard. And 
we will most likely have to support these hacks for backwards 
compatibility.

I wouldn't say we are forfeiting the ability to create it. I would say 
we are forfeiting the ability to create hacked-together temporary 
solutions that might go against how upstream wants to do it. That is a 
good thing, I think. :-)



Though it would be worth creating some wiki page that would present it
all in some consistent manner. I am kind of lost in these emails. :-)

So, what do you think are the biggest issues for the Icehouse tasks we
have?

1. GET operations?
I don't think we need to be atomic here. We basically join resources
from multiple APIs together. I think it's perfectly fine that something
may be deleted in the process. Even right now we join together only
things that exist, and we can handle it when something is not there.
There is no need for locking or retrying here AFAIK.
2. Heat stack create, update
This is locked for the duration of the operation, so nobody can mess
with it while it is updating or creating. Once we pack all the
operations that are now done on the side into this, we should be
alright. And that should be doable in I. So we should push towards
this, rather than building some temporary locking solution in
Tuskar-API.

3. Reservation of resources
As we can deploy only one stack now, I think multiple users shouldn't
be a problem there. When somebody deletes resources from the 'free
pool' in the process, it will fail with 'Not enough free resources',
and I guess that is fine.
Also, I am not sure how it works now, but it should be possible to
deploy smartly, so the stack will keep working even with a smaller
amount of resources. Then we would just do a heat stack-update with the
numbers it ended up with, and it would switch to OK status without
changing anything.

So, are there any other critical sections you see?

It's hard for me to find critical sections in a system that doesn't
exist, is not documented and will be designed as we go. Perhaps you are
right and I am just panicking, and we won't have any such critical
sections, or can handle the ones we do without any need for
synchronization. You probably have a much better idea how the whole
system will look like. Even then, I think it still makes sense to keep
that door open an leave ourselves the possibility of implementing
locking/sessions/serialization/counters/any other synchronization if we
need them, unless there is a horrible cost involved. Perhaps I'm just
not aware of the cost?


Well, yeah, I guess for some J features we might need to do something 
like this. I have no idea right now. So the doors are always open. :-)



As far as I know, Tuskar is going to have more than just GETs and Heat
stack operations. I seem to remember stuff like resource classes, roles,
node profiles, node discovery, etc. How will updates to those be handled
and how will they interact with the Heat stack updates? Will every
change trigger a heat stack update immediately and force a refresh for
all open tuskar-ui pages?


Resource classes: definitely J, and we are not yet sure how they will 
look.

Node profiles: it's a nova flavor in I, and it will stay that way 
because of the scheduler. From the start we will have just one flavor. 
Even when we have more flavors, I don't see issues here. This heavily 
relies on how we are going to build the Heat template, but adding 
flavors should be separate from creating or updating a heat template.

Creating and updating the heat template: it seems we will be doing this 
in Tuskar-API. Do you see any potential problems here?

Node discovery: will be in Ironic. It should also be a separate 
operation, so I don't see problems here.



Every time we will have a number of operations batched together -- such
as in any of those wizard dialogs, for which we've had so many
wireframes already, and which I expect to see more -- we will have a
critical section. That critical section doesn't begin when the OK
button is pressed, it starts when the dialog is first displayed, because
the user is making decisions based on the information that is presented
to her or him there. If by the time he finished the wizard and presses
OK the situation has changed, you are risking doing something else than
the user

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 02:06 PM, Radomir Dopieralski wrote:

On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.



Hehe. :-)

OK, after the last meeting we are ready to say what goes into Tuskar-API.

Who wants to start that thread? :-)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Ladislav Smola

On 12/20/2013 02:37 PM, Imre Farkas wrote:

On 12/20/2013 12:25 PM, Ladislav Smola wrote:

2. Heat stack create, update
This is locked in the process of the operation, so nobody can mess with
it while it is updating or creating.
Once we will pack all operations that are now aside in this, we should
be alright. And that should be doable in I.
So we should push towards this, rather then building some temporary
locking solution in Tuskar-API.


It's not the issue of locking, but the goal of Tuskar with the 
Provision button is not only a single stack creation. After Heat's job 
is done, the overcloud needs to be properly configured: Keystone needs 
to be initialized, the services need to be registered, etc. I don't 
think Horizon wants to add a background worker to handle such operations.




Yes, that is a valid point. I hope we will be able to pack it all into 
the Heat template in I. This could be the way: 
https://blueprints.launchpad.net/heat/+spec/hot-software-config

Seems like the consensus is: it belongs to Heat. We are just not able 
to do it that way now.

So there is a question whether we should try to solve it in Tuskar-API 
temporarily, or rather focus on Heat.




Imre




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-18 Thread Ladislav Smola

On 12/17/2013 04:20 PM, Tzu-Mainn Chen wrote:

On 2013/13/12 23:11, Jordan OMara wrote:

On 13/12/13 16:20 +1300, Robert Collins wrote:

However, on instance - 'instance' is a very well defined term in Nova
and thus OpenStack: Nova boot gets you an instance, nova delete gets
rid of an instance, nova rebuild recreates it, etc. Instances run
[virtual|baremetal] machines managed by a hypervisor. So
nova-scheduler is not ever going to be confused with instance in the
OpenStack space IMO. But it brings up a broader question, which is -
what should we do when terms that are well defined in OpenStack - like
Node, Instance, Flavor - are not so well defined for new users? We
could use different terms, but that may confuse 'stackers, and will
mean that our UI needs it's own dedicated terminology to map back to
e.g. the manuals for Nova and Ironic. I'm inclined to suggest that as
a principle, where there is a well defined OpenStack concept, that we
use it, even if it is not ideal, because the consistency will be
valuable.

I think this is a really important point. I think the consistency is a
powerful tool for teaching new users how they should expect
tripleo/tuskar to work and should lessen the learning curve, as long
they've used openstack before.

I don't 100% agree here. Yes it is important for user to keep
consistency in naming - but as long as he is working within the same
domain. Problem is that our TripleO/Tuskar UI user is very different
from OpenStack UI user. They are operators, and they might be very often
used to different terminology - globally used and known in their field
(for example Flavor is very OpenStack specific term, they might better
see HW profile, or similar).

I think that mixing these terms in overcloud and undercloud might lead
to problems and users' confusion. They already are confused about the
whole 'over/under cloud' stuff. They are not working with this approach
daily as we are. They care about deploying OpenStack, not about how it
works under the hood. Bringing another more complicated level of
terminology (explaining what is and belongs to under/overcloud) will
increase the confusion here.

For developers, it might be easier to deal with the same terms as
OpenStack already have or what is used in the background to make that
happen. But for user - it will be confusing going to
infrastructure/hardware management part and see the very same terms.

Therefore I incline more towards broadly accepted general terminology
and not reusing OpenStack terms (at least in the UI).

-- Jarda

I think you're correct with respect to the end-user, and I can see the argument
for terminology changes at the view level; it is important not to confuse the
end-user.

But at the level where developers are working with OpenStack APIs, I think it's
important not to confuse the developers and reviewers, and that's most easily 
done
by sticking with established OpenStack terminology.


Mainn


I think we are assuming a lot here. I would rather keep the same naming 
OpenStack uses and possibly rename things later based on users' real 
feedback.

There is not only the UI; sysadmins will work with the CLI, using 
OpenStack services and OpenStack naming. So naming it differently will 
be confusing.

Btw, I would never hire a sysadmin who is supposed to manage my 
hundreds-of-nodes cloud and has no idea what is happening under the 
hood. :-D

Ladislav





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] UI Wireframes for Resource Management - ready for implementation

2013-12-17 Thread Ladislav Smola

On 12/16/2013 08:48 PM, Jay Dobies wrote:



On 12/13/2013 01:53 PM, Tzu-Mainn Chen wrote:

On 2013/13/12 11:20, Tzu-Mainn Chen wrote:

These look good!  Quick question - can you explain the purpose of Node
Tags?  Are they
an additional way to filter nodes through nova-scheduler (is that even
possible?), or
are they there solely for display in the UI?

Mainn


We start easy, so that's solely for UI needs of filtering and 
monitoring

(grouping of nodes). It is already in Ironic, so there is no reason why
not to take advantage of it.
-- Jarda


Okay, great.  Just for further clarification, are you expecting this 
UI filtering
to be present in release 0?  I don't think Ironic natively supports 
filtering

by node tag, so that would be further work that would have to be done.

Mainn


I might be getting ahead of things, but will the tags be free-form 
entered by the user, pre-entered in separate settings and selectable 
at node register/update time, or locked into a select few that we 
specify?




We are definitely going ahead. :-) Though I would lean toward free-form 
tags that users could use as they want (grouping, creating policies, 
etc.). Plus there would be a basic set we specify for the users.






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI codebase merge

2013-12-17 Thread Ladislav Smola

Horizoners,

As an alternative merge option, we could merge directly into the 
Horizon codebase. After some conversation, we have realized that it is 
possible to mix the codebases of incubated and integrated projects, as 
Trove showed us. Contrary to what was said in the last meeting, we do 
not require any special treatment for the infrastructure tab, and we 
will continue the development of the infrastructure tab with the same 
rules as the rest of Horizon. Especially, we want to keep the culture 
of cross-company reviewers, so that we make sure that TripleO/Tuskar UI 
is not only in the hands of one company. It is important to mention 
that there will be more code to keep eyes on, but we believe that us 
helping more with reviews in Horizon will give more time for reviews in 
TripleO/Tuskar UI.


This is proposed Roadmap:

1. Before the meeting on 16.12.2013, send an email about the merge.
2. Immediate steps after the meeting (days and weeks)
- Merge of the Tuskar-UI core team into the Horizon core team. Namely: 
jtomasek, lsmola, jomara, tzumainn (a point of discussion)

- Tuskar will add a third panel, named Infrastructure.
- Tuskar will be disabled by default in Horizon.
- Break tuskar-ui into smaller pieces and submit them individually as 
patches directly to Horizon.

3. Long-term steps after the meeting (weeks and months)
- Synchronize coding style and policies.
- Transfer blueprints and bugs to the Horizon launchpad with a 
'tuskar-' prefix.
- Continue development in the Horizon codebase. The Infrastructure tab 
will have some tabs implemented with mock data until the underlying 
libraries are finished (Tuskar depends on several APIs, like nova, 
heat, tripleo and ironic). It will get to a stable state in I3 (we need 
to develop in parallel with the APIs to meet the I3 deadline).

- Transfer documentation.

The benefits of this were already pointed out by mrunge.

We have a detailed plan of features for I2 and I3, put together by the 
TripleO community; those will be captured as blueprints and presented 
at Horizon meetings.


If you have any questions, please ask!

Thanks,
Tuskar UI team

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-13 Thread Ladislav Smola

Horizoners,

As discussed in TripleO and Horizon meetings, we are proposing to move 
Tuskar UI under the Horizon umbrella. Since we are building our UI 
solution on top of Horizon, we think this is a good fit. It will allow 
us to get feedback and reviews from the appropriate group of developers.


Tuskar UI is a user interface for the design, deployment, monitoring, 
and management of OpenStack. The code is built on the Horizon framework 
and facilitates the TripleO approach to deployment.  We work closely 
with the TripleO team and will continue to do so. The Tuskar UI itself 
is implemented as a new tab, headed Infrastructure, which is added as 
a dashboard to OpenStack Horizon. For more information about the TripleO 
project, check out the project wiki: 
https://wiki.openstack.org/wiki/TripleO.


The following is a proposal on how the Tuskar UI project could be 
integrated:
- Create a new codebase for the Tuskar-UI under the horizon umbrella, 
with its own core team
- As an exception to the usual contribution process, commits to the 
Tuskar-UI codebase may be pushed, +2'd and approved by one company. This 
is intended to make the development process faster. We are currently 
developing the Tuskar-UI at a fast pace and there are not yet many 
contributors who aren't employed by Red Hat that are familiar with the 
code. As the code stabilises and attracts users and developers, this 
exception can be removed.
- The Tuskar-UI cores would be cores of Tuskar-UI codebase only. Horizon 
cores would be cores of the whole Horizon program.



What does it mean for Horizon?
- There will be more developers, reviewers and patches coming to Horizon 
(as a program).
- Horizon contributors will have time to get familiar with the Tuskar-UI 
code, before we decide to merge it into the Horizon codebase.


If you have any questions, please ask!

Thanks,
Tuskar UI team
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-12 Thread Ladislav Smola

On 12/11/2013 08:59 PM, James Slagle wrote:

This is really helpful, thanks for pulling it together.

comment inline...

On Wed, Dec 11, 2013 at 2:15 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:

* NODE - a physical general-purpose machine capable of running in many roles. 
Some nodes may have a hardware layout that is particularly
useful for a given role.

  * REGISTRATION - the act of creating a node in Ironic

  * ROLE - a specific workload we want to map onto one or more nodes. 
Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.

  * MANAGEMENT NODE - a node that has been mapped with an undercloud 
role
  * SERVICE NODE - a node that has been mapped with an overcloud role
 * COMPUTE NODE - a service node that has been mapped to an 
overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped to an 
overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been mapped to an 
overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been mapped to an 
overcloud block storage role

  * UNDEPLOYED NODE - a node that has not been mapped with a role

This begs the question (for me anyway), why not call it UNMAPPED NODE?
  If not, can we s/mapped/deployed in the descriptions above instead?

It might make sense then to define mapped and deployed in technical
terms as well.  Is mapped just the act of associating a node with a
role in the UI, or does it mean that bits have actually been
transferred across the wire to the node's disk and it's now running?


The way I see it, it depends on where we have the node.

So Registered/Unregistered means the node is or is not in the 'nova 
baremetal-list' (Ironic). (We can't really have 'unregistered' unless we 
can call autodiscovery that shows the not-yet-registered nodes.)

Registering is done via 'nova baremetal-node-create'.

And Provisioned/Unprovisioned (Deployed, Booted, Mapped ??) means that 
the node is or is not in 'nova list'.

Provisioning is done via 'nova boot'. Not sure about calling it mapping.
Basically, deciding what role a baremetal node has works like this: if it 
was booted with an image that is considered to be a Compute Node image, 
then it's a compute node. Right?
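
For illustration, a minimal sketch of the two transitions using 
python-novaclient (credentials, image and flavor IDs are placeholders; 
registration is shown only as a comment, since I'm only confident about 
the CLI form of that call):

    from novaclient import client

    nova = client.Client('1.1', 'admin', 'PASSWORD', 'admin',
                         'http://undercloud:5000/v2.0')

    # Registered: the node exists in `nova baremetal-list` (done out of
    # band via `nova baremetal-node-create ...`).

    # Provisioned: an instance shows up in `nova list`, i.e. a role
    # image was booted onto one of the registered nodes.
    server = nova.servers.create(name='overcloud-compute-0',
                                 image='COMPUTE_IMAGE_ID',
                                 flavor='BAREMETAL_FLAVOR_ID')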

Other thing is whether the service it provides runs correctly.




   * another option - UNALLOCATED NODE - a node that has not been 
allocated through nova scheduler (?)
- (after reading lifeless's explanation, I agree that 
allocation may be a
   misleading term under TripleO, so I 
personally vote for UNDEPLOYED)

  * INSTANCE - A role deployed on a node - this is where work actually 
happens.

* DEPLOYMENT

  * SIZE THE ROLES - the act of deciding how many nodes will need to be 
assigned to each role
* another option - DISTRIBUTE NODES (?)
  - (I think the former is more accurate, but 
perhaps there's a better way to say it?)

  * SCHEDULING - the process of deciding which role is deployed on which 
node

  * SERVICE CLASS - a further categorization within a service role for a 
particular deployment.

   * NODE PROFILE - a set of requirements that specify what attributes 
a node must have in order to be mapped to
a service class






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Ladislav Smola

Agree with this.

Though I am an optimist, I believe that this time we can avoid calling 
multiple services in one request when they depend on each other.
As for multiple users at once, this should be solved inside the API 
calls of the services themselves.


So I think we should forbid building these complex API-call composites 
in the Tuskar-API. If we want something like this, we should implement 
it properly inside the services themselves. If we are not able to 
convince the community about it, maybe it's just not that good a feature. :-D


Ladislav

On 12/12/2013 02:35 PM, Jiří Stránský wrote:

On 12.12.2013 14:26, Jiří Stránský wrote:

On 12.12.2013 11:49, Radomir Dopieralski wrote:

On 11/12/13 13:33, Jiří Stránský wrote:

[snip]

TL;DR: I believe that As an infrastructure administrator, Anna 
wants a
CLI for managing the deployment providing the same fundamental 
features
as UI. With the planned architecture changes (making tuskar-api 
thinner

and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few 
options

and look forward for feedback.


[snip]

2) Make a thicker tuskar-api and put the business logic there. 
(This is
the original approach with consuming other services from 
tuskar-api. The

feedback on this approach was mostly negative though.)


This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.



You're right that we should strive for atomicity, but I'm afraid putting
the complex operations (which call other services) into tuskar-api will
not solve the problem for us. (Jay and Ladislav already discussed the
issue.)

If we have to do multiple API calls to perform a complex action, then
we're in the same old situation. Should i get back to the rack creation
example that Ladislav posted, it could still happen that Tuskar API
would return error to the UI like: We haven't created the rack in
Tuskar because we tried to modify info about 8 nodes in Ironic, but
only 5 modifications succeeded. So we've tried to revert those 5
modifications but we only managed to revert 2. Please figure this out
and come back. We moved the problem, but didn't solve it.

I think that if we need something to be atomic, we'll need to make sure
that one operation only writes to one service, where the single
source of truth for that data lies, and make sure that the operation is
atomic within that service. (See Ladislav's example with overcloud
deployment via Heat in this thread.)

Thanks :)

Jirka



And just to make it clear how that relates to locking: even if I can 
lock something within Tuskar API, I cannot lock the related data 
(which I need to use in the complex operation) in the other API (say 
Ironic). Things can still change under Tuskar API's hands. Again, we 
just move the unpredictability, but don't remove it.


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Ladislav Smola

+1 for Tatiana Mazur to Horizon Core

not sure if only cores should do the vote, but Tatiana has been very 
active, so it will be well deserved. :-)



On 12/11/2013 01:09 PM, Jiri Tomasek wrote:

+1 for Tatiana Mazur to Horizon Core



On 12/10/2013 09:24 PM, Lyle, David wrote:
I would like to nominate Tatiana Mazur to Horizon Core.  Tatiana has 
been a significant code contributor in the last two releases, 
understands the code base well and has been doing a significant 
number of reviews for the last two milestones.



Additionally, I'd like to remove some inactive members of 
Horizon-core who have been inactive since the early Grizzly release 
at the latest.

Devin Carlen
Jake Dahn
Jesse Andrews
Joe Heck
John Postlethwait
Paul McMillan
Todd Willey
Tres Henry
paul-tashima
sleepsonthefloor


Please respond with a +1/-1 by this Friday.

-David Lyle




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Ladislav Smola

Hi,

thanks for starting this conversation.
I will take it a little sideways. I think we should be asking why we have 
needed the tuskar-api. It has done some more complex logic (e.g. 
building a heat template) or stored additional info not supported by 
the services we use (like rack associations).

That is a perfectly fine use case for introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we 
don't need tuskar-api for that kind of stuff. Can you please list which 
complex operations are left that should be done in tuskar? I think 
discussing concrete stuff would be best.


There can be a CLI or API deployment story using Openstack services, not 
necessarily calling only tuskar-cli and api as proxies.

E.g. in documentation you will have

now create the stack by: heat stack-create params

it's much better than:
You can create a stack by 'tuskar-deploy params', which actually calls 
'heat stack-create params'.
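
For illustration, the first, direct flow maps onto python-heatclient in 
a few lines (the endpoint, token and template values below are 
placeholders):

    from heatclient.client import Client

    heat = Client('1', endpoint='http://undercloud:8004/v1/TENANT_ID',
                  token='KEYSTONE_TOKEN')

    # The same thing `heat stack-create` does under the hood:
    heat.stacks.create(stack_name='overcloud',
                       template=open('overcloud.yaml').read(),
                       parameters={})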


What is wrong with calling the original services? Why do we want to 
hide them?



Also, as I have been talking with rdopieralsky, there have been some 
problems in the past with tuskar doing more steps in one, like creating a 
rack and registering new nodes at the same time. As those have been 
separate API calls and there is no transaction handling, we should not 
do this kind of thing in the first place. If we have actions that 
depend on each other, they should go from the UI one by one. Otherwise we 
will be showing messages like: The rack has not been created, but 5 
of the 8 nodes have been added. We have tried to delete those added nodes, 
but 2 of the 5 deletions have failed. Please figure this out, then you 
can run this awesome action that calls multiple dependent APIs without 
real rollback again. (Or something like that, depending on what gets 
created first.)
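
To make the failure mode concrete, a pseudo-real sketch (the tuskar and 
ironic client calls here are hypothetical; the point is only that there 
is no cross-service transaction):

    def create_rack_with_nodes(tuskar, ironic, rack_def, node_defs):
        created = []
        try:
            for node_def in node_defs:
                created.append(ironic.nodes.create(**node_def))
            return tuskar.racks.create(rack_def, nodes=created)
        except Exception:
            # Best-effort rollback - and each delete can fail too,
            # which is exactly the error message quoted above.
            for node in created:
                try:
                    ironic.nodes.delete(node.id)
                except Exception:
                    pass
            raise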


I am not saying we should not have tuskar-api. Let's just put there the 
things that belong there, not proxy everything.


btw. the real path of the diagram is

tuskar-ui -> tuskarclient -> tuskar-api -> heatclient -> heat-api | ironic | etc.


My conclusion
--

I say if it can be tuskar-ui -> heatclient -> heat-api, let's keep it 
that way.


If we realize we are putting some business logic into the UI that also 
needs to be done in the CLI, or we need to store some additional data 
that doesn't belong anywhere else, let's put it in the Tuskar-API.


Kind Regards,
Ladislav



On 12/11/2013 03:32 PM, Jay Dobies wrote:
Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm 
new to the project. I only mention it again because it's relevant in 
that I missed any of the discussion on why proxying from tuskar API to 
other APIs is looked down upon. Jiri and I had been talking yesterday 
and he mentioned it to me when I started to ask these same sorts of 
questions.


On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.

Previously, we had planned Tuskar arcitecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api | ironic-api | etc.


My biggest concern was that having each client call out to the 
individual APIs directly put a lot of knowledge into the clients that 
had to be replicated across clients. At the best case, that's simply 
knowing where to look for data. But I suspect it's bigger than that 
and there are workflows that will be implemented for tuskar needs. If 
the tuskar API can't call out to other APIs, that workflow 
implementation needs to be done at a higher layer, which means in each 
client.


Something I'm going to talk about later in this e-mail but I'll 
mention here so that the diagrams sit side-by-side is the potential 
for a facade layer that hides away the multiple APIs. Lemme see if I 
can do this in ASCII:


tuskar-ui -+   +-tuskar-api
   |   |
   +-client-facade-+-nova-api
   |   |
tuskar-cli-+   +-heat-api

The facade layer runs client-side and contains the business logic that 
calls across APIs and adds in the tuskar magic. That keeps the tuskar 
API from calling into other APIs* but keeps all of the API call logic 
abstracted away from the UX pieces.


* Again, I'm not 100% up to speed with the API discussion, so I'm 
going off the assumption that we want to avoid API to API calls. If 
that isn't as strict of a design principle as I'm understanding it to 
be, then the above picture probably looks kinda silly, so keep in mind 
the context I'm going from.


For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
   |
   +-tuskar-api-+-nova-api
   ||

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Ladislav Smola

On 12/11/2013 04:35 PM, James Slagle wrote:

On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský ji...@redhat.com wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a CLI
for managing the deployment providing the same fundamental features as UI.
With the planned architecture changes (making tuskar-api thinner and getting
rid of proxying to other services), there's not an obvious way to achieve
that. We need to figure this out. I present a few options and look forward
for feedback.

Previously, we had planned Tuskar arcitecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api | ironic-api | etc.

To be clear, tuskarclient is just a library right?  So both the UI and
CLI use tuskarclient, at least was that the original plan?


This meant that the integration logic of how to use heat, ironic and other
services to manage an OpenStack deployment lay within *tuskar-api*. This
gave us an easy way towards having a CLI - just build tuskarclient to wrap
abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.

I think we should do that whereever we can for sure.  For example, to
get the status of a deployment we can do the same API call as heat
stack-status ... does, no need to write a new Tuskar API to do that.


But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which means
there's a natural parity between what the Dashboard and the CLIs can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at all).
We're building a separate UI because we need *additional logic* on top of
the APIs. E.g. instead of directly working with Heat templates and Heat
stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker than
Dashboard is, and the natural parity between CLI and UI vanishes. By having
this logic in UI, we're effectively preventing its use from CLI. (If i were
bold i'd also think about integrating Tuskar with other software which would
be prevented too if we keep the business logic in UI, but i'm not absolutely
positive about use cases here).

I don't think we want the business logic in the UI.


Can you specify what kind of business logic?

Like, we do validations in the UI before we send things to the API (both 
on server and client).
We occasionally do some joins. E.g. the list of nodes is a join of 'nova 
baremetal-list' and 'nova list'.
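
For illustration, that join could look something like this with 
python-novaclient (the baremetal extension attribute names are 
assumptions on my side):

    def nodes_with_instances(nova):
        # Join baremetal nodes with the instances running on them.
        instances = dict((s.id, s) for s in nova.servers.list())
        return [(node, instances.get(node.instance_uuid))
                for node in nova.baremetal.list()]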


That is considered to be business logic. Though if it is only for UI 
purposes, it should stay in the UI.


Other than this, it's just API calls.




Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am i wrong that Anna the infrastructure
administrator would want that?)

IMO, we want an equivalent CLI and UI.  A big reason is so that it can
be sanely scripted/automated.


Sure, we have that. It's just API calls. Though when you want a mass 
instance delete, you will write a script for that in the CLI; in the UI 
you will filter and use checkboxes.

So the equivalence is in the API calls, not in the complex operations.


Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there. Make
it consume other python-*clients. (This is an unusual approach though, i'm
not aware of any python-*client that would consume and integrate other
python-*clients.)

python-openstackclient consumes other clients :).  Ok, that's probably
not a great example :).

This approach makes the most sense to me.  python-tuskarclient would
make the decisions about if it can call the heat api directly, or the
tuskar api, or some other api.  The UI and CLI would then both use
python-tuskarclient.


Guys, I am not sure about this. I thought python-xxxclient should follow 
the Remote Proxy pattern, being an object wrapper for the service API calls.


Even if you do this, it should call e.g. python-heatclient rather 
than the API directly. Though I haven't seen this approach before in 
OpenStack.




2) Make a thicker tuskar-api and put the business logic there. (This is the
original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

So, typically, I would say this is the right approach.  However given
what you pointed out above that sometimes we can use other API's
directly, we then have a seperation where sometimes you have to use
tuskar-api and sometimes you'd use heat/etc api.  By using
python-tuskarclient, you're really just pushing that abstraction into
a library instead of an API, and I think that makes some sense.


Shouldn't general libs live in Oslo, rather than in a client?


3) Keep tuskar-api and python-tuskarclient thin, make another library

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/05/2013 03:01 PM, James Slagle wrote:

On Wed, Dec 4, 2013 at 2:10 PM, Robert Collins
robe...@robertcollins.net wrote:

On 5 December 2013 06:55, James Slagle james.sla...@gmail.com wrote:

On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins

Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
to TripleO and OpenStack, but I don't think they are tracking /
engaging in the code review discussions enough to stay in -core: I'd
be delighted if they want to rejoin as core - as we discussed last
time, after a shorter than usual ramp up period if they get stuck in.

What's the shorter than usual ramp up period?

You know, we haven't actually put numbers on it. But I'd be
comfortable with a few weeks of sustained involvement.

+1.  Sounds reasonable.


In general, I agree with your points about removing folks from core.

We do have a situation though where some folks weren't reviewing as
frequently when the Tuskar UI/API development slowed a bit post-merge.
  Since that is getting ready to pick back up, my concern with removing
this group of folks, is that it leaves less people on core who are
deeply familiar with that code base.  Maybe that's ok, especially if
the fast track process to get them back on core is reasonable.

Well, I don't think we want a situation where when a single org
decides to tackle something else for a bit, no one can comfortably
fix bugs in e.g. Tuskar / or worse the whole thing stalls - that's why
I've been so keen to get /everyone/ in Tripleo-core familiar with the
entire collection of codebases we're maintaining.

So I think after 3 months that other cores should be reasonably familiar too ;).

Well, it's not so much about just fixing bugs.  I'm confident our set
of cores could fix bugs in almost any OpenStack related project, and
in fact most do.  It was more just a comment around people who worked
on the initial code being removed from core.  But, if others don't
share that concern, and in fact Ladislav's comment about having
confidence in the number of tuskar-ui guys still on core pretty much
mitigates my concern :).


Well, if it were possible, I would rather keep the guys who want to be 
more active in core. It's true that most of us have worked on Horizon till 
now, preparing libraries we will need.
And in the next couple of months, we will be implementing that in Tuskar-UI. So 
having more cores who understand that will be beneficial.


Basically, tzumainn and I are working on Tuskar-UI full time. And ifarkas 
and tomas-8c8 are familiar enough with the code, but will be working on 
other projects. So that seems
to me like a minimal number of cores to keep us rolling (if nobody gets 
sick, etc.).


We will need to get patches in at a certain cadence to keep the 
deadlines (patches will also depend on each other, blocking other people), 
so in certain cases a +1 from a non-core
guy I have confidence in, regarding e.g. deep knowledge of Angular.js 
or Horizon, will be enough for me to approve the patch.



That said, perhaps we should review these projects.

Tuskar as an API to drive deployment and ops clearly belongs in
TripleO - though we need to keep pushing features out of it into more
generalised tools like Heat, Nova and Solum. TuskarUI though, as far
as I know all the other programs have their web UI in Horizon itself -
perhaps TuskarUI belongs in the Horizon program as a separate code
base for now, and merge them once Tuskar begins integration?

IMO, I'd like to see Tuskar UI stay in tripleo for now, given that we
are very focused on the deployment story.  And our reviewers are
likely to have strong opinions on that :).  Not that we couldn't go
review in Horizon if we wanted to, but I don't think we need the churn
of making that change right now.

So, I'll send my votes on the other folks after giving them a little
more time to reply.

Thanks.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/05/2013 11:40 AM, Jan Provaznik wrote:

On 12/04/2013 08:12 AM, Robert Collins wrote:

And the 90 day not-active-enough status:

|   jprovazn **    |   22    0    5   10    7    1   77.3% |  2 (  9.1%)  |
|     jomara **    |   21    0    2    4   15   11   90.5% |  2 (  9.5%)  |
|    mtaylor **    |   17    3    6    0    8    8   47.1% |  0 (  0.0%)  |
|   jtomasek **    |   10    0    0    2    8   10  100.0% |  1 ( 10.0%)  |
|    jcoufal **    |    5    3    1    0    1    3   20.0% |  0 (  0.0%)  |


Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
to TripleO and OpenStack, but I don't think they are tracking /
engaging in the code review discussions enough to stay in -core: I'd
be delighted if they want to rejoin as core - as we discussed last
time, after a shorter than usual ramp up period if they get stuck in.



I will put more attention to reviews in the future. Just a nit: it's 
quite a challenge to find something to review - most mornings when I 
check the pending patches, everything is already reviewed ;).


Jan



Agreed. The policy of one review per day is not really what I do. I 
go through all of the reviews and review as much as I can, because I 
can't be sure there will be something to review for me the
next day.
That takes me a little more time than I would like. Though reviews are 
needed and I do learn new stuff.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/06/2013 09:56 AM, Jaromir Coufal wrote:


On 2013/04/12 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri  Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

Hey there,

thanks Rob for keeping an eye on this. Speaking for myself, as a current 
non-coder it was very hard to keep pace with others, especially when 
the UI was on hold and I was designing future views. I'll continue working 
on designs much more, but I will also keep an eye on the code which is 
going in. I believe that UX reviews will be needed before merging so 
that we ensure we keep the vision. That's why I would like to express 
my will to stay within -core even though I don't deliver as big an amount 
of reviews as the other engineers. However, if anybody feels that I should 
be just +1, I completely understand and I will give up my +2 power.




I wonder whether there can be a sort of honorary core title. jcoufal is 
contributing a lot, but not that much with code or reviews.


I vote +1 to that, if it is possible


-- Jarda


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-06 Thread Ladislav Smola

On 12/06/2013 05:36 PM, Ben Nemec wrote:


On 2013-12-06 03:22, Ladislav Smola wrote:


On 12/06/2013 09:56 AM, Jaromir Coufal wrote:


On 2013/04/12 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri  Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

Hey there,

thanks Rob for keeping an eye on this. Speaking for myself, as a current 
non-coder it was very hard to keep pace with others, especially when 
the UI was on hold and I was designing future views. I'll continue 
working on designs much more, but I will also keep an eye on the code 
which is going in. I believe that UX reviews will be needed before 
merging so that we ensure we keep the vision. That's why I would 
like to express my will to stay within -core even though I don't 
deliver as big an amount of reviews as the other engineers. However, if 
anybody feels that I should be just +1, I completely understand and 
I will give up my +2 power.




I wonder whether there can be a sort of honorary core title. jcoufal 
is contributing a lot, but not that much with code or reviews.


What purpose would this serve?  The only thing core gives you is the 
ability to +2 in Gerrit.  If you're not reviewing, core is 
meaningless.  It's great to contribute to the mailing list, but being 
core shouldn't have any influence on that one way or another.  This is 
a meritocracy where suggestions are judged based on their value, not 
whether the suggester has +2 ability (which honorary core wouldn't 
provide anyway, I assume).  At least that's the ideal.  I think 
everyone following the project is aware of Jaromir's contributions and 
a title isn't going to change that one way or another.




Well. It's true. The only thing that comes to my mind is a swell dinner 
at summit. :-D



-Ben



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Ladislav Smola

Hello,

+1 to core update. There are still enough Tuskar-UI guys in the core 
team I think.


Ladislav

On 12/04/2013 08:12 AM, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri  Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm going
to throw in some boilerplate here for a few more editions... - I'm
going to talk about stats here, but they
are only part of the picture : folk that aren't really being /felt/ as
effective reviewers won't be asked to take on -core responsibility,
and folk who are less active than needed but still very connected to
the project may still keep them : it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

Our merger with Tuskar has now had plenty of time to bed down; folk
from the Tuskar project who have been reviewing widely within TripleO
for the last three months are not in any way disadvantaged vs previous
core reviewers when merely looking at the stats; and they've had three
months to get familiar with the broad set of codebases we maintain.

90 day active-enough stats:

+------------------+-------------------------------------------+----------------+
| Reviewer         | Reviews   -2   -1   +1   +2   +A   +/- %  | Disagreements* |
+------------------+-------------------------------------------+----------------+
|   lifeless **    |   521     16  181    6  318  141   62.2%  |   16 (  3.1%)  |
| cmsj **          |   416      1   30    1  384  206   92.5%  |   22 (  5.3%)  |
| clint-fewbar **  |   379      2   83    0  294  120   77.6%  |   11 (  2.9%)  |
|    derekh **     |   196      0   36    2  158   78   81.6%  |    6 (  3.1%)  |
|    slagle **     |   165      0   36   94   35   14   78.2%  |   15 (  9.1%)  |
|    ghe.rivero    |   150      0   26  124    0    0   82.7%  |   17 ( 11.3%)  |
|    rpodolyaka    |   142      0   34  108    0    0   76.1%  |   21 ( 14.8%)  |
|    lsmola **     |   101      1   15   27   58   38   84.2%  |    4 (  4.0%)  |
|    ifarkas **    |    95      0   10    8   77   25   89.5%  |    4 (  4.2%)  |
|     jistr **     |    95      1   19   16   59   23   78.9%  |    5 (  5.3%)  |
|      markmc      |    94      0   35   59    0    0   62.8%  |    4 (  4.3%)  |
|    pblaho **     |    83      1   13   45   24    9   83.1%  |   19 ( 22.9%)  |
|    marios **     |    72      0    7   32   33   15   90.3%  |    6 (  8.3%)  |
|   tzumainn **    |    67      0   17   15   35   15   74.6%  |    3 (  4.5%)  |
|    dan-prince    |    59      0   10   35   14   10   83.1%  |    7 ( 11.9%)  |
|      jogo        |    57      0    6   51    0    0   89.5%  |    2 (  3.5%)  |
+------------------+-------------------------------------------+----------------+


This is a massive improvement over last month's report. \o/ Yay. The
cutoff line here is pretty arbitrary - I extended a couple of rows
below one-per-work-day because Dan and Joe were basically there - and
there is a somewhat bigger gap to the next most active reviewer below
that.

About half of Ghe's reviews are in the last 30 days, and ~85% in the
last 60 - but he has been doing significant numbers of thoughtful
reviews over the whole three months - I'd like to propose him for
-core.
Roman has very similar numbers here, but I don't feel quite as
confident yet - I think he is still coming up to speed on the codebase
(nearly all his reviews are in the last 60 days only) - but I'm
confident that he'll be thoroughly indoctrinated in another month :).
Mark is contributing great thoughtful reviews, but the vast majority
are very recent - like Roman, I want to give him some 

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Ladislav Smola

Hello,

so what is the plan? Tuskar-UI stays in tripleo until tripleo is part of 
the integrated release?


Thanks,
Ladislav

On 12/04/2013 08:44 PM, Lyle, David wrote:

On 5 December 2013 12:10, Robert Collins robe...@robertcollins.net wrote:

-snip-


That said, perhaps we should review these projects.

Tuskar as an API to drive deployment and ops clearly belongs in
TripleO - though we need to keep pushing features out of it into more
generalised tools like Heat, Nova and Solum. TuskarUI though, as far
as I know all the other programs have their web UI in Horizon itself -
perhaps TuskarUI belongs in the Horizon program as a separate code
base for now, and merge them once Tuskar begins integration?


This sounds reasonable to me.  The code base for TuskarUI is building on 
Horizon and we are planning on integrating TuskarUI into Horizon once TripleO 
is part of the integrated release.  The review skills and focus for TuskarUI are 
certainly more consistent with Horizon than with the rest of the TripleO program.
  

-Rob


--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Ladislav Smola

Hello,

just few notes from me:

https://etherpad.openstack.org/p/tripleo-feature-map sounds like a great 
idea; we should go through them one by one, maybe in a meeting.
We should agree on what is doable for I, without violating the OpenStack 
way in some very ugly way. So do we want to be OpenStack on OpenStack

or Almost OpenStack on OpenStack? Or what is the goal here?

So let's take a simple example: flat network, 2 racks (32 nodes), 2 
controller nodes, 2 neutron nodes, 14 nova compute, 14 storage.


I. A manual way using Heat and the Scheduler could be assigning every group 
of nodes to a special flavor by hand. Then the nova scheduler will take care 
of it. (See the sketch after point IV.)
1. How hard will it be to implement 'assigning specific nodes to a 
Flavor'? (Probably adding a condition for MAC address?)
Or do you have some other idea how to do this in an almost clean 
way, without reimplementing the nova scheduler? (Though this is probably 
messing with the scheduler.)
2. How will this be implementable in the UI? Just assigning nodes to flavors 
and uploading a Heat template?


II. Having homogeneous hardware, all of it will be one flavor and the nova 
scheduler will decide where to put what, when you tell Heat e.g. I want 
to spawn 2 controller images.
1. How hard is it to set policies like we want to spread those nodes 
over all racks?
2. How will this be implementable in the UI? It is basically building a 
complex Heat template, right? So just uploading a Heat template?


III. Having more flavors
1. We will be able to set in Heat something like: I want a Nova compute 
node on compute_flavor (amazon c1, c3) with high priority, or on 
all_purpose_flavor (amazon m1) with normal priority. How hard is that?

2. How will this be implementable in the UI? Just uploading a Heat template?

IV. Tripleo way


1. From the OOO name I infer that we want to use OpenStack; that means using 
Heat, the Nova scheduler, etc.
From my point of view, having a Heat template for deploying e.g. a 
Wordpress installation seems the same to me as having a Heat template 
to deploy OpenStack; it's just much more complex. Is this a valid 
assumption? If you think it's not, please explain why.
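
Here is the sketch for point I - only an illustration of the direction, 
assuming python-novaclient; the extra-spec key for pinning nodes is made 
up, and whether the scheduler can actually match on it is exactly my 
question above:

    from novaclient import client

    nova = client.Client('1.1', 'admin', 'PASSWORD', 'admin',
                         'http://undercloud:5000/v2.0')

    # One flavor per node group:
    controller = nova.flavors.create(name='baremetal-controller',
                                     ram=16384, vcpus=8, disk=500)
    # Tag it so (hopefully) only the intended hardware matches;
    # 'node_group' is a hypothetical extra-spec key:
    controller.set_keys({'node_group': 'rack1-controllers'})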



Radical idea : we could ask (e.g. on -operators) for a few potential 
users who'd be willing to let us interview them.

Yes please!!!

Talking to jcoufal: being able to edit a Heat template in the UI and being 
able to assign baremetals to flavors (later connected to a template catalog) 
could be all we need. Also, later being able to visualize
what will happen when you actually 'stack create' the template, so we 
don't go in blindly, would be very needed.


Kind regards,
Ladislav


On 11/28/2013 06:41 AM, Robert Collins wrote:

Hey, I realise I've done a sort of point-by-point thing below - sorry.
Let me say that I'm glad you're focused on what will help users, and
their needs - I am too. Hopefully we can figure out why we have
different opinions about what things are key, and/or how we can get
data to better understand our potential users.


On 28 November 2013 02:39, Jaromir Coufal jcou...@redhat.com wrote:


Important point here is, that we agree on starting with very basics - grow
then. Which is great.

The whole deployment workflow (not just UI) is all about user experience
which is built on top of TripleO's approach. Here I see two important
factors:
- There are users who are having some needs and expectations.

Certainly. Do we have Personas for those people? (And have we done any
validation of them?)


- There is underlying concept of TripleO, which we are using for
implementing features which are satisfying those needs.

mmm, so the technical aspect of TripleO is about setting up a virtuous
circle: where improvements in deploying cluster software via OpenStack
makes deploying OpenStack better, and those of us working on deploying
OpenStack will make deploying cluster software via OpenStack better in
general, as part of solving 'deploying OpenStack' in a nice way.


We are circling around and trying to approach the problem from wrong end -
which is implementation point of view (how to avoid own scheduling).

Let's try get out of the box and start with thinking about our audience
first - what they expect, what they need. Then we go back, put our
implementation thinking hat on and find out how we are going to re-use
OpenStack components to achieve our goals. In the end we have detailed plan.

Certainly, +1.


=== Users ===

I would like to start with our targeted audience first - without milestones,
without implementation details.

I think here is the main point where I disagree and which leads to different
approaches. I don't think, that user of TripleO cares only about deploying
infrastructure without any knowledge where the things go. This is overcloud
user's approach - 'I want VM and I don't care where it runs'. Those are
self-service users / cloud users. I know we are OpenStack on OpenStack, but
we shouldn't go that far that we expect same behavior from undercloud users.
I can tell you various examples of why the 

Re: [openstack-dev] [ceilometer][horizon] The meaning of Network Duration

2013-11-28 Thread Ladislav Smola

Hello Daisy,

the tables were deleted from Horizon because of that confusion: 
https://bugs.launchpad.net/horizon/+bug/1249279
We are going to clearly document each ceilometer meter first. Then this 
information will appear again in Horizon.


E.g. 'duration' as stated in the docs 
(http://docs.openstack.org/developer/ceilometer/measurements.html) 
is kind of misleading, as it actually means presence. So the samples of 
these meters contain 1 or 0, depending
on whether the network was up or down at that time. The actual duration 
must then be inferred from these samples.
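
Illustration only (not real Horizon/Ceilometer code), assuming one 
0-or-1 sample per fixed polling period:

    def duration_seconds(presence_samples, period_seconds=600):
        # presence_samples: iterable of (timestamp, 0-or-1) pairs
        return sum(value for _, value in presence_samples) * period_seconds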


This should also be backported to H (Havana).

Kind regards.
Ladislav


On 11/27/2013 10:16 AM, Ying Chun Guo wrote:


Hello,

While I translate Horizon web UI, I'm a little confused with Network 
Duration,
Port Duration, and Router Duration in the Resources Usage 
statistics table.


What does Duration mean here?
If I translate it literally as 'Duration', my customers 
cannot understand it.

Does it equal usage time?

Regards
Ying Chun Guo (Daisy)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [heat][horizon] Heat UI related requirements & roadmap

2013-11-26 Thread Ladislav Smola

Hello,

seems too big for inline comments, so just a few notes here:

If we truly want to have templates portable, it would mean having the 
'metadata' somehow standardised, right?
Otherwise, if every UI adds its own metadata, then I hardly see the 
templates as portable. I think the first step would then be to delete 
the old metadata and add your own, unless you are fine with 80% of the 
template being metadata you don't use. That also won't
help the readability. What will help readability are verbose comments.
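
To make that first step concrete, a hypothetical sketch (the top-level 
section name 'Metadata' is made up here; Heat would simply ignore 
whatever name we pick):

    import yaml

    def strip_foreign_metadata(template_str):
        # Drop a foreign UI's metadata before adding our own.
        template = yaml.safe_load(template_str)
        foreign = template.pop('Metadata', {})
        return foreign, yaml.safe_dump(template)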

I am not really sure how long it would take to add new specialized tags 
that are used only in Horizon and are well documented. I think showing 
this should get the patch merged very quickly. That seems to me like a 
portable solution.


IMO for the template catalogue we would probably need a new service, 
something like Glance, so that's probably for the more distant future.


For the use-cases:
---

ad 1)
Something more general next to Description may be more useful, like 
keywords, packages or components.

Example:

Description...
Keywords: wordpress, mysql...

Or you could parse it from e.g. packages (though that is not always 
used, so being able to write it explicitly might be handy)


ad 2)
Maybe adding something like an 'author' tag would be a good idea, though 
you can find all the history in the git repo,
given you use https://github.com/openstack/heat-templates . If you have 
a different repo, maybe add something like:

Origin: https://github.com/openstack/heat-templates

ad 3)
So having a fixed and documented schema seems to be a good way to be 
portable, at least to me. I am not
against UI-only tags inside the template that are really useful for 
everybody. We will find out by collectively

reviewing them, which usually brings some easier solution.

Or don't you think it will get too wild to have some 'metadata' section 
completely ignored by Heat? It seems
to me like there will be a lot of cases where people won't push their 
templates upstream because of the
metadata they have added to their templates that nobody else will ever 
use. Is somebody else concerned

about this?

That's my 2 cents.

Kind regards,
Ladislav


On 11/26/2013 07:32 AM, Keith Bray wrote:
Thanks Steve.  I appreciate your input. I have added the use cases for 
all to review:

https://wiki.openstack.org/wiki/Heat/StackMetadata

What are next steps to drive this to resolution?

Kind regards,
-Keith

From: Steve Baker sba...@redhat.com
Reply-To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org
Date: Monday, November 25, 2013 11:47 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat][horizon] Heat UI related 
requirements & roadmap


On 11/26/2013 03:26 PM, Keith Bray wrote:

On 11/25/13 5:46 PM, Clint Byrum cl...@fewbar.com wrote:


Excerpts from Tim Schnell's message of 2013-11-25 14:51:39 -0800:

Hi Steve,

As one of the UI developers driving the requirements behind these new
blueprints I wanted to take a moment to assure you and the rest of the
Openstack community that the primary purpose of pushing these
requirements
out to the community is to help improve the User Experience for Heat for
everyone. Every major UI feature that I have implemented for Heat has
been
included in Horizon, see the Heat Topology, and these requirements
should
improve the value of Heat, regardless of the UI.


Stack/template metadata
We have a fundamental need to have the ability to reference some
additional metadata about a template that Heat does not care about.
There
are many possible use cases for this need but the primary point is that
we
need a place in the template where we can iterate on the schema of the
metadata without going through a lengthy design review. As far as I
know,
we are the only team attempting to actually productize Heat at the
moment
and this means that we are encountering requirements and requests that
do
not affect Heat directly but simply require Heat to allow a little
wiggle
room to flesh out a great user experience.


Wiggle room is indeed provided. But reviewers need to understand your
motivations, which is usually what blueprints are used for. If you're
getting push back, it is likely because your blueprints to not make the
use cases and long term vision obvious.

Clint, can you be more specific on what is not clear about the use case?
What I am seeing is that the use case of meta data is not what is being
contested, but that the Blueprint of where meta data should go is being
contested by only a few (but not all) of the core devs.  The Blueprint for
in-template metadata was already approved for Icehouse, but now that work
has been delivered on the implementation of that blueprint, the blueprint
itself is being contested:

Re: [openstack-dev] [horizon] Javascript development improvement

2013-11-21 Thread Ladislav Smola

Hello,

as long as Node won't be a production dependency, it shouldn't be a 
problem, right? I give +1 to that.


Regards
Ladislav

On 11/20/2013 05:01 PM, Maxime Vidori wrote:

Hi all, I know it is pretty annoying but I have to resurrect this subject.

With the integration of AngularJS into Horizon we will encounter a lot of 
issues with javascript. I ask you to reconsider bringing back Node.js as a 
development platform. I am not talking about production; we all agree that 
Node is not ready for production, and we do not want it as a backend. But the 
facts are that we need a lot of its features, which will improve testing and 
development. Currently, we do not have any javascript code quality tooling: 
jslint is a great tool and can be used easily from node. AngularJS also 
provides end-to-end testing based on Node.js; again, testing is important, 
especially if we start to put more logic into JS. Selenium is used just to 
run qUnit tests; we can bring these tests into node and have a clean, 
unified testing platform. Tests will be easier to perform.

Finally (do not punch me in the face), lessc, which is used for Bootstrap, 
is completely integrated into it. I am afraid that modern javascript 
development cannot be performed without this tool.

Regards

Maxime Vidori


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-20 Thread Ladislav Smola
Ok, I'll try to summarize what will be done in the near future for 
Undercloud monitoring.


1. There will be a Central agent running on the same host (hosts, once the 
central agent horizontal scaling is finished) as Ironic.
2. It will have an SNMP pollster. The SNMP pollster will be able to get the 
list of hosts and their IPs from Nova (last time I
checked, it was in Nova) so it can poll them for stats. Hosts to 
poll can also be defined statically in a config file.
3. It will have an IPMI pollster that will poll the Ironic API, getting the 
list of hosts and a fixed set of stats (basically everything

that we can get :-)).
4. Ironic will also emit messages (basically all events regarding the 
hardware) and send them directly to the Ceilometer collector.

Does that seem correct? I think that is the baseline we must have to 
get the Undercloud monitored. We can then build on that.
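
As a very rough sketch (class and field names here are my assumptions, 
not the real Ceilometer plugin API), a pollster on the central agent 
could look something like this:

    import datetime

    from ceilometer import plugin, sample

    class HardwareTemperaturePollster(plugin.PollsterBase):
        def get_samples(self, manager, cache, resources=None):
            for host in resources or []:
                yield sample.Sample(
                    name='hardware.temperature',
                    type=sample.TYPE_GAUGE,
                    unit='C',
                    volume=poll_temperature(host),  # hypothetical helper
                    user_id=None,
                    project_id=None,
                    resource_id=host,
                    timestamp=datetime.datetime.utcnow().isoformat(),
                    resource_metadata={})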


Kind regards,
Ladislav

On 11/20/2013 09:22 AM, Julien Danjou wrote:

On Tue, Nov 19 2013, Devananda van der Veen wrote:


If there is a fixed set of information (eg, temp, fan speed, etc) that
ceilometer will want,

Sure, we want everything.


let's make a list of that and add a driver interface
within Ironic to abstract the collection of that information from physical
nodes. Then, each driver will be able to implement it as necessary for that
vendor. Eg., an iLO driver may poll its nodes differently than a generic
IPMI driver, but the resulting data exported to Ceilometer should have the
same structure.

I like the idea.
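
A rough sketch of the kind of vendor-neutral interface being proposed 
(all names here are hypothetical; only the shape matters):

    import abc

    class HardwareMetricsDriver(object):
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def get_sensor_data(self, node):
            """Return a vendor-independent dict for one node, e.g.
            {'temperature_c': 42, 'fan_rpm': 5400}, however the
            underlying driver (generic IPMI, iLO, ...) obtains it."""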


An SNMP agent doesn't fit within the scope of Ironic, as far as I see, so
this would need to be implemented by Ceilometer.

We're working on adding pollster for that indeed.


As far as where the SNMP agent would need to run, it should be on the
same host(s) as ironic-conductor so that it has access to the
management network (the physically-separate network for hardware
management, IPMI, etc). We should keep the number of applications with
direct access to that network to a minimum, however, so a thin agent
that collects and forwards the SNMP data to the central agent would be
preferable, in my opinion.

We can keep things simple by having the agent only do that polling, I
think. Building a new agent sounds like it will complicate deployment
again.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Introduction of AngularJS in membership workflow

2013-11-12 Thread Ladislav Smola

+1000 Excellent

I am really excited about having a heavily tested, proper client-side 
layer. This
is very needed, given that the amount of javascript in Horizon is rising. 
The hacked-
together jQuery libraries that are there now are very hard to orient 
in and will

be hard to maintain in the future.

Not sure what the Horizon consensus will be, but I would recommend writing
new libraries only in AngularJS, with proper tests. In the meantime we 
can practise
AngularJS by rewriting the existing stuff. I am really looking forward 
to picking

something to rewrite. :-)

Also I am not sure how the Horizon community feels about 'syntax sugar' 
libraries
for Javascript and Angular. But from my experience, using Coffeescript 
and Sugarjs
makes programming in Javascript and Angular a fairy tale (you know, 
rainbows and

unicorns everywhere you look). :-D

Thanks for working on this.

Ladislav

On 11/11/2013 08:21 PM, Jordan O'Mara wrote:

Hello Horizon!

On November 11th, we submitted a patch to introduce AngularJS into
Horizon [1]. We believe AngularJS adds a lot of value to Horizon.

First, AngularJS allows us to write HTML templates for interactive
elements instead of doing jQuery-based DOM manipulation. This allows
the JavaScript layer to focus on business logic, provides easy to
write JavaScript testing that focuses on the concern (e.g. business
logic, template, DOM manipulation), and eases the on-boarding for new
developers working with the JavaScript libraries.
Second, AngularJS is not an all or nothing solution and integrates
with the existing Django templates. For each feature that requires
JavaScript, we can write a self-contained directive to handle the DOM,
a template to define our view and a controller to contain the business
logic. Then, we can add this directive to the existing template. To
see an example in action look at _workflow_step_update_member.html
[2]. It can also be done incrementally - this isn't an all-or-nothing
approach with a massive front-end time investment, as the Angular
components can be introduced over time.

Finally, the initial work to bring AngularJS to Horizon provides a
springboard to remove the DOM Database (i.e. hidden-divs) used on
the membership page (and others). Instead of abusing the DOM, we can
instead expose an API for membership data, add an AngularJS resource
(i.e. reusable representation of API entities) for the API. The data
can then be loaded asynchronously, allowing the HTML to focus on
expressing a semantic representation of the data to the user.
Please give our patch a try! You can find the interactions on
Domains/Groups, Flavors/Access (this form does not seem to work in
current master or on my patch) and Projects/Users & Groups. You should
notice that it behaves... exactly the same!

We look forward to your feedback.

Jordan O'Mara & Jirka Tomasek

[1] [https://review.openstack.org/#/c/55901/] [2] 
[https://github.com/jsomara/horizon/blob/angular2/horizon/templates/horizon/common/_workflow_step_update_members.html]



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ladislav Smola

On 10/08/2013 10:27 AM, Robert Collins wrote:

Perhaps the best thing to do here is to get tuskar-ui to be part of
the horizon program, and utilise its review team?


This is planned. But it won't happen soon.



On 8 October 2013 19:31, Tzu-Mainn Chen tzuma...@redhat.com wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    |  349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  |  329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          |  248    1   25    1  221   89.5% |   13 (  5.9%)  |
|    derekh **     |   88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so that's easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  |  179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          |  179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    |  129    3   39    2   85   67.4% |    2 (  2.3%)  |
|    derekh **     |   41    0   11    0   30   73.2% |    0 (  0.0%)  |
|      slagle      |   37    0   11   26    0   70.3% |    3 ( 11.5%)  |
|    ghe.rivero    |   28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether that's to become core, or to keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi,

I feel like I should point out that before tuskar merged with tripleo, we had 
some distinction between the team working on the tuskar api and the team 
working on the UI, with each team focusing reviews on its particular expertise. 
 The latter team works quite closely with horizon, to the extent of spending a 
lot of time involved with horizon development and blueprints.  This is done so 
that horizon changes can be understood and utilized by tuskar-ui.

For that reason, I feel like a UI core reviewer split here might make sense. . 
. ?  tuskar-ui doesn't require as many updates as tripleo/tuskar api, but a 
certain level of horizon and UI expertise is definitely helpful in reviewing 
the UI patches.

Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ladislav Smola

Hi,

it seems that not everyone agrees on what the 'metric' for a core
reviewer should be.

Also, what justifies us giving a +1 or a +2?

Could this be a topic at today's meeting?

Ladislav











___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-07 Thread Ladislav Smola

Hello Clint,

thank you for your feedback.

On 10/04/2013 06:08 PM, Clint Byrum wrote:

Excerpts from Ladislav Smola's message of 2013-10-04 08:28:22 -0700:

Hello,

just a few words about the role of Ceilometer in the Undercloud and the
work in progress.

Why we need Ceilometer in the Undercloud:
---

In Tuskar-UI, we will display a number of statistics that will show
Undercloud metrics.
Later also a number of alerts and notifications that will come from
Ceilometer.

But I do suspect that Heat will use Ceilometer Alarms, similar to the
way it uses them for auto-scaling in the Overcloud. Can anybody confirm?

I have not heard of anyone wanting to auto-scale baremetal for the
purpose of scaling out OpenStack itself. There is certainly a use case
for it when we run out of compute resources and happen to have spare
hardware around. But unlike on a cloud where you have several
applications all contending for the same hardware, in the undercloud we
have only one application, so it seems less likely that auto-scaling
will be needed. We definitely need scaling, but I suspect it will not
be extremely elastic.


Yeah, that's probably true. What I had in mind was something like
suspending hardware that is not used at the time, e.g. has no VMs
running on it, to save energy. And starting it again when we run out
of compute resources, as you say.


What will be needed, however, is metrics for the rolling updates feature
we plan to add to Heat. We want to make sure that a rolling update does
not adversely affect the service level of the running cloud. If we're
early in the process with our canary-based deploy and suddenly CPU load is
shooting up on all of the completed nodes, something, perhaps Ceilometer,
should be able to send a signal to Heat, and trigger a rollback.


That is how Alarms work now; you just define the Alarm inside the
Heat template. Check the example:
https://github.com/openstack/heat-templates/blob/master/cfn/F17/AutoScalingCeilometer.yaml


What is planned in near future
---

The Hardware Agent capable of obtaining statistics:
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
It uses the SNMP inspector to obtain the stats. I have tested it with
the Devtest TripleO setup
and it works.

The planned architecture is to have one Hardware Agent (to be merged
into the central agent code) placed on the Control Node (or basically
anywhere). That agent will poll SNMP daemons placed on the hardware in
the Undercloud (baremetals, network devices). Any objections, or
reasons why this is a bad idea?

We will have to create a Ceilometer image element; an snmpd element is
already there, but we should test it. Anybody volunteer for this task?
There will be a hard part: getting the configuration right (firewall,
keystone, snmpd.conf) so it is all set up in a clean and secure way.
That would require a seasoned sysadmin to at least observe the thing.
Any volunteers here? :-)

The IPMI inspector for the Hardware agent has just started:
https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices
It seems it should query the Ironic API, which would provide the data
samples. Any objections?
Any volunteers for implementing this on the Ironic side?

devananda and lifeless's greatest concern was the scalability of a
central agent. Ceilometer is not doing any scaling right now, but
horizontal scaling of the central agent is planned for the future. So
this is a very important task for us for larger deployments. Any
feedback about scaling? Or about changing the architecture for better
scalability?


I share their concerns. For < 100 nodes it is no big deal. But centralized
monitoring has a higher cost than distributed monitoring. I'd rather see
agents on the machines themselves do a bit more than respond to polling
so that load is distributed as much as possible and non-essential
network chatter is reduced.


Right now, for the central agent, it is a matter of configuration.
You can set up one central agent fetching all baremetals from Nova. Or
you can bake the central agent into each baremetal and set it to poll
only localhost. Or, as one of the distributed architectures planned as
a configuration option, have a node (a Management Leaf node) that
manages a bunch of hardware, with the central agent baked into it.

What the agent then does is process the data, pack it into a message
and send it to the OpenStack message bus (which should be heavily
scalable), where it is collected by a Collector (which can have many
workers) and saved to the database.



I'm extremely interested in the novel approach that Assimilation
Monitoring [1] is taking to this problem, which is to have each node
monitor itself and two of its immediate neighbors on a switch and
some nodes monitor an additional node on a different switch. Failures
are reported to an API server which uses graph database queries to

Re: [openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-07 Thread Ladislav Smola

Hello Chris,

That would be much appreciated, thank you. :-)

Kind Regards,
Ladislav

On 10/05/2013 12:12 AM, Chris Jones wrote:

Hi

On 4 October 2013 16:28, Ladislav Smola lsm...@redhat.com wrote:


test it. Anybody volunteer for this task? There will be a hard
part: getting the configuration right (firewall, keystone,
snmpd.conf) so it is all set up in a clean and secure way. That
would require a seasoned sysadmin to at least observe the thing.
Any volunteers here? :-)


I'm not familiar at all with Ceilometer, but I'd be happy to discuss 
how/where things like snmpd are going to be exposed, and look over the 
resulting bits in tripleo :)


--
Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-04 Thread Ladislav Smola

Hello,

just a few words about the role of Ceilometer in the Undercloud and the
work in progress.


Why we need Ceilometer in the Undercloud:
---

In Tuskar-UI, we will display a number of statistics that will show
Undercloud metrics.
Later also a number of alerts and notifications that will come from
Ceilometer.


But I do suspect that Heat will use Ceilometer Alarms, similar to the
way it uses them for auto-scaling in the Overcloud. Can anybody confirm?

What is planned in near future
---

The Hardware Agent capable of obtaining statistics:
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
It uses the SNMP inspector to obtain the stats. I have tested it with
the Devtest TripleO setup and it works.

The planned architecture is to have one Hardware Agent (to be merged
into the central agent code) placed on the Control Node (or basically
anywhere). That agent will poll SNMP daemons placed on the hardware in
the Undercloud (baremetals, network devices). Any objections, or
reasons why this is a bad idea?
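
For illustration, the polling the agent does boils down to an SNMP GET
against each node. A rough pysnmp sketch (the community string, address
and OID below are placeholder examples, not the agent's actual
configuration):

    from pysnmp.entity.rfc3413.oneliner import cmdgen

    generator = cmdgen.CommandGenerator()
    # Poll the 1-minute load average (UCD-SNMP laLoad.1) from one node.
    error_indication, error_status, error_index, var_binds = generator.getCmd(
        cmdgen.CommunityData('public'),                  # placeholder v2c community
        cmdgen.UdpTransportTarget(('192.0.2.10', 161)),  # placeholder node address
        '1.3.6.1.4.1.2021.10.1.3.1')

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print('%s = %s' % (name, value))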


We will have to create a Ceilometer image element; an snmpd element is
already there, but we should test it. Anybody volunteer for this task?
There will be a hard part: getting the configuration right (firewall,
keystone, snmpd.conf) so it is all set up in a clean and secure way.
That would require a seasoned sysadmin to at least observe the thing.
Any volunteers here? :-)


The IPMI inspector for the Hardware agent has just started:
https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices
It seems it should query the Ironic API, which would provide the data
samples. Any objections?
Any volunteers for implementing this on the Ironic side?

devananda and lifeless's greatest concern was the scalability of a
central agent. Ceilometer is not doing any scaling right now, but
horizontal scaling of the central agent is planned for the future. So
this is a very important task for us for larger deployments. Any
feedback about scaling? Or about changing the architecture for better
scalability?


Thank you for any feedback.

Kind Regards,
Ladislav









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-25 Thread Ladislav Smola

Hello,
thank you very much for the feedback.

Ok, I have tried to go through the triggers and events.

Let me summarize, so we can confirm I understand it correctly:
==

The trigger:
--

- has a pattern - something like a starting condition that leads to
  creating a trigger
- has a criteria - the condition that has to be fulfilled within some
  given timeout after the trigger is created:
  --- then it can run some action (triggered notification pipeline) and
      it saves the trigger (so there is a query-able history of the
      triggers)
  --- or the timeout action (an optional expiration notification
      pipeline).


Questions
-

1. In the example in
https://blueprints.launchpad.net/ceilometer/+spec/notifications-triggers
the pattern and criteria are conditions that check the appearance of
specific events.

What are the options for the conditions? What is the querying API?

2. Can the conditions be tied only to events? Or also to samples and
statistics, so I can build queries and conditions similar to those that
alarms have?

3. If I set e.g. a trigger for measuring the health of my baremetals
(checking disk failures), I could just set both conditions (pattern,
criteria) the same, observing some events marking a disk failure, right?

If there are disk failures, it would create a trigger for each disk
failure notification, right? So I could then browse the triggers to
check which resources had disk failures?

What are the querying options over the triggers? E.g. I would like to
get the number of triggers of some type on some resource_ids, from last
month, grouped by project?

Summary
==

If the trigger pattern and criteria support a general condition like
Alarms do, I believe this could work, yes.

Otherwise it seems we should use Alarms (and Alarm Groups) for checking
sample-based alerts, and Triggers for checking event (notification)
based alerts. So e.g. the health of hardware would likely be computed
from a combination of Alarms and Triggers.


On 09/24/2013 03:48 PM, Thomas Maddox wrote:

I think Dragon's BP for notification triggers would solve this problem.

Instead of looking at it as applying a single alarm to several resources,
you could instead leverage the similarities of the resources:
https://blueprints.launchpad.net/ceilometer/+spec/notifications-triggers.

Compound that with configurable events:
https://blueprints.launchpad.net/ceilometer/+spec/configurable-event-definitions

-Thomas

On 9/24/13 7:46 AM, Julien Danjou jul...@danjou.info wrote:


On Tue, Sep 24 2013, Ladislav Smola wrote:


Yes, it would be good if something like this were supported - a
relation of an alarm to multiple entities that are the result of a
sample-api query. Could it be worth creating a BP?

Probably indeed.

--
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-25 Thread Ladislav Smola

On 09/25/2013 01:51 AM, Gabriel Hurley wrote:

4. There is a thought about tagging the alarms with a user-defined tag,
so users can easily group alarms together and then watch them together
based on their tag.

The alarm API doesn't provide that directly, but you can imagine some sort of
filter based on the description matching some text.

I'd love to see this as an extension to the alarm API. I think tracking 
metadata about alarms (e.g. tags or arbitrary key-value pairs) would be 
tremendously useful.


yes, that sounds like a very good idea.


5. There is a thought about generating default alarms that could
observe the most important things (verifying good behaviour, showing
bad behaviour).

Does anybody have an idea which alarms could be the most important and
usable for everybody?

I'm not sure you want to create alarms by default; alarms are resources, and I
don't think we should create resources without the user asking for it.

Seconded.


This continues as the Alarm Groups or Triggers conversation in this thread.


Maybe you were talking about generating alarm templates? You could start
with things like CPU usage staying at 90% for more than 1 hour, and having
an action that alerts the user via mail.
Same for disk usage.

We do this kind of template for common user tasks with security group rules 
already. The same concept applies to alarms.



Ok, will check this out.

Thank you for the feedback,
Ladislav


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] [UI] Identifying metrics to show in Tuskar UI detail pages

2013-09-23 Thread Ladislav Smola

Liz,

thank you very much for this. I will try to sort it and pick the ones
that will need to be implemented in Ceilometer. I will comment on the
etherpad.


FYI, there are only two agents that collect hardware-related data now.
This one is almost complete:
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
This one has just started:
https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices

Thank you,
Ladislav

On 09/23/2013 04:55 PM, Liz Blanchard wrote:

Ladislav & All,

I've spent some time the last few days trying to identify the metrics that a 
user would want to view around the details pages for Resource Classes, Racks, 
and Nodes. The notes have been captured here, where I've also tried to call out 
the items that are available in Ceilometer currently:
https://etherpad.openstack.org/tuskar-metrics

A lot of these metrics seem as though they will need to be calculated as 
aggregates of metrics that are currently available, but some are completely 
new, I think. Please take a look as you have time and feel free to comment and 
add things to this etherpad if you identify any metrics that should be shown in 
these views.

I hope to have some basic wireframes based on these metrics to review with the 
UX community and UI team by the end of the week.

Thanks,
Liz


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-19 Thread Ladislav Smola

Hello everyone,

I am in the process of implementing the Ceilometer Alarm API. Here are the
blueprints.


https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-api
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page


While I am waiting for some Ceilometer patches to complete, I would like 
to start a discussion about the bp. 
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page:


1. The points 1-4 form some sort of simple version of the page that
uses all the basic alarm-api features. Do you think we need them all?
Any feedback on them? Enhancements?


2. There is a thought that we should maybe divide Alarms into (System,
User-defined). The only system alarms now are those set up by Heat and
used for auto-scaling.


3. There is a thought about watching the correlation of multiple alarm
histories in one chart (either the alarm histories, or the real
statistics the alarms are defined by). Do you think it will be needed?
Any real-life examples you have in mind?


4. There is a thought about tagging the alarms with a user-defined tag,
so users can easily group alarms together and then watch them together
based on their tag.


5. There is a thought about generating default alarms that could
observe the most important things (verifying good behaviour, showing
bad behaviour). Does anybody have an idea which alarms could be the
most important and usable for everybody?


6. There is a thought about making the overview pages customizable by
users, so they can really observe what they need (this includes
Ceilometer statistics and alarms).


Could you please give me some feedback on the points above (or anything
else related)? After we collect what we need, I will push this to the
UX guys so they can prepare some wireframes of how it could look, and
we can start discussing the UX.
E.g. even the alarm management from point 1 could be pretty
challenging, as we have to come up with a sane UI for defining the
general statistics query that defines the alarm.


Thank you very much for any feedback.

--Ladislav


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Ladislav Smola

On 09/19/2013 10:08 AM, Tomas Sedovic wrote:

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO 
developers face to face and discuss the visions and goals of our 
projects.


Tuskar's ultimate goal is to have a full OpenStack management
solution: letting the cloud operators try OpenStack, install it, keep 
it running throughout the entire lifecycle (including bringing in new 
hardware, burning it in, decommissioning), help to scale it, secure 
the setup, monitor for failures, project the need for growth and so on.


And to provide a good user interface and API to let the operators 
control and script this easily.


Now, the scope of the OpenStack Deployment program (TripleO) includes 
not just installation, but the entire lifecycle management (from 
racking it up to decommissioning). Among other things they're thinking 
of are issue tracker integration and inventory management, but these 
could potentially be split into a separate program.


That means we do have a lot of goals in common and we've just been 
going at them from different angles: TripleO building the fundamental 
infrastructure while Tuskar focusing more on the end user experience.


We've come to a conclusion that it would be a great opportunity for 
both teams to join forces and build this thing together.


The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make 
it into the *I* one)


TripleO would get a UI and more developers trying it out and helping 
with setup and integration.


This shouldn't even need to derail us much from the rough roadmap we 
planned to follow in the upcoming months:


1. get things stable and robust enough to demo in Hong Kong on real 
hardware

2. include metrics and monitoring
3. security

What do you think?

Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


That certainly makes a lot of sense to me.
+1

-- Ladislav



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] All needed Tuskar metrics and alerts mapped to what Ceilometer supports

2013-09-17 Thread Ladislav Smola

Confirmation about the metrics of the Hardware agent (Baremetal agent)
=

It is collecting:
- cpu, memory space, disk space, network traffic (the same agent will be
running on all services, collecting the same data)


It should be running on:
- the physical servers on which Glance, Cinder, Quantum, Swift, the Nova
compute node and the Nova controller run
- the network devices used in the OpenStack environment (switches,
firewalls ...)


Supported metrics


* CPU utilisation for each CPU (percentage) (as cpu.util.1min,
  cpu.util.5min, cpu.util.15min)
* RAM utilisation (GB) (as memory.size.total, memory.size.used)
* Disk utilisation (GB) (as disk.size.total, disk.size.used)
* Incoming traffic for each NIC (Mbps) (as network.incoming.bytes)
* Outgoing traffic for each NIC (Mbps) (as network.outgoing.bytes)
  - also track network.outgoing.errors, network.bandwidth.bytes
* Swap utilisation (GB)
  - this should be part of disk utilisation; we will just have to
    recognize the swap disk
* Number of currently running instances and the associated flavours
  (Ceilometer-Nova, using instance:type and group_by resource_id) - this
  info will be queried from the Overcloud Ceilometer


Missing metrics

* System load -- see /proc/loadavg (percentage)

as described here 
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
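
Once these meters land, Tuskar-UI should be able to fetch them through
python-ceilometerclient like any other meter. A rough sketch of the
kind of call involved (untested; the credentials, endpoint and resource
id are placeholders):

    from ceilometerclient import client

    cclient = client.get_client('2',
        os_username='admin',
        os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://undercloud.example.com:5000/v2.0')

    # 15-minute CPU utilisation statistics for one baremetal node,
    # aggregated into 10-minute buckets.
    stats = cclient.statistics.list(
        meter_name='cpu.util.15min',
        q=[{'field': 'resource_id', 'op': 'eq', 'value': 'node-uuid'}],
        period=600)
    for s in stats:
        print('%s avg=%s max=%s' % (s.period_start, s.avg, s.max))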






[openstack-dev] [Tuskar] All needed Tuskar metrics and alerts mapped to what Ceilometer supports

2013-09-16 Thread Ladislav Smola

Hello,

this is a follow-up of T.Sedovic's old email, trying to identify all
the metrics we will need to track for Tuskar.
The Ceilometer API for Horizon is now in progress, so we have time to
finish the list of metrics and alarms we need. That may also raise
requests for some Ceilometer API optimization.


This is meant as an open conversation that will lead to the final list.


Measurements
=

The old list sent by tsedovic:
-

* CPU utilisation for each CPU (percentage) (Ceilometer-Nova as cpu_util)
* RAM utilisation (GB) (Ceilometer-Nova as memory)
  - I assume this is the used value and the total value can be got from
    the service itself; needs confirmation
* Swap utilisation (GB) (Ceilometer-Nova as disk.ephemeral.size)
  - I assume this is the used value and the total value can be got from
    the service itself; needs confirmation
* Disk utilisation (GB) (Ceilometer-Cinder as volume.size and
  Ceilometer-Swift as storage.objects.size)
  - I assume this is the used value and the total value can be got from
    the service itself; needs confirmation
* System load -- see /proc/loadavg (percentage) (--)
* Incoming traffic for each NIC (Mbps) (Ceilometer-Nova as
  network.incoming.bytes)
* Outgoing traffic for each NIC (Mbps) (Ceilometer-Nova as
  network.outgoing.bytes)
  - It is tied to the VM interface now; I expect the Baremetal agent
    (Hardware agent) will use NICs; needs confirmation
* Number of currently running instances and the associated flavours
  (Ceilometer-Nova, using instance:type and group_by resource_id)


The additional meters used in wireframes
-

jcoufal, could you add the additional measurements from the latest wireframes?


The measurements the Ceilometer supports now
---

http://docs.openstack.org/developer/ceilometer/measurements.html

Feel free to include the others in the wireframes, jcoufal (I guess
there will have to be different overview pages for different Resource
Classes, based on their service type).

I am in the process of finding out whether all of these measurements
will also be collected by the Baremetal agent (Hardware agent). But I
would say yes, from the description it has (except the VM-specific
metrics like vcpus, I guess).

The missing meters
-

We will probably have to implement these (meaning implementing
pollsters for the Baremetal agent (Hardware agent) that will collect
these metrics):

* System load -- see /proc/loadavg (percentage) (probably for all
  services?)

- Please add other Baremetal metrics you think we will need.
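
For the system load meter, the reading side of such a pollster would be
trivial. A minimal sketch (the surrounding pollster plumbing and the
meter name are assumptions, not existing Ceilometer code; to express
the load as a percentage it would also need to be divided by the CPU
count):

    def read_loadavg():
        # /proc/loadavg looks like: '0.42 0.35 0.30 1/234 5678';
        # the first three fields are the 1/5/15 minute load averages.
        with open('/proc/loadavg') as f:
            one, five, fifteen = f.read().split()[:3]
        return float(one), float(five), float(fifteen)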


Alerts


Setting an Alarm
---

A simplified explanation of setting an alarm:
In order to have alerts, you have to set an alarm first. An alarm can
contain any statistics query, a threshold and an operator (e.g. fire
the alarm when avg cpu_util > 90% on all instances of project_1).
We can combine more alarms into one complex alarm. And you can browse
alarms.

(There can be actions set on an alarm, but more about that later.)
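
To make the cpu_util example concrete, creating such an alarm through
python-ceilometerclient looks roughly like this (a sketch only; the
exact alarm fields have shifted between Ceilometer releases, the
credentials and webhook are placeholders, and scoping it to project_1's
instances would go through the alarm's query/matching-metadata fields):

    from ceilometerclient import client

    cclient = client.get_client('2',
        os_username='admin', os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://undercloud.example.com:5000/v2.0')

    # Fire when the average cpu_util over 10-minute periods stays above
    # 90% for three consecutive evaluations.
    alarm = cclient.alarms.create(
        name='cpu-high',
        meter_name='cpu_util',
        statistic='avg',
        comparison_operator='gt',
        threshold=90.0,
        period=600,
        evaluation_periods=3,
        alarm_actions=['http://example.com/alarm-webhook'])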

Showing alerts
---

1. I would be bold enough to distinguish system meters (e.g. similar to
cpu_util > 90%, used for Heat autoscaling) and user-defined meters (the
ones defined in the UI). Will we show both in the UI? Probably in
different sections. System meters will require extra caution.


2. For the table view of alarms, I would see it as a general
filterable, order-able table of alarms, so we can easily show something
like e.g. all Nova alarms, or all alarms for cpu_util with a condition
> 90%.
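
Such a table maps fairly directly onto the alarm API's query filters. A
rough sketch of the backing call, reusing the cclient handle from the
sketch above (field names as in the v2 alarm API of the time, so again
hedged):

    # All alarms on the cpu_util meter; the table view would add further
    # ordering and client-side filters on threshold/operator.
    cpu_alarms = cclient.alarms.list(
        q=[{'field': 'meter_name', 'op': 'eq', 'value': 'cpu_util'}])
    high_cpu = [a for a in cpu_alarms
                if a.comparison_operator == 'gt' and a.threshold >= 90.0]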


3. Now there is an ongoing conversation with eglynn about how to show
the 'aggregate alarm stats' and the 'alarm time series':
https://wiki.openstack.org/wiki/Ceilometer/blueprints/alarm-audit-api-group-by#Discussion

Next to the overview page with predefined charts, we should have
general filterable, order-able charts (a similar interface to the table
view above).

Here is pictured one possible way the charts for Alarms could look on
the overview page:
http://file.brq.redhat.com/~jcoufal/openstack-m/user_stories/racks_detail-overview.pdf
Any feedback is welcome. Also we should figure out which Alarms will be
used for flagging that e.g. something bad is happening (like a health
chart?). And which alarms to set and show by default (a lot of them are
already being set by e.g. Heat).

4. There is a load of alerts used in the wireframes that are not
currently supported in Ceilometer (alerts can be based only on existing
measurements), like instance failures, disk failures, etc. We should
write those down and probably write agents and pollsters for them. It
makes sense to integrate them into Ceilometer, whatever they will be.


Dynamic Ceilometer


Due to the dynamic architecture of the 

Re: [openstack-dev] [horizon] Add a library js for creating charts

2013-08-28 Thread Ladislav Smola
The Rickshaw library is in master. Building the reusable charts on top
of it is in progress.


On 08/27/2013 02:51 PM, Chmouel Boudjnah wrote:

Julien Danjou jul...@danjou.info writes:


It sounds like a good plan to pick Rickshaw. Better building on top of
it, contributing back to it, rather than starting cold or building a new
wheel.

+1

Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Add a library js for creating charts

2013-08-27 Thread Ladislav Smola
I have prepared a testing implementation of Rickshaw wrapped into a
general linechart and connected it to Ceilometer here:

https://review.openstack.org/#/c/35590/
(rendering is mostly copied from the examples, with some parts from
Maxime Vidori)


Rickshaw really works like a charm. I think it will be the best choice.

It is a work in progress and the statistics data still needs to be
formatted correctly, but it shows this could work.


I will extract the parts into the correct blueprints and start
blueprints for implementing the charts in the dashboard, then connect
them through dependencies, so people can start bringing this into the
dashboard.


There is an ongoing UX discussion about Ceilometer and the charts in
the dashboard and how they will look. I expect we will use scatterplot,
pie and bar charts (we are using these on overview pages in tuskar-ui),
so these charts should probably be packed in a similar manner (though
only the scatterplot is in Rickshaw)



On 08/27/2013 10:14 AM, Julien Danjou wrote:

On Mon, Aug 26 2013, Maxime Vidori wrote:


Currently, the charts for Horizon are directly created with D3. Maybe if we
add a js library on top of d3 it will be easier and development will be
faster. A blueprint was created at
https://blueprints.launchpad.net/horizon/+spec/horizon-chart.js We actually
need some reviews or feedback.

It sounds like a good plan to pick Rickshaw. Better building on top of
it, contributing back to it, rather than starting cold or building a new
wheel.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev