Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-12 Thread Jiří Stránský

On 9.9.2014 20:32, Gregory Haynes wrote:

Hello everyone!

I have been working on a meta-review of StevenK's reviews and I would
like to propose him as a new member of our core team.


+1



As I'm sure many have noticed, he has been above our stats requirements
for several months now. More importantly, he has been reviewing a wide
breadth of topics and seems to have a strong understanding of our code
base. He also seems to be doing a great job at providing valuable
feedback and being attentive to responses on his reviews.

As such, I think he would make a great addition to our core team. Can
the other core team members please reply with your votes if you agree or
disagree.

Thanks!
Greg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a 
CLI for managing the deployment providing the same fundamental features 
as the UI." With the planned architecture changes (making tuskar-api thinner 
and getting rid of proxying to other services), there's no obvious 
way to achieve that. We need to figure this out. I present a few options 
and look forward to feedback.


Previously, we had planned the Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

This meant that the integration logic of how to use heat, ironic and 
other services to manage an OpenStack deployment lay within 
*tuskar-api*. This gave us an easy way towards having a CLI - just build 
tuskarclient to wrap the abilities of tuskar-api.



Nowadays we talk about using the heat and ironic (and neutron? nova? 
ceilometer?) APIs directly from the UI, similarly to what Dashboard does. 
But our approach cannot be exactly the same as in Dashboard's case. 
Dashboard is quite a thin wrapper on top of the python-*clients, which 
means there's a natural parity between what the Dashboard and the CLIs 
can do.


We're not wrapping the APIs directly (if wrapping them directly were 
sufficient, we could just use Dashboard and not build Tuskar UI at 
all). We're building a separate UI because we need *additional logic* on 
top of the APIs. E.g. instead of directly working with Heat templates 
and Heat stacks to deploy the overcloud, the user will get to pick how many 
control/compute/etc. nodes they want to have, and we'll take care of the 
Heat things behind the scenes. This makes Tuskar UI significantly thicker 
than Dashboard is, and the natural parity between CLI and UI vanishes. 
By having this logic in the UI, we're effectively preventing its use from 
the CLI. (If I were bold I'd also think about integrating Tuskar with other 
software, which would also be prevented if we keep the business logic in 
the UI, but I'm not absolutely positive about use cases here.)


Now this raises a question - how do we get the CLI reasonably on par with 
the abilities of the UI? (Or am I wrong that Anna the infrastructure 
administrator would want that?)


Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there. 
Make it consume other python-*clients. (This is an unusual approach 
though; I'm not aware of any python-*client that consumes and 
integrates other python-*clients.)


2) Make a thicker tuskar-api and put the business logic there. (This is 
the original approach with consuming other services from tuskar-api. The 
feedback on this approach was mostly negative though.)


3) Keep tuskar-api and python-tuskarclient thin, make another library 
sitting between Tuskar UI and all the python-*clients. This new project 
would contain the logic of using undercloud services to provide the 
Tuskar experience; it would expose Python bindings for use within Tuskar 
UI and contain a CLI. (Think of it like a traditional python-*client, but 
instead of consuming a REST API, it would consume other python-*clients. I 
wonder if this is overengineering. We might end up with too many 
projects doing too few things? :) )


4) Keep python-tuskarclient thin, but build a separate CLI app that 
would provide the same integration features as Tuskar UI does. (This would 
lead to code duplication. Whether this is bearable depends on the actual 
amount of logic to duplicate.)



Which of the options do you see as best? Did I miss some better option? Am 
I just being crazy and trying to solve a non-issue? Please tell me :)


Please don't consider the time aspect of this; focus rather on what's 
the right approach and where we want to get eventually. (We might want to 
keep a thick Tuskar UI for Icehouse so as not to let all hell break loose; 
there will be enough refactoring already.)



Thanks

Jirka



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský
A few clarifications added - next time I'll need to triple-read after 
myself :)


On 11.12.2013 13:33, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

This meant that the integration logic of how to use heat, ironic and
other services to manage an OpenStack deployment lied within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all).


Sorry, this should have said "not build Tuskar *UI* at all".


We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If i were bold i'd also think about integrating Tuskar with other
software which would be prevented too if we keep the business logic in
UI, but i'm not absolutely positive about use cases here).

Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am i wrong that Anna the infrastructure
administrator would want that?)

Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though, i'm not aware of any python-*client that would consume and
integrate other python-*clients.)

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

3) Keep tuskar-api and python-tuskarclient thin, make another library
sitting between Tuskar UI and all python-***clients. This new project
would contain the logic of using undercloud services to provide the
tuskar experience it would expose python bindings for Tuskar UI and


"expose python bindings for Tuskar UI" is ambiguous - to be more 
precise: "expose python bindings for use within Tuskar UI".



contain a CLI. (Think of it like traditional python-*client but instead
of consuming a REST API, it would consume other python-*clients. I
wonder if this is overengineering. We might end up with too many
projects doing too few things? :) )

4) Keep python-tuskarclient thin, but build a separate CLI app that
would provide same integration features as Tuskar UI does. (This would
lead to code duplication. Depends on the actual amount of logic to
duplicate if this is bearable or not.)


Which of the options you see as best? Did i miss some better option? Am
i just being crazy and trying to solve a non-issue? Please tell me :)

Please don't consider the time aspect of this, focus rather on what's
the right approach, where we want to get eventually. (We might want to
keep a thick Tuskar UI for Icehouse not to set the hell loose, there
will be enough refactoring already.)


Thanks

Jirka



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

On 11.12.2013 16:43, Tzu-Mainn Chen wrote:

Thanks for writing this all out!

- Original Message -

Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
new to the project. I only mention it again because it's relevant in
that I missed any of the discussion on why proxying from tuskar API to
other APIs is looked down upon. Jiri and I had been talking yesterday
and he mentioned it to me when I started to ask these same sorts of
questions.

On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the
individual APIs directly put a lot of knowledge into the clients that
had to be replicated across clients. In the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that and
there are workflows that will be implemented for tuskar needs. If the
tuskar API can't call out to other APIs, that workflow implementation
needs to be done at a higher layer, which means in each client.

Something I'm going to talk about later in this e-mail but I'll mention
here so that the diagrams sit side-by-side is the potential for a facade
layer that hides away the multiple APIs. Lemme see if I can do this in
ASCII:

tuskar-ui -+   +-tuskar-api
 |   |
 +-client-facade-+-nova-api
 |   |
tuskar-cli-+   +-heat-api

The facade layer runs client-side and contains the business logic that
calls across APIs and adds in the tuskar magic. That keeps the tuskar
API from calling into other APIs* but keeps all of the API call logic
abstracted away from the UX pieces.

* Again, I'm not 100% up to speed with the API discussion, so I'm going
off the assumption that we want to avoid API to API calls. If that isn't
as strict of a design principle as I'm understanding it to be, then the
above picture probably looks kinda silly, so keep in mind the context
I'm going from.

For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
 |
 +-tuskar-api-+-nova-api
 ||
tuskar-cli-++-heat-api

Where a tuskar client talked to the tuskar API to do tuskar things.
Whatever was needed to do anything tuskar-y was hidden away behind the
tuskar API.


This meant that the integration logic of how to use heat, ironic and
other services to manage an OpenStack deployment lied within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.

Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.


When you say python-*clients, is there a distinction between the CLI and
a bindings library that invokes the server-side APIs? In other words,
the CLI is packaged as CLI+bindings and the UI as GUI+bindings?


python-tuskarclient = Python bindings to tuskar-api + CLI, in one project

tuskar-ui doesn't have its own bindings; it depends on 
python-tuskarclient for bindings to tuskar-api (and on other clients for 
bindings to other APIs). The UI uses just the Python bindings part of 
the clients and doesn't interact with the CLI part. This is the general 
OpenStack way of doing things.





We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If i were bold i'd also think about integrating Tuskar with other
software which would be prevented too if we keep the business logic in
UI, but i'm not absolutely positive about use cases here).


I see your point about preventing its use from the CLI, but more
disconcerting IMO is that it just doesn't belong in the UI. That sort

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Jiří Stránský

On 12.12.2013 11:49, Radomir Dopieralski wrote:

On 11/12/13 13:33, Jiří Stránský wrote:

[snip]


TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.


[snip]


2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in a consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.



You're right that we should strive for atomicity, but I'm afraid putting 
the complex operations (which call other services) into tuskar-api will 
not solve the problem for us. (Jay and Ladislav already discussed the 
issue.)


If we have to do multiple API calls to perform a complex action, then 
we're in the same old situation. To get back to the rack creation 
example that Ladislav posted, it could still happen that Tuskar API 
would return an error to the UI like: "We haven't created the rack in 
Tuskar because we tried to modify info about 8 nodes in Ironic, but 
only 5 modifications succeeded. So we've tried to revert those 5 
modifications but we only managed to revert 2. Please figure this out 
and come back." We moved the problem, but didn't solve it.


I think that if we need something to be atomic, we'll need to make sure 
that one operation only writes to one service, where the single 
source of truth for that data lies, and make sure that the operation is 
atomic within that service. (See Ladislav's example with overcloud 
deployment via Heat in this thread.)


Thanks :)

Jirka



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Jiří Stránský

On 12.12.2013 14:26, Jiří Stránský wrote:

On 12.12.2013 11:49, Radomir Dopieralski wrote:

On 11/12/13 13:33, Jiří Stránský wrote:

[snip]


TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.


[snip]


2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.



You're right that we should strive for atomicity, but I'm afraid putting
the complex operations (which call other services) into tuskar-api will
not solve the problem for us. (Jay and Ladislav already discussed the
issue.)

If we have to do multiple API calls to perform a complex action, then
we're in the same old situation. Should i get back to the rack creation
example that Ladislav posted, it could still happen that Tuskar API
would return error to the UI like: We haven't created the rack in
Tuskar because we tried to modify info about 8 nodes in Ironic, but
only 5 modifications succeeded. So we've tried to revert those 5
modifications but we only managed to revert 2. Please figure this out
and come back. We moved the problem, but didn't solve it.

I think that if we need something to be atomic, we'll need to make sure
that one operation only writes to one service, where the single
source of truth for that data lies, and make sure that the operation is
atomic within that service. (See Ladislav's example with overcloud
deployment via Heat in this thread.)

Thanks :)

Jirka



And just to make it clear how that relates to locking: even if I can 
lock something within Tuskar API, I cannot lock the related data (which 
I need to use in the complex operation) in the other API (say Ironic). 
Things can still change under Tuskar API's hands. Again, we just move 
the unpredictability, we don't remove it.


Jirka



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-13 Thread Jiří Stránský

On 12.12.2013 17:10, Mark McLoughlin wrote:

On Wed, 2013-12-11 at 13:33 +0100, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward for feedback.

..


1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though, i'm not aware of any python-*client that would consume and
integrate other python-*clients.)

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)


FWIW, I think these are the two most plausible options right now.

My instinct is that tuskar could be a stateless service which merely
contains the business logic between the UI/CLI and the various OpenStack
services.

That would be a first (i.e. an OpenStack service which doesn't have a
DB) and it is somewhat hard to justify. I'd be up for us pushing tuskar
as a purely client-side library used by the UI/CLI (i.e. 1) as far it
can go until we hit actual cases where we need (2).


For the features that we identified for Icehouse, we probably don't need 
to store any data. But going forward, that's not entirely clear. We had 
a chat and identified some data that is probably not suited to storing 
in any of the other services (at least in their current state):


* Roles (like Compute, Controller, Object Storage, Block Storage) - for 
Icehouse we'll have these 4 roles hardcoded. Going forward, it's 
probable that we'll want to let admins define their own roles. (Is there 
an existing OpenStack concept that we could map Roles onto? Something 
similar to using Flavors as hardware profiles? I'm not aware of any.)


* Links to Flavors to use with the roles - to define what hardware a 
particular Role can be deployed on. For Icehouse we assume homogeneous 
hardware.


* Links to Images for use with the Role/Flavor pairs - we'll have 
hardcoded Image names for those hardcoded Roles in Icehouse. Going 
forward, having multiple undercloud Flavors associated with a Role, 
maybe each [Role-Flavor] pair should have its own image link defined - 
some hardware types (Flavors) might require special drivers in the image.


* Overcloud heat template - for Icehouse it's quite possible it might be 
hardcoded as well and we could just use Heat params to set it up, 
though I'm not 100% sure about that. Going forward, assuming dynamic 
Roles, we'll need to generate it.
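The four items above could be captured by a small data model. This is purely illustrative; the field names are invented and are not an actual Tuskar API schema.

```python
# Illustrative sketch of the data Tuskar might eventually store.
# All field names are invented for this example.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Role:
    name: str          # e.g. "Compute", "Controller" (hardcoded for Icehouse)
    flavor_id: str     # undercloud Flavor this Role deploys onto
    image_id: str      # Glance image for this Role/Flavor pair
    node_count: int = 0


@dataclass
class Overcloud:
    roles: List[Role] = field(default_factory=list)

    def heat_template(self):
        # Stand-in for generating the (eventually dynamic) overcloud
        # Heat template from the defined Roles.
        return {r.name: r.node_count for r in self.roles}
```

For Icehouse this could sit behind the Tuskar API returning hardcoded values, or live in a library; the shape of the data is the same either way.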


^ So all these things could probably be hardcoded for Icehouse, but not 
in the future. The guys suggested that if we'll be storing them eventually 
anyway, we might build these things into Tuskar API right now (returning 
hardcoded values for now, allowing modification post-Icehouse). That 
seems ok to me. The other approach of doing all this hardcoding 
initially in a library seems ok to me too.


I'm not 100% sure that we cannot store some of this info in existing 
APIs, but it didn't seem so to me (to us). We talked briefly about 
using Swift for it, but looking back at the list I wrote, it doesn't 
seem like very Swift-suited data.




One example worth thinking through though - clicking "deploy my
overcloud" will generate a Heat template and send it to the Heat API.

The Heat template will be fairly closely tied to the overcloud images
(i.e. the actual image contents) we're deploying - e.g. the template
will have metadata which is specific to what's in the images.

With the UI, you can see that working fine - the user is just using a UI
that was deployed with the undercloud.

With the CLI, it is probably not running on undercloud machines. Perhaps
your undercloud was deployed a while ago and you've just installed the
latest TripleO client-side CLI from PyPI. With other OpenStack clients
we say that newer versions of the CLI should support all/most older
versions of the REST APIs.

Having the template generation behind a (stateless) REST API could allow
us to define an API which expresses deploy my overcloud and not have
the client so tied to a specific undercloud version.


Yeah, I see the advantage of making it an API; Dean pointed this out 
too. The combination of this and the fact that we'll need to store the 
Roles and related data eventually anyway might be the tipping point.



Thanks! :)

Jirka



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2014-01-03 Thread Jiří Stránský

On 21.12.2013 06:10, Jay Pipes wrote:

On 12/20/2013 11:34 AM, Clint Byrum wrote:

Excerpts from Radomir Dopieralski's message of 2013-12-20 01:13:20 -0800:

On 20/12/13 00:17, Jay Pipes wrote:

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
thinner -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like deploy an undercloud on these nodes) in a way that
can handle failures in a graceful and/or atomic way.


Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
Pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.


I'm familiar with the traditional non-distributed software concept of a
mutex (or in Windows world, a critical section). But we aren't dealing
with traditional non-distributed software here. We're dealing with
highly distributed software where components involved in the
transaction may not be running on the same host or have much awareness
of each other at all.


Yes, that is precisely why you need to have a single point where they
can check if they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference with traditional, non-distributed software is that since the
components involved are communicating over a, relatively slow, network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.



Radomir, what Jay is suggesting is that it seems pretty unlikely that
two individuals would be given a directive to deploy OpenStack into a
single pool of hardware at such a scale where they will both use the
whole thing.

Worst case, if it does happen, they both run out of hardware, one
individual deletes their deployment, the other one resumes. This is the
optimistic position and it will work fine. Assuming you are driving this
all through Heat (which, AFAIK, Tuskar still uses Heat) there's even a
blueprint to support you that I'm working on:

https://blueprints.launchpad.net/heat/+spec/retry-failed-update

Even if both operators put the retry in a loop, one would actually
finish at some point.
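The retry loop described here needs no shared server-side state; each client keeps its own bounded counter and backs off exponentially. A minimal sketch (`ConflictError` is a hypothetical exception standing in for whatever a real conflicting update raises):

```python
import time


class ConflictError(Exception):
    """Hypothetical stand-in for a conflicting-update error."""


def retry(operation, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Optimistic retry: back off exponentially and give up after a
    bounded number of attempts. The retry counter is local to the
    caller, so no shared state (and no pessimistic lock) is required."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConflictError:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

If two operators race, each one's loop is independent; one of them wins each round, so at least one finishes, which is the optimistic-concurrency argument in a nutshell.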


Yes, thank you Clint. That is precisely what I was saying.


Trying to make a complex series of related but distributed actions --
like the underlying actions of the Tuskar - Ironic API calls -- into an
atomic operation is just not a good use of programming effort, IMO.
Instead, I'm advocating that programming effort should instead be spent
coding a workflow/taskflow pipeline that can gracefully retry failed
operations and report the state of the total taskflow back to the user.


Sure, there are many ways to solve any particular synchronisation
problem. Let's say that we have one that can actually be solved by
retrying. Do you want to retry infinitely? Would you like to increase
the delays between retries exponentially? If so, where are you going to
keep the shared counters for the retries? Perhaps in tuskar-api, hmm?



I don't think a sane person would retry more than maybe once without
checking with the other operators.


Or are you just saying that we should pretend that the nondeterministic
bugs appearing due to the lack of synchronization simply don't exist?
They cannot be easily reproduced, after all. We could just close our
eyes, cover our ears, sing lalalala and close any bug reports with
such errors with "could not reproduce on my single-user, single-machine
development installation". I know that a lot of software companies do
exactly that, so I guess it's a valid business practice, I just want to
make sure that this is actually the tactic that we are going to take,
before committing to an architectural decision that will make those bugs
impossible to fix.



OpenStack is non-deterministic. Deterministic systems are rigid and unable
to handle failure modes of any kind of diversity. We tend to err toward
pushing problems back to the user and giving them tools to resolve the
problem. Avoiding spurious problems is important too, no doubt. However,
what Jay has been suggesting is that the situation a pessimistic locking
system would avoid is entirely user created, and thus lower priority
than say, actually having a complete UI for deploying OpenStack.


+1. I very much agree with Jay and Clint on this matter.

Jirka



Re: [openstack-dev] [TripleO] [Tuskar] Deployment Management section - Wireframes

2014-01-13 Thread Jiří Stránský

On 13.1.2014 11:43, Jaromir Coufal wrote:

On 2014/10/01 21:17, Jay Dobies wrote:

Another question:

- A Role (sounds like we're moving away from that so I'll call it
Resource Category) can have multiple Node Profiles defined (assuming I'm
interpreting the + and the tabs in the Create a Role wireframe
correctly). But I don't see anywhere where a profile is selected when
scaling the Resource Category. Is the idea behind the profiles that you
can select how much power you want to provide in addition to how many
nodes?


Yes, that is correct, Jay. I mentioned that in the walkthrough and in the
wireframes with the note "More views needed (for deploying, scaling,
managing roles)".

I would say there might be two approaches - one is to specify which node
profile you want to scale in order to select how much power you want to add.

The other approach is just to scale the number of nodes in a role and
let system decide the best match (which node profile is chosen will be
decided on the best fit, probably).


Hmm, I'm not sure I understand - what do you mean by "best fit" here? 
E.g. I have a 32 GB RAM profile and a 256 GB RAM profile in the compute 
role (and I have unused machines available for both profiles), and I 
increase the compute node count by 2. What do I best-fit against?


(Alternatively, if we want to support scaling a role using just one 
spinner, even though the role has multiple profiles, maybe we could pick 
the largest profile with unused nodes available?)
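The "largest profile with unused nodes" heuristic is simple to state in code. Illustrative only; here a profile is a (ram_gb, free_node_count) pair keyed by name.

```python
def pick_profile(profiles):
    """Pick the largest-RAM node profile that still has unused nodes.

    `profiles` maps profile name -> (ram_gb, free_node_count).
    Returns the chosen profile name, or None if no nodes are free.
    This is only a sketch of the heuristic floated above.
    """
    candidates = [(ram, name)
                  for name, (ram, free) in profiles.items() if free > 0]
    if not candidates:
        return None
    # max() on (ram, name) tuples picks the highest-RAM candidate.
    return max(candidates)[1]
```

E.g. with a 32 GB and a 256 GB compute profile both having free machines, scaling by one would always grab a 256 GB node, which is exactly why an operator might prefer to choose the profile explicitly.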




I lean towards the first approach, where you specify what role and which
node profile you want to use for scaling. However this is just
introduction of the idea and I believe we can get answers until we get
to that step.


+1. I think we'll want the first approach to be at least possible (maybe 
not default). As a cloud operator, when I want to deploy 2 more compute 
nodes, I imagine there are situations when I do care whether I'll get an 
additional 64 GB or an additional 512 GB of capacity.


Jirka



Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-16 Thread Jiří Stránský

On 15.1.2014 14:07, James Slagle wrote:

I'll start by laying out how I see editing or updating nodes working
in TripleO without Tuskar:

To do my initial deployment:
1.  I build a set of images for my deployment for different roles. The
images are different based on their role, and only contain the needed
software components to accomplish the role they intend to be deployed.
2.  I load the images into glance
3.  I create the Heat template for my deployment, likely from
fragments that are already available. Set quantities, indicate which
images (via image uuid) are for which resources in heat.
4.  heat stack-create with my template to do the deployment

To update my deployment:
1.  If I need to edit a role (or create a new one), I create a new image.
2.  I load the new image(s) into glance
3.  I edit my Heat template, update any quantities, update any image uuids, etc.
4.  heat stack-update my deployment

In both cases above, I see the role of Tuskar being around steps 3 and 4.


+1. Although it's worth noting that if we want zero-downtime updates, 
we'll probably need the ability to migrate content off the machines being 
updated - that would be a pre-step-3 step. (And for that we need spare 
capacity equal to the number of nodes being updated, so we'll probably 
want to do updating in chunks in the future, not the whole overcloud at 
once.)




I may be misinterpreting, but let me say that I don't think Tuskar
should be building images. There's been a fair amount of discussion
around a Nova native image building service [1][2]. I'm actually not
sure what the status/consensus on that is, but maybe longer term,
Tuskar might call an API to kick off an image build.


Yeah, I don't think image building should be driven through the Tuskar 
API (and probably not even the Tuskar UI?). Tuskar should just fetch 
images from Glance, imho. However, we should be aware that image building 
*is* our concern, as it's an important prerequisite for deployment. We 
should provide at least directions on how to easily build images for use 
with Tuskar, not leave users in doubt.


snip


We will probably have to store image metadata in Tuskar that would map to
Glance once the image is generated. I would say we need to store the list
of the elements and probably the commit hashes (because elements can
change). It should also be versioned, as the images in Glance will be
versioned too.


I'm not sure why this image metadata would be in Tuskar. I definitely
like the idea of knowing the versions/commit hashes of the software
components in your images, but that should probably be in Glance.


+1




We probably can't store it in Glance, because we will first store the
metadata, then generate the image. Right?


I'm not sure I follow this point. But, mainly, I don't think Tuskar
should be automatically generating images.


+1




Then we could see whether an image was created from the metadata and whether
that image was used in the heat template. With versions we could also see
what has changed.


We'll be able to tell what image was used in the heat template, and
thus the deployment, based on its UUID.

I love the idea of seeing differences between images, especially
installed software versions, but I'm not sure that belongs in Tuskar.
That sort of utility functionality seems like it could apply to any
image you might want to launch in OpenStack, not just to do a
deployment.  So, I think it makes sense to have that as Glance
metadata or in Glance somehow. For instance, if I wanted to launch an
image that had a specific version of apache, it'd be nice to be able
to see that when I'm choosing an image to launch.


Yes. We might want to show the data to the user, but I don't see a need 
to run this through the Tuskar API. Tuskar UI could query Glance directly 
and display the metadata to the user. (When using the CLI, one could use 
the Glance CLI directly. We're not adding any special logic on top.)
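As a sketch of what reading such metadata off a Glance image could look like: the property names ("tripleo_elements", "tripleo_element_commits") are invented for illustration, since Glance just stores arbitrary key/value properties, and a plain dict stands in for the Glance v2 API response here.

```python
# Hedged sketch: build an element -> commit-hash mapping from custom
# properties stored on a Glance image. Property names are invented;
# a dict stands in for what the Glance v2 images API would return.
def describe_image(image):
    elements = image.get("tripleo_elements", "").split()
    commits = image.get("tripleo_element_commits", "").split()
    return dict(zip(elements, commits))

image = {"id": "0f1e8b2c", "name": "overcloud-compute",
         "tripleo_elements": "nova-compute neutron-openvswitch-agent",
         "tripleo_element_commits": "a1b2c3d 4e5f6a7"}
print(describe_image(image))
# {'nova-compute': 'a1b2c3d', 'neutron-openvswitch-agent': '4e5f6a7'}
```

The point being: the UI (or a user with the Glance CLI) can recover the element/commit mapping without any Tuskar-side logic.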





But there was also the idea that there will be some generic image containing
all services, and we would just configure which services to start. In that
case we would need to version this as well.


-1 to this. I think we should stick with specialized images per role.
I replied on the wireframes thread, but I don't see how
enabling/disabling services in a prebuilt image should work. Plus, I
don't really think it fits with the TripleO model of having an image
created based on its specific role (I hate to use that term and
muddy the water; I mean it in the generic sense here).



= New Comments =

My comments on this train of thought:

- I'm afraid of the idea of applying changes immediately, for the same
reasons I'm worried about a few other things. Very little of what we do will
actually finish executing immediately; most operations will instead be long
running. If I edit a few roles in a row, we're looking at a lot of
outstanding operations executing against other OpenStack pieces (namely
Heat).

The idea of immediately also suffers from a sort of Oh shit, that's not
what I 

Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Jiří Stránský

On 01/30/2014 11:26 AM, Tomas Sedovic wrote:

1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't
do a strict match on the hardware in Ironic. E.g. if our baremetal
flavour said 16GB RAM and 1TB disk, it would also match a node with 24GB
RAM or 1.5TB disk.

The UI would still assume homogeneous hardware and treat it as such. It's
just that we would allow for small differences.

This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
when the flavour says 32. We would treat the flavour as a lowest common
denominator.

Nor is this an alternative to full heterogeneous hardware support. We
need to do that eventually anyway. This is just to make the first MVP
useful to more people.

It's an incremental step that would affect neither point 1 (strict
homogeneous hardware) nor point 2 (full heterogeneous hardware support).

If some of these assumptions are incorrect, please let me know. I don't
think this is an insane U-turn from anything we've already agreed to do,
but it seems to confuse people.


I think having this would allow users with almost-homogeneous hardware to 
use TripleO. If someone already has precisely homogeneous hardware, they 
won't notice a difference.


So I'm +1 for this idea. The condition should be that it's easy to 
implement, because imho it's something that will get dropped when 
support for fully heterogeneous hardware is added.
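The "meets-or-exceeds" matching Tomas describes reduces to a simple predicate: a node passes if it matches or exceeds the flavour on capacity, while architecture must match exactly. A real implementation would live in a Nova scheduler filter; this standalone sketch only shows the comparison, with field names chosen for illustration.

```python
# Illustrative sketch of "treat the flavour as a lowest common
# denominator": a node passes if it meets or exceeds the flavour's
# capacity, but CPU architecture must match exactly (no ARM for x64).
def node_matches(flavor, node):
    return (node["ram_mb"] >= flavor["ram_mb"]
            and node["disk_gb"] >= flavor["disk_gb"]
            and node["cpu_arch"] == flavor["cpu_arch"])

flavor = {"ram_mb": 16384, "disk_gb": 1000, "cpu_arch": "x86_64"}
# 24GB RAM / 1.5TB disk node still matches a 16GB / 1TB flavour:
print(node_matches(flavor, {"ram_mb": 24576, "disk_gb": 1500,
                            "cpu_arch": "x86_64"}))   # True
# ...but a different architecture never does:
print(node_matches(flavor, {"ram_mb": 24576, "disk_gb": 1500,
                            "cpu_arch": "aarch64"}))  # False
```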


Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Jiří Stránský

On 20.2.2014 12:18, Radomir Dopieralski wrote:

On 20/02/14 12:02, Radomir Dopieralski wrote:

Anybody who gets access to Tuskar-API gets the
passwords, whether we encrypt them or not. Anybody who doesn't have
access to Tuskar-API doesn't get the passwords, whether we encrypt
them or not.


Yeah, I think so too.


Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user either by tuskar-ui, or by
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of deployment (to register
the services at the end), but that's it.


This would be possible, but it would damage the user experience quite a 
bit. AFAIK other deployment tools solve password storage the same way we 
do now.


Imho keeping the passwords the way we do now is not among the biggest 
OpenStack security risks. I think we can make the assumption that the 
undercloud will not be publicly accessible, so a potential external 
attacker would have to first gain network access to the undercloud 
machines, and only then could they start trying to exploit Tuskar API to 
hand out the passwords. Overcloud services (which are meant to be 
publicly accessible) have their service passwords accessible in 
plaintext; e.g. in nova.conf you'll find the nova password and the 
neutron password -- I think this is a comparatively greater security risk.


So if we can come up with a solution where the benefits outweigh the 
drawbacks and it makes sense in the broader view of OpenStack security, 
we should go for it, but so far I'm not convinced there is such a 
solution. Just my 2 cents :)


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Tuskar CLI UX

2014-02-26 Thread Jiří Stránský

Hello,

I went through the CLI way of deploying an overcloud, so if you're 
interested in what the workflow looks like, here it is:


https://gist.github.com/jistr/9228638


I'd say it's still an open question whether we'll want to give better UX 
than that ^^ and at what cost (this is very much tied to the benefits 
and drawbacks of the various solutions we discussed in December [1]). All 
in all it's not as bad as I expected it to be back then [1]. The fact 
that we keep Tuskar API as a layer in front of Heat means that a CLI user 
doesn't care about calling merge.py and creating the Heat stack manually, 
which is great.


In general the CLI workflow is on the same conceptual level as Tuskar 
UI, so that's fine, we just need to use more commands than just tuskar.


There's one naming mismatch though -- Tuskar UI doesn't use Horizon's 
Flavor management, but implements its own and calls it Node Profiles. 
I'm a bit hesitant to do the same thing on the CLI -- the most obvious 
option would be to make python-tuskarclient depend on python-novaclient 
and use a renamed Flavor management CLI. But that's wrong and high-cost 
given that it's only about naming :)


The above issue is once again a manifestation of the fact that Tuskar 
UI, despite its name, is not a UI just for Tuskar; it is a UI for a few 
more services. If this becomes a greater problem, or if we want a 
top-notch CLI experience despite reimplementing bits that are already 
doable (just not in a super-friendly way), we could start thinking about 
building something like the OpenStackClient CLI [2], but directed 
specifically at Undercloud/Tuskar needs and using undercloud naming.


Another option would be to bring Tuskar UI a bit closer back to the fact 
that the undercloud is OpenStack too, and keep the name Flavors instead 
of changing it to Node Profiles. I wonder if that would be unwelcome to 
the Tuskar UI UX, though.



Jirka


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021919.html

[2] https://wiki.openstack.org/wiki/OpenStackClient

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jiří Stránský

On 26.2.2014 20:43, Jay Dobies wrote:

I'd say it's still an open question whether we'll want to give better UX
than that ^^ and at what cost (this is very much tied to the benefits
and drawbacks of various solutions we discussed in December [1]). All in
all it's not as bad as i expected it to be back then [1]. The fact that
we keep Tuskar API as a layer in front of Heat means that CLI user
doesn't care about calling merge.py and creating Heat stack manually,
which is great.


I agree that it's great that Heat is abstracted away. I also agree that
it's not as bad as I too expected it to be.

But generally speaking, I think it's not an ideal user experience. A few
things jump out at me:

* We currently have glance, nova, and tuskar represented. We'll likely
need something for ceilometer as well, for gathering metrics and
configuring notifications (I assume the notifications will fall under
that, but come with me on it).

That's a lot for an end user to comprehend and remember, which concerns
me for both adoption and long term usage. Even in the interim when a
user remembers nova is related to node stuff, doing a --help on nova is
huge.


+1



That's going to put a lot of stress on our ability to document our
prescribed path. It will be tricky for us to keep track of the relevant
commands and still point to the other project client documentation so as
to not duplicate it all.


+1



* Even at this level, it exposes the underlying guts. There are calls to
nova baremetal listed in there, but eventually those will turn into
ironic calls. It doesn't give us a ton of flexibility in terms of
underlying technology if that knowledge bubbles up to the end user that way.

* This is a good view into what third-party integrators are going to
face if they choose to skip our UIs and go directly to the REST APIs.


I like the notion of OpenStackClient. I'll talk ideals for a second. If
we had a standard framework and each project provided a command
abstraction that plugged in, we could pick and choose what we included
under the Tuskar umbrella. Advanced users with particular needs could go
directly to the project clients if needed.

I think this could go beyond usefulness for Tuskar as well. On a
previous project, I wrote a pluggable client framework, allowing the end
user to add their own commands that put a custom spin on what data was
returned or how it was rendered. That's a level between being locked
into what we decide the UX should be and having to go directly to the
REST APIs themselves.

That said, I know that's a huge undertaking to get OpenStack in general
to buy into. I'll leave it more that I think it is a lesser UX (not even
saying bad, just not great) to have so much for the end user to digest
to attempt to even play with it. I'm more of the mentality of a unified
TripleO CLI that would be catered towards handling TripleO stuffs. Short
of OpenStackClient, I realize I'm not exactly in the majority here, but
figured it didn't hurt to spell out my opinion  :)


Yeah, I think having a unified TripleO CLI would be a great boost for the 
CLI user experience, and it would solve the problems we pointed out. 
It's another thing that we'd have to commit to maintaining, but I hope 
CLI UX is enough of a priority that it should be fine to spend the dev 
time there.



Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jiří Stránský

On 27.2.2014 10:16, Dougal Matthews wrote:

On 26/02/14 13:34, Jiří Stránský wrote:

get Tuskar UI a bit closer back to the fact
that Undercloud is OpenStack too, and keep the name Flavors instead of
changing it to Node Profiles. I wonder if that would be unwelcome to
the Tuskar UI UX, though.


I can imagine we would constantly explain it by saying its the same as
flavors because people will be familiar with this. So maybe being
consistent would help.


Yeah. This cuts both ways, but I'm leaning towards naming flavors 
consistently a bit more too. Here's an attempt at a +/- summary:



node profile

+ a bit more descriptive for a newcomer imho

- CLI renaming/reimplementing mentioned before

- inconsistency dangers lurking in the deep - e.g. if an error message 
bubbles up from Nova all the way to the user, it might mention flavors, 
and if we talk 99% of the time about node profiles, then the user will 
not know what is meant in the error message. I'm a bit worried that 
we'll keep hitting things like this in the long run.


- developers still often call them flavors, because that's what Nova 
calls them



flavor

+ fits with the rest, does not cause communication or development problems

- not so descriptive (but I agree with you - OpenStack admins will 
already be familiar with what flavor means in the overcloud, and I think 
they'd be able to infer what it means in the undercloud)



I'm CCing Jarda as this affects his work quite a lot and I think he'll 
have some insight and opinions (he's on PTO now, so it might take some 
time before he gets to this).






One other thing, I've looked at my own examples so far, so I didn't
really think about this but seeing it written down, I've realised the
way we specify the roles in the Tuskar CLI really bugs me.

  --roles 1=1 \
  --roles 2=1

I know what this means, but even reading it now I think: One equals
one? Two equals one? What? I think we should probably change the arg
name and also refer to roles by name.

  --role-count compute=10

and a shorter option

  -R compute=10


Yeah this is https://bugs.launchpad.net/tuskar/+bug/1281051

I agree with you on the solution (rename the long option, support lookup 
by name, add a short option).
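Dougal's suggested syntax is straightforward to parse with argparse. This is a sketch, not actual tuskarclient code; only the option names follow the proposal above.

```python
import argparse

# Sketch of the proposed option: parse repeated "-R compute=10" /
# "--role-count control=1" pairs into a {role_name: count} mapping.
def parse_role_count(value):
    try:
        name, count = value.split("=", 1)
        return name, int(count)
    except ValueError:
        raise argparse.ArgumentTypeError(
            "expected ROLE=COUNT, e.g. compute=10, got %r" % value)

parser = argparse.ArgumentParser()
parser.add_argument("-R", "--role-count", dest="role_counts",
                    metavar="ROLE=COUNT", type=parse_role_count,
                    action="append", default=[])

args = parser.parse_args(["-R", "compute=10", "--role-count", "control=1"])
print(dict(args.role_counts))  # {'compute': 10, 'control': 1}
```

Referring to roles by name also removes the "One equals one?" confusion, since the role lookup happens server-side by name rather than by opaque id.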



Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI UX

2014-02-27 Thread Jiří Stránský

On 26.2.2014 21:34, Dean Troyer wrote:

On Wed, Feb 26, 2014 at 1:43 PM, Jay Dobies jason.dob...@redhat.com wrote:


I like the notion of OpenStackClient. I'll talk ideals for a second. If we
had a standard framework and each project provided a command abstraction
that plugged in, we could pick and choose what we included under the Tuskar
umbrella. Advanced users with particular needs could go directly to the
project clients if needed.



This is a thing.  https://github.com/dtroyer/python-oscplugin is an example
of a stand-alone OSC plugin that only needs to be installed to be
recognized.  FWIW, four of the built-in API command sets in OSC also are
loaded in this manner even though they are in the OSC repo so they
represent additional examples of writing plugins.


Thanks for bringing this up. It looks really interesting. Is it possible 
not only to add commands to OpenStackClient, but also to purposefully 
blacklist some from appearing? As Jay mentioned in his reply, we don't 
make use of many commands in the undercloud, and having the others appear 
in --help is just confusing. E.g. there are a lot of commands in the Nova 
CLI that we'll never use, and we have no use for the Cinder CLI at all.


Even if we decided to build a TripleO CLI separate from OpenStackClient, 
I think being able to consume this plugin API would help us. We could 
plug in the particular commands we want (and rename them if we want node 
profiles instead of flavors) and hopefully not reimplement everything.
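As a toy illustration of the plugin idea: commands register themselves in a registry, and the top-level CLI dispatches by name. OpenStackClient discovers real plugins through setuptools entry points; this self-contained sketch replaces that machinery with a plain dict, and all command names are invented.

```python
# Toy sketch of a pluggable CLI: each command registers itself under a
# name, and the dispatcher looks the name up at call time. Renaming a
# command (e.g. "flavor" -> "node-profile") is then just a matter of the
# registration key, without touching the underlying client code.
COMMANDS = {}

def command(name):
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("node-profile-list")
def node_profile_list(args):
    # here we would call novaclient's flavor listing and re-label it
    return "listing node profiles (renamed flavors)"

@command("overcloud-deploy")
def overcloud_deploy(args):
    return "deploying overcloud"

def dispatch(argv):
    name, rest = argv[0], argv[1:]
    return COMMANDS[name](rest)

print(dispatch(["node-profile-list"]))
```

Blacklisting unwanted commands falls out of the same mechanism: simply don't register them, and they never appear in --help.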



Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-07 Thread Jiří Stránský

Hi,

there's one step in cloud initialization that is performed over SSH -- 
calling keystone-manage pki_setup. Here's the relevant code in 
keystone-init [1], and here's a review for moving the functionality to 
os-cloud-config [2].


The consequence of this is that Tuskar will need a passwordless SSH key 
to access the overcloud controller. I consider this suboptimal for two 
reasons:


* It creates another security concern.

* AFAIK nova is only capable of injecting one public SSH key into 
authorized_keys on the deployed machine, which means we can either give 
it Tuskar's public key and allow Tuskar to initialize the overcloud, or 
we can give it the admin's custom public key and allow the admin to ssh 
into the overcloud, but not both. (Please correct me if I'm mistaken.) 
We could probably work around this issue by having Tuskar do the user 
key injection as part of os-cloud-config, but it's a bit clumsy.



This goes outside the scope of my current knowledge, so I'm hoping 
someone knows the answer: could pki_setup be run by combining the powers 
of Heat and os-refresh-config? (I presume there's some reason why we're 
not doing this already.) I think it would help us a good bit if we could 
avoid having to SSH from Tuskar to the overcloud.



Thanks

Jirka


[1] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/init-keystone#L85-L86

[2] https://review.openstack.org/#/c/78148/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-10 Thread Jiří Stránský

On 7.3.2014 14:50, Imre Farkas wrote:

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling keystone-manage pki_setup. Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].

The consequence of this is that Tuskar will need passwordless ssh key to
access overcloud controller. I consider this suboptimal for two reasons:

* It creates another security concern.

* AFAIK nova is only capable of injecting one public SSH key into
authorized_keys on the deployed machine, which means we can either give
it Tuskar's public key and allow Tuskar to initialize overcloud, or we
can give it admin's custom public key and allow admin to ssh into
overcloud, but not both. (Please correct me if i'm mistaken.) We could
probably work around this issue by having Tuskar do the user key
injection as part of os-cloud-config, but it's a bit clumsy.


This goes outside the scope of my current knowledge, i'm hoping someone
knows the answer: Could pki_setup be run by combining powers of Heat and
os-config-refresh? (I presume there's some reason why we're not doing
this already.) I think it would help us a good bit if we could avoid
having to SSH from Tuskar to overcloud.


Yeah, it came up a couple times on the list. The current solution is
because if you have an HA setup, the nodes can't decide on their own
which one should run pki_setup.
Robert described this topic and why it needs to be initialized
externally during a weekly meeting in last December. Check the topic
'After heat stack-create init operations (lsmola)':
http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html


Thanks for the reply, Imre. Yeah, I vaguely remember that meeting :)

I guess to do HA init we'd need to pick one of the controllers and run 
the init just there (set some parameter that would then be recognized by 
os-refresh-config). I couldn't find whether Heat can do something like 
this on its own; probably we'd need to deploy one of the controller 
nodes with a different parameter set, which feels a bit weird.


Hmm, so unless someone comes up with something groundbreaking, we'll 
probably keep doing what we're doing. Having the ability to inject 
multiple keys into instances [1] would help us get rid of the Tuskar vs. 
admin key issue I mentioned in the initial e-mail. We might try asking a 
fellow Nova developer to help us out here.



Jirka

[1] https://bugs.launchpad.net/nova/+bug/917850

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-12 Thread Jiří Stránský

On 11.3.2014 15:50, Adam Young wrote:

On 03/11/2014 05:25 AM, Dmitry Mescheryakov wrote:

For what it's worth in Sahara (former Savanna) we inject the second
key by userdata. I.e. we add
echo ${public_key} >> ${user_home}/.ssh/authorized_keys

to the other stuff we do in userdata.

Dmitry

2014-03-10 17:10 GMT+04:00 Jiří Stránský ji...@redhat.com:

On 7.3.2014 14:50, Imre Farkas wrote:

On 03/07/2014 10:30 AM, Jiří Stránský wrote:

Hi,

there's one step in cloud initialization that is performed over SSH --
calling keystone-manage pki_setup. Here's the relevant code in
keystone-init [1], here's a review for moving the functionality to
os-cloud-config [2].


You really should not be doing this.  I should never have written
pki_setup: it is a developer's tool. Use a real CA and a real certificate.


Thanks for all the replies, everyone :)

I'm leaning towards going the way Robert suggested on the review [1] - 
uploading a pre-created signing cert, signing key and CA cert to the 
controller nodes using Heat. This seems like a much cleaner approach to 
initializing the overcloud than having to SSH into it, and it will solve 
both problems I outlined in the initial e-mail.


It creates another problem though - for simple (think PoC) deployments 
without an external CA we'll need to create the keys/certs 
somehow/somewhere anyway :) It shouldn't be hard because it's already 
implemented in keystone-manage pki_setup, but we should figure out a way 
to avoid copy-pasting the world. Maybe Tuskar could call pki_setup 
locally and pass a parameter overriding the default location where the 
new keys/certs are generated?
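The "call pki_setup locally with an overridden output location" idea could be sketched like this. Two assumptions to flag loudly: that keystone-manage accepts --config-file (standard for oslo.config based tools), and that the [signing] option names below match the Keystone version in use -- both should be verified before relying on this.

```python
import os
import subprocess
import tempfile
import textwrap

# Hedged sketch: point keystone-manage pki_setup at a Tuskar-chosen
# directory via a throwaway config file, instead of /etc/keystone/ssl.
# The --config-file flag and the [signing] option names are assumptions.
def build_signing_config(out_dir):
    return textwrap.dedent("""\
        [signing]
        certfile = {d}/signing_cert.pem
        keyfile = {d}/signing_key.pem
        ca_certs = {d}/ca.pem
        ca_key = {d}/cakey.pem
    """).format(d=out_dir)

def generate_pki(out_dir):
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
        f.write(build_signing_config(out_dir))
    # would generate the certs into out_dir; requires keystone installed
    subprocess.check_call(
        ["keystone-manage", "--config-file", f.name, "pki_setup"])
    return os.path.join(out_dir, "signing_cert.pem")

print(build_signing_config("/tmp/overcloud-pki").splitlines()[1])
```

Tuskar could then hand the generated files to Heat as template content, never needing SSH access to the overcloud.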



Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-14 Thread Jiří Stránský

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup locally
and passing a parameter to pki_setup to override default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it seems 
that what we need to do with Heat is not totally straightforward.


Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a 
Heat template [1]. We want to externally generate and then upload 3 
small binary files to the controller nodes (Keystone PKI key and 
certificates [2]). We don't want to generate them in place or scp them 
into the controller nodes, because that would require having ssh access 
to the deployed controller nodes, which comes with drawbacks [3].


It would be good if we could have the 3 binary files put into the 
controller nodes as part of the Heat stack creation. Can we include them 
in the template somehow? Or is there an alternative feasible approach?



Thank you

Jirka

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/0490dd665899d3265a72965aeaf3a342275f4328/overcloud-source.yaml
[2] 
http://docs.openstack.org/developer/keystone/configuration.html#install-external-signing-certificate
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029327.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-14 Thread Jiří Stránský

On 14.3.2014 14:42, Steven Dake wrote:

On 03/14/2014 06:33 AM, Jiří Stránský wrote:

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup locally
and passing a parameter to pki_setup to override default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it
seems that what we need to do with Heat is not totally straightforward.

Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a
Heat template [1]. We want to externally generate and then upload 3
small binary files to the controller nodes (Keystone PKI key and
certificates [2]). We don't want to generate them in place or scp them
into the controller nodes, because that would require having ssh
access to the deployed controller nodes, which comes with drawbacks [3].

It would be good if we could have the 3 binary files put into the
controller nodes as part of the Heat stack creation. Can we include
them in the template somehow? Or is there an alternative feasible
approach?


Jirka,

You can inject files via the heat-cfntools agents.  Check out:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-files

You could also use raw cloudinit data to inject a files section.

There may be a final option with software config, but I'm not certain if
software config has grown a feature to inject files yet.

Regards
-steve



Are these approaches subject to size limits? In the IRC discussion a 
limit of 16 KB came up (I assumed total, not per-file), which could be a 
problem in theory. The files `keystone-manage pki_setup` generated for 
me were about 7.2 KB, which gives about 10 KB when encoded as base64. So 
we wouldn't be over the limit, but it's not exactly comfortable either 
(if that 16 KB limit still applies).
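The base64 inflation behind that estimate is easy to check: encoding grows data by a factor of 4/3 (plus padding), so ~7.2 KB becomes ~9.6 KB, under a 16 KB user_data limit but not by a wide margin.

```python
import base64

# base64 output length is 4 * ceil(n / 3) bytes for n input bytes,
# so ~7.2 KB of key/cert material encodes to ~9.6 KB.
raw = b"x" * 7372              # stand-in for ~7.2 KB of PEM files
encoded = base64.b64encode(raw)
print(len(encoded))            # 9832
```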


Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] os-cloud-config ssh access to cloud

2014-03-17 Thread Jiří Stránský

On 16.3.2014 21:20, Steve Baker wrote:

On 15/03/14 02:33, Jiří Stránský wrote:

On 12.3.2014 17:03, Jiří Stránský wrote:


Thanks for all the replies everyone :)

I'm leaning towards going the way Robert suggested on the review [1] -
upload pre-created signing cert, signing key and CA cert to controller
nodes using Heat. This seems like a much cleaner approach to
initializing overcloud than having to SSH into it, and it will solve
both problems i outlined in the initial e-mail.

It creates another problem though - for simple (think PoC) deployments
without external CA we'll need to create the keys/certs
somehow/somewhere anyway :) It shouldn't be hard because it's already
implemented in keystone-manage pki_setup but we should figure out a way
to avoid copy-pasting the world. Maybe Tuskar calling pki_setup locally
and passing a parameter to pki_setup to override default location where
new keys/certs will be generated?


Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/



I'm adding [Heat] to the subject. After some discussion on IRC it
seems that what we need to do with Heat is not totally straightforward.

Here's an attempt at a brief summary:

In TripleO we deploy OpenStack using Heat, the cloud is described in a
Heat template [1]. We want to externally generate and then upload 3
small binary files to the controller nodes (Keystone PKI key and
certificates [2]). We don't want to generate them in place or scp them
into the controller nodes, because that would require having ssh
access to the deployed controller nodes, which comes with drawbacks [3].

It would be good if we could have the 3 binary files put into the
controller nodes as part of the Heat stack creation. Can we include
them in the template somehow? Or is there an alternative feasible
approach?


Thank you

Jirka

[1]
https://github.com/openstack/tripleo-heat-templates/blob/0490dd665899d3265a72965aeaf3a342275f4328/overcloud-source.yaml
[2]
http://docs.openstack.org/developer/keystone/configuration.html#install-external-signing-certificate
[3]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029327.html


It looks like the cert files you want to transfer are all ascii rather
than binary, which is good as we have yet to implement a way to attach
binary data to a heat stack create call.

One way to write out these files would be using cloud-config. The
disadvantages of this is that it is boot-time config only, so those keys
couldn't be updated with a stack update. You would also be consuming a
decent proportion of your 16k user_data limit.

  keystone_certs_config:
    Type: OS::Heat::CloudConfig
    Properties:
      cloud_config:
        write_files:
        - path: /etc/keystone/ssl/certs/signing_cert.pem
          content: |
            # You have 3 options for how to insert the content here:
            # 1. inline the content
            # 2. Same as 1, but automatically with your own template pre-processing logic
            # 3. call {get_file: path/to/your/signing_cert.pem} but this only works for HOT syntax templates
          permissions: '0600'

  keystone_init:
    Type: OS::Heat::MultipartMime
    Properties:
      parts:
      - subtype: cloud-config
        config:
          get_resource: keystone_certs_config

  notCompute0:
    Type: OS::Nova::Server
    Properties:
      user_data: {Ref: keystone_init}

But it looks like you should just be using os-apply-config templates for
all of the files in /etc/keystone/ssl/certs/

  notCompute0Config:
    Type: AWS::AutoScaling::LaunchConfiguration
    ...
    Metadata:
      ...
      keystone:
        signing_cert: |
          # You have 3 options for how to insert the content here:
          # 1. inline the content
          # 2. Same as 1, but automatically with your own template pre-processing logic
          # 3. call {get_file: path/to/your/signing_cert.pem} but this only works for HOT syntax templates

If the files really are binary then currently you'll have to encode to
base64 before including the content in your templates, then have an
os-refresh-config script to decode and write out the binary files.
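
The encode/decode round trip described above can be sketched in plain
Python (the helper names are made up; this only matters for truly binary
content, since .pem files turned out to be ASCII):

```python
import base64

def encode_for_template(data: bytes) -> str:
    """Base64-encode binary content so it can travel through a
    text-only template parameter."""
    return base64.b64encode(data).decode("ascii")

def decode_on_node(encoded: str) -> bytes:
    """Decode back to the original bytes, as an os-refresh-config
    script would do before writing the real file to disk."""
    return base64.b64decode(encoded)

original = b"\x30\x82\x01\x0a"  # arbitrary binary bytes
assert decode_on_node(encode_for_template(original)) == original
```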


Ah i don't know why i thought .pem files were binary. Thank you Steve, 
your reply is super helpful :)


Jirka



Re: [openstack-dev] [TripleO] os-cloud-config ssh access to cloud

2014-03-26 Thread Jiří Stránský

(Removing [Heat] from the subject.)

So here are the steps i think are necessary to get the PKI setup done 
and safely passed through Jenkins. If anyone thinks something is 
redundant or missing, please shout:


1. Patch to os-cloud-config:

  * Generation of keys and certs for cases where the user doesn't want to
specify their own - mainly PoC deployments. (Generation happens
in-memory, which is better for Tuskar than having to write
keys/certs to disk - we might have different sets for different
overclouds.)

  * Implement also a function that will write the keys/certs to a
specified location on disk (in-memory generation is not well
suited for use within Devtest).

2. Patch to T-I-E:

  * os-cloud-config image element.

3. Patch to tripleo-incubator (dependent on patches 1 and 2):

  * Generate keys using os-cloud-config and pass them into heat-create
if the T-H-T supports that (this is to make sure the next T-H-T
patch passes). Keep doing the current init-keystone anyway.

4. Patch to T-H-T (dependent on patch 3):

  * Accept 3 new parameters for controller nodes: KeystoneCACert,
KeystoneSigningKey, KeystoneSigningCert. Default them to empty
string so that they are not required (otherwise we'd have to
implement logic forking also for Tuskar, because it's
chicken-and-egg there too).

5. Patch to tuskar (dependent on patch 4):

  * Use os-cloud-config to generate keys and certs if user didn't
specify their own, pass new parameters to T-H-T.

6. Patch to T-I-E (dependent on patch 5):

  * Add the certs and signing key to keystone's os-apply-config
templates. Change key location to /etc instead of
/mnt/state/etc. Devtest should keep working because calling
`keystone-manage pki_setup` on already initialized system does not
have significant effect. It will keep generating a useless CA key,
but that will stop with patch 7.

7. Cleanup patch to tripleo-incubator (dependent on patch 6):

  * Remove conditional on passing the 3 new parameters only if
supported, pass them always.

  * Remove call to pki_setup.


Regarding the cloud initialization as a whole, on monday i sent a patch 
for creating users, roles etc. [1]. The parts still missing are endpoint 
registration [2,3] and neutron setup [4].


If anyone is willing to spare some cycles on endpoint registration or 
neutron setup or make the image element for os-cloud-config (patch no. 2 
in above list), it would be great, as we'd like to have this finished as 
soon as possible.



Thanks

Jirka

[1] https://review.openstack.org/#/c/78148/
[2] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/init-keystone#L111-L114
[3] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/setup-endpoints
[4] 
https://github.com/openstack/tripleo-incubator/blob/4e2e8de41ba91a5699ea4eb9091f6ef4c95cf0ce/scripts/setup-neutron




Re: [openstack-dev] [TripleO] [Horizon] Searching for a new name for Tuskar UI

2014-03-27 Thread Jiří Stránský

On 27.3.2014 18:21, Dougal Matthews wrote:

On 27/03/14 15:56, Jaromir Coufal wrote:

Hi OpenStackers,

The user interface which manages the OpenStack infrastructure is
currently named Tuskar-UI for historical reasons. Tuskar itself
is a small service, which provides the logic for generating and managing
Heat templates and helps the user model and manage his deployment. The
user interface, which is the subject of this call, is based on the TripleO
approach and resembles OpenStack Dashboard (Horizon) in the way it
consumes other services. The UI consumes not just the Tuskar API, but
also Ironic (nova-baremetal), Nova (flavors), Ceilometer, etc. in order
to design, deploy, manage and monitor your OpenStack deployments.
Because of this I find the name Tuskar-UI improper (it's closer to
saying TripleO-UI) and I would like the community to help find a better
name for it. After brainstorming, we can start voting on the final
project's name.

https://etherpad.openstack.org/p/openstack-management-ui-names


Thanks for starting this.

As a side, but related note, I think we should rename the Tuskar client
to whatever name the Tuskar UI gets called. The client will eventually
have feature parity with the UI and thus will have the same naming
issues if it is to remain the tuskarclient

Dougal


It might be good to do a similar thing as Keystone does. We could keep 
python-tuskarclient focused only on Python bindings for Tuskar (but keep 
whatever CLI we already implemented there, for backwards compatibility), 
and implement CLI as a plugin to OpenStackClient. E.g. when you want to 
access Keystone v3 API features (e.g. domains resource), then 
python-keystoneclient provides only Python bindings; it no longer 
provides a CLI.


I think this is a nice approach because it allows the python-*client to 
stay thin for including within Python apps, and there's a common 
pluggable CLI for all projects (one top level command for the user). At 
the same time it would solve our naming problems (tuskarclient would 
stay, because it would be focused on Tuskar only) and we could reuse the 
already implemented other OpenStackClient plugins for anything on 
undercloud.
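
For reference, OpenStackClient discovers plugins through setuptools entry
points; a minimal sketch of what a tuskarclient setup.cfg could add (the
module paths, API name and command name are all assumptions, not existing
code):

```ini
[entry_points]
openstack.cli.extension =
    management = tuskarclient.osc.plugin

openstack.management.v1 =
    management_plan_list = tuskarclient.osc.v1.plan:ListPlans
```

With that, `openstack management plan list` would route into
tuskarclient while the top-level `openstack` command stays shared with
the other projects.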


We previously raised that OpenStackClient has more plugins (subcommands) 
than we need on undercloud and that this could confuse users, but i'd say 
that might not be troublesome enough to justify avoiding the OpenStackClient 
way. (Even if we decide that this is a big problem after all and an OSC 
plugin is not enough, we should still probably aim for separating TripleO 
CLI and Tuskarclient in the future.)


Jirka



Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Jiří Stránský

On 3.4.2014 13:02, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the four changes above in reply to this email.


+1 to all.

Jirka




Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Jiří Stránský

Clint and Monty,

thank you for such good responses. I am new in TripleO team indeed and I 
was mostly concerned by the line in the sand. Your responses shed some 
more light on the issue for me and i hope we'll be heading the right way :)


Thanks

Jiri



Re: [openstack-dev] [TripleO] TripleO core reviewer update - november

2013-10-31 Thread Jiří Stránský

On 30.10.2013 10:06, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - James Slagle for -core

+1


  - Arata Notsu to be removed from -core

+1


  - Devananda van der veen to be removed from -core

+1



Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.
James, please let me know if you're willing to be in tripleo-core.
Arata, Devananda, if you are planning on becoming substantially more
active in TripleO reviews in the short term, please let us know.





Re: [openstack-dev] [TripleO] Version numbering of TripleO releases

2013-10-31 Thread Jiří Stránský

On 31.10.2013 06:09, Roman Podoliaka wrote:

Hi all,

0.MAJOR.MINOR  versioning totally makes sense to me until we get to 1.0.0.

Just a couple of examples of releases we are doing this week:
1) tripleo-image-elements is bumped from 0.0.8 to 0.1.0 (we introduced a
kind of incompatible change by switching Seed VM to neutron-dhcp-agent)
2) os-collect-config is bumped from 0.1.4 to 0.1.5 (we had Clint's patch
reducing the default polling interval, which is not a bug-fix, but doesn't
break backwards compatibility either)


Sounds good to me.

J.



Re: [openstack-dev] [TripleO] Summit session wrapup

2013-11-28 Thread Jiří Stránský

Hi all,

just a few thoughts (subjective opinions) regarding the whole debate:

* I think that having a manual approach of picking images for machines 
would make TripleO more usable in the beginning. I think it will take a 
good deal of time to get our smart solution working with the admin 
rather than against him [1], and a possibility of manual override is a 
good safety catch.


E.g. one question that i wonder about - how would our smart flavor-based 
approach solve this situation: I have homogenous nodes on which i want 
to deploy Cinder and Swift. Half of those nodes have better connectivity 
to the internet than the other half. I want Swift on the ones with 
better internet connectivity. How will i ensure such deployment with 
flavor-based approach? Could we use e.g. host aggregates defined on the 
undercloud for this? I think it will take time before our smart solution 
can understand such and similar conditions.


* On the other hand, i think relying on Nova to pick hosts feels more 
TripleO-spirited solution to me. It means using OpenStack to deploy 
OpenStack.


So i can't really lean towards one solution or the other. Maybe it's 
most important to make *something*, gather some feedback, and tweak what 
needs tweaking.



Cheers

Jirka


[1] http://i.technet.microsoft.com/dynimg/IC284957.jpg



Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-05 Thread Jiří Stránský

On 4.12.2013 08:12, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core


+1


  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the six changes above in reply to this email.


I vote for keeping in -core those who already expressed or will express 
intention to be more active in reviews, and removing the rest.


Jirka



Re: [openstack-dev] [TripleO] Proposal to add Jon Paul Sullivan and Alexis Lee to core review team

2014-07-10 Thread Jiří Stránský

On 9.7.2014 17:52, Clint Byrum wrote:

So, I propose that we add jonpaul-sullivan and lxsli to the TripleO core
reviewer team.


+1

Jirka



Re: [openstack-dev] [TripleO][Tuskar] Feedback on init-keystone spec

2014-05-05 Thread Jiří Stránský

On 30.4.2014 09:02, Steve Kowalik wrote:

Hi,

I'm looking at moving init-keystone from tripleo-incubator to
os-cloud-config, and I've drafted a spec at
https://etherpad.openstack.org/p/tripleo-init-keystone-os-cloud-config .

Feedback welcome.

Cheers,



Hi Steve,

that looks good :) Just to clarify -- should the long-term plan for 
Keystone PKI initialization still be to generate the key+certs on the 
undercloud and push them to the overcloud via Heat? (Likewise for 
seed-undercloud.)


Thanks

Jirka



Re: [openstack-dev] [tripleO] Should #tuskar business be conducted in the #tripleo channel?

2014-05-30 Thread Jiří Stránský

On 30.5.2014 11:06, Tomas Sedovic wrote:

On 30/05/14 02:08, James Slagle wrote:

On Thu, May 29, 2014 at 12:25 PM, Anita Kuno ante...@anteaya.info wrote:

As I was reviewing this patch today:
https://review.openstack.org/#/c/96160/

It occurred to me that the tuskar project is part of the tripleo
program:
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n247

I wondered if business, including bots posting to irc for #tuskar is
best conducted in the #tripleo channel. I spoke with Chris Jones in
#tripleo and he said the topic hadn't come up before. He asked me if I
wanted to kick off the email thread, so here we are.

Should #tuskar business be conducted in the #tripleo channel?


I'd say yes. I don't think the additional traffic would be a large
distraction at all to normal TripleO business.


Agreed, I don't think the traffic increase would be problematic. Neither
channel seems particularly busy.

And it would probably be beneficial to the TripleO developers who aren't
working on the UI stuff as well as the UI people who aren't necessarily
hacking on the rest of TripleO. A discussion in one area can sometimes
use some input from the other, which is harder when you need to move the
conversation between channels.


+1

Jirka





I can however see how it might be nice to have #tuskar to talk tuskar
api and tuskar ui stuff in the same channel. Do folks usually do that?
Or is tuskar-ui conversation already happening in #openstack-horizon?






Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Jiří Stránský

On 19.9.2013 10:08, Tomas Sedovic wrote:

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO
developers face to face and discuss the visions and goals of our projects.

Tuskar's ultimate goal is to have to a full OpenStack management
solution: letting the cloud operators try OpenStack, install it, keep it
running throughout the entire lifecycle (including bringing in new
hardware, burning it in, decommissioning), help to scale it, secure the
setup, monitor for failures, project the need for growth and so on.

And to provide a good user interface and API to let the operators
control and script this easily.

Now, the scope of the OpenStack Deployment program (TripleO) includes
not just installation, but the entire lifecycle management (from racking
it up to decommissioning). Among other things they're thinking of are
issue tracker integration and inventory management, but these could
potentially be split into a separate program.

That means we do have a lot of goals in common and we've just been going
at them from different angles: TripleO building the fundamental
infrastructure while Tuskar focusing more on the end user experience.

We've come to a conclusion that it would be a great opportunity for both
teams to join forces and build this thing together.

The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make
it into the *I* one)

TripleO would get a UI and more developers trying it out and helping
with setup and integration.

This shouldn't even need to derail us much from the rough roadmap we
planned to follow in the upcoming months:

1. get things stable and robust enough to demo in Hong Kong on real hardware
2. include metrics and monitoring
3. security

What do you think?


I think this is a good idea, given that we heavily depend on TripleO 
already.


J.



Tomas







Re: [openstack-dev] [TripleO] Core reviewer update proposal

2015-05-05 Thread Jiří Stránský

On 5.5.2015 13:57, James Slagle wrote:

Hi, I'd like to propose adding Giulio Fidente and Steve Hardy to TripleO Core.

Giulio has been an active member of our community for a while. He
worked on the HA implementation in the elements and recently has been
making a lot of valuable contributions and reviews related to puppet
in the manifests, heat templates, ceph, and HA.

Steve Hardy has been instrumental in providing a lot of Heat domain
knowledge to TripleO and his reviews and guidance have been very
beneficial to a lot of the template refactoring. He's also been
reviewing and contributing in other TripleO projects besides just the
templates, and has shown a solid understanding of TripleO overall.

180 day stats:
| gfidente | 208 0 42 166 0 0 79.8% | 16 (  7.7%) |
|  shardy  | 206 0 27 179 0 0 86.9% | 16 (  7.8%) |

TripleO cores, please respond with +1/-1 votes and any
comments/objections within 1 week.


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Jiří Stránský

Hi Dan,

On 7.5.2015 04:32, Dan Prince wrote:

Looking over some of the Puppet pacemaker stuff today. I appreciate all
the hard work going into this effort but I'm not quite happy about all
of the conditionals we are adding to our puppet overcloud_controller.pp
manifest. Specifically it seems that every service will basically have
its resources duplicated for pacemaker and non-pacemaker version of the
controller by checking the $enable_pacemaker variable.


+1



After seeing it play out for a couple of services I think I might prefer
it if we had an entirely separate template for the pacemaker
version of the controller. One easy way to kick off this effort would be
to use the Heat resource registry to enable pacemaker rather than a
parameter.

Something like this:

https://review.openstack.org/#/c/180833/


I have two mild concerns about this approach:

1) We'd duplicate the logic (or at least the inclusion logic) for the 
common parts in two places, making it prone for the two .pp variants to 
get out of sync. The default switches from "if i want to make a 
difference between the two variants, i need to put in a conditional" to 
"if i want to *not* make a difference between the two variants, i need 
to put this / include this in two places".


2) If we see some other bit emerging in the future, which would be 
optional but at the same time omnipresent in a similar way as 
Pacemaker is, we'll see the same if/else pattern popping up. Using the 
same solution would mean we'd have 4 .pp files (a 2x2 matrix) doing the 
same thing to cover all scenarios. This is a somewhat hypothetical 
concern at this point, but it might become real in the future (?).




If we were to split out the controller into two separate templates I
think it might be appropriate to move a few things into puppet-tripleo
to de-duplicate a bit. Things like the database creation for example.
But probably not all of the services... because we are trying as much as
possible to use the stackforge puppet modules directly (and not our own
composition layer).


I think our restraint from having a composition layer (extracting things 
into puppet-tripleo) is what's behind my concern no. 1 above. I know one 
of the arguments against having a composition layer is that it makes 
things less hackable, but if we could amend puppet modules without 
rebuilding or altering the image, it should mitigate the problem a bit 
[1]. (It's almost a matter that would deserve a separate thread though :) )




I think this split is a good compromise and would probably even speed up
the implementation of the remaining pacemaker features too. And removing
all the pacemaker conditionals we have from the non-pacemaker version
puts us back in a reasonably clean state I think.

Dan



An alternative approach could be something like:

if hiera('step') >= 2 {
  include ::tripleo::mongodb
}

and move all the mongodb related logic to that class and let it deal 
with both pacemaker and non-pacemaker use cases. This would reduce the 
stress on the top-level .pp significantly, and we'd keep things 
contained in logical units. The extracted bits will still have 
conditionals but it's going to be more manageable because the bits will 
be a lot smaller. So this would mean splitting up the manifest per 
service rather than based on pacemaker on/off status. This would require 
more extraction into puppet-tripleo though, so it kinda goes against the 
idea of not having a composition layer. It would also probably consume a 
bit more time to implement initially and be more disruptive to the 
current state of things.
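
To make the idea a bit more concrete, a rough sketch of such an extracted
class could look like the following (the class name, parameter and
resource details are illustrative assumptions, not existing
puppet-tripleo code):

```puppet
# Hypothetical puppet-tripleo class: all mongodb logic lives here and
# the pacemaker distinction is internalized behind one parameter.
class tripleo::mongodb (
  $enable_pacemaker = false,
) {
  # Common, variant-independent bits.
  include ::mongodb::server

  if $enable_pacemaker {
    # Let pacemaker own the service lifecycle.
    pacemaker::resource::service { 'mongod':
      clone_params => true,
    }
  } else {
    service { 'mongod':
      ensure => running,
      enable => true,
    }
  }
}
```

The top-level manifest would then only decide the step and the
$enable_pacemaker value, not the service internals.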


At this point i don't lean strongly towards one or the other solution, i 
just want us to have an option to discuss and consider benefits and 
drawbacks of both, so that we can take an informed decision. I think i 
need to let this sink in a bit more myself.



Cheers

Jirka

[1] https://review.openstack.org/#/c/179177/



[openstack-dev] [TripleO] HA CI job is green, please consider it when merging patches

2015-05-22 Thread Jiří Stránský

Hi all,

after an epic battle with bugs this week, we have a passing CI job for HA.

Looking at the jobs which ran during last night, the success rate is 
decent (14 all-green runs vs. just 1 run where HA job was the sole CI 
failure).


I'm a bit reluctant still to say let's make it voting right now, but i 
think we should be heading that way gradually. If you see a failed HA CI 
job from now on, there's some chance it points out some real issue. 
Please try to go through the logs before overriding its vote and merging 
a patch with red HA CI. If the patch in question is not critical with 
regards to time, recheck is an option :)



Thanks

Jirka



Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-08 Thread Jiří Stránský

On 8.9.2015 10:40, Steven Hardy wrote:

Hi all,

So, lately we're seeing an increasing number of patches adding integration
for various third-party plugins, such as different neutron and cinder
backends.

This is great to see, but it also poses the question of how we organize the
user-visible interfaces to these things long term.

Originally, I was hoping to land some Heat composability improvements[1]
which would allow for tagging templates as providing a particular
capability (such as "provides neutron ML2 plugin"), but this has stalled on
some negative review feedback and isn't going to be implemented for
Liberty.

However, today looking at [2] and [3], (which both add t-h-t integration to
enable neutron ML2 plugins), a simpler interim solution occurred to me,
which is just to make use of a suggested/mandatory naming convention.

For example:

environments/neutron-ml2-bigswitch.yaml
environments/neutron-ml2-cisco-nexus-ucsm.yaml

Or via directory structure:

environments/neutron-ml2/bigswitch.yaml
environments/neutron-ml2/cisco-nexus-ucsm.yaml


+1 for this one ^
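
For illustration, such an environment file could look roughly like this
(a sketch only; the resource_registry path, the nested template name and
the parameter value are assumptions):

```yaml
# environments/neutron-ml2/bigswitch.yaml
resource_registry:
  OS::TripleO::ControllerExtraConfigPre: ../../puppet/extraconfig/pre_deploy/controller/neutron-ml2-bigswitch.yaml

parameter_defaults:
  NeutronMechanismDrivers: bigswitch
```

A UI could then list environments/neutron-ml2/*.yaml and offer the file
names as the available plugin choices directly.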



This would require enforcement via code-review, but could potentially
provide a much more intuitive interface for users when they go to create
their cloud, and particularly it would make life much easier for any Ux to
ask "choose which neutron-ml2 plugin you want", because the available
options can simply be listed by looking at the available environment
files?


Yeah i like the idea of more structure in placing the environment files. 
It seems like customization of deployment via those files is becoming 
common, so we might see more environment files appearing over time.




What do folks think of this, is now a good time to start enforcing such a
convention?


We'd probably need to do this at some point anyway, and sooner seems 
better than later :)



Apart from "cinder" and "neutron-ml2" directories, we could also have a 
"combined" (or sth similar) directory for env files which combine 
multiple other env files. The use case which i see is for extra 
pre-deployment configs which would be commonly used together. E.g. 
combining Neutron and Horizon extensions of a single vendor [4].


Maybe also a couple of other categories could be found like "network" 
(for things related mainly to network isolation) or "devel" [5].



Jirka

[4] 
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml
[5] 
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/puppet-ceph-devel.yaml




Steve

[1] https://review.openstack.org/#/c/196656/
[2] https://review.openstack.org/#/c/213142/
[3] https://review.openstack.org/#/c/198754/







Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-08 Thread Jiří Stránský

On 8.9.2015 13:47, Jiří Stránský wrote:

Apart from "cinder" and "neutron-ml2" directories, we could also have a
"combined" (or sth similar) directory for env files which combine
multiple other env files. The use case which i see is for extra
pre-deployment configs which would be commonly used together. E.g.
combining Neutron and Horizon extensions of a single vendor [4].


Ah i mixed up two things in this paragraph -- env files vs. extraconfig 
nested stacks. Not sure if we want to start namespacing the extraconfig 
bits in a parallel manner. E.g. 
"puppet/extraconfig/pre_deploy/controller/cinder", 
"puppet/extraconfig/pre_deploy/controller/neutron-ml2". It would be 
nice, especially if we're sort of able to map the extraconfig categories 
to env file categories most of the time. OTOH the directory nesting is 
getting quite deep there :)


J.


[4]
https://review.openstack.org/#/c/213142/1/puppet/extraconfig/pre_deploy/controller/all-bigswitch.yaml





Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-15 Thread Jiří Stránský

On 15.12.2015 17:46, Emilien Macchi wrote:

For information, Puppet OpenStack CI is consistent for unit & functional
tests, we use a single (versionned) Puppetfile:
https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile

TripleO folks might want to have a look at this to follow the
dependencies actually supported by upstream OR if you prefer surfing on
the edge and risk breaking CI every morning.

Let me know if you're interested to support that in TripleO Puppet
elements, I can help with that.


Syncing tripleo-puppet-elements with puppet-openstack-integration is a 
good idea i think, to prevent breakages like the puppet-mysql one 
mentioned before.


One thing to keep in mind is that the module sets in t-p-e and p-o-i are 
not the same. E.g. recently we added the timezone module to t-p-e, and 
it's not in the p-o-i Puppetfile.


Also, sometimes we do have to go to non-openstack puppet modules to fix 
things for TripleO (i don't recall a particular example but i think we 
did a couple of fixes in non-openstack modules to allow us to deploy HA 
with Pacemaker). In cases like this it would be helpful if we still had 
the possibility to pin to something different than what's in 
puppet-openstack-integration perhaps.



Considering the above, if we could figure out a way to have t-p-e behave 
like this:


* install the module set listed in t-p-e, not p-o-i.

* if there's a ref/branch specified directly in t-p-e, use that

* if t-p-e doesn't have a ref/branch specified, use ref/branch from p-o-i

* if t-p-e doesn't have a ref/branch specified, and the module is not 
present in p-o-i, use master


* still honor DIB_REPOREF_* variables to pin individual puppet modules 
to whatever wanted at time of building the image -- very useful for 
temporary workarounds done either manually or in tripleo.sh.


...then i think this would be very useful. Not sure at the moment what 
would be the best way to meet these points though, these are just some 
immediate thoughts on the matter.
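For illustration, the precedence rules listed above could be sketched roughly like this (a hypothetical function with made-up data shapes, not actual tripleo-puppet-elements code; module names are examples):

```python
def resolve_ref(module, tpe_refs, poi_refs, env):
    """Pick the git ref to build a puppet module from.

    Precedence: DIB_REPOREF_* env var > ref pinned in t-p-e
    > ref pinned in p-o-i > master.
    """
    env_var = "DIB_REPOREF_" + module.replace("-", "_")
    if env.get(env_var):
        # manual/tripleo.sh workaround pins always win
        return env[env_var]
    if tpe_refs.get(module):
        # explicit pin in tripleo-puppet-elements
        return tpe_refs[module]
    if poi_refs.get(module):
        # fall back to the puppet-openstack-integration pin
        return poi_refs[module]
    return "master"
```

E.g. `resolve_ref("puppet-mysql", {}, {"puppet-mysql": "abc123"}, {})` would return the p-o-i pin, while an exported `DIB_REPOREF_puppet_mysql` would override it.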



Jirka



On 12/14/2015 02:25 PM, Dan Prince wrote:

On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:

Hi all,

Today TripleO CI jobs failed because a new commit introduced on
puppetlabs-mysql[1].
Mr. Jiri Stransky solved it as a temporary fix by pinning the puppet
module clone to a previous
commit in the tripleo-common project[2].

source-repositories puppet element[3] allows you to pin the puppet
module clone as well by
adding a reference commit in the source-repository-
file. In this case,
I am talking about the source-repository-puppet-modules[4].

I know you TripleO guys are brave people that live dangerously on the
cutting edge, but I think
the dependencies to puppet modules not managed by the OpenStack
community should be
pinned to last repo tag for the sake of stability.

What do you think?


I've previously considered adding a stable puppet modules element for
just this case:

https://review.openstack.org/#/c/184844/

Using stable branches of things like MySQL, Rabbit, etc might make
sense. However I would want to consider following what the upstream
Puppet community does as well specifically because we do want to
continue using upstream openstack/puppet-* modules as well. At least
for our upstream CI.

We also want to make sure our stable TripleO jobs use the stable
branches of openstack/puppet-* so we might need to be careful about
pinning those things too.

Dan



  I can take care of this.

[1]: https://github.com/puppetlabs/puppetlabs-mysql/commit/bdf4d0f52d
fc244d10bbd5b67efb791a39520ed2
[2]: https://review.openstack.org/#/c/256572/
[3]: https://github.com/openstack/diskimage-builder/tree/master/eleme
nts/source-repositories
[4]: https://github.com/openstack/tripleo-puppet-elements/blob/master
/elements/puppet-modules/source-repository-puppet-modules

--
Jaume Devesa
Software Engineer at Midokura












Re: [openstack-dev] [tripleo] Pin some puppet dependencies on git clone

2015-12-16 Thread Jiří Stránský

On 15.12.2015 19:12, Emilien Macchi wrote:



On 12/15/2015 12:23 PM, Jiří Stránský wrote:

On 15.12.2015 17:46, Emilien Macchi wrote:

For information, Puppet OpenStack CI is consistent for unit & functional
tests, we use a single (versionned) Puppetfile:
https://github.com/openstack/puppet-openstack-integration/blob/master/Puppetfile


TripleO folks might want to have a look at this to follow the
dependencies actually supported by upstream OR if you prefer surfing on
the edge and risk breaking CI every morning.

Let me know if you're interested to support that in TripleO Puppet
elements, I can help with that.


Syncing tripleo-puppet-elements with puppet-openstack-integration is a
good idea i think, to prevent breakages like the puppet-mysql one
mentioned before.

One thing to keep in mind is that the module sets in t-p-e and p-o-i are
not the same. E.g. recently we added the timezone module to t-p-e, and
it's not in the p-o-i Puppetfile.

Also, sometimes we do have to go to non-openstack puppet modules to fix
things for TripleO (i don't recall a particular example but i think we
did a couple of fixes in non-openstack modules to allow us to deploy HA
with Pacemaker). In cases like this it would be helpful if we still had
the possibility to pin to something different than what's in
puppet-openstack-integration perhaps.


Considering the above, if we could figure out a way to have t-p-e behave
like this:

* install the module set listed in t-p-e, not p-o-i.

* if there's a ref/branch specified directly in t-p-e, use that

* if t-p-e doesn't have a ref/branch specified, use ref/branch from p-o-i

* if t-p-e doesn't have a ref/branch specified, and the module is not
present in p-o-i, use master

* still honor DIB_REPOREF_* variables to pin individual puppet modules
to whatever wanted at time of building the image -- very useful for
temporary workarounds done either manually or in tripleo.sh.

...then i think this would be very useful. Not sure at the moment what
would be the best way to meet these points though, these are just some
immediate thoughts on the matter.


I think we should not use puppet-openstack-integration per se, it was
just an example.

Though we can take this project as reference to build a tool that
prepare Puppet modules in TripleO CI.

If you look at puppet-openstack-integration, we have some scripts that
allow or not to use zuul-cloner with r10k, that's nice because it allows
us to:
* use depends-on puppet patches
* if the end-user does not have zuul, it will git-clone, in tripleo case
I think if DIB_REPOREF_* is set, let's use it
* otherwise use git clone master.

I would suggest also TripleO CI having a Puppetfile that would be gated
(maybe in tripleo-ci repo?).


We should probably put the pins somewhere else than tripleo-ci, because 
we'd want dev environments to use the pinned versions too. Perhaps t-p-e 
is the right place.


The more i think about this the more i like the approach in Dan's patch 
-- an extra element which will pin modules the DIB way. What we're 
lacking here is a tool which could take a Puppetfile (specifically the 
Puppetfile from puppet-openstack-integration) and produce the 
DIB_REPOREF variables (perhaps ignoring all :ref => 'master' ones), so 
that we don't have to track and update them by hand.
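A minimal sketch of such a converter might look like this. It assumes the simple `mod 'name', :git => ..., :ref => ...` Puppetfile entry format, and the `DIB_REPOREF_puppet_<name>` variable naming is an assumption for illustration (the real variable names come from the source-repository file names):

```python
import re

def puppetfile_to_reporefs(puppetfile_text):
    """Turn non-master pins from a Puppetfile into DIB_REPOREF_*
    shell export lines (variable naming here is illustrative)."""
    exports = []
    # Each entry starts with "mod '<name>'" at the beginning of a line.
    for entry in re.split(r"(?m)^mod\s+", puppetfile_text)[1:]:
        name = re.match(r"'([\w-]+)'", entry)
        ref = re.search(r":ref\s*=>\s*'([^']+)'", entry)
        if not name or not ref or ref.group(1) == "master":
            continue  # unpinned or master-pinned modules carry no pin info
        var = "DIB_REPOREF_puppet_" + name.group(1).replace("-", "_")
        exports.append("export %s=%s" % (var, ref.group(1)))
    return exports
```

Output of this could be sourced before the image build, keeping the pins in sync with p-o-i without tracking them by hand.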


I'm not sure if we absolutely need a Puppetfile for TripleO. The value 
added is more in the pins themselves, not so much in syntax (Puppetfile 
vs. DIB-style-file). We could use Puppetfile format too, but since we'll 
not be able to use the one from puppet-openstack-integration directly 
(it's a different set of modules), i don't see much value in switching over.


Jirka



What do you think?



Jirka



On 12/14/2015 02:25 PM, Dan Prince wrote:

On Fri, 2015-12-11 at 21:50 +0100, Jaume Devesa wrote:

Hi all,

Today TripleO CI jobs failed because a new commit introduced on
puppetlabs-mysql[1].
Mr. Jiri Stransky solved it as a temporary fix by pinning the puppet
module clone to a previous
commit in the tripleo-common project[2].

source-repositories puppet element[3] allows you to pin the puppet
module clone as well by
adding a reference commit in the source-repository-
file. In this case,
I am talking about the source-repository-puppet-modules[4].

I know you TripleO guys are brave people that live dangerously on the
cutting edge, but I think
the dependencies to puppet modules not managed by the OpenStack
community should be
pinned to last repo tag for the sake of stability.

What do you think?


I've previously considered adding a stable puppet modules element for
just this case:

https://review.openstack.org/#/c/184844/

Using stable branches of things like MySQL, Rabbit, etc might make
sense. However I would want to consider following what the upstream
Puppet community does as well specifically because we do want to
continue using upstream openstack/puppet-* modules as well. At least
for our upstream CI.

We also want to make sure our stable TripleO j

Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-26 Thread Jiří Stránský




My personal preference is to say:

1. Any templates which are included in the default environment (e.g
overcloud-resource-registry-puppet.yaml), must expose their parameters
via overcloud-without-mergepy.yaml

2. Any templates which are included in the default environment, but via a
"noop" implementation *may* expose their parameters provided they are
common and not implementation/vendor specific.

3. Any templates exposing vendor specific interfaces (e.g at least anything
related to the OS::TripleO::*ExtraConfig* interfaces) must not expose any
parameters via the top level template.

How does this sound?


Pardon the longer e-mail please, but i think this topic is very 
far-reaching and impactful on the future of TripleO, perhaps even strategic, 
and i'd like to present some food for thought.



I think as we progress towards more composable/customizable overcloud, 
using parameter_defaults will become a necessity in more and more places 
in the templates.


Nowadays we can get away with hierarchical passing of some parameters 
from the top-level template downwards because we can make very strong 
assumptions about how the overcloud is structured, and what each piece 
of the overcloud takes as its parameters. Even though we support 
customization via the resource registry, it's still mostly just 
switching between alternate implementations of the same thing, not 
strong composability.



I would imagine that going forward, TripleO would receive feature 
requests to add custom node types into the deployment, be it e.g. 
separating neutron network node functionality out of controller node 
onto its own hardware, or adding custom 3rd party node types into the 
overcloud, which need to integrate with the rest of the overcloud tightly.


When such a scenario is considered, even the most code-static parameters 
like node-type-specific ExtraConfig, or a nova flavor to use for a node 
type, suddenly become dynamic on the code level (think 
parameter_defaults), simply because we can't predict upfront what node 
types we'll have.



I think a parallel with how Puppet evolved can be observed here. It used 
to be that Puppet classes included in deployments formed a sort-of 
hierarchy and got their parameters fed in a top-down cascade. This 
carried limitations on composability of machine configuration manifests 
(collisions when using the same class from multiple places, huge number 
of parameters in the higher-level manifests). Hiera was introduced to 
solve the problem, and nowadays top-level Puppet manifests contain a lot 
of include statements, and the parameter values are mostly read from 
external hiera data files, and hiera values transcend through the class 
hierarchy freely. This hinders easy discoverability of "what settings 
can i tune within this machine's configuration", but judging by the 
adoption of the approach, the benefits probably outweigh the drawbacks. 
In Puppet's case, at least :)


It seems TripleO is hitting similar composability and sanity limits with 
the top-down approach, and the number of parameters which can only be 
fed via parameter_defaults is increasing. (The disadvantage of 
parameter_defaults is that, unlike hiera, we currently have no clear 
namespacing rules, which means a higher chance of conflict. Perhaps the 
unit tests suggested in another subthread would be a good start, maybe 
we could even think about how to do proper namespacing.)
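The kind of unit test mentioned here could start from a simple heuristic like the sketch below (illustrative only: parameter and file names are made up, and YAML parsing is left out by taking already-parsed template dicts):

```python
import collections

def find_parameter_collisions(templates):
    """templates: mapping of template path -> parsed template dict.

    Return parameter names declared in more than one template, as a
    simple heuristic for potential parameter_defaults collisions."""
    declared = collections.defaultdict(set)
    for path, data in templates.items():
        # "parameters" may be absent or the template may be empty/noop
        for param in (data or {}).get("parameters") or {}:
            declared[param].add(path)
    return {name: sorted(paths) for name, paths in declared.items()
            if len(paths) > 1}
```

A test like this would at least flag the same name being declared in two places; proper namespacing conventions would still be needed to tell intentional sharing from accidental collision.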



Does what i described seem somewhat accurate? Should we maybe buy into 
the concept of "composable templates, externally fed 
hierarchy-transcending parameters" for the long term?


Thanks for reading this far :)


Jirka



Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-26 Thread Jiří Stránský

On 26.11.2015 14:12, Jiří Stránský wrote:




My personal preference is to say:

1. Any templates which are included in the default environment (e.g
overcloud-resource-registry-puppet.yaml), must expose their parameters
via overcloud-without-mergepy.yaml

2. Any templates which are included in the default environment, but via a
"noop" implementation *may* expose their parameters provided they are
common and not implementation/vendor specific.

3. Any templates exposing vendor specific interfaces (e.g at least anything
related to the OS::TripleO::*ExtraConfig* interfaces) must not expose any
parameters via the top level template.

How does this sound?


Pardon the longer e-mail please, but i think this topic is very
far-reaching and impactful on the future of TripleO, perhaps even strategic,
and i'd like to present some food for thought.


I think as we progress towards more composable/customizable overcloud,
using parameter_defaults will become a necessity in more and more places
in the templates.

Nowadays we can get away with hierarchical passing of some parameters
from the top-level template downwards because we can make very strong
assumptions about how the overcloud is structured, and what each piece
of the overcloud takes as its parameters. Even though we support
customization via the resource registry, it's still mostly just
switching between alternate implementations of the same thing, not
strong composability.


I would imagine that going forward, TripleO would receive feature
requests to add custom node types into the deployment, be it e.g.
separating neutron network node functionality out of controller node
onto its own hardware, or adding custom 3rd party node types into the
overcloud, which need to integrate with the rest of the overcloud tightly.

When such a scenario is considered, even the most code-static parameters
like node-type-specific ExtraConfig, or a nova flavor to use for a node
type, suddenly become dynamic on the code level (think
parameter_defaults), simply because we can't predict upfront what node
types we'll have.


I think a parallel with how Puppet evolved can be observed here. It used
to be that Puppet classes included in deployments formed a sort-of
hierarchy and got their parameters fed in a top-down cascade. This
carried limitations on composability of machine configuration manifests
(collisions when using the same class from multiple places, huge number
of parameters in the higher-level manifests). Hiera was introduced to
solve the problem, and nowadays top-level Puppet manifests contain a lot
of include statements, and the parameter values are mostly read from
external hiera data files, and hiera values transcend through the class
hierarchy freely. This hinders easy discoverability of "what settings
can i tune within this machine's configuration", but judging by the
adoption of the approach, the benefits probably outweigh the drawbacks.
In Puppet's case, at least :)

It seems TripleO is hitting similar composability and sanity limits with
the top-down approach, and the number of parameters which can only be
fed via parameter_defaults is increasing. (The disadvantage of
parameter_defaults is that, unlike hiera, we currently have no clear
namespacing rules, which means a higher chance of conflict. Perhaps the
unit tests suggested in another subthread would be a good start, maybe
we could even think about how to do proper namespacing.)


Does what i described seem somewhat accurate? Should we maybe buy into
the concept of "composable templates, externally fed
hierarchy-transcending parameters" for the long term?


I now realized i might have used too generic or Puppetish terms in the 
explanation, perhaps drowning the gist of the message a bit :) What i'm 
suggesting is: let's consider going with parameter_defaults wherever we 
can, for the sake of composability, and figure out what is the best way 
to prevent parameter name collisions.




Thanks for reading this far :)


Jirka







Re: [openstack-dev] [TripleO] Deploy Overcloud Keystone in HTTPD

2016-01-19 Thread Jiří Stránský

On 19.1.2016 03:59, Adam Young wrote:

I have a review here for switching Keystone to HTTPD

https://review.openstack.org/#/c/269377/

But I have no idea how to kick off the CI to really test it.  The check
came back way too quick for it to have done a full install; less than 3
minutes.  I think it was little more than a lint check.

How can I get a real sense of if it is this easy or if there is
something more that needs to be done?


Jenkins reports in two phases, first come the unit tests (in minutes), 
then the integration tests (in about 1.5 hrs minimum, depending on the 
CI load).


Jirka



Re: [openstack-dev] [TripleO] core members for tripleo-ui

2016-02-29 Thread Jiří Stránský

+1

On 29.2.2016 16:27, Dan Prince wrote:

There is a new project for the UI called tripleo-ui. As most of the
existing TripleO core members aren't going to be reviewing UI-specific
patches, it seems reasonable that we might add a few review candidates
who can focus specifically on UI-specific patches.

I'd like to propose we add Jiri Tomasek and Ana Krivokapic as core
candidates that will focus primarily on the UI. They would be added to
tripleo core but would agree to only +2 patches within the UI for now,
or at least until they are re-nominated for more general TripleO core,
etc.

Core members if you could please vote on this so we can add these
members at the close of this week. Thanks,

Dan







Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-20 Thread Jiří Stránský

On 18.1.2016 19:49, Tzu-Mainn Chen wrote:

- Original Message -

On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:


- Original Message -

On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:

Hey all,

I realize now from the title of the other TripleO/Mistral thread
[1] that
the discussion there may have gotten confused.  I think using
Mistral for
TripleO processes that are obviously workflows - stack
deployment, node
registration - makes perfect sense.  That thread is exploring
practicalities
for doing that, and I think that's great work.

What I inappropriately started to address in that thread was a
somewhat
orthogonal point that Dan asked in his original email, namely:

"what it might look like if we were to use Mistral as a
replacement for the
TripleO API entirely"

I'd like to create this thread to talk about that; more of a
'should we'
than 'can we'.  And to do that, I want to indulge in a thought
exercise
stemming from an IRC discussion with Dan and others.  All, please
correct
me
if I've misstated anything.

The IRC discussion revolved around one use case: deploying a Heat
stack
directly from a Swift container.  With an updated patch, the Heat
CLI can
support this functionality natively.  Then we don't need a
TripleO API; we
can use Mistral to access that functionality, and we're done,
with no need
for additional code within TripleO.  And, as I understand it,
that's the
true motivation for using Mistral instead of a TripleO API:
avoiding custom
code within TripleO.

That's definitely a worthy goal... except from my perspective,
the story
doesn't quite end there.  A GUI needs additional functionality,
which boils
down to: understanding the Heat deployment templates in order to
provide
options for a user; and persisting those options within a Heat
environment
file.

Right away I think we hit a problem.  Where does the code for
'understanding
options' go?  Much of that understanding comes from the
capabilities map
in tripleo-heat-templates [2]; it would make sense to me that
responsibility
for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code.  So to
give API
access to 'getDeploymentOptions', we can create a Mistral
workflow.

   Retrieve Heat templates from Swift -> Parse capabilities map

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between
storage and
business logic, a problem that is compounded because
'getDeploymentOptions'
is not the only functionality that accesses the Heat templates
and needs
exposure through an API.  And, as has been discussed on a
separate TripleO
thread, we're not even sure Swift is sufficient for our needs;
one possible
consideration right now is allowing deployment from templates
stored in
multiple places, such as the file system or git.


Actually, that whole capabilities map thing is a workaround for a
missing
feature in Heat, which I have proposed, but am having a hard time
reaching
consensus on within the Heat community:

https://review.openstack.org/#/c/196656/

Given that is a large part of what's anticipated to be provided by
the
proposed TripleO API, I'd welcome feedback and collaboration so we
can move
that forward, vs solving only for TripleO.


Are we going to have duplicate 'getDeploymentOptions' workflows
for each
storage mechanism?  If we consolidate the storage code within a
TripleO
library, do we really need a *workflow* to call a single
function?  Is a
thin TripleO API that contains no additional business logic
really so bad
at that point?


Actually, this is an argument for making the validation part of the
deployment a workflow - then the interface with the storage
mechanism
becomes more easily pluggable vs baked into an opaque-to-operators
API.

E.g, in the long term, imagine the capabilities feature exists in
Heat, you
then have a pre-deployment workflow that looks something like:

1. Retrieve golden templates from a template store
2. Pass templates to Heat, get capabilities map which defines
features user
must/may select.
3. Prompt user for input to select required capabilites
4. Pass user input to Heat, validate the configuration, get a
mapping of
required options for the selected capabilities (nested validation)
5. Push the validated pieces ("plan" in TripleO API terminology) to
a
template store

This is a pre-deployment validation workflow, and it's a superset
of the
getDeploymentOptions feature you refer to.

Historically, TripleO has had a major gap wrt workflow, meaning
that we've
always implemented it either via shell scripts (tripleo-incubator)
or
python code (tripleo-common/tripleo-client, potentially TripleO
API).

So I think what Dan is exploring is, how do we avoid reimplementing
a
workflow engine, when a project exists which already does that.


My gut reaction is to say that proposing Mistral in place of a
TripleO API
is to look at the engineering concerns from the wrong
direction.  The

Re: [openstack-dev] [TripleO] propose ejuaso for core

2016-03-14 Thread Jiří Stránský

+1

On 14.3.2016 15:38, Dan Prince wrote:

http://russellbryant.net/openstack-stats/tripleo-reviewers-180.txt

Our top reviewer over the last half year is ejuaso (he goes by Ozz for
Osorio, or jaosorior on IRC). His reviews seem consistent, he
consistently attends the meetings and he chimes in on lots of things.
I'd like to propose we add him to our core team (probably long overdue
now too).

If you agree please +1. If there is no negative feedback I'll add him
next Monday.

Dan







Re: [openstack-dev] [TripleO] Aodh upgrades - Request backport exception for stable/liberty

2016-05-18 Thread Jiří Stránský

On 16.5.2016 23:54, Pradeep Kilambi wrote:

On Mon, May 16, 2016 at 3:33 PM, James Slagle wrote:


On Mon, May 16, 2016 at 10:34 AM, Pradeep Kilambi  wrote:

Hi Everyone:

I wanted to start a discussion around considering backporting Aodh to
stable/liberty for upgrades. We have been discussing quite a bit on what's
the best way for our users to upgrade ceilometer alarms to Aodh when moving
from liberty to mitaka. A quick refresh on what changed: in Mitaka,
ceilometer alarms were replaced by Aodh, so the only way to get alarms
functionality is to use aodh. Now when the user kicks off upgrades from
liberty to Mitaka, we want to make sure alarms continue to function as
expected during the process, which could take multiple days. To accomplish
this I propose the following approach:

* Backport Aodh functionality to stable/liberty. Note, Aodh functionality
is backwards compatible, so with Aodh running, ceilometer api and client
will redirect requests to Aodh api. So this should not impact existing
users who are using ceilometer api or client.

* As part of Aodh deployed via heat stack update, ceilometer alarms
services will be replaced by openstack-aodh-*. This will be done by the
puppet apply as part of stack convergence phase.

* Add checks in the Mitaka pre upgrade steps when overcloud install kicks
off to check and warn the user to update to liberty + aodh to ensure aodh
is running. This will ensure heat stack update is run and, if alarming is
used, Aodh is running as expected.

The upgrade scenarios between various releases would work as follows:

Liberty -> Mitaka

* Upgrade starts with ceilometer alarms running
* A pre-flight check will kick in to make sure Liberty is upgraded to
liberty + aodh with stack update
* Run heat stack update to upgrade to aodh
* Now ceilometer alarms should be removed and Aodh should be running
* Proceed with mitaka upgrade
* End result, Aodh continues to run as expected

Liberty + aodh -> Mitaka:

* Upgrade starts with Aodh running
* A pre-flight check will kick in to make sure Liberty is upgraded to Aodh
with stack update
* Confirming Aodh is indeed running, proceed with Mitaka upgrade with Aodh
running
* End result, Aodh continues to run as expected


This seems to be a good way to get the upgrades working for aodh. Other
less effective options I can think of are:

1. Let the Mitaka upgrade kick off and do "yum update", which replaces aodh
during migration; alarm functionality will be down until puppet converge
runs and configures Aodh. This means alarms will be down during upgrade,
which is not ideal.

2. During Mitaka upgrades, replace with Aodh and add a bash script that
fully configures Aodh and ensures aodh is functioning. This will involve
significant work and results in duplicating everything puppet does today.


How much duplication would this really be? Why would it have to be in bash?



Well, pretty much the entire aodh configuration will need to happen. Here is
what we do in devstack, something along these lines[1]. So in short, we'll
need to install, create users, configure db and coordination backends, and
configure api to run under mod wsgi. Sure, it doesn't have to be bash; I
assumed that would be easiest to invoke during upgrades.





Could it be:

Liberty -> Mitaka

* Upgrade starts with ceilometer alarms running
* Add a new hook for the first step of Mitaka upgrade that does:
  ** sets up mitaka repos
  ** migrates from ceilometer alarms to aodh, can use puppet
  ** ensures aodh is running
* Proceed with rest of mitaka upgrade

At most, it seems we'd have to surround the puppet apply with some
pacemaker commands to possibly set maintenance mode and migrate
constraints.

The puppet manifest itself would just be the includes and classes for aodh.
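The shape of that migration step could be sketched as follows. This is a rough illustration only: the pacemaker maintenance-mode commands and the manifest name are assumptions, and real upgrade code would also need error handling and constraint migration:

```python
import subprocess

def migrate_to_aodh(manifest_path, run=subprocess.check_call):
    """Sketch of the migration step described above: set pacemaker
    maintenance mode, apply an aodh-only Puppet manifest, then unset
    maintenance mode. `run` is injectable to allow testing."""
    steps = [
        ["pcs", "property", "set", "maintenance-mode=true"],
        ["puppet", "apply", manifest_path],
        ["pcs", "property", "set", "maintenance-mode=false"],
    ]
    for cmd in steps:
        run(cmd)
    return steps
```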




Yea I guess we could do something like this, i'm not fully clear on the
details on how and when this would be called. But with the below caveat you
mentioned already.



Yes, this is a possibility. It's still not fully utilizing the Puppet we 
have for deployment (we'd have at least a custom manifest), but hopefully 
it wouldn't be too big.


In case the AODH classes include some other Puppet classes from their 
code, we could end up applying more config changes than desired in this 
phase and break something. I'm hoping this is more of a theoretical 
concern than a practical one, but it probably deserves some verification.







One complication might be that the aodh packages from Mitaka might
pull in new deps that require updating other OpenStack services,
which we wouldn't yet want to do. That is probably worth confirming
though.



Yea we will be pulling in at least some new oslo deps and client libraries
for sure. But wouldn't yum update during the upgrades do that anyway? Or
would aodh setup run before the yum update phase of the upgrade process?


Good question :) We could probably also do it in the middle of 
controller update phase, between step 1 (stop 

Re: [openstack-dev] [tripleo] State of upgrade CLI commands

2016-08-17 Thread Jiří Stránský

On 16.8.2016 21:08, Brad P. Crochet wrote:

Hello TripleO-ians,

I've started to look again at the introduced, but unused/undocumented
upgrade commands. It seems to me that given the current state of the
upgrade process (at least from Liberty -> Mitaka), these commands make
a lot less sense.

I see one of two directions to take on this. Of course I would love to
hear other options.

1) Revert these commands immediately, and forget they ever existed.
They don't exactly work, and as I said, were never officially
documented, so I don't think a revert is out of the question.

or

2) Do a major overhaul, and rethink the interface entirely. For
instance, the L->M upgrade introduced a couple of new steps (the AODH
migration and the Keystone migration). These would have either had to
have completely new commands added, or have some type of override to
the existing upgrade command to handle them.

Personally, I would go for step 1. The 'overcloud deploy' command can
accomplish all of the upgrade steps that involve Heat. In order for
the new upgrade commands to work properly, there's a lot that needs to
be refactored out of the deploy command itself so that it can be
shared with deploy and upgrade, like passing of passwords and the
like. I just don't see a need for discrete commands when we have an
existing command that will do it for us. And with the addition of an
answer file, it makes it even easier.

Thoughts?



+1 for approach no. 1. Currently `overcloud deploy` meets the upgrade 
needs and it gave us some flexibility to e.g. do migrations like AODH 
and Keystone WSGI. I don't think we should have a special command for 
upgrades at this point.


The situation may change as we go towards upgrades of composable 
services, and perhaps wrap upgrades in Mistral if/when applicable, but 
then the potential upgrade command(s) would probably be different from 
the current ones anyway, so +1 for removing them.


Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Dmitry Tantsur and Alex Schultz as instack-undercloud cores

2017-02-01 Thread Jiří Stránský

On 31.1.2017 19:08, Ben Nemec wrote:



On 01/31/2017 11:03 AM, James Slagle wrote:

On Tue, Jan 31, 2017 at 11:02 AM, Ben Nemec  wrote:

In the spirit of all the core team changes, here are a couple more I'd like
to propose.

Dmitry has been very helpful reviewing in instack-undercloud for a long time
so this is way overdue.  I'm also going to propose that he be able to +2
anything Ironic-related in TripleO since that is his primary area of
expertise.

Alex has ramped up quickly on TripleO and has also been helping out with
instack-undercloud quite a bit.  He's already core for the puppet modules,
and a lot of the changes to instack-undercloud these days are primarily in
the puppet manifest so it's not a huge stretch to add him.

As usual, TripleO cores please vote and/or provide comments.  Thanks.


+1 both



-Ben



+1. Dmitry is also one of the top reviewers and committers to
tripleo-docs. I wouldn't be opposed to him having +2 there as well.



Oh, good call.  I forgot to mention that too.  +1 to adding him as docs
core as well.


+1



Re: [openstack-dev] [tripleo] Update TripleO core members

2017-01-25 Thread Jiří Stránský

On 23.1.2017 20:03, Emilien Macchi wrote:

Greeting folks,

I would like to propose some changes in our core members:

- Remove Jay Dobies who has not been active in TripleO for a while
(thanks Jay for your hard work!).
- Add Flavio Percoco core on tripleo-common and tripleo-heat-templates
docker bits.
- Add Steve Backer on os-collect-config and also docker bits in
tripleo-common and tripleo-heat-templates.

Indeed, both Flavio and Steve have been involved in deploying TripleO
in containers, their contributions are very valuable. I would like to
encourage them to keep doing more reviews in and out container bits.

As usual, core members are welcome to vote on the changes.


+1



Thanks,






Re: [openstack-dev] [tripleo] Proposing Honza Pokorny core on tripleo-ui

2017-01-25 Thread Jiří Stránský

On 24.1.2017 14:52, Emilien Macchi wrote:

I have been discussing with TripleO UI core reviewers and it's pretty
clear Honza's work has been valuable, so we can propose him as part of
the TripleO UI core team.
The quality of his code and reviews makes him a good candidate, and it
would also help the other 2 core reviewers accelerate the review process
in the UI component.

Like usual, this is open for discussion, Tripleo UI core and TripleO
core, please vote.


+1



Thanks,






Re: [openstack-dev] [TripleO] Proposing Sergey (Sagi) Shnaidman for core on tripleo-ci

2017-01-25 Thread Jiří Stránský

On 24.1.2017 18:03, Juan Antonio Osorio wrote:

Sagi (sshnaidm on IRC) has done significant work in TripleO CI (both
on the current CI solution and in getting tripleo-quickstart jobs for
it), so I would like to propose him as part of the TripleO CI core team.



+1


I think he'll make a great addition to the team and will help move CI
issues forward quicker.

Best Regards,







Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský


On 30.8.2016 10:17, Steven Hardy wrote:




Yeah, that gets us closer, but we do need to handle more than one value
(list entry) per key, e.g:

 data:
   l:
 - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
   "tripleo_packages_node_names": ["a0", "a1", "a2"]
 - "nova_compute_node_names": ["b0"]
   "tripleo_packages_node_names": ["b0"]

Output needs to be like:

 "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
 "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
 "nova_compute_node_names": ["b0"]



Hoping this could do it:

[stack@instack ~]$ cat yaq.yaml
heat_template_version: 2016-10-14

outputs:
  debug:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
  "tripleo_packages_node_names": ["a0", "a1", "a2"]
- "nova_compute_node_names": ["b0"]
  "tripleo_packages_node_names": ["b0"]


[stack@instack ~]$ heat output-show yaq debug
WARNING (shell) "heat output-show" is deprecated, please use "openstack 
stack output show" instead

{
  "gnocchi_metricd_node_names": [
"a0",
"a1",
"a2"
  ],
  "tripleo_packages_node_names": [
"a0",
"a1",
"a2",
"b0"
  ],
  "nova_compute_node_names": [
"b0"
  ]
}
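For intuition, the deep merge that `reduce($1.mergeWith($2))` performs on this data shape can be sketched in plain Python. `merge_with` below is an illustrative stand-in for yaql's behavior on dicts of lists, not yaql's actual implementation:

```python
# Illustrative stand-in for yaql's mergeWith on dicts of lists:
# on key collision, concatenate the lists (matching the output above).
from functools import reduce

def merge_with(a, b):
    out = dict(a)
    for key, val in b.items():
        out[key] = out.get(key, []) + val
    return out

l = [
    {"gnocchi_metricd_node_names": ["a0", "a1", "a2"],
     "tripleo_packages_node_names": ["a0", "a1", "a2"]},
    {"nova_compute_node_names": ["b0"],
     "tripleo_packages_node_names": ["b0"]},
]

merged = reduce(merge_with, l)
print(merged["tripleo_packages_node_names"])  # → ['a0', 'a1', 'a2', 'b0']
```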

Jirka



Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský

 expression: $.data.l.reduce($1.mergeWith($2))


Or maybe it's better with seed value for reduce, just in case:

$.data.l.reduce($1.mergeWith($2), {})
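A seed also guards against an empty input list; the same behavior is easy to see with Python's `functools.reduce`, used here as a rough analogy rather than yaql itself:

```python
from functools import reduce

def merge(a, b):
    # Shallow stand-in for mergeWith; enough to show the seed's role.
    return {**a, **b}

# Seeded reduce of an empty list returns the seed instead of raising.
print(reduce(merge, [], {}))  # → {}
# reduce(merge, []) with no seed would raise TypeError.
```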


Jirka



Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský

On 30.8.2016 18:16, Zane Bitter wrote:

On 30/08/16 12:02, Steven Hardy wrote:

  debug_tripleo2:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

$ heat output-show foo5 debug_tripleo2
WARNING (shell) "heat output-show" is deprecated, please use "openstack 
stack output show" instead
Output error: can only concatenate tuple (not "list") to tuple

I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
it looks like it won't work.


It works flawlessly in yaqluator, so that sounds like a Heat bug.


Ack, i reported it so that it doesn't fall through the cracks:

https://bugs.launchpad.net/heat/+bug/1618538


Jirka



- ZB



Re: [openstack-dev] [heat][yaql] Deep merge map of lists?

2016-08-30 Thread Jiří Stránský

On 30.8.2016 18:02, Steven Hardy wrote:

On Tue, Aug 30, 2016 at 04:10:47PM +0200, Jiří Stránský wrote:


On 30.8.2016 10:17, Steven Hardy wrote:




Yeah, that gets us closer, but we do need to handle more than one value
(list entry) per key, e.g:

 data:
   l:
 - "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
   "tripleo_packages_node_names": ["a0", "a1", "a2"]
 - "nova_compute_node_names": ["b0"]
   "tripleo_packages_node_names": ["b0"]

Output needs to be like:

 "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
 "tripleo_packages_node_names": ["a0", "a1", "a2", "b0"]
 "nova_compute_node_names": ["b0"]



Hoping this could do it:

[stack@instack ~]$ cat yaq.yaml
heat_template_version: 2016-10-14

outputs:
  debug:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["a0", "a1", "a2"]
  "tripleo_packages_node_names": ["a0", "a1", "a2"]
- "nova_compute_node_names": ["b0"]
  "tripleo_packages_node_names": ["b0"]


Thanks for this!

Unfortunately I don't think it works with more than two list items:

  debug_tripleo2:
value:
  yaql:
expression: $.data.l.reduce($1.mergeWith($2))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

$ heat output-show foo5 debug_tripleo2
WARNING (shell) "heat output-show" is deprecated, please use "openstack 
stack output show" instead
Output error: can only concatenate tuple (not "list") to tuple

I've not dug too deeply yet, but assuming that's a yaql error vs a heat bug
it looks like it won't work.


Hmm yea that's strange, because YAQL has a test case for reduce() with 5 
items:


https://github.com/openstack/yaql/blob/f71a0305089997cbfa5ff00f660920711b04f39e/yaql/tests/test_queries.py#L337-L339
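For what it's worth, the error message itself is plain Python semantics, consistent with Heat handing yaql a tuple where the expression expects a list:

```python
# Tuples and lists don't concatenate in Python, which produces exactly
# the error Heat reported.
try:
    (1, 2) + [3]
except TypeError as err:
    print(err)  # → can only concatenate tuple (not "list") to tuple
```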

Anyway, good that we have the solution below that works :)

Jirka



However I did find an approach earler with therve which seems to do what is
needed:

 debug_tripleo:
value:
  yaql:
# $.selectMany($.items()).groupBy($[0], $[1][0])
# reduce($1 + $2)')
# dict($.selectMany($.items()).groupBy($[0], $[1], [$[0],
# $[1].flatten()]))
expression: dict($.data.l.selectMany($.items()).groupBy($[0], $[1],
[$[0], $[1].flatten()]))
data:
  l:
- "gnocchi_metricd_node_names": ["overcloud-controller-0",
  "overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
  "tripleo_packages_node_names2": ["overcloud-controller-0", 
"overcloud-controller-1", "overcloud-controller-2"]
- "nova_compute_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names": ["overcloud-compute-0"]
  "tripleo_packages_node_names2": ["overcloud-compute-0"]
- "ceph_osd_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names": ["overcloud-cephstorage-0"]
  "tripleo_packages_node_names2": ["overcloud-cephstorage-0"]

Output:

$ heat output-show foo5 debug_tripleo
WARNING (shell) "heat output-show" is deprecated, please use "openstack 
stack output show" instead
{
  "gnocchi_metricd_node_names": [
"overcloud-controller-0",
"overcloud-controller-1",
"overcloud-controller-2"
  ],
  "tripleo_packages_node_names": [
"overcloud-controller-0",
"overcloud
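For intuition, the selectMany/groupBy pipeline can be emulated in plain Python: flatten all (key, list) pairs, group by key, and concatenate the lists. This is a rough sketch of the semantics, not yaql itself:

```python
from itertools import chain

l = [
    {"gnocchi_metricd_node_names": ["overcloud-controller-0"],
     "tripleo_packages_node_names": ["overcloud-controller-0"]},
    {"nova_compute_node_names": ["overcloud-compute-0"],
     "tripleo_packages_node_names": ["overcloud-compute-0"]},
]

# selectMany($.items()): one flat stream of (key, value-list) pairs
pairs = chain.from_iterable(d.items() for d in l)

# groupBy($[0], $[1], [$[0], $[1].flatten()]): group by key, flatten lists
grouped = {}
for key, values in pairs:
    grouped.setdefault(key, []).extend(values)

print(grouped["tripleo_packages_node_names"])
# → ['overcloud-controller-0', 'overcloud-compute-0']
```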

Re: [openstack-dev] [tripleo] proposing Michele Baldessari part of core team

2016-11-04 Thread Jiří Stránský
+1, Michele does great reviews, and his contributions around HA and 
upgrades have been crucial.


On 4.11.2016 18:40, Emilien Macchi wrote:

Michele Baldessari (bandini on IRC) has consistently demonstrated high
levels of contributions in TripleO projects, specifically in the High
Availability area, where he's a guru for us (I still don't understand
how pacemaker works, but hopefully he does).

He has done incredible work on composable services and also on
improving our HA configuration by following reference architectures.
Always here during meetings, and on #tripleo to give support to our
team, he's a great team player and we are lucky to have him onboard.
I believe he would be a great core reviewer on HA-related work and we
expect his review stats to continue improving as his scope broadens
over time.

As usual, feedback is welcome and please vote for this proposal!

Thanks,






Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-01 Thread Jiří Stránský

On 1.12.2016 23:26, Emilien Macchi wrote:

Team,

Alex Schultz (mwhahaha on IRC) has been active on TripleO for a few
months now.  While he's very active in different areas of TripleO, his
reviews and contributions on puppet-tripleo have been very useful.
Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
think he perfectly understands how puppet-tripleo works. His
involvement in the project and contributions on puppet-tripleo deserve
that we allow him to +2 puppet-tripleo.

Thanks Alex for your involvement and hard work in the project, this is
very appreciated!

As usual, I'll let the team to vote about this proposal.

Thanks,


+1



Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-03-30 Thread Jiří Stránský

On 30.3.2017 14:58, Dan Prince wrote:

There is one case that I was thinking about reusing this piece of code
within a container to help initialize keystone endpoints. It would
require some changes and updates (to match how puppet-* configures
endpoints).

For TripleO containers we use various puppet modules (along with hiera)
to drive the creation of endpoints. This functionally works fine, but
is quite slow to execute (puppet is slow here) and takes several
minutes to complete. I'm wondering if a single optimized python script
might serve us better here. It could be driven via YAML (perhaps
similar to our Hiera), idempotent, and likely much faster than having
the code driven by puppet. This doesn't have to live in os-cloud-
config, but initially I thought that might be a reasonable place for
it. It is worth pointing out that this would be something that would
need to be driven by our t-h-t workflow and not a post-installation
task. So perhaps that makes it not a good fit for os-cloud-config. But
it is similar to the keystone initialization already there so I thought
I'd mention it.


I agree we could have an optimized python script instead of puppet to do 
the init. However, os-cloud-config also doesn't strike me as the ideal 
place.
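To make the idea concrete, a minimal sketch of such a YAML-driven, idempotent init script might look like this. `ensure_endpoint`, the data shape, and the URLs are assumptions for illustration, not an existing TripleO or Keystone API:

```python
# Hypothetical sketch: the desired endpoints would come from a YAML file;
# the script only acts when current state differs, so re-runs are no-ops.
desired = {
    "keystone": {"public": "http://192.0.2.1:5000"},
    "nova": {"public": "http://192.0.2.1:8774"},
}
current = {"keystone": {"public": "http://192.0.2.1:5000"}}

def ensure_endpoint(service, interface, url, state):
    # Idempotent: report "unchanged" when the endpoint already matches.
    if state.get(service, {}).get(interface) == url:
        return "unchanged"
    state.setdefault(service, {})[interface] = url
    return "created"

actions = {svc: ensure_endpoint(svc, iface, url, current)
           for svc, ifaces in desired.items()
           for iface, url in ifaces.items()}
print(actions)  # → {'keystone': 'unchanged', 'nova': 'created'}
```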


What might be interesting is solving the keystone init within containers 
along with our container entrypoint situation. We've talked earlier that 
we may have to build our custom entrypoints into the images as we 
sometimes need to do things that the current entrypoints don't seem fit 
for, or don't give us enough control over what happens. This single 
optimized python script for endpoint config you mentioned could be one 
of such in-image entrypoint scripts. We could build multiple different 
scripts like this into a single image and select the right one when 
starting the container (defaulting to a script that handles the usual 
"worker" case, in this case Keystone API).
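A minimal sketch of such in-image entrypoint selection could look like this; the script names, paths, and the environment variable are assumptions for illustration, not an existing convention:

```python
# Hypothetical in-image entrypoint dispatcher: pick one of several
# bundled scripts by name, defaulting to the usual "worker" case.
import os
import sys

ENTRYPOINTS = {
    "worker": "/usr/local/bin/start-worker.py",
    "endpoint-init": "/usr/local/bin/endpoint-init.py",
}

def select_entrypoint(argv, env):
    # Allow selection via env var or first CLI argument; unknown names
    # fall back to the default "worker" entrypoint.
    name = env.get("TRIPLEO_ENTRYPOINT",
                   argv[1] if len(argv) > 1 else "worker")
    return ENTRYPOINTS.get(name, ENTRYPOINTS["worker"])

print(select_entrypoint(["container-init"], {}))
```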


This gets somewhat similar to the os-cloud-config usecase, but even if 
we wanted a separate repo, or even a RPM for these, i suppose it would 
be cleaner to just start from scratch rather than repurpose os-cloud-config.


Jirka



Dan

On Thu, 2017-03-30 at 08:13 -0400, Emilien Macchi wrote:

Hi,

os-cloud-config was deprecated in the Ocata release and is going to
be
removed in Pike.

TripleO project doesn't need it anymore and after some investigation
in codesearch.openstack.org, nobody is using it in OpenStack.
I'm working on the removal this cycle, please let us know any
concern.

Thanks,




Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-03-30 Thread Jiří Stránský

On 30.3.2017 18:02, Bogdan Dobrelya wrote:

On 30.03.2017 15:40, Jiří Stránský wrote:


What might be interesting is solving the keystone init within containers
along with our container entrypoint situation. We've talked earlier that
we may have to build our custom entrypoints into the images as we
sometimes need to do things that the current entrypoints don't seem fit
for, or don't give us enough control over what happens. This single
optimized python script for endpoint config you mentioned could be one
of such in-image entrypoint scripts. We could build multiple different


I'm concerned about having entry points in-image. Could it be mounted as a
hostpath instead, then executed? Custom entry-points could replace
existing ones this way. This would allow keep kolla or other images
clean from side changes.


That was actually my initial thought as well, but it means more 
entanglement between the containers and the bare-metal hosts, and 
creates some new issues.


E.g. it makes container image versioning harder. We'd need to implement 
additional logic to make sure we use the correct entrypoint version for 
a particular container image version (think rolling back to an older 
image but still using the newest entrypoint, perhaps those two not being 
fully compatible, and having the container crash because of this). This 
alone is quite disadvantageous IMO.


Jirka




scripts like this into a single image and select the right one when
starting the container (defaulting to a script that handles the usual


We could use a clean container and mount in what we need. Those entry
points looks similar to heat agent hooks, right? I think they should be
packaged as a separate artifacts.


"worker" case, in this case Keystone API).

This gets somewhat similar to the os-cloud-config usecase, but even if
we wanted a separate repo, or even a RPM for these, i suppose it would
be cleaner to just start from scratch rather than repurpose
os-cloud-config.

Jirka








Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-03-31 Thread Jiří Stránský

On 30.3.2017 17:39, Juan Antonio Osorio wrote:

Why not drive the post-config with something like shade over ansible?
Similar to what the kolla-ansible community is doing.


We could use those perhaps, if they bring enough benefit to add them to 
the container image(s) (i think we'd still want to drive it via a 
container rather than fully externally). It's quite tempting to just 
load a yaml file with the endpoint definitions and just iterate over 
them and let Ansible handle the actual API calls...


However, currently i can't see endpoint management in the cloud modules 
docs [1], just service management. Looks like there's still a feature 
gap at the moment.


Jirka

[1] http://docs.ansible.com/ansible/list_of_cloud_modules.html#openstack



On 30 Mar 2017 16:42, "Jiří Stránský" <ji...@redhat.com> wrote:


On 30.3.2017 14:58, Dan Prince wrote:


There is one case that I was thinking about reusing this piece of code
within a container to help initialize keystone endpoints. It would
require some changes and updates (to match how puppet-* configures
endpoints).

For TripleO containers we use various puppet modules (along with hiera)
to drive the creation of endpoints. This functionally works fine, but
is quite slow to execute (puppet is slow here) and takes several
minutes to complete. I'm wondering if a single optimized python script
might serve us better here. It could be driven via YAML (perhaps
similar to our Hiera), idempotent, and likely much faster than having
the code driven by puppet. This doesn't have to live in os-cloud-
config, but initially I thought that might be a reasonable place for
it. It is worth pointing out that this would be something that would
need to be driven by our t-h-t workflow and not a post-installation
task. So perhaps that makes it not a good fit for os-cloud-config. But
it is similar to the keystone initialization already there so I thought
I'd mention it.



I agree we could have an optimized python script instead of puppet to do
the init. However, os-cloud-config also doesn't strike me as the ideal
place.

What might be interesting is solving the keystone init within containers
along with our container entrypoint situation. We've talked earlier that we
may have to build our custom entrypoints into the images as we sometimes
need to do things that the current entrypoints don't seem fit for, or don't
give us enough control over what happens. This single optimized python
script for endpoint config you mentioned could be one of such in-image
entrypoint scripts. We could build multiple different scripts like this
into a single image and select the right one when starting the container
(defaulting to a script that handles the usual "worker" case, in this case
Keystone API).

This gets somewhat similar to the os-cloud-config usecase, but even if we
wanted a separate repo, or even a RPM for these, i suppose it would be
cleaner to just start from scratch rather than repurpose os-cloud-config.

Jirka



Dan

On Thu, 2017-03-30 at 08:13 -0400, Emilien Macchi wrote:


Hi,

os-cloud-config was deprecated in the Ocata release and is going to
be
removed in Pike.

TripleO project doesn't need it anymore and after some investigation
in codesearch.openstack.org, nobody is using it in OpenStack.
I'm working on the removal this cycle, please let us know any
concern.

Thanks,






Re: [openstack-dev] [tripleo] Roadmap for Container CI work

2017-04-06 Thread Jiří Stránský

On 6.4.2017 12:46, Jiří Stránský wrote:

On 4.4.2017 22:01, Emilien Macchi wrote:

After our weekly meeting of today, I found useful to share and discuss
our roadmap for Container CI jobs in TripleO.
They are ordered by priority from the highest to lowest:

1. Swap ovb-nonha job with ovb-containers, enable introspection on the
container job and shuffle other coverage (e.g ssl) to other jobs
(HA?). It will help us to get coverage for ovb-containers scenario
again, without consuming more rh1 resources and keep existing
coverage.
2. Get multinode coverage of deployments - this should integrate with
the scenarios we already have defined for non-container deployment.
This is super important to cover all overcloud services, like we did
with classic deployments. It should be non voting to start and then
voting once it works. We should find a way to keep the same templates
as we have now, and just include the docker environment. In other
words, find a way to keep using:
https://github.com/openstack/tripleo-heat-templates/blob/master/ci/environments/scenario001-multinode.yaml
so we don't duplicate scenario environments.
3. Implement container upgrade job, which for Pike will be to deploy a
baremetal overcloud, then migrate to containers on upgrade. Use
multinode jobs for this task. Start with a non-voting job and move to
the gate once it works. I also suggest using the scenarios framework, so
we keep good coverage.


The first iteration of this job is ready to be reviewed and landed.
Please see the patches here [1].

The latest job execution didn't go all the way to success yet, it failed
during Ansible upgrade steps execution [2], but i think the patches are
now far enough that they would be good to merge anyway, and issues can
be ironed out subsequently, as well as making the job actually
Ocata->master rather than master->master (currently just switching from
non-containers to containers).

[1] https://review.openstack.org/#/q/topic:container-upgrade
[2]
http://logs.openstack.org/84/450784/8/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/a1850f7/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz


Sorry the [2] link was incorrect, this is the right one:

http://logs.openstack.org/84/450784/9/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/23f9190/logs/undercloud/home/jenkins/overcloud_upgrade_console.log.txt.gz




4. After we implement the workflow for minor updates, have a job that
tests container-to-container updates for minor (rolling) updates; this
ideally should add some coverage to ensure no downtime of APIs and
possibly checks for service restarts (ref recent bugs about bouncing
services on minor updates)
5. Once Pike is released and Queens starts, let's work on container to
containers upgrade job.

Any feedback or question is highly welcome,

Note: The proposal comes from shardy's notes on
https://etherpad.openstack.org/p/tripleo-container-ci - feel free to
contribute to the etherpad or mailing list.

Thanks,








Re: [openstack-dev] [tripleo] Roadmap for Container CI work

2017-04-06 Thread Jiří Stránský

On 4.4.2017 22:01, Emilien Macchi wrote:

After our weekly meeting of today, I found useful to share and discuss
our roadmap for Container CI jobs in TripleO.
They are ordered by priority from the highest to lowest:

1. Swap ovb-nonha job with ovb-containers, enable introspection on the
container job and shuffle other coverage (e.g ssl) to other jobs
(HA?). It will help us to get coverage for ovb-containers scenario
again, without consuming more rh1 resources and keep existing
coverage.
2. Get multinode coverage of deployments - this should integrate with
the scenarios we already have defined for non-container deployment.
This is super important to cover all overcloud services, like we did
with classic deployments. It should be non voting to start and then
voting once it works. We should find a way to keep the same templates
as we have now, and just include the docker environment. In other
words, find a way to keep using:
https://github.com/openstack/tripleo-heat-templates/blob/master/ci/environments/scenario001-multinode.yaml
so we don't duplicate scenario environments.
3. Implement container upgrade job, which for Pike will be to deploy a
baremetal overcloud, then migrate to containers on upgrade. Use
multinode jobs for this task. Start with a non-voting job and move to
the gate once it works. I also suggest using the scenarios framework, so
we keep good coverage.


The first iteration of this job is ready to be reviewed and landed. 
Please see the patches here [1].


The latest job execution didn't go all the way to success yet, it failed 
during Ansible upgrade steps execution [2], but i think the patches are 
now far enough that they would be good to merge anyway, and issues can 
be ironed out subsequently, as well as making the job actually 
Ocata->master rather than master->master (currently just switching from 
non-containers to containers).


[1] https://review.openstack.org/#/q/topic:container-upgrade
[2] 
http://logs.openstack.org/84/450784/8/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/a1850f7/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz



4. After we implement the workflow for minor updates, have a job that
tests container-to-container updates for minor (rolling) updates; this
ideally should add some coverage to ensure no downtime of APIs and
possibly checks for service restarts (ref recent bugs about bouncing
services on minor updates)
5. Once Pike is released and Queens starts, let's work on container to
containers upgrade job.

Any feedback or question is highly welcome,

Note: The proposal comes from shardy's notes on
https://etherpad.openstack.org/p/tripleo-container-ci - feel free to
contribute to the etherpad or mailing list.

Thanks,






Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Jiří Stránský

On 6.4.2017 11:53, Martin André wrote:

Hellooo,

I'd like to propose we extend Florian Fuchs +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, tripleo technically, so this means there are no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor in the project [1]. He has built
a good expertise over the last months and I think it's time he has
officially the right to approve changes in tripleo-validations.


+1



Consider this my +1 vote.

Martin

[1] 
http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike



Re: [openstack-dev] [tripleo][kolla] extended start and hostpath persistent dirs

2017-04-10 Thread Jiří Stránský
Responses inline :) I started snipping parts out because the quotations 
are getting long.





tl;dr use kolla images and bootstrap OR upstream images with direct
commands:

.. code-block:: yaml

   kolla_config:
     /var/lib/kolla/config_files/foo.json:
       command: /usr/bin/foo


We insist on using Kolla extended start only where a service can't
function without it when using Kolla images (e.g. MySQL and RabbitMQ
need some extra initialization)



I don't think we need to insist on this, in fact i'd even prefer not 
using kolla_start in cases where it doesn't provide significant value.


In addition to the Kolla flavor that Dan mentioned earlier, using these 
entrypoints also increases chance of TripleO being broken by a commit to 
Kolla, because Kolla commits aren't gated on deploying TripleO.


If the only thing we need is the file permissions setup, we don't even 
need to use kolla_start for that, we can use kolla_set_configs [4] 
directly in an init container. (But again, i don't think we need to 
force this as a rule.)
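For illustration, here's a minimal sketch of that init-container idea in the docker-cmd format used by tripleo-heat-templates. Everything here is invented for the example: the service name "foo", the image URL, the step number and the bind-mount paths.

```yaml
# Hypothetical docker_config snippet: an init container runs
# kolla_set_configs as root to copy config files into place with the
# right ownership, then the service container starts the daemon
# directly, without the kolla_start entrypoint.
docker_config:
  step_3:
    foo_init:
      image: 172.19.0.2:8787/tripleoupstream/centos-binary-foo:latest
      user: root
      command: ['/usr/local/bin/kolla_set_configs']
      volumes:
        - /var/lib/kolla/config_files/foo.json:/var/lib/kolla/config_files/config.json:ro
        - /var/lib/config-data/foo:/var/lib/kolla/config_files/src:ro
        - /var/lib/foo-config:/etc/foo:rw
    foo:
      image: 172.19.0.2:8787/tripleoupstream/centos-binary-foo:latest
      user: foo
      command: /usr/bin/foo
      volumes:
        - /var/lib/foo-config:/etc/foo:ro
```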






vs

.. code-block:: yaml

   foo:
     image: upstream/foo:latest
     command: /usr/bin/foo


We accept direct commands as well, if they work without "user: root" for
Kolla containers omitting extended start, OR if they just work as-is with
upstream (non-Kolla) containers, like etcd [2] and perhaps redis


I think using direct commands is fine.

However, i think we should avoid targeting any images that we can't 
build ourselves easily. One of the benefits of using Kolla images is a 
uniform way how to build them.





* Custom entrypoints for containers add complexity and headaches.


Good point. But the entry points Kolla uses for many containers don't
match what our systemd services already use on baremetal. As we are
striving for an upgrade path that does not break end users upgrading from
baremetal to containers, we have to have a mechanism that gives us
configuration parity across the implementations. Controlling the entry
point either by injecting it into the container (via something like
Kolla's template overrides mechanism) or via tripleo-heat-templates
direction (much more hackable) is where we ended up.


Yea i'd very much prefer using entrypoints that are easily amendable by 
TripleO developers, and are gated on deploying TripleO.


As for having them in-image or externally passed from t-h-t, that could 
almost be a thread on its own :) The benefit of t-h-t approach is 
hackability. The benefit of in-image approach is being sure that the 
image version is compatible with its entrypoint (for rollbacks and such) 
and generally the image being more self contained (being able to use it 
easier manually, without Heat/t-h-t, should the need arise).


I think we may still be forming our opinion on this matter as we hit 
issues with one or the other approach, maybe we'll even continue using 
different approaches for different use cases.




In general we like Kolla images at the moment for what they provide.
But there are some cases where we need to control things that have too
much of a "kolla flavor" and would potentially break upgrades/features
if we used them directly.



We accept custom entry points for some tricky cases as well.

That said, I'd really like to see a final call on that topic.
Otherwise it's really hard to do a code review and perhaps to maintain
changes to t-h-t for future releases as well.




Cheers

Jirka

[4] 
https://github.com/openstack/kolla/blob/77903c70cd651ad97a6a918f6889e5120d85b8d1/docker/base/start.sh#L14




Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Jiří Stránský

On 13.3.2017 15:30, Emilien Macchi wrote:

Hi,

Alex is already core on instack-undercloud and puppet-tripleo.
His involvement and knowledge in TripleO Heat Templates has been very
appreciated over the last months and I think we can give him +2 on
this project.

As usual, feel free to vote -1/+1 on this proposal.


+1



Thanks,






Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Jiří Stránský

On 10.3.2017 17:26, Heidi Joy Tretheway wrote:

Hi TripleO team,

Here’s an update on your project logo. Our illustrator tried to be as true as
possible to your original, while ensuring it matched the line weight, color
palette and style of the rest. We also worked to make sure that three Os in the
logo are preserved. Thanks for your patience as we worked on this! Feel free to
direct feedback to me.


I think it's great! The connection to our current logo is pretty clear 
to me, so hopefully we wouldn't be confusing anyone too much by 
switching to the new logo. Thanks for the effort!


Also personally i like the color scheme change to a more 
playful/cartoony look as you mentioned in your other e-mail.


Jirka











Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Jiří Stránský

On 14.7.2017 23:00, Ben Nemec wrote:



On 07/14/2017 11:43 AM, Joshua Harlow wrote:

Out of curiosity, since I keep on hearing/reading all the tripleo
discussions on how tripleo folks are apparently thinking of/doing a
redesign of the whole thing to use ansible + mistral + heat, or ansible
+ kubernetes or ansible + mistral + heat + ansible (a second time!) or ...

Seeing all those kinds of questions and suggestions around what should
be used and why and how (and even this thread) makes me really wonder
who actually uses tripleo and can afford/understand such kinds of changes?

Does anyone?

If there are, is there going to be an upgrade
path for their existing 'cloud/s' to whatever this solution is?

What operator(s) has the ability to do such a massive shift at this
point in time? Who are these 'mystical' operators?

All this has really piqued my curiosity because I am personally trying
to do that shift (not exactly the same solution...) and I know it is a
massive undertaking (that will take quite a while to get right) even for
a simple operator with limited needs out of openstack (ie godaddy); so I
don't really understand how the generic solution for all existing
tripleo operators can even work...


This is a valid point.  Up until now the answer has been that we
abstracted most of the ugliness of major changes behind either Heat or
tripleoclient.  If we end up essentially dropping those two in favor of
some other method of driving deployments it's going to be a lot harder
to migrate.  And I could be wrong, but I'm pretty sure it _is_ important
to our users to have an in-place upgrade path (see the first bullet
point in [1]).

New, shiny technology is great and all, but we do need to remember that
we have a lot of users out there already depending on the old,
not-so-shiny bits too.  They're not going to be happy if we leave them
hanging.


Exactly. Reuse is nice to have, while some sort of an upgrade path is a 
must have. We should be aware of this when selecting tools for Kubernetes.


Jirka



1: http://lists.openstack.org/pipermail/openstack-dev/2017-June/119063.html



Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment,
OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these
tools and
I've come to the conclusion that TripleO would be better off by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm.
While I
like Helm and both the kolla-kubernetes and openstack-helm OpenStack
projects, I
believe using any of them would add an extra layer of complexity to
TripleO,
which is something the team has been fighting for years -
especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would
require
TripleO to also write the logic to manage those projects. For example,
in the
case of openstack-helm, the TripleO team would have to write either
ansible
roles or heat templates to manage - install, remove, upgrade - the
charts (I'm
happy to discuss this point further but I'm keeping it at a
high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO
plans
around ansible. One take-away from this thread is that TripleO is
adopting
ansible more and more, which is great and it fits perfectly with the
conclusion
I reached.

Now, what this work means is that we would have to write an ansible role
for
each service that will deploy the service on a Kubernetes cluster.
Ideally these
roles will also generate the configuration files (removing the need of
puppet
entirely) and they would manage the lifecycle. The roles would be
isolated and
this will reduce the need of TripleO Heat templates. Doing this would
give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain
these roles
and run them using the existing docker-cmd implementation that is coming
out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the
discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that
ansible is
a known, powerful tool that has been adopted by many operators
already. It'll
provide the flexibility needed and, if 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Jiří Stránský

On 14.7.2017 11:17, Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools and
I've come to the conclusion that TripleO would be better off by having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both the kolla-kubernetes and openstack-helm OpenStack projects, I
believe using any of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting for years - especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts (I'm
happy to discuss this point further but I'm keeping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need of puppet
entirely) and they would manage the lifecycle. The roles would be isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion and
gather thoughts and opinions from the rest of the community.


I agree this is a direction we should explore further. This would give 
us the option to tailor things exactly as we need -- good for keeping 
our balance in having interfaces as stable as possible, while still 
making enough development progress. And we'd keep our ability to make 
important changes (e.g. bugfixes) without delays.


We'll have to write more code ourselves, but it's possible that if we 
picked up an existing tool, we'd have to spend that time (if not more) 
elsewhere. Migrating existing non-kubernetized TripleO deployments to 
kubernetized is going to be pretty difficult even if we do what you 
suggested. I imagine that if we also had to fit into some pre-existing 
external deployment/management interfaces, while trying to keep ours 
stable or make just iterative changes, it might turn out to be a surreal 
effort. We will have to design things with migration from "legacy 
TripleO" in mind, or make later amendments here and there solely for 
this purpose. Such design and patches would probably not be a good fit 
for non-tripleo projects.


What i recall from our old PoC [2], defining the resources and init 
containers etc. will probably not be the most difficult task, and 
furthermore we can largely draw inspiration from our current 
containerized solution too. I think the more challenging things might be 
e.g. config generation with Ansible, and how major upgrades and rolling 
updates will be done (how all this ties into the APB way of 
provisioning/deprovisioning). And of course how to fulfill the 
expectations that TripleO has set around network isolation and HA :)


I'm eager to give the latest code a try myself :) Thanks for working on 
this, it looks like there's been great progress lately!


Jirka



Finally, what I really like about writing pure ansible roles is that ansible is
a known, powerful tool that has been adopted by many operators already. It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate concerns
in the deployment workflow and the idea of making it simple for users of TripleO
to do the same at runtime. Unfortunately, going down this road means 

[openstack-dev] [tripleo] containers-multinode-upgrades-nv is stable, please respect the results

2017-07-14 Thread Jiří Stránský

Hi all,

i'm just sending this plea -- let's pay attention to the 
containers-multinode-upgrades-nv job results in the CI please, and treat 
it as voting if possible. There's been a fair amount of breakage lately 
but all was caused by merging TripleO patches on which the job failed. 
The job in itself has been reliable.


We're on the way to make it voting, but this depends also on adding it 
to promotion jobs, so that RDO RPMs cannot be promoted if this job is 
failing (e.g. due to changes in non-TripleO projects which don't run the 
job in gerrit).



Thanks, and have a good day!

Jirka



Re: [openstack-dev] [tripleo] proposing Alex Schultz tripleo-core in all projects

2017-07-10 Thread Jiří Stránský

On 7.7.2017 19:39, Emilien Macchi wrote:

Alex has demonstrated high technical and community skills in TripleO -
where he's already core on THT, instack-undercloud, and puppet-tripleo
- but also very involved in other repos.
I propose that we extend his core status to all TripleO projects and
of course trust him (like we trust all core members) to review patches
where we feel comfortable.

He has shown an high interest in reviewed other TripleO projects and I
think he would be ready for this change.
As usual, this is an open proposal, any feedback is welcome.

Thanks,



+1



Re: [openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-25 Thread Jiří Stránský

On 25.7.2017 04:40, Michał Jastrzębski wrote:

...
DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-
keystone:latest
...


That's a strange construction, are you sure guys that you don't want to
separate address:port from the namespace? (tripleoupstream here).
Say you'd like to set up docker to point to an insecure registry (add
--insecure-registry to the systemd conf), that will take addr:port, not the
whole thing.


Thanks Michał, i think it was just a copy-paste error in the e-mail and 
in reality the parameter value should indeed look like:


DockerInsecureRegistryAddress: 172.19.0.2:8787

I posted a tripleo-docs patch which documents the changes discussed in 
this thread:


https://review.openstack.org/486635


Thanks

Jirka



Re: [openstack-dev] [tripleo] Proposing Bogdan Dobrelya core on TripleO / Containers

2017-07-26 Thread Jiří Stránský

On 21.7.2017 16:55, Emilien Macchi wrote:

Hi,

Bogdan (bogdando on IRC) has been very active in Containerization of
TripleO and his quality of review has increased over time.
I would like to give him core permissions on container work in TripleO.
Any feedback is welcome as usual, we'll vote as a team.

Thanks,



+1



Re: [openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-24 Thread Jiří Stránský

On 19.7.2017 14:41, Dan Prince wrote:

I wanted to give a quick heads up on some breaking changes that started
landing last week with regards to how container images are specified
with Heat parameters in TripleO. There are a few patches associated
with converting over to the new changes but the primary patches are
listed below here [1] and here [2].

Here are a few examples where I'm using a local (insecure) docker
registry on 172.19.0.2.

The old parameters were:

   
   DockerNamespaceIsRegistry: true
   DockerNamespace: 172.19.0.2:8787/tripleoupstream
   DockerKeystoneImage: centos-binary-keystone:latest
   ...

The new parameters simplify things quite a bit so that each
Docker*Image parameter contains the *entire* URL required to pull the
docker image. It ends up looking something like this:

   ...
   DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
   DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-
keystone:latest
   ...

The benefit of the new format is that it makes it possible to pull
images from multiple registries without first staging them to a local
docker registry. Also, we've removed the 'tripleoupstream' default
container names and now require them to be specified. Removing the
default should make it much more explicit that the end user has
specified container image names correctly and doesn't accidentally use
'tripleoupstream' by accident because one of the container image
parameters didn't get specified.


Additional info based on #tripleo discussion: To keep using the values 
that were the defaults, you need to add `-e 
$THT_PATH/environments/docker-centos-tripleoupstream.yaml` [3] to the 
`openstack overcloud deploy` command.



Finally, the simplification of the
DockerInsecureRegistryAddress parameter into a single setting makes
things clearer to the end user as well.

A new python-tripleoclient command makes it possible to generate a
custom heat environment with defaults for your environment and
registry. For the examples above I can run 'overcloud container image
prepare' to generate a custom heat environment like this:

openstack overcloud container image prepare --
namespace=172.19.0.2:8787/tripleoupstream --env-
file=$HOME/containers.yaml

We choose not to implement backwards compatibility with the old image
formats as almost all of the Heat parameters here are net new in Pike
and as such have not yet been released yet. The changes here should
make it much easier to manage containers and work with other community
docker registries like RDO, etc.

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/comm
it/?id=e76d84f784d27a7a2d9e5f3a8b019f8254cb4d6c
[2] https://review.openstack.org/#/c/479398/17
[3] 
https://github.com/openstack/tripleo-heat-templates/blob/5cbcc8377c49e395dc1d02a976d9b4a94253f5ca/environments/docker-centos-tripleoupstream.yaml








Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 26) - job renaming discussion

2017-06-30 Thread Jiří Stránský

On 30.6.2017 15:04, Attila Darazs wrote:

= Renaming the CI jobs =

When we started the job transition to Quickstart, we introduced the
concept of featuresets[1] that define a certain combination of features
for each job.

This seemed to be a sensible solution, as it's not practical to mention
all the individual features in the job name, and short names can be
misleading (for example the ovb-ha job does so much more than test HA).

We decided to keep the original names for these jobs to simplify the
transition, but the plan is to rename them to something that will help
to reproduce the jobs locally with Quickstart.

The proposed naming scheme will be the same as the one we're now using
for job type in project-config:

gate-tripleo-ci-centos-7-{node-config}-{featureset-config}

So for example the current "gate-tripleo-ci-centos-7-ovb-ha-oooq" job
would look like "gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001"


I'd prefer to keep the job names somewhat descriptive... If i had to 
pick one or the other, i'd rather stick with the current way, as at 
least for me it's higher priority to see descriptive names in CI results 
than saving time on finding featureset file mapping when needing to 
reproduce a job result. My eyes scan probably more than a hundred of 
individual CI job results daily, but i only need to reproduce 0 or 1 job 
failures locally usually.


Alternatively, could we rename "featureset001.yaml" into 
"featureset-ovb-ha.yaml" and then have i guess something like 
"gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-ovb-ha" for the job name? 
Maybe "ovb" would be there twice, in case it's needed both in node 
config and featureset parts of the job name...


Or we could pull the mapping between job name and job type in an 
automated way from project-config.


(Will be on PTO for a week from now, apologies if i don't respond timely 
here.)



Have a good day,

Jirka



The advantage of this will be that it will be easy to reproduce a gate
job on a local virthost by typing something like:

./quickstart.sh --release tripleo-ci/master \
  --nodes config/nodes/3ctlr_1comp.yml \
  --config config/general_config/featureset001.yml \
  

Please let us know if this method sounds like a step forward.




Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 26) - job renaming discussion

2017-06-30 Thread Jiří Stránský

On 30.6.2017 17:06, Jiří Stránský wrote:

On 30.6.2017 15:04, Attila Darazs wrote:

= Renaming the CI jobs =

When we started the job transition to Quickstart, we introduced the
concept of featuresets[1] that define a certain combination of features
for each job.

This seemed to be a sensible solution, as it's not practical to mention
all the individual features in the job name, and short names can be
misleading (for example the ovb-ha job does so much more than test HA).

We decided to keep the original names for these jobs to simplify the
transition, but the plan is to rename them to something that will help
to reproduce the jobs locally with Quickstart.

The proposed naming scheme will be the same as the one we're now using
for job type in project-config:

gate-tripleo-ci-centos-7-{node-config}-{featureset-config}

So for example the current "gate-tripleo-ci-centos-7-ovb-ha-oooq" job
would look like "gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001"


I'd prefer to keep the job names somewhat descriptive... If i had to
pick one or the other, i'd rather stick with the current way, as at
least for me it's higher priority to see descriptive names in CI results
than saving time on finding featureset file mapping when needing to
reproduce a job result. My eyes scan probably more than a hundred
individual CI job results daily, but i only need to reproduce 0 or 1 job
failures locally usually.

Alternatively, could we rename "featureset001.yaml" into
"featureset-ovb-ha.yaml" and then have i guess something like
"gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-ovb-ha" for the job name?
Maybe "ovb" would be there twice, in case it's needed both in node
config and featureset parts of the job name...

Or we could pull the mapping between job name and job type in an
automated way from project-config.


^ I mean for the purposes of reproducing a CI job, in a similar way we 
do it for running the CI job in the first place.




(Will be on PTO for a week from now, apologies if i don't respond timely
here.)


Have a good day,

Jirka



The advantage of this will be that it will be easy to reproduce a gate
job on a local virthost by typing something like:

./quickstart.sh --release tripleo-ci/master \
   --nodes config/nodes/3ctlr_1comp.yml \
   --config config/general_config/featureset001.yml \
   

Please let us know if this method sounds like a step forward.





Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-17 Thread Jiří Stránský

On 17.8.2017 00:47, Emilien Macchi wrote:

Here's an update on the situation.

On Tue, Aug 15, 2017 at 6:33 PM, Emilien Macchi  wrote:

Problem #1: Upgrade jobs timeout from Newton to Ocata
https://bugs.launchpad.net/tripleo/+bug/1702955

[...]

- revert distgit patch in RDO: https://review.rdoproject.org/r/8575
- push https://review.openstack.org/#/c/494334/ as a temporary solution
- we need https://review.openstack.org/#/c/489874/ landed ASAP.
- once https://review.openstack.org/#/c/489874/ is landed, we need to
revert https://review.openstack.org/#/c/494334 ASAP.

We still need some help to find out why upgrade jobs time out so much
in stable/ocata.


Problem #2: from Ocata to Pike (containerized) missing container upload step
https://bugs.launchpad.net/tripleo/+bug/1710938
Wes has a patch (thanks!) that is currently in the gate:
https://review.openstack.org/#/c/493972

[...]

The patch worked and helped! We've got a successful job running today:
http://logs.openstack.org/00/461000/32/check/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/2f13627/console.html#_2017-08-16_01_31_32_009061

We're now pushing to the next step: testing the upgrade with pingtest.
See https://review.openstack.org/#/c/494268/ and the Depends-On: on
https://review.openstack.org/#/c/461000/.

If pingtest proves to work, it would be a good news and prove that we
have a basic workflow in place on which we can iterate.

The next iterations afterward would be to work on the 4 scenarios that
are also going to be upgrades from Ocata to pike (001 to 004).
For that, we'll need Problem #1 and #2 resolved before we want to make
any progress here, to not hit the same issues that before.


Problem #3: from Ocata to Pike: all container images are
uploaded/specified, even for services not deployed
https://bugs.launchpad.net/tripleo/+bug/1710992
The CI jobs are timing out during the upgrade process because
downloading + uploading _all_ containers in the local cache takes more
than 20 minutes.
So this is where we are now, upgrade jobs timeout on that. Steve Baker
is currently looking at it but we'll probably offer some help.


Steve is still working on it: https://review.openstack.org/#/c/448328/
Steve, if you need any help (reviewing or coding) - please let us
know, as we consider this thing important to have and probably good to
have in Pike.


An independent but related issue is that the job doesn't make use of 
CI-local registry mirrors. I seem to recall we already had mirror usage 
implemented at some point, but we must have lost it somehow. Fix is here:


https://review.openstack.org/#/c/494525/

Jirka



Thanks,






Re: [openstack-dev] [tripleo] Adding new roles after upgrade is broken.

2017-08-18 Thread Jiří Stránský

On 18.8.2017 13:18, Sofer Athlan-Guyot wrote:

Hi,

We may have missing packages when the user is adding a new role to their
roles_data file and the base image is coming from a previous version.

The workflow would be this one:
  - install newton
  - upgrade to ocata
  - add collectd to roles_data and redeploy the stack

For instance if one is adding OS::TripleO::Services::Collectd to the
services of a role in an Ocata env coming from an upgraded Newton env,
he/she won't have the necessary packages (for instance collectd-disk).
The puppet manifest will fail as the package is missing and puppet
doesn't install packages.  The upgrade task[1] is useless as the new
role wasn't added during the upgrade but after.


Right, but the package could be added during the upgrade. The 
upgrade_tasks could/should make the set of installed overcloud RPMs on 
par with the overcloud-full image of the respective release, ideally. So 
you'd have collectd RPMs installed always, both on freshly deployed and 
upgraded envs, regardless if you actually use collectd or not. We 
already did some package installs/uninstalls as part of upgrades and 
updates, but probably didn't have 100% coverage.




I don't see any easy way to solve this.  Basically we need a way to keep
the base image in sync between releases without using the upgrade_tasks,
maybe in the tripleo-packages one?


Given that released code is affected, we may treat it as a bug that 
requires a minor update, and in addition to upgrade_tasks, we can add 
all the necessary package installs into minor update code 
(yum_update.sh) too. Again this shouldn't depend on what services are 
actually enabled, just unconditionally sync with latest content of 
overcloud-full image of the respective release.


I guess the time-consuming part will be preparing the envs that will 
allow comparing a fresh deploy vs. an upgraded one to get the `rpm -qa | 
sort` difference. Or we could try a shortcut and see what changes went 
into tripleo-puppet-elements in each release.
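The comparison step itself is straightforward; as a runnable sketch, the two files below stand in for `rpm -qa | sort` output captured on each environment:

```shell
# Sketch: diff the sorted package lists of a fresh deploy vs. an
# upgraded env to find what the upgrade would still need to install.
# fresh.txt / upgraded.txt stand in for captured `rpm -qa | sort`
# output, so the snippet runs anywhere.
printf 'collectd\ncollectd-disk\nkernel\n' > fresh.txt
printf 'kernel\n' > upgraded.txt
# Lines only in fresh.txt = packages missing on the upgraded env:
comm -23 fresh.txt upgraded.txt
```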




This shouldn't be a problem with container, but everything before pike
is affected.


Indeed. There will still be some basic baremetal host content management 
as long as we're not using Atomic, but the room for potential problems 
will be much smaller.


Jirka



Originially seen there[2]

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/stable/ocata/puppet/services/metrics/collectd.yaml#L130..L134
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1455065




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ci] Upgrade CI job for O->P (containerization)

2017-05-10 Thread Jiří Stránský

Hi all,

the upgrade job which tests Ocata -> Pike/master upgrade (from 
bare-metal to containers) just got a green flag from the CI [1].


I've listed the remaining patches we need to land at the very top of the 
container CI etherpad [2], please let's get them reviewed and landed as 
soon as we can. The sooner we get the job going, the fewer upgrade 
regressions will get merged in the meantime (e.g. we have one from last 
week).


The CI job utilizes mixed release deployment (master undercloud, 
overcloud deployed as Ocata and upgraded to latest). It tests the main 
overcloud upgrade phase (no separate compute role upgrades, no converge 
phase). This means the testing isn't exhaustive to the full expected 
"production scenario", but it covers the most important part where we're 
likely to see the most churn and potential breakages. We'll see how much 
spare wall time we have to add more things once we get the job to run on 
patches regularly.



Thanks and have a good day!

Jirka

[1] 
http://logs.openstack.org/61/460061/15/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/d7faa50/

[2] https://etherpad.openstack.org/p/tripleo-containers-ci

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ci] Saving memory in OOOQ multinode jobs

2017-05-17 Thread Jiří Stránský

Hi all,

we can save some memory in our OOOQ multinode jobs by specifying custom 
role data -- getting rid of resources and YAQL crunching for roles that 
aren't used at all in the end. Shout out to shardy for suggesting we 
should look into doing this.
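For illustration, a custom role data file for such a job could look roughly like this; the role and service list below are an abbreviated example, not the exact file used by the jobs:

```yaml
# Hypothetical minimal roles_data.yaml for a multinode CI job: only the
# roles actually deployed are listed, so Heat skips resource creation
# and YAQL processing for the unused ones. Service list abbreviated.
- name: Controller
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::Nova
    # ... only the services the job actually needs
```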


First observations, hopefully at least somewhat telling:

nonha-multinode-oooq job: comparing [1][2] the RSS memory usage by the 4 
heat-engine processes on undercloud drops from 803 MB to 690 MB.


containers-multinode-upgrades job: comparing [3][4] heat-engine memory 
usage drops from 1221 MB to 968 MB.


I expected some time savings as well but wasn't able to spot any, looks 
like concurrency works well in Heat :)


Patches are here:
https://review.openstack.org/#/c/455730/
https://review.openstack.org/#/c/455719/


Have a good day,

Jirka


[1] 
http://logs.openstack.org/68/465468/1/check/gate-tripleo-ci-centos-7-nonha-multinode-oooq/9d354b5/logs/undercloud/var/log/host_info.txt.gz
[2] 
http://logs.openstack.org/30/455730/3/check/gate-tripleo-ci-centos-7-nonha-multinode-oooq/4e3bb4a/logs/undercloud/var/log/host_info.txt.gz
[3] 
http://logs.openstack.org/52/464652/5/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/a20f7dd/logs/undercloud/var/log/host_info.txt.gz
[4] 
http://logs.openstack.org/30/455730/3/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/f753cd9/logs/undercloud/var/log/host_info.txt.gz


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] Upgrade CI job for O->P (containerization)

2017-05-12 Thread Jiří Stránský

On 12.5.2017 15:30, Emilien Macchi wrote:

On Wed, May 10, 2017 at 9:26 AM, Jiří Stránský <ji...@redhat.com> wrote:

Hi all,

the upgrade job which tests Ocata -> Pike/master upgrade (from bare-metal to
containers) just got a green flag from the CI [1].

I've listed the remaining patches we need to land at the very top of the
container CI etherpad [2], please let's get them reviewed and landed as soon
as we can. The sooner we get the job going, the fewer upgrade regressions
will get merged in the meantime (e.g. we have one from last week).

The CI job utilizes mixed release deployment (master undercloud, overcloud
deployed as Ocata and upgraded to latest). It tests the main overcloud
upgrade phase (no separate compute role upgrades, no converge phase). This
means the testing isn't exhaustive to the full expected "production
scenario", but it covers the most important part where we're likely to see
the most churn and potential breakages. We'll see how much spare wall time
we have to add more things once we get the job to run on patches regularly.


The job you and the team made to make that happen is amazing and outstanding.


Thanks! I should mention i've utilized quite a lot of pre-existing 
upgrades skeleton code in OOOQ that i think mainly Mathieu Bultel put 
together.



Once the jobs are considered stable, I would move them to the gate so
we don't break it. Wdyt?


Yes absolutely, i'd start with non-voting as usual first.

We still need reviews on the patches to move forward (4 patches 
remaining), so when you (plural :) ) have some bandwidth, please take a 
look:


https://review.openstack.org/#/c/462664/ - 
multinode-container-upgrade.yaml usable for mixed upgrade


https://review.openstack.org/#/c/459789/ - TripleO CI mixed release 
master UC / ocata OC


https://review.openstack.org/#/c/462172/  - Don't overwrite tht_dir for 
upgrades to master


https://review.openstack.org/#/c/460061/ - Use mixed release for 
container upgrade



Thanks

Jirka






Thanks and have a good day!

Jirka

[1]
http://logs.openstack.org/61/460061/15/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/d7faa50/
[2] https://etherpad.openstack.org/p/tripleo-containers-ci








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-24 Thread Jiří Stránský

On 24.5.2017 15:02, Marios Andreou wrote:

On Wed, May 24, 2017 at 10:26 AM, Carlos Camacho Gonzalez <
ccama...@redhat.com> wrote:


Hey folks,

Based on what we discussed yesterday in the TripleO weekly team meeting,
I'll like to propose a blueprint to create 2 features, basically to backup
and restore the Undercloud.

I'll like to follow in the first iteration the available docs for this
purpose [1][2].

With the addition of backing up the config files on /etc/ specifically to
be able to recover from a failed Undercloud upgrade, i.e. recover the repos
info removed in [3].

I'll like to target this for P as I think I have enough time for
coding/testing these features.

I already have created a blueprint to track this effort
https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore

What do you think about it?



+1 from me as you know but adding my support on the list too. I think it is
a great idea - there are cases especially around changing network config
during an upgrade for example where the best remedy is to restore the
undercloud for the network definitions (both neutron config and heat db).


+1 i think there's not really an easy way out of these issues other than 
a restore. We already recommend doing a backup before upgrading [1], so 
having something that can further help operators in this regard would be 
good.
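The core of the documented backup flow is just an archive of the state directories plus a DB dump. As a runnable sketch (a scratch directory stands in for /etc and /home/stack; the commented lines show the real invocation from the docs):

```shell
# Sketch of the undercloud backup step. A temp directory stands in for
# the real state directories so this runs anywhere.
workdir=$(mktemp -d)
mkdir -p "$workdir/etc"
echo 'local_ip = 192.168.24.1/24' > "$workdir/etc/undercloud.conf"
tar -C "$workdir" -czf "$workdir/undercloud-backup.tar.gz" etc
# On a real undercloud (as root), per the docs, roughly:
#   mysqldump --opt --all-databases > all-databases.sql
#   tar --xattrs -czf undercloud-backup.tar.gz /etc /home/stack all-databases.sql
tar -tzf "$workdir/undercloud-backup.tar.gz"
```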


Jirka

[1] http://tripleo.org/post_deployment/upgrade.html



thanks,




Thanks,
Carlos.

[1]: https://access.redhat.com/documentation/en-us/red_hat_
enterprise_linux_openstack_platform/7/html/back_up_and_
restore_red_hat_enterprise_linux_openstack_platform/restore

[2]: https://docs.openstack.org/developer/tripleo-docs/post_
deployment/backup_restore_undercloud.html

[3]: https://docs.openstack.org/developer/tripleo-docs/
installation/updating.html












__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 21) - Devmode OVB, RDO Cloud and config management

2017-05-31 Thread Jiří Stránský

On 30.5.2017 23:03, Emilien Macchi wrote:

On Fri, May 26, 2017 at 4:58 PM, Attila Darazs  wrote:

If the topics below interest you and you want to contribute to the
discussion, feel free to join the next meeting:

Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

= Periodic & Promotion OVB jobs Quickstart transition =

We had a lively technical discussions this week. Gabriele's work on
transitioning the periodic & promotion jobs is nearly complete, only needs
reviews at this point. We won't set a transition date for these as it is not
really impacting folks long term if these jobs are failing for a few days at
this point. We'll transition when everything is ready.

= RDO Cloud & Devmode OVB =

We continued planning the introduction of RDO Cloud for the upstream OVB
jobs. We're still at the point of account setup.

The new OVB based devmode seems to be working fine. If you have access to
RDO Cloud, and haven't tried it already, give it a go. It can set up a full
master branch based deployment within 2 hours, including any pending changes
baked into the under & overcloud.

When you have your account info sourced, all it takes is

$ ./devmode.sh --ovb

from your tripleo-quickstart repo! See here[1] for more info.

= Container jobs on nodepool multinode =

Gabriele is stuck with these new Quickstart jobs. We would need a deep dive
into debugging and using the container based TripleO deployments. Let us
know if you can do one!


I've pinged some folks around, let's see if someone volunteers to make it.


I can lead the deep dive as i've been involved in implementation of all 
the multinode jobs (deployment, upgrade, scenarios deployment + upgrade).


Currently only the upgrade job is merged and working. Deployment job is 
ready to merge (imho :) ), let's get it in place before we do the deep 
dive, so that we talk about things that are already working as intended. 
However, i think we don't have to block on getting the scenario jobs all 
going and polished, those are not all green yet (2/4 deployment jobs 
green, 1/4 upgrade jobs green).


I hope that sounds like a sensible plan. Let me know in case you have 
any feedback.


Thanks

Jirka




= How to handle Quickstart configuration =

This a never-ending topic, on which we managed to spend a good chunk of time
this week as well. Where should we put various configs? Should we duplicate
a bunch of variables or cut them into small files?

For now it seems we can agree on 3 levels of configuration:

* nodes config (i.e. how many nodes we want for the deployment)
* envionment + provisioner settings (i.e. you want to run on rdocloud with
ovb, or on a local machine with libvirt)
* featureset (a certain set of features enabled/disabled for the jobs, like
pacemaker and ssl)

This seems rather straightforward until we encounter exceptions. We're going
to figure out the edge cases and rework the current configs to stick to the
rules.


That's it for this week. Thank you for reading the summary.

Best regards,
Attila

[1] http://docs.openstack.org/developer/tripleo-quickstart/devmode-ovb.html








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Jiří Stránský

On 5.6.2017 08:59, Sagi Shnaidman wrote:

Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these container
patches. The knowledge gap that's accumulated here is pretty big.


As per last week's discussion [1], i hope this is something i could do. 
I'm drafting a preliminary agenda in this etherpad, feel free to add 
more suggestions if i missed something:


https://etherpad.openstack.org/p/tripleo-deep-dive-containers

My current intention is to give a fairly high level view of the TripleO 
container land: from deployment, upgrades, debugging failed CI jobs, to 
how CI itself was done.


I'm hoping we could make it this Thursday still. If that's too short of 
a notice for several folks or if i hit some trouble with preparation, we 
might move it to 15th. Any feedback is welcome of course.


Have a good day,

Jirka



Thanks

On Jun 5, 2017 03:39, "Dan Prince"  wrote:


Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization efforts feel
free to add them here too.

Thanks,

Dan










__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [EXTERNAL] Re: [TripleO] custom configuration to overcloud fails second time

2017-06-01 Thread Jiří Stránský

On 31.5.2017 17:40, Dnyaneshwar Pawar wrote:

Hi Ben,

On 5/31/17, 8:06 PM, "Ben Nemec" wrote:

I think we would need to see what your custom config templates look like
as well.

Custom config templates: http://paste.openstack.org/show/64/


Hello Dnyaneshwar,

from a brief scan of that paste i think that:

  OS::TripleO::ControllerExtraConfig: /home/stack/example_2.yaml

should rather be:

  OS::TripleO::ControllerExtraConfigPre: /home/stack/example_2.yaml


The 'Pre' hook gets a `server` parameter (not `servers`) - it's 
instantiated per server [1], not per role. There are docs [2] that 
describe the interface, and they describe some alternative options as well.
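To illustrate the interface, a skeleton of what /home/stack/example_2.yaml would need to look like for the 'Pre' hook could be roughly as follows (hypothetical reconstruction; the actual script content is up to the user):

```yaml
# Hypothetical skeleton for the OS::TripleO::ControllerExtraConfigPre
# hook: note the singular `server` parameter, because the template is
# instantiated once per node, not per role.
heat_template_version: 2015-04-30

parameters:
  server:
    type: string

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "applying custom config"

  ExtraDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: ExtraConfig}
      server: {get_param: server}
```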


(Please ask such questions on IRC channel #tripleo on freenode, as the 
openstack-dev list is meant mainly for development discussion.)



Have a good day,

Jirka

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/b344f5994fcd16e562d55e6e00ad0980c5b32621/puppet/role.role.j2.yaml#L475-L479

[2] http://tripleo.org/advanced_deployment/extra_config.html




Also note that it's generally not recommended to drop environment files
from your deploy command unless you explicitly want to stop applying
them.  So if you applied myconfig_1.yaml and then later want to apply
myconfig_2.yaml your deploy command should look like: openstack
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml

Yes, I agree. But in my case, even when I dropped myconfig_1.yaml while applying 
myconfig_2.yaml, the configuration applied in step 1 remained unchanged.

On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:
Hi TripleO Experts,
I performed following steps -

   1. openstack overcloud deploy --templates -e myconfig_1.yaml
   2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1  Successfully applied custom configuration to the overcloud.
Step 2 completed successfully but custom configuration is not applied to
the overcloud. And configuration applied by step 1 remains unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-06 Thread Jiří Stránský

On 5.6.2017 23:52, Dan Prince wrote:

On Mon, 2017-06-05 at 16:11 +0200, Jiří Stránský wrote:

On 5.6.2017 08:59, Sagi Shnaidman wrote:

Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these
container
patches. The knowledge gap that's accumulated here is pretty big.


As per last week's discussion [1], i hope this is something i could
do.
I'm drafting a preliminary agenda in this etherpad, feel free to add
more suggestions if i missed something:

https://etherpad.openstack.org/p/tripleo-deep-dive-containers

My current intention is to give a fairly high level view of the
TripleO
container land: from deployment, upgrades, debugging failed CI jobs,
to
how CI itself was done.

I'm hoping we could make it this Thursday still. If that's too short
of
a notice for several folks or if i hit some trouble with preparation,
we
might move it to 15th. Any feedback is welcome of course.


Nice Jirka. Thanks for organizing this!

Dan


Sure thing. I'll do it on 15th indeed as a couple more topics appeared 
and i'd like to familiarize myself with some details (alongside doing 
normal work :) ).


Jirka





Have a good day,

Jirka



Thanks

On Jun 5, 2017 03:39, "Dan Prince" <dpri...@redhat.com> wrote:


Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization
efforts feel
free to add them here too.

Thanks,

Dan
















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-09 Thread Jiří Stránský

Hello,

as discussed previously on the list and at the weekly meeting, we'll do 
a deep dive about containers. The time:


Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary 
agenda is in another etherpad [2], and i hope i'll be able to record it too.


This time it may be more of a "broad dive" :) as that's what containers 
in TripleO mostly are -- they add new bits into many TripleO 
areas/topics (composable services/upgrades, Quickstart/CI, etc.). So 
i'll be trying to bring light to the container-specific parts of the 
mix, and assume some familiarity with the generic TripleO 
concepts/features (e.g. via docs and previous deep dives). Given this 
pattern, i'll have slides with links into code. I'll post them online, 
so that you can reiterate or examine some code more closely later, in 
case you want to.



Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-13 Thread Jiří Stránský

On 9.6.2017 16:49, Jiří Stránský wrote:

Hello,

as discussed previously on the list and at the weekly meeting, we'll do
a deep dive about containers. The time:

Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be able to record it too.

This time it may be more of a "broad dive" :) as that's what containers
in TripleO mostly are -- they add new bits into many TripleO
areas/topics (composable services/upgrades, Quickstart/CI, etc.). So
i'll be trying to bring light to the container-specific parts of the
mix, and assume some familiarity with the generic TripleO
concepts/features (e.g. via docs and previous deep dives). Given this
pattern, i'll have slides with links into code. I'll post them online,
so that you can reiterate or examine some code more closely later, in
case you want to.


For folks who haven't had any prior exposure to Docker containers 
whatsoever, i'd recommend giving these links a scan beforehand:


* Docker Overview
  https://docs.docker.com/engine/docker-overview/

* Docker - Get Started pt.1: Orientation and Setup
  https://docs.docker.com/get-started/

* Docker - Get Started pt.2: Containers
  https://docs.docker.com/get-started/part2/

(I'd like us to spend majority of the time talking about how we use 
containers in TripleO, rather than what containers are.)



Thanks, looking forward to seeing you at the deep dive!

Jirka




Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-13 Thread Jiří Stránský

On 13.6.2017 09:58, Or Idgar wrote:

Hi,
Can you please send me the meeting invitation?


Hi Or,

i've just added the bluejeans link to the etherpads. I'm not posting it 
here as i want the "sources of truth" for that link to be editable, in 
case we hit some issues with the setup / joining etc.


Jirka




Thanks in advance!

On Fri, Jun 9, 2017 at 5:49 PM, Jiří Stránský <ji...@redhat.com> wrote:


Hello,

as discussed previously on the list and at the weekly meeting, we'll do a
deep dive about containers. The time:

Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be able to record it too.

This time it may be more of a "broad dive" :) as that's what containers in
TripleO mostly are -- they add new bits into many TripleO areas/topics
(composable services/upgrades, Quickstart/CI, etc.). So i'll be trying to
bring light to the container-specific parts of the mix, and assume some
familiarity with the generic TripleO concepts/features (e.g. via docs and
previous deep dives). Given this pattern, i'll have slides with links into
code. I'll post them online, so that you can reiterate or examine some code
more closely later, in case you want to.


Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers












__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-15 Thread Jiří Stránský

On 9.6.2017 16:49, Jiří Stránský wrote:

Hello,

as discussed previously on the list and at the weekly meeting, we'll do
a deep dive about containers. The time:

Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be able to record it too.


The recording is available here:

https://www.youtube.com/watch?v=xhTwHfi65p8

Thanks Carlos for managing our youtube channel!

Jirka



This time it may be more of a "broad dive" :) as that's what containers
in TripleO mostly are -- they add new bits into many TripleO
areas/topics (composable services/upgrades, Quickstart/CI, etc.). So
i'll be trying to bring light to the container-specific parts of the
mix, and assume some familiarity with the generic TripleO
concepts/features (e.g. via docs and previous deep dives). Given this
pattern, i'll have slides with links into code. I'll post them online,
so that you can reiterate or examine some code more closely later, in
case you want to.


Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-16 Thread Jiří Stránský

On 15.6.2017 19:06, Emilien Macchi wrote:

I missed [tripleo] tag.

On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi  wrote:

If you haven't followed the "Configuration management with etcd /
confd" thread [1], Doug found out that using confd to generate
configuration files wouldn't work for the Cinder case where we don't
know in advance of the deployment what settings to tell confd to look
at.
We are still looking for a generic way to generate *.conf files for
OpenStack, that would be usable by Deployment tools and operators.
Right now, Doug and I are investigating some tooling that would be
useful to achieve this goal.

Doug has prototyped an Ansible role that would generate configuration
files by consumming 2 things:

* Configuration schema, generated by Ben's work with Machine Readable
Sample Config.
   $ oslo-config-generator --namespace cinder --format yaml > cinder-schema.yaml

It also needs: https://review.openstack.org/#/c/474306/ to generate
some extra data not included in the original version.

* Parameters values provided in config_data directly in the playbook:
config_data:
  DEFAULT:
transport_url: rabbit://user:password@hostname
verbose: true

There are 2 options disabled by default but which would be useful for
production environments:
* Set to true to always show all configuration values: config_show_defaults
* Set to true to show the help text: config_show_help: true

The Ansible module is available on github:
https://github.com/dhellmann/oslo-config-ansible

To try this out, just run:
   $ ansible-playbook ./playbook.yml

You can quickly see the output of cinder.conf:
 https://clbin.com/HmS58


What are the next steps:

* Getting feedback from Deployment Tools and operators on the concept
of this module.
   Maybe this module could replace what is done by Kolla with
merge_configs and OpenStack Ansible with config_template.
* On the TripleO side, we would like to see if this module could
replace the Puppet OpenStack modules that are now mostly used for
generating configuration files for containers.
   A transition path would be having Heat to generate Ansible vars
files and give it to this module. We could integrate the playbook into
a new task in the composable services, something like
   "os_gen_config_tasks", a bit like we already have for upgrade tasks,
also driven by Ansible.


This sounds good to me, though one issue i can presently see is that 
Puppet modules sometimes contain quite a bit of data processing logic 
("smart" variables which map 1-to-N rather than 1-to-1 to actual config 
values, and often not just in openstack service configs, e.g. 
puppet-nova also configures libvirt, etc.). Also we use some non-config 
aspects from the Puppet modules (e.g. seeding Keystone 
tenants/services/endpoints/...). We'd need to implement this 
functionality elsewhere when replacing the Puppet modules. Not a 
blocker, but something to keep in mind.



* Another similar option to what Doug did is to write a standalone
tool that would generate configuration, and for Ansible users we would
write a new module to use this tool.
   Example:
   Step 1. oslo-config-generator --namespace cinder --format yaml >
cinder-schema.yaml (note this tool already exists)
   Step 2. Create config_data.yaml in a specific format with
parameters values for what we want to configure (note this format
doesn't exist yet but look at what Doug did in the role, we could use
the same kind of schema).
   Step 3. oslo-gen-config -i config_data.yaml -s cinder-schema.yaml >
cinder.conf (note this tool doesn't exist yet)


+1 on standalone tool which can be used in different contexts (by 
different higher level tools), this sounds generally useful.
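To make the idea concrete, here is a rough sketch of what the core of such a
standalone generator could do. Note the schema format (option names plus
defaults per section) only loosely mirrors what `oslo-config-generator
--format yaml` emits, and the data format is the assumed "config_data.yaml"
structure from the thread -- neither is a spec, and the tool itself doesn't
exist yet:

```python
# Sketch only: merge schema defaults with user-supplied values into an
# INI-style config file, the way a hypothetical oslo-gen-config might.
import configparser
import io

def gen_config(schema, data):
    """Render an INI config: schema defaults overridden by supplied data."""
    cfg = configparser.ConfigParser()
    for section, options in schema.items():
        cfg[section] = {}
        for name, opt in options.items():
            value = data.get(section, {}).get(name, opt.get("default"))
            if value is not None:
                cfg[section][name] = str(value)
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()

schema = {
    "DEFAULT": {"debug": {"default": False}},
    "database": {"connection": {"default": None}},
}
data = {
    "DEFAULT": {"debug": True},
    "database": {"connection": "mysql+pymysql://cinder@db/cinder"},
}
print(gen_config(schema, data))
```

The Ansible module would then be a thin wrapper reading the two YAML files and
calling something like the function above.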




   For Ansible users, we would write an Ansible module that would
take in entry 2 files: the schema and the data. The module would just
run the tool provided by oslo.config.
   Example:
   - name: Generate cinder.conf
     oslo-gen-config: schema=cinder-schema.yaml data=config_data.yaml


+1 for module rather than a role. "Take these inputs and produce that 
output" fits the module semantics better than role semantics IMO.


FWIW as i see it right now, this ^^ + ConfigMaps + immutable-config 
containers could result in a nicer/safer/more-debuggable containerized 
OpenStack setup than etcd + confd in daemon mode + mutable-config 
containers.





Please bring feedback and thoughts, it's really important to know what
folks from Installers think about this idea; again the ultimate goal
is to provide a reference tool to generate configuration in OpenStack,
in a way that scales and is friendly for our operators.

Thanks,

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118176.html
--
Emilien Macchi






Have a good day,

Jirka

__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-15 Thread Jiří Stránský

On 15.6.2017 18:25, Jiří Stránský wrote:

On 9.6.2017 16:49, Jiří Stránský wrote:

Hello,

as discussed previously on the list and at the weekly meeting, we'll do
a deep dive about containers. The time:

Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be able to record it too.


The recording is available here:

https://www.youtube.com/watch?v=xhTwHfi65p8


And the slides:

https://jistr.github.io/tripleo-deep-dive-containers-june-2017/



Thanks Carlos for managing our youtube channel!

Jirka



This time it may be more of a "broad dive" :) as that's what containers
in TripleO mostly are -- they add new bits into many TripleO
areas/topics (composable services/upgrades, Quickstart/CI, etc.). So
i'll be trying to bring light to the container-specific parts of the
mix, and assume some familiarity with the generic TripleO
concepts/features (e.g. via docs and previous deep dives). Given this
pattern, i'll have slides with links into code. I'll post them online,
so that you can reiterate or examine some code more closely later, in
case you want to.


Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers



Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Jiří Stránský

On 9.6.2017 18:51, Flavio Percoco wrote:

A-ha, ok! I figured this was another option. In this case I guess we would
have 2 options:

1. Run confd + openstack service inside the container. My concern in this
case
would be that we'd have to run 2 services inside the container and structure
things in a way we can monitor both services and make sure they are both
running. Nothing impossible but one more thing to do.


I see several cons with this option:

* Even if we do this in a sidecar container like Bogdan mentioned (which 
is better than running 2 "top-level" processes in a single container 
IMO), we still have to figure out when to restart the main service, 
IIUC. I see confd in daemon mode listens on the backend change and 
updates the conf files, but i can't find a mention that it would be able 
to restart services. Even if we implemented this auto-restarting in 
OpenStack services, we need to deal with services like MariaDB, Redis, 
..., so additional wrappers might be needed to make this a generic solution.


* Assuming we've solved the above, if we push a config change to etcd, 
all services get restarted at roughly the same time, possibly creating 
downtime or capacity issues.


* It complicates the reasoning about container lifecycle, as we have to 
start distinguishing between changes that don't require a new container 
(config change only) vs. changes which do require it (image content 
change). Mutable container config also hides this lifecycle from the 
operator -- the container changes on the inside without COE knowing 
about it, so any operator's queries to COE would look like no changes 
happened.


I think ideally container config would be immutable, and every time we 
want to change anything, we'd do that via a roll out of a new set of 
containers. This way we have a single way of making changes to reason 
about, and when we're doing rolling updates, it shouldn't result in a 
downtime or tangible performance drop. (Not talking about migrating to a 
new major OpenStack release, which will still remain a special case in 
foreseeable future.)




2. Run confd `-onetime` and then run the openstack service.


This sounds simpler both in terms of reasoning and technical complexity, 
so if we go with confd, i'd lean towards this option. We'd have to 
rolling-replace the containers from outside, but that's what k8s can 
take care of, and at least the operator can see what's happening at a high 
level.
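As a sketch, option #2's container entrypoint could then be as small as this
(the confd flags follow its upstream CLI; the service name and etcd endpoint
are illustrative, and it assumes both binaries exist in the image):

```sh
#!/bin/sh
# Render the config once from etcd, then hand off to the real service.
set -e
confd -onetime -backend etcd -node "${ETCD_NODE:-http://127.0.0.1:2379}"
# exec so the service becomes PID 1 and receives signals directly
exec cinder-volume --config-file /etc/cinder/cinder.conf
```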


The issues that Michał mentioned earlier still remain to be solved -- 
config versioning ("accidentally" picking up latest config), and how to 
supply config elements that differ per host.


Also, it's probably worth diving a bit deeper into comparing `confd 
-onetime` and ConfigMaps...



Jirka




Either would work but #2 means we won't have config files monitored and the
container would have to be restarted to update the config files.

Thanks, Doug.
Flavio





Re: [openstack-dev] [tripleo] Install Kubernetes in the overcloud using TripleO

2017-09-20 Thread Jiří Stránský

On 20.9.2017 10:15, Bogdan Dobrelya wrote:

On 08.06.2017 18:36, Flavio Percoco wrote:

Hey y'all,

Just wanted to give an update on the work around tripleo+kubernetes.
This is
still far in the future but as we move tripleo to containers using
docker-cmd,
we're also working on the final goal, which is to have it run these
containers
on kubernetes.

One of the first steps is to have TripleO install Kubernetes in the
overcloud
nodes and I've moved forward with this work:

https://review.openstack.org/#/c/471759/

The patch depends on the `ceph-ansible` work and it uses the
mistral-ansible
action to deploy kubernetes by leveraging kargo. As it is, the patch
doesn't
quite work as it requires some files to be in some places (ssh keys) and a
couple of other things. None of these "things" are blockers as in they
can be
solved by just sending some patches here and there.

I thought I'd send this out as an update and to request some early
feedback on
the direction of this patch. The patch, of course, works in my local
environment
;)


Note that Kubespray (former Kargo) now supports the kubeadm tool
natively [0]. This speeds up cluster bootstrapping from an average of
25-30 min to 9 or so. I believe this makes Kubespray a viable option
for upstream development of OpenStack overclouds managed by K8s.
Especially bearing in mind the #deployment-time effort and all the hard
work done by the tripleo and infra teams to shorten the CI
job times.


I tried deploying with kubeadm_enable yesterday and no luck yet on 
CentOS, but i do want to get back to this as the speed up sounds 
promising :)


AIO kubernetes deployment the non-kubeadm way seemed to work fine 
(Flavio's patch above with a workaround for [2] and a small Kubespray 
fix [3]).


Jirka



By the way, here is a package review [1] for adding a kubespray-ansible
library, just ansible roles and playbooks, to RDO. I'd appreciate some
help with moving this forward, like choosing another place to host the
package, as it got stuck a little bit.

[0] https://github.com/kubernetes-incubator/kubespray/issues/553
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1482524

[2] https://bugs.launchpad.net/mistral/+bug/1718384
[3] https://github.com/kubernetes-incubator/kubespray/pull/1677



Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-21 Thread Jiří Stránský

On 21.9.2017 12:31, Giulio Fidente wrote:

On 09/20/2017 07:36 PM, James Slagle wrote:

On Tue, Sep 19, 2017 at 8:37 AM, Giulio Fidente  wrote:

On 09/18/2017 05:37 PM, James Slagle wrote:

- The entire sequence and flow is driven via Mistral on the Undercloud
by default. This preserves the API layer and provides a clean reusable
interface for the CLI and GUI.


I think it's worth saying that we want to move the deployment steps out
of heat and into ansible, not into mistral, so that mistral will run the
workflow only once and let ansible go through the steps

I think having the steps in mistral would be a nice option to be able to
rerun easily a particular deployment step from the GUI, versus having
them in ansible which is instead a better option for CLI users ... but
it looks like having them in ansible is the only option which permits us
to reuse the same code to deploy an undercloud because having the steps
in mistral would require the undercloud installation itself to depend on
mistral which we don't want to

James, Dan, please comment on the above if I am wrong


That's correct. We don't want to require Mistral to install the
Undercloud. However, I don't think that necessarily means it has to be
a single call to ansible-playbook. We could have multiple invocations
of ansible-playbook. Both Mistral and CLI code for installing the
undercloud could handle that easily.

You wouldn't be able to interleave an external playbook among the
deploy steps however. That would have to be done under a single call
to ansible-playbook (at least how that is written now). We could
however have hooks that could serve as integration points to call
external playbooks after each step.


the benefits of driving the steps from mistral are that then we could
also interleave the deployment steps and we won't need the
ansible-playbook hook for the "external" services:

1) collect the ansible tasks *and* the workflow_tasks (per step) from heat

2) launch the stepN deployment workflow (ansible-playbook)

3) execute any workflow_task defined for stepN (like ceph-ansible playbook)

4) repeat 2 and 3 for stepN+1

I think this would also provide a nice interface for the UI ... but then
we'd need mistral to be able to deploy the undercloud



Alternatively we could do the main step loop in Ansible directly, and 
have the tasks do whatever they need to get the particular service 
deployed, from plain Ansible tasks to launching a nested ansible-playbook run if that's 
what it takes.


That way we could run the whole thing end-to-end via ansible-playbook, 
or if needed one could execute smaller bits by themselves (steps or 
nested playbook runs) -- that capability is not baked in by default, but 
i think we could make it so.
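For illustration, the outer step loop could be as simple as this (a sketch
only; the file name "deploy_step.yml" and the variable "tripleo_run_steps"
are made up to show the shape of it):

```yaml
# Sketch of a wrapping playbook driving the deploy steps natively in Ansible.
# Callers (CLI or Mistral) may override tripleo_run_steps with a subset.
- hosts: overcloud
  vars:
    tripleo_run_steps: [1, 2, 3, 4, 5]
  tasks:
    - include_tasks: deploy_step.yml
      loop: "{{ tripleo_run_steps }}"
      loop_control:
        loop_var: step
```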


Also the interface for services would be clean and simple -- it's always 
the ansible tasks.


And Mistral-less use cases become easier to handle too (= undercloud 
installation when Mistral isn't present yet, or development envs when 
you want to tune the playbook directly without being forced to go 
through Mistral).


Logging becomes a bit more unwieldy in this scenario though, as for the 
nested ansible-playbook execution, all output would go into a task in 
the outer playbook, which would be harder to follow and the log of the 
outer playbook could be huge.


So this solution is no silver bullet, but from my current point of view 
it seems a bit less conceptually foreign than using Mistral to provide 
step loop functionality to Ansible, which should be able to handle that 
on its own.



- It would still be possible to run ansible-playbook directly for
various use cases (dev/test/POC/demos). This preserves the quick
iteration via Ansible that is often desired.

- The remaining SoftwareDeployment resources in tripleo-heat-templates
need to be supported by config download so that the entire
configuration can be driven with Ansible, not just the deployment
steps. The success criteria for this point would be to illustrate
using an image that does not contain a running os-collect-config.

- The ceph-ansible implementation done in Pike could be reworked to
use this model. "config download" could generate playbooks that have
hooks for calling external playbooks, or those hooks could be
represented in the templates directly. The result would be the same
either way though in that Heat would no longer be triggering a
separate Mistral workflow just for ceph-ansible.


I'd say for ceph-ansible, kubernetes and in general anything else which
needs to run with a standard playbook installed on the undercloud and
not one generated via the heat templates... these "external" services
usually require the inventory file to be in a different format, to
describe the hosts to use on a per-service basis, not per-role (and I
mean tripleo roles here, not ansible roles obviously)

About that, we discussed a more long-term vision where the playbooks
(static data) needed to describe how to deploy/upgrade a given service are
in a separate repo 

Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-22 Thread Jiří Stránský

On 22.9.2017 13:44, Giulio Fidente wrote:

On 09/21/2017 07:53 PM, Jiří Stránský wrote:

On 21.9.2017 18:04, Marios Andreou wrote:

On Thu, Sep 21, 2017 at 3:53 PM, Jiří Stránský <ji...@redhat.com> wrote:


[...]


That way we could run the whole thing end-to-end via
ansible-playbook, or
if needed one could execute smaller bits by themselves (steps or nested
playbook runs) -- that capability is not baked in by default, but i
think
we could make it so.

Also the interface for services would be clean and simple -- it's always
the ansible tasks.

And Mistral-less use cases become easier to handle too (= undercloud
installation when Mistral isn't present yet, or development envs when
you
want to tune the playbook directly without being forced to go through
Mistral).



You don't *have* to go through mistral either way I mean you can always
just run ansible-playbook directly using the generated playbooks if
that is
what you need for dev/debug etc.




Logging becomes a bit more unwieldy in this scenario though, as for the
nested ansible-playbook execution, all output would go into a task in
the
outer playbook, which would be harder to follow and the log of the outer
playbook could be huge.

So this solution is no silver bullet, but from my current point of
view it
seems a bit less conceptually foreign than using Mistral to provide step
loop functionality to Ansible, which should be able to handle that on
its
own.



just saying using mistral to invoke ansible-playbook doesn't imply having
mistral do the looping/step control. I think it was already mentioned
that
we can/will have multiple invocations of ansible-playbook. Having the
loop
in the playbook then means organising our templates a certain way so that
there is a _single_ parent playbook which we can parameterise to then run
all or some of the steps (which as pointed above is currently the case
for
the upgrade and deployment steps playbooks).


Yup, +1 again :) However, the 1)2)3)4) approach discussed earlier in the
thread suggested to hand over the step loop control to Mistral and keep
using the Mistral workflow_tasks, which would make it impossible to have
a single parent playbook, if i understood correctly. So Mistral would be
a requirement for running all steps via a single command (impacting UC
install and developer workflow).


yes I am not sold (yet?) on the idea of ansible driving the deployment
and would like to keep some abstraction before it

the additional abstraction will make it possible for example to execute
tasks written as mistral actions (eg. python code) in between or during
any given deployment step, instead of ansible tasks only ... I guess we
could also write ansible actions in python but it's not trivial to ship
them from THT and given the project mission we have of being "openstack
on openstack" I'd also prefer writing a mistral action vs ansible

similarily, the ceph-ansible workflow runs a task to build the ansible
inventory; if we make the "external" services integration an
ansible->ansible process we'll probably need to ship from THT an heat
query (or ansible task) to be executed by the "outer" ansible to create
the inventory for the inner ansible


Yea, allowing e2e software deployment with Ansible requires converting 
the current Mistral workflow_tasks into Ansible. In terms of services 
affected by this, there's in-tree ceph-ansible [1] and we have proposed 
patches for Kubernetes and Skydive (that's what i'm aware of).




I supported the introduction of mistral as an API and would prefer to
have more information there versus moving it away into YACT (yet
another configuration tool)


We could mitigate this somewhat by doing what Marios and James suggested 
-- running the global playbook one step at a time when the playbook is 
executed from Mistral. It will not give Mistral 100% of the information 
when compared with the approach you suggested, but it's a bit closer...




depending on mistral for the undercloud install is also not very
different from depending on heat(-all)

I understand the ansible->ansible process addresses the "simplification"
issue we have been asked to look into; it is pretty much the only good
thing I see about it though :D


Right, it's a simpler design, which i consider important, as my hope is 
that over time it would result in some time savings, not just among 
developers but also operators, when having to debug 
things or reason about how TripleO works.


Btw the points i didn't react to -- i mostly agree there :P. There are 
tradeoffs involved in both variants and it's not a clear-as-day choice 
for me either.


Thanks :)

Jirka

[1] 
https://github.com/openstack/tripleo-common/blob/master/workbooks/ceph-ansible.yaml



Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-22 Thread Jiří Stránský

On 22.9.2017 15:30, Jiri Tomasek wrote:

Will it be possible to send Zaqar messages at each deployment step to make
the deployment process more interactive?


If we go the way of allowing the wrapping playbook to execute per step, 
i think we could send messages to Zaqar after each step. Mistral could 
trigger the playbook with sth like `tripleo_run_steps: [1]`, then with 
`tripleo_run_steps: [2]` etc. So even though the playbook would support 
running all steps by default, Mistral could run it step-by-step.


(Currently we only send Zaqar messages before Heat stack and after Heat 
stack, is that right? [1] Or do we send & read some messages from within 
the overcloud stack deployment too?)


With logs from external playbooks (like ceph-ansible) it would be less 
comfy than it is now, or would require extra effort to improve on that. 
Currently AFAIK we don't send any Zaqar messages there anyway [2], but 
we at least publish the logs into Mistral separately from the rest of 
deployment logs. If we trigger an external playbook from Ansible, making 
it a "normal citizen" of a TripleO deployment step, then the external 
playbook logs would be reported along with logs of that whole deployment 
step. This is IMO the main weak point of having the step loop in Ansible 
(assuming its most basic form), perhaps something where we could find an 
improvement, e.g. utilizing ARA as James suggested earlier in the 
thread. (I don't know enough about ARA at this point to suggest a 
particular way of integrating it with Mistral/UI though.)



in case of driving separate
playbooks from mistral workflow, that would be absolutely possible. As it
seems we're more keen on driving everything from wrapping ansible playbook,
is it going to be possible to send Zaqar messages from ansible playbook
directly?


I did a quick search but didn't find any existing Ansible module for 
this. If we needed/wanted this for additional granularity, we might have 
to implement it.
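If we did implement it, the per-step message such a module would post could
look roughly like this (purely a sketch: all field names inside the payload
are made up; only the outer body/ttl envelope follows Zaqar's message format):

```python
# Hypothetical payload builder for a would-be "zaqar_post" Ansible module,
# announcing progress of a single deployment step.
import json

def step_message(step, status="RUNNING", execution_id=None):
    """Build one Zaqar message body for a deploy-step progress update."""
    return {
        "body": {
            "type": "tripleo.deployment.v1.step_progress",  # assumed name
            "payload": {
                "step": step,
                "status": status,
                "execution": execution_id,
            },
        },
        "ttl": 3600,  # seconds the message remains claimable in the queue
    }

print(json.dumps(step_message(3, status="SUCCESS"), indent=2))
```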




Being able to properly monitor progress of deployment is important so it
would be good to clarify how that is going to work.

-- Jirka


[1] 
https://github.com/openstack/tripleo-common/blob/979dc308e51bb9b8e7b66b4258da1d67e50d9c2b/workbooks/deployment.yaml#L180
[2] 
https://github.com/openstack/tripleo-common/blob/979dc308e51bb9b8e7b66b4258da1d67e50d9c2b/workbooks/ceph-ansible.yaml




Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-21 Thread Jiří Stránský

On 21.9.2017 18:04, Marios Andreou wrote:

On Thu, Sep 21, 2017 at 3:53 PM, Jiří Stránský <ji...@redhat.com> wrote:


On 21.9.2017 12:31, Giulio Fidente wrote:


On 09/20/2017 07:36 PM, James Slagle wrote:


On Tue, Sep 19, 2017 at 8:37 AM, Giulio Fidente <gfide...@redhat.com>
wrote:


On 09/18/2017 05:37 PM, James Slagle wrote:


- The entire sequence and flow is driven via Mistral on the Undercloud
by default. This preserves the API layer and provides a clean reusable
interface for the CLI and GUI.



I think it's worth saying that we want to move the deployment steps out
of heat and into ansible, not into mistral, so that mistral will run the
workflow only once and let ansible go through the steps

I think having the steps in mistral would be a nice option to be able to
rerun easily a particular deployment step from the GUI, versus having
them in ansible which is instead a better option for CLI users ... but
it looks like having them in ansible is the only option which permits us
to reuse the same code to deploy an undercloud because having the steps
in mistral would require the undercloud installation itself to depend on
mistral which we don't want to

James, Dan, please comment on the above if I am wrong



That's correct. We don't want to require Mistral to install the
Undercloud. However, I don't think that necessarily means it has to be
a single call to ansible-playbook. We could have multiple invocations
of ansible-playbook. Both Mistral and CLI code for installing the
undercloud could handle that easily.

You wouldn't be able to interleave an external playbook among the
deploy steps however. That would have to be done under a single call
to ansible-playbook (at least how that is written now). We could
however have hooks that could serve as integration points to call
external playbooks after each step.



the benefits of driving the steps from mistral are that then we could
also interleave the deployment steps and we won't need the
ansible-playbook hook for the "external" services:

1) collect the ansible tasks *and* the workflow_tasks (per step) from heat

2) launch the stepN deployment workflow (ansible-playbook)

3) execute any workflow_task defined for stepN (like ceph-ansible
playbook)

4) repeat 2 and 3 for stepN+1

I think this would also provide a nice interface for the UI ... but then
we'd need mistral to be able to deploy the undercloud




Why not launch the _single_  deploy_steps playbook (so we have
when/conditionals with step numbers), passing in the step you want to have
executed (we can pass this in as a parameter to the mistral workflow and
pass through to the ansible-playbook invocation?), otherwise execute all
the steps.


+1 that's the sort of thing i meant by "it's not baked in by default but 
we could make it so". We could even give it a list of steps like 
`tripleo_run_steps: [4, 5, 6]`.



In either case it would be ansible handling the loop
based on a passed-in parameter.
'Ansible-native' looping is currently the
case for the current deploy_steps_playbook here
https://github.com/openstack/tripleo-heat-templates/blob/259cf512b3b7e3539105cdb52421e3239701ef73/common/deploy-steps.j2#L335
- can we not work parameterise that playbook so that we either do loop with
the variable, or just step X?

Then for the upgrade workflow it is as above, but with a step 1.5: first launch
the upgrade_tasks_playbook, then the deploy_steps playbook for all the steps
(consider this
https://review.openstack.org/#/c/505624/3/scripts/upgrade-non-controller.sh@162
- download and run the playbooks for non-controllers in O->P upgrade
pointing this out to illustrate the workflow I have in mind). So I don't
see why we can't have mistral invoking ansible-playbook and taking
parameters like which playbook, which step etc.




Alternatively we could do the main step loop in Ansible directly, and have
the tasks do whatever they need to get the particular service deployed,
from plain Ansible tasks to launching a nested ansible-playbook run if that's what it takes.




I think you can do both, I mean mistral invoking ansible-playbook and still
let ansible handle the steps with a loop.


+1 exactly. FWIW i'm totally on board with wrapping everything in 
Mistral on the outermost level, as that's required for UI. I'm just not 
keen on having Mistral enter the process in between each step and drive 
the step loop.



In fact that is what the current
upgrade_steps_playbook does here
https://github.com/openstack/tripleo-heat-templates/blob/259cf512b3b7e3539105cdb52421e3239701ef73/common/deploy-steps.j2#L363-L365
with a loop variable 'step'. The upgrade_tasks have their 'tags: stepX'
converted to 'when: step == X' in the client here
https://github.com/openstack/python-tripleoclient/blob/4d342826d6c3db38ee88dccc92363b655b1161a5/tripleoclient/v1/overcloud_config.py#L63
- we must come up with a better solution than that; ultimately we can just
update the existing upgrade_tasks to have 'when' and the m

Re: [openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella

2017-10-09 Thread Jiří Stránský

On 9.10.2017 11:29, Flavio Percoco wrote:

Greetings,

I've been working on something called tripleo-apbs (and its respective roles) in
the last couple of months. You can find more info about this work here[0][1][2]

This work is at the point where I think it would be worth starting to discuss how
we want these repos to exist under the TripleO umbrella. As far as I can tell,
we have 2 options (please comment with alternatives if there are more):

1. A repo per role: Each role would have its own repo - this is the way I've
been developing it on Github. This model is closer to the ansible way of doing
things and it'll make it easier to bundle, ship, and collaborate on, individual
roles. Going this way would produce something similar to what the
openstack-ansible folks have.

2. Everything in a single repo: this would ease the import process and
integration with the rest of TripleO. It'll make the early days of this work a
bit easier but it will take us in a direction that doesn't serve one of the
goals of this work.

My preferred option is #1 because one of the goals of this work is to have
independent roles that can also be consumed standalone. In other words, I would
like to stay closer to the ansible recommended structure for roles. Some
examples[3][4]

Any thoughts? preferences?


+1 for option #1. In addition to standalone usage, it feels like a 
better match for "the container way of doing things" in that we'll be 
able to easily mix and match APB versions when necessary. (E.g. having 
problems with bleeding edge Glance APB? Just use a slightly older one 
without being compelled to downgrade the other APBs.) A parallel could 
be drawn to how openstack/puppet-* repos are managed and IMO it's been 
working well that way.


Using APBs this way also seems more "out-of-the-box ready" for APBs that 
don't originate in TripleO project, should we ever want/need to use them 
(e.g. for non-OpenStack services).


Global changes will be harder as they'll require separate commits, and 
in general it's more repos (+ RPMs) to manage, but i hope the 
aforementioned benefits outweigh this.


Jirka


Flavio

[0] http://blog.flaper87.com/deploy-mariadb-kubernetes-tripleo.html
[1] http://blog.flaper87.com/glance-keystone-mariadb-on-k8s-with-tripleo.html
[2] https://github.com/tripleo-apb
[3] https://github.com/tripleo-apb/ansible-role-k8s-mariadb
[4] https://github.com/tripleo-apb/ansible-role-k8s-glance

--
@flaper87
Flavio Percoco





  1   2   >