Re: [openstack-dev] [tripleo] Ansible getting bumped up from 2.4 -> 2.6.6

2018-11-06 Thread Giulio Fidente
On 11/5/18 11:23 PM, Wesley Hayutin wrote:
> Greetings,
> 
> Please be aware of the following patch [1].  This updates ansible in
> queens, rocky, and stein.
>  This was just pointed out to me, and I didn't see it coming so I
> thought I'd email the group.
> 
> That is all, thanks
> 
> [1] https://review.rdoproject.org/r/#/c/14960
thanks Wes for bringing this up

note that we're trying to update ansible to 2.6 because 2.4 is
unsupported and 2.5 is already receiving only security fixes

with the upcoming updates for ceph-ansible in ceph luminous, support for
older ansible releases will be dropped
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-22 Thread Giulio Fidente
On 10/19/18 2:23 PM, Juan Antonio Osorio Robles wrote:
> Hello!
> 
> 
> I would like to propose Bob Fournier (bfournie) as a core reviewer in
> TripleO. His patches and reviews have spanned quite a wide range in our
> project, his reviews show great insight and quality, and I think he would
> be a great addition to the core team.
> 
> What do you folks think?

I think: thank you Bob!

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] request for feedback/review on docker2podman upgrade

2018-10-19 Thread Giulio Fidente
On 10/14/18 5:07 PM, Emilien Macchi wrote:
> I recently wrote a blog post about how we could upgrade a host from
> Docker containers to Podman containers.
> 
> http://my1.fr/blog/openstack-containerization-with-podman-part-3-upgrades/
thanks Emilien, this looks nice and I believe the basic approach,
consisting of:

1) create the podman systemd unit
2) delete the docker container
3) start the podman container

could be used to upgrade the Ceph containers as well (via ceph-ansible)
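
for the record, a minimal ansible sketch of those three steps could look
more or less as follows (unit name, container name and unit content are
made up here for illustration, the real implementation may differ):

  # illustrative only: names and unit content are hypothetical
  - name: create the podman systemd unit
    copy:
      dest: /etc/systemd/system/tripleo_ceph_mon.service
      content: "{{ podman_systemd_unit_content }}"

  - name: delete the docker container
    docker_container:
      name: ceph-mon
      state: absent

  - name: start the podman container via its systemd unit
    systemd:
      name: tripleo_ceph_mon
      state: started
      daemon_reload: yes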
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [cinder][glance][ironic][keystone][neutron][nova][edge] PTG summary on edge discussions

2018-09-26 Thread Giulio Fidente
>
> For the details on the discussions with related OpenStack projects you
> can check the following etherpads for notes:
>
> * Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018
> * Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture
> * Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge
> * Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture
> * Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg
> * Nova: https://etherpad.openstack.org/p/nova-ptg-stein
>
> Notes from the StarlingX sessions:
> https://etherpad.openstack.org/p/stx-PTG-agenda

here is a link to the TripleO edge squad etherpad as well:

https://etherpad.openstack.org/p/tripleo-edge-squad-status

the edge squad is meeting weekly.

> We are still working on the MVP architecture to clean it up and
> discuss comments and questions before moving it to a wiki page. Please
> let me know if you would like to get access to the document and I will
> share it with you.
>
> Please let me know if you have any questions or comments to the above
> captured items.

thanks again!
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] PTG topics and agenda

2018-08-30 Thread Giulio Fidente
On 8/28/18 2:50 PM, Juan Antonio Osorio Robles wrote:
> Hello folks!
> 
> 
> With the PTG being quite soon, I just wanted to remind folks to add your
> topics on the etherpad: https://etherpad.openstack.org/p/tripleo-ptg-stein

thanks Juan,

I think the Edge (line 53) and Split Control Plane (line 74) sessions
can probably be merged into a single one.

I'd be fine with James driving it, and I think it'd make sense to discuss
the "control plane updates" issue [1] in that same session.

1.
http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html

-- 
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] Edge clouds and controlplane updates

2018-08-13 Thread Giulio Fidente
Hello,

I'd like to get some feedback regarding the remaining
work for the split controlplane spec implementation [1]

Specifically, while for some services, like nova-compute, it is not
necessary to update the controlplane nodes after an edge cloud is
deployed, for other services, like cinder (or glance, and probably
others), it is necessary to update the config files on the
controlplane when a new edge cloud is deployed.

In fact for services like cinder or glance, which are hosted in the
controlplane, we need to pull data from the edge clouds (for example
the newly deployed ceph cluster keyrings and fsid) to configure cinder
(or glance) with a new backend.

It looks like this calls for some architectural changes to solve the
following two problems:

- how do we trigger/drive updates of the controlplane nodes after the
edge cloud is deployed?

- how do we scale the controlplane parameters to accommodate N
backends of the same type?

A very rough approach to the latter could be to use jinja to scale up
the CephClient service so that we can have multiple copies of it in the
controlplane.

Each instance of CephClient should provide the ceph config file and
keyring necessary for each cinder (or glance) backend.
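
as a sketch of the idea (hypothetical, nothing like this exists in tree
today), the jinja processing could emit one resource_registry entry per
cluster:

  resource_registry:
  {% for n in range(ceph_client_instances) %}
    OS::TripleO::Services::CephClient{{ n }}: docker/services/ceph-ansible/ceph-client.yaml
  {% endfor %}

with each instance then looking up its own prefixed parameters (eg. a
hypothetical CephClient0ClusterFSID).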

Also note that Ceph is only a particular example but we'd need a similar
workflow for any backend type.

The etherpad for the PTG session [2] touches this, but it'd be good to
start this conversation before then.

1.
https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html

2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane

-- 
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO

2018-08-01 Thread Giulio Fidente
Hi,

I would like to propose Lukas Bezdicka core on TripleO.

Lukas did a lot of work in our tripleoclient, tripleo-common and
tripleo-heat-templates repos to make FFU possible.

FFU, which is meant to permit upgrades from Newton to Queens, requires
in-depth understanding of many TripleO components (for example Heat,
Mistral and the TripleO client), but also of specific TripleO features
which were added during the course of the three releases (for example
config-download and upgrade tasks). I believe his FFU work to have been
very challenging.

Given his broad understanding, Lukas has more recently started helping
with reviews in other areas.

I am so sure he'll be a great addition to our group that I am not even
looking for comments, just votes :D
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes

2018-07-20 Thread Giulio Fidente
On 07/19/2018 10:37 PM, Emilien Macchi wrote:
> Today I played a little bit with Standalone deployment [1] to deploy a
> single OpenStack cloud without the need of an undercloud and overcloud.
> The use-case I am testing is the following:
> "As an operator, I want to deploy a single node OpenStack, that I can
> extend with remote compute nodes on the edge when needed."
> 
> We still have a bunch of things to figure out so it works out of the
> box, but so far I was able to build something that worked, and I found
> it useful to share early to gather some feedback:
>   https://gitlab.com/emacchi/tripleo-standalone-edge
> 
> Keep in mind this is a proof of concept, based on upstream documentation
> and re-using 100% what is in TripleO today. The only thing I'm doing is
> to change the environment and the roles for the remote compute node.
> I plan to work on cleaning up the manual steps that I had to do to make it
> work, like hardcoding some hiera parameters, and figure out how to
> override ServiceNetmap.
> 
> Anyway, feel free to test / ask questions / provide feedback.

hi Emilien,

thanks for sharing this.

I have started experimenting with edge deployments to help out on the
split-controlplane spec [1], which Steven started addressing.

I was able to deploy multiple stacks and isolated Ceph clusters; there
are some bits missing to provision a working configuration for
nova-compute to the edge services, but we could probably collect/export
the necessary outputs from the parent stack (eg. rabbit connection
info) and feed the edge stacks with those.

A much bigger challenge, it seems to me, is that for some services (eg.
glance or cinder) we need to "refresh" the configuration of the
controlplane nodes, to register the newly deployed ceph clusters of the
edge nodes as additional backends for the controlplane services.

Alternatively, we could opt for the deployment of cinder-volume
instances on the edge nodes, but we would still have the same problem
for glance and possibly other services.

I'd like to discuss this topic further at the PTG to gather more
feedback, so I added a bullet to the pad with the Stein PTG topics [2].

1. https://blueprints.launchpad.net/tripleo/+spec/split-controlplane
2. https://etherpad.openstack.org/p/tripleo-ptg-stein

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks

2018-07-17 Thread Giulio Fidente
On 07/10/2018 04:20 PM, Jiří Stránský wrote:
> Hi,
> 
> with the move to config-download deployments, we'll be moving from
> executing external installers (like ceph-ansible) via Heat resources
> encapsulating Mistral workflows towards executing them via Ansible
> directly (nested Ansible process via external_deploy_tasks).
> 
> Updates and upgrades still need to be addressed here. I think we should
> introduce external_update_tasks and external_upgrade_tasks for this
> purpose, but i see two options how to construct the workflow with them.
> 
> During update (mentioning just updates, but upgrades would be done
> analogously) we could either:
> 
> A) Run external_update_tasks, then external_deploy_tasks.
> 
> This works with the assumption that updates are done very similarly to
> deployment. The external_update_tasks could do some prep work and/or
> export Ansible variables which then could affect what
> external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably
> override the playbook path). This way we could also disable specific
> parts of external_deploy_tasks on update, in case reuse is undesirable
> in some places.
thanks

+1 on A from me as well

we currently cycle through a list of playbooks to execute which can be
given as a Heat parameter ... I suppose we'll need to find a way to make
an ansible variable override the Heat value
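
as a sketch (the variable name and the exact plumbing are precisely what
needs figuring out), an external_update_tasks snippet could export the
override with set_fact:

  external_update_tasks:
    - name: use the rolling_update playbook on update
      set_fact:
        ceph_ansible_playbooks:
          - /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml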
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-13 Thread Giulio Fidente
On 06/13/2018 05:50 PM, Emilien Macchi wrote:
> Alan Bishop has been highly involved in the Storage backends integration
> in TripleO and Puppet modules, always here to update with new features,
> fix (nasty and untestable third-party backends) bugs and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
> 
> He's also well knowledgeable of how TripleO works and how containers are
> integrated, I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
> 
> Please vote -1/+1,

+1 :D


-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits

2018-04-20 Thread Giulio Fidente
On 04/19/2018 07:01 PM, Emilien Macchi wrote:
> Greetings,
> 
> As you probably know mcornea on IRC, Marius Cornea has been contributing
> on TripleO for a while, especially on the upgrade bits.
> Part of the quality team, he's always testing real customer scenarios
> and brings a lot of good feedback in his reviews, and quite often takes
> care of fixing complex bugs when it comes to advanced upgrades scenarios.
> He's very involved in tripleo-upgrade repository where he's already
> core, but I think it's time to let him +2 on other tripleo repos for the
> patches related to upgrades (we trust people's judgement for reviews).
> 
> As usual, we'll vote!
> 
> Thanks everyone for your feedback and thanks Marius for your hard work
> and involvement in the project.

+1

thanks Marius for your hard and very important work

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Alternative to empty string for default values in Heat

2018-03-23 Thread Giulio Fidente
On 03/23/2018 05:43 PM, Wojciech Dec wrote:
> Hi All,
> 
> I'm converting a few heat service templates that have been working ok
> with puppet3 modules to run with Puppet 4, and am wondering if there is
> a way to pass an "undefined" default via heat to allow "default" values
> (eg params.pp) of the puppet modules to be used?
> The previous (puppet 3 working) way of passing an empty string in heat
> doesn't work, since Puppet 4 interprets this now as the actual setting.

yaml allows use of ~ to represent null

it looks like in a hiera lookup that is resolved as the "nil" value; not
sure if that is enough to make the default values for a class apply
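
eg. in an environment file (the parameter name below is made up, and
whether puppet 4 then falls back to the params.pp default is the part
to verify):

  parameter_defaults:
    SomeServiceOption: ~    # reaches hiera as a null/nil value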

-- 
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] Ceph integration topics discussed at PTG

2018-03-18 Thread Giulio Fidente
Hi,

I wanted to share a short summary of the discussions happened around the
Ceph integration (in TripleO) at the PTG.


In no particular order:

- ceph-{container,ansible} branching

together with John Fulton and Guillaume Abrioux (and, after PTG,
Sebastien Han) we put some thought into how to make the Ceph container
images and ceph-ansible releases fit the OpenStack model better; the
container images and ceph-ansible are in fact coupled (not all
versions of the container images work with all versions of ceph-ansible)
and we wanted to move from a "rolling release" to a "point release"
approach, mainly to permit regular maintenance of the previous versions
known to work with the previous OpenStack versions. The plan goes more
or less as follows:

  1) ceph-{container,ansible} should be released together with the
regular ceph updates
  2) ceph-container will start using tags and stable branches like
ceph-ansible does

The changes for the ceph/daemon docker images are visible already:
https://hub.docker.com/r/ceph/daemon/tags/


- multiple Ceph clusters

in the attempt to better support the "edge computing" use case, we
discussed adding support for the deployment of multiple Ceph clusters in
the overcloud

together with John Fulton and Steven Hardy (and after PTG, Gregory
Charot) we realized this could be done using multiple stacks and by
doing so, hopefully simplify management of the "cells" and avoid
potential issues due to orchestration of large clusters

much of this will build on Shardy's blueprint to split the control
plane, see spec at: https://review.openstack.org/#/c/523459/

the multiple Ceph clusters specifics will be tracked via another
blueprint:
https://blueprints.launchpad.net/tripleo/+spec/deploy-multiple-ceph-clusters


- ceph-ansible testing with TripleO

we had a very good chat with John Fulton, Guillaume Abrioux, Wesley
Hayutin and Javier Pena on how to get tested new pull requests for
ceph-ansible with TripleO; basically trigger an existing TripleO
scenario on changes proposed to ceph-ansible

Given ceph-ansible is hosted on github, Wesley and Javier suggested
this should be possible with Zuul v3 and volunteered to help; some of
the complications are about building an RPM from uncommitted changes for
testing
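
assuming that gets sorted out, the job definition itself could be a
small Zuul v3 snippet along these lines (job and parent names here are
illustrative, not an agreed design):

  - job:
      name: ceph-ansible-tripleo-scenario004
      parent: tripleo-ci-centos-7-scenario004-multinode-oooq-container
      required-projects:
        - github.com/ceph/ceph-ansible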


- move ceph-ansible triggering from workflow_tasks to external_deploy_tasks

this is a requirement for the Rocky release; we want to migrate away
from using workflow_tasks and use external_deploy_tasks instead, to
integrate into the "config-download" mechanism

this work is tracked via a blueprint and we have a WIP submission
under review:
https://blueprints.launchpad.net/tripleo/+spec/ceph-ansible-external-deploy-tasks
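
the rough shape of it is an external_deploy_tasks entry which runs
ceph-ansible from the undercloud at a given step; a simplified sketch
(inventory/vars generation omitted, paths illustrative, see the review
for the real code) could be:

  external_deploy_tasks:
    - name: run ceph-ansible
      when: step|int == 2
      shell: >
        ansible-playbook -i /path/to/generated/inventory.yml
        -e @/path/to/generated/extra_vars.yml
        /usr/share/ceph-ansible/site-docker.yml.sample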


We're also working with Sofer Athlan-Guyot on the enablement of Ceph in
the upgrade CI jobs and with Tom Barron on scenario004 to deploy Manila
with Ganesha (and CephFS) instead of the CephFS native backend.

Hopefully I didn't forget much; to stay updated on the progress check
our integration squad status at:
https://etherpad.openstack.org/p/tripleo-integration-squad-status

Thanks
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios

2018-01-26 Thread Giulio Fidente
On 01/26/2018 01:49 AM, Paul Belanger wrote:
> On Thu, Jan 25, 2018 at 04:22:56PM -0800, Emilien Macchi wrote:
>> Is there any plans to run TripleO CI jobs in ceph-ansible?
>> I know the project is on github but thanks to zuulv3 we can now easily
>> configure ceph-ansible to run CI jobs in OpenStack Infra.
>>
>> It would be really great to investigate that in the near future so we avoid
>> potential regressions.
>> Sebastien, Giulio, John, thoughts?
>> -- 
>> Emilien Macchi
> 
> Just a note, we haven't actually agreed to enable CI for github projects just
> yet.  While it is something zuul can do now, I believe we still need to decide
> when / how to enable it.
> 
> We are doing some initial testing with ansible/ansible however.

but we like being on the front line! :D

we discussed this same topic with Sebastien and John a few weeks back
and agreed on having some gate job for ceph-ansible CI'ing against TripleO!

how do we start? I think the candidate branch on ceph-ansible to gate is
"beta-3.1" but there will be more ... I am just not sure we're stable
enough to gate master yet ... but we might do it non-voting, it's up for
debate

on TripleO side we'd be looking at running scenarios 001 and 004 ...
maybe initially 004 only is good enough as it covers (at least for ceph)
most of what is in 001 as well

can we continue on IRC? :D

and thanks Emilien and Paul for starting the thread and helping
-- 
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] FFE nfs_ganesha integration

2018-01-22 Thread Giulio Fidente

hi,

I would like to request an FFE for the integration of nfs_ganesha, which 
will provide a better user experience to manila users


This work was slowed down by a few factors:

- it depended on the migration of tripleo to the newer Ceph version 
(luminous), which happened during the queens cycle


- it depended on some additional functionalities to be implemented in
ceph-ansible, which have only recently been made available to tripleo/ci


- it proposes the addition of an additional (and optional) network
(storagenfs) so that guests don't need connectivity to the ceph frontend 
network to be able to use the cephfs shares


The submissions are under review and partially testable in CI [1]. If
accepted, I'd like to reassign the blueprint [2] back to the queens 
cycle, as it was initially.


Thanks

1. https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha
2. https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha
--
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] Request for input: scaling the number of Ceph clusters deployed in the overcloud

2017-11-21 Thread Giulio Fidente
Hi,

we're currently exploring ways to deploy multiple Ceph clusters in the
overcloud.

Given Ceph is now managed by a ceph-ansible playbook, we can "easily"
deploy multiple Ceph clusters by running the playbook multiple times
with different parameters and inventory.


The initial idea to make this consumable in TripleO has been to have
jinja add a prefix to the Ceph service names and their parameters, and
let the user build custom roles (each deploying a different instance of
the Ceph service) to distribute the Ceph services as needed on any
arbitrary role.

The benefits of the above approach are that daemons of different Ceph
clusters can be colocated on the same node and that operators continue
to customize any Ceph parameter using heat environment files as they
used to (they just add the jinja prefix to the parameter name).

The cons are that we'd need to scale (hence use jinja) other services
too, like Cinder or Nova, because the Ceph parameters can be consumed
by those as well.


An alternate proposal has been to tag the roles, bind the Ceph cluster
to a tag to build the inventory, and use role-specific settings so that
instances of the Ceph services deployed on a role would get different
parameters based on the role they run on.

The most important benefit that I can see of the above approach is that
it is a lot less intrusive, as it does not require jinja processing of
the templates, but I do not fully understand what the implementation
would look like, so I was curious if there are examples in tree of
anything similar?

I would also like to know if other people are interested in this same
functionality, so that we can come up with a more generalized solution.

Last but not least, I would like to hear more input, ideas and feedback
to see if there are more ways of doing this!

Thanks for the feedback
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2017-11-10 Thread Giulio Fidente
On 11/26/2015 03:17 PM, Jiří Stránský wrote:
> On 26.11.2015 14:12, Jiří Stránský wrote:

[...]

>> It seems TripleO is hitting similar composability and sanity limits with
>> the top-down approach, and the number of parameters which can only be
>> fed via parameter_defaults is increasing. (The disadvantage of
>> parameter_defaults is that, unlike hiera, we currently have no clear
>> namespacing rules, which means a higher chance of conflict. Perhaps the
>> unit tests suggested in another subthread would be a good start, maybe
>> we could even think about how to do proper namespacing.)
>>
>>
>> Does what i described seem somewhat accurate? Should we maybe buy into
>> the concept of "composable templates, externally fed
>> hierarchy-transcending parameters" for the long term?
> 
> I now realized i might have used too generic or Puppetish terms in the
> explanation, perhaps drowning the gist of the message a bit :) What i'm
> suggesting is: let's consider going with parameter_defaults wherever we
> can, for the sake of composability, and figure out what is the best way
> to prevent parameter name collisions.
+1 I like very much the idea of parameter_defaults + stricter
namespacing rules

Specifically regarding namespaces, puppet was great but ansible doesn't
seem to be as good (at least to me); in fact I think we have chances for
conflicts in both THT and the ansible playbooks
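
to illustrate the difference (parameter names below are just examples):

  parameters:
    # must be wired explicitly through every nested template needing it
    ControllerCount: 3

  parameter_defaults:
    # applies to any template declaring a parameter with this exact
    # name, hence the need for namespacing to avoid collisions
    CephPoolDefaultSize: 3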
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Nominate chem and matbu for tripleo-core !

2017-11-09 Thread Giulio Fidente
On 11/09/2017 09:48 AM, Marios Andreou wrote:
> Hello fellow owls,
> (apologies for the duplicate, forgot to add tripleo in the subject so
> worried it would be missed)
> 
> I would like to nominate (and imo these are both long overdue already):
> 
> Sofer Athlan Guyot (chem)  and
> 
> Mathieu Bultel (matbu) 
> 
> to tripleo-core.
+1 to both with many thanks

long overdue indeed ! :D
-- 
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] Proposing John Fulton core on TripleO

2017-11-08 Thread Giulio Fidente
Hi,

I would like to propose John Fulton core on TripleO.

I think John did awesome work during the Pike cycle around the
integration of ceph-ansible as a replacement for puppet-ceph, for the
deployment of Ceph in containers.

I think John has a good understanding of many different parts of TripleO,
given that the ceph-ansible integration has been a complicated effort
involving changes in heat/tht/mistral workflows/ci and, last but not
least, docs. He is also more recently getting busier with reviews outside
his main comfort zone.

I am sure John would be a great addition to the team and I welcome him
first to tune into radioparadise with the rest of us when joining #tripleo

Feedback is welcomed!
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-10-03 Thread Giulio Fidente
On 09/18/2017 08:50 PM, Alex Schultz wrote:
> Hey folks,
> 
> We started off our PTG with a retrospective for Pike. The output of
> which can be viewed here[0][1].
> 
> One of the recurring themes from the retrospective and the PTG was the
> need for better communication during the cycle.  One of the ideas that
> was mentioned was adding a section to the weekly meeting calling for
> current status from the various tripleo squads[2].  Starting next week
> (Sept 26th), I would like for folks who are members of one of the
> squads be able to provide a brief status or a link to the current
> status during the weekly meeting.  There will be a spot added to the
> agenda to do a status roll call.  It was mentioned that folks may
> prefer to send a message to the ML and just be able to link to it
> similar to what the CI squad currently does[3].  We'll give this a few
> weeks and review how it works.
hi,

I drafted an etherpad for the Integration squad, which I hope we can use
during the meeting to report on our status [1]; it primarily consists of
Ceph and IPA integration for now.

Juan, John, feel free to make any change or add there anything you feel
is useful/necessary.

1. https://etherpad.openstack.org/p/tripleo-integration-squad-status
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-26 Thread Giulio Fidente
On 09/26/2017 06:58 PM, Emilien Macchi wrote:
> Newton is officially EOL next month:
> https://releases.openstack.org/index.html#release-series
> 
> As an action from our weekly meeting, we decided to accelerate the
> reviews for stable/newton before it's too late.
> This email is a reminder and a last reminder will be sent out before
> we EOL for real.
> 
> If you need any help to get backport merged, please raise it here or
> ask on IRC as usual.

I was thinking of backporting this [1] to both ocata and newton.

It should be relatively safe as it is basically only a change to a
default value, which we'd like to make more production-friendly

1. https://review.openstack.org/#/c/506330/
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-09-26 Thread Giulio Fidente

On 09/26/2017 08:55 PM, Alex Schultz wrote:
> On Mon, Sep 18, 2017 at 12:50 PM, Alex Schultz <aschu...@redhat.com> wrote:
>> Hey folks,
>>
>> We started off our PTG with a retrospective for Pike. The output of
>> which can be viewed here[0][1].
>>
>> One of the recurring themes from the retrospective and the PTG was the
>> need for better communication during the cycle.  One of the ideas that
>> was mentioned was adding a section to the weekly meeting calling for
>> current status from the various tripleo squads[2].  Starting next week
>> (Sept 26th), I would like for folks who are members of one of the
>> squads be able to provide a brief status or a link to the current
>> status during the weekly meeting.  There will be a spot added to the
>> agenda to do a status roll call.
>
> I forgot to do this during the meeting[0] this week. I will make sure
> to add it for the meeting next week.  Please remember to have a person
> prepare a squad status for next time.
>
> As a reminder for those who didn't want to click the link, the listed
> squads are:
> ci
> ui/cli
> upgrade
> validations
> workflows
> containers
> networking
> integration
> python3
>
> Thanks,
> -Alex
>
> [0] http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-09-26-14.00.html

great, thanks!

I think it will also help get more attention/feedback/reviews on the
various efforts

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-22 Thread Giulio Fidente
On 09/21/2017 07:53 PM, Jiří Stránský wrote:
> On 21.9.2017 18:04, Marios Andreou wrote:
>> On Thu, Sep 21, 2017 at 3:53 PM, Jiří Stránský <ji...@redhat.com> wrote:

[...]

>>> That way we could run the whole thing end-to-end via
>>> ansible-playbook, or
>>> if needed one could execute smaller bits by themselves (steps or nested
>>> playbook runs) -- that capability is not baked in by default, but i
>>> think
>>> we could make it so.
>>>
>>> Also the interface for services would be clean and simple -- it's always
>>> the ansible tasks.
>>>
>>> And Mistral-less use cases become easier to handle too (= undercloud
>>> installation when Mistral isn't present yet, or development envs when
>>> you
>>> want to tune the playbook directly without being forced to go through
>>> Mistral).
>>>
>>
>> You don't *have* to go through mistral either way I mean you can always
>> just run ansible-playbook directly using the generated playbooks if
>> that is
>> what you need for dev/debug etc.
>>
>>
>>>
>>> Logging becomes a bit more unwieldy in this scenario though, as for the
>>> nested ansible-playbook execution, all output would go into a task in
>>> the
>>> outer playbook, which would be harder to follow and the log of the outer
>>> playbook could be huge.
>>>
>>> So this solution is no silver bullet, but from my current point of
>>> view it
>>> seems a bit less conceptually foreign than using Mistral to provide step
>>> loop functionality to Ansible, which should be able to handle that on
>>> its
>>> own.
>>>
>>>
>> just saying using mistral to invoke ansible-playbook doesn't imply having
>> mistral do the looping/step control. I think it was already mentioned
>> that
>> we can/will have multiple invocations of ansible-playbook. Having the
>> loop
>> in the playbook then means organising our templates a certain way so that
>> there is a _single_ parent playbook which we can parameterise to then run
>> all or some of the steps (which as pointed above is currently the case
>> for
>> the upgrade and deployment steps playbooks).
> 
> Yup, +1 again :) However, the 1)2)3)4) approach discussed earlier in the
> thread suggested to hand over the step loop control to Mistral and keep
> using the Mistral workflow_tasks, which would make it impossible to have
> a single parent playbook, if i understood correctly. So Mistral would be
> a requirement for running all steps via a single command (impacting UC
> install and developer workflow).

yes, I am not sold (yet?) on the idea of ansible driving the deployment
and would like to keep some abstraction in front of it

the additional abstraction will make it possible, for example, to execute
tasks written as mistral actions (eg. python code) in between or during
any given deployment step, instead of ansible tasks only ... I guess we
could also write ansible actions in python, but it's not trivial to ship
them from THT and, given the project mission we have of being "openstack
on openstack", I'd also prefer writing a mistral action vs an ansible one

similarly, the ceph-ansible workflow runs a task to build the ansible
inventory; if we make the "external" services integration an
ansible->ansible process we'll probably need to ship from THT a heat
query (or ansible task) to be executed by the "outer" ansible to create
the inventory for the inner ansible

I supported the introduction of mistral as an API and would prefer to
have more information there versus moving it away into YACT (yet
another configuration tool)

depending on mistral for the undercloud install is also not very
different from depending on heat(-all)

I understand the ansible->ansible process addresses the "simplification"
issue we have been asked to look into; it is pretty much the only good
thing I see about it though :D
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-21 Thread Giulio Fidente
On 09/20/2017 07:36 PM, James Slagle wrote:
> On Tue, Sep 19, 2017 at 8:37 AM, Giulio Fidente <gfide...@redhat.com> wrote:
>> On 09/18/2017 05:37 PM, James Slagle wrote:
>>> - The entire sequence and flow is driven via Mistral on the Undercloud
>>> by default. This preserves the API layer and provides a clean reusable
>>> interface for the CLI and GUI.
>>
>> I think it's worth saying that we want to move the deployment steps out
>> of heat and into ansible, not into mistral, so that mistral will run the
>> workflow only once and let ansible go through the steps
>>
>> I think having the steps in mistral would be a nice option to be able to
>> rerun easily a particular deployment step from the GUI, versus having
>> them in ansible which is instead a better option for CLI users ... but
>> it looks like having them in ansible is the only option which permits us
>> to reuse the same code to deploy an undercloud because having the steps
>> in mistral would require the undercloud installation itself to depend on
>> mistral which we don't want to
>>
>> James, Dan, please comment on the above if I am wrong
> 
> That's correct. We don't want to require Mistral to install the
> Undercloud. However, I don't think that necessarily means it has to be
> a single call to ansible-playbook. We could have multiple invocations
> of ansible-playbook. Both Mistral and CLI code for installing the
> undercloud could handle that easily.
> 
> You wouldn't be able to interleave an external playbook among the
> deploy steps however. That would have to be done under a single call
> to ansible-playbook (at least how that is written now). We could
> however have hooks that could serve as integration points to call
> external playbooks after each step.

the benefits of driving the steps from mistral are that we could then
also interleave the deployment steps and we won't need the
ansible-playbook hook for the "external" services:

1) collect the ansible tasks *and* the workflow_tasks (per step) from heat

2) launch the stepN deployment workflow (ansible-playbook)

3) execute any workflow_task defined for stepN (like ceph-ansible playbook)

4) repeat 2 and 3 for stepN+1
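
a very rough mistral sketch of 2) and 3) (tripleo.ansible-playbook is an
existing tripleo-common action, everything else here is made up):

  version: '2.0'
  tripleo.deploy_step:
    input:
      - step
    tasks:
      run_step_playbook:
        action: tripleo.ansible-playbook
        input:
          playbook: <% $.step %>  # illustrative; would map the step to a playbook
        on-success: run_workflow_tasks
      run_workflow_tasks:
        # the per-step workflow_tasks collected from the service templates
        workflow: tripleo.external_workflow_tasks
        input:
          step: <% $.step %>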

I think this would also provide a nice interface for the UI ... but then
we'd need mistral to be able to deploy the undercloud

>>> - It would still be possible to run ansible-playbook directly for
>>> various use cases (dev/test/POC/demos). This preserves the quick
>>> iteration via Ansible that is often desired.
>>>
>>> - The remaining SoftwareDeployment resources in tripleo-heat-templates
>>> need to be supported by config download so that the entire
>>> configuration can be driven with Ansible, not just the deployment
>>> steps. The success criteria for this point would be to illustrate
>>> using an image that does not contain a running os-collect-config.
>>>
>>> - The ceph-ansible implementation done in Pike could be reworked to
>>> use this model. "config download" could generate playbooks that have
>>> hooks for calling external playbooks, or those hooks could be
>>> represented in the templates directly. The result would be the same
>>> either way though in that Heat would no longer be triggering a
>>> separate Mistral workflow just for ceph-ansible.
>>
>> I'd say for ceph-ansible, kubernetes and in general anything else which
>> needs to run with a standard playbook installed on the undercloud and
>> not one generated via the heat templates... these "external" services
>> usually require the inventory file to be in different format, to
>> describe the hosts to use on a per-service basis, not per-role (and I
>> mean tripleo roles here, not ansible roles obviously)
>>
>> About that, we discussed a more long term vision where the playbooks
>> (static data) needd to describe how to deploy/upgrade a given service is
>> in a separate repo (like tripleo-apb) and we "compose" from heat the
>> list of playbooks to be executed based on the roles/enabled services; in
>> this scenario we'd be much closer to what we had to do for ceph-ansible
>> and I feel like that might finally allow us merge back the ceph
>> deployment (or kubernetes deployment) process into the more general
>> approach driven by tripleo
>>
>> James, Dan, comments?
> 
> Agreed, I think this is the longer term plan in regards to using
> APB's, where everything consumed is an external playbook/role.
> 
> We definitely want to consider this plan in parallel with the POC work
> that Flavio is pulling together and make sure that they are aligned

Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-19 Thread Giulio Fidente
> [...] ansible playbooks.
> 
> I plan to create blueprints this week for Queens and beyond. If you're
> interested in this work, please let me know. I'm open to the idea of
> creating an official squad for this work, but I'm not sure if it's
> needed or not.

+1 !

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-09-06 Thread Giulio Fidente
On 09/05/2017 09:26 PM, Emilien Macchi wrote:
> On Wed, Apr 5, 2017 at 1:49 PM, Emilien Macchi <emil...@redhat.com> wrote:
> [...]
> 
>> == Solutions
>>
>> 1) Switch pingtest to Tempest run on some specific tests, with feature
>> parity of what we had with pingtest.
>> For example, we could imagine to run the scenarios that deploys VM and
>> boot from volume. It would test the same thing as pingtest (details
>> can be discussed here).
>> Each scenario would run more tests depending on the service that they
>> run (scenario001 is telemetry, so it would run some tempest tests for
>> Ceilometer, Aodh, Gnocchi, etc).
>> We should work at making the tempest run as short as possible, and as
>> close as possible to what we have with the pingtest.
> [...]
> 
> 4 months later :-)
> 
> We enabled Tempest on the following jobs:
> 
> gate-tripleo-ci-centos-7-scenario001-multinode-oooq-container
> gate-tripleo-ci-centos-7-scenario002-multinode-oooq-container
> gate-tripleo-ci-centos-7-scenario003-multinode-oooq-container
> gate-tripleo-ci-centos-7-scenario004-multinode-oooq-container
> 
> gate-tripleo-ci-centos-7-scenario001-multinode-oooq
> gate-tripleo-ci-centos-7-scenario002-multinode-oooq
> gate-tripleo-ci-centos-7-scenario003-multinode-oooq
> gate-tripleo-ci-centos-7-scenario004-multinode-oooq
> 
> gate-tripleo-ci-centos-7-nonha-multinode-oooq
> gate-tripleo-ci-centos-7-containers-multinode
> 
> It has feature parity with what pingtest did.
> For example, scenario001 (focused on Telemetry) tests boot from volume
> and the whole autoscaling scenario from Telemetry services, so we can
> test end-to-end that our users can deploy autoscaling apps using
> OpenStack.

slightly OT but notable mention, scenario001/container is also the one
gating Ceph via ceph-ansible currently :D

... soon to convert scenario004/container as well
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] PTG Agenda draft - action required

2017-09-01 Thread Giulio Fidente

On 08/29/2017 05:36 PM, Emilien Macchi wrote:
> On Mon, Aug 28, 2017 at 3:17 PM, Emilien Macchi <emil...@redhat.com> wrote:
> [...]
>> Also, it's still time to propose topics, please go ahead and
>> contribute to the etherpad. We'll review the schedule before the PTG
>> (probably during our weekly meetings tomorrow and next week).
> [...]
>
> I forgot to remind folks that the PTG is a very good time to discuss
> blueprints, as we want to schedule together what we do in the next
> cycle.


roger, I have added a 1h slot to the thursday afternoon to discuss
future developments around Ceph integration


specifically three topics:

 - is use of jinja to create multiple ceph clusters a good idea?
 - upgrade ceph to luminous (maybe also in Kolla)
 - support multiple ceph-pools for cinder-volume
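
on the last topic, the interface could end up looking something like the
below (CinderRbdPoolName exists today, the extra pools parameter is
hypothetical at this stage):

  parameter_defaults:
    CinderRbdPoolName: volumes
    # each extra pool would become an additional cinder RBD backend
    CinderRbdExtraPools:
      - volumes-ssd
      - volumes-hdd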


> Which means, please be prepared and create blueprints / specs (even
> drafts) prior to the PTG, so we have some support that we can use for
> scheduling and discussions.


ack, will do and add links to the etherpad to some LP blueprints

thanks Emilien for setting everything up :D
--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Pacemaker + containers CI

2017-08-29 Thread Giulio Fidente

On 08/29/2017 02:33 PM, Jiří Stránský wrote:
> A bit of context: Currently our only upgrade check job is non-OVB -
> containers-multinode-upgrades-nv. As of late we started hitting
> timeouts, and the job only does mixed-version deploy + 1 node AIO
> overcloud upgrade (just the main step). It doesn't do undercloud
> upgrade, nor compute upgrade, nor converge, and it still times out...
> It's a bit difficult to find things to cut off here. :D We could look
> into speeding things up (e.g. try to reintroduce selective container
> image upload etc.) but i think we might also be approaching the
> "natural" deploy+upgrade limits. We might need to bump up the timeouts
> if we want to test more things. Though it's not only about capacity of
> HW, it could also get unwieldy for devs if we keep increasing the
> feedback time from CI, so we're kinda in a tough spot with upgrade CI...


agreed, which goes back to "nobody looks at the periodic jobs" ... but a
periodic job seems to be the answer?

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Pacemaker + containers CI

2017-08-29 Thread Giulio Fidente

On 08/29/2017 11:14 AM, Jiří Stránský wrote:

> Hi owls,
>
> the CI for containerized deployments with Pacemaker is close! In fact,
> it works [1][2] (but there are pending changes to merge).


cool :D

I also spotted this, which we need for ceph:
https://review.openstack.org/#/c/498356/


but I am not sure if we want to enable ceph in this job as we have it 
already in a couple of scenarios, more below ...


> The way it's proposed in gerrit currently is to switch the
> centos-7-containers-multinode job (featureset010) to deploy with
> Pacemaker. What do you think about making this switch as a first step?
> (The OVB job is an option too, but that one is considerably closer to
> timeouts already, so it may be better left as is.)


+1 on switching the existing job

> Later it would be nice to get a proper clustering test with 3
> controllers. Should we try and switch the centos-7-ovb-ha-oooq job to
> deploy containers on master and stable/pike? (Probably by adding a new
> job that only runs on master + Pike, and making the old ovb-ha-oooq only
> run upto Ocata, to keep the OVB capacity demands unchanged?) I'd be +1
> on that since containers are the intended way of deploying Pike and
> beyond. WDYT?


switching OVB to containers from pike seems fine because that's the
intended way, as you pointed out; yet I would like to enable ceph in the
upgrade job, and it requires multiple MON instances (multiple controllers)


would it make any sense to deploy the pacemaker / ceph combination using 
multiple controllers in the upgrade job and drop the standard ovb job 
(which doesn't do upgrade) or use it for other purposes?

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO][keystone] internal endpoints vs sanity

2017-07-21 Thread Giulio Fidente
Only a comment about the status in TripleO

On 07/21/2017 12:40 PM, Attila Fazekas wrote:

[...]

> We should seriously consider using names instead of ip address also
> on the devstack gates to avoid people thinking the catalog entries
> meant to be used with ip address and keystone is a replacement for DNS.

this is configurable, you can have names or ips in the keystone
endpoints ... actually you can choose to use names or ips independently
for each service, and even for the different endpoints
(Internal/Admin/Public) of the same service

if an operator, like you suggested, configures the DNS to resolve
different IPs for the same name based on where the request comes from,
then he can use the same 'hostname' for all Public, Admin and Internal
endpoints, which I *think* is what you're suggesting

also using names is the default when ssl is enabled

check environments/ssl/tls-endpoints-public-dns.yaml and note how
EndpointMap can resolve to CLOUDNAME or IP_ADDRESS
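
for example, a fragment of that map looks more or less like this (check
the environment file for the exact contents):

  parameter_defaults:
    EndpointMap:
      KeystonePublic: {protocol: https, port: '13000', host: CLOUDNAME}
      KeystoneInternal: {protocol: http, port: '5000', host: IP_ADDRESS}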

adding Juan on CC as he did great work around this and can help further
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-21 Thread Giulio Fidente
On 07/21/2017 03:04 AM, Paul Belanger wrote:
> On Thu, Jul 20, 2017 at 06:21:22PM -0400, James Slagle wrote:

[...]

> Sorry if this hasn't been asked before but why don't you remove all of your
> ansible-playbook logic out of heat and write them directly as native
> playbooks / roles? Then instead of having a tool that reads heat to then
> generate the playbooks / roles, you update heat just to directly call the
> playbooks? Any dynamic information could be stored in the inventory or
> passed using --extra-vars on the CLI?
this is actually what the heat > mistral triggering mechanism implements

Heat provides the extra_vars and triggers the native ceph-ansible
playbook at the right time during the deployment

I believe Steven's goal in [1] is to prepare the data to build the
ansible inventory group vars from the role data, which seems to me to go
in the same direction as what James is proposing: driving the deployment
steps without the Heat software deployments

If from a service template we could point to a playbook, instead of
describing one in Heat, this might replace the Heat > Mistral calling too

Yet, there is another question we probably want to discuss: is this just
so we use ansible to drive the deployment steps or do we want ansible to
replace puppet for the overcloud configuration?

1. https://review.openstack.org/#/c/483929/
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Queens PTG

2017-07-19 Thread Giulio Fidente
On 07/07/2017 07:38 PM, Giulio Fidente wrote:
> On 07/04/2017 08:00 PM, Emilien Macchi wrote:
>> On Wed, Jun 28, 2017 at 9:37 PM, Giulio Fidente <gfide...@redhat.com> wrote:
>>> On 06/28/2017 04:35 PM, Emilien Macchi wrote:
>>>> Hey folks,
>>>>
>>>> Let's start to prepare the next PTG in Denver.
>>>>
>>>> Here's the schedule draft:
>>>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true=gmail
>>>> We'll have a room Wednesday, Thursday and Friday. We'll probably
>>>> finish by end of Friday morning.
>>>>
>>>> Etherpad for the PTG:
>>>> https://etherpad.openstack.org/p/tripleo-ptg-queens
>>>
>>> thanks Emilien!
>>>
>>> I've added a session into the agenda about the integration of
>>> ceph-ansible as this brough in a generic functionality in Heat and
>>> TripleO which allows services to describe workflows to be executed
>>> during the overcloud deployment steps
>>>
>>> I think it'd be nice to review together what the submissions tracked by
>>> the two blueprints actually do [1] [2] and how!
>>
>> Really cool. Indeed, a session for this topic would be awesome. Please
>> make sure we prepare the agenda (maybe prepare a TL;DR on the ML to
>> summarize what has been done and what remains).
>> So we prepare the session correctly and can directly start with
>> pro-active discussions.
> Thanks! I've just added a link to the session etherpad:
> 
> https://etherpad.openstack.org/p/tripleo-ptg-queens-heat-mistral-ceph-ansible
> 
> I think it'll be interesting to discuss it at the PTG because it
> provides for a pretty generic mechanism to:
> 
> 1) run unmodified playbooks, per service
> 2) pass native params to the playbooks without wrappers
> 3) build the full inventory to keep the decision on where to run the
> tasks in the playbook
> 4) keep in heat the mechanism to emit per-role settings and also
> orchestrate the deployment steps
> 
> But some of it can probably be generalized further and extended or
> changed for a potentially even tighter integration with ansible (thanks
> Steven and James for the feedback)... bringing us slowly to the other
> topic which James added to the agenda!

I thought it might have been useful to put up a short walkthrough of the
more interesting changes we implemented to have this functionality and
wrote a blog post [1]

I would encourage everyone interested in mistral>ansible and/or Ceph to
take a look at the submissions referenced by the blog post ... and
help review the latest still under review

Thanks for the interest in the topic!

1.
http://giuliofidente.com/2017/07/understanding-ceph-ansible-in-tripleo.html

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-12 Thread Giulio Fidente

On 07/12/2017 01:53 AM, James Slagle wrote:

> On Tue, Jul 11, 2017 at 5:53 PM, Steve Baker <sba...@redhat.com> wrote:

[...]

>> I think it's important that we allow full support for both mistral-driven and
>> manually running playbooks. If there was no option to run ansible-playbook
>> directly then operators would miss one of the main benefits of using ansible
>> in the first place (which is leveraging their knowledge of inventory,
>> playbooks and roles to deploy things).
>
> +1, I like this idea as well. If you have a few minutes could you
> summarize it here:
> https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible

note that this is how option (3) currently operates; it runs an
unmodified version of ceph-ansible, installed on the undercloud, so what
the user needs to do on failure is to look for the mistral task that
triggered the playbook and rerun the command

what it misses, as pointed out by Steven, is a dump of the execution
environment, which provides the extra_vars given to the playbook ... heat
has this data, it should be possible to dump it into a file on the
undercloud if we want to

I believe Steven is, with (4), trying to improve/reuse this mechanism

> I'm attempting to capture some of the common requirements from this
> thread for discussion at the ptg so we can consider them when choosing
> solution(s).

>> What would be nice is when a heat->mistral->ansible upgrade step fails, the
>> operator is given an ansible-playbook command to run which skips directly to
>> the failing step. This would dramatically reduce the debug cycle and also
>> make it possible for the operator to automate any required fixes over every
>> host in a role. This would likely mean rendering out ansible config files,
>> playbooks, (and roles?) to the operator's working directory. What happens to
>> these rendered files after deployment is an open question. Delete them?
>> Encourage the operator to track them in source control?

interesting question; as long as we run playbooks from a filesystem, I
suppose users can make customizations without "changing" anything in
tripleo ... this is how we tested some of the ceph-ansible fixes!

for upgrades we should maintain the tasks outside the templates to be
able to do that though, assuming we want users to customize the upgrade
tasks

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Giulio Fidente
On 07/10/2017 09:23 PM, James Slagle wrote:
> On Mon, Jul 10, 2017 at 2:54 PM, Giulio Fidente <gfide...@redhat.com> wrote:
>> On 07/10/2017 07:06 PM, James Slagle wrote:
>>> On Mon, Jul 10, 2017 at 11:19 AM, Giulio Fidente <gfide...@redhat.com> 
>>> wrote:
>>>> splitstack though requires changes in how the *existing* openstack
>>>> services are deployed and we didn't want to do that just for the purpose
>>>> of integrating ceph-ansible so I still believe (3) to be a sensible
>>>> compromise to provide the needed functionalities and not breaking the
>>>> existing deployment logic
>>>
>>> We might be talking about different definitions of "splitstack", as
>>> I'm not sure what changes are required for existing services. FWIW, I
>>> refer to what we do in CI with multinode to be splitstack in that the
>>> nodes are already provisioned and we deploy the services on those
>>> nodes using the same templates that we do for a "full" stack.
>>
>>> Those nodes could have just as easily been provisioned with our
>>> undercloud and the services deployed using 2 separate stacks, and that
>>> model works just as well.
>>
>> true, sorry for the misuse of the term splitstack; the existing
>> splitstack implementation continues to work well and option (3), like
>> the others, can be plugged on top of it
>>
>> what I had in mind was instead the "split stack" scenario described by
>> Steven, where the orchestration steps are moved outside heat, this is
>> what we didn't have, still don't have and can be discussed at the PTG
> 
> Ok, thanks for clarifying. So when you're saying split-stack in this
> context, you imply just deploying a baremetal stack, then use whatever
> tool we want (or may develop) to deploy the service configuration.

yes, but I am still assuming heat to be the tool providing the per-role and
per-service settings, while no longer the tool orchestrating the steps

I also don't think we should assume puppet or ansible to be "the
deployment tool"; the past seems to be telling us that we changed the
tool once already, later decided to use a new one that better fit our
needs for upgrades, and yet resorted to a third, more generic 'workflow
triggering' mechanism to further decouple some services' configuration
from the general approach, so I wouldn't give away flexibility easily
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Giulio Fidente
On 07/10/2017 07:06 PM, James Slagle wrote:
> On Mon, Jul 10, 2017 at 11:19 AM, Giulio Fidente <gfide...@redhat.com> wrote:
>> On 07/10/2017 03:19 PM, Steven Hardy wrote:
>>> On Fri, Jul 7, 2017 at 6:50 PM, James Slagle <james.sla...@gmail.com> wrote:
>>
>> [...]
>>
>>> Yeah, I think the first step is to focus on a clean "split stack"
>>> model where the nodes/networks etc are still deployed via heat, then
>>> ansible handles the configuration of the nodes.
>>
>> +1
>>
>> as per my previous email, if splitstack was available we might have been
>> able to use that for the ceph-ansible integration: "if we had migrated
>> to splitstack already, it might have been possible"
> 
> Can you expand on what isn't available? I've primarily been the one
> working on different parts of splitstack, and I'm not sure what it
> can't do that you need it to do :).

the idea behind option (3) was to make it possible to run any mistral
workflow (or task) to deploy a service

we decoupled, on a per-service basis, how a given service is deployed
from the rest of the stack, yet maintained orchestration of the
overcloud deployment steps in heat; I know for sure that not everybody
liked this idea but it was the goal

as a result via option (3) you can deploy a new service in tripleo by
pointing it to a workflow ... and it doesn't matter if the workflow uses
ansible, puppet or simply returns 0

plus the workflow can be executed at a given deployment step, making it
possible to interleave its execution with the rest of the deployment
steps (the puppet apply steps); splitstack couldn't interleave the steps
and even if we made it do so, we would still need to add the parts
describing which workflow/task needs to be run

but now that option (3) is implemented, assuming we move outside heat
the capability to collect and run tasks/workflows for a given service,
it'll be trivial to remove the "mistral > heat > mistral" loop, we'd
just need to execute the service workflows from the
$new_tool_driving_the_deployment_steps

>> splitstack though requires changes in how the *existing* openstack
>> services are deployed and we didn't want to do that just for the purpose
>> of integrating ceph-ansible so I still believe (3) to be a sensible
>> compromise to provide the needed functionalities and not breaking the
>> existing deployment logic
> 
> We might be talking about different definitions of "splitstack", as
> I'm not sure what changes are required for existing services. FWIW, I
> refer to what we do in CI with multinode to be splitstack in that the
> nodes are already provisioned and we deploy the services on those
> nodes using the same templates that we do for a "full" stack.

> Those nodes could have just as easily been provisioned with our
> undercloud and the services deployed using 2 separate stacks, and that
> model works just as well.

true, sorry for the misuse of the term splitstack; the existing
splitstack implementation continues to work well and option (3), like
the others, can be plugged on top of it

what I had in mind was instead the "split stack" scenario described by
Steven, where the orchestration steps are moved outside heat, this is
what we didn't have, still don't have and can be discussed at the PTG

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Giulio Fidente
On 07/10/2017 03:19 PM, Steven Hardy wrote:
> On Fri, Jul 7, 2017 at 6:50 PM, James Slagle <james.sla...@gmail.com> wrote:

[...]

> Yeah, I think the first step is to focus on a clean "split stack"
> model where the nodes/networks etc are still deployed via heat, then
> ansible handles the configuration of the nodes.

+1

as per my previous email, if splitstack was available we might have been
able to use that for the ceph-ansible integration: "if we had migrated
to splitstack already, it might have been possible"

splitstack though requires changes in how the *existing* openstack
services are deployed and we didn't want to do that just for the purpose
of integrating ceph-ansible so I still believe (3) to be a sensible
compromise to provide the needed functionalities and not breaking the
existing deployment logic

note that I know of at least one other case (the swift rings building)
which would benefit from being able to trigger a workflow during the
overcloud deployment and does not need to run ansible

[...]

>> Personally, I'm pretty apprehensive about the approach taken in (3). I
>> feel that it is a lot of complexity that could be done simpler if we
>> took a step back and thought more about a longer term approach. I
>> recognize that it's mostly an experiment/POC at this stage, and I'm
>> not trying to directly knock down the approach. It's just that when I
>> start to see more patches (Kubernetes installation) using the same
>> approach, I figure it's worth discussing more broadly vs trying to
>> have a discussion by -1'ing patch reviews, etc.
> 
> I agree, I think the approach in (3) is a stopgap until we can define
> a cleaner approach with less layers.

> IMO the first step towards that is likely to be a "split stack" which
> outputs heat data, then deployment configuration is performed via
> mistral->ansible just like we already do in (1).

given option (3) allows triggering of workflows during a particular
deployment step, it seems that option (1) would need to be revisited to
implement some sort of a loop in mistral, instead of heat, to provide
that same functionality ... which in the end moves the existing logic
from heat into mistral

>> I'm interested in all feedback of course. And I plan to take a shot at
>> working on the prototype I mentioned in (5) if anyone would like to
>> collaborate around that.
> 
> I'm very happy to collaborate, and this is quite closely related to
> the investigations I've been doing around enabling minor updates for
> containers.
> 
> Lets sync up about it, but as I mentioned above I'm not yet fully sold
> on a new translation tool, vs just more t-h-t refactoring to enable
> output of data directly consumable via ansible-playbook (which can
> then be run via operators, or heat, or mistral, or whatever).
I'd be happy to revisit the requirements around the ceph-ansible
integration as well, to see how those can still be met
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] proposing Alex Schultz tripleo-core in all projects

2017-07-10 Thread Giulio Fidente
thanks Alex!

+1

On 07/07/2017 07:39 PM, Emilien Macchi wrote:
> Alex has demonstrated high technical and community skills in TripleO -
> where he's already core on THT, instack-undercloud, and puppet-tripleo
> - but also very involved in other repos.
> I propose that we extend his core status to all TripleO projects and
> of course trust him (like we trust all core members) to review patches
> where we feel comfortable.
> 
> He has shown a high interest in reviewing other TripleO projects and I
> think he would be ready for this change.
> As usual, this is an open proposal, any feedback is welcome.


-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Giulio Fidente
from heat into mistral (as opposed to the approach
in (3) which keeps all the understanding in heat); I am not sure if
having this information in two tools will help in the long term but I
guess it has to be weighed against its pros

[...]
> I recognize that saying "moving away from Heat" may be quite
> controversial. While it's not 100% the same discussion as what we are
> doing with Ansible, I think it is a big part of the discussion and if
> we want to continue with Heat as the primary orchestration tool in
> TripleO.

I think this is a key question for the conversation we'll have; the
approach in (3) is based on the idea that heat stays and keeps the
understanding of what/when is happening in the templates; I think we are
testing the use of heat for the deployment of the undercloud with the
intent to reuse this understanding.
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Queens PTG

2017-07-07 Thread Giulio Fidente
On 07/04/2017 08:00 PM, Emilien Macchi wrote:
> On Wed, Jun 28, 2017 at 9:37 PM, Giulio Fidente <gfide...@redhat.com> wrote:
>> On 06/28/2017 04:35 PM, Emilien Macchi wrote:
>>> Hey folks,
>>>
>>> Let's start to prepare the next PTG in Denver.
>>>
>>> Here's the schedule draft:
>>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true=gmail
>>> We'll have a room Wednesday, Thursday and Friday. We'll probably
>>> finish by end of Friday morning.
>>>
>>> Etherpad for the PTG:
>>> https://etherpad.openstack.org/p/tripleo-ptg-queens
>>
>> thanks Emilien!
>>
>> I've added a session into the agenda about the integration of
>> ceph-ansible as this brought in a generic functionality in Heat and
>> TripleO which allows services to describe workflows to be executed
>> during the overcloud deployment steps
>>
>> I think it'd be nice to review together what the submissions tracked by
>> the two blueprints actually do [1] [2] and how!
> 
> Really cool. Indeed, a session for this topic would be awesome. Please
> make sure we prepare the agenda (maybe prepare a TL;DR on the ML to
> summarize what has been done and what remains).
> So we prepare the session correctly and can directly start with
> pro-active discussions.
Thanks! I've just added a link to the session etherpad:

https://etherpad.openstack.org/p/tripleo-ptg-queens-heat-mistral-ceph-ansible

I think it'll be interesting to discuss it at the PTG because it
provides for a pretty generic mechanism to:

1) run unmodified playbooks, per service
2) pass native params to the playbooks without wrappers
3) build the full inventory to keep the decision on where to run the
tasks in the playbook
4) keep in heat the mechanism to emit per-role settings and also
orchestrate the deployment steps
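
As a concrete (if hedged) sketch of the above, a service template can
describe the workflow to run at a given step more or less like this;
the exact key and parameter names may differ from what finally merged:

    heat_template_version: pike

    parameters:
      CephAnsibleWorkflowName:
        type: string
        default: tripleo.storage.v1.ceph-install

    outputs:
      role_data:
        value:
          service_name: ceph_base
          # a workflow execution bound to overcloud deploy step 2
          service_workflow_tasks:
            step2:
              - name: ceph_install
                workflow: {get_param: CephAnsibleWorkflowName}
                input:
                  ansible_skip_tags: 'package-install,with_pkg'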

But some of it can probably be generalized further and extended or
changed for a potential even more tight integration with ansible (thanks
Steven and James for the feedback)... bringing us slowly to the other
topic which James added to the agenda!
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Queens PTG

2017-06-28 Thread Giulio Fidente
On 06/28/2017 04:35 PM, Emilien Macchi wrote:
> Hey folks,
> 
> Let's start to prepare the next PTG in Denver.
> 
> Here's the schedule draft:
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true=gmail
> We'll have a room Wednesday, Thursday and Friday. We'll probably
> finish by end of Friday morning.
> 
> Etherpad for the PTG:
> https://etherpad.openstack.org/p/tripleo-ptg-queens

thanks Emilien!

I've added a session into the agenda about the integration of
ceph-ansible as this brought in a generic functionality in Heat and
TripleO which allows services to describe workflows to be executed
during the overcloud deployment steps

I think it'd be nice to review together what the submissions tracked by
the two blueprints actually do [1] [2] and how!

Flavio is using some of this code for Kubernetes integration as well

1.
https://blueprints.launchpad.net/heat/+spec/mistral-new-resource-type-workflow-execution

2. https://blueprints.launchpad.net/tripleo/+spec/tripleo-ceph-ansible

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo][manila] Ganesha deployment

2017-04-11 Thread Giulio Fidente
On Tue, 2017-04-11 at 16:50 +0200, Jan Provaznik wrote:
> On Mon, Apr 10, 2017 at 6:55 PM, Ben Nemec <openst...@nemebean.com>
> wrote:
> > On 04/10/2017 03:22 AM, Jan Provaznik wrote:
> > Well, on second thought it might be possible to make the Storage
> > network
> > only routable within overcloud Neutron by adding a bridge mapping
> > for the
> > Storage network and having the admin configure a shared Neutron
> > network for
> > it.  That would be somewhat more secure since it wouldn't require
> > the
> > Storage network to be routable by the world.  I also think this
> > would work
> > today in TripleO with no changes.
> > 
> 
> This sounds interesting, I was searching for more info how bridge
> mapping should be done in this case and how specific setup steps
> should look like, but the process is still not clear to me, I would
> be
> grateful for more details/guidance with this.

I think this will be represented in neutron as a provider network,
which has to be created by the overcloud admin, after the overcloud
deployment is finished

While based on Kilo, this was one of the best docs I could find and it
includes config examples [1]

It assumes that the operator created a bridge mapping for it when
deploying the overcloud
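
for illustration, something along these lines, where the names and
ranges are made up (the bridge mapping would come from a
NeutronBridgeMappings entry like 'storage:br-storage' at deploy time):

    neutron net-create storage --shared \
        --provider:network_type flat \
        --provider:physical_network storage
    neutron subnet-create storage 172.16.1.0/24 --name storage-subnet \
        --allocation-pool start=172.16.1.200,end=172.16.1.250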

> > I think the answer here will be the same as for vanilla Ceph.  You
> > need to
> > make the network routable to instances, and you'd have the same
> > options as I
> > discussed above.
> > 
> 
> Yes, it seems that using the mapping to provider network would solve
> the existing problem when using ceph directly and when using ganesha
> servers in future (it would be just matter of to which network is
> exposed).

+1

regarding the composability questions, I think this represents a
"composable HA" scenario where we want to manage a remote service with
pacemaker using pacemaker-remote

yet at this stage I think we want to add support for new services by
running them in containers first (only?) and pacemaker+containers is
still a work in progress so there aren't easy answers

containers will have access to the host networks though, so the case
for a provider network in the overcloud remains valid

1. https://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Consistent way of integrating services with different backends

2017-04-06 Thread Giulio Fidente
On Thu, 2017-04-06 at 13:07 +0200, Ricardo Noriega De Soto wrote:
> Hi owls!
> 
> This is something that I've been discussing in the IRC channel but
> still I
> think we should define a consistent way of integrating services which
> support different backends. In this case, I'm refering to BGPVPN and
> L2GW
> Neutron services, but it could be applied to any other type of
> service.

yes indeed there is a similar issue with the storage services and their
supported backends

> These two Neutron service plugins support different backends such an
> agent
> and a SDN controller (OpenDaylight). Usually the reference
> architecture
> will use the agents.
> 
> My main question is about how to model this into THT properly.
> 
> It is clear that we have to create two different composable services:
> one
> for the API and one for the agent. However, how many environment
> files we
> should have and which should be the content?

currently for cinder we use a tht service for each backend; multiple
backends can be enabled at the same time; having multiple instances of
the same backend is a bit trickier and requires some yaml editing
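
for illustration, each backend boils down to an environment file which
maps a service template and sets its parameters, roughly like this (the
file and resource names are approximate, check tht for the real ones):

    resource_registry:
      OS::TripleO::Services::CinderBackendNetApp: ../puppet/services/cinder-backend-netapp.yaml

    parameter_defaults:
      CinderEnableNetappBackend: true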

> i.e. L2GW project
> 
> Option 1:
> 
>    - neutron-l2gw-api.yaml enabling the corresponding API composable
>    service.
>    - neutron-l2gw-agent.yaml enabling the corresponding agent
> composable
>    service.
> 
> openstack overcloud deploy -e neutron-l2gw-api.yaml -e
>  neutron-l2gw-agent.yaml (with agent)
> openstack overcloud deploy -e neutron-l2gw-api.yaml -e
>  neutron-opendaylight-l3.yaml (with ODL)
> 
> Option 2:
> 
>    - neutron-l2gw.yaml enabling the API and the agent as a reference
>    architecture scenario.
>    - neutron-l2gw-odl.yaml enabling the API with OpenDaylight as
>    service_provider
> 
> openstack overcloud deploy -e neutron-l2gw.yaml (with agent)
> openstack overcloud deploy -e neutron-l2gw-odl.yaml -e
>  neutron-opendaylight-l3.yaml (with ODL)
> 
> 
> I'm not really pushing for any option, but I'm just concerned from the
> user
> experience point of view. As a user, which way is more friendly? or
> understandable? Where in the documentation is this reflected?

I am not sure there is a single answer; option 2) seems to me more user
friendly and easier to consume in the UI

Yet when working on the integration of CephMDS and the Manila/CephFS
backend, we decided to use two different environment files, one to
enable CephMDS and one to enable the CephFS backend in Manila. This was
so that operators could deploy CephMDS without Manila (to provide
CephFS to the overcloud), or connect the Manila/CephFS backend to an
external unmanaged Ceph cluster, using only one or the other
environment file.
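
For illustration, with this split the two use cases compose like the
following; the environment file names are from memory and may differ:

    # CephMDS and the CephFS backend for Manila, both managed by tripleo
    openstack overcloud deploy --templates \
      -e environments/services/ceph-mds.yaml \
      -e environments/manila-cephfsnative-config.yaml

    # or Manila/CephFS pointing at an external, unmanaged Ceph cluster
    openstack overcloud deploy --templates \
      -e environments/puppet-ceph-external.yaml \
      -e environments/manila-cephfsnative-config.yaml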

My conclusion: if there aren't reasons to deploy the two services
independently, I'd probably go with option 2); if there are reasons to
deploy only one of them, option 1) is the only one which seems to allow
that without requiring yaml edits from users.



Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-15 Thread Giulio Fidente
On 03/15/2017 04:44 PM, John Trowbridge wrote:
> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
> 
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liason, with weekly summary emails of our
> meetings to this list.

++

where would CI be without you guys :)
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Giulio Fidente
On 02/16/2017 07:25 PM, Heidi Joy Tretheway wrote:
> Amid the spirited discussion on your mascot, I wanted to clear up confusion 
> about our next steps and your options. 
> 
> 1. I’ve taken Carlos’s sketches to our illustration team to ask them to make 
> the original TripleO owl in the style of the new logo family. This takes a 
> bit of time and probably won’t be ready before the PTG, but I'll send it to 
> Emilien (as I have been communicating directly with PTLs) as soon as it’s 
> ready, to share with the team. 
> 
> 2. At that point, your team can review and decide either (1) yes - adopt it, 
> (2) request slight changes, or (3) decline to use a new mascot. 
> 
> If your team picks #1, you’ll get the logo in about 10 variations 
> (horizontal, vertical, vector, etc).
> 
> If your team picks #2, I need one person (for most teams, this has been the 
> PTL) or a small group who will be willing to represent TripleO’s preference, 
> and chat with me for a few minutes to nail down exactly what needs to be 
> changed. We get a lot of conflicting feedback, so if your team doesn’t have 
> one person to help me select which feedback to use, then I end up doing it 
> (and that makes nobody happy). 
> 
> If your team picks #3, then on things like the project navigator, you’ll just 
> see an empty space (represented by a light gray circle) above the word 
> “TripleO” where the mascot creature would have been, while the other five to 
> eight projects on the page will have their mascot illustration shown. On 
> signage or places where we need to print a logo, your team name will be 
> printed without any illustration. You can keep using your old logo on 
> whatever you print/create, but it won’t appear on official channels. 
> 
> Thanks for your passion for this project!

Heidi, thank you.

Can I ask you to also share what are the requirements/goals around the
new logo? That would help me understand *why* and *what* has been
changed in the original logo and, hopefully, facilitate the happy ending! ;)
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Giulio Fidente
On 02/14/2017 03:38 AM, Emilien Macchi wrote:
> Team, I've got this email from Heidi.
> 
> I see 3 options :
> 
> 1. Keep existing logo: http://tripleo.org/_static/tripleo_owl.svg .
> 
> 2. Re-design a new logo that "meets" OpenStack "requirements".
> 
> 3. Pick-up the one proposed (see below).

while not the most skilled around, I can tell I have seen tripleo
change a lot over time (the project, not the logo), so I wouldn't ever
think of tripleo people as hostile to change

I'd vote for option (1) as well, but if someone could iterate over the
*existing* logo to make it meet the requirements, instead of proposing a
completely different new one, that would be received better I think; is
this still an option? I think Carlos (in CC) suggested something like
that too?

also, to propose a change I think it would be useful to share the (new?)
requirements as well, so that people can eventually understand those
better too
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] tripleo-heat-templates, vendor plugins and the new hiera hook

2017-01-25 Thread Giulio Fidente
On 01/25/2017 04:32 PM, Steven Hardy wrote:
> On Wed, Jan 25, 2017 at 02:59:42PM +0200, Marios Andreou wrote:
>> Hi, as part of the composable upgrades workflow shaping up for Newton to
>> Ocata, we need to install the new hiera hook that was first added with
>> [1] and disable the old hook and data as part of the upgrade
>> initialization [2]. Most of the existing hieradata was ported to use the
>> new hook in [3]. The deletion of the old hiera data is necessary for the
>> Ocata upgrade, but it also means it will break any plugins still using
>> the 'old' os-apply-config hiera hook.
>>
>> In order to be able to upgrade to Ocata any templates that define hiera
>> data need to be using the new hiera hook and then the overcloud nodes
>> need to have the new hook installed (installing is done in [2] as a
>> matter of necessity, and that is what prompted this email in the first
>> place). I've had a go at updating all the plugin templates that are
>> still using the old hiera data with a review at [4] which I have -1 for now.
>>
>> I'll try and reach out to some individuals more directly as well but
>> wanted to get the review at [4] and this email out as a first step,
> 
> Thanks for raising this marios, and yeah it's unfortunate as we've had to
> do a switch from the old to the new hiera hook this release without a
> transition where both work.
> 
> I think we probably need to do the following:
> 
> 1. Convert anything in t-h-t referring to the old hook to the new (seems you
> have this in progress, we need to ensure it all lands before ocata)

was working on this as well today for Ceph

https://review.openstack.org/#/c/425288/

thanks marios
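
as an aside, for anyone carrying out-of-tree templates, the conversion
is mostly a one-line change per config resource; a hedged sketch (the
exact data layout expected by the new hook may differ):

    resources:
      CephExtraConfig:
        type: OS::Heat::StructuredConfig
        properties:
          group: hiera        # previously: os-apply-config
          config:
            datafiles:
              ceph_extraconfig:
                ceph::profile::params::osd_pool_default_size: 3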
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-23 Thread Giulio Fidente
On 01/23/2017 11:07 AM, Saravanan KR wrote:
> Thanks John for the info.
> 
> I am going through the spec in detail. And before that, I had a few
> thoughts about how I wanted to approach this, which I have drafted in
> https://etherpad.openstack.org/p/tripleo-derive-params. And it is not
> 100% ready yet, I was still working on it.

I've linked this etherpad for the session we'll have at the PTG

> As of now, there are few differences on top of my mind, which I want
> to highlight, I am still going through the specs in detail:
> * Profiles vs Features - Considering an overcloud node as a profile
> rather than a node which can host these features, would have
> limitations to it. For example, if i need a Compute node to host both
> Ceph (OSD) and DPDK, then the node will have multiple profiles or we
> have to create a profile like -
> hci_enterprise_many_small_vms_with_dpdk? The first one is not
> appropriate and the later is not scaleable, may be something else in
> your mind?
> * Independent - The initial plan of this was to be independent
> execution, also can be added to deploy if needed.
> * Not to expose/duplicate parameters which are straight forward, for
> example tuned-profile name should be associated with feature
> internally, Workflows will decide it.

for all of the above, I think we need to decide if we want the
optimizations to be profile-based and gathered *before* the overcloud
deployment is started, or if we want to set these values during the
overcloud deployment based on the data we have at runtime

seems like both approaches have pros and cons and this would be a good
conversation to have with more people at the PTG

> * And another thing, which I couldn't get is, where will the workflow
> actions be defined, in THT or tripleo_common?

to me it sounds like executing the workflows before stack creation is
started would be fine, at least for the initial phase

running workflows from Heat depends on the other blueprint/session we'll
have about the WorkflowExecution resource; once that is available, we
could trigger the workflow execution from tht if beneficial

> The requirements which I thought of, for deriving workflow are:
> Parameter Deriving workflow should be
> * independent to run the workflow
> * take basic parameters inputs, for easy deployment, keep very minimal
> set of mandatory parameters, and rest as optional parameters
> * read introspection data from Ironic DB and Swift-stored blob
> 
> I will add these comments as starting point on the spec. We will work
> towards bringing down the differences, so that operators headache is
> reduced to a greater extent.

thanks

-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Mistral Workflow for deriving THT parameters

2017-01-10 Thread Giulio Fidente

On 01/04/2017 09:13 AM, Saravanan KR wrote:

Hello,

The aim of this mail is to ease the DPDK deployment with TripleO. I
would like to see if the approach of deriving THT parameters based on
introspection data, with a high level input, would be feasible.

Let me briefly describe the complexity of certain parameters, which are
related to DPDK. The following parameters should be configured for a
well performing DPDK cluster:
* NeutronDpdkCoreList (puppet-vswitch)
* ComputeHostCpusList (PreNetworkConfig [4], puppet-vswitch) (under review)
* NovaVcpuPinset (puppet-nova)

* NeutronDpdkSocketMemory (puppet-vswitch)
* NeutronDpdkMemoryChannels (puppet-vswitch)
* ComputeKernelArgs (PreNetworkConfig [4]) (under review)
* Interface to bind DPDK driver (network config templates)

The complexity of deciding some of these parameters is explained in
the blog [1], where the CPUs has to be chosen in accordance with the
NUMA node associated with the interface. We are working a spec [2], to
collect the required details from the baremetal via the introspection.
The proposal is to create a mistral workbook and actions
(tripleo-common), which will take minimal inputs and decide the actual
value of parameters based on the introspection data. I have created a
simple workbook [3] with what I have in mind (not final, only a
wireframe). The expected output of this workflow is to return the list
of inputs for "parameter_defaults", which will be used for the
deployment. I would like to hear from the experts if there are any
drawbacks with this approach or any other better approach.


hi, I am not an expert, I think John (on CC) knows more but this looks 
like a good initial step to me.


once we have the workbook in good shape, we could probably integrate it 
in the tripleo client/common to (optionally) trigger it before every 
deployment
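
to make the discussion concrete, a minimal hedged sketch of such a
workbook follows; this is not the actual workbook from [3] and the
action and parameter names are assumptions:

    ---
    version: '2.0'
    name: tripleo.derive_params.example

    workflows:
      derive_dpdk_params:
        type: direct
        input:
          - node_uuid
        tasks:
          get_introspection_data:
            # assumes the ironic-inspector action is registered in mistral
            action: baremetal_introspection.get_data uuid=<% $.node_uuid %>
            publish:
              hw_data: <% task(get_introspection_data).result %>
            on-success: compute_parameters
          compute_parameters:
            # placeholder derivation; the real logic would walk the NUMA
            # topology in hw_data to pick the CPU lists and socket memory
            publish:
              parameter_defaults:
                NeutronDpdkCoreList: "2,3"
                NeutronDpdkSocketMemory: "2048,2048"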


would you be able to join the PTG to help us with the session on the 
overcloud settings optimization?


https://etherpad.openstack.org/p/tripleo-ptg-pike
--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [Heat][TripleO] How to run mistral workflows via templates

2016-12-16 Thread Giulio Fidente

On 12/16/2016 01:56 PM, Christian Schwede wrote:

we're trying to address in TripleO a couple of use cases for which we'd
like to trigger a Mistral workflow from a Heat template.

One example where this would be useful is the creation of the Swift
rings, which need some data related to the Heat stack (like the list of
Swift nodes and their disks), so it can't be executed in advance, yet it
provides data which is needed to successfully complete the deployment of
the overcloud.

Currently we can create a workflow from Heat, but we can't trigger its
execution and also we can't block Heat on the result of the execution.

I was wondering if it would make sense to have a property for the
existing Workflow resource to let the user decide if the workflow should
*also* be triggered on CREATE/UPDATE? And if it would make sense to
block the Workflow resource until the execution result is returned in
that case?


I think it needs to be triggered a bit later actually? For the Swift use
case it needs to be executed after all instances are created (but
preferably before starting any Puppet actions on the nodes), not when
the CREATE/UPDATE itself actually starts.


yep, I was referring to the workflow resource CREATE/UPDATE action

we have complete control in Heat over when the workflow resource itself
should be created



Alternatively, would an ex-novo Execution resource make more sense?

Or are there different ideas, approaches to the problem?


As a workaround for now I'm using the signal URL and trigger it in a
shell script on the nodes (the shell script is running anyways to fetch
and validate the rings). To avoid multiple parallel workflow executions
triggered by a dozen nodes I set a flag in the Mistral environment;
further actions will immediately return then.

I'd prefer a different and cleaner approach like you proposed but for me
that's working well for the moment.


ack
--
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [Heat][TripleO] How to run mistral workflows via templates

2016-12-16 Thread Giulio Fidente

hi,

we're trying to address in TripleO a couple of use cases for which we'd 
like to trigger a Mistral workflow from a Heat template.


One example where this would be useful is the creation of the Swift 
rings, which need some data related to the Heat stack (like the list of 
Swift nodes and their disks), so it can't be executed in advance, yet it 
provides data which is needed to successfully complete the deployment of
the overcloud.


Currently we can create a workflow from Heat, but we can't trigger its 
execution and also we can't block Heat on the result of the execution.


I was wondering if it would make sense to have a property for the 
existing Workflow resource to let the user decide if the workflow should 
*also* be triggered on CREATE/UPDATE? And if it would make sense to 
block the Workflow resource until the execution result is returned in 
that case?


Alternatively, would an ex-novo Execution resource make more sense?

Or are there different ideas, approaches to the problem?
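
To illustrate the first idea, a hedged sketch of what the template
interface could look like; note the 'trigger_on' property is
hypothetical, nothing like it exists in the current Workflow resource:

    resources:
      swift_rings_workflow:
        type: OS::Mistral::Workflow
        properties:
          name: tripleo.swift_rings_build
          type: direct
          tasks:
            - name: build_rings
              action: std.echo output="build the rings here"
          # hypothetical property: run the workflow on these stack
          # actions and block the resource until the execution completes
          trigger_on: [CREATE, UPDATE]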

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Giulio Fidente

On 12/12/2016 02:51 PM, Giulio Fidente wrote:

On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:

Hey Folks:

I would like to get some thoughts on $Subject. This came up when I was
discussing the standalone roles for telemetry. Currently when we deploy
redis in tripleo, it's a pacemaker managed service. So if we were to
deploy telemetry services on a dedicated node we could. But redis will
have to be on another node? (assuming we don't want to pull in
pacemaker onto telemetry nodes).


currently redis instances are not configured as a redis cluster but use
the master/slave replication model instead and pacemaker is taking care
of electing/relocating the redis master as needed

there shouldn't be any dependency on the redis profile for the telemetry
roles, they should instead just point at the redis_vip

the redis_vip is always guaranteed (by haproxy) to point to the redis
master


With most services moved out of pacemaker in Newton, I think it's time to
move redis as well? Are there any constraints in moving redis to be
managed by systemd? Looking at how we do it, it should be easily movable
to systemd? Can we consider doing this for Ocata?


I think we could look at using the redis cluster which allows multiple
masters, but I am not sure this can happen in Ocata ... then again, there
shouldn't be any dependency on redis itself in the telemetry roles

if we were to use the cluster mode the only difference would probably be
that the redis_vip will start balancing requests across the nodes


also note that the cluster model is not replicating data across the 
nodes, but rather sharding the data, with the promise of surviving 
failures of some of the nodes, and I'm not sure we want that anyway


https://redis.io/topics/cluster-spec
--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Move redis out of Pacemaker

2016-12-12 Thread Giulio Fidente

On 12/09/2016 04:49 PM, Pradeep Kilambi wrote:

Hey Folks:

I would like to get some thoughts on $Subject. This came up when I was
discussing the standalone roles for telemetry. Currently when we deploy
redis in tripleo, it's a pacemaker managed service. So if we were to
deploy telemetry services on a dedicated node we could. But redis will
have to be on another node? (assuming we don't want to pull in
pacemaker onto telemetry nodes).


currently redis instances are not configured as a redis cluster but use 
the master/slave replication model instead and pacemaker is taking care 
of electing/relocating the redis master as needed


there shouldn't be any dependency on the redis profile for the telemetry 
roles, they should instead just point at the redis_vip


the redis_vip is always guaranteed (by haproxy) to point to the redis master
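
roughly, haproxy achieves this by health-checking the replication role
of every redis instance and keeping only the master in the pool; a
hedged sketch of such a backend, the generated config may differ:

    listen redis
      bind 192.0.2.10:6379
      option tcp-check
      tcp-check send info\ replication\r\n
      tcp-check expect string role:master
      server controller-0 172.16.2.10:6379 check inter 1s
      server controller-1 172.16.2.11:6379 check inter 1s
      server controller-2 172.16.2.12:6379 check inter 1s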


With most services moved out of pacemaker in Newton, I think it's time to
move redis as well? Are there any constraints in moving redis to be
managed by systemd? Looking at how we do it, it should be easily movable
to systemd? Can we consider doing this for Ocata?


I think we could look at using the redis cluster which allows multiple
masters, but I am not sure this can happen in Ocata ... then again, there
shouldn't be any dependency on redis itself in the telemetry roles


if we were to use the cluster mode the only difference would probably be 
that the redis_vip will start balancing requests across the nodes

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-02 Thread Giulio Fidente

On 12/01/2016 11:26 PM, Emilien Macchi wrote:

Team,

Alex Schultz (mwhahaha on IRC) has been active on TripleO for a few
months now.  While he's very active in different areas of TripleO, his
reviews and contributions on puppet-tripleo have been very useful.
Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
think he perfectly understands how puppet-tripleo works. His
involvement in the project and contributions on puppet-tripleo deserve
that we allow him to +2 puppet-tripleo.

Thanks Alex for your involvement and hard work in the project, this is
very appreciated!


+1 !


--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente



Re: [openstack-dev] [tripleo] CI scenarios design - how to add more services

2016-11-23 Thread Giulio Fidente

hi Emilien,

thanks for putting some thought into this. We have a similar problem to 
test RGW which was only added in Newton.


On 11/23/2016 03:02 AM, Emilien Macchi wrote:

== Context

In Newton we added new multinode jobs called "scenarios".
The challenge we tried to solve was "how to test the maximum of
services without overloading the nodes that run tests".

Each scenarios deploys a set of services, which allows us to
horizontally scale the number of scenarios to increase the service
testing coverage.
See the result here:
https://github.com/openstack-infra/tripleo-ci#service-testing-matrix

To implement this model, we took example of Puppet OpenStack CI:
https://github.com/openstack/puppet-openstack-integration#description
We even tried to keep consistent the services/scenarios relations, so
it's consistent and easier to maintain.

Everything was fine until we had to add new services during the Ocata cycle.
Because tripleo-ci repository is not branched, adding Barbican service
in the TripleO environment for scenario002 would break Newton CI jobs.
During my vacations, the team created a new scenario, scenario004,
that deploys Barbican and that is only run for Ocata jobs.
I don't think we should proceed this way, and let me explain why.

== Problem

How to scale the number of services that we test without increasing
the number of scenarios and therefore the complexity of maintaining
them on long-term.


== Solutions

The list is not exhaustive, feel free to add more.

1) Re-use experience from Puppet OpenStack CI and have environments
that are in a branched repository.
In Puppet OpenStack CI, the repository that deploys environments
(puppet-openstack-integration) is branched. So if puppet-barbican is
ready to be tested in Ocata, we'll patch
puppet-openstack-integration/master to start testing it and it won't
break stable jobs.
Like this, we were successfully able to maintain a fair number of
scenarios and keep our coverage increasing over each cycle.

I see 2 sub-options here:

a) Move CI environments and pingtest into
tripleo-heat-templates/environments/ci/(scenarios|pingtest). This repo
is branched and we could add a README to explain these files are used
in CI and we don't guarantee they would work outside TripleO CI tools.
b) Branch tripleo-ci repository. Personally I don't like this solution
because a lot of patches in this repo are not related to OpenStack
versions, which means we would need to backport most of the things
from master.


I'd +1 this idea. It sounds like we could make the scenarios generic 
enough to be usable also outside CI? Maybe they can serve as samples?

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Proposing Julie Pichon for tripleo core

2016-11-22 Thread Giulio Fidente

On 11/22/2016 06:01 PM, Dougal Matthews wrote:

Hi all,

I would like to propose we add Julie (jpich) to the TripleO core team
for python-tripleoclient and tripleo-common. This nomination is based
partially on review stats[1] and also my experience with her reviews and
contributions.

Julie has consistently provided thoughtful and detailed reviews since
the start of the Newton cycle. She has made a number of contributions
which improve the CLI and has been extremely helpful with other tasks
that don't often get enough attention (backports, bug triaging/reporting
and improving our processes[2]).

I think she will be a valuable addition to the review team


Thanks Dougal and Julie, +1
--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] proposing Michele Baldessari part of core team

2016-11-06 Thread Giulio Fidente

On 11/04/2016 06:40 PM, Emilien Macchi wrote:

Michele Baldessari (bandini on IRC) has consistently demonstrated high
levels of contribution in TripleO projects, specifically in the High
Availability area where he's a guru for us (I still don't understand
how pacemaker works, but hopefully he does).


+1 ;_)
--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Giulio Fidente

On 10/12/2016 02:29 PM, Thiago da Silva wrote:



On 10/12/2016 07:10 AM, Giulio Fidente wrote:

hi,

we introduced support for the deployment of Ceph in the liberty
release so that it could optionally be used as backend for one or more
of Cinder, Glance, Nova and more recently Gnocchi.

We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on
dedicated ceph-storage nodes so a deployment of OpenStack with Ceph
would need at least 1 more additional node to host a Ceph OSD.

In our HA scenario the storage backends are configured as follows:

Glance -> Swift
Nova (ephemeral) -> Local
Cinder (persistent) -> LVM (on controllers)
Gnocchi -> Swift

The downside of the above configuration is that Cinder volumes can not
be replicated across the controller nodes and become unavailable if a
controller fails, while production environments generally expect
persistent storage to be highly available. Cinder volumes instead
could even get lost completely in case of a permanent failure of a
controller.

With the Newton release and the composable roles we can now deploy
Ceph OSDs on the compute nodes, removing the requirement we had for an
additional node to host a Ceph OSD.

I would like to ask for some feedback on the possibility of deploying
Ceph by default in the HA scenario and use it as backend for Cinder.

Also using Swift as backend for Glance and Gnocchi is enough to cover
the availability issue for the data, but it also means we're storing
that data on the controller nodes which might or might not be wanted;
I don't see a strong reason for defaulting them to Ceph, but it might
make more sense when Ceph is available; feedback about this would be
appreciated as well.

I think it would be important to take into account the recently created
guiding principles [0]:

"While the software that OpenStack produces has well defined and
documented APIs, the primary output of OpenStack is software, not API
definitions. We expect people who say they run “OpenStack” to run the
software produced by and in the community, rather than alternative
implementations of the API."

In the case of Cinder, I think the situation is a bit muddy as LVM is
not openstack software, and my limited understanding is that LVM is used
as a reference implementation, but in the case of Swift, I think RGW
would be considered an 'alternative implementation of the API'.

Thiago


hi Thiago,

sorry if it wasn't clear in my original message but I did not suggest 
replacing Swift with Ceph RGW.


Swift would continue to be deployed by default, not RGW.

The feedback I'm asking for is about storing (or not) the Cinder volumes 
in Ceph by default for the HA scenario, and about whether to store the 
Glance images and Gnocchi metrics in Ceph or rather keep that data in Swift.

--
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Giulio Fidente

hi,

we introduced support for the deployment of Ceph in the liberty release 
so that it could optionally be used as backend for one or more of 
Cinder, Glance, Nova and more recently Gnocchi.


We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on 
dedicated ceph-storage nodes, so a deployment of OpenStack with Ceph 
would need at least one additional node to host a Ceph OSD.


In our HA scenario the storage backends are configured as follows:

Glance -> Swift
Nova (ephemeral) -> Local
Cinder (persistent) -> LVM (on controllers)
Gnocchi -> Swift

The downside of the above configuration is that Cinder volumes can not 
be replicated across the controller nodes and become unavailable if a 
controller fails, while production environments generally expect 
persistent storage to be highly available. Cinder volumes instead could 
even get lost completely in case of a permanent failure of a controller.


With the Newton release and the composable roles we can now deploy Ceph 
OSDs on the compute nodes, removing the requirement we had for an 
additional node to host a Ceph OSD.
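
For illustration, with composable roles this is roughly a matter of
adding the OSD service to the Compute role in roles_data.yaml (the
service list below is trimmed to the relevant entries):

    - name: Compute
      ServicesDefault:
        - OS::TripleO::Services::NovaCompute
        - OS::TripleO::Services::ComputeNeutronOvsAgent
        - OS::TripleO::Services::CephOSD
        # ... plus the rest of the usual compute services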


I would like to ask for some feedback on the possibility of deploying 
Ceph by default in the HA scenario and use it as backend for Cinder.


Also using Swift as backend for Glance and Gnocchi is enough to cover 
the availability issue for the data, but it also means we're storing 
that data on the controller nodes which might or might not be wanted; I 
don't see a strong reason for defaulting them to Ceph, but it might make 
more sense when Ceph is available; feedback about this would be 
appreciated as well.


Finally, a shared backend (Ceph) for Nova would allow live migrations but 
would probably decrease performance for the guests in general, so I'd be 
against defaulting Nova to Ceph. Feedback?

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [tripleo] [puppet] Preparing TripleO agenda for Barcelona - action needed

2016-09-21 Thread Giulio Fidente

On 09/19/2016 10:49 PM, Emilien Macchi wrote:

(adding puppet tag for cross project session).

Let's continue to prepare TripleO sessions.

https://etherpad.openstack.org/p/ocata-tripleo

For reminder, we have 2 fishbowls and 4 working rooms.
I looked at the topic proposals and I started to organize some sessions.

Some actions from you are required:
- review the session proposal
- if you want to drive a session, please put your name in "Chair".
- for each session we need to choose if we want it to be a work room
or a fishbowl session.
- 4 topics are still there, please propose a session (concatenate them
if possible)
- if you missed this etherpad until now, feel free to propose a
session with your topic (ex: TripleO UI - roadmap, etc).

Last but not least, I would propose a cross project session with
Puppet OpenStack group (using a slot from their schedule) so we might
have a 7th session.


the cross project session with the puppet group is a nice idea indeed, 
thanks Emilien


in that context it would be nice to gather some ideas/feedback on the 
status of openstack integration scenarios vs tripleo scenarios and see 
if we can optimize resources and/or coverage

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente



Re: [openstack-dev] [TripleO] *ExtraConfig, backwards compatibility & deprecation

2016-09-19 Thread Giulio Fidente

On 09/19/2016 01:25 PM, Steven Hardy wrote:

On Wed, Sep 14, 2016 at 06:32:07PM +0200, Giulio Fidente wrote:

On 09/14/2016 05:59 PM, Giulio Fidente wrote:

On 09/14/2016 02:31 PM, Steven Hardy wrote:

Related to this is the future of all of the per-role customization
interfaces.  I'm thinking these don't really make sense to maintain
long-term now we have the new composable services architecture, and it
would be better if we can deprecate them and move folks towards the
composable services templates instead?


my experience is that the ExtraConfig interfaces have been useful to
provide arbitrary hiera and class includes

I wonder if we could ship by default some roles parsing those parameters?


thinking more about it, the *ExtraConfig interfaces also offer a simple
mechanism to *override* any hiera setting we push via the templates ...
which isn't easy to achieve with roles

a simple short-term solution could be to merge ExtraConfig in the $role
mapped_data, thoughts?


Thanks for the feedback, so yeah I agree there are reasons to keep the
ExtraConfig *parameters* around, or some similar interface.

I probably should have clarified this in my original post, but there are
two types of *ExtraConfig interfaces, the parameters you refer to, which
simply override some hieradata (we probably want to keep this, but it still
means we have ExtraConfig tied to the role (not the service), but
presumably an operator will know what services are deployed on what role).

The second (and more problematic from a containers point of view) is the
ExtraConfig *resources*, where you can pass an arbitrary heat template,
which typically is used to run stuff on the host (which will be impossible,
or at least not useful on an atomic host in a fully containerized
deployment).

I think your concerns are mostly around the ExtraConfig *parameters* thus,
provided we maintain some way to do those hiera overrides, e.g the
documented interfaces for Ceph ExtraConfig can still be used?


hi Steve

ack, my concern is about the way to do hiera overrides and the way to 
push additional hiera data for a service


maybe the latter can be implemented with a custom role but that seems 
overkill when the need could be just to push some additional hiera 
data for a class; also a custom role would not work nicely to override 
hiera settings
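
For the overrides, the existing role-scoped parameters already do the 
job; a minimal sketch assuming the ControllerExtraConfig parameter, 
with a purely illustrative hiera key:

  parameter_defaults:
    ControllerExtraConfig:
      # any hiera key can be added or overridden this way, but only
      # per-role, not per-service
      ceph::profile::params::osd_pool_default_size: 2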


as an alternative, we could add a $serviceExtraConfig parameter to every 
service and merge it with the heat output; this would work nicely with 
containers as well but adds some boilerplate code


not sure if there are other ideas?
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO Core nominations

2016-09-16 Thread Giulio Fidente

On 09/15/2016 11:20 AM, Steven Hardy wrote:

Hi all,

As we work to finish the last remaining tasks for Newton, it's a good time
to look back over the cycle, and recognize the excellent work done by
several new contributors.

We've seen a different contributor pattern develop recently, where many
folks are subsystem experts and mostly focus on a particular project or
area of functionality.  I think this is a good thing, and it's hopefully
going to allow our community to scale more effectively over time (and it
fits pretty nicely with our new composable/modular architecture).

We do still need folks who can review with the entire TripleO architecture
in mind, but I'm very confident folks will start out as subsystem experts
and over time broaden their area of experience to encompass more of
the TripleO projects (we're already starting to see this IMO).

We've had some discussion in the past[1] about strictly defining subteams,
vs just adding folks to tripleo-core and expecting good judgement to be
used (e.g only approve/+2 stuff you're familiar with - and note that it's
totally fine for a core reviewer to continue to +1 things if the patch
looks OK but is outside their area of experience).

So, I'm in favor of continuing that pattern and just welcoming some of our
subsystem expert friends to tripleo-core, let me know if folks feel
strongly otherwise :)

The nominations, are based partly on the stats[2] and partly on my own
experience looking at reviews, patches and IRC discussion with these folks
- I've included details of the subsystems I expect these folks to focus
their +2A power on (at least initially):

1. Brent Eagles

Brent has been doing some excellent work mostly related to Neutron this
cycle - his reviews have been increasingly detailed, and show a solid
understanding of our composable services architecture.  He's also provided
a lot of valuable feedback on specs such as dpdk and sr-iov.  I propose
Brent continues this excellent Neutron focussed work, while also expanding
his review focus such as the good feedback he's been providing on new
Mistral actions in tripleo-common for custom-roles.

2. Pradeep Kilambi

Pradeep has done a large amount of pretty complex work around Ceilometer
and Aodh over the last two cycles - he's dealt with some pretty tough
challenges around upgrades and has consistently provided good review
feedback and solid analysis via discussion on IRC.  I propose Prad
continues this excellent Ceilometer/Aodh focussed work, while also
expanding review focus aiming to cover more of t-h-t and other repos over
time.

3. Carlos Camacho

Carlos has been mostly focussed on composability, and has done a great job
of working through the initial architecture implementation, including
writing some very detailed initial docs[3] to help folks make the transition
to the new architecture.  I'd suggest that Carlos looks to maintain this
focus on composable services, while also building depth of reviews in other
repos.

4. Ryan Brady

Ryan has been one of the main contributors implementing the new Mistral
based API in tripleo-common.  His reviews, patches and IRC discussion have
consistently demonstrated that he's an expert on the mistral
actions/workflows and I think it makes sense for him to help with review
velocity in this area, and also look to help with those subsystems
interacting with the API such as tripleoclient.

5. Dan Sneddon

For many cycles, Dan has been driving direction around our network
architecture, and he's been consistently doing a relatively small number of
very high-quality and insightful reviews on both os-net-config and the
network templates for tripleo-heat-templates.  I'd suggest Dan continues
this focus, and he's indicated he may have more bandwidth to help with
reviews around networking in future.

Please can I get feedback from existing core reviewers - you're free to +1
these nominations (or abstain), but any -1 will veto the process.  I'll
wait one week, and if we have consensus add the above folks to
tripleo-core.


+1

especially in this project, diversification is a strategy!


 - We need folks to regularly show up, establishing a long-term pattern of
   doing useful reviews, but core status isn't about raw number of reviews,
   it's about consistent downvotes and detailed, well considered and
   insightful feedback that helps increase quality and catch issues early.

 - Try to spend some time reviewing stuff outside your normal area of
   expertise, to build understanding of the broader TripleO system - as
   discussed above subsystem experts are a good thing, but we also need
   to see some appreciation of the broader TripleO architecture &
   interfaces (all the folks above have demonstrated solid knowledge of one
   or more of our primary interfaces, e.g the Heat or the Mistral layer)


thanks for sharing these ^^
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [TripleO] *ExtraConfig, backwards compatibility & deprecation

2016-09-14 Thread Giulio Fidente

On 09/14/2016 05:59 PM, Giulio Fidente wrote:

On 09/14/2016 02:31 PM, Steven Hardy wrote:

Related to this is the future of all of the per-role customization
interfaces.  I'm thinking these don't really make sense to maintain
long-term now we have the new composable services architecture, and it
would be better if we can deprecate them and move folks towards the
composable services templates instead?


my experience is that the ExtraConfig interfaces have been useful to
provide arbitrary hiera and class includes

I wonder if we could ship by default some roles parsing those parameters?


thinking more about it, the *ExtraConfig interfaces also offer a simple 
mechanism to *override* any hiera setting we push via the templates ... 
which isn't easy to achieve with roles


a simple short-term solution could be to merge ExtraConfig in the $role 
mapped_data, thoughts?


while to move to a more container-aware setup we could probably have 
some $serviceExtraConfig param mapped into each service?

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] *ExtraConfig, backwards compatibility & deprecation

2016-09-14 Thread Giulio Fidente

On 09/14/2016 02:31 PM, Steven Hardy wrote:

Related to this is the future of all of the per-role customization
interfaces.  I'm thinking these don't really make sense to maintain
long-term now we have the new composable services architecture, and it
would be better if we can deprecate them and move folks towards the
composable services templates instead?


my experience is that the ExtraConfig interfaces have been useful to 
provide arbitrary hiera and class includes


I wonder if we could ship by default some roles parsing those parameters?
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-09-14 Thread Giulio Fidente

On 08/30/2016 06:40 PM, Giulio Fidente wrote:

Together with Keith we're working on some patches to integrate (via
puppet-ceph) the deployment of Ceph RGW in TripleO as a composable
service which can optionally replace SwiftProxy


Changes are tracked via blueprint at:

https://blueprints.launchpad.net/tripleo/+spec/ceph-rgw-integration

They should be tagged with the appropriate topic branch, so can be found
with:

https://review.openstack.org/#/q/topic:bp/ceph-rgw-integration,n,z


There is also a [NO MERGE] change which we use to test the above in
upstream CI:

https://review.openstack.org/#/c/357182/


We'd like to formally request an FFE for this feature.

Thanks for consideration, feedback, help and reviews :)


a quick update,

the last submission needed for this feature has been merged today, 
thanks to all who helped


from the RC release it will be possible to use ceph/rgw as a swift 
drop-in replacement for those deploying ceph

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-09-01 Thread Giulio Fidente

On 09/01/2016 02:11 AM, Giulio Fidente wrote:

On 08/30/2016 10:50 PM, Steven Hardy wrote:

On Tue, Aug 30, 2016 at 03:25:30PM -0400, Emilien Macchi wrote:

Here's my 2 cents:

The patch in puppet-ceph has been here for a long time now and it still
doesn't work (recent update of today, puppet-ceph is not idempotent
when deploying the RGW service. It must be fixed in order to get a
successful deployment).
Puppet CI is still not gating on Ceph RGW (scenario004 still in
progress, and progress on making it work has been slow recently).


This does sound concerning, Giulio, can you provide any feedback on work
in-progress or planned to improve this?


we invested quite some time today testing and updating the patches as
needed

I've got a successful deployment where, by just adding the Member role
to my user, I could use the regular swiftclient to operate against RadosGW


fwiw, we just amended one of the patches to remove the need for the 
users to have the 'Member' role set


which means that with its latest version it is possible to deploy 
radosgw as a drop-in replacement for swift and use it with swiftclient 
without any additional step, by using the ceph-radosgw.yaml environment 
at deployment time
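
For reference, the environment essentially remaps the Swift services to 
the new RGW service in the resource registry; a from-memory sketch of 
roughly what ceph-radosgw.yaml contains (exact paths may differ):

  resource_registry:
    # enable the Ceph RGW composable service ...
    OS::TripleO::Services::CephRgw: ../puppet/services/ceph-rgw.yaml
    # ... and disable the Swift services it replaces
    OS::TripleO::Services::SwiftProxy: OS::Heat::None
    OS::TripleO::Services::SwiftStorage: OS::Heat::None
    OS::TripleO::Services::SwiftRingBuilder: OS::Heat::None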


unfortunately CI is having some unrelated issues so it's hard to get all 
patches to pass but I would encourage people to review and test this as 
I think it is in good shape

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-31 Thread Giulio Fidente

On 08/30/2016 10:50 PM, Steven Hardy wrote:

On Tue, Aug 30, 2016 at 03:25:30PM -0400, Emilien Macchi wrote:

Here's my 2 cents:

The patch in puppet-ceph has been here for a long time now and it still
doesn't work (recent update of today, puppet-ceph is not idempotent
when deploying the RGW service. It must be fixed in order to get a
successful deployment).
Puppet CI is still not gating on Ceph RGW (scenario004 still in
progress, and progress on making it work has been slow recently).


This does sound concerning, Giulio, can you provide any feedback on work
in-progress or planned to improve this?


we invested quite some time today testing and updating the patches as needed

I've got a successful deployment where, by just adding the Member role 
to my user, I could use the regular swiftclient to operate against RadosGW


This is by pulling in:

https://review.openstack.org/#/c/347956/
https://review.openstack.org/#/c/363164/

https://review.openstack.org/#/c/334081/ (and its dependencies)

https://review.openstack.org/#/c/289027/

Emilien, can you re-evaluate the status of the puppet-ceph and 
puppet-tripleo submissions?



My opinion says we should not push to have it in Newton. Work on it
was not pushed much during the cycle; I see zero reason to push
for it now that the cycle is ending.


agreed, this might not have been pushed much during the cycle as other 
priorities needed attention too, but it seems to be an interesting 
feature for those deploying Ceph and it is in decent state; also, as per 
Steven's comment below, it'll be optional in TripleO, we'll continue to 
deploy Swift by default so it's not going to have a great impact on 
other existing work



I agree this is being proposed too late, but given it will be disabled by
default that does mitigate the risk somewhat.

Giulio - can you confirm this will just be a new service template and
puppet profile, and that it's not likely to require rework outside of the
composable services interface?  If so I'm inclined to say OK even if we
know the puppet module needs work.


no rework of the composable services interface will be needed; the tht 
submission, in addition to adding the new service template, adds an 
output to the endpoint map for the new service, and the puppet submission 
adds a new role


https://review.openstack.org/#/c/289027/

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-30 Thread Giulio Fidente
Together with Keith we're working on some patches to integrate (via 
puppet-ceph) the deployment of Ceph RGW in TripleO as a composable 
service which can optionally replace SwiftProxy



Changes are tracked via blueprint at:

https://blueprints.launchpad.net/tripleo/+spec/ceph-rgw-integration

They should be tagged with the appropriate topic branch, so they can be found 
with:


https://review.openstack.org/#/q/topic:bp/ceph-rgw-integration,n,z


There is also a [NO MERGE] change which we use to test the above in 
upstream CI:


https://review.openstack.org/#/c/357182/


We'd like to formally request an FFE for this feature.

Thanks for consideration, feedback, help and reviews :)
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-24 Thread Giulio Fidente

On 08/25/2016 01:51 AM, Steve Baker wrote:

Heat now has efficient polling of nested events, but it doesn't look
like tripleoclient is using that.

It's not clear if the current polling is contributing to the above issue
but I'd definitely recommend switching over.


was simple enough so here is a change which switches to it

https://review.openstack.org/#/c/360141/


This is the recommended approach:
http://git.openstack.org/cgit/openstack/python-heatclient/tree/heatclient/osc/v1/stack.py#n180


This is what tripleoclient does currently:

http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/utils.py#n272


The get_events call is low-overhead, but the get_stack call isn't, and
calling it in a loop won't be helping.

poll_for_events currently doesn't have an argument for specifying the
nested_depth for what events to log. I can add that to heatclient unless
you can live with only logging the events for the top level resources.


right, if you do I'll update the submission to use it ... also looks 
like we might need a mechanism to suppress events logging; shall I pass 
some fake fd to out= or can we change it so that None disables logging, 
maybe defaulting it to stdout but skipping output when it is None?

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-19 Thread Giulio Fidente

On 08/05/2016 01:21 PM, Steven Hardy wrote:

On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:

On 08/04/2016 11:48 PM, Dan Prince wrote:

Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO so initially I
called it composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit,
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.


I don't want to sound rude, but please no. The fact that you have a hammer
does not mean everything around is nails :( What problem are you trying to
solve by doing it?


I think Dan explains it pretty well in his video, and your comment
indicates a fundamental misunderstanding around the entire TripleO vision,
which is about symmetry and reuse between deployment tooling and the
deployed cloud.

The problems this would solve are several:

1. Remove divergence between undercloud and overcloud puppet implementation
(instead of having an undercloud specific manifest, we reuse the *exact*
same stuff we use for overcloud deployments)


this; to reuse the service templates and puppet classes as they are 
sounds good



2. Better modularity, far easier to enable/disable services

3. Get container integration "for free" when we land it in the overcloud

4. Any introspection and debugging workflow becomes identical between the
undercloud and overcloud

5. We remove dependencies on a bunch of legacy scripts which run outside of
puppet

6. Whenever someone lands support for a new service in the overcloud, we
automatically get undercloud support for it, completely for free.

7. Potential for much easier implementation of a multi-node undercloud


Undercloud installation is already sometimes fragile, but it's probably the
least fragile part right now (at least from my experience). And at the very
least it's pretty obviously debuggable in most cases. THT is hard to
understand and often impossible to debug. I'd prefer we move away from THT
completely rather than trying to fix it in one more place where heat does
not fit.


I *do* see your point about the undercloud installation being the least 
problematic part, but part of that is because we didn't need to plug into 
the undercloud the same level of flexibility we demand for the overcloud


Now, maybe we also shouldn't make things complicated where they don't 
need to be (see points 2 and 3) but in addition to reusing tht/puppet code, 
I think it would be interesting to have an HA undercloud (point 7)


fwiw, I'd like to try this out myself before the summit to get a better 
picture.

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Giulio Fidente

On 08/19/2016 12:12 PM, Erno Kuvaja wrote:

On Fri, Aug 19, 2016 at 10:53 AM, Hugh Brock <hbr...@redhat.com> wrote:

On Fri, Aug 19, 2016 at 11:41 AM, Derek Higgins <der...@redhat.com> wrote:

On 19 August 2016 at 00:07, Sagi Shnaidman <sshna...@redhat.com> wrote:

Hi,

we have a problem again with not enough memory in HA jobs, all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/


Have we any idea why we need more memory all of a sudden? For months
the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
to 5.5G, now we need it bumped to 6G.

If a new service has been added that is needed on the overcloud then
bumping to 6G is expected and probably the correct answer but I'd like
to see us avoiding blindly increasing the resources each time we see
out of memory errors without investigating if there was a regression
causing something to start hogging memory.

Sorry if it seems like I'm being picky about this (I seem to resist
these bumps every time they come up) but there are two good reasons to
avoid this if possible
o at peak we are currently configured to run 75 simultaneous jobs
(although we probably don't reach that at the moment), and each HA job
has 5 baremetal nodes, so bumping from 5G to 6G increases the amount
of RAM CI can use at peak by 375G
o When we bump the RAM usage of baremetal nodes from 5G to 6G what
we're actually doing is increasing the minimum requirements for
developers from 28G (or whatever the number is now) to 32G

So before we bump the number can we just check first if it's justified,
as I've watched this number increase from 2G since we started running
tripleo-ci

thanks,
Derek.

[1] - https://review.openstack.org/#/c/353655/


Wondering if it makes sense to enable any but the most basic overcloud
services in TripleO CI. The idea of using some type of on-demand job
for services other than the ones needed for the ping test has been
proposed elsewhere -- maybe this should be our default mode for
TripleO CI. Thoughts?

--Hugh


The problem with periodic jobs is that the results are a bit hidden and 1
to 2 people care about them when they happen to have time. OTOH if I
understand correctly we don't test the services even now, just that
their deployment goes through without failures.


we do some testing of the overcloud in the gate jobs, we actually deploy 
a heat stack in the overcloud [1], creating a volume-backed nova guest 
(backed by Ceph in the HA job), set up some routing and ping it (in network 
isolation!)


1. 
https://github.com/openstack-infra/tripleo-ci/blob/master/templates/tenantvm_floatingip.yaml

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Memory shortage in HA jobs, please increase it

2016-08-19 Thread Giulio Fidente

On 08/19/2016 11:41 AM, Derek Higgins wrote:

On 19 August 2016 at 00:07, Sagi Shnaidman <sshna...@redhat.com> wrote:

Hi,

we have a problem again with not enough memory in HA jobs, all of them
constantly fail in CI: http://status-tripleoci.rhcloud.com/


Have we any idea why we need more memory all of a sudden? For months
the overcloud nodes have had 5G of RAM, then last week[1] we bumped it
to 5.5G, now we need it bumped to 6G.

If a new service has been added that is needed on the overcloud then
bumping to 6G is expected and probably the correct answer but I'd like
to see us avoiding blindly increasing the resources each time we see
out of memory errors without investigating if there was a regression
causing something to start hogging memory.


fwiw, one recent addition was the cinder-backup service

though this service wasn't enabled by default in mitaka so with [1] we 
can disable the service by default for newton as well


1. https://review.openstack.org/#/c/357729

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-16 Thread Giulio Fidente

On 08/15/2016 10:54 AM, Dmitry Tantsur wrote:

Hi everyone, happy Monday :)

I'd like to start the discussion about CI-testing the optional
composable services in the CI (I'm primarily interested in Ironic, but I
know there are a lot more).


thanks for bringing this up, with "pluggability" comes responsibility it 
seems


there is also a conflicting (yet valid) interest in keeping the number 
of services deployed in the overcloud to a minimum to avoid even longer 
CI run times



So, are there any plans to start covering optional services? Maybe at
least a non-HA job with all environment files included? It would be cool
to also somehow provide additional checks though. Or, in case of ironic,
to disable the regular nova compute, so that the ping test runs on an
ironic instance.


it isn't really a case of HA vs non-HA; with the newer HA architecture 
we're only managing via pcmk those openstack services which need to be 
(including recent additions like manila-share or cinder-backup) and 
these should be tested in the HA scenario, which IMHO at this point could 
become the default


it looks to me that a scenario in the experimental queue deploying a 
"full" overcloud could work?


there is a similar requirement for testing 'competing' services, like 
swift and ceph/rgw which we're about to merge ... but it applies to other 
things too, like the neutron plugins

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-04 Thread Giulio Fidente

On 08/04/2016 01:26 PM, Christian Schwede wrote:

On 04.08.16 10:27, Giulio Fidente wrote:

On 08/02/2016 09:36 PM, Christian Schwede wrote:

Hello everyone,


thanks Christian,


I'd like to improve the Swift deployments done by TripleO. There are a
few problems today when deployed with the current defaults:

1. Adding new nodes (or replacing existing nodes) is not possible,
because the rings are built locally on each host and a new node doesn't
know about the "history" of the rings. Therefore rings might become
different on the nodes, and that results in an unusable state eventually.


one of the ideas for this was to use a tempurl in the undercloud swift
to upload the rings built by a single overcloud node, not by the
undercloud

so I proposed a new heat resource which would permit us to create a
swift tempurl in the undercloud during the deployment

https://review.openstack.org/#/c/350707/

if we build the rings on the undercloud we can ignore this and use a
mistral action instead, as pointed out by Steven

the good thing about building rings in the overcloud is that it doesn't
force us to have a static node mapping for each and every deployment but
it makes it hard to cope with heterogeneous environments


That's true. However - we still need to collect the device data from all
the nodes from the undercloud, push it to at least one overcloud node,
build/update the rings there, push it to the undercloud Swift and use
that on all overcloud nodes. Or not?


sure, let's build on the undercloud, when automated with mistral it 
shouldn't make a big difference for the user



I was also thinking more about the static node mapping and how to avoid
this. Could we add a host alias using the node UUIDs? That would never
change, it's available from the introspection data and therefore could
be used in the rings.

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_specific_hieradata.html#collecting-the-node-uuid


right, this is the mechanism I wanted to use to provide per-node disk 
maps, it's how it works for ceph disks as well
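
To make it concrete, a purely hypothetical sketch of such a per-node 
map, keyed on the machine (system) uuid; the uuid, the parameter name 
and the disk names below are all illustrative:

  parameter_defaults:
    NodeDataLookup: |
      {"32E87B4C-C4A7-41BE-865B-191684A6883B":
        {"swift::storage::disks": {"sdb": {}, "sdc": {}}}}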



2. The rings are only using a single device, and it seems that this is
just a directory and not a mountpoint with a real device. Therefore data
is stored on the root device - even if you have 100TB disk space in the
background. If not fixed manually your root device will run out of space
eventually.


for the disks instead I am thinking to add a create_resources wrapper in
puppet-swift:

https://review.openstack.org/#/c/350790
https://review.openstack.org/#/c/350840/

so that we can pass via hieradata per-node swift::storage::disks maps

we have a mechanism to push per-node hieradata based on the system uuid,
we could extend the tool to capture the nodes (system) uuid and generate
per-node maps


Awesome, thanks Giulio!

I will test that today. So the tool could generate the mapping
automatically, and we don't need to filter puppet facts on the nodes
themselves. Nice!


and we could re-use the same tool to generate the ceph::osds disk maps 
as well :)


--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-04 Thread Giulio Fidente

On 08/02/2016 09:36 PM, Christian Schwede wrote:

Hello everyone,


thanks Christian,


I'd like to improve the Swift deployments done by TripleO. There are a
few problems today when deployed with the current defaults:

1. Adding new nodes (or replacing existing nodes) is not possible,
because the rings are built locally on each host and a new node doesn't
know about the "history" of the rings. Therefore rings might become
different on the nodes, and that results in an unusable state eventually.


one of the ideas for this was to use a tempurl in the undercloud swift 
to upload the rings built by a single overcloud node, not by the 
undercloud


so I proposed a new heat resource which would permit us to create a 
swift tempurl in the undercloud during the deployment


https://review.openstack.org/#/c/350707/

if we build the rings on the undercloud we can ignore this and use a 
mistral action instead, as pointed out by Steven


the good thing about building rings in the overcloud is that it doesn't 
force us to have a static node mapping for each and every deployment but 
it makes it hard to cope with heterogeneous environments



2. The rings are only using a single device, and it seems that this is
just a directory and not a mountpoint with a real device. Therefore data
is stored on the root device - even if you have 100TB disk space in the
background. If not fixed manually your root device will run out of space
eventually.


for the disks instead I am thinking to add a create_resources wrapper in 
puppet-swift:


https://review.openstack.org/#/c/350790
https://review.openstack.org/#/c/350840/

so that we can pass via hieradata per-node swift::storage::disks maps

we have a mechanism to push per-node hieradata based on the system uuid, 
we could extend the tool to capture the nodes (system) uuid and generate 
per-node maps


then, with the above puppet changes and having the per-node map and the 
rings download url, we could feed them to the templates, replace the 
ring-building implementation with an environment, and deploy without 
further customizations


what do you think?
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] moving tripleo-ci/test-environments into THT

2016-07-21 Thread Giulio Fidente

On 07/21/2016 04:32 AM, Emilien Macchi wrote:

Hi,

We're currently using tripleo-ci to store our test-environments files
(ie: multinode.yaml, etc).
To make it compatible with our different versions of TripleO, we have 2 options:

* Duplicate templates and use bash conditionals in tripleo-ci scripts
to select which one we want at each release.
* Move them to THT (THT is branched & released).

I would vote for option #2 for 3 reasons:
* we don't have to do complex conditionals in tripleo-ci
* we can easily consume it outside tripleo-ci (oooq one day?)
* we can easily make them evolve, when new composable services are
created for example.


plus it would be possible to update the template parameters *and* the 
test-environment files as needed in a single submission


finally, we've hit cases where a particular value change was not tested 
because the test-environment file was overriding the setting anyway and 
we got tricked into thinking the change was working while it wasn't


so +1 from me on option number 2
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rdo-list] [tripleo] How to setup hyper-converged nodes (compute+ceph)?

2016-05-18 Thread Giulio Fidente

On 05/18/2016 04:13 AM, Gerard Braad wrote:

Hi all,


Hereby updating the status of this message:

On Mon, May 16, 2016 at 9:55 AM, Gerard Braad <m...@gbraad.nl> wrote:

we would like
to deploy Compute nodes with Ceph installed on them. This will
probably be a change to the tripleo-heat-templates and the compute,
and cephstorage resources


I noticed a review enabling the deployment of Ceph OSDs on the compute
node: https://review.openstack.org/#/c/273754/5
At the moment, it is marked as Workflow -1 due to possible
implementation of this feature by composable roles.


hi, that change is most probably working fine as-is but we are in the 
process of migrating service definitions into more isolated roles and 
that would supersede the existing submission


there is some WIP (changes Ie92b25a9c68a76b6d92abedef31e8039b16d9863 and 
I1921115cb6218c7554348636c404245c79937673) to migrate ceph 
mon/client/osd services into isolated roles but it's far from landing at 
this point because we're missing some additional machinery in the 
templates to distribute services across more roles


one way or another this should be possible in Newton though so I suggest 
to keep tracking all three submissions; we could fall back on the initial 
submission should the ceph migration prove problematic
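
To give an idea of where the composable roles work is heading, a rough 
sketch of what a hyper-converged role definition could look like; the 
exact format and service names depend on machinery still landing, so 
everything below is illustrative:

  - name: Compute
    ServicesDefault:
      # the regular compute service ...
      - OS::TripleO::Services::NovaCompute
      # ... plus the Ceph OSD service co-located on the same nodes
      - OS::TripleO::Services::CephOSD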

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] sharing bits between nodes during deployment

2016-05-17 Thread Giulio Fidente

On 05/12/2016 08:20 PM, Emilien Macchi wrote:

Hi,

During the recent weeks, we've noticed that some features would have a
common challenge to solve:
How to share information or files between nodes, during a multi-node
deployment.


Some additional examples are the haproxy backends, which should be 
dynamically built from the nodes hosting a particular service, or the 
list of rabbitmq nodes, which should be distributed to all nodes hosting 
a service that depends on rabbitmq.


While not the cleanest approach, I was thinking about plugging an 
'allNodesConfig' output into the service templates. I'll try to put up a 
WIP unless people have better ideas.
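
Roughly, the idea is that each service template would expose one more 
output to be aggregated into the all-nodes configuration; a very rough 
sketch, where the output and parameter names are hypothetical:

  outputs:
    all_nodes_config:
      description: data this service wants distributed to all nodes
      value:
        haproxy_backend_ips: {get_param: ServiceNodeIps}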

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Change 247669: Ceph Puppet: implement per-pool parameters:

2016-05-03 Thread Giulio Fidente

On 04/22/2016 09:51 AM, Shinobu Kinjo wrote:

Hi TripleO Team,

If you could take care of ${subject}, it would be nice.

[1] https://review.openstack.org/#/c/247669


ack, I liked revision 6 more than 7 but it might not work as 
interpolation always returns a string and we need a dict there


I'll post an update as soon as I get it to work
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose EmilienM for core

2016-03-22 Thread Giulio Fidente

On 03/20/2016 07:22 PM, Dan Prince wrote:

I'd like to propose that we add Emilien Macchi to the TripleO core
review team. Emilien has been getting more involved with TripleO during
this last release. In addition to help with various Puppet things he
also has experience in building OpenStack installation tooling,
upgrades, and would bring a valuable perspective to the core team. He has
also added several new features around monitoring into instack-
undercloud.

Emilien is currently acting as the Puppet PTL. Adding him to the
TripleO core review team could help us move faster towards some of the
upcoming features like composable services, etc.

If you agree please +1. If there is no negative feedback I'll add him
next Monday.


+1 !


--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose ejuaso for core

2016-03-19 Thread Giulio Fidente

On 03/14/2016 03:38 PM, Dan Prince wrote:

http://russellbryant.net/openstack-stats/tripleo-reviewers-180.txt

Our top reviewer over the last half a year is ejuaso (goes by Ozz for
Osorio or jaosorior on IRC). His reviews seem consistent, he
consistently attends the meetings and he chimes in on lots of things.
I'd like to propose we add him to our core team (probably long overdue
now too).


+2
--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-04 Thread Giulio Fidente

How many times I've heard "I've tested this patch locally, and it does
not work so -1".

The only test we do in current CI is a ping to an instance. Seriously?
Most OpenStack CIs (Fuel included) run Tempest, for testing APIs and
real scenarios. And we run a ping.
That's similar to 1/ but I wanted to raise it too.


I'd say a consequence of 1) as well


If we don't change our way of working on TripleO, people will be more
frustrated and reduce contributions at some point.
I hope from here we can have a open and constructive discussion to try
to improve the TripleO project.


me too, so let me add my idea as the 6th point

5/ Documentation isn't great

We did some good things, we have a repo with usable docs and a website to 
point people to, but honestly the docs are a bit confusing and even lack 
documentation about quite a few features for end users.


Maybe we can start some minor restructuring of the docs, splitting them 
into a 'quickstart' guide and a 'feature-complete' guide, and ask people 
to submit, together with a feature, the matching documentation in the 
'feature-complete' guide?

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO client answers file.

2015-11-26 Thread Giulio Fidente

On 11/26/2015 02:34 PM, Steven Hardy wrote:

On Thu, Nov 26, 2015 at 01:37:16PM +0100, Lennart Regebro wrote:

We are proposing to have an "answers file" for the tripleoclient so
that you don't have to have a long line of

openstack overcloud deploy --templates /home/stack/mytemplates
--environment superduper.yaml --environment loadsofstuff.yaml
--environment custom.yaml

But instead can just do

openstack overcloud deploy --answers-file answers.yaml

And in that file have:

   templates: /home/stack/mytemplates
   environments:
 - superduper.yaml
 - loadsofstuff.yaml
 - custom.yaml


I like the idea of this, provided we keep the scope limited to what is not
already possible via the heat environment files.

So, for example in the reply from Qasim Sarfraz there is mention of passing
other deployment parameters, and I would prefer we did not do that, because
it duplicates functionality that already exists in the heat environment
(I'll reply separately to explain that further).

I do have a couple of questions:

1. How will this integrate with the proposal to add an optional environment
directory? See https://review.openstack.org/#/c/245172/

2. How will this integrate with non "deploy" actions, such as image
building (e.g both the current interface and the new yaml definition
proposed in https://review.openstack.org/#/c/235569/)

It's probably fine to say it's only scoped to the deploy command initially,
but I wanted to at least consider if a broader answer-file format could be
adopted which could potentially support all overcloud * commands.


deploy and update actually, as per the change request, given you'll 
probably want those to remain the same on update


haven't checked the submission in the details but have a few comments:

1. what is the benefit of having the templates location specified in the 
answers file as well? how about keeping the templates path out of the yaml?


2. I'd also rename the answers file into something like 
environments-file, given it's not an answers-file but more a list of env 
files


3. in which order are the env files appended? it is important that order 
is respected and known in advance to the user


4. how does this behave if one is passing some env with -e as well? 
looks like we should append the -e files to the list of environments 
gathered from the environments-file?


5. make sure not to mangle the file paths; these can be absolute or 
relative to the templates location

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-16 Thread Giulio Fidente

On 11/16/2015 04:25 PM, Steven Hardy wrote:

Hi all,

I wanted to start some discussion re $subject, because it's been apparent
that we have a lack of clarity on this issue (and have done ever since we
started using parameter_defaults).


[...]


How do people feel about this example, and others like it, where we're
enabling common, but not mandatory functionality?


At first I was thinking about something as simple as: "don't use 
top-level params for resources which the registry doesn't enable by 
default".


It seems to be more or less what we tried to do with the existing pluggable 
resources.


Also, not to hijack the thread but I wanted to add another question 
related to a similar issue:


  Is there a reason to prefer use of parameters: instead of 
parameter_defaults: in the environment files?


It looks to me that by defaulting to parameter_defaults: users won't 
need to update their environment files in case the parameter is moved 
from top-level into a specific nested stack, so I'm inclined to prefer 
this. Are there reasons not to?
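
For illustration, the same override expressed both ways, with an 
illustrative parameter name:

  # parameters: applies to the top-level stack only, and breaks if the
  # parameter later moves into a nested stack
  parameters:
    CephStorageCount: 2

  # parameter_defaults: overrides the default at any depth of the tree
  parameter_defaults:
    CephStorageCount: 2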

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Giulio Fidente

On 11/10/2015 04:47 PM, Dmitry Tantsur wrote:

On 11/10/2015 04:37 PM, Giulio Fidente wrote:

On 11/10/2015 04:16 PM, Dmitry Tantsur wrote:

On 11/10/2015 04:08 PM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 for both



b) Put it in its own repo, tripleo-api


if both the api (coming) and the cli (currently python-tripleoclient)
are meant to consume the shared code (business logic) from
tripleo-common, then I think it makes sense to keep each in its own repo
... so that we avoid renaming tripleo-common as well


tripleoclient should not consume tripleo-common


so FWIW I think my vote is different depending on the plans:

a. if python-tripleoclient will be changed so that it uses tripleo-api, 
then I'd vote for option 1) (have tripleo-api in -common and rename)


b. if python-tripleoclient will be changed so that it uses the shared 
library in -common, then I'd vote for option 2) (have tripleo-api in 
its own repo)



on a side note, I think it should always be possible for the deployer to 
skip any business logic to give complete control over the template 
params for both the initial deployments and the updates

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Location of TripleO REST API

2015-11-10 Thread Giulio Fidente

On 11/10/2015 04:16 PM, Dmitry Tantsur wrote:

On 11/10/2015 04:08 PM, Tzu-Mainn Chen wrote:

Hi all,

At the last IRC meeting it was agreed that the new TripleO REST API
should forgo the Tuskar name, and simply be called... the TripleO
API.  There's one more point of discussion: where should the API
live?  There are two possibilities:

a) Put it in tripleo-common, where the business logic lives.  If we
do this, it would make sense to rename tripleo-common to simply
tripleo.


+1 for both



b) Put it in its own repo, tripleo-api


if both the api (coming) and the cli (currently python-tripleoclient) 
are meant to consume the shared code (business logic) from 
tripleo-common, then I think it makes sense to keep each in its own repo 
... so that we avoid renaming tripleo-common as well

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Upgrades, Releases Branches

2015-08-17 Thread Giulio Fidente

On 08/17/2015 09:29 PM, James Slagle wrote:

On Mon, Aug 17, 2015 at 9:28 AM, Steven Hardy sha...@redhat.com wrote:

Hi all,

Recently I had some discussion with folks around the future strategy for
TripleO wrt upgrades, releases and branches, specifically:

- How do we support a stable TripleO release/branch that enables folks to
   easily deploy the current stable release of OpenStack
- Related to the above, how do we allow development of TripleO components
   (and in particular t-h-t) to proceed without imposing undue constraints
   on what new features may be used (e.g new-for-liberty Heat features which
   aren't present in the current released OpenStack version)
- We're aiming to provide upgrade support, thus from and to which versions?

I know historically TripleO has taken something of a developer and/or
continuous deployment model for granted, but I'd like to propose that we
revisit that discussion, such that we move towards something that's more
consumable by users/operators that are consuming the OpenStack coordinated
releases.

The obvious answer is a stable branch for certain TripleO components, and
in particular for t-h-t, but this has disadvantages if we take the
OpenStack wide no feature backports approach - for example upgrade
support to liberty could be considered a feature, and many other TripleO
features are really more about making features of the deployed OpenStack
services consumable.

I'd like to propose we take a somewhat modified release branch approach,
which combines many of the advantages of the stable-branch model, but
allows for a somewhat more liberal approach to backports, where most things
are considered valid backports provided they work with the currently
released OpenStack services (e.g right now, a t-h-t release/kilo branch
would have to maintain compatibility with a kilo Heat in the undercloud)


I like the idea, it seems reasonable to me.


/me too

from what I understand this would apply only to the latest stable; the 
previous releases won't get automated updates when a new release is out, 
is this correct?



I do think we should clarify if the rule is:

We *can* backport anything to release/kilo that doesn't break
compatibility with kilo Heat.

Or:

We *must* backport anything to release/kilo that doesn't break
compatibility with kilo Heat.



[...]



I think it's important to decide this up front so we can set the
expectation. I'm leaning towards the latter (must backport) myself,
but then I wonder if the release branch would really solve the
downstream use.


I am for a must as well; a CI job deploying openstack/kilo code using 
the proposed tripleo/master changes might help expose 
incompatibilities early on



Again, I just go back to the point of the branch. Does it exist to
provide some semblance of stability so that we make distros happy? Or
does it exist solely for the purpose so that we can iterate faster on
using new Heat features in master?


I am not a puppet expert but another reason for the branches could be to 
avoid cross-release incompatibilities due to updates of the 
manifests/modules, not only of the templates


Architectural (incompatible) changes might also benefit from having 
different branches; even though these would remain a hard problem to 
solve in an upgrade scenario



One way to minimise the overhead of maintaining such a branch could be to
implement a bot which automatically proposes commits which land in master
to the branch (using Depends-On to maintain the history order).


I'm not sure I'm following how this would work. Which patch depends on
which other one? If the master patch is A'd, is the release branch
automatically +A'd as well (as long as it's not -2'd)? It seems like
that might be necessary to maintain consistent looking history between
master and the release branch.

Likewise, if the release branch were to fail to merge, it would need
to block the master patch from merging so that there wasn't potential
for different patches to merge out of order in the 2 branches.


yeah, looks like the automated process should try to backport the changes 
in the same order they are merged in the master branch and completely 
stop if one of them fails? then, after manual intervention, continue 
from the most recent backport?



Interested to hear feedback on the idea, as well as if anyone has direct
experience of implementing the bot/automation pieces.


/me doesn't have experience but would think about the bot as a very 
'stupid' tool rather than an intelligent one; stopping the backports 
seems generally safer than an automated merge which breaks things out of 
immediate sight

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Liberty Summit Topics etherpad

2015-05-08 Thread Giulio Fidente

On 04/28/2015 09:43 PM, James Slagle wrote:

On Mon, Apr 6, 2015 at 3:11 PM, James Slagle james.sla...@gmail.com wrote:

I've created an etherpad for TripleO to track the topics we'd like
to discuss at the Liberty Summit:
https://etherpad.openstack.org/p/tripleo-liberty-proposed-sessions

It's also linked from the main Design Summit Planning wiki page:
https://wiki.openstack.org/wiki/Design_Summit/Planning

If you have something you'd like to propose to discuss, please add it
to the etherpad.


TripleO has 2 fishbowl sessions and 2 working sessions at the Summit,
as well as an all day contributor's meetup on Friday.  I'd like to
finalize the topics for the sessions over the next day or 2. We can
continue to refine as needed, but I'd like to get the summaries out
there so that folks can start planning what sessions they want to
attend.

My thinking right now is that we devote one fishbowl session to a
discussion around tripleo-heat-templates. Particularly around refining
the interfaces and what we can further do to enable Docker integration
for a containerized Overcloud. We could also discuss making the
template implementations more composable at the individual service
level, and plans around deprecating the elements based templates.


I won't be at the summit but the idea of making the params 'more 
composable' by distributing them on a per-service basis seems very 
interesting to me


I am not sure to what extent these would need to be matched by a 
per-service manifest; sounds like something which could be done in steps



For the second fishbowl session, we could make it testing/CI focused.
We could devote some time to talking about diskimage-builder testing,
and TripleO CI as it relates to quintupleo, the puppet modules, and
possibly using the infra clouds. Depending on time and interest, we
could also discuss if and how we might move forward with a devtest
alternative that was more production oriented.


For the CI topic I would also think about making a fedora job use the 
puppet-stack-config element to configure the seed!



For the working sessions, I don't think we need as much of a defined
summary. But I suspect we could pick a few things to focus on at each
session: tripleo-heat-templates, HA, network architecture,
diskimage-builder testing.

Let me know any feedback/opinions, and I'll get the schedule updated
on sched.org this week. Thanks.


I added a line to the etherpad, not something I can show with code, only 
a thought:


How about 'changing' our approach for the Undercloud, turning it from a 
separate entity into just a 'peculiar' configuration of the Overcloud 
(maybe with a different base image)? Sure there is some complexity but 
we won't have to rethink HA for the undercloud, for example, nor 
duplicate the templates/manifests for it.


--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-08 Thread Giulio Fidente

On 05/08/2015 05:41 PM, James Slagle wrote:

On Thu, May 7, 2015 at 5:46 PM, Giulio Fidente gfide...@redhat.com wrote:

On 05/07/2015 07:35 PM, Dan Prince wrote:


On Thu, 2015-05-07 at 17:36 +0200, Giulio Fidente wrote:


On 05/07/2015 03:31 PM, Dan Prince wrote:


On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:



EnablePacemaker is set to 'false' by default. IMO it should be opt-in:


http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=1f7426a014f0f83ace4d2b3531014e22f7778b4d



sure that param is false by default, but one can enable it and deploy with
pacemaker on a single node, and in fact many people do this for dev purposes

before that change, we were even running CI on a single node with pacemaker,
so as a matter of fact one could get rid of the conditionals in the manifest
today by just assuming there will be pacemaker


This is the direction I thought we were moving. When you deploy a
single controller, it is an HA cluster of 1. As opposed to just not
using pacemaker entirely. This is the model we did previously for HA
and I thought it worked well in that it got everyone testing and using
the same code path.


indeed this holds true

today if EnablePacemaker is true and ControllerCount is 1 you do get a 
working overcloud, with Pacemaker and 1 controller


the very same overcloud config applies to ControllerCount = 3


I thought the EnablePacemaker parameter was more or less a temporary
thing to get us over the initial disruption of moving things over to
pacemaker.


not really, the purpose of that boolean is to support the deployment of an 
overcloud without using Pacemaker


this is possible today as well, but only with ControllerCount = 1

probably, in the future, we will work on the non-pacemaker scenario with 
ControllerCount = 3 as well, in which case a split of the manifests 
would really be useful


my only concerns on the topic are:

1. where and when to move the parts which are shared amongst the manifests
2. whether or not it is urgent to do the split today, given the shared parts 
would be duplicated

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

On 05/07/2015 11:15 AM, marios wrote:

On 07/05/15 05:32, Dan Prince wrote:


[..]


Something like this:

https://review.openstack.org/#/c/180833/


+1 I like this as an idea. Given we've already got quite a few reviews
in flight making changes to overcloud_controller.pp (we're still working
out how to enable services), I'd be happier to let those land and
have the tidy-up once it settles (early next week at the latest) -
especially since there's some working out+refactoring still to do.


+1 on not blocking ongoing work

as of today a split would cause the two .pp to have a lot of duplicated 
data, making them no better than one with the ifs IMHO


we should probably move the duplicated parts out of the existing .pp 
first (see my other email on the matter)

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

hi Dan!

On 05/07/2015 04:32 AM, Dan Prince wrote:

Looking over some of the Puppet pacemaker stuff today. I appreciate all
the hard work going into this effort but I'm not quite happy about all
of the conditionals we are adding to our puppet overcloud_controller.pp
manifest. Specifically it seems that every service will basically have
its resources duplicated for pacemaker and non-pacemaker version of the
controller by checking the $enable_pacemaker variable.


not sure about the meaning of 'resources duplicated', but I think it is 
safe to say that the pacemaker ifs are mainly there to cope with the 
following two cases:


1. when pacemaker is enabled, we don't want puppet to enable/start the 
service; pacemaker will manage it, so we need to tell the module not to

2. when pacemaker is enabled, there are some pacemaker-related steps to be 
performed, like adding a resource into the cluster so that it 
effectively monitors the service status


in the future, we might need to pass some specific config params to a 
module only when pacemaker is enabled, but that looks covered by 1) already
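
as a sketch of 1), assuming the module exposes the usual 
manage_service/enabled parameters, the hiera data fed to the node would 
carry something like:

  # let pacemaker, not puppet, control the service lifecycle
  glance::api::manage_service: false
  glance::api::enabled: false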



After seeing it play out for a couple services I think I might prefer it
better if we had an entirely separate template for the pacemaker
version of the controller. One easy way to kick off this effort would be
to use the Heat resource registry to enable pacemaker rather than a
parameter.

Something like this:

https://review.openstack.org/#/c/180833/
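
A sketch of what the registry approach could look like in a Heat 
environment file (the alias and template name here are hypothetical, not 
taken from the review):

  resource_registry:
    OS::TripleO::ControllerConfig: puppet/controller-config-pacemaker.yaml

deploying without pacemaker would then just mean mapping the same alias to 
a non-pacemaker template, with no $enable_pacemaker conditionals left 
inside the manifests.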

If we were to split out the controller into two separate templates I
think it might be appropriate to move a few things into puppet-tripleo
to de-duplicate a bit. Things like the database creation for example.
But probably not all of the services... because we are trying as much as
possible to use the stackforge puppet modules directly (and not our own
composition layer).


I think the change is good; I am assuming we don't want the shared parts 
to get duplicated into the two .pp though.


What is your idea about those shared parts? To move them into 
puppet-tripleo? To provision a shared .pp in addition to a 
differentiated top-level template maybe? Something else?

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

On 05/07/2015 07:35 PM, Dan Prince wrote:

On Thu, 2015-05-07 at 17:36 +0200, Giulio Fidente wrote:

On 05/07/2015 03:31 PM, Dan Prince wrote:

On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:


[...]


on the other hand, we can very well get rid of the ifs today by
deploying *with* pacemaker in the single-node scenario as well! we already
have EnablePacemaker always set to true for dev purposes, even on a single
node


EnablePacemaker is set to 'false' by default. IMO it should be opt-in:

http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=1f7426a014f0f83ace4d2b3531014e22f7778b4d


sure that param is false by default, but one can enable it and deploy 
with pacemaker on a single node, and in fact many people do this for dev 
purposes
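
(e.g., a minimal sketch of opting in via an environment file:

  parameters:
    EnablePacemaker: true

or equivalently passing -P EnablePacemaker=true on the heat command line)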


before that change, we were even running CI on a single node with 
pacemaker, so as a matter of fact one could get rid of the conditionals 
in the manifest today by just assuming there will be pacemaker


this said, I myself prefer to leave some room for a (future?) 
non-pacemaker scenario, but I still wanted to point out the reason why 
the conditionals are there in the first place

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

On 05/07/2015 03:31 PM, Dan Prince wrote:

On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:


[...]


I think the change is good, I am assuming we don't want the shared parts
to get duplicated into the two .pp though.


So again. Duplicating the puppet class includes doesn't bother me too
much. Some of the logic (perhaps the DB creation) should move over to
puppet-tripleo however. But I would like to see us not go crazy with the
composition layer... using the stackforge/puppet modules directly is
best I think.


but it is not only includes, really; I understand we would like it to be 
so, but it isn't


e.g. this would be duplicated, if not moved elsewhere:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/manifests/overcloud_controller.pp#L166-L227

this as well:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/manifests/overcloud_controller.pp#L296-L333

and there are quite a lot of similar examples; the change from marios, as 
well, ended up duplicating lots of code:


https://review.openstack.org/#/c/180833/


Any conversion code in Puppet (functions using split, downcase, etc) I
view as technical debt which ideally we would eventually be able
to convert within the Heat templates themselves into formats usable by
Hiera directly. Any duplication around that sort of thing would
eventually get cleaned up as Heat gets an extra function or two.


FWIW, I do agree with the longish-term plan: most of the duplicated code 
*should go away when some more string manipulation can be covered by 
heat*, but I still think that will cover some of it, not all; and this 
isn't the case today anyway (and I don't know when it will be, honestly)
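
to make it concrete, a hypothetical sketch (key and param names invented) 
of doing the joining in Heat so that hiera already receives the final 
string:

  config:
    hiera:
      datafiles:
        ceph:
          mapped_data:
            # joined in Heat via list_join, nothing left for Puppet to massage
            ceph_mon_host: {list_join: [',', {get_param: MonitorIPs}]}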


I think a split will still be worth some duplication when we start 
supporting *multiple controllers without pacemaker* as well; today, not 
so much


on the other hand, we can very well get rid of the ifs today by 
deploying *with* pacemaker in the single-node scenario as well! we already 
have EnablePacemaker always set to true for dev purposes, even on a single 
node

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] puppet pacemaker thoughts... and an idea

2015-05-07 Thread Giulio Fidente

On 05/07/2015 05:36 PM, Giulio Fidente wrote:

On 05/07/2015 03:31 PM, Dan Prince wrote:

On Thu, 2015-05-07 at 11:22 +0200, Giulio Fidente wrote:


[...]


and there are quite a lot of similar examples, the change from marios as
well, ended up duplicating lots of code:

https://review.openstack.org/#/c/180833/


here I should have linked this change instead:

https://review.openstack.org/#/c/181015/1

from marios, built on your first
--
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [TripleO] on supporting multiple implementations of tripleo-heat-templates

2015-04-17 Thread Giulio Fidente

Hi,

the Heat/Puppet implementation of the Overcloud deployment seems to be 
surpassing the Heat/Elements implementation in features.


The changes for Ceph are an example: the Puppet-based version is already 
adding features which don't have a counterpart in the Elements-based one.


Recently we started working on the addition of Pacemaker into the 
Overcloud, to monitor the services and provide a number of 'auto 
healing' features, and again this is happening in the Puppet 
implementation only (at least for now) so I think the gap will become 
bigger.


Given we support different implementations with a single top-level 
template [1], to keep the other templates valid we're forced to propagate 
the params into the Elements-based templates as well, even though there 
is no use for them there; see for example [2], and the sketch below.
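
to illustrate with a sketch (the parameter here is hypothetical, not the 
one from the review): in the top-level template every 
implementation-specific knob has to be passed down to whichever nested 
template the registry resolves to, so it must be declared in both even 
when only one uses it:

  controller:
    type: OS::TripleO::Controller  # resolves to the puppet/ or elements/ template
    properties:
      # must be a valid parameter of both nested templates,
      # even if only the puppet/ one actually consumes it
      CephMonKey: {get_param: CephMonKey}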


The extra work itself is not of great concern, but I wonder if it 
wouldn't make sense to deprecate the Elements-based templates at this 
point, instead of keeping on adding unused parts to them? Thoughts?


1. 
https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-without-mergepy.yaml

2. https://review.openstack.org/#/c/173773



Re: [openstack-dev] [TripleO] Alternate meeting time

2015-04-15 Thread Giulio Fidente

On 04/15/2015 10:46 AM, marios wrote:

On 15/04/15 00:13, James Slagle wrote:

Hi, TripleO currently has an alternate meeting time scheduled for
Wednesdays at 08:00 UTC. The alternate meeting actually hasn't
happened the last 4 occurrences that I know of [2].

Do we still need the alternate meeting time slot? I'd like to
accommodate as many people as possible to be either to attend one of
our two meeting times. The last time this came up, we tracked people's
opinions in an etherpad[3], and a doodle, which has since expired.
Maybe a good first step would be to just update your preferences in
the etherpad, so we can start to see if there's a larger group of
people we can accommodate at the alternate meeting time.


I don't think we need the alternate slot.


I added myself in the etherpad to the list of people who can stick with 
the single primary time slot as well

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO][Heat] Overcloud software updates and ResourceGroups

2015-04-02 Thread Giulio Fidente

hi there,

thanks for sharing this, I have a few questions

On 04/03/2015 12:31 AM, Zane Bitter wrote:

A few of us have been looking for a way to perform software updates to
servers in a TripleO Heat/Puppet-based overcloud


[...]


Here's a trivial example of what this deployment might look like:

  update_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: do_sw_update.sh}
      inputs:
        - name: update_after_time
          description: Timestamp of the most recent update request

  update_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      actions:
        - UPDATE
      config: {get_resource: update_config}
      server: {get_resource: my_server}
      input_values:
        update_after_time: {get_param: update_timestamp}


[...]


   heat stack-update my_overcloud -f $TMPL -P update_timestamp=$(date)


leaving the ResourceGroup/AutoScalingGroup question aside for more 
knowledgeable people and trying instead to translate the templating 
approach into user features: if I read it correctly, this would also make 
it possible to:


1. perform a config update without a software update as long as the 
update_timestamp param remains unchanged


2. perform software updates of each ResourceGroup independently from the 
others by using as many update_timestamp params


3. use different update.sh scripts per ResourceGroup

are the above correct?
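
for instance, for 2), a sketch with one timestamp parameter per group 
(the names here are invented):

  parameters:
    controller_update_timestamp:
      type: string
      default: ''
    compute_update_timestamp:
      type: string
      default: ''

each wired to the update_after_time input of the corresponding group's 
deployment, so that e.g. only the computes get updated by:

   heat stack-update my_overcloud -f $TMPL -P compute_update_timestamp=$(date)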

My single minor concern is about the update script itself which, if not 
left for editing to the user but bundled instead with t-h-t, should be 
clever enough to cope with different distros and distro versions, because 
we can't know that from the template ... but it seems this can be 
achieved by abstracting it on top of Puppet (or whichever other 
config management tool is deployed)

--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] Getting to a 1.0

2015-03-09 Thread Giulio Fidente

On 03/07/2015 04:34 AM, Dan Prince wrote:

On Tue, 2015-03-03 at 17:30 -0500, James Slagle wrote:

Hi,

Don't let the subject throw you off :)

I wasn't sure how to phrase what I wanted to capture in this mail, and
that seemed reasonable enough. I wanted to kick off a discussion about
what gaps people think are missing from TripleO before we can meet the
goal of realistically being able to use TripleO in production.

The things in my mind are:

Upgrades - I believe the community is trending away from the image
based upgrade rebuild process. The ongoing Puppet integration work is
integrated with Heat's SoftwareConfig/SoftwareDeployment features and
is package driven. There is still work to be done, especially around
supporting rollbacks, but I think this could be part of the answer to
how the upgrade problem gets solved.


+1 Using packages solves some problems very nicely. We haven't solved
all the CI related issues around using packages with upstream but it is
getting better. I mention this because it would be nice to have CI
testing on the upgrade process automated at some point...



HA - We have an implementation of HA in tripleo-image-elements today.
However, the Puppet codepath leaves that mostly unused. The Puppet
modules however do support HA. Is that the answer here as well?


In general most of the puppet modules support the required HA bits. We
are still working to integrate some of the final pieces here but in
general I expect this to proceed quickly.


going back to CI, I think this would benefit from an additional CI job

given we have a non-voting HA job running on precise/elements, I'd like 
to add one running on fedora/puppet, maybe initially non-voting as well


this said, it would also be nice to have a job which deploys additional 
block storage (cinder) and object storage (swift) nodes ...


... and to save some resources, maybe we can switch 
'check-tripleo-ironic-overcloud-f20puppet-nonha' and 
'check-tripleo-ironic-overcloud-precise-nonha' to deploy a single 
compute node instead of two



CLI - We have devtest. I'm not sure if anyone would argue that should
be used in production. It could be...but I don't think that was its
original goal and it shows. The downstreams of TripleO that I'm aware
of each ended up more or less having their own CLI tooling. Obviously
I'm only very familiar with one of the downstreams, but in some
instances I believe parts of devtest were reused, and other times not.
That begs the question, do we need a well represented unified CLI in
TripleO? We have a pretty good story about using Nova/Ironic/Heat[0]
to deploy OpenStack, and devtest is one such implementation of that
story. Perhaps we need something more production oriented.


I think this is an interesting idea and perhaps has some merit. I'd like
to see some specific examples showing how the unified CLI might make
things easier for end users...


I am of the same feeling; AFAIK devtest was meant to set up a development 
environment, not a production environment; more on this later


Baremetal management - To what extent should TripleO venture into this
space? I'm thinking things like discovery/introspection, ready state,
and role assignment. Ironic is growing features to expose things like
RAID management via vendor passthrough API's. Should TripleO take a
role in exercising those API's? It's something that could be built
into the flow of the unified CLI if we were to end up going that
route.

Bootstrapping - The undercloud needs to be
bootstrapped/deployed/installed itself. We have the seed vm to do
that. I've also worked on an implementation to install an undercloud
via an installation script assuming the base OS is already installed.
Are these the only 2 options we should consider, or are there other
ideas that will integrate better into existing infrastructure?


And also should we think about possibly renaming these? I find that many
times when talking about TripleO to someone new they find the whole
undercloud/overcloud thing confusing. Calling the undercloud the
baremetal cloud makes it click.


I don't think we need more; what I would like to have instead is a tool, 
targeted at end users, capable of setting up an undercloud without going 
through the seed


this said, I am really not sure if that should be a wrapper around 
devtest --no-undercloud or a tool which turns the existing base OS into 
an undercloud; both seem to have pros and cons



Release Cadence with wider OpenStack - I'd love to be able to say on
the day that a new release of OpenStack goes live that you can use
TripleO to deploy that release in production...and here's how you'd do
it


personally, while I tried to join this conversation in the past, I am 
still unsure whether a stable/master approach would work better for 
tripleo than a synchronized release cadence

--
Giulio Fidente
GPG KEY: 08D733BA


Re: [openstack-dev] [TripleO] Switching CI back to amd64

2015-01-12 Thread Giulio Fidente

On 01/08/2015 10:22 AM, Derek Higgins wrote:

On 07/01/15 23:41, Ben Nemec wrote:

I don't feel like we've been all that capacity constrained lately
anyway, so as I said in my other (largely unnecessary, as it turns out)
email, I'm +1 on doing this.

Correct, we're not currently constrained on capacity at all (most days we
run less than 300 jobs), but once the other region is in use we'll be
hoping to add jobs to other projects.


does that mean we could also add some jobs from the 'wanted ci jobs' 
matrix? :)


https://github.com/openstack-infra/tripleo-ci/blob/master/docs/wanted_ci_jobs.csv
--
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [TripleO] [CI] Cinder/Ceph CI setup

2014-12-02 Thread Giulio Fidente

On 11/27/2014 02:23 PM, Derek Higgins wrote:

On 27/11/14 10:21, Duncan Thomas wrote:

I'd suggest starting by making it an extra job, so that it can be
monitored for a while for stability without affecting what is there.


we have to be careful here: adding an extra job for this is probably the
safest option, but tripleo CI resources are a constraint; for that reason
I would add it to the HA job (which is currently non-voting) and once
it's stable we should make it voting.



I'd be supportive of making it the default HA job in the longer term as
long as the LVM code is still getting tested somewhere - LVM is still
the reference implementation in cinder and after discussion there was
strong resistance to changing that.



We are and would continue to use lvm for our non-HA jobs. If I
understand it correctly, the tripleo lvm support isn't HA, so continuing
to test it on our HA job doesn't achieve much.



I've no strong opinions on the node layout; I'll leave that to more
knowledgeable people to discuss.

Is the ceph/tripleO code in a working state yet? Is there a guide to
using it?


hi guys, thanks for replying

I just wanted to add here a link to the blueprint so you can keep track 
of development [1]


all the code to make it happen (except the actual CI job config changes) 
is up for review now so feedback and reviews are indeed appreciated :)


1. https://blueprints.launchpad.net/tripleo/+spec/tripleo-kilo-cinder-ha
--
Giulio Fidente
GPG KEY: 08D733BA



  1   2   >