Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-10-02 Thread Jan Provazník

On 09/30/2015 10:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common, I have
to grep through the projects that I know use it to make sure I don't break
anything.

Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.

Cheers,
Dougal



Hi,
my preference would be to follow the same approach that is used in the oslo 
libraries (I think these libs should be taken as a best-practice example 
where possible). And the oslo libs AFAIK use the last of your options.


But if there is a plan to build some thin REST API layer on top of it, I 
think that versioning will be necessary. So I would lean towards the default 
underscore convention in a versioned directory structure :).


Jan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CLI and RHEL registration of overcloud nodes

2015-08-26 Thread Jan Provazník

Hi,
although rdomanager-oscplugin is not yet under TripleO, it should be 
soon, so I'm sending this to the TripleO audience.


Satellite registration, from the user's point of view, is now done by passing 
a couple of specific parameters when running the openstack overcloud deploy 
command [1]. rdomanager-oscplugin checks for the presence of these params and 
adds additional env files which are then passed to heat, and it also 
generates a temporary env file containing the default_params required for the 
rhel-registration template [2].


This approach is not optimal because it means that:
a) registration params have to be passed on each call of openstack 
overcloud deploy
b) other CLI commands (pkg update, node deletion) have to implement/reuse 
the same logic (support the same parameters) to be consistent


This is probably not necessary, because registration params should be 
needed only when creating the OC stack; there is no need to pass them later 
when running any update operation.


As a short-term solution I think it would be better to pass the registration 
templates in the same way as other extra files (the -e param). Although the 
user will still have to pass an additional parameter when using 
rhel-registration, it will be consistent with the way e.g. network 
configuration is passed, and the -e mechanism for passing additional env 
files is already supported in other CLI commands. The 
_create_registration_env method [2] would be updated to generate/add 
just the user's credentials [3] env file - and this would be needed only 
when creating the overcloud, with no need to pass them when updating the 
stack later.
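A sketch of what the slimmed-down helper might look like (the parameter
names are illustrative, not the actual rhel-registration template
parameters):

```python
import json
import tempfile

def create_registration_env(username, password):
    """Write only the user's credentials into a temporary Heat
    environment file; the rhel-registration templates themselves
    are passed by the user via -e like any other extra env file."""
    env = {'parameter_defaults': {
        'rhel_reg_user': username,       # illustrative parameter names
        'rhel_reg_password': password,
    }}
    f = tempfile.NamedTemporaryFile(mode='w', suffix='.json',
                                    delete=False)
    json.dump(env, f, indent=2)
    f.close()
    return f.name
```

The returned path would then be appended to the list of environment files
given to heat on stack-create only, never on later updates.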


If there are no objections/feedback, I'll send a patch (and of course 
update the documentation too) which updates the CLI as described above.


Jan


[1] 
https://repos.fedorapeople.org/repos/openstack-m/docs/master/basic_deployment/basic_deployment_cli.html
[2] 
https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L366
[3] 
https://github.com/rdo-management/python-rdomanager-oscplugin/blob/master/rdomanager_oscplugin/v1/overcloud_deploy.py#L378




Re: [openstack-dev] [TripleO] [Puppet] Package updates strategy

2015-05-29 Thread Jan Provazník

On 05/28/2015 08:24 PM, Zane Bitter wrote:

On 28/05/15 03:35, Jan Provaznik wrote:

On 05/28/2015 01:10 AM, Steve Baker wrote:

On 28/05/15 10:54, Richard Raseley wrote:

Zane Bitter wrote:

Steve is working on a patch to allow package-based updates of overcloud
nodes[1] using the distro's package manager (yum in the case of RDO, but
conceivably apt in others). Note we're talking exclusively about minor
updates, not version-to-version upgrades here.

Dan mentioned at the summit that this approach fails to take into
account the complex ballet of service restarts required to update
OpenStack services. (/me shakes fist at OpenStack services.) And
furthermore, that the Puppet manifests already encode the necessary
relationships to do this properly. (Thanks Puppeteers!) Indeed we'd be
doing the Wrong Thing by Puppet if we changed this stuff from under
it.

The problem of course is that neither Puppet nor yum/apt has a view of
the entire system. Yum doesn't know about the relationships between
services and Puppet doesn't know about all of the _other_ packages
that
they depend on.

One solution proposed was to do a yum update first but specifically
exclude any packages that Puppet knows about (the --exclude flag
appears sufficient for this); then follow that up with another Puppet
run using ensure => latest.


My only concern with this approach is how do we collect and maintain the
excludes list. Other than that it sounds reasonable.
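The yum-first step could be sketched like this (package names are
hypothetical; the exclude list would have to come from whatever source
enumerates the Puppet-managed packages):

```python
def yum_update_command(puppet_managed):
    """Build the 'yum update' invocation that skips every package
    Puppet manages; a follow-up puppet run with ensure => latest
    then updates (and restarts) the excluded services properly."""
    cmd = ['yum', '-y', 'update']
    cmd.extend('--exclude=%s' % pkg for pkg in sorted(puppet_managed))
    return cmd

# On a node this would be executed via subprocess.check_call(cmd).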


Why not swap the order? First run Puppet using ensure => latest, which
updates/restarts everything OpenStack depends on, then run yum/apt
update to update any remaining bits. You wouldn't need an exclude list then.


Will ensure => latest update all the packages that the given one depends
on, even if it doesn't require new versions? I assumed that it wouldn't,
so by doing Puppet first we would just ensure that we are even less
likely to pick up library changes by restarting services after the
libraries are updated.



We could take advantage of this only when both a service and a 
library it depends on are part of the upgrade. Other services depending on 
the lib would have to be restarted outside Puppet in a post-yum-update 
phase anyway. I wonder if it is worth trying to generate the list of 
excluded packages (to be able to run yum first) given the quite limited 
benefit it provides, but if it's simple enough then +1 :).



A problem with that approach is that it still fails to restart
services
which have had libraries updated but have not themselves been updated.
That's no worse than the pure yum approach though. We might need an
additional way to just manually trigger a restart of services.


Maybe this could be handled at the packaging stage by reving the package
version when there is a known fix in a low-level library, thus
triggering a service restart in the puppet phase.



My concern is that then e.g. a libc update would mean repackaging (bumping
the version of) zillions of other packages, and zillions of packages would be
downloaded/upgraded on each system because of the new package versions.


Yes, no distro ever created works like this - for good reason - and it
is way out of scope for us to try to change that :)


I think that providing the user with a manual (scripted) way to restart
services after an update would be a sufficient solution (though not so
sophisticated).


Maybe there's a way we can poke Puppet to do it.

cheers,
Zane.



Re: [openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-20 Thread Jan Provazník

On 03/18/2015 04:22 PM, Ben Nemec wrote:

On 03/17/2015 09:13 AM, Zane Bitter wrote:

On 16/03/15 16:38, Ben Nemec wrote:

On 03/13/2015 05:53 AM, Jan Provaznik wrote:

On 03/10/2015 05:53 PM, James Slagle wrote:

On Mon, Mar 9, 2015 at 4:35 PM, Jan Provazník jprov...@redhat.com wrote:

Hi,
it would make sense to have a library for the code shared by Tuskar UI and
CLI (I mean the TripleO CLI - whatever it will be, not tuskarclient, which is
just a thin wrapper for the Tuskar API). There are various actions which
consist of more than a single API call to an OpenStack service; to give
some examples:

- node registration - loading a list of nodes from a user-defined file;
this means parsing a CSV file and then feeding Ironic with the data
- decommissioning a resource node - this might consist of disabling
monitoring/health checks on the node, then gracefully shutting down the node
- stack breakpoints - setting breakpoints will allow manual
inspection/validation of changes during a stack-update; the user can then
update nodes one-by-one and trigger a rollback if needed


I agree something is needed. In addition to the items above, there's much
of the post-deployment steps from devtest_overcloud.sh. I'd like to see that
be consumable from the UI and CLI.

I think we should be aware though that where it makes sense to add things
to os-cloud-config directly, we should just do that.



Yes, actually I think most of the devtest_overcloud content fits
os-cloud-config (and IIRC for this purpose os-cloud-config was created).



It would be nice to have a place (a library) where the code could live and
where it could be shared by both the web UI and CLI. We already have the
os-cloud-config [1] library, which focuses only on configuring an OS cloud
after first installation (setting endpoints, certificates, flavors...), so
not all shared code fits there. It would make sense to create a new library
where this code could live. This lib could be placed on Stackforge for now
and it might have a very similar structure to os-cloud-config.

And most importantly... what is the best name? Some of the ideas were:
- tuskar-common


I agree with Dougal here, -1 on this.


- tripleo-common
- os-cloud-management - I like this one, it's consistent with the
os-cloud-config naming


I'm more or less happy with any of those.

However, if we wanted something to match the os-*-config pattern we
could go with:
- os-management-config
- os-deployment-config



Well, the scope of this lib will be beyond configuration of a cloud, so
having -config in the name is not ideal. Based on the feedback in this
thread I tend to go ahead with os-cloud-management, and unless someone
raises an objection here now, I'll ask the infra team what the process is
for adding the lib to Stackforge.


Any particular reason you want to start on stackforge?  If we're going
to be consuming this in TripleO (and it's basically going to be
functionality graduating from incubator) I'd rather just have it in the
openstack namespace.  The overhead of some day having to rename this
project seems unnecessary in this case.


I think the long-term hope for this code is for it to move behind the
Tuskar API, so at this stage the library is mostly to bootstrap that
development to the point where the API is more or less settled. In that
sense stackforge seems like a natural fit, but if folks feel strongly
that it should be part of TripleO (i.e. in the openstack namespace) from
the beginning then there's probably nothing wrong with that either.


So is this eventually going to live in Tuskar?  If so, I would point out
that it's going to be awkward to move it there if it starts out as a
separate thing.  There's no good way I know of to copy code from one git
repo to another without losing its history.

I guess my main thing is that everyone seems to agree we need to do
this, so it's not like we're testing the viability of a new project.
I'd rather put this code in the right place up front than have to mess
around with moving it later.  That said, this is kind of outside my
purview so I don't want to hold things up, I just want to make sure
we've given some thought to where it lives.

-Ben



Hi,
I don't have a strong opinion on where this lib should live. James, as 
TripleO PTL, what is your opinion about the lib's location?


For now, I have set WIP on the patch which adds this lib to Stackforge [1] 
(which I sent shortly before Ben pointed out the concern about its 
location).


Jan

[1] https://review.openstack.org/#/c/165433/



[openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-09 Thread Jan Provazník

Hi,
it would make sense to have a library for the code shared by Tuskar UI 
and CLI (I mean the TripleO CLI - whatever it will be, not tuskarclient, 
which is just a thin wrapper for the Tuskar API). There are various actions 
which consist of more than a single API call to an OpenStack 
service; to give some examples:


- node registration - loading a list of nodes from a user-defined 
file; this means parsing a CSV file and then feeding Ironic with the data
- decommissioning a resource node - this might consist of disabling 
monitoring/health checks on the node, then gracefully shutting down the node
- stack breakpoints - setting breakpoints will allow manual 
inspection/validation of changes during a stack-update; the user can then 
update nodes one-by-one and trigger a rollback if needed
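For the first item, the CSV step of node registration might look roughly
like this (the column layout is an assumption for illustration, not a fixed
format):

```python
import csv
import io

# Assumed column order; a real implementation would validate input.
FIELDS = ('mac', 'pm_type', 'pm_addr', 'pm_user', 'pm_password')

def parse_nodes_csv(text):
    """Turn a user-supplied CSV into dicts ready to feed to Ironic."""
    return [dict(zip(FIELDS, row))
            for row in csv.reader(io.StringIO(text)) if row]
```

The resulting dicts would then be handed to the Ironic client one node at a
time, which is exactly the kind of multi-call glue the library would hold.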


It would be nice to have a place (a library) where the code could live and 
where it could be shared by both the web UI and CLI. We already have the 
os-cloud-config [1] library, which focuses only on configuring an OS cloud 
after first installation (setting endpoints, certificates, flavors...), so 
not all shared code fits there. It would make sense to create a new 
library where this code could live. This lib could be placed on 
Stackforge for now, and it might have a very similar structure to 
os-cloud-config.


And most importantly... what is the best name? Some of the ideas were:
- tuskar-common
- tripleo-common
- os-cloud-management - I like this one, it's consistent with the 
os-cloud-config naming


Any thoughts? Thanks, Jan


[1] https://github.com/openstack/os-cloud-config



Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-10 Thread Jan Provazník

On 09/09/2014 08:32 PM, Gregory Haynes wrote:

Hello everyone!

I have been working on a meta-review of StevenK's reviews and I would
like to propose him as a new member of our core team.

As I'm sure many have noticed, he has been above our stats requirements
for several months now. More importantly, he has been reviewing a wide
breadth of topics and seems to have a strong understanding of our code
base. He also seems to be doing a great job at providing valuable
feedback and being attentive to responses on his reviews.

As such, I think he would make a great addition to our core team. Can
the other core team members please reply with your votes if you agree or
disagree.

Thanks!
Greg



+1



[openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-06-16 Thread Jan Provazník

Hi,
MariaDB is now included in the Fedora repositories, which makes it easier to 
install and a more stable option for Fedora installations. Currently 
MariaDB can be used by including the mariadb (use mariadb.org pkgs) or 
mariadb-rdo (use Red Hat RDO pkgs) element when building an image. What 
do you think about using MariaDB as the default option for Fedora when 
running the devtest scripts?


Pros: better element coverage - both the mysql and mariadb elements would be 
tested, whereas now only the mysql element is used in tests

Cons: different elements would be used for various distributions

Jan



[openstack-dev] [TripleO] pacemaker management tools

2014-06-11 Thread Jan Provazník

Hi,
the ceilometer-agent-central element was added recently to the overcloud 
image. To be able to scale out overcloud control nodes, we need HA for this 
central agent. Currently the central agent cannot scale out (until [1] is 
done). For now, the simplest way is to add the central agent to Pacemaker, 
which is quite simple.


The issue is that the distributions supported in TripleO provide different 
tools for managing Pacemaker: Ubuntu/Debian provides crmsh, Fedora/RHEL 
provides pcs, and OpenSUSE provides both. I didn't find packages for all our 
distros for either of the tools. Also, if there is a third-party repo 
providing packages for various distros, adding a dependency on an 
untrusted third-party repo might be a problem for some users.


Although it's a little bit annoying, I think we will end up 
maintaining commands for both config tools; a resource creation sample:


if $USE_PCS; then
  pcs resource create ClusterIP IPaddr2 ip=192.168.0.120 cidr_netmask=32
else
  crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params \
    ip=192.168.0.120 cidr_netmask=32 op monitor interval=30s
fi

There are not many places where Pacemaker configuration would be 
required, so I think this is acceptable. Any other opinions?


Jan


[1] 
https://blueprints.launchpad.net/ceilometer/+spec/central-agent-improvement




Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-16 Thread Jan Provazník

On 05/12/2014 10:35 AM, Dmitriy Shulyak wrote:

Adding haproxy (or keepalived with LVS for load balancing) will
require binding haproxy and the openstack services on different sockets.
AFAIK there are 3 approaches that tripleo could go with.
Consider configuration with 2 controllers:

haproxy:
  nodes:
  - name: controller0
    ip: 192.0.2.20
  - name: controller1
    ip: 192.0.2.21

1. Binding haproxy on virtual ip and standard ports

haproxy:
  services:
  - name: horizon
    proxy_ip: 192.0.2.22 (virtual ip)
    port: 80
    proxy_port: 80
  - name: neutron
    proxy_ip: 192.0.2.22 (virtual ip)
    proxy_port: 9696
    port: 9696

Pros:
- No additional modifications to elements are required


Actually openstack services elements have to be updated to bind to local
address only.


HA-Proxy version 1.4.24 2013/06/17
What was the reason this approach was dropped?


IIRC the major reason was that having 2 services on the same port (but
different interfaces) would be too confusing for anyone who is not aware
of this fact.



2. Haproxy listening on standard ports, services on non-standard

haproxy:
  services:
  - name: horizon
    proxy_ip: 192.0.2.22 (virtual ip)
    port: 8080
    proxy_port: 80
  - name: neutron
    proxy_ip: 192.0.2.22 (virtual ip)
    proxy_port: 9696
    port: 9797

Pros:
- No changes will be required to the init-keystone part of the workflow
- Proxied services will be accessible on the accustomed ports


Bear in mind that we already use non-standard SSL ports for the public
endpoints. Also, extra work will be required for setting up the stunnels
(the openstack-ssl element).


- No changes to configs where service ports need to be hardcoded,
for example in nova.conf: https://review.openstack.org/#/c/92550/

Cons:
- Config files would have to be changed to make the ports configurable


Another con is updating the selinux and firewall rules on each node.



3. haproxy on non-standard ports, with services on standard

haproxy:
  services:
  - name: horizon
    proxy_ip: 192.0.2.22 (virtual ip)
    port: 8080
    proxy_port: 80
  - name: neutron
    proxy_ip: 192.0.2.22 (virtual ip)
    proxy_port: 9797
    port: 9696

Notice that I changed only the port for neutron; the main endpoint for
horizon should listen on the default http or https ports.



Agree that having the 2 service ports switched one way rather than the 
other is sub-optimal.



Basically it is the opposite of approach 2. I would prefer to go with 2,
because it requires only minor refactoring.

Thoughts?



Options 2 and 3 seem quite equal based on pros vs cons. Maybe we should 
reconsider option 1?


Jan



[openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-25 Thread Jan Provazník

Hello,
one of the missing bits for running multiple control nodes in the Overcloud 
is setting up the endpoints in Keystone to point to HAProxy, which will 
listen on a virtual IP and non-standard ports.


HAProxy ports are defined in the heat template, e.g.:

haproxy:
  nodes:
  - name: control1
ip: 192.0.2.5
  - name: control2
ip: 192.0.2.6
  services:
  - name: glance_api_cluster
proxy_ip: 192.0.2.254 (=virtual ip)
proxy_port: 9293
port: 9292


means that Glance's Keystone endpoint should be set to:
http://192.0.2.254:9293/
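Whichever option wins, deriving the endpoint URLs from the haproxy metadata
rather than hardcoding the proxy ports could look like this sketch (the
metadata layout follows the example above):

```python
def endpoint_urls(metadata):
    """Map each proxied service to the URL its Keystone endpoint
    should use: the service's virtual IP plus the haproxy proxy_port
    (not the backend port the service itself binds to)."""
    urls = {}
    for svc in metadata['services']:
        urls[svc['name']] = 'http://%s:%d/' % (svc['proxy_ip'],
                                               svc['proxy_port'])
    return urls
```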

ATM the Keystone setup is done from devtest_overcloud.sh when the Overcloud 
stack creation successfully completes. I wonder which of the following 
options for setting up endpoints in HA mode is preferred by the community?:
1) leave it in the post-stack-create phase and extend the init-keystone 
script. But then how to deal with the list of non-standard ports 
(proxy_port in the example above):
  1a) consider these non-standard ports static and just hardcode 
them (similar to what we do with the SSL ports already). But the ports would 
be hardcoded in 2 places (the heat template and this script). If a user 
changes them in the heat template, he has to change them in the 
init-keystone script too.
  1b) the init-keystone script would fetch the list of ports from the heat 
stack description (if that's possible)


The long-term plan seems to be to rewrite the Keystone setup into 
os-cloud-config:
https://blueprints.launchpad.net/tripleo/+spec/tripleo-keystone-cloud-config
So an alternative to extending the init-keystone script would be to 
implement it as part of the blueprint; either way, the concept of keeping 
the Keystone setup in the post-stack-create phase remains.



2) do the Keystone setup from inside the Overcloud:
Extend the keystone element; the steps done in the init-keystone script 
would be done in keystone's os-refresh-config script. This script would 
have to be called on only one of the nodes in the cluster and only once 
(though we already do a similar check for other services - mysql/rabbitmq - 
so I don't think this is a problem). Then this script can easily get the 
list of haproxy ports from the heat metadata. This looks like the more 
attractive option to me - it eliminates an extra post-create config step.
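The run-once guard mentioned above (mirroring what is already done for
mysql/rabbitmq) could be as simple as this sketch; the way the node list is
obtained from the heat metadata is an assumption here:

```python
def should_configure_keystone(local_hostname, cluster_nodes):
    """Sketch: run the endpoint setup only on the 'first' node of
    the cluster (lowest-sorted name), so it happens exactly once
    per overcloud regardless of how many controllers exist."""
    return local_hostname == sorted(cluster_nodes)[0]
```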


Related to the Keystone setup is also the plan around the keys/cert setup 
described here:

http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain the same no matter which of the options 
above is used.



What do you think?

Jan



Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Jan Provazník

On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core


+1 to all

Jan



Re: [openstack-dev] [TripleO][reviews] We're falling behind

2014-03-26 Thread Jan Provazník

On 03/25/2014 09:17 PM, Robert Collins wrote:

TripleO has just seen an influx of new contributors. \o/. Flip side -
we're now slipping on reviews /o\.

In the meeting today we had basically two answers: more cores, and
more work by cores.

We're slipping by 2 reviews a day, which given 16 cores is a small amount.

I'm going to propose some changes to core in the next couple of days -
I need to go and re-read a bunch of reviews first - but, right now we
don't have a hard lower bound on the number of reviews we request
cores commit to (on average).

We're seeing 39/day from the 16 cores - which isn't enough, as we're
falling behind. That's 2.5 or so per core. So - I'd like to ask all cores to
commit to doing 3 reviews a day, across all of tripleo (e.g. if your
favourite stuff is all reviewed, find two new things to review, even if
outside your comfort zone :)).

And we always need more cores - so if you're not a core, this proposal
implies that we'll be asking that you a) demonstrate you can sustain 3
reviews a day on average as part of stepping up, and b) be willing to
commit to that.

Obviously if we have enough cores we can lower the minimum commitment
- so I don't think this figure should be fixed in stone.

And now - time for a loose vote - who (who is a tripleo core today)
supports / disagrees with this proposal - lets get some consensus
here.

I'm in favour, obviously :); though it is hard to put reviews ahead of
direct itch-scratching, it's the only way to scale the project.

-Rob



+1
