Re: [openstack-dev] Minimum version of shred in our supported distros?

2017-08-21 Thread Michael Still
But to do that I'd have to learn how to use git and then you wouldn't be
forced to take the 17 unrelated patches!

You see where I'm going with this right?

An alternative would be to move it to the start of that series before the
-2 of doom.

Michael

On 22 Aug. 2017 2:55 am, "Matt Riedemann"  wrote:

> On 8/20/2017 1:11 AM, Michael Still wrote:
>
>> Specifically we could do something like this:
>> https://review.openstack.org/#/c/495532
>>
>
> Sounds like we're OK with doing this in Queens given the other discussion
> in this thread. However, this is part of a much larger series. It looks
> like it doesn't need to be though, so could you split this out and we could
> just merge it on its own?
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] RequestSpec questions about force_hosts/nodes and requested_destination

2017-08-21 Thread Matt Riedemann
I don't dabble in the RequestSpec code much, but in trying to fix bug 
1712008 [1] I'm venturing in there and have some questions. This is 
mostly an email to Sylvain for when he gets back from vacation but I 
wanted to dump it before moving forward.


Mainly, what is the difference between 
RequestSpec.force_hosts/force_nodes and RequestSpec.requested_destination?


When should one be used over the other? I take it that 
requested_destination is the newest and coolest thing and we should 
always use that first, and that's what the nova-api code is using, but I 
also see the scheduler code checking force_hosts/force_nodes.


Is that all legacy compatibility code? And if so, why don't we 
handle requested_destination in RequestSpec routines like 
reset_forced_destinations() and to_legacy_filter_properties_dict()? 
For the latter: if it's a new-style RequestSpec with 
requested_destination set, but we have to backport and call 
to_legacy_filter_properties_dict(), shouldn't requested_destination be 
used to set force_hosts/force_nodes on the old-style filter properties?
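
To make the question concrete, here is a rough sketch of the kind of compat
handling I would have expected in to_legacy_filter_properties_dict(). This is
not the actual nova code; the Destination attribute names (host/node) and the
overall shape are my assumptions:

def to_legacy_filter_properties_dict(spec):
    props = {}
    # existing behaviour: copy the old-style forced host/node lists
    if spec.force_hosts:
        props['force_hosts'] = list(spec.force_hosts)
    if spec.force_nodes:
        props['force_nodes'] = list(spec.force_nodes)
    # the missing piece I'm asking about: also translate the new-style
    # destination, so a backported request still lands on the forced host
    dest = getattr(spec, 'requested_destination', None)
    if dest is not None and not props.get('force_hosts'):
        if getattr(dest, 'host', None):
            props['force_hosts'] = [dest.host]
        if getattr(dest, 'node', None):
            props['force_nodes'] = [dest.node]
    return props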


Since RequestSpec.requested_destination is the thing that restricts a 
move operation to a single cell, it seems pretty important to always be 
using that field when forcing where an instance is moving to. But I'm 
confused about whether or not both requested_destination *and* 
force_hosts/force_nodes should be set since the compat code doesn't seem 
to transform the former into the latter.


If this is all transitional code, we should really document the hell out 
of this in the RequestSpec class itself for anyone trying to write new 
client side code with it, like me.


[1] https://bugs.launchpad.net/nova/+bug/1712008

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-08-21 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Finishing the ironic pike release
1.1. needs the reno prelude: https://review.openstack.org/#/c/495316/
1.2. also fix for iRMC BFV capability name? +1 rloo master: 
https://review.openstack.org/495736
2. Refactoring of the way we access clients: 
https://review.openstack.org/#/q/topic:bug/1699547
3. Review specs in preparation for the PTG

Pike Release Plan
=================

- As of Thu Aug 17, in the absence of the PTL, as discussed with TheJulia, rloo, 
sambetts, vdrok :)
- see discussion before/after 
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2017-08-17.log.html#t2017-08-17T13:18:35

1. Revert reno-prelude patch: https://review.openstack.org/494542 - MERGED
2. Cut a stable/pike release, version 9.0.0 - proposed to openstack/releases 
https://review.openstack.org/#/c/494530/2 - ttx has confirmed he will process 
the request - DONE
3.1. For CI to work on master, these need to land: 
- Increase host_subset_size for ironic 
https://review.openstack.org/#/c/493990/ - MERGED on all branches
- Adds 9.0 to release_mappings
- master: https://review.openstack.org/494620 - MERGED
- stable/pike: https://review.openstack.org/#/c/494662/ - MERGED
- Get rid of sourcing stackrc in grenade settings 
https://review.openstack.org/#/c/480905/ MERGED
3.2. For CI to work on stable/pike, this needs to land:
- Increase host_subset_size for ironic 
https://review.openstack.org/#/c/493991/ - MERGED
- release mapping for 9.0 https://review.openstack.org/#/c/494662/ - 
MERGED
4. Land patches in master and backport to stable/pike. NOTE: the backported 
patch must pass CI first, before landing master patch. - DONE
- patches for iDRAC hardware type 
(https://review.openstack.org/#/c/491263/ MERGED on master)
- https://review.openstack.org/494737 MERGED
5. reno-prelude patch - actually, TheJulia and rloo think this should ONLY go 
into stable/pike branch... ++
- (TheJulia) Local tests reveal that the output still appears as 
expected. Master branch will need to receive "sem-ver: feature" afterwards.
6. Cut first official stable/pike release 9.1.0 from head of stable/pike
(TheJulia) I believe 2 are required: 9.0.1 with a reno stating 
what occurred, in case anyone tries to package >9.0.x,<9.1.0. 9.0.1 released; 
9.1.0 to follow in a few days
post-mortem: discuss grenade issues wrt not having a stable/release branch 
until after 'every project that counts' has one; we need to come up with a process 
to handle this, since it has been happening since Ocata: 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/111849.html


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 14 Aug 2017 and 21 Aug 2017)
- Ironic: 246 bugs + 259 wishlist items (+1). 23 new (-1), 187 in progress 
(-1), 0 critical, 32 high (+1) and 31 incomplete
- Inspector: 12 bugs + 29 wishlist items (+1). 2 new, 10 in progress, 0 
critical, 2 high and 3 incomplete
- Nova bugs with Ironic tag: 17. 0 new, 0 critical, 1 high

CI refactoring and missing test coverage

- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO

Essential Priorities


Nova resource class based scheduling changes (dtantsur)
--------------------------------------------------------
- I have to add this, as there are things to finish by end of Pike to avoid 
problems in Queens
- TODO as of 21 Aug 2017:
- all done
- devstack:
- always set resource class: https://review.openstack.org/491777 MERGED
- support scheduling based on rsc: https://review.openstack.org/476968 
MERGED
- nova:
- fix reporting inventory: https://review.openstack.org/492964
- prevent updating resource_class for active nodes: 
https://review.openstack.org/#/c/492216/ MERGED
- upgrade documentation and reno: https://review.openstack.org/491773 MERGED
- Optionally:
- integration tests: https://review.openstack.org/#/c/443628/

Generic boot-from-volume (TheJulia, dtantsur)
---------------------------------------------
- BFV Meetings on hold until September.
- specs and blueprints:

Re: [openstack-dev] [release][ptl] tools for creating new releases

2017-08-21 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-08-21 11:21:59 -0400:
> Excerpts from Dmitry Tantsur's message of 2017-08-15 14:11:05 +0200:
> > On 08/08/2017 03:30 PM, Doug Hellmann wrote:
> > > We realized recently that we haven't publicized some of the tools
> > > in the releases repository very well. One tool that will be useful
> > > this week as you prepare your release candidates is the 'new-release'
> > > command, which edits a deliverable file to add a new release from
> > > HEAD of the given branch, automatically computing the next version
> > > number based on the inputs.
> > > 
> > > Use the ``venv`` tox environment to run the tool, like this:
> > > 
> > > $ tox -e venv -- new-release SERIES DELIVERABLE TYPE
> > > 
> > > The SERIES value should be the release series, such as "pike".
> > > 
> > > The DELIVERABLE value should be the deliverable name, such as
> > > "oslo.config" or "cinder".
> > > 
> > > The TYPE value should be one of "bugfix", "feature", "major",
> > > "milestone", or "rc".
> > > 
> > > If the most recent release of cinder during the pike series is
> > > 11.0.0.0b3 then running:
> > > 
> > > $ tox -e venv -- new-release pike cinder rc
> > 
> > On systems with Python 3 by default this fails on installing 
> > lazr.restfulclient. 
> > I think we should add
> > 
> >   basepython = python2
> > 
> > for now.

I wonder if you have an old copy of the releases repo. In master we
explicitly set basepython to python3 and lazr.restfulclient is no longer
a dependency.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] PTG planning etherpad

2017-08-21 Thread Dmitry Tantsur

A reminder that the PTG is coming :)

We have only two Mondays left, and Sep 4th is apparently a holiday in the US. 
That means we have to more or less decide on the schedule next Monday, Aug 28th. 
Please add your ideas by that time!


On 07/28/2017 12:19 PM, Dmitry Tantsur wrote:

Hi all!

It was already announced on the meeting, but not on the ML.
Here is our planning etherpad for the PTG:
https://etherpad.openstack.org/p/ironic-queens-ptg

Please check "The Rules" section before proposing a topic. Please also add 
yourself to the list of potential attendees in the bottom.


Thanks



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Minimum version of shred in our supported distros?

2017-08-21 Thread Matt Riedemann

On 8/20/2017 1:11 AM, Michael Still wrote:
Specifically we could do something like this: 
https://review.openstack.org/#/c/495532


Sounds like we're OK with doing this in Queens given the other 
discussion in this thread. However, this is part of a much larger 
series. It looks like it doesn't need to be though, so could you split 
this out and we could just merge it on its own?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] group/host specific config file overrides: how-to?

2017-08-21 Thread Markus Zoeller
On 21.08.2017 16:40, Andy McCrae wrote:
> Hey Markus,
> 
> 
>> I'm wondering which possibilities I have to do group/host specific
>> config file overrides. After reading [1], I'm still a little clueless.
>> To be specific, I have this setup (expressed as Ansible inventory file):
>>
>> [z_compute_nodes]
>> compute1
>> # more nodes
>> [x_compute_nodes]
>> compute2
>> # more nodes
>> [computes:children]
>> z_compute_nodes
>> x_compute_nodes
>>
>> As an example, I want to set Nova's config option
>> `reserved_host_memory_mb` of the `DEFAULT` config file section:
>>
>> ### nova.conf
>> [DEFAULT]
>> reserved_host_memory_mb=$VALUE
>>
>> My goal is this:
>>
>>  | reserved_host_memory_mb
>> --
>> compute1 | 256
>> compute2 | 512
>>
>> I know there are overrides like `nova_nova_conf_overrides`.
>> So I tried to set a default override in `user_variables.yml`:
>>
>> ### /etc/openstack_deploy/user_variables.yml 
>>
>> nova_nova_conf_overrides:
>>   DEFAULT:
>> reserved_host_memory_mb: 512
>>
>> But I wanted to override this depending on the host in
>> `openstack_user_config.yml`:
>>
>> ### /etc/openstack_deploy/openstack_user_config.yml 
>> # [...]
>> # nova hypervisors
>> compute_hosts:
>>   compute1:
>> ip: 192.168.100.12
>> host_vars:
>>   nova_nova_conf_overrides:
>> DEFAULT:
>>   reserved_host_memory_mb: 256
>>   compute2:
>> ip: 192.168.100.10
>>
> 
> Try changing "host_vars" to "container_vars".
> If that doesn't work let me know, I'll spin up a test to recreate the
> actual problem, but at a glance that looks correct otherwise.
> 


Replacing `host_vars` with `container_vars` didn't have an effect:

### controller1: /etc/openstack_deploy/openstack_user_config.yml
# nova hypervisors
compute_hosts:
  compute1:
    ip: 192.168.100.12
    container_vars:
      nova_nova_conf_overrides:
        DEFAULT:
          reserved_host_memory_mb: 256
  compute2:
    ip: 192.168.100.10

Both compute nodes still have the same $VALUE, although `compute1`
should have 256:

### compute1: /etc/nova/nova.conf
root@compute1:~# grep reserved_host_memory_mb /etc/nova/nova.conf
reserved_host_memory_mb = 512


### compute2: /etc/nova/nova.conf
root@compute2:~# grep reserved_host_memory_mb /etc/nova/nova.conf
reserved_host_memory_mb = 512

I'd like to avoid introducing some "clever" dict merging algorithm I
won't understand anymore after a few weeks. :/
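
For reference, the kind of "clever" merge I mean would look something like
this (a plain Python sketch of my own, not anything openstack-ansible ships):

def deep_merge(base, override):
    # Return a new dict where values from 'override' win, but nested
    # dicts are merged key by key instead of being replaced wholesale.
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# e.g. deep_merge(nova_nova_conf_overrides, per_host_overrides) would keep
# the DEFAULT section and only change reserved_host_memory_mb per host.

I'd much rather lean on an existing mechanism than maintain that myself.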

Any hint is appreciated!

-- 
Regards, Markus Zoeller (markus_z)

>>
>> After testing this locally, it turned out that *both* hosts will
>> have 512 for $VALUE, which was not my intended configuration.
>>
>> Please note that I only used 2 hosts here as an example but I'm looking
>> for a solution which scales with much more hosts. I'm also applying
>> those settings in a templated way like this:
>>
>> ### /etc/openstack_deploy/openstack_user_config.yml 
>> # [...]
>> # nova hypervisors
>> compute_hosts:
>> {% for host in groups['computes'] %}
>>   {{ hostvars[host]['inventory_hostname'] }}:
>> ip: {{ hostvars[host]['ansible_host'] }}
>> {% endfor %}
>>
>> The reason is, that I use the same steps for different environments
>> (dev, test, prod) with a different amount of nodes.
>>
>> Any tips how to do this properly?
>>
>>
> Andy
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [release] [stable] pike release

2017-08-21 Thread Dmitry Tantsur

On 08/21/2017 05:44 PM, Doug Hellmann wrote:

Excerpts from Sam Betts (sambetts)'s message of 2017-08-21 10:01:35 +:

Quick reply with my thoughts in-line.

Sam

On 21/08/2017, 10:13, "Dmitry Tantsur"  wrote:

 (adding the release and stable team just for their information)
 
 Thanks Julia and everyone for handling this situation while I was out. More

 comments inline.
 
 On 08/17/2017 07:13 PM, Julia Kreger wrote:

 > Greetings everyone!
 >
 > As some of you may have noticed, we released ironic 9.0.0 today. But
 > wait! There is more!
 >
 > We triggered this release due to a number of issues, one of which was
 > that we learned that we needed the stable/pike branch for our grenade
 > jobs to execute properly. This was not done previously because
 > Ironic’s release model is incompatible with making release candidate
 > releases.
 
 Yep :( So, I think the lesson to learn is to create our stable/XXX branch at the

 same time as the other projects. We kind of knew that already, but did not
 anticipate such a huge breakage so quickly. I suggest we don't try it in 
Queens :)


Yes, we do try to encourage teams to branch around that RC1 period when
the milestone-based projects branch to avoid these sorts of issues.

 
 Now, with that in place we still have two options:

 1. A conservative one - make the branching the hard feature freeze, 
similar to
 other projects. We may start with a soft freeze at around M3, and just 
move into
Queens when stable/queens is created. At that point, what is out - is out.
 2. Alternative - continue making selected feature backports until the final
 freeze roughly one week before the final release. This kind of contradicts
 calling a branch "stable" though.


I see that Ironic didn't really take advantage of the
cycle-with-intermediary model by preparing any intermediary releases
during pike, so perhaps it would be simplest to change the release
model to cycle-with-milestones and align with the other projects
that way?


This may be an option, though I personally don't see much value in doing 
milestone beta releases compared to intermediary ones.


The problem with this cycle was that we tried to land several huge features, 
and we did not want to release them unfinished. There are some strong voices in 
the community to avoid such a situation in Queens.


In any case, we'll discuss it on the PTG, and will consider your proposal too.



 
 I don't have a strong opinion, but I'm slightly more in favor of the

conservative option #1 to avoid confusing people and complicating the 
process.
 
 Thoughts?


Personally, I think option 2 still makes sense, and it aligns us closely with 
the process in the other projects, the difference between us and them is that 
their branch is cut using a release candidate instead of a real release. The 
act of backporting things into the stable branch and then re-releasing is the 
same though.

Another alternative I wonder if we should consider is cutting our branch 
earlier in the cycle, when we make our first intermediary release, and then 
finding out if we can sync the branches at each release time instead of 
backporting everything. E.g. git checkout stable/X, git reset --hard 
origin/master or git rebase master, git push. Doing this will allow us to 
retain the git history and same commit ids from master to stable/X until master 
stops developing stable/X and moves on to stable/X+1. I think another advantage 
of this is it also allows people to find and use our latest intermediary 
releases easier. But I don’t know how nicely this would work with all the 
tooling etc the release team has in place.


I don't think that is something the release team would be prepared
to support. We're trying to avoid having every project handling
releases and branches in their own way, because it makes tooling
that much harder to deal with.


+1 to this, I don't think we should become too creative :)



 
 >

 > Once we’ve confirmed that our grenade testing is passing, we will back
 > port patches we had previously approved, but that had not landed, from
 > master to stable/pike.
 
 ++ I've approved a few patches already, and will continue approving them today.
 
 >

 > As a result, please anticipate Ironic’s official Pike release for this
 > cycle to be 9.1.0, if the stars, gates, and job timeouts align with
 > us.
 
 Right, I think we will request it on Wednesday, to allow a bit more time to test

 our newly populated not-so-stable stable/pike :)
 
 >

 > If there are any questions, please feel free to stop by
 > #openstack-ironic. We have also been keeping our general purpose
 > whiteboard[1] up to date, you can see our notes regarding our current
 > plan starting at line 120, and notes regarding gate failures and

Re: [openstack-dev] [ironic] [release] [stable] pike release

2017-08-21 Thread Doug Hellmann
Excerpts from Sam Betts (sambetts)'s message of 2017-08-21 10:01:35 +:
> Quick reply with my thoughts in-line.
> 
> Sam
> 
> On 21/08/2017, 10:13, "Dmitry Tantsur"  wrote:
> 
> (adding the release and stable team just for their information)
> 
> Thanks Julia and everyone for handling this situation while I was out. 
> More 
> comments inline.
> 
> On 08/17/2017 07:13 PM, Julia Kreger wrote:
> > Greetings everyone!
> > 
> > As some of you may have noticed, we released ironic 9.0.0 today. But
> > wait! There is more!
> > 
> > We triggered this release due to a number of issues, one of which was
> > that we learned that we needed the stable/pike branch for our grenade
> > jobs to execute properly. This was not done previously because
> > Ironic’s release model is incompatible with making release candidate
> > releases.
> 
> Yep :( So, I think the lesson to learn is to create our stable/XXX branch 
> at the 
> same time as the other projects. We kind of knew that already, but did 
> not 
> anticipate such a huge breakage so quickly. I suggest we don't try it in 
> Queens :)

Yes, we do try to encourage teams to branch around that RC1 period when
the milestone-based projects branch to avoid these sorts of issues.

> 
> Now, with that in place we still have two options:
> 1. A conservative one - make the branching the hard feature freeze, 
> similar to 
> other projects. We may start with a soft freeze at around M3, and just 
> move into 
> Queens when stable/queens is created. At that point, what is out - is out.
> 2. Alternative - continue making selected feature backports until the 
> final 
> freeze roughly one week before the final release. This kind of 
> contradicts 
> calling a branch "stable" though.

I see that Ironic didn't really take advantage of the
cycle-with-intermediary model by preparing any intermediary releases
during pike, so perhaps it would be simplest to change the release
model to cycle-with-milestones and align with the other projects
that way?

> 
> I don't have a strong opinion, but I'm slightly more in favor of the 
> conservative option #1 to avoid confusing people and complicating the 
> process.
> 
> Thoughts?
> 
> Personally, I think option 2 still makes sense, and it aligns us closely with 
> the process in the other projects, the difference between us and them is that 
> their branch is cut using a release candidate instead of a real release. The 
> act of backporting things into the stable branch and then re-releasing is the 
> same though.
> 
> Another alternative I wonder if we should consider is cutting our branch 
> earlier in the cycle, when we make our first intermediary release, and then 
> finding out if we can sync the branches at each release time instead of 
> backporting everything. E.g. git checkout stable/X, git reset --hard 
> origin/master or git rebase master, git push. Doing this will allow us to 
> retain the git history and same commit ids from master to stable/X until 
> master stops developing stable/X and moves on to stable/X+1. I think another 
> advantage of this is it also allows people to find and use our latest 
> intermediary releases easier. But I don’t know how nicely this would work 
> with all the tooling etc the release team has in place.

I don't think that is something the release team would be prepared
to support. We're trying to avoid having every project handling
releases and branches in their own way, because it makes tooling
that much harder to deal with.

> 
> > 
> > Once we’ve confirmed that our grenade testing is passing, we will back
> > port patches we had previously approved, but that had not landed, from
> > master to stable/pike.
> 
> ++ I've approved a few patches already, and will continue approving them 
> today.
> 
> > 
> > As a result, please anticipate Ironic’s official Pike release for this
> > cycle to be 9.1.0, if the stars, gates, and job timeouts align with
> > us.
> 
> Right, I think we will request it on Wednesday, to allow a bit more time 
> to test 
> our newly populated not-so-stable stable/pike :)
> 
> > 
> > If there are any questions, please feel free to stop by
> > #openstack-ironic. We have also been keeping our general purpose
> > whiteboard[1] up to date, you can see our notes regarding our current
> > plan starting at line 120, and notes regarding gate failures and
> > issues starting at line 37.
> > Thanks!
> > 
> > -Julia
> > 
> > [1]: https://etherpad.openstack.org/p/IronicWhiteBoard
> > 
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > 

Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-21 Thread Paul Belanger
On Mon, Aug 21, 2017 at 10:43:07AM +1200, Steve Baker wrote:
> On Thu, Aug 17, 2017 at 4:13 PM, Steve Baker  wrote:
> 
> >
> >
> > On Thu, Aug 17, 2017 at 10:47 AM, Emilien Macchi 
> > wrote:
> >
> >>
> >> > Problem #3: from Ocata to Pike: all container images are
> >> > uploaded/specified, even for services not deployed
> >> > https://bugs.launchpad.net/tripleo/+bug/1710992
> >> > The CI jobs are timeouting during the upgrade process because
> >> > downloading + uploading _all_ containers in local cache takes more
> >> > than 20 minutes.
> >> > So this is where we are now, upgrade jobs timeout on that. Steve Baker
> >> > is currently looking at it but we'll probably offer some help.
> >>
> >> Steve is still working on it: https://review.openstack.org/#/c/448328/
> >> Steve, if you need any help (reviewing or coding) - please let us
> >> know, as we consider this thing important to have and probably good to
> >> have in Pike.
> >>
> >
> > I have a couple of changes up now, one to capture the relationship between
> > images and services[1], and another to add an argument to the prepare
> > command to filter the image list based on which services are containerised
> > [2]. Once these land, all the calls to prepare in CI can be modified to
> > also specify these heat environment files, and this will reduce uploads to
> > only the images required.
> >
> > [1] https://review.openstack.org/#/c/448328/
> > [2] https://review.openstack.org/#/c/494367/
> >
> >
> Just updating progress on this, with infra caching from docker.io I'm
> seeing transfer times of 16 minutes (an improvement on 20 minutes ->
> $timeout).
> 
> Only transferring the required images [3] reduces this to 8 minutes.
> 
> [3] https://review.openstack.org/#/c/494767/

I'd still like to have the docker daemon running with debug:True, just for peace of
mind. In our testing of the cache, it was possible for docker to silently
fail on the reverse proxy cache and hit docker.io directly. Regardless, this
is good news.

Because of the size of the containers we are talking about here, I think it is a
great idea to only download / cache the images that will be used for the job.

Let me know if you see any issues.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [release] [stable] pike release

2017-08-21 Thread Loo, Ruby
Hi,

I'd like to get more information from the release folks (wrt grenade support or 
lack thereof, what might be reasonable or not to do, etc.), and how other OpenStack 
projects that use the same release model as ironic do it. I think that 
whatever we do, it ought to be the easiest for all concerned; e.g. I don't 
want to have to keep track of which patches in master need to be backported to 
the stable branch if there is going to be more than a small handful of them. 
And for users, it ought to be (somewhat) clear what these releases 
are/mean.

We saw at least one issue, grenade jobs needing stable/pike branch. Were there 
others?

--ruby

From: "Sam Betts (sambetts)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, August 21, 2017 at 6:01 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] [release] [stable] pike release

Quick reply with my thoughts in-line.

Sam

On 21/08/2017, 10:13, "Dmitry Tantsur" 
> wrote:

(adding the release and stable team just for their information)

Thanks Julia and everyone for handling this situation while I was out. More
comments inline.

On 08/17/2017 07:13 PM, Julia Kreger wrote:
> Greetings everyone!
>
> As some of you may have noticed, we released ironic 9.0.0 today. But
> wait! There is more!
>
> We triggered this release due to a number of issues, one of which was
> that we learned that we needed the stable/pike branch for our grenade
> jobs to execute properly. This was not done previously because
> Ironic’s release model is incompatible with making release candidate
> releases.

Yep :( So, I think the lesson to learn is to create our stable/XXX branch 
at the
same time as the other projects. We kind of knew that already, but did not
anticipate such a huge breakage so quickly. I suggest we don't try it in 
Queens :)

Now, with that in place we still have two options:
1. A conservative one - make the branching the hard feature freeze, similar 
to
other projects. We may start with a soft freeze at around M3, and just move 
into
Queens when stable/queens is created. At that point, what is out - is out.
2. Alternative - continue making selected feature backports until the final
freeze roughly one week before the final release. This kind of contradicts
calling a branch "stable" though.

I don't have a strong opinion, but I'm slightly more in favor of the
conservative option #1 to avoid confusing people and complicating the 
process.

Thoughts?

Personally, I think option 2 still makes sense, and it aligns us closely with 
the process in the other projects, the difference between us and them is that 
their branch is cut using a release candidate instead of a real release. The 
act of backporting things into the stable branch and then re-releasing is the 
same though.

Another alternative I wonder if we should consider is cutting our branch 
earlier in the cycle, when we make our first intermediary release, and then 
finding out if we can sync the branches at each release time instead of 
backporting everything. E.g. git checkout stable/X, git reset --hard 
origin/master or git rebase master, git push. Doing this will allow us to 
retain the git history and same commit ids from master to stable/X until master 
stops developing stable/X and moves on to stable/X+1. I think another advantage 
of this is it also allows people to find and use our latest intermediary 
releases easier. But I don’t know how nicely this would work with all the 
tooling etc the release team has in place.

>
> Once we’ve confirmed that our grenade testing is passing, we will back
> port patches we had previously approved, but that had not landed, from
> master to stable/pike.

++ I've approved a few patches already, and will continue approving them 
today.

>
> As a result, please anticipate Ironic’s official Pike release for this
> cycle to be 9.1.0, if the stars, gates, and job timeouts align with
> us.

Right, I think we will request it on Wednesday, to allow a bit more time to 
test
our newly populated not-so-stable stable/pike :)

>
> If there are any questions, please feel free to stop by
> #openstack-ironic. We have also been keeping our general purpose
> whiteboard[1] up to date, you can see our notes regarding our current
> plan starting at line 120, and notes regarding gate failures and
> issues starting at line 37.
> Thanks!
>
> -Julia
>
> [1]: https://etherpad.openstack.org/p/IronicWhiteBoard
>
> __
> OpenStack Development Mailing List (not for usage questions)
> 

Re: [openstack-dev] [release][ptl] tools for creating new releases

2017-08-21 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-08-15 14:11:05 +0200:
> On 08/08/2017 03:30 PM, Doug Hellmann wrote:
> > We realized recently that we haven't publicized some of the tools
> > in the releases repository very well. One tool that will be useful
> > this week as you prepare your release candidates is the 'new-release'
> > command, which edits a deliverable file to add a new release from
> > HEAD of the given branch, automatically computing the next version
> > number based on the inputs.
> > 
> > Use the ``venv`` tox environment to run the tool, like this:
> > 
> > $ tox -e venv -- new-release SERIES DELIVERABLE TYPE
> > 
> > The SERIES value should be the release series, such as "pike".
> > 
> > The DELIVERABLE value should be the deliverable name, such as
> > "oslo.config" or "cinder".
> > 
> > The TYPE value should be one of "bugfix", "feature", "major",
> > "milestone", or "rc".
> > 
> > If the most recent release of cinder during the pike series is
> > 11.0.0.0b3 then running:
> > 
> > $ tox -e venv -- new-release pike cinder rc
> 
> On systems with Python 3 by default this fails on installing 
> lazr.restfulclient. 
> I think we should add
> 
>   basepython = python2
> 
> for now.

We should definitely address that. We need lazr.restfulclient to
talk to Launchpad, and we need that in the release scripts in
release-tools but I don't think we need it in the releases repo. Maybe
we can remove the dependency, or set it up so it is only installed when
python 2 is used?

Happy-to-review-patches-ly,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification subteam meeting is cancelled this week

2017-08-21 Thread Balazs Gibizer

Hi,

The notification subteam meeting is canceled this week.

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet neutron]

2017-08-21 Thread Alex Schultz
On Sat, Aug 19, 2017 at 12:37 AM, hanish gogada
 wrote:
> Hi all,
>
> Currently the neutron ml2 ovs agent puppet module does not support the
> configuration of ovsdb_connection. Is any work on this in progress?
>

We had 
https://github.com/openstack/puppet-neutron/blob/721fb14e1654d002b49d363dfbcca8fdddb46167/manifests/plugins/ovn.pp
but it seems that is deprecated in favor of
https://github.com/openstack/puppet-neutron/blob/721fb14e1654d002b49d363dfbcca8fdddb46167/manifests/plugins/ml2/ovn.pp
which would leverage https://github.com/openstack/puppet-ovn

Not sure what you need specifically and I'm not aware of any work in
this area at the moment.

Thanks,
-Alex

> Thanks
> hanish gogada
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] On idmapshift deprecation

2017-08-21 Thread Matt Riedemann

On 8/20/2017 3:28 AM, Michael Still wrote:
I'm going to take the general silence on this as permission to remove 
the idmapshift binary from nova. You're welcome.




The reality is that no one is using the LXC code as far as I know. 
Rackspace was the only one ever contributing changes for LXC and we 
never got a CI stood up for it in the gate. So if the changes break 
something, being a Rackspace employee yourself I'd hope you'd find out 
soon enough. So having said that, I think it's fine to go forward with 
removing the binary dependency if you can replace it with privsep.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [skip-level-upgrades][upgrades] Denver PTG room & etherpad

2017-08-21 Thread Lee Yarwood
Hello all,

This is a brief announcement to highlight that there will be a skip
level upgrades room again at the PTG in Denver. I'll be chairing the
room and have seeded the etherpad below with a few goal and topic ideas.
I'd really welcome additional input from others, especially if you were
present at the previous discussions in Boston!

https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades

Thanks in advance and see you in Denver!

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] FreeIPA Deployment

2017-08-21 Thread Juan Antonio Osorio
On Mon, Aug 21, 2017 at 5:48 PM, Ben Nemec  wrote:

>
>
> On 08/21/2017 01:45 AM, Juan Antonio Osorio wrote:
>
>> The second option seems like the most viable. Not sure how the TripleO
>> integration would go though. Care to elaborate on what you had in mind?
>>
>
> I can't remember if we discussed this when we were first implementing the
> ci job, but could FreeIPA run on the undercloud itself?  We could have the
> undercloud install process install FreeIPA before it does the rest of the
> undercloud install, and then the undercloud by default would talk to that
> local instance of FreeIPA.  We'd provide configuration options to allow use
> of a standalone server too, of course.
>

Right, this would have been the preferred option, and we did try to do
this. However, FreeIPA is not very flexible (it isn't at all) about its port
configuration, and unfortunately there are port conflicts. Hence we
decided to use a separate node.


> I feel like there was probably a reason we didn't do that in the first
> place (port conflicts?), but it would be the easiest option for deployers
> if we could make it work.
>
>
>> On Fri, Aug 18, 2017 at 9:11 PM, Emilien Macchi > > wrote:
>>
>> On Fri, Aug 18, 2017 at 8:34 AM, Harry Rybacki > > wrote:
>>  > Greetings Stackers,
>>  >
>>  > Recently, I brought up a discussion around deploying FreeIPA via
>>  > TripleO-Quickstart vs TripleO. This is part of a larger discussion
>>  > around expanding security related CI coverage for OpenStack.
>>  >
>>  > A few months back, I added the ability to deploy FreeIPA via
>>  > TripleO-Quickstart through three reviews:
>>  >
>>  > 1) Adding a role to deploy FreeIPA via OOOQ_E[1]
>>  > 2) Providing OOOQ with the ability to deploy a supplemental node
>>  > (alongside the undercloud)[2]
>>  > 3) Update the quickstart-extras playbook to deploy FreeIPA[3]
>>  >
>>  >
>>  > The reasoning behind this is as follows (copied from a conversation
>>  > with jaosorior):
>>  >
>>  >> So the deal is that both the undercloud and the overcloud need
>> to be registered as a FreeIPA client.
>>  >> This is because they need to authenticate to it in order to
>> execute actions.
>>  >>
>>  >> * The undercloud needs to have FreeIPA credentials because it's
>> running novajoin, which in turn
>>  >> executes requests to FreeIPA in order to create service principals
>>  >>  - The service principals are ultimately the service name and
>> the node name entries for which we'll
>>  >> requests the certificates.
>>  >> * The overcloud nodes need to be registered and authenticated to
>> FreeIPA (which right now happens > through a cloud-init script
>> provisioned by nova/nova-metadata) because that's how it requests
>>  >> certificates.
>>  >>
>>  >> So the flow is as follows:
>>  >>
>>  >> * FreeIPA node is provisioned.
>>  >>  - We'll appropriate credentials at this point.
>>  >>  - We register the undercloud as a FreeIPA client and get an OTP
>> (one time password) for it
>>  >> - We add the OTP to the undercloud.conf and enable novajoin.
>>  >> * We trigger the undercloud install.
>>  >>  - after the install, we have novajoin running, which is the
>> service that registers automatically the
>>  >> overcloud nodes to FreeIPA.
>>  >> * We trigger the overcloud deploy
>>  >>  - We need to set up a flag that tells the deploy to pass
>> appropriate nova metadata (which tells
>>  >> novajoin that the nodes should be registered).
>>  >>  - profit!! we can now get certificates from the CA (and do
>> other stuff that FreeIPA allows you to do,
>>  >> such as use kerberos auth, control sudo rights of the nodes'
>> users, etc.)
>>  >>
>>  >> Since the nodes need to be registered to FreeIPA, we can't rely
>> on FreeIPA being installed by
>>  >> TripleO, even if that's possible by doing it through a
>> composable service.
>>  >> If we would use a composable service to install FreeIPA, the
>> flow would be like this:
>>  >>
>>  >> * Install undercloud
>>  >> * Install overcloud with one node (running FreeIPA)
>>  >> * register undercloud node to FreeIPA and modify undercloud.conf
>>  >> * Update undercloud
>>  >> * scale overcloud and register the rest of the nodes to FreeIPA
>> through novajoin.
>>  >>
>>  >> So, while we could install FreeIPA with TripleO. This really
>> complicates the deployment to an
>>  >> unnecessary point.
>>  >>
>>  >> So I suggest keeping the current behavior, which treats FreeIPA
>> as a separate node to be
>>  >> provisioned before the undercloud). And if folks would like to
>> have a separate FreeIPA node for their > overcloud 

Re: [openstack-dev] [all][elections] Project Team Lead Election Conclusion and Results

2017-08-21 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-08-17 09:42:15 +0200:
> Kendall Nelson wrote:
> > Hello Everyone!
> > 
> > Thank you to the electorate, to all those who voted and to all
> > candidates who put their name forward for Project Team Lead (PTL) in
> > this election. A healthy, open process breeds trust in our decision
> > making capability; thank you to all those who make this process possible.
> > Now for the results of the PTL election process, please join me in
> > extending congratulations to the following PTLs:
> > [...]
> 
> Congratulations to all the elected PTLs, new names and returning ones!
> 
> The PTLs are at the same time a steward for their team, facilitating the
> work of their teammates, an ambassador for the project, communicating
> the work being done to the outside world, and a default contact point
> for anyone having questions about a given team.
> 
> That can be daunting, especially for larger teams. Don't hesitate to
> delegate as much of those tasks as you can!
> 
> Thank you again for your help. Let's make Queens a great cycle!
> 

And if any new PTLs would like advice or tips, please get in touch
with a former PTL, TC member, or other community leader. We're here
to help make your lives easier.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][ansible][tripleo][kolla][ptg] Pluggable drivers and protect plaintext secrets

2017-08-21 Thread Doug Hellmann
Excerpts from Raildo Mascena de Sousa Filho's message of 2017-08-17 12:16:15 
+:
> Hi all,
> 
> Should we reserve a room in the extra session ethercalc [0] or do we
> already have a time slot scheduled for that discussion?
> 
> [0] https://ethercalc.openstack.org/Queens-PTG-Discussion-Rooms

I think this topic was on Emilien's list for TripleO. Would the other
groups mind if the TripleO team hosts the discussion in their room? That
would save the more limited reservable rooms for discussions that don't
have an obvious host.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] FreeIPA Deployment

2017-08-21 Thread Ben Nemec



On 08/21/2017 01:45 AM, Juan Antonio Osorio wrote:
The second option seems like the most viable. Not sure how the TripleO 
integration would go though. Care to elaborate on what you had in mind?


I can't remember if we discussed this when we were first implementing 
the ci job, but could FreeIPA run on the undercloud itself?  We could 
have the undercloud install process install FreeIPA before it does the 
rest of the undercloud install, and then the undercloud by default would 
talk to that local instance of FreeIPA.  We'd provide configuration 
options to allow use of a standalone server too, of course.


I feel like there was probably a reason we didn't do that in the first 
place (port conflicts?), but it would be the easiest option for 
deployers if we could make it work.




On Fri, Aug 18, 2017 at 9:11 PM, Emilien Macchi > wrote:


On Fri, Aug 18, 2017 at 8:34 AM, Harry Rybacki > wrote:
 > Greetings Stackers,
 >
 > Recently, I brought up a discussion around deploying FreeIPA via
 > TripleO-Quickstart vs TripleO. This is part of a larger discussion
 > around expanding security related CI coverage for OpenStack.
 >
 > A few months back, I added the ability to deploy FreeIPA via
 > TripleO-Quickstart through three reviews:
 >
 > 1) Adding a role to deploy FreeIPA via OOOQ_E[1]
 > 2) Providing OOOQ with the ability to deploy a supplemental node
 > (alongside the undercloud)[2]
 > 3) Update the quickstart-extras playbook to deploy FreeIPA[3]
 >
 >
 > The reasoning behind this is as follows (copied from a conversation
 > with jaosorior):
 >
 >> So the deal is that both the undercloud and the overcloud need
to be registered as a FreeIPA client.
 >> This is because they need to authenticate to it in order to
execute actions.
 >>
 >> * The undercloud needs to have FreeIPA credentials because it's
running novajoin, which in turn
 >> executes requests to FreeIPA in order to create service principals
 >>  - The service principals are ultimately the service name and
the node name entries for which we'll
 >> requests the certificates.
 >> * The overcloud nodes need to be registered and authenticated to
FreeIPA (which right now happens > through a cloud-init script
provisioned by nova/nova-metadata) because that's how it requests
 >> certificates.
 >>
 >> So the flow is as follows:
 >>
 >> * FreeIPA node is provisioned.
 >>  - We'll appropriate credentials at this point.
 >>  - We register the undercloud as a FreeIPA client and get an OTP
(one time password) for it
 >> - We add the OTP to the undercloud.conf and enable novajoin.
 >> * We trigger the undercloud install.
 >>  - after the install, we have novajoin running, which is the
service that registers automatically the
 >> overcloud nodes to FreeIPA.
 >> * We trigger the overcloud deploy
 >>  - We need to set up a flag that tells the deploy to pass
appropriate nova metadata (which tells
 >> novajoin that the nodes should be registered).
 >>  - profit!! we can now get certificates from the CA (and do
other stuff that FreeIPA allows you to do,
 >> such as use kerberos auth, control sudo rights of the nodes'
users, etc.)
 >>
 >> Since the nodes need to be registered to FreeIPA, we can't rely
on FreeIPA being installed by
 >> TripleO, even if that's possible by doing it through a
composable service.
 >> If we would use a composable service to install FreeIPA, the
flow would be like this:
 >>
 >> * Install undercloud
 >> * Install overcloud with one node (running FreeIPA)
 >> * register undercloud node to FreeIPA and modify undercloud.conf
 >> * Update undercloud
 >> * scale overcloud and register the rest of the nodes to FreeIPA
through novajoin.
 >>
 >> So, while we could install FreeIPA with TripleO. This really
complicates the deployment to an
 >> unnecessary point.
 >>
 >> So I suggest keeping the current behavior, which treats FreeIPA
as a separate node to be
 >> provisioned before the undercloud). And if folks would like to
have a separate FreeIPA node for their > overcloud deployment (which
could provision certs for the tenants) then we could do that as a
 >> composable service, if people request it.
 >
 > I am now re-raising this to the group at large for discussion about
 > the merits of this approach vs deploying via TripleO itself.

There are 3 approaches here:

- Keep using Quickstart, which is of course not a viable option since
TripleO Quickstart is only used by CI and developers right now, not by
customers nor in production.
- Deploy your own Ansible playbooks or automation tool to deploy

Re: [openstack-dev] [openstack-ansible] group/host specific config file overrides: how-to?

2017-08-21 Thread Andy McCrae
Hey Markus,


> I'm wondering which possibilities I have to do group/host specific
> config file overrides. After reading [1], I'm still a little clueless.
> To be specific, I have this setup (expressed as Ansible inventory file):
>
> [z_compute_nodes]
> compute1
> # more nodes
> [x_compute_nodes]
> compute2
> # more nodes
> [computes:children]
> z_compute_nodes
> x_compute_nodes
>
> As an example, I want to set Nova's config option
> `reserved_host_memory_mb` of the `DEFAULT` config file section:
>
> ### nova.conf
> [DEFAULT]
> reserved_host_memory_mb=$VALUE
>
> My goal is this:
>
>  | reserved_host_memory_mb
> --
> compute1 | 256
> compute2 | 512
>
> I know there are overrides like `nova_nova_conf_overrides`.
> So I tried to set a default override in `user_variables.yml`:
>
> ### /etc/openstack_deploy/user_variables.yml 
>
> nova_nova_conf_overrides:
>   DEFAULT:
> reserved_host_memory_mb: 512
>
> But I wanted to override this depending on the host in
> `openstack_user_config.yml`:
>
> ### /etc/openstack_deploy/openstack_user_config.yml 
> # [...]
> # nova hypervisors
> compute_hosts:
>   compute1:
> ip: 192.168.100.12
> host_vars:
>   nova_nova_conf_overrides:
> DEFAULT:
>   reserved_host_memory_mb: 256
>   compute2:
> ip: 192.168.100.10
>

Try change "host_vars" to "container_vars".
If that doesn't work let me know, I'll spin up a test to recreate the
actual problem, but at a glance that looks correct otherwise.


>
> After testing this locally, it turned out that *both* hosts will
> have 512 for $VALUE, which was not my intended configuration.
>
> Please note that I only used 2 hosts here as an example but I'm looking
> for a solution which scales with much more hosts. I'm also applying
> those settings in a templated way like this:
>
> ### /etc/openstack_deploy/openstack_user_config.yml 
> # [...]
> # nova hypervisors
> compute_hosts:
> {% for host in groups['computes'] %}
>   {{ hostvars[host]['inventory_hostname'] }}:
> ip: {{ hostvars[host]['ansible_host'] }}
> {% endfor %}
>
> The reason is, that I use the same steps for different environments
> (dev, test, prod) with a different amount of nodes.
>
> Any tips how to do this properly?
>
>
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] group/host specific config file overrides: how-to?

2017-08-21 Thread Markus Zoeller

I'm wondering which possibilities I have to do group/host specific
config file overrides. After reading [1], I'm still a little clueless.
To be specific, I have this setup (expressed as Ansible inventory file):

[z_compute_nodes]
compute1
# more nodes
[x_compute_nodes]
compute2
# more nodes
[computes:children]
z_compute_nodes
x_compute_nodes

As an example, I want to set Nova's config option
`reserved_host_memory_mb` of the `DEFAULT` config file section:

### nova.conf
[DEFAULT]
reserved_host_memory_mb=$VALUE

My goal is this:

host     | reserved_host_memory_mb
---------|------------------------
compute1 | 256
compute2 | 512

I know there are overrides like `nova_nova_conf_overrides`.
So I tried to set a default override in `user_variables.yml`:

### /etc/openstack_deploy/user_variables.yml 

nova_nova_conf_overrides:
  DEFAULT:
    reserved_host_memory_mb: 512

But I wanted to override this depending on the host in
`openstack_user_config.yml`:

### /etc/openstack_deploy/openstack_user_config.yml 
# [...]
# nova hypervisors
compute_hosts:
  compute1:
    ip: 192.168.100.12
    host_vars:
      nova_nova_conf_overrides:
        DEFAULT:
          reserved_host_memory_mb: 256
  compute2:
    ip: 192.168.100.10

After testing this locally, it turned out that *both* hosts will
have 512 for $VALUE, which was not my intended configuration.

Please note that I only used 2 hosts here as an example but I'm looking
for a solution which scales with much more hosts. I'm also applying
those settings in a templated way like this:

### /etc/openstack_deploy/openstack_user_config.yml 
# [...]
# nova hypervisors
compute_hosts:
{% for host in groups['computes'] %}
  {{ hostvars[host]['inventory_hostname'] }}:
    ip: {{ hostvars[host]['ansible_host'] }}
{% endfor %}

The reason is, that I use the same steps for different environments
(dev, test, prod) with a different amount of nodes.

Any tips on how to do this properly?

References:
[1]
https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html


-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Mooney, Sean K


> -Original Message-
> From: Chris Dent [mailto:cdent...@anticdent.org]
> Sent: Monday, August 21, 2017 10:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [nova] [placement] [api] cache headers in
> placement service
> 
> On Mon, 21 Aug 2017, Jay Pipes wrote:
> > On 08/21/2017 04:59 AM, Chris Dent wrote:
> > We do have cache validation on the server side for resource classes.
> > Any time a resource class is added or deleted, we call
> > _RC_CACHE.clear(). Couldn't we add a single attribute to the
> > ResourceClassCache that returns the last time the cache was reset?
> 
> That's server side cache, of which the client side (or proxy side) has
> no visibility. If we had etags, and were caching etag to resource pairs
> when we sent out responses, we could then have a conditional GET
> handler which checked etags, returning 304 on a cache hit.
> At _RC_CACHE changes we could flush the etag cache.
[Mooney, Sean K] I agree this is likely needed if caching is used. One of the
changes Intel would like to make is to transition the attestation server
integration for trusted boot with our cloud integrity technologies to use
traits on the compute node instead of a custom filter to attest that the
server is trusted. In that case we would want to ensure that if we add or
remove a trait for a resource provider, the cache is invalidated. So we would
have to invalidate or update the etag every time we update the traits.
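[Mooney, Sean K] For clarity, the conditional GET flow I have in mind looks
roughly like this (my own minimal sketch, not placement's actual code; it
assumes a process-local dict mapping request path to etag that is cleared on
any write, e.g. when a trait or resource class changes):

import hashlib

_ETAG_CACHE = {}  # request path -> etag string

def handle_get(path, if_none_match, generate_body):
    # generate_body() is assumed to return the response body as bytes.
    cached_etag = _ETAG_CACHE.get(path)
    if cached_etag and if_none_match == cached_etag:
        # The client's cached copy is still valid: no body, just 304.
        return 304, {'ETag': cached_etag}, b''
    body = generate_body()
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    _ETAG_CACHE[path] = etag
    return 200, {'ETag': etag}, body

def invalidate_etags():
    # Called on any state change (trait or resource class added/removed).
    _ETAG_CACHE.clear()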
> 
> > But meh, you're right that the simpler solution is just to not do
> HTTP
> > caching.
> 
> 'xactly
> 
> > But then again, if the default behaviour of HTTP is to never cache
> > anything unless some cache-related headers are present [1] and you
> > *don't* want proxies to cache any placement API information, why are
> > we changing anything at all anyway? If we left it alone (and continue
> > not sending Cache-Control headers for anything), the same exact
> result would be achieved, no?
> 
> Essentially so we can put last-modified headers on things, which in RFC
> speak we SHOULD do. And if we do that then we SHOULD make sure no
> caching happens.
> 
> Also it seems like last-modified headers are a nice-to-have for that
> "unknown client" I brought up in the first message.
> 
> But as you correctly identify the immediate practical value to nova is
> pretty small, which is one of the reasons I was looking for the
> lightest-weight implementation.
> 
> --
> Chris Dent  (⊙_⊙') https://anticdent.org/
> freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type VIFHostDevice

2017-08-21 Thread pranab boruah
Thank you Sean K Mooney and Moshe Levi for your comments.

I have a few follow-up questions. Not looking for a detailed answer (I
know you guys must be busy :)). Looking for some basic info, and I will be
obliged if you can point me in a direction (link to code or docs) where
I can continue my research and understand more deeply.


1. What is the difference between the neutron port_binding extension's
vif_type and vnic_type?

2. How does a VIF object in os_vif (e.g. VIFHostDevice) get related to a
vif_type (direct)?

3. Where does the port_profile-related data get populated?

4. How is the decision to pick the correct networking back-end
os_vif plugin made?

I guess everything comes under port binding negotiation.

-Pranab

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [l2gw]

2017-08-21 Thread Lajos Katona

Hi,

We faced an issue with l2-gw-update: if there are connections for a gateway,
the update throws an exception (L2GatewayInUse), so an update is only
possible by first deleting the connections, doing the update, and then
adding the connections back.

It is not exactly clear why this restriction is there in the code (at
least I can't find it explained in the docs, in code comments, or in review).
As far as I can see, the check for network connections was introduced in this patch:
https://review.openstack.org/#/c/144097
(https://review.openstack.org/#/c/144097/21..22/networking_l2gw/db/l2gateway/l2gateway_db.py)

Could you please give me a little background on why the update operation is
not allowed on an l2gw with network connections?

Thanks in advance for the help.

Regards
Lajos

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [ironic] [release] [stable] pike release

2017-08-21 Thread Sam Betts (sambetts)
Quick reply with my thoughts in-line.

Sam

On 21/08/2017, 10:13, "Dmitry Tantsur"  wrote:

(adding the release and stable team just for their information)

Thanks Julia and everyone for handling this situation while I was out. More 
comments inline.

On 08/17/2017 07:13 PM, Julia Kreger wrote:
> Greetings everyone!
> 
> As some of you may have noticed, we released ironic 9.0.0 today. But
> wait! There is more!
> 
> We triggered this release due to a number of issues, one of which was
> that we learned that we needed the stable/pike branch for our grenade
> jobs to execute properly. This was not done previously because
> Ironic’s release model is incompatible with making release candidate
> releases.

Yep :( So, I think the lesson to learn is to create our stable/XXX branch 
at the 
same time as the other projects. We kind of knew that already, but did not 
anticipate such a huge breakage so quickly. I suggest we don't try it in 
Queens :)

Now, with that in place we still have two options:
1. A conservative one - make the branching the hard feature freeze, similar 
to 
other projects. We may start with a soft freeze at around M3, and just move 
into 
Queens when stable/queens is created. At that point, what is out - is out.
2. Alternative - continue making selected feature backports until the final 
freeze roughly one week before the final release. This kind of contradicts 
calling a branch "stable" though.

I don't have a strong opinion, but I'm slightly more in favor of the 
conservative option #1 to avoid confusing people and complicating the 
process.

Thoughts?

Personally, I think option 2 still makes sense, and it aligns us closely with 
the process in the other projects; the difference between us and them is that 
their branch is cut using a release candidate instead of a real release. The 
act of backporting things into the stable branch and then re-releasing is the 
same though.

Another alternative I wonder if we should consider is cutting our branch 
earlier in the cycle, when we make our first intermediary release, and then 
finding out if we can sync the branches at each release time instead of 
backporting everything. E.g. git checkout stable/X, git reset --hard 
origin/master or git rebase master, git push. Doing this will allow us to 
retain the git history and same commit ids from master to stable/X until master 
stops developing stable/X and moves on to stable/X+1. I think another advantage 
of this is that it also allows people to find and use our latest intermediary 
releases more easily. But I don’t know how nicely this would work with all the 
tooling etc. that the release team has in place.

> 
> Once we’ve confirmed that our grenade testing is passing, we will back
> port patches we had previously approved, but that had not landed, from
> master to stable/pike.

++ I've approved a few patches already, and will continue approving them 
today.

> 
> As a result, please anticipate Ironic’s official Pike release for this
> cycle to be 9.1.0, if the stars, gates, and job timeouts align with
> us.

Right, I think we will request it on Wednesday, to allow a bit more time to 
test 
our newly populated not-so-stable stable/pike :)

> 
> If there are any questions, please feel free to stop by
> #openstack-ironic. We have also been keeping our general purpose
> whiteboard[1] up to date, you can see our notes regarding our current
> plan starting at line 120, and notes regarding gate failures and
> issues starting at line 37.
> Thanks!
> 
> -Julia
> 
> [1]: https://etherpad.openstack.org/p/IronicWhiteBoard
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Chris Dent

On Mon, 21 Aug 2017, Jay Pipes wrote:

On 08/21/2017 04:59 AM, Chris Dent wrote:
We do have cache validation on the server side for resource classes. Any time 
a resource class is added or deleted, we call _RC_CACHE.clear(). Couldn't we 
add a single attribute to the ResourceClassCache that returns the last time 
the cache was reset?


That's server side cache, of which the client side (or proxy side)
has no visibility. If we had etags, and were caching etag to resource
pairs when we sent out responses, we could then have a conditional
GET handler which checked etags, returning 304 on a cache hit.
At _RC_CACHE changes we could flush the etag cache.
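
To be concrete, the conditional GET flow would be roughly this (a minimal
WSGI-style sketch; the helper names and the plain dict cache are made up
for illustration, not actual placement handler code):

ETAG_CACHE = {}  # url -> etag issued with the most recent 200 response


def handle_get(environ, start_response, url, render_body):
    etag = ETAG_CACHE.get(url)
    if etag and environ.get('HTTP_IF_NONE_MATCH') == etag:
        # Cache hit: the client (or proxy) already has this representation.
        start_response('304 Not Modified', [('ETag', etag)])
        return [b'']
    body = render_body()                 # bytes of the JSON representation
    etag = '"%d"' % hash(body)           # stand-in for a real validator
    ETAG_CACHE[url] = etag
    start_response('200 OK', [('ETag', etag),
                              ('Content-Type', 'application/json'),
                              ('Content-Length', str(len(body)))])
    return [body]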

But meh, you're right that the simpler solution is just to not do HTTP 
caching.


'xactly

But then again, if the default behaviour of HTTP is to never cache anything 
unless some cache-related headers are present [1] and you *don't* want 
proxies to cache any placement API information, why are we changing anything 
at all anyway? If we left it alone (and continue not sending Cache-Control 
headers for anything), the same exact result would be achieved, no?


Essentially so we can put last-modified headers on things, which in
RFC speak we SHOULD do. And if we do that then we SHOULD make sure
no caching happens.

Also it seems like last-modified headers are a nice-to-have for that
"unknown client" I spoke of in the first message.

But as you correctly identify the immediate practical value to nova
is pretty small, which is one of the reasons I was looking for the
lightest-weight implementation.
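
For reference, the lightest-weight version amounts to setting two headers
on each response, something like this (sketch only, assuming a webob
Response as placement uses; the helper name is made up):

from datetime import datetime
from webob import Response


def attach_cache_headers(resp, last_modified):
    # Last-Modified says when the representation changed; "no-cache" tells
    # any cache to revalidate before reusing the response.
    resp.last_modified = last_modified
    resp.headers['Cache-Control'] = 'no-cache'
    return resp


resp = attach_cache_headers(Response(body=b'{}'), datetime.utcnow())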

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Jay Pipes

On 08/21/2017 04:59 AM, Chris Dent wrote:

On Sun, 20 Aug 2017, Jay Pipes wrote:

On 08/18/2017 01:23 PM, Chris Dent wrote:

So my change above adds 'last-modified' and 'cache-control:
no-cache' to GET of /resource_providers and
/resource_providers/{uuid} and proposes we do it for everything
else.

Should we?


No. :) Not everything. In particular, I think both the GET 
/resource_classes and GET /traits URIs are very cacheable and we 
shouldn't disallow proxies from caching that content if they want to.


Except that unless we have cache validation handling on the server
side, which we don't, "very cacheable" depends on us setting a max-age
and coming to agreement over what the right max-age is, which seems
unlikely. The simpler solution is to not cache.


We do have cache validation on the server side for resource classes. Any 
time a resource class is added or deleted, we call _RC_CACHE.clear(). 
Couldn't we add a single attribute to the ResourceClassCache that 
returns the last time the cache was reset?


But meh, you're right that the simpler solution is just to not do HTTP 
caching.



If we do, some things to think about:

* The related OVO will need the updated_at and created_at
   fields exposed. This is pretty easy to do with the
   NovaTimestampObject mixin. This doesn't require an object
   version bump because we don't do RPC with them.


Technically, only the updated_at field would need to be exposed via 
the OVO objects. But OK, sure. I'd even advocate a patch to OVO that 
would bring in the NovaTimestampObject mixin. Just call it Timestamped 
or something...


The way the database tables are currently set up, when an entity is
first created, created_at is set, and updated_at is null. Therefore,
on new entities, failing over to created_at when updated_at is null
is necessary.

The work I've done thus far has tried to have the smallest impact on
the database tables and the queries used to get at them. They're
already complex enough.

The entity tables already have created_at and updated_at columns.
Exposing those columns on the objects is a matter of adding the
mixin.


Right.


I agree that making a change on OVO to have a Timestamped would be
useful.


* The current implementation of getting the last modified time for a
   collection of resources is intentionally naive and decoupled from
   other stuff. For very large result sets[3] this might be annoying,
   but since we are already doing plenty of traversing of long lists,
   it may not be a big deal. If it is we can incorporate getting the
   last modified time in the loop that serializes objects to JSON
   output.


I'm not sure what you're referring to above as "intentionally naive 
and decoupled from other stuff"? Adding the updated_at field of the 
underlying DB tables would be trivial -- maybe 10-15 lines total for 
DB/object layer and REST API as well. Am I missing something?


By "other stuff" I mean two things:

* the code in nova/objects/resource_provider.py
* the serialization (to JSON) code in placement/handlers/*.py

For those requests that return collections, we _could_ adapt the
queries used to retrieve those resources to find us the max
updated_at time during the query.


No, I don't recommend that... just return the updated_at and created_at 
fields.



Or we could also do the same while traversing the list of objects to
create the JSON output.


Yeah, that's fine.


I've chosen not to do the DB/object side changes because that is a
maze of many twisting passages, composed in fun ways. For those
situations where a list of native (e.g. /resource_providers) objects
is returned, it is simply easier to extract the info later in the
process. For those situations where the returned data is composed on
the fly (e.g. /allocation_candidates, /usages), we want the
last-modified to be now() anyway, so it doesn't matter.

So the concern/question is around whether people deem it a problem
to traverse the list of objects a second time after already
traversing them a first time to create the JSON output. If so, we
can make the serialization loop have two purposes.


I have no problem calculating the last modified time in the 
serialization loop.


But then again, if the default behaviour of HTTP is to never cache 
anything unless some cache-related headers are present [1] and you 
*don't* want proxies to cache any placement API information, why are we 
changing anything at all anyway? If we left it alone (and continue not 
sending Cache-Control headers for anything), the same exact result would 
be achieved, no?


Best,
-jay

[1] https://tools.ietf.org/html/rfc7234#page-5

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [release] [stable] pike release

2017-08-21 Thread Dmitry Tantsur

(adding the release and stable team just for their information)

Thanks Julia and everyone for handling this situation while I was out. More 
comments inline.


On 08/17/2017 07:13 PM, Julia Kreger wrote:

Greetings everyone!

As some of you may have noticed, we released ironic 9.0.0 today. But
wait! There is more!

We triggered this release due to a number of issues, one of which was
that we learned that we needed the stable/pike branch for our grenade
jobs to execute properly. This was not done previously because
Ironic’s release model is incompatible with making release candidate
releases.


Yep :( So, I think the lesson to learn is to create our stable/XXX branch at the 
same time as the other projects. We kind of knew that already, but did not 
anticipate such a huge breakage so quickly. I suggest we don't try it in Queens :)


Now, with that in place we still have two options:
1. A conservative one - make the branching the hard feature freeze, similar to 
other projects. We may start with a soft freeze at around M3, and just move into 
Queens when stable/queens is created. At that point, what is out - is out.
2. Alternative - continue making selected feature backports until the final 
freeze roughly one week before the final release. This kind of contradicts 
calling a branch "stable" though.


I don't have a strong opinion, but I'm slightly more in favor of the 
conservative option #1 to avoid confusing people and complicating the process.


Thoughts?



Once we’ve confirmed that our grenade testing is passing, we will back
port patches we had previously approved, but that had not landed, from
master to stable/pike.


++ I've approved a few patches already, and will continue approving them today.



As a result, please anticipate Ironic’s official Pike release for this
cycle to be 9.1.0, if the stars, gates, and job timeouts align with
us.


Right, I think we will request it on Wednesday, to allow a bit more time to test 
our newly populated not-so-stable stable/pike :)




If there are any questions, please feel free to stop by
#openstack-ironic. We have also been keeping our general purpose
whiteboard[1] up to date, you can see our notes regarding our current
plan starting at line 120, and notes regarding gate failures and
issues starting at line 37.
Thanks!

-Julia

[1]: https://etherpad.openstack.org/p/IronicWhiteBoard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Chris Dent

On Sun, 20 Aug 2017, Jay Pipes wrote:

On 08/18/2017 01:23 PM, Chris Dent wrote:

So my change above adds 'last-modified' and 'cache-control:
no-cache' to GET of /resource_providers and
/resource_providers/{uuid} and proposes we do it for everything
else.

Should we?


No. :) Not everything. In particular, I think both the GET /resource_classes 
and GET /traits URIs are very cacheable and we shouldn't disallow proxies 
from caching that content if they want to.


Except that unless we have cache validation handling on the server
side, which we don't, "very cacheable" depends on us setting a max-age
and coming to agreement over what the right max-age is, which seems
unlikely. The simpler solution is to not cache.


If we do, some things to think about:

* The related OVO will need the updated_at and created_at
   fields exposed. This is pretty easy to do with the
   NovaTimestampObject mixin. This doesn't require an object
   version bump because we don't do RPC with them.


Technically, only the updated_at field would need to be exposed via the OVO 
objects. But OK, sure. I'd even advocate a patch to OVO that would bring in 
the NovaTimestampObject mixin. Just call it Timestamped or something...


The way the database tables are currently set up, when an entity is
first created, created_at is set, and updated_at is null. Therefore,
on new entities, failing over to created_at when updated_at is null
is necessary.
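
In code terms the fallback is just this (illustrative helper, not an
existing function):

def pick_last_modified(obj):
    # updated_at stays NULL until a row has actually been updated, so
    # freshly created entities have to fall back to created_at.
    return obj.updated_at or obj.created_at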

The work I've done thus far has tried to have the smallest impact on
the database tables and the queries used to get at them. They're
already complex enough.

The entity tables already have created_at and updated_at columns.
Exposing those columns on the objects is a matter of adding the
mixin.

I agree that making a change on OVO to have a Timestamped would be
useful.


* The current implementation of getting the last modified time for a
   collection of resources is intentionally naive and decoupled from
   other stuff. For very large result sets[3] this might be annoying,
   but since we are already doing plenty of traversing of long lists,
   it may not be a big deal. If it is we can incorporate getting the
   last modified time in the loop that serializes objects to JSON
   output.


I'm not sure what you're referring to above as "intentionally naive and 
decoupled from other stuff"? Adding the updated_at field of the underlying DB 
tables would be trivial -- maybe 10-15 lines total for DB/object layer and 
REST API as well. Am I missing something?


By "other stuff" I mean two things:

* the code in nova/objects/resource_provider.py
* the serialization (to JSON) code in placement/handlers/*.py

For those requests that return collections, we _could_ adapt the
queries used to retrieve those resources to find us the max
updated_at time during the query.

Or we could also do the same while traversing the list of objects to
create the JSON output.

I've chosen not to do the DB/object side changes because that is a
maze of many twisting passages, composed in fun ways. For those
situations where a list of native (e.g. /resource_providers) objects
is returned, it is simply easier to extract the info later in the
process. For those situations where the returned data is composed on
the fly (e.g. /allocation_candidates, /usages), we want the
last-modified to be now() anyway, so it doesn't matter.

So the concern/question is around whether people deem it a problem
to traverse the list of objects a second time after already
traversing them a first time to create the JSON output. If so, we
can make the serialization loop have two purposes.
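
If we did fold it in, the dual-purpose loop would look something like this
(names are illustrative; the real serialization lives in
placement/handlers/*.py and differs in detail):

import datetime


def _serialize_one(obj):
    # stand-in for the real per-object JSON conversion
    return {'uuid': obj.uuid}


def serialize_collection(objects):
    last_modified = None
    out = []
    for obj in objects:
        out.append(_serialize_one(obj))
        stamp = obj.updated_at or obj.created_at
        if last_modified is None or stamp > last_modified:
            last_modified = stamp
    # Composed-on-the-fly results (usages, allocation candidates) would
    # just use now() as their last-modified time anyway.
    return out, last_modified or datetime.datetime.utcnow()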

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] FreeIPA Deployment

2017-08-21 Thread Juan Antonio Osorio
The second option seems like the most viable. Not sure how the TripleO
integration would go though. Care to elaborate on what you had in mind?

On Fri, Aug 18, 2017 at 9:11 PM, Emilien Macchi  wrote:

> On Fri, Aug 18, 2017 at 8:34 AM, Harry Rybacki 
> wrote:
> > Greetings Stackers,
> >
> > Recently, I brought up a discussion around deploying FreeIPA via
> > TripleO-Quickstart vs TripleO. This is part of a larger discussion
> > around expanding security related CI coverage for OpenStack.
> >
> > A few months back, I added the ability to deploy FreeIPA via
> > TripleO-Quickstart through three reviews:
> >
> > 1) Adding a role to deploy FreeIPA via OOOQ_E[1]
> > 2) Providing OOOQ with the ability to deploy a supplemental node
> > (alongside the undercloud)[2]
> > 3) Update the quickstart-extras playbook to deploy FreeIPA[3]
> >
> >
> > The reasoning behind this is as follows (copied from a conversation
> > with jaosorior):
> >
> >> So the deal is that both the undercloud and the overcloud need to be
> registered as a FreeIPA client.
> >> This is because they need to authenticate to it in order to execute
> actions.
> >>
> >> * The undercloud needs to have FreeIPA credentials because it's running
> novajoin, which in turn
> >> executes requests to FreeIPA in order to create service principals
> >>  - The service principals are ultimately the service name and the node
> name entries for which we'll
> >> requests the certificates.
> >> * The overcloud nodes need to be registered and authenticated to
> FreeIPA (which right now happens > through a cloud-init script provisioned
> by nova/nova-metadata) because that's how it requests
> >> certificates.
> >>
> >> So the flow is as follows:
> >>
> >> * FreeIPA node is provisioned.
> >>  - We'll appropriate credentials at this point.
> >>  - We register the undercloud as a FreeIPA client and get an OTP (one
> time password) for it
> >> - We add the OTP to the undercloud.conf and enable novajoin.
> >> * We trigger the undercloud install.
> >>  - after the install, we have novajoin running, which is the service
> that registers automatically the
> >> overcloud nodes to FreeIPA.
> >> * We trigger the overcloud deploy
> >>  - We need to set up a flag that tells the deploy to pass appropriate
> nova metadata (which tells
> >> novajoin that the nodes should be registered).
> >>  - profit!! we can now get certificates from the CA (and do other stuff
> that FreeIPA allows you to do,
> >> such as use kerberos auth, control sudo rights of the nodes' users,
> etc.)
> >>
> >> Since the nodes need to be registered to FreeIPA, we can't rely on
> FreeIPA being installed by
> >> TripleO, even if that's possible by doing it through a composable
> service.
> >> If we would use a composable service to install FreeIPA, the flow would
> be like this:
> >>
> >> * Install undercloud
> >> * Install overcloud with one node (running FreeIPA)
> >> * register undercloud node to FreeIPA and modify undercloud.conf
> >> * Update undercloud
> >> * scale overcloud and register the rest of the nodes to FreeIPA through
> novajoin.
> >>
> >> So, while we could install FreeIPA with TripleO. This really
> complicates the deployment to an
> >> unnecessary point.
> >>
> >> So I suggest keeping the current behavior, which treats FreeIPA as a
> separate node to be
> >> provisioned before the undercloud). And if folks would like to have a
> separate FreeIPA node for their > overcloud deployment (which could
> provision certs for the tenants) then we could do that as a
> >> composable service, if people request it.
> >
> > I am now re-raising this to the group at large for discussion about
> > the merits of this approach vs deploying via TripleO itself.
>
> There are 3 approaches here:
>
> - Keep using Quickstart, which is of course not a viable option since
> TripleO Quickstart is only used by CI and developers right now, not by
> customers nor in production.
> - Deploy your own Ansible playbooks or automation tool to deploy
> FreeIPA and host it wherever you like. Integrate the playbooks in
> TripleO, as an external component (can be deployed manually between
> some steps but will need to be documented).
> - Create a composable service that will deploy FreeIPA service(s),
> part of TripleO Heat Templates. The way it works *now* will require
> you to have a puppet-freeipa module to deploy the bits but we're
> working toward migrating to Ansible at some point.
>

This approach is not ideal and will be quite a burden, as I described
above. I wouldn't consider this an option.


> I hope it helps, let me know if you need further details on a specific
> approach.
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>