Re: [openstack-dev] [tripleo] Blocking gate - do not recheck / rebase / approve any patch now (please)

2017-10-25 Thread Emilien Macchi
On Wed, Oct 25, 2017 at 1:59 PM, Emilien Macchi  wrote:
> Quick update before being afk for some hours:
>
> - Still trying to land https://review.openstack.org/#/c/513701 (thanks
> Paul for promoting it in gate).

Landed.

> - Disabling voting on scenario001 and scenario004 container jobs:
> https://review.openstack.org/#/c/515188/

Done, please be very careful while these jobs are not voting.
If any doubt, please ping me or fultonj or gfidente on #tripleo.

> - overcloudrc/keystone v2 workaround:
> https://review.openstack.org/#/c/515161/ (d0ugal will work on proper
> fix for https://bugs.launchpad.net/tripleo/+bug/1727454)

Merged - Dougal will work on the real fix this week but not urgent anymore.

> - Fixing zaqar/notification issues on
> https://review.openstack.org/#/c/515123 - we hope that helps to reduce
> some failures in gate

In gate right now and hopefully merged in less than 2 hours.
Otherwise, please keep rechecking it.
According to Thomas Hervé, it will reduce the chance of timeouts.

> - puppet-tripleo gate broken on stable branches (syntax jobs not
> running properly) - jeblair is looking at it now

jeblair will provide a fix hopefully this week but this is not
critical at this time.
Thanks Jim for your help.

> Once again, we'll need to retrospect and see why we reached that
> terrible state but let's focus on bringing our CI in a good shape
> again.
> Thanks a ton to everyone who is involved,

I'm now restoring all patches that I killed from the gate.
You can now recheck / rebase / approve what you want, but please save
our CI resources and do it with moderation. We are not done yet.

I won't declare victory yet, but we've merged almost all our blockers; one is
missing and currently in gate:
https://review.openstack.org/515123 - it needs babysitting until merged.

Now let's see how RDO promotion works. We're close :-)

Thanks everyone,

> On Wed, Oct 25, 2017 at 7:25 AM, Emilien Macchi  wrote:
>> Status:
>>
>> - Heat Convergence switch *might* be a reason why overclouds time out so
>> much. Thomas proposed to disable it:
>> https://review.openstack.org/515077
>> - Every time a patch fails in the tripleo gate queue, it resets the
>> gate. I proposed to remove this common queue:
>> https://review.openstack.org/515070
>> - I cleared the patches in check and queue to make sure the 2 blockers
>> are tested and can be merged in priority. I'll keep an eye on it
>> today.
>>
>> Any help is very welcome.
>>
>> On Wed, Oct 25, 2017 at 5:58 AM, Emilien Macchi  wrote:
>>> We have been working very hard to get a package/container promotion
>>> (for 44 days) and now our blocker is
>>> https://review.openstack.org/#/c/513701/.
>>>
>>> Because the gate queue is huge, we decided to block the gate and kill
>>> all the jobs running there until we can get
>>> https://review.openstack.org/#/c/513701/ and its backport
>>> https://review.openstack.org/#/c/514584 (both are blocking the whole
>>> production chain).
>>> We hope to promote after these 2 patches, unless there is something
>>> else, in which case we would iterate to the next problem.
>>>
>>> We hope you understand and support us during this effort.
>>> So please do not recheck, rebase or approve any patch until further notice.
>>>
>>> Thank you,
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 10:06:46PM -0400, David Moreau Simard wrote:
> Was it just me, or was the "official" period for campaigning/questions
> awfully short?
> 
> The schedule [1] went:
> TC Campaigning: (Start) Oct 11, 2017 23:59 UTC (End) Oct 14, 2017 23:45 UTC

The original was:
  - name: 'TC Campaigning'
start: '2017-10-09T23:59'
end:   '2017-10-12T23:45'

but that needed to be adjusted (https://review.openstack.org/509654/)

While that was still the same duration, it was mid-week.

> That's three days, one of which was a Saturday.
> Was it always this short? It seems to me that this is not a lot of time for
> the community to ask (and read, and answer) thoughtful questions.
> 
> I realize this doesn't mean you can't keep asking questions once the actual
> election voting starts, but I wonder if we should cut a few days from the
> nomination and give it to the campaigning.

I can't find anything that documents how long the nomination period
needs to be; perhaps I missed it?  So we could do this, but the
nomination period is already quite short.  More likely we could just
extend the campaigning period if that's the consensus.

The whole election takes close to 3 weeks of the officials' time, so I'd like
to ask that we be mindful of that before we extend things too much.

Yours Tony.


signature.asc
Description: PGP signature


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread David Moreau Simard
Was it just me, or was the "official" period for campaigning/questions
awfully short?

The schedule [1] went:
TC Campaigning: (Start) Oct 11, 2017 23:59 UTC (End) Oct 14, 2017 23:45 UTC

That's three days, one of which was a Saturday.
Was it always this short? It seems to me that this is not a lot of time for
the community to ask (and read, and answer) thoughtful questions.

I realize this doesn't mean you can't keep asking questions once the actual
election voting starts, but I wonder if we should cut a few days from the
nomination and give it to the campaigning.

[1]: https://governance.openstack.org/election/#openstack-election

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

On Fri, Oct 20, 2017 at 8:20 PM, Tony Breeds 
wrote:

>
> Hi All,
> With the election behind us it's somewhat traditional to look at
> some simple stats from the elections:
>
> +----------+-----------------------+-------------------+-----------------------+
> | Election | Electorate  (delta %) | Voted   (delta %) | Turnout %   (delta %) |
> +----------+-----------------------+-------------------+-----------------------+
> |  10/2013 |   1106  (        nan) |   342   (    nan) | 30.92   (        nan) |
> |  04/2014 |   1510  (      36.53) |   448   (  30.99) | 29.67   (      -4.05) |
> |  10/2014 |   1893  (      25.36) |   506   (  12.95) | 26.73   (      -9.91) |
> |  04/2015 |   2169  (      14.58) |   548   (   8.30) | 25.27   (      -5.48) |
> |  10/2015 |   2759  (      27.20) |   619   (  12.96) | 22.44   (     -11.20) |
> |  04/2016 |   3284  (      19.03) |   652   (   5.33) | 19.85   (     -11.51) |
> |  10/2016 |   3517  (       7.10) |   801   (  22.85) | 22.78   (      14.71) |
> |  04/2017 |   3191  (      -9.27) |   427   ( -46.69) | 13.38   (     -41.25) |
> |  10/2017 |   2430  (     -23.85) |   420   (  -1.64) | 17.28   (      29.16) |
> +----------+-----------------------+-------------------+-----------------------+
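[Editor's note: the two derived columns in the table above reduce to a pair of one-line formulas. A minimal sketch, reusing raw counts from the table; the helper names are ours, not from the election tooling:]

```python
# Minimal sketch of the derived columns in the table above:
# turnout % and the period-over-period "delta %". Raw electorate and
# vote counts are copied from the table; helper names are invented here.

def turnout_pct(voted, electorate):
    # "Turnout %" column: share of the electorate that voted.
    return round(voted / electorate * 100, 2)

def delta_pct(new, old):
    # "(delta %)" columns: percentage change versus the prior election.
    return round((new - old) / old * 100, 2)

# Reproducing the 04/2014 row from the 10/2013 and 04/2014 raw counts:
electorate_delta = delta_pct(1510, 1106)   # 36.53
voted_delta = delta_pct(448, 342)          # 30.99
turnout = turnout_pct(448, 1510)           # 29.67
```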
>
> Election CIVS links
>  10/2014: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c105db929e6c11f4
>  04/2015: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ef1379fee7b94688
>  10/2015: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0
>  04/2016: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a
>  10/2016: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_356e6c1b16904010
>  04/2017: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_072c4cd7ff0673b5
>  10/2017: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ce86063991ef8aae
>
> I don't have a feel for why the Pike electorate decreased, but my gut
> feel is that it was organic drop-off, possibly due in part to the shorter
> Ocata development cycle.  The Queens drop-off was due to a new[1]
> membership API being available that meant we could validate Foundation
> membership instead of using gerrit permission as a proxy.
>
> I'd like to call out that with Pike we had a very dramatic decrease in
> voter turnout both in absolute and relative terms.  As a community it's
> worth trying to understand whether this is a problem and/or a trend that
> needs to change.
>
> Yours Tony.
>
> [1] It wasn't that new; it was also used during the PTL election[2]
> [2] See:
> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119786.html ; and
> http://lists.openstack.org/pipermail/openstack-dev/2017-August/120544.html
>


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 09:48:06PM +, Jeremy Stanley wrote:
> On 2017-10-25 12:18:59 -0400 (-0400), Zane Bitter wrote:
> [...]
> > Can we maybe calculate the electorate size using the old method as well so
> > that we can quantify how much of the dropoff (in theory it could be more
> > than 100%) was due to the change in effective eligibility criteria vs.
> > organic change in the number of contributors?
> [...]
> 
> I did a baseline comparison of the old and new methods as they were
> developed, and as of a few days before the most recent PTL elections
> the percentage of "old" TC electorate who lacked discoverable
> foundation member profiles (either because of no matching E-mail
> addresses or lapsed membership due to a failure to vote in board
> elections) was right at 10%. It's likely to have diverged a little since
> that time (my guess is that it won't have moved much), but the
> election officials _should_ be able to produce this number trivially
> since the structured data output by the validation script includes
> all contributors and merely flags the "member" contributors eligible
> to vote by including their discovered OpenStack Foundation
> Individual Member Id numbers.

Yup :)  For the record I replied to Zane before reading your email ...
glad they agree :)

Yours Tony.




Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 12:18:59PM -0400, Zane Bitter wrote:
> On 20/10/17 20:20, Tony Breeds wrote:
> > 
> > Hi All,
> >  With the election behind us it's somewhat traditional to look at
> > some simple stats from the elections:
> > 
> > +----------+-----------------------+-------------------+-----------------------+
> > | Election | Electorate  (delta %) | Voted   (delta %) | Turnout %   (delta %) |
> > +----------+-----------------------+-------------------+-----------------------+
> > |  10/2013 |   1106  (        nan) |   342   (    nan) | 30.92   (        nan) |
> > |  04/2014 |   1510  (      36.53) |   448   (  30.99) | 29.67   (      -4.05) |
> > |  10/2014 |   1893  (      25.36) |   506   (  12.95) | 26.73   (      -9.91) |
> > |  04/2015 |   2169  (      14.58) |   548   (   8.30) | 25.27   (      -5.48) |
> > |  10/2015 |   2759  (      27.20) |   619   (  12.96) | 22.44   (     -11.20) |
> > |  04/2016 |   3284  (      19.03) |   652   (   5.33) | 19.85   (     -11.51) |
> > |  10/2016 |   3517  (       7.10) |   801   (  22.85) | 22.78   (      14.71) |
> > |  04/2017 |   3191  (      -9.27) |   427   ( -46.69) | 13.38   (     -41.25) |
> > |  10/2017 |   2430  (     -23.85) |   420   (  -1.64) | 17.28   (      29.16) |
> > +----------+-----------------------+-------------------+-----------------------+
> > 
> > Election CIVS links
> >   10/2014: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c105db929e6c11f4
> >   04/2015: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ef1379fee7b94688
> >   10/2015: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0
> >   04/2016: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a
> >   10/2016: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_356e6c1b16904010
> >   04/2017: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_072c4cd7ff0673b5
> >   10/2017: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ce86063991ef8aae
> > 
> > I don't have a feel for why the Pike electorate decreased, but my gut
> > feel is that it was organic drop-off, possibly due in part to the shorter
> > Ocata development cycle.  The Queens drop-off was due to a new[1]
> > membership API being available that meant we could validate Foundation
> > membership instead of using gerrit permission as a proxy.
> 
> Can we maybe calculate the electorate size using the old method as well so
> that we can quantify how much of the dropoff (in theory it could be more
> than 100%) was due to the change in effective eligibility criteria vs.
> organic change in the number of contributors?

You can insert this line into the table above to replace the official
10/2017 election.

| 10/2017  |   2696  (     -15.51) |   420   (  -1.64) | 15.58   (      16.42) |

So ~10% of all change owners were excluded due to not being Foundation
members.

Yours Tony.




Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 11:05:44AM +0100, Chris Dent wrote:
> On Tue, 24 Oct 2017, Tony Breeds wrote:
> 
> > On Mon, Oct 23, 2017 at 09:35:34AM +0100, Jean-Philippe Evrard wrote:
> > 
> > > I agree, we should care about not repeating this Pike trend. It looks
> > > like Queens is better in terms of turnout (see the amazing positive
> > > delta!). However, I can't help but noticing that the trend for
> > > turnouts is slowly reducing (excluding some outliers) since the
> > > beginning of these stats.
> > 
> > Yup, the table makes that pretty visible.
> 
> I think we can't really make much in the way of conclusions about
> the turnout data without comparing it with contributor engagement in
> general. If many of the eligible voters have only barely crossed the
> eligibility threshold (e.g., one commit) it's probably not
> reasonable to expect them to care much about TC elections. We've
> talked quite a bit lately about "casual contribution" being a growth
> area.

So this is clearly bogus because we don't have any way of knowing who
voted and therefore can't adjust the number of votes cast:
+-------------+-----------------------+-------------------+-----------------------+
|   Election  | Electorate  (delta %) | Voted   (delta %) | Turnout %   (delta %) |
+-------------+-----------------------+-------------------+-----------------------+
|   10/2017   |   2430  (     -23.85) |   420   (  -1.64) |  17.28   (     29.16) |
|   1 change  |   2373  (      -2.35) |   420   (   0.00) |  17.70   (      2.40) |
|   5 changes |   1162  (     -51.03) |   420   (   0.00) |  36.14   (    104.22) |
|  10 changes |    829  (     -28.66) |   420   (   0.00) |  50.66   (     40.17) |
| 100 changes |    153  (     -81.54) |   420   (   0.00) | 274.51   (    441.83) |
+-------------+-----------------------+-------------------+-----------------------+

However, it gives you some idea of the electorate size at the various
thresholds.  This is public data; I just happen to have quick access
to it.
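[Editor's note: for readers who want to reproduce the threshold table from the public Gerrit data, one hedged sketch of the bucketing step. The sample data below is made up for illustration; the real mapping of change counts per owner comes from the election repository's query of Gerrit:]

```python
# Illustrative sketch of how the threshold rows above could be computed
# from Gerrit change-ownership data. `sample_changes_per_owner` is
# invented sample data, not the real electorate.

sample_changes_per_owner = {
    "alice": 120,   # prolific contributor
    "bob": 3,
    "carol": 1,     # barely crossed the one-change threshold
    "dave": 12,
}

def electorate_at_threshold(changes_per_owner, min_changes):
    # Count owners with at least `min_changes` changes in the cycle.
    return sum(1 for n in changes_per_owner.values() if n >= min_changes)

for threshold in (1, 5, 10, 100):
    print(threshold, electorate_at_threshold(sample_changes_per_owner, threshold))
```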

> A possibly meaningful correlation may be eligible voters to PTG
> attendance to turnout, or before the PTG, number of people who got a
> free pass to summit, chose to use it, and voters.

Sure, that'd be closer, but we still don't really have any way to know who
from that set is voting.

> Dunno. Obviously it would be great if more people voted.

:)
 
> > Me? No ;P  I do think we need to work out *why* turnout is declining
> > before determining how to correct it.  I don't really think that we can
> > get that information though.  Community members that aren't engaged
> > enough to participate in the election(s) are also unlikely to
> > participate in a survey asking why they didn't participate ;P
> 
> This is a really critical failing in the way we typically gather data.
> We have huge survivorship bias.

Sure.  I have no idea how to fix that, though.

Yours Tony.




Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Jeremy Stanley
On 2017-10-25 12:18:59 -0400 (-0400), Zane Bitter wrote:
[...]
> Can we maybe calculate the electorate size using the old method as well so
> that we can quantify how much of the dropoff (in theory it could be more
> than 100%) was due to the change in effective eligibility criteria vs.
> organic change in the number of contributors?
[...]

I did a baseline comparison of the old and new methods as they were
developed, and as of a few days before the most recent PTL elections
the percentage of "old" TC electorate who lacked discoverable
foundation member profiles (either because of no matching E-mail
addresses or lapsed membership due to a failure to vote in board
elections) was right at 10%. It's likely to have diverged a little since
that time (my guess is that it won't have moved much), but the
election officials _should_ be able to produce this number trivially
since the structured data output by the validation script includes
all contributors and merely flags the "member" contributors eligible
to vote by including their discovered OpenStack Foundation
Individual Member Id numbers.
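[Editor's note: a minimal sketch of consuming structured output like the one described above, where eligible voters are the contributors flagged with a discovered Foundation Individual Member Id. The record layout here is an assumption for illustration, not the validation script's actual schema:]

```python
# Hypothetical records shaped like the validation script's output:
# all contributors listed, with eligible voters carrying a discovered
# Foundation Individual Member Id and the rest flagged with None.
contributors = [
    {"email": "a@example.org", "member_id": 12345},
    {"email": "b@example.org", "member_id": 67890},
    {"email": "c@example.org", "member_id": None},  # no profile found
]

# Eligible voters under the new method: member id present.
eligible = [c for c in contributors if c["member_id"] is not None]

# Share of the "old method" electorate dropped by the membership check
# (the ~10% figure above, when run against the real data).
dropoff = 1 - len(eligible) / len(contributors)
```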
-- 
Jeremy Stanley




Re: [openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 10:24:59AM -0500, Matt Riedemann wrote:
> On 10/24/2017 9:57 PM, Tony Breeds wrote:
> > The timing of the next phase is uncertain right now but I'd like to take
> > care of:
> > 
> > - openstack/nova
> 
> Just a status update, but the final nova newton release is waiting on two
> changes:
> 
> 1. https://review.openstack.org/#/c/514685/ - in the gate, should be merged
> today.

Okay.
 
> 2. https://review.openstack.org/#/c/514339/ - we need a fix for that in
> master and then to get backported through to stable/newton. This is a fix
> for a regression introduced in pike which was unfortunately backported to
> newton, so I think we need to fix the regression we introduced into
> stable/newton before EOL.

Okay.  This is a good outcome to a bad situation.

Yours Tony.




Re: [openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 01:11:53PM +0200, Dmitry Tantsur wrote:

> The last ironic newton release was done, we're ready for EOL.

Thanks, that must've happened overnight.

Yours Tony.




Re: [openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-25 Thread Tony Breeds
On Wed, Oct 25, 2017 at 10:11:01AM +0100, Jean-Philippe Evrard wrote:
> On 25 October 2017 at 03:57, Tony Breeds  wrote:
> > On Tue, Oct 24, 2017 at 05:11:15PM +1100, Tony Breeds wrote:
> >> On Fri, Oct 06, 2017 at 10:15:56AM +1100, Tony Breeds wrote:
> >> > On Wed, Oct 04, 2017 at 02:51:06PM +1100, Tony Breeds wrote:
> >> > > I'll prep the list of repos that will be tagged EOL real soon now for
> >> > > review.
> >> >
> >> > As promised here's the list.  The format is new: it's grouped by project
> >> > team so it should be easy for teams to find repos they care about.
> >> >
> >> > The only wart may be repos I couldn't find an owning team for, so check
> >> > the '-' as the owning team.
> >> >
> >> > I'm proposing to EOL all projects that meet one or more of the following
> >> > criteria:
> >> >
> >> > - The project is openstack-dev/devstack, openstack-dev/grenade or
> >> >   openstack/requirements (although these will be done last)
> >> > - The project has the 'check-requirements' job listed as a template in
> >> >   project-config:zuul/layout.yaml
> >> > - The project gates with either devstack or grenade jobs
> >> > - The project is listed in governance:reference/projects.yaml and is 
> >> > tagged
> >> >   with 'stable:follows-policy'.
> >> >
> >> >
> >> > Based on previous cycles I have opted out:
> >> > - 'openstack/group-based-policy'
> >> > - 'openstack/openstack-ansible' # So they can add EOL tags
> >> >
> >> > Also based on recent emails with tripleo I have opted out:
> >> > - 'openstack/instack'
> >> > - 'openstack/instack-undercloud'
> >> > - 'openstack/os-apply-config'
> >> > - 'openstack/os-collect-config'
> >> > - 'openstack/os-net-config'
> >> > - 'openstack/os-refresh-config'
> >> > - 'openstack/puppet-tripleo'
> >> > - 'openstack/python-tripleoclient'
> >> > - 'openstack/tripleo-common'
> >> > - 'openstack/tripleo-heat-templates'
> >> > - 'openstack/tripleo-puppet-elements'
> >> > - 'openstack/tripleo-validations'
> >> > - 'openstack/tripleo-image-elements'
> >> > - 'openstack/tripleo-ui'
> >>
> >> I've also removed the following repos as they have open release requests
> >> for stable/newton
> >>  - openstack/nova
> >>  - openstack/ironic
> >>  - openstack/openstack-ansible*
> >>
> >> And at the request of the docs team I've omitted:
> >>  - openstack/openstack-manuals
> >>
> >> to facilitate 'badging' of the newton docs.
> >
> > The repos listed in 
> > http://lists.openstack.org/pipermail/openstack-dev/2017-October/123910.html
> > have been retired.
> >
> > There were a couple of issues
> > - openstack/deb-python-os-cloud-config
> > - openstack/bareon
> > My clones of both had stale gerrit remotes that has been corrected
> > manually.
> >
> > The timing of the next phase is uncertain right now but I'd like to take
> > care of:
> >
> > - openstack/nova
> > - openstack/ironic
> > - openstack/openstack-ansible*
> > - openstack/openstack-manuals
> >
> > before the summit if possible.
> >
> > Thanks to the infra team for enabling this to happen today.
> >
> > Tony.
> >
> 
> Hello Tony,
> 
> We'd like to continue doing as before: updating all our upstream
> projects to their EOL tags, then creating an EOL release based on our
> roles that would successfully deploy those EOL upstream projects.
> If any role needs a change due to latest upstream changes, we need to be
> ready.
> 
> TL;DR: I'll submit a patch soon to bump our upstream roles to EOL,
> when nova/ironic have their EOL tags :p

Yup that was my assumption, sorry it wasn't clear from my
talking-to-myself email thread ;p

So I see it working more or less like:

1. nova and ironic releases happen and repos are tagged
2. The existing OSA review is merged
3. A new review is created for OSA using the tags for $all_projects
that's merged and released
4. A review in openstack-ansible pins the OSA roles to that from 3. (I'm
making that bit up; is it right?)
5. openstack-ansible* repos tagged EOL

Where'd I get it wrong?

Yours Tony.




Re: [openstack-dev] [TripleO][infra][CI] Moving OVB jobs from RH1 cloud to RDO cloud, plan

2017-10-25 Thread David Moreau Simard
We're currently running with a max-servers of 80 for the TripleO tenant.
This number doesn't include OVB nodes.

When taking into account OVB nodes, we are already nearing vCPU capacity
and could consider raising the overcommit ratio from 2.0 to 4.0 to make use
of the available RAM.

See the rough maths in my comment here [1].

[1]: https://review.rdoproject.org/r/#/c/10249/1/nodepool/nodepool.yaml@133
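[Editor's note: the capacity point above is back-of-the-envelope arithmetic: schedulable vCPUs = physical cores * overcommit ratio. A hedged sketch; the core count below is a placeholder, not the RDO cloud's actual hardware:]

```python
# Sketch of the overcommit arithmetic referenced above. Schedulable
# vCPUs scale linearly with the overcommit ratio; the physical core
# count here is hypothetical.

def schedulable_vcpus(physical_cores, overcommit_ratio):
    # Nova's cpu_allocation_ratio works the same way: reported vCPU
    # capacity is physical cores multiplied by the ratio.
    return int(physical_cores * overcommit_ratio)

cores = 400  # placeholder value
at_2x = schedulable_vcpus(cores, 2.0)  # 800
at_4x = schedulable_vcpus(cores, 4.0)  # 1600; RAM then becomes the limit
```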

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Oct 25, 2017 1:39 PM, "Ben Nemec"  wrote:

> Overall sounds good.  A couple of comments inline.
>
> On 10/23/2017 05:46 AM, Sagi Shnaidman wrote:
>
>> Hi,
>>
>> as you know, we are preparing the transition of all OVB jobs from RH1 cloud
>> to RDO cloud, and a few long multinode upgrade jobs as well. We prepared a
>> transition workflow below; please feel free to comment.
>>
>>
>> 1) We run one job (ovb-ha-oooq) on every patch in the following repos: oooq,
>> oooq-extras, tripleo-ci. We run the rest of the ovb jobs (containers and
>> fs024) as experimental in rdo cloud for the following repos: oooq,
>> oooq-extras, tripleo-ci, tht, tripleo-common. It should cover most of our
>> testing. This step is completed.
>>
>> Currently it's blocked by a newton bug in RDO cloud:
>> https://bugs.launchpad.net/heat/+bug/1626256 , where the cloud release
>> doesn't contain its fix: https://review.openstack.org/#/c/501592/ . On the
>> other hand, the upgrade to the Ocata release (which would solve this issue
>> too) is blocked by bug: https://bugs.launchpad.net/tripleo/+bug/1724328
>> So we are in a blocked state right now with the move.
>>
>> Next steps:
>>
>> 2) We solve all issues with running the on-every-patch job (ovb-ha-oooq) so
>> that it's passing (or failing with exactly the same results as on rh1) for
>> 2 regular working days (not weekend).
>> 3) We should trigger experimental jobs in this time on various patches in
>> tht and tripleo-common and solve all issues for experimental jobs so all
>> ovb jobs pass.
>> 4) We need to monitor resources in the openstack-nodepool tenant during all
>> this time (with help of rhops maybe) and be sure that it has the capacity
>> to run the configured jobs.
>>
>
> I assume we will have a max jobs limit in nodepool (or whatever we're
> using for that purpose) that will ensure we stay within capacity regardless
> of what jobs are configured.  We probably want to keep that limit low
> initially so we don't have to worry about throwing a huge number of jobs at
> the cloud accidentally (say someone submits a large patch series that
> triggers our subset of jobs).
>
> Obviously as we add jobs we'll need to bump the concurrent jobs limit, but
> I think that should be the primary variable we change and that we add more
> jobs as necessary to fill the configured limit.  Also, rather than set a
> time period of two days or whatever, ensure we run at the configured limit
> for some period of time before increasing it.  There are slow days in ci
> where we might not get much useful information so we need to make sure we
> don't get a false positive result from a step just because of the quirks of
> ci load.
>
> 5) We set ovb-ha-oooq job as running for every patch in all places where
>> it's running in rh1 (in parallel with existing rh1 job). We monitor RDO
>> cloud that it doesn't fail and still have resources - 1.5 working days
>> 6) We add featureset024 ovb job to run in every patch where it runs in
>> rh1. We continue to monitor RDO cloud - 1.5 working days
>> 7) We add the last containers ovb job to all patches where it runs on rh1. We
>> continue monitor RDO cloud - 2 days.
>> 8) In case if everything is OK in all previous points and RDO cloud still
>> performs well, we remove ovb jobs from rh1 configuration and make them as
>> experimental.
>> 9) During the next few days we monitor ovb jobs and run rh1 ovb jobs as
>> experimental to check if we have the same results (or better :) )
>> 10) OVB jobs on rh1 cloud stay in the experimental pipeline in tripleo for
>> the next month or two.
>>
>> --
>> Best regards
>> Sagi Shnaidman
>>
>>
>> 


Re: [openstack-dev] [tripleo] Blocking gate - do not recheck / rebase / approve any patch now (please)

2017-10-25 Thread Emilien Macchi
Quick update before being afk for some hours:

- Still trying to land https://review.openstack.org/#/c/513701 (thanks
Paul for promoting it in gate).
- Disabling voting on scenario001 and scenario004 container jobs:
https://review.openstack.org/#/c/515188/
- overcloudrc/keystone v2 workaround:
https://review.openstack.org/#/c/515161/ (d0ugal will work on proper
fix for https://bugs.launchpad.net/tripleo/+bug/1727454)
- Fixing zaqar/notification issues on
https://review.openstack.org/#/c/515123 - we hope that helps to reduce
some failures in gate
- puppet-tripleo gate broken on stable branches (syntax jobs not
running properly) - jeblair is looking at it now

Once again, we'll need to retrospect and see why we reached that
terrible state but let's focus on bringing our CI in a good shape
again.
Thanks a ton to everyone who is involved,

On Wed, Oct 25, 2017 at 7:25 AM, Emilien Macchi  wrote:
> Status:
>
> - Heat Convergence switch *might* be a reason why overclouds time out so
> much. Thomas proposed to disable it:
> https://review.openstack.org/515077
> - Every time a patch fails in the tripleo gate queue, it resets the
> gate. I proposed to remove this common queue:
> https://review.openstack.org/515070
> - I cleared the patches in check and queue to make sure the 2 blockers
> are tested and can be merged in priority. I'll keep an eye on it
> today.
>
> Any help is very welcome.
>
> On Wed, Oct 25, 2017 at 5:58 AM, Emilien Macchi  wrote:
>> We have been working very hard to get a package/container promotion
>> (for 44 days) and now our blocker is
>> https://review.openstack.org/#/c/513701/.
>>
>> Because the gate queue is huge, we decided to block the gate and kill
>> all the jobs running there until we can get
>> https://review.openstack.org/#/c/513701/ and its backport
>> https://review.openstack.org/#/c/514584 (both are blocking the whole
>> production chain).
>> We hope to promote after these 2 patches, unless there is something
>> else, in which case we would iterate to the next problem.
>>
>> We hope you understand and support us during this effort.
>> So please do not recheck, rebase or approve any patch until further notice.
>>
>> Thank you,
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi



Re: [openstack-dev] [policy] AWS IAM session

2017-10-25 Thread Lance Bragstad
I'm not sure how I didn't include -operators the first time around, but
adding them to this thread now.

TL;DR

We're going through policy/RBAC for other systems to get an idea of how
we want to shape OpenStack's policy and RBAC model. We're going to meet
next Wednesday at 15:00 UTC. The meeting will be recorded. Previous
context and information is in this thread.

Thanks!


Re: [openstack-dev] [policy] AWS IAM session

2017-10-25 Thread Lance Bragstad
I've recapped the notes from today's session and I'll post a follow up
with the recording as soon as it's available. All notes can be found in
the etherpad (agreement and outcomes are in *bold*)**[0]. Next week at
the same time (15:00 UTC) we will continue going through AWS IAM flows.

While today's discussion was helpful, it was very free-form. Let's aim
to target a very specific flow for next week. What do we want that to be?

Thanks!

[0] https://etherpad.openstack.org/p/analyzing-other-policy-systems

On 10/24/2017 02:02 PM, Lance Bragstad wrote:
> Gentle reminder that this will be happening tomorrow. See you then!
>
> On 10/20/2017 09:46 AM, Lance Bragstad wrote:
>> I just sent a calendar invite to everyone who responded to this
>> thread or voted in the agenda. The session will be recorded if you
>> are unable to make it.
>>
>> Thanks!
>>
>> On 10/18/2017 10:10 AM, Lance Bragstad wrote:
>>> Now that we have some good feedback on the doodle, it looks like we
>>> have two sessions that will work for everyone. One is October 25th
>>> from 15:00 - 16:00 UTC and the other is also the 25th from 16:00 -
>>> 17:00.
>>>
>>> Let's shoot to meet at *15:00 UTC* on *October 25th* and if the
>>> meeting goes over, we have time allocated for that. Would anyone
>>> like a formal calendar invite? If so, I can send one out. The
>>> etherpad [0] will act as our "schedule", but we'll likely just work
>>> through the cases we've documented.
>>>
>>> Thanks!
>>>
>>> [0] https://etherpad.openstack.org/p/analyzing-other-policy-systems
>>>
>>>
>>> On 10/16/2017 08:45 AM, Lance Bragstad wrote:
 Sending out a gentle reminder to vote for time slots that work for
 you [0]. We'll keep the poll open for a few more days, or until we
 reach quorum. Thanks!

 [0] https://beta.doodle.com/poll/ntkpzgmcv3k6v5qu

 On 10/11/2017 01:48 PM, Lance Bragstad wrote:
> Oh - one note about the doodle [0]. All proposed times are in UTC,
> so just keep that in mind when selecting your availability.
>
> Thanks!
>
> [0] https://beta.doodle.com/poll/ntkpzgmcv3k6v5qu
>
> On 10/11/2017 01:44 PM, Lance Bragstad wrote:
>> In today's policy meeting we went through and started prepping
>> for the session. Relevant information has been captured in the
>> etherpad [0].
>>
>> We're going to hold the meeting using *Google* *Hangouts*. I'll
>> update the etherpad with a link to the hangout once we settle on
>> a date. If you plan on attending, please *vote* *for* *available*
>> *times* [1]. I've proposed a bunch of time slots (4 each day for
>> the next two weeks) to try and find a time that works for
>> everyone. People from US, AU, and EU will be trying to attend,
>> so we're not going to find a perfect time for everyone. Having
>> said that, *we're going to record the session*.
>>
>> Most of what we talked about in the meeting today uncovered the
>> need to go over the basics of AWS IAM. That should be something
>> we can do with a free account, which I'm going to sign up for. If
>> we need more time we can have another session or look at options
>> for upgrading the account.
>>
>>
>> [0] https://etherpad.openstack.org/p/analyzing-other-policy-systems
>> [1] https://doodle.com/poll/ntkpzgmcv3k6v5qu
>>
>> On 10/09/2017 04:23 PM, Lance Bragstad wrote:
>>> I've put a scheduling session on the books for the next policy
>>> meeting [0][1]. Advertising it here since folks who want to be
>>> involved have responded to the thread.
>>>
>>> Let's use this meeting time to iron out account details and
>>> figure out what exactly we want to get out of the session.
>>>
>>>
>>> [0] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
>>> [1] https://etherpad.openstack.org/p/keystone-policy-meeting
>>>
>>> On 10/05/2017 02:24 AM, Colleen Murphy wrote:
 On Tue, Oct 3, 2017 at 10:08 PM, Lance Bragstad
 > wrote:

 Hey all,

 It was mentioned in today's keystone meeting [0] that it
 would be useful
 to go through AWS IAM (or even GKE) as a group. With all
 the recent
 policy discussions and work, it seems useful to get our
 eyes on another
 system. The idea would be to spend time using a video
 conference/screen
 share to go through and play with policy together. The end
 result should
 keep us focused on the implementations we're working on
 today, but also
 provide clarity for the long-term vision of OpenStack's
 RBAC system.

 Are you interested in attending? If so, please respond to
 the thread.
 Once we have some interest, we can gauge when to hold the
 

Re: [openstack-dev] [TripleO][infra][CI] Moving OVB jobs from RH1 cloud to RDO cloud, plan

2017-10-25 Thread Ben Nemec

Overall sounds good.  A couple of comments inline.

On 10/23/2017 05:46 AM, Sagi Shnaidman wrote:

Hi,

as you know, we are preparing the transition of all OVB jobs from the RH1 
cloud to the RDO cloud, as well as a few long multinode upgrade jobs. We 
have prepared a transition workflow below; please feel free to comment.



1) We run one job (ovb-ha-oooq) on every patch in the following repos: oooq, 
oooq-extras, tripleo-ci. We run the rest of the OVB jobs (containers and fs024) 
as experimental in the RDO cloud for the following repos: oooq, oooq-extras, 
tripleo-ci, tht, tripleo-common. This should cover most of our testing. 
This step is completed.


Currently it's blocked by a Newton bug in the RDO cloud: 
https://bugs.launchpad.net/heat/+bug/1626256 , where the cloud release 
doesn't contain its fix: https://review.openstack.org/#/c/501592/ . On 
the other hand, the upgrade to the Ocata release (which would also solve 
this issue) is blocked by this bug: https://bugs.launchpad.net/tripleo/+bug/1724328

So we are blocked right now with the move.

Next steps:

2) We solve all issues with the job running on every patch (ovb-ha-oooq) so 
that it passes (or fails with exactly the same results as on RH1) for 2 
regular working days (not a weekend).
3) During this time we should trigger the experimental jobs on various 
patches in tht and tripleo-common and solve all issues for them so that 
all OVB jobs pass.
4) All this time we need to monitor resources in the openstack-nodepool 
tenant (with help from rhops maybe) and make sure that it has the capacity 
to run the configured jobs.


I assume we will have a max jobs limit in nodepool (or whatever we're 
using for that purpose) that will ensure we stay within capacity 
regardless of what jobs are configured.  We probably want to keep that 
limit low initially so we don't have to worry about throwing a huge 
number of jobs at the cloud accidentally (say someone submits a large 
patch series that triggers our subset of jobs).


Obviously as we add jobs we'll need to bump the concurrent jobs limit, 
but I think that should be the primary variable we change and that we 
add more jobs as necessary to fill the configured limit.  Also, rather 
than set a time period of two days or whatever, ensure we run at the 
configured limit for some period of time before increasing it.  There 
are slow days in ci where we might not get much useful information so we 
need to make sure we don't get a false positive result from a step just 
because of the quirks of ci load.


5) We set the ovb-ha-oooq job to run on every patch in all the places where 
it runs on RH1 (in parallel with the existing RH1 job). We monitor the RDO 
cloud to make sure it doesn't fail and still has resources - 1.5 working days.
6) We add the featureset024 OVB job to every patch where it runs on 
RH1. We continue to monitor the RDO cloud - 1.5 working days.
7) We add the last containers OVB job to all patches where it runs on RH1. 
We continue to monitor the RDO cloud - 2 days.
8) If everything is OK in all the previous points and the RDO cloud 
still performs well, we remove the OVB jobs from the RH1 configuration and 
make them experimental.
9) During the next few days we monitor the OVB jobs and run the RH1 OVB jobs 
as experimental to check whether we get the same results (or better :) )
10) The OVB jobs on the RH1 cloud stay in the experimental pipeline in 
TripleO for the next month or two.


--
Best regards
Sagi Shnaidman


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [Openstack-operators] replace node "tags" with node "traits"

2017-10-25 Thread Mathieu Gagné
Hi,

On Wed, Oct 25, 2017 at 10:17 AM, Loo, Ruby  wrote:
> Hello ironic'ers,
>
> A while ago, we approved a spec to add node tag support to ironic [1]. The
> feature itself did not land yet (although some of the code has). Now that
> the (nova) community has come up with traits, ironic wants to support node
> traits, and there is a spec proposing that [2]. At the ironic node level,
> this is VERY similar to the node tag support, so the thought is to drop (not
> implement) the node tagging feature, since the node traits feature could be
> used instead. There are a few differences between the tags and traits.
> "Traits" means something in OpenStack, and there are some restrictions about
> it:
>
> - max 50 per node
>
> - names must be one of those in os-traits library OR prefixed with 'CUSTOM_'
>
> For folks that wanted the node tagging feature, will this new node traits
> feature work for your use case? Should we support both tags and traits? I
> was wondering about e.g. using ironic standalone.
>
> Please feel free to comment in [2].
>
> Thanks in advance,
>
> --ruby
>
> [1]
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/nodes-tagging.html
>
> [2] https://review.openstack.org/#/c/504531/
>

Are tags and traits serving different purposes? One serves to help
scheduling/placement, while the other more or less aims at grouping
for the "end users"?
I understand that the code will be *very* similar, but who/what will be
the consumers/users?
I feel they won't be the same, and this could artificially limit the
feature's use due to technical/design "limitations" (must be in os-traits
or be prefixed with CUSTOM_).

Examples which I personally foresee:
* I might want to populate the ironic inventory from an external system
which would also inject the appropriate traits.
* I might also want some technical people to use/query ironic and
allow them to tag nodes based on their own needs, while not messing
with the traits part (as it's managed by an external system and will
influence scheduling later)

Let's not assume traits and tags have the same purpose and the same users.

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][infra][zuul][zuulv3][horizon][neutron] project-specific release job templates

2017-10-25 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-10-24 09:42:25 -0400:
> The neutron queens milestone release is held up right now because
> some of the repositories are using a release job template that isn't
> recognized by the validation code in the releases repository.  I'm
> trying to decide between adding the custom job template to the
> validation code or changing the release jobs for those neutron
> repositories to use the one that isn't custom. I think we'll have
> the same problem with horizon plugins using the custom job template
> set up for them.
> 
> It looks like the publish-to-pypi-neutron template modifies
> publish-to-pypi by adding openstack/neutron to the required-repositories
> list for the release-openstack-python job. That repository was also at
> some point added directly to the release-openstack-python job. So
> technically the extension via the template is not needed. The same
> applies to publish-to-pypi-horizon.
> 
> I see a few issues with keeping job template variants:
> 
> 1. Having multiple release job variants complicates the release
>repo validation logic. That logic was put in place after we
>discovered several projects without release jobs defined at all,
>so we definitely want to keep some level of validation in place.
> 
> 2. As we continue to make changes to the release jobs, we're going
>to have to consider whether to make those changes in multiple
>places, which seems error prone.
> 
> 3. As we find other projects with more dependencies, we're going
>to end up with more custom templates.
> 
> Those issues may be mitigated if we move the release job definitions
> into the releases repo as we have discussed, because it will be
> more obvious that we have multiple related templates that are
> variants of one another and we can make the relevant changes all
> together in one patch.
> 
> One alternative to keeping multiple variants, and defining more in
> the future, is to add required-repositories to the release-openstack-python
> job directly, as we discover they are needed. Of course that means
> we will clone repositories for some jobs that don't actually use
> them. I don't know how big of an issue that really is, but the issue
> of not knowing which instances of a job actually need a particular
> dependency seems like more of a justification for keeping separate
> templates than any runtime savings we would have by skipping a
> couple of extra calls to git clone.
> 
> It feels like we have two related but not necessarily dependent
> policy questions we need to answer before we decide how to proceed:
> 
> (a) Do we want to move the release job definitions from project-config
> and openstack-zuul-jobs to the releases repo?
> 
> (b) Do we want to have multiple release job templates due to custom
> dependencies (or any other reason)?
> 
> Thoughts?
> 
> Doug
> 

Based on the conversation in this thread and on IRC, we decided to
keep the job variants and update the releases repo so projects can
explicitly indicate that they are using those variants instead of
the default. See https://review.openstack.org/515119 for that change,
and there are related changes in the series for deliverable files
that needed to be updated.
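As a rough sketch of the kind of validation being kept, the check could look something like this (the function name and error format are illustrative only, not the actual code in the openstack/releases repo; the template names are the ones discussed in this thread):

```python
# Recognized release job templates, including the project-specific
# variants discussed above. Illustrative -- the real list lives in the
# releases repo validation code.
KNOWN_RELEASE_TEMPLATES = {
    'publish-to-pypi',
    'publish-to-pypi-neutron',   # variant adding openstack/neutron as a dependency
    'publish-to-pypi-horizon',   # variant for horizon plugins
}


def validate_release_jobs(repo_templates):
    """Return error strings for repos with no recognized release job template.

    repo_templates maps a repo name to the list of job templates it uses.
    """
    errors = []
    for repo, templates in sorted(repo_templates.items()):
        if not KNOWN_RELEASE_TEMPLATES.intersection(templates):
            errors.append('%s: no recognized release job template' % repo)
    return errors
```

Keeping the variants in the known set is what lets deliverable files explicitly declare which variant they use, per the change linked above.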

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Zane Bitter

On 20/10/17 20:20, Tony Breeds wrote:


Hi All,
 With the election behind us it's somewhat traditional to look at
some simple stats from the elections:

+--+---+---+---+
| Election | Electorate  (delta %) | Voted   (delta %) | Turnout %   (delta %) |
+--+---+---+---+
|  10/2013 |   1106  (nan) |   342   (nan) | 30.92   (nan) |
|  04/2014 |   1510  (  36.53) |   448   (  30.99) | 29.67   (  -4.05) |
|  10/2014 |   1893  (  25.36) |   506   (  12.95) | 26.73   (  -9.91) |
|  04/2015 |   2169  (  14.58) |   548   (   8.30) | 25.27   (  -5.48) |
|  10/2015 |   2759  (  27.20) |   619   (  12.96) | 22.44   ( -11.20) |
|  04/2016 |   3284  (  19.03) |   652   (   5.33) | 19.85   ( -11.51) |
|  10/2016 |   3517  (   7.10) |   801   (  22.85) | 22.78   (  14.71) |
|  04/2017 |   3191  (  -9.27) |   427   ( -46.69) | 13.38   ( -41.25) |
|  10/2017 |   2430  ( -23.85) |   420   (  -1.64) | 17.28   (  29.16) |
+--+---+---+---+

Election CIVS links
  10/2014: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_c105db929e6c11f4
  04/2015: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ef1379fee7b94688
  10/2015: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0
  04/2016: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_fef5cc22eb3dc27a
  10/2016: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_356e6c1b16904010
  04/2017: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_072c4cd7ff0673b5
  10/2017: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ce86063991ef8aae

I don't have a feel for why the Pike electorate decreased, but my gut
feeling is that it was organic drop-off, possibly due in part to the
shorter Ocata development cycle.  The Queens drop-off was due to a new[1]
membership API being available that meant we could validate Foundation
membership instead of using gerrit permissions as a proxy.


Can we maybe calculate the electorate size using the old method as well 
so that we can quantify how much of the dropoff (in theory it could be 
more than 100%) was due to the change in effective eligibility criteria 
vs. organic change in the number of contributors?


- ZB


I'd like to call out that with Pike we had a very dramatic decrease in
voter turnout both in absolute and relative terms.  As a community it's
worth trying to understand whether this is a problem and/or a trend that
needs to change.

Yours Tony.

[1] It wasn't that new it was also used during the PTL election[2]
[2] See:
 http://lists.openstack.org/pipermail/openstack-dev/2017-July/119786.html ; 
and
 http://lists.openstack.org/pipermail/openstack-dev/2017-August/120544.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-25 Thread Matt Riedemann

On 10/24/2017 9:57 PM, Tony Breeds wrote:

The timing of the next phase is uncertain right now but I'd like to take
care of:

- openstack/nova


Just a status update, but the final nova newton release is waiting on 
two changes:


1. https://review.openstack.org/#/c/514685/ - in the gate, should be 
merged today.


2. https://review.openstack.org/#/c/514339/ - we need a fix for that in 
master and then to get backported through to stable/newton. This is a 
fix for a regression introduced in pike which was unfortunately 
backported to newton, so I think we need to fix the regression we 
introduced into stable/newton before EOL.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 43

2017-10-25 Thread Balazs Gibizer

Hi,

Here is the status update / focus settings mail for w43.

Bugs

[High] https://bugs.launchpad.net/nova/+bug/1706563
TestRPC.test_cleanup_notifier_null fails with timeout
[High] https://bugs.launchpad.net/nova/+bug/1685333 Fatal Python error:
Cannot recover from stack overflow. - in py35 unit test job
The first bug is just a duplicate of the second. It seems the TestRPC
test suite has a way to end up in infinite recursion.
Related patch https://review.openstack.org/#/c/507239/ has been merged. 
It makes the test run with timeout and lock support, which might help 
with troubleshooting the bug. Based on logstash there has been no new 
appearance of this problem since then, so I think the related patch 
actually fixed it.



Versioned notification transformation
-
Here are the 3 patches for this week:
* https://review.openstack.org/#/c/467514 Transform keypair.import 
notification
* https://review.openstack.org/#/c/396225 Transform 
instance.trigger_crash_dump notification
* https://review.openstack.org/#/c/443764 use context mgr in 
instance.delete



Service create and destroy notifications


This is the only notification-heavy spec that was approved for Queens. 
It adds two new notifications, service.create and service.delete, similar 
to the already existing service.update versioned notification.


https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification
https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html


Small improvements
--

* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is a start of a longer patch series to deduplicate notification
sample data.
The series needs to be updated.

Weekly meeting
--
Next subteam meeting will be held on the 31st of October, Tuesday 17:00 UTC 
on openstack-meeting-4.

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171031T17


Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores

2017-10-25 Thread Jay Pipes

+1

On 10/24/2017 10:32 AM, Stephen Finucane wrote:

Hey,

I'm not actually sure what the protocol is for adding/removing cores to a
library project without a PTL, so I'm just going to put this out there: I'd
like to propose the following changes to the os-vif core team.

- Add 'nova-core'

   os-vif makes extensive use of objects and we've had a few hiccups around
   versioning and the like recently [1][2]. I'd like the expertise of some of the
   other nova cores here as we roll this out to projects other than nova, and I
   trust those not interested/knowledgeable in this area to stay away :)

- Remove Russell Bryant, Maxime Leroy

   These folks haven't been active on os-vif  [3][4] for a long time and I think
   they can be safely removed.

To the existing core team members, please respond with a yay/nay and we'll wait
a week before doing anything.

Cheers,
Stephen

[1] https://review.openstack.org/#/c/508498/
[2] https://review.openstack.org/#/c/509107/
[3] https://review.openstack.org/#/q/reviewedby:%22Russell+Bryant+%253Crbryant%
2540redhat.com%253E%22+project:openstack/os-vif
[4] https://review.openstack.org/#/q/reviewedby:%22Maxime+Leroy+%253Cmaxime.ler
oy%25406wind.com%253E%22+project:openstack/os-vif

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] retiring trove-integration?

2017-10-25 Thread Amrith Kumar
Agreed, please also see [1].

-amrith

[1] https://review.openstack.org/515084


On Wed, Oct 25, 2017 at 3:28 AM, Andreas Jaeger  wrote:

> Trove team,
>
> with the retirement of stable/newton, you can now retire
> trove-integration AFAIU.
>
> For information on what to do, see:
> https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
>
> I just pushed out two changes for stable/newton retirement that also
> take care of step 1 of the retiring process, see:
>
> https://review.openstack.org/#/c/514916/
> https://review.openstack.org/#/c/514918/
>
> Will you take care of the other steps, please?
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Blocking gate - do not recheck / rebase / approve any patch now (please)

2017-10-25 Thread Emilien Macchi
Status:

- The Heat Convergence switch *might* be a reason why overclouds time out
so much. Thomas proposed to disable it:
https://review.openstack.org/515077
- Every time a patch fails in the tripleo gate queue, it resets the
gate. I proposed to remove this common queue:
https://review.openstack.org/515070
- I cleared the patches in check and queue to make sure the 2 blockers
are tested and can be merged in priority. I'll keep an eye on it
today.

Any help is very welcome.

On Wed, Oct 25, 2017 at 5:58 AM, Emilien Macchi  wrote:
> We have been working very hard to get a package/container promotion
> (for 44 days now) and our blocker is
> https://review.openstack.org/#/c/513701/.
>
> Because the gate queue is huge, we decided to block the gate and kill
> all the jobs running there until we can get
> https://review.openstack.org/#/c/513701/ and its backport
> https://review.openstack.org/#/c/514584 (both are blocking the whole
> production chain).
> We hope to promote after these 2 patches, unless there is something
> else, in that case we would iterate to the next problem.
>
> We hope you understand and support us during this effort.
> So please do not recheck, rebase or approve any patch until further notice.
>
> Thank you,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [Openstack-operators] replace node "tags" with node "traits"

2017-10-25 Thread Ruby Loo
Sending again, I don't think it went to openstack-operators@.

--ruby

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [Openstack-operators] replace node "tags" with node "traits"

2017-10-25 Thread Loo, Ruby
Hello ironic'ers,

A while ago, we approved a spec to add node tag support to ironic [1]. The 
feature itself did not land yet (although some of the code has). Now that the 
(nova) community has come up with traits, ironic wants to support node traits, 
and there is a spec proposing that [2]. At the ironic node level, this is VERY 
similar to the node tag support, so the thought is to drop (not implement) the 
node tagging feature, since the node traits feature could be used instead. 
There are a few differences between tags and traits: "traits" mean something 
in OpenStack, and there are some restrictions on them:
- max 50 per node
- names must be one of those in os-traits library OR prefixed with 'CUSTOM_'
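For illustration, the two restrictions above can be checked with a few lines of Python. This is a hedged sketch, not ironic code: the exact regular expression for custom trait names is an assumption based on the stated rule (the authoritative check lives in os-traits and the placement API).

```python
import re

# Naming rule for custom traits: standard traits come from the os-traits
# library; anything else must be prefixed with CUSTOM_. The character set
# below (A-Z, 0-9, underscore) is an assumption based on the published rule.
CUSTOM_TRAIT_RE = re.compile(r"^CUSTOM_[A-Z0-9_]+$")

MAX_TRAITS_PER_NODE = 50


def is_valid_custom_trait(name):
    """Return True if `name` is a syntactically valid custom trait."""
    return bool(CUSTOM_TRAIT_RE.match(name))


def validate_node_traits(traits):
    """Reject trait lists that exceed the 50-per-node limit."""
    if len(traits) > MAX_TRAITS_PER_NODE:
        raise ValueError(
            "a node may have at most %d traits" % MAX_TRAITS_PER_NODE)
    return traits


print(is_valid_custom_trait("CUSTOM_GPU_CLASS_A"))  # True
print(is_valid_custom_trait("my-free-form-tag"))    # False: fine as a tag, not a trait
```

This is also where the tag/trait difference bites for standalone users: a free-form tag like "my-free-form-tag" is not expressible as a trait without renaming it into the CUSTOM_ namespace.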

For folks that wanted the node tagging feature, will this new node traits 
feature work for your use case? Should we support both tags and traits? I was 
wondering about e.g. using ironic standalone.

Please feel free to comment in [2].

Thanks in advance,
--ruby

[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/nodes-tagging.html
[2] https://review.openstack.org/#/c/504531/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-25 Thread Dmitry Tantsur

This is a bit offtopic, but a couple of comments on BFV.

On 10/25/2017 03:55 PM, Derek Higgins wrote:

On 25 October 2017 at 13:03, Dmitry Tantsur  wrote:

(ooops, I somehow missed this email. sorry!)

Hi Yolanda,

On 10/16/2017 11:06 AM, Yolanda Robla Mota wrote:


Hi
Recently i've been helping some customers in the boot from ISCSI feature.
So far everything was working, but we had a problem when booting the
deployment image.
It needed specifically a flag rd.iscsi.ibft=1 rd.iscsi.firmware=1 in the
grub commands. But as the generated deployment image doesn't contain these
flags, ISCSI was not booting properly. For other hardware setups, different
flags may be needed.



Note that we only support BFV in the form of booting from a cinder volume
officially. We haven't looked into iBFV in depth.


The solution was to manually execute a virt-customize on the deployment
image to hardcode these parameters.
I wonder if we can add some feature in Ironic to support it. We have
discussed about kernel parameters several times. But at this time, it
affects ISCSI booting. Not having a way in Ironic to customize these
parameters forces to manual workarounds.



This has been discussed several times, and every time the idea of making it
a generic feature was rejected. There is an option to configure kernel
parameters for PXE boot. However, apparently, you cannot add
rd.iscsi.firmware=1 if you don't use iSCSI, it will fail to boot (Derek told
me that, I did not check).

When I tried it I got this:
[  370.704896] dracut-initqueue[387]: Warning: iscsistart: Could not
get list of targets from firmware.

Perhaps we could alter iscsistart to not complain if there are no
targets attached and just continue, then simply always have
rd.iscsi.firmware=1 in the kernel parameters regardless of storage type.


I think we can fix ironic (the PXE boot interface) to pass this flag when using 
boot-from-volume, what do you think?





If your deployment only uses iSCSI - you can
modify [pxe]pxe_append_params in your ironic.conf to include it.


I'm not sure this would help: in the boot-from-cinder-volume case the
iPXE script simply attaches the target and then hands control over to
boot whatever is on the target. The kernel parameters used are already
baked into the grub config; iPXE doesn't alter them, and IPA isn't
involved at all.

If anybody is looking to try any of this out in tripleo, here are some
instructions to boot from cinder volume with ironic on a tripleo
overcloud
https://etherpad.openstack.org/p/tripleo-bfv


Nice! I think we should start moving it to tripleo-docs, when we figure out the 
problem above.








So can we reconsider the proposal to add kernel parameters there? It could
be a settable argument (driver_info/kernel_args), and then the IPA could set
the parameters properly on the image. Or any other option is welcome.
What are your thoughts there?



Well, we could probably do that *for IPA only*. Something like
driver_info/deploy_image_append_params. This is less controversial than
doing that for user instances, as we fully control the IPA boot. If you want
to work on it, let's start with a detailed RFE please.





Re: [openstack-dev] [Swift] SPDK uses Swift as a target system to support k-v store

2017-10-25 Thread We We
Hi, all
I am so sorry, but I might not be able to attend the IRC meeting (#openstack-swift)
tonight because of a bad Internet connection. I have tried many times but I still
cannot log in. If you have any suggestions on my topic ("SPDK uses Swift as a
target system to support k-v store"), please let me know. I would be grateful.
The detail is on https://trello.com/b/P5xBO7UR/things-to-do 

Thx,
Helloway
> On October 14, 2017, at 12:18 PM, We We wrote:
> 
> Hi, all
> 
> I am a newcomer to Swift. I have proposed a design for k-v store in the
> SPDK community. The proposal has been submitted at
> https://trello.com/b/P5xBO7UR/things-to-do
> Please spare some time to visit it. In this proposal, we would like to use
> Swift as a target system to support k-v store. Could you please share with me
> any ideas you have about it? I'd love to hear your professional thoughts.
> 
> Thx,
> 
> Helloway
> 



Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-25 Thread Derek Higgins
On 25 October 2017 at 13:03, Dmitry Tantsur  wrote:
> (ooops, I somehow missed this email. sorry!)
>
> Hi Yolanda,
>
> On 10/16/2017 11:06 AM, Yolanda Robla Mota wrote:
>>
>> Hi
>> Recently i've been helping some customers in the boot from ISCSI feature.
>> So far everything was working, but we had a problem when booting the
>> deployment image.
>> It needed specifically a flag rd.iscsi.ibft=1 rd.iscsi.firmware=1 in the
>> grub commands. But as the generated deployment image doesn't contain these
>> flags, ISCSI was not booting properly. For other hardware setups, different
>> flags may be needed.
>
>
> Note that we only support BFV in the form of booting from a cinder volume
> officially. We haven't looked into iBFV in depth.
>
>> The solution was to manually execute a virt-customize on the deployment
>> image to hardcode these parameters.
>> I wonder if we can add some feature in Ironic to support it. We have
>> discussed about kernel parameters several times. But at this time, it
>> affects ISCSI booting. Not having a way in Ironic to customize these
>> parameters forces to manual workarounds.
>
>
> This has been discussed several times, and every time the idea of making it
> a generic feature was rejected. There is an option to configure kernel
> parameters for PXE boot. However, apparently, you cannot add
> rd.iscsi.firmware=1 if you don't use iSCSI, it will fail to boot (Derek told
> me that, I did not check).
When I tried it I got this:
[  370.704896] dracut-initqueue[387]: Warning: iscsistart: Could not
get list of targets from firmware.

Perhaps we could alter iscsistart to not complain if there are no
targets attached and just continue, then simply always have
rd.iscsi.firmware=1 in the kernel parameters regardless of storage type.

> If your deployment only uses iSCSI - you can
> modify [pxe]pxe_append_params in your ironic.conf to include it.

I'm not sure this would help: in the boot-from-cinder-volume case the
iPXE script simply attaches the target and then hands control over to
boot whatever is on the target. The kernel parameters used are already
baked into the grub config; iPXE doesn't alter them, and IPA isn't
involved at all.
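As a concrete reference for the option quoted above: for a deployment where every node boots from iSCSI, the workaround would be a one-line change in ironic.conf. The flag values below are illustrative only, and per the caveat above it does not cover the boot-from-volume local-boot path:

```ini
[pxe]
# Appended to the kernel command line of every (i)PXE boot ironic performs,
# including the deploy ramdisk. Only safe if *all* nodes use iSCSI, since
# rd.iscsi.firmware=1 makes dracut fail on nodes without firmware targets.
pxe_append_params = nofb nomodeset vga=normal rd.iscsi.ibft=1 rd.iscsi.firmware=1
```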

If anybody is looking to try any of this out in tripleo, here are some
instructions to boot from cinder volume with ironic on a tripleo
overcloud
https://etherpad.openstack.org/p/tripleo-bfv

>
>
>> So can we reconsider the proposal to add kernel parameters there? It could
>> be a settable argument (driver_info/kernel_args), and then the IPA could set
>> the parameters properly on the image. Or any other option is welcome.
>> What are your thoughts there?
>
>
> Well, we could probably do that *for IPA only*. Something like
> driver_info/deploy_image_append_params. This is less controversial than
> doing that for user instances, as we fully control the IPA boot. If you want
> to work on it, let's start with a detailed RFE please.
>


[openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2017.10.25

2017-10-25 Thread Zhipeng Huang
Hi Team,

Sorry again that, due to my ongoing business trip, I cannot lead the
discussion for this week's team meeting. However, as we discussed in the IRC
channel, core members are more than welcome to host the meeting and lead
the discussions.

I will try to join on my cell phone :P

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [ceilometer] the workload partition will cause consumer disappeared

2017-10-25 Thread 李田清
I use 5.10.2.

> what's the oslo.messaging version? i feel like this is something on
> oslo.messaging and it does sound familiar to possibly it's fixed.
>
> --
> gord


[openstack-dev] [tripleo] Blocking gate - do not recheck / rebase / approve any patch now (please)

2017-10-25 Thread Emilien Macchi
We have been working very hard to get a package/container promotion
(for 44 days now) and our current blocker is
https://review.openstack.org/#/c/513701/.

Because the gate queue is huge, we decided to block the gate and kill
all the jobs running there until we can get
https://review.openstack.org/#/c/513701/ and its backport
https://review.openstack.org/#/c/514584 (both are blocking the whole
production chain).
We hope to promote after these 2 patches, unless there is something
else, in that case we would iterate to the next problem.

We hope you understand and support us during this effort.
So please do not recheck, rebase or approve any patch until further notice.

Thank you,
-- 
Emilien Macchi



Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-25 Thread Yolanda Robla Mota
> Hmm, are we talking about IPA or user images? IPA always PXE boots, no
> matter what boot_option we use. For user images with boot_option=local our
> only bet is using the ansible deploy interface, I think (please review it!)
>

User images: I'm talking about user images (specifically the
overcloud-full image). I know the ansible deploy driver will be an option, but
that would mean that, for booting from iSCSI on some specific hardware, the
ansible deploy driver is the only possible option. So I'm exploring other
options as well that could be simpler and would not restrict the driver choice.

>
>
> So, this is where the confusion probably is: "deployment image" is IPA.
> And by the way the IPA image used in TripleO is based on dracut.
>

That's specifically what made IPA agent work with ibft:

http://git.openstack.org/cgit/openstack/tripleo-image-elements/commit/?id=22a5e4e50f2bf2a71128614218ed208ee8f6f5c2

>
> If you're talking about instance or user image, then, as I mention above,
> the only option we have now is the ansible deploy interface.
>



-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com  M: +34605641639




Re: [openstack-dev] [ceilometer] the workload partition will cause consumer disappeared

2017-10-25 Thread gordon chung


On 25/10/17 03:25 AM, 李田清 wrote:
> I test newton 5.10.2, and in ceilometer agent notification, the log shows
> 2017-10-21 03:33:19.779 225636 ERROR root [-] Unexpected exception
> occurred 60 time(s)... retrying.
> 2017-10-21 03:33:19.779 225636 ERROR root Traceback (most recent call last):
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 250, in
> wrapper
> 2017-10-21 03:33:19.779 225636 ERROR root return infunc(*args, **kwargs)
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/base.py", line
> 203, in _runner
> 2017-10-21 03:33:19.779 225636 ERROR root batch_size=self.batch_size,
> batch_timeout=self.batch_timeout)
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/base.py", line
> 52, in wrapper
> 2017-10-21 03:33:19.779 225636 ERROR root msg = func(in_self,
> timeout=timeout_watch.leftover(True))
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line
> 286, in poll
> 2017-10-21 03:33:19.779 225636 ERROR root
> self._message_operations_handler.process()
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line
> 89, in process
> 2017-10-21 03:33:19.779 225636 ERROR root task()
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
> line 251, in acknowledge
> 2017-10-21 03:33:19.779 225636 ERROR root self._raw_message.ack()
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/kombu/message.py", line 88, in ack
> 2017-10-21 03:33:19.779 225636 ERROR root
> self.channel.basic_ack(self.delivery_tag)
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/amqp/channel.py", line 1583, in basic_ack
> 2017-10-21 03:33:19.779 225636 ERROR root self._send_method((60, 80), args)
> 2017-10-21 03:33:19.779 225636 ERROR root File
> "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 50, in
> _send_method
> 2017-10-21 03:33:19.779 225636 ERROR root raise
> RecoverableConnectionError('connection already closed')
> 2017-10-21 03:33:19.779 225636 ERROR root RecoverableConnectionError:
> connection already closed

what's the oslo.messaging version? i feel like this is something on 
oslo.messaging and it does sound familiar to possibly it's fixed.

-- 
gord


Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-25 Thread Dmitry Tantsur

On 10/25/2017 02:15 PM, Yolanda Robla Mota wrote:

Answering inline...


Note that we only support BFV in the form of booting from a cinder volume
officially. We haven't looked into iBFV in depth.


I have been testing that on the context of booting from SAN. It is at deploy 
time, in the context of TripleO, in the undercloud. At that point no cinder is 
there. We have been deploying with some ISCSI targets, and ironic consuming 
those instead of booting from local hard disk. I guess it is a different context.




This has been discussed several times, and every time the idea of making it
a generic feature was rejected. There is an option to configure kernel
parameters for PXE boot. However, apparently, you cannot add
rd.iscsi.firmware=1 if you don't use iSCSI, it will fail to boot (Derek told
me that, I did not check). If your deployment only uses iSCSI - you can
modify [pxe]pxe_append_params in your ironic.conf to include it.


No PXE boot possible. As it is in the context of TripleO, we use the 
boot_option=local. I know that it is possible to customize with pxe boot, but we 
cannot rely on it, but have the kernel parameter on local boot.


Hmm, are we talking about IPA or user images? IPA always PXE boots, no matter 
what boot_option we use. For user images with boot_option=local our only bet is 
using the ansible deploy interface, I think (please review it!)






Well, we could probably do that *for IPA only*. Something like
driver_info/deploy_image_append_params. This is less controversial than
doing that for user instances, as we fully control the IPA boot. If you want
to work on it, let's start with a detailed RFE please.


IPA boot works fine. Some time ago I added some patches on the ironic-agent 
element, to force the modprobe of several modules (including ibft, fcoe, etc...) 
That is because IPA image is not based on dracut, so it doesn't rely on parsing 
the cmdline from kernel boot and dracut hooks. The problem with kernel 
parameters is only happening now on the deployment image, and so far, the only 
working solution i could find, is execute a virt-customize on the image to add 
those.


So, this is where the confusion probably is: "deployment image" is IPA. And by 
the way the IPA image used in TripleO is based on dracut.


If you're talking about instance or user image, then, as I mention above, the 
only option we have now is the ansible deploy interface.






Thanks

-- 


Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com 
> M:
+34605641639 
>





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com  M: +34605641639 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-25 Thread Yolanda Robla Mota
Answering inline...


Note that we only support BFV in the form of booting from a cinder volume
> officially. We haven't looked into iBFV in depth.
>

I have been testing this in the context of booting from SAN. It happens at
deploy time, in the context of TripleO, in the undercloud; at that point no
cinder is there. We have been deploying with some iSCSI targets, and ironic
consuming those instead of booting from the local hard disk. I guess it is a
different context.

>
>
> This has been discussed several times, and every time the idea of making
> it a generic feature was rejected. There is an option to configure kernel
> parameters for PXE boot. However, apparently, you cannot add
> rd.iscsi.firmware=1 if you don't use iSCSI, it will fail to boot (Derek
> told me that, I did not check). If your deployment only uses iSCSI - you
> can modify [pxe]pxe_append_params in your ironic.conf to include it.
>

No PXE boot is possible: in the context of TripleO, we use
boot_option=local. I know that it is possible to customize the parameters with
PXE boot, but we cannot rely on that; we need the kernel parameters on local boot.

>
>
> Well, we could probably do that *for IPA only*. Something like
> driver_info/deploy_image_append_params. This is less controversial than
> doing that for user instances, as we fully control the IPA boot. If you
> want to work on it, let's start with a detailed RFE please.
>

IPA boot works fine. Some time ago I added some patches to the ironic-agent
element to force the modprobe of several modules (including ibft, fcoe,
etc.). That is because the IPA image is not based on dracut, so it doesn't
rely on parsing the cmdline from kernel boot and dracut hooks. The problem
with kernel parameters is only happening now on the deployment image, and
so far the only working solution I could find is to execute a virt-customize
on the image to add those.
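As a rough sketch of that virt-customize workaround (the image name, grub path, and flag set below are assumptions; adjust for your environment), the in-guest edit boils down to prepending the iSCSI flags to GRUB_CMDLINE_LINUX. The snippet performs the same sed edit on a throwaway copy of /etc/default/grub so the transformation can be inspected; inside a real image it would be wrapped in virt-customize --run-command followed by grub2-mkconfig:

```shell
# Flags needed so dracut brings up the iBFT/iSCSI session at boot time.
ISCSI_ARGS="rd.iscsi.ibft=1 rd.iscsi.firmware=1"

# In a real image this edit would be driven by virt-customize
# (hypothetical image name):
#   virt-customize -a overcloud-full.qcow2 \
#     --run-command "sed -i 's/^GRUB_CMDLINE_LINUX=\"/&rd.iscsi.ibft=1 rd.iscsi.firmware=1 /' /etc/default/grub" \
#     --run-command "grub2-mkconfig -o /boot/grub2/grub.cfg"

# Demonstrate the edit itself on a sample /etc/default/grub line.
grub_file=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="console=ttyS0 crashkernel=auto"' > "$grub_file"
sed -i "s/^GRUB_CMDLINE_LINUX=\"/&${ISCSI_ARGS} /" "$grub_file"
cat "$grub_file"
# -> GRUB_CMDLINE_LINUX="rd.iscsi.ibft=1 rd.iscsi.firmware=1 console=ttyS0 crashkernel=auto"
```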




-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com  M: +34605641639




Re: [openstack-dev] [tc] [all] TC Report 43

2017-10-25 Thread Flavio Percoco

On 24/10/17 19:26 +0100, Chris Dent wrote:

# TC Participation

At last Thursday's [office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-19.log.html#t2017-10-19T15:01:02)
Emilien asked, as a thought experiment, what people thought of the
idea of TC term limits. In typical office hours fashion, this quickly
went off into a variety of topics, some only tangentially related to
term limits.

To summarize, incompletely, the pro-reason is: Make room and
opportunities for new leadership. The con-reason is: Maintain a degree
of continuity.

This led to some discussion of the value of "history and baggage" and
whether such things are a keel or anchor in managing the nautical
metaphor of OpenStack. We did not agree, which is probably good
because somewhere in the middle is likely true.

Things then circled back to the nature of the TC: court of last resort
or something with a more active role in executive leadership. If the former,
who does the latter? Many questions related to significant change are
never resolved because it is not clear who does these things.

There's a camp that says "the people who step up to do it". In my experience
this is a statement made by people in a position of privilege and may
(intentionally or otherwise) exclude others or lead to results which have
unintended consequences.

This then led to meandering about the nature of facilitation.

(Like I said, a variety of topics.)

We did not resolve these questions except to confirm that the only way
to address these things is to engage with not just the discussion, but
also the work.


Sad I couldn't attend this office hour :(

I would love to see this idea explored further. Perhaps a mailing list
thread, then a resolution (depending on the ML thread feedback) and some f2f
conversations at the next PTG (or even the Forum).

Emilien, up to start the thread?
Flavio


# OpenStack Technical Blog

Josh Harlow showed up with [an
idea](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-19.log.html#t2017-10-19T18:19:30).
An OpenStack equivalent of the [kubernetes
blog](http://blog.kubernetes.io/), focused on interesting technology
in OpenStack. This came up again on
[Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-20.log.html#t2017-10-20T18:13:01).

It's clear that anyone and everyone _could_ write their own blogs and
syndicate to the [OpenStack planet](http://planet.openstack.org/) but
this doesn't have the same panache and potential cadence as an
official thing _might_. It comes down to people having the time. Eking
out the time for this blog, for example, can be challenging.

Since this is the second [week in a
row](https://anticdent.org/tc-report-42.html) that Josh showed up with
an idea, I wonder what next week will bring?


It might not be exactly the same, but I think the superuser blog could be a
good place to do some of this writing. There are posts of various kinds on that
blog: technical, community, news, etc. I wonder how many folks from the
community are aware of it and how many would be willing to contribute to it too.
Contributing to the superuser blog is quite simple, really.

http://superuser.openstack.org/

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [ironic] Ironic 3rd Party CI Meetings

2017-10-25 Thread Dmitry Tantsur

Hi!

Sorry, I've somehow lost this email. This slot will work for me personally.

On 09/26/2017 10:23 PM, rajini.kart...@dell.com wrote:


Hi all,

It was actually discussed in IRC, after the ironic meeting yesterday, that we
will have weekly/biweekly 3rd Party CI IRC meetings going forward.


The goal is to harden the third-party CI results for ironic, and to share ideas
to make it robust and trustworthy.


Would like to know if this time slot works for you all?

http://eavesdrop.openstack.org/#Ironic/neutron_Integration_team_meeting - Not in
use now


Weekly on Monday at 1600 UTC in #openstack-meeting-4 (IRC webclient)


Regards

Rajini



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystoneauth] [osc] [ironic] Usage of none loader in the CLI

2017-10-25 Thread Dmitry Tantsur

Hi!

Thanks for raising this.

On 10/19/2017 01:11 PM, Vladyslav Drok wrote:

Hi!

I'd like to discuss the usage of the new noauth plugin to keystoneauth, which 
was introduced in [1]. The docstring of the loader says it is intended to be 
used during adapter initialization along with endpoint_override. But what about 
the CLI usage in the OpenStack client? I was trying to make the none loader work
with the baremetal plugin, as part of testing [2], and encountered some problems,
which are hacked around in [3].


So, here are some questions:

1. Was it intended to be used in CLI at all, or should we still use the 
token_endpoint?

2. If it was intended, should we:
     2.1. do the hacks as in [3]?


I don't particularly like hardcoding an entrypoint name in the code here, to be 
honest.


     2.2. introduce endpoint as an option for the none loader, making it a bit
similar to token_endpoint, with the token hardcoded (and also adding a
get_endpoint method to the auth plugin, I think)?


I think that's the way to go: we should fix the none loader in keystoneauth.

     2.3. leave it as-is, allowing the usage of the none loader only by specifying
the parameters in clouds.yaml, as in [4] for example?


That's not great. We're getting rid of the "ironic" command in favour of
"openstack baremetal", but the inability to properly use a no-auth mode hurts
quite a few of our use cases (like Bifrost).
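For context, the clouds.yaml-based no-auth configuration that [4] points at has
roughly this shape (a sketch only; the endpoint value is illustrative, and the
exact key names should be checked against the linked Bifrost template):

```yaml
# Sketch of a clouds.yaml entry using the keystoneauth "none" loader.
# The endpoint URL here is illustrative.
clouds:
  bifrost:
    auth_type: none
    endpoint: http://127.0.0.1:6385
```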




[1] https://review.openstack.org/469863 
[2] https://review.openstack.org/359061
[3] https://review.openstack.org/512699
[4] 
https://github.com/openstack/bifrost/blob/21ca45937a9cb36c6f04073182bf2edea8acbd5d/playbooks/roles/bifrost-keystone-client-config/templates/clouds.yaml.j2#L17-L19


Thanks,
Vlad


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-25 Thread Dmitry Tantsur

(ooops, I somehow missed this email. sorry!)

Hi Yolanda,

On 10/16/2017 11:06 AM, Yolanda Robla Mota wrote:

Hi
Recently I've been helping some customers with the boot-from-iSCSI feature. So far
everything was working, but we had a problem when booting the deployment image.
It specifically needed the flags rd.iscsi.ibft=1 rd.iscsi.firmware=1 in the grub
commands. But as the generated deployment image doesn't contain these flags,
iSCSI was not booting properly. For other hardware setups, different flags may
be needed.


Note that we only officially support BFV in the form of booting from a cinder
volume. We haven't looked into iBFT in depth.


The solution was to manually execute virt-customize on the deployment image to
hardcode these parameters.
I wonder if we can add some feature in Ironic to support it. We have discussed
kernel parameters several times. But at this time, it affects iSCSI
booting. Not having a way in Ironic to customize these parameters forces
manual workarounds.


This has been discussed several times, and every time the idea of making it a
generic feature was rejected. There is an option to configure kernel parameters
for PXE boot. However, apparently, you cannot add rd.iscsi.firmware=1 if you
don't use iSCSI; booting will fail (Derek told me that, I did not check). If
your deployment only uses iSCSI, you can modify [pxe]pxe_append_params in your
ironic.conf to include it.
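For an iSCSI-only deployment, that workaround would look something like this in
ironic.conf (a sketch; the first three values are ironic's defaults for this
option, the two iSCSI flags are the addition):

```ini
[pxe]
# Extra kernel parameters appended for every PXE-booted ramdisk -
# which is why this only makes sense when all nodes boot from iSCSI.
pxe_append_params = nofb nomodeset vga=normal rd.iscsi.ibft=1 rd.iscsi.firmware=1
```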



So can we reconsider the proposal to add kernel parameters there? It could be a 
settable argument (driver_info/kernel_args), and then the IPA could set the 
parameters properly on the image. Or any other option is welcome.

What are your thoughts there?


Well, we could probably do that *for IPA only*. Something like 
driver_info/deploy_image_append_params. This is less controversial than doing 
that for user instances, as we fully control the IPA boot. If you want to work 
on it, let's start with a detailed RFE please.




Thanks

--

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com  M: +34605641639 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Preping for the stable/newton EOL

2017-10-25 Thread Dmitry Tantsur

On 10/25/2017 04:57 AM, Tony Breeds wrote:

On Tue, Oct 24, 2017 at 05:11:15PM +1100, Tony Breeds wrote:

On Fri, Oct 06, 2017 at 10:15:56AM +1100, Tony Breeds wrote:

On Wed, Oct 04, 2017 at 02:51:06PM +1100, Tony Breeds wrote:

I'll prep the list of repos that will be tagged EOL real soon now for
review.


As promised here's the list.  The format is new; it's grouped by project
team so it should be easy for teams to find repos they care about.

The only wart may be repos I couldn't find an owning team for, so check
the '-' as the owning team.

I'm proposing to EOL all projects that meet one or more of the following
criteria:

- The project is openstack-dev/devstack, openstack-dev/grenade or
   openstack/requirements (although these will be done last)
- The project has the 'check-requirements' job listed as a template in
   project-config:zuul/layout.yaml
- The project gates with either devstack or grenade jobs
- The project is listed in governance:reference/projects.yaml and is tagged
   with 'stable:follows-policy'.


Based on previous cycles I have opted out:
- 'openstack/group-based-policy'
- 'openstack/openstack-ansible' # So they can add EOL tags

Also based on recent emails with tripleo I have opted out:
- 'openstack/instack'
- 'openstack/instack-undercloud'
- 'openstack/os-apply-config'
- 'openstack/os-collect-config'
- 'openstack/os-net-config'
- 'openstack/os-refresh-config'
- 'openstack/puppet-tripleo'
- 'openstack/python-tripleoclient'
- 'openstack/tripleo-common'
- 'openstack/tripleo-heat-templates'
- 'openstack/tripleo-puppet-elements'
- 'openstack/tripleo-validations'
- 'openstack/tripleo-image-elements'
- 'openstack/tripleo-ui'


I've also removed the following repos as they have open release requests
for stable/newton
  - openstack/nova
  - openstack/ironic
  - openstack/openstack-ansible*

And at the request of the docs team I've omitted:
  - openstack/openstack-manuals

to facilitate 'badging' of the newton docs.


The repos listed in 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123910.html
have been retired.

There were a couple of issues:
- openstack/deb-python-os-cloud-config
- openstack/bareon
My clones of both had stale gerrit remotes; these have been corrected
manually.

The timing of the next phase is uncertain right now but I'd like to take
care of:

- openstack/nova
- openstack/ironic


The last ironic newton release was done; we're ready for EOL.


- openstack/openstack-ansible*
- openstack/openstack-manuals

before the summit if possible.

Thanks to the infra team for enabling this to happen today.

Tony.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Chris Dent

On Tue, 24 Oct 2017, Tony Breeds wrote:


On Mon, Oct 23, 2017 at 09:35:34AM +0100, Jean-Philippe Evrard wrote:


I agree, we should care about not repeating this Pike trend. It looks
like Queens is better in terms of turnout (see the amazing positive
delta!). However, I can't help but noticing that the trend for
turnouts is slowly reducing (excluding some outliers) since the
beginning of these stats.


Yup, the table makes that pretty visible.


I think we can't really make much in the way of conclusions about
the turnout data without comparing it with contributor engagement in
general. If many of the eligible voters have only barely crossed the
eligibility threshold (e.g., one commit) it's probably not
reasonable to expect them to care much about TC elections. We've
talked quite a bit lately that "casual contribution" is a growth
area.

A possibly meaningful correlation may be eligible voters to PTG
attendance to turnout, or, before the PTG, the number of people who got a
free pass to the summit, chose to use it, and voted.

Dunno. Obviously it would be great if more people voted.


Me? No ;P  I do think we need to work out *why* turnout is declining
before determining how to correct it.  I don't really think that we can
get that information though.  Community members that aren't engaged
enough to participate in the election(s) are also unlikely to
participate in a survey asking why they didn't participate ;P


This is a really critical failing in the way we typically gather data.
We have huge survivorship bias.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] Proposing changes in stable policy for installers

2017-10-25 Thread Flavio Percoco

On 24/10/17 15:35 -0700, Emilien Macchi wrote:

I figured that Sydney would be a great opportunity to have face2face
discussion around this topic, and I commit to be there and try to make
progress on this discussion.
I would love to get some people representing their deployment projects
and operators as well.

Please join us :
https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20456/what-do-operators-want-from-the-stable-policy
and probably 
https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20480/upstream-lts-releases


I'm interested in joining this discussion!
Flavio


Thanks,

On Tue, Oct 17, 2017 at 8:32 AM, Fox, Kevin M  wrote:

So, my $0.02.

A supported/recent version of a tool to install an unsupported version of
software is not a bad thing.

OpenStack has a bad reputation (somewhat deservedly) for being hard to upgrade. 
This has mostly gotten better over time but there are still a large number of 
older, unsupported deployments at this point.

Sometimes, burning down the cloud isn't an option and sometimes upgrading in 
place isn't an option either, and they are stuck on an unsupported version.

Being able to deploy with a more modern installer the same version of the cloud
you're running in production and shift the load to it (a sideways upgrade), but
then have an upgrade path provided by the tool, would be a great thing.

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Monday, October 16, 2017 3:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] 
Proposing changes in stable policy for installers

So my 0.02$

The problem with handling Newton goes beyond deployment tools. Yes, it's
popular to use, but if our dependencies (the openstack services
themselves) are unmaintained, so should we be. If we say "we support
Newton" in deployment tools, we make a kind of promise we can't keep. If
for example there is a CVE in Nova that affects Newton, there is nothing
we can do about it and our "support" is meaningless.

Not having LTS kind of model was issue for OpenStack operators
forever, but that's not problem we can solve in deployment tools
(although we are often asked for that because our communities are
largely operators and we're arguably closest projects to operators).

I, for one, think we should keep current stable policy, not make
exception for deployment tools, and address this issue across the
board. What Emilien is describing is real issue that hurts operators.

On 16 October 2017 at 15:38, Emilien Macchi  wrote:

On Mon, Oct 16, 2017 at 4:27 AM, Thierry Carrez  wrote:

Emilien Macchi wrote:

[...]
## Proposal

Proposal 1: create a new policy that fits for projects like installers.
I kicked-off something here: https://review.openstack.org/#/c/511968/
(open for feedback).
Content can be read here:
http://docs-draft.openstack.org/68/511968/1/check/gate-project-team-guide-docs-ubuntu-xenial/1a5b40e//doc/build/html/stable-branches.html#support-phases
Tag created here: https://review.openstack.org/#/c/511969/ (same,
please review).

The idea is really to not touch the current stable policy and to create a
new, more "relaxed" one that suits projects like installers well.

Proposal 2: change the current policy and be more relaxed for projects
like installers.
I haven't worked on this proposal while it was something I was
considering doing first, because I realized it could bring confusion
in which projects actually follow the real stable policy and the ones
who have exceptions.
That's why I thought having a dedicated tag would help to separate them.

Proposal 3: no change anywhere; projects like installers can't claim the
stability label (not my best option in my opinion).

Anyway, feedback is welcome, I'm now listening. If you work on Kolla,
TripleO, OpenStack-Ansible, PuppetOpenStack (or any project who has
this need), please get involved in the review process.


My preference goes to proposal 1, however rather than call it "relaxed"
I would make it specific to deployment/lifecycle or cycle-trailing
projects.

Ideally this policy could get adopted by any such project. The
discussion started on the review and it's going well, so let's see where
it goes :)


Thierry, when I read your comment on Gerrit I understand you prefer to
amend the existing policy and just make a note for installers (which
is I think the option #2 that I proposed). Can you please confirm
that?
So far I see option #1 has large consensus here, I'll wait for
Thierry's answer to continue to work on it.

Thanks for the feedback so far!
--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [stable] Preping for the stable/newton EOL

2017-10-25 Thread Jean-Philippe Evrard
On 25 October 2017 at 03:57, Tony Breeds  wrote:
> On Tue, Oct 24, 2017 at 05:11:15PM +1100, Tony Breeds wrote:
>> On Fri, Oct 06, 2017 at 10:15:56AM +1100, Tony Breeds wrote:
>> > On Wed, Oct 04, 2017 at 02:51:06PM +1100, Tony Breeds wrote:
>> > > I'll prep the list of repos that will be tagged EOL real soon now for
>> > > review.
>> >
>> > As promised here's the list.  The format is new; it's grouped by project
>> > team so it should be easy for teams to find repos they care about.
>> >
>> > The only wart may be repos I couldn't find an owning team for, so check
>> > the '-' as the owning team.
>> >
>> > I'm proposing to EOL all projects that meet one or more of the following
>> > criteria:
>> >
>> > - The project is openstack-dev/devstack, openstack-dev/grenade or
>> >   openstack/requirements (although these will be done last)
>> > - The project has the 'check-requirements' job listed as a template in
>> >   project-config:zuul/layout.yaml
>> > - The project gates with either devstack or grenade jobs
>> > - The project is listed in governance:reference/projects.yaml and is tagged
>> >   with 'stable:follows-policy'.
>> >
>> >
>> > Based on previous cycles I have opted out:
>> > - 'openstack/group-based-policy'
>> > - 'openstack/openstack-ansible' # So they can add EOL tags
>> >
>> > Also based on recent emails with tripleo I have opted out:
>> > - 'openstack/instack'
>> > - 'openstack/instack-undercloud'
>> > - 'openstack/os-apply-config'
>> > - 'openstack/os-collect-config'
>> > - 'openstack/os-net-config'
>> > - 'openstack/os-refresh-config'
>> > - 'openstack/puppet-tripleo'
>> > - 'openstack/python-tripleoclient'
>> > - 'openstack/tripleo-common'
>> > - 'openstack/tripleo-heat-templates'
>> > - 'openstack/tripleo-puppet-elements'
>> > - 'openstack/tripleo-validations'
>> > - 'openstack/tripleo-image-elements'
>> > - 'openstack/tripleo-ui'
>>
>> I've also removed the following repos as they have open release requests
>> for stable/newton
>>  - openstack/nova
>>  - openstack/ironic
>>  - openstack/openstack-ansible*
>>
>> And at the request of the docs team I've omitted:
>>  - openstack/openstack-manuals
>>
>> to facilitate 'badging' of the newton docs.
>
> The repos listed in 
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123910.html
> have been retired.
>
> There were a couple of issues:
> - openstack/deb-python-os-cloud-config
> - openstack/bareon
> My clones of both had stale gerrit remotes; these have been corrected
> manually.
>
> The timing of the next phase is uncertain right now but I'd like to take
> care of:
>
> - openstack/nova
> - openstack/ironic
> - openstack/openstack-ansible*
> - openstack/openstack-manuals
>
> before the summit if possible.
>
> Thanks to the infra team for enabling this to happen today.
>
> Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hello Tony,

We'd like to continue doing as before: updating all our upstream
projects to their EOL tags, then creating an EOL release based on our
roles that would successfully deploy those EOL upstream projects.
If any role needs a change due to the latest upstream changes, we need to be ready.

TL;DR: I'll submit a patch soon to bump our upstream roles to EOL,
once nova/ironic have their EOL tags :p

Best regards,
Jean-Philippe Evrard (evrardjp)

PS: The existing newton release to review was the last bump before
EOL, and was submitted during EOL week IIRC. It's good to keep it;
this way we are still following our release train while some EOL tags
are not yet issued.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] roles_data.yaml equivalent in containers

2017-10-25 Thread Steven Hardy
On Wed, Oct 25, 2017 at 6:41 AM, Abhishek Kane
 wrote:
>
> Hi,
>
>
>
> In THT I have an environment file and corresponding puppet service for 
> Veritas HyperScale.
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/veritas-hyperscale-controller.yaml
>
>
>
> This service needs a rabbitmq user; the hook for it is
> “veritas_hyperscale::hs_rabbitmq”:
>
> https://github.com/openstack/puppet-tripleo/blob/master/manifests/profile/base/rabbitmq.pp#L172
>
>
>
> In order to configure Veritas HyperScale, I add 
> “OS::TripleO::Services::VRTSHyperScale” to roles_data.yaml file and use 
> following command-
>
>
>
> # openstack overcloud deploy --templates -r /home/stack/roles_data.yaml -e 
> /usr/share/openstack-tripleo-heat-templates/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>  -e 
> /usr/share/openstack-tripleo-heat-templates/environments/veritas-hyperscale/cinder-veritas-hyperscale-config.yaml
>
>
>
> This command sets “veritas_hyperscale_controller_enabled” to true in
> hieradata, and all the hooks get called.
>
>
>
> I am trying to containerize Veritas HyperScale services. I used following 
> config file in quickstart-
>
> http://paste.openstack.org/show/624438/
>
>
>
> It has the environment files-
>
>   -e 
> {{overcloud_templates_path}}/environments/veritas-hyperscale/cinder-veritas-hyperscale-config.yaml
>
>   -e 
> {{overcloud_templates_path}}/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>
>
>
> But this itself doesn’t set “veritas_hyperscale_controller_enabled” to true 
> in hieradata and veritas_hyperscale::hs_rabbitmq doesn’t get called.
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/roles_data.yaml#L56
>
>
>
>
>
> How do I add OS::TripleO::Services::VRTSHyperScale in case of containers?

The roles_data.yaml approach you used previously should still work in
the case of containers, but the service template referenced will be
different (the files linked above still refer to the puppet service
template).

E.g.

https://github.com/openstack/tripleo-heat-templates/blob/master/environments/veritas-hyperscale/veritas-hyperscale-config.yaml#L18

defines:

OS::TripleO::Services::VRTSHyperScale:
../../puppet/services/veritas-hyperscale-controller.yaml

Which overrides this default mapping to OS::Heat::None:

https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-resource-registry-puppet.j2.yaml#L297

For containerized services, there are different resource_registry
mappings that refer to the templates in
tripleo-heat-templates/docker/services. e.g like this:

https://github.com/openstack/tripleo-heat-templates/blob/master/environments/services-docker/sahara.yaml

I think you'll need to create similar new service templates under
docker/services, then create some new environment files which map to
the new implementation and define the data needed to start the
containers.
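As a sketch, following the sahara example above, the new environment file would
map the service to a docker service template; the template path below is
hypothetical, since that template still needs to be written:

```yaml
# Hypothetical environments/services-docker/veritas-hyperscale.yaml
resource_registry:
  OS::TripleO::Services::VRTSHyperScale: ../../docker/services/veritas-hyperscale-controller.yaml
```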

You can get help with this in #tripleo on Freenode, and there are some
docs here:

https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/README.rst
https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/index.html

There was also a deep-dive recorded which is linked from here:

https://etherpad.openstack.org/p/tripleo-deep-dive-topics

Hope that helps somewhat?

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores

2017-10-25 Thread Moshe Levi


> -Original Message-
> From: Sahid Orentino Ferdjaoui [mailto:sferd...@redhat.com]
> Sent: Wednesday, October 25, 2017 11:22 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores
> 
> On Tue, Oct 24, 2017 at 03:32:15PM +0100, Stephen Finucane wrote:
> > Hey,
> >
> > I'm not actually sure what the protocol is for adding/removing cores
> > to a library project without a PTL, so I'm just going to put this out
> > there: I'd like to propose the following changes to the os-vif core team.
> >
> > - Add 'nova-core'
> >
> >   os-vif makes extensive use of objects and we've had a few hiccups around
> >   versionings and the likes recently [1][2]. I'd like the expertise of some of
> >   the other nova cores here as we roll this out to projects other than nova,
> >   and I trust those not interested/knowledgeable in this area to stay away :)
> >
> > - Remove Russell Bryant, Maxime Leroy
> >
> >   These folks haven't been active on os-vif [3][4] for a long time and I
> >   think they can be safely removed.
> 
> Indeed, they are not active. Seems to be reasonable.
+1
> 
> > To the existing core team members, please respond with a yay/nay and
> > we'll wait a week before doing anything.
> >
> > Cheers,
> > Stephen
> >
> > [1] https://review.openstack.org/#/c/508498/
> > [2] https://review.openstack.org/#/c/509107/
> > [3] https://review.openstack.org/#/q/reviewedby:%22Russell+Bryant+%253Crbryant%2540redhat.com%253E%22+project:openstack/os-vif
> > [4] https://review.openstack.org/#/q/reviewedby:%22Maxime+Leroy+%253Cmaxime.leroy%25406wind.com%253E%22+project:openstack/os-vif
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores

2017-10-25 Thread Sahid Orentino Ferdjaoui
On Tue, Oct 24, 2017 at 03:32:15PM +0100, Stephen Finucane wrote:
> Hey,
> 
> I'm not actually sure what the protocol is for adding/removing cores to a
> library project without a PTL, so I'm just going to put this out there: I'd
> like to propose the following changes to the os-vif core team.
> 
> - Add 'nova-core'
> 
>   os-vif makes extensive use of objects and we've had a few hiccups around
>   versionings and the likes recently [1][2]. I'd like the expertise of some of the
>   other nova cores here as we roll this out to projects other than nova, and I
>   trust those not interested/knowledgeable in this area to stay away :)
> 
> - Remove Russell Bryant, Maxime Leroy
> 
>   These folks haven't been active on os-vif  [3][4] for a long time and I 
> think
>   they can be safely removed.

Indeed, they are not active. Seems to be reasonable.

> To the existing core team members, please respond with a yay/nay and we'll 
> wait
> a week before doing anything.
> 
> Cheers,
> Stephen
> 
> [1] https://review.openstack.org/#/c/508498/
> [2] https://review.openstack.org/#/c/509107/
> [3] 
> https://review.openstack.org/#/q/reviewedby:%22Russell+Bryant+%253Crbryant%
> 2540redhat.com%253E%22+project:openstack/os-vif
> [4] 
> https://review.openstack.org/#/q/reviewedby:%22Maxime+Leroy+%253Cmaxime.ler
> oy%25406wind.com%253E%22+project:openstack/os-vif
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] retiring trove-integration?

2017-10-25 Thread Andreas Jaeger
Trove team,

with the retirement of stable/newton, you can now retire
trove-integration AFAIU.

For information on what to do, see:
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

I just pushed out two changes for stable/newton retirement that also
take care of step 1 of the retiring process, see:

https://review.openstack.org/#/c/514916/
https://review.openstack.org/#/c/514918/

Will you take care of the other steps, please?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] the workload partition will causeconsumer disappeared

2017-10-25 Thread 李田清
On 2017-10-23 09:57 PM, 李田清 wrote:
>   We test ceilometer workload partitioning, and find that even with one
> rabbitmq server, the ceilometer-pipe
>   queues will lose their consumers. Does anyone know about this?
>   I configure batch_size = 1, batch_timeout = 1,
> and pipeline_processing_queues = 1.
>   If anyone knows about this, please point it out. Thanks
and you see no errors in notification-agent? does it start with a 
consumer or is there never a consumer?


The error is as follows. I test newton 5.10.2, and in the ceilometer agent
notification log:
2017-10-21 03:33:19.779 225636 ERROR root [-] Unexpected exception occurred 60 time(s)... retrying.
2017-10-21 03:33:19.779 225636 ERROR root Traceback (most recent call last):
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 250, in wrapper
2017-10-21 03:33:19.779 225636 ERROR root return infunc(*args, **kwargs)
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/base.py", line 203, in _runner
2017-10-21 03:33:19.779 225636 ERROR root batch_size=self.batch_size, batch_timeout=self.batch_timeout)
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/base.py", line 52, in wrapper
2017-10-21 03:33:19.779 225636 ERROR root msg = func(in_self, timeout=timeout_watch.leftover(True))
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 286, in poll
2017-10-21 03:33:19.779 225636 ERROR root self._message_operations_handler.process()
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 89, in process
2017-10-21 03:33:19.779 225636 ERROR root task()
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 251, in acknowledge
2017-10-21 03:33:19.779 225636 ERROR root self._raw_message.ack()
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/kombu/message.py", line 88, in ack
2017-10-21 03:33:19.779 225636 ERROR root self.channel.basic_ack(self.delivery_tag)
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/amqp/channel.py", line 1583, in basic_ack
2017-10-21 03:33:19.779 225636 ERROR root self._send_method((60, 80), args)
2017-10-21 03:33:19.779 225636 ERROR root File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 50, in _send_method
2017-10-21 03:33:19.779 225636 ERROR root raise RecoverableConnectionError('connection already closed')
2017-10-21 03:33:19.779 225636 ERROR root RecoverableConnectionError: connection already closed

are you setting pipeline_processing_queues = 1 as a test? because that sort
of defeats the purpose of partitioning.
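
For reference, the options being discussed live in ceilometer.conf. The
values below are illustrative only, not recommendations (and exact defaults
vary by release); a sketch of a non-test configuration might look like:

```ini
[notification]
# enable workload partitioning across notification agents
workload_partitioning = true
# number of internal ceilometer-pipe-* queues; 1 defeats partitioning
pipeline_processing_queues = 4
# batch many samples per ack instead of one ack per message
batch_size = 100
batch_timeout = 5

[coordination]
# workload partitioning needs a tooz coordination backend
# (redis here is an example choice)
backend_url = redis://localhost:6379
```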


Yes, I am just testing the workload partitioning. If
pipeline_processing_queues = 4, there will be more
partitions for each pipeline.
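
To illustrate the idea (this is a conceptual sketch, not ceilometer's actual
implementation): workload partitioning hashes each pipeline onto one of
pipeline_processing_queues internal queues, so with only 1 queue every
pipeline lands on queue 0 and nothing is actually spread out. The function
name queue_for and the pipeline names are hypothetical.

```python
import hashlib

def queue_for(pipeline_name, num_queues):
    """Map a pipeline to a stable queue index in [0, num_queues)."""
    digest = hashlib.md5(pipeline_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_queues

pipelines = ["meter_source:meter_sink", "event_source:event_sink",
             "cpu_source:cpu_sink", "disk_source:disk_sink"]

# With 1 processing queue, all pipelines share queue 0;
# with 4, the work can spread across several queues.
for n in (1, 4):
    mapping = {p: queue_for(p, n) for p in pipelines}
    print("queues=%d -> used: %s" % (n, sorted(set(mapping.values()))))
```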

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev