Re: [openstack-dev] [mistral] timeout and retry

2018-04-26 Thread Vitalii Solodilov
> No matter what stage the task is at, if it’s still in RUNNING state, or
> FAILED but we know that the retry policy still hasn’t used all of its
> attempts, then the ‘timeout’ policy should force the task to fail.

Ok, then we have a bug, because the timeout policy doesn't force the task to
fail: it retries the task, and after that we have two actions running in
parallel.
https://github.com/openstack/mistral/blob/master/mistral/engine/policies.py#L537

27.04.2018, 07:50, "Renat Akhmerov" :

> Hi,
>
> I don’t clearly understand the problem you’re trying to point to.
>
> IMO, when using these two policies at the same time the ‘timeout’ policy
> should have higher priority. No matter what stage the task is at, if it’s
> still in RUNNING state, or FAILED but we know that the retry policy still
> hasn’t used all of its attempts, then the ‘timeout’ policy should force the
> task to fail. In the second case, when it’s FAILED but the retry policy is
> still in play, we need to leave some ‘force’ marker in the task state to
> make sure that there’s no need to retry it further.
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 24 Apr 2018, 05:00 +0700, Vitalii Solodilov , wrote:
> > Hi Renat, can you explain to me and Dougal how the timeout policy should
> > work with the retry policy?
> > I guess there is a bug right now.
> > The behaviour is something like this: https://ibb.co/hhm0eH
> > Example: https://review.openstack.org/#/c/563759/
> > Logs: http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083
> > Even if we fix this bug so that the task is not retried after a task
> > timeout, I don't understand which problem this combination of timeout
> > and retry solves.
> > Another problem: what about retrying a task via the Mistral API? The
> > problem is that the timeout delayed calls are not created.
> >
> > IMHO the combination of these policies should work like this:
> > https://ibb.co/fe5tzH
> > It is not a timeout per action, because when a task retries it moves to
> > some completed state and then back to RUNNING state. And it will work
> > fine with the with-items policy.
> > The main advantage is executor and RabbitMQ HA: I can specify a small
> > timeout, and if the executor dies the task is retried by timeout and a
> > new action is created.
> > The second is predictable behaviour: when I specify timeout: 10 and
> > retry.count: 5, I know that at most 5 actions will be created before the
> > SUCCESS state and that every action will execute for no longer than 10
> > seconds.
> >
> > --
> > Best regards,
> > Vitalii Solodilov

--
Best regards,

Vitalii Solodilov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] A mechanism to close stuck running executions

2018-04-26 Thread András Kövi
Hi Vitalii,
thanks for the reminder. I'd almost forgotten about it.
I've updated it with the changes we have been testing locally for months.
Looking forward to your comments!

Thanks,
Andras
Vitalii Solodilov  wrote (on Fri, 27 Apr 2018 at 3:31):

> Hi, Jozsef and Andras.
> Do you plan to finish this patch? https://review.openstack.org/#/c/527085
> I think stuck RUNNING executions are a very sensitive subject for
> Mistral.

> --
> Best regards,

> Vitalii Solodilov


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] timeout and retry

2018-04-26 Thread Renat Akhmerov
Hi,

I don’t clearly understand the problem you’re trying to point to.

IMO, when using these two policies at the same time the ‘timeout’ policy should
have higher priority. No matter what stage the task is at, if it’s still in
RUNNING state, or FAILED but we know that the retry policy still hasn’t used
all of its attempts, then the ‘timeout’ policy should force the task to fail.
In the second case, when it’s FAILED but the retry policy is still in play, we
need to leave some ‘force’ marker in the task state to make sure that there’s
no need to retry it further.
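
To make the intended semantics concrete, here is a rough sketch of the
behaviour in plain Python (Task, on_timeout and on_action_complete are
simplified stand-ins for illustration only, not the actual engine code in
policies.py):

    RUNNING, FAILED, SUCCESS = 'RUNNING', 'FAILED', 'SUCCESS'

    class Task(object):
        def __init__(self, retries_left=5):
            self.state = RUNNING
            self.retries_left = retries_left
            self.forced_failure = False   # the 'force' marker mentioned above

    def on_timeout(task):
        # Timeout policy: takes priority over retry.
        if task.state == RUNNING or (task.state == FAILED and task.retries_left > 0):
            task.state = FAILED
            task.forced_failure = True    # tell the retry policy to stop

    def on_action_complete(task, success):
        # Retry policy: only re-runs the task if the timeout hasn't fired.
        if success:
            task.state = SUCCESS
        elif task.retries_left > 0 and not task.forced_failure:
            task.retries_left -= 1
            task.state = RUNNING          # schedule another attempt
        else:
            task.state = FAILED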

Thanks

Renat Akhmerov
@Nokia

On 24 Apr 2018, 05:00 +0700, Vitalii Solodilov , wrote:
> Hi Renat, can you explain to me and Dougal how the timeout policy should
> work with the retry policy?
>
> I guess there is a bug right now.
> The behaviour is something like this: https://ibb.co/hhm0eH
> Example: https://review.openstack.org/#/c/563759/
> Logs:
> http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083
> Even if we fix this bug so that the task is not retried after a task
> timeout, I don't understand which problem this combination of timeout and
> retry solves.
> Another problem: what about retrying a task via the Mistral API? The
> problem is that the timeout delayed calls are not created.
>
> IMHO the combination of these policies should work like this:
> https://ibb.co/fe5tzH
> It is not a timeout per action, because when a task retries it moves to
> some completed state and then back to RUNNING state. And it will work
> fine with the with-items policy.
> The main advantage is executor and RabbitMQ HA: I can specify a small
> timeout, and if the executor dies the task is retried by timeout and a
> new action is created.
> The second is predictable behaviour: when I specify timeout: 10 and
> retry.count: 5, I know that at most 5 actions will be created before the
> SUCCESS state and that every action will execute for no longer than 10
> seconds.
>
> --
> Best regards,
>
> Vitalii Solodilov
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-26 Thread Emilien Macchi
We created a new board where we'll track the efforts for the all-in-one
installer:
https://trello.com/b/iAHhAgjV/tripleo-all-in-one-installer

Note: please do not use the containerized undercloud dashboard for these
tasks; it is a separate effort.
Feel free to join the board and feed the backlog!

Thanks,

On Thu, Apr 5, 2018 at 10:02 AM, Dan Prince  wrote:

> On Thu, Apr 5, 2018 at 12:24 PM, Emilien Macchi 
> wrote:
> > On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince  wrote:
> >
> >> Much of the work on this is already there. We've been using this stuff
> >> for over a year to dev/test the containerization efforts for a long
> >> time now (and thanks for your help with this effort). The problem I
> >> think is how it is all packaged. While you can use it today it
> >> involves some tricks (docker in docker), or requires you to use an
> >> extra VM to minimize the installation footprint on your laptop.
> >>
> >> Much of the remaining work here is really just about packaging and
> >> technical debt. If we put tripleoclient and heat-monolith into a
> >> container that solves much of the requirements problems and
> >> essentially gives you a container which can transform Heat templates
> >> to Ansible. From the ansible side we need to do a bit more work to
> >> minimize our dependencies (i.e. heat hooks). Using a virtual-env would
> >> be one option for developers if we could make that work. A lighter set
> >> of RPM packages would be another way to do it. Perhaps both...
> >> Then a smaller wrapper around these things (which I personally would
> >> like to name) to make it all really tight.
> >
> >
> > So if I summarize the discussion:
> >
> > - A lot of positive feedback about the idea and many use cases, which is
> > great.
> >
> > - Support for non-containerized services is not required, as long as we
> > provide a way to update containers with under-review patches for
> developers.
>
> I think we still desire some (basic no upgrades) support for
> non-containerized baremetal at this time.
>
> >
> > - We'll probably want to breakdown the "openstack undercloud deploy"
> process
> > into pieces
> > * start an ephemeral Heat container
>
> It already supports this if you don't use the --heat-native option;
> also, you can customize the container used for Heat via
> --heat-container-image. So we already have this! But rather than do
> this I personally prefer the container to have python-tripleoclient
> and heat-monolith in it. That way everything is in there to
> generate Ansible templates. If you just use Heat you are missing some
> of the pieces that you'd still have to install elsewhere on your host.
> Having them all be in one scoped container which generates Ansible
> playbooks from Heat templates is better IMO.
>
> > * create the Heat stack passing all requested -e's
> > * run config-download and save the output
> >
> > And then remove undercloud specific logic, so we can provide a generic
> way
> > to create the config-download playbooks.
>
> Yes. Let's remove some of the undercloud logic. But do note that most
> of the undercloud-specific logic is now in undercloud_config.py anyway,
> so this is mostly already on its way.
>
> > This generic way would be consumed by the undercloud deploy commands but
> > also by the new all-in-one wrapper.
> >
> > - Speaking of the wrapper, we will probably have a new one. Several names
> > were proposed:
> > * openstack tripleo deploy
> > * openstack talon deploy
> > * openstack elf deploy
>
> The wrapper could be just another set of playbooks that we give a
> name to... and perhaps put a CLI in front of as well.
>
> >
> > - The wrapper would work with deployed-server, so we would noop Neutron
> > networks and use fixed IPs.
>
> This would be configurable I think depending on which templates were
> used. Noop as a default for developer deployments but do note that
> some services like Neutron aren't going to work unless you have some
> basic network setup. Noop is useful if you prefer to do this manually,
> but our os-net-config templates are quite useful to automate things.
>
> >
> > - Investigate the packaging work: containerize tripleoclient and
> > dependencies, see how we can containerized Ansible + dependencies (and
> > eventually reduce them at strict minimum).
> >
> > Let me know if I missed something important, hopefully we can move things
> > forward during this cycle.
> > --
> > Emilien Macchi
>



-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client?

2018-04-26 Thread Mark Kirkwood



On 20/04/18 04:54, Dean Troyer wrote:

On Thu, Apr 19, 2018 at 7:51 AM, Doug Hellmann  wrote:

Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200:

Swift has had storage policies for a while now. These are enabled by
setting the 'X-Storage-Policy' header on a container.

It looks to me like this is not possible using openstack-client (even in
master branch) - while there is a 'set' operation for containers this
will *only* set  'Meta-*' type headers.

It seems to me that adding this would be highly desirable. Is it in the
pipeline? If not I might see how much interest there is at my end for
adding such - as (famous last words) it looks pretty straightforward to do.

I can't imagine why we wouldn't want to implement that and I'm not
aware of anyone working on it. If you're interested and have time,
please do work on the patch(es).

The primary thing that hinders Swift work like this is that OSC does not
use swiftclient, as it wasn't a standalone thing yet when I wrote that
bit (lifting much of the actual API code from swiftclient). We
decided a while ago to not add that dependency, and to drop the
OSC-specific object code and use the SDK when we start using the SDK for
everything else, after there is an SDK 1.0 release.

Moving forward on this today using either OSC's api.object code or the
SDK would be fine, with the same SDK caveat we have with Neutron:
since the SDK isn't 1.0 we may have to play catch-up and maintain multiple
SDK release compatibilities (which has happened at least twice).


Ok, understood. I've uploaded a small patch that adds policy
specification to 'container create' and adds some policy details display
to 'container show' and 'object store account show' [1]. It uses the
existing API design, but tries to get the display to look a little like
what the swift CLI provides (particularly for the account info).
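
For anyone who wants to poke at the underlying Swift API while the patch is
in review, a minimal Python sketch (using requests; the storage URL, token,
container name and 'gold' policy below are placeholders) looks roughly like
this:

    import requests

    # Placeholders: use your own storage URL and token from Keystone/Swift auth.
    storage_url = 'https://swift.example.com/v1/AUTH_myproject'
    headers = {'X-Auth-Token': 'AUTH_tk_placeholder'}

    # The storage policy can only be chosen when the container is created.
    create_headers = dict(headers, **{'X-Storage-Policy': 'gold'})
    resp = requests.put(storage_url + '/my-container', headers=create_headers)
    resp.raise_for_status()

    # A HEAD on the container reports the effective policy back.
    info = requests.head(storage_url + '/my-container', headers=headers)
    print(info.headers.get('X-Storage-Policy'))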


regards
Mark
[1] Gerrit topic is objectstorepolicies


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] A mechanism to close stuck running executions

2018-04-26 Thread Vitalii Solodilov
Hi, Jozsef and Andras.
Do you plan to finish this patch? https://review.openstack.org/#/c/527085
I think stuck RUNNING executions are a very sensitive subject for Mistral.

-- 
Best regards,

Vitalii Solodilov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-26 Thread Surya Singh
+1

Good contribution, Mark!


On Thu, Apr 26, 2018 at 9:01 PM, Jeffrey Zhang  wrote:
> Kolla core reviewer team,
>
> It is my pleasure to nominate mgoddard for kolla core team.
> Mark has been working both upstream and downstream with kolla and
> kolla-ansible for over two years, building bare metal compute clouds with
> ironic for HPC. He's been involved with OpenStack since 2014. He started
> the kayobe deployment project which complements kolla-ansible. He is
> also the most active non-core contributor for last 90 days[1]
> Consider this nomination a +1 vote from me
>
> A +1 vote indicates you are in favor of mgoddard as a candidate, a -1
> is a veto. Voting is open for 7 days until May 4th, or a unanimous
> response is reached or a veto vote occurs.
>
> [1] http://stackalytics.com/report/contribution/kolla-group/90
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Cheers
Surya

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Jimmy McArthur
No problem at all: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21780/manila-ops-feedback-running-at-scale-overcoming-barriers-to-deployment


Tom Barron wrote:
running at scale, barriers to deployment 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Tom Barron

Jimmy,

Also can we 's/barriers/overcoming barriers/' in the title of the 
manila session?


Thanks!

-- Tom

On 26/04/18 15:27 -0500, Jimmy McArthur wrote:

No problem.  Done :)


Colleen Murphy 
April 26, 2018 at 1:23 PM
Hi Jimmy,

I have a conflict on Thursday afternoon. Could I propose swapping 
these two sessions:


Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers 
to deployment

Thursday 1:50-2:30 Default Roles

I've gotten affirmation from Tom and Lance on the swap, though if 
this causes problems for anyone else I'm happy to retract this 
request.


Colleen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur 
April 25, 2018 at 4:07 PM
Hi everyone -

Please have a look at the Vancouver Forum schedule: https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing 
(also attached as a CSV) The proposed schedule was put together by 
two members from UC, TC and Foundation.


We do our best to avoid moving scheduled items around as it tends to 
create a domino effect, but we do realize we might have missed 
something.  The schedule should generally be set, but if you see a 
major conflict in either content or speaker availability, please 
email speakersupp...@openstack.org.


Thanks all,
Jimmy
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Changes to keystone-stable-maint members

2018-04-26 Thread Tony Breeds
On Tue, Apr 24, 2018 at 10:58:06AM -0700, Morgan Fainberg wrote:
> Hi,
> 
> I am proposing making some changes to the Keystone Stable Maint team.
> A lot of this is cleanup for contributors that have moved on from
> OpenStack. For the most part, I've been the only one responsible for
> Keystone Stable Maint reviews, and I'm not comfortable being this
> bottleneck
> 
> Removals
> 
> Dolph Matthews
> Steve Martinelli
> Brant Knudson
> 
> Each of these members has left or moved on from OpenStack, or, in the
> case of Brant, is less involved with Keystone (and I believe OpenStack as
> a whole).
> 
> Additions
> ===
> Lance Bragstad

Done.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] new tool for quickly fixing nits on gerrit reviews

2018-04-26 Thread Doug Hellmann
For a while now I've been encouraging folks to propose follow-up patches
to fix nits on proposed changes, rather than waiting ages for someone to
respond to a -1 for a little typo. Today I've released git-nit, a tool to
make doing that easier.

The idea is that you would run a command like:

  $ git nit https://review.openstack.org/#/c/564559/

to download that review into a new local sandbox, ready for your
follow-up patch.

There are more examples in the README.rst on github [1] (until the repo
is imported into our gerrit server [2]).

I released version 1.0.0 a few minutes ago, so you should be able
to pip install it. Please try it out and let me know what your
experience is like.

Doug

[1] https://github.com/dhellmann/git-nit
[2] https://review.openstack.org/564622

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Jimmy McArthur

No problem.  Done :)


Colleen Murphy 
April 26, 2018 at 1:23 PM
Hi Jimmy,

I have a conflict on Thursday afternoon. Could I propose swapping 
these two sessions:


Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers to 
deployment

Thursday 1:50-2:30 Default Roles

I've gotten affirmation from Tom and Lance on the swap, though if this 
causes problems for anyone else I'm happy to retract this request.


Colleen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur 
April 25, 2018 at 4:07 PM
Hi everyone -

Please have a look at the Vancouver Forum schedule: 
https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing 
(also attached as a CSV) The proposed schedule was put together by two 
members from UC, TC and Foundation.


We do our best to avoid moving scheduled items around as it tends to 
create a domino effect, but we do realize we might have missed 
something.  The schedule should generally be set, but if you see a 
major conflict in either content or speaker availability, please email 
speakersupp...@openstack.org.


Thanks all,
Jimmy
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-26 Thread Adam Spiers

Doug Hellmann  wrote:

Excerpts from Adam Spiers's message of 2018-04-25 18:15:42 +0100:

[BTW I hope it's not considered off-bounds for those of us who aren't
TC election candidates to reply within these campaign question threads
to responses from the candidates - but if so, let me know and I'll
shut up ;-) ]


Everyone should feel free to participate!


Jeremy Stanley  wrote:

Not only are responses from everyone in the community welcome (and
like many, I think we should be asking questions like this often
outside the context of election campaigning), but I wholeheartedly
agree with your stance on this topic and also strongly encourage you
to consider running for a seat on the TC in the future if you can
swing it.


Thanks both for your support!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Colleen Murphy
Hi Jimmy,

On Wed, Apr 25, 2018, at 11:07 PM, Jimmy McArthur wrote:
> Hi everyone -
> 
> Please have a look at the Vancouver Forum schedule: 
> https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing
>  
> (also attached as a CSV) The proposed schedule was put together by two 
> members from UC, TC and Foundation.
> 
> We do our best to avoid moving scheduled items around as it tends to 
> create a domino effect, but we do realize we might have missed 
> something.  The schedule should generally be set, but if you see a major 
> conflict in either content or speaker availability, please email 
> speakersupp...@openstack.org.

I have a conflict on Thursday afternoon. Could I propose swapping these two 
sessions:

Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers to deployment
Thursday 1:50-2:30 Default Roles

I've gotten affirmation from Tom and Lance on the swap, though if this causes 
problems for anyone else I'm happy to retract this request.

Colleen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Clark Boylan
On Thu, Apr 26, 2018, at 7:27 AM, Sean McGinnis wrote:
> On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote:
> > It's time to talk about the next steps in our migration from python
> > 2 to python 3.
> > 
> > [...]
> > 
> > 2. Change (or duplicate) all functional test jobs to run under
> >python 3.
> 
> As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All
> went well.
> 
> That made me realize something though - right now we have jobs that explicitly
> say py35, both for unit tests and functional tests. But I realized while
> setting up these test jobs that it works to just specify
> "basepython = python3" or run unit tests with "tox -e py3". Then with
> that, it just depends on whether the job runs on xenial or bionic as to
> whether the job is run with py35 or py36.
>
> It is less explicit, so I see some downside to that, but would it make
> sense to change jobs to drop the minor version to make it more flexible
> and easy to make these transitions?

One reason to use it would be local user simplicity. Rather than needing to
explicitly add new python3 releases to the default env list so that it does
what we want every year or two, we can just list py3,py2,linters in the default
list and get most of the way there for local users. Then we can continue to be
more specific in the CI jobs if that is desirable.

I do think we likely want to be explicit about the python versions we are using
in CI testing. This makes it clear to developers who may need to reproduce or
just understand why failures happen which platform is used. It also makes it
explicit that "openstack runs on $pythonversion".

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Jeremy Stanley
On 2018-04-26 13:17:33 -0400 (-0400), Paul Belanger wrote:
[...]
> This may be okay, and I will allow others to comment, but the main reason
> I am not a fan is that we can no longer infer the nodeset by looking
> at the job name. tox-py3 could be xenial or bionic.

This brings back a question we've struggled with for years: are we
testing against "Python X.Y" or are we testing against "Python as
provided by distro Z"? Depending on how you think about that, one
solution or the other is technically a more accurate reflection of
our choice here.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Paul Belanger
On Thu, Apr 26, 2018 at 09:27:31AM -0500, Sean McGinnis wrote:
> On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote:
> > It's time to talk about the next steps in our migration from python
> > 2 to python 3.
> > 
> > [...]
> > 
> > 2. Change (or duplicate) all functional test jobs to run under
> >python 3.
> 
> As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All
> went well.
> 
> That made me realize something though - right now we have jobs that explicitly
> say py35, both for unit tests and functional tests. But I realized setting up
> these test jobs that it works to just specify "basepython = python3" or run
> unit tests with "tox -e py3". Then with that, it just depends on whether the
> job runs on xenial or bionic as to whether the job is run with py35 or py36.
> 
> It is less explicit, so I see some downside to that, but would it make sense 
> to
> change jobs to drop the minor version to make it more flexible and easy to 
> make
> these transitions?
> 
I still think using tox-py35 / tox-py36 makes sense, as those jobs are already
set up to use the specific nodeset of ubuntu-xenial or ubuntu-bionic.  If we did
move to just tox-py3, it would actually result in more projects needing to add
to their .zuul.yaml files:

  - project:
      check:
        jobs:
          - tox-py35

  - project:
      check:
        jobs:
          - tox-py3:
              nodeset: ubuntu-xenial

This may be okay, and I will allow others to comment, but the main reason I am
not a fan is that we can no longer infer the nodeset by looking at the job name.
tox-py3 could be xenial or bionic.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Tim Bell
My worry with changing the default is that it would be like adding the 
following in /etc/environment,

alias ls=' rm -rf / --no-preserve-root'

i.e. an operation which was previously read-only now becomes irreversible.

We also have current use cases with Ironic where we are moving machines between 
projects by 'disowning' them to the spare pool and then reclaiming them (by 
UUID) into new projects with the same state.

However, other operators may feel differently which is why I suggest asking 
what people feel about changing the default.

In any case, changes in default behaviour need to be highly visible.

Tim

-Original Message-
From: "arkady.kanev...@dell.com" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 18:48
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

+1.
It would be good to also identify the use cases.
I am surprised that the node should be cleaned up automatically.
I would expect that to be a deliberate request from the administrator,
or maybe from the user when they "return" a node to the free pool after
bare metal usage.
Thanks,
Arkady

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, April 26, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>>  wrote:
 On 04/25/2018 04:26 PM, James Slagle wrote:
>
> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 

> wrote:
>>
>> Hi all,
>>
>> I'd like to restart conversation on enabling node automated 
>> cleaning by
>> default for the undercloud. This process wipes partitioning 
tables
>> (optionally, all the data) from overcloud nodes each time they 
>> move to
>> "available" state (i.e. on initial enrolling and after each tear 
>> down).
>>
>> We have had it disabled for a few reasons:
>> - it was not possible to skip time-consuming wiping if data from 
>> disks
>> - the way our workflows used to work required going between 
>> manageable
>> and
>> available steps several times
>>
>> However, having cleaning disabled has several issues:
>> - a configdrive left from a previous deployment may confuse 
>> cloud-init
>> - a bootable partition left from a previous deployment may take
>> precedence
>> in some BIOS
>> - an UEFI boot partition left from a previous deployment is 
likely to
>> confuse UEFI firmware
>> - apparently ceph does not work correctly without cleaning (I'll 
>> defer to
>> the storage team to comment)
>>
>> For these reasons we don't recommend having cleaning disabled, 
and I
>> propose
>> to re-enable it.
>>
>> It has the following drawbacks:
>> - The default workflow will require another node boot, thus 
becoming
>> several
>> minutes longer (incl. the CI)
>> - It will no longer be possible to easily restore a deleted 
overcloud
>> node.
>
>
> I'm trending towards -1, for these exact reasons you list as
> drawbacks. There has been no shortage of occurrences of users who 
have
> ended up with accidentally deleted overclouds. These are usually
> caused by user error or unintended/unpredictable Heat operations.
> Until we have a way to guarantee 

[openstack-dev] [all][api] POST /api-sig/news

2018-04-26 Thread Michael McCune
Greetings OpenStack community,

Today's meeting was quite short and saw a review of everyone's status
and the merging of one guideline.

We began by sharing our current work and plans for the near future.
Although everyone is on tight schedules currently, we discussed the
next steps for the work on the OpenAPI proposal [7] and elmiko has
mentioned that he will return to updating the microversion patch [8]
in the near future.

Next was our standard business of reviewing the frozen and open
guidelines. The guideline on cache-control headers, which had been
frozen last week, received no negative responses from the community,
so it was merged. You can find the link to the merged guideline in the
section below.

As we reviewed our bug status, the group agreed that at some point in
the near future we should take another pass at triaging our bugs. This
work will take place after the upcoming Vancouver forum.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://gist.github.com/elmiko/7d97fef591887aa0c594c3dafad83442
[8] https://review.openstack.org/#/c/444892/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Arkady.Kanevsky
+1.
It would be good to also identify the use cases.
I am surprised that the node should be cleaned up automatically.
I would expect that to be a deliberate request from the administrator,
or maybe from the user when they "return" a node to the free pool after
bare metal usage.
Thanks,
Arkady

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, April 26, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>>  wrote:
 On 04/25/2018 04:26 PM, James Slagle wrote:
>
> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
> wrote:
>>
>> Hi all,
>>
>> I'd like to restart conversation on enabling node automated 
>> cleaning by
>> default for the undercloud. This process wipes partitioning tables
>> (optionally, all the data) from overcloud nodes each time they 
>> move to
>> "available" state (i.e. on initial enrolling and after each tear 
>> down).
>>
>> We have had it disabled for a few reasons:
>> - it was not possible to skip time-consuming wiping if data from 
>> disks
>> - the way our workflows used to work required going between 
>> manageable
>> and
>> available steps several times
>>
>> However, having cleaning disabled has several issues:
>> - a configdrive left from a previous deployment may confuse 
>> cloud-init
>> - a bootable partition left from a previous deployment may take
>> precedence
>> in some BIOS
>> - an UEFI boot partition left from a previous deployment is likely to
>> confuse UEFI firmware
>> - apparently ceph does not work correctly without cleaning (I'll 
>> defer to
>> the storage team to comment)
>>
>> For these reasons we don't recommend having cleaning disabled, and I
>> propose
>> to re-enable it.
>>
>> It has the following drawbacks:
>> - The default workflow will require another node boot, thus becoming
>> several
>> minutes longer (incl. the CI)
>> - It will no longer be possible to easily restore a deleted overcloud
>> node.
>
>
> I'm trending towards -1, for these exact reasons you list as
> drawbacks. There has been no shortage of occurrences of users who have
> ended up with accidentally deleted overclouds. These are usually
> caused by user error or unintended/unpredictable Heat operations.
> Until we have a way to guarantee that Heat will never delete a node,
> or Heat is entirely out of the picture for Ironic provisioning, then
> I'd prefer that we didn't enable automated cleaning by default.
>
> I believe we had done something with policy.json at one time to
> prevent node delete, but I don't recall if that protected from both
> user initiated actions and Heat actions. And even that was not enabled
> by default.
>
> IMO, we need to keep "safe" defaults. Even if it means manually
> documenting that you should clean to prevent the issues you point out
> above. The alternative is to have no way to recover deleted nodes by
> default.


 Well, it's not clear what is "safe" here: protect people who explicitly
 delete their stacks or protect people who don't realize that a previous
 deployment may screw up their new one in a subtle way.
>>>
>>> The latter you can recover from, the former you can't if automated
>>> cleaning is true.
> 
> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a 
> reason to disable the 'rm' command :)
> 
>>>
>>> It's not just about people who explicitly delete their stacks (whether
>>> intentional or not). There could be user error (non-explicit) or
>>> 

Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-26 Thread James E. Blair
Clark Boylan  writes:

...

> I've since worked out a change that passes tempest using a global
> virtualenv installed devstack at
> https://review.openstack.org/#/c/558930/. This needs to be cleaned up
> so that we only check for and install the virtualenv(s) once and we
> need to handle mixed python2 and python3 environments better (so that
> you can run a python2 swift and python3 everything else).
>
> The other major issue we've run into is that nova file injection
> (which is tested by tempest) seems to require either libguestfs or
> nbd. libguestfs bindings for python aren't available on pypi and
> instead we get them from system packaging. This means if we want
> libguestfs support we have to enable system site packages when using
> virtualenvs. The alternative is to use nbd which apparently isn't
> preferred by nova and doesn't work under current devstack anyways.
>
> Why is this a problem? Well the new pip10 behavior that breaks
> devstack is pip10's refusal to remove distutils-installed
> packages. Distro packages by and large are distutils packaged which
> means if you mix system packages and pip installed packages there is a
> good chance something will break (and it does break for current
> devstack). I'm not sure that using a virtualenv with system site
> packages enabled will sufficiently protect us from this case (but we
> should test it further). Also it feels wrong to enable system packages
> in a virtualenv if the entire point is avoiding system python
> packages.
>
> I'm not sure what the best option is here but if we can show that
> system site packages with virtualenvs is viable with pip10 and people
> want to move forward with devstack using a global virtualenv we can
> work to clean up this change and make it mergeable.

Now that pip 10 is here and we've got things relatively stable, it's
probably time to revisit this.

I think we should continue to explore the route that Clark has opened
up.  This isn't an emergency because all of the current devstack/pip10
conflicts have been resolved; however, there's no guarantee that
we won't add a new package with a conflict (which may be even more
difficult to resolve) or even that a future pip won't take an even
harder line.

I believe that installing all in one virtualenv has the advantage of
behaving more like what is expected of a project in the current python
ecosystem, while still retaining the co-installability testing that we
get with devstack.

What I'm a bit fuzzy on is how this impacts devstack plugins or related
applications.  However, it seems to me that we ought to be able to
essentially define the global venv as part of the API and then plugins
can participate in it.  Perhaps that won't be able to be automatic?
Maybe we'll need to set this up and then all devstack plugins will need
to change in order to use it?  If so, hopefully we'll be able to export
the functions needed to make that easy.
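
For those who haven't looked at Clark's change, the core mechanism under
discussion is roughly the following (a minimal Python sketch only, not
devstack's actual implementation; the /opt/stack/venv path, the constraints
file name and the 'keystone' package are just example placeholders):

    import subprocess
    import venv

    # One global virtualenv for everything devstack installs.
    # system_site_packages=True keeps distro-provided bindings such as
    # python-libguestfs importable from inside the venv.
    venv.create('/opt/stack/venv', system_site_packages=True, with_pip=True)

    # All subsequent installs go through the venv's pip, so pip 10 never has
    # to touch (or refuse to remove) distutils-installed distro packages.
    subprocess.check_call([
        '/opt/stack/venv/bin/pip', 'install',
        '-c', 'upper-constraints.txt',   # from the openstack/requirements repo
        'keystone',
    ])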

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install

2018-04-26 Thread Marcin Juszkiewicz
On 18.04.2018 at 04:48, Jeffrey Zhang wrote:

> Is this expected? and how could we fix this?

I posted a workaround: https://review.openstack.org/#/c/564552/

But this should be fixed in networking-odl (imho).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Tim Bell
How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>>  wrote:
 On 04/25/2018 04:26 PM, James Slagle wrote:
>
> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
> wrote:
>>
>> Hi all,
>>
>> I'd like to restart conversation on enabling node automated 
>> cleaning by
>> default for the undercloud. This process wipes partitioning tables
>> (optionally, all the data) from overcloud nodes each time they 
>> move to
>> "available" state (i.e. on initial enrolling and after each tear 
>> down).
>>
>> We have had it disabled for a few reasons:
>> - it was not possible to skip time-consuming wiping if data from 
>> disks
>> - the way our workflows used to work required going between 
>> manageable
>> and
>> available steps several times
>>
>> However, having cleaning disabled has several issues:
>> - a configdrive left from a previous deployment may confuse 
>> cloud-init
>> - a bootable partition left from a previous deployment may take
>> precedence
>> in some BIOS
>> - an UEFI boot partition left from a previous deployment is likely to
>> confuse UEFI firmware
>> - apparently ceph does not work correctly without cleaning (I'll 
>> defer to
>> the storage team to comment)
>>
>> For these reasons we don't recommend having cleaning disabled, and I
>> propose
>> to re-enable it.
>>
>> It has the following drawbacks:
>> - The default workflow will require another node boot, thus becoming
>> several
>> minutes longer (incl. the CI)
>> - It will no longer be possible to easily restore a deleted overcloud
>> node.
>
>
> I'm trending towards -1, for these exact reasons you list as
> drawbacks. There has been no shortage of occurrences of users who have
> ended up with accidentally deleted overclouds. These are usually
> caused by user error or unintended/unpredictable Heat operations.
> Until we have a way to guarantee that Heat will never delete a node,
> or Heat is entirely out of the picture for Ironic provisioning, then
> I'd prefer that we didn't enable automated cleaning by default.
>
> I believe we had done something with policy.json at one time to
> prevent node delete, but I don't recall if that protected from both
> user initiated actions and Heat actions. And even that was not enabled
> by default.
>
> IMO, we need to keep "safe" defaults. Even if it means manually
> documenting that you should clean to prevent the issues you point out
> above. The alternative is to have no way to recover deleted nodes by
> default.


 Well, it's not clear what is "safe" here: protect people who explicitly
 delete their stacks or protect people who don't realize that a previous
 deployment may screw up their new one in a subtle way.
>>>
>>> The latter you can recover from, the former you can't if automated
>>> cleaning is true.
> 
> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a 
> reason to disable the 'rm' command :)
> 
>>>
>>> It's not just about people who explicitly delete their stacks (whether
>>> intentional or not). There could be user error (non-explicit) or
>>> side-effects triggered by Heat that could cause nodes to get deleted.
> 
> If we have problems with Heat, we should fix Heat or stop using it. What 
> you're saying is essentially "we prevent ironic from doing the right 
> thing because we're using a tool that can invoke 'rm -rf /' at a wrong 
> moment."
> 
>>>
>>> You couldn't recover from those scenarios if automated cleaning were
>>> true. Whereas you could always fix a deployment error by opting in to
>>> do an automated clean. Does Ironic 

Re: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer

2018-04-26 Thread Logan V.
+2!

On Thu, Apr 26, 2018 at 10:20 AM, Carter, Kevin  wrote:
> +2 from me!
>
>
> --
>
> Kevin Carter
> IRC: Cloudnull
>
> On Wed, Apr 25, 2018 at 4:06 AM, Markos Chandras  wrote:
>>
>> On 24/04/18 16:05, Jean-Philippe Evrard wrote:
>> > Hi everyone,
>> >
>> > I’d like to propose Mohammed Naser [1] as a core reviewer for
>> > OpenStack-Ansible.
>> >
>>
>> +2
>>
>> --
>> markos
>>
>> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
>> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Designate] Plan for OSM

2018-04-26 Thread Ben Nemec



On 04/25/2018 11:31 PM, da...@vn.fujitsu.com wrote:

Hi folks,

We tested and completed our OVO migration process in the Queens cycle.
Now we can continue with the OSM implementation for Designate.
We have already pushed some patches related to OSM [1] and they are ready
for review.


Out of curiosity, what does OSM stand for?  Based on the patches it 
seems related to rolling upgrades, but a quick glance at them doesn't 
make it obvious to me what's going on.  Thanks.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-26 Thread Marcin Juszkiewicz
On 26.04.2018 at 17:31, Jeffrey Zhang wrote:
> Kolla core reviewer team,
> 
> It is my pleasure to nominate mgoddard for kolla core team.

+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-26 Thread Christian Berendt
+1

> On 26. Apr 2018, at 17:31, Jeffrey Zhang  wrote:
> 
> Kolla core reviewer team,
> 
> It is my pleasure to nominate mgoddard for kolla core team.
>
> Mark has been working both upstream and downstream with kolla and
> kolla-ansible for over two years, building bare metal compute clouds with
> ironic for HPC. He's been involved with OpenStack since 2014. He started
> the kayobe deployment project which complements kolla-ansible. He is
> also the most active non-core contributor for last 90 days[1]
>
> Consider this nomination a +1 vote from me
>
> A +1 vote indicates you are in favor of mgoddard as a candidate, a -1
> is a veto. Voting is open for 7 days until May 4th, or a unanimous
> response is reached or a veto vote occurs.
> 
> [1] http://stackalytics.com/report/contribution/kolla-group/90
> 
> -- 
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Managing Director: Christian Berendt
Registered office: Stuttgart
Register court: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Ben Nemec



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:

Answering to both James and Ben inline.

On 04/25/2018 05:47 PM, Ben Nemec wrote:



On 04/25/2018 10:28 AM, James Slagle wrote:
On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
 wrote:

On 04/25/2018 04:26 PM, James Slagle wrote:


On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
wrote:


Hi all,

I'd like to restart conversation on enabling node automated 
cleaning by

default for the undercloud. This process wipes partitioning tables
(optionally, all the data) from overcloud nodes each time they 
move to
"available" state (i.e. on initial enrolling and after each tear 
down).


We have had it disabled for a few reasons:
- it was not possible to skip time-consuming wiping if data from 
disks
- the way our workflows used to work required going between 
manageable

and
available steps several times

However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse 
cloud-init

- a bootable partition left from a previous deployment may take
precedence
in some BIOS
- an UEFI boot partition left from a previous deployment is likely to
confuse UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll 
defer to

the storage team to comment)

For these reasons we don't recommend having cleaning disabled, and I
propose
to re-enable it.

It has the following drawbacks:
- The default workflow will require another node boot, thus becoming
several
minutes longer (incl. the CI)
- It will no longer be possible to easily restore a deleted overcloud
node.



I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.



Well, it's not clear what is "safe" here: protect people who explicitly
delete their stacks or protect people who don't realize that a previous
deployment may screw up their new one in a subtle way.


The latter you can recover from, the former you can't if automated
cleaning is true.


Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a 
reason to disable the 'rm' command :)




It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.


If we have problems with Heat, we should fix Heat or stop using it. What 
you're saying is essentially "we prevent ironic from doing the right 
thing because we're using a tool that can invoke 'rm -rf /' at a wrong 
moment."




You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of it a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?


It may be possible to figure out if a node was ever cleaned.
But then we'll force operators to invoke cleaning manually, right? It
will work, but that's another step in the default workflow. Are you okay
with it?




Is there a way to only do cleaning right before a node is deployed?  
If you're about to write a new image to the disk then any data there 
is forfeit anyway. Since the concern is old data on the disk messing 
up subsequent deploys, it doesn't really matter whether you clean it 
right after it's deleted or right before it's deployed, but the latter 
leaves the data intact for longer in case a mistake was made.


If that's not possible then consider this an RFE. :-)


It's a good idea, but it may cause problems with rebuilding instances. 
Rebuild is essentially a re-deploy of the OS; users may not expect the 
whole disk to be wiped.


Also it's unclear whether we want to write additional features to work 
around disabled cleaning.


No matter how good the tooling gets, user error will always be a thing. 
Someone will scale down the wrong node or something similar.  I think 
there's value to allowing recovery from mistakes.  We all make them. :-)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread James Slagle
On Thu, Apr 26, 2018 at 11:24 AM, Dmitry Tantsur  wrote:

> Sure, but how do people know if they want it? Okay, if they use Ceph, they
> have to. Then.. mm.. "if you have multiple disks and you're not sure what's
> on them, please clean"? It may work, I wonder how many people will care to
> follow it though.

Yes, this sounds pretty reasonable to me.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-04-26 Thread Jeffrey Zhang
Kolla core reviewer team,

It is my pleasure to nominate mgoddard for the kolla core team.

Mark has been working both upstream and downstream with kolla and
kolla-ansible for over two years, building bare metal compute clouds with
ironic for HPC. He's been involved with OpenStack since 2014. He started
the kayobe deployment project, which complements kolla-ansible. He is
also the most active non-core contributor for the last 90 days [1].

Consider this nomination a +1 vote from me.

A +1 vote indicates you are in favor of mgoddard as a candidate; a -1
is a veto. Voting is open for 7 days, until May 4th, or until a unanimous
response is reached or a veto vote occurs.

[1] http://stackalytics.com/report/contribution/kolla-group/90

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Dmitry Tantsur

On 04/26/2018 05:12 PM, James Slagle wrote:

On Thu, Apr 26, 2018 at 10:24 AM, Dmitry Tantsur  wrote:

Answering to both James and Ben inline.


On 04/25/2018 05:47 PM, Ben Nemec wrote:




On 04/25/2018 10:28 AM, James Slagle wrote:


On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
wrote:


On 04/25/2018 04:26 PM, James Slagle wrote:
Well, it's not clear what is "safe" here: protect people who explicitly
delete their stacks or protect people who don't realize that a previous
deployment may screw up their new one in a subtle way.



The latter you can recover from, the former you can't if automated
cleaning is true.



Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason
to disable the 'rm' command :)


This is a really disingenuous comparison. If you really want to
compare these things with what you're proposing, then it would be to
make --no-preserve-root the default with rm. Which it is not.


If we really go down this path, what TripleO does right now is removing the 'rm' 
command by default and saying "well, you can install it back, if you realize you 
cannot work without it" :)








It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.



If we have problems with Heat, we should fix Heat or stop using it. What
you're saying is essentially "we prevent ironic from doing the right thing
because we're using a tool that can invoke 'rm -rf /' at a wrong moment."


Agreed on the Heat point, and once/if we're there, I'd probably not
object to making automated clean the default.

I disagree on how you characterized what I'm saying. I'm not proposing
to prevent Ironic from doing the right thing. If people want to use
automated cleaning, they can. Nothing will prevent that. It just
shouldn't be the default.


It's not about "want to use". It's about "we don't guarantee the correct 
behavior in presence of previous deployments on non-root disks" and "if you use 
ceph, you must use cleaning".








You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of whether a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?



It may be possible to figure out if a node was ever cleaned. But 
then we'll force operators to invoke cleaning manually, right? It will work,
but that's another step on the default workflow. Are you okay with it?


I would be ok with it. But I don't even characterize it as a
completely necessary step on the default workflow. It fixes some
issues as you've pointed out, but also comes with a cost. What we're
discussing is whether it's the default or not. If it is not true by
default, then we wouldn't make it a required step in the default
workflow to make sure it's done. It'd be documented as choice.



Sure, but how do people know if they want it? Okay, if they use Ceph, they have 
to. Then.. mm.. "if you have multiple disks and you're not sure what's on them, 
please clean"? It may work, I wonder how many people will care to follow it though.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Thierry Carrez
Matt Riedemann wrote:
> On 4/25/2018 4:07 PM, Jimmy McArthur wrote:
>> Please have a look at the Vancouver Forum schedule:
>> https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing
>> (also attached as a CSV) The proposed schedule was put together by two
>> members from UC, TC and Foundation.
>>
>> We do our best to avoid moving scheduled items around as it tends to
>> create a domino effect, but we do realize we might have missed
>> something.  The schedule should generally be set, but if you see a
>> major conflict in either content or speaker availability, please email
>> speakersupp...@openstack.org.
> 
> Two questions:
> 
> 1. What does the yellow on the pre-emptible instances cell mean?

Those were two sessions submitted after the deadline that the selection
committee decided to keep. They were highlighted in yellow so that the
committee could find them.

> 2. Just from looking at this, it looks like there were far fewer
> submissions for forum topics than actual slots available, so basically
> everything was approved (unless it was an obvious duplicate or not
> appropriate for a forum session), is that right?

Yes, there were fewer submissions, but they were all actually quite good.
Encouraging teams to go through a round of brainstorming before
submitting yields a lot fewer duplicates and crazy sessions.

We also had ample space (3 parallel rooms for 4 days, with only one day
of keynotes), more than we used to have. We decided to use the 3rd room
as a room available to schedule follow-up sessions, in case 40 min does
not cut it. More details on that later once the schedule is approved.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer

2018-04-26 Thread Carter, Kevin
+2 from me!


--

Kevin Carter
IRC: Cloudnull

On Wed, Apr 25, 2018 at 4:06 AM, Markos Chandras  wrote:

> On 24/04/18 16:05, Jean-Philippe Evrard wrote:
> > Hi everyone,
> >
> > I’d like to propose Mohammed Naser [1] as a core reviewer for
> OpenStack-Ansible.
> >
>
> +2
>
> --
> markos
>
> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-26 Thread Boden Russell
On 4/25/18 10:13 PM, Shu M. wrote:
> Hi folks,
> 
>> unwinding things
>> 
>>
>> There are a few different options, but it's important to keep in mind
>> that we ultimately want all of the following:
>>
>> * The code works
>> * Tests can run properly in CI
>> * "Depends-On" works in CI so that you can test changes cross-repo
>> * Tests can run properly locally for developers
>> * Deployment requirements are accurately communicated to deployers
> 
> One more thing:
> * Test environments between CI and local should be as similar as possible.
> 
> To run tests successfully both in CI and locally, I have tried adding a new
> testenv for local runs into tox.ini
> (https://review.openstack.org/#/c/564099/4/tox.ini) as an alternative
> solution last night; this would be the same as adding a new requirements.txt
> for local checks. This seems to run fine, but it might introduce differences
> between the CI and local environments. So I cannot conclude this is the
> best way for now.

I'd like to echo this point from a neutron plugin (project) perspective.

While we can move all our inter-project dependencies to requirements now
[1], this does not address how we can run tox targets or devstack
locally with master branches (to test changes locally before submitting
to gate).

To handle running tox locally, we've introduced new targets that
manually install the inter-project dependencies [2] in editable mode.
While this works, IMHO it's not optimal, and furthermore it requires
"special" steps if you want to add some changes to those editable
projects and run with them.
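
For readers unfamiliar with the pattern, such a target looks roughly like the
following (a simplified sketch with hypothetical naming, not the exact
contents of [2]):

    [testenv:dev]
    # like the regular unit test env, but with neutron installed from git
    # master in editable mode so that cross-project changes are picked up
    commands =
        pip install -q -e "git+https://git.openstack.org/openstack/neutron@master#egg=neutron"
        stestr run {posargs}

The editable install is exactly what makes the "special" steps necessary: to
test against a local neutron checkout carrying your own changes, you have to
point pip at that checkout instead of the git URL.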

And we've done something similar for devstack [3].

Finally, we also have some periodic jobs used to pre-validate our shared
neutron-lib project using master branches as defined by the
periodic-jobs-with-neutron-lib-master template. Certainly we want to
keep these working.


Frankly, it's been a bit of a cat-and-mouse game to keep up with the
infra/zuul changes in the past 2 releases, so it's possible what we've
done could be improved upon. If that's the case please do let me know so
we can work towards an optimized approach.

Thanks


[1] https://review.openstack.org/#/c/554292/
[2] https://review.openstack.org/#/c/555005/5/tox.ini
[3] https://review.openstack.org/#/c/555005/5/devstack/lib/nsx_common

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [E] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Gordon, Kent S
This change would need to be very clearly mentioned in
Documentation/Release Notes.
It could be a really nasty surprise for an operator expecting the current
behavior.
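
For illustration, the settings involved are roughly the following (a minimal
sketch; to the best of my knowledge the relevant options are clean_nodes in
undercloud.conf and automated_clean in ironic.conf, but please double-check
the docs for your release):

    # undercloud.conf -- TripleO-level switch (currently defaults to false)
    [DEFAULT]
    # wipe overcloud node disks whenever a node moves to "available"
    clean_nodes = true

    # ironic.conf -- the underlying ironic option the above maps to
    [conductor]
    automated_clean = true

Operators who want to keep today's behavior would leave these set to false,
which is exactly why calling the change out in the release notes matters.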

On Wed, Apr 25, 2018 at 8:14 AM, Dmitry Tantsur  wrote:

> Hi all,
>
> I'd like to restart conversation on enabling node automated cleaning by
> default for the undercloud. This process wipes partitioning tables
> (optionally, all the data) from overcloud nodes each time they move to
> "available" state (i.e. on initial enrolling and after each tear down).
>
> We have had it disabled for a few reasons:
> - it was not possible to skip time-consuming wiping of data from disks
> - the way our workflows used to work required going between manageable and
> available steps several times
>
> However, having cleaning disabled has several issues:
> - a configdrive left from a previous deployment may confuse cloud-init
> - a bootable partition left from a previous deployment may take precedence
> in some BIOS
> - a UEFI boot partition left from a previous deployment is likely to
> confuse UEFI firmware
> - apparently ceph does not work correctly without cleaning (I'll defer to
> the storage team to comment)
>
> For these reasons we don't recommend having cleaning disabled, and I
> propose to re-enable it.
>
> It has the following drawbacks:
> - The default workflow will require another node boot, thus becoming
> several minutes longer (incl. the CI)
> - It will no longer be possible to easily restore a deleted overcloud node.
>
> What do you think? If I don't hear principal objections, I'll prepare a
> patch in the coming days.
>
> Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kent S. Gordon
kent.gor...@verizonwireless.com Work:682-831-3601 Mobile: 817-905-6518
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread James Slagle
On Thu, Apr 26, 2018 at 10:24 AM, Dmitry Tantsur  wrote:
> Answering to both James and Ben inline.
>
>
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>>
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>> wrote:

 On 04/25/2018 04:26 PM, James Slagle wrote:
 Well, it's not clear what is "safe" here: protect people who explicitly
 delete their stacks or protect people who don't realize that a previous
 deployment may screw up their new one in a subtle way.
>>>
>>>
>>> The latter you can recover from, the former you can't if automated
>>> cleaning is true.
>
>
> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason
> to disable the 'rm' command :)

This is a really disingenuous comparison. If you really want to
compare these things with what you're proposing, then it would be to
make --no-preserve-root the default with rm. Which it is not.

>
>>>
>>> It's not just about people who explicitly delete their stacks (whether
>>> intentional or not). There could be user error (non-explicit) or
>>> side-effects triggered by Heat that could cause nodes to get deleted.
>
>
> If we have problems with Heat, we should fix Heat or stop using it. What
> you're saying is essentially "we prevent ironic from doing the right thing
> because we're using a tool that can invoke 'rm -rf /' at a wrong moment."

Agreed on the Heat point, and once/if we're there, I'd probably not
object to making automated clean the default.

I disagree on how you characterized what I'm saying. I'm not proposing
to prevent Ironic from doing the right thing. If people want to use
automated cleaning, they can. Nothing will prevent that. It just
shouldn't be the default.

>
>>>
>>> You couldn't recover from those scenarios if automated cleaning were
>>> true. Whereas you could always fix a deployment error by opting in to
>>> do an automated clean. Does Ironic keep track of whether a node has been
>>> previously cleaned? Could we add a validation to check whether any
>>> nodes might be used in the deployment that were not previously
>>> cleaned?
>
>
> It may be possible to figure out if a node was ever cleaned. But
> then we'll force operators to invoke cleaning manually, right? It will work,
> but that's another step on the default workflow. Are you okay with it?

I would be ok with it. But I don't even characterize it as a
completely necessary step on the default workflow. It fixes some
issues as you've pointed out, but also comes with a cost. What we're
discussing is whether it's the default or not. If it is not true by
default, then we wouldn't make it a required step in the default
workflow to make sure it's done. It'd be documented as choice.
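
For reference, the documented opt-in would be fairly small; something along
these lines (a sketch -- I believe a metadata-only clean via the
erase_devices_metadata step covers the issues listed earlier, but consult the
ironic cleaning docs for the authoritative step names):

    # move the node out of "available", run a metadata-only manual clean,
    # then make it available again
    openstack baremetal node manage <node>
    openstack baremetal node clean <node> \
        --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
    openstack baremetal node provide <node>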

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Upgrades] Cancel next IRC meeting (May 3rd)

2018-04-26 Thread Luo, Lujin
Hi,

We are canceling our next Neutron Upgrades subteam meeting on May 3rd. We will 
resume on May 10th.

Thanks,
Lujin 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-26 Thread Stephen Finucane
On Wed, 2018-04-25 at 12:06 -0500, Sean McGinnis wrote:
> > > > 
> > > > [1] https://review.openstack.org/#/c/564232/
> > > > 
> > > 
> > > The only concern I have is that it may slow the transition to the
> > > python 3 version of the jobs, since someone would have to actually
> > > fix the warnings before they could add the new job. I'm not sure I
> > > want to couple the tasks of fixing doc build warnings with also
> > > making those docs build under python 3 (which is usually quite
> > > simple).
> > > 
> > > Is there some other way to enable this flag independently of the move to
> > > the python3 job?
> > 
> > The existing proposal is:
> > 
> > https://review.openstack.org/559348
> > 
> > TL;DR if you still have a build_sphinx section in setup.cfg then defaults
> > will remain the same, but when removing it as part of the transition to the
> > new PTI you'll have to eliminate any warnings. (Although AFAICT it doesn't
> > hurt to leave that section in place as long as you need, and you can still
> > do the rest of the PTI conversion.)
> > 
> > The hold-up is that the job in question is also potentially used by other
> > Zuul users outside of OpenStack - including those who aren't using pbr at
> > all (i.e. there's no setup.cfg). So we need to warn those folks to prepare.
> > 
> > cheers,
> > Zane.
> > 
> 
> Ah, I had looked but did not find an existing proposal. Looks like that would
> work too. I am good either way, but I will leave my approach out there just as
> another option to consider. I'll abandon that if folks prefer this way.

Yeah, I reviewed your patch but assumed you'd seen mine already and
were looking for a quicker alternative.

I've started the process of adding this to zuul-jobs by posting the
warning to zuul-announce (though it's awaiting moderation by corvus). We
only need to wait two weeks after sending that message before we can
merge the patch to zuul-jobs, so I guess we should go that way now?
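
For anyone checking whether they are affected: the setup.cfg section being
discussed looks roughly like this (a sketch; keeping the section keeps
today's defaults, and warning-is-error is the flag that promotes warnings
once you are ready):

    [build_sphinx]
    source-dir = doc/source
    build-dir = doc/build
    all-files = 1
    # opt in to failing the docs build on sphinx warnings
    warning-is-error = 1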

Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Sean McGinnis
On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote:
> It's time to talk about the next steps in our migration from python
> 2 to python 3.
> 
> [...]
> 
> 2. Change (or duplicate) all functional test jobs to run under
>python 3.

As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All
went well.

That made me realize something though - right now we have jobs that explicitly
say py35, both for unit tests and functional tests. But I realized setting up
these test jobs that it works to just specify "basepython = python3" or run
unit tests with "tox -e py3". Then with that, it just depends on whether the
job runs on xenial or bionic as to whether the job is run with py35 or py36.

It is less explicit, so I see some downside to that, but would it make sense to
change jobs to drop the minor version, to make it more flexible and easier to
make these transitions?
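
Concretely, the version-agnostic form would be something like this (a sketch;
the job then simply uses whichever python3 the node image provides):

    [testenv:py3]
    basepython = python3
    # reuse the project's normal unit test command here
    commands = stestr run {posargs}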

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Dmitry Tantsur

Answering to both James and Ben inline.

On 04/25/2018 05:47 PM, Ben Nemec wrote:



On 04/25/2018 10:28 AM, James Slagle wrote:

On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur  wrote:

On 04/25/2018 04:26 PM, James Slagle wrote:


On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
wrote:


Hi all,

I'd like to restart conversation on enabling node automated cleaning by
default for the undercloud. This process wipes partitioning tables
(optionally, all the data) from overcloud nodes each time they move to
"available" state (i.e. on initial enrolling and after each tear down).

We have had it disabled for a few reasons:
- it was not possible to skip time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable and
  available steps several times

However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take precedence
  in some BIOS
- a UEFI boot partition left from a previous deployment is likely to
  confuse UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to
  the storage team to comment)

For these reasons we don't recommend having cleaning disabled, and I propose
to re-enable it.

It has the following drawbacks:
- The default workflow will require another node boot, thus becoming
  several minutes longer (incl. the CI)
- It will no longer be possible to easily restore a deleted overcloud node.



I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.



Well, it's not clear what is "safe" here: protect people who explicitly
delete their stacks or protect people who don't realize that a previous
deployment may screw up their new one in a subtle way.


The latter you can recover from, the former you can't if automated
cleaning is true.


Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason to 
disable the 'rm' command :)




It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.


If we have problems with Heat, we should fix Heat or stop using it. What you're 
saying is essentially "we prevent ironic from doing the right thing because 
we're using a tool that can invoke 'rm -rf /' at a wrong moment."




You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of whether a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?


It may be possible to figure out if a node was ever cleaned. But then 
we'll force operators to invoke cleaning manually, right? It will work, but 
that's another step on the default workflow. Are you okay with it?




Is there a way to only do cleaning right before a node is deployed?  If you're 
about to write a new image to the disk then any data there is forfeit anyway.  
Since the concern is old data on the disk messing up subsequent deploys, it 
doesn't really matter whether you clean it right after it's deleted or right 
before it's deployed, but the latter leaves the data intact for longer in case a 
mistake was made.


If that's not possible then consider this an RFE. :-)


It's a good idea, but it may cause problems with rebuilding instances. Rebuild 
is essentially a re-deploy of the OS; users may not expect the whole disk to be 
wiped.


Also it's unclear whether we want to write additional features to work around 
disabled cleaning.




-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Thomas Goirand
On 04/25/2018 11:40 PM, Jeremy Stanley wrote:
> It may be worth considering how this interacts with the switch of
> our default test platform from Ubuntu 16.04 (which provides Python
> 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
> 3.6 before we change most remaining jobs over to Python 3.x versions
> then it gives us a chance to spot differences between 3.5 and 3.6 at
> that point.

I don't think you'll find lots of issues, as all Debian and Gentoo
packages were built against Python 3.6, and hopefully, prometheanfire
and myself have reported the issues.

> So I guess that raises the question: switch to Python 3.5 by default
> for most jobs in Rocky and then have a potentially more disruptive
> default platform switch with Python 3.5->3.6 at the beginning of
> Stein, or wait until the default platform switch to move from Python
> 2.7 to 3.6 as the job default? I can see some value in each option.

I'd love to see gating on both Python 3.5 and 3.6 if possible.

Also, can we restart the attempts at (non-voting) gating jobs with Debian
Sid? That's always where we get all updates first.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-26 Thread William M Edmonds
Monty Taylor  wrote on 04/25/2018 09:40:47 AM:
...
> Introduce a whitelist of git repo urls, starting with:
>
>* https://git.openstack.org/openstack/neutron
>* https://git.openstack.org/openstack/horizon
>
We would also need to include at least nova (e.g. [1]) and ceilometer (e.g.
[2]).

[1] https://github.com/openstack/nova-powervm
[2] https://github.com/openstack/ceilometer-powervm
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-17, April 30 - May 4

2018-04-26 Thread Sean McGinnis

Time again for our regular release countdown email.

Development Focus
-

Teams should now be focused on feature development and completion of release
goals [0].

[0] https://governance.openstack.org/tc/goals/rocky/index.html

General Information
---

If you have not requested a Rocky-1 release, please be aware that the next two
milestones cannot be missed, or your deliverable may not be included as part of
the official Rocky cycle project set.

There were a few projects where the release team had to force a release in
Queens. We would like to avoid that situation in Rocky. For some stable or
code-complete projects, you may want to consider switching to be an independent
release.

This would also be a good time to check whether to do a release for any
independent, library, or stable releases.

As always, if you have any questions or concerns, feel free to swing by the
#openstack-release channel.

And just a reminder that the TC election is currently under way. Details of the
election can be found here [1].

[1] https://governance.openstack.org/election/

Upcoming Deadlines & Dates
--

TC election closes: Apr 30, 23:45 UTC
Forum at OpenStack Summit in Vancouver: May 21-24
Rocky-2 Milestone: June 7

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Forum Schedule

2018-04-26 Thread Matt Riedemann

On 4/25/2018 4:07 PM, Jimmy McArthur wrote:
Please have a look at the Vancouver Forum schedule: 
https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing 
(also attached as a CSV) The proposed schedule was put together by two 
members from UC, TC and Foundation.


We do our best to avoid moving scheduled items around as it tends to 
create a domino effect, but we do realize we might have missed 
something.  The schedule should generally be set, but if you see a major 
conflict in either content or speaker availability, please email 
speakersupp...@openstack.org.


Two questions:

1. What does the yellow on the pre-emptible instances cell mean?

2. Just from looking at this, it looks like there were far fewer 
submissions for forum topics than actual slots available, so basically 
everything was approved (unless it was an obvious duplicate or not 
appropriate for a forum session), is that right?


In the past when I've wondered if topic x should be a forum session or 
if I shouldn't bother, I was told to load up the proposals because 
chances were there would be more slots than proposals, and that seems to 
still be true.


On the one hand, less content to choose from is refreshing so you don't 
have to worry about picking between as many sessions that you're 
interested in. But I also wonder how many people held back on proposing 
something for fear of rejection or that they'd be taking a slot for 
something with a higher priority. I'm not sure if there is a problem 
here, or a solution needed, or if it would be useful for the people that 
pick the sessions to give a heads up before the deadline that there are 
still a lot of slots open.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] [heat-templates] Deprecated environment files

2018-04-26 Thread Waleed Musa
Hi guys,


I'm wondering what the plan is for these environments/*.yaml and
environments/services-baremetal/*.yaml files.

It seems that these are deprecated files. Please advise.


Regards

Waleed Mousa

SW Engineer at Mellanox
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG

2018-04-26 Thread Tobias Rydberg

Hi folks,

Time for a new meeting of the Public Cloud WG. Vancouver is coming 
closer. The agenda is very open this week, so please join and bring your 
topics to discuss.


The open agenda (please add topics) can be found at 
https://etherpad.openstack.org/p/publiccloud-wg


See you all at 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Mobile: +46 733 312780

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-26 Thread Sangho Shin
Thank you, Thierry

I will follow that link.

Sangho


> On 26 Apr 2018, at 5:09 PM, Thierry Carrez  wrote:
> 
> Sangho Shin wrote:
>> Ihar,
>> 
>> I tried to add the networking-onos-core group to the "Create Reference" permission
>> using the gerrit UI, and it is registered as a new gerrit review.
>> But it seems that this is not the right process, according to the gerrit
>> history of similar issues.
>> Can you please let me know how to change the project ACL?
> 
> The ACLs are maintained in the openstack-infra/project-config
> repository. You need to propose a change to the ACL file at:
> 
> gerrit/acls/openstack/networking-onos.config
> 
> For more information on how to create and maintain projects, you can
> read the Infra manual at:
> 
> https://docs.openstack.org/infra/manual/creators.html
> 
> While it's geared towards creating NEW projects, the guide can be
> helpful at pointing to the right files and processes.
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-26 Thread Thierry Carrez
Sangho Shin wrote:
> Ihar,
> 
> I tried to add the networking-onos-core group to the "Create Reference" permission
> using the gerrit UI, and it is registered as a new gerrit review.
> But it seems that this is not the right process, according to the gerrit history
> of similar issues.
> Can you please let me know how to change the project ACL?

The ACLs are maintained in the openstack-infra/project-config
repository. You need to propose a change to the ACL file at:

gerrit/acls/openstack/networking-onos.config

For more information on how to create and maintain projects, you can
read the Infra manual at:

https://docs.openstack.org/infra/manual/creators.html

While it's geared towards creating NEW projects, the guide can be
helpful at pointing to the right files and processes.
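
For reference, that ACL file uses gerrit's project.config syntax; an
illustrative (not verbatim) sketch of the kind of change being discussed:

    [access "refs/heads/*"]
    abandon = group networking-onos-core
    # "Create Reference" -- lets the release group create branches
    create = group networking-onos-release
    label-Code-Review = -2..+2 group networking-onos-core
    label-Workflow = -1..+1 group networking-onos-core

    [access "refs/tags/*"]
    signed-tag = group networking-onos-release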

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-26 Thread Sangho Shin
Ihar,

I tried to add the networking-onos-core group to the "Create Reference" permission
using the gerrit UI, and it is registered as a new gerrit review.
But it seems that this is not the right process, according to the gerrit history
of similar issues.
Can you please let me know how to change the project ACL?

Thank you,

Sangho


> On 25 Apr 2018, at 11:46 PM, Ihar Hrachyshka  wrote:
> 
> ONOS is not part of Neutron and hence Neutron Release team should not
> be involved in its matters. If gerrit ACLs say otherwise, you should
> fix the ACLs.
> 
> Ihar
> 
> On Tue, Apr 24, 2018 at 1:22 AM, Sangho Shin  
> wrote:
>> Dear Neutron-Release team members,
>> 
>> Can any of you handle the issue below?
>> 
>> Thank you so much for your help in advance.
>> 
>> Sangho
>> 
>> 
>>> On 20 Apr 2018, at 10:01 AM, Sangho Shin  wrote:
>>> 
>>> Dear Neutron-Release team,
>>> 
>>> I wonder if any of you can add me to the networking-onos-release group.
>>> It seems that Vikram is busy. :-)
>>> 
>>> Thank you,
>>> 
>>> Sangho
>>> 
>>> 
>>> 
 On 19 Apr 2018, at 9:18 AM, Sangho Shin  wrote:
 
 Ian,
 
 Thank you so much for your help.
 I have requested Vikram to add me to the release team.
 He should be able to help me. :-)
 
 Sangho
 
 
> On 19 Apr 2018, at 8:36 AM, Ian Wienand  wrote:
> 
> On 04/19/2018 01:19 AM, Ian Y. Choi wrote:
>> By the way, since the networking-onos-release group has no neutron
>> release team group, I think infra team can help to include neutron
>> release team and neutron release team can help to create branches
>> for the repo if there is no reponse from current
>> networking-onos-release group member.
> 
> This seems sane and I've added neutron-release to
> networking-onos-release.
> 
> I'm hesitant to give advice on branching within a project like neutron
> as I'm sure there's stuff I'm not aware of; but members of the
> neutron-release team should be able to get you going.
> 
> Thanks,
> 
> -i
 
>>> 
>> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev