[openstack-dev] [infra] pypi publishing

2017-09-30 Thread Gary Kotton
Hi,
Any idea why the latest packages are not being published to PyPI?
Examples are:
vmware-nsxlib 10.0.2 (latest stable/ocata)
vmware-nsxlib 11.0.1 (latest stable/pike)
vmware-nsxlib 11.1.0 (latest queens)
Did we miss a configuration step that we needed to do in the infra projects?
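
A quick way to double-check what is actually on PyPI is a plain pip download
(just a sketch; the command fails if the version has not been published):

   pip download --no-deps vmware-nsxlib==11.1.0
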
Thanks
Gary


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Nomination of zhuli to the core team

2017-09-30 Thread Zhipeng Huang
Hi Team,

First of all, thanks for everyone's hard work in helping Cyborg become a new
official project. As part of that recognition I want to nominate zhuli as a
new core reviewer.

As the current team knows well, zhuli was responsible for the whole
api/db module implementation in Pike, and he has made solid
contributions on the testing side [0]. His stats can be found here [1]
and his ranking within the team here [2].

If you have any concerns about this nomination, please reply to this
thread before Friday.

[0]
https://review.openstack.org/#/q/project:openstack/cyborg+owner:%22zhuli+%253Czhuli27%2540huawei.com%253E%22
[1]
http://stackalytics.com/?release=all=cyborg-group=person-day_id=zhuli

[2] http://stackalytics.com/?release=all=person-day=cyborg

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][DIB] how create triplo overcloud image with latest kernel?

2017-09-30 Thread Moshe Levi
Thank you all for the tips.
We were able to create the image using the blog post that Yolanda mentioned.

From: Yolanda Robla Mota [mailto:yrobl...@redhat.com]
Sent: Wednesday, September 27, 2017 6:40 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Moshe Levi ; Hasan Qunoo ; 
Waleed Musa 
Subject: Re: [openstack-dev] [TripleO][DIB] how create triplo overcloud image 
with latest kernel?

If you need a guideline on how to build TripleO images with DIB, I have this 
blog post: 
http://teknoarticles.blogspot.com.es/2017/07/build-and-use-security-hardened-images.html
This is for security-hardened images, but if you replace 
"overcloud-hardened-images" with "overcloud-images", it will build the default 
one. You can specify the base image you want to use, as well as enable any repo 
you have that may carry the latest kernel.
Hope it helps!

On Wed, Sep 27, 2017 at 5:21 PM, Brad P. Crochet 
> wrote:

On Tue, Sep 26, 2017 at 2:58 PM Ben Nemec 
> wrote:


On 09/26/2017 05:43 AM, Moshe Levi wrote:
> Hi all,
>
> As part of the OVS Hardware Offload work [1] [2], we need to create a new
> CentOS/RHEL 7 image with the latest kernel/ovs/iproute.
>
> We tried to use virt-customize to install the packages and we were able
> to update iproute and ovs, but for the kernel there is no space.
>
> We also tried using virt-customize to uninstall the old kernel, but had no
> luck.
>
> Are there other ways to replace the kernel package in an existing image?

Do you have to use an existing image?  The easiest way to do this would
be to create a DIB element that installs what you want and just include
that in the image build in the first place.  I don't think that would be
too difficult to do now that we're keeping the image definitions in
simple YAML files.
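
As a rough sketch (the element name and package list below are placeholders,
not something that exists in tree), such an element can be little more than a
package-installs.yaml plus a dependency on the package-installs element:

   elements/latest-kernel/element-deps:
       package-installs

   elements/latest-kernel/package-installs.yaml:
       kernel:
       iproute:
       openvswitch:

The element then only needs to be on ELEMENTS_PATH and added to the element
list used by the image build.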

If it is just packages, a DIB element wouldn't even be necessary. You could 
define a new YAML file that just adds the packages you want, and add that to 
the CLI when you build the images.
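
Something along these lines, for example (only a sketch: the schema mirrors the
image YAMLs shipped in tripleo-common, and the overlay file name, package list
and install paths here are assumptions):

   # extra-packages.yaml -- hypothetical overlay adding packages to the image
   disk_images:
     -
       imagename: overcloud-full
       packages:
         - iproute
         - openvswitch

   openstack overcloud image build \
     --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
     --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml \
     --config-file extra-packages.yaml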

>
> [1] - 
> https://review.openstack.org/#/c/504911/
> 
>
>
> [2] - 
> https://review.openstack.org/#/c/502313/
> 
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Tim Bell
Having a PDF (or similar offline copy) was also requested at the OpenStack UK 
Days event, during the executive Q&A with jbryce.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 30 September 2017 at 17:44
To: openstack-dev 
Subject: Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs  
and tarballs

Excerpts from Monty Taylor's message of 2017-09-30 10:20:08 -0500:
> Hey everybody,
> 
> Oh goodie, I can hear you say, let's definitely spend some time 
> bikeshedding about specific command invocations related to building docs 
> and tarballs!!!
> 
> tl;dr I want to change the PTI for docs and tarball building to be less 
> OpenStack-specific
> 
> The Problem
> ===
> 
> As we work on Zuul v3, there are a set of job definitions that are 
> rather fundamental that can totally be directly shared between Zuul 
> installations whether those Zuuls are working with OpenStack content or 
> not. As an example "tox -epy27" is a fairly standard thing, so a Zuul 
> job called "tox-py27" has no qualities specific to OpenStack and could 
> realistically be used by anyone who uses tox to manage their project.
> 
> Docs and Tarballs builds for us, however, are the following:
> 
> tox -evenv -- python setup.py sdist
> tox -evenv -- python setup.py build_sphinx
> 
> Neither of those are things that are likely to work outside of 
> OpenStack. (The 'venv' tox environment is not a default tox thing)
> 
> I'm going to argue neither of them are actually providing us with much 
> value.
> 
> Tarball Creation
> 
> 
> Tarball creation is super simple. setup_requires is already handled out 
> of band of everything else. Go clone nova into a completely empty system 
> and run python setup.py sdist ... and it works. (actually, nova is big. 
> use something smaller like gertty ...)
> 
> docker run -it --rm python bash -c 'git clone \
>   https://git.openstack.org/openstack/gertty && cd gertty \
>   && python setup.py sdist'
> 
> There is not much value in that tox wrapper - and it's not like it's 
> making it EASIER to run the command. In fact, it's more typing.
> 
> I propose we change the PTI from:
> 
>tox -evenv python setup.py sdist
> 
> to:
> 
>python setup.py sdist
> 
> and then change the gate jobs to use the non-tox form of the command.
> 
> I'd also like to further change it to be explicit that we also build 
> wheels. So the ACTUAL commands that the project should support are:
> 
>python setup.py sdist
>python setup.py bdist_wheel
> 
> All of our projects support this already, so this should be a no-op.
> 
> Notes:
> 
> * Python projects that need to build C extensions might need their pip 
> requirements (and bindep requirements) installed in order to run 
> bdist_wheel. We do not support that broadly at the moment ANYWAY - so 
> I'd like to leave that as an outlier and handle it when we need to 
> handle it.
> 
> * It's *possible* that somewhere we have a repo that has somehow done 
> something that would cause python setup.py sdist or python setup.py 
> bdist_wheel to not work without pip requirements installed. I believe we 
> should consider that a bug and fix it in the project if we find such a 
> thing - but since we use pbr in all of the OpenStack projects, I find it 
> extremely unlikely.
> 
> Governance patch submitted: https://review.openstack.org/508693
> 
> Sphinx Documentation
> 
> 
> Doc builds are more complex - but I think there is a high amount of 
> value in changing how we invoke them for a few reasons.
> 
> a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
> b) we decided to use sphinx for go and javascript - but we invoke sphinx 
> differently for each of those (since they naturally don't have tox), 
> meaning we can't just have a "build-sphinx-docs" job and even share it 
> with ourselves.
> c) readthedocs.org is an excellent Open Source site that builds and 
> hosts sphinx docs for projects. They have an interface for docs 
> requirements documented and defined that we can align with. By aligning, 
> projects can migrate between docs.o.o and readthedocs.org and still 
> have a consistent experience.
> 
> The PTI I'd like to propose for this is more complex, so I'd like to 
> describe it in terms of:
> 
> - OpenStack organizational requirements
> - helper sugar for developers with per-language recommendations
> 

Re: [openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-30 Thread Brian Rosmaita
On Fri, Sep 29, 2017 at 1:38 PM, Jeremy Stanley  wrote:
> On 2017-09-29 12:31:21 -0400 (-0400), Jay Pipes wrote:
> [...]
>> Can someone please inform me how changing the checksum algorithm
>> for this operation to SHA-1 or something else would improve the
>> security of this operation?
> [...]
[...]
> The simpler explanation is that people hear "MD5 is broken" and so
> anyone writing policies and auditing security/compliance just tells
> you it's verboten. That, and uninformed alarmists who freak out when
> they find uses of MD5 and think that means the software will be
> hax0red the moment you put it into production. Sometimes it's easier
> to just go through the pain of replacing unpopular cryptographic
> primitives so you can avoid having this same discussion over and
> over with people whose eyes glaze over as soon as you start to try
> and tell them anything which disagrees with their paranoid
> sensationalist media experts.

This is the primary motivator.  Regardless of whether it makes sense
for the particular use of md5 in Glance or not, operators have to fill
in checkboxes in security compliance documentation that will be
consumed by increasingly less-well-informed people.  This way, rather
than try to justify Glance's use of md5 in 140 chars or less (assuming
there even is a "comment" field), operators can just answer "no" to
the question "does the system rely on md5" and be done with it.  I
think that's why the general reaction to this spec is a sigh of relief
that Glance is eliminating a dependency on md5.

Additionally, there's a use case of locating the same image in
different regions served by different Glance installations.  The
'checksum' property was indexed back in Folsom or Grizzly so that a
user could do an image-list call filtered by a particular checksum
value to find the same image they were using in one region in another
region.  But with an md5 checksum, we really can't recommend this
strategy of locating an image.
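
For context, that lookup is just the normal v2 list filter on the checksum
field; roughly something like the following against the other region's Glance
endpoint (the URL and checksum value here are only examples):

   curl -s -H "X-Auth-Token: $TOKEN" \
     "https://glance.region-two.example.com/v2/images?checksum=ee1eca47dc88f4879d8a229cc70a07c6"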

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-09-30 Thread Doug Hellmann
Excerpts from ChangBo Guo's message of 2017-09-30 11:07:33 +0800:
> pylockfile was deprecated about two years ago in [1] and it is not used
> in any OpenStack projects now [2], so we would like to retire it following
> the steps for retiring a project [3].
> 
> 
> [1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112
> [2] http://codesearch.openstack.org/?q=pylockfile=nope==
> [3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2017-09-30 10:20:08 -0500:
> Hey everybody,
> 
> Oh goodie, I can hear you say, let's definitely spend some time 
> bikeshedding about specific command invocations related to building docs 
> and tarballs!!!
> 
> tl;dr I want to change the PTI for docs and tarball building to be less 
> OpenStack-specific
> 
> The Problem
> ===
> 
> As we work on Zuul v3, there are a set of job definitions that are 
> rather fundamental that can totally be directly shared between Zuul 
> installations whether those Zuuls are working with OpenStack content or 
> not. As an example "tox -epy27" is a fairly standard thing, so a Zuul 
> job called "tox-py27" has no qualities specific to OpenStack and could 
> realistically be used by anyone who uses tox to manage their project.
> 
> Docs and Tarballs builds for us, however, are the following:
> 
> tox -evenv -- python setup.py sdist
> tox -evenv -- python setup.py build_sphinx
> 
> Neither of those are things that are likely to work outside of 
> OpenStack. (The 'venv' tox environment is not a default tox thing)
> 
> I'm going to argue neither of them are actually providing us with much 
> value.
> 
> Tarball Creation
> 
> 
> Tarball creation is super simple. setup_requires is already handled out 
> of band of everything else. Go clone nova into a completely empty system 
> and run python setup.py sdist ... and it works. (actually, nova is big. 
> use something smaller like gertty ...)
> 
> docker run -it --rm python bash -c 'git clone \
>   https://git.openstack.org/openstack/gertty && cd gertty \
>   && python setup.py sdist'
> 
> There is not much value in that tox wrapper - and it's not like it's 
> making it EASIER to run the command. In fact, it's more typing.
> 
> I propose we change the PTI from:
> 
>tox -evenv python setup.py sdist
> 
> to:
> 
>python setup.py sdist
> 
> and then change the gate jobs to use the non-tox form of the command.
> 
> I'd also like to further change it to be explicit that we also build 
> wheels. So the ACTUAL commands that the project should support are:
> 
>python setup.py sdist
>python setup.py bdist_wheel
> 
> All of our projects support this already, so this should be a no-op.
> 
> Notes:
> 
> * Python projects that need to build C extensions might need their pip 
> requirements (and bindep requirements) installed in order to run 
> bdist_wheel. We do not support that broadly at the moment ANYWAY - so 
> I'd like to leave that as an outlier and handle it when we need to 
> handle it.
> 
> * It's *possible* that somewhere we have a repo that has somehow done 
> something that would cause python setup.py sdist or python setup.py 
> bdist_wheel to not work without pip requirements installed. I believe we 
> should consider that a bug and fix it in the project if we find such a 
> thing - but since we use pbr in all of the OpenStack projects, I find it 
> extremely unlikely.
> 
> Governance patch submitted: https://review.openstack.org/508693
> 
> Sphinx Documentation
> 
> 
> Doc builds are more complex - but I think there is a high amount of 
> value in changing how we invoke them for a few reasons.
> 
> a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
> b) we decided to use sphinx for go and javascript - but we invoke sphinx 
> differently for each of those (since they naturally don't have tox), 
> meaning we can't just have a "build-sphinx-docs" job and even share it 
> with ourselves.
> c) readthedocs.org is an excellent Open Source site that builds and 
> hosts sphinx docs for projects. They have an interface for docs 
> requirements documented and defined that we can align with. By aligning, 
> projects can migrate between docs.o.o and readthedocs.org and still 
> have a consistent experience.
> 
> The PTI I'd like to propose for this is more complex, so I'd like to 
> describe it in terms of:
> 
> - OpenStack organizational requirements
> - helper sugar for developers with per-language recommendations
> 
> I believe the result can be a single in-tree doc PTI that applies to 
> python, go and javascript - and a single Zuul job that applies to all of 
> our projects AND non-OpenStack projects as well.
> 
> Organizational Requirements
> ---
> 
> Following readthedocs.org logic we can actually support a wider range of 
> schemes technically, but there is human value in having consistency on 
> these topics across our OpenStack repos.
> 
> * docs live in doc/source
> * Python requirements needed by Sphinx to build the docs live in 
> doc/requirements.txt
> 
> If the project is python:
> 
> * doc/requirements.txt can assume the project will have been installed
> * The following should be set in setup.cfg:
> 
>[build_sphinx]
>source-dir = doc/source
>build-dir = doc/build
> 
> Doing the above allows the following commands to work cleanly in 
> automation no matter what the 

Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Jeremy Stanley
On 2017-09-30 10:20:08 -0500 (-0500), Monty Taylor wrote:
[...]
> I want to change the PTI for docs and tarball building to be less
> OpenStack-specific
[...]

Looks great! Anything to make OpenStack less snowflake-like and our
build processes more familiar to developers from other communities.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Monty Taylor

Hey everybody,

Oh goodie, I can hear you say, let's definitely spend some time 
bikeshedding about specific command invocations related to building docs 
and tarballs!!!


tl;dr I want to change the PTI for docs and tarball building to be less 
OpenStack-specific


The Problem
===

As we work on Zuul v3, there are a set of job definitions that are 
rather fundamental that can totally be directly shared between Zuul 
installations whether those Zuuls are working with OpenStack content or 
not. As an example "tox -epy27" is a fairly standard thing, so a Zuul 
job called "tox-py27" has no qualities specific to OpenStack and could 
realistically be used by anyone who uses tox to manage their project.


Docs and Tarballs builds for us, however, are the following:

tox -evenv -- python setup.py sdist
tox -evenv -- python setup.py build_sphinx

Neither of those are things that are likely to work outside of 
OpenStack. (The 'venv' tox environment is not a default tox thing)


I'm going to argue neither of them are actually providing us with much 
value.


Tarball Creation


Tarball creation is super simple. setup_requires is already handled out 
of band of everything else. Go clone nova into a completely empty system 
and run python setup.py sdist ... and it works. (actually, nova is big. 
use something smaller like gertty ...)


   docker run -it --rm python bash -c 'git clone \
 https://git.openstack.org/openstack/gertty && cd gertty \
 && python setup.py sdist'

There is not much value in that tox wrapper - and it's not like it's 
making it EASIER to run the command. In fact, it's more typing.


I propose we change the PTI from:

  tox -evenv python setup.py sdist

to:

  python setup.py sdist

and then change the gate jobs to use the non-tox form of the command.

I'd also like to further change it to be explicit that we also build 
wheels. So the ACTUAL commands that the project should support are:


  python setup.py sdist
  python setup.py bdist_wheel

All of our projects support this already, so this should be a no-op.
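
For example, the same kind of empty-environment check as above covers both
artifacts (just a sketch; 'wheel' is installed first since bdist_wheel needs it):

   docker run -it --rm python bash -c 'pip install wheel \
     && git clone https://git.openstack.org/openstack/gertty && cd gertty \
     && python setup.py sdist bdist_wheel'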

Notes:

* Python projects that need to build C extensions might need their pip 
requirements (and bindep requirements) installed in order to run 
bdist_wheel. We do not support that broadly at the moment ANYWAY - so 
I'd like to leave that as an outlier and handle it when we need to 
handle it.


* It's *possible* that somewhere we have a repo that has somehow done 
something that would cause python setup.py sdist or python setup.py 
bdist_wheel to not work without pip requirements installed. I believe we 
should consider that a bug and fix it in the project if we find such a 
thing - but since we use pbr in all of the OpenStack projects, I find it 
extremely unlikely.


Governance patch submitted: https://review.openstack.org/508693

Sphinx Documentation


Doc builds are more complex - but I think there is a high amount of 
value in changing how we invoke them for a few reasons.


a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
b) we decided to use sphinx for go and javascript - but we invoke sphinx 
differently for each of those (since they naturally don't have tox), 
meaning we can't just have a "build-sphinx-docs" job and even share it 
with ourselves.
c) readthedocs.org is an excellent Open Source site that builds and 
hosts sphinx docs for projects. They have an interface for docs 
requirements documented and defined that we can align with. By aligning, 
projects can migrate between docs.o.o and readthedocs.org and still 
have a consistent experience.


The PTI I'd like to propose for this is more complex, so I'd like to 
describe it in terms of:


- OpenStack organizational requirements
- helper sugar for developers with per-language recommendations

I believe the result can be a single in-tree doc PTI that applies to 
python, go and javascript - and a single Zuul job that applies to all of 
our projects AND non-OpenStack projects as well.


Organizational Requirements
---

Following readthedocs.org logic we can actually support a wider range of 
schemes technically, but there is human value in having consistency on 
these topics across our OpenStack repos.


* docs live in doc/source
* Python requirements needed by Sphinx to build the docs live in 
doc/requirements.txt
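
For illustration, a doc/requirements.txt can be as small as this (the pins are
only an example, not a recommendation):

   # doc/requirements.txt
   sphinx>=1.6.2
   openstackdocstheme>=1.16.0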


If the project is python:

* doc/requirements.txt can assume the project will have been installed
* The following should be set in setup.cfg:

  [build_sphinx]
  source-dir = doc/source
  build-dir = doc/build

Doing the above allows the following commands to work cleanly in 
automation no matter what the language is:


  [ -f doc/requirements.txt ] && pip install -r doc/requirements.txt
  sphinx-build -b html doc/source doc/build

No additional commands should be required.

The setup.cfg stanza allows:

  python setup.py build_sphinx

to continue to work. (also, everyone already has one)

Helper sugar for developers

Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-09-30 Thread Jay Pipes

ack, +1 from me.

On 09/29/2017 11:07 PM, ChangBo Guo wrote:
pylockfile was deprecated about two years ago in [1] and it is not 
used in any OpenStack projects now [2], so we would like to retire it 
following the steps for retiring a project [3].



[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112
[2] http://codesearch.openstack.org/?q=pylockfile=nope==
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
--
ChangBo Guo(gcb)
Community Director @EasyStack


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-09-30 Thread Davanum Srinivas
+1 from me

On Fri, Sep 29, 2017 at 11:07 PM, ChangBo Guo  wrote:
> pylockfile was deprecated about two years ago in [1] and it is not used in
> any OpenStack projects now [2], so we would like to retire it following the
> steps for retiring a project [3].
>
>
> [1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112
> [2] http://codesearch.openstack.org/?q=pylockfile=nope==
> [3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-09-30 Thread Mohammed Naser
Hi Vega,

Please check the document. Some jobs were migrated with incorrect nodesets and 
have to be switched to a multinode nodeset in the job definition in openstack-zuul-jobs.
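
Roughly something like this in the job definition (a sketch only; the parent and
nodeset names below are the ones the migration generally used, so check the
actual file):

   - job:
       name: legacy-tricircle-dsvm-multiregion
       parent: legacy-dsvm-base-multinode
       nodeset: legacy-ubuntu-xenial-2-node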

Good luck
Mohammed

Sent from my iPhone

> On Sep 30, 2017, at 7:35 AM, Vega Cai  wrote:
> 
> Hi,
> 
> In Tricircle we use the "multinode" topology to set up a test environment with 
> three regions, "CentralRegion" and "RegionOne" in one node, and "RegionTwo" 
> in the other node. I notice that the job definition has been migrated to 
> openstack-zuul-jobs/blob/master/playbooks/legacy/tricircle-dsvm-multiregion/run.yaml,
> but the job fails with the error "public endpoint for image service in 
> RegionTwo region not found", so I guess the "RegionTwo" node is not 
> running correctly. Since the original log folder for the second "subnode-2/" 
> is missing in the job report, I also cannot figure out what is wrong with 
> the second node.
> 
> Any hints to debug this problem?
> 
> 
>> On Fri, 29 Sep 2017 at 22:59 Monty Taylor  wrote:
>> Hey everybody!
>> 
>> tl;dr - If you're having issues with your jobs, check the FAQ, this
>> email and followups on this thread for mentions of them. If it's an
>> issue with your job and you can spot it (bad config) just submit a patch
>> with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like
>> to ask that you send a follow up email to this thread so that we can
>> ensure we've got them all and so that others can see it too.
>> 
>> ** Zuul v3 Migration Status **
>> 
>> If you haven't noticed the Zuul v3 migration - awesome, that means it's
>> working perfectly for you.
>> 
>> If you have - sorry for the disruption. It turns out we have a REALLY
>> complicated array of job content you've all created. Hopefully the pain
>> of the moment will be offset by the ability for you to all take direct
>> ownership of your awesome content... so bear with us, your patience is
>> appreciated.
>> 
>> If you find yourself with some extra time on your hands while you wait
>> on something, you may find it helpful to read:
>> 
>>https://docs.openstack.org/infra/manual/zuulv3.html
>> 
>> We're adding content to it as issues arise. Unfortunately, one of the
>> issues is that the infra manual publication job stopped working.
>> 
>> While the infra manual publication is being fixed, we're collecting FAQ
>> content for it in an etherpad:
>> 
>>https://etherpad.openstack.org/p/zuulv3-migration-faq
>> 
>> If you have a job issue, check it first to see if we've got an entry for
>> it. Once manual publication is fixed, we'll update the etherpad to point
>> to the FAQ section of the manual.
>> 
>> ** Global Issues **
>> 
>> There are a number of outstanding issues that are being worked. As of
>> right now, there are a few major/systemic ones that we're looking in to
>> that are worth noting:
>> 
>> * Zuul Stalls
>> 
>> If you say to yourself "zuul doesn't seem to be doing anything, did I do
>> something wrong?", we're having an issue that jeblair and Shrews are
>> currently tracking down with intermittent connection issues in the
>> backend plumbing.
>> 
>> When it happens it's an across the board issue, so fixing it is our
>> number one priority.
>> 
>> * Incorrect node type
>> 
>> We've got reports of things running on trusty that should be running on
>> xenial. The job definitions look correct, so this is also under
>> investigation.
>> 
>> * Multinode jobs having POST FAILURE
>> 
>> There is a bug in the log collection trying to collect from all nodes
>> while the old jobs were designed to only collect from the 'primary'.
>> Patches are up to fix this and should be fixed soon.
>> 
>> * Branch Exclusions being ignored
>> 
>> This has been reported and its cause is currently unknown.
>> 
>> Thank you all again for your patience! This is a giant rollout with a
>> bunch of changes in it, so we really do appreciate everyone's
>> understanding as we work through it all.
>> 
>> Monty
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> -- 
> BR
> Zhiyuan
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-09-30 Thread Vega Cai
Hi,

In Tricircle we use the "multinode" topology to set up a test environment
with three regions, "CentralRegion" and "RegionOne" in one node, and
"RegionTwo" in the other node. I notice that the job definition has been
migrated to
openstack-zuul-jobs/blob/master/playbooks/legacy/tricircle-dsvm-multiregion/run.yaml,
but the job fails with the error "public endpoint for image service in
RegionTwo region not found", so I guess the "RegionTwo" node is not
running correctly. Since the original log folder for the second
"subnode-2/" is missing in the job report, I also cannot figure out what
is wrong with the second node.

Any hints to debug this problem?


On Fri, 29 Sep 2017 at 22:59 Monty Taylor  wrote:

> Hey everybody!
>
> tl;dr - If you're having issues with your jobs, check the FAQ, this
> email and followups on this thread for mentions of them. If it's an
> issue with your job and you can spot it (bad config) just submit a patch
> with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like
> to ask that you send a follow up email to this thread so that we can
> ensure we've got them all and so that others can see it too.
>
> ** Zuul v3 Migration Status **
>
> If you haven't noticed the Zuul v3 migration - awesome, that means it's
> working perfectly for you.
>
> If you have - sorry for the disruption. It turns out we have a REALLY
> complicated array of job content you've all created. Hopefully the pain
> of the moment will be offset by the ability for you to all take direct
> ownership of your awesome content... so bear with us, your patience is
> appreciated.
>
> If you find yourself with some extra time on your hands while you wait
> on something, you may find it helpful to read:
>
>https://docs.openstack.org/infra/manual/zuulv3.html
>
> We're adding content to it as issues arise. Unfortunately, one of the
> issues is that the infra manual publication job stopped working.
>
> While the infra manual publication is being fixed, we're collecting FAQ
> content for it in an etherpad:
>
>https://etherpad.openstack.org/p/zuulv3-migration-faq
>
> If you have a job issue, check it first to see if we've got an entry for
> it. Once manual publication is fixed, we'll update the etherpad to point
> to the FAQ section of the manual.
>
> ** Global Issues **
>
> There are a number of outstanding issues that are being worked. As of
> right now, there are a few major/systemic ones that we're looking in to
> that are worth noting:
>
> * Zuul Stalls
>
> If you say to yourself "zuul doesn't seem to be doing anything, did I do
> something wrong?", we're having an issue that jeblair and Shrews are
> currently tracking down with intermittent connection issues in the
> backend plumbing.
>
> When it happens it's an across the board issue, so fixing it is our
> number one priority.
>
> * Incorrect node type
>
> We've got reports of things running on trusty that should be running on
> xenial. The job definitions look correct, so this is also under
> investigation.
>
> * Multinode jobs having POST FAILURE
>
> There is a bug in the log collection trying to collect from all nodes
> while the old jobs were designed to only collect from the 'primary'.
> Patches are up to fix this and should be fixed soon.
>
> * Branch Exclusions being ignored
>
> This has been reported and its cause is currently unknown.
>
> Thank you all again for your patience! This is a giant rollout with a
> bunch of changes in it, so we really do appreciate everyone's
> understanding as we work through it all.
>
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mogan] Weekly meeting canceled next week

2017-09-30 Thread Zhenguo Niu
Hello team,

I'll cancel next week's IRC meeting due to China's National Day holiday.

Happy Golden Week!

-- 
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev