Re: [openstack-dev] [tripleo] using molecule to test ansible playbooks and roles

2018-11-11 Thread Paul Belanger
On Sun, Nov 11, 2018 at 11:29:43AM +, Sorin Sbarnea wrote:
> I recently came across molecule, a project that originated at Cisco and
> recently became an official Ansible project, at the same time as
> ansible-lint. Both projects were transferred from their former locations to
> the Ansible github organization -- I guess as a confirmation that they are
> now officially supported by the core team. I have used ansible-lint for years
> and it did save me a lot of time; molecule is still new to me.
> 
> A few weeks back I started to play with molecule because, at least on paper,
> it was supposed to solve the problem of testing roles on multiple platforms
> and usage scenarios, and this coincided with the work to enable
> tripleo-quickstart to support fedora-28 (py3). I was trying to find a faster
> way to test these changes locally -- and avoid increasing the load on CI
> before getting confirmation that the code works locally.
> 
> The results of my testing that started about two weeks ago are very positive 
> and can be seen on:
> https://review.openstack.org/#/c/613672/ 
> 
> You can find there a job named openstack-tox-molecule which runs in
> ~15 minutes, but this is only because docker caching does not work as well on
> CI as it does locally; locally it re-runs in ~2-3 minutes.
> 
> I would like to hear your thoughts on this, and if you also have some time to
> check out that change and run it yourself, that would be wonderful.
> 
> Once you download the change you only have to run "tox -e molecule" (or
> "make", which also clones the sister extras repo if needed).
> 
> Feel free to send questions to the change itself, on #oooq or by email.
> 
I've been doing this for a while with ansible-role-nodepool[1], same
idea: you run tox -emolecule and the role will use the docker backend to
validate. I also run it in the gate (with the docker backend), however this
is only to validate that end users will not be broken locally if they
run tox -emolecule. There is a downside with docker, no systemd
integration, which is fine for me as I have other tests that are able to
provide coverage.

With zuul, it really isn't needed to run nested docker for linters and
smoke testing, as it mostly creates unneeded overhead.  However, if you
do want to standardize on molecule, I recommend you don't use the docker
backend but use the delegated driver and reuse the inventory provided by
zuul. Then you still use molecule but get the bonus of using the VMs
presented by zuul / nodepool.
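
For anyone curious what the delegated approach looks like, below is a minimal
molecule.yml sketch; it is illustrative only (not the exact config from
ansible-role-nodepool), and the inventory path is an assumption since the
location zuul writes it to depends on the job:

  # Minimal sketch of a delegated-driver scenario reusing an existing inventory.
  driver:
    name: delegated
    options:
      managed: false              # molecule does not create/destroy the nodes
  platforms:
    - name: primary               # must match a host in the provided inventory
  provisioner:
    name: ansible
    inventory:
      links:
        hosts: /etc/ansible/hosts # assumed path to the zuul-provided inventory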

- Paul

[1] http://git.openstack.org/cgit/openstack/ansible-role-nodepool

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Paul Belanger
On Mon, Oct 15, 2018 at 04:27:00PM -0500, Monty Taylor wrote:
> On 10/15/2018 05:49 AM, Stephen Finucane wrote:
> > On Wed, 2018-10-10 at 18:51 +, Jeremy Stanley wrote:
> > > On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote:
> > > [...]
> > > > We plan to still have a CI gatekeeper, probably Travis CI, to make sure
> > > > PRs pass muster before being merged, so it's not like we're wanting to
> > > > circumvent good contribution practices by committing whatever to HEAD.
> > > 
> > > Travis CI has gained the ability to prevent you from merging changes
> > > which fail testing? Or do you mean something else when you refer to
> > > it as a "gatekeeper" here?
> > 
> > Yup, but it's a GitHub feature rather than specifically a Travis CI
> > feature.
> > 
> >https://help.github.com/articles/about-required-status-checks/
> > 
> > Doesn't help the awful pull request workflow but that's neither here
> > nor there.
> 
> It's also not the same as gating.
> 
> The github feature is the equivalent of "Make sure the votes in check are
> green before letting someone click the merge button"
> 
> The zuul feature is "run the tests between the human decision to merge and
> actually merging with the code in the state it will actually be in when
> merged".
> 
> It sounds nitpicky, but the semantic distinction is important - and it
> catches things more frequently than you might imagine.
> 
> That said - Zuul supports github, and there are Zuuls run by not-openstack,
> so taking a project out of OpenStack's free infrastructure does not mean you
> have to also abandon Zuul. The OpenStack Infra team isn't going to run a
> zuul to gate patches on a GitHub project - but other people might be happy
> to let you use a Zuul so that you don't have to give up the Zuul features in
> place today. If you go down that road, I'd suggest pinging the
> softwarefactory-project.io folks or the openlab folks.
> 
As somebody who has recently moved from a gerrit workflow to a github
workflow using Zuul, keep in mind this is not a 1:1 feature map.  The
biggest difference, as people have said, is that code review on github.com is
terrible.  It was something added after the fact; I wish daily that I could
use gerrit again :)

Zuul does make things better, and I'm 100% with Monty here.  You want Zuul
to be the gate, not Travis CI.

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] non-candidacy for TC

2018-09-04 Thread Paul Belanger
Greetings,

After a year on the TC, I've decided not to run for another term. I'd
like to thank the other TC members for helping bring me up to speed over
the last year, and the community for originally voting for me.  There is
always work to do, and I'd like to use this email to encourage everybody
to strongly consider running for the TC if you are interested in the
future of OpenStack.

It is a great learning opportunity, with great humans to work with, and a
great project! Please do consider running if you are at all interested.

Thanks again,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-13 Thread Paul Belanger
On Mon, Aug 13, 2018 at 09:56:44AM -0400, Paul Belanger wrote:
> On Thu, Aug 02, 2018 at 08:01:46PM -0400, Paul Belanger wrote:
> > Greetings,
> > 
> > We've had fedora-28 nodes online for some time in openstack-infra, I'd like 
> > to
> > finish the migration process and remove fedora-27 images.
> > 
> > Please take a moment to review and approve the following patches[1]. We'll 
> > be
> > using the fedora-latest nodeset now, which makes it a little easier for
> > openstack-infra to migrate to newer versions of fedora.  Next time around, 
> > we'll
> > send out an email to the ML once fedora-29 is online to give projects some 
> > time
> > to test before we make the change.
> > 
> > Thanks
> > - Paul
> > 
> > [1] https://review.openstack.org/#/q/topic:fedora-latest
> > 
> Thanks for the approval of the patches above; today we are blocked by the
> following backport for barbican[2]. If we can land this today, we can proceed
> with the removal from nodepool.
> 
> Thanks
> - Paul
> 
> [2] https://review.openstack.org/590420/
> 
Thanks to the fast approvals today, we've been able to fully remove
fedora-27 from nodepool.  All jobs will now use fedora-latest, which is
currently fedora-28.

We'll send out an email once we are ready to bring fedora-29 online and
promote it to fedora-latest.

Thanks
- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-13 Thread Paul Belanger
On Thu, Aug 02, 2018 at 08:01:46PM -0400, Paul Belanger wrote:
> Greetings,
> 
> We've had fedora-28 nodes online for some time in openstack-infra, I'd like to
> finish the migration process and remove fedora-27 images.
> 
> Please take a moment to review and approve the following patches[1]. We'll be
> using the fedora-latest nodeset now, which makes it a little easier for
> openstack-infra to migrate to newer versions of fedora.  Next time around, 
> we'll
> send out an email to the ML once fedora-29 is online to give projects some 
> time
> to test before we make the change.
> 
> Thanks
> - Paul
> 
> [1] https://review.openstack.org/#/q/topic:fedora-latest
> 
Thanks for the approval of the patches above; today we are blocked by the
following backport for barbican[2]. If we can land this today, we can proceed
with the removal from nodepool.

Thanks
- Paul

[2] https://review.openstack.org/590420/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Paul Belanger
On Thu, Aug 09, 2018 at 08:00:13PM -0600, Wesley Hayutin wrote:
> On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz  wrote:
> 
> > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann 
> > wrote:
> > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
> > >> Ahoy folks,
> > >>
> > >> I think it's time we come up with some basic rules/patterns on where
> > >> code lands when it comes to OpenStack related Ansible roles and as we
> > >> convert/export things. There was a recent proposal to create an
> > >> ansible-role-tempest[0] that would take what we use in
> > >> tripleo-quickstart-extras[1] and separate it for re-usability by
> > >> others.   So it was asked if we could work with the openstack-ansible
> > >> team and leverage the existing openstack-ansible-os_tempest[2].  It
> > >> turns out we have a few more already existing roles laying around as
> > >> well[3][4].
> > >>
> > >> What I would like to propose is that we as a community come together
> > >> to agree on specific patterns so that we can leverage the same roles
> > >> for some of the core configuration/deployment functionality while
> > >> still allowing for specific project specific customization.  What I've
> > >> noticed between all the project is that we have a few specific core
> > >> pieces of functionality that needs to be handled (or skipped as it may
> > >> be) for each service being deployed.
> > >>
> > >> 1) software installation
> > >> 2) configuration management
> > >> 3) service management
> > >> 4) misc service actions
> > >>
> > >> Depending on which flavor of the deployment you're using, the content
> > >> of each of these may be different.  Just about the only thing that is
> > >> shared between them all would be the configuration management part.
> > >
> > > Does that make the 4 things separate roles, then? Isn't the role
> > > usually the unit of sharing between playbooks?
> > >
> >
> > It can be, but it doesn't have to be.  The problem comes in with the
> > granularity at which you are defining the concept of the overall
> > action.  If you want a role to encompass all that is "nova", you could
> > have a single nova role that you invoke 5 different times to do the
> > different actions during the overall deployment. Or you could create a
> > role for nova-install, nova-config, nova-service, nova-cells, etc etc.
> > I think splitting them out into their own role is a bit too much in
> > terms of management.   In my particular openstack-ansible is already
> > creating a role to manage "nova".  So is there a way that I can
> > leverage part of their process within mine without having to duplicate
> > it.  You can pull in the task files themselves from a different role, so
> > technically I think you could define an ansible-role-tripleo-nova that
> > does some include_tasks: ../../os_nova/tasks/install.yaml, but then
> > we'd have to duplicate the variables in our playbook rather than
> > invoking a role with some parameters.
> >
> > IMHO this structure is an issue with the general sharing concepts of
> > roles/tasks within ansible.  It's not really well defined and there's
> > not really a concept of inheritance so I can't really extend your
> > tasks with mine in more of a programming sense. I have to duplicate it
> > or do something like include a specific task file from another role.
> > Since I can't really extend a role in the traditional OO programing
> > sense, I would like to figure out how I can leverage only part of it.
> > This can be done by establishing ansible variables to trigger specific
> > actions or just actually including the raw tasks themselves.   Either
> > of these approaches needs some sort of contract to be established so the
> > other side won't get broken.   We had this in puppet via parameters which
> > are checked; there isn't really a similar concept in ansible, so it
> > seems that we need to agree on some community-established rules.
> >
> > For tripleo, I would like to just invoke the os_nova role and pass in
> > something like install: false, service: false, config_dir:
> > /my/special/location/, config_data: {...} and have it spit out the configs.
> > Then my roles would actually leverage these via containers/etc.  Of
> > course most of this goes away if we had a unified (not file based)
> > configuration method across all services (openstack and non-openstack)
> > but we don't. :D
> >
> 
> I like your idea here Alex.
> I agree that having a role for each of these steps is too much management;
> however, establishing a pattern of using task files for each step may be a
> really good way to cleanly handle this.
> 
> Are you saying something like the following?
> 
> openstack-nova-role/
>   tasks/
>     install.yml
>     service.yml
>     config.yml
>     main.yml
> ---
> # main.yml
> 
> - include: install.yml
>   when: nova_install | bool
> 
> - include: service.yml
>   when: nova_service | bool
> 
> - include: config.yml
>   when: nova_config | bool
> 
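
To make the pattern above concrete, here is a minimal sketch of how a consuming
playbook might toggle those steps; the role and variable names are illustrative
only, not the actual os_nova interface:

  # Illustrative only: render configs without installing or managing services.
  - hosts: undercloud
    roles:
      - role: openstack-nova-role
        nova_install: false
        nova_service: false
        nova_config: true
        nova_config_dir: /my/special/location/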

[openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-02 Thread Paul Belanger
Greetings,

We've had fedora-28 nodes online for some time in openstack-infra, I'd like to
finish the migration process and remove fedora-27 images.

Please take a moment to review and approve the following patches[1]. We'll be
using the fedora-latest nodeset now, which makes it a little easier for
openstack-infra to migrate to newer versions of fedora.  Next time around, we'll
send out an email to the ML once fedora-29 is online to give projects some time
to test before we make the change.

Thanks
- Paul

[1] https://review.openstack.org/#/q/topic:fedora-latest

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disk space requirement - any way to lower it a little?

2018-07-19 Thread Paul Belanger
On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote:
> Hello,
> 
> While trying to get a new validation¹ into the undercloud preflight
> checks, I hit a (not so) unexpected issue with the CI:
> it doesn't provide flavors with the minimal requirements, at least
> regarding the disk space.
> 
> A quick-fix is to disable the validations in the CI - Wes has already
> pushed a patch for that in the upstream CI:
> https://review.openstack.org/#/c/583275/
> We can consider this as a quick'n'temporary fix².
> 
> The issue is on the RDO CI: apparently, they provide instances with
> "only" 55G of free space, making the checks fail:
> https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46
> 
> So, the question is: would it be possible to lower the requirement to,
> let's say, 50G? Where does that 60G³ come from?
> 
> Thanks for your help/feedback.
> 
> Cheers,
> 
> C.
> 
> 
> 
> ¹ https://review.openstack.org/#/c/582917/
> 
> ² as you might know, there's a BP for a unified validation framework,
> and it will allow injecting configuration into the CI env in order to
> lower the requirements if necessary:
> https://blueprints.launchpad.net/tripleo/+spec/validation-framework
> 
> ³
> http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements
> 
Keep in mind, upstream we don't really have control over the partitioning of
nodes; in some cases it is a single partition, in others multiple. I'd suggest
looking more at:

  https://docs.openstack.org/infra/manual/testing.html

As for downstream RDO, the same is going to apply once we start adding more
cloud providers. I would check whether you actually need that much space for
deployments, and maybe try to mock the testing of that logic.

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [TIP] tox release 3.1.1

2018-07-09 Thread Paul Belanger
On Mon, Jul 09, 2018 at 01:44:00PM -0400, Doug Hellmann wrote:
> Excerpts from Eric Fried's message of 2018-07-09 11:16:11 -0500:
> > Doug-
> > 
> > How long til we can start relying on the new behavior in the gate?  I
> > gots me some basepython to purge...
> > 
> > -efried
> 
> Great question. I have to defer to the infra team to answer, since I'm
> not sure how we're managing the version of tox we use in CI.
> 
Should be less than 24 hours, likely sooner. We pull in the latest tox when we
rebuild images each day[1].

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/infra-package-needs/install.d/10-pip-packages

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Paul Belanger
On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote:
> Hi everyone:
> 
> This email is to ask if there is anyone out there opposed to removing
> SELinux bits from OpenStack ansible, it's blocking some of the gates
> and the maintainers for them are no longer working on the project
> unfortunately.
> 
> I'd like to propose removing any SELinux stuff from OSA based on the 
> following:
> 
> 1) We don't gate on it, we don't test it, we don't support it.  If
> you're running OSA with SELinux enforcing, please let us know how :-)
> 2) It extends beyond the scope of the deployment project and there are
> no active maintainers with the resources to deal with them
> 3) With the work currently in place to let OpenStack Ansible install
> distro packages, we can rely on upstream `openstack-selinux` package
> to deliver deployments that run with SELinux on.
> 
> Is there anyone opposed to removing it?  If so, please let us know. :-)
> 
While I don't use OSA, I would be surprised to learn that selinux wouldn't be
supported.  I also understand it requires time and care to maintain. Have you
tried reaching out to people in #RDO? IIRC all those packages should support
selinux.

As for gating, maybe default to selinux permissive so it reports errors but
does not fail.  Then if anybody is interested in supporting it, they can do so
and enable enforcing again once everything is fixed.
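
For reference, a minimal sketch of what that default could look like as an
Ansible task (module usage only, not an actual OSA change):

  # Illustrative sketch: report SELinux denials without enforcing them.
  - name: Set SELinux to permissive so failures are logged, not fatal
    selinux:
      policy: targeted
      state: permissive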

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo gate is blocked - please read

2018-06-16 Thread Paul Belanger
On Sat, Jun 16, 2018 at 12:47:10PM +, Jeremy Stanley wrote:
> On 2018-06-15 23:15:01 -0700 (-0700), Emilien Macchi wrote:
> [...]
> > ## Dockerhub proxy issue
> > Infra using wrong image layer object storage proxy for Dockerhub:
> > https://review.openstack.org/#/c/575787/
> > Huge thanks to infra team, specially Clark for fixing this super quickly,
> > it clearly helped to stabilize our container jobs, I actually haven't seen
> > timeouts since we merged your patch. Thanks a ton!
> [...]
> 
> As best we can tell from logs, the way Dockerhub served these images
> changed a few weeks ago (at the end of May) leading to this problem.
> -- 
> Jeremy Stanley

I should also note that what we are doing here is a terrible hack; we've only
been able to learn the information needed for our reverse proxy cache
configuration by sniffing the traffic to hub.docker.io. It is also possible
this can break again in the future, so it is something to always keep in the
back of your mind.

It would be great if docker tools just worked with HTTP proxies.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Paul Belanger
On Tue, Jun 05, 2018 at 04:48:00PM -0400, Zane Bitter wrote:
> On 05/06/18 16:38, Doug Hellmann wrote:
> > Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400:
> > > We've talked a bit about migrating to Python 3, but (unless I missed it)
> > > not a lot about which version of Python 3. Currently all projects that
> > > support Python 3 are gating against 3.5. However, Ubuntu Artful and
> > > Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have
> > > been released since then.) The one time it did come up in a thread, we
> > > decided it was blocked on the availability of 3.6 in Ubuntu to run on
> > > the test nodes, so it's time to discuss it again.
> > > 
> > > AIUI we're planning to switch the test nodes to Bionic, since it's the
> > > latest LTS release, so I'd assume that means that when we talk about
> > > running docs and pep8 jobs with Python3 (under the python3-first
> > > project-wide goal), that means 3.6. And while 3.5 jobs should continue to
> > > work, it seems like we ought to start testing ASAP with the version that
> > > users are going to get by default if they choose to use our Python3
> > > packages.
> > > 
> > > The list of breaking changes in 3.6 is quite short (although not zero),
> > > so I wouldn't expect too many roadblocks:
> > > https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6
> > > 
> > > I think we can split the problem into two parts:
> > > 
> > > * How can we detect any issues ASAP.
> > > 
> > > Would it be sane to give all projects with a py35 unit tests job a
> > > non-voting py36 job so that they can start fixing any issues right away?
> > > Like this: https://review.openstack.org/572535
> > 
> > That seems like a good way to start.
> > 
> > Maybe we want to rename that project template to openstack-python3-jobs
> > to keep it version-agnostic?
> 
> You mean the 35_36 one? Actually, let's discuss this on the review.
> 
Yes, please let's keep the python35 / python36 project-templates; I've left
comments in the review.

> > > 
> > > * How can we ensure every project fixes any issues and migrates to
> > > voting gates, including for functional test jobs?
> > > 
> > > Would it make sense to make this part of the 'python3-first'
> > > project-wide goal?
> > 
> > Yes, that seems like a good idea. We can be specific about the version
> > of python 3 to be used to achieve that goal (assuming it is selected as
> > a goal).
> > 
> > The instructions I've been putting together are based on just using
> > "python3" in the tox.ini file because I didn't want to have to update
> > that every time we update to a new version of python. Do you think we
> > should be more specific there, too?
> 
> That's probably fine IMHO. We should just be aware that e.g. when distros
> start switching to 3.7 then people's local jobs will start running in 3.7.
> 
> For me, at least, this has already been the case with 3.6 - tox is now
> python3 by default in Fedora, so e.g. pep8 jobs have been running under 3.6
> for a while now. There were a *lot* of deprecation warnings at first.
> 
> > Doug
> > 
> > > 
> > > cheers,
> > > Zane.
> > > 
> > > 
> > > (Disclaimer for the conspiracy-minded: you might assume that I'm
> > > cleverly concealing inside knowledge of which version of Python 3 will
> > > replace Python 2 in the next major release of RHEL/CentOS, but in fact
> > > you would be mistaken. The truth is I've been too lazy to find out, so
> > > I'm as much in the dark as anybody. Really. Anyway, this isn't about
> > > that, it's about testing within upstream OpenStack.)
> > > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement finished

2018-05-02 Thread Paul Belanger
Hello from Infra.
 
Gerrit maintenance has concluded successfully and the service is running
happily on Ubuntu Xenial.  We were able to save and restore the queues from
zuul, but as always be sure to check your patches, as a recheck may be
required.

If you have any questions or comments, please reach out to us in
#openstack-infra.

I'll leave the text below in case anybody missed our previous emails.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-05-02 Thread Paul Belanger
Hello from Infra.

Today is the day for the scheduled maintenance of gerrit. We'll be allocating 2
hours for the outage but don't expect it to take that long. During this time
you will not be able to access gerrit.

If you have any questions, or would like to follow along, please join us in
#openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thank you TryStack!!

2018-04-30 Thread Paul Belanger
On Mon, Apr 30, 2018 at 09:37:21AM +, Jens Harbott wrote:
> 2018-03-26 22:51 GMT+00:00 Jimmy Mcarthur :
> > Hi everyone,
> >
> > We recently made the tough decision, in conjunction with the dedicated
> > volunteers that run TryStack, to end the service as of March 29, 2018.  For
> > those of you that used it, thank you for being part of the TryStack
> > community.
> >
> > The good news is that you can find more resources to try OpenStack at
> > http://www.openstack.org/start, including the Passport Program, where you
> > can test on any participating public cloud. If you are looking to test
> > different tools or application stacks with OpenStack clouds, you should
> > check out Open Lab.
> >
> > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the
> > many other volunteers who have managed this valuable service for the last
> > several years!  Your contribution to OpenStack was noticed and appreciated
> > by many in the community.
> 
> Seems it would be great if https://trystack.openstack.org/ would be
> updated with this information, according to comments in #openstack
> users are still landing on that page and try to get a stack there in
> vain.
> 
The code is hosted by openstack-infra[1], in case somebody would like to
propose a patch with the new information.

[1] http://git.openstack.org/cgit/openstack-infra/trystack-site

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Paul Belanger
On Thu, Apr 26, 2018 at 09:27:31AM -0500, Sean McGinnis wrote:
> On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote:
> > It's time to talk about the next steps in our migration from python
> > 2 to python 3.
> > 
> > [...]
> > 
> > 2. Change (or duplicate) all functional test jobs to run under
> >python 3.
> 
> As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All
> went well.
> 
> That made me realize something though - right now we have jobs that explicitly
> say py35, both for unit tests and functional tests. But I realized setting up
> these test jobs that it works to just specify "basepython = python3" or run
> unit tests with "tox -e py3". Then with that, it just depends on whether the
> job runs on xenial or bionic as to whether the job is run with py35 or py36.
> 
> It is less explicit, so I see some downside to that, but would it make sense 
> to
> change jobs to drop the minor version to make it more flexible and easy to 
> make
> these transitions?
> 
I still think using tox-py35 / tox-py36 makes sense, as those jobs are already
setup to use the specific nodeset of ubuntu-xenial or ubuntu-bionic.  If we did
move to just tox-py3, it would actually result in more projects needing to add
to their .zuul.yaml files:

  - project:
      check:
        jobs:
          - tox-py35

  - project:
      check:
        jobs:
          - tox-py3:
              nodeset: ubuntu-xenial

This may be okay, and I'll let others comment, but the main reason I am not a
fan is that we can no longer infer the nodeset by looking at the job name:
tox-py3 could be xenial or bionic.
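
For reference, a minimal tox.ini sketch of the version-agnostic environment
being discussed (contents illustrative, not taken from any particular project):

  [testenv:py3]
  basepython = python3
  deps = -r{toxinidir}/test-requirements.txt
  commands = stestr run {posargs}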

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-25 Thread Paul Belanger
On Thu, Apr 19, 2018 at 11:49:12AM -0400, Paul Belanger wrote:
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration. We are now 1
week away from the replacement date.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bifrost][bandit][magnum][ironic][kolla][pyeclib][mistral] Please merge bindep changes

2018-04-23 Thread Paul Belanger
Greetings,

Could you please review the following bindep.txt[1] changes to your projects
and approve them; it would be helpful to the openstack-infra team.  We are
looking to remove some legacy jenkins scripts from openstack-infra/project-config
and your projects are still using them.  The following patches will update your
jobs to use the new functionality of our bindep role.
If you have any questions, please reach out to us in #openstack-infra.

Thanks,
Paul

[1] https://review.openstack.org/#/q/topic:bindep.txt+status:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 08:16:07AM +0100, Jean-Philippe Evrard wrote:
> That's very cool.
> Any idea of the distribution of nodes, xenial vs bionic? Is that a very
> restricted number of nodes?
> 
According to upstream, ubuntu-bionic releases next week. In openstack-infra we
are in really good shape to have projects start using it once we rebuild using
the released version. Projects are able to use ubuntu-bionic today; we just ask
that they don't gate on it until the official release.

As for switching the PTI job to use ubuntu-bionic, that is a different
discussion. It would bump python to 3.6 and it is likely too late in the cycle
to do that.  I guess it's something we can hash out with infra / requirements /
tc / EALLTHEPROJECTS.

-Paul

> 
> On 20 April 2018 at 00:37, Paul Belanger <pabelan...@redhat.com> wrote:
> > Greetings,
> >
> > With ubuntu-bionic release around the corner we'll be starting discussions 
> > about
> > migrating jobs from ubuntu-xenial to ubuntu-bionic.
> >
> > One topic I'd like to raise is around job migrations from legacy to native
> > zuulv3.  Specifically, I'd like to propose we do not add legacy-ubuntu-bionic
> > nodesets into openstack-zuul-jobs. Projects should be working towards moving
> > away from the legacy format, as they were just copypasta from our previous
> > JJB templates.
> >
> > Projects would still be free to move them in-tree, but I would highly
> > encourage projects not to do this, as it only delays the issue.
> >
> > The good news is the majority of jobs have already been moved to native
> > zuulv3 jobs, but there are still some projects depending on the legacy
> > nodesets. For example, tox-based jobs would not be affected.  It would mostly
> > be dsvm-based jobs that haven't been switched to use the new devstack jobs
> > for zuulv3.
> >
> > -Paul
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets

2018-04-19 Thread Paul Belanger
Greetings,

With ubuntu-bionic release around the corner we'll be starting discussions about
migrating jobs from ubuntu-xenial to ubuntu-bionic.

One topic I'd like to raise is around job migrations from legacy to native
zuulv3.  Specifically, I'd like to propose we do not add legacy-ubuntu-bionic
nodesets into openstack-zuul-jobs. Projects should be working towards moving
away from the legacy format, as they were just copypasta from our previous JJB
templates.

Projects would still be free to move them in-tree, but I would highly encourage
projects not to do this, as it only delays the issue.

The good news is the majority of jobs have already been moved to native zuulv3
jobs, but there are still some projects depending on the legacy nodesets.
For example, tox-based jobs would not be affected.  It would mostly be
dsvm-based jobs that haven't been switched to use the new devstack jobs for
zuulv3.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Removal of debian-jessie, replaced by debian-stable (stretch)

2018-04-19 Thread Paul Belanger
Greetings,

Now that we have the debian-stable (stretch) nodeset online in nodepool, I'd
like to propose we start the process to remove debian-jessie.  As far as I can
see, there are really only 2 projects using debian-jessie:

  * ARA
  * ansible-hardening

I've already proposed patches to update their jobs to debian-stable, replacing
debian-jessie:

  https://review.openstack.org/#/q/topic:debian-stable

You'll also notice we are not using debian-stretch directly for the nodeset
name; this is on purpose so that when the next release of debian (buster)
happens, we don't need to make a bunch of in-repo changes to projects, but can
simply update the label of the nodeset from debian-stretch to debian-buster.
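
To illustrate the idea, a minimal sketch of what such a nodeset definition
could look like (names follow the pattern described above, not necessarily the
exact in-tree config):

  - nodeset:
      name: debian-stable
      nodes:
        - name: primary
          label: debian-stretch   # later bumped to debian-buster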

Of course, we'd need to give a fair amount of notice when we plan to make that
change, but given this nodeset isn't part of our LTS platforms (ubuntu /
centos), I believe this will help us in openstack-infra migrate projects to the
latest distro as fast as possible.

Thoughts?
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-19 Thread Paul Belanger
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration. We are now 2
weeks away from replacement date.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-10 Thread Paul Belanger
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-08 Thread Paul Belanger
On Sat, Apr 07, 2018 at 12:56:31AM -0700, Arun SAG wrote:
> Hello,
> 
> On Thu, Apr 5, 2018 at 5:39 PM, Paul Belanger <pabelan...@redhat.com> wrote:
> 
> > Yah, I agree your approach is the better one; I just wanted to toggle what
> > was supported by default. However, it is pretty broken today.  I can't
> > imagine anybody actually using it; if so, they must be carrying downstream
> > patches.
> >
> > If we think USE_VENV is a valid use case for per-project venvs, I suggest we
> > continue to fix it and update neutron to support it.  Otherwise, we should
> > maybe rip it out and replace it.
> 
> I work for Yahoo (Oath). We use USE_VENV a lot in our CI. We use venvs
> to deploy software to production as well. We have some downstream patches to
> devstack to fix some issues with the USE_VENV feature, and I would be happy
> to upstream them. Please do not rip this out. Thanks.
> 
Yes, please upstream them if at all possible. I've been tracking all the fixes
so far at https://review.openstack.org/552939/ but am still having an issue
with rootwrap.  I think clarkb managed to fix this in his patchset.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-04-05 Thread Paul Belanger
On Wed, Apr 04, 2018 at 11:27:34AM -0400, Paul Belanger wrote:
> On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote:
> > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > Greetings,
> > > 
> > > A quick search of git shows your projects are using fedora-26 nodes for 
> > > testing.
> > > Please take a moment to look at gerrit[1] and help land patches.  We'd 
> > > like to
> > > remove fedora-26 nodes in the next week and to avoid broken jobs you'll 
> > > need to
> > > approve these patches.
> > > 
> > > If you jobs are failing under fedora-27, please take the time to fix any 
> > > issue
> > > or update said patches to make them non-voting.
> > > 
> > > We (openstack-infra) aim to only keep the latest fedora image online, 
> > > which
> > > changes approx every 6 months.
> > > 
> > > Thanks for your help and understanding,
> > > Paul
> > > 
> > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > > 
> > Greetings,
> > 
> > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > remove
> > our fedora-26 images next week and if jobs haven't been migrated you may 
> > start
> > to see NODE_FAILURE messages while running jobs.  Please take a moment to 
> > merge
> > the open changes or update them to be non-voting while you work on fixes.
> > 
> > Thanks again,
> > Paul
> > 
> Hi,
> 
> It's been a month since we started asking projects to migrate to fedora-27.
> 
> I've proposed the patch to remove fedora-26 nodes from nodepool[2]; if your
> project hasn't merged the patches above you will start to see NODE_FAILURE
> results for your jobs. Please take the time to approve the changes above.
> 
> Because new fedora images come online every 6 months, we like to only keep one
> of them online at any given time. Fedora is meant to be a fast-moving distro
> to pick up new versions of software outside of the Ubuntu LTS releases.
> 
> If you have any questions please reach out to us in #openstack-infra.
> 
> Thanks,
> Paul
> 
> [2] https://review.openstack.org/558847/
> 
We've just landed the patch, and fedora-26 images are now removed. If you
haven't upgraded your jobs to fedora-27, you'll now start seeing NODE_FAILURE
returned by zuul.

If you have any questions please reach out to us in #openstack-infra.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Paul Belanger
On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote:
> On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:
> > On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:
> > > On 18-03-31 15:00:27, Jeremy Stanley wrote:
> > > > According to a notice[1] posted to the pypa-announce and
> > > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
> > > > is expected to be released in two weeks (over the April 14/15
> > > > weekend). We know it's at least going to start breaking[2] DevStack
> > > > and we need to come up with a plan for addressing that, but we don't
> > > > know how much more widespread the problem might end up being so
> > > > encourage everyone to try it out now where they can.
> > > > 
> > > 
> > > I'd like to suggest locking down pip/setuptools/wheel like openstack
> > > ansible is doing in 
> > > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
> > > 
> > > We could maintain it as a separate constraints file (or infra could
> > > maintain it, doesn't matter).  The file would only be used for the
> > > initial get-pip install.
> > 
> > In the past we've done our best to avoid pinning these tools because 1) 
> > we've told people they should use latest for openstack to work and 2) it 
> > is really difficult to actually control what versions of these tools end 
> > up on your systems if not latest.
> > 
> > I would strongly push towards addressing the distutils package deletion 
> > problem that we've run into with pip10 instead. One of the approaches 
> > thrown out that pabelanger is working on is to use a common virtualenv 
> > for devstack and avoid the system package conflict entirely.
> 
> I was mistaken and pabelanger was working to get devstack's USE_VENV option 
> working which installs each service (if the service supports it) into its own 
> virtualenv. There are two big drawbacks to this. This first is that we would 
> lose coinstallation of all the openstack services which is one way we ensure 
> they all work together at the end of the day. The second is that not all 
> services in "base" devstack support USE_VENV and I doubt many plugins do 
> either (neutron apparently doesn't?).
> 
Yah, I agree your approach is the better one; I just wanted to toggle what was
supported by default. However, it is pretty broken today.  I can't imagine
anybody actually using it; if so, they must be carrying downstream patches.

If we think USE_VENV is a valid use case for per-project venvs, I suggest we
continue to fix it and update neutron to support it.  Otherwise, we should
maybe rip it out and replace it.

Paul

> I've since worked out a change that passes tempest using a global virtualenv 
> installed devstack at https://review.openstack.org/#/c/558930/. This needs to 
> be cleaned up so that we only check for and install the virtualenv(s) once 
> and we need to handle mixed python2 and python3 environments better (so that 
> you can run a python2 swift and python3 everything else).
> 
> The other major issue we've run into is that nova file injection (which is 
> tested by tempest) seems to require either libguestfs or nbd. libguestfs 
> bindings for python aren't available on pypi and instead we get them from 
> system packaging. This means if we want libguestfs support we have to enable 
> system site packages when using virtualenvs. The alternative is to use nbd 
> which apparently isn't preferred by nova and doesn't work under current 
> devstack anyways.
> 
> Why is this a problem? Well the new pip10 behavior that breaks devstack is 
> pip10's refusable to remove distutils installed packages. Distro packages by 
> and large are distutils packaged which means if you mix system packages and 
> pip installed packages there is a good chance something will break (and it 
> does break for current devstack). I'm not sure that using a virtualenv with 
> system site packages enabled will sufficiently protect us from this case (but 
> we should test it further). Also it feels wrong to enable system packages in 
> a virtualenv if the entire point is avoiding system python packages.
> 
> I'm not sure what the best option is here but if we can show that system site 
> packages with virtualenvs is viable with pip10 and people want to move 
> forward with devstack using a global virtualenv we can work to clean up this 
> change and make it mergeable.
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asking for ask.openstack.org

2018-04-04 Thread Paul Belanger
On Wed, Apr 04, 2018 at 04:26:12PM -0500, Jimmy McArthur wrote:
> Hi everyone!
> 
> We have a very robust and vibrant community at ask.openstack.org.
> There are literally dozens of posts a day.
> However, many of them don't receive knowledgeable answers.  I'm really
> worried about this becoming a vacuum where potential community members get
> frustrated and don't realize how to get more involved with the community.
> 
> I'm looking for thoughts/ideas/feelings about this tool as well as potential
> admin volunteers to help us manage the constant influx of technical and
> not-so-technical questions around OpenStack.
> 
> For those of you already contributing there, Thank You!  For those that are
> interested in becoming a moderator (instant AUC status!) or have some
> additional ideas around fostering this community, please respond.
> 
> Looking forward to your thoughts :)
> 
> Thanks!
> Jimmy
> irc: jamesmcarthur

We also have a second issue where the ask.o.o server doesn't appear to be large
enough anymore to handle the traffic. A few times over the last few weeks we've
had outages due to the HDD being full.

We likely need to reduce the number of days we retain database backups / http
logs or look to attach a volume to increase storage.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-helm][infra] Please consider using experimental pipeline for non-voting jobs

2018-04-04 Thread Paul Belanger
Greetings,

I've recently proposed https://review.openstack.org/558870 to the openstack-helm
project. This moves both the centos and fedora jobs into the experimental
pipeline. The reason for this is that the multinode jobs in helm each use 5
nodes per distro -- in this case, 10 nodes between centos and fedora.

Given that this happens on every patchset proposed to helm, and these jobs have
been non-voting for some time (3+ months), I think it is fair to now move them
into experimental to help conserve CI resources.

Once they have been properly fixed, I see no issue with moving them back to the
check / gate pipelines.
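
For anyone unfamiliar with the mechanics, jobs in the experimental pipeline only
run when someone leaves a "check experimental" comment on a review. A minimal
sketch of the project config change (job names illustrative, not the exact ones
in the review):

  - project:
      check:
        jobs:
          - openstack-helm-multinode-ubuntu   # voting jobs stay in check
      experimental:
        jobs:
          - openstack-helm-multinode-centos   # now run only on request
          - openstack-helm-multinode-fedora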

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-04-04 Thread Paul Belanger
On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote:
> On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > Greetings,
> > 
> > A quick search of git shows your projects are using fedora-26 nodes for 
> > testing.
> > Please take a moment to look at gerrit[1] and help land patches.  We'd like 
> > to
> > remove fedora-26 nodes in the next week and to avoid broken jobs you'll 
> > need to
> > approve these patches.
> > 
> > If you jobs are failing under fedora-27, please take the time to fix any 
> > issue
> > or update said patches to make them non-voting.
> > 
> > We (openstack-infra) aim to only keep the latest fedora image online, which
> > changes approx every 6 months.
> > 
> > Thanks for your help and understanding,
> > Paul
> > 
> > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > 
> Greetings,
> 
> This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> remove
> our fedora-26 images next week and if jobs haven't been migrated you may start
> to see NODE_FAILURE messages while running jobs.  Please take a moment to 
> merge
> the open changes or update them to be non-voting while you work on fixes.
> 
> Thanks again,
> Paul
> 
Hi,

It's been a month since we started asking projects to migrate to fedora-27.

I've proposed the patch to remove fedora-26 nodes from nodepool[2]; if your
project hasn't merged the patches above you will start to see NODE_FAILURE
results for your jobs. Please take the time to approve the changes above.
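
The changes themselves are small. Roughly speaking, they either switch the job's
nodeset to the fedora-27 label or mark the job non-voting while it is being
fixed; a sketch (the job name here is only an example) looks like:

  - job:
      name: example-dsvm-functional-fedora
      voting: false
      nodeset:
        nodes:
          - name: primary
            label: fedora-27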

Because new fedora images come online every 6 months, we like to only keep one
of them online at any given time. Fedora is meant to be a fast-moving distro that
picks up new versions of software outside of the Ubuntu LTS releases.

If you have any questions please reach out to us in #openstack-infra.

Thanks,
Paul

[2] https://review.openstack.org/558847/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit server replacement scheduled for May 2nd 2018

2018-04-03 Thread Paul Belanger
Hello from Infra.

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.
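
For those managing such filters with configuration management, the update is
roughly the following (a sketch only; chain names and tooling will differ per
site, and the IPv6 address needs an equivalent ip6tables rule):

  - name: Allow Gerrit SSH egress to the new review.openstack.org address
    iptables:
      chain: OUTPUT
      destination: 104.130.246.32
      protocol: tcp
      destination_port: "29418"
      jump: ACCEPT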

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports

2018-03-29 Thread Paul Belanger
On Thu, Mar 29, 2018 at 06:14:06PM -0400, David Moreau Simard wrote:
> Hi,
> 
> By default, all jobs currently benefit from the generation of a static
> ARA report located in the "ara" directory at the root of the log
> directory.
> Due to scalability concerns, these reports were only generated when a
> job failed and were not available on successful runs.
> 
> I'm happy to announce that you can expect ARA reports to be available
> for every job from now on -- including the successful ones !
> 
> You'll notice a subtle but important change: the report directory will
> henceforth be named "ara-report" instead of "ara".
> 
> Instead of generating and saving a HTML report, we'll now only save
> the ARA database in the "ara-report" directory.
> This is a special directory from the perspective of the
> logs.openstack.org server and ARA databases located in such
> directories will be loaded dynamically by a WSGI middleware.
> 
> You don't need to do anything to benefit from this change -- it will
> be pushed to all jobs that inherit from the base job by default.
> 
> However, if you happen to be using a "nested" installation of ARA and
> Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
> means that you can also leverage this feature.
> In order to do that, you'll want to create an "ara-report" directory
> and copy your ARA database inside before your logs are collected and
> uploaded.
> 
I believe this is an important task we should also push on for the projects you
listed above. The main reason to do this is to simplify job uploads and filesystem
demands (thanks clarkb).
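
For the nested case, the copy step is only a couple of tasks. A rough sketch
(the host group, the log directory variable and the sqlite path are all
placeholders; the real database location depends on how ARA is configured in
each project):

  - name: Stage the nested ARA database for the dynamic report
    hosts: primary
    tasks:
      - name: Create the ara-report directory in the log tree
        file:
          path: "{{ log_root }}/ara-report"
          state: directory

      - name: Copy the nested ARA sqlite database into it
        copy:
          src: "{{ ansible_env.HOME }}/.ara/ansible.sqlite"
          dest: "{{ log_root }}/ara-report/ansible.sqlite"
          remote_src: true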

Let's see if we can update these projects in the coming week or two!

Great work.

> To help you visualize:
> /ara-report <-- This is the default Zuul report
> /logs/ara <-- This wouldn't be loaded dynamically
> /logs/ara-report <-- This would be loaded dynamically
> /logs/some/directory/ara-report <-- This would be loaded dynamically
> 
> For more details on this feature of ARA, you can refer to the documentation 
> [1].
> 
> Let me know if you have any questions !
> 
> [1]: https://ara.readthedocs.io/en/latest/advanced.html
> 
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] All Hail our Newest Release Name - OpenStack Stein

2018-03-29 Thread Paul Belanger
Hi everybody!

As the subject reads, the "S" release of OpenStack is officially "Stein". As has
been the case with previous elections, this wasn't the first choice; that was "Solar".

Solar was judged to have legal risk, so as per our name selection process, we
moved to the next name on the list.

Thanks to everybody who participated, and I look forward to making OpenStack Stein
a great release.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack "S" Release Naming Preliminary Results

2018-03-21 Thread Paul Belanger
Hello all!

We decided to run a public poll this time around; we'll likely discuss the
process during a TC meeting, but we'd love to hear your feedback.

The raw results are below - however ...

**PLEASE REMEMBER** that these now have to go through legal vetting. So 
it is too soon to say 'OpenStack Solar' is our next release, given that previous
polls have had some issues with the top choice.

In any case, the names will be sent off to legal for vetting. As soon
as we have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1=c04ca6bca83a1427

Result

1. Solar  (Condorcet winner: wins contests with all other choices)
2. Stein  loses to Solar by 159–138
3. Spree  loses to Solar by 175–122, loses to Stein by 148–141
4. Sonne  loses to Solar by 190–99, loses to Spree by 174–97
5. Springer  loses to Solar by 214–60, loses to Sonne by 147–103
6. Spandau  loses to Solar by 195–88, loses to Springer by 125–118
7. See  loses to Solar by 203–61, loses to Spandau by 121–111
8. Schiller  loses to Solar by 207–70, loses to See by 112–106
9. SBahn  loses to Solar by 212–74, loses to Schiller by 111–101
10. Staaken  loses to Solar by 219–59, loses to SBahn by 115–89
11. Shellhaus  loses to Solar by 213–61, loses to Staaken by 94–85
12. Steglitz  loses to Solar by 216–50, loses to Shellhaus by 90–83
13. Saatwinkel  loses to Solar by 219–55, loses to Steglitz by 96–57
14. Savigny  loses to Solar by 219–51, loses to Saatwinkel by 77–76
15. Schoenholz  loses to Solar by 221–46, loses to Savigny by 78–70
16. Suedkreuz  loses to Solar by 220–50, loses to Schoenholz by 68–67
17. Soorstreet  loses to Solar by 226–32, loses to Suedkreuz by 75–58

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Poll: S Release Naming

2018-03-21 Thread Paul Belanger
On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote:
> Greetings all,
> 
> It is time again to cast your vote for the naming of the S Release. This time
> is a little different as we've decided to use a public polling option over per
> user private URLs for voting. This means, everybody should proceed to use the
> following URL to cast their vote:
> 
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3
> 
> Because this is a public poll, results will currently be only viewable by 
> myself
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results 
> while
> the public poll is running.
> 
> The poll will officially end on 2018-03-21 23:59:59[1], and results will be
> posted shortly after.
> 
> [1] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
> ---
> 
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the R release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
> 
> Release Name Criteria
> 
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
> 
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
> 
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
> 
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
> 
> Exact Geographic Region
> 
> The Geographic Region from where names for the S release will come is Berlin
> 
> Proposed Names
> 
> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
>Germany)
> 
> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
> 
> Spandau (One of the twelve boroughs of Berlin)
> 
> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
>abbreviated as )
> 
> Steglitz (a locality in the South Western part of the city)
> 
> Springer (Berlin is headquarters of Axel Springer publishing house)
> 
> Staaken (a locality within the Spandau borough)
> 
> Schoenholz (A zone in the Niederschönhausen district of Berlin)
> 
> Shellhaus (A famous office building)
> 
> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
> 
> Schiller (A park in the Mitte borough)
> 
> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
>(The adjective form, Saatwinkler is also a really cool bridge but
>that form is too long)
> 
> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
>wall, also translates as "sun")
> 
> Savigny (Common place in City-West)
> 
> Soorstreet (Street in the Berlin district of Charlottenburg)
> 
> Solar (Skybar in Berlin)
> 
> See (Seestraße or "See Street" in Berlin)
> 
A friendly reminder, the naming poll will be closing later today (2018-03-21
23:59:59 UTC). If you haven't done so, please take a moment to vote.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 10:25:49PM +, Chris Dent wrote:
> On Thu, 15 Mar 2018, Akihiro Motoki wrote:
> 
> > (1) it makes difficult to run tests in local environment
> > We have only released version of neutron/horizon on PyPI. It means
> > PyPI version (i.e. queens) is installed when we run tox in our local
> > development. Most neutron stadium projects and horizon plugins depends
> > on the latest master. Test run in local environment will be broken. We
> > need to install the latest neutron/horizon manually. This confuses
> > most developers. We need to ensure that tox can run successfully in a
> > same manner in our CI and local environments.
> 
> Assuming that ^ is actually the case then:
> 
> This sounds like a really critical issue. We need to be really
> careful about automating the human out of the equation to the point
> where people are submitting broken code just so they can get a good
> test run. That's not great if we'd like to encourage various forms
> of TDD and the like and we also happen to have a limited supply of
> CI resources.
> 
> (Which is not to say that tox-siblings isn't an awesome feature. I
> hadn't really known about it until today and it's a great thing.)
> 
If ansible is our interface for developers to use, it shouldn't be difficult to
reproduce the environments locally to get master. This does mean changing the
developer workflow to use ansible, which I can understand might not be what
people want to do.

The main reason for removing tox_install.sh is to remove zuul-cloner from our
DIB images, as zuulv3 no longer includes this command. Even running that
locally would no longer work against git.o.o.

I agree, we should see how to make the migration for local developer
environments better.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 04:44:07PM -0400, Paul Belanger wrote:
> On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote:
> > On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com wrote:
> > > Hello Paul,
> > > 
> > > I am Nam from Barbican team. I would like to notify a problem when using 
> > > fedora-27. 
> > > 
> > > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in 
> > > this version and it is the main reason for failure Barbican database 
> > > upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating 
> > > the version of mariadb before removing fedora-26.
> > > 
> > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> > > [2] https://jira.mariadb.org/browse/MDEV-13508 
> > > 
> > Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has
> > already been updated. Let me recheck the patch and see if it will use the 
> > newer
> > version.
> > 
> Okay, it looks like our AFS mirrors for fedora are out of sync; I've proposed
> a
> patch to fix that[3]. Once landed, I'll recheck the job.
> 
Okay, database looks to be fixed, but there are tests failing[4]. I'll defer
back to you to continue work on the migration.

[4] 
http://logs.openstack.org/20/547120/2/check/barbican-dogtag-devstack-functional-fedora-27/4cd64e0/job-output.txt.gz#_2018-03-14_22_29_49_400822

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote:
> On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com wrote:
> > Hello Paul,
> > 
> > I am Nam from Barbican team. I would like to notify a problem when using 
> > fedora-27. 
> > 
> > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in 
> > this version and it is the main reason for failure Barbican database 
> > upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating 
> > the version of mariadb before removing fedora-26.
> > 
> > [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> > [2] https://jira.mariadb.org/browse/MDEV-13508 
> > 
> Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has
> already been updated. Let me recheck the patch and see if it will use the 
> newer
> version.
> 
Okay, it looks like our AFS mirrors for fedora are out of sync; I've proposed a
patch to fix that[3]. Once landed, I'll recheck the job.

[3] https://review.openstack.org/553052
> > Thanks,
> > Nam
> > 
> > > -Original Message-
> > > From: Paul Belanger [mailto:pabelan...@redhat.com]
> > > Sent: Tuesday, March 13, 2018 9:54 PM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from
> > > fedora-26 to fedora-27
> > > 
> > > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > > Greetings,
> > > >
> > > > A quick search of git shows your projects are using fedora-26 nodes for
> > > testing.
> > > > Please take a moment to look at gerrit[1] and help land patches.  We'd
> > > > like to remove fedora-26 nodes in the next week and to avoid broken
> > > > jobs you'll need to approve these patches.
> > > >
> > > If your jobs are failing under fedora-27, please take the time to fix
> > > > any issue or update said patches to make them non-voting.
> > > >
> > > > We (openstack-infra) aim to only keep the latest fedora image online,
> > > > which changes approx every 6 months.
> > > >
> > > > Thanks for your help and understanding, Paul
> > > >
> > > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > > >
> > > Greetings,
> > > 
> > > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > > remove
> > > our fedora-26 images next week and if jobs haven't been migrated you may
> > > start to see NODE_FAILURE messages while running jobs.  Please take a
> > > moment to merge the open changes or update them to be non-voting while
> > > you work on fixes.
> > > 
> > > Thanks again,
> > > Paul
> > > 
> > > __
> > > 
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-
> > > requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com wrote:
> Hello Paul,
> 
> I am Nam from Barbican team. I would like to notify a problem when using 
> fedora-27. 
> 
> Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in this 
> version and it is the main reason for failure Barbican database upgrading 
> [1], the bug was fixed at 10.2.13 [2]. Would you mind updating the version of 
> mariadb before removing fedora-26.
> 
> [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> [2] https://jira.mariadb.org/browse/MDEV-13508 
> 
Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has
already been updated. Let me recheck the patch and see if it will use the newer
version.

> Thanks,
> Nam
> 
> > -Original Message-
> > From: Paul Belanger [mailto:pabelan...@redhat.com]
> > Sent: Tuesday, March 13, 2018 9:54 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from
> > fedora-26 to fedora-27
> > 
> > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > Greetings,
> > >
> > > A quick search of git shows your projects are using fedora-26 nodes for
> > testing.
> > > Please take a moment to look at gerrit[1] and help land patches.  We'd
> > > like to remove fedora-26 nodes in the next week and to avoid broken
> > > jobs you'll need to approve these patches.
> > >
> > > If your jobs are failing under fedora-27, please take the time to fix
> > > any issue or update said patches to make them non-voting.
> > >
> > > We (openstack-infra) aim to only keep the latest fedora image online,
> > > which changes approx every 6 months.
> > >
> > > Thanks for your help and understanding, Paul
> > >
> > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > >
> > Greetings,
> > 
> > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > remove
> > our fedora-26 images next week and if jobs haven't been migrated you may
> > start to see NODE_FAILURE messages while running jobs.  Please take a
> > moment to merge the open changes or update them to be non-voting while
> > you work on fixes.
> > 
> > Thanks again,
> > Paul
> > 
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Poll: S Release Naming

2018-03-13 Thread Paul Belanger
Greetings all,

It is time again to cast your vote for the naming of the S Release. This time
is a little different as we've decided to use a public polling option over per
user private URLs for voting. This means, everybody should proceed to use the
following URL to cast their vote:

  
https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3

Because this is a public poll, results will currently be only viewable by myself
until the poll closes. Once closed, I'll post the URL making the results
viewable to everybody. This was done to avoid everybody seeing the results while
the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and results will be
posted shortly after.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
---

According to the Release Naming Process, this poll is to determine the
community preferences for the name of the S release of OpenStack. It is
possible that the top choice is not viable for legal reasons, so the second or
later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic Latin alphabet
following the initial letter of the previous release, starting with the
initial release of "Austin". After "Z", the next name should start with
"A" again.

The name must be composed only of the 26 characters of the ISO basic Latin
alphabet. Names which can be transliterated into this character set are also
acceptable.

The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region under
consideration must be declared before the opening of nominations, as part of
the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that
describe the feature should not be included, so "Foo City" or "Foo Peak"
would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may make
an exception for one or more of them to be considered in the Condorcet poll.
The naming official is responsible for presenting the list of exceptional
names for consideration to the TC before the poll opens.

Exact Geographic Region

The Geographic Region from where names for the S release will come is Berlin

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
   Germany)

SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)

Spandau (One of the twelve boroughs of Berlin)

Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
   abbreviated as )

Steglitz (a locality in the South Western part of the city)

Springer (Berlin is headquarters of Axel Springer publishing house)

Staaken (a locality within the Spandau borough)

Schoenholz (A zone in the Niederschönhausen district of Berlin)

Shellhaus (A famous office building)

Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)

Schiller (A park in the Mitte borough)

Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
   (The adjective form, Saatwinkler is also a really cool bridge but
   that form is too long)

Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
   wall, also translates as "sun")

Savigny (Common place in City-West)

Soorstreet (Street in the Berlin district of Charlottenburg)

Solar (Skybar in Berlin)

See (Seestraße or "See Street" in Berlin)

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-13 Thread Paul Belanger
On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> Greetings,
> 
> A quick search of git shows your projects are using fedora-26 nodes for 
> testing.
> Please take a moment to look at gerrit[1] and help land patches.  We'd like to
> remove fedora-26 nodes in the next week and to avoid broken jobs you'll need 
> to
> approve these patches.
> 
> If your jobs are failing under fedora-27, please take the time to fix any issue
> or update said patches to make them non-voting.
> 
> We (openstack-infra) aim to only keep the latest fedora image online, which
> changes approx every 6 months.
> 
> Thanks for your help and understanding,
> Paul
> 
> [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> 
Greetings,

This is a friendly reminder about moving jobs to fedora-27. I'd like to remove
our fedora-26 images next week, and if jobs haven't been migrated you may start
to see NODE_FAILURE messages while running jobs. Please take a moment to merge
the open changes or update them to be non-voting while you work on fixes.

Thanks again,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-05 Thread Paul Belanger
Greetings,

A quick search of git shows your projects are using fedora-26 nodes for testing.
Please take a moment to look at gerrit[1] and help land patches.  We'd like to
remove fedora-26 nodes in the next week and to avoid broken jobs you'll need to
approve these patches.

If your jobs are failing under fedora-27, please take the time to fix any issue
or update said patches to make them non-voting.

We (openstack-infra) aim to only keep the latest fedora image online, which
changes approx every 6 months.

Thanks for your help and understanding,
Paul

[1] https://review.openstack.org/#/q/topic:fedora-27+status:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Release Naming for S - time to suggest a name!

2018-03-05 Thread Paul Belanger
On Tue, Feb 20, 2018 at 08:19:59PM -0500, Paul Belanger wrote:
> Hey everybody,
> 
> Once again, it is time for us to pick a name for our "S" release.
> 
> Since the associated Summit will be in Berlin, the Geographic
> Location has been chosen as "Berlin" (State).
> 
> Nominations are now open. Please add suitable names to
> https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
> and 2018-03-05 23:59 UTC.
> 
> In case you don't remember the rules:
> 
> * Each release name must start with the letter of the ISO basic Latin
> alphabet following the initial letter of the previous release, starting
> with the initial release of "Austin". After "Z", the next name should
> start with "A" again.
> 
> * The name must be composed only of the 26 characters of the ISO basic
> Latin alphabet. Names which can be transliterated into this character
> set are also acceptable.
> 
> * The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region
> under consideration must be declared before the opening of nominations,
> as part of the initiation of the selection process.
> 
> * The name must be a single word with a maximum of 10 characters. Words
> that describe the feature should not be included, so "Foo City" or "Foo
> Peak" would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may
> make an exception for one or more of them to be considered in the
> Condorcet poll. The naming official is responsible for presenting the
> list of exceptional names for consideration to the TC before the poll opens.
> 
> Let the naming begin.
> 
> Paul
> 
Just a reminder, there are only a few more hours left to get your suggestions in
for naming the next release.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][3rd party ci] Removal of jenkins user from DIB images

2018-02-28 Thread Paul Belanger
Greetings,

As we move forward with using zuulv3 more and more, and jenkins less and less,
we are continuing the cleanup of our images.

Specifically, if your 3rd party CI is still using jenkins, please take note of
the following changes[1]. By default our openstack-infra images will no longer
create the jenkins user account; if your CI system still needs the jenkins user,
you'll likely need to update your nodepool.yaml file and add the jenkins-slave
element directly.
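
If it helps, a rough sketch of what that looks like in nodepool.yaml (the image
name and element list here are illustrative; keep whatever your builder already
uses and append jenkins-slave):

  diskimages:
    - name: ubuntu-xenial
      elements:
        - ubuntu-minimal
        - vm
        - jenkins-slave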

If you have issues, please join us in the #openstack-infra IRC channel on
freenode.

[1] https://review.openstack.org/514485/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rdo cloud tenant reach 100 stacks

2018-02-23 Thread Paul Belanger
On Fri, Feb 23, 2018 at 02:20:34PM +0100, Arx Cruz wrote:
> Just an update, we cleaned up the stacks with more than 10 hours, jobs
> should be working properly now.
> 
> Kind regards,
> Arx Cruz
> 
> On Fri, Feb 23, 2018 at 12:49 PM, Arx Cruz  wrote:
> 
> > Hello,
> >
> > We just notice that there are several jobs failing because the
> > openstack-nodepool tenant reach 100 stacks and cannot create new ones.
> >
> > I notice there are several stacks created > 10 hours, and I'm manually
> > deleting those ones.
> > I don't think it will affect someone, but just in case, be aware of it.
> >
> > Kind regards,
> > Arx Cruz
> >

Given that multinode jobs are first-class citizens in zuulv3, I'd like to take
some time at the PTG to discuss what would be needed to stop using heat for OVB
and switch to nodepool.

There are a number of reasons to do this: removing te-broker, removing the heat
dependency for testing, using common tooling, etc. I believe there is a CI session
for tripleo one day; I was thinking of bringing it up then, unless there is a
better time.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Release Naming for S - time to suggest a name!

2018-02-20 Thread Paul Belanger
Hey everybody,

Once again, it is time for us to pick a name for our "S" release.

Since the associated Summit will be in Berlin, the Geographic
Location has been chosen as "Berlin" (State).

Nominations are now open. Please add suitable names to
https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
and 2018-03-05 23:59 UTC.

In case you don't remember the rules:

* Each release name must start with the letter of the ISO basic Latin
alphabet following the initial letter of the previous release, starting
with the initial release of "Austin". After "Z", the next name should
start with "A" again.

* The name must be composed only of the 26 characters of the ISO basic
Latin alphabet. Names which can be transliterated into this character
set are also acceptable.

* The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region
under consideration must be declared before the opening of nominations,
as part of the initiation of the selection process.

* The name must be a single word with a maximum of 10 characters. Words
that describe the feature should not be included, so "Foo City" or "Foo
Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may
make an exception for one or more of them to be considered in the
Condorcet poll. The naming official is responsible for presenting the
list of exceptional names for consideration to the TC before the poll opens.

Let the naming begin.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ?

2018-02-13 Thread Paul Belanger
On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote:
> Hi Infra Team,
> 
> I have 1 quick question on zuulv3 jobs and their migration part. From
> zuulv3 doc [1], it is clear about migrating the job definition and use
> those among cross repo pipeline etc.
> 
> But I did not find clear recommendation that whether project's
> pipeline definition should stay in project-config or we should move
> that to project side.
> 
> IMO,
> 'template' part(which has system level jobs) can stay in
> project-config. For example below part-
> 
> https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e38301927b9b6/zuul.d/projects.yaml#L10507-L10523
> 
> Other pipeline definition- 'check', 'gate', 'experimental' etc should
> be move to project repo, mainly this list-
> https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019
> 
> If we move those past as mentioned above then, we can have a
> consolidated place to control the project pipeline for
> 'irrelevant-files', specific branch etc
> 
> ..1 https://docs.openstack.org/infra/manual/zuulv3.html
> 
As it works today, the pipeline stanza needs to be in a config project[1]
(project-config) repo, so what you are suggesting will not work. This was done
to allow zuul admins to control which pipelines are set up / configured.

I am mostly curious why a project would need to modify a pipeline configuration,
or duplicate it into all projects, over having it centrally located in
project-config.

[1] https://docs.openstack.org/infra/zuul/user/config.html#pipeline
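
To be clear, what a project *can* carry in its own .zuul.yaml is its project
entry, i.e. which jobs run in which of the centrally defined pipelines. A
minimal sketch (job names are placeholders):

  - project:
      check:
        jobs:
          - openstack-tox-py27
      gate:
        jobs:
          - openstack-tox-py27

Whether that entry should live in-repo or in project-config is the policy
question being asked here; the pipeline definitions themselves stay central
either way.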
> 
> -gmann
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios

2018-01-25 Thread Paul Belanger
On Thu, Jan 25, 2018 at 04:22:56PM -0800, Emilien Macchi wrote:
> Is there any plans to run TripleO CI jobs in ceph-ansible?
> I know the project is on github but thanks to zuulv3 we can now easily
> configure ceph-ansible to run Ci jobs in OpenStack Infra.
> 
> It would be really great to investigate that in the near future so we avoid
> eventual regressions.
> Sebastien, Giulio, John, thoughts?
> -- 
> Emilien Macchi

Just a note, we haven't actually agreed to enable CI for github projects just
yet. While it is something zuul can do now, I believe we still need to decide
when / how to enable it.

We are doing some initial testing with ansible/ansible however.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Paul Belanger
On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
> 
> Since today, when I try to install devstack-plugin-container plugin over
> fedora. It complains in here [1] about not being able to sync the cache
> for the repo with the following error [2].
> 
> This is affecting me on Fedora26+ from different network locations, so I
> was wondering if someone from suse could have a look (it did work for
> Andreas in opensuse... thanks in advance!)
> 
> [1]
> https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
> 
> [2] http://paste.openstack.org/show/652041/
> 
We should consider mirroring this into our AFS mirror infrastructure to help
remove the dependency on opensuse servers. Then each regional mirror has a copy
and we don't always need to hit upstream.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Many timeouts in zuul gates for TripleO

2018-01-19 Thread Paul Belanger
On Fri, Jan 19, 2018 at 11:23:45AM -0600, Ben Nemec wrote:
> 
> 
> On 01/18/2018 09:45 AM, Emilien Macchi wrote:
> > On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar  wrote:
> > > Hi,
> > > we're encountering many timeouts for zuul gates in TripleO.
> > > For example, see
> > > http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/.
> > > 
> > > rechecks won't help and sometimes specific gate is end successfully and
> > > sometimes not.
> > > The problem is that after recheck it's not always the same gate which is
> > > failed.
> > > 
> > > Is there someone who have access to the servers load to see what cause 
> > > this?
> > > alternatively, is there something we can do in order to reduce the running
> > > time for each gate?
> > 
> > We're migrating to RDO Cloud for OVB jobs:
> > https://review.openstack.org/#/c/526481/
> > It's a work in progress but will help a lot for OVB timeouts on RH1.
> > 
> > I'll let the CI folks comment on that topic.
> > 
> 
> I noticed that the timeouts on rh1 have been especially bad as of late so I
> did a little testing and found that it did seem to be running more slowly
> than it should.  After some investigation I found that 6 of our compute
> nodes have warning messages that the cpu was throttled due to high
> temperature.  I've disabled 4 of them that had a lot of warnings. The other
> 2 only had a handful of warnings so I'm hopeful we can leave them active
> without affecting job performance too much.  It won't accomplish much if we
> disable the overheating nodes only to overload the remaining ones.
> 
> I'll follow up with our hardware people and see if we can determine why
> these specific nodes are overheating.  They seem to be running 20 degrees C
> hotter than the rest of the nodes.
> 
Did tripleo-test-cloud-rh1 get new kernels applied for meltdown / spectre? It's
possible that is impacting performance too.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron][octavia][horizon][networking-l2gw] Renaming tox_venvlist in Zuul v3 run-tempest

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 05:13:28PM +0100, Andreas Jaeger wrote:
> The Zuul v3 tox jobs use "tox_envlist" to name the tox environment to
> use, the tempest run-tempest role used "tox_venvlist" with an extra "v"
> in it. This lead to some confusion and a wrong fix, so let's be
> consistent across these jobs.
> 
> I've just pushed changes under the topic tox_envlist to sync these.
> 
> To have working jobs, I needed the usual rename dance: Add the new
> variable, change the job, remove the old one.
> 
> Neutron, octavia, horizon, networking-l2gw team, please review and merge
> the first one quickly.
> 
> https://review.openstack.org/#/q/topic:tox_envlist
> 
++

Agreed. In fact, it would be good to see what would need to change in our
existing run-tox role to have tempest consume it directly, rather than using its
own tasks for running tox.
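
Either way, once the variable names are consistent, a job override is just one
line in vars. As a sketch (the job and parent names below are placeholders):

  - job:
      name: my-tempest-scenario-job
      parent: devstack-tempest
      vars:
        tox_envlist: all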

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 01:11:40PM -0800, Emilien Macchi wrote:
> On Fri, Jan 12, 2018 at 12:37 PM, Doug Hellmann  wrote:
> > Since we are discussing goals for the Rocky cycle, I would like to
> > propose a change to the way we track progress on the goals.
> >
> > We've started to see lots and lots of changes to the goal documents,
> > more than anticipated when we designed the system originally. That
> > leads to code review churn within the governance repo, and it means
> > the goal champions have to wait for the TC to review changes before
> > they have complete tracking information published somewhere. We've
> > talked about moving the tracking out of git and using an etherpad
> > or a wiki page, but I propose that we use storyboard.
> >
> > Specifically, I think we should create 1 story for each goal, and
> > one task for each project within the goal. We can then use a board
> > to track progress, with lanes like "New", "Acknowledged", "In
> > Progress", "Completed", and "Not Applicable". It would be the
> > responsibility of the goal champion to create the board, story, and
> > tasks and provide links to the board and story in the goal document
> > (so we only need 1 edit after the goal is approved). From that point
> > on, teams and goal champions could collaborate on keeping the board
> > up to date.
> >
> > Not all projects are registered in storyboard, yet. Since that
> > migration is itself a goal under discussion, I think for now we can
> > just associate all tasks with the governance repository.
> >
> > It doesn't look like changes to a board trigger any sort of
> > notifications for the tasks or stories involved, but that's probably
> > OK. If we really want notifications we can look at adding them as
> > a feature of Storyboard at the board level.
> >
> > How does this sound as an approach? Does anyone have any reservations
> > about using storyboard this way?
> 
> Sounds like a good idea, and will help to "Eat Our Own Dog Food" (if
> we want Storyboard adopted at some point).
>
Agreed. I've seen some downstream teams also do this with trello. If people
would like to try it with Storyboard, I don't have any objections.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone Team Update - Week of 4 December 2017

2017-12-08 Thread Paul Belanger
On Fri, Dec 08, 2017 at 05:58:05PM +, Jeremy Stanley wrote:
> On 2017-12-08 18:48:56 +0100 (+0100), Colleen Murphy wrote:
> [...]
> > The major hindrance to keystone using Storyboard is its lack of
> > support for private bugs, which is a requirement given that
> > keystone is a VMT-managed project. If anyone is tired of keystone
> > work and wants to help the Storyboard team with that feature I'm
> > sure they would appreciate it!
> [...]
> 
> I also followed up on this topic in the SB meeting yesterday[*] and
> realized support is much further along than I previously recalled.
> In summary, SB admins can define "teams" (e.g., OpenStack VMT) and
> anyone creating a private story can choose to share it with any
> teams or individual accounts they like. What we're mostly missing at
> this point is a streamlined reporting mechanism to reduce the steps
> (and chances to make mistakes) when reporting a suspected
> vulnerability. A leading candidate solution would be support for
> custom reporting URLs which can feed arbitrary values into the
> creation form.
> 
> [*] 
> http://eavesdrop.openstack.org/meetings/storyboard/2017/storyboard.2017-12-06-19.05.log.html#l-36
> 
Thanks to both for pointing this out again. I too didn't know it was possible to
create teams for private stories. Sounds like we are slowly making progress on
this blocker for some of our projects.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 49

2017-12-07 Thread Paul Belanger
On Thu, Dec 07, 2017 at 05:28:53PM +, Jeremy Stanley wrote:
> On 2017-12-07 17:08:55 + (+), Graham Hayes wrote:
> [...]
> > It is worth remembering that this is a completely separate project to
> > OpenStack, with its own governance. They are free not to use our tooling
> > (and considering a lot of containers work is on github, they may never
> > use it).
> 
> Right. We'd love it of course if they decide they want to, and the
> Infra team is welcoming and taking the potential needs of these
> other communities into account in some upcoming refactoring of
> services and new features, but to a great extent they're on their
> own journeys and need to decide for themselves what tools and
> solutions will work best for their particular contributors and
> users. The OpenStack community sets a really great example I expect
> many of them will want to emulate and converge on over time, but
> that works better if they come to those sorts of conclusions on
> their own rather than being told what to do/use.
> -- 
> Jeremy Stanley

It seems there are at least some services they'd be interested in using, mailman
for example. Does this mean it would be a-la-carte services, where new projects
mix and match which things they'd like to have managed and not?

As for being told what to do/use, I'm sure there must be some process in place,
or something we want, to give an overview of the services we already have
available, and why a project might want to use them. But I agree with the
comments about 'own journeys and need to decide for themselves'.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] Removal of packages from bindep-fallback

2017-11-23 Thread Paul Belanger
On Thu, Nov 23, 2017 at 07:55:07PM +0100, Andreas Jaeger wrote:
> On 2017-11-16 06:59, Ian Wienand wrote:
> > Hello,
> > 
> > Some time ago we started the process of moving towards projects being
> > more explicit about thier binary dependencies using bindep [1]
> > 
> > To facilitate the transition, we created a "fallback" set of
> > dependencies [2] which are installed when a project does not specifiy
> > it's own bindep dependencies.  This essentially replicated the rather
> > ad-hoc environment provided by CI images before we started the
> > transition.
> > 
> > This list has acquired a few packages that cause some problems in
> > various situations today.  Particularly packages that aren't in the
> > increasing number of distributions we provide, or packages that come
> > from alternative repositories.
> > 
> > To this end, [3,4] proposes the removal of
> > 
> >  liberasurecode-*
> >  mongodb-*
> >  python-zmq
> >  redis
> >  zookeeper
> >  ruby-*
> > 
> > from the fallback packages.  This has a small potential to affect some
> > jobs that tacitly rely on these packages.
> 
> This has now merged. One fallout I noticed is that pcre-devel is not
> installed anymore and thus "pcre.h" is not found when building some
> python files. If that hits you, update your bindep.txt file with:
> pcre-devel [platform:rpm]
> libpcre3-dev [platform:dpkg]
> 
Good to know, I've been trying to see what other jobs are failing, and haven't
seen too much yet.

> For details about bindep, see
> https://docs.openstack.org/infra/manual/drivers.html#package-requirements
> 
> Andreas
> 
> > NOTE: this does *not* affect devstack jobs (devstack manages it's own
> > dependencies outside bindep) and if you want them back, it's just a
> > matter of putting them into the bindep file in your project (and as a
> > bonus, you have better dependency descriptions for your code).
> > 
> > We should be able to then remove centos-release-openstack-* from our
> > centos base images too [5], which will make life easier for projects
> > such as triple-o who have to work-around that.
> > 
> > If you have concerns, please reach out either via mail or in
> > #openstack-infra
> > 
> > Thank you,
> > 
> > -i
> > 
> > [1] https://docs.openstack.org/infra/bindep/
> > [2] 
> > https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt
> > [3] https://review.openstack.org/519533
> > [4] https://review.openstack.org/519534
> > [5] https://review.openstack.org/519535
> 
> 
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing internet access from unit test gates

2017-11-21 Thread Paul Belanger
On Tue, Nov 21, 2017 at 04:41:13PM +, Mooney, Sean K wrote:
> 
> 
> > -Original Message-
> > From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> > Sent: Tuesday, November 21, 2017 3:05 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] Removing internet access from unit test
> > gates
> > 
> > On 2017-11-21 09:28:20 +0100 (+0100), Thomas Goirand wrote:
> > [...]
> > > The only way that I see going forward, is having internet access
> > > removed from unit tests in the gate, or probably just the above
> > > variables set.
> > [...]
> > 
> > Historically, our projects hadn't done a great job of relegating their
> > "unit test" jobs to only run unit tests, and had a number of what would
> > be commonly considered functional tests mixed in. This has improved in
> > recent years as many of those projects have created separate functional
> > test jobs and are able to simplify their unit test jobs accordingly, so
> > this may be more feasible now than it was in the past.
> > 
> > Removing network access from the machines running these jobs won't
> > work, of course, because our job scheduling and execution service needs
> > to reach them over the Internet to start jobs, monitor progress and
> > collect results. As you noted, faking Python out with envvars pointing
> > it at nonexistent HTTP proxies might help at least where tests attempt
> > to make HTTP(S) connections to remote systems.
> > The Web is not all there is to the Internet however, so this wouldn't
> > do much to prevent use of remote DNS, NTP, SMTP or other
> > non-HTTP(S) protocols.
> > 
> > The biggest wrinkle I see in your "proxy" idea is that most of our
> > Python-based projects run their unit tests with tox, and it will use
> > pip to install project and job dependencies via HTTPS prior to starting
> > the test runner. As such, any proxy envvar setting would need to happen
> > within the scope of tox itself so that it will be able to set up the
> > virtualenv prior to configuring the proxy vars for the ensuing tests.
> > It might be easiest for you to work out the tox.ini modification on one
> > project (it'll be self-testing at least) and then once the pilot can be
> > shown working the conversation with the community becomes a little
> > easier.
> [Mooney, Sean K] I may be over simplifying here but our unit tests are still 
> executed by
> Zuul in vms provided by nodepool. Could we simply take advantage of openstack 
> and
> use security groups to to block egress traffic from the vm except that 
> required to upload the logs?
> e.g. don't mess with tox or proxyies within the vms and insteand do this 
> externally via neutron.
> This would require the cloud provider to expose neutron however which may be 
> an issue for Rackspace but
> It its only for unit test which are relatively short lived vs tempest jobs 
> perhaps the other providers would
> Still have enough capacity?
> > --
> > Jeremy Stanley
> 
I don't think we'd need to use security groups; we could just set up a local
firewall ruleset on the node itself if we wanted.
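
A very rough sketch of what that could look like as a pre-test play (the chain
choices, and any exceptions needed for DNS or the regional mirrors, are
placeholders that would need real thought):

  - name: Restrict egress while unit tests run
    hosts: all
    become: true
    tasks:
      - name: Allow replies on connections the executor already has open
        iptables:
          chain: OUTPUT
          ctstate:
            - ESTABLISHED
            - RELATED
          jump: ACCEPT

      - name: Drop all other outbound traffic
        iptables:
          chain: OUTPUT
          jump: DROP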

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][infra][stable] we are ready to restore stable/mitaka for openstack-manuals

2017-11-20 Thread Paul Belanger
On Mon, Nov 20, 2017 at 05:40:26PM +0100, Andreas Jaeger wrote:
> On 2017-11-20 17:32, Doug Hellmann wrote:
> > As we agreed in the new documentation retention policy spec [1], I need
> > to restore the stable/mitaka branch for the openstack/openstack-manuals
> > repository and then trigger a build of the mitaka version of the guides.
> > I have a local patch ready that works, so I believe the next steps are
> > to create the branch, propose the patch, and fix whatever infra issues
> > we have due to running the change on such an old branch.
> > 
> > I believe I have the permissions needed to create the branch from the
> > existing mitaka-eol tag. Before I do that, however, I wanted to make
> > sure there are no additional steps needed. Should we delete the tag, for
> > example? Or is creating the branch sufficient?
> 
> Deleting the tag is not really workable - everybody who has cloned the
> repo will still have the tag locally. So I suggest leaving it.
> 
> 
> I'm not aware of any other issues, go ahead - but let's wait a day or
> two in case anybody else sees a problem.
> 
> Andreas
> 
> 
> > Doug
> > 
> > [1] 
> > http://specs.openstack.org/openstack/docs-specs/specs/queens/retention-policy.html
> 
++ Ready to support if needed.
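
For reference, the git side of creating the branch from the existing tag is
roughly this (a sketch only; it assumes a 'gerrit' remote and the appropriate
create-reference permission on the repo, and in practice it can also be done
through the Gerrit UI or API):

  git fetch gerrit --tags
  # create the branch locally from the EOL tag and push it up as a new ref
  git branch stable/mitaka mitaka-eol
  git push gerrit stable/mitaka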

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] Removal of packages from bindep-fallback

2017-11-16 Thread Paul Belanger
On Thu, Nov 16, 2017 at 04:59:01PM +1100, Ian Wienand wrote:
> Hello,
> 
> Some time ago we started the process of moving towards projects being
> more explicit about their binary dependencies using bindep [1]
> 
> To facilitate the transition, we created a "fallback" set of
> dependencies [2] which are installed when a project does not specify
> its own bindep dependencies.  This essentially replicated the rather
> ad-hoc environment provided by CI images before we started the
> transition.
> 
> This list has acquired a few packages that cause some problems in
> various situations today.  Particularly packages that aren't in the
> increasing number of distributions we provide, or packages that come
> from alternative repositories.
> 
> To this end, [3,4] proposes the removal of
> 
>  liberasurecode-*
>  mongodb-*
>  python-zmq
>  redis
>  zookeeper
>  ruby-*
> 
> from the fallback packages.  This has a small potential to affect some
> jobs that tacitly rely on these packages.
> 
> NOTE: this does *not* affect devstack jobs (devstack manages its own
> dependencies outside bindep) and if you want them back, it's just a
> matter of putting them into the bindep file in your project (and as a
> bonus, you have better dependency descriptions for your code).
> 
> We should be able to then remove centos-release-openstack-* from our
> centos base images too [5], which will make life easier for projects
> such as triple-o who have to work-around that.
> 
> If you have concerns, please reach out either via mail or in
> #openstack-infra
> 
> Thank you,
> 
> -i
> 
> [1] https://docs.openstack.org/infra/bindep/
> [2] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt
> [3] https://review.openstack.org/519533
> [4] https://review.openstack.org/519534
> [5] https://review.openstack.org/519535
> 
++

I think we are in good shape to move forward here. I know a lot of projects over
the last 12 months have moved to in-tree bindep.txt files, but not everything has.
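
For anyone still relying on the fallback list, an in-tree bindep.txt is just a
plain list of system packages with optional platform/profile selectors. A small
illustrative sketch (the package names are examples, not a recommendation for
any particular project):

  # bindep.txt
  gcc [platform:rpm]
  build-essential [platform:dpkg]
  libffi-devel [platform:rpm]
  libffi-dev [platform:dpkg]
  # only pulled in when the "test" profile is requested
  mariadb [platform:rpm test]
  mysql-client [platform:dpkg test]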

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Paul Belanger
On Tue, Nov 14, 2017 at 11:25:03AM -0500, Doug Hellmann wrote:
> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
> > >> The concept, in general, is to create a new set of cores from these
> > >> groups, and use 3rd party CI to validate patches. There are lots of
> > >> details to be worked out yet, but our amazing UC (User Committee) will
> > >> be begin working out the details.
> > > 
> > > What is the most worrying is the exact "take over" process. Does it mean 
> > > that 
> > > the teams will give away the +2 power to a different team? Or will our 
> > > (small) 
> > > stable teams still be responsible for landing changes? If so, will they 
> > > have to 
> > > learn how to debug 3rd party CI jobs?
> > > 
> > > Generally, I'm scared of both overloading the teams and losing the 
> > > control over 
> > > quality at the same time :) Probably the final proposal will clarify it..
> > 
> > The quality of backported fixes is expected to be a direct (and only?) 
> > interest of those new teams of new cores, coming from users and 
> > operators and vendors. The more parties to establish their 3rd party 
> 
> We have an unhealthy focus on "3rd party" jobs in this discussion. We
> should not assume that they are needed or will be present. They may be,
> but we shouldn't build policy around the assumption that they will. Why
> would we have third-party jobs on an old branch that we don't have on
> master, for instance?
> 
I get the feeling more people are comfortable contributing to their own 3rd
party CI than to the upstream OpenStack CI systems, either because they don't
have the time or don't understand how it works. I agree with Doug: for this to
work, I think we need a healthy number of people helping keep the CI systems
running upstream, rather than depending on 3rd party CI downstream.

As a comment, I think getting involved in the openstack-stablemaint team will be
a good first step towards the goals people are interested in here. I'm happy to
help work with others, and I'm taking tonyb up on his offer to start helping too
:)

> > checking jobs, the better proposed changes communicated, which directly 
> > affects the quality in the end. I also suppose, contributors from ops 
> > world will likely be only struggling to see things getting fixed, and 
> > not new features adopted by legacy deployments they're used to maintain. 
> > So in theory, this works and as a mainstream developer and maintainer, 
> > you need no to fear of losing control over LTS code :)
> > 
> > Another question is how to not block all on each over, and not push 
> > contributors away when things are getting awry, jobs failing and merging 
> > is blocked for a long time, or there is no consensus reached in a code 
> > review. I propose the LTS policy to enforce CI jobs be non-voting, as a 
> > first step on that way, and giving every LTS team member a core rights 
> > maybe? Not sure if that works though.
> 
> I'm not sure what change you're proposing for CI jobs and their voting
> status. Do you mean we should make the jobs non-voting as soon as the
> branch passes out of the stable support period?
> 
> Regarding the review team, anyone on the review team for a branch
> that goes out of stable support will need to have +2 rights in that
> branch. Otherwise there's no point in saying that they're maintaining
> the branch.
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election] Question for all candidates in TC election: What will you do if you don't win?

2017-10-15 Thread Paul Belanger
On Sun, Oct 15, 2017 at 08:15:51AM -0400, Amrith Kumar wrote:
> Full disclosure, I'm running for election as well. I intend to also
> provide an answer to the question I pose here, one that I've posed
> before on #openstack-tc in an office hours session.
> 
> Question 1:
> 
> "There are M open slots for the TC and there are N (>>M) candidates
> for those open slots. This is a good problem to have, no doubt.
> Choice, is a good thing, enthusiasm and participation are good things.
> 
> But clearly, (N-M) candidates will not be elected.
> 
> If you are one of those (N-M) candidates, what then? What do you
> believe you can do if you are not elected to the TC, and what will you
> do? (concrete examples would be good)"
>
++

I'd like to see the (N-M) candidates continue with the TC by helping support the
M who are elected. Personally, I plan on participating more in TC office hours
regardless of the results, or even reaching out to the TC and asking what non-TC
members could do to help the TC more.

One thing I've noticed in the question period before elections was 'What more
could the TC do?'. I think it is also valid to look at it the other way around,
as 'What more could a non-TC member do?', like Amrith asks above.

> Question 2:
> 
> "If you are one of the M elected candidates, the N-M candidates who
> were not elected represent a resource?
> 
> Would you look to leverage/exploit that resource, and if so, how?
> (concrete examples would be good)"
> 
Yeah, I'd love to see a 'pair programming' style between TC and non-TC members.
Clearly we have parties interested in becoming TC members, and I would think the
N-M candidates will also try running again in 6 months.  So why not help those
N-M members become M, just like we do for non-core / core members on OpenStack
projects?

> -amrith
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election] Question for candidates: How do you think we can make our community more inclusive?

2017-10-13 Thread Paul Belanger
On Fri, Oct 13, 2017 at 02:45:28PM +0200, Flavio Percoco wrote:
> Greetings,
> 
> Some of you, TC candidates, expressed concerns about diversity and 
> inclusiveness
> (or inclusivity, depending on your taste) in your candidacy. I believe this 
> is a
> broad, and sometimes ill-used, topic, so I'd like to know, from y'all, how 
> you
> think we could make our community more inclusive. What areas would you improve
> first?
> 
> Thank you,
> Flavio
> 
I admit I didn't include this topic in my email, and I'll be the first to say
inclusiveness is an important and healthy topic for any project / society.

Listening to, and learning, what makes people comfortable engaging and
contributing would be on my list. This applies both to projects and to people.
I'd learn how other projects are responding to the topic and possibly engage
with those leaders to share successes.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] TC Candidates: what does an OpenStack user look like?

2017-10-12 Thread Paul Belanger
On Thu, Oct 12, 2017 at 12:51:10PM -0400, Zane Bitter wrote:
> (Reminder: we are in the TC election campaigning period, and we all have the
> opportunity to question the candidates. The campaigning period ends on
> Saturday, so make with the questions.)
> 
> 
> In my head, I have a mental picture of who I'm building OpenStack for. When
> I'm making design decisions I try to think about how it will affect these
> hypothetical near-future users. By 'users' here I mean end-users, the actual
> consumers of OpenStack APIs. What will it enable them to do? What will they
> have to work around? I think we probably all do this, at least
> subconsciously. (Free tip: try doing it consciously.)
> 
> So my question to the TC candidates (and incumbent TC members, or anyone
> else, if they want to answer) is: what does the hypothetical OpenStack user
> that is top-of-mind in your head look like? Who are _you_ building OpenStack
> for?
> 
For me, my 'OpenStack users' are the developers of the OpenStack project. While a
developer might not be directly consuming the OpenStack API, the tooling they
use to contribute to OpenStack does. This is the main reason why I enjoy working
on the Project Infrastructure team.

It provides a feedback loop back into the project: real-world issues with
OpenStack (e.g. scaling, multi-cloud, python clients, APIs, you name it) can
hopefully make their way back into the hands of developers.  It doesn't always
work out that way, but it is still a great process to have.

> There's a description of mine in this email, as an example:
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123312.html
> 
> To be clear, for me at least there's only one wrong answer ("person who
> needs somewhere to run their IRC bouncer"). What's important in my opinion
> is that we have a bunch of people with *different* answers on the TC,
> because I think that will lead to better discussion and hopefully better
> decisions.
> 
> Discuss.
> 
> cheers,
> Zane.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election] Question for the candidates

2017-10-12 Thread Paul Belanger
On Thu, Oct 12, 2017 at 12:42:46PM -0700, Clay Gerrard wrote:
> I like a representative democracy.  It mostly means I get a say in which
> other people I have to trust to think deeply about issues which effect me
> and make decisions which I agree (more or less) are of benefit to the
> social groups in which I participate.  When I vote IRL I like to consider
> voting records.  Actions speak louder blah blah.
> 
> To candidates:
> 
> Would you please self select a change (or changes) from
> https://github.com/openstack/governance/ in the past ~12 mo or so where
> they thought the outcome or the discussion/process was particular good and
> explain why you think so?
> 
> It'd be super helpful to me, thanks!
> 
> -Clay

2017-05-30 Guidelines for Managing Releases of Binary Artifacts [1].

It would have to be 469265[2], which Doug Hellmann proposed after the OpenStack
Summit in Boston. There have been a lot of passionate people in the community
asking for containers, specifically docker in this case.

Regardless of which side you are on in the container-vs-VM debate, together as a
community we had discussions on what the guideline should look like.
Individually, each project had its own notion of what publishing containers
would look like, but the TC helped navigate some of the technical issues around
binary vs. source releasing, versioning and branding (to name a few).

While there is still work to be done on getting the publishing pipeline
finalized, I like to think the interested parties in binary artifacts are happy
we now have governance in place.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20170530-binary-artifacts.rst
[2] https://review.openstack.org/469265/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election] Question for the candidates

2017-10-12 Thread Paul Belanger
On Thu, Oct 12, 2017 at 01:38:03PM -0500, Ed Leafe wrote:
> In the past year or so, has there been anything that made you think “I wish 
> the TC would do something about that!” ? If so, what was it, and what would 
> you have wanted the TC to do about it?
> 
> -- Ed Leafe
>
No, I've been happy with how our TC has operated this year and in previous
years. That doesn't mean I agree with everything the TC has done, but I do
appreciate the effort and energy everybody has put in.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][election] TC candidacy

2017-10-09 Thread Paul Belanger
Today I step outside my comfort zone and submit my name into the Technical
Committee's election.

I'm a Red Hat employee who contributes full time to the OpenStack Project
Infrastructure team, and I have been contributing since the Folsom release
(22 Sept 2012).

I submit my candidacy today not because I seek to be in the limelight, but to
help fill the void left by the awesome contributors before me. As with any
project, I think it is healthy to rotate new people and new ideas in.  It is a
great way for people to learn new leadership skills, and also a way for a
project to gain new energy and hear different points of view.

For my part, I hope to bring a point of view from operations. While I have
contributed to OpenStack projects, my primary passion is helping provide the
infrastructure resources that contributors need to make OpenStack awesome.
Working on the Infrastructure project allows me to see what is great about
OpenStack and what could use improvement.

I believe you get out of something what you put into it, so here I am adding my
name to the list.

Good luck to all,
Paul Belanger

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues

2017-10-04 Thread Paul Belanger
On Wed, Oct 04, 2017 at 02:39:17PM +0900, Rikimaru Honjo wrote:
> Hello,
> 
> I'm trying to run jobs with Zuul v3 in my local environment.[1]
> I prepared a sample job that runs sleep command on zuul's host.
> This job doesn't use Nodepool. [2]
> 
> As a result, Zuul v3 submitted "SUCCESS" to gerrit when gerrit event occurred.
> But, error logs were generated. And my job was not run.
> 
> I'd appreciate it if you help me.
> (Should I write this topic on Zuul Storyboard?)
> 
> [1]I use Ubuntu 16.04 and zuul==2.5.3.dev1374.
> 
> [2]In my understanding, I can use Zuul v3 without Nodepool.
> https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#attr-job.nodeset
> > If a job has an empty or no nodeset definition, it will still run and may 
> > be able to perform actions on the Zuul executor.
> 
While this is true, at this time it has had limited testing, and I'm not sure I
would write job content that leverages it too much.  Right now, we are only
using it to trigger RTFD hooks in openstack-infra.

Zuulv3 is really meant to be used with nodepool, and the integration is much
tighter now than before. We do have a plan to support static nodes in zuulv3,
but work on that hasn't finished.
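
For completeness, the kind of configuration being discussed looks roughly like
this (the job and playbook names are made up, and as I said above I wouldn't
build much real job content on executor-only jobs yet):

  # .zuul.yaml
  - job:
      name: example-executor-only
      description: Runs entirely on the Zuul executor, with no nodepool nodes.
      run: playbooks/example.yaml
      nodeset:
        nodes: []

  # playbooks/example.yaml
  - hosts: localhost
    tasks:
      - name: Do something trivial on the executor
        command: sleep 5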

> [Conditions]
> * Target project is defined as config-project in tenant configuration file.
> * I didn't write nodeset in .zuul.yaml.
>   Because my job doesn't use Nodepool.
> * I configured  playbooks's hosts as "- hosts: all" or "- hosts: localhost".
>   (I referred to project-config repository.)
> 
> [Error logs]
> "no hosts matched" or "list index out of range" were generated.
> Please see the attached file.
> 
> 
> On 2017/09/29 23:58, Monty Taylor wrote:
> > Hey everybody!
> > 
> > tl;dr - If you're having issues with your jobs, check the FAQ, this email 
> > and followups on this thread for mentions of them. If it's an issue with 
> > your job and you can spot it (bad config) just submit a patch with topic 
> > 'zuulv3'. If it's bigger/weirder/you don't know - we'd like to ask that you 
> > send a follow up email to this thread so that we can ensure we've got them 
> > all and so that others can see it too.
> > 
> > ** Zuul v3 Migration Status **
> > 
> > If you haven't noticed the Zuul v3 migration - awesome, that means it's 
> > working perfectly for you.
> > 
> > If you have - sorry for the disruption. It turns out we have a REALLY 
> > complicated array of job content you've all created. Hopefully the pain of 
> > the moment will be offset by the ability for you to all take direct 
> > ownership of your awesome content... so bear with us, your patience is 
> > appreciated.
> > 
> > If you find yourself with some extra time on your hands while you wait on 
> > something, you may find it helpful to read:
> > 
> >    https://docs.openstack.org/infra/manual/zuulv3.html
> > 
> > We're adding content to it as issues arise. Unfortunately, one of the 
> > issues is that the infra manual publication job stopped working.
> > 
> > While the infra manual publication is being fixed, we're collecting FAQ 
> > content for it in an etherpad:
> > 
> >    https://etherpad.openstack.org/p/zuulv3-migration-faq
> > 
> > If you have a job issue, check it first to see if we've got an entry for 
> > it. Once manual publication is fixed, we'll update the etherpad to point to 
> > the FAQ section of the manual.
> > 
> > ** Global Issues **
> > 
> > There are a number of outstanding issues that are being worked. As of right 
> > now, there are a few major/systemic ones that we're looking in to that are 
> > worth noting:
> > 
> > * Zuul Stalls
> > 
> > If you say to yourself "zuul doesn't seem to be doing anything, did I do 
> > something wrong?", we're having an issue that jeblair and Shrews are 
> > currently tracking down with intermittent connection issues in the backend 
> > plumbing.
> > 
> > When it happens it's an across the board issue, so fixing it is our number 
> > one priority.
> > 
> > * Incorrect node type
> > 
> > We've got reports of things running on trusty that should be running on 
> > xenial. The job definitions look correct, so this is also under 
> > investigation.
> > 
> > * Multinode jobs having POST FAILURE
> > 
> > There is a bug in the log collection trying to collect from all nodes while 
> > the old jobs were designed to only collect from the 'primary'. Patches are 
> > up to fix this and should be fixed soon.
> > 
> > * Branch Exclusions being ignored
> > 
> > This has been reported and its cause is currently unknown.
> > 
> > Thank you all again for your patience! This is a giant rollout with a bunch 
> > of changes in it, so we really do appreciate everyone's understanding as we 
> > work through it all.
> > 
> > Monty
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> 

Re: [openstack-dev] [infra] pypi publishing

2017-10-01 Thread Paul Belanger
On Sun, Oct 01, 2017 at 05:02:00AM -0700, Clark Boylan wrote:
> On Sat, Sep 30, 2017, at 10:28 PM, Gary Kotton wrote:
> > Hi,
> > Any idea why latest packages are not being published to pypi.
> > Examples are:
> > vmware-nsxlib 10.0.2 (latest stable/ocata)
> > vmware-nsxlib 11.0.1 (latest stable/pike)
> > vmware-nsxlib 11.1.0 (latest queens)
> > Did we miss a configuration that we needed to do in the infra projects?
> > Thanks
> > Gary
> 
> Looks like these are all new tags pushed within the last day. Looking at
> logs for 11.1.1 we see the tarball artifact creation failed [0] due to
> what is likely a bug in the new zuulv3 jobs.
> 
> [0]
> http://logs.openstack.org/e5/e5a2189276396201ad88a6c47360c90447c91589/release/publish-openstack-python-tarball/2bdd521/ara/result/6ec8ae45-7266-40a9-8fd5-3fb4abcde677/
> 
> We'll need to get the jobs debugged.
> 
This is broken because of: https://review.openstack.org/508563/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][QA][group-based-policy][zaqar][packaging_deb][fuel][networking-*] Marking <= mitaka EOL

2017-09-24 Thread Paul Belanger
On Wed, Sep 20, 2017 at 11:26:46AM -0400, Tony Breeds wrote:
> On Wed, Sep 20, 2017 at 12:56:07PM +0200, Andreas Jaeger wrote:
> 
> > So, for fuel we have stable/7.0 etc - what are the plans for these? Can
> > we retire them as well?
> > 
> > Those are even older AFAIK,
> 
> As discussed on IRC, when I started this I needed to start with
> something small and simple, so I picked the series based branches.
> 
> I do intend to get look at the older numeric stable branches but I doubt
> there is enough time for real community consultation befoer the zuulv3
> migration.
> 
> Yours Tony.

Hijacking thread for branch EOL things.

I just noticed our DIB image git cache is still referencing old branches we have
EOL'd. This is because we haven't deleted the cache in some time.

I think we could update the EOL docs to have an infra-root make sure we do this
after all branches have been EOL'd in gerrit, unless there is a better way to do
it with git rather than manually deleting the cache on our nodepool-builder
servers.
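
One option might be something like this per cached repo (a sketch only; the
cache path is illustrative and would need to match wherever DIB keeps its
source-repositories cache on the builders):

  cd /opt/dib_cache/source-repositories/nova   # illustrative path
  # drop remote-tracking refs for branches that have been deleted in gerrit,
  # instead of wiping the whole cache
  git fetch origin --prune
  # delete any local copy of the EOL'd branch as well, if one exists
  git branch -D stable/mitaka 2>/dev/null || true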

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Paul Belanger
On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> > "if DevStack gets custom images prepped to make its jobs
> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> > do we draw that line?). "
> > 
> > IMHO we can try to have only one big image per distribution,
> > where the packages are the union of the packages requested by all team,
> > minus the packages blacklisted by any team.
> [...]
> 
> Until you realize that some projects want packages from UCA, from
> RDO, from EPEL, from third-party package repositories. Version
> conflicts mean they'll still spend time uninstalling the versions
> they don't want and downloading/installing the ones they do so we
> have to optimize for one particular set and make the rest
> second-class citizens in that scenario.
> 
> Also, preinstalling packages means we _don't_ test that projects
> actually properly declare their system-level dependencies any
> longer. I don't know if anyone's concerned about that currently, but
> it used to be the case that we'd regularly add/break the package
> dependency declarations in DevStack because of running on images
> where the things it expected were preinstalled.
> -- 
> Jeremy Stanley

+1

We spend a lot of effort trying to keep the 6 images we have in nodepool working
today; I can't imagine how much work it would be to start adding more images per
project.

Personally, I'd like to audit things again once we roll out zuulv3; I am sure
there are some tweaks we could make to help speed things up.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Garbage patches for simple typo fixes

2017-09-22 Thread Paul Belanger
On Fri, Sep 22, 2017 at 10:26:09AM +0800, Zhipeng Huang wrote:
> Let's not forget the epic fail earlier on the "contribution.rst fix" that
> almost melt down the community CI system.
> 
> For any companies that are doing what Matt mentioned, please be aware that
> the dev community of the country you belong to is getting hurt by your
> stupid activity.
> 
> Stop patch trolling and doing something meaningful.
> 
Sorry, but I find this comment over the line. Just because you disagree with
the $topic at hand doesn't mean you should default to calling it 'stupid'. Give
somebody the benefit of the doubt for not knowing any better.

This is not a good example of encouraging anybody to contribute to the project.

-Paul

> On Fri, Sep 22, 2017 at 10:21 AM, Matt Riedemann 
> wrote:
> 
> > I just wanted to highlight to people that there seems to be a series of
> > garbage patches in various projects [1] which are basically doing things
> > like fixing a single typo in a code comment, or very narrowly changing http
> > to https in links within docs.
> >
> > Also +1ing ones own changes.
> >
> > I've been trying to snuff these out in nova, but I see it's basically a
> > pattern widespread across several projects.
> >
> > This is the boilerplate comment I give with my -1, feel free to employ it
> > yourself.
> >
> > "Sorry but this isn't really a useful change. Fixing typos in code
> > comments when the context is still clear doesn't really help us, and mostly
> > seems like looking for padding stats on stackalytics. It's also a drain on
> > our CI environment.
> >
> > If you fixed all of the typos in a single module, or in user-facing
> > documentation, or error messages, or something in the logs, or something
> > that actually doesn't make sense in code comments, then maybe, but this
> > isn't one of those things."
> >
> > I'm not trying to be a jerk here, but this is annoying to the point I felt
> > the need to say something publicly.
> >
> > [1] https://review.openstack.org/#/q/author:%255E.*inspur.*
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Zhipeng (Howard) Huang
> 
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
> 
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
> 
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Pike final release - blockers & priorities

2017-09-04 Thread Paul Belanger
On Sun, Sep 03, 2017 at 10:23:15PM -0700, Emilien Macchi wrote:
> On Sun, Sep 3, 2017 at 7:40 PM, Emilien Macchi  wrote:
> > Here are the priorities and blockers so we can release final Pike.
> > If you have any question, or think I missed something, or have any
> > comment, please reply to this thread.
> > Also, your help is critical for TripleO team to release the final
> > version of Pike.
> > I would like to set a target on Friday 8th (before the PTG) but the
> > hard limit is actually in ~10 days, so we will see how it goes by
> > Thursday 7th and take the decision there.
> >
> > ## Upgrades from Ocata to Pike
> >
> > I'm using this patch to test upgrades: 
> > https://review.openstack.org/#/c/461000/
> > At the time I'm writing this,
> > https://bugs.launchpad.net/tripleo/+bug/1714412 is still open and
> > remains the major blocker for upgrades.
> > Folks familiar with upgrades process should help Mathieu Bultel to
> > finish the upgrade documentation as soon as possible:
> > https://review.openstack.org/#/c/496223/
> 
> A last update before I'm afk:
> 
> https://review.openstack.org/#/c/493391/ merged and
> gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv passed.
> Which means we have the upgrade workflow in place again.
> 
> Martin, I've been testing https://review.openstack.org/#/c/500364/ with
> https://review.openstack.org/#/c/461000/ and it fails on all scenarios
> because it tries to deploy containers when deploying Ocata.
> See 
> http://logs.openstack.org/00/461000/54/check/gate-tripleo-ci-centos-7-scenario002-multinode-oooq-container-upgrades-nv/f3a1eac/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz#_2017-09-04_02_46_41
> 
> I'm going a recheck on https://review.openstack.org/#/c/500145 to see
> how things are on stable/pike.
> 
> So this is where we are:
> 
> - containers-multinode-upgrades-nv pass with pingtest but timeouts
> very often (the job is very long indeed)
> - I'm trying to enable tempest on this job, just to see how it works:
> https://review.openstack.org/#/c/500440/
> - container scenarios upgrade failed with
> https://review.openstack.org/#/c/461000/ but not sure without
> - not sure about how it works on stable/pike
> 
> Note: oice upgrade job pass on stable/pike, we'll want to make them
> voting like we did for Ocata.
> 
> I'll let Steve Baker post an update after our chat on IRC today.
> 
> > ## Last standing blueprint:
> > https://blueprints.launchpad.net/tripleo/+spec/websocket-logging
> > https://review.openstack.org/#/c/469608/ is the last patch but right
> > now has negative review and doesn't pass CI.
> > Hopefully we can make some progress this week.
> >
> >
> > We'll probably have more backports to do after final pike, but that's
> > fine, please use launchpad bugs when you do backports though.
> > We're almost there! Thanks all for the help,
> > --
> > Emilien Macchi
> 
Something to keep in mind too: we are 5 days out from starting to test zuulv3
in production[1], so expect some outages of the CI system as well.

I don't want to rush your release process, but I would strongly consider tagging
pike no later than Thursday, Sept. 7th, giving a day before the PTG maintenance.
Monday morning we'll be throwing the switch for real, and all week at the PTG
we'll be providing office hours to help with zuulv3 migration issues.

That said, we have been testing our release jobs and are confident they will
work, but if you can tag before zuulv3, it might be less stressful.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-August/120499.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Gate is broken - Do not approve any patch until further notice

2017-08-30 Thread Paul Belanger
On Wed, Aug 30, 2017 at 11:31:14AM +0200, Bogdan Dobrelya wrote:
> On 30.08.2017 6:54, Emilien Macchi wrote:
> > On Tue, Aug 29, 2017 at 4:17 PM, Emilien Macchi <emil...@redhat.com> wrote:
> >> We are currently dealing with 4 issues and until they are fix, please
> >> do not approve any patch. We want to keep the gate clear to merge the
> >> fixes for the 4 problems first.
> >>
> >> 1) devstack-gate broke us because we use it as a library (bad)
> >> https://bugs.launchpad.net/tripleo/+bug/1713868
> >>
> >> 2) https://review.openstack.org/#/c/474578/ broke us and we're
> >> reverting it https://bugs.launchpad.net/tripleo/+bug/1713832
> >>
> >> 3) We shouldn't build images on multinode jobs
> >> https://bugs.launchpad.net/tripleo/+bug/1713167
> >>
> >> 4) We should use pip instead of git for delorean
> >> https://bugs.launchpad.net/tripleo/+bug/1708832
> >>
> >>
> >> Until further notice from Alex or myself, please do not approve any patch.
> > 
> > The 4 problems have been mitigated.
> > You can now proceed to normal review.
> > 
> > Please do not recheck a patch without an elastic-recheck comment, we
> > need to track all issues related to CI from now.
> > Paul Belanger has been doing extremely useful work to help us, now
> > let's use elastic-recheck more and stop blind rechecks.
> > All known issues are in http://status.openstack.org/elastic-recheck/
> > If one is missing, you're welcome to contribute by sending a patch to
> > elastic-recheck. Example with https://review.openstack.org/#/c/498954/
> 
> That's a great example! Let me follow up on that and share my beginner's
> experience as well.
> 
> Let's help with improving elastic-recheck queries to identify those
> unknown or new failures, this is really important. This also trains
> domain knowledge for particular areas, either openstack or *-infra, or
> tripleo specific.
> 
> As beginners, we could start with watching for failing tripleo-ci
> periodic [0],[1] (available as RSS feeds) and gate jobs without e-r
> comments, also from that page [2].
> 
> Then fetching the logs locally with tools like getthelogs [3], or
> looking into the logs.openstack.org directly, if advanced beginners wish so.
> 
> Finally, identifying discovered (just do some grep, like I do with my
> tool [4]) errorish patterns and helping with root cause analysis. And,
> ideally, submitting new e-r queries (see also [5]) and corresponding lp
> bugs. And absolutely ideally, help with addressing those as well. This
> might be hard, though, as we may not be experts in some of the areas. Some
> of the error messages would literally mean nothing to us. But as a best
> effort, we could invite the right persons to look into that, or at least
> ask folks on #tripleo or #openstack-infra.
> 
> [0]
> http://status.openstack.org/openstack-health/#/g/project/openstack-infra~2Ftripleo-ci
> [1]
> http://status.openstack.org/openstack-health/#/g/project/openstack~2Ftripleo-quickstart
> [2] http://status.openstack.org/elastic-recheck/data/others.html
> [3] https://review.openstack.org/#/c/492178/
> [4] https://github.com/bogdando/fuel-log-parse/blob/master/fuel-log-parse.sh
> [5]
> https://docs.openstack.org/infra/elastic-recheck/readme.html#running-queries-locally
> 
> > 
> > I've restored all patches that were killed from the gate and did
> > recheck already, hopefully we can get some merges and finish this
> > release.
> > 
> > Thanks Paul and all Infra for their consistent help!
> > 
> 
Indeed, this looks much better this morning! Thanks to everybody for jumping on
the fixes.

Regarding bug 1713832 (Object PUT failed for zaqar_subscription)[1], for which
the offending change was reverted last night: that is a great example to
showcase elastic-recheck. If you look back at the logstash queries, you can see
the signs pointing to an issue, but it unfortunately wasn't picked up until
yesterday.

The info above from Bogdan is great. The general idea is: if a job fails in the
check pipeline and elastic-recheck doesn't leave a comment, it is likely a new
failure. Moving forward, we need to keep blind rechecks to a minimum, as each
time we do one, we have the potential for breaking the gate down the road.

This is why you see tripleo pushing upwards of 16hr+ jobs on status.o.o/zuul:
when there is a job failure, we have to rerun all patches again.

Keep up the good work, and I look forward to talking more about this at the PTG.

[1] http://status.openstack.org/elastic-recheck/#1713832
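
For anyone who hasn't written one before, an elastic-recheck query is just a
small YAML file in the queries/ directory of the elastic-recheck repo, named
after the launchpad bug. A rough sketch of what one for a bug like this might
look like (the message strings here are illustrative; tune the real query
against logstash first):

  # queries/1713832.yaml
  query: >
    message:"Object PUT failed" AND
    message:"zaqar_subscription" AND
    tags:"console"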

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] CI design session at the PTG

2017-08-28 Thread Paul Belanger
On Mon, Aug 28, 2017 at 09:42:45AM -0400, David Moreau Simard wrote:
> Hi,
> 
> (cc whom I would at least like to attend)
> 
> The PTG would be a great opportunity to talk about CI design/layout
> and how we see things moving forward in TripleO with Zuul v3, upstream
> and in review.rdoproject.org.
> 
> Can we have a formal session on this scheduled somewhere ?
> 
Wednesday onwards is likely best for me; otherwise, I can find time during
Mon-Tues if that is better.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-21 Thread Paul Belanger
On Mon, Aug 21, 2017 at 10:43:07AM +1200, Steve Baker wrote:
> On Thu, Aug 17, 2017 at 4:13 PM, Steve Baker  wrote:
> 
> >
> >
> > On Thu, Aug 17, 2017 at 10:47 AM, Emilien Macchi 
> > wrote:
> >
> >>
> >> > Problem #3: from Ocata to Pike: all container images are
> >> > uploaded/specified, even for services not deployed
> >> > https://bugs.launchpad.net/tripleo/+bug/1710992
> >> > The CI jobs are timeouting during the upgrade process because
> >> > downloading + uploading _all_ containers in local cache takes more
> >> > than 20 minutes.
> >> > So this is where we are now, upgrade jobs timeout on that. Steve Baker
> >> > is currently looking at it but we'll probably offer some help.
> >>
> >> Steve is still working on it: https://review.openstack.org/#/c/448328/
> >> Steve, if you need any help (reviewing or coding) - please let us
> >> know, as we consider this thing important to have and probably good to
> >> have in Pike.
> >>
> >
> > I have a couple of changes up now, one to capture the relationship between
> > images and services[1], and another to add an argument to the prepare
> > command to filter the image list based on which services are containerised
> > [2]. Once these land, all the calls to prepare in CI can be modified to
> > also specify these heat environment files, and this will reduce uploads to
> > only the images required.
> >
> > [1] https://review.openstack.org/#/c/448328/
> > [2] https://review.openstack.org/#/c/494367/
> >
> >
> Just updating progress on this, with infra caching from docker.io I'm
> seeing transfer times of 16 minutes (an improvement on 20 minutes ->
> $timeout).
> 
> Only transferring the required images [3] reduces this to 8 minutes.
> 
> [3] https://review.openstack.org/#/c/494767/

I'd still like to have the docker daemon running with debug:True, just for peace
of mind. In our testing of the cache, it was possible for docker to silently
fall past the reverse proxy cache and hit docker.io directly.  Regardless, this
is good news.

Because of the size of the containers we are talking about here, I think it is a
great idea to only download / cache the images that will actually be used for
the job.

Let me know if you see any issues.
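
For reference, turning on daemon debug is a small change on the node (a sketch;
it assumes docker is reading /etc/docker/daemon.json, which is the default
location on recent versions):

  # enable verbose daemon logging so silent fallbacks past the reverse-proxy
  # cache show up in the docker logs
  echo '{ "debug": true }' > /etc/docker/daemon.json
  systemctl restart docker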

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Paul Belanger
On Tue, Aug 15, 2017 at 11:06:20PM -0400, Wesley Hayutin wrote:
> On Tue, Aug 15, 2017 at 9:33 PM, Emilien Macchi  wrote:
> 
> > So far, we're having 3 critical issues, that we all need to address as
> > soon as we can.
> >
> > Problem #1: Upgrade jobs timeout from Newton to Ocata
> > https://bugs.launchpad.net/tripleo/+bug/1702955
> > Today I spent an hour to look at it and here's what I've found so far:
> > depending on which public cloud we're running the TripleO CI jobs, it
> > timeouts or not.
> > Here's an example of Heat resources that run in our CI:
> > https://www.diffchecker.com/VTXkNFuk
> > On the left, resources on a job that failed (running on internap) and
> > on the right (running on citycloud) it worked.
> > I've been through all upgrade steps and I haven't seen specific tasks
> > that take more time here or here, but some little changes that make
> > the big change at the end (so hard to debug).
> > Note: both jobs use AFS mirrors.
> > Help on that front would be very welcome.
> >
> >
> > Problem #2: from Ocata to Pike (containerized) missing container upload
> > step
> > https://bugs.launchpad.net/tripleo/+bug/1710938
> > Wes has a patch (thanks!) that is currently in the gate:
> > https://review.openstack.org/#/c/493972
> > Thanks to that work, we managed to find the problem #3.
> >
> >
> > Problem #3: from Ocata to Pike: all container images are
> > uploaded/specified, even for services not deployed
> > https://bugs.launchpad.net/tripleo/+bug/1710992
> > The CI jobs are timeouting during the upgrade process because
> > downloading + uploading _all_ containers in local cache takes more
> > than 20 minutes.
> > So this is where we are now, upgrade jobs timeout on that. Steve Baker
> > is currently looking at it but we'll probably offer some help.
> >
> >
> > Solutions:
> > - for stable/ocata: make upgrade jobs non-voting
> > - for pike: keep upgrade jobs non-voting and release without upgrade
> > testing
> >
> > Risks:
> > - for stable/ocata: it's highly possible to inject regression if jobs
> > aren't voting anymore.
> > - for pike: the quality of the release won't be good enough in term of
> > CI coverage comparing to Ocata.
> >
> > Mitigations:
> > - for stable/ocata: make jobs non-voting and enforce our
> > core-reviewers to pay double attention on what is landed. It should be
> > temporary until we manage to fix the CI jobs.
> > - for master: release RC1 without upgrade jobs and make progress
> > - Run TripleO upgrade scenarios as third party CI in RDO Cloud or
> > somewhere with resources and without timeout constraints.
> >
> > I would like some feedback on the proposal so we can move forward this
> > week,
> > Thanks.
> > --
> > Emilien Macchi
> >
> 
> I think due to some of the limitations with run times upstream we may need
> to rethink the workflow with upgrade tests upstream. It's not very clear to
> me what can be done with the multinode nodepool jobs outside of what is
> already being done.  I think we do have some choices with ovb jobs.   I'm
> not going to try and solve in this email but rethinking how we CI upgrades
> in the upstream infrastructure should be a focus for the Queens PTG.  We
> will need to focus on bringing run times significantly down as it's
> incredibly difficult to run two installs in 175 minutes across all the
> upstream cloud providers.
> 
Can you explain in more detail where the bottlenecks are within the 175 mins?
That's just shy of 3 hours, and seems like more than enough time.

Not that it can be solved now, but maybe it is time to look at these jobs the
other way around: how can we make them faster, and what optimizations need to
be made?

One example: we spend a lot of time rebuilding RPM packages with DLRN.  It is
possible that in zuulv3 we'll be able to make changes to the CI workflow so that
only one node builds a package, and all other jobs download the new packages
from that node.

Another thing we can look at is more parallel testing in place of serial. I
can't point to anything specific, but it would be helpful to sit down with
somebody to better understand all the back and forth between undercloud /
overcloud / multinodes / etc.

> Thanks Emilien for all the work you have done around upgrades!
> 
> 
> 
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [tripleo][puppet] Hold off on approving patches until further notice

2017-08-11 Thread Paul Belanger
On Thu, Aug 10, 2017 at 10:32:25PM -0600, Alex Schultz wrote:
> On Thu, Aug 10, 2017 at 11:12 AM, Paul Belanger <pabelan...@redhat.com> wrote:
> > On Thu, Aug 10, 2017 at 07:03:32AM -0600, Alex Schultz wrote:
> >> FYI,
> >>
> >> The gates are hosed for a variety of reasons[0][1] and we can't get
> >> critical patches merged. Please hold off on rechecking or approving
> >> anything new until further notice.   We're hoping to get some of the
> >> fixes for this merged today.  I will send a note when it's OK to merge
> >> again.
> >>
> >> [0] https://bugs.launchpad.net/tripleo/+bug/1709428
> >> [1] https://bugs.launchpad.net/tripleo/+bug/1709327
> >>
> > So far, these are the 3 patches we need to land today:
> >
> >   Exclude networking-bagpipe from dlrn
> > - https://review.openstack.org/491878
> >
> >   Disable existing repositories in tripleo-ci
> > - https://review.openstack.org/492289
> >
> >   Stop trying to build networking-bagpipe with DLRN
> > - https://review.openstack.org/492339
> >
> > These 3 fixes will take care of the large amount of gate resets tripleo is
> > currently seeing. Like Alex says, please try not to approve / recheck 
> > anything
> > until we land these.
> >
> 
> Ok so we've managed to land patches to improve the reliability.
> 
> https://review.openstack.org/492339 - merged
> https://review.openstack.org/491878 - still pending but we managed to
> get the package fixed so this one is not as critical anymore
> https://review.openstack.org/491522 - merged
> https://review.openstack.org/492289 - merged
> 
> We found that the undercloud-container's job is still trying to pull
> from buildlogs.centos.org, and I've proposed a fix
> https://review.openstack.org/#/c/492786/
> 
> I've restored (and approved) previously approved patches that have a
> high/critical bug or a FFE approved blueprint associated.
> 
> It should be noted that the following patches for tripleo do not have
> a bug or bp reference so they should be updated prior to being
> re-approved:
> https://review.openstack.org/#/c/400407/
> https://review.openstack.org/#/c/489083/
> https://review.openstack.org/#/c/475457/
> 
> For tripleo patches, please refer to Emilien's email[0] about the RC
> schedule with includes these rules about what patches should be
> merged.  Please be careful on rechecks and check failures. Do not
> blindly recheck.  We have noticed some issues with citycloud nodes, so
> if you spot problems with specific clouds please let us know so we can
> track these and work with infra on it.
> 
> Thanks,
> -Alex
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-August/120806.html
> 
Thanks,

For today I'm going to keep an eye on elastic-recheck and see what is still
failing in the gate.  I agree that for the most part we seem to be looking good
on infrastructure issues, but I think the container jobs are still failing too
much for my liking.

Let's see what happens today.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Hold off on approving patches until further notice

2017-08-10 Thread Paul Belanger
On Thu, Aug 10, 2017 at 07:03:32AM -0600, Alex Schultz wrote:
> FYI,
> 
> The gates are hosed for a variety of reasons[0][1] and we can't get
> critical patches merged. Please hold off on rechecking or approving
> anything new until further notice.   We're hoping to get some of the
> fixes for this merged today.  I will send a note when it's OK to merge
> again.
> 
> [0] https://bugs.launchpad.net/tripleo/+bug/1709428
> [1] https://bugs.launchpad.net/tripleo/+bug/1709327
> 
So far, these are the 3 patches we need to land today:

  Exclude networking-bagpipe from dlrn
- https://review.openstack.org/491878

  Disable existing repositories in tripleo-ci
- https://review.openstack.org/492289

  Stop trying to build networking-bagpipe with DLRN
- https://review.openstack.org/492339

These 3 fixes will take care of the large amount of gate resets tripleo is
currently seeing. Like Alex says, please try not to approve / recheck anything
until we land these.

Thanks,
PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][infra] Help needed! high gate failure rate

2017-08-10 Thread Paul Belanger
On Thu, Aug 10, 2017 at 07:22:42PM +0530, Rabi Mishra wrote:
> On Thu, Aug 10, 2017 at 4:34 PM, Rabi Mishra  wrote:
> 
> > On Thu, Aug 10, 2017 at 2:51 PM, Ian Wienand  wrote:
> >
> >> On 08/10/2017 06:18 PM, Rico Lin wrote:
> >> > We're facing a high failure rate in Heat's gates [1], four of our gate
> >> > suffering with fail rate from 6 to near 20% in 14 days. which makes
> >> most of
> >> > our patch stuck with the gate.
> >>
> >> There have been a confluence of things causing some problems recently.
> >> The loss of OSIC has distributed more load over everything else, and
> >> we have seen an increase in job timeouts and intermittent networking
> >> issues (especially if you're downloading large things from remote
> >> sites).  There have also been some issues with the mirror in rax-ord
> >> [1]
> >>
> >> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial(19.67%)
> >> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-
> >> ubuntu-xenia(9.09%)
> >> > gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial(8.47%)
> >> > gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial(6.00%)
> >>
> >> > We still try to find out what's the cause but (IMO,) seems it might be
> >> some
> >> > thing wrong with our infra. We need some help from infra team, to know
> >> if
> >> > any clue on this failure rate?
> >>
> >> The reality is you're just going to have to triage this and be a *lot*
> >> more specific with issues.
> >
> >
> > One of the issues we see recently is that many jobs are killed midway
> > through the tests as the job times out (120 mins).  It seems jobs are many
> > times scheduled to very slow nodes, where setting up devstack takes more
> > than 80 mins[1].
> >
> > [1] http://logs.openstack.org/49/492149/2/check/gate-heat-dsvm-
> > functional-orig-mysql-lbaasv2-ubuntu-xenial/03b05dd/console.
> > html#_2017-08-10_05_55_49_035693
> >
> > We download an image from a fedora mirror and it seems to take more than
> 1hr.
> 
> http://logs.openstack.org/41/484741/7/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial/a797010/logs/devstacklog.txt.gz#_2017-08-10_04_13_14_400
> 
> Probably an issue with the specific mirror or some infra network bandwidth
> issue. I've submitted a patch to change the mirror to see if that helps.
> 
Today we mirror both fedora-26[1] and fedora-25 (to be removed shortly). So if
you want to consider bumping your image for testing, you can fetch it from our
AFS mirrors.

You can source /etc/ci/mirror_info.sh to get information about things we mirror.

[1] 
http://mirror.regionone.infracloud-vanilla.openstack.org/fedora/releases/26/CloudImages/x86_64/images/
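
For example, from inside a job something like this should work (the image
filename is illustrative, and mirror_info.sh also exports per-region variables
that are better than hard-coding a specific mirror host):

  source /etc/ci/mirror_info.sh
  # see which mirror endpoints are defined for this region
  env | grep -i mirror
  # fetch the Fedora 26 cloud image from the in-region mirror noted in [1]
  wget http://mirror.regionone.infracloud-vanilla.openstack.org/fedora/releases/26/CloudImages/x86_64/images/Fedora-Cloud-Base-26-1.5.x86_64.qcow2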
> 
> > I find opening an etherpad and going
> >> through the failures one-by-one helpful (e.g. I keep [2] for centos
> >> jobs I'm interested in).
> >>
> >> Looking at the top of the console.html log you'll have the host and
> >> provider/region stamped in there.  If it's timeouts or network issues,
> >> reporting to infra the time, provider and region of failing jobs will
> >> help.  If it's network issues similar will help.  Finding patterns is
> >> the first step to understanding what needs fixing.
> >>
> >> If it's due to issues with remote transfers, we can look at either
> >> adding specific things to mirrors (containers, images, packages are
> >> all things we've added recently) or adding a caching reverse-proxy for
> >> them ([3],[4] some examples).
> >>
> >> Questions in #openstack-infra will usually get a helpful response too
> >>
> >> Good luck :)
> >>
> >> -i
> >>
> >> [1] https://bugs.launchpad.net/openstack-gate/+bug/1708707/
> >> [2] https://etherpad.openstack.org/p/centos7-dsvm-triage
> >> [3] https://review.openstack.org/491800
> >> [4] https://review.openstack.org/491466
> >>
> >> 
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
> >> e
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Regards,
> > Rabi Misra
> >
> >
> 
> 
> -- 
> Regards,
> Rabi Mishra

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][qa] Enabling elastic-recheck for tripleo job

2017-08-08 Thread Paul Belanger
On Mon, Aug 07, 2017 at 03:28:13PM -0400, Paul Belanger wrote:
> On Mon, Aug 07, 2017 at 10:34:59AM -0700, Emilien Macchi wrote:
> > On Mon, Aug 7, 2017 at 9:18 AM, Paul Belanger <pabelan...@redhat.com> wrote:
> > > I've just pushed up a change to puppet-elastic_recheck[1] to update the 
> > > jobs
> > > regex for elastic-recheck to start tracking tripleo.  What this means, is 
> > > now
> > > the elasticRecheck bot will start leaving comments on reviews once it 
> > > matches a
> > > failure in knows about.
> > 
> > Really cool, thanks.
> > Note we have an etherpad to track old queries in logstash:
> > https://etherpad.openstack.org/p/tripleo-ci-logstash-queries
> > I know it's not the best way to handle it and we should use more
> > elastic-recheck, but we can also re-use some queries if needed and
> > remove this etherpad, with this move to elastic-recheck.
> 
> Yes, once we have the current elastic-recheck bugs commenting on tripleo
> reviews, we should migrated any existing logstash queries into it.
> 
Just a heads up, this is now live. You'll start to see comments from the
'Elastic Recheck' bot on failed jobs. Please take a moment to look into the
failures that elastic-recheck is reporting, and if the failure is an
'unrecognized error', please dive a little deeper before typing 'recheck'.

The idea now is to start classifying failures, in an attempt to find patterns.
Chances are, if you are hitting an 'unrecognized error', somebody else is
hitting it too.  Once issues have been classified, we can triage them and see
what is needed to fix them.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][qa] Enabling elastic-recheck for tripleo job

2017-08-07 Thread Paul Belanger
On Mon, Aug 07, 2017 at 10:34:59AM -0700, Emilien Macchi wrote:
> On Mon, Aug 7, 2017 at 9:18 AM, Paul Belanger <pabelan...@redhat.com> wrote:
> > I've just pushed up a change to puppet-elastic_recheck[1] to update the jobs
> > regex for elastic-recheck to start tracking tripleo.  What this means, is 
> > now
> > the elasticRecheck bot will start leaving comments on reviews once it 
> > matches a
> > failure in knows about.
> 
> Really cool, thanks.
> Note we have an etherpad to track old queries in logstash:
> https://etherpad.openstack.org/p/tripleo-ci-logstash-queries
> I know it's not the best way to handle it and we should use more
> elastic-recheck, but we can also re-use some queries if needed and
> remove this etherpad, with this move to elastic-recheck.

Yes, once we have the current elastic-recheck bugs commenting on tripleo
reviews, we should migrate any existing logstash queries into it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][qa] Enabling elastic-recheck for tripleo job

2017-08-07 Thread Paul Belanger
Greetings,

I've just pushed up a change to puppet-elastic_recheck[1] to update the jobs
regex for elastic-recheck to start tracking tripleo.  What this means is that
the elasticRecheck bot will now start leaving comments on reviews once it
matches a failure it knows about.

The goal here is to better inform tripleo developers of the reasons for job
failures in an effort to find patterns / make fixes. From a developer's POV, the
only difference you'll see is the elasticRecheck bot leaving a comment on your
patch when it finds a failure, if that does happen. Please take a moment to look
at the launchpad bug / logstash query.

At the moment we are tracking:
  Bug 1708704 - yum client: [Errno 256] No more mirrors to try[2]

We have set up AFS mirrors to better help with yum failures, so I'd like to make
sure all jobs are properly set up to use them.

[1] https://review.openstack.org/491535
[2] http://status.openstack.org/elastic-recheck/index.html#1708704

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][packaging][deb] retiring Packaging Deb project

2017-08-03 Thread Paul Belanger
On Thu, Aug 03, 2017 at 10:09:59AM -0400, Allison Randal wrote:
> Hi all,
> 
> The Debian OpenStack packagers are gathered at the annual Debian
> developer event, and discussing the future of OpenStack packaging for
> Debian. There's general agreement that we'd like to go ahead and package
> Pike, but also agreement that we'd like to move back to the Debian
> infrastructure for doing packaging work. There are a variety of reasons,
> including that:
> 
> - Using the familiar Debian workflows and infrastructure is a better way
> to attract experienced Debian developers to the work, and
> 
> - The way we set up the Debian packaging workflow on OpenStack Infra was
> a temporary compromise, and not really a good fit for OpenStack Infra
> (or for Debian).
> 
> As a result of the change, we'd like to retire the Packaging Deb
> project in OpenStack, and there will be no candidate running for Queens
> PTL for the project. There will be some cleanup work to do, around
> deb-* repos and other aspects of the Debian packaging workflow we set up
> with OpenStack Infra, following the standard retirement procedures:
> 
> https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
> 
> The OpenStack installation tutorial for Debian will remain in OpenStack,
> and we'll update it for Pike.
> 
> If you have any concerns about the retirement, please let us know
> (especially other distros that use .deb format packages), we'll wait a
> bit for feedback before moving ahead with the cleanup.
> 
++ I already went ahead a few weeks ago and stopped the upstream-repo sync that
was set up when packaging first started, which will make retiring projects easier.

Feel free to ping in #openstack-infra when ready, because given the number of
projects that will be retired, the incoming patch will be a magnet for merge conflicts.

> Thanks,
> Allison
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-21 Thread Paul Belanger
On Thu, Jul 20, 2017 at 10:06:00PM -0400, James Slagle wrote:
> On Thu, Jul 20, 2017 at 9:52 PM, James Slagle <james.sla...@gmail.com> wrote:
> > On Thu, Jul 20, 2017 at 9:04 PM, Paul Belanger <pabelan...@redhat.com> 
> > wrote:
> >> On Thu, Jul 20, 2017 at 06:21:22PM -0400, James Slagle wrote:
> >>> Following up on the previous thread:
> >>> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
> >>>
> >>> I wanted to share some work I did around the prototype I mentioned
> >>> there. I spent a couple days exploring this idea. I came up with a
> >>> Python script that when run against an in progress Heat stack, will
> >>> pull all the server and deployment metadata out of Heat and generate
> >>> ansible playbooks/tasks from the deployments.
> >>>
> >>> Here's the code:
> >>> https://github.com/slagle/pump
> >>>
> >>> And an example of what gets generated:
> >>> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
> >>>
> >>> If you're interested in any more detail, let me know.
> >>>
> >>> It signals the stack to completion with a dummy "ok" signal so that
> >>> the stack will complete. You can then use ansible-playbook to apply
> >>> the actual deloyments (in the expected order, respecting the steps
> >>> across all roles, and in parallel across all the roles).
> >>>
> >>> Effectively, this treats Heat as nothing but a yaml cruncher. When
> >>> using it with deployed-server, Heat doesn't actually change anything
> >>> on an overcloud node, you're only using it to generate ansible.
> >>>
> >>> Honestly, I think I will prefer the longer term approach of using
> >>> stack outputs. Although, I am not sure of the end goal of that work
> >>> and if it is the same as this prototype.
> >>>
> >> Sorry if this hasn't been asked before but why don't you removed all of 
> >> your
> >> ansible-playbook logic out of heat and write them directly as native 
> >> playbooks /
> >> roles? Then instead of having a tool that reads heat to then generate the
> >> playbooks / roles, you update heat just to directly call the playbooks? Any
> >> dynamic information about be stored in the inventory or using the 
> >> --extra-vars
> >> on the CLI?
> >
> > We must maintain backwards compatibility with our existing Heat based
> > interfaces (cli, api, templates). While that could probably be done
> > with the approach you mention, it feels like it would be much more
> > difficult to do so in that you'd need to effectively add back on the
> > compatibility layer once the new pristine native ansible
> > playbooks/roles were written. And it seems like it would be quite a
> > lot of heat template work to translate existing interfaces to call
> > into the new playbooks.
> >
> > Even then, any new playbooks written from scratch would have to be
> > flexible enough to accommodate the old interfaces. On the surface, it
> > feels like you may end up sacrificing a lot of your goals in your
> > playbooks so you can maintain backwards compatibility anyways.
> >
> > The existing interface must be the first class citizen. We can't break
> > those contracts, so we need ways to quickly iterate towards ansible.
> > Writing all new native playbooks sounds like just writing a new
> > OpenStack installer to me, and then making Heat call that so that it's
> > backwards compatible.
> >
> > The focus on the interface flips that around so that you use existing
> > systems and iterate them towards the end goal. Just my POV.
> >
> > FYI, there are other ongoing solutions as well such as existing
> > ansible tasks directly in the templates today. These are much easier
> > to reason about when it comes to generating the roles and playbooks,
> > because it is direct Ansible syntax in the templates, so it's easier
> > to see the origin of tasks and make changes.
> 
> I also wanted to mention that the Ansible tasks in the templates today
> could be included with Heat's get_file function. In which case, as a
> template developer you basically are writing a native Ansible tasks
> file that could be included in an Ansible role.
> 
Right, that's what I was trying to say. I believe you have some ansible logic
within your heat templates today, dynamically generated as I understand it. I
would think first moving that logic outside of heat into stand-alone

Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-20 Thread Paul Belanger
On Thu, Jul 20, 2017 at 06:21:22PM -0400, James Slagle wrote:
> Following up on the previous thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
> 
> I wanted to share some work I did around the prototype I mentioned
> there. I spent a couple days exploring this idea. I came up with a
> Python script that when run against an in progress Heat stack, will
> pull all the server and deployment metadata out of Heat and generate
> ansible playbooks/tasks from the deployments.
> 
> Here's the code:
> https://github.com/slagle/pump
> 
> And an example of what gets generated:
> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
> 
> If you're interested in any more detail, let me know.
> 
> It signals the stack to completion with a dummy "ok" signal so that
> the stack will complete. You can then use ansible-playbook to apply
> the actual deloyments (in the expected order, respecting the steps
> across all roles, and in parallel across all the roles).
> 
> Effectively, this treats Heat as nothing but a yaml cruncher. When
> using it with deployed-server, Heat doesn't actually change anything
> on an overcloud node, you're only using it to generate ansible.
> 
> Honestly, I think I will prefer the longer term approach of using
> stack outputs. Although, I am not sure of the end goal of that work
> and if it is the same as this prototype.
> 
Sorry if this hasn't been asked before, but why don't you move all of your
ansible-playbook logic out of heat and write it directly as native playbooks /
roles? Then instead of having a tool that reads heat to generate the
playbooks / roles, you update heat just to directly call the playbooks. Any
dynamic information could be stored in the inventory or passed using
--extra-vars on the CLI.
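
Roughly what I'm picturing (note the playbook, inventory and variable names
here are made up for illustration, nothing that exists in tripleo today):

  # static, hand-written playbook driven entirely by inventory + extra-vars
  ansible-playbook -i inventory/overcloud deploy-overcloud.yml \
      --extra-vars "deploy_step=3" \
      --extra-vars "@overcloud-parameters.yml"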

Basically, we do this for zuulv2.5 today in openstack-infra (dynamically
generating playbooks at run-time) and it is a large amount of work to debug
issues.  In our case, we did it to quickly migrate from jenkins to ansible
(since zuulv3 completely fixes this with native playbooks) and I wouldn't
recommend operators do it.  Not fun.

> And some of what I've done may be useful with that approach as well:
> https://review.openstack.org/#/c/485303/
> 
> However, I found this prototype interesting and worth exploring for a
> couple of reasons:
> 
> Regardless of the approach we take, I wanted to explore what an end
> result might look like. Personally, this illustrates what I kind of
> had in mind for an "end goal".
> 
> I also wanted to see if this was at all feasible. I envisioned some
> hurdles, such as deployments depending on output values of previous
> deployments, but we actually only do that in 1 place in
> tripleo-heat-templates, and I was able to workaround that. In the end
> I used it to deploy an all in one overcloud equivalent to our
> multinode CI job, so I believe it's feasible.
> 
> It meets most of the requirements we're looking to get out of ansible.
> You can (re)apply just a single deployment, or a given deployment
> across all ResourceGroup members, or all deployments for a given
> server(s), it's easy to see what failed and for what servers, etc.
> 
> FInally, It's something we could deliver  without much (any?) change
> in tripleo-heat-templates. Although I'm not trying to say it'd be a
> small amount of work to even do that, as this is a very rough
> prototype.
> 
> -- 
> -- James Slagle
> --
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Paul Belanger
On Mon, Jul 17, 2017 at 07:22:34PM +1000, Ian Wienand wrote:
> Hi,
> 
> The removal of the mysql.qcow2 image [1] had a flow-on effect noticed
> first by Paul in [2] that the tools/image_list.sh "sanity" check was
> not updated, leading to DIB builds failing in a most unhelpful way as
> it tries to cache the images for CI builds.
> 
> So while [2] fixes the problem; one complication here is that the
> caching script [3] loops through the open devstack branches and tries
> to collect the images to cache.
> 
> Now it seems we hadn't closed the liberty or mitaka branches.  This
> causes a problem, because the old branches refer to the old image, but
> we can't actually commit a fix to change them because the branch is
> broken (such as [4]).
> 
> I have taken the liberty of EOL-ing stable/liberty and stable/mitaka
> for devstack.  I get the feeling it was just forgotten at the time.
> Comments in [4] support this theory.  I have also taken the liberty of
> approving backports of the fix to newton and ocata branches [5],[6].
> 
> A few 3rd-party CI people using dib have noticed this failure.  As the
> trio of [4],[5],[6] move through, your builds should start working
> again.
> 
> Thanks,
> 
> -i
> 
> [1] https://review.openstack.org/482600
> [2] https://review.openstack.org/484001
> [3] 
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
> [4] https://review.openstack.org/482604
> [5] https://review.openstack.org/484299
> [6] https://review.openstack.org/484298
> 
Thanks, I had patches up last week but was hitting random devstack job failures.
I'll watch this today and make sure our image builds are back online.

Also, thanks for cleaning up the old branches.  I was planning on doing that
this week.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] diskimage builder works for trusty but not for xenial

2017-06-21 Thread Paul Belanger
On Wed, Jun 21, 2017 at 08:44:45AM +0200, Ignazio Cassano wrote:
> Hi all,
> today I am creating openstack images with disk-image-create.
> It works fine for centos and ubuntu trusty.
> With xenial I got the following error:
> 
> * Connection #0 to host cloud-images.ubuntu.com left intact
> Downloaded and cached
> http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz,
> having forced upstream caches to revalidate
> xenial-server-cloudimg-amd64-root.tar.gz: FAILED
> sha256sum: WARNING: 1 computed checksum did NOT match
> 
> Are there any problems on http://cloud-images.ubuntu.com ?
> Regards
> Ignazio

I would suggest using the ubuntu-minimal and centos-minimal elements for creating
images. This is what we do in openstack-infra today, which means you'll build the
image directly from dpkg / rpm sources and not download existing images from
upstream.

This solves the issue you are hitting when upstream images have been
removed / deleted / or fail to download.
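
A rough sketch of what that looks like for xenial (untested here, adjust the
element list and DIB_RELEASE for your environment):

  # build a xenial image from dpkg sources rather than the upstream cloud image
  export DIB_RELEASE=xenial
  disk-image-create -o ubuntu-xenial-minimal ubuntu-minimal vm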

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Paul Belanger
On Thu, Jun 15, 2017 at 03:56:30PM +0000, Jeremy Stanley wrote:
> On 2017-06-15 11:48:42 -0400 (-0400), Davanum Srinivas wrote:
> > We took that tradeoff before and have suffered as a result. I'd say
> > it's the cost of getting a project under governance.
> 
> Well, sort of. We took the slightly less work (for the Infra team)
> approach of renaming repos within a single Gerrit instead of trying
> to relocate projects between Gerrit deployments.
> 
> Ultimately, if the solution is to run a separate copy of our
> infrastructure to host non-OpenStack projects, it might as well have
> its own volunteer team running it. We're down to a skeleton crew as
> it is, so I don't relish the idea of us being responsible for
> maintaining an extra copy of all these services.
> -- 
> Jeremy Stanley

++ completely agree!


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Paul Belanger
On Tue, Jun 13, 2017 at 02:11:53PM -0500, Ben Nemec wrote:
> 
> 
> On 06/13/2017 12:28 PM, Paul Belanger wrote:
> > On Tue, Jun 13, 2017 at 11:12:08AM -0500, Ben Nemec wrote:
> > > 
> > > 
> > > On 06/12/2017 06:19 PM, Ronelle Landy wrote:
> > > > Greetings,
> > > > 
> > > > TripleO OVB check gates are managed by upstream Zuul and executed on
> > > > nodes provided by test cloud RH1. RDO Cloud is now available as a test
> > > > cloud to be used when running CI jobs. To utilize to RDO Cloud, we could
> > > > either:
> > > > 
> > > > - continue to run from upstream Zuul (and spin up nodes to deploy
> > > > the overcloud from RDO Cloud)
> > > > - switch the TripleO OVB check gates to run as third party and
> > > > manage these jobs from the Zuul instance used by Software Factory
> > > > 
> > > > The openstack infra team advocates moving to third party.
> > > > The CI team is meeting with Frederic Lepied, Alan Pevec, and other
> > > > members of the Software Factory/RDO project infra tream to discuss how
> > > > this move could be managed.
> > > > 
> > > > Note: multinode jobs are not impacted - and will continue to run from
> > > > upstream Zuul on nodes provided by nodepool.
> > > > 
> > > > Since a move to third party could have significant impact, we are
> > > > posting this out to gather feedback and/or concerns that TripleO
> > > > developers may have.
> > > 
> > > I'm +1 on moving to third-party...eventually.  I don't think it should be
> > > done at the same time as we move to a new cloud, which is a major change 
> > > in
> > > and of itself.  I suppose we could do the third-party transition in 
> > > parallel
> > > with the existing rh1 jobs, but as one of the people who will probably 
> > > have
> > > to debug problems in RDO cloud I'd rather keep the number of variables to 
> > > a
> > > minimum.  Once we're reasonably confident that RDO cloud is stable and
> > > handling our workload well we can transition to third-party and deal with
> > > the problems that will no doubt cause on their own.
> > > 
> > This was a goal for tripleo-test-cloud-rh2, to move that to thirdparty CI,
> > ensure jobs work, then migrated. As you can see, we never actually did that.
> > 
> > My preference would be to make the move the thirdparty now, with
> > tripleo-test-cloud-rh1.  We now have all the pieces in place for RDO 
> > project to
> > support this and in parallel set up RDO cloud to run jobs from RDO.
> > 
> > If RDO stablility is a concern, the move to thirdparty first seems to make 
> > the
> > most sense. This avoid the need to bring RDO cloud online, ensure it works, 
> > then
> > move it again, and re-insure it works.
> > 
> > Again, the move can be made seemless by turning down some of the capacity in
> > nodepool.o.o and increase capacity in nodepool.rdoproject.org. And I am 
> > happy to
> > help work with RDO on making this happen.
> 
> I'm good with doing the third-party migration first too.  I'm only looking
> to avoid two concurrent major changes.
> 
Great, I am happy to hear that :D

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Paul Belanger
On Tue, Jun 13, 2017 at 11:12:08AM -0500, Ben Nemec wrote:
> 
> 
> On 06/12/2017 06:19 PM, Ronelle Landy wrote:
> > Greetings,
> > 
> > TripleO OVB check gates are managed by upstream Zuul and executed on
> > nodes provided by test cloud RH1. RDO Cloud is now available as a test
> > cloud to be used when running CI jobs. To utilize to RDO Cloud, we could
> > either:
> > 
> > - continue to run from upstream Zuul (and spin up nodes to deploy
> > the overcloud from RDO Cloud)
> > - switch the TripleO OVB check gates to run as third party and
> > manage these jobs from the Zuul instance used by Software Factory
> > 
> > The openstack infra team advocates moving to third party.
> > The CI team is meeting with Frederic Lepied, Alan Pevec, and other
> > members of the Software Factory/RDO project infra tream to discuss how
> > this move could be managed.
> > 
> > Note: multinode jobs are not impacted - and will continue to run from
> > upstream Zuul on nodes provided by nodepool.
> > 
> > Since a move to third party could have significant impact, we are
> > posting this out to gather feedback and/or concerns that TripleO
> > developers may have.
> 
> I'm +1 on moving to third-party...eventually.  I don't think it should be
> done at the same time as we move to a new cloud, which is a major change in
> and of itself.  I suppose we could do the third-party transition in parallel
> with the existing rh1 jobs, but as one of the people who will probably have
> to debug problems in RDO cloud I'd rather keep the number of variables to a
> minimum.  Once we're reasonably confident that RDO cloud is stable and
> handling our workload well we can transition to third-party and deal with
> the problems that will no doubt cause on their own.
> 
This was a goal for tripleo-test-cloud-rh2: move it to third-party CI,
ensure jobs worked, then migrate. As you can see, we never actually did that.

My preference would be to make the move to third party now, with
tripleo-test-cloud-rh1.  We now have all the pieces in place for the RDO project
to support this and, in parallel, set up RDO Cloud to run jobs from RDO.

If RDO Cloud stability is a concern, moving to third party first seems to make
the most sense. This avoids the need to bring RDO Cloud online, ensure it works,
then move everything again and re-verify it works.

Again, the move can be made seamless by turning down some of the capacity in
nodepool.o.o and increasing capacity in nodepool.rdoproject.org. And I am happy
to help work with RDO on making this happen.

PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][deb-packaging] Stop using track-upstream for deb projects

2017-06-13 Thread Paul Belanger
Greetings,

I'd like to propose we stop using track-upstream in project-config for the
deb-packaging projects. It seems there is no active development on these
projects currently, and by using track-upstream we are wasting both CI
resources and HDD space keeping these projects in sync with their upstream
openstack projects.

Long term, we don't actually want to support this behavior. I propose we stop
doing it today, and if somebody steps up to continue the effort of packaging
our releases, we can then move forward without the need for track-upstream.

Effectively, track-upstream duplicates the size of a project's git repo. For
example, if deb-nova is set up to track upstream nova, we copy all commits and
import them into deb-nova. This puts unneeded pressure on our infrastructure;
moving forward, the git overlay option for gbp is likely the solution we should
use.

-- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] tripleo periodic jobs moving to RDO's software factory and RDO Cloud

2017-06-12 Thread Paul Belanger
On Mon, Jun 12, 2017 at 05:01:26PM -0400, Wesley Hayutin wrote:
> Greetings,
> 
> I wanted to send out a summary email regarding some work that is still
> developing and being planned to give interested parties time to comment and
> prepare for change.
> 
> Project:
> Move tripleo periodic promotion jobs
> 
> Goal:
> Increase the cadence of tripleo-ci periodic promotion jobs in a way
> that does not impact upstream OpenStack zuul queues and infrastructure.
> 
> Next Steps:
> The dependencies in RDO's instance of software factory are now complete
> and we should be able to create a new a net new zuul queue in RDO infra for
> tripleo-periodic jobs.  These jobs will have to run both multinode nodepool
> and ovb style jobs and utilize RDO-Cloud as the host cloud provider.  The
> TripleO CI team is looking into moving the TripleO periodic jobs running
> upstream to run from RDO's software factory instance. This move will allow
> the CI team more flexibility in managing the periodic jobs and resources to
> run the jobs more frequently.
> 
> TLDR:
> There is no set date as to when the periodic jobs will move. The move
> will depend on tenant resource allocation and how easily the periodic jobs
> can be modified.  This email is to inform the group that changes are being
> planned to the tripleo periodic workflow and allow time for comment and
> preparation.
> 
> Completed Background Work:
> After long discussion with Paul Belanger about increasing the cadence
> of the promotion jobs [1]. Paul explained infa's position and if he doesn't
> -1/-2 a new pipeline that has the same priority as check jobs someone else
> will. To summarize the point, the new pipeline would compete and slow down
> non-tripleo projects in the gate even when the hardware resources are our
> own.
> To avoid slowing down non-tripleo projects Paul has volunteered to help
> setup the infrastructure in rdoproject to manage the queue ( zuul etc). We
> would still use rh-openstack-1 / rdocloud for ovb, and could also trigger
> multinode nodepool jobs.
> There is one hitch though, currently, rdo-project does not have all the
> pieces of the puzzle in place to move off of openstack zuul and onto
> rdoproject zuul. Paul mentioned that nodepool-builder [2] is a hard
> requirement to be setup in rdoproject before we can proceed here. He
> mentioned working with the software factory guys to get this setup and
> running.
> At this time, I think this issue is blocked until further discussion.
> [1] https://review.openstack.org/#/c/443964/
> [2]
> https://github.com/openstack-infra/nodepool/blob/master/nodepool/builder.py
> 
> Thanks

The first step is landing the nodepool elements in nodepool.rdoproject.org and
building a centos-7 DIB.  I believe number80 is currently working on this and
hopefully it can land in the next day or so.  Once images have been built, it
won't be much work to then run a job. RDO already has 3rdparty jobs running;
we'd do the same with tripleo-ci.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-09 Thread Paul Belanger
On Fri, Jun 09, 2017 at 04:52:25PM +0000, Flavio Percoco wrote:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) 
> wrote:
> 
> > How does confd run inside the container?  Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real service?  That seems like a very large architectural change.  But
> > maybe I’m misunderstanding it.
> >
> >
> Copying part of my reply to Doug's email:
> 
> 1. Run confd + openstack service in side the container. My concern in this
> case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
> 
> 2. Run confd `-onetime` and then run the openstack service.
> 
> 
> I either case, we could run confd as part of the entrypoint and have it run
> in
> background for the case #1 or just run it sequentially for case #2.
> 
Both approaches are valid; it all depends on your use case.  I suspect in the
case of openstack you'll be running 2 daemons in your containers. Otherwise,
with -onetime you'll need to launch new containers on each config change.
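
For case #2 the entrypoint could be as simple as something like this (a sketch
only; the confd flags are from memory and the etcd endpoint / service names are
placeholders):

  #!/bin/bash
  # render the config files from etcd once, then hand off to the real service
  confd -onetime -backend etcd -node "${ETCD_ENDPOINT}"
  exec nova-api --config-file /etc/nova/nova.conf

For case #1 you would instead start confd without -onetime in the background
and then launch the service, which is the "2 daemons in one container"
situation I mention above.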

> 
> > Thx,
> > britt
> >
> > On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:
> >
> > Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
> >
> > > Unless I'm missing something, to use confd with an OpenStack
> > deployment on
> > > k8s, we'll have to do something like this:
> > >
> > > * Deploy confd in every node where we may want to run a pod
> > (basically
> > > wvery node)
> >
> > Oh, no, no. That's not how it works at all.
> >
> > confd runs *inside* the containers. It's input files and command line
> > arguments tell it how to watch for the settings to be used just for
> > that
> > one container instance. It does all of its work (reading templates,
> > watching settings, HUPing services, etc.) from inside the container.
> >
> > The only inputs confd needs from outside of the container are the
> > connection information to get to etcd. Everything else can be put
> > in the system package for the application.
> >
> > Doug
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to set default security group rules?

2017-06-09 Thread Paul Belanger
On Fri, Jun 09, 2017 at 05:20:03AM -0700, Kevin Benton wrote:
> This was an intentional decision. One of the goals of OpenStack is to
> provide consistency across different clouds and configurable defaults for
> new tenants default rules hurts consistency.
> 
> If I write a script to boot up a workload on one OpenStack cloud that
> allows everything by default and it doesn't work on another cloud that
> doesn't allow everything by default, that leads to a pretty bad user
> experience. I would now need logic to scan all of the existing security
> group rules and do a diff between what I want and what is there and have
> logic to resolve the difference.
> 
FWIW: While that argument is valid, the reality is every cloud provider runs a
different version of the operating system you boot your workload on, so it is
pretty much a given that every cloud is different out of the box.

What we do now in openstack-infra is place the expected cloud configuration[2] in
ansible-role-cloud-launcher[1] and run ansible against the cloud. This has been
one of the ways we ensure consistency between clouds. Bonus points: we build and
upload images daily to ensure our workloads are also the same.

[1] http://git.openstack.org/cgit/openstack/ansible-role-cloud-launcher
[2] 
http://git.openstack.org/cgit/openstack-infra/system-config/tree/playbooks/clouds_layouts.yml

> It's a backwards-incompatible change so we'll probably be stuck with the
> current behavior.
> 
> 
> On Fri, Jun 9, 2017 at 2:27 AM, Ahmed Mostafa 
> wrote:
> 
> > I believe that there are no features impelemented in neutron that allows
> > changing the rules for the default security group.
> >
> > I am also interested in seeing such a feature implemented.
> >
> > I see only this blueprint :
> >
> > https://blueprints.launchpad.net/neutron/+spec/default-
> > rules-for-default-security-group
> >
> > But no work has been done on it so far.
> >
> >
> >
> > On Fri, Jun 9, 2017 at 9:16 AM, Paul Schlacter 
> > wrote:
> >
> >> I see the neutron code, which added the default rules to write very
> >> rigid, only for ipv4 ipv6 plus two rules. What if I want to customize the
> >> default rules?
> >>
> >> 
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
> >> e
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Paul Belanger
On Thu, Jun 08, 2017 at 02:11:18PM +0000, Jeremy Stanley wrote:
> On 2017-06-08 09:22:48 -0400 (-0400), Paul Belanger wrote:
> [...]
> > We also have another issue where we loose access to gerrit and our apache
> > process pins CPU to 100%, these might also be low hanging friut for people
> > wanting to get involved.
> [...]
> 
> Wasn't that fixed with a new lower-bound on paramiko so it now
> closes SSH API sessions correctly?
>
We did submit a patch, but I believe we are still leaking some connections to
gerrit. We likely need to audit the code to ensure we applied the patch to all
connection attempts.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Paul Belanger
On Wed, May 17, 2017 at 06:55:57PM +0000, Jeremy Stanley wrote:
> On 2017-05-17 16:16:30 +0200 (+0200), Thierry Carrez wrote:
> [...]
> > we need help with completing the migration to infra. If interested
> > you can reach out to fungi (Infra team PTL) nor mrmartin (who
> > currently helps with the transition work).
> [...]
> 
> The main blocker for us right now is addressed by an Infra spec
> (Stackalytics is an unofficial project and it's unclear to us where
> design discussions for it happen):
> 
> https://review.openstack.org/434951
> 
> In particular, getting the current Stackalytics developers on-board
> with things like this is where we've been failing to make progress
> mainly (I think) because we don't have a clear venue for discussions
> and they're stretched pretty thin with other work. If we can get
> some additional core reviewers for that project (and maybe even talk
> about turning it into an official team or joining them up as a
> deliverable for an existing team) that might help.
> -- 
> Jeremy Stanley

Agree with Jeremy.

We have been running a shadow instance of stackalytics[1] for 2 years now. So,
we could make the flip to our community infrastructure today if we wanted to.
The persistent cache would be a helpful thing to avoid potential re-imports of
all the data.

We also have another issue where we lose access to gerrit and our apache
process pins the CPU at 100%; these might also be low-hanging fruit for people
wanting to get involved.

[1] http://stackalytics.openstack.org

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Missing OpenSSL headers (fatal error: openssl/opensslv.h: No such file or directory)

2017-06-06 Thread Paul Belanger
Greetings,

If your project is seeing errors like the one in the subject line, this likely
means you do not have the libssl development headers included in your
bindep.txt file.

To fix this you can add the following to your local bindep.txt file:

  libssl-dev [platform:dpkg]
  openssl-devel [platform:rpm]

This will ensure centos-7 and ubuntu-xenial (and trusty) nodes properly install
them for you.

This is a result of openstack-infra removing npm / nodejs as a build time
dependency for our DIB images.

If you have questions, please join us in #openstack-infra

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Removal of git-review from DIB images

2017-05-29 Thread Paul Belanger
Greetings,

Does your project use git-review on our zuul worker nodes? If so, this post is
for you.  For a long time we have installed git-review onto the worker nodes
that are used by jobs.  This also installs additional python dependencies:

  Successfully installed argparse-1.4.0 certifi-2017.4.17 chardet-3.0.3 
git-review-1.25.0 idna-2.5 requests-2.16.5 urllib3-1.21.1

Moving forward we'd like to stop adding this dependency to our base images and
have jobs themselves manage installing git-review if needed.  Using codesearch.o.o
we already see some projects doing this today. If you do need it, you would add
it to your requirements.txt / test-requirements.txt files.
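
For example, one line in your test-requirements.txt should be enough (the
version floor below is just a suggestion, matching what we install today):

  git-review>=1.25.0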

As such, I have proposed the removal of git-review[1] from our images.

-PB

[1] https://review.openstack.org/#/c/468872/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Update re: debian packages?

2017-05-25 Thread Paul Belanger
On Thu, May 25, 2017 at 08:30:46AM -0500, Andrew Bogott wrote:
> I've just noticed that, as predicted, Mirantis has switched off their
> Openstack package repos (e.g. http://liberty-jessie.pkgs.mirantis.com/).
> Are there any updates about a replacement repo, or a newly-reconstituted
> packaging team?
> 
> I remember being convinced back in February that people were on the
> case, but I was unable to attend the Boston summit, and the lists have been
> dead silent on the subject since the original thread announcing the end of
> Mirantis support.
> 
I believe zigo was trying to migrate that work to openstack-infra, where it
currently lives in the debian-openstack[1] repo. However, there hasn't been much
activity this year.

I know wendar has expressed interest in the past also.

[1] http://mirror.dfw.rax.openstack.org/debian-openstack/dists/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

2017-05-22 Thread Paul Belanger
On Mon, May 22, 2017 at 10:53:32AM -0400, Doug Hellmann wrote:
> Excerpts from jenkins's message of 2017-05-22 10:49:09 +0000:
> > Build failed.
> > 
> > - puppet-nova-tarball 
> > http://logs.openstack.org/89/89c58e7958b448364cb0290c1879116f49749a68/release/puppet-nova-tarball/fe9daf7/
> >  : FAILURE in 55s
> > - puppet-nova-tarball-signing puppet-nova-tarball-signing : SKIPPED
> > - puppet-nova-announce-release puppet-nova-announce-release : SKIPPED
> > 
> 
> The most recent puppet-nova release (newton 9.5.1) failed because
> puppet isn't installed on the tarball building node. I know that
> node configurations just changed recently to drop puppet, but I
> don't know what needs to be done to fix the issue for this particular
> job. It does seem to be running bindep, so maybe we just need to
> include puppet there?  I could use some advice & help.
> 
We need to sync 461970[1] across all modules. I've been meaning to do this, but
it will result in some gerrit spam. If a puppet core already has this set up,
maybe they could do it.

I was going to bring the puppet proposal patch[2] back online to avoid manually
doing this.

[1] https://review.openstack.org/#/c/461970/
[2] https://review.openstack.org/#/c/211744/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Paul Belanger
On Thu, May 18, 2017 at 09:34:44AM -0700, Michał Jastrzębski wrote:
> >> Issue with that is
> >>
> >> 1. Apache served is harder to use because we want to follow docker API
> >> and we'd have to reimplement it
> >
> > No, the idea is that apache is transparent; for now we have been using the
> > proxypass module in apache.  I think what Doug was mentioning was to have a
> > primary docker registry, which is RW for a publisher, then proxy it to
> > regional mirrors as RO.
> 
> That would also work, yes
> 
> >> 2. Running registry is single command
> >>
> > I've seen this mentioned a few times before; just because it is one command
> > or 'simple' to do doesn't mean we want to or can.  Currently our
> > infrastructure is complicated, for various reasons.  I am sure we'll get to
> > the right technical solution for making jobs happy. Remember our
> > infrastructure spans 6 clouds and 15 regions and we want to make sure it is
> > done correctly.
> 
> And that's why we discussed dockerhub. Remember that I was willing to
> implement proper registry, but we decided to go with dockerhub simply
> because it puts less stress on both infra and infra team. And I
> totally agree with that statement. Dockerhub publisher + apache
> caching was our working idea.
> 
Yes, we still want to implement a docker registry for openstack, maybe for
testing, maybe for production. From the technical side, we have a good handle
now on how that would look.  However, even if we decide to publish to dockerhub,
I don't think we'd push directly. Maybe push to our own docker registry and then
mirror to dockerhub. That is something we can figure out a little later.

> >> 3. If we host in in infra, in case someone actually uses it (there
> >> will be people like that), that will eat up lot of network traffic
> >> potentially
> >
> > We can monitor this and adjust as needed.
> >
> >> 4. With local caching of images (working already) in nodepools we
> >> loose complexity of mirroring registries across nodepools
> >>
> >> So bottom line, having dockerhub/quay.io is simply easier.
> >>
> > See comment above.
> >
> >> > Doug
> >> >
> >> > __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe: 
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-18 Thread Paul Belanger
On Tue, May 16, 2017 at 02:11:18PM +0000, Sam Yaple wrote:
> I would like to bring up a subject that hasn't really been discussed in
> this thread yet, forgive me if I missed an email mentioning this.
> 
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security
> turned off are perfect for testing internally to infra.
> 
Zuulv3 should help a little with this; it will allow a DAG of jobs,
which means the top-level job could be an image build and all jobs below it can
then consume said image.  The piece we are still working on is artifact handling,
but long term it should be possible for the testing jobs to set up the dynamic
infrastructure they need themselves.

> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.
> 
We disable gpg for Ubuntu packages for a specific reason: mostly because
our APT repos are not official mirrors of upstream. We regenerate indexes every
2 hours so as not to break long-running jobs.  We have talked in the past about
fixing this, but it requires openstack-infra to move to a new mirroring tool for APT.

> Thanks,
> SamYaple
> 
> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
> wrote:
> 
> > On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> > > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> > >> Flavio Percoco wrote:
> > >> > From a release perspective, as Doug mentioned, we've avoided
> > releasing projects
> > >> > in any kind of built form. This was also one of the concerns I raised
> > when
> > >> > working on the proposal to support other programming languages. The
> > problem of
> > >> > releasing built images goes beyond the infrastructure requirements.
> > It's the
> > >> > message and the guarantees implied with the built product itself that
> > are the
> > >> > concern here. And I tend to agree with Doug that this might be a
> > problem for us
> > >> > as a community. Unfortunately, putting your name, Michal, as contact
> > point is
> > >> > not enough. Kolla is not the only project producing container images
> > and we need
> > >> > to be consistent in the way we release these images.
> > >> >
> > >> > Nothing prevents people for building their own images and uploading
> > them to
> > >> > dockerhub. Having this as part of the OpenStack's pipeline is a
> > problem.
> > >>
> > >> I totally subscribe to the concerns around publishing binaries (under
> > >> any form), and the expectations in terms of security maintenance that it
> > >> would set on the publisher. At the same time, we need to have images
> > >> available, for convenience and testing. So what is the best way to
> > >> achieve that without setting strong security maintenance expectations
> > >> for the OpenStack community ? We have several options:
> > >>
> > >> 1/ Have third-parties publish images
> > >> It is the current situation. The issue is that the Kolla team (and
> > >> likely others) would rather automate the process and use OpenStack
> > >> infrastructure for it.
> > >>
> > >> 2/ Have third-parties publish images, but through OpenStack infra
> > >> This would allow to automate the process, but it would be a bit weird to
> > >> use common infra resources to publish in a private repo.
> > >>
> > >> 3/ Publish transient (per-commit or daily) images
> > >> A "daily build" (especially if you replace it every day) would set
> > >> relatively-limited expectations in terms of maintenance. It would end up
> > >> picking up security updates in upstream layers, even if not immediately.
> > >>
> > >> 4/ Publish images and own them
> > >> Staff release / VMT / stable team in a way that lets us properly own
> > >> those images and publish them officially.
> > >>
> > >> Personally I think (4) is not realistic. I think we could make (3) work,
> > >> and I prefer it to (2). If all else fails, we should keep (1).
> > >>
> > >
> > > At the forum we talked about putting test images on a "private"
> > > repository hosted on openstack.org somewhere. I think that's option
> > > 3 from your list?
> > >
> > > Paul may be able to shed more light on the details of the technology
> > > (maybe it's just an Apache-served repo, rather than a full blown
> > > instance of Docker's service, for example).
> >
> > Issue with that is
> >
> > 1. Apache served is harder to use because we want to follow docker API
> > 
