[OpenStack-Infra] Retire pabelanger as infra-root

2020-05-25 Thread Paul Belanger
Hello all,

The time has come for me to step down from my infra-root duties. Sadly, my
day-to-day job is no longer directly related to openstack-infra, and I am
finding it difficult to be involved in an 'infra-root' capacity to help the
project.

Thanks to everyone on the infra team; you are all awesome humans! I
hope some time in the future I'll be able to get more involved with the
opendev.org effort, but sadly today isn't that day.

https://review.opendev.org/668192/

Paul


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Touching base; Airship CI cluster

2020-01-22 Thread Paul Belanger
On Wed, Jan 22, 2020 at 02:12:11PM -0800, Clark Boylan wrote:
> On Tue, Jan 7, 2020, at 1:45 AM, Roman Gorshunov wrote:
> > Hello Clark,
> > 
> > Thank you for your reply. Meeting time is OK for me. I have forwarded
> > invitation to Pete Birley and Matt McEuen, they would hopefully join
> > us.
> 
> I wanted to make sure we got a summary of this meeting sent out. Notes were 
> kept at https://etherpad.openstack.org/p/Wqxwce1UDq
> 
> Airship needs to test their All in One deployment tool. This tool deploys 
> their entire bootstrapping system into a single VM which is then used to 
> deploy other software which may be clustered. Because the production usage of 
> this tool is via a VM it is actually important to be able to test the 
> contents of that VM in CI and that is what creates the memory requirements 
> for Airship CI.
> 
> We explained the benefits of being able to run Airship CI on less special 
> hardware. Airship gains redundancy as more than one provider can supply these 
> resources, reliability should be improved as nested virt has been known to be 
> flaky, and better familiarity within the community with global resources 
> means that debugging and working together is easier.
> 
> However, we recognize that Airship has specific constraints today that 
> require more specialized testing. The proposed plan is to move forward with 
> adding a new cloud, but have it provide specialized and generic resources. 
> The intent is to address Airship's needs of today with the expectation that 
> they will work towards running on the generic resources. Having generic 
> resources ensures that the infra team has exposure to this new cloud outside 
> the context of Airship. This improves familiarity and debuggability of the 
> system. It is also more equitable as other donations are globally used. Also, 
> Nodepool doesn't actually allow us to prevent consumption of resources 
> exposed to the system; however, we would ask that specialized resources only 
> be used when necessary to test specific cases as with Airship. This is 
> similar to our existing high memory, multi numa node, nested virt enabled 
> test flavors.
> 
> For next steps we'll work to add the new cloud with the two sets of flavors, 
> and Airship will begin investigating what a modified test setup looks like to 
> run on our generic resources. We'll see where that takes us.
> 
> Let me know if this summary needs editing or updating.
> 
> Finally, we'll be meeting again Wednesday January 29, 2020 at 1600UTC to 
> followup on any questions now that things should be moving. I recently used 
> jitsi meet and it worked really well so want to give that a try for this. 
> Lets meet at https://meet.jit.si/AirshipCICloudFun. Fungi says you can click 
> the circled "i" icon at that url to get dial in info if necessary.
> 
> If for some reason jitsi doesn't work we'll fall back to the method used last 
> time: https://wiki.openstack.org/wiki/Infrastructure/Conferencing room 6001.
> 
Regarding the dedicated cloud, it might be an interesting discussion
point to talk with some of the TripleO folks from when the
tripleo-test-cloud-rh1 cloud was still a thing. As most infra people
know, this was a cloud dedicated to running TripleO-specific jobs.

There was an effort to make their jobs more generic, so they could run on any
cloud infrastructure, which resulted, IMO, in a large increase in testing (as
there was much more capacity). While it took a bit of effort, I believe
overall it was a net improvement for CI.

Paul



Re: [OpenStack-Infra] [zuul-jobs] configure-mirrors: deprecate mirroring configuration for easy_install

2019-11-25 Thread Paul Belanger
On Mon, Nov 25, 2019 at 04:02:13PM +1100, Ian Wienand wrote:
> Hello,
> 
> Today I force-merged [5] to avoid widespread gate breakage.  Because
> the change is in zuul-jobs, we have a policy of announcing
> deprecations.  I've written the following but not sent it to
> zuul-announce (per policy) yet, as I'm not 100% confident in the
> explanation.
> 
> I'd appreciate it if, once proof-read, someone could send it out
> (modified or otherwise).
> 
> Thanks,
> 
Greetings!

Rather than force-merging, and potentially breaking other Zuul installs, what
about a new feature flag that stays enabled by default but which the openstack
base jobs disable?  This would still allow older versions of setuptools to
work, I would guess?
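
A rough sketch of what I mean, in case it helps (the variable name is made up,
not an existing configure-mirrors option):

  # roles/configure-mirrors/defaults/main.yaml
  mirror_easy_install: true

  # openstack base job, opting out of the easy_install handling
  - job:
      name: base
      vars:
        mirror_easy_install: false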

That said, the Ansible Zuul is likely not affected, as we currently fork
configure-mirrors for our own purposes; I'll check now to confirm that we are
indeed not affected.

> -i
> 
> --
> 
> Hello,
> 
> The recent release of setuptools 42.0.0 has broken the method used by
> the configure-mirrors role to ensure easy_install (the older method of
> installing packages, before pip came into widespread use [1]) would only
> access the PyPI mirror.
> 
> The prior mirror setup code would set the "allow_hosts" whitelist to
> the mirror host exclusively in pydistutils.cfg.  This would avoid
> easy_install "leaking" access outside the specified mirror.
> 
> Change [2] in setuptools means that pip is now used to fetch packages.
> Since it does not implement the constraints of the "allow_hosts"
> setting, specifying this option has become an error condition.  This
> is reported as:
> 
>  the `allow-hosts` option is not supported 'when using pip to install 
> requirements
> 
> It has been pointed out [3] that this prior code would break any
> dependency_links [4] that might be specified for the package (as the
> external URLs will not match the whitelist).  Overall, there is no
> desire to work-around this behaviour as easy_install is considered
> deprecated for any current use.
> 
> In short, this means the only solution is to remove the now
> conflicting configuration from pydistutils.cfg.  Due to the urgency of
> this update, it has been merged with [5] before our usual 2-week
> deprecation notice.
> 
> The result of this is that older setuptools (perhaps in a virtualenv)
> with jobs still using easy_install may not correctly access the
> specified mirror.  Assuming jobs have access to PyPI they would still
> work, although without the benefits of a local mirror.  If such jobs
> are firewalled from upstream they may now fail.  We consider the
> chance of jobs using this legacy install method in this situation to
> be very low.
> 
> Please contact zuul-discuss [6] with any concerns.
> 
> We now return you to your regularly scheduled programming :)
> 
> [1] https://packaging.python.org/discussions/pip-vs-easy-install/
> [2] 
> https://github.com/pypa/setuptools/commit/d6948c636f5e657ac56911b71b7a459d326d8389
> [3] https://github.com/pypa/setuptools/issues/1916
> [4] https://python-packaging.readthedocs.io/en/latest/dependencies.html
> [5] https://review.opendev.org/695821
> [6] http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
> 
> 

Re: [OpenStack-Infra] Low Key Monday Evening Get Together

2019-04-24 Thread Paul Belanger
On Wed, Apr 24, 2019 at 12:31:05PM -0400, Clark Boylan wrote:
> On Wed, Apr 24, 2019, at 9:14 AM, Clark Boylan wrote:
> > Hello Infra!
> > 
> > Monday evening is looking like a good night to have a low key informal 
> > team gathering. The only official event on the calendar is the 
> > Marketplace Mixer which runs until 7pm. Weather permitting I'd like to 
> > head back to Lowry Beer Garden (where we've gone at past PTGs). It is a 
> > bit out of the way from the conference center so we will need to 
> > coordinate uber/lyft transport but that hasn't been a problem in the 
> > past.
> > 
> > Lets meet at 6:30pm (I can send specific location once onsite) and head 
> > over to Lowry. If the weather looks terrible I can call ahead and see 
> > if their indoor area is open and if they say it is "meh" we'll find 
> > something closer to the conference center. Also, they close at 9pm 
> > which should force us to get some sleep :)
> > 
> > Finally, Monday is a good day because it is gertty's birthday. Hope to 
> > see you then.
> > 
> 
> Also Tuesday night is the official Party event. Sign up at 
> https://www.eventbrite.com/e/the-denver-party-during-open-infrastructure-summit-tickets-58863817262
>  with password "denver". If you are like me and don't want to give out a 
> phone number just enter "555-555-" in that field.
> 
> These details apparently went out to attendees already but I didn't get said 
> email so posting it here just in case it is useful for others too.
>
Since I won't be in Denver next week, have a great time. The beer garden
is a great choice.

- Paul


Re: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

2018-11-27 Thread Paul Belanger
On Tue, Nov 27, 2018 at 06:53:16PM +, Whaley, Graham wrote:
> (back to an old thread... this has rippled near the top of my pile again)
> 
> > -Original Message-
> > From: Clark Boylan [mailto:cboy...@sapwetik.org]
> > Sent: Tuesday, October 23, 2018 6:03 PM
> > To: Whaley, Graham ; openstack-
> > in...@lists.openstack.org; thie...@openstack.org
> > Cc: Ernst, Eric ; fu...@yuggoth.org
> > Subject: Re: Adding index and views/dashboards for Kata to ELK stack
> [snip]
> > > I don't think the Zuul Ansible role will be applicable - the metrics run
> > > on bare metal machines running Jenkins, and export their JSON results
> > > via a filebeat socket. My theory was we'd then add the socket input to
> > > the logstash server to receive from that filebeat - as in my gist at
> > >
> > https://gist.github.com/grahamwhaley/aa730e6bbd6a8ceab82129042b186467
> > 
> > I don't think we would want to expose write access to the unauthenticated
> > logstash and elasticsearch system to external systems. The thing that makes 
> > this
> > secure today is we (community infrastructure team) control the existing 
> > writers.
> > The existing writers are available for your use (see below) should you 
> > decide to
> > use them.
> 
> My theory was we'd secure the connection at least using the logstash/beat SSL 
> connection, and only we/the infra group would have access to the keys:
> https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html
> 
> The machines themselves are only accessible by the CNCF CIL owners and 
> nominated Kata engineers with the keys.
>
> > 
> > >
> > > One crux here is that the metrics have to run on a machine with
> > > guaranteed performance (so not a shared/virtual cloud instance), and
> > > hence currently run under Jenkins and not on the OSF/Zuul CI infra.
> > 
> > Zuul (by way of Nodepool) can speak to arbitrary machines as long as they 
> > speak
> > an ansible connection protocol. In this case the default of ssh would 
> > probably
> > work when tied to nodepool's static instance driver. The community
> > infrastructure happens to only talk to cloud VMs today because that is what 
> > we
> > have been given access to, but should be able to talk to other resources if
> > people show up with them.
> 
> If we ignore the fact that all current Kata CI is running on Jenkins, and we 
> are not presently transitioning to Zuul afaik, then even if we did integrate 
> the bare metal CNCF CIL packet.net machines via ansible/SSH/nodepool/Zuul, 
> afaict you'd still be running the same CI tasks on the same machines and 
> injecting the Elastic data through the same SSL socket/tunnel into Elastic.

John Studarus gave a talk at the OpenStack summit about using zuul and
packet.net; during the talk he mentioned starting to work on a nodepool
driver for packet.net bare metal servers.  I believe the plan is to
upstream it, which would then allow for both the static and a packet.net
dynamic provider.
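
For the static side, pointing nodepool at an existing bare metal host looks
roughly like this (the hostname and label below are made up):

  labels:
    - name: kata-metrics-bare-metal

  providers:
    - name: cncf-packet-static
      driver: static
      pools:
        - name: main
          nodes:
            - name: metrics01.example.org
              labels:
                - kata-metrics-bare-metal
              username: zuul
              connection-type: ssh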

> I know you'd like to keep as much of the infra under your control, but the 
> only bit I think that would be different is the Jenkins Master. Given the 
> Jenkins job running the slave only executes master branch merges, which have 
> undergone peer review (which would be the same jobs that Zuul would run), 
> then I'm not sure there is any security difference here in reality between 
> having the Kata Jenkins master or Zuul drive the slaves.
> 
> > 
> > >
> > > Let me know you see any issues with that Jenkins/filebeat/socket/JSON 
> > > flow.
> > >
> > > I need to deploy a new machine to process master branch merges to
> > > generate the data (currently we have a machine that is processing PRs at
> > > submission, not merge, which is not the data we want to track long
> > > term). I'll let you know when I have that up and running. If we wanted
> > > to move on this earlier, then I could inject data to a test index from
> > > my local test setup - all it would need I believe is the valid keys for
> > > the filebeat->logstash connection.
> 
> Oh, I've deployed a Jenkins slave and job to test out the first stage of the 
> flow btw:
> http://jenkins.katacontainers.io/job/kata-metrics-runtime-ubuntu-16-04-master/
> 
> > >
> > > > Clark
> > > Thanks!
> > >   Graham (now on copy ;-)
> > 
> > Ideally we'd make use of the existing community infrastructure as much as
> > possible to make this sustainable and secure. We are happy to modify our
> > existing tooling as necessary to do this. Update the logstash 
> > configuration, add
> > Nodepool resources, have grafana talk to elasticsearch, and so on.
> 
> I think the only key decision is if we can use the packet.net slaves as 
> driven by the kata Jenkins master, or if we have to move the management of 
> those into Zuul.
> For expediency and consistency with the rest of the Kata CI, obviously I lean 
> heavily towards Jenkins.
> If we do have to go with Zuul, then I think we'll have to work out who has 
> access to and how they can modify the 

Re: [openstack-dev] [tripleo] using molecule to test ansible playbooks and roles

2018-11-11 Thread Paul Belanger
On Sun, Nov 11, 2018 at 11:29:43AM +, Sorin Sbarnea wrote:
> I recently came across molecule, a project originated at Cisco which recently 
> became an official Ansible project, at the same time as ansible-lint. Both 
> projects were transferred from their former locations to the Ansible github 
> organization -- I guess as a confirmation that they are now officially 
> supported by the core team. I used ansible-lint for years and it did save me 
> a lot of time; molecule is still new to me.
> 
> A few weeks back I started to play with molecule, as at least on paper it was 
> supposed to resolve the problem of testing roles on multiple platforms and 
> usage scenarios, alongside the work done for enabling tripleo-quickstart to 
> support fedora-28 (py3). I was trying to find a way to test these changes 
> faster and locally --- and avoid increasing the load on CI before I get 
> confirmation that the code works locally.
> 
> The results of my testing that started about two weeks ago are very positive 
> and can be seen on:
> https://review.openstack.org/#/c/613672/ 
> 
> There you can find a job named openstack-tox-molecule which runs in 
> ~15 minutes, but this is only because docker caching does not work as well on 
> CI as it does locally; locally it re-runs in ~2-3 minutes.
> 
> I would like to hear your thoughts on this and if you also have some time to 
> checkout that change and run it yourself it would be wonderful.
> 
> Once you download the change you only have to run "tox -e molecule", (or 
> "make" which also clones sister extras repo if needed)
> 
> Feel free to send questions to the change itself, on #oooq or by email.
> 
I've been doing this for a while with ansible-role-nodepool[1]; same
idea: you run tox -emolecule and the role will use the docker backend to
validate. I also run it in the gate (with the docker backend), however this
is only to validate that end users will not be broken locally if they
run tox -emolecule. There is a downside with docker, no systemd
integration, which is fine for me as I have other tests that are able to
provide coverage.

With zuul, there really isn't a need to run nested docker for linters and
smoke testing, as it mostly creates unneeded overhead.  However, if you
do want to standardize on molecule, I recommend you don't use the docker
backend but use the delegated driver and reuse the inventory provided by zuul.
Then you still use molecule but get the bonus of using the VMs provided
by zuul / nodepool.
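
Roughly, the molecule.yml for that looks something like this (untested sketch;
the platform name must match a host zuul already puts in the inventory):

  driver:
    name: delegated
    options:
      managed: false
  platforms:
    - name: primary
  provisioner:
    name: ansible
  verifier:
    name: testinfra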

- Paul

[1] http://git.openstack.org/cgit/openstack/ansible-role-nodepool

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Paul Belanger
On Mon, Oct 15, 2018 at 04:27:00PM -0500, Monty Taylor wrote:
> On 10/15/2018 05:49 AM, Stephen Finucane wrote:
> > On Wed, 2018-10-10 at 18:51 +, Jeremy Stanley wrote:
> > > On 2018-10-10 13:35:00 -0500 (-0500), Greg Hill wrote:
> > > [...]
> > > > We plan to still have a CI gatekeeper, probably Travis CI, to make sure 
> > > > PRs
> > > > past muster before being merged, so it's not like we're wanting to
> > > > circumvent good contribution practices by committing whatever to HEAD.
> > > 
> > > Travis CI has gained the ability to prevent you from merging changes
> > > which fail testing? Or do you mean something else when you refer to
> > > it as a "gatekeeper" here?
> > 
> > Yup but it's a GitHub feature rather than specifically a Travis CI
> > feature.
> > 
> >https://help.github.com/articles/about-required-status-checks/
> > 
> > Doesn't help the awful pull request workflow but that's neither here
> > nor there.
> 
> It's also not the same as gating.
> 
> The github feature is the equivalent of "Make sure the votes in check are
> green before letting someone click the merge button"
> 
> The zuul feature is "run the tests between the human decision to merge and
> actually merging with the code in the state it will actually be in when
> merged".
> 
> It sounds nitpicky, but the semantic distinction is important - and it
> catches things more frequently than you might imagine.
> 
> That said - Zuul supports github, and there are Zuuls run by not-openstack,
> so taking a project out of OpenStack's free infrastructure does not mean you
> have to also abandon Zuul. The OpenStack Infra team isn't going to run a
> zuul to gate patches on a GitHub project - but other people might be happy
> to let you use a Zuul so that you don't have to give up the Zuul features in
> place today. If you go down that road, I'd suggest pinging the
> softwarefactory-project.io folks or the openlab folks.
> 
As somebody who has recently moved from a gerrit workflow to a github
workflow using Zuul, keep in mind this is not a 1:1 feature map.  The
biggest difference, as people have said, is that code review on github.com is
terrible.  It was something added after the fact; I wish daily that I could
use gerrit again :)

Zuul does make things better, and I'm 100% with Monty here.  You want Zuul
to be the gate, not Travis CI.

- Paul



Re: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

2018-10-05 Thread Paul Belanger
On Fri, Oct 05, 2018 at 01:14:21PM -0700, Clark Boylan wrote:
> On Wed, Oct 3, 2018, at 3:16 AM, Whaley, Graham wrote:
> > Hi Infra team.
> > 
> > First, a brief overview for folks who have not been in this loop.
> > Kata Containers has a CI that runs some metrics, and spits out JSON 
> > results. We'd like to store those results in the OSF logstash ELK infra 
> > (http://logstash.openstack.org/#/dashboard/file/logstash.json), and set 
> > up some Kibana views and dashboards so we can monitor historical trends 
> > and project health (and longer term maybe some advanced trend regression 
> > triggers).
> > 
> > There is a relevant github Issue/thread showing a working PoC here:
> > https://github.com/kata-containers/ci/issues/60#issuecomment-426579084
> > 
> > I believe we are at the point where we should set up a trial index and 
> > keys (for the filebeat/logstash/elastic injection) etc. so we can start 
> > to tie the parts together.
> > What do I need to make that next step happen? Do I need to open a formal 
> > request ticket/Issue somewhere, or can we just thrash it out here?
> 
> Here and in code review is fine.
> 
> > 
> > This email is part aimed at ClarkB :-), but obviously may involve others 
> > etc. Input most welcome.
> 
> Currently all of our jobs submit jobs to the logstash processing system via 
> the ansible role here [0]. That is then fed through our logstash 
> configuration [1]. The first step in this is probably to update the logstash 
> configuration and update the existing Kata jobs in zuul to submit jobs to 
> index that information as appropriate?
> 
> As for Kibana views I'm not sure we ever sorted that out with the in browser 
> version of kibana we are running. I think we can embed dashboard information 
> in the kibana "source" or inject it into elasticsearch? This piece would take 
> some investigation as I don't know what needs to be done off the top of my 
> head. Note that our elasticsearch can't be written to via the public 
> interface to it as there are no authentication and authorization controls for 
> elasticsearch.
> 
> [0] 
> https://git.openstack.org/cgit/openstack-infra/project-config/tree/roles/submit-logstash-jobs/defaults/main.yaml
> [1] 
> https://git.openstack.org/cgit/openstack-infra/logstash-filters/tree/filters/openstack-filters.conf
> 
Yah, I think it would be great if we could use the JJB / grafyaml
workflow here, where we store the views / dashboards in YAML and then write a
job to push them into the application.  Then we don't need to deal with
public authentication.
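
As a sketch only (none of this tooling exists yet for Kibana, and the names are
made up), the idea is a reviewable YAML definition that a job then pushes into
the application:

  # kata-dashboards/metrics.yaml
  dashboard:
    title: 'Kata metrics trends'
    index-pattern: 'logstash-kata-*'
    panels:
      - title: 'qemu boot time (ms)'
        type: line
      - title: 'memory footprint (KB)'
        type: line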

-Paul


Re: [OpenStack-Infra] Continued discussion of Winterscale naming

2018-09-12 Thread Paul Belanger
On Wed, Sep 12, 2018 at 02:26:09PM -0700, Clark Boylan wrote:
> On Wed, Aug 29, 2018, at 11:06 AM, Clark Boylan wrote:
> > On Tue, Aug 21, 2018, at 11:33 AM, Jeremy Stanley wrote:
> > > The Infra team has been brainstorming and collecting feedback in
> > > https://etherpad.openstack.org/p/infra-services-naming as to what
> > > actual name/domain the Winterscale effort will use. If you've not
> > > been following along, the earlier discussions can be found in the
> > > following mailing list threads:
> > > 
> > > http://lists.openstack.org/pipermail/openstack-infra/2018-May/005957.html
> > > http://lists.openstack.org/pipermail/openstack-infra/2018-July/006010.html
> > > 
> > > So far the "short list" at the bottom of the pad seems to contain
> > > only one entry: OpenDev.
> > > 
> > > The OpenStack Foundation has offered to let us take control of
> > > opendev.org for this purpose if we would like. They have it mainly
> > > as a defensive registration to protect the OpenDev Conference brand,
> > > but use a separate opendevconf.org domain for that at present. The
> > > opendev.org domain is just sitting parked on the same nameservers as
> > > openstack.org right now, not in use for anything. The brand itself
> > > has a positive connotation in the community so far, and the OpenDev
> > > Conferences share a lot of similar goals (bringing communities of
> > > people together to collaborate openly on design and development
> > > activities) so this even provides some useful synergy around the
> > > name and possible promotional tie-ins with the events if we like.
> > > 
> > > I know lots of us are eager to move forward on the rebranding, so I
> > > wanted to wake this discussion back up and try to see if we can
> > > drive it to a conclusion. As we continue to take on hosting for
> > > additional large projects, having somewhere for them to send the
> > > contributors and users in their community which has a distinct
> > > identity unclouded by OpenStack itself will help make our services
> > > much more welcoming. If you don't particularly like the OpenDev idea
> > > or have alternatives you think might achieve greater consensus
> > > within our team and present a better image to our extended
> > > community, present and future, please update the above mentioned pad
> > > or follow up to this mailing list thread. Thanks!
> > 
> > I am a fan of OpenDev. I think it gives a path forward that works for 
> > the immediate future and long term. https://review.opendev.org seems 
> > like a reasonable place to do code review for a project :)
> > 
> > I do think it would be good to continue collecting input particularly 
> > from those involved in the day to day infra activities. If we can reach 
> > rough consensus over the next week or so that would give us the 
> > opportunity to use time at the PTG to do a rough sketch of how we can 
> > start "migrating" to the new name.
> > 
> > Your feedback much appreciated.
> > 
> 
> It has been about 3 weeks and the feedback so far has largely been positive. 
> The one concern we have seen raised is that this may be confusing with the 
> OpenDev conference. Fungi makes a great argument that cross promoting between 
> the conference and the development tooling can be a net positive since many 
> of our goals overlap.
> 
> Finding that argument compelling myself, and not seeing any counterarguments 
> I think we should move forward with using the OpenDev name and opendev.org 
> domain for the team and services that we host.
> 
> Long story short let's go ahead and use this name and start making progress 
> on this effort. Next stop: etherpad.opendev.org.
> 
> Thank you,
> Clark
> 
+1 for etherpad.opendev.org

- Paul


[openstack-dev] non-candidacy for TC

2018-09-04 Thread Paul Belanger
Greetings,

After a year on the TC, I've decided not to run for another term. I'd
like to thank the other TC members for helping bring me up to speed over
the last year, and the community for originally voting for me.  There is
always work to do, and I'd like to use this email to encourage everybody
to strongly consider running for the TC if you are interested in the
future of OpenStack.

It is a great learning opportunity, with great humans to work with and a
great project! Please do consider running if you are at all interested.

Thanks again,
Paul



Re: [OpenStack-Infra] Retiring some repos?

2018-08-29 Thread Paul Belanger
On Wed, Aug 29, 2018 at 07:03:29AM +0200, Andreas Jaeger wrote:
> I dug into the remaining ones and will investigate whether we can retire them:
> 
> openstack-infra/zuul-packaging
> 
>   Paul, should we abandon this one as well?
> 
Yes

> openstack-infra/featuretracker (together with puppet-featuretracker)
> 
>   these work together with openstack/development-proposals from the
>   product working group, I'll reach out to them.
> 
> puppet-docker-registry:
>   See https://review.openstack.org/#/c/399221/ - there's already another
>   tool which we should use, never any commits merged to it.
> 
> pynotedb
>   I'll reach out to storyboard team.
> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 

Re: [OpenStack-Infra] Continued discussion of Winterscale naming

2018-08-21 Thread Paul Belanger
On Tue, Aug 21, 2018 at 06:33:06PM +, Jeremy Stanley wrote:
> The Infra team has been brainstorming and collecting feedback in
> https://etherpad.openstack.org/p/infra-services-naming as to what
> actual name/domain the Winterscale effort will use. If you've not
> been following along, the earlier discussions can be found in the
> following mailing list threads:
> 
> http://lists.openstack.org/pipermail/openstack-infra/2018-May/005957.html
> http://lists.openstack.org/pipermail/openstack-infra/2018-July/006010.html
> 
> So far the "short list" at the bottom of the pad seems to contain
> only one entry: OpenDev.
> 
> The OpenStack Foundation has offered to let us take control of
> opendev.org for this purpose if we would like. They have it mainly
> as a defensive registration to protect the OpenDev Conference brand,
> but use a separate opendevconf.org domain for that at present. The
> opendev.org domain is just sitting parked on the same nameservers as
> openstack.org right now, not in use for anything. The brand itself
> has a positive connotation in the community so far, and the OpenDev
> Conferences share a lot of similar goals (bringing communities of
> people together to collaborate openly on design and development
> activities) so this even provides some useful synergy around the
> name and possible promotional tie-ins with the events if we like.
> 
> I know lots of us are eager to move forward on the rebranding, so I
> wanted to wake this discussion back up and try to see if we can
> drive it to a conclusion. As we continue to take on hosting for
> additional large projects, having somewhere for them to send the
> contributors and users in their community which has a distinct
> identity unclouded by OpenStack itself will help make our services
> much more welcoming. If you don't particularly like the OpenDev idea
> or have alternatives you think might achieve greater consensus
> within our team and present a better image to our extended
> community, present and future, please update the above mentioned pad
> or follow up to this mailing list thread. Thanks!
> -- 
> Jeremy Stanley
>
+1 Exciting!

- Paul


Re: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-13 Thread Paul Belanger
On Mon, Aug 13, 2018 at 09:56:44AM -0400, Paul Belanger wrote:
> On Thu, Aug 02, 2018 at 08:01:46PM -0400, Paul Belanger wrote:
> > Greetings,
> > 
> > We've had fedora-28 nodes online for some time in openstack-infra, I'd like 
> > to
> > finish the migration process and remove fedora-27 images.
> > 
> > Please take a moment to review and approve the following patches[1]. We'll 
> > be
> > using the fedora-latest nodeset now, which makes it a little easier for
> > openstack-infra to migrate to newer versions of fedora.  Next time around, 
> > we'll
> > send out an email to the ML once fedora-29 is online to give projects some 
> > time
> > to test before we make the change.
> > 
> > Thanks
> > - Paul
> > 
> > [1] https://review.openstack.org/#/q/topic:fedora-latest
> > 
> Thanks for the approval of the patches above; today we are blocked by the
> following backport for barbican[2]. If we can land this today, we can proceed
> with the removal from nodepool.
> 
> Thanks
> - Paul
> 
> [2] https://review.openstack.org/590420/
> 
Thanks to the fast approvals today, we've been able to fully remove
fedora-27 from nodepool.  All jobs will now use fedora-latest, which is
currently fedora-28.

We'll send out an email once we are ready to bring fedora-29 online, and
promote it to fedora-latest.

Thanks
- Paul



Re: [openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-13 Thread Paul Belanger
On Thu, Aug 02, 2018 at 08:01:46PM -0400, Paul Belanger wrote:
> Greetings,
> 
> We've had fedora-28 nodes online for some time in openstack-infra, I'd like to
> finish the migration process and remove fedora-27 images.
> 
> Please take a moment to review and approve the following patches[1]. We'll be
> using the fedora-latest nodeset now, which makes it a little easier for
> openstack-infra to migrate to newer versions of fedora.  Next time around, 
> we'll
> send out an email to the ML once fedora-29 is online to give projects some 
> time
> to test before we make the change.
> 
> Thanks
> - Paul
> 
> [1] https://review.openstack.org/#/q/topic:fedora-latest
> 
Thanks for the approval of the patches above; today we are blocked by the
following backport for barbican[2]. If we can land this today, we can proceed
with the removal from nodepool.

Thanks
- Paul

[2] https://review.openstack.org/590420/



Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Paul Belanger
On Thu, Aug 09, 2018 at 08:00:13PM -0600, Wesley Hayutin wrote:
> On Thu, Aug 9, 2018 at 5:33 PM Alex Schultz  wrote:
> 
> > On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann 
> > wrote:
> > > Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
> > >> Ahoy folks,
> > >>
> > >> I think it's time we come up with some basic rules/patterns on where
> > >> code lands when it comes to OpenStack related Ansible roles and as we
> > >> convert/export things. There was a recent proposal to create an
> > >> ansible-role-tempest[0] that would take what we use in
> > >> tripleo-quickstart-extras[1] and separate it for re-usability by
> > >> others.   So it was asked if we could work with the openstack-ansible
> > >> team and leverage the existing openstack-ansible-os_tempest[2].  It
> > >> turns out we have a few more already existing roles laying around as
> > >> well[3][4].
> > >>
> > >> What I would like to propose is that we as a community come together
> > >> to agree on specific patterns so that we can leverage the same roles
> > >> for some of the core configuration/deployment functionality while
> > >> still allowing for specific project specific customization.  What I've
> > >> noticed between all the projects is that we have a few specific core
> > >> pieces of functionality that needs to be handled (or skipped as it may
> > >> be) for each service being deployed.
> > >>
> > >> 1) software installation
> > >> 2) configuration management
> > >> 3) service management
> > >> 4) misc service actions
> > >>
> > >> Depending on which flavor of the deployment you're using, the content
> > >> of each of these may be different.  Just about the only thing that is
> > >> shared between them all would be the configuration management part.
> > >
> > > Does that make the 4 things separate roles, then? Isn't the role
> > > usually the unit of sharing between playbooks?
> > >
> >
> > It can be, but it doesn't have to be.  The problem comes in with the
> > granularity at which you are defining the concept of the overall
> > action.  If you want a role to encompass all that is "nova", you could
> > have a single nova role that you invoke 5 different times to do the
> > different actions during the overall deployment. Or you could create a
> > role for nova-install, nova-config, nova-service, nova-cells, etc etc.
> > I think splitting them out into their own role is a bit too much in
> > terms of management.   In my particular openstack-ansible is already
> > creating a role to manage "nova".  So is there a way that I can
> > leverage part of their process within mine without having to duplicate
> > it.  You can pull in the task files themselves from a different so
> > technically I think you could define a ansible-role-tripleo-nova that
> > does some include_tasks: ../../os_nova/tasks/install.yaml but then
> > we'd have to duplicate the variables in our playbook rather than
> > invoking a role with some parameters.
> >
> > IMHO this structure is an issue with the general sharing concepts of
> > roles/tasks within ansible.  It's not really well defined and there's
> > not really a concept of inheritance so I can't really extend your
> > tasks with mine in more of a programming sense. I have to duplicate it
> > or do something like include a specific task file from another role.
> > Since I can't really extend a role in the traditional OO programing
> > sense, I would like to figure out how I can leverage only part of it.
> > This can be done by establishing ansible variables to trigger specific
> > actions or just actually including the raw tasks themselves.   Either
> > of these concepts needs some sort of contract to be established to the
> > other won't get broken.   We had this in puppet via parameters which
> > are checked, there isn't really a similar concept in ansible so it
> > seems that we need to agree on some community established rules.
> >
> > For tripleo, I would like to just invoke the os_nova role and pass in
> > like install: false, service: false, config_dir:
> > /my/special/location/, config_data: {...} and it spit out the configs.
> > Then my roles would actually leverage these via containers/etc.  Of
> > course most of this goes away if we had a unified (not file based)
> > configuration method across all services (openstack and non-openstack)
> > but we don't. :D
> >
> 
> I like your idea here Alex.
> So having a role for each of these steps is too much management I agree,
> however
> establishing a pattern of using tasks for each step may be a really good
> way to cleanly handle this.
> 
> Are you saying something like the following?
> 
> openstack-nova-role/
>   tasks/
>     install.yml
>     service.yml
>     config.yml
>     main.yml
> ---
> # main.yml
> 
> - include: install.yml
>   when: nova_install|bool
> 
> - include: service.yml
>   when: nova_service|bool
> 
> - include: config.yml
>   when: nova_config|bool
> 

[openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-02 Thread Paul Belanger
Greetings,

We've had fedora-28 nodes online for some time in openstack-infra, I'd like to
finish the migration process and remove fedora-27 images.

Please take a moment to review and approve the following patches[1]. We'll be
using the fedora-latest nodeset now, which makes it a little easier for
openstack-infra to migrate to newer versions of fedora.  Next time around, we'll
send out an email to the ML once fedora-29 is online to give projects some time
to test before we make the change.

Thanks
- Paul

[1] https://review.openstack.org/#/q/topic:fedora-latest



Re: [OpenStack-Infra] [zuul] Change publication interface to be directories on node, not executor

2018-08-02 Thread Paul Belanger
On Tue, Oct 10, 2017 at 05:42:12PM -0500, Monty Taylor wrote:
> Hey everybody!
> 
> I'd like to make a proposal for changing how we do logs/artifacts/docs
> collection based on the last few weeks/months of writing things- and of
> having to explain to people how to structure build and publish jobs over the
> last couple of weeks.
> 
> tl;dr - I'd like to change the publication interface to be directories on
> the remote node, not directories on the executor
> 
> Rationale
> =
> 
> If jobs have to copy files back to the executor as part of the publication
> interface, then the zuul admins can't change the mechanism of how artifacts,
> logs or docs are published without touching a ton of potentially in-tree job
> content.
> 
> Doing so should also allow us to stop having a second copy of build logic in
> the artifact release jobs.
> 
> Implementation
> ==
> 
> Define a root 'output' dir on the remote nodes. Different types of output
> can be collected by putting them into subdirectories of that dir on the
> remote nodes and expecting that base jobs will take care of them.
> 
> People using jobs defined in zuul-jobs should define a variable
> "zuul_output_dir", either in site-variables or in their base job. Jobs in
> zuul-jobs can and will depend on that variable existing - it will be
> considered part of the base job interface zuul-jobs expects.
> 
> Jobs in zuul-jobs will recognize three specific types of job output:
> * logs
> * artifacts
> * docs
> 
> Jobs in zuul-jobs will put job outputs into "{{ zuul_output_dir }}/logs",
> "{{ zuul_output_dir }}/artifacts" and "{{ zuul_output_dir }}/docs" as
> appropriate.
> 
> A role will be added to zuul-jobs that can be used in base jobs that will
> ensure those directories all exist.
> 
> Compression
> ---
> 
> Deployers may choose to have their base job compress items in {{
> zuul_output_dir }} as part of processing them, or may prefer not to
> depending on whether CPU or network is more precious. Jobs in zuul-jobs
> should just move/copy things into {{ zuul_output_dir }} on the node and
> leave compression, transfer and publication as a base-job operation.
> 
> Easy Output Copying
> ---
> 
> A role will also be added to zuul-jobs to facilitate easy/declarative output
> copying.
> 
> It will take as input a dictionary of files/folders named
> 'zuul_copy_output'. The role will copy contents into {{ zuul_output_dir }}
> on the remote node and is intended to be used before output fetching in a
> base job's post-playbook.
> 
> The input will be a dictionary so that zuul variable merging can be used for
> accumulating.
> 
> Keys of the dictionary will be things to copy. Valid values will be the type
> of output to copy:
> 
> * logs
> * artifacts
> * docs
> * null   # ansible null, not the string null
> 
> null will be used for 'a parent job said to copy this, but this job wants to
> not do so'
> 
> The simple content copying role will not be flexible or powerful. People
> wanting more expressive output copying have all of the normal tools for
> moving files around at their disposal. It will obey the following rules:
> 
> * All files/folders will be copied if they exist, or ignored if they don't
> * Items will be copied as-if 'cp -a' had been used.
> * Order of files is undefined
> * Conflicts between declared files are an error.
> 
> Jobs defined in zuul-jobs should not depend on the {{ zuul_copy_output }}
> variable for content they need copied in place on a remote node. Jobs
> defined in zuul-jobs should instead copy their output to {{ zuul_output_dir
> }} This prevents zuul deployers from being required to put the easy output
> copying role into their base jobs. Jobs defined in zuul-jobs may use the
> role behind the scenes.
> 
> Filter Plugin
> -
> 
> Since the pattern of using a dictionary in job variables to take advantage
> of variable merging is bound to come up more than once, we'll define a
> filter plugin in zuul called 'zuul_list_from_value' (or some better name)
> that will return the list of keys that match a given value. So that given
> the following in a job defintion:
> 
> vars:
>   zuul_copy_output:
>     foo/bar.html: logs
>     other/logs/here.html: logs
>     foo/bar.tgz: artifacts
> 
> Corresponding Ansible could do:
> 
> - copy:
>     src: {{ item }}
>     dest: {{ zuul_log_dir }}
>   with_items: {{ zuul_copy_output | zuul_list_from_value(logs) }}
> 
> For OpenStack Today
> ===
> 
> We will define zuul_output_dir to be "{{ ansible_user_dir }}/zuul-output" in
> our site-variables.yaml file.
> 
> Implement the following in OpenStack's base job:
> 
> We will have the base job will include the simple copying role.
> 
> Logs
> 
> 
> Base job post playbook will always grab {{ zuul_output_dir }}/logs from
> nodes, same as today:
> * If there are more than one node, grab it into {{ zuul.executor.log_dir
> }}/{{ inventory_hostname }}.
> * If only one node, grab into  {{ 

Re: [OpenStack-Infra] Reworking zuul base jobs

2018-08-02 Thread Paul Belanger
On Mon, Jul 23, 2018 at 11:22:13AM -0400, Paul Belanger wrote:
> Greetings,
> 
> A few weeks ago, I sent an email to the zuul-discuss[1] ML talking about the
> idea of splitting a base job in project-config into trusted / untrusted parts.
> Since then we've actually implemented the idea in rdoproject.org zuul and 
> seems
> to be working very well.
> 
> Basically, I'd like to do the same here with our base job but first wanted to
> give a heads up.  Here is the basic idea:
> 
> 
>   project-config (trusted)
>   - job:
>       name: base-minimal
>       parent: null
>       description: top-level job
> 
>   - job:
>       name: base-minimal-test
>       parent: null
>       description: top-level job for testing base-minimal
> 
>   openstack-zuul-jobs (untrusted)
>   - job:
>       name: base
>       parent: base-minimal
> 
> This then allows us to start moving tasks / roles like configure-mirrors from
> trusted into untrusted, since it doesn't really need trusted context on the
> executor.
> 
> In rdoproject, our base-minimal job is much smaller than openstack-infra's
> today, and really is just used for handling secrets (post-run playbooks) and
> zuul_stream (pre). Everything else has been moved into untrusted.
> 
> Here, we likely need to have a little more discussion around what we move into
> untrusted from trusted, but once we've done the dance to place base into
> openstack-zuul-jobs and parent to base-minimal in project-config, we can start
> testing.
> 
> We'd still need to do the base-minimal / base-minimal-test dance for trusted
> context, but the set of things we need to test should be much smaller. As a 
> working
> example, the recent changes to pypi mirrors I believe would have been much
> easier to test in this setup.
> 
> - Paul
> 
> [1] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-July/000508.html

I've gone ahead and pushed up a stack of changes at topic:base-minimal-jobs[2].
We need to rename the current base-minimal job to base-ozj first, then
depending on how minimal the new base-minimal is, we might be able to remove
base-ozj once everything is finished.

- Paul

[2] https://review.openstack.org/#/q/topic:base-minimal-jobs


[OpenStack-Infra] Reworking zuul base jobs

2018-07-23 Thread Paul Belanger
Greetings,

A few weeks ago, I sent an email to the zuul-discuss[1] ML talking about the
idea of splitting a base job in project-config into trusted / untrusted parts.
Since then we've actually implemented the idea in rdoproject.org zuul and seems
to be working very well.

Basically, I'd like to do the same here with our base job but first wanted to
give a heads up.  Here is the basic idea:


  project-config (trusted)
  - job:
      name: base-minimal
      parent: null
      description: top-level job

  - job:
      name: base-minimal-test
      parent: null
      description: top-level job for testing base-minimal

  openstack-zuul-jobs (untrusted)
  - job:
      name: base
      parent: base-minimal

This then allows us to start moving tasks / roles like configure-mirrors from
trusted into untrusted, since it doesn't really need trusted context on the
executor.
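
For example, the untrusted half could end up looking something like this (the
playbook path is illustrative, not a final layout):

  openstack-zuul-jobs (untrusted)
  - job:
      name: base
      parent: base-minimal
      pre-run: playbooks/base/pre.yaml   # runs configure-mirrors, etc.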

In rdoproject, our base-minimal job is much smaller than openstack-infra's today,
and really is just used for handling secrets (post-run playbooks) and
zuul_stream (pre). Everything else has been moved into untrusted.

Here, we likely need to have a little more discussion around what we move into
untrusted from trusted, but once we've done the dance to place base into
openstack-zuul-jobs and parent to base-minimal in project-config, we can start
testing.

We'd still need to do the base-minimal / base-minimal-test dance for trusted
context, but the set of things we need to test should be much smaller. As a working
example, the recent changes to pypi mirrors I believe would have been much
easier to test in this setup.

- Paul

[1] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-July/000508.html


Re: [openstack-dev] Disk space requirement - any way to lower it a little?

2018-07-19 Thread Paul Belanger
On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote:
> Hello,
> 
> While trying to get a new validation¹ in the undercloud preflight
> checks, I hit an (not so) unexpected issue with the CI:
> it doesn't provide flavors with the minimal requirements, at least
> regarding the disk space.
> 
> A quick-fix is to disable the validations in the CI - Wes has already
> pushed a patch for that in the upstream CI:
> https://review.openstack.org/#/c/583275/
> We can consider this as a quick'n'temporary fix².
> 
> The issue is on the RDO CI: apparently, they provide instances with
> "only" 55G of free space, making the checks fail:
> https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46
> 
> So, the question is: would it be possible to lower the requirement to,
> let's say, 50G? Where does that 60G³ come from?
> 
> Thanks for your help/feedback.
> 
> Cheers,
> 
> C.
> 
> 
> 
> ¹ https://review.openstack.org/#/c/582917/
> 
> ² as you might know, there's a BP for a unified validation framework,
> and it will allow to get injected configuration in CI env in order to
> lower the requirements if necessary:
> https://blueprints.launchpad.net/tripleo/+spec/validation-framework
> 
> ³
> http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements
> 
Keep in mind, upstream we don't really have control over the partitioning of
nodes; in some cases it is a single partition, in others multiple. I'd suggest
looking more at:

  https://docs.openstack.org/infra/manual/testing.html

As for downstream RDO, the same is going to apply once we start adding more
cloud providers. I would check whether you actually need that much space for
deployments, and maybe try to mock the testing of that logic.
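
For instance, a validation along these lines (sketch only; variable names are
illustrative) would let CI environments override the threshold instead of
skipping the check entirely:

  - name: Check undercloud disk space
    vars:
      minimum_disk_gb: "{{ tripleo_min_disk_gb | default(60) }}"
      root_mount: "{{ ansible_mounts | selectattr('mount', 'equalto', '/') | list | first }}"
    assert:
      that:
        - root_mount.size_available > (minimum_disk_gb | int) * 1024 * 1024 * 1024
      fail_msg: "Need at least {{ minimum_disk_gb }} GB free on /"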

- Paul



Re: [openstack-dev] Fwd: [TIP] tox release 3.1.1

2018-07-09 Thread Paul Belanger
On Mon, Jul 09, 2018 at 01:44:00PM -0400, Doug Hellmann wrote:
> Excerpts from Eric Fried's message of 2018-07-09 11:16:11 -0500:
> > Doug-
> > 
> > How long til we can start relying on the new behavior in the gate?  I
> > gots me some basepython to purge...
> > 
> > -efried
> 
> Great question. I have to defer to the infra team to answer, since I'm
> not sure how we're managing the version of tox we use in CI.
> 
Should be less than 24 hours, likely sooner. We pull in the latest tox when we
rebuild images each day[1].

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/infra-package-needs/install.d/10-pip-packages



[OpenStack-Infra] zuulv3 feedback for 3pci

2018-07-05 Thread Paul Belanger
Greetings,

Over the last few weeks I've been helping the RDO project migrate away from
zuulv2 (jenkins) to zuulv3. Today all jobs have been migrated with the help of
the zuul-migrate script. We'll start deleting jenkins bits in the next few days.

I wanted to get down some things I've noticed in the process as feedback to
thirdparty CI operators. Hopefully this will help others.

Removal of zuul-cloner
--

This was by far the largest issue we had in the RDO project. The first thing it
meant was the need for much more HDD space. We almost quadrupled the storage
quota needed to run zuulv3 properly because we could no longer use zuul-cloner
against git.o.o.

Right now rdo is running 4 zuul-executors / 4 zuul-mergers; with the increase in
storage requirements this also meant we needed faster disks.  The previous
servers used under zuulv2 couldn't handle the IO now required, so we've had to
rebuild them backed by SSDs. Previously they could boot from volume on ceph.

Need for use-cached-repos
-

Today, use-cached-repos is only available to openstack-infra/project-config; we
should promote it into zuul-jobs to help reduce the amount of pressure on
zuul-executors when jobs start. In the case of 3pci, the prepare-workspace role
isn't up to the task of syncing everything at once.

The feedback here is to somehow allow the base job to be smart enough to work
whether or not a project is found in /opt/git.  Today we have 2 different
images in rdo: 1 has the cache of upstream git.o.o and the other doesn't.
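
Something along these lines in the prepare step is what I have in mind (rough,
untested sketch):

  - name: Check whether the project is already cached on the image
    stat:
      path: "/opt/git/{{ zuul.project.canonical_name }}"
    register: cached_repo

  - name: Seed the workspace from the image cache when present
    command: >
      git clone /opt/git/{{ zuul.project.canonical_name }}
      {{ ansible_user_dir }}/{{ zuul.project.src_dir }}
    when: cached_repo.stat.exists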

Namespace projects with fqdn


This one is likely unique to rdoproject, but because we have 2 connections to
different gerrit systems, review.rdoproject.org and git.openstack.org, we
actually have duplicate project names. For example:

  openstack/tripleo-common

which means, for zuul we have to write projects as:

  project:
    name: git.openstack.org/openstack/tripleo-common

  project:
    name: review.rdoproject.org/openstack/tripleo-common

There are legacy reasons for this, and we plan on cleaning up review.r.o;
however, because of this duplication we cannot use upstream jobs right now. My
initial thought would be to update jobs, in this case devstack, to use the
following for required-projects:

  required-projects:
    - git.openstack.org/openstack-dev/devstack
    - git.openstack.org/openstack/tripleo-common

and propose the patch upstream.  Again, this is likely specific to rdoproject,
but it is something that right now blocks them from loading jobs from zuul.o.o.

I do have some other suggestions, but they are more specific to zuul. I could
post them here as a follow up or on zuul ML.

I am happy I was able to help in the original migration of the openstack
projects from jenkins to zuulv3; that experience helped a lot when I was
debugging zuul failures. But overall the rdo project didn't have any major
issues with job content.

Thanks,
Paul


Re: [openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Paul Belanger
On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote:
> Hi everyone:
> 
> This email is to ask if there is anyone out there opposed to removing
> SELinux bits from OpenStack ansible, it's blocking some of the gates
> and the maintainers for them are no longer working on the project
> unfortunately.
> 
> I'd like to propose removing any SELinux stuff from OSA based on the 
> following:
> 
> 1) We don't gate on it, we don't test it, we don't support it.  If
> you're running OSA with SELinux enforcing, please let us know how :-)
> 2) It extends beyond the scope of the deployment project and there are
> no active maintainers with the resources to deal with them
> 3) With the work currently in place to let OpenStack Ansible install
> distro packages, we can rely on upstream `openstack-selinux` package
> to deliver deployments that run with SELinux on.
> 
> Is there anyone opposed to removing it?  If so, please let us know. :-)
> 
While I don't use OSA, I would be surprised to learn that selinux wouldn't be
supported.  I also understand it requires time and care to maintain. Have you
tried reaching out to people in #RDO? IIRC all those packages should support
selinux.

As for gating, maybe default to selinux permissive so it reports errors but does
not fail.  Then if anybody is interested in supporting it, they can do so and
enable enforcing again once everything is fixed.
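
A minimal sketch of what that could look like as an Ansible task, assuming the
stock selinux module and a targeted policy (illustrative only, not an OSA
patch):

  - name: Run SELinux in permissive mode so denials are logged but do not fail the job
    become: true
    selinux:
      policy: targeted
      state: permissive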

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleo gate is blocked - please read

2018-06-16 Thread Paul Belanger
On Sat, Jun 16, 2018 at 12:47:10PM +, Jeremy Stanley wrote:
> On 2018-06-15 23:15:01 -0700 (-0700), Emilien Macchi wrote:
> [...]
> > ## Dockerhub proxy issue
> > Infra using wrong image layer object storage proxy for Dockerhub:
> > https://review.openstack.org/#/c/575787/
> > Huge thanks to infra team, specially Clark for fixing this super quickly,
> > it clearly helped to stabilize our container jobs, I actually haven't seen
> > timeouts since we merged your patch. Thanks a ton!
> [...]
> 
> As best we can tell from logs, the way Dockerhub served these images
> changed a few weeks ago (at the end of May) leading to this problem.
> -- 
> Jeremy Stanley

I should also note that what we are doing here is a terrible hack; we've only been
able to learn the information by sniffing the traffic to hub.docker.io for our
reverse proxy cache configuration. It is also possible this can break again in the
future, so it is something to always keep in the back of your mind.

It would be great if docker tools just worked with HTTP proxies.
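
As a rough illustration of the kind of per-job workaround this forces, a task
along these lines can point the docker daemon at a regional mirror (the mirror
hostname here is made up):

  - name: Point docker at the regional reverse proxy cache
    become: true
    copy:
      dest: /etc/docker/daemon.json
      content: |
        {
          "registry-mirrors": ["http://mirror.regionone.example.org:8082/"]
        }

  - name: Restart docker so the mirror configuration takes effect
    become: true
    service:
      name: docker
      state: restarted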

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Paul Belanger
On Tue, Jun 05, 2018 at 04:48:00PM -0400, Zane Bitter wrote:
> On 05/06/18 16:38, Doug Hellmann wrote:
> > Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400:
> > > We've talked a bit about migrating to Python 3, but (unless I missed it)
> > > not a lot about which version of Python 3. Currently all projects that
> > > support Python 3 are gating against 3.5. However, Ubuntu Artful and
> > > Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have
> > > been released since then.) The one time it did come up in a thread, we
> > > decided it was blocked on the availability of 3.6 in Ubuntu to run on
> > > the test nodes, so it's time to discuss it again.
> > > 
> > > AIUI we're planning to switch the test nodes to Bionic, since it's the
> > > latest LTS release, so I'd assume that means that when we talk about
> > > running docs jobs, pep8  with Python3 (under the python3-first
> > > project-wide goal) that means 3.6. And while 3.5 jobs should continue to
> > > work, it seems like we ought to start testing ASAP with the version that
> > > users are going to get by default if they choose to use our Python3
> > > packages.
> > > 
> > > The list of breaking changes in 3.6 is quite short (although not zero),
> > > so I wouldn't expect too many roadblocks:
> > > https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6
> > > 
> > > I think we can split the problem into two parts:
> > > 
> > > * How can we detect any issues ASAP.
> > > 
> > > Would it be sane to give all projects with a py35 unit tests job a
> > > non-voting py36 job so that they can start fixing any issues right away?
> > > Like this: https://review.openstack.org/572535
> > 
> > That seems like a good way to start.
> > 
> > Maybe we want to rename that project template to openstack-python3-jobs
> > to keep it version-agnostic?
> 
> You mean the 35_36 one? Actually, let's discuss this on the review.
> 
Yes, please let's keep the python35 / python36 project-templates; I've left comments
in the review.
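
For context, with separate templates a project opting into 3.6 testing early
would add something roughly like this to its .zuul.yaml (template and job names
are illustrative and may differ from what the review settles on):

  - project:
      templates:
        - openstack-python35-jobs
      check:
        jobs:
          - openstack-tox-py36:
              voting: false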

> > > 
> > > * How can we ensure every project fixes any issues and migrates to
> > > voting gates, including for functional test jobs?
> > > 
> > > Would it make sense to make this part of the 'python3-first'
> > > project-wide goal?
> > 
> > Yes, that seems like a good idea. We can be specific about the version
> > of python 3 to be used to achieve that goal (assuming it is selected as
> > a goal).
> > 
> > The instructions I've been putting together are based on just using
> > "python3" in the tox.ini file because I didn't want to have to update
> > that every time we update to a new version of python. Do you think we
> > should be more specific there, too?
> 
> That's probably fine IMHO. We should just be aware that e.g. when distros
> start switching to 3.7 then people's local jobs will start running in 3.7.
> 
> For me, at least, this has already been the case with 3.6 - tox is now
> python3 by default in Fedora, so e.g. pep8 jobs have been running under 3.6
> for a while now. There were a *lot* of deprecation warnings at first.
> 
> > Doug
> > 
> > > 
> > > cheers,
> > > Zane.
> > > 
> > > 
> > > (Disclaimer for the conspiracy-minded: you might assume that I'm
> > > cleverly concealing inside knowledge of which version of Python 3 will
> > > replace Python 2 in the next major release of RHEL/CentOS, but in fact
> > > you would be mistaken. The truth is I've been too lazy to find out, so
> > > I'm as much in the dark as anybody. Really. Anyway, this isn't about
> > > that, it's about testing within upstream OpenStack.)
> > > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement finished

2018-05-02 Thread Paul Belanger
Hello from Infra.
 
Gerrit maintenance has concluded successfully and it is running happily on Ubuntu
Xenial.  We were able to save and restore the queues from zuul, but as always
be sure to check your patches, as a recheck may be required.

If you have any questions or comments, please reach out to us in
#openstack-infra.

I'll leave the text below in case anybody missed our previous emails.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-05-02 Thread Paul Belanger
Hello from Infra.

Today is the day for the scheduled maintenance of gerrit. We'll be allocating 2
hours for the outage but don't expect it to take that long. During this time you
will not be able to access gerrit.

If you have any questions, or would like to follow along, please join us in
#openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thank you TryStack!!

2018-04-30 Thread Paul Belanger
On Mon, Apr 30, 2018 at 09:37:21AM +, Jens Harbott wrote:
> 2018-03-26 22:51 GMT+00:00 Jimmy Mcarthur :
> > Hi everyone,
> >
> > We recently made the tough decision, in conjunction with the dedicated
> > volunteers that run TryStack, to end the service as of March 29, 2018.  For
> > those of you that used it, thank you for being part of the TryStack
> > community.
> >
> > The good news is that you can find more resources to try OpenStack at
> > http://www.openstack.org/start, including the Passport Program, where you
> > can test on any participating public cloud. If you are looking to test
> > different tools or application stacks with OpenStack clouds, you should
> > check out Open Lab.
> >
> > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the
> > many other volunteers who have managed this valuable service for the last
> > several years!  Your contribution to OpenStack was noticed and appreciated
> > by many in the community.
> 
> Seems it would be great if https://trystack.openstack.org/ would be
> updated with this information, according to comments in #openstack
> users are still landing on that page and try to get a stack there in
> vain.
> 
The code is hosted by openstack-infra[1], in case somebody would like to propose a
patch with the new information.

[1] http://git.openstack.org/cgit/openstack-infra/trystack-site

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-26 Thread Paul Belanger
On Thu, Apr 26, 2018 at 09:27:31AM -0500, Sean McGinnis wrote:
> On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote:
> > It's time to talk about the next steps in our migration from python
> > 2 to python 3.
> > 
> > [...]
> > 
> > 2. Change (or duplicate) all functional test jobs to run under
> >python 3.
> 
> As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All
> went well.
> 
> That made me realize something though - right now we have jobs that explicitly
> say py35, both for unit tests and functional tests. But I realized setting up
> these test jobs that it works to just specify "basepython = python3" or run
> unit tests with "tox -e py3". Then with that, it just depends on whether the
> job runs on xenial or bionic as to whether the job is run with py35 or py36.
> 
> It is less explicit, so I see some downside to that, but would it make sense 
> to
> change jobs to drop the minor version to make it more flexible and easy to 
> make
> these transitions?
> 
I still think using tox-py35 / tox-py36 makes sense, as those jobs are already
set up to use the specific nodeset of ubuntu-xenial or ubuntu-bionic.  If we did
move to just tox-py3, it would actually result in more code that projects would need
to add to their .zuul.yaml files:

  - project:
      check:
        jobs:
          - tox-py35


  - project:
      check:
        jobs:
          - tox-py3:
              nodeset: ubuntu-xenial

This may be okay, and I will let others comment, but the main reason I am not a
fan is that we can no longer infer the nodeset by looking at the job name:
tox-py3 could be xenial or bionic.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-25 Thread Paul Belanger
On Thu, Apr 19, 2018 at 11:49:12AM -0400, Paul Belanger wrote:
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration. We are now 1
week away from the replacement date.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bifrost][bandit][magnum][ironic][kolla][pyeclib][mistral] Please merge bindep changes

2018-04-23 Thread Paul Belanger
Greetings,

Could you please review the following bindep.txt[1] changes to your projects and
approve them; it would be helpful to the openstack-infra team.  We are looking
to remove some legacy jenkins scripts from openstack-infra/project-config, and
your projects are still using them.  The following patches will update your jobs
to use the new functionality of our bindep role.
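
For reference, the new approach boils down to running the bindep role from
zuul-jobs in a pre playbook, roughly like the sketch below (the variable names
follow the role's documented defaults, but treat this as illustrative):

  - hosts: all
    roles:
      # Installs the system packages listed in the project's bindep.txt
      # (test profile), replacing the legacy install-distro-packages.sh.
      - role: bindep
        bindep_profile: test
        bindep_dir: "{{ zuul_work_dir }}"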

If you have any questions, please reach out to us in #openstack-infra.

Thanks,
Paul

[1] https://review.openstack.org/#/q/topic:bindep.txt+status:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 05:34:09PM +, Jeremy Stanley wrote:
> On 2018-04-20 12:31:24 -0400 (-0400), Paul Belanger wrote:
> [...]
> > That is fine, if we want to do the mass migration to bionic first,
> > then start looking at which projects are still using
> > bindep-fallback.txt is fine with me.
> > 
> > I just wanted to highlight I think it is time we start pushing a
> > little harder on projects to stop using this logic and start
> > managing bindep.txt themself.
> 
> Yep, this is something I _completely_ agree with. We could even
> start with a deprecation warning in the fallback path so it starts
> showing up more clearly in the job logs too.
> -- 
> Jeremy Stanley

Okay, looking at codesearch.o.o, I've been able to start pushing up changes to
remove bindep-fallback.txt.

https://review.openstack.org/#/q/topic:bindep.txt

This adds bindep.txt to projects that need it, and also removes the legacy
install-distro-packages.sh scripts in favor of our bindep role.

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] PTG September 10-14 in Denver

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 10:42:48AM -0700, Clark Boylan wrote:
> Hello everyone,
> 
> I've been asked if the Infra team plans to attend the next PTG in Denver. My 
> current position is that it would be good to attend as a team as I think it 
> will give us a good opportunity to work on modernizing config management 
> efforts. But before I go ahead and commit to that it would be helpful to get 
> a rough headcount of who intends to go (if it will just be me then likely 
> don't need to have team space).
> 
> Don't worry if you don't have approval yet or have to sort out other details. 
> Mostly just interested in a "do we intend on being there or not" type of 
> answer.
> 
> More details on the event can be found at 
> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html. 
> Feel free to ask questions if that will help you too.
> 
> Let me know (doesn't have to be to the list if you aren't comfortable with 
> that) and thanks!
> 
Intend on being there (pending travel approval)

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 09:13:17AM -0700, Clark Boylan wrote:
> On Fri, Apr 20, 2018, at 9:01 AM, Jeremy Stanley wrote:
> > On 2018-04-19 19:15:18 -0400 (-0400), Paul Belanger wrote:
> > [...]
> > > today ubuntu-bionic does seem to pass properly with
> > > bindep-fallback.txt, but perhaps we prime it with a bad package on
> > > purpose to force the issue. As clarkb points out, the downside to
> > > this it does make it harder for projects to be flipped to
> > > ubuntu-bionic.
> > [...]
> > 
> > My main concern is that this seems sort of at odds with how we
> > discussed simply forcing all PTI jobs from ubuntu-xenial to
> > ubuntu-bionic on master branches rather than giving projects the
> > option to transition on their own timelines (which worked out pretty
> > terribly when we tried being flexible with them on the ubuntu-trusty
> > to ubuntu-xenial transition a couple years ago). Adding a forced
> > mass migration to in-repo bindep.txt files at the same moment we
> > also force all the PTI jobs to a new platform will probably result
> > in torches and pitchforks.
> 
> Yup, this was my concern as well. I think the value of not being on older 
> platforms outweighs needing to manage a list of packages for longer. We 
> likely just need to keep pushing on projects to add/update bindep.txt in repo 
> instead. We can run a logstash query against job-output.txt looking for 
> output of using the fallback file and nicely remind projects if they show up 
> on that list.
> 
That is fine, if we want to do the mass migration to bionic first, then start
looking at which projects are still using bindep-fallback.txt is fine with me.

I just wanted to highlight that I think it is time we start pushing a little harder
on projects to stop using this logic and start managing bindep.txt themselves.

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 08:16:07AM +0100, Jean-Philippe Evrard wrote:
> That's very cool.
> Any idea of the repartition of nodes xenial vs bionic? Is that a very
> restricted amount of nodes?
> 
According to upstream, ubuntu-bionic releases next week. In openstack-infra we
are in really good shape to have projects start using it once we rebuild using
the released version. Projects are able to use ubuntu-bionic today; we just ask
that they don't gate on it until the official release.

As for switching the PTI job to use ubuntu-bionic, that is a different
discussion. It would bump python to 3.6 and it is likely too late in the cycle to
do it.  I guess it is something we can hash out with infra / requirements / tc /
EALLTHEPROJECTS.

-Paul

> 
> On 20 April 2018 at 00:37, Paul Belanger <pabelan...@redhat.com> wrote:
> > Greetings,
> >
> > With ubuntu-bionic release around the corner we'll be starting discussions 
> > about
> > migrating jobs from ubuntu-xenial to ubuntu-bionic.
> >
> > On topic I'd like to raise, is round job migrations from legacy to native
> > zuulv3.  Specifically, I'd like to propose we do not add 
> > legacy-ubuntu-bionic
> > nodesets into openstack-zuul-jobs. Projects should be working towards moving
> > away from the legacy format, as they were just copypasta from our previous 
> > JJB
> > templates.
> >
> > Projects would still be free to move them intree, but I would highly 
> > encourage
> > projects do not do this, as it only delays the issue.
> >
> > The good news is the majority of jobs have already been moved to native 
> > zuulv3
> > jobs, but there are still some projects still depending on the legacy 
> > nodesets.
> > For example, tox bases jobs would not be affected.  It mostly would be dsvm
> > based jobs that haven't been switch to use the new devstack jobs for zuulv3.
> >
> > -Paul
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-20 Thread Paul Belanger
On Fri, Apr 20, 2018 at 09:07:25AM +0200, Andreas Jaeger wrote:
> On 2018-04-20 01:15, Paul Belanger wrote:
> > Greetings,
> > 
> > I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and 
> > push
> > projects to start using a local bindep.txt file.
> > 
> > This would mean, moving forward with ubuntu-bionic, if a project was still
> > depending on bindep-fallback.txt, their jobs may raise a syntax error.
> > 
> > In fact, today ubuntu-bionic does seem to pass properly with
> > bindep-fallback.txt, but perhaps we prime it with a bad package on purpose 
> > to
> > force the issue. As clarkb points out, the downside to this it does make it
> > harder for projects to be flipped to ubuntu-bionic.  It is possible we could
> > also prime gerrit patches for projects that are missing bindep.txt to help 
> > push
> > this effort along.
> > 
> > Thoughts?
> > 
> > [1] 
> > http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt
> 
> This might break all stable branches as well. Pushing those changes in
> is a huge effort ;( Is that worth it?
> 
I wouldn't expect stable branches to be running bionic, unless I am missing
something obvious.

> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets

2018-04-19 Thread Paul Belanger
Greetings,

With the ubuntu-bionic release around the corner, we'll be starting discussions about
migrating jobs from ubuntu-xenial to ubuntu-bionic.

One topic I'd like to raise is around job migrations from legacy to native
zuulv3.  Specifically, I'd like to propose we do not add legacy-ubuntu-bionic
nodesets into openstack-zuul-jobs. Projects should be working towards moving
away from the legacy format, as those jobs were just copypasta from our previous JJB
templates.

Projects would still be free to move them in-tree, but I would highly encourage
projects not to do this, as it only delays the issue.

The good news is the majority of jobs have already been moved to native zuulv3
jobs, but there are still some projects depending on the legacy nodesets.
For example, tox-based jobs would not be affected; it would mostly be dsvm-based
jobs that haven't been switched to use the new devstack jobs for zuulv3.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] Stop supporting bindep-fallback.txt moving forward

2018-04-19 Thread Paul Belanger
Greetings,

I'd like to propose we hard freeze changes to bindep-fallback.txt[1] and push
projects to start using a local bindep.txt file.

This would mean, moving forward with ubuntu-bionic, if a project was still
depending on bindep-fallback.txt, their jobs may raise a syntax error.

In fact, today ubuntu-bionic does seem to pass properly with
bindep-fallback.txt, but perhaps we prime it with a bad package on purpose to
force the issue. As clarkb points out, the downside to this is that it does make it
harder for projects to be flipped to ubuntu-bionic.  It is possible we could
also prime gerrit patches for projects that are missing bindep.txt to help push
this effort along.

Thoughts?

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/bindep-fallback.txt

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] [infra] Removal of debian-jessie, replaced by debian-stable (stretch)

2018-04-19 Thread Paul Belanger
Greetings,

I'd like to propose, now that we have debian-stable (stretch) nodesets online for
nodepool, that we start the process to remove debian-jessie.  As far as I can see,
there really are only 2 projects using debian-jessie:

  * ARA
  * ansible-hardening

I've already proposed patches to update their jobs to debian-stable, replacing
debian-jessie:

  https://review.openstack.org/#/q/topic:debian-stable

You'll also notice we are not using debian-stretch directly for the nodeset;
this is on purpose, so that when the next release of debian happens (buster) we don't
need to make a bunch of in-repo changes to projects, but only update the label of
the nodeset from debian-stretch to debian-buster.
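
In other words, the nodeset is a small level of indirection, roughly like the
sketch below; only the label has to change at the next Debian release (the node
name is illustrative):

  - nodeset:
      name: debian-stable
      nodes:
        - name: primary
          label: debian-stretch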

Of course, we'd need to give a fair amount of notice when we plan to make that
change, but given this nodeset isn't part of our LTS platform (ubuntu / centos)
I believe this will help us in openstack-infra migrate projects to the latest
distro as fast as possible.

Thoughts?
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-19 Thread Paul Belanger
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration. We are now 2
weeks away from the replacement date.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-12 Thread Paul Belanger
On Thu, Apr 12, 2018 at 09:00:15AM -0400, Paul Belanger wrote:
> On Mon, Jan 15, 2018 at 01:11:23PM +, Frank Jansen wrote:
> > Hi Ian,
> > 
> > do you have any insight into the availability of a physical environment for 
> > the ARM64 cloud?
> > 
> > I’m curious, as there may be a need for downstream testing, which I would 
> > assume will want to make use of our existing OSP CI framework.
> > 
> The hardware is donated by Linaro and the first cloud is currently located in
> China. As for details of hardware, I recently asked hrw in #openstack-infra 
> and
> this was his reply:
> 
>   hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of 
> storage. different vendors. That's probably the closest to what I can say
>   hrw | pabelanger: some machines may be under NDA, some never reached mass 
> market, some are mass market available, some are no longer mass market 
> available.
> 
> As for downstream testing, are you looking for arm64 hardware or hoping to use
> the Linaro clouds for the testing.
> 
Also, I just noticed this was from Jan 15th, but only just showed up in my
inbox. Sorry for the noise, and will try to look at headers before replying :)

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-04-12 Thread Paul Belanger
On Mon, Jan 15, 2018 at 01:11:23PM +, Frank Jansen wrote:
> Hi Ian,
> 
> do you have any insight into the availability of a physical environment for 
> the ARM64 cloud?
> 
> I’m curious, as there may be a need for downstream testing, which I would 
> assume will want to make use of our existing OSP CI framework.
> 
The hardware is donated by Linaro and the first cloud is currently located in
China. As for details of hardware, I recently asked hrw in #openstack-infra and
this was his reply:

  hrw | pabelanger: misc aarch64 servers with 32+GB of ram and some GB/TB of 
storage. different vendors. That's probably the closest to what I can say
  hrw | pabelanger: some machines may be under NDA, some never reached mass 
market, some are mass market available, some are no longer mass market 
available.

As for downstream testing, are you looking for arm64 hardware or hoping to use
the Linaro clouds for the testing?

- Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-10 Thread Paul Belanger
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-08 Thread Paul Belanger
On Sat, Apr 07, 2018 at 12:56:31AM -0700, Arun SAG wrote:
> Hello,
> 
> On Thu, Apr 5, 2018 at 5:39 PM, Paul Belanger <pabelan...@redhat.com> wrote:
> 
> > Yah, I agree your approach is the better, i just wanted to toggle what was
> > supported by default. However, it is pretty broken today.  I can't imagine
> > anybody actually using it, if so they must be carrying downstream patches.
> >
> > If we think USE_VENV is valid use case, for per project VENV, I suggest we
> > continue to fix it and update neutron to support it.  Otherwise, we maybe 
> > should
> > rip and replace it.
> 
> I work for Yahoo (Oath). We use USE_VENV a lot in our CI. We use VENVs
> to deploy software to
> production as well. we have some downstream patches to devstack to fix
> some issues with
> USE_VENV feature, i would be happy to upstream them. Please do not rip
> this out. Thanks.
> 
Yes, please upstream them if at all possible. I've been tracking all the fixes
so far at https://review.openstack.org/552939/ but still having an issue with
rootwrap.  I think clarkb managed to fix this in his patchset.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-04-05 Thread Paul Belanger
On Wed, Apr 04, 2018 at 11:27:34AM -0400, Paul Belanger wrote:
> On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote:
> > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > Greetings,
> > > 
> > > A quick search of git shows your projects are using fedora-26 nodes for 
> > > testing.
> > > Please take a moment to look at gerrit[1] and help land patches.  We'd 
> > > like to
> > > remove fedora-26 nodes in the next week and to avoid broken jobs you'll 
> > > need to
> > > approve these patches.
> > > 
> > > If you jobs are failing under fedora-27, please take the time to fix any 
> > > issue
> > > or update said patches to make them non-voting.
> > > 
> > > We (openstack-infra) aim to only keep the latest fedora image online, 
> > > which
> > > changes aprox every 6 months.
> > > 
> > > Thanks for your help and understanding,
> > > Paul
> > > 
> > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > > 
> > Greetings,
> > 
> > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > remove
> > our fedora-26 images next week and if jobs haven't been migrated you may 
> > start
> > to see NODE_FAILURE messages while running jobs.  Please take a moment to 
> > merge
> > the open changes or update them to be non-voting while you work on fixes.
> > 
> > Thanks again,
> > Paul
> > 
> Hi,
> 
> It's been a month since we started asking projects to migrate to fedora-27.
> 
> I've proposed the patch to remove fedora-26 nodes from nodepool[2]; if your
> project hasn't merged the patches above you will start to see NODE_FAILURE
> results for your jobs. Please take the time to approve the changes above.
> 
> Because new fedora images come online every 6 months, we like to only keep one
> of them online at any given time. Fedora is meant to be a fast moving distro 
> to
> pick up new versions of software out side of the Ubuntu LTS releases.
> 
> If you have any questions please reach out to us in #openstack-infra.
> 
> Thanks,
> Paul
> 
> [2] https://review.openstack.org/558847/
> 
We've just landed the patch; fedora-26 images are now removed. If you haven't
upgraded your jobs to fedora-27, you'll now start seeing NODE_FAILURE returned by
zuul.

If you have any questions please reach out to us in #openstack-infra.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Paul Belanger
On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote:
> On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:
> > On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:
> > > On 18-03-31 15:00:27, Jeremy Stanley wrote:
> > > > According to a notice[1] posted to the pypa-announce and
> > > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
> > > > is expected to be released in two weeks (over the April 14/15
> > > > weekend). We know it's at least going to start breaking[2] DevStack
> > > > and we need to come up with a plan for addressing that, but we don't
> > > > know how much more widespread the problem might end up being so
> > > > encourage everyone to try it out now where they can.
> > > > 
> > > 
> > > I'd like to suggest locking down pip/setuptools/wheel like openstack
> > > ansible is doing in 
> > > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
> > > 
> > > We could maintain it as a separate constraints file (or infra could
> > > maintian it, doesn't mater).  The file would only be used for the
> > > initial get-pip install.
> > 
> > In the past we've done our best to avoid pinning these tools because 1) 
> > we've told people they should use latest for openstack to work and 2) it 
> > is really difficult to actually control what versions of these tools end 
> > up on your systems if not latest.
> > 
> > I would strongly push towards addressing the distutils package deletion 
> > problem that we've run into with pip10 instead. One of the approaches 
> > thrown out that pabelanger is working on is to use a common virtualenv 
> > for devstack and avoid the system package conflict entirely.
> 
> I was mistaken and pabelanger was working to get devstack's USE_VENV option 
> working which installs each service (if the service supports it) into its own 
> virtualenv. There are two big drawbacks to this. This first is that we would 
> lose coinstallation of all the openstack services which is one way we ensure 
> they all work together at the end of the day. The second is that not all 
> services in "base" devstack support USE_VENV and I doubt many plugins do 
> either (neutron apparently doesn't?).
> 
Yeah, I agree your approach is the better one; I just wanted to toggle what was
supported by default. However, it is pretty broken today.  I can't imagine
anybody actually using it; if they are, they must be carrying downstream patches.

If we think USE_VENV is a valid use case for per-project venvs, I suggest we
continue to fix it and update neutron to support it.  Otherwise, we should maybe
rip it out and replace it.

Paul

> I've since worked out a change that passes tempest using a global virtualenv 
> installed devstack at https://review.openstack.org/#/c/558930/. This needs to 
> be cleaned up so that we only check for and install the virtualenv(s) once 
> and we need to handle mixed python2 and python3 environments better (so that 
> you can run a python2 swift and python3 everything else).
> 
> The other major issue we've run into is that nova file injection (which is 
> tested by tempest) seems to require either libguestfs or nbd. libguestfs 
> bindings for python aren't available on pypi and instead we get them from 
> system packaging. This means if we want libguestfs support we have to enable 
> system site packages when using virtualenvs. The alternative is to use nbd 
> which apparently isn't preferred by nova and doesn't work under current 
> devstack anyways.
> 
> Why is this a problem? Well the new pip10 behavior that breaks devstack is 
> pip10's refusable to remove distutils installed packages. Distro packages by 
> and large are distutils packaged which means if you mix system packages and 
> pip installed packages there is a good chance something will break (and it 
> does break for current devstack). I'm not sure that using a virtualenv with 
> system site packages enabled will sufficiently protect us from this case (but 
> we should test it further). Also it feels wrong to enable system packages in 
> a virtualenv if the entire point is avoiding system python packages.
> 
> I'm not sure what the best option is here but if we can show that system site 
> packages with virtualenvs is viable with pip10 and people want to move 
> forward with devstack using a global virtualenv we can work to clean up this 
> change and make it mergeable.
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asking for ask.openstack.org

2018-04-04 Thread Paul Belanger
On Wed, Apr 04, 2018 at 04:26:12PM -0500, Jimmy McArthur wrote:
> Hi everyone!
> 
> We have a very robust and vibrant community at ask.openstack.org
> .  There are literally dozens of posts a day.
> However, many of them don't receive knowledgeable answers.  I'm really
> worried about this becoming a vacuum where potential community members get
> frustrated and don't realize how to get more involved with the community.
> 
> I'm looking for thoughts/ideas/feelings about this tool as well as potential
> admin volunteers to help us manage the constant influx of technical and
> not-so-technical questions around OpenStack.
> 
> For those of you already contributing there, Thank You!  For those that are
> interested in becoming a moderator (instant AUC status!) or have some
> additional ideas around fostering this community, please respond.
> 
> Looking forward to your thoughts :)
> 
> Thanks!
> Jimmy
> irc: jamesmcarthur

We also have a 2nd issue where the ask.o.o server doesn't appear to be large
enough any more to handle the traffic. A few times over the last few weeks we've
had outages due to the HDD being full.

We likely need to reduce the number of days we retain database backups / http
logs or look to attach a volume to increase storage.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-helm][infra] Please consider using experimental pipeline for non-voting jobs

2018-04-04 Thread Paul Belanger
Greetings,

I've recently proposed https://review.openstack.org/558870 to the openstack-helm
project. This moves both the centos and fedora jobs into the experimental pipeline.
The reason for this is that the multinode jobs in helm each use 5 nodes per distro;
in this case, 10 nodes shared between centos and fedora.

Given that this happens on every patchset proposed to helm, and these jobs have
been non-voting for some time (3+ months), I think it is fair to now move them
into experimental to help conserve CI resources.
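
For anyone curious, the change is roughly a matter of moving the job entries
from the check pipeline stanza into an experimental one in the project's
.zuul.yaml, e.g. (job names are illustrative):

  - project:
      check:
        jobs:
          - openstack-helm-linter
      experimental:
        jobs:
          - openstack-helm-multinode-centos
          - openstack-helm-multinode-fedora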

Once they have been properly fixed, I see no issue with moving them back to the
check / gate pipelines.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-04-04 Thread Paul Belanger
On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote:
> On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > Greetings,
> > 
> > A quick search of git shows your projects are using fedora-26 nodes for 
> > testing.
> > Please take a moment to look at gerrit[1] and help land patches.  We'd like 
> > to
> > remove fedora-26 nodes in the next week and to avoid broken jobs you'll 
> > need to
> > approve these patches.
> > 
> > If you jobs are failing under fedora-27, please take the time to fix any 
> > issue
> > or update said patches to make them non-voting.
> > 
> > We (openstack-infra) aim to only keep the latest fedora image online, which
> > changes aprox every 6 months.
> > 
> > Thanks for your help and understanding,
> > Paul
> > 
> > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > 
> Greetings,
> 
> This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> remove
> our fedora-26 images next week and if jobs haven't been migrated you may start
> to see NODE_FAILURE messages while running jobs.  Please take a moment to 
> merge
> the open changes or update them to be non-voting while you work on fixes.
> 
> Thanks again,
> Paul
> 
Hi,

It's been a month since we started asking projects to migrate to fedora-27.

I've proposed the patch to remove fedora-26 nodes from nodepool[2]; if your
project hasn't merged the patches above you will start to see NODE_FAILURE
results for your jobs. Please take the time to approve the changes above.

Because new fedora images come online every 6 months, we like to only keep one
of them online at any given time. Fedora is meant to be a fast moving distro to
pick up new versions of software outside of the Ubuntu LTS releases.

If you have any questions please reach out to us in #openstack-infra.

Thanks,
Paul

[2] https://review.openstack.org/558847/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit server replacement scheduled for May 2nd 2018

2018-04-03 Thread Paul Belanger
Hello from Infra.

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports

2018-03-29 Thread Paul Belanger
On Thu, Mar 29, 2018 at 06:14:06PM -0400, David Moreau Simard wrote:
> Hi,
> 
> By default, all jobs currently benefit from the generation of a static
> ARA report located in the "ara" directory at the root of the log
> directory.
> Due to scalability concerns, these reports were only generated when a
> job failed and were not available on successful runs.
> 
> I'm happy to announce that you can expect ARA reports to be available
> for every job from now on -- including the successful ones !
> 
> You'll notice a subtle but important change: the report directory will
> henceforth be named "ara-report" instead of "ara".
> 
> Instead of generating and saving a HTML report, we'll now only save
> the ARA database in the "ara-report" directory.
> This is a special directory from the perspective of the
> logs.openstack.org server and ARA databases located in such
> directories will be loaded dynamically by a WSGI middleware.
> 
> You don't need to do anything to benefit from this change -- it will
> be pushed to all jobs that inherit from the base job by default.
> 
> However, if you happen to be using a "nested" installation of ARA and
> Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
> means that you can also leverage this feature.
> In order to do that, you'll want to create an "ara-report" directory
> and copy your ARA database inside before your logs are collected and
> uploaded.
> 
I believe this is an important task we should also push on for the projects you
listed above. The main reason to do this is to simplify job uploads and filesystem
demands (thanks clarkb).

Let's see if we can update these projects in the coming week or two!

Great work.
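
For the nested installations, the log-collection step could be as simple as
something like this sketch in a post playbook (paths and variable names are
illustrative):

  - hosts: all
    tasks:
      - name: Create the ara-report directory in the collected logs
        file:
          path: "{{ log_dir }}/ara-report"
          state: directory

      - name: Copy the nested ARA database so the WSGI middleware can load it
        copy:
          src: "{{ ansible_env.HOME }}/.ara/ansible.sqlite"
          dest: "{{ log_dir }}/ara-report/ansible.sqlite"
          remote_src: true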

> To help you visualize:
> /ara-report <-- This is the default Zuul report
> /logs/ara <-- This wouldn't be loaded dynamically
> /logs/ara-report <-- This would be loaded dynamically
> /logs/some/directory/ara-report <-- This would be loaded dynamically
> 
> For more details on this feature of ARA, you can refer to the documentation 
> [1].
> 
> Let me know if you have any questions !
> 
> [1]: https://ara.readthedocs.io/en/latest/advanced.html
> 
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] All Hail our Newest Release Name - OpenStack Stein

2018-03-29 Thread Paul Belanger
Hi everybody!

As the subject reads, the "S" release of OpenStack is officially "Stein". As
with previous elections, this wasn't the first choice; that was "Solar".

Solar was judged to have legal risk, so as per our name selection process, we
moved to the next name on the list.

Thanks to everybody who participated, and look forward to making OpenStack Stein
a great release.

Paul

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] All Hail our Newest Release Name - OpenStack Stein

2018-03-29 Thread Paul Belanger
Hi everybody!

As the subject reads, the "S" release of OpenStack is officially "Stein". As
with previous elections, this wasn't the first choice; that was "Solar".

Solar was judged to have legal risk, so as per our name selection process, we
moved to the next name on the list.

Thanks to everybody who participated, and look forward to making OpenStack Stein
a great release.

Paul

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] All Hail our Newest Release Name - OpenStack Stein

2018-03-29 Thread Paul Belanger
Hi everybody!

As the subject reads, the "S" release of OpenStack is officially "Stein". As
with previous elections, this wasn't the first choice; that was "Solar".

Solar was judged to have legal risk, so as per our name selection process, we
moved to the next name on the list.

Thanks to everybody who participated, and look forward to making OpenStack Stein
a great release.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Recap of the Cross Community Infra/CI/CD event before ONS

2018-03-29 Thread Paul Belanger
On Thu, Mar 29, 2018 at 11:11:39AM -0700, Clark Boylan wrote:
> Hello everyone,
> 
> Thought I would give a recap of the Cross Community CI event that Fatih, 
> Melvin, and Robyn hosted prior to the ONS conference this last weekend. As a 
> small disclaimer there was a lot to ingest over a short period of time so 
> apologies if I misremember and get names or projects or topics wrong.
> 
> The event had representatives from OpenStack, Ansible, Linux Foundation, 
> OpenDaylight, OPNFV, ONAP, CNCF, and fd.io (and probably others that I don't 
> remember). The event was largely split into two halves, the first a get to 
> know each project (the community they represent, the tools and methods they 
> use and the challenges they face) and the second working together to reach 
> common understanding on topics such as vocabulary, tooling pracitices, and 
> addressing particular issues that affect many of us. Notes were taken for 
> each day (half) and can be found on mozilla's etherpad [0] [1].
> 
> My biggest takeaway from the event was that while we produce different 
> software we face many of the same challenges performing CI/CD for this 
> software and there is a lot of opportunity for us to work together. In many 
> cases we already use many of the same tools. Gerrit for example is quite 
> popular with the LF projects. In other places we have made distinct choices 
> like Jenkins or Zuul or Gitlab CI, but still have to solve similar issues 
> across these tools like security of job runs and signing of release artifacts.
> 
> I've personally volunteered along with Trevor Bramwell at the LF to sort out 
> some of the common security issues we face running arbitrary code pulled down 
> from the Internet. Another topic that had a lot of interest was building (or 
> consuming some existing if it already exists) message bus to enable machine 
> to machine communication between CI systems. This would help groups like 
> OPNFV which are integrating the output of OpenStack and others to know when 
> there are new things that needs testing and where to get them.
> 
> Basically we previously operated in silos despite significant overlap in 
> tooling and issues we face and since we all work on open source software 
> little prevents us from working together so we should do that more. If this 
> sounds like a good idea and is interesting to you there is a wiki [2] with 
> information on places to collaborate. Currently there are things like a 
> mailing list, freenode IRC channel (other chat tools too if you prefer), and 
> a wiki. Feel free to sign up and get involved. Also I'm happy to give my 
> thoughts on the event if you have further questions.
> 
> [0] https://public.etherpad-mozilla.org/p/infra_cicd_day1
> [1] https://public.etherpad-mozilla.org/p/infra_cicd_day2
> [2] https://gitlab.openci.io/openci/community/wikis/home#collaboration-tools
> 
> Thank you to everyone who helped organize and attended making it a success,
> Clark
> 
Great report,

What was the feedback about continuing these meetings every 6 / 12 months? Do you
think it was a one-off or something that looks to grow into a recurring
event?

I'm interested in the message bus topic myself; it reminds me to rebase some
fedmsg patches :)
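
For the curious, the kind of machine-to-machine message being discussed would be
something like the sketch below: a small, self-describing event that a downstream
CI (OPNFV, ONAP, etc.) could subscribe to in order to learn that a new artifact
is ready for testing. This is only an illustration; the topic name and fields are
invented for the example, and no fedmsg or agreed-upon openci schema is implied.

# Illustrative only: a rough idea of a cross-CI "artifact published" event.
# All field names here are made up for the example.
import json

event = {
    "topic": "ci.artifact.published",
    "project": "openstack/nova",
    "branch": "master",
    "artifact_url": "https://tarballs.example.org/nova/nova-master.tar.gz",
    "build_id": "123abc",
}

# A publisher would put this on the bus; subscribers would filter on "topic".
print(json.dumps(event, indent=2))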

Thanks for the report,
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol

2018-03-26 Thread Paul Belanger
On Tue, Mar 27, 2018 at 08:56:09AM +1100, Tony Breeds wrote:
> Hi folks,
> Can we ask someone from infra to do this, or add me to bootstrappers
> to do it myself?
> 
Given that we did this last time, I don't see why we can't add you to
bootstrappers again.

Will confirm.

-Paul

> On Thu, Mar 15, 2018 at 10:57:58AM +, Jean-Philippe Evrard wrote:
> > Looks good to me.
> > 
> > On 15 March 2018 at 01:11, Tony Breeds  wrote:
> > > On Wed, Mar 14, 2018 at 09:40:33PM +, Jean-Philippe Evrard wrote:
> > >> Hello folks,
> > >>
> > >> The list is almost perfect: you can do all of those except
> > >> openstack/openstack-ansible-tests.
> > >> I'd like to phase out openstack/openstack-ansible-tests and
> > >> openstack/openstack-ansible later.
> > >
> > > Okay excluding the 2 repos above and filtering out projects that don't
> > > have newton branches we came down to:
> > >
> > > # EOL repos belonging to OpenStackAnsible
> > > eol_branch.sh -- stable/newton newton-eol \
> > >  openstack/ansible-hardening \
> > >  openstack/openstack-ansible-apt_package_pinning \
> > >  openstack/openstack-ansible-ceph_client \
> > >  openstack/openstack-ansible-galera_client \
> > >  openstack/openstack-ansible-galera_server \
> > >  openstack/openstack-ansible-haproxy_server \
> > >  openstack/openstack-ansible-lxc_container_create \
> > >  openstack/openstack-ansible-lxc_hosts \
> > >  openstack/openstack-ansible-memcached_server \
> > >  openstack/openstack-ansible-openstack_hosts \
> > >  openstack/openstack-ansible-openstack_openrc \
> > >  openstack/openstack-ansible-ops \
> > >  openstack/openstack-ansible-os_aodh \
> > >  openstack/openstack-ansible-os_ceilometer \
> > >  openstack/openstack-ansible-os_cinder \
> > >  openstack/openstack-ansible-os_glance \
> > >  openstack/openstack-ansible-os_gnocchi \
> > >  openstack/openstack-ansible-os_heat \
> > >  openstack/openstack-ansible-os_horizon \
> > >  openstack/openstack-ansible-os_ironic \
> > >  openstack/openstack-ansible-os_keystone \
> > >  openstack/openstack-ansible-os_magnum \
> > >  openstack/openstack-ansible-os_neutron \
> > >  openstack/openstack-ansible-os_nova \
> > >  openstack/openstack-ansible-os_rally \
> > >  openstack/openstack-ansible-os_sahara \
> > >  openstack/openstack-ansible-os_swift \
> > >  openstack/openstack-ansible-os_tempest \
> > >  openstack/openstack-ansible-pip_install \
> > >  openstack/openstack-ansible-plugins \
> > >  openstack/openstack-ansible-rabbitmq_server \
> > >  openstack/openstack-ansible-repo_build \
> > >  openstack/openstack-ansible-repo_server \
> > >  openstack/openstack-ansible-rsyslog_client \
> > >  openstack/openstack-ansible-rsyslog_server \
> > >  openstack/openstack-ansible-security
> > >
> > > If you confirm I have the list right this time I'll work on this tomorrow
> > >
> > > Yours Tony.
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Yours Tony.



> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[Openstack] OpenStack "S" Release Naming Preliminary Results

2018-03-21 Thread Paul Belanger
Hello all!

We decided to run a public poll this time around; we'll likely discuss the
process during a TC meeting, but we'd love to hear your feedback.

The raw results are below - however ...

**PLEASE REMEMBER** that these now have to go through legal vetting. So 
it is too soon to say 'OpenStack Solar' is our next release, given that previous
polls have had some issues with the top choice.

In any case, the names will be sent off to legal for vetting. As soon
as we have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1=c04ca6bca83a1427

Result

1. Solar  (Condorcet winner: wins contests with all other choices)
2. Stein  loses to Solar by 159–138
3. Spree  loses to Solar by 175–122, loses to Stein by 148–141
4. Sonne  loses to Solar by 190–99, loses to Spree by 174–97
5. Springer  loses to Solar by 214–60, loses to Sonne by 147–103
6. Spandau  loses to Solar by 195–88, loses to Springer by 125–118
7. See  loses to Solar by 203–61, loses to Spandau by 121–111
8. Schiller  loses to Solar by 207–70, loses to See by 112–106
9. SBahn  loses to Solar by 212–74, loses to Schiller by 111–101
10. Staaken  loses to Solar by 219–59, loses to SBahn by 115–89
11. Shellhaus  loses to Solar by 213–61, loses to Staaken by 94–85
12. Steglitz  loses to Solar by 216–50, loses to Shellhaus by 90–83
13. Saatwinkel  loses to Solar by 219–55, loses to Steglitz by 96–57
14. Savigny  loses to Solar by 219–51, loses to Saatwinkel by 77–76
15. Schoenholz  loses to Solar by 221–46, loses to Savigny by 78–70
16. Suedkreuz  loses to Solar by 220–50, loses to Schoenholz by 68–67
17. Soorstreet  loses to Solar by 226–32, loses to Suedkreuz by 75–58
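
For anyone curious how the ranking above is derived: CIVS runs a Condorcet poll,
tallying every head-to-head contest from the voters' ranked ballots. Below is a
minimal sketch of that idea in Python, not the CIVS implementation; the ballots
are made up for illustration and it assumes every ballot ranks every candidate.

# Minimal Condorcet tally sketch (illustrative only, not the CIVS code).
from itertools import combinations

ballots = [
    ["Solar", "Stein", "Spree"],
    ["Stein", "Solar", "Spree"],
    ["Solar", "Spree", "Stein"],
]

candidates = sorted({name for ballot in ballots for name in ballot})

def prefers(ballot, a, b):
    """True if this ballot ranks candidate a above candidate b."""
    return ballot.index(a) < ballot.index(b)

# Count every pairwise contest and note who wins each one.
wins = {c: 0 for c in candidates}
for a, b in combinations(candidates, 2):
    a_over_b = sum(prefers(ballot, a, b) for ballot in ballots)
    b_over_a = len(ballots) - a_over_b
    if a_over_b > b_over_a:
        wins[a] += 1
    elif b_over_a > a_over_b:
        wins[b] += 1

# A Condorcet winner beats every other candidate head-to-head.
winner = [c for c, w in wins.items() if w == len(candidates) - 1]
print("Condorcet winner:", winner[0] if winner else "none (cycle or tie)")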

- Paul

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] OpenStack "S" Release Naming Preliminary Results

2018-03-21 Thread Paul Belanger
Hello all!

We decided to run a public poll this time around; we'll likely discuss the
process during a TC meeting, but we'd love to hear your feedback.

The raw results are below - however ...

**PLEASE REMEMBER** that these now have to go through legal vetting. So 
it is too soon to say 'OpenStack Solar' is our next release, given that previous
polls have had some issues with the top choice.

In any case, the names will be sent off to legal for vetting. As soon
as we have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1=c04ca6bca83a1427

Result

1. Solar  (Condorcet winner: wins contests with all other choices)
2. Stein  loses to Solar by 159–138
3. Spree  loses to Solar by 175–122, loses to Stein by 148–141
4. Sonne  loses to Solar by 190–99, loses to Spree by 174–97
5. Springer  loses to Solar by 214–60, loses to Sonne by 147–103
6. Spandau  loses to Solar by 195–88, loses to Springer by 125–118
7. See  loses to Solar by 203–61, loses to Spandau by 121–111
8. Schiller  loses to Solar by 207–70, loses to See by 112–106
9. SBahn  loses to Solar by 212–74, loses to Schiller by 111–101
10. Staaken  loses to Solar by 219–59, loses to SBahn by 115–89
11. Shellhaus  loses to Solar by 213–61, loses to Staaken by 94–85
12. Steglitz  loses to Solar by 216–50, loses to Shellhaus by 90–83
13. Saatwinkel  loses to Solar by 219–55, loses to Steglitz by 96–57
14. Savigny  loses to Solar by 219–51, loses to Saatwinkel by 77–76
15. Schoenholz  loses to Solar by 221–46, loses to Savigny by 78–70
16. Suedkreuz  loses to Solar by 220–50, loses to Schoenholz by 68–67
17. Soorstreet  loses to Solar by 226–32, loses to Suedkreuz by 75–58

- Paul

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] OpenStack "S" Release Naming Preliminary Results

2018-03-21 Thread Paul Belanger
Hello all!

We decided to run a public poll this time around; we'll likely discuss the
process during a TC meeting, but we'd love to hear your feedback.

The raw results are below - however ...

**PLEASE REMEMBER** that these now have to go through legal vetting. So 
it is too soon to say 'OpenStack Solar' is our next release, given that previous
polls have had some issues with the top choice.

In any case, the names will be sent off to legal for vetting. As soon
as we have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1=c04ca6bca83a1427

Result

1. Solar  (Condorcet winner: wins contests with all other choices)
2. Stein  loses to Solar by 159–138
3. Spree  loses to Solar by 175–122, loses to Stein by 148–141
4. Sonne  loses to Solar by 190–99, loses to Spree by 174–97
5. Springer  loses to Solar by 214–60, loses to Sonne by 147–103
6. Spandau  loses to Solar by 195–88, loses to Springer by 125–118
7. See  loses to Solar by 203–61, loses to Spandau by 121–111
8. Schiller  loses to Solar by 207–70, loses to See by 112–106
9. SBahn  loses to Solar by 212–74, loses to Schiller by 111–101
10. Staaken  loses to Solar by 219–59, loses to SBahn by 115–89
11. Shellhaus  loses to Solar by 213–61, loses to Staaken by 94–85
12. Steglitz  loses to Solar by 216–50, loses to Shellhaus by 90–83
13. Saatwinkel  loses to Solar by 219–55, loses to Steglitz by 96–57
14. Savigny  loses to Solar by 219–51, loses to Saatwinkel by 77–76
15. Schoenholz  loses to Solar by 221–46, loses to Savigny by 78–70
16. Suedkreuz  loses to Solar by 220–50, loses to Schoenholz by 68–67
17. Soorstreet  loses to Solar by 226–32, loses to Suedkreuz by 75–58

- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Poll: S Release Naming

2018-03-21 Thread Paul Belanger
On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote:
> Greetings all,
> 
> It is time again to cast your vote for the naming of the S Release. This time
> is little different as we've decided to use a public polling option over per
> user private URLs for voting. This means, everybody should proceed to use the
> following URL to cast their vote:
> 
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3
> 
> Because this is a public poll, results will currently be only viewable by 
> myself
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results 
> while
> the public poll is running.
> 
> The poll will officially end on 2018-03-21 23:59:59[1], and results will be
> posted shortly after.
> 
> [1] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
> ---
> 
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the R release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
> 
> Release Name Criteria
> 
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
> 
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
> 
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
> 
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
> 
> Exact Geographic Region
> 
> The Geographic Region from where names for the S release will come is Berlin
> 
> Proposed Names
> 
> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
>Germany)
> 
> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
> 
> Spandau (One of the twelve boroughs of Berlin)
> 
> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
>abbreviated as )
> 
> Steglitz (a locality in the South Western part of the city)
> 
> Springer (Berlin is headquarters of Axel Springer publishing house)
> 
> Staaken (a locality within the Spandau borough)
> 
> Schoenholz (A zone in the Niederschönhausen district of Berlin)
> 
> Shellhaus (A famous office building)
> 
> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
> 
> Schiller (A park in the Mitte borough)
> 
> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
>(The adjective form, Saatwinkler is also a really cool bridge but
>that form is too long)
> 
> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
>wall, also translates as "sun")
> 
> Savigny (Common place in City-West)
> 
> Soorstreet (Street in Berlin restrict Charlottenburg)
> 
> Solar (Skybar in Berlin)
> 
> See (Seestraße or "See Street" in Berlin)
> 
A friendly reminder, the naming poll will be closing later today (2018-03-21
23:59:59 UTC). If you haven't done so, please take a moment to vote.

Thanks,
Paul

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Poll: S Release Naming

2018-03-21 Thread Paul Belanger
On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote:
> Greetings all,
> 
> It is time again to cast your vote for the naming of the S Release. This time
> is little different as we've decided to use a public polling option over per
> user private URLs for voting. This means, everybody should proceed to use the
> following URL to cast their vote:
> 
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3
> 
> Because this is a public poll, results will currently be only viewable by 
> myself
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results 
> while
> the public poll is running.
> 
> The poll will officially end on 2018-03-21 23:59:59[1], and results will be
> posted shortly after.
> 
> [1] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
> ---
> 
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the R release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
> 
> Release Name Criteria
> 
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
> 
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
> 
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
> 
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
> 
> Exact Geographic Region
> 
> The Geographic Region from where names for the S release will come is Berlin
> 
> Proposed Names
> 
> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
>Germany)
> 
> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
> 
> Spandau (One of the twelve boroughs of Berlin)
> 
> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
>abbreviated as )
> 
> Steglitz (a locality in the South Western part of the city)
> 
> Springer (Berlin is headquarters of Axel Springer publishing house)
> 
> Staaken (a locality within the Spandau borough)
> 
> Schoenholz (A zone in the Niederschönhausen district of Berlin)
> 
> Shellhaus (A famous office building)
> 
> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
> 
> Schiller (A park in the Mitte borough)
> 
> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
>(The adjective form, Saatwinkler is also a really cool bridge but
>that form is too long)
> 
> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
>wall, also translates as "sun")
> 
> Savigny (Common place in City-West)
> 
> Soorstreet (Street in Berlin restrict Charlottenburg)
> 
> Solar (Skybar in Berlin)
> 
> See (Seestraße or "See Street" in Berlin)
> 
A friendly reminder, the naming poll will be closing later today (2018-03-21
23:59:59 UTC). If you haven't done so, please take a moment to vote.

Thanks,
Paul

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Poll: S Release Naming

2018-03-21 Thread Paul Belanger
On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote:
> Greetings all,
> 
> It is time again to cast your vote for the naming of the S Release. This time
> is little different as we've decided to use a public polling option over per
> user private URLs for voting. This means, everybody should proceed to use the
> following URL to cast their vote:
> 
>   
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3
> 
> Because this is a public poll, results will currently be only viewable by 
> myself
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results 
> while
> the public poll is running.
> 
> The poll will officially end on 2018-03-21 23:59:59[1], and results will be
> posted shortly after.
> 
> [1] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
> ---
> 
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the R release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
> 
> Release Name Criteria
> 
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
> 
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
> 
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
> 
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
> 
> Exact Geographic Region
> 
> The Geographic Region from where names for the S release will come is Berlin
> 
> Proposed Names
> 
> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
>Germany)
> 
> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
> 
> Spandau (One of the twelve boroughs of Berlin)
> 
> Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
>abbreviated as )
> 
> Steglitz (a locality in the South Western part of the city)
> 
> Springer (Berlin is headquarters of Axel Springer publishing house)
> 
> Staaken (a locality within the Spandau borough)
> 
> Schoenholz (A zone in the Niederschönhausen district of Berlin)
> 
> Shellhaus (A famous office building)
> 
> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
> 
> Schiller (A park in the Mitte borough)
> 
> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
>(The adjective form, Saatwinkler is also a really cool bridge but
>that form is too long)
> 
> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
>wall, also translates as "sun")
> 
> Savigny (Common place in City-West)
> 
> Soorstreet (Street in Berlin restrict Charlottenburg)
> 
> Solar (Skybar in Berlin)
> 
> See (Seestraße or "See Street" in Berlin)
> 
A friendly reminder, the naming poll will be closing later today (2018-03-21
23:59:59 UTC). If you haven't done so, please take a moment to vote.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 10:25:49PM +, Chris Dent wrote:
> On Thu, 15 Mar 2018, Akihiro Motoki wrote:
> 
> > (1) it makes difficult to run tests in local environment
> > We have only released version of neutron/horizon on PyPI. It means
> > PyPI version (i.e. queens) is installed when we run tox in our local
> > development. Most neutron stadium projects and horizon plugins depends
> > on the latest master. Test run in local environment will be broken. We
> > need to install the latest neutron/horizon manually. This confuses
> > most developers. We need to ensure that tox can run successfully in a
> > same manner in our CI and local environments.
> 
> Assuming that ^ is actually the case then:
> 
> This sounds like a really critical issue. We need to be really
> careful about automating the human out of the equation to the point
> where people are submitting broken code just so they can get a good
> test run. That's not great if we'd like to encourage various forms
> of TDD and the like and we also happen to have a limited supply of
> CI resources.
> 
> (Which is not to say that tox-siblings isn't an awesome feature. I
> hadn't really known about it until today and it's a great thing.)
> 
If ansible is our interface for developers to use, it shouldn't be difficult to
reproduce the environments locally to get master. This does mean changing the
developer workflow to use ansible, which I can understand might not be what
people want to do.

The main reason for removing tox_install.sh is to remove zuul-cloner from our
DIB images, as zuulv3 no longer includes this command. Even running that
locally would no longer work against git.o.o.

I agree, we should see how to make the migration for local developer
environments better.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 04:44:07PM -0400, Paul Belanger wrote:
> On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote:
> > On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com wrote:
> > > Hello Paul,
> > > 
> > > I am Nam from Barbican team. I would like to notify a problem when using 
> > > fedora-27. 
> > > 
> > > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in 
> > > this version and it is the main reason for failure Barbican database 
> > > upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating 
> > > the version of mariadb before removing fedora-26.
> > > 
> > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> > > [2] https://jira.mariadb.org/browse/MDEV-13508 
> > > 
> > Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has
> > already been updated. Let me recheck the patch and see if it will use the 
> > newer
> > version.
> > 
> Okay, it looks like our AFS mirrors for fedora our out of sync, I've proposed 
> a
> patch to fix that[3]. Once landed, I'll recheck the job.
> 
Okay, the database issue looks to be fixed, but there are tests failing[4]. I'll
defer back to you to continue work on the migration.

[4] 
http://logs.openstack.org/20/547120/2/check/barbican-dogtag-devstack-functional-fedora-27/4cd64e0/job-output.txt.gz#_2018-03-14_22_29_49_400822

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote:
> On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com wrote:
> > Hello Paul,
> > 
> > I am Nam from Barbican team. I would like to notify a problem when using 
> > fedora-27. 
> > 
> > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in 
> > this version and it is the main reason for failure Barbican database 
> > upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating 
> > the version of mariadb before removing fedora-26.
> > 
> > [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> > [2] https://jira.mariadb.org/browse/MDEV-13508 
> > 
> Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has
> already been updated. Let me recheck the patch and see if it will use the 
> newer
> version.
> 
Okay, it looks like our AFS mirrors for fedora are out of sync; I've proposed a
patch to fix that[3]. Once landed, I'll recheck the job.

[3] https://review.openstack.org/553052
> > Thanks,
> > Nam
> > 
> > > -Original Message-
> > > From: Paul Belanger [mailto:pabelan...@redhat.com]
> > > Sent: Tuesday, March 13, 2018 9:54 PM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from
> > > fedora-26 to fedora-27
> > > 
> > > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > > Greetings,
> > > >
> > > > A quick search of git shows your projects are using fedora-26 nodes for
> > > testing.
> > > > Please take a moment to look at gerrit[1] and help land patches.  We'd
> > > > like to remove fedora-26 nodes in the next week and to avoid broken
> > > > jobs you'll need to approve these patches.
> > > >
> > > > If you jobs are failing under fedora-27, please take the time to fix
> > > > any issue or update said patches to make them non-voting.
> > > >
> > > > We (openstack-infra) aim to only keep the latest fedora image online,
> > > > which changes aprox every 6 months.
> > > >
> > > > Thanks for your help and understanding, Paul
> > > >
> > > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > > >
> > > Greetings,
> > > 
> > > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > > remove
> > > our fedora-26 images next week and if jobs haven't been migrated you may
> > > start to see NODE_FAILURE messages while running jobs.  Please take a
> > > moment to merge the open changes or update them to be non-voting while
> > > you work on fixes.
> > > 
> > > Thanks again,
> > > Paul
> > > 
> > > __
> > > 
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-
> > > requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-14 Thread Paul Belanger
On Wed, Mar 14, 2018 at 03:53:59AM +, na...@vn.fujitsu.com wrote:
> Hello Paul,
> 
> I am Nam from Barbican team. I would like to notify a problem when using 
> fedora-27. 
> 
> Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in this 
> version and it is the main reason for failure Barbican database upgrading 
> [1], the bug was fixed at 10.2.13 [2]. Would you mind updating the version of 
> mariadb before removing fedora-26.
> 
> [1] https://bugs.launchpad.net/barbican/+bug/1734329 
> [2] https://jira.mariadb.org/browse/MDEV-13508 
> 
Looking at https://apps.fedoraproject.org/packages/mariadb, it seems mariadb has
already been updated to 10.2.13. Let me recheck the patch and see if it will use
the newer version.

> Thanks,
> Nam
> 
> > -Original Message-
> > From: Paul Belanger [mailto:pabelan...@redhat.com]
> > Sent: Tuesday, March 13, 2018 9:54 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from
> > fedora-26 to fedora-27
> > 
> > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > Greetings,
> > >
> > > A quick search of git shows your projects are using fedora-26 nodes for
> > testing.
> > > Please take a moment to look at gerrit[1] and help land patches.  We'd
> > > like to remove fedora-26 nodes in the next week and to avoid broken
> > > jobs you'll need to approve these patches.
> > >
> > > If you jobs are failing under fedora-27, please take the time to fix
> > > any issue or update said patches to make them non-voting.
> > >
> > > We (openstack-infra) aim to only keep the latest fedora image online,
> > > which changes aprox every 6 months.
> > >
> > > Thanks for your help and understanding, Paul
> > >
> > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > >
> > Greetings,
> > 
> > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > remove
> > our fedora-26 images next week and if jobs haven't been migrated you may
> > start to see NODE_FAILURE messages while running jobs.  Please take a
> > moment to merge the open changes or update them to be non-voting while
> > you work on fixes.
> > 
> > Thanks again,
> > Paul
> > 
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Poll: S Release Naming

2018-03-13 Thread Paul Belanger
Greetings all,

It is time again to cast your vote for the naming of the S Release. This time
is a little different, as we've decided to use a public polling option over
per-user private URLs for voting. This means everybody should proceed to use the
following URL to cast their vote:

  
https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3

Because this is a public poll, results will currently be only viewable by myself
until the poll closes. Once closed, I'll post the URL making the results
viewable to everybody. This was done to avoid everybody seeing the results while
the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and results will be
posted shortly after.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
---

According to the Release Naming Process, this poll is to determine the
community preferences for the name of the S release of OpenStack. It is
possible that the top choice is not viable for legal reasons, so the second or
later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic Latin alphabet
following the initial letter of the previous release, starting with the
initial release of "Austin". After "Z", the next name should start with
"A" again.

The name must be composed only of the 26 characters of the ISO basic Latin
alphabet. Names which can be transliterated into this character set are also
acceptable.

The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region under
consideration must be declared before the opening of nominations, as part of
the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that
describe the feature should not be included, so "Foo City" or "Foo Peak"
would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may make
an exception for one or more of them to be considered in the Condorcet poll.
The naming official is responsible for presenting the list of exceptional
names for consideration to the TC before the poll opens.
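
As a rough aid, the mechanical parts of the criteria above (single word, at most
10 characters, ISO basic Latin letters, starting with the release letter) can be
checked with a few lines of Python. This is only an illustrative sketch, not an
official tool; the transliteration and geography rules still need a human.

# Rough check of the stated naming criteria; the sample proposals below are
# taken from this thread plus one deliberately invalid entry.
import string

RELEASE_LETTER = "S"
BASIC_LATIN = set(string.ascii_letters)

def meets_criteria(name: str) -> bool:
    return (
        name[:1].upper() == RELEASE_LETTER   # starts with the release letter
        and 1 <= len(name) <= 10             # single word, max 10 characters
        and " " not in name                  # no multi-word names
        and all(ch in BASIC_LATIN for ch in name)
    )

for proposal in ["Spree", "Stein", "Saatwinkler", "Foo City"]:
    print(proposal, meets_criteria(proposal))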

Exact Geographic Region

The Geographic Region from where names for the S release will come is Berlin

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
   Germany)

SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)

Spandau (One of the twelve boroughs of Berlin)

Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
   abbreviated as )

Steglitz (a locality in the South Western part of the city)

Springer (Berlin is headquarters of Axel Springer publishing house)

Staaken (a locality within the Spandau borough)

Schoenholz (A zone in the Niederschönhausen district of Berlin)

Shellhaus (A famous office building)

Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)

Schiller (A park in the Mitte borough)

Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
   (The adjective form, Saatwinkler is also a really cool bridge but
   that form is too long)

Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
   wall, also translates as "sun")

Savigny (Common place in City-West)

Soorstreet (Street in the Berlin district of Charlottenburg)

Solar (Skybar in Berlin)

See (Seestraße or "See Street" in Berlin)

Thanks,
Paul

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] Poll: S Release Naming

2018-03-13 Thread Paul Belanger
Greetings all,

It is time again to cast your vote for the naming of the S Release. This time
is a little different, as we've decided to use a public polling option over
per-user private URLs for voting. This means everybody should proceed to use the
following URL to cast their vote:

  
https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3

Because this is a public poll, results will currently be only viewable by myself
until the poll closes. Once closed, I'll post the URL making the results
viewable to everybody. This was done to avoid everybody seeing the results while
the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and results will be
posted shortly after.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
---

According to the Release Naming Process, this poll is to determine the
community preferences for the name of the S release of OpenStack. It is
possible that the top choice is not viable for legal reasons, so the second or
later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic Latin alphabet
following the initial letter of the previous release, starting with the
initial release of "Austin". After "Z", the next name should start with
"A" again.

The name must be composed only of the 26 characters of the ISO basic Latin
alphabet. Names which can be transliterated into this character set are also
acceptable.

The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region under
consideration must be declared before the opening of nominations, as part of
the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that
describe the feature should not be included, so "Foo City" or "Foo Peak"
would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may make
an exception for one or more of them to be considered in the Condorcet poll.
The naming official is responsible for presenting the list of exceptional
names for consideration to the TC before the poll opens.

Exact Geographic Region

The Geographic Region from where names for the S release will come is Berlin

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
   Germany)

SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)

Spandau (One of the twelve boroughs of Berlin)

Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
   abbreviated as )

Steglitz (a locality in the South Western part of the city)

Springer (Berlin is headquarters of Axel Springer publishing house)

Staaken (a locality within the Spandau borough)

Schoenholz (A zone in the Niederschönhausen district of Berlin)

Shellhaus (A famous office building)

Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)

Schiller (A park in the Mitte borough)

Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
   (The adjective form, Saatwinkler is also a really cool bridge but
   that form is too long)

Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
   wall, also translates as "sun")

Savigny (Common place in City-West)

Soorstreet (Street in the Berlin district of Charlottenburg)

Solar (Skybar in Berlin)

See (Seestraße or "See Street" in Berlin)

Thanks,
Paul

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] Poll: S Release Naming

2018-03-13 Thread Paul Belanger
Greetings all,

It is time again to cast your vote for the naming of the S Release. This time
is a little different, as we've decided to use a public polling option over
per-user private URLs for voting. This means everybody should proceed to use the
following URL to cast their vote:

  
https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3

Because this is a public poll, results will currently be only viewable by myself
until the poll closes. Once closed, I'll post the URL making the results
viewable to everybody. This was done to avoid everybody seeing the results while
the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and results will be
posted shortly after.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
---

According to the Release Naming Process, this poll is to determine the
community preferences for the name of the S release of OpenStack. It is
possible that the top choice is not viable for legal reasons, so the second or
later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic Latin alphabet
following the initial letter of the previous release, starting with the
initial release of "Austin". After "Z", the next name should start with
"A" again.

The name must be composed only of the 26 characters of the ISO basic Latin
alphabet. Names which can be transliterated into this character set are also
acceptable.

The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region under
consideration must be declared before the opening of nominations, as part of
the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that
describe the feature should not be included, so "Foo City" or "Foo Peak"
would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may make
an exception for one or more of them to be considered in the Condorcet poll.
The naming official is responsible for presenting the list of exceptional
names for consideration to the TC before the poll opens.

Exact Geographic Region

The Geographic Region from where names for the S release will come is Berlin

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
   Germany)

SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)

Spandau (One of the twelve boroughs of Berlin)

Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
   abbreviated as )

Steglitz (a locality in the South Western part of the city)

Springer (Berlin is headquarters of Axel Springer publishing house)

Staaken (a locality within the Spandau borough)

Schoenholz (A zone in the Niederschönhausen district of Berlin)

Shellhaus (A famous office building)

Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)

Schiller (A park in the Mitte borough)

Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
   (The adjective form, Saatwinkler is also a really cool bridge but
   that form is too long)

Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
   wall, also translates as "sun")

Savigny (Common place in City-West)

Soorstreet (Street in the Berlin district of Charlottenburg)

Solar (Skybar in Berlin)

See (Seestraße or "See Street" in Berlin)

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-13 Thread Paul Belanger
On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> Greetings,
> 
> A quick search of git shows your projects are using fedora-26 nodes for 
> testing.
> Please take a moment to look at gerrit[1] and help land patches.  We'd like to
> remove fedora-26 nodes in the next week and to avoid broken jobs you'll need 
> to
> approve these patches.
> 
> If you jobs are failing under fedora-27, please take the time to fix any issue
> or update said patches to make them non-voting.
> 
> We (openstack-infra) aim to only keep the latest fedora image online, which
> changes aprox every 6 months.
> 
> Thanks for your help and understanding,
> Paul
> 
> [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> 
Greetings,

This is a friendly reminder about moving jobs to fedora-27. I'd like to remove
our fedora-26 images next week, and if jobs haven't been migrated you may start
to see NODE_FAILURE messages while running jobs. Please take a moment to merge
the open changes or update them to be non-voting while you work on fixes.

Thanks again,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-03-05 Thread Paul Belanger
Greetings,

A quick search of git shows your projects are using fedora-26 nodes for testing.
Please take a moment to look at gerrit[1] and help land patches.  We'd like to
remove fedora-26 nodes in the next week and to avoid broken jobs you'll need to
approve these patches.

If your jobs are failing under fedora-27, please take the time to fix any issues
or update said patches to make them non-voting.

We (openstack-infra) aim to only keep the latest fedora image online, which
changes approx. every 6 months.

Thanks for your help and understanding,
Paul

[1] https://review.openstack.org/#/q/topic:fedora-27+status:open

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [openstack-dev] Release Naming for S - time to suggest a name!

2018-03-05 Thread Paul Belanger
On Tue, Feb 20, 2018 at 08:19:59PM -0500, Paul Belanger wrote:
> Hey everybody,
> 
> Once again, it is time for us to pick a name for our "S" release.
> 
> Since the associated Summit will be in Berlin, the Geographic
> Location has been chosen as "Berlin" (State).
> 
> Nominations are now open. Please add suitable names to
> https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
> and 2018-03-05 23:59 UTC.
> 
> In case you don't remember the rules:
> 
> * Each release name must start with the letter of the ISO basic Latin
> alphabet following the initial letter of the previous release, starting
> with the initial release of "Austin". After "Z", the next name should
> start with "A" again.
> 
> * The name must be composed only of the 26 characters of the ISO basic
> Latin alphabet. Names which can be transliterated into this character
> set are also acceptable.
> 
> * The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region
> under consideration must be declared before the opening of nominations,
> as part of the initiation of the selection process.
> 
> * The name must be a single word with a maximum of 10 characters. Words
> that describe the feature should not be included, so "Foo City" or "Foo
> Peak" would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may
> make an exception for one or more of them to be considered in the
> Condorcet poll. The naming official is responsible for presenting the
> list of exceptional names for consideration to the TC before the poll opens.
> 
> Let the naming begin.
> 
> Paul
> 
Just a reminder, there are only a few more hours left to get your suggestions in
for naming the next release.

Thanks,
Paul

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] Release Naming for S - time to suggest a name!

2018-03-05 Thread Paul Belanger
On Tue, Feb 20, 2018 at 08:19:59PM -0500, Paul Belanger wrote:
> Hey everybody,
> 
> Once again, it is time for us to pick a name for our "S" release.
> 
> Since the associated Summit will be in Berlin, the Geographic
> Location has been chosen as "Berlin" (State).
> 
> Nominations are now open. Please add suitable names to
> https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
> and 2018-03-05 23:59 UTC.
> 
> In case you don't remember the rules:
> 
> * Each release name must start with the letter of the ISO basic Latin
> alphabet following the initial letter of the previous release, starting
> with the initial release of "Austin". After "Z", the next name should
> start with "A" again.
> 
> * The name must be composed only of the 26 characters of the ISO basic
> Latin alphabet. Names which can be transliterated into this character
> set are also acceptable.
> 
> * The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region
> under consideration must be declared before the opening of nominations,
> as part of the initiation of the selection process.
> 
> * The name must be a single word with a maximum of 10 characters. Words
> that describe the feature should not be included, so "Foo City" or "Foo
> Peak" would both be eligible as "Foo".
> 
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may
> make an exception for one or more of them to be considered in the
> Condorcet poll. The naming official is responsible for presenting the
> list of exceptional names for consideration to the TC before the poll opens.
> 
> Let the naming begin.
> 
> Paul
> 
Just a reminder, there are only a few more hours left to get your suggestions in
for naming the next release.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][3rd party ci] Removal of jenkins user from DIB images

2018-02-28 Thread Paul Belanger
Greetings,

As we move forward with using zuulv3 more and more, and jenkins less and less,
we are continuing the cleanup of our images.

Specifically, if your 3rd party CI is still using jenkins, please take note of
the following changes[1]. By default our openstack-infra images will no longer
create the jenkins user account; if your CI system still needs the jenkins user,
you'll likely need to update your nodepool.yaml file and add the jenkins-slave
element directly.

If you have issues, please join us in the #openstack-infra IRC channel on
freenode.

[1] https://review.openstack.org/514485/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Team dinner at Dublin PTG

2018-02-26 Thread Paul Belanger
On Mon, Feb 19, 2018 at 06:57:29PM -0500, Paul Belanger wrote:
> On Thu, Feb 15, 2018 at 03:17:58PM -0500, Paul Belanger wrote:
> > Greetings,
> > 
> > It is that time again when we all get out from behind our computers and 
> > attempt
> > to be social for the evening. Talking about great subjects like sportsball 
> > and
> > favorite beers.
> > 
> > As usually, please indicate which datetime works better for you by adding 
> > your
> > name and vote to ethercalc[1].
> > 
> > Right now, we are likely going to end up at a pub for drinks and food, if 
> > you
> > have a specific place in mind, please reply.  I'll do my best to find enough
> > room for everybody, however unsure if everybody will sit together at a large
> > table.
> > 
> > [1] https://ethercalc.openstack.org/pqhemnrgnz7t
> 
> Just a reminder to please take a moment to add your name to the team dinner
> list, so far it looks like we'll meet on Monday or Tuesday night.
> 
> Thanks,
> Paul

Greetings everybody!  Hopefully this isn't too late in the process, but it does
seem Tuesday was the best evening for everybody to meet up.

On Sunday I was at Fagan’s Pub, a short walk from The Croke Park hotel, for
drinks and food, and it seems very Irish.

So, I am proposing that after the Official PTG Networking Reception @ 7:30pm we
meet in the lobby of the hotel and walk over for drinks and food. I haven't
requested any sort of reservation, but if we think that is required, I'm happy
to take some time tomorrow morning to confirm we can seat everybody.

Thanks again, and hopefully clarkb won't slap me with a trout on IRC.

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [tripleo] rdo cloud tenant reach 100 stacks

2018-02-23 Thread Paul Belanger
On Fri, Feb 23, 2018 at 02:20:34PM +0100, Arx Cruz wrote:
> Just an update, we cleaned up the stacks with more than 10 hours, jobs
> should be working properly now.
> 
> Kind regards,
> Arx Cruz
> 
> On Fri, Feb 23, 2018 at 12:49 PM, Arx Cruz  wrote:
> 
> > Hello,
> >
> > We just notice that there are several jobs failing because the
> > openstack-nodepool tenant reach 100 stacks and cannot create new ones.
> >
> > I notice there are several stacks created > 10 hours, and I'm manually
> > deleting those ones.
> > I don't think it will affect someone, but just in case, be aware of it.
> >
> > Kind regards,
> > Arx Cruz
> >

Given that multinode jobs are first class citizens in zuulv3, I'd like to take
some time at the PTG to discuss what would be needed to stop using heat for OVB
and switch to nodepool.

There are a number of reasons to do this: remove te-broker, remove the heat
dependency for testing, use common tooling, etc. I believe there is a CI session
for tripleo one day; I was thinking of bringing it up then, unless there is a
better time.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Release Naming for S - time to suggest a name!

2018-02-20 Thread Paul Belanger
Hey everybody,

Once again, it is time for us to pick a name for our "S" release.

Since the associated Summit will be in Berlin, the Geographic
Location has been chosen as "Berlin" (State).

Nominations are now open. Please add suitable names to
https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
and 2018-03-05 23:59 UTC.

In case you don't remember the rules:

* Each release name must start with the letter of the ISO basic Latin
alphabet following the initial letter of the previous release, starting
with the initial release of "Austin". After "Z", the next name should
start with "A" again.

* The name must be composed only of the 26 characters of the ISO basic
Latin alphabet. Names which can be transliterated into this character
set are also acceptable.

* The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region
under consideration must be declared before the opening of nominations,
as part of the initiation of the selection process.

* The name must be a single word with a maximum of 10 characters. Words
that describe the feature should not be included, so "Foo City" or "Foo
Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may
make an exception for one or more of them to be considered in the
Condorcet poll. The naming official is responsible for presenting the
list of exceptional names for consideration to the TC before the poll opens.

Let the naming begin.

Paul

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] Release Naming for S - time to suggest a name!

2018-02-20 Thread Paul Belanger
Hey everybody,

Once again, it is time for us to pick a name for our "S" release.

Since the associated Summit will be in Berlin, the Geographic
Location has been chosen as "Berlin" (State).

Nominations are now open. Please add suitable names to
https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
and 2018-03-05 23:59 UTC.

In case you don't remember the rules:

* Each release name must start with the letter of the ISO basic Latin
alphabet following the initial letter of the previous release, starting
with the initial release of "Austin". After "Z", the next name should
start with "A" again.

* The name must be composed only of the 26 characters of the ISO basic
Latin alphabet. Names which can be transliterated into this character
set are also acceptable.

* The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region
under consideration must be declared before the opening of nominations,
as part of the initiation of the selection process.

* The name must be a single word with a maximum of 10 characters. Words
that describe the feature should not be included, so "Foo City" or "Foo
Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may
make an exception for one or more of them to be considered in the
Condorcet poll. The naming official is responsible for presenting the
list of exceptional names for consideration to the TC before the poll opens.

Let the naming begin.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Team dinner at Dublin PTG

2018-02-19 Thread Paul Belanger
On Thu, Feb 15, 2018 at 03:17:58PM -0500, Paul Belanger wrote:
> Greetings,
> 
> It is that time again when we all get out from behind our computers and 
> attempt
> to be social for the evening. Talking about great subjects like sportsball and
> favorite beers.
> 
> As usual, please indicate which datetime works better for you by adding your
> name and vote to ethercalc[1].
> 
> Right now, we are likely going to end up at a pub for drinks and food, if you
> have a specific place in mind, please reply.  I'll do my best to find enough
> room for everybody, however I'm unsure if everybody will sit together at a large
> table.
> 
> [1] https://ethercalc.openstack.org/pqhemnrgnz7t

Just a reminder to please take a moment to add your name to the team dinner
list, so far it looks like we'll meet on Monday or Tuesday night.

Thanks,
Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [nodepool] Restricting images to specific nodepool builders

2018-02-19 Thread Paul Belanger
On Mon, Feb 19, 2018 at 08:28:27AM -0500, David Shrewsbury wrote:
> Hi,
> 
> On Sun, Feb 18, 2018 at 10:25 PM, Ian Wienand  wrote:
> 
> > Hi,
> >
> > How should we go about restricting certain image builds to specific
> > nodepool builder instances?  My immediate issue is with ARM64 image
> > builds, which I only want to happen on a builder hosted in an ARM64
> > cloud.
> >
> > Currently, the builders go through the image list and check "is the
> > existing image missing or too old, if so, build" [1].  Additionally,
> > all builders share a configuration file [2]; so builders don't know
> > "who they are".
> >
> >
> 
> Why not just split the builder configuration file? I don't see a need to
> add code
> to do this.
> 
In our case (openstack-infra) this will require another change to
puppet-nodepool to support it. Not that we cannot, but it will mean we'll
have 7[1] different nodepool configuration files to manage: 4 x
nodepool-launchers, 3 x nodepool-builders, since we have 7 services running.

We could update puppet to start templating, or add support for nodepool.d (like
zuul.d) and better split our configs too. I just haven't found time to write
that patch.

I did submit support for homing diskimage builds to a specific builder[2] a while
back, which is more in line with what ianw is asking. This allows us to assign
images to builders, if set.

[1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool
[2] https://review.openstack.org/461239/
> 
> 
> 
> > I'd propose we add an arbitrary tag/match system so that builders can
> > pickup only those builds they mark themselves capable of building?
> >
> > e.g. diskimages would specify required builder tags similar to:
> >
> > ---
> > diskimages:
> >   - name: arm64-ubuntu-xenial
> >     elements:
> >       - block-device-efi
> >       - vm
> >       - ubuntu-minimal
> >       ...
> >     env-vars:
> >       TMPDIR: /opt/dib_tmp
> >       DIB_CHECKSUM: '1'
> >       ...
> >     builder-requires:
> >       architecture: arm64
> > ---
> >
> > The nodepool.yaml would grow another section similar:
> >
> > ---
> > builder-provides:
> >   architecture: arm64
> >   something_else_unique_about_this_buidler: true
> > ---
> >
> > For OpenStack, we would template this section in the config file via
> > puppet in [2], ensuring above that only our theoretical ARM64 build
> > machine had that section in it's config.
> >
> > The nodepool-buidler build loop can then check that its
> > builder-provides section has all the tags specified in an image's
> > "builder-requires" section before deciding to start building.
> >
> > Thoughts welcome :)
> >
> > -i
> >
> > [1] https://git.openstack.org/cgit/openstack-infra/nodepool/
> > tree/nodepool/builder.py#n607
> > [2] https://git.openstack.org/cgit/openstack-infra/project-
> > config/tree/nodepool/nodepool.yaml
> >
> > ___
> > Zuul-discuss mailing list
> > zuul-disc...@lists.zuul-ci.org
> > http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss
> >
> 
> 
> 
> -- 
> David Shrewsbury (Shrews)

> ___
> Zuul-discuss mailing list
> zuul-disc...@lists.zuul-ci.org
> http://lists.zuul-ci.org/cgi-bin/mailman/listinfo/zuul-discuss


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Team dinner at Dublin PTG

2018-02-15 Thread Paul Belanger
Greetings,

It is that time again when we all get out from behind our computers and attempt
to be social for the evening. Talking about great subjects like sportsball and
favorite beers.

As usual, please indicate which datetime works better for you by adding your
name and vote to ethercalc[1].

Right now, we are likely going to end up at a pub for drinks and food, if you
have a specific place in mind, please reply.  I'll do my best to find enough
room for everybody, however I'm unsure if everybody will sit together at a large
table.

[1] https://ethercalc.openstack.org/pqhemnrgnz7t

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ?

2018-02-13 Thread Paul Belanger
On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote:
> Hi Infra Team,
> 
> I have 1 quick question on zuulv3 jobs and their migration part. From
> zuulv3 doc [1], it is clear about migrating the job definition and use
> those among cross repo pipeline etc.
> 
> But I did not find clear recommendation that whether project's
> pipeline definition should stay in project-config or we should move
> that to project side.
> 
> IMO,
> 'template' part(which has system level jobs) can stay in
> project-config. For example below part-
> 
> https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e38301927b9b6/zuul.d/projects.yaml#L10507-L10523
> 
> Other pipeline definition- 'check', 'gate', 'experimental' etc should
> be move to project repo, mainly this list-
> https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019
> 
> If we move those past as mentioned above then, we can have a
> consolidated place to control the project pipeline for
> 'irrelevant-files', specific branch etc
> 
> ..1 https://docs.openstack.org/infra/manual/zuulv3.html
> 
As it works today, the pipeline stanza needs to be in a config project[1]
(project-config) repo, so what you are suggesting will not work. This was done
to allow zuul admins to control which pipelines are set up / configured.

I am mostly curious why a project would need to modify a pipeline configuration,
or duplicate it into all projects, over having it centrally located in
project-config.

[1] https://docs.openstack.org/infra/zuul/user/config.html#pipeline
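To make the distinction concrete, here's a rough sketch (pipeline and job names
are only examples): the pipeline stanza can only live in a config project like
project-config, while the per-project job listing is a project stanza, which is
the part a repo's own .zuul.yaml can carry.

# In a config project such as project-config (trusted); only zuul admins touch this.
- pipeline:
    name: check
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created
    success:
      gerrit:
        Verified: 1
    failure:
      gerrit:
        Verified: -1

# In a project's own .zuul.yaml (untrusted); the per-project job list.
- project:
    check:
      jobs:
        - openstack-tox-py35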
> 
> -gmann
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI'ing ceph-ansible against TripleO scenarios

2018-01-25 Thread Paul Belanger
On Thu, Jan 25, 2018 at 04:22:56PM -0800, Emilien Macchi wrote:
> Is there any plans to run TripleO CI jobs in ceph-ansible?
> I know the project is on github but thanks to zuulv3 we can now easily
> configure ceph-ansible to run Ci jobs in OpenStack Infra.
> 
> It would be really great to investigate that in the near future so we avoid
> eventual regressions.
> Sebastien, Giulio, John, thoughts?
> -- 
> Emilien Macchi

Just a note, we haven't actually agreed to enable CI for github projects just
yet.  While it is something zuul can do now, I believe we still need to decide
when / how to enable it.

We are doing some initial testing with ansible/ansible however.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Paul Belanger
On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
> 
> Since today, when I try to install devstack-plugin-container plugin over
> fedora. It complains in here [1] about not being able to sync the cache
> for the repo with the following error [2].
> 
> This is affecting me on Fedora26+ from different network locations, so I
> was wondering if someone from suse could have a look (it did work for
> Andreas in opensuse... thanks in advance!)
> 
> [1]
> https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
> 
> [2] http://paste.openstack.org/show/652041/
> 
We should consider mirroring this into our AFS mirror infrastructure to help
remove the dependency on opensuse servers. Then each regional mirror has a copy
and we don't always need to hit upstream.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Talks for the Vancouver CFP ?

2018-01-23 Thread Paul Belanger
On Mon, Jan 22, 2018 at 05:12:55PM -0500, David Moreau Simard wrote:
> Hi,
> 
> Did we want to brainstorm around topics and talks suggestions from an
> openstack-infra perspective for Vancouver [1] ?
> 
> The deadline is February 8th and the tracks are the following:
> - CI / CD
> - Container Infrastructure
> - Edge Computing
> - HPC / GPU / AI
> - Open Source Community
> - Private & Hybrid Cloud
> - Public Cloud
> - Telecom & NFV
> 
> CI/CD has Zuul and Nodepool written all over it, of course.
> FWIW I'm already planning on submitting a talk that covers how a
> commit in an upstream project ends up being released by RDO which
> includes the upstream Zuul and RDO's instance of Zuul (amongst other
> things).
> 
> I started an etherpad [2], we can brainstorm there ?
> 
> [1]: https://www.openstack.org/summit/vancouver-2018/call-for-presentations/
> [2]: https://etherpad.openstack.org/p/infra-vancouver-cfp
> 
I'd like to see if we can do the zuulv3 workshop again. I think it went well in
Sydney, and being the 2nd time around I know of some changes that could be made.
I was likely going to propose that.

Another one we've done in the past is to give an overview of what
openstack-infra is. Maybe this time around we can discuss the evolution towards
becoming a hosting platform with projects like kata and zuul-ci.org.

-Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] Many timeouts in zuul gates for TripleO

2018-01-19 Thread Paul Belanger
On Fri, Jan 19, 2018 at 11:23:45AM -0600, Ben Nemec wrote:
> 
> 
> On 01/18/2018 09:45 AM, Emilien Macchi wrote:
> > On Thu, Jan 18, 2018 at 6:34 AM, Or Idgar  wrote:
> > > Hi,
> > > we're encountering many timeouts for zuul gates in TripleO.
> > > For example, see
> > > http://logs.openstack.org/95/508195/28/check-tripleo/tripleo-ci-centos-7-ovb-ha-oooq/c85fcb7/.
> > > 
> > > rechecks won't help and sometimes specific gate is end successfully and
> > > sometimes not.
> > > The problem is that after recheck it's not always the same gate which is
> > > failed.
> > > 
> > > Is there someone who have access to the servers load to see what cause 
> > > this?
> > > alternatively, is there something we can do in order to reduce the running
> > > time for each gate?
> > 
> > We're migrating to RDO Cloud for OVB jobs:
> > https://review.openstack.org/#/c/526481/
> > It's a work in progress but will help a lot for OVB timeouts on RH1.
> > 
> > I'll let the CI folks comment on that topic.
> > 
> 
> I noticed that the timeouts on rh1 have been especially bad as of late so I
> did a little testing and found that it did seem to be running more slowly
> than it should.  After some investigation I found that 6 of our compute
> nodes have warning messages that the cpu was throttled due to high
> temperature.  I've disabled 4 of them that had a lot of warnings. The other
> 2 only had a handful of warnings so I'm hopeful we can leave them active
> without affecting job performance too much.  It won't accomplish much if we
> disable the overheating nodes only to overload the remaining ones.
> 
> I'll follow up with our hardware people and see if we can determine why
> these specific nodes are overheating.  They seem to be running 20 degrees C
> hotter than the rest of the nodes.
> 
Did tripleo-test-cloud-rh1 get new kernels applied for meltdown / spectre? It's
possible that is impacting performance too.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron][octavia][horizon][networking-l2gw] Renaming tox_venvlist in Zuul v3 run-tempest

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 05:13:28PM +0100, Andreas Jaeger wrote:
> The Zuul v3 tox jobs use "tox_envlist" to name the tox environment to
> use, the tempest run-tempest role used "tox_venvlist" with an extra "v"
> in it. This lead to some confusion and a wrong fix, so let's be
> consistent across these jobs.
> 
> I've just pushed changes under the topic tox_envlist to sync these.
> 
> To have working jobs, I needed the usual rename dance: Add the new
> variable, change the job, remove the old one.
> 
> Neutron, octavia, horizon, networking-l2gw team, please review and merge
> the first one quickly.
> 
> https://review.openstack.org/#/q/topic:tox_envlist
> 
++

Agree, in fact it would be good to see what would need to change with our
existing run-tox role to have tempest consume it directly, rather than using its
own tasks for running tox.
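For anyone following along, a job picking a tox environment would then look
roughly like this (the job name below is hypothetical; the tox parent job comes
from zuul-jobs and drives the run-tox role):

- job:
    name: example-tox-cover      # hypothetical job name
    parent: tox                  # base job from zuul-jobs, which runs the run-tox role
    vars:
      tox_envlist: cover         # the one consistent variable name, no extra "v"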

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][goals][storyboard] tracking the rocky goals with storyboard

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 01:11:40PM -0800, Emilien Macchi wrote:
> On Fri, Jan 12, 2018 at 12:37 PM, Doug Hellmann  wrote:
> > Since we are discussing goals for the Rocky cycle, I would like to
> > propose a change to the way we track progress on the goals.
> >
> > We've started to see lots and lots of changes to the goal documents,
> > more than anticipated when we designed the system originally. That
> > leads to code review churn within the governance repo, and it means
> > the goal champions have to wait for the TC to review changes before
> > they have complete tracking information published somewhere. We've
> > talked about moving the tracking out of git and using an etherpad
> > or a wiki page, but I propose that we use storyboard.
> >
> > Specifically, I think we should create 1 story for each goal, and
> > one task for each project within the goal. We can then use a board
> > to track progress, with lanes like "New", "Acknowledged", "In
> > Progress", "Completed", and "Not Applicable". It would be the
> > responsibility of the goal champion to create the board, story, and
> > tasks and provide links to the board and story in the goal document
> > (so we only need 1 edit after the goal is approved). From that point
> > on, teams and goal champions could collaborate on keeping the board
> > up to date.
> >
> > Not all projects are registered in storyboard, yet. Since that
> > migration is itself a goal under discussion, I think for now we can
> > just associate all tasks with the governance repository.
> >
> > It doesn't look like changes to a board trigger any sort of
> > notifications for the tasks or stories involved, but that's probably
> > OK. If we really want notifications we can look at adding them as
> > a feature of Storyboard at the board level.
> >
> > How does this sound as an approach? Does anyone have any reservations
> > about using storyboard this way?
> 
> Sounds like a good idea, and will help to "Eat Our Own Dog Food" (if
> we want Storyboard adopted at some point).
>
Agree, I've seen some downstream teams also do this with trello. If people
would like to try with Storyboard, I don't have any objections.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
> Wu dniu 12.01.2018 o 01:09, Ian Wienand pisze:
> > On 01/10/2018 08:41 PM, Gema Gomez wrote:
> >> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
> >> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
> >> space.
> > Does this mean you're planning on using diskimage-builder to produce
> > the images to run tests on?  I've seen occasional ARM things come by,
> > but of course diskimage-builder doesn't have CI for it (yet :) so it's
> > status is probably "unknown".
> 
> I had a quick look at diskimage-builder tool.
> 
> It looks to me that you always build MBR based image with one partition.
> This will have to be changed as AArch64 is UEFI based platform (both
> baremetal and VM) so disk needs to use GPT for partitioning and EFI
> System Partition needs to be present (with grub-efi binary on it).
> 
It is often the case when bringing new images online that some changes to DIB
will be required to support them. I suspect somebody with access to AArch64
hardware will first need to run build-image.sh[1] and paste the build.log. That
will build an image locally for you using our DIB elements.

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh
> I am aware that you like to build disk images on your own but have you
> considered using virt-install with generated preseed/kickstart files? It
> would move several arch related things (like bootloader) to be handled
> by distribution rules instead of handling them again in code.
> 
I don't believe we want to look at using a new tool to build all our images;
switching to virt-install would be a large change. There are reasons why we
build images from scratch, and I don't believe switching to virt-install would
help with that.
> 
> Sent a patch to make it choose proper grub package on aarch64.
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] ze04 & #532575

2018-01-11 Thread Paul Belanger
On Thu, Jan 11, 2018 at 07:58:11AM -0500, David Shrewsbury wrote:
> This is probably mostly my fault since I did not WIP or -2 my change in
> 532575 to keep it
> from getting merged without some infra coordination.
> 
> Because of that change, it is also required that we change the user
> zuul-executor starts
> as from root to zuul [1], and that we also open up the new default finger
> port on the
> executors [2]. Once those are in place, we should be ok to restart the
> executors.
> 
> As for ze04, since that one restarted as the 'root' user, and never dropped
> privileges
> to the 'zuul' user due to 532575, I'm not sure what state it is going to be
> in after applying
> [1] and [2]. Would it create files/directories as root that would now be
> inaccessible if it
> were to restart with the zuul user? Think logs, work dirs, etc...
> 
For permissions, we should likely confirm that puppet-zuul will properly set up
zuul:zuul on the required folders. Then on the next puppet run we'd be protected.
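As a stop-gap on ze04 itself, an ad-hoc play along these lines could put
ownership back (the paths are my assumption of where the executor keeps state
and logs; infra manages this via puppet-zuul, so treat it only as a sketch):

- hosts: ze04.openstack.org
  become: true
  tasks:
    - name: Recursively restore zuul ownership on executor state and log dirs
      file:
        path: "{{ item }}"
        owner: zuul
        group: zuul
        recurse: yes
      with_items:
        - /var/lib/zuul     # executor state / work dirs (assumed path)
        - /var/log/zuul     # executor logs (assumed path)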
> 
> -Dave
> 
> 
> [1] https://review.openstack.org/532594
> [2] https://review.openstack.org/532709
> 
> 
> On Wed, Jan 10, 2018 at 11:53 PM, Ian Wienand  wrote:
> 
> > Hi,
> >
> > To avoid you having to pull apart the logs starting ~ [1], we
> > determined that ze04.o.o was externally rebooted at 01:00UTC (there is
> > a rather weird support ticket which you can look at, which is assigned
> > to a rackspace employee but in our queue, saying the host became
> > unresponsive).
> >
> > Unfortunately that left a bunch of jobs orphaned and necessitated a
> > restart of zuul.
> >
> > However, recent changes to not run the executor as root [2] were thus
> > partially rolled out on ze04 as it came up after reboot.  As a
> > consequence when the host came back up the executor was running as
> > root with an invalid finger server.
> >
> > The executor on ze04 has been stopped, and the host placed in the
> > emergency file to avoid it coming back.  There are now some in-flight
> > patches to complete this transition, which will need to be staged a
> > bit more manually.
> >
> > The other executors have been left as is, based on the KISS theory
> > they shouldn't restart and pick up the code until this has been dealt
> > with.
> >
> > Thanks,
> >
> > -i
> >
> >
> > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-
> > infra/%23openstack-infra.2018-01-11.log.html#t2018-01-11T01:09:20
> > [2] https://review.openstack.org/#/c/532575/
> >
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> 
> 
> 
> 
> -- 
> David Shrewsbury (Shrews)

> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Hostnames

2018-01-06 Thread Paul Belanger
On Sat, Jan 06, 2018 at 10:21:12AM -0800, Clark Boylan wrote:
> On Sat, Jan 6, 2018, at 10:03 AM, James E. Blair wrote:
> > Hi,
> > 
> > It seems that every time we boot a new server, it either randomly has a
> > hostname of foo, or foo.openstack.org.  And maybe that changes between
> > the first boot and second.
> > 
> > The result of this is that our services which require that they know
> > their hostname (which is a lot, especially the complicated ones) end up
> > randomly working or not.  We waste time repeating the same diagnosis and
> > manual fix each time.
> > 
> > What is the cause of this, and how do we fix this correctly?
> 
> It seems to be an intentional behavior [0] of part of the launch node build 
> process [1]. We could remove the split entirely there and in the hosts and 
> mailnametemplate to use fqdns as hostname to fix it.
> 
> [0] 
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/playbooks/roles/set_hostname/tasks/main.yml#n12
> [1] 
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/launch/launch-node.py#n209
> 
> Clark
> 
We also talked about removing cloud-init, which has been known to modify our
hostnames on reboot.  When I last looked (a few months ago) that was the reason
for the renames; unsure this time.

I know we also talked about building our own DIBs for control plane servers,
which would move us to glean by default. In the past we discussed using nodepool
to build the images, but didn't want to add passwords for rax into nodepool.o.o.
That would mean a 2nd instance of nodepool; do people think that would work? Or
maybe some sort of periodic job, with the credentials stored in zuul secrets?
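If we went the periodic job route, the credentials could ride along as a Zuul
secret, roughly like this (the names are hypothetical and the ciphertext is a
placeholder produced with zuul's tools/encrypt_secret.py):

- secret:
    name: control-plane-image-cloud-creds     # hypothetical secret name
    data:
      username: image-builder
      password: !encrypted/pkcs1-oaep |
        <ciphertext produced with zuul's tools/encrypt_secret.py>

- job:
    name: build-control-plane-image           # hypothetical periodic job
    run: playbooks/build-image.yaml
    secrets:
      - control-plane-image-cloud-creds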

PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul roadmap

2017-12-12 Thread Paul Belanger
On Tue, Dec 12, 2017 at 05:01:41PM +0100, Matthieu Huin wrote:
> Hello,
> 
> If the getting-started documentation effort is also aimed at end
> users, I'd be happy to help Leif with this: we've written a quick
> start guide for Software Factory explaining how to set up pipelines
> and jobs with Zuul3, and this would probably be better hosted upstream
> with minimal adaptations. Let me know if there's interest for this
> (the storyboard item at
> https://storyboard.openstack.org/#!/story/2001332 doesn't specify
> which kind of doc is expected) and I can submit some patches to the
> documentatoin.
> 
I was just talking with leifmadsen about it this morning and we're going to
organize a working group on docs in the coming days. With holidays coming up
quick, it might be difficult to wrap things up before Christmas.

I know there has already been some discussion between Jim and Leif, plus what myself
and Leif documented in the etherpad[1]. Using Fedora and github, I believe the
etherpad notes are correct. So the next steps are reformatting them into RST and
tuning the docs.

TL;DR we have some docs, and jobs ran, now to make that a little more user
friendly.

[1] https://etherpad.openstack.org/p/zuulv3-quickstart

> Best regards,
> 
> MHU
> 
> On Fri, Dec 8, 2017 at 9:25 PM, David Shrewsbury
>  wrote:
> > Hi!
> >
> > On Wed, Dec 6, 2017 at 10:34 AM, James E. Blair  wrote:
> >
> > 
> >
> >>
> >> * Add finger gateway
> >>
> >> The fact that the executor must be started as root in order to listen on
> >> port 79 is awkward for new users.  It can be avoided by configuring it
> >> to listen on a different port, but that's also awkward.  In either case,
> >> it's a significant hurdle, and it's one of the most frequently asked
> >> questions in IRC.  The plan to deal with this is to create a new service
> >> solely to multiplex the finger streams.  This is very similar to the
> >> zuul-web service for multiplexing the console streams, so all the
> >> infrastructure is in place.  And of course, running this service will be
> >> optional, so it means that new users don't even have to deal with it to
> >> get up and running, like they do now.  Adding a new service to the 3.0
> >> list should not be done lightly, but the improvement in experience for
> >> new users will be significant.
> >>
> >> David Shrewsbury has started on this.  I don't think it is out of reach.
> >
> >
> >
> > Indeed, it is not out of reach:
> >
> >https://review.openstack.org/525276
> >
> >
> >
> > --
> > David Shrewsbury (Shrews)
> >
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] Keystone Team Update - Week of 4 December 2017

2017-12-08 Thread Paul Belanger
On Fri, Dec 08, 2017 at 05:58:05PM +, Jeremy Stanley wrote:
> On 2017-12-08 18:48:56 +0100 (+0100), Colleen Murphy wrote:
> [...]
> > The major hindrance to keystone using Storyboard is its lack of
> > support for private bugs, which is a requirement given that
> > keystone is a VMT-managed project. If anyone is tired of keystone
> > work and wants to help the Storyboard team with that feature I'm
> > sure they would appreciate it!
> [...]
> 
> I also followed up on this topic in the SB meeting yesterday[*] and
> realized support is much further along than I previously recalled.
> In summary, SB admins can define "teams" (e.g., OpenStack VMT) and
> anyone creating a private story can choose to share it with any
> teams or individual accounts they like. What we're mostly missing at
> this point is a streamlined reporting mechanism to reduce the steps
> (and chances to make mistakes) when reporting a suspected
> vulnerability. A leading candidate solution would be support for
> custom reporting URLs which can feed arbitrary values into the
> creation form.
> 
> [*] 
> http://eavesdrop.openstack.org/meetings/storyboard/2017/storyboard.2017-12-06-19.05.log.html#l-36
> 
Thanks to both for pointing this out again. I too didn't know it was possible to
create teams for private stories. Sounds like we are slowly making progress on
this blocker for some of our projects.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Gate Issues

2017-12-08 Thread Paul Belanger
On Fri, Dec 08, 2017 at 08:38:24PM +1100, Ian Wienand wrote:
> Hello,
> 
> Just to save people reverse-engineering IRC logs...
> 
> At ~04:00UTC frickler called out that things had been sitting in the
> gate for ~17 hours.
> 
> Upon investigation, one of the stuck jobs was a
> legacy-tempest-dsvm-neutron-full job
> (bba5d98bb7b14b99afb539a75ee86a80) as part of
> https://review.openstack.org/475955
> 
> Checking the zuul logs, it had sent that to ze04
> 
>   2017-12-07 15:06:20,962 DEBUG zuul.Pipeline.openstack.gate: Build  bba5d98bb7b14b99afb539a75ee86a80 of   legacy-tempest-dsvm-neutron-full on 
> > started
> 
> However, zuul-executor was not running on ze04.  I believe there were
> issues with this host yesterday.  "/etc/init.d/zuul-executor start" and
> "service zuul-executor start" reported as OK, but didn't actually
> start the daemon.  Rather than debug, I just used
> _SYSTEMCTL_SKIP_REDIRECT=1 and that got it going.  We should look into
> that, I've noticed similar things with zuul-scheduler too.
> 
> At this point, the evidence suggested zuul was waiting for jobs that
> would never return.  Thus I saved the queues, restarted zuul-scheduler
> and re-queued.
> 
> Soon after frickler again noticed that releasenotes jobs were now
> failing with "could not import extension openstackdocstheme" [1].  We
> suspect [2].
> 
> However, the gate did not become healthy.  Upon further investigation,
> the executors are very frequently failing jobs with
> 
>  2017-12-08 06:41:10,412 ERROR zuul.AnsibleJob: [build: 
> 11062f1cca144052afb733813cdb16d8] Exception while executing job
>  Traceback (most recent call last):
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 588, in execute
>  str(self.job.unique))
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 702, in _execute
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 1157, in prepareAnsibleFiles
>File "/usr/local/lib/python3.5/dist-packages/zuul/executor/server.py", 
> line 500, in make_inventory_dict
>  for name in node['name']:
>  TypeError: unhashable type: 'list'
> 
> This is leading to the very high "retry_limit" failures.
> 
> We suspect change [3] as this did some changes in the node area.  I
> did not want to revert this via a force-merge, I unfortunately don't
> have time to do something like apply manually on the host and babysit
> (I did not have time for a short email, so I sent a long one instead :)
> 
> At this point, I sent the alert to warn people the gate is unstable,
> which is about the latest state.
> 
> Good luck,
> 
> -i
> 
> [1] 
> http://logs.openstack.org/95/526595/1/check/build-openstack-releasenotes/f38ccb4/job-output.txt.gz
> [2] https://review.openstack.org/525688
> [3] https://review.openstack.org/521324
> 
Digging into some of the issues this morning, I believe that citycloud-sto2 has
been wedged for some time. I see ready / locked nodes sitting for 2+ days.  We
also have a few ready / locked nodes in rax-iad, which I think are related to
the unhashable list error from this morning.

As I understand it, the only way to release these nodes is to stop the
scheduler, is that correct? If so, I'd like to request we add some sort of CLI
--force option to delete, or some other command, if it makes sense.

I'll hold off on a restart until jeblair or shrews has a moment to look at logs.

Paul

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [tc] [all] TC Report 49

2017-12-07 Thread Paul Belanger
On Thu, Dec 07, 2017 at 05:28:53PM +, Jeremy Stanley wrote:
> On 2017-12-07 17:08:55 + (+), Graham Hayes wrote:
> [...]
> > It is worth remembering that this is a completely separate project to
> > OpenStack, with its own governance. They are free not to use our tooling
> > (and considering a lot of containers work is on github, they may never
> > use it).
> 
> Right. We'd love it of course if they decide they want to, and the
> Infra team is welcoming and taking the potential needs of these
> other communities into account in some upcoming refactoring of
> services and new features, but to a great extent they're on their
> own journeys and need to decide for themselves what tools and
> solutions will work best for their particular contributors and
> users. The OpenStack community sets a really great example I expect
> many of them will want to emulate and converge on over time, but
> that works better if they come to those sorts of conclusions on
> their own rather than being told what to do/use.
> -- 
> Jeremy Stanley

It seems there are at least some services they'd be interested in using, mailman
for example.  Does this mean it would be a la carte services, where new projects
mix and match which things they'd like to have managed and which not?

As for being told what to do/use, I'm sure there must be some process in place,
or something we want, to give an overview of the services we already have available
and why a project might want to use them.  But I agree with the comments about 'own
journeys and need to decide for themselves'.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Nodepool drivers

2017-12-07 Thread Paul Belanger
On Thu, Dec 07, 2017 at 09:34:50AM +, Tristan Cacqueray wrote:
> Hi,
> 
> Top posting here to raise another complication.
> James mentioned an API problem regarding the NodeRequestHandler
> interface. Indeed the run_handler method should actually be part of the
> generic code so that the driver's handler only implements the 'launch' method.
> 
> Unfortunately, this is another refactor where we need to move and
> abstract a good chunk of the openstack handler... I worked on a first
> implementation that adds new handler interfaces to address the openstack
> driver needs (such as setting az when a node is reused):
>  https://review.openstack.org/526325
> 
> Well I'm not sure what's the best repartition of roles between the
> handler, the node_launcher and the provider, so feedback would be
> appreciated.
> 
> 
> I also proposed a 'plugin' interface so that driver are fully contained
> in their namespace, which seems like another legitimate addition to this
> feature:
>  https://review.openstack.org/524620
> 
I like the idea of some sort of plugin interface, if only to allow for out-of-tree
drivers to be maintained more easily. I found stevedore easy enough to use when
I had to write some openstack plugins in the past; is that something we might
look into reusing here?

> 
> Thanks,
> -Tristan
> 
> 
> On December 2, 2017 1:30 am, Clint Byrum wrote:
> > Excerpts from corvus's message of 2017-12-01 16:08:00 -0800:
> > > Tristan Cacqueray  writes:
> > > 
> > > > Hi,
> > > >
> > > > Now that the zuulv3 release is approaching, please find below a
> > > > follow-up on this spec.
> > > >
> > > > The current code could use one more patch[0] to untangle the common
> > > > config from the openstack provider specific bits. The patch often needs
> > > > to be manualy rebased. Since it looks like a good addition to what
> > > > has already been merged, I think we should consider it for the release.
> > > >
> > > > Then it seems like new drivers are listed as 'future work' on the
> > > > zuul roadmap board, though they are still up for review[1].
> > > > They are fairly self contained and they don't require further
> > > > zuul or nodepool modification, thus they could be easily part of a
> > > > future release indeed.
> > > >
> > > > However I think we should re-evaluate them for the release one more
> > > > time since they enable using zuul without an OpenStack cloud.
> > > > Anyway I remain available to do the legwork.
> > > >
> > > > Regards,
> > > > -Tristan
> > > >
> > > > [0]: https://review.openstack.org/488384
> > > > [1]: https://review.openstack.org/468624
> > > 
> > > I think getting the static driver in to the 3.0 release is reasonable --
> > > most of the work is done, and I think it will make simple or test
> > > deployments of Zuul much easier.  That can make for a better experience
> > > for users trying out Zuul.
> > > 
> > > I'd support moving that to the 3.0 roadmap, but reserving further
> > > drivers for later work.  Thanks!
> > 
> > +1. The static driver has come up a few times in my early experiments
> > and I keep bouncing off of it.
> > 
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra



> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Virtual Sprint Queens

2017-11-30 Thread Paul Belanger
Greetings,

I have created an etherpad[1] with some thoughts about doing a virtual sprint to
upgrade our control plane to Ubuntu Xenial. Please take a moment to read and
indicate when you could assist.

While creating / deleting / upgrading control plane servers does require
infra-root permissions, there is a fair bit of stuff a non infra-root can do to
help. Specifically, we are likely going to need some updates to our puppet
manifests for some servers. This can be done by launching puppet locally
using openstack-infra/system-config on an Ubuntu Xenial VM and confirming whether it
works or not. If not, pushing those patches into gerrit will be more than welcome!

I've also created 2 groups of servers, ephemeral and long lived. Ephemeral means
we can likely just delete / create the server without any downtime or impact to
the OpenStack project, whereas our long lived servers are going to require
downtime, DNS updates, etc.

Last time we did this, we managed to knock out a large number of server upgrades
in a week, and I am hoping we'll be able to reproduce that!

As always, reply here or #openstack-infra with questions.

-Paul

[1] https://etherpad.openstack.org/p/infra-sprint-xenial-upgrades

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [infra][all] Removal of packages from bindep-fallback

2017-11-23 Thread Paul Belanger
On Thu, Nov 23, 2017 at 07:55:07PM +0100, Andreas Jaeger wrote:
> On 2017-11-16 06:59, Ian Wienand wrote:
> > Hello,
> > 
> > Some time ago we started the process of moving towards projects being
> > more explicit about thier binary dependencies using bindep [1]
> > 
> > To facilitate the transition, we created a "fallback" set of
> > dependencies [2] which are installed when a project does not specifiy
> > it's own bindep dependencies.  This essentially replicated the rather
> > ad-hoc environment provided by CI images before we started the
> > transition.
> > 
> > This list has acquired a few packages that cause some problems in
> > various situations today.  Particularly packages that aren't in the
> > increasing number of distributions we provide, or packages that come
> > from alternative repositories.
> > 
> > To this end, [3,4] proposes the removal of
> > 
> >  liberasurecode-*
> >  mongodb-*
> >  python-zmq
> >  redis
> >  zookeeper
> >  ruby-*
> > 
> > from the fallback packages.  This has a small potential to affect some
> > jobs that tacitly rely on these packages.
> 
> This has now merged. One fallout I noticed is that pcre-devel is not
> installed anymore and thus "pcre.h" is not found when building some
> python files. If that hits you, update your bindep.txt file with:
> pcre-devel [platform:rpm]
> libpcre3-dev [platform:dpkg]
> 
Good to know, I've been trying to see what other jobs are failing, and haven't
seen too much yet.

> For details about bindep, see
> https://docs.openstack.org/infra/manual/drivers.html#package-requirements
> 
> Andreas
> 
> > NOTE: this does *not* affect devstack jobs (devstack manages it's own
> > dependencies outside bindep) and if you want them back, it's just a
> > matter of putting them into the bindep file in your project (and as a
> > bonus, you have better dependency descriptions for your code).
> > 
> > We should be able to then remove centos-release-openstack-* from our
> > centos base images too [5], which will make life easier for projects
> > such as triple-o who have to work-around that.
> > 
> > If you have concerns, please reach out either via mail or in
> > #openstack-infra
> > 
> > Thank you,
> > 
> > -i
> > 
> > [1] https://docs.openstack.org/infra/bindep/
> > [2] 
> > https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt
> > [3] https://review.openstack.org/519533
> > [4] https://review.openstack.org/519534
> > [5] https://review.openstack.org/519535
> 
> 
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing internet access from unit test gates

2017-11-21 Thread Paul Belanger
On Tue, Nov 21, 2017 at 04:41:13PM +, Mooney, Sean K wrote:
> 
> 
> > -Original Message-
> > From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> > Sent: Tuesday, November 21, 2017 3:05 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] Removing internet access from unit test
> > gates
> > 
> > On 2017-11-21 09:28:20 +0100 (+0100), Thomas Goirand wrote:
> > [...]
> > > The only way that I see going forward, is having internet access
> > > removed from unit tests in the gate, or probably just the above
> > > variables set.
> > [...]
> > 
> > Historically, our projects hadn't done a great job of relegating their
> > "unit test" jobs to only run unit tests, and had a number of what would
> > be commonly considered functional tests mixed in. This has improved in
> > recent years as many of those projects have created separate functional
> > test jobs and are able to simplify their unit test jobs accordingly, so
> > this may be more feasible now than it was in the past.
> > 
> > Removing network access from the machines running these jobs won't
> > work, of course, because our job scheduling and execution service needs
> > to reach them over the Internet to start jobs, monitor progress and
> > collect results. As you noted, faking Python out with envvars pointing
> > it at nonexistent HTTP proxies might help at least where tests attempt
> > to make HTTP(S) connections to remote systems.
> > The Web is not all there is to the Internet however, so this wouldn't
> > do much to prevent use of remote DNS, NTP, SMTP or other
> > non-HTTP(S) protocols.
> > 
> > The biggest wrinkle I see in your "proxy" idea is that most of our
> > Python-based projects run their unit tests with tox, and it will use
> > pip to install project and job dependencies via HTTPS prior to starting
> > the test runner. As such, any proxy envvar setting would need to happen
> > within the scope of tox itself so that it will be able to set up the
> > virtualenv prior to configuring the proxy vars for the ensuing tests.
> > It might be easiest for you to work out the tox.ini modification on one
> > project (it'll be self-testing at least) and then once the pilot can be
> > shown working the conversation with the community becomes a little
> > easier.
> [Mooney, Sean K] I may be over simplifying here but our unit tests are still 
> executed by
> Zuul in vms provided by nodepool. Could we simply take advantage of openstack 
> and
> use security groups to to block egress traffic from the vm except that 
> required to upload the logs?
> e.g. don't mess with tox or proxyies within the vms and insteand do this 
> externally via neutron.
> This would require the cloud provider to expose neutron however which may be 
> an issue for Rackspace but
> It its only for unit test which are relatively short lived vs tempest jobs 
> perhaps the other providers would
> Still have enough capacity?
> > --
> > Jeremy Stanley
> 
I don't think we'd need to use security groups; we could just set up a local
firewall ruleset to do this on the node if we wanted.
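For example, a pre playbook could lay down something like the following (a rough
sketch only; the rules would need tuning so the executor's SSH connection and
log uploads keep working):

- hosts: all
  become: true
  tasks:
    - name: Keep established connections alive (e.g. the executor's SSH session)
      iptables:
        chain: OUTPUT
        ctstate: [ESTABLISHED, RELATED]
        jump: ACCEPT

    - name: Allow loopback so local services used by the unit tests keep working
      iptables:
        chain: OUTPUT
        out_interface: lo
        jump: ACCEPT

    - name: Drop all other outbound traffic from the test node
      iptables:
        chain: OUTPUT
        jump: DROP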

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Zuul roadmap

2017-11-21 Thread Paul Belanger
On Wed, Nov 01, 2017 at 02:47:20PM -0700, James E. Blair wrote:
> Hi,
> 
> At the PTG we brainstormed a road map for Zuul once we completed the
> infra cutover.  I think we're in a position now that we can get back to
> thinking about this, so I've (slightly) cleaned it up and organized it
> here.
> 
> I've grouped into a number of sections.  First:
> 
> Very Near Term
> --
> 
> These are things that we should be able to land within just a few weeks
> at most, once we're back from the OpenStack summit and can pay more
> attention to work other than the openstack-infra migration.  All of
> these are already in progress (some are basically finished) and all have
> a primary driver assigned:
> 
> * granular quota support in nodepool (tobias)
> * zuul-web dashboard (tristanC)
> * update private key api for zuul-web (jeblair)
> * github event ingestion via zuul-web (jlk)
> * abstract flag (do not run this job) (jeblair)
> * zuul_json fixes (dmsimard)
> 
> Short Term
> --
> 
> These are things we should be able to do within the weeks or months
> following.  Some have had work start on them already and have a driver
> assigned, others are still up for grabs.  These are things we really
> ought to get done before the v3.0 release because either they involve
> some of the defining features of v3, make it possible to actually deploy
> and run v3, or may involve significant changes for which we don't want
> to have to deal with backwards compatability.
> 
> * refactor config loading (jeblair)
> * protected flag (inherit only within this project) (jeblair)
> * refactor zuul_stream and add testing (mordred)
> * getting-started documentation (leifmadsen)
> * demonstrate openstack-infra reporting on github
I can start working on this one; are there any objections if we use
gtest-org/ansible first?

> * cross-source dependencies
> * add command socket to scheduler and merger for consistent start/stop
I can see about working on this too

> * finish git driver
> * standardize javascript tooling
> 
> -- v3.0 release 
> 
> Yay!  After we release...
> 
> Medium Term
> ---
> 
> Once the initial v3 release is out the door, there are some things that
> we have been planning on for a while and should work on to improve the
> v3 story.  These should be straightforward to implement, but these don't
> need to hold up the release and can easily fit into v3.1.
> 
> * add line comment support to reporters
> * gerrit ci reporting (2.14)
> * add cleanup jobs (jobs that always run even if parents fail)
> * automatic job doc generation
> 
> Long Term / Design
> --
> 
> Some of these are items that we should either discuss a bit further
> before implementing, but most of them probably warrant an proposal in
> infra-specs so we can flesh out the design before we start work.
> 
> * gerrit ingestion via separate process?
> * per-job artifact location
> * need way for admin to trigger a single job (not just a buildset)
> * nodepool backends
> * nodepool label access (tenant/project label restrictions?)
> * nodepool tenant awareness?
> * nodepool rest api alignment?
> * selinux domains
> * fedmesg driver (trigger/reporter)
> * mqtt driver (trigger/reporter)
> * nodepool status ui?
> 
> How does this look?
> 
> -Jim
> 
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
