Hi,
Similar to [1], Red Hat has some internal testing which would like to
post results of testing to gerrit reviews (mostly for packstack [2]).
Below is a public key. A name like "Red Hat CI" would be great.
I assume ssh is the only access? I have played with [3], which does
work to talk to the JSON interface.
Dan should be able to help you ...
-i
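(The JSON access mentioned above is presumably Gerrit's ssh query interface, which emits one JSON object per line followed by a trailing stats record. A minimal parsing sketch; the sample data below is made up for illustration, not real review output:)

```python
import json

# Gerrit's ssh query ("gerrit query --format=JSON ...") emits one JSON
# object per line, ending with a stats record.  Illustrative input only:
raw = (
    '{"project": "openstack/packstack", "status": "NEW"}\n'
    '{"type": "stats", "rowCount": 1}'
)

records = [json.loads(line) for line in raw.splitlines()]
# Drop the trailing stats record to keep just the change results
changes = [r for r in records if r.get("type") != "stats"]
print(changes[0]["project"])
```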
On Mon, Nov 25, 2013 at 10:42:26PM -0800, Prasanna Viswakumar wrote:
Hi Team,
I am Prasanna Viswakumar, with Cisco Systems Bangalore.
I placed a request sometime ago to get myself enrolled to the Trystack
facebook group
so that I can start to
Hi,
The redhatci user in gerrit currently has no contact details. I have
had an alias redha...@redhat.com created; could someone with
permissions please update it?
Thanks,
-i
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
Hi,
To summarize recent discussions, nobody is opposed in general to
having Fedora / Centos included in the gate. However, it raises a
number of big questions: which job(s) to run on Fedora, where does
the quota for extra jobs come from, how do we get the job on multiple
providers, how stable
Hi,
I added an item to today's meeting but we didn't get to it.
I'd like to bring up the disablement of the F20-based job, done in
[1] with some discussion in [2].
It's unclear to me why there are insufficient Fedora nodes. Is the
problem that Fedora is booting too slowly compared to other
On 06/18/2014 07:05 AM, Sean Dague wrote:
Because this is the way this degrades when we are using all our quota,
I'm really wary of adding these back until we discuss the expectations
here
This seems fair
We actually had 0 nodes in use or ready of the type at the time.
Firstly I'm trying to
On 06/18/2014 11:32 AM, Dan Prince wrote:
Would this fix (or something similar) help nodepool to allocate things
more efficiently?
https://review.openstack.org/#/c/88223/
That's an interesting approach. Just looping around the same little
test from [1] with 20 nodes across two providers,
On 06/18/2014 06:46 PM, Eoghan Glynn wrote:
If we were to use f20 more widely in the gate (not to entirely
supplant precise, more just to split the load more evenly) then
would the problem observed tend to naturally resolve itself?
I would be happy to see that, having spent some time on the
On 06/19/2014 01:18 AM, James E. Blair wrote:
(This requires tracking a bit more state across allocation runs).
This seems to be the crux of the matter; once we have some state all
sorts of things become possible.
I've made a proposal in what seems like the simplest place to
start; track
On 07/15/2014 02:23 AM, Ken Giusti wrote:
So now my question - the unit tests (tox) require these libraries be
installed on the machine the tests are running on in order for the
tests to pass. Is it possible to have these packages installed to the
CI systems? This would require adding EPEL
On 07/15/2014 11:55 PM, Ken Giusti wrote:
Good to hear about epel's availability. But on the Ubuntu/Debian
side - is it possible to add the Qpid project's PPA to the config
project? From a quick 'grep' of the sources, it appears as if Pypy
requires a PPA. It's configured in
Hi
I wanted to send an update about Fedora and CentOS testing
Fedora
==
The first goal is to get a single Fedora job back up and running
gating devstack changes in [1]. This requires a restart of nodepool
to ensure the changes in [2] have taken. I'm hoping the right people
with the keys
Hi,
I'm having some issues starting the centos7 job recently added [1] (it
has been a few hours, so not that recent that I think config needs to
be reloaded)
See review [2] where I'm seeing
check-tempest-dsvm-centos7 NOT_REGISTERED (non-voting)
I can't really see from the zuul web-page
On 08/13/2014 09:08 PM, Daniel P. Berrange wrote:
I'm practically certain that this is due to Fedora 20 using the
'firewalld' daemon by default. The way libvirt talks to firewalld is
very inefficient (18x slower than the non-firewalld code path) and so
could easily explain the difference vs
Hi,
I would like to get centos 7 based testing working, but I am stuck
without images being provided in the HP Cloud. Rackspace has a
(slightly quirky, but workable) image and we have an experimental job
that runs fine.
I am aware that building our own custom images with disk-image-builder
is
On 08/26/2014 04:04 PM, Ian Wienand wrote:
I'm having a hard time getting the description in [1] to trigger after
trying several different approaches.
A huge thank-you to jeblair for getting the logs out and a day of
analysis. We can see things going crazy [1] with negative
allocations
On 01/12/2015 03:39 PM, Bharat Kumar wrote:
Found same error in all log files:
My first port-of-call when something like this happens is to search
launchpad for that error, in quotes [1]. Picking out the openstack
bugs, I believe you've hit [2]. So fixes are forthcoming...
If still nothing
On 05/10/2015 05:20 AM, James E. Blair wrote:
If you encounter any problems, please let us know here or in
#openstack-infra on Freenode.
One minor thing is that after login you're redirected to
https://review.openstack.org//; (note the double //, which then messes
up various relative links when
On 06/05/2015 12:02 PM, Tony Breeds wrote:
Don't plan on doing much else
Ever? or just for sometime while I get it going?
The thing is that it's only ever one commit away from not-going. Given
the rate of change not just of OpenStack, but everything it sits
on top of, you should expect to
On 06/05/2015 10:46 AM, Tony Breeds wrote:
Hi All,
I'd like to test the current dev release of ubuntu (15.04) in the gate.
If I read nodepool.yaml.erb correctly then there currently isn't a nodepool
image defined for this.
There is not. First thing is probably to make it so diskimage-builder
Hi,
I spent some time last week figuring out issues with centos kernel
failures which turned out to have been fixed in a recent update that
was not applied to some nodes due to build failures.
This prompted me to look a bit more closely at builds with [1]. The
results are not great. We are
Hi,
Just trying to get my head around this from [1]:
---
- builder:
    name: test_builder
    builders:
      - shell: |
          echo ${FOO_1}
          echo ${{FOO_2}}

- job-template:
    name: '{foo}-test'
    builders:
      - test_builder
      - shell: |
          echo ${{FOO_3}}
On 08/13/2015 08:19 PM, Darragh Bailey wrote:
macros do not get substitution performed unless you provide a
variable to be substituted in.
Thanks; that makes some sense when you grok what's going on,
especially as to why job-templates require it but other macros don't.
I have proposed [1] to
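(For anyone else puzzling over the doubled braces: job-templates are expanded with Python's str.format(), which is why a literal shell ${VAR} must be written ${{VAR}} there, while plain macros skip expansion unless they take a parameter. A quick illustration; the template string here is my own example, not from JJB itself:)

```python
# job-templates are run through str.format(), so {name} is substituted
# and a doubled ${{FOO_3}} collapses to the literal shell ${FOO_3}.
template = "echo '{name}' && echo ${{FOO_3}}"
rendered = template.format(name="devstack-test")
print(rendered)
```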
Hi,
With more and more plugins, etc, within our various projects, I've
seen some jobs coming in with things like
if [ -f /path/to/hook.sh ]; then
. /path/to/hook.sh
fi
and some similar "conditional execution" idioms.
Clearly we don't want to go overboard and deny maintainers
On 05/26/2016 04:43 AM, Sean Dague wrote:
One thing I've been thinking a bit about is whether the event stream
could get into something like MQTT easily.
Although larger in scope than just gerrit, Fedora has something very
similar to this with fedmsg [1]
It is a pretty cool idea to have
Hi,
It has been a pretty crazy month with a lot of action on many fronts,
so I thought I'd call out Centos and Fedora testing for those
interested.
There are really 3 Centos environments in flight
- snapshot images
- DIB images built on top of the upstream cloud-image release
- DIB based
On 04/07/2016 04:17 AM, Horváth Ferenc wrote:
> What should be the next step when I have the minified image?
I would imagine you would add it to devstack around [1] as a
dependency for tempest. Having it in this list will get it uploaded
to glance for test runs, and image-builds will
Hi all,
I spent a bit of time putting together a high-level view of the many
changes we've worked on to get our image building & platform support
to where it is today
https://www.technovelty.org/openstack/image-building-in-openstack-ci.html
It's a bit long, but I hope it can help introduce
So it seems the just released pip 8.1.2 has brought in a new version
of setuptools with it, which creates canonical names per [1] by
replacing "." with "-".
The upshot is that pip is now looking for the wrong name on our local
mirrors. e.g.
---
$ pip --version
pip 8.1.2 from
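(The normalization in question is PEP 503: runs of ".", "-" and "_" collapse to a single dash and the name is lowercased, so the mirror must serve files under the normalized name. A sketch, using oslo.config purely as an example package name:)

```python
import re

def canonicalize(name):
    # PEP 503 name normalization: runs of '-', '_' and '.' become a
    # single '-', and the result is lowercased.
    return re.sub(r"[-_.]+", "-", name).lower()

# pip 8.1.2 now asks the mirror for the normalized name:
print(canonicalize("oslo.config"))
```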
Hi,
We got a report of CI jobs failing with disconnects when
downloading from tarballs.openstack.org. The file in question is
a largish container for kolla-kubernetes [1]
ISTR this is not the first time we've had complaints about this, but
I'm not sure if we ever came up with a solution.
Below
On 09/22/2016 12:28 PM, Tony Breeds wrote:
> Checking pypi[2] shows:
> ...
> openstacksdk-0.9.7.tar.gz
> openstacksdk-0.8.6-py2.py3-none-any.whl
> openstacksdk-0.9.7-py2.py3-none-any.whl
> openstacksdk-0.7.3.tar.gz
> ...
> But the mirror for that job[3] shows:
> ...
> openstacksdk-0.9.5.tar.gz
>
Hi all,
Pursuant to our discussion at [1] I have migrated this host.
I created a new 100GiB cinder volume and copied the old ~gerrit2 to
this. This is now mounted at ~gerrit2 on the new 30GiB host in a
manner similar to review.openstack.org
SSL needs to be updated. I will speak with experts
Hi all,
I noticed that nodepool was failing to build, out of space again. We
haven't had a build in about 3 days.
Unlike last time, there wasn't anything to cleanup in the cache; it
all seemed to be images.
---
ianw@nodepool:/opt$ sudo du -sh ./*/
16G ./dib_cache/
12G ./dib_tmp/
704K
On 11/07/2016 04:08 PM, Ian Wienand wrote:
> I have started some image builds now
> to see what the deal is. I will keep an eye on them.
So we have fresh images for everything but fedora (time to delete
fedora23, just haven't got around to it, will debug fedora24 unless
anyone else
On 11/03/2016 04:30 AM, James E. Blair wrote:
Please let me know if the proposed time (Monday, 20:00 UTC) works for
you, or if an alternate time would be better.
This should be fine for us antipodeans :) 19:00 is also OK, but starts
getting pretty early in (our) winter
-i
Hi everyone,
On Thursday, January 12th from approximately 20:00 through 20:30 UTC
Gerrit will be unavailable while we complete project renames.
Currently, we plan on renaming the following projects:
Nomad -> Cyborg
- openstack/nomad -> openstack/cyborg
Nimble -> Mogan
- openstack/nimble
So I'm trying to test ansible 2.2.1-rc3 on puppetmaster. If as root
you try
# . /root/ianw/ansible/bin/activate # ansible 2.2.1-rc3 venv
# /root/ianw/run_all.sh # run_all but with --check to dry-run
You should see the problem. ansible-playbook just stops; when you
take a look at the wait
On 01/11/2017 04:53 PM, Ian Wienand wrote:
> The thing is, ansible 2.0.2.0 seems to do all the ssh stuff very
> differently so this doesn't appear to happen.
I tell a lie; this same thing actually happens with 2.0.2.0
I'm wondering if just nobody has run "run_all.sh" by hand
Hi,
Today I was alerted to jobs failing on IRC, further investigation
showed the pypi volume did not seem to be responding on the mirror
servers.
---
ianw@mirror:/afs/openstack.org/mirror$ ls pypi
ls: cannot access pypi: Connection timed out
---
The bandersnatch logs suggested the vos release
On 11/23/2016 01:51 AM, Jeremy Stanley wrote:
Thanks! I removed a few old manual backups from some of our homedirs
(mostly mine!) freeing up a few more GB on the rootfs. The biggest
offender though seems to be /var/log/jetty which has about a week of
retention. Whatever's rotating these daily at
On 12/05/2016 03:30 PM, Ian Wienand wrote:
So I think the only side-effect at the moment is that while the
bandersnatch cron update is running, AFS is locked and thus the
mirrors will not get a new volume release until this sync is done;
i.e. our pypi mirrors are a bit behind.
As of right now
On 6 Dec. 2016 3:13 am, "Kevin L. Mitchell" <klmi...@mit.edu> wrote:
On Mon, 2016-12-05 at 15:30 +1100, Ian Wienand wrote:
For the record, those log entries are from December 2nd, rather than
February: US date conventions.
Heh, yep :). In one of the openafs files it has at the t
Hi,
We found a regression where python3-only Xenial images have a messed
up pip and incorrectly install glean. The result is that the system
boots but has no network.
Because dib builds images for a wide range of platforms, some of which
ship python3 only, we need a way to call python scripts
On 01/16/2017 03:15 PM, Ian Y. Choi wrote:
Note that this issue is reproducible: I am able to recreate the issue on
translate-dev
: When I create a new version from openstack-manuals from master branch
[1] - 20107950 words,
there is no further web responses from translate-dev.o.o around after
3/4
Hi,
I was alerted to translate-dev performance issues today. Indeed, it
seemed that things were going crazy with the java wildfly process
sucking up all CPU.
At first there didn't seem to be anything in the logs. Java was
clearly going mad however, with the following threads going flat-out.
On 03/16/2017 11:34 PM, Jeremy Stanley wrote:
> I'd also like to be certain the current DIB contributors are
> entirely disinterested in forming a separate official team in
> OpenStack as I doubt the TC would reject such a proposal (I'd
> happily support it).
Assuming "interested" means you had
On 03/07/2017 06:46 PM, Gene Kuo wrote:
> I found that ask.o.o is down again.
I restarted apache
---
root@ask:/var/log/apache2# date
Tue Mar 7 07:54:26 UTC 2017
[Tue Mar 07 06:01:38.469993 2017] [core:notice] [pid 19511:tid 140460060575616]
AH00052: child pid 19517 exit signal Segmentation
On 03/07/2017 07:20 PM, Gene Kuo wrote:
These errors do line up with the time when it's down.
However, I have no idea what caused apache to seg fault.
Something disappearing underneath it would be my suspicion
Anyway, I added "CoreDumpDirectory /var/cache/apache2" to
/etc/apache2/apache2.conf
On 03/08/2017 03:45 PM, Masahito MUROI wrote:
This is a request mail to add me into blazar-release team[1] as an
initial member of the team.
Done
Thanks
-i
Hi,
In response to sdague reporting that citycloud jobs were timing out, I
investigated the mirror, suspecting it was not providing data fast enough.
There were some 170 htcacheclean jobs running, and the host had a load
over 100. I killed all these, but performance was still unacceptable.
I
Hi,
Unfortunately it seems the nova-specs repo has undergone some
corruption, currently manifesting itself in an inability to be pushed
to github for replication.
Upon examination, it seems there's a problem with a symlink and
probably jgit messing things up making duplicate files. I have filed
At around Sep 21 02:30UTC mirror01.bhs1.ovh.openstack.org became
uncontactable and jobs in the region started to fail.
The server was in an ACTIVE state but uncontactable. I attempted to
get a console but either a log or url request returned 500 (request
IDs below if it helps).
... console
On 10/12/2017 05:52 PM, Ian Wienand wrote:
> I tried this in order, firstly recreating references.db (didn't help)
> and so I have started the checksums.db recreation. This is now
> running; I just moved the old one out of the way
Well, that didn't go so well. The output floo
On 10/14/2017 03:25 AM, Clark Boylan wrote:
I'd like to nominate a few people to be core on our job related config
repos. Dmsimard, mnaser, and jlk have been doing some great reviews
particularly around the Zuul v3 transition. In recognition of this work
I propose that we give them even more
Hi all,
As discussed in the meeting, I've started a page for planning an infra
evening in Sydney (but note -- ALL welcome)
https://ethercalc.openstack.org/lx7zv5denrb9
I put an active, less active and easy option. Just fill it in and
we'll see where we're at.
Cheers,
-i
On 12/19/2017 01:53 AM, James E. Blair wrote:
Ian Wienand <iwien...@redhat.com> writes:
There's a bunch of stuff that wouldn't show up until live, but we
probably could have got a lot of prep work out of the way if the
integration tests were doing something. I didn't realise that altho
On 11/01/2017 09:27 PM, Ian Y. Choi wrote:
> Could you please add "Mark Korondi" in
> upstream-institute-virtual-environment-core group?
> He is the bootstrapper of the project:
It seems Mark has managed to get two gerrit accounts:
| registered_on | full_name | preferred_email
Let's meet at the swirly fountain pit about 6:10pm.
Preliminary plan is a ferry, dinner, walk and drinks.
Not to sound like your Mum/Mom, but a light jacket and comfortable shoes
are suggested :)
-i
On 1 Nov. 2017 10:59 am, "Ian Wienand" <iwien...@redhat.com> wrote:
On 10/18/20
Hello,
Just to save people reverse-engineering IRC logs...
At ~04:00UTC frickler called out that things had been sitting in the
gate for ~17 hours.
Upon investigation, one of the stuck jobs was a
legacy-tempest-dsvm-neutron-full job
(bba5d98bb7b14b99afb539a75ee86a80) as part of
On 12/04/2017 09:54 AM, Andreas Jaeger wrote:
> ERROR: Failure downloading
> https://search.maven.org/remotecontent?filepath=org/zanata/zanata-cli/3.8.1/zanata-cli-3.8.1-dist.tar.gz,
>
> HTTP Error 503: Service Unavailable: Back-end server is at capacity
>
> Could we cache this, please? Any
On 10/18/2017 05:37 PM, Ian Wienand wrote:
Hi all,
As discussed in the meeting, I've started a page for planning an infra
evening in Sydney (but note -- ALL welcome)
https://ethercalc.openstack.org/lx7zv5denrb9
It looks like Wednesday night (8th) and the more active/pub crawl
option
Hi,
We were notified of an issue around 22:45GMT with the volumes backing
the storage on afs02.dfw.o.o, which holds R/O mirrors for our AFS
volumes.
It seems that during this time there were a number of "vos release"s
in flight, or started, that ended up with volumes in a range of
unreliable
On 05/24/2018 05:40 PM, Ian Wienand wrote:
> In an effort to resolve this, the afs01 & 02 servers were restarted to
> clear all old transactions, and for the affected mirrors I essentially
> removed their read-only copies and re-added them with:
It seems this theory of removing the v
On 05/24/2018 11:36 PM, Ian Wienand wrote:
Thanks to the help of Jeffrey Altman [1], we have managed to get
mirror.pypi starting to resync again.
And thanks to user error on my behalf, and identified by jeblair, in
the rush of all this I ran this under k5start on mirror-update,
instead
On 05/24/2018 08:45 PM, Ian Wienand wrote:
> On 05/24/2018 05:40 PM, Ian Wienand wrote:
>> In an effort to resolve this, the afs01 & 02 servers were restarted to
>> clear all old transactions, and for the affected mirrors I essentially
>> removed their read-only
Hi,
It seems like the opensuse mirror has been on a bit of a growth spurt
[1]. Monitoring alerted me that the volume had not released for
several days, which led me to look at the logs.
The rsync is failing with "File too large (27)" as it goes through
the tumbleweed sync.
As it turns out,
On 05/25/2018 08:00 PM, Ian Wienand wrote:
I am now re-running the sync in a root screen on afs02 with -localauth
so it won't timeout.
I've now finished syncing back all R/O volumes on afs02, and the update
cron jobs have been running successfully.
Thanks,
-i
Hi,
To avoid you having to pull apart the logs starting ~ [1], we
determined that ze04.o.o was externally rebooted at 01:00UTC (there is
a rather weird support ticket which you can look at, which is assigned
to a rackspace employee but in our queue, saying the host became
unresponsive).
On 01/16/2018 12:11 AM, Frank Jansen wrote:
do you have any insight into the availability of a physical
environment for the ARM64 cloud?
I’m curious, as there may be a need for downstream testing, which I
would assume will want to make use of our existing OSP CI framework.
Sorry, not 100%
On 01/13/2018 01:26 PM, Ian Wienand wrote:
In terms of implementation, since you've already looked, I think
essentially diskimage_builder/block_device/level1.py create() will
need some moderate re-factoring to call a gpt implementation in
response to a gpt label, which could translate
On 01/13/2018 05:01 AM, Jeremy Stanley wrote:
> On 2018-01-12 17:54:20 +0100 (+0100), Marcin Juszkiewicz wrote:
> [...]
>> UEFI expects GPT and DIB is completely not prepared for it. I made
>> block-layout-arm64.yaml file and got it used just to see "sorry,
>> mbr expected" message.
>
> I concur.
On 01/10/2018 08:41 PM, Gema Gomez wrote:
1. Control-plane project that will host a nodepool builder with 8 vCPUs,
8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
space.
Does this mean you're planning on using diskimage-builder to produce
the images to run tests on?
Hi,
A quick status update on the integration of the Linaro aarch64 cloud
- Everything is integrated into the system-config cloud-launcher bits,
so all auth tokens are in place, keys are deploying, etc.
- I've started with a mirror. So far only a minor change to puppet
required for the
On 02/20/2018 02:23 AM, Paul Belanger wrote:
Why not just split the builder configuration file? I don't see a
need to add code to do this.
I'm happy with this; I was just coming at it from an angle of not
splitting the config file, but KISS :)
I did submit support for homing diskimage builds to
Hi,
How should we go about restricting certain image builds to specific
nodepool builder instances? My immediate issue is with ARM64 image
builds, which I only want to happen on a builder hosted in an ARM64
cloud.
Currently, the builders go through the image list and check "is the
existing
On 02/02/2018 05:15 PM, Ian Wienand wrote:
> - Once that is done, it should be straight forward to add a
>nodepool-builder in the cloud and have it build images, and zuul
>should be able to launch them just like any other node (famous last
>words).
This roughl
On 08/03/2018 04:45 AM, Clark Boylan wrote:
> On Thu, Aug 2, 2018, at 9:57 AM, Alex Schultz wrote:
> As a note, Fedora 28 does come with python2.7. It is installed so
> that Zuul related ansible things can execute under python2 on the
> test nodes. There is the possibility that ansible's python3
On 08/28/2018 09:48 AM, Clark Boylan wrote:
On Mon, Aug 27, 2018, at 4:21 PM, Clark Boylan wrote:
One quick new observation. launch-node.py does not install puppet at
all so the subsequent ansible runs on the newly launched instances
will fail when attempting to stop the puppet service (and will
On 01/13/2018 03:54 AM, Marcin Juszkiewicz wrote:
UEFI expects GPT and DIB is completely not prepared for it.
I feel like we've made good progress on this part, with sufficient
GPT support in [1] to get started on the EFI part
... which is obviously where the magic is here. This is my first
* Puppet doesn't create the /var/log/nodepool/images log directory
Note that since [1] the builder log output changed; previously it went
through python logging into the directory you mention, now it is
written into log files directly in /var/log/nodepool/builds (by
default)
* The
On 04/06/2018 11:37 PM, Jens Harbott wrote:
I didn't intend to say that this was easier. My comment was related
to the efforts in https://review.openstack.org/558991 , which could
be avoided if we decided to deploy askbot on Xenial with
Ansible. The amount of work needed to perform the latter
On 03/28/2018 11:30 AM, James E. Blair wrote:
> As soon as I say that, it makes me think that the solution to this
> really should be in the log processor. Whether it's a grok filter, or
> just us parsing the lines looking for task start/stop -- that's where we
> can associate the extra data with
I wanted to query for a failing ansible task; specifically what would
appear in the console log as
2018-03-27 15:07:49.294630 |
2018-03-27 15:07:49.295143 | TASK [configure-unbound : Check for IPv6]
2018-03-27 15:07:49.368062 | primary | skipping: Conditional result was False
2018-03-27
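(A log-processor filter for those task lines could be as simple as a regex over the timestamped console output; a rough sketch of the idea — this pattern is my own, not the actual grok filter discussed above:)

```python
import re

# Match the console-log TASK header lines, e.g.
#   2018-03-27 15:07:49.295143 | TASK [configure-unbound : Check for IPv6]
TASK_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \| "
    r"TASK \[(?P<role>[^\]:]+) : (?P<task>[^\]]+)\]"
)

line = "2018-03-27 15:07:49.295143 | TASK [configure-unbound : Check for IPv6]"
m = TASK_RE.match(line)
print(m.group("role").strip(), "/", m.group("task"))
```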
On 03/28/2018 01:04 AM, Jeremy Stanley wrote:
I would be remiss if I failed to remind people that the *manually*
installed etcd release there was supposed to be a one-time stop-gap,
and we were promised it would be followed shortly with some sort of
job which made updating it not-manual. We're
On 06/30/2017 04:11 PM, Ian Wienand wrote:
> Unfortunately it seems the nova-specs repo has undergone some
> corruption, currently manifesting itself in an inability to be pushed
> to github for replication.
We haven't cleaned this up, due to wanting to do it during a rename
transit
On Fri, Dec 07, 2018 at 12:02:00PM +0100, Thierry Carrez wrote:
> Looks like the readthedocs integration for JJB is misconfigured, causing the
> trigger-readthedocs-webhook to fail ?
Thanks for pointing this out. After investigation it doesn't appear
to be misconfigured in any way, but it seems
On Sun, Nov 18, 2018 at 11:09:29AM -0800, Clark Boylan wrote:
> Both ideas seem sound to me and I think we should try to implement
> them for the Infra team. I propose that we require agenda updates 24
> hours prior to the meeting start time and if there are no agenda
> updates we cancel the
On Mon, Sep 17, 2018 at 04:09:03PM -0700, Clark Boylan wrote:
> October 15-19 may be our best week for this. Does that week work?
Post school-holidays here so SGTM :)
> Let me know if you are working on upgrading any servers/services and
> I will do what I can to help review changes and make
== Agenda for next meeting ==
* Announcements
** Clarkb remains on vacation March 25-28
* Actions from last meeting
* Specs approval
* Priority Efforts (Standing meeting agenda items. Please expand if you have
subtopics.)
**
On Fri, Mar 15, 2019 at 11:01:44AM +0100, Andreas Jaeger wrote:
> Anybody remembers or can reach out to Zanata folks for help on
> fixing this for good, please?
From internal communication with people previously involved with
Zanata, it seems the team has disbanded and there is no current
On Tue, Apr 02, 2019 at 12:28:31PM +0200, Frank Kloeker wrote:
> The OpenStack I18n team was aware about the fact, that we will run into an
> unsupported platform in the near future and started an investigation about
> the renew of translation platform on [1].
> [1]
>
Hello,
I started to look at the system-config base -devel job, which runs
Ansible & ARA from master (this job has been quite useful in flagging
issues early across Ansible, testinfra, ARA etc, but it takes a bit
for us to keep it stable...)
It seems ARA 1.0 has moved in some directions we're not
On Tue, Jun 11, 2019 at 04:39:58PM -0400, David Moreau Simard wrote:
> Although it was first implemented as somewhat of a hack to address the
> lack of scalability of HTML generation, I've gotten to like the design
> principle of isolating a job's result in a single database.
>
> It is easy to scale
== Agenda for next meeting ==
* Announcements
* Actions from last meeting
* Specs approval
* Priority Efforts (Standing meeting agenda items. Please expand if you have
subtopics.)
**
[http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html
A Task Tracker for
On Mon, Jul 08, 2019 at 02:36:11PM -0700, Clark Boylan wrote:
> ** Mirror setup updates (clarkb 20190709)
> *** Do we replace existing mirrors with new opendev mirrors running openafs
> 1.8.3?
I won't make it to the meeting tomorrow sorry, but here's the current
status, which is largely
We will be meeting tomorrow at 19:00 UTC in #openstack-meeting on freenode with
this agenda:
== Agenda for next meeting ==
* Announcements
* Actions from last meeting
* Specs approval
* Priority Efforts (Standing meeting agenda items. Please expand if you have
subtopics.)
**
Hello,
We received reports of connectivity issues to opendev.org at about
06:30 [1].
After some initial investigation, I could not contact
gitea-lb01.opendev.org via ipv4 or 6.
Upon checking its console I saw a range of kernel errors that suggest
the host was probably having issues with its
Hello,
All our current images use dib's "pip-and-virtualenv" element to
ensure the latest pip/setuptools/virtualenv are installed, and
/usr/bin/ installs Python 2 packages and
/usr/bin/ installs Python 3 packages.
The upshot of this is that all our base images have Python 2 and 3
installed (even
On Fri, Sep 27, 2019 at 11:09:22AM +, Jeremy Stanley wrote:
> I'd eventually love to see us stop preinstalling pip and virtualenv
> entirely, allowing jobs to take care of doing that at runtime if
> they need to use them.
You'd think, right? :) But it is a bit of a can of worms ...
So pip
Hello,
I'm trying to get us to a point where we can use nodepool container
images in production, particularly because I want to use updated tools
available in later distributions than our current Xenial builders [1]
We have hit the hardest problem; naming :)
To build a speculative