[rdo-dev] Re: Projects dropping py36 support in master: what to do?

2022-05-30 Thread Alan Pevec
> For now, only Octavia [3] and Cinder [4] are blocked (i.e we can't build the 
> latest commits with DLRN).

Let's not introduce master-cs8 since it has no future - instead let's
accumulate py36-related blockers on master in a CIX tracker,
as pressure to move TripleO and Puppet to CS9.

Alan
___
dev mailing list -- dev@lists.rdoproject.org
To unsubscribe send an email to dev-le...@lists.rdoproject.org


Re: [rdo-dev] RDO packaging for CentOS-Stream

2020-08-25 Thread Alan Pevec
> and unable to benefit from the effects of having the
> Fedora community involved in supporting many of the components that
> OpenStack relies on.

We did not see the benefit; instead we ended up with lots of new
packages added by our team members.
We still keep the OpenStack clients maintained in Fedora, and that
requires us to keep lots of deps up to date - see all the python-*
packages that ended up owned by the openstack-sig group [1].
The trouble with OpenStack services is that they integrate lots of system
services; python3 was a minor part of the RHEL7 to RHEL8 conversion, and
for TripleO I think the most painful change was from docker to podman.

Cheers,
Alan

[1] https://src.fedoraproject.org/group/openstack-sig



Re: [rdo-dev] RDO packaging for CentOS-Stream

2020-08-21 Thread Alan Pevec
Hi Wes,

> Are there any public plans for building RDO packages on CentOS-Stream 
> available for the community to review?

Do you mean c8-stream or c9-stream?
c9s is not there yet, so I'll assume c8s: RDO packages should work on
c8s as they are - do you have a specific example where that is not the
case?

Cheers,
Alan



Re: [rdo-dev] RDO packaging for CentOS-Stream

2020-08-21 Thread Alan Pevec
Hi Pete,

> How is building on CentOS Stream better than building on Fedora?

CentOS 8 Stream is a preview of the next minor RHEL 8 release,
and CentOS 9 Stream will be a preview of the next major RHEL release.

We had RDO Trunk on Fedora in the past and it was not sustainable to
maintain; it's a basic principle to keep the platform stable while
developing.

Cheers,
Alan



Re: [rdo-dev] [cloudkitty][rdo] Broken cloudkitty RPMs on CentOS8

2020-08-21 Thread Alan Pevec
Hi Pierre,

> > > I submitted a patch to raise the minimum requirement for dateutil in
> > > cloudkitty: https://review.opendev.org/#/c/742477/

thanks for that!

> > > However, how are those requirements taken into consideration when
> > > packaging OpenStack in RDO? RDO packages for CentOS7 provide
> > > python2-dateutil-2.8.0-1.el7.noarch.rpm, but there is no such package
> > > in the CentOS8 repository.

yes, Yatin and I had a chat the other day [1];
the conclusion was "we got lucky until cloudkitty" :)

Cheers,
Alan

[1] 
http://eavesdrop.openstack.org/irclogs/%23rdo/%23rdo.2020-08-19.log.html#t2020-08-19T11:39:55



Re: [rdo-dev] [cloudkitty][rdo] Broken cloudkitty RPMs on CentOS8

2020-08-21 Thread Alan Pevec
err, wrong quote! Below was reply to this part of your email:

> However, did you notice that oslo.log also claims to require
> python-dateutil>=2.7.0

yes, Yatin and I had a chat the other day [1];
the conclusion was "we got lucky until cloudkitty" :)

 [1] 
http://eavesdrop.openstack.org/irclogs/%23rdo/%23rdo.2020-08-19.log.html#t2020-08-19T11:39:55



Re: [rdo-dev] HAProxy 2.2 in RDO repositories

2020-08-19 Thread Alan Pevec
Hi Carlos,

> Octavia roadmap includes adding support to new features and performance 
> improvements only available starting in HAProxy 2.0. CentOS 8 ships with 
> HAProxy 1.8, and according to the package maintainer there are no plans to 
> provide HAProxy 2.x in a foreseeable future.
> I have rebuilt HAProxy 2.2 from Fedora rawhide against CentOS 8 and CentOS 
> Stream in [1] and validated it passed Octavia tests in [2] [3] (patchset 3, 
> ignore newer ones).
> We would like to check if it would be possible to provide the latest stable 
> LTS HAProxy 2.2 in RDO repositories.

Since HAProxy is networking-related and could be used outside
OpenStack, I'd like to consider building and hosting it in the rebooted
NFV CentOS SIG.
We should avoid piling up deps in RDO repos; adding them is the last
option as per our deps guidelines [1].

Can you also give us a short intro on how haproxy is included and used in Octavia?
If it were a containerized service, we might be able to take the
container image from OpenShift/OKD?

Cheers,
Alan

[1] https://www.rdoproject.org/documentation/requirements/



Re: [rdo-dev] [cloudkitty][rdo] Broken cloudkitty RPMs on CentOS8

2020-08-19 Thread Alan Pevec
Hi Pierre,

> I submitted a patch to raise the minimum requirement for dateutil in
> cloudkitty: https://review.opendev.org/#/c/742477/
> However, how are those requirements taken into consideration when
> packaging OpenStack in RDO? RDO packages for CentOS7 provide
> python2-dateutil-2.8.0-1.el7.noarch.rpm, but there is no such package
> in the CentOS8 repository.

RDO sticks to the version from the base OS if a package is available
there, as long as it works with the upstream projects.
In the EL7 base, python-dateutil 1.5 was too old, so it is overridden by an
updated version in the RDO repo.
When we moved to EL8, the python3-dateutil 2.6 included in the base OS was
new enough, so it was not introduced in RDO for EL8.
The whole process of maintaining RDO deps is documented at
https://www.rdoproject.org/documentation/requirements/
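The rule described above can be sketched as a toy check (the function name
and plain tuple comparison are illustrative only - real RDO tooling relies
on RPM version comparison):

```python
# Toy sketch of the deps rule above: keep the base-OS package when it
# meets the upstream minimum, otherwise carry an override in RDO.
def needs_rdo_override(base_version, upstream_minimum):
    def v(s):
        return tuple(int(x) for x in s.split("."))
    return v(base_version) < v(upstream_minimum)

print(needs_rdo_override("1.5", "2.7.0"))  # EL7 dateutil: True, override needed
print(needs_rdo_override("2.6", "2.4.2"))  # EL8 dateutil before cloudkitty's bump: False
```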

> Would it be better to just remove the use of tz.UTC? I believe we
> could use dateutil.tz.tzutc() instead.

Yes, backward compatibility would be good, if the upstream project is
happy with the "available in RHEL8 base repo" justification.
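A sketch of that backward-compatible spelling: tz.UTC only appeared in
dateutil 2.7.0, while tz.tzutc() also works on the 2.6 shipped in the EL8
base repo (the stdlib fallback here is only so the sketch runs without
dateutil installed):

```python
from datetime import datetime, timedelta

try:
    from dateutil import tz
    # tz.UTC exists only since dateutil 2.7.0; tzutc() is the
    # backward-compatible spelling that works on dateutil 2.6.
    UTC = getattr(tz, "UTC", None) or tz.tzutc()
except ImportError:
    # stdlib fallback for environments without dateutil
    from datetime import timezone
    UTC = timezone.utc

# both spellings behave the same: a fixed zero-offset tzinfo
assert UTC.utcoffset(datetime(2020, 8, 21)) == timedelta(0)
```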

Cheers,
Alan



Re: [rdo-dev] RDO Cloud operations today

2020-07-10 Thread Alan Pevec
> last update as of few hours ago was: rdocloud networking should be now
> stable, uplink is not redundant, IT will work on getting back failover
> during the day

Update as of this morning:
uplink redundancy was restored last night, and
restoring the full CI pool is planned for today.

Cheers,
Alan



Re: [rdo-dev] RDO Cloud operations today

2020-07-09 Thread Alan Pevec
> Any updates on the status of the operations?

I was giving updates in #rdo IRC since we had unstable networking and
lists.r.o was not reachable.
The last update as of a few hours ago was: rdocloud networking should now
be stable, the uplink is not redundant, and IT will work on getting
failover back during the day.
I'll update this thread when I get confirmation from ops that redundancy
and the full CI rack are back; the CI pool is currently reduced by ~50%.

Cheers,
Alan



[rdo-dev] RDO Cloud operations today

2020-07-08 Thread Alan Pevec
Hi all,

FYI RDO Cloud is undergoing a scheduled move of some of its racks;
the control plane and infra services (www, lists, CI pool) should stay up
the whole time.
In case of an unplanned outage we'll let you know in this thread, and
we'll also announce when the operations are finished.
At some point there will be reduced CI pool capacity, so expect to see
longer queues in https://review.rdoproject.org/zuul/status during the
day.

Cheers,
Alan



Re: [rdo-dev] [ansible-sig][kolla][openstack-ansible][osc][sdk][tripleo] OpenStack modules broken in Ansible 2.8.9

2020-03-06 Thread Alan Pevec
> Please make sure ansible does not get bumped to 2.8.9; we are currently at 
> 2.8.8 in https://trunk.rdoproject.org/centos8-master/deps/latest/noarch/

we don't have explicit blacklisting in rdoinfo, so let's try with a
doc-comment like this: https://review.rdoproject.org/r/25744

Cheers,
Alan



Re: [rdo-dev] Does OpenStack Stein support RHEL8/8.1 for deployment?

2020-03-05 Thread Alan Pevec
adding rdo devel list

On Thu, Mar 5, 2020 at 5:38 AM kakarla, Chaitanya
 wrote:
> Hi Rain/Team,
> Could you please respond on this below issue?

> We had a discussion about OpenStack Stein support with RHEL 8.1 in January 
> 2020. Regarding that, I would like help to overcome the issues during a 
> minimal installation of Stein, and to know whom I can approach with the 
> technical queries that I have.

As Rain replied, you can reach out on #rdo on Freenode and also on the
public mailing lists.
Our plan is explained in
https://blogs.rdoproject.org/2020/02/migration-paths-for-rdo-from-centos-7-to-8/
and it does not include Stein.
Note that Stein is one year old and in its maintenance phase, so the
required changes might not be possible to push to the stable/stein branches.
So the first step would be to explain why Stein instead of Ussuri or Train.

Cheers,
Alan



Re: [rdo-dev] Building and shipping RPMs from Ansible collections

2020-02-25 Thread Alan Pevec
> Even worse: upstream testing is done using Ubuntu, does this mean that we 
> start building debs too?

TripleO is not tested with Ubuntu and we don't ship anything in OSP
for Ubuntu, so no, we're not going to start building debs.

> Ansible 2.9 introduced a way to install modules, via collections, which is 
> not platform dependent.

We need to test what we ship to customers, so we need to figure that
out first for Ansible, together with the Ansible team.
Has shipping Collections on the Red Hat CDN been defined by the
Ansible organization?
E.g. for Python we do not ship wheels, we ship Python RPMs. OTOH, for
CI pieces which do not get shipped to customers, installing from PyPI
is fine and we're already doing it, so native Ansible Collections for CI
framework dependencies will be fine too.

Alan



Re: [rdo-dev] [RDO] Weekly status for 2020-02-14

2020-02-17 Thread Alan Pevec
Hi Yatin,

thanks for the update, I'm happy the upstream virtualenv blocker is out of our way!

>   * More and more packages are dropping support for python2. We keep pinning 
> to the last known good py2 versions, but at some point it will not make sense 
> to keep a promotion pipeline on a repo with so many pinned packages. We need 
> to open a discussion with the TripleO CI team about it.

TripleO CI is making centos8 jobs a priority; on our end we could work
on unblocking all the c8 blockers mentioned below with temporary
workarounds if needed,
e.g. publishing deps like Ceph in the trunk.rdo repos and rebuilding deps
in Copr if CBS is blocked.
We will also ignore aarch64 for now and add it later.

Cheers,
Alan



Re: [rdo-dev] Build Rdo Train in RHEL 8 on LinuxONE

2020-01-10 Thread Alan Pevec
Hi,

thanks for the update!

> 1. Is there a server we can upload those packages to for LinuxONE? E.g., use
> this repo as an experimental repo for LinuxONE.

Since those packages are built outside the RDO infra, please publish them
at your public location and we can link to it in the docs.
As we do not have this hardware available in the community infra,
we would rely on you to write a howto document and send a PR to
github.com/redhat-openstack/website/;
it would fit under https://www.rdoproject.org/install/
> 2. Now we run tests with Packstack on RHEL8 manually; if we want to make
> RDO support LinuxONE officially, is a CI system a requirement?

Please note "support" is an overloaded term: for community projects it
means we as a community tried it and it worked, and yes, CI would
definitely be part of our community release criteria.
To be officially official we would also need to build packages in the
CentOS build system, which in turn requires this arch in CentOS first,
and that will take time.
I think for now an external repo with a doc on how to set it up on
LinuxONE will be good progress!
Also, OpenStack packages are noarch, so we could combine RDO Trunk and
your repo with arch-specific dependencies.
I'm looking forward to your howto to see how you're actually running it now.

Cheers,
Alan



Re: [rdo-dev] File contains no section headers.

2019-11-19 Thread Alan Pevec
> The following steps are provided on the Ussuri Milestone1 Test Day page.
> $ sudo curl -O http://trunk.rdoproject.org/centos7/delorean-deps.repo
> $ sudo curl -O 
> http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo

-L was missing; this is now fixed in
http://rdoproject.org/testday/ussuri/milestone1/
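For context, the error in the subject line is what Python's config parser
raises when it is fed the redirect page instead of the real .repo file; a
minimal, self-contained reproduction (file contents invented for
illustration):

```python
import configparser

def has_section_headers(text):
    """Return True if text parses as an INI-style .repo file."""
    parser = configparser.ConfigParser()
    try:
        parser.read_string(text)
    except configparser.MissingSectionHeaderError:
        # the same condition yum reports as "File contains no section headers"
        return False
    return bool(parser.sections())

repo = "[delorean]\nbaseurl=https://trunk.rdoproject.org/...\ngpgcheck=0\n"
redirect_page = "<html>301 Moved Permanently</html>\n"

print(has_section_headers(repo))           # True
print(has_section_headers(redirect_page))  # False: curl without -L saved this
```

With -L, curl follows the redirect and saves the actual repo file, so the
parser finds its section header.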

Alan



Re: [rdo-dev] File contains no section headers.

2019-11-19 Thread Alan Pevec
>> I followed the "How to test?" steps provided on the Ussuri Milestone1 Test 
>> Day page (http://rdoproject.org/testday/ussuri/milestone1/).

The curl commands were missing -L. I've fixed it in
https://github.com/redhat-openstack/website/blob/master/source/testday/ussuri/milestone1.html.md
and the webpage should be republished shortly.

Alan



Re: [rdo-dev] Build Rdo Train in RHEL 8 on LinuxONE

2019-10-18 Thread Alan Pevec
>> Build dependency d2to1 should have been gone, we need to fix that.
>> In which package did you hit it?
>>
>
> It's still in several packages:
>
> https://codesearch.rdoproject.org/?q=d2to1=nope==
>
> It may be worth checking if those are actually required or we can clean it 
> up. Until then, you can try to rebuild it from Fedora:

afaict it's just a leftover copy/paste all the way from the original
nova.spec; I've opened
https://trello.com/c/HGvX9Xat/720-cleanup-br-d2to1


Cheers,
Alan


Re: [rdo-dev] Build Rdo Train in RHEL 8 on LinuxONE

2019-10-18 Thread Alan Pevec
> 1. We want to contribute to the RDO community and have RDO add an s390x
> architecture build. As there is no CentOS s390x architecture build in CentOS
> repositories, we can only build and test RDO packages on RHEL. Is it
> possible to add an RDO s390x build without a CentOS s390x architecture
> build? Building CentOS might need much more effort and time.

yes, we could publish your RHEL rebuilds as an experimental repo
outside the CentOS CloudSIG

> 2. I'm trying to build the RDO Train packages in [2] on RHEL 8 on LinuxONE,
> and it seems some packages are missing, such as python-d2to1. Should I
> switch to building RDO Train on RHEL 7 as a starting step, and wait until
> RDO Train for CentOS 8 is ready?

The d2to1 build dependency should have been gone already; we need to fix that.
In which package did you hit it?

> 3. For RDO Train on RHEL 8, can I expect python 3 to be the default Python
> interpreter for all OpenStack packages and dependency packages?

yes, RHEL8 is python3-only


Cheers,
Alan



Re: [rdo-dev] [RDO Community Blogs] RDO is ready to ride the wave of CentOS Stream

2019-10-06 Thread Alan Pevec
Hi Samer,

I'm redirecting to the list since the answer is time dependent.

> Hi, is there a way to install RDO on Centos 8?

not yet; we'll bootstrap deps once CBS Koji is ready for c8 - watch
https://trello.com/c/fv3u22df/709-centos8-move-to-centos8

In the meantime OSP 15 (Stein) was released on RHEL 8, if you want to
start experimenting:
https://www.redhat.com/en/about/press-releases/red-hat-openstack-platform-15-enhances-infrastructure-security-and-cloud-native-integration-across-open-hybrid-cloud


Cheers,
Alan



Re: [rdo-dev] Python3 status in RDO

2019-10-01 Thread Alan Pevec
Hi Lance,

> I'm assuming that the RH8 build will have Python3

this is correct

> but I'm also curious if RH7 will have Python3 or just stay on Python2.

RDO Train will be released at GA on RHEL7/CentOS7 on Python2, since
that's what was tested throughout this release cycle.
As soon as we have CBS Koji ready for c8, we'll start bootstrapping
dependencies and move master (Ussuri) and Train to EL8/py3.
This work is tracked in
https://trello.com/c/fv3u22df/709-centos8-move-to-centos8
There are no plans to add python36 support on RHEL 7.7.


Cheers,
Alan



Re: [rdo-dev] Any ETA for getting RDO kibana back online?

2019-09-24 Thread Alan Pevec
> You can use our Taiga board to track issues: 
> https://tree.taiga.io/project/morucci-software-factory/issues?q=

specifically with "infra" tag:
https://tree.taiga.io/project/morucci-software-factory/issues?q==infra

Alan


Re: [rdo-dev] [infra][tripleo-ci] Disk space usage in logs.rdoproject.org

2019-06-14 Thread Alan Pevec
> 10-14TB hard drives are not really so expensive.

true for consumer-class drives; cloud storage is more like >$1k/month
for 10TB of HDD and >$5k/month for 10TB of SSD

Cheers,
Alan


Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org

2019-03-21 Thread Alan Pevec
>> If we can get you on the zuul platform you won't have to rework or redo at 
>> least some of that work. In zuul we have some post playbooks that execute 
>> after the build job [1]. I think you'll be able to find just about 
>> everything we do w/ containers here [2] now.
>
> This is non-trivial as we do not have access to power hardware in RDO CI. The 
> reason we are using ci.centos is because it has access to cico, which has 
> ppc64le hardware to use.

Just thinking aloud w/o much clue: maybe we could have a nodepool
driver for cico nodes, if they can be accessed from the outside via a
jumphost?

Cheers,
Alan


Re: [rdo-dev] Fwd: unable to talk with rdo gerrit using api-key

2019-03-18 Thread Alan Pevec
Hi Sorin,

> Based on RDO documentation, when talking with https://review.rdoproject.org/ 
> Gerrit we are supposed to use the API key from 
> https://review.rdoproject.org/sf/user_settings.html page.

Does "Generate new API key" help?
IIRC there was something after one of the upgrades that made old API
keys invalid.

Cheers,
Alan


[rdo-dev] Fwd: [CentOS-devel] SIGs: Possibility to drop EOL content at 7.6.1810 release time

2018-11-03 Thread Alan Pevec
Hi all,

I'll put this on the RDO Meeting agenda; just a quick check re. the
ceph-jewel EOL, which needs to be verified by the Storage SIG.
I'm not sure if we could move RDO Ocata to Luminous?

Cheers,
Alan


-- Forwarded message -
From: Anssi Johansson 
Date: Thu, Nov 1, 2018, 10:02
Subject: [CentOS-devel] SIGs: Possibility to drop EOL content at 7.6.1810
release time
To: The CentOS developers mailing list. 


So here we go again. As one of the virtues of a programmer is laziness,
I'll just cut the previous email with some modifications. You know
the drill.

RHEL 7.6 was released a few days ago and building CentOS 7.6.1810 has
just started. This would be an excellent time to remove any EOL software
you may have floating around on mirror.centos.org. mirror.centos.org
should have only supported packages available.

SIGs should explicitly state which content they want copied over from
7.5.1804 to 7.6.1810.

I'd imagine that for example ceph-jewel could be dropped because it went
EOL in July 2018. Is there some other content that could be dropped? If
you are planning to keep content available that has gone EOL upstream,
you must commit to backport any required security fixes to it.

If SIGs want to transfer some of their packages from 7.5.1804 to
7.6.1810, please let hughesjr know. You can probably simply reply to
this message to let people know of your decision. It is possible that
some other SIG depends on your packages that are going to be removed. In
that case the other SIG should probably update their packages to depend
on supported versions.

Content to be transferred over to 7.6.1810 can be specified either by
directory name, or by individual package names.

There are also various centos-release-* packages in extras,
http://mirror.centos.org/centos/7.5.1804/extras/x86_64/Packages/ ,
perhaps some of those could be trimmed as well.

You should also communicate to your users in advance that the EOL
packages will disappear, and if necessary, instruct them to migrate to a
newer supported version. Having instructions for that procedure
published somewhere would be nice.

The old content under 7.5.1804, including any EOL content, will be
archived to vault.centos.org, and that content will be available in the
vault indefinitely. The 7.5.1804 directory on mirror.centos.org will be
emptied some time after 7.6.1810 is released.

For reference and inspiration, here are some directories from
mirror.centos.org, including both up-to-date content and potentially EOL
content. SIGs should review the list to make sure these directories can
be copied over to 7.6.1810 when that time comes. Making the decisions
now would save a bit of time at 7.6.1810 release time.

cloud/x86_64/openstack-ocata
cloud/x86_64/openstack-pike
cloud/x86_64/openstack-queens
cloud/x86_64/openstack-rocky
configmanagement/x86_64/ansible26
configmanagement/x86_64/yum4
dotnet
nfv/x86_64/fdio/vpp/vpp-1710
nfv/x86_64/fdio/vpp/vpp-1801
nfv/x86_64/fdio/vpp/vpp-1804
nfv/x86_64/fdio/vpp/vpp-1807
opstools/x86_64/common
opstools/x86_64/fluentd
opstools/x86_64/logging
opstools/x86_64/perfmon
opstools/x86_64/sensu
paas/x86_64/openshift-origin
paas/x86_64/openshift-origin13
paas/x86_64/openshift-origin14
paas/x86_64/openshift-origin15
paas/x86_64/openshift-origin36
paas/x86_64/openshift-origin37
paas/x86_64/openshift-origin38
paas/x86_64/openshift-origin39
paas/x86_64/openshift-origin310
sclo/x86_64/rh/devassist09
sclo/x86_64/rh/devtoolset-3
sclo/x86_64/rh/devtoolset-4
sclo/x86_64/rh/devtoolset-6
sclo/x86_64/rh/devtoolset-7
sclo/x86_64/rh/git19
sclo/x86_64/rh/go-toolset-7
sclo/x86_64/rh/httpd24
sclo/x86_64/rh/llvm-toolset-7
sclo/x86_64/rh/mariadb55
sclo/x86_64/rh/maven30
sclo/x86_64/rh/mongodb24
sclo/x86_64/rh/mysql55
sclo/x86_64/rh/nginx16
sclo/x86_64/rh/nodejs010
sclo/x86_64/rh/passenger40
sclo/x86_64/rh/perl516
sclo/x86_64/rh/php54
sclo/x86_64/rh/php55
sclo/x86_64/rh/postgresql92
sclo/x86_64/rh/python27
sclo/x86_64/rh/python33
sclo/x86_64/rh/python34
sclo/x86_64/rh/rh-eclipse46
sclo/x86_64/rh/rh-git29
sclo/x86_64/rh/rh-haproxy18
sclo/x86_64/rh/rh-java-common
sclo/x86_64/rh/rh-mariadb100
sclo/x86_64/rh/rh-mariadb101
sclo/x86_64/rh/rh-mariadb102
sclo/x86_64/rh/rh-maven33
sclo/x86_64/rh/rh-maven35
sclo/x86_64/rh/rh-mongodb26
sclo/x86_64/rh/rh-mongodb30upg
sclo/x86_64/rh/rh-mongodb32
sclo/x86_64/rh/rh-mongodb34
sclo/x86_64/rh/rh-mongodb36
sclo/x86_64/rh/rh-mysql56
sclo/x86_64/rh/rh-mysql57
sclo/x86_64/rh/rh-nginx18
sclo/x86_64/rh/rh-nginx110
sclo/x86_64/rh/rh-nginx112
sclo/x86_64/rh/rh-nodejs4
sclo/x86_64/rh/rh-nodejs6
sclo/x86_64/rh/rh-nodejs8
sclo/x86_64/rh/rh-perl520
sclo/x86_64/rh/rh-perl524
sclo/x86_64/rh/rh-perl526
sclo/x86_64/rh/rh-php56
sclo/x86_64/rh/rh-php70
sclo/x86_64/rh/rh-php71
sclo/x86_64/rh/rh-postgresql10
sclo/x86_64/rh/rh-postgresql94
sclo/x86_64/rh/rh-postgresql95
sclo/x86_64/rh/rh-postgresql96
sclo/x86_64/rh/rh-python35
sclo/x86_64/rh/rh-python36
sclo/x86_64/rh/rh-redis32
sclo/x86_64/rh/rh-ror42
sclo/x86_64/rh/rh-ror50

Re: [rdo-dev] Status of python3 PoC in RDO - 13-jul-2018

2018-07-17 Thread Alan Pevec
> I think having the two separated images is the only way we can ensure we are 
> not polluting the image in the initial phase with packages newer than those 
> in the stabilized repo.

This should be a small list; are any of those actually included in the
base image?
Alternatively, which jobs use the "normal" f28 images? Could we switch
them to use the "stabilized" f28?
Alan


Re: [rdo-dev] Status of python3 PoC in RDO - 13-jul-2018

2018-07-17 Thread Alan Pevec
>
> If we use Fedora 28 to create the initial image and then replace
> repositories, we may get packages which are newer than the ones in the
> stabilized repo, which would make the images bad for testing python3 packages.
>

These are DIB-created images; we could enable the stabilized repo when
building them?

Alan


Re: [rdo-dev] State of distgit jobs for rdoproject

2018-07-03 Thread Alan Pevec
> With the zuulv3 migration wrapping up, I wanted to start a thread about 
> projects
> that use the package-distgit-check-jobs template. These are projects like:
>
>   openstack/cloudkittyclient-distgit
>
> I wanted to raise the idea of maybe pushing these projects directly upstream 
> into
> git.openstack.org. The main reason would be to leverage the upstream testing
> infrastructure upstream, and maybe increase adoption of rpm with other
> openstack teams. It is also one less thing we as RDO have to manage on our own.


> From a governance POV these could be under an rdoproject team, tripleo or some
> other.

Ideally we would not have one core team but keep them synced with the
maintainers listed in rdoinfo;
how could that work with distgits in review.o.o?
In review.rdo we're using the SF resource manager to create gerrit ACLs;
is that a manual operation in the openstack gerrit?
> To me the main question is around publishing of RPM and DLRN. Given secrets 
> are now part of zuulv3, are there any external services running that we'd 
> need to worry about? Could the publishing process be run from an 
> openstack-infra service?

I would not be comfortable putting CBS Koji certs in openstack-infra.
Could we instead trigger CBS Koji builds via 3rd party CI in review.rdo ?

Cheers,
Alan


Re: [rdo-dev] Managing executables in python2/python3 packages

2018-06-26 Thread Alan Pevec
On Tue, Jun 26, 2018 at 7:53 PM, Alfredo Moralejo Alonso
 wrote:
> As part of the python3 PoC we are working on in the rocky cycle, I think we
> need to reconsider how we are managing executables in packages with
> python2/python3 subpackages. Currently, we are following Fedora best
> practices: creating <command>-<python version> in python{2|3}-<package>
> subpackages and making <command> a symlink to the python2 version, shipping
> it only in python2 subpackages.

This mess is only required when you have both python2 and python3 subpackages;
the idea was to switch to python3 fully in Fedora and drop python2, as per
the recent Fedora packaging guideline change
https://pagure.io/packaging-committee/issue/753 "python2-... should
not be packaged when it's not needed"

The unversioned <command> binary then links to the python3 version:
https://fedoraproject.org/wiki/Packaging:Python#Naming
> I think this is not a convenient way to manage it for python3 installations,
> as it requires users (and tools) to use <command>-3 commands. IMO, moving
> from python2 to python3 should be more transparent from a user PoV.
>
> I see some ways we could manage this:
>
> 1. Don't build python2 in el7 builds and stop creating <command>-<python
> version>, just <command>.

el7? Did you mean Fedora? If so, yes, this is the solution, as explained above.

> 2. Keep building both and change the <command> symlink to point to
> <command>-3 when python3 subpackages are enabled in the spec file.

no, that's against the current Fedora packaging guidelines, which
follow the upstream Python requirement that an unversioned "python"
executes "python2"

> 3. Keep building both and use the alternatives mechanism to manage the
> <command> symlink to point to <command>-2 or <command>-3 depending on what
> is installed (-3 if both are enabled).

NACK, same as 2.

> My main doubt is related to how to handle this from a Fedora perspective
> keeping in mind that we are using the same specs to build packages in Fedora
> repos. I guess the best way to keep both python2 and 3 installations working
> is to use option 3.

We really do not want to maintain the mess of mixed py2/3 - in Fedora
everything should move to python3,
while keeping py2 compat for EL7 builds. Looking at how much
copy/paste there is between the py2/3 sections, I wonder if we could just
generate the python3 spec from the python2 one, with some mappings in
Requires using pymod2pkg?
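A toy version of that Requires mapping (this is not the real pymod2pkg API,
just the naming rule such a generator would apply to RPM requires):

```python
# Toy mapping of a python2 RPM requirement to its python3 counterpart;
# the real work would use pymod2pkg, this only shows the idea.
def py2_to_py3_requires(name):
    for prefix in ("python2-", "python-"):
        if name.startswith(prefix):
            return "python3-" + name[len(prefix):]
    return name  # non-python deps pass through unchanged

print(py2_to_py3_requires("python2-dateutil"))  # python3-dateutil
print(py2_to_py3_requires("python-pbr"))        # python3-pbr
print(py2_to_py3_requires("openstack-nova"))    # openstack-nova
```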

Cheers,
Alan


Re: [rdo-dev] [ppc64le] TripleO Container Build Job Questions

2018-06-21 Thread Alan Pevec
> We think it'd be a lot easier to pull a couple builders in on the RDO end of

If by "RDO" you mean RDO Cloud, there are two answers:
1) adding multiarch computes was not included in the design; there's a
separate ops team managing RDO Cloud which we would need to consult
and get estimates from, and I'd expect this to be
2) we actually do not need to add physical nodes; you just need to
provide a publicly available openstack cloud account which we could add
as a separate nodepool provider. Is that an option?

> things rather than stand up a container job over on centos ci, but for the
> most part we're just looking for direction, because we've heard seemingly
> conflicting information from either side.

What is the conflicting info, and why is standing up a container job on
ci.centos.org a problem?
As was pointed out, the job is already defined there; you just need to
assign it to the ppc64le duffy nodes.

Cheers,
Alan


Re: [rdo-dev] openstack-newton rpm packages unavailable now

2018-06-06 Thread Alan Pevec
On Wed, Jun 6, 2018 at 1:44 PM, lucker zheng  wrote:
> Sorry to trouble, I met a problem when I want to install RDO newton version,
> after installed the rdo-release-newton.rpm, it links to repo
>
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-newton/
> seems no rpm available there, is this a mistake?

CentOS SIG repos are retired every time a CentOS minor update is released,
and it's up to the SIG to declare which of its releases should be retired.
In this case OpenStack Newton was already EOLed upstream in Oct 2017:
https://releases.openstack.org/
Old packages are frozen at
http://vault.centos.org/centos/7.4.1708/cloud/x86_64/openstack-newton/
and are NOT receiving any updates, so please do not use them!

> BTW I didn't find it in
> https://repos.fedorapeople.org/repos/openstack/EOL/

Those are only archives of rdo-release RPMs anyway, and we should
probably get rid of that old hosting location in Fedora.

> and I found the build system successful built newton jobs at
> https://trunk.rdoproject.org/centos7-newton/report.html

Those are available to support Fast Forward Upgrades CI testing from
Newton to Queens, building projects which did not push newton-eol tags
yet. Anything else is frozen at the Newton EOL point.
Other than for CI, any usage of those packages is not
recommended; it would be best if you could upgrade to the latest RDO
Queens release ASAP.

Cheers,
Alan


Re: [rdo-dev] qemu-kvm vs qemu-kvm-rhev requirement

2018-06-05 Thread Alan Pevec
>> then we could sub-package openstack-nova to have separate optional
>> package for each hypervisor.
>
> I'm not sure I understand what you mean.  Can you please give an
> example?

To allow hypervisor-specific deps, we could split %package compute into

%package compute-common
as-is now, just w/o hypervisor specific Requires:

and subpackage per hypervisor e.g.
%package compute-kvm
Requires: openstack-nova-compute-common = %{epoch}:%{version}-%{release}
# backward compat
Provides: openstack-nova-compute
Obsoletes: openstack-nova-compute
Requires: qemu-kvm-rhev

%package compute-xen
Requires: openstack-nova-compute-common = %{epoch}:%{version}-%{release}
Requires: xen

%package compute-vz
Requires: openstack-nova-compute-common = %{epoch}:%{version}-%{release}
Requires: qemu-kvm-vz

TBH this might be overkill since we do not run anything but
qemu-kvm-ev in CI, so the workaround of your qemu package providing
qemu-kvm-rhev is fine.
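For reference, the workaround mentioned above could look like this in the Virtuozzo qemu spec (an illustrative fragment, not taken from the actual package):

```
# hypothetical fragment of the Virtuozzo qemu-kvm-vz spec:
# satisfy openstack-nova-compute's "Requires: qemu-kvm-rhev"
# without shipping the real qemu-kvm-rhev package
Provides: qemu-kvm-rhev = %{epoch}:%{version}-%{release}
```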

Cheers,
Alan


Re: [rdo-dev] qemu-kvm vs qemu-kvm-rhev requirement

2018-06-05 Thread Alan Pevec
>> On Mon, Jun 04, 2018 at 09:41:07PM +0300, Roman Kagan wrote:
>>> I'm now trying to figure out what is needed to make our QEMU package
>>> work with Nova; any help will be appreciated.

Where is the Virtuozzo qemu RPM coming from?
If it is a separate package, could it carry a Virtuozzo-specific Provides, e.g.
Provides: qemu-virtuozzo
then we could sub-package openstack-nova to have a separate optional
package for each hypervisor.

Cheers,
Alan


Re: [rdo-dev] EPEL packages needed by OpenStack-Ansible

2018-05-29 Thread Alan Pevec
Quick note: each dep needs to be examined case by case (why it is needed
and whether it really fits in the RDO/Cloud SIG), not just rebuilt blindly.

Alan


Re: [rdo-dev] Strategy on packaging external dependencies in RDO + include Ansible in RDO

2018-04-03 Thread Alan Pevec
On Fri, Mar 30, 2018 at 4:31 AM, Sam Doran  wrote:
>> Ansible RPMs are already there
>> http://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ but they
>> depend on EPEL for additional deps.
>
> Ansible RPMs have always been there. I don't believe they depend on anything
> in EPEL.

You are correct, I had some stale info or mixed it up with something else.
Here is yum install output on an empty CentOS7 machine:
Installing:
 ansible  noarch 2.4.3.0-1.el7.ans /ansible-2.4.3.0-1.el7.ans.noarch
Installing for dependencies:
 PyYAML   x86_64 3.10-11.el7   base
 libyaml  x86_64 0.1.4-11.el7_0base
 python-babel noarch 0.9.6-8.el7   base
 python-cffi  x86_64 1.6.0-5.el7   base
 python-enum34noarch 1.0.4-1.el7   base
 python-idna  noarch 2.4-1.el7 base
 python-ipaddress noarch 1.0.16-2.el7  base
 python-jinja2noarch 2.7.2-2.el7   base
 python-markupsafex86_64 0.11-10.el7   base
 python-paramiko  noarch 2.1.1-4.el7   extras
 python-ply   noarch 3.4-11.el7base
 python-pycparser noarch 2.14-1.el7base
 python-setuptoolsnoarch 0.9.8-7.el7   base
 python2-cryptography x86_64 1.7.2-1.el7_4.1   updates
 python2-pyasn1   noarch 0.1.9-7.el7   base
 sshpass  x86_64 1.06-2.el7extras

> sshpass and paramiko come from Extras, python2-cryptography comes from
> updates.

My concern is that if those were included in Extras for Ansible, they
would be removed from Extras together with ansible.

> I'm not sure if any of that is helpful since you mentioned it would need to
> be built by the appropriate SIG anyway.

Yes, ideally we would be able to get ConfigMgmt SIG going, in the
meantime other SIGs are rebuilding on their own e.g. Virt SIG/oVirt
did 2.4.3 http://cbs.centos.org/koji/buildinfo?buildID=21591
As a quick fix, we could also temporarily push this to the RDO deps repo,
until we have the rest of the plan ready.

>> BTW ideal approach would be to insert OpenStack use-cases into Ansible
>> upstream CI and make it voting, this could become reality with cross-project
>> CI efforts lead by openstack-infra. With that, Ansible master would never
>> break us!
>
> I don't entirely follow this, but I think it sounds like what I proposed
> above: having OpenStack test the devel branch of Ansible so Ansible
> Engineering can get feedback quickly if things are broken prior to a
> release. I know some of the OpenStack infra folks, and the networking team
> within Ansible has been doing a lot of work with them with Zuul for
> distributed CI. Myself and Ricardo Cruz on the Ansible side are very
> interested in hooking up more testing of Ansible as it relates to OpenStack
> using Zuul run by OpenStack Infra. Ricki and I talked about this a bunch at
> the PTG but have been working on other things since we got back.

Yes, the above was a forward-looking CD world where, given infinite CI
resources, everything is tested pre-commit across collaborating
projects.
Trunk RPMs from the devel branch are definitely a step in that
direction; the progression scale is:
no testing, push the latest release, hope for the best -> CI with
latest release -> CI with devel branch -> CI pre-commit

Cheers,
Alan


Re: [rdo-dev] Strategy on packaging external dependencies in RDO + include Ansible in RDO

2018-03-29 Thread Alan Pevec
> - Identify which projects we have had troubles in the past months and that
> we are not automatically testing when new versions bump up (OVS? Ceph? etc)
> - For these projects, how could we either 1) import them in delorean and

Pretty please s/delorean/RDO Trunk/
DLRN is a tool, RDO Trunk are the repos built by it.
(see Alfredo's blog series starting with [1] for the introduction to RDO repos)

> automatically bump them at each new tag, with proper CI job in place or 2)
> have a repo that is nightly build from imports on latest tags available from
> their upstream repos), and have periodic jobs that test these bits.

Upstream trunk chasing requires involvement from the people working
closely on that upstream project,
and the reality is that e.g. even for OpenStack projects, FTBFS issues in
RDO Trunk are fixed by the CI and infra teams most of the time.
Until we have clear buy-in from subject-matter experts, we cannot add
a project directly to the trunk chase.
There could be a compromise, read on.

> What I like with 1) is that we can easily test them independently (e.g. a
> new version of OVS) versus all together in 2) (a new version of OVS + a new
> version of MariaDB, etc).

For OVS specifically, Javier and I looked at it and opened a story for
DLRN support [2];
tl;dr the project would be in rdoinfo but pinned, and every pin update
would be gated by rdoinfo CI.
The open questions are which CI jobs give good enough coverage while not
being too expensive (hint: running full TripleO CI is NOT the answer).
This would limit the amount of racing against upstream master, as pin
updates would be proposed by the interested person, who would then own
them until they work and pass the gate.
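In rdoinfo terms, the pinned project could look roughly like this (field names are illustrative; check the actual rdoinfo schema before relying on them):

```yaml
# hypothetical rdoinfo entry: built by DLRN, but pinned to a tag,
# so master only moves when this line is changed via a gated review
- project: openvswitch
  conf: nonopenstack
  tags:
    master:
      source-branch: v2.9.2
```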

Note this is all for the OpenStack/RDO release in _development_ i.e.
master CI, for stable releases we would use stable release of
dependencies, ideally built and released in the CentOS SIG repos
i.e. Ceph from Storage SIG (this is the case now), OVS from NFV SIG
(not the case now), Ansible from ConfigManagement SIG (not the case
now) ...

> Which leads to my second question... how are going to ship Ansible.
> I talked with Sam Doran (in cc) today and it seems like in the near future
> Ansible would be shipped via releases.ansible.com or EPEL (and not from CentOS
> Extras anymore).

Ansible RPMs are already there
http://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/
but they depend on EPEL for additional deps. To make it work in CentOS
SIG ecosystem, we need them rebuilt and released in the appropriate
SIG.

> So here are the questions:
>
> 1) How are we going to test new versions of Ansible in RDO CI context?
> 2) Where should we ship it? Back in my proposal, would it make sense to
> import Ansible in RDO and gate each bump (proposal #1) or take upstream, put
> it in a deps repo (as we have currently, I believe) and test it with periodic
> jobs among other new deps.

Ansible is not in the deps repo, we are getting it from Extras.
NB there are multiple Ansibles in the CI context: Zuul/softwarefactory
is pinned to an older version installed from pypi, and the same goes for
tripleo-quickstart (ideally those would use the RPM Ansible but that's a
separate discussion).
The Ansible RPM is only used by tripleo itself and is currently coming
from Extras, where the latest version is 2.4.2 [3], and I saw requests to
get newer.
For master we could try the same plan as OVS above, for stable
releases I'd ask Ansible team to get involved with ConfigManagement
SIG.

BTW the ideal approach would be to insert OpenStack use-cases into Ansible
upstream CI and make them voting; this could become reality with the
cross-project CI efforts led by openstack-infra. With that, Ansible
master would never break us!

> Discussion is open, thanks for participating,

Thanks for starting the discussion!

Alan

[1] 
https://blogs.rdoproject.org/2016/04/new-in-rdo-repos-one-size-doesn-t-fit-all/
[2] https://tree.taiga.io/project/morucci-software-factory/us/1044
[3] https://git.centos.org/log/rpms!ansible.git/c7-extras


Re: [rdo-dev] proposal to alert #rdo w/ issues from launchpad

2018-03-03 Thread Alan Pevec
Hi Wes,
I'd prefer to integrate those alerts into the existing RDO monitoring instead
of adding one more bot.
We have the #rdo-dev channel where infra alerts would fit better; can you show
a few example LPs where those tags would be applied?

Alan


Re: [rdo-dev] [all] Revisiting RDO Technical Definition of Done

2018-02-27 Thread Alan Pevec
Hi all,

tomorrow is the release day for OpenStack Queens, so I wanted to bring
this thread to the conclusion!

On Wed, Nov 29, 2017 at 12:16 PM, Alan Pevec <ape...@redhat.com> wrote:
> Proposal would be to redefine DoD as follows:
> - RDO GA release delivers RPM packages via CentOS Cloud SIG repos,
> built from pristine upstream source tarballs
> - CI promotion GA criteria is changed from Jenkins pipeline to the
> list of jobs running with RPM packages directly, initial set would be
> all weirdo jobs running in [3]

since we now have RDO release automation, CI promotion GA criteria is
"validate-buildsys-tags" job triggered by the rdoinfo update
e.g. Queens RC https://review.rdoproject.org/r/12644

> - TripleO jobs would not be part of RDO GA criteria since TripleO now
> requires containers which RDO will not ship. TripleO promotion CI will
> continue running with containers built with RDO Trunk packages.

Tomorrow we are releasing RDO OpenStack Queens release packages,
TripleO as a "cycle-trailing" project[*] will follow within the next two weeks.

Cheers,
Alan

[*] https://releases.openstack.org/reference/release_models.html#cycle-trailing


Re: [rdo-dev] Zuul v3 in RDO SF

2018-02-19 Thread Alan Pevec
On Mon, Feb 19, 2018 at 2:46 PM, Sagi Shnaidman  wrote:
> just curious, is it known when we move to zuulv3 in RDO Software Factory?
> Do we have a plan for that?

Pre-requisite is to migrate tripleo ci to v3, discussed in
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125735.html
and tracked in https://etherpad.openstack.org/p/rdosf_zuulv3_planning
Rafael is looking at that using an internal SF instance and Jakub is
helping from the Software Factory side.
Related is also an effort, discussed a few RDO meetings back, to unify the
softwarefactory-project.io and review.rdoproject.org SF instances,
where only Zuul v3 would be available.
This is in planning, with no ETA yet.

Cheers,
Alan


Re: [rdo-dev] [ppc64le] Some greenlet packages seem to be wrong arch

2018-02-14 Thread Alan Pevec
On Tue, Feb 13, 2018 at 6:05 PM, Michael Turek wrote:
> Sorry for the confusion Haïkel, See link [1] for what I'm talking about
> https://trunk.rdoproject.org/centos7-queens/deps/latest/ppc64le/

That must have been a mistake during the recent deps repo sync, I'll clean it up.

Cheers,
Alan


Re: [rdo-dev] [Octavia] Providing service VM images in RDO

2018-01-10 Thread Alan Pevec
Hi Bernard,

I've added this as a topic for the
https://etherpad.openstack.org/p/RDO-Meeting today,
with some initial questions to explore.

On Wed, Jan 10, 2018 at 1:50 PM, Bernard Cafarelli  wrote:
> * easier install/maintenance for the user, tripleo can consume the
> image directly (from a package)

How do you plan to distribute the image, wrapped inside RPM?

> * ensuring up-to-date amphora images (to match the controller version,
> for security updates, …)

That's the most critical part of the process to figure out, how to
automate image updates:
just run it daily, trigger a rebuild when included packages change...
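The "trigger when included packages change" check could be as simple as comparing the package versions recorded at image-build time with what the repos currently provide. A sketch (the manifest and repo-query plumbing are assumed to exist elsewhere):

```python
def image_needs_rebuild(image_manifest, repo_versions):
    """Decide whether the amphora image is stale.

    image_manifest: {package: version} recorded when the image was built
    repo_versions:  {package: version} currently available in the repos
    """
    for pkg, built_ver in image_manifest.items():
        current = repo_versions.get(pkg)
        # a package that disappeared from the repos can't trigger a rebuild
        if current is not None and current != built_ver:
            return True
    return False
```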

> * allow to test and confirm the amphora works properly with latest
> changes, including enforcing SELinux (CentOS upstream gate runs in
> permissive)

And that's the second part, which CI job would test it properly?

> * use this image in tripleo CI (instead of having to build it there)

Related to the up-to-date issue: how do we ensure it's the latest for CI?

> * (future) extend this system for other system VM images

Which ones?

Cheers,
Alan


[rdo-dev] [all] Revisiting RDO Technical Definition of Done

2017-11-29 Thread Alan Pevec
Hi all,

we as a community last discussed RDO definition of done more than a
year ago and it was documented[1]

In the meantime we have had multiple changes in the RDO promotion
process; the most significant is that we no longer run all the CI promotion
jobs in a single Jenkins pipeline - instead there is now an
increasing number of periodic Zuul jobs in review.rdoproject.org
reporting to the DLRN API database.
Promotion is performed asynchronously when all the required jobs report success.
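The asynchronous promotion boils down to: for a given repo hash, promote once every job in the required set has reported success to the DLRN API. A sketch with plain data structures (the job names are invented, and the real check goes through the DLRN API service rather than an in-memory list):

```python
# invented example job names; the real required set lives in the
# promotion configuration
REQUIRED_JOBS = {
    "periodic-tripleo-ci-ovb-ha",
    "weirdo-puppet-scenario001",
}

def can_promote(repo_hash, votes):
    """votes: iterable of (repo_hash, job_name, success) tuples,
    one per CI job report in the DLRN API database."""
    passed = {h_job for h, h_job, ok in votes if h == repo_hash and ok}
    return REQUIRED_JOBS.issubset(passed)
```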

At the same time, TripleO, as the deployment project with the most
coverage in the promotion CI, has moved to be completely containerized
in Queens.
While RDO does provide a container registry which is used with RDO
Trunk, there aren't currently plans to provide containers built from
the stable RPM builds, as discussed on this list [2] around Pike GA.
Even if we do all the work listed in [2], the problem remains that containers
are currently installer-specific and we cannot realistically provide a
separate set of containers for each of TripleO, Kolla, OSA...

Proposal would be to redefine DoD as follows:
- RDO GA release delivers RPM packages via CentOS Cloud SIG repos,
built from pristine upstream source tarballs
- CI promotion GA criteria is changed from Jenkins pipeline to the
list of jobs running with RPM packages directly, initial set would be
all weirdo jobs running in [3]
- TripleO jobs would not be part of RDO GA criteria since TripleO now
requires containers which RDO will not ship. TripleO promotion CI will
continue running with containers built with RDO Trunk packages.

I'm adding this topic to the agenda for today's RDO meeting; I won't
be able to join, but we need to get the discussion going so we have an
updated DoD ready for Queens GA.

Cheers,
Alan

[1] https://www.rdoproject.org/blog/2016/05/technical-definition-of-done/
[2] https://www.redhat.com/archives/rdo-list/2017-August/msg00069.html
[3] 
https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/