[rdo-dev] FYI.. docker.io rate limiting

2021-04-28 Thread Wesley Hayutin
Greetings,

Most of the tripleo ci jobs at
https://ci.centos.org/view/rdo/view/promotion-pipeline/
are now failing due to docker.io rate limits.  This may impact how often
the various releases promote.

0/
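
For reference, Docker Hub reports the pull quota in `RateLimit-Limit` / `RateLimit-Remaining` response headers, with values like `100;w=21600` (100 pulls per 21600-second window). A small helper for parsing those headers in an audit script might look like this; it is an illustrative sketch, not part of the CI jobs themselves:

```python
def parse_ratelimit(value):
    """Parse a Docker Hub 'RateLimit-*' header value such as '100;w=21600'
    into (pull_count, window_seconds)."""
    count, _, window = value.partition(";")
    return int(count), int(window.split("=", 1)[1])

def pulls_remaining(headers):
    """Return how many pulls are left in the current window."""
    remaining, _window = parse_ratelimit(headers["RateLimit-Remaining"])
    return remaining

print(parse_ratelimit("100;w=21600"))  # (100, 21600)
```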
___
dev mailing list
dev@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/dev

To unsubscribe: dev-unsubscr...@lists.rdoproject.org


Re: [rdo-dev] Planned outage of review.rdoproject.org: 2021-03-15 from 14:00 to 16:00 UTC

2021-03-15 Thread Wesley Hayutin
Thanks Nicolas!


On Mon, Mar 15, 2021 at 9:51 AM Nicolas Hicher  wrote:

> Hello folks,
>
> The upgrade operation is done, all services are up and running.
>
> New user interfaces have been deployed.
>
> Gerrit has been updated to the last stable version.
>
> You may have to clean your browser cache if you get 'Plugin install error:
> TypeError: self.onAction ... delete-project.js' on the Gerrit page.
>
> Regards,
> Nicolas , on behalf of the Software Factory Operation Team
>
> [1] https://phabricator.wikimedia.org/T256560
>
>
> On 3/9/21 1:48 PM, nhic...@redhat.com wrote:
>
> Hello folks,
>
> We plan to upgrade the software factory deployment on 2021-03-15 from 14:00
> to 16:00 UTC to the next 3.6 release (including Gerrit 3.2, Zuul and Nodepool
> 4.0, and a new web UI for the welcome page of software-factory).
>
> Service interruption is expected, including:
> - Zuul CI not running jobs for gerrit, github or opendev.
> - RDO Trunk not building new packages.
> - DLRN API.
> - review.rdoproject.org and softwarefactory-project.io gerrit service.
> - www.rdoproject.org and lists.rdoproject.org.
>
> We expect that the interruption of services will be less than 2 hours.
>
> Regards,
> Nicolas , on behalf of the Software Factory Operation Team
>


Re: [rdo-dev] [rdo-users] [RDO] Weekly Status for 2020-10-09

2020-10-12 Thread Wesley Hayutin
On Mon, Oct 12, 2020 at 4:59 AM YATIN KAREL  wrote:

> Promotions:-
>   * Latest promotions (TripleO CI):
> * Master: 06th Oct
> * Ussuri: 09th Oct
> * Train(C8): 08th Oct
> * Train(C7): 17th Sept
> * Stein: 08th Oct
>   * Known blockers master
> * https://bugs.launchpad.net/tripleo/+bug/1897947 (octavia component)
> * https://bugs.launchpad.net/tripleo/+bug/1898931 (integration)
>   * Known blocker train(C7)
> * https://review.rdoproject.org/r/#/c/29397/
>
> Packages
>   * Tempest is bumped to 25.0.0 in Ussuri and Train
>   * Octavia-tempest-plugin is pinned in victoria+ due to missing
> deps(httpx)
> * https://review.rdoproject.org/etherpad/p/httpx-centos8-octavia
>
> Victoria Release Preparation
>   * https://review.rdoproject.org/etherpad/p/victoria-release-preparation
>   * Most of the upstream projects have released RC2 bits; all of these
> projects are tagged for release and available in CentOS Mirrors
> * http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/
>   * https://review.rdoproject.org/r/#/q/topic:victoria-branching
>   * openstack/requirements has cut stable/victoria, and now master is
> switched to wallaby tags in rdoinfo
> * https://review.rdoproject.org/r/29934
>   * Victoria jobs are being added by TripleO CI Team
> * https://hackmd.io/sasvoKjTSkabGlFOTkK_kw#bugs-and-reviews
>

Just a quick update here: this is still a WIP, so do not use Victoria for
tripleo deployments.

We did have our first promotion today, /me notes this is NOT with full
coverage yet.
https://trunk.rdoproject.org/api-centos8-victoria/api/civotes_agg_detail.html?ref_hash=13431bf4c5a8ee8c0231f11d43050939
https://trunk.rdoproject.org/centos8-victoria/current-tripleo/
http://images.rdoproject.org/centos8/victoria/rdo_trunk/
https://hub.docker.com/u/tripleovictoria

Nice progress though :)





>
> Other
>   * Implementation of source verification with GPG signatures is in
> progress:
> *
> https://review.rdoproject.org/r/#/q/topic:gpg-verification+(status:open+OR+status:merged)
>
>
> On behalf of RDO


[rdo-dev] RDO packaging for CentOS-Stream

2020-08-21 Thread Wesley Hayutin
Greetings,

Are there any public plans for building RDO packages on CentOS-Stream
available for the community to review?

Thanks 0/


Re: [rdo-dev] RDO Cloud operations today

2020-07-10 Thread Wesley Hayutin
On Fri, Jul 10, 2020 at 1:46 AM Alan Pevec  wrote:

> > last update as of a few hours ago was: rdocloud networking should now be
> > stable, uplink is not redundant, IT will work on getting back failover
> > during the day
>
> Update as of this morning:
> uplink redundancy was restored last night,
> restoring full CI pool is planned today.
>
> Cheers,
> Alan
>

Good news.. thank you


Re: [rdo-dev] RDO Cloud operations today

2020-07-09 Thread Wesley Hayutin
On Wed, Jul 8, 2020 at 5:11 AM Alan Pevec  wrote:

> Hi all,
>
> FYI RDO Cloud is undergoing scheduled movement of some of its racks,
> control plane and infra services (www, lists, CI pool) should stay up
> all the time.
> In case of unplanned outage we'll let you know in this thread and also
> announce when those operations are finished.
> At one point there will be reduced CI pool capacity, so expect to see
> longer queues in https://review.rdoproject.org/zuul/status during the
> day.
>
> Cheers,
> Alan
>
>
Any updates on the status of the operations?




Re: [rdo-dev] New services on rdoproject.org

2020-04-28 Thread Wesley Hayutin
On Tue, Apr 28, 2020 at 9:35 AM Daniel Pawlik  wrote:

> Hi,
>
> Today we enabled new services on rdoproject.org:
> - cgit
> - codesearch
>

Great additions! Thanks Daniel!


>
> The old domain codesearch.rdoproject.org will be removed; the
> service address is now: https://review.rdoproject.org/codesearch/.
> I suggest a "hard" refresh of review.rdoproject.org to
> update the site cache.
>
> If you have any problems related to this change, please do not hesitate to
> contact me.
>
>
> Regards,
> Dan


[rdo-dev] Fwd: [ansible-sig][kolla][openstack-ansible][osc][sdk][tripleo] OpenStack modules broken in Ansible 2.8.9

2020-03-05 Thread Wesley Hayutin
FYI..
Please make sure ansible does not get bumped to 2.8.9; we are currently at
2.8.8 in https://trunk.rdoproject.org/centos8-master/deps/latest/noarch/

Thanks!

-- Forwarded message -
From: Mark Goddard 
Date: Thu, Mar 5, 2020 at 8:28 AM
Subject: [ansible-sig][kolla][openstack-ansible][osc][sdk][tripleo]
OpenStack modules broken in Ansible 2.8.9
To: openstack-discuss 


Hi,

The 2.8.9 release of Ansible has a regression [1] which breaks the
OpenStack modules. I've proposed a simple fix, hopefully it will be
included in a 2.8.10 release soon but in the meantime you may need to
blacklist 2.8.9.

[1] https://github.com/ansible/ansible/issues/68042
[2] https://github.com/ansible/ansible/pull/68043
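
The "blacklist 2.8.9" advice amounts to a simple version predicate. A hedged sketch of what a gating check might verify before accepting an ansible build (illustrative only; RDO's actual pinning lives in the RPM dependency repos):

```python
# The one release known to break the OpenStack modules per [1];
# 2.8.8 and the fixed 2.8.10 remain acceptable.
BLOCKED_VERSIONS = {(2, 8, 9)}

def version_allowed(version):
    """Return False for ansible versions on the blacklist."""
    return tuple(int(part) for part in version.split(".")) not in BLOCKED_VERSIONS

print(version_allowed("2.8.8"))   # True
print(version_allowed("2.8.9"))   # False
```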

Cheers,
Mark


Re: [rdo-dev] [rdo-users] [Meeting] RDO meeting (2020-02-26) minutes

2020-03-02 Thread Wesley Hayutin
On Fri, Feb 28, 2020 at 4:36 PM Wesley Hayutin  wrote:

>
>
> On Wed, Feb 26, 2020 at 7:45 AM YATIN KAREL  wrote:
>
>> ==
>> #rdo: RDO meeting - 2020-02-26
>> ==
>>
>>
>> Meeting started by ykarel at 14:02:31 UTC.  The full logs are available
>> at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_02_26/2020/rdo_meeting___2020_02_26.2020-02-26-14.02.log.html
>> .
>>
>>
>>
>> Meeting summary
>> ---
>>
>> * roll call  (ykarel, 14:02:57)
>>
>> * CentOS 8 updates:  (ykarel, 14:06:26)
>>   * CBS buildsys-tags for ussuri on centos8 is merged  (amoralej,
>> 14:07:15)
>>   * Moving back to ansible 2.8 in CentOS8 -
>> 
>> http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012820.html
>> (amoralej, 14:09:11)
>>   * LINK: https://cbs.centos.org/koji/buildinfo?buildID=28541  (ykarel,
>> 14:14:59)
>>   * LINK:
>> 
>> https://github.com/ansible/ansible/blob/stable-2.8/packaging/rpm/ansible.spec
>> (fultonj, 14:15:59)
>>   * LINK: https://cbs.centos.org/koji/buildinfo?buildID=28541  (ykarel,
>> 14:16:14)
>>   * Unpinning ussuri-uc  (amoralej, 14:20:13)
>>   * LINK:
>> https://lists.rdoproject.org/pipermail/dev/2020-February/009279.html
>> (amoralej, 14:20:21)
>>
>> Unpinning should have the following criteria.
> 1. Each centos-8 component has a component pipeline and test ( done )
> 2.  The TripleO-CI team has a centos-8 integration pipeline running and
> passing with an appropriate set of jobs ( check w/ team )
> 3.  A representative of the RDO packaging team attends the #tripleo
> meeting and updates the group on when unpinning will happen and the
> expected diff.
>
> Thanks!!
>

Please respond to the above criteria.
Please do NOT unpin w/o coordinating w/ the tripleo-ci team.
Thanks


>
>
>>   * once we have one more promotion, i'll propose in the ML thread to
>> unpin  (amoralej, 14:23:37)
>>   * Currently building train pkgs
>> https://review.rdoproject.org/r/#/q/topic:train-centos8  (ykarel,
>> 14:31:03)
>>
>> * chair for next week  (ykarel, 14:33:36)
>>   * ACTION: jcapitao to chair next week  (ykarel, 14:34:55)
>>
>> * open floor  (ykarel, 14:35:04)
>>
>>
>>
>> Meeting ended at 14:43:02 UTC.
>>
>>
>>
>> Action items, by person
>> ---
>>
>> * jcapitao
>>   * jcapitao to chair next week
>>
>>
>>
>> People present (lines said)
>> ---
>>
>> * amoralej (47)
>> * ykarel (45)
>> * fultonj (17)
>> * rdogerrit (10)
>> * openstack (9)
>> * jcapitao (3)
>> * jpena (3)
>> * leanderthal (1)
>> * rh-jelabarre (1)
>>
>>
>> Generated by `MeetBot`_ 0.1.4
>>


Re: [rdo-dev] [rdo-users] [Meeting] RDO meeting (2020-02-26) minutes

2020-02-28 Thread Wesley Hayutin
On Wed, Feb 26, 2020 at 7:45 AM YATIN KAREL  wrote:

> ==
> #rdo: RDO meeting - 2020-02-26
> ==
>
>
> Meeting started by ykarel at 14:02:31 UTC.  The full logs are available
> at http://eavesdrop.openstack.org/meetings/rdo_meeting___2020_02_26/2020/rdo_meeting___2020_02_26.2020-02-26-14.02.log.html
> .
>
>
>
> Meeting summary
> ---
>
> * roll call  (ykarel, 14:02:57)
>
> * CentOS 8 updates:  (ykarel, 14:06:26)
>   * CBS buildsys-tags for ussuri on centos8 is merged  (amoralej,
> 14:07:15)
>   * Moving back to ansible 2.8 in CentOS8 -
> 
> http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012820.html
> (amoralej, 14:09:11)
>   * LINK: https://cbs.centos.org/koji/buildinfo?buildID=28541  (ykarel,
> 14:14:59)
>   * LINK:
> 
> https://github.com/ansible/ansible/blob/stable-2.8/packaging/rpm/ansible.spec
> (fultonj, 14:15:59)
>   * LINK: https://cbs.centos.org/koji/buildinfo?buildID=28541  (ykarel,
> 14:16:14)
>   * Unpinning ussuri-uc  (amoralej, 14:20:13)
>   * LINK:
> https://lists.rdoproject.org/pipermail/dev/2020-February/009279.html
> (amoralej, 14:20:21)
>
> Unpinning should have the following criteria.
1. Each centos-8 component has a component pipeline and test ( done )
2.  The TripleO-CI team has a centos-8 integration pipeline running and
passing with an appropriate set of jobs ( check w/ team )
3.  A representative of the RDO packaging team attends the #tripleo meeting
and updates the group on when unpinning will happen and the expected diff.

Thanks!!


>   * once we have one more promotion, i'll propose in the ML thread to
> unpin  (amoralej, 14:23:37)
>   * Currently building train pkgs
> https://review.rdoproject.org/r/#/q/topic:train-centos8  (ykarel,
> 14:31:03)
>
> * chair for next week  (ykarel, 14:33:36)
>   * ACTION: jcapitao to chair next week  (ykarel, 14:34:55)
>
> * open floor  (ykarel, 14:35:04)
>
>
>
> Meeting ended at 14:43:02 UTC.
>
>
>
> Action items, by person
> ---
>
> * jcapitao
>   * jcapitao to chair next week
>
>
>
> People present (lines said)
> ---
>
> * amoralej (47)
> * ykarel (45)
> * fultonj (17)
> * rdogerrit (10)
> * openstack (9)
> * jcapitao (3)
> * jpena (3)
> * leanderthal (1)
> * rh-jelabarre (1)
>
>
> Generated by `MeetBot`_ 0.1.4
>


Re: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds

2020-02-24 Thread Wesley Hayutin
On Mon, Feb 24, 2020 at 8:55 AM Mark Goddard  wrote:

> On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso
>  wrote:
> >
> >
> >
> > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard  wrote:
> >>
> >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard  wrote:
> >> >
> >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek
> >> >  wrote:
> >> > >
> >> > > I know it was for masakari.
> >> > > Gaëtan had to grab crmsh from opensuse:
> >> > >
> http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/
> >> > >
> >> > > -yoctozepto
> >> >
> >> > Thanks Wes for getting this discussion going. I've been looking at
> >> > CentOS 8 today and trying to assess where we are. I created an
> >> > Etherpad to track status:
> >> > https://etherpad.openstack.org/p/kolla-centos8
> >>
> >
> > uwsgi and etcd are now available in rdo dependencies repo. Let me know
> if you find some issue with it.
>
> I've been working on the backport of kolla CentOS 8 patches to the
> stable/train branch. It looks like these packages which were added to
> master are not present in Train.
>
>
I'll help check in on that for you Mark.
Thank you!!



> >
> >>
> >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error
> >> code when installing packages. It often happens on the rabbitmq and
> >> grafana images. There is a prompt about importing GPG keys prior to
> >> the error.
> >>
> >> Example:
> https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log
> >>
> >> Related bug report? https://github.com/containers/libpod/issues/4431
> >>
> >> Anyone familiar with it?
> >>
> >
> > Didn't know about this issue.
> >
> > BTW, there is rabbitmq-server in RDO dependencies repo if you are
> interested in using it from there instead of rabbit repo.
> >
> >> >
> >> > >
> >> > > pon., 27 sty 2020 o 10:13 Marcin Juszkiewicz
> >> > >  napisał(a):
> >> > > >
> >> > > > W dniu 27.01.2020 o 09:48, Alfredo Moralejo Alonso pisze:
> >> > > > > How is crmsh used in these images?, ha packages included in
> >> > > > > HighAvailability repo in CentOS includes pcs and some crm_*
> commands in pcs
> >> > > > > and pacemaker-cli packages. IMO, tt'd be good to switch to
> those commands
> >> > > > > to manage the cluster.
> >> > > >
> >> > > > No idea. Gaëtan Trellu may know - he created those images.
> >> > > >
> >> > >
> >>
>
>
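
One note on the DNF failure quoted above: POSIX shells report a process killed by a signal as 128 plus the signal number, so an exit status of 141 usually means SIGPIPE (13), which fits the GPG-key prompt being cut off mid-stream. A quick check:

```python
import signal

def signal_from_exit_status(status):
    """Map a shell exit status above 128 back to the fatal signal, if any."""
    return signal.Signals(status - 128) if status > 128 else None

print(signal_from_exit_status(141).name)  # SIGPIPE
```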


[rdo-dev] [tripleo] missing centos-8 rpms for kolla builds

2020-01-24 Thread Wesley Hayutin
Greetings,

I know the ceph repo is in progress.
TripleO / RDO is not releasing opendaylight

Can the RDO team comment on the rest of the missing packages here please?

Thank you!!

https://review.opendev.org/#/c/699414/9/kolla/image/build.py

# NOTE(mgoddard): Mark images with missing dependencies as unbuildable for
# CentOS 8.
'centos8': {
    "barbican-api",          # Missing uwsgi-plugin-python3
    "ceph-base",             # Missing Ceph repo
    "cinder-base",           # Missing Ceph repo
    "collectd",              # Missing collectd-ping and
                             # collectd-sensubility packages
    "elasticsearch",         # Missing elasticsearch repo
    "etcd",                  # Missing etcd package
    "fluentd",               # Missing td-agent repo
    "glance-base",           # Missing Ceph repo
    "gnocchi-base",          # Missing Ceph repo
    "hacluster-base",        # Missing hacluster repo
    "ironic-conductor",      # Missing shellinabox package
    "kibana",                # Missing elasticsearch repo
    "manila-share",          # Missing Ceph repo
    "mongodb",               # Missing mongodb and mongodb-server
                             # packages
    "monasca-grafana",       # Using python2
    "nova-compute",          # Missing Ceph repo
    "nova-libvirt",          # Missing Ceph repo
    "nova-spicehtml5proxy",  # Missing spicehtml5 package
    "opendaylight",          # Missing opendaylight repo
    "ovsdpdk",               # Not supported on CentOS
    "sensu-base",            # Missing sensu package
    "tgtd",                  # Not supported on CentOS 8
},

'centos8+source': {
    "barbican-base",         # Missing uwsgi-plugin-python3
    "bifrost-base",          # Bifrost does not support CentOS 8
    "cyborg-agent",          # opae-sdk does not support CentOS 8
    "freezer-base",          # Missing package trickle
    "masakari-monitors",     # Missing hacluster repo
    "zun-compute",           # Missing Ceph repo


Re: [rdo-dev] Proposing Sorin Sbarnea (zbr) and Marios Andreou (marios) as core on review.rdoproject.org repos config and rdo-jobs

2019-08-23 Thread Wesley Hayutin
+1


On Fri, Aug 23, 2019 at 8:31 AM Ronelle Landy  wrote:

> +1
> Both would be great additions
>
> On Fri, Aug 23, 2019 at 10:22 AM Chandan kumar 
> wrote:
>
>> Hello,
>>
>> I'd like to propose Sorin and Marios as new cores on the
>> review.rdoproject.org repos config and rdo-jobs.
>>
>> Sorin and Marios have done amazing work on these repos by providing
>> valuable feedback and knowledge on reviews, and by making sure changes can
>> be tested, bringing in molecule and avoiding repetition.
>>
>> They are both knowledgeable in the design and working of TripleO jobs and
>> did amazing work while bringing RHEL8 to RDO.
>>
>> Having them as a core on these repos would be a great addition.
>>
>> The vote will end on Friday, Aug 30.
>>
>> Thanks,
>>
>> Chandan Kumar


Re: [rdo-dev] [infra][tripleo-ci] Disk space usage in logs.rdoproject.org

2019-06-13 Thread Wesley Hayutin
On Thu, Jun 13, 2019 at 8:55 AM Javier Pena  wrote:

>
>
> --
>
>
>
> On Thu, Jun 13, 2019 at 8:22 AM Javier Pena  wrote:
>
>> Hi all,
>>
>> For the last few days, I have been monitoring a spike in disk space
>> utilization for logs.rdoproject.org. The current situation is:
>>
>> - 94% of space used, with less than 140GB out of 2TB available.
>> - The log pruning script has been reclaiming less space than we are using
>> for new logs during this week.
>> - I expect the situation to improve over the weekend, but we're
>> definitely running out of space.
>>
>> I have looked at a random job (https://review.opendev.org/639324, patch
>> set 26), and found that each run is consuming 1.2 GB of disk space in logs.
>> The worst offenders I have found are:
>>
>> - atop.bin.gz files (one per job, 8 jobs per recheck), ranging between 15
>> and 40 MB each
>> - logs/undercloud/home/zuul/tempest/.stackviz directory on
>> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001 jobs, which is a
>> virtualenv eating up 81 MB.
>>
>
> Can we sync up on how you are calculating these results? They do not
> match ours.
> I see each job consuming about 215M of space; we are close on stackviz
> being 83M. Oddly, I don't see atop.bin.gz in our calculations, so I'll have
> to look into that.
>
> I've checked it directly using du on the logserver. By 1.2 GB I meant the
> aggregate of the 8 jobs running for a single patchset. PS26 is currently
> using 2.5 GB and had one recheck.
>
> About the atop.bin.gz file:
>
> # find . -name atop.bin.gz -exec du -sh {} \;
> 16M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens-branch/042cb8f/logs/undercloud/var/log/atop.bin.gz
> 16M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens-branch/e4171d7/logs/undercloud/var/log/atop.bin.gz
> 28M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-rocky-branch/ffd4de9/logs/undercloud/var/log/atop.bin.gz
> 26M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-rocky-branch/34d44bf/logs/undercloud/var/log/atop.bin.gz
> 25M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-stein-branch/b89761d/logs/undercloud/var/log/atop.bin.gz
> 24M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-stein-branch/9ade834/logs/undercloud/var/log/atop.bin.gz
> 29M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset053/a10447d/logs/undercloud/var/log/atop.bin.gz
> 44M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset053/99a5f9f/logs/undercloud/var/log/atop.bin.gz
> 15M  ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/c8a8c60/logs/subnode-2/var/log/atop.bin.gz
> 33M  ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/c8a8c60/logs/undercloud/var/log/atop.bin.gz
> 16M  ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/73ef532/logs/subnode-2/var/log/atop.bin.gz
> 33M  ./tripleo-ci-centos-7-multinode-1ctlr-featureset010/73ef532/logs/undercloud/var/log/atop.bin.gz
> 40M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035/109d5ae/logs/undercloud/var/log/atop.bin.gz
> 45M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035/c2ebeae/logs/undercloud/var/log/atop.bin.gz
> 39M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/7fe5bbb/logs/undercloud/var/log/atop.bin.gz
> 16M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/5e6cb0f/logs/undercloud/var/log/atop.bin.gz
> 40M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp_1supp-featureset039/c6bf5ea/logs/undercloud/var/log/atop.bin.gz
> 40M  ./tripleo-ci-centos-7-ovb-3ctlr_1comp_1supp-featureset039/6ec5ac6/logs/undercloud/var/log/atop.bin.gz
>
> Can I safely delete all .stackviz directories? I guess that would give us
> some breathing room.
>

Yup, go for it


>
> Regards,
> Javier
>
> Each job reports the size of the logs e.g. [1]
>
> http://logs.rdoproject.org/24/639324/26/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-stein-branch/9ade834/logs/quickstart_files/log-size.txt
>
>
>> As a temporary measure, I am reducing log retention from 21 days to 14,
>> but we still need to reduce the rate at which we are uploading logs. Would
>> it be possible to check the oooq-generated logs and see where we can
>> reduce? These jobs are by far the ones consuming most space.
>>
>> Thanks,
>> Javier
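
The `du`-based audit in the quoted reply can be reproduced with a short script. This is an illustrative sketch; only the `atop.bin.gz` filename and log layout come from the thread:

```python
import os

def total_size(root, filename="atop.bin.gz"):
    """Sum the sizes, in bytes, of every file named `filename` under `root`,
    mirroring the aggregate of `find . -name atop.bin.gz -exec du -sh {} ;`."""
    return sum(
        os.path.getsize(os.path.join(dirpath, name))
        for dirpath, _dirs, files in os.walk(root)
        for name in files
        if name == filename
    )
```

Run against a job's log directory, this gives the per-artifact total that fed the retention discussion above.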


Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org

2019-04-02 Thread Wesley Hayutin
rebooting this conversation...  see inline at the bottom

On Mon, Mar 25, 2019 at 11:29 AM Javier Pena  wrote:

>
>
> --
>
> On Fri, Mar 22, 2019 at 8:35 PM Javier Pena  wrote:
>
>>
>>
>> - Original Message -
>> > I've been working with mjturek and baha on this a bit.  I've responded
>> inline
>> > below, but also want to clarify on the desired workflow.
>> >
>> > TL;DR: The desired workflow is to have ppc64le and x86_64 seamlessly
>> > integrated and uploaded.  This can be done with docker manifest list
>> images.
>> >
>> > The following link explains in greater detail:
>> > https://docs.docker.com/registry/spec/manifest-v2-2/
>> >
>> > The process boils down to the following steps:
>> >
>> > 1) Upload an image of the first architecture  (ex:
>> image1:x86_64_01012019)
>> > 2) Upload an image of the second architecture  (ex:
>> image1:ppc64le_01012019)
>> > 3) Upload manifest list image of the image  (ex: image1:01012019)
>> >
>>
>> This is one of the details where I had my doubts. Currently, the images
>> uploaded to the registry use the following naming convention:
>>
>>
>> tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40
>>
>>
>> Where:
>>
>> - tripleomaster is associated to the release (we have tripleomaster,
>> tripleostein, tripleorocky...)
>> - centos is associated to the OS (we have centos and fedora)
>> - 42a882962919b867c91a182b83acca6d8004096e_ee467b40 refers to the
>> repository in trunk.rdoproject.org used to build the image (commit hash
>> and short distro hash)
>>
>> If we want to go multi-arch, we need to change that tag to include the
>> architecture, is this correct? Otherwise, we could have conflicts between
>> the x86_64 and ppc64le pipelines trying to upload the same image.
>>
>
> Yup.  The idea is that the enpoint URL
> (tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40)
> is a container manifest.  Where we include the arch would be with an
> additional tag:
>
> tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40_$arch
> but nothing else should change and *explicitly* do not want different orgs
> per architecture.  So the publish pipeline would look like:
>
>
>    - Each architecture builds all the containers per branch and OS [1]
>      and publishes a container image/layer to:
>        'tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s_%(arch)s'
>    - Then checks to see if the manifest exists:
>        manifest = 'tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s'
>        if exists(manifest):
>            add_to_manifest(arch_layer='tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s_%(arch)s')
>        else:
>            create_manifest(arch_layer='tripleo%(branch)s/%(os)s-%(build_type)s-%(container)s:%(repo)s_%(arch)s')
>
> I have been running some tests to check how this could work, and I've
> found an issue. It looks like the OpenShift Registry (what we use for our
> RDO Registry) does not properly support manifest lists, see [2]. It
> actually failed for me when I tried it, while a plain Docker registry
> worked (using manifest-tool).
>
> Would the manifest upload be required in the RDO Registry (which is used
> as an intermediate step), or just in DockerHub (which is used for actual
> content delivery)? If it's the second case, we're still fine.
>
> Regards,
> Javier
>
> [2] -
> https://trello.com/c/4EcAIJrd/1303-5-quayregistry-add-support-for-manifest-list-rd
>
> This shouldn't break existing consumers, as docker and podman both do the
> correct thing when encountering a manifest, and it means that multi-arch
> consumers can use the same URL scheme.  This is how downstream currently
> works.
>
> It's possible and possibly even desirable, due to resource constraints,
> for the ppc64le build to be triggered only when updating
> current-passed-ci.  That's exactly what we discussed in Dublin.
>
> Tony.
>
> [1] for ppc64le we're starting with centos and master but over time this
> would need to grow out from master to include stein, u etc etc  We haven't
> looked at Fedora due to using centos CI but if Fedora is going to stick
> around we can work on that too.
>
>



OK... so here we are.
Trevor, Steve, and I hashed out a PPC64LE workflow that works alongside
the x86_64 workflow.  Our notes are here [1].

Although x86_64 and PPC64LE should try their best to use the same container
build methods and process, the workflows are completely independent and can
remain that way.  The convergence occurs on the tripleo-ci promotion
server after reading test results from the DLRN API (notes in the etherpad).

Some additional points.
* Alan Pevec and I both agree 
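
To make the quoted per-arch publish flow concrete, here is a hedged sketch of the tag scheme and the exists/create manifest check. Function names like `add_to_manifest` in the thread are placeholders, not a real registry API; the repo hash below is likewise an abbreviated, illustrative value:

```python
def layer_tag(branch, os_name, build_type, container, repo, arch):
    """Arch-suffixed tag each architecture pushes, e.g. ..._x86_64."""
    return "tripleo%s/%s-%s-%s:%s_%s" % (
        branch, os_name, build_type, container, repo, arch)

def manifest_tag(branch, os_name, build_type, container, repo):
    """Arch-less tag consumers pull; resolves to a manifest list."""
    return "tripleo%s/%s-%s-%s:%s" % (
        branch, os_name, build_type, container, repo)

def publish(manifests, branch, os_name, build_type, container, repo, arch):
    """Add this arch's layer under its manifest, creating the manifest on
    first publish (the if exists(manifest)/else branch in the pseudocode)."""
    tag = manifest_tag(branch, os_name, build_type, container, repo)
    manifests.setdefault(tag, set()).add(
        layer_tag(branch, os_name, build_type, container, repo, arch))
    return tag

registry = {}
for arch in ("x86_64", "ppc64le"):
    publish(registry, "master", "centos", "binary",
            "neutron-l3-agent", "42a88296_ee467b40", arch)
print(sorted(registry))  # one arch-less manifest tag covering both arch layers
```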

Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org

2019-03-22 Thread Wesley Hayutin
On Fri, Mar 22, 2019 at 3:36 AM Javier Pena  wrote:

>
>
> - Original Message -
> > I've been working with mjturek and baha on this a bit.  I've responded
> inline
> > below, but also want to clarify on the desired workflow.
> >
> > TL;DR: The desired workflow is to have ppc64le and x86_64 seamlessly
> > integrated and uploaded.  This can be done with docker manifest list
> images.
> >
> > The following link explains in greater detail:
> > https://docs.docker.com/registry/spec/manifest-v2-2/
> >
> > The process boils down to the following steps:
> >
> > 1) Upload an image of the first architecture  (ex:
> image1:x86_64_01012019)
> > 2) Upload an image of the second architecture  (ex:
> image1:ppc64le_01012019)
> > 3) Upload manifest list image of the image  (ex: image1:01012019)
> >
>
> This is one of the details where I had my doubts. Currently, the images
> uploaded to the registry use the following naming convention:
>
>
> tripleomaster/centos-binary-neutron-l3-agent:42a882962919b867c91a182b83acca6d8004096e_ee467b40
>
>
> Where:
>
> - tripleomaster is associated to the release (we have tripleomaster,
> tripleostein, tripleorocky...)
> - centos is associated to the OS (we have centos and fedora)
> - 42a882962919b867c91a182b83acca6d8004096e_ee467b40 refers to the
> repository in trunk.rdoproject.org used to build the image (commit hash
> and short distro hash)
>
> If we want to go multi-arch, we need to change that tag to include the
> architecture, is this correct? Otherwise, we could have conflicts between
> the x86_64 and ppc64le pipelines trying to upload the same image.
>
> Regards,
> Javier
>

Right.. I'm agreeing w/ you..

I'm recommending the following change
tripleomaster -> tripleomaster_x86_64
and add
tripleomaster_ppc64le
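The proposed rename amounts to appending the architecture to the release namespace. A tiny helper makes the convention concrete; the underscore separator follows this message, while elsewhere in the thread a hyphenated form (tripleomaster-ppc64le) also appears:

```python
def registry_namespace(release, arch):
    # Proposed convention from this message: tripleo<release>_<arch>
    return "tripleo{}_{}".format(release, arch)
```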


>
>
> > Step 3 is essentially just pushing a JSON body that has descriptors and
> > references to the other two images, such that when someone does a pull
> > request of the manifest list image, it will gather the appropriate
> > architecture for that image based on the host's architecture.
> >
> > -Trevor
> >
> > PS. If I've missed something important with the overall concerns here I
> > apologize, but thought it necessary to spell out the goal as I understand
> > it.
> >
> > > On Mar 21, 2019, at 12:28 PM, Javier Pena  wrote:
> > >
> > >
> > > - Original Message -
> > >> Hi all,
> > >>
> > >> Over the last few weeks, mjturek and baha have been busy working on a
> set
> > >> of
> > >> periodic jobs to build TripleO images for the ppc64le arch [1].
> > >>
> > >> The current missing step is publishing those images, and they are
> > >> proposing
> > >> to push those to the RDO Registry instance at registry.rdoproject.org
> ,
> > >> just
> > >> like we do with our TripleO images. I have tried to understand the
> > >> requirements, and would like to get input on the following topics:
> > >>
> > >> - Which namespace would these images use? Based on some logs [2] it
> looks
> > >> like they use tripleomaster-ppc64le, will they also push the images to
> > >> that
> > >> namespace?
> >
> > I have no experience in namespaces inside of a registry or how that
> > differentiates images from one another, but the images should be pushed
> (in
> > my opinion) to the same location in which the x86 images reside.
> >
> > >> - Could this create any conflicts with our current promotion pipeline?
> >
> > This should not cause conflicts in current promotion pipeline, as the
> process
> > should be an extension to existing functionality.
> >
> > >> - Is registry.rdo the right place for those images? I'm not familiar
> with
> > >> the
> > >> next steps for ppc64le images after that (will it then go through a
> > >> promotion pipeline?), so that might affect the decision.
> >
> > If the x86 images exist in registry.rdo, then the ppc64le (and any other
> > architecture image) should exist there as well.  I can't think of a
> reason
> > to differentiate between architectures when the desired result is parity
> and
> > support of more architectures.
> >
> > >>
> > >> If we decide to upload the images to images.rdo, we'll need to do the
> > >
> > > Correction: this should read "registry.rdo"
> > >
> > >> following:
> > >>
> > >> - Create the tripleomaster-ppc64le namespace in registry.rdo,
> following a
> > >> similar pattern to [3].
> > >> - Schedule a short registry downtime to increase its disk space,
> since it
> > >> is
> > >> currently near its limit.
> >
> > This is definitely necessary, given the capacity requirement will double,
> > give or take, to support the additional architecture.
> >
> > >> - Update the job at ci.centos to include the REGISTRY_PASSWORD
> environment
> > >> variable with the right token (see [4]). This is missing today, and
> > >> causing
> > >> the job failure.
> > >>
> > >> Once we get input from all interested parties, we will decide on the
> next
> > >> steps.
> > >>
> > >> Thanks,
> > >> Javier
> > >>
> > >>
> > >> [1] -
> > >>
> 

Re: [rdo-dev] [infra][tripleoci] ppc64le container images in registry.rdoproject.org

2019-03-21 Thread Wesley Hayutin
On Thu, Mar 21, 2019 at 11:19 AM Javier Pena  wrote:

>
>
> - Original Message -
> > Hi all,
> >
> > Over the last few weeks, mjturek and baha have been busy working on a
> set of
> > periodic jobs to build TripleO images for the ppc64le arch [1].
> >
> > The current missing step is publishing those images, and they are
> proposing
> > to push those to the RDO Registry instance at registry.rdoproject.org,
> just
> > like we do with our TripleO images. I have tried to understand the
> > requirements, and would like to get input on the following topics:
> >
> > - Which namespace would these images use? Based on some logs [2] it looks
> > like they use tripleomaster-ppc64le, will they also push the images to
> that
> > namespace?
>

I've been warning my folks this was coming.
If the PPC folks are using tripleomaster-ppc64le, I would propose we
update the x86_64 containers to be tripleo$release-x86_64


> > - Could this create any conflicts with our current promotion pipeline?
>

Only if we start to overload the rdo registry, which has happened in the
past. I don't see how the two would conflict otherwise.


> > - Is registry.rdo the right place for those images? I'm not familiar
> with the
> > next steps for ppc64le images after that (will it then go through a
> > promotion pipeline?), so that might affect the decision.
>

I think it's the right place to upload container images.
Overcloud images should be uploaded to https://images.rdoproject.org/master,
but we're going to have to account for arch type there too, and increase
disk space, etc.

Finally, they should build periodic jobs to build containers, overcloud
images and test jobs that trigger off of
https://trunk.rdoproject.org/centos7-master/tripleo-ci-testing/ and report
results to the dlrn_api.
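For context, reporting to the dlrn_api boils down to sending a per-job result payload. The field names below are an assumption modeled on the public DLRN API rather than details from this thread, and the job name and URL are invented for illustration:

```python
# Hedged sketch: build a CI result report for a DLRN repo under test.
def build_report(job_id, commit_hash, distro_hash, success, log_url, timestamp):
    return {
        "job_id": job_id,            # assumed field names, per public DLRN API
        "commit_hash": commit_hash,
        "distro_hash": distro_hash,
        "success": success,
        "url": log_url,
        "timestamp": timestamp,
    }

report = build_report(
    job_id="periodic-tripleo-containers-build-master-ppc64le",  # invented name
    commit_hash="42a882962919b867c91a182b83acca6d8004096e",
    distro_hash="ee467b40",
    success=True,
    log_url="https://logs.rdoproject.org/",  # placeholder
    timestamp=1553212800,
)
```

A real job would POST this (authenticated) to the DLRN API's report endpoint; promotion tooling then reads the aggregated results back to decide whether to promote.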

If things proceed well and everyone has access to debug info and logs, I
could see us adding ppc as promotion criteria.

Thanks for the update Javier!


> >
> > If we decide to upload the images to images.rdo, we'll need to do the
>
> Correction: this should read "registry.rdo"
>
> > following:
> >
> > - Create the tripleomaster-ppc64le namespace in registry.rdo, following a
> > similar pattern to [3].
> > - Schedule a short registry downtime to increase its disk space, since
> it is
> > currently near its limit.
> > - Update the job at ci.centos to include the REGISTRY_PASSWORD
> environment
> > variable with the right token (see [4]). This is missing today, and
> causing
> > the job failure.
> >
> > Once we get input from all interested parties, we will decide on the next
> > steps.
> >
> > Thanks,
> > Javier
> >
> >
> > [1] -
> >
> https://ci.centos.org/job/tripleo-upstream-containers-build-master-ppc64le/
> > [2] -
> >
> https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/422/logs/logs/000_FAILED_tripleoclient.log
> > [3] - https://review.rdoproject.org/r/19063
> > [4] -
> >
> https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/playbooks/tripleo-ci-periodic-base/containers-build.yaml#L12-L20
> > ___
> > dev mailing list
> > dev@lists.rdoproject.org
> > http://lists.rdoproject.org/mailman/listinfo/dev
> >
> > To unsubscribe: dev-unsubscr...@lists.rdoproject.org
> >


[rdo-dev] proposing rfolco as core on rdoproject.org-config

2018-08-07 Thread Wesley Hayutin
Greetings,
Folco has been doing a lot of work to configure zuul properly across
multiple projects.
I'd like to propose him as a core for the entire project, or for the bits
that configure jobs in rdo-sf.

WDYT?

Thanks

-- 
Wes Hayutin
Associate Manager, Red Hat
whayu...@redhat.com | T: +1 919 423 2509 | IRC: weshay



[rdo-dev] proposing Quique as ci-config core

2018-07-13 Thread Wesley Hayutin
Greetings,

I'd like to propose Quique as a core member to
https://github.com/rdo-infra/ci-config/
WDYT?

Thanks
-- 
Wes Hayutin
Associate Manager, Red Hat
whayu...@redhat.com | T: +1 919 423 2509 | IRC: weshay



Re: [rdo-dev] zuulv3 migration update

2018-06-25 Thread Wesley Hayutin
On Mon, Jun 25, 2018 at 3:33 PM Paul Belanger  wrote:

> On Fri, Jun 22, 2018 at 01:00:14PM -0400, Paul Belanger wrote:
> > Greetings,
> >
> > Wanted to give an update to the zuulv3 migration process. As it stands
> right
> > now, all projects specific to the rdoproject.org have been fully
> migrated from
> > jenkins to zuulv3.  Hopefully this process was transparent to you, for
> the most
> > part we were able to fully test everything before flipping the switch on
> a
> > project.  However we did have a few issues with jobs, but thanks to
> jpena,
> > amoralej and number80, to name a few, we managed to land fixes quickly.
> >
> > At this point, we are still waiting to complete the migration of
> tripleo-ci
> > jobs, this include OVB and various 3rd party CI jobs.  We've been
> blocked by a
> > nodepool migration of upstream-centos-7 nodes, but I'm told that will be
> > finished today.  Meaning, we are aiming for Monday morning to flip the
> switch on
> > the rest of the tripleo-ci specific projects.
> >
> > Due to this, we are still converting JJB changes into ansible playbooks
> using
> > the zuul-migrate command, so please hold off on moving forward with
> changes to
> > the new rdo-jobs project.  If everything goes well on Monday, we'll
> remove the
> > freeze on config / rdo-jobs for people to start contributing to again.
> >
> > Thanks for your patience during these last 3 weeks, and hopefully we'll
> be
> > wrapped up Monday.  If you have any questions or problems, please reply
> here or
> > #rdo on freenode.
> >
> Next update, today we successfully tested zuulv3 base jobs for devstack
> based
> tripleo jobs.  This basically means we are ready to migrate the remaining
> tripleo jobs in rdoproject.org from jenkins to zuulv3. We still need to
> test OVB jobs, but we're confident they will work.
>
> Nodepool capacity has been addressed; however, we are now running into
> scale issues with zuulv3. We have an IRC message out to the
> SoftwareFactory team to see about bringing online more zuul-executors /
> zuul-mergers; hopefully this
> will not
> take long.  Once that happens, I finally think we'll be ready to migrate
> the
> left over tripleo jobs from jenkins to zuulv3.
>
> -Paul
>

Thanks Paul and to the other folks involved!



-- 
Wes Hayutin
Associate Manager, Red Hat
whayu...@redhat.com | T: +1 919 423 2509 | IRC: weshay



Re: [rdo-dev] Bringing OpenStack back to Fedora

2018-06-06 Thread Wesley Hayutin
On Fri, May 25, 2018 at 9:06 AM Neal Gompa  wrote:

> Hey all,
>
> It's been several years since OpenStack has been part of the Fedora
> distribution. Since then, a number of things have changed.
>
> Some of the major ones:
> * Python packaging has gotten better and easier in Fedora (the changes are
> numerous)
> * RDO now has Software Factory and full-range CI capabilities
> * Fedora has CI hook support for packaging (with the transition to Pagure
> dist-git)
> * OpenStack and Fedora schedules line up again:
> https://releases.openstack.org/
>
> In my view, it's a huge shame that we don't offer OpenStack for people to
> use on Fedora, integrated with the latest software. OpenStack distribution
> on other distros are able to pull this off, and I feel like we should be
> able to as well.
>
> So what I want to know is the following:
> * What are the (real or perceived) difficulties in packaging OpenStack for
> Fedora?
> * How difficult would it be to adapt RDO CI tooling to plug into Fedora
> Dist-Git?
> * How many of the underlying dependencies exist in Fedora today that were
> forked into CentOS for RDO and which ones?
> * What dependencies are in RDO that don't exist in Fedora?
>
> I'm willing to help with a lot of the packaging stuff, including adapting
> packages for Fedora, and helping with reviews to bring stuff back in. I
> firmly believe that maintenance of OpenStack in Fedora should be far easier
> than it was three years ago, when it was ripped out.
>

Hey Neal,
There is actually some planning and discussions around RDO and Fedora going
on atm.  I suspect there will be some updates to this list in the near
future.

Thanks


>
> So help me... Help you!
>
> Best regards,
> Neal
>
> --
> 真実はいつも一つ!/ Always, there's only one truth!


Re: [rdo-dev] proposal to alert #rdo w/ issues from launchpad

2018-03-03 Thread Wesley Hayutin
On Sat, Mar 3, 2018 at 9:44 AM, Alan Pevec  wrote:

> Hi Wes,
> I'd prefer to integrate those alerts into existing RDO monitoring instead
> of adding one more bot.
> We have a #rdo-dev channel where infra alerts would fit better; can you
> show a few example LPs where those tags would be applied?
>
> Alan
>
>
You already found the bug that I was thinking of :)
https://bugs.launchpad.net/tripleo/+bug/1753058


[rdo-dev] proposal to alert #rdo w/ issues from launchpad

2018-03-03 Thread Wesley Hayutin
Greetings,

As we rely on RDO infrastructure more and more for TripleO 3rd party jobs
and periodic jobs, I propose we use the same alerting method we use in the
#tripleo channel.

So in this case I think we could add a tag to Launchpad bugs named
tag = "rdo-infra-alert"
or
tag = "rdo-infra"

etc.. you get the point.
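A minimal sketch of the tag-based filter such a bot could apply before alerting #rdo. The bug data here is inline sample data (bug 1753058 is the example cited in this thread; the second entry is invented); a real bot would fetch bugs via launchpadlib:

```python
ALERT_TAGS = {"rdo-infra", "rdo-infra-alert"}

def bugs_to_alert(bugs):
    # Keep only bugs carrying at least one of the proposed alert tags.
    return [b for b in bugs if ALERT_TAGS & set(b.get("tags", []))]

sample = [
    {"id": 1753058, "tags": ["rdo-infra", "ci"]},
    {"id": 1000000, "tags": ["ux"]},  # invented, should not alert
]
alerts = bugs_to_alert(sample)
```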

The irc alerts have worked well in other channels and it would probably
help establish the cross-team dependencies we have atm.

Thanks for your comments


[rdo-dev] FYI.. tripleo jobs are cutting over to queens

2018-02-12 Thread Wesley Hayutin
FYI,
TripleO promotion jobs are cutting over to queens and master vs pike and
master.
Pike will move to the 24hr queue.

https://review.rdoproject.org/r/#/c/11775/
https://review.rdoproject.org/r/#/c/11920/
needs review  --> https://review.rdoproject.org/r/#/c/11910/


[rdo-dev] Closing https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status

2018-02-01 Thread Wesley Hayutin
Greetings,

There was a proposal to reduce the number of trello boards used by the
various teams involved in CI.  The board that was used to highlight issues
in RDO and TripleO jobs [0] will be closed soon [1].  In the future,
please use the tripleo and rdo ci board [3] to track failing jobs and other
work under the "CI Failing jobs" column.

The cross-team collaboration that was done on the previous board was
excellent; I hope moving the location of that work doesn't change that.

Thanks

[0] https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status
[1] https://trello.com/c/IUpuWasN
[3] https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci


[rdo-dev] proposing rlandy as core

2018-01-17 Thread Wesley Hayutin
Greetings,

As discussed in the RDO meeting today, I am proposing rlandy as core for
[1].  I'm not 100% sure how the access is divided up so I'm specifying the
repo itself.   Ronelle has proven herself to be a quality reviewer and her
submissions are also up to the standards of core.

Thank you all for your time in considering Ronelle as core.


[1] https://github.com/rdo-infra/review.rdoproject.org-config


Re: [rdo-dev] Graphite and Grafana in RDO-Cloud

2018-01-05 Thread Wesley Hayutin
On Fri, Jan 5, 2018 at 12:36 PM, David Moreau Simard <d...@redhat.com> wrote:

> There are already plans [1] to add the software factory implementation of
> Grafana on review.rdoproject.org, you can see what it looks like on
> softwarefactory-project.io [2].
>
> The backend to this grafana implementation is currently influxdb, not
> graphite.
> However, there are ongoing discussions about deploying both graphite and
> influxdb simultaneously, or optionally just one of the two.
>
> We're interested in leveraging this influxdb (or graphite) and grafana
> implementation for monitoring data in general (uptime, resources, disk
> space, load, etc.) so our goals align here.
> We both agree that using graphite would be a plus in order to re-use the
> same queries in the grafana dashboard, but at the same time, influxdb is
> more "modern" and easier to work with -- this is why we might end up
> deploying both, we'll see.
>
> [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1514086
> [2]: https://softwarefactory-project.io/grafana/
>
>
This is great news David, thank you for sharing.
Given that this is already planned for software factory and we have an
immediate need, I'm wondering how to proceed.
Does the RDO Infra team have an estimate when graphite/influxdb/grafana
will be moved to production?
Some possibilities come to mind, depending on when it moves to prod

1.  The TripleO-CI team waits for prod
2.  TripleO CI would stand up a test instance of graphite/influxdb and
grafana and start to work out what we need to send and how to send data
3.  Is it possible to use the stage instance of RDO SF as a testbed for
TripleO-CI's work?  Meaning, we would send metrics to the stage instance
with backing up the data in mind?

What do you think?
Thanks




>
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
> On Fri, Jan 5, 2018 at 12:13 PM, Wesley Hayutin <whayu...@redhat.com>
> wrote:
>
>> Greetings,
>>
>> At the end of 2017, a number of the upstream multinode scenario jobs
>> started to run over our required deployment times [1].  In an effort to
>> better understand the performance of the deployment and CI, the tripleo
>> cores requested that a Graphite and Grafana server be stood up such that we
>> can analyze the core issues more effectively.
>>
>> There is a certain amount of urgency with the issue as our upstream
>> coverage is impacted.  The TripleO-CI team is working on the deployment of
>> both tools in a dev-ops style in RDO-Cloud this sprint.  Nothing yet has
>> been deployed.
>>
>> The TripleO CI team is also working with upstream infra to send metric
>> and data to the upstream Graphite and Grafana servers.  It is not clear yet
>> if we have permission or access to the upstream tools.
>>
>> I wanted to publicly announce this work to the RDO infra community to
>> inform and to gather any feedback anyone may have.  There are two scopes
>> of work here: the initial tooling to stand up the infra, and the
>> longer-term maintenance of the tools.  Perhaps there are plans to build
>> these into RDO
>> SF already.. etc.
>>
>> Please reply with your comments and concerns.
>> Thank you!
>>
>>
>> [1] https://github.com/openstack-infra/tripleo-ci/commit/7a2
>> edf70eccfc7002d26fd1ce1eef803ce8d0ba8
>>
>>
>>
>>
>


[rdo-dev] Graphite and Grafana in RDO-Cloud

2018-01-05 Thread Wesley Hayutin
Greetings,

At the end of 2017, a number of the upstream multinode scenario jobs
started to run over our required deployment times [1].  In an effort to
better understand the performance of the deployment and CI, the tripleo
cores requested that a Graphite and Grafana server be stood up such that we
can analyze the core issues more effectively.
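For context on what sending data to Graphite involves: Graphite accepts a simple plaintext protocol, one metric per line, typically on TCP port 2003. The metric path below is an invented example, not an agreed naming scheme:

```python
def graphite_line(path, value, timestamp):
    # Graphite plaintext protocol: "<metric.path> <value> <unix-epoch>\n"
    return "{} {} {}\n".format(path, value, int(timestamp))

line = graphite_line("tripleo.ci.scenario001.deploy_seconds", 5400, 1515110400)
```

A CI job would write such lines to a socket on the Graphite host; Grafana then queries the stored series to build dashboards.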

There is a certain amount of urgency with the issue as our upstream
coverage is impacted.  The TripleO-CI team is working on the deployment of
both tools in a dev-ops style in RDO-Cloud this sprint.  Nothing yet has
been deployed.

The TripleO CI team is also working with upstream infra to send metric and
data to the upstream Graphite and Grafana servers.  It is not clear yet if
we have permission or access to the upstream tools.

I wanted to publicly announce this work to the RDO infra community to
inform and to gather any feedback anyone may have.  There are two scopes
of work here: the initial tooling to stand up the infra, and the
longer-term maintenance of the tools.  Perhaps there are plans to build
these into RDO
SF already.. etc.

Please reply with your comments and concerns.
Thank you!


[1]
https://github.com/openstack-infra/tripleo-ci/commit/7a2edf70eccfc7002d26fd1ce1eef803ce8d0ba8


Re: [rdo-dev] [ci] rdo-infra and tripleo-ci communication between teams

2017-12-06 Thread Wesley Hayutin
On Wed, Dec 6, 2017 at 2:53 PM, Wesley Hayutin <whayu...@redhat.com> wrote:

> Greetings RDO Infra teammates,
>
> Let me begin by stating how much we appreciate the advanced level of
> support that the RDO Infra team provides to its infrastructure users and
> all the extra things done for lots of teams.  However, from time to time,
> messages between the teams are lost or not fully communicated, which has
> led to some hiccups.
>
> For example, some changes have been made to the openstack-nodepool
> tenant, the TripleO-related scripts in review.rdoproject.org, and the
> general infrastructure that have had a significant impact on CI. In some
> cases, communication about these changes did not reach the entire TripleO
> CI team in time.
>
> In the spirit of continuous improvement, we are looking for ways to
> streamline communication. Below are some ideas:
>
> - Post about the upcoming change on public mailing list (rdo-users?)
> - Any emails/posts related to outages or maintenance work on
> openstack-nodepool tenant have a subject prefix like
> [outage]/[maintenance]. This will raise the visibility of the email in
> Gmail inboxes
> - Bring information about the change to an RDO CI Team meeting
> - Avoid pinging one TripleO CI team member on chat - rather email the
> whole team
> - Add RDO CI team members to reviews related to the openstack-nodepool
> tenant, zuul/upstream.yaml, jobs/tripleo-upstream.yml, other TripleO CI
> areas
> - Perhaps a ticketing system?
>
> Communication between teams within the production chain, in general, seems
> to be mostly informal. As such,  we will propose adding a session to the
> upcoming "production chain sync" meeting early next year on the topic of
> inter-team information sharing. We understand that good communication goes
> both ways and would be open to hearing feedback and other suggestions.
>
> Thanks all
>

I do want to make sure that I am communicating the right message here.
IMHO I see two teams that don't quite have enough common work to be one
team and have weekly meetings etc.
We also have enough dependencies and touch points where email and irc seem
insufficient.

The email I posted stays fairly true to some input from a team
retrospective and I wanted to respect that and send out what people were
expressing.
So although the first suggestion is to use public email lists more often,
IMHO we may see improvements from something more, and I think it's worth
raising.
The spirit of the conversation is and remains how to improve and should not
be read as a complaint.

I hope I'm coming across in a positive way here, as that is my intent.
Thanks


Re: [rdo-dev] [outage] RDO Container Registry

2017-12-05 Thread Wesley Hayutin
Thanks David!

On Mon, Dec 4, 2017 at 11:25 PM, David Moreau Simard  wrote:

> Hi,
>
> While we monitor the disk space utilization of
> trunk.registry.rdoproject.org, the alerts for it were silenced due to an
> ongoing false positive.
> Last November 28th, we pruned the metadata of ~5000 image tags [1] more
> than 7 days old, after which we were supposed to prune the (now orphaned)
> blobs.
>
> The blobs were not deleted, and this led to the registry partition
> running out of disk space.
> Container pushes from approximately the last 48 hours may have failed as
> a result of the issue.
>
> We're currently pruning the orphaned blobs and pushes should work once
> enough free space is available.
> We'll address the false positive on the monitoring alert ASAP and we hope
> to automate the pruning process in the future to prevent this from
> re-occurring.
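The pruning step described above reduces to a set difference: a blob is orphaned once no remaining image manifest references it. This is an illustration of the idea only, not the registry's actual pruning code:

```python
def orphaned_blobs(all_blobs, manifests):
    # Collect every blob digest still referenced by some manifest...
    referenced = set()
    for manifest in manifests:
        referenced.update(manifest.get("layers", []))
    # ...and anything left over is safe to delete.
    return set(all_blobs) - referenced

manifests = [{"layers": ["sha256:aa", "sha256:bb"]}]
orphans = orphaned_blobs(["sha256:aa", "sha256:bb", "sha256:cc"], manifests)
```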
>
> Let me know if you have any questions,
>
> [1]: https://paste.fedoraproject.org/paste/Js2rOwOFWdUqfRrBcZaXHw
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>