Re: [openstack-dev] [infra] elements vs. openstack-infra puppet for CI "infra" nodes
On Mon, May 5, 2014 at 5:51 AM, Dan Prince wrote:

> I originally sent this to TripleO but perhaps [infra] would have been a
> better choice.

Adding the -infra mailing list on this too so folks see it as they rush around doing pre-summit things.

> The short version is I'd like to run a lightweight (unofficial) mirror for
> Fedora in infra:
>
> https://review.openstack.org/#/c/90875/

On the Debian side, I also have a bug (with some mirror discussion and an attached review) here:

https://bugs.launchpad.net/openstack-ci/+bug/1311855

After discussing this particular patch and bug with the rest of the -infra team, there wasn't much interest in running an infra-based mirror, because unofficial mirrors can serve package indexes that are out of sync with the packages themselves, which would be a problem for us.

I had hoped we could sit down and chat about this at the summit for both the Fedora and Debian mirrors, but unfortunately I won't be able to attend (I've been very sick this week, and my doctor didn't approve getting on a plane on Sunday). So I'm hoping some other infra folks can sync up with Dan and the TripleO crew to chat about how we can best get these changes in so they'll work effectively for everyone.

I'm also happy to continue this discussion here on the list, or to resume it at a meeting after the summit.

--
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
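The out-of-sync failure mode with unofficial mirrors can be sketched as a minimal local simulation (the file names here are illustrative, not tied to any particular distro tooling): a package file and the index that records its checksum are synced at different moments, so clients see a mismatch.

```shell
# Simulate an unofficial mirror whose package and index got out of sync.
set -u
workdir=$(mktemp -d)

# The mirror syncs package v1 and records its checksum in the index.
echo "contents-v1" > "$workdir/pkg.rpm"
sha256sum "$workdir/pkg.rpm" | awk '{print $1}' > "$workdir/index.sha256"

# Upstream publishes v2; an unsynchronized re-sync replaces the package
# file but leaves the old index behind.
echo "contents-v2" > "$workdir/pkg.rpm"

expected=$(cat "$workdir/index.sha256")
actual=$(sha256sum "$workdir/pkg.rpm" | awk '{print $1}')
if [ "$expected" != "$actual" ]; then
    echo "checksum mismatch: index is stale relative to the package"
fi
```

This is exactly the window a CI job can fall into mid-sync; official mirror networks avoid it by syncing indexes and packages in a careful order.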
Re: [openstack-dev] [infra] elements vs. openstack-infra puppet for CI "infra" nodes
Excerpts from Dan Prince's message of 2014-05-05 05:51:52 -0700:

> I originally sent this to TripleO but perhaps [infra] would have been a
> better choice.
>
> The short version is I'd like to run a lightweight (unofficial) mirror for
> Fedora in infra:
>
> https://review.openstack.org/#/c/90875/
>
> And then in the TripleO CI racks we can run local Squid caches using
> something like this:
>
> https://review.openstack.org/#/c/91161/
>
> We have to do something, because we see quite a few job failures related to
> unstable mirrors. If using local Squid caches doesn't work out then perhaps
> we will have to run local mirrors in each TripleO CI rack, but I would like
> to avoid that if possible as it is more resource heavy, especially because
> we would need to do the same thing in each rack for both Fedora and Ubuntu
> (both of which run in each TripleO CI test rack).
>
> Dan
>
> ----- Forwarded Message -----
> From: "Dan Prince"
> To: "OpenStack Development Mailing List (not for usage questions)"
> Sent: Tuesday, April 29, 2014 4:10:30 PM
> Subject: [openstack-dev] [TripleO] elements vs. openstack-infra puppet for
> CI "infra" nodes
>
> A bit of TripleO CI background:
>
> At this point we've got two public CI overclouds which we can use to run
> TripleO check jobs. Things are evolving nicely, and we've recently been
> putting some effort into making things run faster by adding local distro
> and PyPI mirrors, etc. This should help us improve both the stability of
> test results and runtimes.
>
> This brings up the question of how we are going to manage our TripleO
> overcloud CI resources for things like distro mirrors, caches, test
> environment brokers, etc.:
>
> 1) Do we use and/or create openstack-infra/config modules for everything
> we need and manage it the normal OpenStack infrastructure way, using
> Puppet, etc.?
>
> 2) Or do we take the TripleO-oriented approach and use image elements and
> Heat templates to manage things?
>
> Which of these two options do we prefer, given that we eventually want
> TripleO to be gating? And who is responsible for maintaining them (the
> TripleO CD admins or OpenStack Infra)?

The way I see things going, infra has a job to do _today_, and they should choose the tools they want for doing that job.

TripleO is trying to make OpenStack deploy itself, and hopefully in so doing also make managing workloads on OpenStack easier. If we are operating any workload, it would make a lot of sense for us to use the same tools we are suggesting OpenStack operators consider using.

So it really comes down to who is operating the mirrors. If we're doing it, we should be doing it with the tools we've built for that specific purpose. If we expect infra to do it, then they should decide what works best for them.

Either way, I completely agree that we shouldn't be doing any more one-off cowboy mirrors, which is what we've been doing.
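For context on what option 2 would look like, the TripleO-style approach bakes the service into the image with a small diskimage-builder element and feeds it configuration through Heat metadata. A rough layout sketch (the element name and file names are hypothetical):

```
elements/ci-squid-cache/            # hypothetical element name
├── element-deps                    # other elements this one depends on
├── install.d/
│   └── 75-squid-cache              # installs squid at image-build time
└── os-apply-config/
    └── etc/squid/squid.conf        # template filled from Heat metadata
```

Option 1 would instead put an equivalent Puppet class into openstack-infra/config and apply it to a long-lived node. Both converge on the same running service; the question in this thread is who operates it and with which toolchain.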
[openstack-dev] [infra] elements vs. openstack-infra puppet for CI "infra" nodes
I originally sent this to TripleO but perhaps [infra] would have been a better choice.

The short version is I'd like to run a lightweight (unofficial) mirror for Fedora in infra:

https://review.openstack.org/#/c/90875/

And then in the TripleO CI racks we can run local Squid caches using something like this:

https://review.openstack.org/#/c/91161/

We have to do something, because we see quite a few job failures related to unstable mirrors. If using local Squid caches doesn't work out then perhaps we will have to run local mirrors in each TripleO CI rack, but I would like to avoid that if possible as it is more resource heavy, especially because we would need to do the same thing in each rack for both Fedora and Ubuntu (both of which run in each TripleO CI test rack).

Dan

----- Forwarded Message -----
From: "Dan Prince"
To: "OpenStack Development Mailing List (not for usage questions)"
Sent: Tuesday, April 29, 2014 4:10:30 PM
Subject: [openstack-dev] [TripleO] elements vs. openstack-infra puppet for CI "infra" nodes

A bit of TripleO CI background:

At this point we've got two public CI overclouds which we can use to run TripleO check jobs. Things are evolving nicely, and we've recently been putting some effort into making things run faster by adding local distro and PyPI mirrors, etc. This should help us improve both the stability of test results and runtimes.

This brings up the question of how we are going to manage our TripleO overcloud CI resources for things like distro mirrors, caches, test environment brokers, etc.:

1) Do we use and/or create openstack-infra/config modules for everything we need and manage it the normal OpenStack infrastructure way, using Puppet, etc.?

2) Or do we take the TripleO-oriented approach and use image elements and Heat templates to manage things?

Which of these two options do we prefer, given that we eventually want TripleO to be gating?
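For reference, a package-caching Squid along the lines of the second review typically needs only a few tweaks over stock defaults: package files are immutable once published, so they can be cached aggressively, while repository indexes change often and must stay fresh. A minimal sketch (the values are illustrative, not taken from the review):

```
# Illustrative squid.conf fragment for caching distro package traffic.
cache_dir ufs /var/spool/squid 20000 16 256

# Allow large objects (kernels, debuginfo packages) into the cache.
maximum_object_size 512 MB

# Cache .rpm/.deb files for a long time even without expiry headers;
# refresh-ims revalidates with If-Modified-Since when clients ask.
refresh_pattern -i \.(rpm|deb)$ 129600 100% 129600 refresh-ims

# Keep repository metadata short-lived to avoid serving stale indexes.
refresh_pattern -i (repomd\.xml|Packages|Sources|Release) 0 0% 60 refresh-ims
```

The split between long-lived packages and short-lived metadata is what keeps a cache from reintroducing the same stale-index problem a bad mirror has.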
And who is responsible for maintaining them (the TripleO CD admins or OpenStack Infra)?

If it helps to narrow the focus of this thread, I do want to stress that I'm only really talking about the public CI (overcloud) resources. What happens underneath this layer is already managed via the TripleO tooling itself.

Regardless of what we use, I'd like to be able to maintain feature parity in how we set up these CI cloud resources across providers (HP and Red Hat at this point). As is, I fear we've got a growing list of CI infrastructure that isn't easily reproducible across the racks.

Dan