Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 13)

2017-04-03 Thread Paul Belanger
On Mon, Apr 03, 2017 at 08:28:15PM -0400, Emilien Macchi wrote:
> 
> The only blocker at this time is that the rh1 cloud doesn't mirror RDO yet.
> If we want https://review.openstack.org/#/c/451938 to work in the
> gate, we'll have to create it or add some logic to skip the mirror
> for the RDO repo (but I would prefer to have a mirror, to optimize the
> deployments).
> 
> Thoughts on adding the RDO repo to the rh1 mirror, like Paul did on the nodepool providers?
> 
This is fixed. Ansible was failing to connect to the server; once I validated the
SSH known_hosts file, Puppet ran and provisioned it correctly.


Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 13)

2017-04-03 Thread Emilien Macchi
On Mon, Apr 3, 2017 at 8:28 PM, Emilien Macchi  wrote:
> The only blocker at this time is that the rh1 cloud doesn't mirror RDO yet.
> If we want https://review.openstack.org/#/c/451938 to work in the
> gate, we'll have to create it or add some logic to skip the mirror
> for the RDO repo (but I would prefer to have a mirror, to optimize the
> deployments).
>
> Thoughts on adding the RDO repo to the rh1 mirror, like Paul did on the nodepool providers?

OK, Paul just ran Puppet on the mirror and it created the repo:
http://mirror.regionone.tripleo-test-cloud-rh1.openstack.org/rdo/

He asked us to wait a little before using it, so we'll wait for his "go".

Thanks Paul,
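Once Paul gives the go, pointing a job at the mirror should just be a repo-file change on the job nodes. A hypothetical sketch of what that could look like; the directory layout under /rdo/ is an assumption on my part, not confirmed:

```ini
# /etc/yum.repos.d/delorean.repo -- illustrative only; real jobs would
# template this, and the exact path under /rdo/ may differ
[delorean]
name=delorean
baseurl=http://mirror.regionone.tripleo-test-cloud-rh1.openstack.org/rdo/centos7/current/
enabled=1
gpgcheck=0
```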




-- 
Emilien Macchi



Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 13)

2017-04-03 Thread Emilien Macchi
On Mon, Apr 3, 2017 at 4:41 PM, Paul Belanger  wrote:

The only blocker at this time is that the rh1 cloud doesn't mirror RDO yet.
If we want https://review.openstack.org/#/c/451938 to work in the
gate, we'll have to create it or add some logic to skip the mirror
for the RDO repo (but I would prefer to have a mirror, to optimize the
deployments).

Thoughts on adding the RDO repo to the rh1 mirror, like Paul did on the nodepool providers?




-- 
Emilien Macchi



Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 13)

2017-04-03 Thread Paul Belanger
On Mon, Apr 03, 2017 at 01:08:26PM +0200, Attila Darazs wrote:
> 
As an update, we rolled out the proxy late last week and set up a reverse
proxy cache for https://trunk.rdoproject.org [1]. Currently we've only updated
the puppet jobs to use it, and so far it seems to be working. We are going to
make an announcement to the ML later today so other projects can start using it.

We have also started mirroring RDO into AFS, which is our long term solution.
This should be ready sometime this week.

As for docker, clarkb and I are planning to test the reverse proxy with
hub.docker.com. We should know more in a few days, and if the results are good
we'll likely set up a reverse proxy for it too.

Currently, our apache2 configuration caches files under 100MB, up to 2GB of
total data. Because we have finite resources on our mirrors, these are the
defaults for now.

[1] http://mirror.regionone.infracloud-vanilla.openstack.org:8080/rdo/centos7/
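For the curious, here is a rough sketch of what such a mod_proxy + mod_cache setup can look like. The real configuration is managed by infra's puppet, so the paths and values below are illustrative assumptions, not the actual config:

```apache
# Illustrative reverse-proxy cache for trunk.rdoproject.org.
# Requires mod_proxy, mod_proxy_http, mod_ssl and mod_cache_disk.
Listen 8080
<VirtualHost *:8080>
    ServerName mirror.regionone.infracloud-vanilla.openstack.org

    # Proxy /rdo/ through to the upstream repo over TLS
    SSLProxyEngine on
    ProxyPass        /rdo/ https://trunk.rdoproject.org/
    ProxyPassReverse /rdo/ https://trunk.rdoproject.org/

    # Cache responses on disk, but only objects under 100MB
    CacheEnable disk /rdo/
    CacheRoot   /var/cache/apache2/proxy
    CacheMaxFileSize 104857600
</VirtualHost>
```

The 2GB total is not an Apache directive; it would be enforced out of band by a cron'd `htcacheclean -n -l2048M -p /var/cache/apache2/proxy`.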



[openstack-dev] [tripleo] CI Squad Meeting Summary (week 13)

2017-04-03 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

We had a meeting full of intense discussion last Thursday. Here's the 
summary.


= Promotion jobs and HTTP caching =

The first part centered around improving and, above all, speeding up the
promotion process for TripleO, which has been an ongoing discussion over the
last few weeks.

Image building takes a long time (~30 minutes) for each promotion job; we
could be spared that by having a separate job build the images, which would
result in fewer job timeouts. Zuul v3 will be able to handle this kind of
job dependency directly, but meanwhile we can probably work around it. Our
contact on this work is pabelanger.
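To make the dependency idea concrete, here is a rough sketch of what this might look like once Zuul v3 lands. The job names are invented and the syntax follows the current v3 design, which may still change:

```yaml
# Hypothetical Zuul v3 project config: the deploy jobs declare a
# dependency on a single image-build job, so images are built once
# per change and reused instead of being rebuilt in every job.
- project:
    name: openstack-infra/tripleo-ci
    check:
      jobs:
        - tripleo-ci-build-images
        - tripleo-ci-ovb-ha:
            dependencies:
              - tripleo-ci-build-images
        - tripleo-ci-ovb-nonha:
            dependencies:
              - tripleo-ci-build-images
```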


Many other outside queries can probably be sped up by having an infra-wide
caching proxy. This might be an Apache server with mod_proxy in the short
term, and an AFS mirror in the long term. It will make image downloads and
docker registry downloads faster, speeding up our jobs.


= Quickstart transition update =

The big OVB change from last week got merged; now we're checking the
stability of those jobs before proceeding with the transition. We'll want
more extensive testing before we move the voting jobs over, so this time
we'll probably create parallel non-voting jobs (ha/non-ha/updates + a gate
job) instead of just testing through pending tripleo-ci changes.

We will probably combine the former ha and nonha OVB jobs to save
resources on rh1. Relevant change and discussion here[1].

We also briefly discussed how to involve more people in reviewing
Quickstart changes and bring them up to speed; a deep dive session on the
subject will probably be given by one of the current cores.


Best regards,
Attila

[1] https://review.openstack.org/449785

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev