On Wed, Oct 12, 2016, at 07:56 AM, Paul Belanger wrote:
> On Wed, Oct 12, 2016 at 11:40:44PM +1100, Tony Breeds wrote:
> > On Thu, Sep 22, 2016 at 12:28:48PM +1000, Tony Breeds wrote:
> > > Hi All,
> > >     I know a lot of the infra team are in Germany for the sprint, however
> > > I'm seeing what seems like a lot of upper-constraints bumps that are
> > > failing due to mirrors being out of sync.
> > 
> > This seems to have happened again.
> > 
> > In review https://review.openstack.org/#/c/385099/, specifically the
> > logs [1], we see:
> > ---
> > Could not find a version that satisfies the requirement 
> > oslo.policy===1.15.0 (from -c 
> > /home/jenkins/workspace/gate-cross-nova-python27-db-ubuntu-xenial/upper-constraints.txt
> >  (line 225)) (from versions: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.3.2, 0.4.0, 
> > 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 1.0.0, 
> > 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 
> > 1.11.0, 1.12.0, 1.13.0, 1.14.0)
> > No matching distribution found for oslo.policy===1.15.0 (from -c 
> > /home/jenkins/workspace/gate-cross-nova-python27-db-ubuntu-xenial/upper-constraints.txt
> >  (line 225))
> > ---
> > 
> > A quick check of https://pypi.python.org//simple/oslo-policy/ vs
> > http://mirror.regionone.osic-cloud1.openstack.org/pypi/simple/oslo-policy/
> > shows that 1.15.0 is on PyPI but not on our mirrors.
> > 
> > For the sake of transparency, this is basically the same thing I
> > mentioned on IRC [2].
> > 
> > Any chance we can get a manual run (of bandersnatch?) to clear the issues?
> > 
> > Yours Tony.
> > 
> > [1] 
> > http://logs.openstack.org/99/385099/1/check/gate-cross-nova-python27-db-ubuntu-xenial/3468229/console.html#_2016-10-12_04_18_53_131481
> >  
> > [2]
> > http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-10-12.log.html#t2016-10-12T05:29:20
> 
> Over this past weekend, I noticed our AFS mirror.pypi directory quota was
> full. So I increased the quota size; however, I noticed bandersnatch
> seemed to be stuck downloading some files.
> 
> I first removed the TODO files, but apparently that didn't solve the
> issue, since diskimage-builder 1.21.0 was still not on our mirror. I next
> forced what I thought to be a full sync, then enabled bandersnatch again
> via crontab (this was Sunday night).
> 
> It is possible I didn't kick off the full sync properly, as some users
> have been mentioning some packages are missing.

I think I got this sorted out today. It broke on the 11th trying to sync
a whole bunch of frida packages, which took longer than our 30 minute
timeout. I had to run bandersnatch several times in the foreground
without a timeout to get it to reach a steady state, then released the
AFS volume. Current bandersnatch runs are falling well under the timeout,
so we should be syncing automatically now as well.
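The "run in the foreground until steady state, then release" procedure above can be sketched as a simple retry loop. This is only a sketch, not what was actually typed: `sync_cmd` is a stub so the loop runs anywhere, and the real command lines (something like `bandersnatch mirror -c /etc/bandersnatch.conf` followed by `vos release mirror.pypi`) are assumptions shown in comments.

```shell
#!/bin/sh
# Sketch: re-run the sync until it exits cleanly, then release the volume.
# sync_cmd is a stub; in reality it would be something like:
#   bandersnatch mirror -c /etc/bandersnatch.conf
attempts=0
sync_cmd() {
    attempts=$((attempts + 1))
    # Pretend the third run completes with nothing left to download.
    [ "$attempts" -ge 3 ]
}

until sync_cmd; do
    echo "sync incomplete after attempt $attempts, retrying"
done
echo "steady state after $attempts runs"
# Then publish the updated read-only AFS volume, e.g.: vos release mirror.pypi
```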

We should probably tweak the timeout to make it longer to avoid these
sorts of issues in the future.
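Assuming the 30 minute limit comes from wrapping the cron job in timeout(1), a run that exceeds the limit is killed and exits with status 124, which is the failure mode described above. A minimal runnable sketch, using sleep as a stand-in for a long bandersnatch run and a 2 second limit so it finishes quickly:

```shell
#!/bin/sh
# timeout(1) kills the wrapped command when the limit expires and
# reports exit status 124; sleep stands in for a slow bandersnatch run.
timeout 2 sleep 5
status=$?
echo "exit status: $status"
```

Raising the limit would then just mean changing timeout's first argument in the crontab entry (the actual crontab line isn't shown in this thread).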

Clark

_______________________________________________
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
