> On May 4, 2016, at 2:51 PM, L Walsh <cpan-tes...@tlinx.org> wrote:
> 
> Douglas Bell wrote:
>> Over the QA Hackathon, we added Fastly caching to the cpantesters. That 
>> error appears when the cache can't access cpantesters itself to update its 
>> cache. We're still trying to get everything figured out here.
>> 
> >> Fastly does geographic caching, so you get a local version faster. It caches for 
> >> about an hour, since that's approximately how often the static site gets 
> >> updated anyway. It doesn't seem to have a way to set how long it waits for a 
> >> response from cpantesters, though. My current theory is that the cache isn't 
> >> waiting long enough for a response, which is why this problem is happening.
>>  
> ---
>   Could be, though *subjectively*, it seems to be taking a lot longer
> to get responses of any kind -- and then more often than not (>50% of the
> time) it is a timeout error.

About when did this slowdown start? I'd like to know if it correlates to when 
we started enabling Fastly caching.

>   Maybe the varnish layer resulted in an overall slowdown due to machine
> memory or CPU constraints?

Fastly is a hosted service. CPANTesters did nothing except change where the DNS 
is pointed.
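Since all the caching now happens at Fastly's edge, about the only lever the origin has is the response headers it sends. As an aside, the roughly hour-long TTL mentioned above could be advertised that way; here's a minimal sketch (the handler is hypothetical, not CPANTesters code, and assumes Fastly's documented support for the `Surrogate-Control` header):

```python
# Sketch: an origin response advertising a ~1-hour edge cache TTL.
# Header names follow Fastly's documented Surrogate-Control support;
# this is an illustration, not actual CPANTesters code.

def build_cache_headers(ttl_seconds=3600):
    """Headers telling an edge cache (e.g. Fastly) to hold a page for
    ttl_seconds, while browsers revalidate more aggressively."""
    return {
        # Edge-only TTL: Fastly consumes Surrogate-Control and strips
        # it before the response reaches the browser.
        "Surrogate-Control": f"max-age={ttl_seconds}",
        # Browser-facing policy kept short so end users see fresh pages.
        "Cache-Control": "max-age=60",
    }

print(build_cache_headers()["Surrogate-Control"])  # max-age=3600
```

Whether that would help here is another question -- the timeouts look like a fetch-from-origin problem, not a TTL problem.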

>   We should've just gone with Memoize for the back-end.... of course last
> I looked at it, it didn't seem to have a lot of control for how often
> its static version would get updated (i.e. -- like 'never': *oh well*)
> seems like it's often a matter of not enough cache or cash to throw at
> solutions...

Using a locally-hosted cache would solve very little of the problem as it 
exists. Memoize itself requires process persistence, which CPANTesters lacks 
(it's all CGI). Any other caching proxy running on the CPANTesters machine 
wouldn't take much load off that machine either. A lot of the site is already 
generated as static files; it's the parts that currently are not generated 
(or indeed cannot be generated) that have the biggest impact.

That, and Fastly is generously donating the use of their service, which 
benefits both parties.
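To illustrate the persistence point: an in-process memoizer only helps while the process lives, and CGI hands every request a brand-new process. A sketch in Python (using `functools.lru_cache` as a stand-in for Perl's Memoize; the `render_report` function is invented for illustration):

```python
# Why in-process memoization can't help a CGI deployment: the cache
# lives only as long as the process. lru_cache stands in for Memoize.
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=None)
def render_report(dist):
    CALLS["count"] += 1          # simulate an expensive page build
    return f"<html>report for {dist}</html>"

# Within one long-lived process, repeat requests are free:
render_report("Some-Dist")
render_report("Some-Dist")
assert CALLS["count"] == 1       # second call was a cache hit

# Under CGI, each request starts a fresh process, so the cache starts
# empty every time -- equivalent to clearing it between requests:
render_report.cache_clear()
render_report("Some-Dist")
assert CALLS["count"] == 2       # the "new process" rebuilt the page
```

That's why the caching has to live outside the request-handling process -- either as pre-generated static files or at a layer like Fastly.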
