Hi Lukas,

On Sun, Jun 23, 2013 at 09:46:34AM +0200, Lukas Tribus wrote:
> Hi,
> 
> > I find it strange that the 'normal' git repository (though slow) is
> > unable to clone correctly. But I guess that's not so important if there
> > is a good workaround / secondary up-to-date repository.
> 
> I agree, slow is one thing, not working is another thing.
> 
> Willy, can you take a look why cloning from git.1wt.eu fails?
> 
> 
> lukas@ubuntuvm:~/haproxy-test$ git clone http://git.1wt.eu/git/haproxy.git/
> Cloning into 'haproxy'...
> error: Unable to get pack file 
> http://git.1wt.eu/git/haproxy.git/objects/pack/pack-ad332087a4ea5a65ac90791a6d55f57f2efb57d3.pack
> transfer closed with 272368 bytes remaining to read
> error: Unable to find 85eb3ee8610b7a8389e78b3f342f6101467d31c3 under 
> http://git.1wt.eu/git/haproxy.git
> Cannot obtain needed blob 85eb3ee8610b7a8389e78b3f342f6101467d31c3
> while processing commit 0a3dd74c9cd24ab77178c9ccc65c577a91648cef.
> error: Fetch failed.
> lukas@ubuntuvm:~/haproxy-test$

We get this report from time to time with no clear explanation :-(
This time the problem seems a bit clearer.

When you download from git.1wt.eu, your requests pass through a cache
(formilux.org) so that git packs are retrieved faster.

There is an haproxy instance in front of this cache which reports this:

2013-06-23T07:42:16+02:00/86 127.0.0.1 haproxy[29509]: XX.XXX.XX.XX:39265
[23/Jun/2013:07:41:44.662] public cache-1wt/cache 45/0/0/2079/32021 200 51531 -
- SDNI 9/9/5/5/0 0/0 {git.1wt.eu} "GET
/git/haproxy.git/objects/pack/pack-ad332087a4ea5a65ac90791a6d55f57f2efb57d3.pack HTTP/1.1"

And on the site on the other side I'm seeing this:

Jun 23 09:42:16 rpx2 haproxy[1153]: 88.191.124.161:40531
[23/Jun/2013:09:41:44.857] http-in www/www 3/0/1/13/31816 200 220007 - - cD--
1/1/1/1/0 0/0 {git.1wt.eu:81|git/1.7.9.5|XX.XXX.XX.XX, 1|||}
{|323599|application/octet-st} "GET
/git/haproxy.git/objects/pack/pack-ad332087a4ea5a65ac90791a6d55f57f2efb57d3.pack HTTP/1.1"

So it seems to me that it's the cache in the middle which tends to hang
on some connections, and that once the connection aborts, the broken
object is probably stored truncated in the cache.
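If that's what happened, the mismatch is easy to spot: the origin's log
above captured a body size of 323599 bytes for that pack, while the cache
side shows far fewer bytes going out. A minimal sketch of the check,
simulated locally with a dummy file (the sizes are taken from the logs
above, but the file and the exact received count are just illustrative):

```shell
# Sketch: detect a truncated pack by comparing the bytes actually received
# against the Content-Length the origin advertised. Simulated here with a
# dummy file; in the real case 'expected' would come from the response headers.
expected=323599                          # size the origin advertised (see log)
head -c 51531 /dev/zero > pack.tmp       # simulate the short transfer
received=$(wc -c < pack.tmp)
if [ "$received" -lt "$expected" ]; then
  echo "truncated: got $received of $expected bytes"
fi
```

A cache that stores whatever it received before the abort would keep
serving that short file to every later client, which matches the
"transfer closed with 272368 bytes remaining" error above.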

I've just put the cache into maintenance mode so that connections go
directly to the origin, if you want to retry. It will be even slower,
but probably worth a try.
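And once a retry succeeds, git itself can confirm that nothing is
missing: git fsck walks every object and reports any that are corrupt or
absent, which is exactly what a truncated pack would produce. A quick
sketch on a throwaway repo (the repo name and identity below are just
placeholders; in practice you'd run fsck inside the fresh clone):

```shell
# Sketch: verify object integrity with git fsck. On a repo whose pack was
# truncated, missing/corrupt objects would be reported here.
git -c init.defaultBranch=main init -q fsck-demo
git -C fsck-demo -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m init
git -C fsck-demo fsck --full && echo "objects intact"
```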

Regards,
Willy

