2009/9/12 "Martin v. Löwis" <mar...@v.loewis.de>:
>>> Just make a specification. Notice that some links *already* have
>>> a rel attribute which might be interesting to consider.
>>
>> OK.
>>
>> Any idea how we could handle the dead links?
>
> IIUC, you don't have to follow them, right? If you want to download
> the package, just follow the download links.
>

easy_install tries to find the "best version" when you don't specify
one, so it visits all the candidate links (those selected via the
'rel' attribute, etc.) before picking the one it will download.

Just try this one:

$ easy_install hachoir-core
Searching for hachoir-core
Reading http://pypi.python.org/simple/hachoir-core/
Reading http://hachoir.org/wiki/hachoir-core   <- this page doesn't
exist anymore (it's an old home URL), so while easy_install waits on
that page, you're blocked for a while!
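The scanning step shown above can be sketched roughly like this (a
minimal modern-Python sketch, not easy_install's actual code; the
sample page content is made up for illustration):

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect (href, rel) pairs from a PyPI "simple" index page.

    Sketch of what easy_install does: it gathers *every* candidate
    link, including rel="homepage"/"download" ones pointing off-site,
    before deciding which distribution is the "best version".
    """

    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            self.links.append((attrs.get("href"), attrs.get("rel")))


# Hypothetical index page content, mimicking /simple/hachoir-core/
page = """
<a href="hachoir-core-1.0.tar.gz">hachoir-core-1.0</a>
<a href="hachoir-core-1.2.tar.gz">hachoir-core-1.2</a>
<a href="http://hachoir.org/wiki/hachoir-core" rel="homepage">home</a>
"""

collector = LinkCollector()
collector.feed(page)
# Every link is a candidate -- including the external homepage,
# which the client will also crawl while looking for versions.
print(collector.links)
```

The point is that the external homepage link is just another candidate
to crawl, which is why a single dead host stalls the whole search.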

If we keep this behavior, the client side needs to be smarter.

We are adding timeout handling in Distribute, and we will probably add
a special option so it doesn't follow external links when some
distributions were already found at PyPI.
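The timeout part is just the standard urllib mechanism; a minimal
sketch (the 5-second default is an arbitrary illustration, and the
actual Distribute option name isn't settled here):

```python
import socket
import urllib.error
import urllib.request


def fetch_page(url, timeout=5.0):
    """Fetch an index page but give up quickly on dead hosts.

    Without an explicit timeout, a dead link like the old
    hachoir.org wiki blocks the client until the OS-level connect
    timeout expires, which can take minutes.
    """
    try:
        return urllib.request.urlopen(url, timeout=timeout).read()
    except (urllib.error.URLError, socket.timeout):
        return None  # treat a dead link as "no candidates here"
```

A client using this can simply skip any page that returns None and
move on to the next candidate link.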

But we should find a way to remove dead links from PyPI, imho.

Maybe by providing a proxy for all links? Then PyPI could fall back to
an empty page when a link is dead?
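The server-side check behind that proxy idea could be as simple as a
HEAD probe before serving the link (a hypothetical helper, not
anything PyPI actually has):

```python
import socket
import urllib.error
import urllib.request


def link_is_alive(url, timeout=5.0):
    """Cheap liveness probe for an external link (hypothetical helper).

    A proxy in front of PyPI could run this before serving an
    external link, and fall back to an empty page when the target
    host is gone, so clients never hang on dead URLs.
    """
    request = urllib.request.Request(url, method="HEAD")
    try:
        response = urllib.request.urlopen(request, timeout=timeout)
        return response.getcode() < 400
    except (urllib.error.URLError, socket.timeout, ValueError):
        return False
```

A HEAD request keeps the probe cheap since no response body is
transferred; the result could also be cached so every client hit
doesn't re-probe the same dead host.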


> Regards,
> Martin
>



-- 
Tarek Ziadé | http://ziade.org | the rock of open source!
_______________________________________________
Catalog-SIG mailing list
Catalog-SIG@python.org
http://mail.python.org/mailman/listinfo/catalog-sig
