Follow-up Comment #6, bug #30999 (project wget):
Crawl-delay is host/domain specific. Thus a wget -r 'domain1 domain2 domain3'
can't simply wait 'crawl-delay' seconds after each download. We need some
host-specific logic when dequeuing the next file. Also, how does --wait come
into play? The user might be able to override crawl-delay for domain1 but not
for domain2 and domain3.
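To illustrate the kind of dequeuing logic this would take, here is a minimal sketch (not wget's actual queue code; function and variable names are hypothetical, and it assumes sequential downloads of fixed duration on a virtual clock). It tracks a per-host "next ready" time and always dequeues from the host that becomes ready soonest, so waiting on one slow host doesn't stall files from other hosts:

```python
def order_downloads(queue, crawl_delay, duration=1.0):
    """Simulate sequential dequeuing with per-host Crawl-delay.

    queue: list of (host, url) pairs in discovery order.
    crawl_delay: dict mapping host -> Crawl-delay in seconds (0 if absent).
    duration: assumed fixed download time per file (a simplification).
    Returns a list of (virtual_start_time, url) in download order.
    """
    # next_ready[host] = earliest virtual time a request to host may start
    next_ready = {}
    # pending[host] = urls still queued for that host, in discovery order
    pending = {}
    for host, url in queue:
        pending.setdefault(host, []).append(url)

    now = 0.0
    out = []
    while pending:
        # dequeue from the host whose delay expires soonest
        host = min(pending, key=lambda h: next_ready.get(h, 0.0))
        start = max(now, next_ready.get(host, 0.0))
        url = pending[host].pop(0)
        if not pending[host]:
            del pending[host]
        out.append((start, url))
        now = start + duration
        # this host must not be contacted again before now + its delay
        next_ready[host] = now + crawl_delay.get(host, 0.0)
    return out
```

With a 5-second Crawl-delay on host "a" and none on "b", the queue [a1, a2, b1] comes out as a1, b1, a2: the b1 download fills part of a's delay instead of wget sleeping the full 5 seconds. A per-host --wait override would slot in by replacing `crawl_delay.get(host, 0.0)` with the user's value for the hosts where overriding is allowed.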
Today, web servers often allow 50+ parallel connections from one client, so
I really don't see the point in implementing crawl-delay.
I could change my mind if someone has a *real* good reason for it *and* comes
up with a good algorithm / patch to handle all corner cases.
_______________________________________________________
Reply to this item at:
<http://savannah.gnu.org/bugs/?30999>
_______________________________________________
Message sent via Savannah
http://savannah.gnu.org/