Here's a test case for the --spider option; perhaps helpful for
documentation?
Using wget on about 17,000 URLs (these are in the FSF/UNESCO Free Software
Directory and are not by any means unique), about 395 generate errors when
run with the spider option (--spider) of the wget command.
Solved my own problem: Netscape Enterprise Server 3.6 (which is probably as
old as the Mozilla project itself) doesn't seem to support the HEAD request
method. It advertises HTTP/1.1, but without HEAD it isn't even HTTP/1.0
compliant.
On Tue, 17 Jun 2003, Aaron S. Hawley wrote:
> I use the --spider option a lot and don't
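For context, a minimal sketch (not wget's actual code) of why a server
without HEAD support breaks spider-style checking: --spider verifies links
with HEAD requests, so a server that serves GET but rejects HEAD makes every
URL look broken. The test server and the `spider_check` helper below are
invented for the demonstration, using only the Python standard library:

```python
# Sketch: a spider-style link check that issues HEAD and falls back to GET
# when the server rejects HEAD, as old servers reportedly do.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoHeadHandler(BaseHTTPRequestHandler):
    # Simulates a server that serves GET but not HEAD.
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_HEAD(self):
        self.send_error(501, "HEAD not supported")

    def log_message(self, *args):  # keep output quiet
        pass

def spider_check(host, port, path="/"):
    """Return the status a spider would report: HEAD first, GET fallback."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("HEAD", path)
    status = conn.getresponse().status
    conn.close()
    if status == 501:  # server refuses HEAD; retry with GET
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        status = resp.status
        conn.close()
    return status

server = HTTPServer(("127.0.0.1", 0), NoHeadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(spider_check("127.0.0.1", server.server_address[1]))  # prints 200
server.shutdown()
```

Without the GET fallback, the check above would report 501 for a URL that is
actually fine, which matches the false errors described in the thread.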
If one didn't come with the source distribution, one probably doesn't
exist. Your closest match may be a README file, a man page, and of course
the code itself.
On Wed, 18 Jun 2003, Clark, Rob wrote:
> I have seen manuals for GNU Wget 1.8.1 and earlier versions. However, I am
> seeking a manual
I have seen manuals for GNU Wget 1.8.1 and earlier versions. However, I am
seeking a manual specifically directed to the predecessor program "GetURL
1.1." Is one available? If so, how can I obtain a copy? Any assistance
would be appreciated.
-Rob Clark
Just a quick note regarding the "trash at end of file" problem: usually that
means a broken or braindead proxy (possibly a transparent one), not a wget
fault. For the rest, don't expect too much; wget is currently in stasis for
lack of an active maintainer.
Heiko
Quite the opposite: at that time, "wait" was used both for retries and
between normal connections, so a high wait time (to avoid hammering the
server) meant slow downloads even for working connections.
So the idea at that time was to allow a wait of 0 (between normal
connections) together with a waitretry of 0..x (used between retries).