On Tuesday 01 March 2011 15:00:53 KJ wrote:
> I was looking at building a log file to see what broken links might be
> listed.  Wget works as I expect with -r and -m etc.  But when I add the
> --spider option it doesn't work correctly, seems to stop on the
> index.html file.  I can grab index.html by using wget w/out the spider
> option.
>
> $ wget --spider -r -o logfile.txt www.domain.com

Must be something peculiar to your domain; I just tried it with one of mine
(www.terylbrake.com), and it works perfectly.  There are a few bogus errors
in the log file, but since the end result is good, I'm not too concerned about them:

Found no broken links.

FINISHED --2011-03-01 17:03:02--
Downloaded: 2 files, 9.0K in 0.08s (118 KB/s)
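For pulling the broken-link report out of the spider log afterwards, a grep pass
over the log works. The sketch below fakes a log file inline so it runs anywhere;
in a real run you would generate logfile.txt with something like
"wget --spider -r -o logfile.txt www.example.com" (example.com is a stand-in
domain, and the log excerpt is illustrative, not captured from a real run):

```shell
# Create a stand-in spider log; a real one comes from:
#   wget --spider -r -o logfile.txt www.example.com
cat > logfile.txt <<'EOF'
Spider mode enabled. Check if remote file exists.
Remote file does not exist -- broken link!!!

Found 1 broken link.

http://www.example.com/missing.html

FINISHED --2011-03-01 17:03:02--
EOF

# Pull the summary line plus the URLs listed after it.
grep -A3 'broken link' logfile.txt
```

Recent wget versions print a "Found N broken link(s)." summary followed by the
offending URLs at the end of a recursive --spider run, so grepping for
"broken link" is usually enough to build a report.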

-- 
Tilghman

-- 
You received this message because you are subscribed to the Google Groups 
"NLUG" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/nlug-talk?hl=en
