From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Þröstur
Sent: Wednesday, June 21, 2006 4:35 PM
There have been some reports in the past, but I don't think they have been
acted upon; one of the problems is that the list of names can be extended at
will (besides the standard COMx, LPTx,
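(For reference: the standard reserved device names on Windows are CON, PRN,
AUX, NUL, COM1 through COM9, and LPT1 through LPT9. Drivers can also register
additional DOS device names, which is presumably why the list above is
described as extendable at will.)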
Hi,
I'm having a problem while downloading from a Microsoft FTP server.
The problem is that the connection times out or is closed while downloading;
wget then retries the download, but receives a file-not-found error.
Is this a problem with the MS server or with wget?
Here is a log of the error,
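One thing worth trying for connections that drop mid-download (a general
suggestion, not a confirmed fix for this particular server; the URL below is
a placeholder):

  wget -c --tries=20 --waitretry=10 ftp://ftp.example.com/path/file.zip

-c (--continue) resumes a partial file instead of restarting from scratch,
--tries raises the retry count, and --waitretry adds a pause between failed
attempts, so a flaky connection at least keeps the bytes already downloaded.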
bruce wrote:
hi...
i'm testing wget on a test site.. i'm using the recursive function of wget
to crawl through a portion of the site...
it appears that wget is hitting a link within the crawl that's causing it to
begin to crawl through the section of the site again...
i know wget isn't as
Try using the -np (no parent) parameter.
Mark Post
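For example (the URL is a placeholder):

  wget -r -np http://example.com/section/

With -r (recursive) and -np (--no-parent) together, wget never ascends above
/section/ while following links, so a link pointing back up to the top of the
site won't restart the crawl from there.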
-----Original Message-----
From: bruce [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 22, 2006 4:15 PM
To: 'Frank McCown'; wget@sunsite.dk
Subject: RE: wget - tracking urls/web crawling
hi frank...
there must be something simple i'm
hey frank...
creating a list of pages to parse doesn't do me any good... i really need to
be able to recurse through the underlying pages... or at least a section of
the pages...
if there was a way that i could use some form of regex to exclude
URLs (including the query string) that match, then i'd be
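In case it's useful: wget's -X/--exclude-directories can prune whole
directories from a recursive crawl, and -I/--include-directories can limit
it to a section, but neither looks at the query string. Newer wget releases
(1.14 and later, well after this thread) added --reject-regex, which is
matched against the complete URL, query string included. A sketch, with a
placeholder URL and pattern:

  wget -r -np --reject-regex 'sessionid=|sort=' http://example.com/section/

Any URL whose text matches the regex is skipped during the recursive crawl.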