RE: Bug in GNU Wget 1.x (Win32)

2006-06-22 Thread Herold Heiko
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Þröstur Sent: Wednesday, June 21, 2006 4:35 PM There have been some reports in the past, but I don't think they have been acted upon; one of the problems is that the list of reserved names can be extended at will (besides the standard COMx, LPTx,
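For context (an addition, not part of Þröstur's message): the bug concerns DOS reserved device names. On Win32, base names such as CON, PRN, AUX, NUL, COM1-COM9 and LPT1-LPT9 are claimed by the system regardless of extension, so wget cannot save a downloaded file under such a name verbatim, and device drivers can register further names. A minimal C sketch of the kind of check involved; the function name is hypothetical and this is not wget's actual code:

  #include <ctype.h>
  #include <string.h>

  /* Nonzero if NAME (base name, without extension) is a reserved DOS
     device name. Illustrative only: drivers may register additional
     names, which is exactly why a fixed list is insufficient. */
  static int
  is_reserved_device_name (const char *name)
  {
    static const char *fixed[] = { "CON", "PRN", "AUX", "NUL" };
    size_t i;
    for (i = 0; i < sizeof fixed / sizeof fixed[0]; i++)
      if (_stricmp (name, fixed[i]) == 0)   /* strcasecmp on POSIX */
        return 1;
    if ((_strnicmp (name, "COM", 3) == 0 || _strnicmp (name, "LPT", 3) == 0)
        && isdigit ((unsigned char) name[3]) && name[4] == '\0')
      return 1;
    return 0;
  }

A caller would have to rename the output file (e.g. prefix an underscore) whenever this returns nonzero.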

Problem when timeout

2006-06-22 Thread Oliver Schulze L.
Hi, I'm having a problem while downloading from a Microsoft FTP server. The problem is that the connection times out / is closed while downloading; wget then retries the download, but it receives a file-not-found error. Is this a problem with the MS server or with wget? Here is a log of the error,
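If resuming partially-retrieved files is acceptable, a common way to soften such timeouts (an assumption about this setup; host and path below are placeholders) is to continue the partial file and pause between retries:

  wget -c --tries=20 --waitretry=10 ftp://ftp.example.com/pub/file.zip

-c/--continue restarts from the partial file instead of from scratch, and --waitretry inserts a growing delay between retries. If the server really answers "file not found" on the retry, though, the problem is likely server-side (the FTP session or path is no longer valid), and no client option will fix that.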

Re: wget - tracking urls/web crawling

2006-06-22 Thread Frank McCown
bruce wrote: hi... i'm testing wget on a test site... i'm using the recursive function of wget to crawl through a portion of the site... it appears that wget is hitting a link within the crawl that causes it to begin crawling through that section of the site again... i know wget isn't as

RE: wget - tracking urls/web crawling

2006-06-22 Thread Post, Mark K
Try using the -np (no parent) parameter. Mark Post -Original Message- From: bruce [mailto:[EMAIL PROTECTED]] Sent: Thursday, June 22, 2006 4:15 PM To: 'Frank McCown'; wget@sunsite.dk Subject: RE: wget - tracking urls/web crawling hi frank... there must be something simple i'm
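For reference, a recursive crawl confined to one section of a site would look something like this (URL and depth are placeholders):

  wget -r -np -l 5 http://www.example.com/section/

-r enables recursion, -np (--no-parent) keeps wget from ascending above the starting directory, and -l caps the recursion depth.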

RE: wget - tracking urls/web crawling

2006-06-22 Thread bruce
hey frank... creating a list of pages to parse doesn't do me any good... i really need to be able to recurse through the underlying pages... or at least a section of the pages... if there were a way that i could use some form of regex to exclude urls+querystring that match, then i'd be
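A note beyond this thread: wget 1.x as of 2006 had no regex filter for URLs; the nearest options were -R/--reject (wildcard patterns matched against file names) and -X/--exclude-directories. Later releases (1.14 and newer) added exactly what bruce asks for, e.g. (placeholder URL and pattern):

  wget -r -np --reject-regex '\?.*action=' http://www.example.com/section/

--reject-regex matches against the complete URL, query string included, so links like page.php?action=edit are skipped during recursion.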