--convert-links vs. non-recursion

2005-03-02 Thread Belov, Charles
Hi -

I'm invoking wget 1.9.1 with the following options, among others:

--input-file _filename_
--restrict-file-names='windows'
--directory-prefix=/www/htdocs/newfolder1/newfolder2
--convert-links
--html-extension

but not recursion. 

The reason I'm using an input file instead of recursion is this documentation 
about --accept _acclist_:

Note that these two options do not affect the downloading of HTML files; Wget 
must load all the HTMLs to know where to go at all--recursive retrieval would 
make no sense otherwise.

Well, not quite.  I want to retrieve all pages named
http://my.source.site/oldfolder/abc_pages.asp?id=n and
http://my.source.site/oldfolder/def_pages.asp?id=n and
http://my.source.site/oldfolder/ghi_pages.asp?id=n
but not pages named
http://my.source.site/oldfolder/jkl_pages.asp?id=n or
http://my.source.site/oldfolder/mno_pages.asp?id=n or
http://my.source.site/oldfolder/pqr_pages.asp?id=n . 

(Where n is the 5-digit number corresponding to the actual content.)

That is, they are all in the same directory, with different whatever.asp names. 

What's happening is that the pages in my input list are correctly getting 
copied to 

http://my.target.site/newfolder1/newfolder2/abc_pages@id=n.html etc

but the links in the pages are untranslated from their original

/oldfolder/def_pages?id=n

instead of being translated to a working

def_pages@id=n.html
or
/newfolder1/newfolder2/def_pages@id=n.html

and links to unwanted pages such as
/oldfolder/jkl_pages.asp?id=n 
are not being translated to
http://my.source.site/oldfolder/jkl_pages.asp?id=n
in the files on my new site. 

I'm guessing that --convert-links will only work with recursion, and that's why 
the links also aren't being fixed for the .html extension or the Windows file 
names.  Is there a way to get some of the HTML files and not others when they 
are in the same directory, but still get the links fully translated? Or will I 
need to post-edit my new files outside of wget to fix the links?

Note: The target site is on a Un*x box, but I have to be able to 
upload/download from a PC.

Thanks in advance,
Charles Chas Belov
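
[Editor's note: the post-edit pass Charles asks about can indeed be done outside wget. A minimal Python sketch follows; the folder names, site names, page names, and five-digit id pattern are taken from the message above, and it assumes --restrict-file-names=windows saved each page as "name.asp@id=NNNNN.html" (i.e. the "?" became "@" and --html-extension appended ".html"). This is an illustration, not part of the original thread.]

```python
import re
from pathlib import Path

# Pages we downloaded locally vs. pages left behind on the source site.
WANTED = ('abc_pages', 'def_pages', 'ghi_pages')
UNWANTED = ('jkl_pages', 'mno_pages', 'pqr_pages')

wanted_re = re.compile(r'/oldfolder/(%s)\.asp\?id=(\d{5})' % '|'.join(WANTED))
unwanted_re = re.compile(r'/oldfolder/(%s)\.asp' % '|'.join(UNWANTED))

def fix_links(html: str) -> str:
    # Wanted pages: point the link at the local, renamed copy.
    html = wanted_re.sub(r'\1.asp@id=\2.html', html)
    # Unwanted pages: make the link absolute back to the source site.
    return unwanted_re.sub(r'http://my.source.site/oldfolder/\1.asp', html)

# Rewrite every saved page under the --directory-prefix from the message.
for page in Path('/www/htdocs/newfolder1/newfolder2').glob('*.html'):
    page.write_text(fix_links(page.read_text()))
```

`fix_links('<a href="/oldfolder/def_pages.asp?id=23456">x</a>')` yields a link to the local `def_pages.asp@id=23456.html`, while links to the unwanted pages are rewritten to absolute source-site URLs.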



non-recursion in 1.9.1

2004-05-14 Thread Ilya N. Golubev
wget -mLd http://swimming.hut.ru/tech/tech.html

does not follow `<a class=t href=br.html>' links contained in the
resource.  `~/.wgetrc' is empty, proxy is not used.  The program
output follows.

DEBUG output created by Wget 1.9.1 on linux-gnu.

Enqueuing http://swimming.hut.ru/tech/tech.html at depth 0
Queue count 1, maxcount 1.
Dequeuing http://swimming.hut.ru/tech/tech.html at depth 0
Queue count 0, maxcount 1.
--22:44:18--  http://swimming.hut.ru/tech/tech.html
   => `swimming.hut.ru/tech/tech.html'
Resolving swimming.hut.ru... 195.161.118.35
Caching swimming.hut.ru => 195.161.118.35
Connecting to swimming.hut.ru[195.161.118.35]:80... connected.
Created socket 3.
Releasing 0x8083fd0 (new refcount 1).
---request begin---
HEAD /tech/tech.html HTTP/1.0
User-Agent: Wget/1.9.1
Host: swimming.hut.ru
Accept: */*
Connection: Keep-Alive

---request end---
HTTP request sent, awaiting response... HTTP/1.1 200 OK
Date: Thu, 06 May 2004 18:44:18 GMT
Server: Apache/1.3.19 (Unix) AGAVA.Banners/1.10 rus/PL30.4
Connection: close
Content-Type: text/html; charset=koi8-r
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Last-Modified: Thu, 06 May 2004 18:44:18 GMT


Length: unspecified [text/html]
Closing fd 3
Remote file is newer, retrieving.
--22:44:18--  http://swimming.hut.ru/tech/tech.html
   => `swimming.hut.ru/tech/tech.html'
Found swimming.hut.ru in host_name_addresses_map (0x8083fd0)
Connecting to swimming.hut.ru[195.161.118.35]:80... connected.
Created socket 3.
Releasing 0x8083fd0 (new refcount 1).
---request begin---
GET /tech/tech.html HTTP/1.0
User-Agent: Wget/1.9.1
Host: swimming.hut.ru
Accept: */*
Connection: Keep-Alive

---request end---
HTTP request sent, awaiting response... HTTP/1.0 200 OK
Date: Thu, 06 May 2004 18:43:29 GMT
Server: Apache/1.3.19 (Unix) AGAVA.Banners/1.10 rus/PL30.4
Connection: close
Content-Type: text/html; charset=koi8-r
Vary: accept-charset, user-agent
Content-Length: 15257
Age: 49


Length: 15,257 [text/html]

0K ..    100%  105.01 KB/s

Closing fd 3
22:44:18 (105.01 KB/s) - `swimming.hut.ru/tech/tech.html' saved [15257/15257]

Loaded swimming.hut.ru/tech/tech.html (size 15257).
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "../sw.css") -> http://swimming.hut.ru/tech/../sw.css
appending "http://swimming.hut.ru/sw.css" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://www.hut.ru/g/cw.gif?swimmin7") -> http://www.hut.ru/g/cw.gif?swimmin7
appending "http://www.hut.ru/g/cw.gif?swimmin7" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://www.hut.ru/g/ch.gif?swimmin7") -> http://www.hut.ru/g/ch.gif?swimmin7
appending "http://www.hut.ru/g/ch.gif?swimmin7" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://ad.tbn.ru/bb.cgi?cmd=go&pubid=' + userid + '&pg=' + page + '&vbn=188&num=1&w=468&h=60&nocache=' + rndnum + '") -> http://ad.tbn.ru/bb.cgi?cmd=go&pubid=' + userid + '&pg=' + page + '&vbn=188&num=1&w=468&h=60&nocache=' + rndnum + '
appending "http://ad.tbn.ru/bb.cgi?cmd=go&pubid='%20+%20userid%20+%20'&pg='%20+%20page%20+%20'&vbn=188&num=1&w=468&h=60&nocache='%20+%20rndnum%20+%20'" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://ad.tbn.ru/bb.cgi?cmd=ad&pubid=' + userid + '&pg=' + page + '&vbn=188&num=1&w=468&h=60&nocache=' + rndnum + '") -> http://ad.tbn.ru/bb.cgi?cmd=ad&pubid=' + userid + '&pg=' + page + '&vbn=188&num=1&w=468&h=60&nocache=' + rndnum + '
appending "http://ad.tbn.ru/bb.cgi?cmd=ad&pubid='%20+%20userid%20+%20'&pg='%20+%20page%20+%20'&vbn=188&num=1&w=468&h=60&nocache='%20+%20rndnum%20+%20'" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://ad.tbn.ru/bb.cgi?cmd=go&pubid=297349&pg=1&vbn=188&num=1&w=468&h=60&nocache=6820111") -> http://ad.tbn.ru/bb.cgi?cmd=go&pubid=297349&pg=1&vbn=188&num=1&w=468&h=60&nocache=6820111
appending "http://ad.tbn.ru/bb.cgi?cmd=go&pubid=297349&pg=1&vbn=188&num=1&w=468&h=60&nocache=6820111" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://ad.tbn.ru/bb.cgi?cmd=ad&pubid=297349&pg=1&vbn=188&num=1&w=468&h=60&nocache=6820111") -> http://ad.tbn.ru/bb.cgi?cmd=ad&pubid=297349&pg=1&vbn=188&num=1&w=468&h=60&nocache=6820111
appending "http://ad.tbn.ru/bb.cgi?cmd=ad&pubid=297349&pg=1&vbn=188&num=1&w=468&h=60&nocache=6820111" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://top100.rambler.ru/top100/") -> http://top100.rambler.ru/top100/
appending "http://top100.rambler.ru/top100/" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html", "http://counter.rambler.ru/top100.cnt?218123") -> http://counter.rambler.ru/top100.cnt?218123
appending "http://counter.rambler.ru/top100.cnt?218123" to urlpos.
swimming.hut.ru/tech/tech.html: merge("http://swimming.hut.ru/tech/tech.html",

Re: thx: non-recursion

2004-04-19 Thread Hrvoje Niksic
Ilya N. Golubev [EMAIL PROTECTED] writes:

 A future version of
 Wget will probably parse comments in a non-compliant fashion, by
 considering everything between <!-- and --> to be a comment

 Installed 1.9.1 (unfortunately, there are no good binary rpms still;
 this is why ran it uninstalled so long).  Recursive copying of
 http://www.hro.org/docs/rlex/hk/index.htm and like works normally
 now.  Thanks!

Good to know, thanks for checking it out.



Re: non-recursion

2003-09-19 Thread Hrvoje Niksic
Doug Kaufman [EMAIL PROTECTED] writes:

 On Thu, 18 Sep 2003, Hrvoje Niksic wrote:

 modifying advance_declaration() in html-parse.c.  A future version of
 Wget will probably parse comments in a non-compliant fashion, by
 considering everything between <!-- and --> to be a comment, which is
 what most other browsers have been doing since the beginnings of the
 web.

 The lynx browser is configurable as to how it parses comments.

So is Wget, as of last night.  The default is minimal (non-compliant)
comment parsing, and that can be changed with `--strict-comments'.

 It can change on the fly from minimal comments to historical
 comments to valid comments. Which browsers act in non-compliant
 fashion all the time?

Those that display http://www.hro.org/docs/rlex/uk/index.htm (unless
I'm mistaken), and that would mean pretty much all of them.  Of
course, that page is but one example out of many.

Some browsers have more complex heuristics for comment parsing, but
adding that to Wget would probably be overdoing it.


Re: non-recursion

2003-09-18 Thread Hrvoje Niksic
Ilya N. Golubev [EMAIL PROTECTED] writes:

 Duplicating my [EMAIL PROTECTED] sent on Wed, 10 Sep 2003
 19:48:56 +0400 since mailer reports that [EMAIL PROTECTED] does not
 work.

 wget -mLd http://www.hro.org/docs/rlex/uk/index.htm

 does not follow `<A HREF=uk1.htm#1>' links contained in the
 resource.

That's because Wget thinks those links are part of a huge comment that
spans the better part of the document.  Unlike most browsers, Wget
implements a (too) strict comment parsing, which breaks pages that use
non-SGML-compliant comments.

As http://www.htmlhelp.com/reference/wilbur/misc/comment.html explains:

[...] There is also the problem with the -- sequence. Some
people have a habit of using things like <!----------> as
separators in their source. Unfortunately, in most cases, the
number of - characters is not a multiple of four. This means
that a browser who tries to get it right will actually get it
wrong here and actually hide the rest of the document.

Currently the only workaround is to alter the source, e.g. by
modifying advance_declaration() in html-parse.c.  A future version of
Wget will probably parse comments in a non-compliant fashion, by
considering everything between <!-- and --> to be a comment, which is
what most other browsers have been doing since the beginnings of the
web.
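
[Editor's note: the difference Hrvoje describes can be sketched in Python. This is a toy model, not wget's actual parser; the sample markup is hypothetical, standing in for the dash-run "separator" comments on the problem page.]

```python
import re

# A page using a dash-run "separator" comment with an odd number of
# "--" pairs, followed by a link:
html = '<html><!-------> <a href="uk1.htm#1">link</a></html>'

# Minimal (non-compliant) parsing: a comment is everything from "<!--"
# to the first "-->".  The link survives.
minimal = re.sub(r'<!--.*?-->', '', html, flags=re.S)
print('uk1.htm' in minimal)            # True

# Strict SGML-style parsing: inside "<!" ... ">", comment text is
# delimited by *pairs* of "--".  When the dashes don't pair up evenly,
# the parser keeps scanning for another "--" and eats the rest of the
# document -- including the links.
def strict_strip(s):
    out, i = [], 0
    while i < len(s):
        if s.startswith('<!--', i):
            j = i + 2                  # just past "<!"
            while True:
                j = s.find('--', j)
                if j == -1:            # no opening "--": eat to end
                    return ''.join(out)
                k = s.find('--', j + 2)
                if k == -1:            # unterminated pair: eat to end
                    return ''.join(out)
                j = k + 2
                if j < len(s) and s[j] == '>':
                    i = j + 1
                    break
        else:
            out.append(s[i])
            i += 1
    return ''.join(out)

print('uk1.htm' in strict_strip(html))  # False: the link was "commented out"
```

A well-formed comment such as `<!-- ok -->` is handled identically by both strategies; only the unbalanced dash runs diverge, which is why the strict parser silently drops links on pages like the one reported.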


Re: non-recursion

2003-09-18 Thread Doug Kaufman
On Thu, 18 Sep 2003, Hrvoje Niksic wrote:

 modifying advance_declaration() in html-parse.c.  A future version of
 Wget will probably parse comments in a non-compliant fashion, by
 considering everything between <!-- and --> to be a comment, which is
 what most other browsers have been doing since the beginnings of the
 web.

The lynx browser is configurable as to how it parses comments. It can
change on the fly from minimal comments to historical comments to
valid comments. Which browsers act in non-compliant fashion all the
time?
  Doug
-- 
Doug Kaufman
Internet: [EMAIL PROTECTED]