Uhh, sorry, I quoted my own URL wrongly in this answer:
> > need a utility that can download web sites for offline viewing
>
> HTGET does exactly that. You need a packet driver loaded first to use it.
- so here is a repeat of the announcement from some two weeks ago:
Date: 09-09-1999 13:25:23
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: (Announcement) WWW-grabber
This has intrigued me for a while, and now I've put it together:
A fast and small setup to get any WWW-page:
Sort of a "WWW-GRABBER" on base of HTGET and a packet driver.
Though most of this is not so new at all - some pieces of the arrangement
are well-known and have been around here for a while - it has been a bit
cumbersome (and typo-prone...) to write the command line for HTGET,
for instance, one of the parts of the parcel.
So here is a package which uses a dialler (e.g. Netdial, Chat, or DialPC)
to connect a packet driver (DosPPPD) and a file lister which acts as a
"web-URL grabber": it takes the URL and launches HTGET - via a one-line
batch file it writes - with the complete and correct target to get.
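For illustration, a minimal sketch of what such a generated one-liner
might look like (the batch file name and the exact HTGET invocation are
assumptions here - see HTGET's own docs for the real command syntax):

    REM GETONE.BAT - written by the grabber for one fetch (hypothetical)
    REM HTGET writes the fetched page to standard output in the builds
    REM I know of, so redirect it into a local file:
    HTGET http://www.example.com/page.html > PAGE.HTM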
The fast and simple HTML interpreter HLIST from Martin Goebbel
helps to do a quick check of the downloaded WWW page while online, to
store or to trash it. As the whole thing isn't much more than a batch
file set-up, there's room to build in all sorts of additional little
helpers. (I've put a "renamer" in there, for instance, to either trash
or rename and move the files received; a sketch of such a helper follows.)
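A hypothetical sketch of that sort of helper (it assumes DOS 6's CHOICE
command and a placeholder directory; the real renamer in the package may
well differ):

    REM KEEPIT.BAT - keep or trash the file just fetched (hypothetical)
    REM %1 is the name of the downloaded file
    CHOICE /C:KT Keep or Trash %1
    IF ERRORLEVEL 2 GOTO TRASH
    REM first choice (K): copy the file to a holding directory
    COPY %1 C:\SAVED
    :TRASH
    REM in both cases the original is removed afterwards
    DEL %1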
The "URL grabber" greatly simplifies the business. Just assemble
lines with URL references from all the text files, mails, or newsgroups
read into an unedited list - evidently ReRead is good at that, but any
file lister capable to copy and append linewise would do too; LIST for
example. And then use the resulting limbo as (re)source list for the
"grabber".
This is a somewhat experimental design, but it already works well and
fast here; and it's really more efficient, in terms of online time (and
fees!), than surfing to nirvana and back.
The "package" looks big (ca. 285 KB ZIPfile and somewhere in the region
of 2 MB unpacked) but the actual working set-up is less than 260 KB on
disk(ette) - and it *does* run from a diskette, and on any PC under DOS
(from DOS 3.3 on); the zipped package sure contains all the docs and
even the sources of some of the parcels being part of it.
It is up at my place as <http://www.inti.be/hammer/get-www.zip>.
And yes, I think it *should* work with "any (www-)file", be it HTML,
images of sorts, RA/WAV audio, etc. - the problem, of course, is to have
the *precise* URL, and not just some "portal" from which to click
endlessly onwards.
This latter is a growing problem. The endless click-alongs to get at
the destination sought serve only those others who profit from them,
namely 1. the telcos and Net connection providers (sharing the
interconnection rate), 2. the Get-Rich-Quick ISPs, milking 3. the
advertisers. We pay double: as telco users, and as consumers who pay the
publicity overhead baked into the prices.
(And surely those four-line-long, Java-barricaded URLs have their
function in *that* setup.)
// Heimo Claasen // <[EMAIL PROTECTED]> // Brussels 1999-09-09
HomePage of ReRead - and much to read ==> http://www.inti.be/hammer