Quoting Greg Robinson ([EMAIL PROTECTED]):

> I'm having a problem with wget.  I need to have the program (while
> running recursively) output to stdout so that I can pipe the output to a
> separate filter process.  Unfortunately, wget will only download the
> first file from any site I point it at when stdout is specified as the
> file to write to.

The difficulty here is the recursive download: when downloading
recursively, wget requires physical copies of the files to exist on
disk so that it can extract further URLs from them. At the moment
there is no way to do this on the fly while writing to stdout, sorry.
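As a rough sketch of the usual workaround (the URL, the /tmp/mirror
directory, and my_filter below are just placeholders): let wget keep
its local copies first, then pipe them through the filter in a second
step:

    # With -O - there are no on-disk copies for wget to parse,
    # so a recursive run stops after the first file:
    wget -r -O - http://www.example.com/ | my_filter

    # Workaround: mirror into a scratch directory, then feed the
    # downloaded files through the filter afterwards:
    wget -r -P /tmp/mirror http://www.example.com/
    find /tmp/mirror -type f | xargs cat | my_filter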

> Does that mean it's trying to write to the non-existent www.google.com
> directory on my drive, or does it mean that there's no index.html file
> on any server I want to suck from?

The message probably comes from the URL parser; it means that no
local copy of index.html from www.google.com exists on your drive
(and therefore no URLs can be extracted, so the recursive download
will fail).
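For illustration, assuming wget's default naming (a directory named
after the host, with index.html for a directory URL), a plain
recursive run creates exactly the local copy the parser needs:

    # Download recursively to disk; wget stores the start page as
    # www.google.com/index.html and parses that file for more links.
    wget -r -l 1 http://www.google.com/
    ls www.google.com/index.html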

-- jan

--------------------+------------------------------------------------------
 Jan Prikryl        | vr|vis center for virtual reality and visualisation
 <[EMAIL PROTECTED]> | http://www.vrvis.at
--------------------+------------------------------------------------------
