--save-headers only seems to save the headers returned by the web
server. I need to save the HTTP request headers I originally sent, or
find some other way to record the full URL of each page I downloaded.
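
Perhaps I can work around it by letting wget write its own log and
pulling the mapping out of that, since the log records each URL next
to the "Saving to:" line. A rough sketch (urls.txt and url-map.log
are just placeholder names; urls.txt holds one URL per line):

  wget -a url-map.log -i urls.txt
  grep -e '^--' -e '^Saving to:' url-map.log

Here -i reads the URL list from a file, and -a appends wget's log
messages to url-map.log instead of printing them to the terminal.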

On 17/03/2008, Charles <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 17, 2008 at 10:24 PM, Julian Burgess <[EMAIL PROTECTED]> wrote:
>
> > Hello, is there any way in wget to save the request headers to the
>  >  downloaded file? At the moment I'm downloading a fairly large number
>  >  of files, and sometimes it would be really helpful to have the
>  >  original URL from which each one was downloaded. Thanks
>
>
> C:\>wget --help | grep header
>        ...
>        --save-headers          save the HTTP headers to file.
>        ...
>
>  C:\> wget --save-headers --proxy=off http://localhost/
>  --2008-03-17 22:29:49--  http://localhost/
>  ...
>  Saving to: `index.html'
>
>  The contents of index.html:
>  HTTP/1.1 200 OK
>  Date: Mon, 17 Mar 2008 15:29:49 GMT
>  Server: Apache/2.2.8 (Win32) mod_view/2.2 mod_python/3.3.1 Python/2.5.1
>  Connection: close
>  Content-Type: text/html
>  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
>  "DTD/xhtml1-transitional.dtd">
>  <html><head>
>  ...
>
>  So there is a way, but I am not sure that it is the way you want :-).
>  Maybe a better way is to run wget in the background so that it
>  produces a wget-log that can be used to trace the URLs, or to 'tee'
>  the output of wget to a file.
>
>  ---
>  Charles
>
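
To make the wget-log suggestion above concrete (a sketch only; this is
how -b and tee behave on the Unix-like systems I have tried, details
may vary by version):

  wget -b http://localhost/

runs wget in the background and, since no -o/--output-file is given,
writes its log, including each fetched URL, to wget-log. Alternatively:

  wget http://localhost/ 2>&1 | tee wget.log

keeps the output on screen while also saving it; the 2>&1 matters
because wget prints its progress messages on stderr, so tee alone
would capture nothing.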
