On 22/09/15 19:57, andreas wpv wrote:
Unfortunately this only pulls the html files (because where I pull them
from, they are compressed), and not all the other scripts and stylesheets
and so on, even though at least a few of those are compressed, too.
From wget's point of view, the "html" is a binary blob. It scans it looking for
scripts/stylesheets and finds none.
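A quick local sketch of why that happens (file names are made up for the demo): wget's link extractor is effectively a plain-text scan for things like href/src, and gzip's deflate output does not contain those byte sequences verbatim.

```shell
# Build a small but repetitive HTML page so gzip actually compresses it.
{ printf '<html><body>\n'
  for i in $(seq 1 50); do printf '<a href="page%d.html">link</a>\n' "$i"; done
  printf '</body></html>\n'; } > page.html
gzip -k page.html                # keep the original alongside page.html.gz
grep -c 'href' page.html         # plain file: prints 50
grep -q 'href' page.html.gz && echo found || echo 'no links visible in compressed blob'
```

A link scanner sees 50 links in the plain file and none in the compressed one.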

Ideas, tips?
What about implementing gzip Accept-encoding into wget? :)

Someone asked about doing it not so long ago, but it wasn't done.
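In the meantime, a manual workaround along these lines might work: request the gzip coding explicitly with `--header` (which wget does support) and decompress by hand before any link extraction. The URL is a placeholder, so the network step is shown as a comment and the response is simulated locally.

```shell
# Hypothetical workaround until/if wget grows native Accept-Encoding support:
# wget --header='Accept-Encoding: gzip' -O page.html.gz https://example.com/
# Simulate the downloaded gzip response locally so the next steps run as-is:
printf '<a href="a.css">x</a>\n' | gzip > page.html.gz
gunzip -f page.html.gz           # leaves a plain page.html
grep -c 'href' page.html         # prints 1: links are scannable again
```

Note this loses the recursion: wget itself still cannot follow links out of the compressed file, so each page would need to be fetched and unpacked this way.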


* That should actually save the pages uncompressed, but I assume you are
more interested in downloading the content compressed than in storing
it compressed locally. Otherwise, you can download everything with
current wget and then run a script that compresses it.
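That download-then-compress route can be sketched like this (directory and file names are placeholders; the mirroring step is commented out since it needs a real URL, and dummy files stand in for its output):

```shell
# Mirror the site uncompressed with stock wget (network step, for context):
# wget -r -p -P site-mirror https://example.com/

# Simulate a finished mirror with a couple of dummy files:
mkdir -p site-mirror
printf '<html></html>\n' > site-mirror/index.html
printf 'body{}\n'        > site-mirror/style.css

# Then gzip every fetched html/css/js file in place:
find site-mirror -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) \
    -exec gzip -9 {} +
```

Afterwards each file exists only as its `.gz` counterpart; drop `gzip`'s default behavior for `-k` if you want to keep the originals too.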


