Jens Schleusener <[email protected]> writes:

> (for the Germans: Giuseppe spoken Tschuseppe?)

It is more like: Jewseppee.

This can help you better:
http://www.pronounceitright.com/pronuncia.php?id_pronuncia=3631

> Ok, that is what I was afraid of. Maybe that should be mentioned
> briefly in the man page under the "--page-requisites" documentation.

Thanks for the hint.

> P.S.: OT: I "misused" wget for some kind of (relative) benchmarking
> to check the performance of some different configurations of a
> self-administrated Apache/Varnish system. Does anyone know a "simple"
> (batch) tool to "simulate" real browser behaviour for those purposes?
> My current test approach using Firefox with Firebug/PageSpeed and/or
> Wireshark is probably realistic but a little bit troublesome.

I think these tools can help you:

http://httpd.apache.org/docs/2.0/programs/ab.html
http://www.joedog.org/index/siege-home

By the way, a "real" browser doesn't issue many requests in quick
succession; there are quite long intervals where the connection is
kept alive without any request to the server.

Giuseppe

>> Jens Schleusener <[email protected]> writes:
>>
>>> Hi,
>>>
>>> sorry, the wget behaviour described below may not be a real bug:
>>>
>>> I often use the wget option
>>>
>>> --page-requisites
>>>
>>> ("-p"), but for some test purposes I now also added the option
>>>
>>> --header='Accept-Encoding: gzip, deflate'
>>>
>>> Now wget downloads and saves, e.g., a file named index.html (not
>>> index.html.gz), but in gzip-compressed format. wget doesn't seem
>>> to detect that and so cannot find the other files that are
>>> necessary to properly display the given HTML page. Any hints on
>>> how to work around that behaviour, or to force decompression of
>>> compressed files after download?
>>>
>>> Regards
>>>
>>> Jens
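Regarding the quoted question about forcing decompression after
download: one possible workaround (only a sketch, not a wget feature;
it assumes GNU head, od and gzip are available, and the function name
is made up for illustration) is to detect files that contain gzip data
despite their plain names by checking the gzip magic bytes, and
decompress them in place:

```shell
# Sketch: wget with --header='Accept-Encoding: gzip, deflate' may save
# gzip data under a plain name like index.html; fix such files up
# afterwards so they can be parsed for page requisites.
decompress_if_gzipped() {
    # gzip streams start with the magic bytes 0x1f 0x8b
    if [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')" = "1f8b" ]; then
        mv "$1" "$1.gz" && gzip -d "$1.gz"   # restores the original name
    fi
}

# apply it to every .html file in the mirrored tree
find . -name '*.html' | while IFS= read -r f; do
    decompress_if_gzipped "$f"
done
```

After that, the decompressed HTML can be inspected normally (or the
download repeated without the Accept-Encoding header).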

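For completeness, a minimal sketch of how the two benchmarking tools
mentioned above are typically invoked. The URL, request count and
concurrency are placeholders to adjust for your own Apache/Varnish
setup, and the snippet only runs each tool if it is installed:

```shell
# Hypothetical benchmark invocations -- adjust URL and numbers.
URL=http://localhost/index.html

# ab: 100 requests total, 10 concurrent; -k enables HTTP keep-alive,
# which is closer to the browser behaviour described above
if command -v ab >/dev/null 2>&1; then
    ab -n 100 -c 10 -k "$URL"
else
    echo "ab not installed"
fi

# siege: 10 concurrent users, 5 repetitions each
if command -v siege >/dev/null 2>&1; then
    siege -c 10 -r 5 "$URL"
else
    echo "siege not installed"
fi
```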