http://nagoya.apache.org/bugzilla/show_bug.cgi?id=16060

HTTP Request "Retrieve all images" option does not mimic cache

[EMAIL PROTECTED] changed:

           What        |Removed    |Added
----------------------------------------------------------------------------
           Status      |NEW        |RESOLVED
           Resolution  |           |LATER

------- Additional Comments From [EMAIL PROTECTED] 2004-01-24 02:50 -------

I am a bit confused by this bug submission. Netscape 3-6 and IE cache all images by URL; the only time a browser re-retrieves the same images on subsequent pages is when it is configured to download all images every time.

The way the samplers work, all images are retrieved exactly once: image URLs are added to a list, duplicates are not added, and the sampler then retrieves the images on that list. The sampler does not strictly imitate a browser, since a browser launches multiple threads to fetch the images at the same time. The HTTP specification addresses this: HTTP/1.0 clients conventionally open up to 4 concurrent connections to the same server, and HTTP/1.1 recommends no more than 2.

If we really want to imitate browser behavior, it can be achieved without writing a new controller to cache the images: simply create a separate thread group that gets the images. You should be able to verify this by comparing the number of hits per page with the number of times each image is downloaded.
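To make the two points above concrete (deduplicating image URLs before download, and fetching with a small fixed number of connections per server), here is a minimal Java sketch. It is not JMeter's actual sampler code; the class name, example URLs, and pool size of 2 are illustrative assumptions only.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.LinkedHashSet;
    import java.util.Set;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ImageFetchSketch {

        public static void main(String[] args) throws Exception {
            // Image URLs referenced by a page. Using a Set mirrors the
            // "added to a list, duplicates are not added" behaviour, so each
            // image is only downloaded once.
            Set<String> imageUrls = new LinkedHashSet<>();
            imageUrls.add("http://example.com/logo.gif");
            imageUrls.add("http://example.com/banner.gif");
            imageUrls.add("http://example.com/logo.gif"); // ignored: already present

            // A browser fetches images concurrently but keeps only a few
            // connections open to one server (the comment cites 2 for
            // HTTP/1.1). A fixed pool of 2 worker threads approximates that.
            ExecutorService pool = Executors.newFixedThreadPool(2);
            for (String url : imageUrls) {
                pool.submit(() -> fetch(url));
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
        }

        // Download one image and report how many bytes were read; failures
        // are simply printed in this sketch.
        private static void fetch(String url) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(url).openConnection();
                try (InputStream in = conn.getInputStream()) {
                    byte[] buf = new byte[8192];
                    long total = 0;
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        total += n;
                    }
                    System.out.println(url + " -> " + total + " bytes");
                }
            } catch (Exception e) {
                System.out.println(url + " -> failed: " + e.getMessage());
            }
        }
    }

Counting how many times each URL appears in the output versus how many pages referenced it is the same comparison suggested above for verifying the behaviour in a test plan.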
