JMeter measures latency from just before sending the request to just after the first chunk of the response has been received. The time therefore includes all the processing needed to assemble the request, as well as to assemble the first part of the response, which in general will be longer than one byte.
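For illustration, here is a minimal sketch of that kind of time-to-first-byte measurement over a raw socket. This is hypothetical code, not taken from the JMeter source; the host and request are placeholders:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    // Hypothetical sketch: latency measured as time-to-first-byte,
    // i.e. from just before the request is written to just after the
    // first chunk of the response is read. Not JMeter's actual code.
    public class TimeToFirstByte {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("example.org", 80)) {
                byte[] request = ("GET / HTTP/1.1\r\nHost: example.org\r\n"
                        + "Connection: close\r\n\r\n").getBytes("US-ASCII");

                long start = System.currentTimeMillis(); // just before sending
                OutputStream out = socket.getOutputStream();
                out.write(request);
                out.flush();

                InputStream in = socket.getInputStream();
                byte[] buffer = new byte[4096];
                int n = in.read(buffer); // blocks until the first response chunk arrives
                long latency = System.currentTimeMillis() - start;

                System.out.println("First read returned " + n + " bytes");
                System.out.println("Latency (time to first response): " + latency + " ms");
            }
        }
    }

Note that the first read() typically returns a whole TCP segment's worth of data rather than a single byte, which is part of why the measured time covers more than pure network transmission.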
Wireshark measures the times at which bytes are actually sent and received on the wire. The JMeter time should be closer to that experienced by a browser or other application client.

On 22/11/2007, Thierry Boullet <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> I wonder what latency really represents in the JMeter results (HTTP
> sampler). I read in previous e-mails that it was the time (in
> milliseconds) to first response.
>
> I understand that it is the difference between the time of receiving
> the first HTTP response packet and the time of sending the request.
> Is that correct?
>
> I compared the JMeter latency value with the information provided by
> the sniffer Wireshark. The two do not correspond: the JMeter latency
> is far greater than the Wireshark difference (between the time of
> receiving the first HTTP response packet and the time of sending the
> request). I know there is JMeter's processing time, but I don't think
> that explains everything.
>
> For me, the definition of latency is the transmission time on the
> network for a request or a response.
>
> Can someone give me more details about latency in JMeter?
>
> Thanks
>
> Thierry Boullet
>
> www.kereval.com

