Jeremy Nix wrote:

Okay, let me pose another (slightly different) question. Say that
instance (A) and (B) are separate institutions, independent from each
other. Same scenario as before. How could instance (B) (the responding
instance) be able to measure latency in instance (A)? The reason I ask
is related to an issue that has been reported to me, but I am unable to
understand how the latency was measured.



Is it HTTP or HTTPS? If HTTPS, things get more kludgy, but if it is HTTP, there is a fairly simple way:

1. start a network analyzer (such as Ethereal) on Tomcat B network.
2. Capture packets until you see a client request that needs a redirection (I'm pretty sure you have one in your app)
3. You will see some time later a client request for the redirected resource.
4. Measure the time since the last bit of the redirection until the first bit of the redirected request.
5. Divide by 2.
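The same idea can be sketched at the application level instead of with a packet capture: have the server note when it finishes sending the redirect and when the redirected request arrives, then halve the gap. This is a minimal, self-contained Python sketch (a stand-in toy server, not Tomcat; the paths /start and /target are invented for the example), so the timestamps are taken in the handler rather than on the wire:

```python
import http.server
import threading
import time
import urllib.request

times = {}

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/start":
            # Send a 302 redirect and note when the response has gone out.
            self.send_response(302)
            self.send_header("Location", "/target")
            self.end_headers()
            self.wfile.flush()
            times["redirect_sent"] = time.monotonic()
        else:
            # Approximates "first bit of the redirected request":
            # we only see it after the request line is parsed.
            times["request_seen"] = time.monotonic()
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 302 automatically, playing the client's part.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/start").read()
server.shutdown()

# Gap between redirect sent and redirected request seen, halved.
one_way = (times["request_seen"] - times["redirect_sent"]) / 2
print(f"estimated one-way latency: {one_way * 1000:.3f} ms")
```

On loopback this prints a fraction of a millisecond; against a remote client it approximates the one-way latency, under the same symmetry assumption as the capture method.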

With HTTPS, you won't see anything useful on the network analyzer, unless you know the usage pattern very well. If the usage pattern is known and constant, you can do the same, by "imagining" what you see.

Or, you may even go down a bit more, and this works for both HTTP and HTTPS, and it may be even simpler (no need to have a redirection):

1. Start a network analyzer on TC B network
2. Capture packets until you see (it should not take long ;-) ) TCP packets like these:
Packet 1: A -> B Flags: SYN
Packet 2: B -> A Flags: SYN ACK
Packet 3: A -> B Flags: ACK
3. Measure difference between packet 2 and packet 3.
4. Divide by 2.
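If you can't run a packet capture, the client side can approximate the same handshake measurement: connect() returns once the SYN/ACK arrives, so the time it takes is roughly one round trip. A minimal Python sketch (the throwaway local listener stands in for Tomcat A; in practice you would point create_connection at the real host and port):

```python
import socket
import time

# Stand-in listener for "A" (assumption: replace with the real host/port).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

# connect() completes after SYN -> SYN/ACK, i.e. about one round trip.
t0 = time.monotonic()
s = socket.create_connection((host, port))
rtt = time.monotonic() - t0
s.close()
srv.close()

print(f"estimated one-way latency: {rtt / 2 * 1000:.3f} ms")
```

Note this measures from A's side (packet 1 to packet 2) rather than B's (packet 2 to packet 3), but under the symmetry assumption both halve to the same one-way estimate.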

Of course both procedures assume that network latency is symmetrical.

Hope that helps.


Antonio Fiol
