On Tue, 17 Aug 2004, Systems Administration wrote:

> The problem is in the calculation of the rtt (round trip time) and
> retransmit timeout. If the rtt is 0, then it is considered not
> initialized, and the timeout is set to 2 or 3 seconds (depending on
> whether the server is considered 'local' to the client), whereas if
> the rtt is low, but non-zero, the timeout can drop as low as 0.35
> seconds. You can examine the rtt and timeout values of an rx server
> or client using the -peers switch of the rxdebug command.
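
For illustration, here is a minimal sketch of that timeout selection. This is not the actual OpenAFS rx source; the function name, constants, and the rtt-to-timeout derivation are assumptions for illustration, with only the 0.35-second floor and the 2/3-second uninitialized defaults taken from the description above:

/*
 * Sketch of the behaviour described above: an rtt of 0 means "never
 * measured", so a conservative 2- or 3-second timeout is used; once a
 * real, non-zero rtt exists, the timeout is derived from it and can
 * fall to a 0.35-second floor.
 */
#include <stdio.h>

#define RTO_FLOOR_MS          350   /* lowest timeout once rtt is known */
#define RTO_UNINIT_LOCAL_MS  2000   /* default when the server is 'local' */
#define RTO_UNINIT_REMOTE_MS 3000   /* default otherwise */

static int retransmit_timeout_ms(int rtt_ms, int peer_is_local)
{
    if (rtt_ms == 0)                /* rtt not yet initialized */
        return peer_is_local ? RTO_UNINIT_LOCAL_MS : RTO_UNINIT_REMOTE_MS;

    /* illustrative derivation: a multiple of rtt, never below the floor */
    int rto = rtt_ms * 2;
    return rto < RTO_FLOOR_MS ? RTO_FLOOR_MS : rto;
}

int main(void)
{
    printf("lan peer, rtt unmeasured: %d ms\n", retransmit_timeout_ms(0, 1));
    printf("lan peer, rtt = 1 ms:     %d ms\n", retransmit_timeout_ms(1, 1));
    return 0;
}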

Hmmm, so unless there is enough latency in the loop, the client effectively goes deaf when a frame gets dropped by the Ethernet media-to-media conversion (fiber to copper)? That is, because the rtt was indistinguishable from 0, the timeout is set so high that the server and client get out of sync?

No, that would be a serious problem. This one is only annoying.


Another interesting point: can I assume that the MTU reported by rxdebug is the TCP MTU? If so, how does the AFS binary think it can send MTUs of 5692 when the interface is declared to have an MTU of 1500? I know the client cannot handle an MTU that large, because the intermediate hops on the Ethernet connection are not gigabit capable. Could the server be trying to send a bogus packet with 5692 data octets, which is getting dropped on the floor due to an invalid MTU size?

Well, it's not the TCP MTU, but it's probably confused, and I can't just guess why.
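
For what it's worth, here is a minimal sketch (not OpenAFS code) of reading the kernel's idea of an interface MTU with the standard SIOCGIFMTU ioctl; comparing that figure against the packet size rxdebug reports would show how far apart the two views are. The interface name "eth0" is only a placeholder:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder interface */

    /* ask the kernel what MTU it has configured for this interface */
    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {
        perror("SIOCGIFMTU");
        close(fd);
        return 1;
    }
    printf("eth0 MTU: %d\n", ifr.ifr_mtu);
    close(fd);
    return 0;
}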


