Well, setting TCP_NODELAY in telnet certainly improved things and got rid
of the 200 ms delay between packets, but I still have a problem in that
the packets are still going:
DATA1 -> ACK1 -> DATA2 -> ACK2 ...
only now the delay between DATA segments is just the circuit's round-trip
latency, which is still occasionally causing a problem at the far end.
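For reference, the option I set in telnet is a per-socket flag; a minimal sketch of the same thing in Python (my own illustration, obviously not telnet's actual source):

```python
import socket

# Disable Nagle's algorithm on a socket so small writes (e.g. single
# keystrokes) go out immediately instead of being queued until the
# previous segment is ACKed.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # non-zero
s.close()
```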
My understanding of TCP is that the sender can keep multiple
unacknowledged segments in flight, up to the receiver's window size, so
that in effect I could have:
DATA1 -> DATA2 -> ACK1 -> ACK2 ...
So why is this not happening, or am I misunderstanding here?
Scott Howard (unfortunately I have lost his excellent explanation) spoke
about delayed ACKs. Is this my problem?
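If delayed ACKs do turn out to be the culprit, Linux has a per-socket knob for it; a hedged sketch (TCP_QUICKACK is Linux-specific, and this is my own guess at a possible mitigation, not something from the thread):

```python
import socket

# Ask the kernel not to delay ACKs on this socket. The flag is
# Linux-only (hence the getattr guard) and the kernel can silently
# re-enable delayed ACKs, so it is typically re-armed after each recv().
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
quickack = getattr(socket, "TCP_QUICKACK", None)  # absent on non-Linux
if quickack is not None:
    s.setsockopt(socket.IPPROTO_TCP, quickack, 1)
s.close()
```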
An unfortunate wrinkle on Linux is that, although a single
cursor/function key stroke generates multiple characters, Linux
occasionally and (seemingly) randomly chooses to send those characters in
separate packets. I know that Ken Yap said that all TCP guarantees is a
byte stream, with no certainty about where that stream will be split, but
it seems a bit extreme for multiple characters, which must be generated
within microseconds of each other after a typical operator keystroke
pause, to be split up like this.
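A loopback sketch of the distinction I mean (my own illustration, assuming the sending application hands the whole key sequence to the kernel in one write; boundaries are still never guaranteed, as Ken said, so the receiver reads in a loop):

```python
import socket

# Emitting a multi-byte key sequence with a single write gives TCP the
# chance to send it in one segment, whereas per-byte writes invite
# per-byte segments (especially with TCP_NODELAY set).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
conn, _ = srv.accept()

seq = b"\x1b[A"            # cursor-up: ESC [ A, three bytes
cli.sendall(seq)           # one write for the whole sequence

buf = b""
while len(buf) < len(seq): # TCP is a stream: chunking is not guaranteed
    buf += conn.recv(16)
print(buf == seq)          # -> True

cli.close(); conn.close(); srv.close()
```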
The annoying embarrassment is that M$ telnet and Digi portservers appear
to be immune to this problem, in that they never split the characters
generated from a single key stroke, at least as far as I can determine.
I think I might see what happens from a CLI rather than from xterm, just
in case the problem lies with the latter.
--
Howard.
______________________________________________________
LANNet Computing Associates <http://www.lannet.com.au>
--
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug