> In situations where connections normally average
> 1000 ms or less but spike to 3000 ms, it is usually the safest and lesser
> evil to lengthen the transmission interval: this reduces
> screen-to-screen lag by reducing the number of bytes that
> "pile up" during massive congestion.

Also, latency spikes are often temporary (seconds to minutes), which
means the algorithm would quickly return the interval to 1 second.
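To make the adapt-and-recover behavior concrete, here is a minimal
sketch in Python. The function name, the doubling/halving policy, and
the specific thresholds (2000 ms to back off, 1000 ms to recover,
3000 ms cap) are illustrative assumptions on my part, not values from
the XEP-0301 spec:

```python
# Hypothetical adaptive transmission interval for XEP-0301
# real-time text. All names and thresholds are illustrative
# assumptions, not taken from the specification.

DEFAULT_INTERVAL_MS = 1000   # normal transmission interval
MAX_INTERVAL_MS = 3000       # cap during heavy congestion

def next_interval(current_ms: int, measured_latency_ms: int) -> int:
    """Lengthen the interval when acks slow down; shrink it back
    toward the 1000 ms default once the latency spike passes."""
    if measured_latency_ms > 2000:
        # Congestion: fewer, larger packets mean fewer bytes pile up.
        return min(current_ms * 2, MAX_INTERVAL_MS)
    if measured_latency_ms < 1000:
        # Spike over: quickly return toward the 1-second default.
        return max(current_ms // 2, DEFAULT_INTERVAL_MS)
    return current_ms
```

Because the interval halves on every fast ack, a temporary spike only
costs a few transmission cycles before the sender is back at 1000 ms.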

Remember that GPRS is a shared resource and does not provide a
constant data rate, because of fluctuations in both bandwidth
availability and reception.  For example: one second you might be
allowed to transmit 5,000 bytes (~500 ms TCP/IP ack); for the next few
seconds you may be able to transmit only 200 or 300 bytes (>2000 ms
TCP/IP ack); the second after that, you are stalled for a couple of
seconds with no data throughput; over the next couple of seconds you
manage to push out another ~2,000 bytes (~750 ms TCP/IP ack); and so
on.  GPRS is very bursty/fluctuating like that, especially when
reception is poor.

In these specific cases, temporarily longer transmission intervals for
XEP-0301 result in less screen-to-screen lag, because fewer packets
are sent and therefore fewer bytes are queued behind the congestion.
Also, if many users are using the airwaves at once, everything slows
down (like somebody multitasking telnet/ssh and FTP over a dial-up
connection: the telnet/ssh session becomes high-lag).

Again, these are a minority of cases, and this is chiefly applicable
to mission-critical applications that must also operate well on
narrowband systems.  The simple approach is to stay fixed at a
1000 ms interval, without monitoring latency (i.e., Section 10.2 of
the XEP-0301 spec is not required).

Mark Rejhon
