Thomas Sailer wrote in a message to Mike Bilow:

 TS> I've never understood the T2 waiting business anyway. If
 TS> only AX.25 and the channel access algorithm were coupled
 TS> more tightly. 

This is true, but AX.25 doesn't really have a channel access algorithm.  Most
of the textbook models do not take into account the hidden transmitter problem,
and we have never really developed an effective method of dealing with that. 
What we run in practice tends to reduce to pure Aloha.  Seen in that light, the
importance of T2 makes more sense.
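For the record, the arithmetic behind the pure Aloha claim is easy to sketch.
Here is a minimal Python illustration (nothing AX.25-specific, just the
textbook model) of why a pure Aloha channel tops out below 19% utilization:

```python
import math

def pure_aloha_throughput(g):
    """Expected successful-frame rate for pure Aloha at offered load g.

    A frame started at time t collides with any frame started in the
    window (t - T, t + T), so with Poisson arrivals the success
    probability is exp(-2g) and throughput is g * exp(-2g).
    """
    return g * math.exp(-2 * g)

# Scan offered loads; throughput peaks at g = 0.5 with S = 1/(2e) ~= 0.184
best = max((pure_aloha_throughput(g / 100), g / 100) for g in range(1, 200))
print(best)
```

Past that peak, adding offered load actually reduces the number of frames that
get through, which is the "total chaos" failure mode in a nutshell.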

 TS> IMHO there's no need to do any waiting at all (or even use a
 TS> poll bit) in that situation. When the carrier goes off, you
 TS> can be quite sure that you are expected to send an ack, so
 TS> there's no point in waiting. This is more or less what
 TS> FlexNet does.

This is, of course, completely true of a point-to-point link, or at least of
any link generally where there are no hidden transmitters.  However, as soon as
hidden transmitters are introduced, stations can no longer defer to traffic
they cannot hear, so transmission attempts start to arrive in a classical
Poisson pattern, and the introduction of strategic delays is actually the only
thing that prevents degeneration into total chaos.
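To make the point concrete, here is a crude Python sketch (the slot counts are
made up for illustration, not taken from any AX.25 implementation) of why a
randomized T2-style delay helps when several hidden stations are all triggered
by the same carrier drop:

```python
import random

def collision_prob(n_stations, n_slots, trials=100_000, seed=1):
    """Estimate the probability that two or more stations, all triggered
    by the same carrier drop, pick the same start slot.

    Each station independently delays a random number of slots
    (0 .. n_slots - 1) before transmitting -- a crude model of a
    strategic randomized delay.  With n_slots == 1 (no delay at all)
    every trial is a collision.
    """
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        picks = [rng.randrange(n_slots) for _ in range(n_stations)]
        if len(set(picks)) < len(picks):
            collisions += 1
    return collisions / trials

print(collision_prob(2, 1))   # no randomization: every trial collides
print(collision_prob(2, 16))  # 16 random slots: roughly 1 in 16 collide
```

The FlexNet answer-immediately strategy is the n_slots == 1 row: perfect when
only one station will answer, guaranteed collision when two hidden stations
both think the channel is theirs.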

> AX.25 was simply not designed for this.  TCP was never intended to be run
> across lower layer protocols which intend to guarantee delivery, either.

 TS> Granted. But should we use this as an excuse for inferior
 TS> performance in all eternity?

No, but I think that T2 does more good than harm in practice.

 TS> We need an ARQ scheme, otherwise we'll easily get 33% packet 
 TS> loss over longer distances, which TCP can't cope with. The ARQ 
 TS> scheme will give us higher rtt variance, granted (at least over 
 TS> short radio paths). But we can fix these problems well.

Yes and no.  I agree that ARQ would be good, and FEC would help also.  Heck,
the whole physical layer has been unaccountably neglected.  Nevertheless, it is
not rtt variation which is itself the problem, but the statistical pattern into
which that variation falls.  Our existing models tend to assume that rtt will
follow a normal distribution, and we dutifully compute mean and standard
deviation for rtt.  However, if something is going on at a lower level which
makes the distribution substantially non-normal, then everything gets messed up.

I'm not really sure what sort of pattern results from ARQ, but I would assume
that it tends to look more or less bimodal: one cluster of rtt samples for
frames that get through on the first try, and another for frames that need a
link-layer retransmission.  As long as it is something predictable, we should
have no trouble adapting our rtt measurements.
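For what it's worth, the standard smoothed-rtt machinery adapts to this
without much fuss.  Here is a hedged Python sketch of a Jacobson/Karels-style
estimator (the constants are the usual RFC 6298 ones) fed a deliberately
bimodal rtt stream; the sample values are invented for illustration:

```python
def rto_estimator():
    """Jacobson/Karels smoothed-rtt estimator, RFC 6298 flavour.

    Keeps an exponentially weighted mean (srtt) and mean deviation
    (rttvar); the retransmit timeout is srtt + 4 * rttvar.  A bimodal
    rtt distribution (first-try vs. ARQ-retry samples) inflates rttvar,
    so the timeout stays conservative but predictable.
    """
    srtt = rttvar = None

    def sample(rtt):
        nonlocal srtt, rttvar
        if srtt is None:                      # first measurement
            srtt, rttvar = rtt, rtt / 2
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - rtt)
            srtt = 0.875 * srtt + 0.125 * rtt
        return srtt + 4 * rttvar

    return sample

rto = rto_estimator()
# Bimodal samples: 1.0 s when the frame gets through, 3.0 s after an ARQ retry
for r in (1.0, 1.0, 3.0, 1.0, 3.0):
    timeout = rto(r)
print(round(timeout, 3))
```

The point is that the timeout tracks above the slow mode rather than the mean,
so a retry-inflated sample does not trigger a spurious retransmission.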

 TS> Not being designed for was never a good excuse in the 
 TS> networking business. TCP/IP was never designed to be used by 
 TS> _that_ many hosts either, yet it still works 8-)

Nonsense!  If TCP/IP worked, why would anyone have invented OSI protocols?

Seriously, this whole issue is nothing new.  I would especially encourage
anyone interested in the historical details to read Mike Padlipsky's classic "A
Critique of X.25," which is on-line as RFC 874.  Although now over 16 years
old, that document is among the earliest to raise exactly this question.
 
-- Mike, N1BEE
