Mike Bilow wrote:
> but now it will be totally redundant and useless. If the higher layer
> protocols are expiring their retry timers quickly enough relative to the link
> queue depth, then the link queue will grow indefinitely until buffers fill up,
> a condition known as secondary avalanche.
We have a similar problem with AX.25 and KISS. But the solution, IMO, is
to provide more feedback about the queueing status to the upper layers,
which is what 6pack does.
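The kind of queue-status feedback meant here can be sketched as a simple back-pressure check: the link layer exposes its queue depth so the upper layer stops submitting frames (and holds its retry timers) instead of letting the queue grow without bound. This is only an illustrative model; the class and method names are hypothetical, not the actual 6pack interface.

```python
from collections import deque

class LinkQueue:
    """Toy link-layer transmit queue with a back-pressure signal."""

    def __init__(self, high_water: int = 8):
        self.frames = deque()
        self.high_water = high_water  # threshold above which we refuse frames

    def ready(self) -> bool:
        """Upper layer checks this before handing us another frame."""
        return len(self.frames) < self.high_water

    def submit(self, frame: bytes) -> bool:
        """Queue a frame, or tell the caller to back off."""
        if not self.ready():
            return False  # caller should hold its retry timer, not requeue
        self.frames.append(frame)
        return True

q = LinkQueue(high_water=2)
print(q.submit(b"frame1"), q.submit(b"frame2"), q.submit(b"frame3"))
```

The point is that a refused `submit` reaches the upper-layer protocol as feedback, so its retry timer never races ahead of the link's real drain rate.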
> There is no easy answer about which approach is better in any given situation,
> but experience has shown that moving the guarantee of delivery to the highest
> possible protocol layer will result in greatly simplified implementations and
> behavior under failure mode conditions that is more in line with expectations.
The problem is that TCP cannot distinguish packet loss due to congestion
from packet loss due to QRM etc. So it treats all packet loss as if it
were due to congestion and backs off. This means TCP stops working at
roughly more than 10% packet loss. Now if you want to work over a chain
of digipeaters where each of the radio links has a packet loss rate in
the 1-5% range, it just won't work in DG (datagram) mode with TCP doing
the retries. If you use VC (virtual circuit) mode with hop-to-hop acks,
it works very well.
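A quick calculation shows why the per-link losses add up: with independent loss on each hop, the end-to-end success probability is (1 - p)^n, so even modest per-link loss crosses TCP's usable range after a few digipeaters. The numbers below are illustrative, not from the post.

```python
def end_to_end_success(per_link_loss: float, hops: int) -> float:
    """P(packet survives the whole chain) = (1 - p)^n,
    assuming independent loss on each hop."""
    return (1.0 - per_link_loss) ** hops

for hops in (1, 3, 5, 8):
    p_ok = end_to_end_success(0.03, hops)
    print(f"{hops} hops at 3% per-link loss -> "
          f"{100 * (1 - p_ok):.1f}% end-to-end loss")
```

At 3% per link, five hops already give about 14% end-to-end loss, and eight hops over 21%, which is well past the ~10% point where TCP's congestion back-off makes the connection unusable.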
So the conclusion is that even if you use FEC on the radio links, you
should still use ARQ in most cases (except maybe audio and video) to
protect each link; otherwise the number of retransmissions that have to
travel the whole radio chain will increase dramatically with the number
of radio links in the chain.
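This growth can be made concrete with a rough model (my own illustrative sketch, assuming independent per-link loss p and perfect ACKs): with hop-by-hop ARQ each link retries locally, so the expected number of link transmissions is n / (1 - p), linear in the hop count n; with end-to-end ARQ only, a loss anywhere sends the packet back to the start, so the expected number of full-chain attempts is 1 / (1 - p)^n, and each attempt costs up to n link transmissions.

```python
def hop_by_hop_txs(p: float, n: int) -> float:
    """Expected total link transmissions with per-hop ARQ: n / (1 - p)."""
    return n / (1.0 - p)

def end_to_end_attempts(p: float, n: int) -> float:
    """Expected full-chain attempts with end-to-end ARQ only: 1 / (1-p)^n."""
    return 1.0 / (1.0 - p) ** n

for n in (2, 5, 10):
    print(f"{n} hops at 5% loss: "
          f"hop-by-hop ~{hop_by_hop_txs(0.05, n):.2f} link txs, "
          f"end-to-end ~{end_to_end_attempts(0.05, n):.2f} chain attempts "
          f"(~{n * end_to_end_attempts(0.05, n):.2f} link txs worst case)")
```

The hop-by-hop cost grows linearly with the chain length, while the end-to-end attempt count grows geometrically, which is exactly why each link should carry its own ARQ.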
Tom