> On Jul 18, 2018, at 7:18 PM, Johnny Billquist <b...@softjar.se> wrote:
> 
>> ...
> 
> It's probably worth pointing out that the reason I implemented that was not 
> because of hardware problems, but because of software problems. DECnet can 
> degenerate pretty badly when packets are lost. And if you shove packets fast 
> enough at the interface, the interface will (obviously) eventually run out of 
> buffers, at which point packets will be dropped.
> This is especially noticeable in DECnet/RSX at least, particularly when 
> doing file transfers over DECnet. I think I know how to improve that 
> software, but I have not had enough time to actually try fixing it.

All ARQ protocols suffer dramatically under packet loss.  The other day I was 
reading a recent paper about high-speed, long-distance TCP.  It showed a graph 
of throughput vs. packet loss rate.  I forget the exact numbers, but it was 
something like a 0.01% packet loss rate causing a 90% throughput drop.  Compare 
that with the old (1970s) ARPAnet rule of thumb that 1% packet loss means a 90% 
loss of throughput.  Both make sense: the old one was for "high speed" links 
running at 56 kbps rather than the multi-Gbps of current links, and the faster 
the link, the more in-flight data a single loss-and-recovery cycle costs.
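
For a rough feel of why the threshold scales that way, the well-known Mathis 
et al. approximation bounds steady-state TCP throughput at roughly 
MSS / (RTT * sqrt(p)).  A back-of-the-envelope sketch in Python (the MSS, RTT, 
and constant below are illustrative assumptions, not numbers from either 
study):

    # Mathis et al. approximation: throughput <= (MSS / RTT) * (C / sqrt(p))
    # All parameter values here are assumed purely for illustration.
    from math import sqrt

    MSS = 1460 * 8   # typical Ethernet-sized segment, in bits
    RTT = 0.1        # 100 ms round trip, e.g. a long-distance path
    C   = 1.0        # constant of order 1

    for p in (1e-2, 1e-4, 1e-6):
        ceiling = MSS / RTT * C / sqrt(p)
        print(f"loss rate {p:.0e}: throughput ceiling ~{ceiling / 1e6:.1f} Mb/s")

Even at 0.01% loss that caps a multi-Gbps path at roughly tens of Mb/s, which 
is consistent with the sort of drop the paper reported.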

The other issue with nontrivial packet loss is that in any protocol whose 
congestion control is triggered by packet loss (recent versions of DECnet 
included), that machinery will severely throttle the link under such 
conditions.
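
To make that throttling concrete, here is a minimal additive-increase, 
multiplicative-decrease loop in the TCP Reno style.  I'm not claiming this is 
DECnet's actual algorithm, only the general shape of loss-triggered backoff; 
the loss rate and window sizes are assumptions:

    # Toy loss-triggered AIMD congestion window (Reno-style sketch).
    import random

    random.seed(1)
    cwnd = 64.0                          # congestion window, in segments
    for rtt in range(200):
        if random.random() < 0.01:       # assume a 1% loss event per RTT
            cwnd = max(cwnd / 2.0, 1.0)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                  # additive increase otherwise
    print(f"window after 200 RTTs: {cwnd:.1f} segments")

With loss events arriving every hundred RTTs or so, the window spends most of 
its time recovering from the halvings, which is exactly the throttling effect 
described above.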

So yes, anything you can do in the infrastructure to keep the packet loss well 
under 1% is going to be very helpful indeed.

        paul
