On Thu, 7 Jun 2012, Bill Bogstad wrote:

On Thu, Jun 7, 2012 at 9:19 AM, Edward Ned Harvey <[email protected]> wrote:
From: [email protected] [mailto:[email protected]] On
Behalf Of Daniel Feenberg

We have maxed out our WAN link, and users are complaining of slow access
to websites and X11 interaction. Yet when I ping sites on the
internet I see no lost packets, and ping times for relatively close hosts
are consistently 20 - 30 milliseconds. Large packets are about the same.
Ping times to our ISP's router at their POP are 2-4 milliseconds. I see no
dropped pings to real hosts. Sometimes the ISP router drops a ping but I
understand that may be due to ICMP limiting.

I have difficulty reconciling these facts. If pings are fast and packets
are not dropped, why do users see problems? I can confirm things seem
slow. Is this the dreaded "buffer bloat" problem so recently hyped? Is
there anything I can do here to alleviate it while waiting for more
bandwidth?

You should never drop pings, or any other traffic.  If you are dropping any
type of traffic, you have a much more serious problem.  So looking for
dropped packets is not a good test.  Er ... It's something you should test,
but you should always expect 0% loss, even on the most heavily overloaded
connection.

I'm going to have to disagree with this.  A congested link SHOULD
drop TCP packets so that TCP congestion control knows to slow down.
It's exactly this thinking that leads to deploying equipment and
software that creates buffer bloat.  Now if you do see consistent
loss you should consider upgrading your link bandwidth, but that
isn't always an option, and if you don't drop packets you end up with
retransmitted copies of the same TCP packet sitting in equipment
buffers, which doesn't help anyone.  You not only get lousy latency
(due to long queuing times in those buffers), but you also get
lower effective bandwidth, since the retransmitted packets waste
bandwidth exactly when you need it most (under overload conditions).
Admittedly ping/ICMP doesn't do congestion control, so it won't slow
down its transmissions when packets get dropped.  However, I would
assert that it is easier, cheaper, and a better indication of network
conditions to (by default) treat all packets the same, and I would hope
that network equipment vendors do so.
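The latency cost of those deep buffers is easy to put in numbers: a full
FIFO buffer adds delay equal to its size divided by the link rate.  A
minimal back-of-envelope sketch (the buffer size and link speed below are
illustrative, not measurements of anyone's actual link):

```python
def queuing_delay_ms(buffer_bytes, link_bps):
    """Worst-case added latency (ms) from a full FIFO buffer
    draining onto a link of the given speed in bits/second."""
    return buffer_bytes * 8 / link_bps * 1000.0

# e.g. a hypothetical 1 MB buffer ahead of a 10 Mbit/s WAN link:
delay = queuing_delay_ms(1_000_000, 10_000_000)
print(round(delay))  # 800 ms of added latency when the buffer is full
```

That is the bufferbloat effect in a nutshell: nothing is dropped, pings
still succeed, but every packet sits behind most of a second of queue.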

BTW, Dan, if your traceroute command supports the "-T" option you
should try it out.  It uses TCP SYN packets rather than ICMP ECHO
(ping) packets, which might help you determine if different packet
types are being treated differently.  You also might want to set up
some kind of permanent monitoring system so you can be alerted to
problems before users start complaining.
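What a TCP SYN probe measures is essentially the three-way-handshake time,
which you can also sample from a script for the kind of ongoing monitoring
suggested above.  A rough sketch (the demo connects to a local listener so
it is self-contained; in practice you would point it at a remote host on
port 80 or 443):

```python
import socket
import threading
import time

def tcp_connect_ms(host, port, timeout=2.0):
    """Time (ms) for a TCP connection to complete, i.e. roughly
    the SYN -> SYN/ACK round trip plus local overhead."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

# Demo against a throwaway local listener, purely for illustration.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=srv.accept, daemon=True).start()

ms = tcp_connect_ms("127.0.0.1", port)
print(f"connect time: {ms:.3f} ms")
srv.close()
```

Sampling this every minute or so and alerting on a sustained rise would
catch a congested link well before users start complaining.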

I get an interesting result from traceroute -T to the server I had been
doing most of my tests with. We are a customer of Paetec, and to avoid
congested peering I have been pinging and tracerouting to ntp.paetec.net.
The fastest time I have ever seen was 14 ms, which seemed OK to me.
However, with the -T option the time drops to less than a millisecond.
Is this anycasting in action?

traceroute to ntp.paetec.net (64.80.254.1), 30 hops max, 60 byte packets
 1  nbergw.nber.org (66.251.73.254)  0.729 ms  0.708 ms  0.696 ms
 2  ROCHNY01H07CR01.paetec.net (64.80.254.1)  0.261 ms  0.160 ms  0.183 ms

Dan Feenberg
NBER


Bill Bogstad
_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
