The comcast telemarketers are pestering my wife with offers
to "upgrade" our service from many streaming megabytes per
second to many more streaming megabytes per second.  That
way, we can watch 5 internet movies at once rather than 3.

We don't watch movies on the net.  We could get by with
far less bandwidth if packet performance were better.

My bandwidth use is packets to and from my external
server/firewall.  My M.D. wife's use is interactive
televisits with patients.  In both cases, what we care
about is interactive first-packet latency and packet
rate, not stream rate.
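
Back-of-envelope, with assumed numbers (20 ms first-packet
latency, a one-kilobyte interactive request; illustration,
not measurement), here is roughly why bandwidth upgrades
don't help us:

  # Time to move one small interactive packet is
  # latency + size/bandwidth.  All numbers are assumptions.
  LATENCY_S = 0.020       # assumed 20 ms first-packet latency
  PACKET_BYTES = 1000     # assumed small request, ~one packet

  for mbps in (25, 100):  # hypothetical old plan vs. "upgrade"
      xfer_s = PACKET_BYTES * 8 / (mbps * 1e6)
      print(f"{mbps:4d} Mb/s: {(LATENCY_S + xfer_s) * 1000:.2f} ms per packet")

Quadrupling the bandwidth shaves about a quarter of a
millisecond off a 20 ms exchange; the latency term swamps
everything.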

The comcast marketdweeb told her that with the twice-
as-expensive service ("new and improved fiber AND
coax!") we could have "100 megabytes per second, and
transfer 100 packets a second!"  Probably idiot noises
from a marketing script, but what if that dismal packet
performance was actually true?
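
Taking the script literally for a moment (and assuming
1500-byte MTU frames, my number, not theirs):

  PKTS_PER_S = 100    # the claimed packet rate
  MTU_BYTES = 1500    # assumed full-size Ethernet frames
  print(f"{PKTS_PER_S * MTU_BYTES * 8 / 1e6:.1f} Mb/s")  # 1.2 Mb/s
  # versus "100 megabytes per second" = 800 Mb/s,
  # a gap of roughly a factor of 667.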

When I use a service like "internet speed test", I see
the "needle" hover near zero for about three seconds,
then gently crawl towards 101% of our contracted
bandwidth.  I used to believe the slow climb was just
the app's animation, for show, but now I suspect I am
actually watching stream startup latency, packets
bouncing through servers in Finland and Brazil, and that
the bandwidth THE WAY WE ACTUALLY USE IT is the
less-than-a-megabyte-per-second slow crawl at the
beginning.
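
One way to test that suspicion: fetch a large file and
time the first byte separately from the bulk transfer.
A minimal sketch (the URL is a placeholder; point it at
any big file on a server you trust):

  import time, urllib.request

  URL = "https://example.com/large-file.bin"  # placeholder

  t0 = time.monotonic()
  resp = urllib.request.urlopen(URL)
  first = resp.read(1)   # first byte: DNS + handshakes + latency
  t1 = time.monotonic()
  rest = resp.read()     # the remainder: bandwidth-dominated
  t2 = time.monotonic()

  print(f"time to first byte: {(t1 - t0) * 1000:.1f} ms")
  size = len(first) + len(rest)
  print(f"bulk rate: {size * 8 / (t2 - t1) / 1e6:.1f} Mb/s over {size} bytes")

If the time-to-first-byte number is large while the bulk
rate matches the contract, the needle's slow start is
latency, not animation.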

Decades ago, I designed and sold chips that went into
internet routers ... until our VC demanded that we move
from routers to ethernet chipsets, because the internet
wasn't real.  Money doesn't talk, it babbles.  So, I
understand how streaming routers can be optimized VERY
DIFFERENTLY from random-packet routers.

Perhaps there are linux tools that a small group of us can
use to characterize what our internet providers actually
provide, especially first-packet latency.  Suggestions?
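
Off the shelf, plain ping and mtr already report ICMP
round trips.  For something scriptable, here is a rough
sketch that times the TCP handshake, which is about the
closest userspace gets to first-packet latency (the host
list is a placeholder; substitute targets you care about):

  import socket, statistics, time

  HOSTS = [("example.com", 443), ("9.9.9.9", 53)]  # placeholders
  SAMPLES = 10

  for host, port in HOSTS:
      rtts = []
      for _ in range(SAMPLES):
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.settimeout(3.0)
          t0 = time.monotonic()
          try:
              # connect() returns on the SYN/SYN-ACK exchange:
              # one round trip.  The first sample for a hostname
              # also includes DNS lookup time.
              s.connect((host, port))
              rtts.append((time.monotonic() - t0) * 1000)
          except OSError:
              pass
          finally:
              s.close()
          time.sleep(0.2)
      if rtts:
          print(f"{host}:{port}  min {min(rtts):6.1f} ms  "
                f"median {statistics.median(rtts):6.1f} ms  "
                f"({len(rtts)} samples)")

Run it from a cron job for a few days and the medians will
say more about what the providers actually provide than any
speed-test needle.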

Keith

P.S. We can also move to Ziply - the former Verizon fiber
modem is still in the garage.  Is Ziply any better?

--
Keith Lofstrom          kei...@keithl.com
