I’m looking for a practical guide – i.e. specifically NOT an academic paper, 
thanks anyway – to predicting the effect of increased (or decreased) latency on 
my users’ applications.

Specifically, I want to estimate how much improvement there will be in 
{bandwidth, application XYZ responsiveness, protocol ABC goodput, whatever} if 
I decrease the RTT between the user and the server by 10 msec, by 20 msec, or 
by 40 msec.

My googling has turned up lots of research articles discussing theoretical 
frameworks for figuring this out, but nothing concrete in the way of a 
calculator or even a rule of thumb.
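
For what it’s worth, the closest I can get on my own is a back-of-the-envelope 
calculation using the Mathis et al. approximation for loss-limited TCP 
throughput, throughput ≈ MSS * C / (RTT * sqrt(p)). Here is a minimal sketch of 
the kind of thing I mean – note the baseline RTT, MSS, and loss rate below are 
placeholders, not measurements from my network:

    # Sketch: how per-flow TCP throughput changes as RTT drops, using the
    # Mathis et al. loss-limited approximation:
    #     throughput ~= (MSS * C) / (RTT * sqrt(p)),  C ~ 1.22
    # All the constants below are placeholders for illustration only.
    from math import sqrt

    MSS_BYTES = 1460        # typical Ethernet-sized segment (placeholder)
    LOSS_RATE = 0.0001      # 0.01% packet loss (placeholder)
    C = 1.22                # constant from the Mathis et al. model

    def tcp_throughput_bps(rtt_sec: float) -> float:
        """Loss-limited TCP throughput estimate, in bits per second."""
        return (MSS_BYTES * 8 * C) / (rtt_sec * sqrt(LOSS_RATE))

    baseline_rtt = 0.060    # 60 ms baseline RTT (placeholder)
    for improvement_ms in (10, 20, 40):
        new_rtt = baseline_rtt - improvement_ms / 1000.0
        gain = tcp_throughput_bps(new_rtt) / tcp_throughput_bps(baseline_rtt) - 1
        print(f"RTT {baseline_rtt*1000:.0f} -> {new_rtt*1000:.0f} ms: "
              f"~{gain*100:.0f}% more per-flow throughput")

Of course, that only covers a single loss-limited TCP flow; it says nothing 
about application-level responsiveness, which is where I’m really stuck.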

Ultimately, this goes into MY calculator – we have the usual North American 
duopoly on last-mile consumer internet here, and I’m connected directly to only 
one of the two.  There’s a cost $X to improve connectivity so that I’m peered 
with both; how do I tell whether it will be worthwhile?

Anyone got anything at all that might help me?

Thanks in advance,
-Adam

Adam Thompson
Consultant, Infrastructure Services
100 - 135 Innovation Drive
Winnipeg, MB, R3T 6A8
(204) 977-6824 or 1-800-430-6404 (MB only)
athomp...@merlin.mb.ca
www.merlin.mb.ca
