Re: Product for heat containment per rack unit?

2020-08-13 Thread Adrian Minta

https://objects.eanixter.com/PD317813.PDF

On 8/13/20 9:26 PM, David Hubbard wrote:


Curious if anyone has knowledge of a vendor / product designed to make 
it possible to use back-to-front cooled equipment in racks that need 
to be ‘sealed’ for heat containment reasons?  I’d envision this 
looking like some kind of adjustable depth sleeve, to get the cold air 
to the equipment, and perhaps a brush strip opening to allow power 
cables in?


Thanks!


--
Best regards,
Adrian Minta




Product for heat containment per rack unit?

2020-08-13 Thread David Hubbard
Curious if anyone has knowledge of a vendor / product designed to make it 
possible to use back-to-front cooled equipment in racks that need to be 
‘sealed’ for heat containment reasons?  I’d envision this looking like some 
kind of adjustable depth sleeve, to get the cold air to the equipment, and 
perhaps a brush strip opening to allow power cables in?

Thanks!


Re: Bottlenecks and link upgrades

2020-08-13 Thread William Herrin
On Wed, Aug 12, 2020 at 12:33 AM Hank Nussbacher  wrote:
> At what point do commercial ISPs upgrade links in their backbone as well as 
> peering and transit links that are congested?  At 80% capacity?  90%?  95%?

Hi Hank,

As others have noted, the answer is rarely that simple.

First, what is your consumption? Usually the 90th or 95th percentile:
after all, 100% utilization between 9 and 5 is 100%, not 33%, while 100%
for two minutes is not 100%. It gets more complicated if any kind of
QoS is in play, because capacity-wise QoS essentially gives you not a
single fixed-speed line but many interdependent variable-speed lines.
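
To make that concrete, here is a minimal sketch of the percentile
arithmetic (Python, synthetic numbers; not any particular NMS's method):

# 95th percentile of 5-minute utilization samples, the usual
# "consumption" figure. One day = 288 samples.
def p95(samples):
    ordered = sorted(samples)
    # Drop the top 5% of samples; the highest survivor is P95.
    return ordered[max(0, int(round(0.95 * len(ordered))) - 1)]

day = [300] * 287 + [950]  # Mbps; one busy 5-minute sample in a day
print(p95(day))            # -> 300: P95 ignores the short burst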

Next, capacity is not the only question. Here are some of the other factors:

1) A residential customer on the cheapest plan does not merit as clean
a channel as a high-paying business customer you'd like to keep
milking.

2) Upgrades can take months of planning so the capacity now is beside
the point. You'll use your best-guess projection for the capacity at
the time an upgrade can be complete.

3) Some upgrades tend to be significantly more expensive than others.
Lit service to dark fiber, for example. It's pretty ordinary to run
closer to the limit before making an expensive upgrade than a modest
upgrade.

4) A dirty link merits replacement sooner than a clean one. If the
higher-capacity service also clears up packet loss, you'll want to
trigger the decision at a lower consumption threshold.

5) Switching a single path to two paths is more valuable than
switching two paths to three. It has priority at a lower level of
consumption.

Regards,
Bill Herrin

-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Bottlenecks and link upgrades

2020-08-13 Thread Etienne-Victor Depasquale
>
> There is rarely a one-size-fits-all answer when it comes to these
> things.
>

Absolutely true: every application has characteristic QoS parameters.

Unfortunately, it seems that 5-minute averages of data rates through links
are the one-size-fits-all answer ... which doesn't fit all.

Etienne

On Thu, Aug 13, 2020 at 5:37 PM Tom Beecher  wrote:

>> Wouldn't it be better to measure the basic performance like packet drop
>> rates and queue sizes?
>>
>
> Those values should be a standard part of monitoring and data collection,
> but whether they happen to MATTER in a given situation very much depends.
>
> The traffic profile traversing the link may be such that the observed drop
> % and buffer depths are acceptable for that traffic, and there is no need
> for further tuning or changes. In other scenarios it may not be, in which
> case either network or application adjustments are warranted.
>
> There is rarely a one-size-fits-all answer when it comes to these things.
>
>
> On Thu, Aug 13, 2020 at 6:25 AM Olav Kvittem via NANOG 
> wrote:
>
>>
>> On 12.08.2020 09:31, Hank Nussbacher wrote:
>>
>> At what point do commercial ISPs upgrade links in their backbone as well
>> as peering and transit links that are congested?  At 80% capacity?  90%?
>> 95%?
>>
>>
>> Hi,
>>
>>
>> Wouldn't it be better to measure the basic performance like packet drop
>> rates and queue sizes?
>>
>> These days live video is needed and these parameters are essential to the
>> quality.
>>
>> Queues are building up in milliseconds and people are averaging over
>> minutes to estimate quality.
>>
>>
>> If you are measuring queue delay with high-frequency one-way-delay
>> measurements, you would then be able to advise better on what the
>> consequences of a highly loaded link are.
>>
>>
>> We are running a research project on end-to-end quality and the enclosed
>> image is yesterday's report on queuesize(h_ddelay) in ms. It shows stats
>> on delays between some peers.
>>
>> I would have looked at the trends on the involved links to see if an
>> upgrade is necessary - 421 ms might be too much if it happens often.
>>
>>
>> Best regards
>>
>>
>>   Olav Kvittem
>>
>>
>>
>> Thanks,
>> Hank
>>
>>
>> Caveat: The views expressed above are solely my own and do not express
>> the views or opinions of my employer
>>
>>

-- 
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale


Re: Bottlenecks and link upgrades

2020-08-13 Thread Tom Beecher
It is possible to gather a lot of information about buffers and queues, at
least with the vendors we work with. That can be very helpful in a lot of
ways. :)

On Thu, Aug 13, 2020 at 9:21 AM Baldur Norddahl 
wrote:

> Is it possible, and is anyone monitoring, metrics such as max queue
> length in 5-minute intervals? It might be a better metric than average
> load in 5-minute intervals.
>
> Regards
>
> Baldur
>


Re: Bottlenecks and link upgrades

2020-08-13 Thread Tom Beecher
>
> Wouldn't it be better to measure the basic performance like packet drop
> rates and queue sizes?
>

Those values should be a standard part of monitoring and data collection,
but whether they happen to MATTER in a given situation very much depends.

The traffic profile traversing the link may be such that the observed drop
% and buffer depths are acceptable for that traffic, and there is no need
for further tuning or changes. In other scenarios it may not be, in which
case either network or application adjustments are warranted.

There is rarely a one-size-fits-all answer when it comes to these things.


On Thu, Aug 13, 2020 at 6:25 AM Olav Kvittem via NANOG 
wrote:

>
> On 12.08.2020 09:31, Hank Nussbacher wrote:
>
> At what point do commercial ISPs upgrade links in their backbone as well
> as peering and transit links that are congested?  At 80% capacity?  90%?
> 95%?
>
>
> Hi,
>
>
> Wouldn't it be better to measure the basic performance like packet drop
> rates and queue sizes?
>
> These days live video is needed and these parameters are essential to the
> quality.
>
> Queues are building up in milliseconds and people are averaging over
> minutes to estimate quality.
>
>
> If you are measuring queue delay with high-frequency one-way-delay
> measurements, you would then be able to advise better on what the
> consequences of a highly loaded link are.
>
>
> We are running a research project on end-to-end quality and the enclosed
> image is yesterday's report on queuesize(h_ddelay) in ms. It shows stats
> on delays between some peers.
>
> I would have looked at the trends on the involved links to see if an
> upgrade is necessary - 421 ms might be too much if it happens often.
>
>
> Best regards
>
>
>   Olav Kvittem
>
>
>
> Thanks,
> Hank
>
>
> Caveat: The views expressed above are solely my own and do not express the
> views or opinions of my employer
>
>


Re: Bottlenecks and link upgrades

2020-08-13 Thread Baldur Norddahl
I expect my hardware does not have such a metric, but maybe it should.
Max queue length tells us how full the link is with respect to microbursts.
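
As a sketch of why (assumed per-second readings, purely illustrative):

# One 5-minute interval of per-second rate readings on a 10G port,
# with a single 1-second burst at line rate.
readings_mbps = [100] * 299 + [10_000]

avg = sum(readings_mbps) / len(readings_mbps)
print(f"5-min average: {avg:.0f} Mbps")             # ~133 Mbps, looks idle
print(f"5-min max:     {max(readings_mbps)} Mbps")  # 10000 Mbps microburst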


On Thu, Aug 13, 2020 at 15:28, Mike Hammett wrote:

> I suppose it would depend on whether your hardware has an OID for what
> you want to monitor.
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions
> Midwest Internet Exchange
> The Brothers WISP
>
> --
> From: "Baldur Norddahl"
> To: nanog@nanog.org
> Sent: Thursday, August 13, 2020 8:20:26 AM
> Subject: Re: Bottlenecks and link upgrades
>
> Is it possible, and is anyone monitoring, metrics such as max queue
> length in 5-minute intervals? It might be a better metric than average
> load in 5-minute intervals.
>
> Regards
>
> Baldur
>
>


Re: Bottlenecks and link upgrades

2020-08-13 Thread Mike Hammett
I suppose it would depend on whether your hardware has an OID for what
you want to monitor.




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 

- Original Message -

From: "Baldur Norddahl"  
To: nanog@nanog.org 
Sent: Thursday, August 13, 2020 8:20:26 AM 
Subject: Re: Bottlenecks and link upgrades 




Is it possible, and is anyone monitoring, metrics such as max queue length
in 5-minute intervals? It might be a better metric than average load in
5-minute intervals.


Regards 


Baldur 



Re: Bottlenecks and link upgrades

2020-08-13 Thread Baldur Norddahl
Is it possible, and is anyone monitoring, metrics such as max queue
length in 5-minute intervals? It might be a better metric than average load
in 5-minute intervals.

Regards

Baldur


Re: Bottlenecks and link upgrades

2020-08-13 Thread Mark Tinka



On 13/Aug/20 13:44, Olav Kvittem wrote:

> sure, but I guess the loss rate depends on the nature of the traffic.

Packet loss is packet loss.

Some applications are more sensitive to it (live video, live voice, for
example), while others are less so. However, packet loss always
manifests badly if left unchecked.


>> I guess that having more reports would support the judgements better.

For sure, yes. Any decent NMS can provide a number of data points so you
aren't shooting in the dark.


>>
>> A basic question is: what is the effect on the perceived quality of the
>> customers?

Depends on the application.

Gamers tend to complain the most, so that's a great indicator.

Some customers that think bandwidth solves all problems will perceive
their inability to attain their advertised contract as a problem, if
packet loss is in the way.

Generally, other bad things, including unruly human beings :-).


>>
>> And the relation between that and the 5-minute load is not known to me.

For troubleshooting, being able to have a tighter resolution is more
important. 5-minute averages are for day-to-day operations, and
long-term planning.


>>
>> Actually one good indicator of the congestion loss rate is of course
>> the SNMP OutputDiscards.
>>
>>
>> Curves for queueing delay, link load and discard rate are surprisingly
>> different.

Yes, that then gets into the guts of the router hardware, and its design.

In such cases, your 100Gbps link peaks and causes packet loss because,
for example, the forwarding chip behind it is only good for 60Gbps.

Mark.



Re: Bottlenecks and link upgrades

2020-08-13 Thread Olav Kvittem via NANOG
Hi Mark,


Just comments on your points below.

On 13.08.2020 12:31, Mark Tinka wrote:
>
> On 13/Aug/20 12:23, Olav Kvittem via NANOG wrote:
>
>> Wouldn't it be better to measure the basic performance like packet
>> drop rates and queue sizes?
>>
>> These days live video is needed and these parameters are essential to
>> the quality.
>>
>> Queues are building up in milliseconds and people are averaging over
>> minutes to estimate quality.
>>
>>
>> If you are measuring queue delay with high-frequency one-way-delay
>> measurements, you would then be able to advise better on what the
>> consequences of a highly loaded link are.
>>
>>
>> We are running a research project on end-to-end quality and the
>> enclosed image is yesterday's report on queuesize(h_ddelay) in ms. It
>> shows stats on delays between some peers.
>>
>> I would have looked at the trends on the involved links to see if an
>> upgrade is necessary - 421 ms might be too much if it happens often.
>>
> I'm confident everyone (even the cheapest CFO) knows the consequences of
> congesting a link and choosing not to upgrade it.
>
> Optical issues, dirty patch cords, faulty line cards, wrong
> configurations will most likely lead to packet loss. Link congestion
> due to insufficient bandwidth will most certainly lead to packet loss.
sure, but I guess the loss rate depends on the nature of the traffic.
>
> It's great to monitor packet loss, latency, pps, etc. But packet loss
> at 10% link utilization is not a foreign occurrence. No amount of
> bandwidth upgrades will fix that.


I guess that having more reports would support the judgements better.

A basic question is: what is the effect on the perceived quality of the
customers?

And the relation between that and the 5-minute load is not known to me.

Actually one good indicator of the congestion loss rate is of course
the SNMP OutputDiscards.
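
A minimal way to turn that counter into a rate (a sketch only: the host,
community string and ifIndex below are placeholders, and it assumes the
net-snmp command-line tools are installed):

import re
import subprocess
import time

def out_discards(host, community, ifindex):
    # IF-MIB::ifOutDiscards is the standard output-discard counter.
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host,
         f"IF-MIB::ifOutDiscards.{ifindex}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(re.search(r"Counter32:\s*(\d+)", out).group(1))

a = out_discards("router1.example.net", "public", 5)
time.sleep(300)  # one 5-minute interval
b = out_discards("router1.example.net", "public", 5)
print(f"{(b - a) / 300:.2f} discards/second")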


Curves for queueing delay, link load and discard rate are surprisingly
different.


regards

 Olav



>
> Mark.




Re: Bottlenecks and link upgrades

2020-08-13 Thread Mark Tinka



On 13/Aug/20 13:00, Nick Hilliard wrote:

>
> you could easily have 10% utilization and see packet loss due to
> insufficient bandwidth if you have egress << ingress and
> proportionally low buffering, e.g. UDP or iSCSI from a 40G/100G port
> with egress to a low-buffer 1G port.
>
> This sort of thing is less likely in the imix world, but it can easily
> happen with high capacity CDN nodes injecting content where the
> receiving port is small and subject to bursty traffic.

Indeed.

The smaller the capacity gets toward egress, the closer you are getting
to an end-user, in most cases.

End-user link upgrades will always be the weakest link in the chain, as
the incentive is more on their side than on yours, their provider. Your
final egress port buffer sizing notwithstanding, of course.

Mark.


Re: Bottlenecks and link upgrades

2020-08-13 Thread Nick Hilliard

Mark Tinka wrote on 13/08/2020 11:31:

It's great to monitor packet loss, latency, pps, etc. But packet loss
at 10% link utilization is not a foreign occurrence. No amount of
bandwidth upgrades will fix that.


you could easily have 10% utilization and see packet loss due to
insufficient bandwidth if you have egress << ingress and proportionally
low buffering, e.g. UDP or iSCSI from a 40G/100G port with egress to a
low-buffer 1G port.


This sort of thing is less likely in the imix world, but it can easily 
happen with high capacity CDN nodes injecting content where the 
receiving port is small and subject to bursty traffic.
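
Back-of-the-envelope, with assumed figures (the 12 MB buffer is
illustrative, not any particular ASIC):

ingress_bps = 40e9           # traffic arriving from the 40G side
egress_bps = 1e9             # draining out of the 1G port
buffer_bytes = 12e6          # assumed egress buffer allocation

fill_rate = ingress_bps - egress_bps            # net queue growth, bits/s
ms_to_drop = buffer_bytes * 8 / fill_rate * 1e3
print(f"{ms_to_drop:.1f} ms of burst fills the buffer")  # ~2.5 ms

So a burst a few milliseconds long taildrops while the 5-minute average
still shows the link nearly idle.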


Nick


Re: Bottlenecks and link upgrades

2020-08-13 Thread Mark Tinka



On 13/Aug/20 12:23, Olav Kvittem via NANOG wrote:

> Wouldn't it be better to measure the basic performance like packet
> drop rates and queue sizes?
>
> These days live video is needed and these parameters are essential to
> the quality.
>
> Queues are building up in milliseconds and people are averaging over
> minutes to estimate quality.
>
>
> If you are measuring queue delay with high-frequency one-way-delay
> measurements, you would then be able to advise better on what the
> consequences of a highly loaded link are.
>
>
> We are running a research project on end-to-end quality and the
> enclosed image is yesterday's report on queuesize(h_ddelay) in ms. It
> shows stats on delays between some peers.
>
> I would have looked at the trends on the involved links to see if an
> upgrade is necessary - 421 ms might be too much if it happens often.
>

I'm confident everyone (even the cheapest CFO) knows the consequences of
congesting a link and choosing not to upgrade it.

Optical issues, dirty patch cords, faulty line cards, wrong
configurations will most likely lead to packet loss. Link congestion
due to insufficient bandwidth will most certainly lead to packet loss.

It's great to monitor packet loss, latency, pps, etc. But packet loss
at 10% link utilization is not a foreign occurrence. No amount of
bandwidth upgrades will fix that.

Mark.


Re: Has virtualization become obsolete in 5G?

2020-08-13 Thread Mark Tinka



On 12/Aug/20 19:10, adamv0...@netconsultings.com wrote:

> Fair enough, but you actually haven't answered my question about why you 
> think that VNFs such as vTMS can not be implemented in a horizontal scaling 
> model? 
> In my opinion any NF virtual or physical can be horizontally scaled. 

The limitation is the VM i/o with the metal. Trying to shift 100Gbps of
DoS traffic across smaller VNFs running on Intel CPUs is going to
require quite a sizeable investment, and plenty of gymnastics in how you
route traffic to and through them, vs. taking that cash and spending it
on just one or two purpose-built platforms that aren't scrubbing traffic
in general-purpose CPUs.

Needless to say, the ratio between the dirty traffic entering the system
and the clean traffic coming out is often not 1:1, from a licensing
standpoint.

It's not unlike when we ran the numbers to see whether a VM running
CSR1000v on a server connected to a dumb, cheap Layer 2 switch was
cheaper than just buying an ASR920. The ASR920, even with the full
license, was cheaper. Server + VMware license fees + considerations for
NIC throughput just made it massively costly at scale.


> Right, and of these 3 you mentioned, what is it that you'd say operators are 
> waiting for to get standardized, in order for them to start implementing 
> network services orchestration?

You miss my point. The existence of these data models doesn't mean that
operators cannot automate without them.

There are plenty of operators automating their procedures with, and
without those open-based models. My point was if we are spending a lot
of time trying to agree on these data models, so that Cisco can sell me
their NSO, Juniper their Contrail, Ciena their Blue Planet, NEC their
ProgrammableFlow or Nokia their Nuage - while several operators are
deciding what automation means to them without trying to be boxed in
these off-the-shelf solutions that promise vendor-agnostic integration -
we may just blow another 10 years.



> Agreed, all I'm trying to understand is what makes you claim things like: 
> progress is slow, or there's a lack of standardization, or operators need to 
> wait till things get standardized in order to start doing network service 
> orchestration... 
> I'm asking cause I just don't see that. My personal experience is quite 
> different to what you're claiming. 
>
> Yes the landscape is quite diverse ranging from fire and forget CLI scrapers 
> (Puppet, Chef, Ansible, SaltStack) through open network service orchestration 
> frameworks all the way to a range of commercial products for network service 
> orchestration, but the point is options are there and one can start today, no 
> need to wait for anything to get standardized or things to settle.  

Don't get me wrong - if NSO, Blue Planet, Nuage and all the rest are
good for you, go for it.

My concern is most engineers and commercial teams are confused about the
best way forward because the industry keeps going back and forth on what
the appropriate answer is, or worse, could be, or even more scary, is
likely to be. In the end, either nothing is done, or costly mistakes happen.

Only a handful of folk have the time, energy and skills to dig into the
minutiae and follow the technical community on defining solutions at a
very low level. Everybody else just wants to know if it will work and
how much it will cost.

Meanwhile, homegrown automation solutions that do not follow any
standard continue to be seen as a "stop-gap", not realizing that,
perhaps, what works for me now is what works for me, period.

I'm not saying operators aren't automating. I'm saying my automating is
not your automating. As long as we are both happy with the solutions we
have settled on for automating, despite them not being the same or
following a similar standard, what's wrong with that? There are other
pressing matters that need our attention.

Mark.



Re: Bottlenecks and link upgrades

2020-08-13 Thread Olav Kvittem via NANOG

On 12.08.2020 09:31, Hank Nussbacher wrote:
>
> At what point do commercial ISPs upgrade links in their backbone as
> well as peering and transit links that are congested?  At 80%
> capacity?  90%?  95%? 
>

Hi,


Wouldn't it be better to measure the basic performance like packet drop
rates and queue sizes?

These days live video is needed and these parameters are essential to
the quality.

Queues are building up in milliseconds and people are averaging over
minutes to estimate quality.


If you are measuring queue delay with high-frequency one-way-delay
measurements, you would then be able to advise better on what the
consequences of a highly loaded link are.
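
The idea in miniature (synthetic samples; not our project's actual
tooling):

# Treat the minimum observed one-way delay as the propagation baseline;
# anything above it approximates queueing delay (h_ddelay), in ms.
def queue_delays_ms(owd_samples_ms):
    baseline = min(owd_samples_ms)  # assumed uncongested floor
    return [s - baseline for s in owd_samples_ms]

samples = [12.1, 12.0, 12.3, 430.9, 12.2]  # synthetic OWD readings
print(f"{max(queue_delays_ms(samples)):.1f} ms")  # 418.9 ms queueing spike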


We are running a research project on end-to-end quality and the enclosed
image is yesterday's report on queuesize(h_ddelay) in ms. It shows stats
on delays between some peers.

I would have looked at the trends on the involved links to see if an
upgrade is necessary - 421 ms might be too much if it happens often.


Best regards


  Olav Kvittem


>
> Thanks,
> Hank
>
>
> Caveat: The views expressed above are solely my own and do not express
> the views or opinions of my employer
>




Re: Bottlenecks and link upgrades

2020-08-13 Thread Etienne-Victor Depasquale
>
> With tongue in cheek, one could say that measured instantaneously, the
> load on a link is always either zero or 100% link rate...
>

Actually, that's a first-class observation!

On Thu, Aug 13, 2020 at 12:00 PM Simon Leinen 
wrote:

> m Taichi writes:
> > Just out of curiosity, may I ask how we can measure the link capacity
> > loading? What does it mean by a 50%, 70%, or 90% capacity loading?
> > Load sampled and measured instantaneously, or averaging over a certain
> > period of time (granularity)?
>
> Very good question!
>
> With tongue in cheek, one could say that measured instantaneously, the
> load on a link is always either zero or 100% link rate...
>
> ISPs typically sample link load in 5-minute intervals and look at graphs
> that show load (at this 5-minute sampling resolution) over ~24 hours, or
> longer-term graphs where the resolution has been "downsampled", where
> downsampling usually smooths out short-term peaks.
>
> From my own experience, upgrade decisions are made by looking at those
> graphs and checking whether peak traffic (possibly ignoring "spikes" :-)
> crosses the threshold repeatedly.
>
> At some places this might be codified in terms of percentiles, e.g. "the
> Nth percentile of the M-minute utilization samples exceeds X% of link
> capacity over a Y-day period".  I doubt that anyone uses such rules to
> automatically issue upgrade orders, but maybe to generate alerts like
> "please check this link, we might want to upgrade it".
>
> I'd be curious whether other operators have such alert rules, and what
> N/M/X/Y they use - might well be different values for different kinds of
> links.
> --
> Simon.
> PS. We use the "stare at graphs" method, but if we had automatic alerts,
> I guess it would be something like "the 95th percentile of 5-minute
> samples exceeds 50% over 30 days".
> PPS. My colleagues remind me that we do alert on output queue drops.
>
> > These are questions that have bothered me for a long time; I don't know
> > if I can ask about these here. I take care of the radio access network
> > performance at work and have found many things unknown in the transport
> > network.
>
> > Thanks and best regards,
> > Taichi
>
> > On Wed, Aug 12, 2020 at 3:54 PM Mark Tinka 
> wrote:
>
> >  On 12/Aug/20 09:31, Hank Nussbacher wrote:
>
> >  At what point do commercial ISPs upgrade links in their backbone as
> well as peering and transit links that are congested?  At 80%
> >  capacity?  90%?  95%?
>
> >  We start the process at 50% utilization, and work toward completing the
> upgrade by 70% utilization.
>
> >  The period between 50% - 70% is just internal paperwork.
>
> >  Mark.
>
>

-- 
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale


Re: Bottlenecks and link upgrades

2020-08-13 Thread Mark Tinka



On 13/Aug/20 11:56, Simon Leinen wrote:

> I'd be curious whether other operators have such alert rules, and what
> N/M/X/Y they use - might well be different values for different kinds of
> links.

We use alerts to tell us about links that hit a threshold, in our NMS.
But yes, this is based on 5-minute samples, not percentile data.

The alerts are somewhat redundant for any long-term planning. They are
more useful when problems happen out of the blue.

Mark.


Re: Bottlenecks and link upgrades

2020-08-13 Thread Simon Leinen
m Taichi writes:
> Just out of curiosity, may I ask how we can measure the link capacity
> loading? What does it mean by a 50%, 70%, or 90% capacity loading?
> Load sampled and measured instantaneously, or averaging over a certain
> period of time (granularity)?

Very good question!

With tongue in cheek, one could say that measured instantaneously, the
load on a link is always either zero or 100% link rate...

ISPs typically sample link load in 5-minute intervals and look at graphs
that show load (at this 5-minute sampling resolution) over ~24 hours, or
longer-term graphs where the resolution has been "downsampled", where
downsampling usually smooths out short-term peaks.

From my own experience, upgrade decisions are made by looking at those
graphs and checking whether peak traffic (possibly ignoring "spikes" :-)
crosses the threshold repeatedly.

At some places this might be codified in terms of percentiles, e.g. "the
Nth percentile of the M-minute utilization samples exceeds X% of link
capacity over a Y-day period".  I doubt that anyone uses such rules to
automatically issue upgrade orders, but maybe to generate alerts like
"please check this link, we might want to upgrade it".

I'd be curious whether other operators have such alert rules, and what
N/M/X/Y they use - might well be different values for different kinds of
links.
-- 
Simon.
PS. We use the "stare at graphs" method, but if we had automatic alerts,
I guess it would be something like "the 95th percentile of 5-minute
samples exceeds 50% over 30 days".
PPS. My colleagues remind me that we do alert on output queue drops.

> These are questions that have bothered me for a long time; I don't know
> if I can ask about these here. I take care of the radio access network
> performance at work and have found many things unknown in the transport
> network.

> Thanks and best regards,
> Taichi

> On Wed, Aug 12, 2020 at 3:54 PM Mark Tinka  wrote:

>  On 12/Aug/20 09:31, Hank Nussbacher wrote:

>  At what point do commercial ISPs upgrade links in their backbone as well as 
> peering and transit links that are congested?  At 80%
>  capacity?  90%?  95%?  

>  We start the process at 50% utilization, and work toward completing the 
> upgrade by 70% utilization.

>  The period between 50% - 70% is just internal paperwork.

>  Mark.