RE: modeling residential subscriber bandwidth demand

2019-04-04 Thread John D'Ambrosia
All,
I am chairing an effort in the IEEE 802.3 Ethernet Working Group to understand 
bandwidth demand and how it will impact future Ethernet needs.  This is exactly 
the type of discussion I would like to see shared with this activity.  I would 
appreciate follow-on conversations with anyone wishing to share their 
observations.

Regards,

John D'Ambrosia
Chair, IEEE 802.3 New Ethernet Applications Ad hoc


Re: modeling residential subscriber bandwidth demand

2019-04-04 Thread James Bensley
On Tue, 2 Apr 2019 at 17:57, Tom Ammon  wrote:
>
> How do people model and try to project residential subscriber bandwidth 
> demands into the future? Do you base it primarily on historical data? Are 
> there more sophisticated approaches that you use to figure out how much 
> backbone bandwidth you need to build to keep your eyeballs happy?
>
> Netflow for historical data is great, but I guess what I am really asking is 
> - how do you anticipate the load that your eyeballs are going to bring to 
> your network, especially in the face of transport tweaks such as QUIC and TCP 
> BBR?
>
> Tom

Hi Tom,

Historical data is definitely the way to predict a trend; you can't
call something a trend if it only started today, IMO. Something (e.g.
bandwidth profiling) needs to have been recorded for a while before
you can say that you are trying to predict a trend. Without
historical data you're just making predictions without any direction,
which I don't think you want :)

Assuming you have a good mixture of subs (i.e. adults, children, male,
female, different regions, etc.) and 100% of your subs aren't a single
demographic like a university campus, for example, then I don't think
you need to worry about specifics like the adoption of QUIC or BBR.
You will never see a permanent AND massive increase in your total
aggregate network utilisation from one day to the next.

If, for example, a large CDN makes a change that increases per-user
bandwidth requirements, it's unlikely they are going to deploy it
globally in one single big-bang change. That would also be just one of
your major bandwidth sources/destinations, of which you'll likely have
several big hitters that make up the bulk of your traffic. If you have
planned well so far and have plenty of spare capacity (as others have
mentioned, in the 50-70% range, and your backhaul/peering/transit links
are a reasonable size ratio to your subs, e.g. subs get 10-20Mbps
services and your links are 1Gbps), there should be no persistent risk
to your network capacity as long as you keep following the same
upgrade trajectory. Major social events like the Super Bowl where you
are (or, here in England, sunshine) will cause exceptional traffic
increases, but only for brief periods.

You haven't mentioned exactly what you're doing for modelling capacity
demand (assuming you wanted feedback on it).

Assuming all the above is true for you, to give us a reasonable
foundation to build on:
In my experience the standard method is to record your ingress traffic
rate at all your PEs or P nodes and essentially divide this by the
number of subs you have (egress is important too, it's just usually
negligible in comparison). For example, if your ASN has a total
average ingress traffic rate of 1Gbps during peak hours and you
have 10,000 subs, you can model on, say, 0.1Mbps per sub. That's
actually a crazily low figure these days, but it's just a fictional
example to demonstrate the calculation.
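
A minimal sketch of that per-sub calculation (all figures are the fictional ones
from the paragraph above; the 0.75 headroom target just echoes the 50-75%
utilisation guidance given elsewhere in the thread, and the function names are
illustrative, not from any tool):

```python
# Sketch of the per-subscriber model described above.
# All figures are hypothetical, mirroring the 1Gbps / 10,000 subs example.

def per_sub_rate_mbps(total_peak_ingress_mbps: float, subscriber_count: int) -> float:
    """Average peak-hour ingress per subscriber, in Mbps."""
    return total_peak_ingress_mbps / subscriber_count

def required_capacity_mbps(subscriber_count: int, per_sub_mbps: float,
                           max_utilisation: float = 0.75) -> float:
    """Capacity needed so the projected peak stays under the target utilisation."""
    return subscriber_count * per_sub_mbps / max_utilisation

per_sub = per_sub_rate_mbps(total_peak_ingress_mbps=1_000.0, subscriber_count=10_000)
print(f"Peak average per sub: {per_sub:.2f} Mbps")                     # 0.10 Mbps
print(f"Capacity for 12,000 subs: "
      f"{required_capacity_mbps(12_000, per_sub):.0f} Mbps")           # 1600 Mbps
```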

The ideal scenario is that you have this info going back as far as possible.
Also, the more subs you have, the better it all averages out. For
business ISPs, bringing on one new customer can make a major difference:
if it's a 100Gbps end-site and your backbone is a single 100Gbps
link, you could be in trouble. For residential services, subs almost
always have slower links than your backbone/P/PE nodes.

If you have different types of subs, it's also worth breaking down the
stats by sub type. For example, we have ADSL subs and VDSL subs. We
record the egress traffic rate on the BNGs towards each type of sub
separately and then aggregate across all BNGs. For example, today peak
inbound for our ASN was X; of that X, Y went to ADSL subs and Z went
to VDSL subs. Y / $number_of_adsl_subs == peak average for an ADSL
line, and Z / $number_of_vdsl_subs == peak average for a VDSL line.

It's good to know this difference because a sub migrating from ADSL to
VDSL is not the same as getting a new sub in terms of additional
traffic growth. We have a lot of users upgrading to VDSL, which makes a
difference at scale; e.g. 10K upgrades is less additional traffic than
10K new subs. Rinse and repeat for your other customer types (FTTP/H,
wireless, etc.)
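
A sketch of the per-type breakdown and of why migrations add less traffic than
new subs (the sub counts, rates and the adsl/vdsl labels are invented for
illustration):

```python
# Hypothetical sketch: peak average per sub type, then migration-aware growth.
# Every figure below is invented for illustration only.

peak_egress_to_subs_mbps = {"adsl": 4_000.0, "vdsl": 9_000.0}  # aggregated across BNGs
sub_counts               = {"adsl": 6_000,   "vdsl": 4_000}

# Peak average per line, per sub type (Mbps).
per_type_peak_avg = {t: round(peak_egress_to_subs_mbps[t] / sub_counts[t], 2)
                     for t in sub_counts}
print(per_type_peak_avg)                          # {'adsl': 0.67, 'vdsl': 2.25}

# Additional traffic from 10k migrations (ADSL -> VDSL) vs 10k brand-new VDSL subs.
migrations = new_subs = 10_000
extra_from_migrations = migrations * (per_type_peak_avg["vdsl"] - per_type_peak_avg["adsl"])
extra_from_new_subs   = new_subs * per_type_peak_avg["vdsl"]
print(f"Migrations add ~{extra_from_migrations:.0f} Mbps, "
      f"new subs add ~{extra_from_new_subs:.0f} Mbps")
```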


> On Tue, Apr 2, 2019 at 2:20 PM Josh Luthman  
> wrote:
>>
>> We have GB/mo figures for our customers for every month for the last ~10 
>> years.  Is there some simple figure you're looking for?  I can tell you off 
>> hand that I remember we had accounts doing ~15 GB/mo and now we've got 1500 
>> GB/mo at similar rates per month.
>>
>
> I'm mostly just wondering what others do for this kind of planning - trying 
> to look outside of my own experience, so I don't miss something obvious. That 
> growth in total transfer that you mention is interesting.

You need to be careful with volume-based usage figures. As links
continuously increase in speed over the years, users can transfer the
same amount of data in less bit-time. The problem with polling at any
interval (be it 1 
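
The point about volume-based figures can be made concrete with a rough
GB/month-to-Mbps conversion; a sketch with hypothetical numbers (the 1500 GB/mo
figure is just borrowed from the quoted example above):

```python
# Rough GB/month -> average Mbps conversion, to show why monthly volume alone
# says little about peak-hour rate. Figures are hypothetical.

SECONDS_PER_MONTH = 30 * 24 * 3600

def gb_per_month_to_avg_mbps(gb_per_month: float) -> float:
    return gb_per_month * 8_000 / SECONDS_PER_MONTH   # 1 GB = 8,000 megabits (decimal)

print(f"{gb_per_month_to_avg_mbps(1500):.1f} Mbps average")   # ~4.6 Mbps
# A sub averaging ~4.6 Mbps over the month can still burst to line rate at
# peak hour, which is what the capacity model actually has to cover.
```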

Re: modeling residential subscriber bandwidth demand

2019-04-03 Thread Ray Van Dolson
On Wed, Apr 03, 2019 at 03:45:17AM -0400, Valdis Klētnieks wrote:
> On Tue, 02 Apr 2019 23:53:06 -0700, Ben Cannon said:
> > A 100/100 enterprise connection can easily support hundreds of desktop 
> > users 
> > if not more.  It's a lot of bandwidth even today.
> 
> And what happens when a significant fraction of those users fire up Netflix 
> with
> an HD stream?
> 
> We're discussing residential not corporate connections, I thought
> 

Yes, Enterprise requirements are certainly different, though inching
upwards with the prevalence of SaaS services like Salesforce, O365 and
file sharing services (the latter are a growing % of our traffic at
branch offices).

I feel like our rule of thumb on the Enterprise side is in the 1.5-2Mbps
per user range these days (for Internet).

Ray


Re: modeling residential subscriber bandwidth demand

2019-04-03 Thread Paul Nash
I am also surprised.  However, we have had a total of 5 complaints about 
network speed over a 3-year period.  

One possible reason is that, because they own the infrastructure collectively 
and pay for the bandwidth directly (I just manage everything for them), they 
are prepared to put up with the odd slowdown to avoid the expense of an 
upgrade. 

Our original plan was to start with the 100M circuit so that they could make 
sure that everything would work, that we had reliable wifi delivery (about 95% 
of users only use a wifi connection to their computers/iDevices/whatever), and 
then to upgrade to 1G as soon as the dust started settling.  They have 
postponed the upgrade for 3 years now, with no complaints.

I guess that when people are directly impacted by higher bandwidth costs, some 
of them can make do with slower service (or something).

paul 

> On Apr 3, 2019, at 8:41 AM, Darin Steffl  wrote:
> 
> Paul,
> 
> I have hard time seeing how you aren't maxing out that circuit. We see about 
> 2.3 mbps average per customer at peak with a primarily residential user base. 
> That would about 575 mbps average at peak for 250 users on our network so how 
> do we use 575 but you say your users don't even top 100 mbps at peak? It 
> doesn't make sense that our customers use 6 times as much bandwidth at peak 
> than yours do. 
> 
> We're a rural and small town mix in Minnesota, no urban areas in our 
> coverage. 90% of our customers are on a plan 22 mbps or less and the other 
> 10% are on a 100 mbps plan but their average usage isn't really much higher.
> 
> 
> Enterprise environments can easily handle many more users on a 100 meg 
> circuit because they aren't typically streaming video like they would be at 
> home. Residential will always be much higher usage per person than most 
> enterprise users. 
> 
> On Wed, Apr 3, 2019, 2:46 AM Valdis Klētnieks  wrote:
> On Tue, 02 Apr 2019 23:53:06 -0700, Ben Cannon said:
> > A 100/100 enterprise connection can easily support hundreds of desktop 
> > users 
> > if not more.  It’s a lot of bandwidth even today.
> 
> And what happens when a significant fraction of those users fire up Netflix 
> with
> an HD stream?
> 
> We're discussing residential not corporate connections, I thought
> 



Re: modeling residential subscriber bandwidth demand

2019-04-03 Thread Darin Steffl
Paul,

I have a hard time seeing how you aren't maxing out that circuit. We see
about 2.3 Mbps average per customer at peak with a primarily residential
user base. That would be about 575 Mbps average at peak for 250 users on our
network, so how do we use 575 while you say your users don't even top 100 Mbps
at peak? It doesn't make sense that our customers use 6 times as much
bandwidth at peak as yours do.

We're a rural and small-town mix in Minnesota, no urban areas in our
coverage. 90% of our customers are on a plan of 22 Mbps or less and the other
10% are on a 100 Mbps plan, but their average usage isn't really much higher.


Enterprise environments can easily handle many more users on a 100 meg
circuit because they aren't typically streaming video like they would be at
home. Residential will always see much higher usage per person than most
enterprise environments.

On Wed, Apr 3, 2019, 2:46 AM Valdis Klētnieks 
wrote:

> On Tue, 02 Apr 2019 23:53:06 -0700, Ben Cannon said:
> > A 100/100 enterprise connection can easily support hundreds of desktop
> users
> > if not more.  It’s a lot of bandwidth even today.
>
> And what happens when a significant fraction of those users fire up
> Netflix with
> an HD stream?
>
> We're discussing residential not corporate connections, I thought
>
>


Re: modeling residential subscriber bandwidth demand

2019-04-03 Thread Valdis Klētnieks
On Tue, 02 Apr 2019 23:53:06 -0700, Ben Cannon said:
> A 100/100 enterprise connection can easily support hundreds of desktop users 
> if not more.  It’s a lot of bandwidth even today.

And what happens when a significant fraction of those users fire up Netflix with
an HD stream?

We're discussing residential not corporate connections, I thought



Re: modeling residential subscriber bandwidth demand

2019-04-03 Thread Ben Cannon
A 100/100 enterprise connection can easily support hundreds of desktop users if 
not more.  It’s a lot of bandwidth even today. 

-Ben

> On Apr 2, 2019, at 10:35 PM, Mikael Abrahamsson  wrote:
> 
>> On Tue, 2 Apr 2019, Paul Nash wrote:
>> 
>> FWIW, I have a 250 subscribers sitting on a 100M fiber into Torix.  I have 
>> had no complains about speed in 4 1/2 years.  I have been planning to bump 
>> them to 1G for the last 4 years, but there is currently no economic 
>> justification.
> 
> I know FTTH footprints where peak evening average per customer is 3-5 
> megabit/s. I know others who claim their customers only average equivalent 
> 5-10% of that.
> 
> It all depends on what services you offer. Considering my household has 
> 250/100 for 40 USD a month I'd say your above solution wouldn't even be 
> enough to deliver an acceptable service to even 10 households.
> 
> -- 
> Mikael Abrahamsson    email: swm...@swm.pp.se


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Mikael Abrahamsson

On Tue, 2 Apr 2019, Paul Nash wrote:

FWIW, I have a 250 subscribers sitting on a 100M fiber into Torix.  I 
have had no complains about speed in 4 1/2 years.  I have been planning 
to bump them to 1G for the last 4 years, but there is currently no 
economic justification.


I know FTTH footprints where the peak evening average per customer is 3-5 
megabit/s. I know others who claim their customers only average the 
equivalent of 5-10% of that.


It all depends on what services you offer. Considering my household has 
250/100 for 40 USD a month, I'd say your above solution wouldn't be 
enough to deliver an acceptable service to even 10 households.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Tom Ammon
On Tue, Apr 2, 2019 at 2:20 PM Josh Luthman 
wrote:

> We have GB/mo figures for our customers for every month for the last ~10
> years.  Is there some simple figure you're looking for?  I can tell you off
> hand that I remember we had accounts doing ~15 GB/mo and now we've got 1500
> GB/mo at similar rates per month.
>
>
I'm mostly just wondering what others do for this kind of planning - trying
to look outside of my own experience, so I don't miss something obvious.
That growth in total transfer that you mention is interesting.

I always wonder what the value of trying to predict utilization is anyway,
especially since bandwidth is so cheap. But I figure it can't hurt to ask a
group of people where I am highly likely to find somebody smarter than I am
:-)





-- 
-
Tom Ammon
M: (801) 784-2628
thomasam...@gmail.com
-


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Paul Nash
Mixed residential (ages 25-75, 1-6 people per unit), a group who worked 
together to keep costs down.  Works well for them.  Friday nights we get to 
about 85% utilization (Netflix); other than that, it usually sits between 25-45%.

paul

> On Apr 2, 2019, at 5:44 PM, Jared Mauch  wrote:
> 
> I would say this is perhaps atypical but may depend on the customer type(s).
> 
> If they’re residential and use OTT data then sure.  If it’s SMB you’re likely 
> in better shape.
> 
> - Jared
> 
> 
>> On Apr 2, 2019, at 5:21 PM, Paul Nash  wrote:
>> 
>> FWIW, I have a 250 subscribers sitting on a 100M fiber into Torix.  I have 
>> had no complains about speed in 4 1/2 years.  I have been planning to bump 
>> them to 1G for the last 4 years, but there is currently no economic 
>> justification.
>> 
>>  paul
>> 
>> 
>>> On Apr 2, 2019, at 3:21 PM, Louie Lee via NANOG  wrote:
>>> 
>>> Certainly.
>>> 
>>> Projecting demand is one thing. Figuring out what to buy for your backbone, 
>>> edge (uplink & peer), and colo (for CDN caches too!), for which 
>>> scale+growth is quite another.
>>> 
>>> And yeah, Jim, overall, things have stayed the same. There are just the 
>>> nuances added with caches, gaming, OTT streaming, some IoT (like always-on 
>>> home security cams) plus better tools now for network management and 
>>> network analysis.
>>> 
>>> Louie
>>> Google Fiber.
>>> 
>>> 
>>> 
>>> On Tue, Apr 2, 2019 at 12:00 PM Jared Mauch  wrote:
>>> 
>>> 
 On Apr 2, 2019, at 2:35 PM, jim deleskie  wrote:
 
 +1 on this. its been more than 10 years since I've been responsible for a 
 broadband network but have friends that still play in that world and do 
 some very good work on making sure their models are very well managed, 
 with more math than I ever bothered with, That being said, If had used the 
 methods I'd had used back in the 90's they would have fully predicted per 
 sub growth including all the FB/YoutubeNetflix traffic we have today. The 
 "rapid" growth we say in the 90's and the 2000' and even this decade are 
 all magically the same curve, we'd just further up the incline, the 
 question is will it continue another 10+ years, where the growth rate is 
 nearing straight up :)
>>> 
>>> 
>>> I think sometimes folks have the challenge with how to deal with aggregate 
>>> scale and growth vs what happens in a pure linear model with subscribers.
>>> 
>>> The first 75 users look a lot different than the next 900.  You get 
>>> different population scale and average usage.
>>> 
>>> I could roughly estimate some high numbers for population of earth internet 
>>> usage at peak for maximum, but in most cases if you have a 1G connection 
>>> you can support 500-800 subscribers these days.  Ideally you can get a 10G 
>>> link for a reasonable price.  Your scale looks different as well as you can 
>>> work with “the content guys” once you get far enough.
>>> 
>>> Thursdays are still the peak because date night is still generally Friday.
>>> 
>>> - Jared
> 



Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Jared Mauch
I would say this is perhaps atypical but may depend on the customer type(s).

If they’re residential and use OTT data then sure.  If it’s SMB you’re likely 
in better shape.

- Jared


> On Apr 2, 2019, at 5:21 PM, Paul Nash  wrote:
> 
> FWIW, I have a 250 subscribers sitting on a 100M fiber into Torix.  I have 
> had no complains about speed in 4 1/2 years.  I have been planning to bump 
> them to 1G for the last 4 years, but there is currently no economic 
> justification.
> 
>   paul
> 
> 
>> On Apr 2, 2019, at 3:21 PM, Louie Lee via NANOG  wrote:
>> 
>> Certainly.
>> 
>> Projecting demand is one thing. Figuring out what to buy for your backbone, 
>> edge (uplink & peer), and colo (for CDN caches too!), for which scale+growth 
>> is quite another.
>> 
>> And yeah, Jim, overall, things have stayed the same. There are just the 
>> nuances added with caches, gaming, OTT streaming, some IoT (like always-on 
>> home security cams) plus better tools now for network management and network 
>> analysis.
>> 
>> Louie
>> Google Fiber.
>> 
>> 
>> 
>> On Tue, Apr 2, 2019 at 12:00 PM Jared Mauch  wrote:
>> 
>> 
>>> On Apr 2, 2019, at 2:35 PM, jim deleskie  wrote:
>>> 
>>> +1 on this. its been more than 10 years since I've been responsible for a 
>>> broadband network but have friends that still play in that world and do 
>>> some very good work on making sure their models are very well managed, with 
>>> more math than I ever bothered with, That being said, If had used the 
>>> methods I'd had used back in the 90's they would have fully predicted per 
>>> sub growth including all the FB/YoutubeNetflix traffic we have today. The 
>>> "rapid" growth we say in the 90's and the 2000' and even this decade are 
>>> all magically the same curve, we'd just further up the incline, the 
>>> question is will it continue another 10+ years, where the growth rate is 
>>> nearing straight up :)
>> 
>> 
>> I think sometimes folks have the challenge with how to deal with aggregate 
>> scale and growth vs what happens in a pure linear model with subscribers.
>> 
>> The first 75 users look a lot different than the next 900.  You get 
>> different population scale and average usage.
>> 
>> I could roughly estimate some high numbers for population of earth internet 
>> usage at peak for maximum, but in most cases if you have a 1G connection you 
>> can support 500-800 subscribers these days.  Ideally you can get a 10G link 
>> for a reasonable price.  Your scale looks different as well as you can work 
>> with “the content guys” once you get far enough.
>> 
>> Thursdays are still the peak because date night is still generally Friday.
>> 
>> - Jared



Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Paul Nash
FWIW, I have 250 subscribers sitting on a 100M fiber into TorIX.  I have had 
no complaints about speed in 4 1/2 years.  I have been planning to bump them to 
1G for the last 4 years, but there is currently no economic justification.

paul


> On Apr 2, 2019, at 3:21 PM, Louie Lee via NANOG  wrote:
> 
> Certainly.
> 
> Projecting demand is one thing. Figuring out what to buy for your backbone, 
> edge (uplink & peer), and colo (for CDN caches too!), for which scale+growth 
> is quite another.
> 
> And yeah, Jim, overall, things have stayed the same. There are just the 
> nuances added with caches, gaming, OTT streaming, some IoT (like always-on 
> home security cams) plus better tools now for network management and network 
> analysis.
> 
> Louie
> Google Fiber.
> 
> 
> 
> On Tue, Apr 2, 2019 at 12:00 PM Jared Mauch  wrote:
> 
> 
> > On Apr 2, 2019, at 2:35 PM, jim deleskie  wrote:
> > 
> > +1 on this. its been more than 10 years since I've been responsible for a 
> > broadband network but have friends that still play in that world and do 
> > some very good work on making sure their models are very well managed, with 
> > more math than I ever bothered with, That being said, If had used the 
> > methods I'd had used back in the 90's they would have fully predicted per 
> > sub growth including all the FB/YoutubeNetflix traffic we have today. The 
> > "rapid" growth we say in the 90's and the 2000' and even this decade are 
> > all magically the same curve, we'd just further up the incline, the 
> > question is will it continue another 10+ years, where the growth rate is 
> > nearing straight up :)
> 
> 
> I think sometimes folks have the challenge with how to deal with aggregate 
> scale and growth vs what happens in a pure linear model with subscribers.
> 
> The first 75 users look a lot different than the next 900.  You get different 
> population scale and average usage.
> 
> I could roughly estimate some high numbers for population of earth internet 
> usage at peak for maximum, but in most cases if you have a 1G connection you 
> can support 500-800 subscribers these days.  Ideally you can get a 10G link 
> for a reasonable price.  Your scale looks different as well as you can work 
> with “the content guys” once you get far enough.
> 
> Thursdays are still the peak because date night is still generally Friday.
> 
> - Jared



Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Louie Lee via NANOG
Certainly.

Projecting demand is one thing. Figuring out what to buy for your backbone,
edge (uplink & peer), and colo (for CDN caches too!), and at what
scale+growth, is quite another.

And yeah, Jim, overall, things have stayed the same. There are just the
nuances added with caches, gaming, OTT streaming, some IoT (like always-on
home security cams) plus better tools now for network management and
network analysis.

Louie
Google Fiber.



On Tue, Apr 2, 2019 at 12:00 PM Jared Mauch  wrote:

>
>
> > On Apr 2, 2019, at 2:35 PM, jim deleskie  wrote:
> >
> > +1 on this. its been more than 10 years since I've been responsible for
> a broadband network but have friends that still play in that world and do
> some very good work on making sure their models are very well managed, with
> more math than I ever bothered with, That being said, If had used the
> methods I'd had used back in the 90's they would have fully predicted per
> sub growth including all the FB/YoutubeNetflix traffic we have today. The
> "rapid" growth we say in the 90's and the 2000' and even this decade are
> all magically the same curve, we'd just further up the incline, the
> question is will it continue another 10+ years, where the growth rate is
> nearing straight up :)
>
>
> I think sometimes folks have the challenge with how to deal with aggregate
> scale and growth vs what happens in a pure linear model with subscribers.
>
> The first 75 users look a lot different than the next 900.  You get
> different population scale and average usage.
>
> I could roughly estimate some high numbers for population of earth
> internet usage at peak for maximum, but in most cases if you have a 1G
> connection you can support 500-800 subscribers these days.  Ideally you can
> get a 10G link for a reasonable price.  Your scale looks different as well
> as you can work with “the content guys” once you get far enough.
>
> Thursdays are still the peak because date night is still generally Friday.
>
> - Jared


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Louie Lee via NANOG
+1 Also on this.

From my viewpoint, the game has been roughly the same for the last 20+ years.
You might want to validate that your per-customer bandwidth use across your
markets is roughly the same for the same service/speeds/product. If you
have that data over time, then you can extrapolate what each market's
bandwidth use would be when you lay on a customer growth forecast.
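
A sketch of that extrapolation (the market names, per-customer rates and growth
figures below are all made up for illustration):

```python
# Hypothetical extrapolation: per-market peak demand = per-customer rate x
# forecast customer count. All names and numbers are invented.

markets = {
    # market: (per_customer_peak_mbps, current_customers, yearly_customer_growth)
    "market_a": (2.3, 20_000, 0.08),
    "market_b": (1.9, 12_000, 0.15),
}

def forecast_peak_gbps(per_cust_mbps: float, customers: int,
                       growth: float, years: int) -> float:
    return per_cust_mbps * customers * (1 + growth) ** years / 1_000

for name, (rate, custs, growth) in markets.items():
    print(f"{name}: ~{forecast_peak_gbps(rate, custs, growth, years=3):.1f} Gbps in 3 years")
```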

But for something that's simpler and actionable now, yeah, just make sure
that your upstream and peering(!!) links are not congested. I agree that
the 50-75% is a good target not only for the lead time to bring up more
capacity, but also to allow for spikes in traffic for various events
throughout the year.

Louie
Google Fiber


On Tue, Apr 2, 2019 at 11:36 AM jim deleskie  wrote:

> +1 on this. its been more than 10 years since I've been responsible for a
> broadband network but have friends that still play in that world and do
> some very good work on making sure their models are very well managed, with
> more math than I ever bothered with, That being said, If had used the
> methods I'd had used back in the 90's they would have fully predicted per
> sub growth including all the FB/YoutubeNetflix traffic we have today. The
> "rapid" growth we say in the 90's and the 2000' and even this decade are
> all magically the same curve, we'd just further up the incline, the
> question is will it continue another 10+ years, where the growth rate is
> nearing straight up :)
>
> -jim
>
> On Tue, Apr 2, 2019 at 3:26 PM Mikael Abrahamsson 
> wrote:
>
>> On Tue, 2 Apr 2019, Tom Ammon wrote:
>>
>> > Netflow for historical data is great, but I guess what I am really
>> > asking is - how do you anticipate the load that your eyeballs are going
>> > to bring to your network, especially in the face of transport tweaks
>> > such as QUIC and TCP BBR?
>>
>> I don't see how QUIC and BBR is going to change how much bandwidth is
>> flowing.
>>
>> If you want to make your eyeballs happy then make sure you're not
>> congesting your upstream links. Aim for max 50-75% utilization in 5
>> minute
>> average at peak hour (graph by polling interface counters every 5
>> minutes). Depending on your growth curve you might need to initiate
>> upgrades to make sure they're complete before utilization hits 75%.
>>
>> If you have thousands of users then typically just look at the statistics
>> per user and extrapolate. I don't believe this has fundamentally changed
>> in the past 20 years, this is still best common practice.
>>
>> If you go into the game of running your links full parts of the day then
>> you're into the game of trying to figure out QoE values which might mean
>> you spend more time doing that than the upgrade would cost.
>>
>> --
>> Mikael Abrahamsson    email: swm...@swm.pp.se
>>
>


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Jared Mauch



> On Apr 2, 2019, at 2:35 PM, jim deleskie  wrote:
> 
> +1 on this. its been more than 10 years since I've been responsible for a 
> broadband network but have friends that still play in that world and do some 
> very good work on making sure their models are very well managed, with more 
> math than I ever bothered with, That being said, If had used the methods I'd 
> had used back in the 90's they would have fully predicted per sub growth 
> including all the FB/YoutubeNetflix traffic we have today. The "rapid" growth 
> we say in the 90's and the 2000' and even this decade are all magically the 
> same curve, we'd just further up the incline, the question is will it 
> continue another 10+ years, where the growth rate is nearing straight up :)


I think sometimes folks have the challenge of how to deal with aggregate 
scale and growth vs what happens in a pure linear model with subscribers.

The first 75 users look a lot different than the next 900.  You get different 
population scale and average usage.

I could roughly estimate some high numbers for peak internet usage across the 
population of the earth, but in most cases if you have a 1G connection you 
can support 500-800 subscribers these days.  Ideally you can get a 10G link for 
a reasonable price.  Your scale looks different as well, as you can work with 
“the content guys” once you get far enough.

Thursdays are still the peak because date night is still generally Friday.

- Jared

Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread jim deleskie
Louie,

 It's almost like us old guys knew something, and did know everything back
then; the more things have changed, the more they have stayed the same :)



-jim

On Tue, Apr 2, 2019 at 3:52 PM Louie Lee  wrote:

> +1 Also on this.
>
> From my viewpoint, the game is roughly the same for the last 20+ years.
> You might want to validate that your per-customer bandwidth use across your
> markets is roughly the same for the same service/speeds/product. If you
> have that data over time, then you can extrapolate what each market's
> bandwidth use would be when you lay on a customer growth forecast.
>
> But for something that's simpler and actionable now, yeah, just make sure
> that your upstream and peering(!!) links are not congested. I agree that
> the 50-75% is a good target not only for the lead time to bring up more
> capacity, but also to allow for spikes in traffic for various events
> throughout the year.
>
> Louie
> Google Fiber
>
>
> On Tue, Apr 2, 2019 at 11:36 AM jim deleskie  wrote:
>
>> +1 on this. its been more than 10 years since I've been responsible for a
>> broadband network but have friends that still play in that world and do
>> some very good work on making sure their models are very well managed, with
>> more math than I ever bothered with, That being said, If had used the
>> methods I'd had used back in the 90's they would have fully predicted per
>> sub growth including all the FB/YoutubeNetflix traffic we have today. The
>> "rapid" growth we say in the 90's and the 2000' and even this decade are
>> all magically the same curve, we'd just further up the incline, the
>> question is will it continue another 10+ years, where the growth rate is
>> nearing straight up :)
>>
>> -jim
>>
>> On Tue, Apr 2, 2019 at 3:26 PM Mikael Abrahamsson 
>> wrote:
>>
>>> On Tue, 2 Apr 2019, Tom Ammon wrote:
>>>
>>> > Netflow for historical data is great, but I guess what I am really
>>> > asking is - how do you anticipate the load that your eyeballs are
>>> going
>>> > to bring to your network, especially in the face of transport tweaks
>>> > such as QUIC and TCP BBR?
>>>
>>> I don't see how QUIC and BBR is going to change how much bandwidth is
>>> flowing.
>>>
>>> If you want to make your eyeballs happy then make sure you're not
>>> congesting your upstream links. Aim for max 50-75% utilization in 5
>>> minute
>>> average at peak hour (graph by polling interface counters every 5
>>> minutes). Depending on your growth curve you might need to initiate
>>> upgrades to make sure they're complete before utilization hits 75%.
>>>
>>> If you have thousands of users then typically just look at the
>>> statistics
>>> per user and extrapolate. I don't believe this has fundamentally changed
>>> in the past 20 years, this is still best common practice.
>>>
>>> If you go into the game of running your links full parts of the day then
>>> you're into the game of trying to figure out QoE values which might mean
>>> you spend more time doing that than the upgrade would cost.
>>>
>>> --
>>> Mikael Abrahamsson    email: swm...@swm.pp.se
>>>
>>


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Robert M. Enger
An article was published recently that discusses the possible impact of 
cloud-based gaming on last-mile capacity requirements, as well as on external 
connections. The author suggests that decentralized video services won't be the 
only big user of last-mile capacity. 
https://medium.com/@rudolfvanderberg/what-google-stadia-will-mean-for-broadband-and-interconnection-and-sony-microsoft-and-nintendo-fe20866e6c5b
 


From: "Tom Ammon"  
To: "NANOG"  
Sent: Tuesday, April 2, 2019 9:54:47 AM 
Subject: modeling residential subscriber bandwidth demand 

How do people model and try to project residential subscriber bandwidth demands 
into the future? Do you base it primarily on historical data? Are there more 
sophisticated approaches that you use to figure out how much backbone bandwidth 
you need to build to keep your eyeballs happy? 
Netflow for historical data is great, but I guess what I am really asking is - 
how do you anticipate the load that your eyeballs are going to bring to your 
network, especially in the face of transport tweaks such as QUIC and TCP BBR? 

Tom 
-- 
- 
Tom Ammon 
M: (801) 784-2628 
thomasam...@gmail.com 
- 



Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread jim deleskie
+1 on this. It's been more than 10 years since I've been responsible for a
broadband network, but I have friends that still play in that world and do
some very good work on making sure their models are very well managed, with
more math than I ever bothered with. That being said, if I had used the
methods I used back in the '90s, they would have fully predicted per-sub
growth including all the FB/YouTube/Netflix traffic we have today. The
"rapid" growth we saw in the '90s and the 2000s and even this decade is
all magically the same curve, we're just further up the incline; the
question is will it continue another 10+ years, where the growth rate is
nearing straight up :)

-jim

On Tue, Apr 2, 2019 at 3:26 PM Mikael Abrahamsson  wrote:

> On Tue, 2 Apr 2019, Tom Ammon wrote:
>
> > Netflow for historical data is great, but I guess what I am really
> > asking is - how do you anticipate the load that your eyeballs are going
> > to bring to your network, especially in the face of transport tweaks
> > such as QUIC and TCP BBR?
>
> I don't see how QUIC and BBR is going to change how much bandwidth is
> flowing.
>
> If you want to make your eyeballs happy then make sure you're not
> congesting your upstream links. Aim for max 50-75% utilization in 5 minute
> average at peak hour (graph by polling interface counters every 5
> minutes). Depending on your growth curve you might need to initiate
> upgrades to make sure they're complete before utilization hits 75%.
>
> If you have thousands of users then typically just look at the statistics
> per user and extrapolate. I don't believe this has fundamentally changed
> in the past 20 years, this is still best common practice.
>
> If you go into the game of running your links full parts of the day then
> you're into the game of trying to figure out QoE values which might mean
> you spend more time doing that than the upgrade would cost.
>
> --
> Mikael Abrahamsson    email: swm...@swm.pp.se
>


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Mikael Abrahamsson

On Tue, 2 Apr 2019, Tom Ammon wrote:

Netflow for historical data is great, but I guess what I am really 
asking is - how do you anticipate the load that your eyeballs are going 
to bring to your network, especially in the face of transport tweaks 
such as QUIC and TCP BBR?


I don't see how QUIC and BBR are going to change how much bandwidth is 
flowing.


If you want to make your eyeballs happy, then make sure you're not 
congesting your upstream links. Aim for a max of 50-75% utilization in the 
5-minute average at peak hour (graph by polling interface counters every 5 
minutes). Depending on your growth curve, you might need to initiate 
upgrades to make sure they're complete before utilization hits 75%.
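
A minimal sketch of that check, computing the 5-minute average from two
octet-counter samples (the counter values and the 10G link speed are
hypothetical; real polling would come from SNMP or streaming telemetry, and
counter wrap is ignored here):

```python
# Hypothetical sketch: 5-minute average utilisation from two octet-counter
# samples on one interface. Values are invented; counter wrap is not handled.

INTERVAL_S = 300  # 5-minute polling interval

def utilisation(prev_octets: int, curr_octets: int, link_bps: float) -> float:
    """Fraction of link capacity used over the polling interval."""
    return (curr_octets - prev_octets) * 8 / (INTERVAL_S * link_bps)

u = utilisation(prev_octets=1_000_000_000_000,
                curr_octets=1_290_000_000_000,
                link_bps=10e9)                     # a 10G link, for illustration
print(f"5-minute average utilisation: {u:.0%}")    # ~77%
if u >= 0.75:
    print("Peak-hour average at/above 75% -- the upgrade should already be underway.")
```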


If you have thousands of users, then typically just look at the statistics 
per user and extrapolate. I don't believe this has fundamentally changed 
in the past 20 years; this is still best common practice.


If you go into the game of running your links full for parts of the day, then 
you're into the game of trying to figure out QoE values, which might mean 
you spend more time doing that than the upgrade would cost.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Josh Luthman
We have GB/mo figures for our customers for every month for the last ~10
years.  Is there some simple figure you're looking for?  I can tell you off
hand that I remember we had accounts doing ~15 GB/mo and now we've got 1500
GB/mo at similar rates per month.
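
For what it's worth, ~15 GB/mo to ~1500 GB/mo over roughly ten years implies a
compound growth rate of around 58% per year; a quick check (the ten-year span
is an approximation):

```python
# Compound annual growth rate implied by ~15 GB/mo -> ~1500 GB/mo over ~10 years.
start_gb, end_gb, years = 15, 1500, 10
cagr = (end_gb / start_gb) ** (1 / years) - 1
print(f"~{cagr:.0%} per year")   # ~58%
```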

Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373


On Tue, Apr 2, 2019 at 2:16 PM Aaron Gould  wrote:

> “…especially in the face of transport tweaks such as QUIC and TCP BBR? “
>
>
>
> Do these “quic and tcp bbr” change bandwidth utilization as we’ve know it
> for years ?
>
>
>
> -Aaron
>


RE: modeling residential subscriber bandwidth demand

2019-04-02 Thread Aaron Gould
“…especially in the face of transport tweaks such as QUIC and TCP BBR? “

 

Do these "QUIC and TCP BBR" change bandwidth utilization as we've known it for 
years?

 

-Aaron



RE: modeling residential subscriber bandwidth demand

2019-04-02 Thread Aaron Gould
We use the trendline/95th-percentile features that are built into a lot of 
graphing tools (SolarWinds); I think even CDN cache portals have trendlines, 
forecasts, etc.  My boss might use other growth percentages gleaned from 
previous years, but yeah, like another person mentioned, the more history you 
have the better it seems… unless there is some major shift for some strange 
big reason.  But have we ever seen that with internet usage growth?  …yet?
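
A sketch of that approach outside a graphing tool: one 95th-percentile value
per month, plus a simple linear trendline for the forecast (the sample data is
invented and numpy is assumed to be available):

```python
# Hypothetical sketch: one 95th-percentile peak value per month, then a simple
# linear trendline forecast. The data is invented; numpy is assumed available.
import numpy as np

monthly_p95_gbps = np.array([4.1, 4.3, 4.2, 4.6, 4.8, 5.0,
                             5.1, 5.4, 5.6, 5.9, 6.1, 6.4])   # most recent last

months = np.arange(len(monthly_p95_gbps))
slope, intercept = np.polyfit(months, monthly_p95_gbps, 1)    # linear trendline

forecast_month = len(monthly_p95_gbps) + 12                   # 12 months out
forecast = slope * forecast_month + intercept
print(f"Trend: +{slope:.2f} Gbps/month, ~{forecast:.1f} Gbps expected in 12 months")
```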

 

I mean, has internet bandwidth usage ever gone down nationally/globally, 
similar to a graph of the housing market in 2007/2008?

 

-Aaron

 



Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Scott Weeks





:: How do people model and try to project residential 
:: subscriber bandwidth demands into the future? Do 
:: you base it primarily on historical data?
--


Yes, if you have a lot of quality data that goes far 
back in the past, you can make pretty good judgements 
about future needs.  Less data, and/or data that doesn't 
go very far back, lessens the accuracy of a prediction 
about the future.

scott








--- thomasam...@gmail.com wrote:

From: Tom Ammon 
To: NANOG 
Subject: modeling residential subscriber bandwidth demand
Date: Tue, 2 Apr 2019 12:54:47 -0400

How do people model and try to project residential subscriber bandwidth
demands into the future? Do you base it primarily on historical data? Are
there more sophisticated approaches that you use to figure out how much
backbone bandwidth you need to build to keep your eyeballs happy?

Netflow for historical data is great, but I guess what I am really asking
is - how do you anticipate the load that your eyeballs are going to bring
to your network, especially in the face of transport tweaks such as QUIC
and TCP BBR?

Tom
-- 
-
Tom Ammon
M: (801) 784-2628
thomasam...@gmail.com
-




Re: modeling residential subscriber bandwidth demand

2019-04-02 Thread Ben Cannon
Residential whatnow?

Sorry, to be honest, there really isn’t any.   

I suppose if one is buying lit services, this is important to model.  

But an *incredibly* huge residential network can be served by a single basic 
10/40g backbone connection or two. And if you own the glass it’s trivial to 
spin up very many of those.   Aggregate in metro cores, put the Netflix OC 
there, done.

Then again, we don’t even do DNS anymore, we’re <1ms from Cloudflare, so in 
2019 why bother?

I don’t miss the days of ISDN backhaul, but those days are long gone. And I 
won’t go back.


-Ben Cannon
CEO 6x7 Networks & 6x7 Telecom, LLC 
b...@6by7.net 




> On Apr 2, 2019, at 9:54 AM, Tom Ammon  wrote:
> 
> How do people model and try to project residential subscriber bandwidth 
> demands into the future? Do you base it primarily on historical data? Are 
> there more sophisticated approaches that you use to figure out how much 
> backbone bandwidth you need to build to keep your eyeballs happy? 
> 
> Netflow for historical data is great, but I guess what I am really asking is 
> - how do you anticipate the load that your eyeballs are going to bring to 
> your network, especially in the face of transport tweaks such as QUIC and TCP 
> BBR?
> 
> Tom
> -- 
> -
> Tom Ammon
> M: (801) 784-2628
> thomasam...@gmail.com 
> -