Re: deploying RPKI based Origin Validation

2018-07-18 Thread Mark Tinka



On 19/Jul/18 01:21, Job Snijders wrote:

> @ all - It would be good if operators ask their vendors if they can get
> behind this I-D https://tools.ietf.org/html/draft-ietf-sidrops-ov-clarify

I'm actually glad to see this (Randy, you've abandoned me, hehe).

We actually hit and troubleshot both of these issues together with Randy
and a bunch of good folk in the operator and vendor community back in
2016/2017, where we discovered that Cisco was marking all iBGP routes
as Valid by default, and automatically applying RPKI policy to routes
without actual operator input.

The latter issue was actually officially documented as part of how the
implementation works over at Cisco-land, but the former was a direct
violation of the RFC.

These issues were eventually fixed later in 2017, but I'm glad to see
that there is an I-D that states this more firmly!
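For anyone wanting to sanity-check their vendor against the I-D, the RFC 6811 outcomes are simple enough to model. Below is a minimal Python sketch (my own illustration, not from the thread; the ROA set is made up) showing why a route with no covering ROA must come back NotFound rather than Valid:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Roa:
    prefix: str      # covering prefix, e.g. "192.0.2.0/24"
    max_length: int  # longest prefix length the ROA authorises
    origin_as: int   # authorised origin ASN

def validate(route_prefix, origin_as, roas):
    """RFC 6811 origin validation: 'valid', 'invalid' or 'notfound'."""
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for roa in roas:
        roa_net = ipaddress.ip_network(roa.prefix)
        if route.version != roa_net.version or not route.subnet_of(roa_net):
            continue  # this ROA does not cover the route
        covered = True
        if roa.origin_as == origin_as and route.prefixlen <= roa.max_length:
            return "valid"
    # Covered by at least one ROA but none matched: invalid.
    # Covered by no ROA at all: notfound -- never Valid by default.
    return "invalid" if covered else "notfound"

roas = [Roa("192.0.2.0/24", 24, 64496)]
print(validate("192.0.2.0/24", 64496, roas))     # valid
print(validate("192.0.2.0/25", 64496, roas))     # invalid (exceeds maxLength)
print(validate("192.0.2.0/24", 64511, roas))     # invalid (wrong origin)
print(validate("198.51.100.0/24", 64496, roas))  # notfound (no covering ROA)
```

The point of the I-D (and of the 2016/2017 fix) is that a router must apply this logic only when the operator asks for it, and must not special-case iBGP-learned routes.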

Thanks, Randy!

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 23:56, Keith Stokes wrote:

> At least in the US, Jane also doesn’t really have a choice of her
> electricity provider, so she’s not getting bombarded with advertising
> from vendors selling “Faster WiFi” than the next guy. I don’t get to
> choose my method of power generation and therefore cost per kWh. I’d
> love to buy $.04 from the Pacific NW when I’m in the Southern US. 

And that's why I suspect that 10Gbps to the home will become a reality
not out of necessity, but out of a race on who can out-market the other.

The problem for us as operators - which is what I was trying to explain
- is that even though the home will likely not saturate that 10Gbps
link, never mind use even 1% of it in any sustained fashion, we shall be
left with the burden of proving the "I want to see my 10Gbps that I
bought, or I'm moving to your competitor" case over and over again.

When are we going to stop feeding the monster we've created (or more
accurately, that has been created for us)?

Mark.


Re: deploying RPKI based Origin Validation

2018-07-18 Thread Mark Tinka



On 18/Jul/18 21:30, Michel Py wrote:

> Not much at all; I was actually trying to get you to do the RPKI part for me ;-)
> Take this script you wrote, which produces the list of prefixes that are RPKI
> invalid AND that have no alternative, and run it every x minutes, publishing
> the output at a fixed URL (no date/time in the name). I will fetch it, inject
> it into ExaBGP to feed my IGP, and voila, done.

Just to clarify, Job wrote that script, not me :-).


> Whoever wants to use it can; I'm not trying to impose it on the entire BGP community.

Which is fine, but I want to be cautious about encouraging a parallel
stream that slows down the deployment of RPKI.


> We probably have to wait until attrition brings us routers that have said 
> code.

We generally use typical service provider routers to deliver services,
so I'm not sure whether the 3900s you run support it or not.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 17:35, Brielle Bruns wrote:

>  
>
> Customers are still harping on me about going wireless on all of their
> desktops.  Since most of our customers are CAD/Design/Building
> companies, during planning, we insist on at least two drops to each
> workstation, preferably 3 or more.
>
> But, every time we go to do an upgrade...
>
> "Why can't we just use wireless?"
>
> Even though 20 minutes prior they're complaining at how slow it is for
> their laptops to open up large files over the network over wifi.
>
> "If you want faster speeds, you'll need to go from AC-Lites to AC-HDs
> with Wave 2.  They're $350 or so each, and since your brand new
> building likes to absorb wifi, you'll need 5-8 of them to cover every
> possible location in the building.  Oh, and you'll need to replace
> your laptops with Wave 2 capable ones, plus Wave 2 PCIe cards for
> every desktop... Except for the cheap $200 AMD APU desktops you bought
> against our recommendations that don't have expansion slots and no USB
> 3..."
>
> I long for the day when we can get 100mbit throughout a building or
> house reliably.
>
> (I'm a Ubnt hardware tester too, 99% of my customer setups are a mix
> of EdgeRouter, EdgeSwitch, and Unifi Switch and AP setups).

I have a 100Mbps FTTH service to my house, sitting on 802.11ac (Google
OnHub units, pretty dope, had them for almost 2 years now). With my
802.11ac client devices, I can do well over 600Mbps within my walls,
easily, over the air. But that's because only the neighbor closest to me
has wi-fi (in the 2.4GHz band, thank God). The rest are too far away for
their signals to get through my thick walls.

It's a totally different story in the office where (fair point, we're
still on 802.11n, but...) the wi-fi is simply useless, because of all
the competing radios from adjacent companies in all bands and on all
channels. And despite having several AP's all over the place + using a
controller to manage the radio network, fundamentally, I prefer wiring
up when I'm in my office, and only use the wi-fi for my phones or when I
need to go to another office or boardroom with my laptop.

But the point is you and I get this phenomenon. Users don't, regardless
of whether they are sending a 2KB e-mail or rendering a multi-gigabit
CAD file.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 17:20, Julien Goodwin wrote:

> Living in Australia this is an every day experience, especially for
> content served out of Europe (or for that matter, Africa).
>
> TCP & below are rarely the biggest problem these days (at least with
> TCP-BBR & friends), far too often applications, web services etc. are
> simply never tested in an environment with any significant latency.
>
> While some issues may exist for static content loading for which a CDN
> can be helpful, that's not helpful for application traffic.

Yip.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 17:00, Mike Hammett wrote:

> The game companies (and render farms) also need to work on as extensive 
> peering as the top CDNs have been doing. They're getting better, but not 
> quite there yet. 

I'm not sure about North America, Asia-Pac or South America, but in
Europe, the gaming folk actually peer very well.

The problem for us is that Europe is anywhere from 112ms to 170ms away,
depending on which side of the continent you are on.

And while we peer extensively with them via our network in Europe, that
latency will simply not go away. They need to come into the market and
serve the demand locally.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 16:58, K. Scott Helms wrote:

>
> Mark,
>
> I agree completely, I'm working on a paper right now for a conference
> (waiting on Wireshark to finish with my complex filter at the moment)
> that shows what's happening with gaming traffic.  What's really
> interesting is how gaming is changing and within the next few years I
> do expect a lot of games to move into the remote rendering world. 
> I've tested several and the numbers are pretty substantial.  You need
> to have <=30 ms of latency to sustain 1080p gaming and obviously
> jitter and packet loss are also problematic.  The traffic is also
> pretty impressive with spikes of over 50 mbps down and sustained
> averages over 21 mbps.  Upstream traffic isn't any more of an issue
> than "normal" online gaming.  Nvidia, Google, and a host of start ups
> are all in the mix with a lot of people predicting Sony and Microsoft
> will be (or are already) working on pure cloud consoles.

And what we need is for them to ensure all these remote rendering farms
are evenly distributed around the world to ensure that 30ms peak latency
is achievable.
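A back-of-envelope sketch of what that 30ms budget means geographically (numbers are mine, not Scott's; it assumes roughly 200km of one-way fibre propagation per millisecond and ignores queuing, serialization and render time):

```python
# Rough reach of an RTT budget over fibre: light propagates at about
# c / 1.47, i.e. ~200 km per millisecond one-way. Queuing, serialization
# and server/render time all eat into the budget and are ignored here.
FIBRE_KM_PER_MS = 200

def max_one_way_km(rtt_budget_ms, overhead_ms=0):
    """Farthest a render farm can sit, given an RTT budget."""
    return (rtt_budget_ms - overhead_ms) / 2 * FIBRE_KM_PER_MS

print(max_one_way_km(30))      # 3000.0 km with zero processing overhead
print(max_one_way_km(30, 10))  # 2000.0 km once 10 ms goes to rendering
```

Under those assumptions, a render farm more than about 3,000km from the player can never meet the budget, which is why regional distribution is non-negotiable.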

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Seth Mattinen

On 7/18/18 7:26 PM, Mike Hammett wrote:

I don't think iPhones have SFP cages.





It's comeback time for IrDA ports!


Comcast outage Southwest Washington?

2018-07-18 Thread Aaron C. de Bruyn via NANOG
There a Comcast outage affecting a few of my locations in SW Washington
state.  We initially had an estimate of 3:26 PM for service restoration.
That got bumped to 7 PM.  Now the phone system isn't giving an ETR and the
phone system says there are excessive hold times.

I'm guessing it's a fiber cut.  Can anyone provide some insight?

Thanks,

-A


Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
I don't think iPhones have SFP cages. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "Keith Medcalf"  
To: "Mike Hammett" , "Mark Tinka"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 2:32:27 PM 
Subject: RE: Proving Gig Speed 


Whats WiFi? Is that the "noise" that escapes from the copper cables? Switch to 
optical fibre, it does not emit RF noise ... 


--- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 


>-Original Message- 
>From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Mike 
>Hammett 
>Sent: Tuesday, 17 July, 2018 08:42 
>To: Mark Tinka 
>Cc: NANOG list 
>Subject: Re: Proving Gig Speed 
> 
>10G to the home will be pointless as more and more people move away 
>from Ethernet to WiFi where the noise floor for most installs 
>prevents anyone from reaching 802.11n speeds, much less whatever 
>alphabet soup comes later. 
> 
> 
> 
> 
>- 
>Mike Hammett 
>Intelligent Computing Solutions 
>http://www.ics-il.com 
> 
>Midwest-IX 
>http://www.midwest-ix.com 
> 
>- Original Message - 
> 
>From: "Mark Tinka"  
>To: "K. Scott Helms"  
>Cc: "NANOG list"  
>Sent: Tuesday, July 17, 2018 7:11:35 AM 
>Subject: Re: Proving Gig Speed 
> 
> 
> 
>On 17/Jul/18 14:07, K. Scott Helms wrote: 
> 
>> 
>> That's absolutely true, but I don't see any real alternatives in 
>some 
>> cases. I've actually built automated testing into some of the CPE 
>> we've deployed and that works pretty well for some models but other 
>> devices don't seem to be able to fill a ~500 mbps link. 
> 
>So what are you going to do when 10Gbps FTTH into the home becomes 
>the norm? 
> 
>Perhaps laptops and servers of the time won't even see this as a 
>rounding error :-\... 
> 
>Mark. 







Re: deploying RPKI based Origin Validation

2018-07-18 Thread Job Snijders
On Wed, Jul 18, 2018 at 05:55:23PM -0400, Randy Bush wrote:
> > Can you elaborate what routers with what software you are using? It
> > surprises me a bit to find routers anno 2018 which can't do OV in
> > some shape or form.
> 
> depends on how picky you are about "some shape or form."

I was thinking along the lines of "perhaps the box can't do RTR but
allows for ROAs to be configured in the run-time config"

> draft-ietf-sidrops-ov-clarify was not written because it is usefully
> implemented by many vendors.

@ all - It would be good if operators ask their vendors if they can get
behind this I-D https://tools.ietf.org/html/draft-ietf-sidrops-ov-clarify

Kind regards,

Job


Re: Proving Gig Speed

2018-07-18 Thread Keith Stokes
At least in the US, Jane also doesn’t really have a choice of her electricity 
provider, so she’s not getting bombarded with advertising from vendors selling 
“Faster WiFi” than the next guy. I don’t get to choose my method of power 
generation and therefore cost per kWh. I’d love to buy $.04 from the Pacific NW 
when I’m in the Southern US.

I’m not a betting guy, but my money says that when self-generation of power hits 
some price point and multiple vendors are trying to get people to buy their 
systems, we’ll get “More amps per X hours of sunlight with our system” and she will care.

On Jul 18, 2018, at 7:01 AM, Mark Tinka  wrote:



On 17/Jul/18 18:12, Andy Ringsmuth wrote:

I suppose in reality it’s no different than any other utility. My home has 200 
amp electrical service. Will I ever use 200 amps at one time? Highly highly 
unlikely. But if my electrical utility wanted to advertise “200 amp service in 
all homes we supply!” they sure could. Would an electrician be able to test it? 
I’m sure there is a way somehow.

If me and everyone on my street tried to use 200 amps all at the same time, 
could the infrastructure handle it? Doubtful. But do I on occasion saturate my 
home fiber 300 mbit synchronous connection? Every now and then yes, but rarely. 
Although if I’m paying for 300 and not getting it, my ISP will be hearing from 
me.

If my electrical utility told me “hey, you can upgrade to 500 amp service for 
no additional charge” would I do it? Sure, what the heck. If my water utility 
said “guess what? You can upgrade to a 2-inch water line at no additional 
charge!” would I do it? Probably yeah, why not?

Would I ever use all that capacity on $random_utility at one time? Of course 
not. But nice to know it’s there if I ever need it.

The difference, of course, between electricity and the Internet is that
there is a lot more information and tools freely available online that
Average Jane can arm herself with to run amok with figuring out whether
she is getting 300Mbps of her 300Mbps from her ISP.

Average Jane couldn't care less about measuring whether she's getting 200
amps of her 200 amps from the power company; likely because there is a
lot more structure with how power is produced and delivered, or more to
the point, a lot less freely available tools and information with which
she can arm herself to run amok with. To her, the power company sucks if
the lights go out. In the worst case, if her power starts a fire, she's
calling the fire department.

Mark.


---

Keith Stokes
Neill Technologies






Re: deploying RPKI based Origin Validation

2018-07-18 Thread Randy Bush
> Can you elaborate what routers with what software you are using? It
> surprises me a bit to find routers anno 2018 which can't do OV in some
> shape or form.

depends on how picky you are about "some shape or form."

draft-ietf-sidrops-ov-clarify was not written because it is usefully
implemented by many vendors.

randy


RE: deploying RPKI based Origin Validation

2018-07-18 Thread Michel Py
> Job Snijders wrote :
> Can you elaborate what routers with what software you are using? It surprises
> me a bit to find routers anno 2018 which can't do OV in some shape or form.

They're not anno 2018! Cisco 3900s with 4 Gigs. Good enough for me; with the 
current growth of the DFZ I may have 10 years left before I need to upgrade. 
I will probably upgrade before that due to bandwidth, but as of now it works 
well enough for me, and upgrading just to get OV is going to be a tough sell.

>> What do I have left : using a subset of RPKI as a blackhole :-(
> If you implement 'invalid == blackhole', and cannot do normal OV - it seems 
> to me that
> you'll be blackholing the actual victim of a BGP hijack? That would seem 
> counter-productive.

I would indeed, but the intent was a subset of invalid: the invalid prefixes 
that nobody _but_ the hijacker announces, so blackholing does not hurt the real 
owner.
In other words: un-announced prefixes that have been hijacked. These are not 
in bogon lists because they are real.

Now I have no illusions: this is not going to solve the world's problems. How 
many of these are actually announced, and how this will play out in the longer 
term, are open questions - but wouldn't that be worth a quick shot?

Michel.



Re: deploying RPKI based Origin Validation

2018-07-18 Thread Job Snijders
On Wed, Jul 18, 2018 at 07:30:48PM +, Michel Py wrote:
> Not in lieu, but when deploying RPKI is not (yet) possible.  My
> routers are not RPKI capable, upgrading will take years (I'm not going
> to upgrade just because I want RPKI).

Can you elaborate what routers with what software you are using? It
surprises me a bit to find routers anno 2018 which can't do OV in some
shape or form.

> What do I have left : using a subset of RPKI as a blackhole :-(

If you implement 'invalid == blackhole', and cannot do normal OV - it
seems to me that you'll be blackholing the actual victim of a BGP
hijack? That would seem counter-productive.

Kind regards,

Job


RE: Proving Gig Speed

2018-07-18 Thread Keith Medcalf


What's WiFi?  Is that the "noise" that escapes from the copper cables?  Switch 
to optical fibre; it does not emit RF noise ...


---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Mike
>Hammett
>Sent: Tuesday, 17 July, 2018 08:42
>To: Mark Tinka
>Cc: NANOG list
>Subject: Re: Proving Gig Speed
>
>10G to the home will be pointless as more and more people move away
>from Ethernet to WiFi where the noise floor for most installs
>prevents anyone from reaching 802.11n speeds, much less whatever
>alphabet soup comes later.
>
>
>
>
>-
>Mike Hammett
>Intelligent Computing Solutions
>http://www.ics-il.com
>
>Midwest-IX
>http://www.midwest-ix.com
>
>- Original Message -
>
>From: "Mark Tinka" 
>To: "K. Scott Helms" 
>Cc: "NANOG list" 
>Sent: Tuesday, July 17, 2018 7:11:35 AM
>Subject: Re: Proving Gig Speed
>
>
>
>On 17/Jul/18 14:07, K. Scott Helms wrote:
>
>>
>> That's absolutely true, but I don't see any real alternatives in
>some
>> cases. I've actually built automated testing into some of the CPE
>> we've deployed and that works pretty well for some models but other
>> devices don't seem to be able to fill a ~500 mbps link.
>
>So what are you going to do when 10Gbps FTTH into the home becomes
>the norm?
>
>Perhaps laptops and servers of the time won't even see this as a
>rounding error :-\...
>
>Mark.






RE: deploying RPKI based Origin Validation

2018-07-18 Thread Michel Py
Mark,

>> Michel Py wrote:
>> If I understand this correctly, I have a suggestion : update these files at 
>> a regular interval (15/20 min) and make them available for download with a 
>> fixed name
>> (not containing the date). Even better : have a route server that announces 
>> these prefixes with a :666 community so people could use it as a blackhole.
>> This would not remove the invalid prefixes from one's router, but at least 
>> would prevent traffic from/to these prefixes.
>> In other words : a route server of prefixes that are RPKI invalid with no 
>> alternative that people could use without having an RPKI setup.
>> This would even work for people who have chosen to accept a default route 
>> from their upstream.
>> I understand this is not ideal; blacklisting a prefix that is RPKI invalid 
>> may actually help the hijacker, but blacklisting a prefix that is RPKI 
>> invalid AND that has no
>> alternative could be useful ? Should be considered a bogon.

> Mark Tinka wrote :
> Hmmh - I suppose if you want to do this in-house, that is fine. But I would 
> not recommend this at large for the entire BGP community.

Agree; I was trying to do this in the spirit of this:
http://arneill-py.sacramento.ca.us/cbbc/
As with any blocklist, it should not be on by default; it should be left to the 
end user to choose whether to use it or not.

> The difference is you are proposing a mechanism that uses existing 
> infrastructure within almost all ISP's (the BGP Community) in lieu of 
> deploying RPKI.

Not in lieu, but when deploying RPKI is not (yet) possible.
My routers are not RPKI capable, upgrading will take years (I'm not going to 
upgrade just because I want RPKI).
My upstreams don't do RPKI; I'm trying to convince them, but it's falling on 
deaf ears.
What do I have left : using a subset of RPKI as a blackhole :-(

> I can't quite imagine the effort needed to implement your suggestion,

Not much at all; I was actually trying to get you to do the RPKI part for me ;-)
Take this script you wrote, which produces the list of prefixes that are RPKI 
invalid AND that have no alternative, and run it every x minutes, publishing 
the output at a fixed URL (no date/time in the name). I will fetch it, inject 
it into ExaBGP to feed my IGP, and voila, done.
Whoever wants to use it can; I'm not trying to impose it on the entire BGP community.
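For what it's worth, the consumer side of that idea is a small ExaBGP process script. The sketch below is my own illustration: the feed URL, blackhole next-hop and one-prefix-per-line format are all assumptions, and the list itself would still come from the script producing RPKI-invalid prefixes with no alternative announcement.

```python
# Hypothetical ExaBGP feed process: fetch a list of RPKI-invalid
# prefixes that have no valid alternative, and announce each one with
# a blackhole next-hop and the RFC 7999 BLACKHOLE community 65535:666.
import sys
import time
import urllib.request

FEED_URL = "https://example.net/invalid-no-alternative.txt"  # placeholder
NEXT_HOP = "192.0.2.1"  # a discard route inside your own network

def fetch(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return {l.strip() for l in resp.read().decode().splitlines() if l.strip()}

def transition(announced, current):
    """ExaBGP API lines that move from one announced prefix set to another."""
    lines = [f"withdraw route {p} next-hop {NEXT_HOP}"
             for p in sorted(announced - current)]
    lines += [f"announce route {p} next-hop {NEXT_HOP} community [65535:666]"
              for p in sorted(current - announced)]
    return lines

def main():
    # Run from exabgp.conf, e.g.:  process feed { run ./feed.py; encoder text; }
    announced = set()
    while True:
        try:
            current = fetch(FEED_URL)
            for line in transition(announced, current):
                sys.stdout.write(line + "\n")
            sys.stdout.flush()
            announced = current
        except OSError:
            pass  # keep the previous state if a fetch fails
        time.sleep(900)  # refresh every 15 minutes
```

ExaBGP would run `main()` as a long-lived process (it is left uncalled here) and read the announce/withdraw lines from stdout; anyone fetching the list could then blackhole internally without running a validator.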


> but I'd rather direct it toward deploying RPKI. At the very least, one just 
> needs reputable RV software, and router code that support RPKI RV.

We probably have to wait until attrition brings us routers that have said code.

Michel.



Re: Proving Gig Speed

2018-07-18 Thread Simon Leinen
> For a horrifying moment, I misread this as Google surfacing
> performance stats via a BGP stream by encoding stat_name:value as
> community:value

> /me goes searching for mass quantities of caffeine

Because you'll be spending the night writing up that Internet-Draft? :-)
-- 
Simon.


Re: Proving Gig Speed

2018-07-18 Thread Brielle Bruns

On 7/17/2018 10:18 AM, Mike Hammett wrote:

I don't think you understand the gravity of the in-home interference issue. 
Unfortunately, neither does the IEEE.

It doesn't need to be in lock-step, but if a significant number of homes have 
issues getting over 100 megabit wirelessly, I'm not sure we need to be 
concerned about 10 gigabit to the home.

I am well aware of the wireless world and Ubiquiti. I've been using Ubiquiti 
(among other brands) for over 10 years and have been a hardware beta tester for 
several of them.


Customers are still harping on me about going wireless on all of their 
desktops.  Since most of our customers are CAD/Design/Building 
companies, during planning, we insist on at least two drops to each 
workstation, preferably 3 or more.


But, every time we go to do an upgrade...

"Why can't we just use wireless?"

Even though 20 minutes prior they're complaining at how slow it is for 
their laptops to open up large files over the network over wifi.


"If you want faster speeds, you'll need to go from AC-Lites to AC-HDs 
with Wave 2.  They're $350 or so each, and since your brand new building 
likes to absorb wifi, you'll need 5-8 of them to cover every possible 
location in the building.  Oh, and you'll need to replace your laptops 
with Wave 2 capable ones, plus Wave 2 PCIe cards for every desktop... 
Except for the cheap $200 AMD APU desktops you bought against our 
recommendations that don't have expansion slots and no USB 3..."


I long for the day when we can get 100mbit throughout a building or 
house reliably.


(I'm a Ubnt hardware tester too, 99% of my customer setups are a mix of 
EdgeRouter, EdgeSwitch, and Unifi Switch and AP setups).





--
Brielle Bruns
The Summit Open Source Development Group
http://www.sosdg.org/ http://www.ahbl.org


Re: Proving Gig Speed

2018-07-18 Thread Julien Goodwin
On 19/07/18 00:27, Mark Tinka wrote:
> All the peering in the world doesn't help if the latency is well over
> 100ms+. That's what we need to fix.

Living in Australia this is an every day experience, especially for
content served out of Europe (or for that matter, Africa).

TCP & below are rarely the biggest problem these days (at least with
TCP-BBR & friends), far too often applications, web services etc. are
simply never tested in an environment with any significant latency.

While some issues may exist for static content loading for which a CDN
can be helpful, that's not helpful for application traffic.
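As a concrete illustration of the latency tax (numbers are mine, not Julien's): a window-limited TCP flow cannot exceed window/RTT, and an application that serialises its requests pays a full RTT for each one, no matter how fat the pipe.

```python
# A single TCP flow is capped at window / RTT, independent of link speed.
def max_throughput_mbps(window_bytes, rtt_ms):
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# Australia <-> Europe, ~300 ms RTT:
print(max_throughput_mbps(64 * 1024, 300))        # ~1.75 Mbps: classic 64 KiB window
print(max_throughput_mbps(4 * 1024 * 1024, 300))  # ~112 Mbps: 4 MiB scaled window

# An application making 50 sequential API calls spends 50 x 0.3 s = 15 s
# on round trips alone -- which no CDN for static assets can fix.
print(50 * 300 / 1000)  # 15.0 seconds
```

This is why tuned stacks (window scaling, BBR) mostly solve the transport layer, while chatty applications that were never tested at high RTT remain slow.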


Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
The game companies (and render farms) also need to work on as extensive peering 
as the top CDNs have been doing. They're getting better, but not quite there 
yet. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "K. Scott Helms"  
To: "mark tinka"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 9:58:09 AM 
Subject: Re: Proving Gig Speed 

> Peering isn't the problem. Proximity to content is. 
> 
> Netflix, Google, Akamai and a few others have presence in Africa already. 
> So those aren't the problem (although for those currently in Africa, not 
> all of the services they offer globally are available here - just a few). 
> 
> A lot of user traffic is not video streaming, so that's where a lot of 
> work is required. In particular, cloud and gaming operators are the ones 
> causing real pain. 
> 
> All the peering in the world doesn't help if the latency is well over 
> 100ms+. That's what we need to fix. 
> 
> Mark. 
> 

Mark, 

I agree completely, I'm working on a paper right now for a conference 
(waiting on Wireshark to finish with my complex filter at the moment) that 
shows what's happening with gaming traffic. What's really interesting is 
how gaming is changing and within the next few years I do expect a lot of 
games to move into the remote rendering world. I've tested several and the 
numbers are pretty substantial. You need to have <=30 ms of latency to 
sustain 1080p gaming and obviously jitter and packet loss are also 
problematic. The traffic is also pretty impressive with spikes of over 50 
mbps down and sustained averages over 21 mbps. Upstream traffic isn't any 
more of an issue than "normal" online gaming. Nvidia, Google, and a host 
of start ups are all in the mix with a lot of people predicting Sony and 
Microsoft will be (or are already) working on pure cloud consoles. 

Scott Helms 



Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
> Peering isn't the problem. Proximity to content is.
>
> Netflix, Google, Akamai and a few others have presence in Africa already.
> So those aren't the problem (although for those currently in Africa, not
> all of the services they offer globally are available here - just a few).
>
> A lot of user traffic is not video streaming, so that's where a lot of
> work is required. In particular, cloud and gaming operators are the ones
> causing real pain.
>
> All the peering in the world doesn't help if the latency is well over
> 100ms+. That's what we need to fix.
>
> Mark.
>

Mark,

I agree completely, I'm working on a paper right now for a conference
(waiting on Wireshark to finish with my complex filter at the moment) that
shows what's happening with gaming traffic.  What's really interesting is
how gaming is changing and within the next few years I do expect a lot of
games to move into the remote rendering world.  I've tested several and the
numbers are pretty substantial.  You need to have <=30 ms of latency to
sustain 1080p gaming and obviously jitter and packet loss are also
problematic.  The traffic is also pretty impressive with spikes of over 50
mbps down and sustained averages over 21 mbps.  Upstream traffic isn't any
more of an issue than "normal" online gaming.  Nvidia, Google, and a host
of start ups are all in the mix with a lot of people predicting Sony and
Microsoft will be (or are already) working on pure cloud consoles.
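For the curious, the two headline figures reduce out of a capture quite simply. A sketch with synthetic numbers (not Scott's data): bin packet sizes into one-second buckets, then take the peak bucket and the mean.

```python
# Reduce (timestamp, bytes) packet records -- e.g. exported from a
# Wireshark/tshark capture -- into a peak 1-second rate and a sustained
# average. Seconds with no traffic are ignored for simplicity.
from collections import defaultdict

def rates_mbps(packets):
    """packets: iterable of (timestamp_seconds, frame_bytes)."""
    buckets = defaultdict(int)
    for ts, size in packets:
        buckets[int(ts)] += size
    per_second = [byte_count * 8 / 1e6 for byte_count in buckets.values()]
    return max(per_second), sum(per_second) / len(per_second)

# Synthetic 3-second sample with a burst in the middle second:
sample = [(0.2, 500_000), (0.7, 2_000_000),    # second 0: 20 Mbps
          (1.1, 4_000_000), (1.8, 3_000_000),  # second 1: 56 Mbps
          (2.5, 2_500_000)]                    # second 2: 20 Mbps
peak, avg = rates_mbps(sample)
print(f"peak {peak:.0f} Mbps, sustained avg {avg:.0f} Mbps")  # peak 56, avg 32
```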

Scott Helms


Re: Proving Gig Speed

2018-07-18 Thread valdis . kletnieks
On Wed, 18 Jul 2018 08:24:15 -0500, Mike Hammett said:
> Check your Google portal for more information as to what Google can do with 
> BGP Communities related to reporting.

For a horrifying moment, I misread this as Google surfacing performance stats 
via a
BGP stream by encoding stat_name:value as community:value

/me goes searching for mass quantities of caffeine




Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 16:22, K. Scott Helms wrote:

> Mark,
>
> I am glad I don't have your challenges :)
>
> What's the Netflix (or other substantial OTT video provider) situation
> for direct peers?  It's pretty easy and cheap for North American
> operators to get settlement free peering to Netflix, Amazon, Youtube
> and others but I don't know what that looks like in Africa.

Peering isn't the problem. Proximity to content is.

Netflix, Google, Akamai and a few others have presence in Africa
already. So those aren't the problem (although for those currently in
Africa, not all of the services they offer globally are available here -
just a few).

A lot of user traffic is not video streaming, so that's where a lot of
work is required. In particular, cloud and gaming operators are the ones
causing real pain.

All the peering in the world doesn't help if the latency is well over
100ms+. That's what we need to fix.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
Mark,

I am glad I don't have your challenges :)

What's the Netflix (or other substantial OTT video provider) situation for
direct peers?  It's pretty easy and cheap for North American operators to
get settlement free peering to Netflix, Amazon, Youtube and others but I
don't know what that looks like in Africa.

Scott Helms

On Wed, Jul 18, 2018 at 10:00 AM Mark Tinka  wrote:

>
>
> On 18/Jul/18 15:41, K. Scott Helms wrote:
>
>
>
> That's why I vastly prefer stats from the actual CDNs and content
> providers that aren't generated by speed tests.  They're generated by
> measuring the actual performance of the service they deliver.  Now, that
> won't prevent burden shifting, but it does get rid of a lot of the problems
> you bring up.  Youtube for example wouldn't rate a video stream as good if
> the packet loss were high because it's actually looking at the bit rate of
> successfully delivered encapsulated video frames. I _think_ the same is true
> of Netflix, though they also offer a real-time test as well, which frankly
> isn't as helpful for monitoring; but getting a quick test to the Netflix
> node you'd normally use can be nice in some cases.
>
>
> Agreed.
>
> In our market, we've generally not struggled with users and their
> experience for services hosted locally in-country.
>
> So in addition to providing good tools for operators and eyeballs to
> measure experience, the biggest win will come from the content folk and
> CDN's getting their services inside our market.
>
> Mark.
>
>


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 15:48, Luke Guillory wrote:

> https://isp.google.com
>
> Thought I think this is only for when you have peering, someone can correct 
> me if that's incorrect.

And also if you operate a GGC (which is very likely if you're peering).

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 15:41, K. Scott Helms wrote:

>
>
> That's why I vastly prefer stats from the actual CDNs and content
> providers that aren't generated by speed tests.  They're generated by
> measuring the actual performance of the service they deliver.  Now,
> that won't prevent burden shifting, but it does get rid of a lot of
> the problems you bring up.  Youtube for example wouldn't rate a video
> stream as good if the packet loss were high because it's actually
> looking at the bit rate of successfully delivered encapsulated video
> frames I _think_ the same is true of Netflix though they also offer a
> real time test as well which frankly isn't as helpful for monitoring
> but getting a quick test to the Netflix node you'd normally use can be
> nice in some cases.

Agreed.

In our market, we've generally not struggled with users and their
experience for services hosted locally in-country.

So in addition to providing good tools for operators and eyeballs to
measure experience, the biggest win will come from the content folk and
CDNs getting their services inside our market.

Mark.



Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
That seems only to be for direct peers, Mike.

On Wed, Jul 18, 2018 at 9:53 AM Mike Hammett  wrote:

> https://isp.google.com
>
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
>
> Midwest-IX
> http://www.midwest-ix.com
>
> - Original Message -
>
> From: "K. Scott Helms" 
> To: "Mike Hammett" 
> Cc: "NANOG list" 
> Sent: Wednesday, July 18, 2018 8:45:22 AM
> Subject: Re: Proving Gig Speed
>
>
> Mike,
>
> What portal would that be? Do you have a URL?
>
>
> On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett < na...@ics-il.net > wrote:
>
>
> Check your Google portal for more information as to what Google can do
> with BGP Communities related to reporting.
>
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
>
> Midwest-IX
> http://www.midwest-ix.com
>
> - Original Message -
>
> From: "K. Scott Helms" < kscott.he...@gmail.com >
> To: "mark tinka" < mark.ti...@seacom.mu >
> Cc: "NANOG list" < nanog@nanog.org >
> Sent: Wednesday, July 18, 2018 7:40:31 AM
> Subject: Re: Proving Gig Speed
>
> Agreed, and it's one of the fundamental problems that a speed test is (and
> can only) measure the speeds from point A to point B (often both inside
> the
> service provider's network) when the customer is concerned with traffic to
> and from point C off in someone else's network altogether. It's one of the
> reasons that I think we have to get more comfortable and more
> collaborative
> with the CDN providers as well as the large sources of traffic. Netflix,
> Youtube, and I'm sure others have their own consumer facing performance
> testing that is _much_ more applicable to most consumers as compared to
> the
> "normal" technician test and measurement approach or even the service
> assurance that you get from normal performance monitoring. What I'd really
> like to see is a way to measure network performance from the CO/head
> end/PoP and also get consumer level reporting from these kinds of
> services. If Google/Netflix/Amazon Video/$others would get on board with
> this idea it would make all our lives simpler.
>
> Providing individual users stats is nice, but if these guys really want to
> improve service it would be great to get aggregate reporting by ASN. You
> can get a rough idea by looking at your overall graph from Google, but
> it's
> lacking a lot of detail and there's no simple way to compare that to a
> head
> end/CO test versus specific end users.
>
> https://www.google.com/get/videoqualityreport/
> https://fast.com/#
>
>
>
> On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka < mark.ti...@seacom.mu >
> wrote:
>
> >
> >
> > On 18/Jul/18 14:00, K. Scott Helms wrote:
> >
> >
> > That's absolutely a concern Mark, but most of the CPE vendors that
> support
> > doing this are providing enough juice to keep up with their max
> > forwarding/routing data rates. I don't see 10 Gbps in residential Internet
> > service being normal for quite a long time, even if the port itself is
> > capable of 10 Gbps. We have this issue today with commercial customers, but
> > it's generally not as much of a problem because the commercial CPE get
> > their usage graphed and the commercial CPE have more capabilities for
> > testing.
> >
> >
> > I suppose the point I was trying to make is when does it stop being
> > feasible to test each and every piece of bandwidth you deliver to a
> > customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or
> 3.2Gbps,
> > or 5.1Gbps... basically, the rabbit hole.
> >
> > Like Saku, I am more interested in other fundamental metrics that could
> > impact throughput such as latency, packet loss and jitter. Bandwidth,
> > itself, is easy to measure with your choice of SNMP poller + 5 minutes.
> But
> > when you're trying to explain to a simple customer buying 100Mbps that a
> > break in your Skype video cannot be diagnosed with a throughput speed
> test,
> > they don't/won't get it.
> >
> > In Africa, for example, customers in only one of our markets are so
> > obsessed with speed tests. But not to speed test servers that are
> > in-country... they want to test servers that sit in Europe, North
> America,
> > South America and Asia-Pac. With the latency averaging between 140ms -
> > 400ms across all of those regions from source, the amount of energy
> spent
> > explaining to customers that there is no way you can saturate your
> > delivered capacity beyond a couple of Mbps using Ookla and friends is
> > energy I could spend drinking wine and having a medium-rare steak,
> instead.
> >
> > For us, at least, aside from going on a mass education drive in this
> > particular market, the ultimate solution is just getting all that
> content
> > localized in-country or in-region. Once that latency comes down and the
> > resources are available locally, the whole speed test debacle will
> easily
> > fall away, because the source of these speed tests is simply how
> physically
> > far the content is. Is this an easy task - 
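The metrics Mark mentions in the quoted text above — latency, packet loss and jitter — are easy to compute from raw probe data. A minimal sketch (a hypothetical helper, not anything from the thread) that reduces a list of ping RTT samples to mean latency, loss percentage, and a simple jitter figure (mean absolute difference of consecutive RTTs, a rough stand-in for RFC 3550 interarrival jitter):

```python
# Reduce raw ping results to the metrics that actually predict user
# experience: mean latency, packet loss, and jitter.
# rtts_ms holds one RTT (in ms) per probe; None marks a lost probe.

def summarise(rtts_ms):
    sent = len(rtts_ms)
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (sent - len(received)) / sent if sent else 0.0
    mean_rtt = sum(received) / len(received) if received else None
    # Jitter: mean absolute difference between consecutive RTT samples.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return {"mean_rtt_ms": mean_rtt, "loss_pct": loss_pct, "jitter_ms": jitter}

# Five probes, one lost: 20% loss even though the link "has bandwidth".
print(summarise([20.1, 22.3, None, 19.8, 21.0]))
```

A link can ace a throughput test and still score badly on these numbers, which is exactly the broken-Skype-video case described above.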

Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
Correct. I figured most eyeballs had Google peering or were looking to get it. 

I was talking with CVF at ChiNOG about some of the shortcomings of the Google 
ISP Portal. He saw value in making the portal available to all ISPs. I don't 
know when (if) that will be available. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "Luke Guillory"  
To: "K. Scott Helms" , "Mike Hammett" 
 
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 8:48:32 AM 
Subject: RE: Proving Gig Speed 

https://isp.google.com 

Though I think this is only for when you have peering; someone can correct me
if that's incorrect. 

ns 





-Original Message- 
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of K. Scott Helms 
Sent: Wednesday, July 18, 2018 8:45 AM 
To: Mike Hammett 
Cc: NANOG list 
Subject: Re: Proving Gig Speed 

Mike, 

What portal would that be? Do you have a URL? 

On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett  wrote: 

> Check your Google portal for more information as to what Google can do 
> with BGP Communities related to reporting. 
> 
> 
> 
> 
> - 
> Mike Hammett 
> Intelligent Computing Solutions 
> http://www.ics-il.com 
> 
> Midwest-IX 
> http://www.midwest-ix.com 
> 
> - Original Message - 
> 
> From: "K. Scott Helms"  
> To: "mark tinka"  
> Cc: "NANOG list"  
> Sent: Wednesday, July 18, 2018 7:40:31 AM 
> Subject: Re: Proving Gig Speed 
> 
> Agreed, and it's one of the fundamental problems that a speed test is 
> (and can only) measure the speeds from point A to point B (often both 
> inside the service provider's network) when the customer is concerned 
> with traffic to and from point C off in someone else's network 
> altogether. It's one of the reasons that I think we have to get more 
> comfortable and more collaborative with the CDN providers as well as 
> the large sources of traffic. Netflix, Youtube, and I'm sure others 
> have their own consumer facing performance testing that is _much_ more 
> applicable to most consumers as compared to the "normal" technician 
> test and measurement approach or even the service assurance that you 
> get from normal performance monitoring. What I'd really like to see is 
> a way to measure network performance from the CO/head end/PoP and also 
> get consumer level reporting from these kinds of services. If 
> Google/Netflix/Amazon Video/$others would get on board with this idea 
> it would make all our lives simpler. 
> 
> Providing individual users stats is nice, but if these guys really 
> want to improve service it would be great to get aggregate reporting 
> by ASN. You can get a rough idea by looking at your overall graph from 
> Google, but it's lacking a lot of detail and there's no simple way to 
> compare that to a head end/CO test versus specific end users. 
> 
> https://www.google.com/get/videoqualityreport/ 
> https://fast.com/# 
> 
> 
> 
> On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka  wrote: 
> 
> > 
> > 
> > On 18/Jul/18 14:00, K. Scott Helms wrote: 
> > 
> > 
> > That's absolutely a concern Mark, but most of the CPE vendors that 
> support 
> > doing this are providing enough juice to keep up with their max 
> > forwarding/routing data rates. I don't see 10 Gbps in residential Internet 
> > service being normal for quite a long time, even if the port itself is 
> > capable of 10 Gbps. We have this issue today with commercial customers, but 
> > it's generally not as much of a problem because the commercial CPE 
> > get their usage graphed and the commercial CPE have more 
> > capabilities for testing. 
> > 
> > 
> > I suppose the point I was trying to make is when does it stop being 
> > feasible to test each and every piece of bandwidth you deliver to a 
> > customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 
> 3.2Gbps, 
> > or 5.1Gbps... basically, the rabbit hole. 
> > 
> > Like Saku, I am more interested in other fundamental metrics that 
> > could impact throughput such as latency, packet loss and jitter. 
> > Bandwidth, itself, is easy to measure with your choice of SNMP poller + 5 
> > minutes. 
> But 
> > when you're trying to explain to a simple customer buying 100Mbps 
> > that a break in your Skype video cannot be diagnosed with a 
> > throughput speed 
> test, 
> > they don't/won't get it. 
> > 
> > In Africa, for example, customers in only one of our markets are so 
> > obsessed with speed tests. But not to speed test servers that are 
> > in-country... they want to test servers that sit in Europe, North 
> America, 
> > South America and Asia-Pac. With the latency averaging between 140ms 
> > - 400ms across all of those regions from source, the amount of 
> > energy 
> spent 
> > explaining to customers that there is no way you can saturate your 
> > delivered capacity beyond a couple of Mbps using Ookla and friends 
> > is energy I could spend drinking wine and 

RE: Proving Gig Speed

2018-07-18 Thread Luke Guillory
https://isp.google.com

Though I think this is only for when you have peering; someone can correct me
if that's incorrect.

ns





-Original Message-
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of K. Scott Helms
Sent: Wednesday, July 18, 2018 8:45 AM
To: Mike Hammett
Cc: NANOG list
Subject: Re: Proving Gig Speed

Mike,

What portal would that be?  Do you have a URL?

On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett  wrote:

> Check your Google portal for more information as to what Google can do
> with BGP Communities related to reporting.
>
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
>
> Midwest-IX
> http://www.midwest-ix.com
>
> - Original Message -
>
> From: "K. Scott Helms" 
> To: "mark tinka" 
> Cc: "NANOG list" 
> Sent: Wednesday, July 18, 2018 7:40:31 AM
> Subject: Re: Proving Gig Speed
>
> Agreed, and it's one of the fundamental problems that a speed test is
> (and can only) measure the speeds from point A to point B (often both
> inside the service provider's network) when the customer is concerned
> with traffic to and from point C off in someone else's network
> altogether. It's one of the reasons that I think we have to get more
> comfortable and more collaborative with the CDN providers as well as
> the large sources of traffic. Netflix, Youtube, and I'm sure others
> have their own consumer facing performance testing that is _much_ more
> applicable to most consumers as compared to the "normal" technician
> test and measurement approach or even the service assurance that you
> get from normal performance monitoring. What I'd really like to see is
> a way to measure network performance from the CO/head end/PoP and also
> get consumer level reporting from these kinds of services. If
> Google/Netflix/Amazon Video/$others would get on board with this idea
> it would make all our lives simpler.
>
> Providing individual users stats is nice, but if these guys really
> want to improve service it would be great to get aggregate reporting
> by ASN. You can get a rough idea by looking at your overall graph from
> Google, but it's lacking a lot of detail and there's no simple way to
> compare that to a head end/CO test versus specific end users.
>
> https://www.google.com/get/videoqualityreport/
> https://fast.com/#
>
>
>
> On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka  wrote:
>
> >
> >
> > On 18/Jul/18 14:00, K. Scott Helms wrote:
> >
> >
> > That's absolutely a concern Mark, but most of the CPE vendors that
> support
> > doing this are providing enough juice to keep up with their max
> > forwarding/routing data rates. I don't see 10 Gbps in residential Internet
> > service being normal for quite a long time, even if the port itself is
> > capable of 10 Gbps. We have this issue today with commercial customers, but
> > it's generally not as much of a problem because the commercial CPE
> > get their usage graphed and the commercial CPE have more
> > capabilities for testing.
> >
> >
> > I suppose the point I was trying to make is when does it stop being
> > feasible to test each and every piece of bandwidth you deliver to a
> > customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or
> 3.2Gbps,
> > or 5.1Gbps... basically, the rabbit hole.
> >
> > Like Saku, I am more interested in other fundamental metrics that
> > could impact throughput such as latency, packet loss and jitter.
> > Bandwidth, itself, is easy to measure with your choice of SNMP poller + 5 
> > minutes.
> But
> > when you're trying to explain to a simple customer buying 100Mbps
> > that a break in your Skype video cannot be diagnosed with a
> > throughput speed
> test,
> > they don't/won't get it.
> >
> > In Africa, for example, customers in only one of our markets are so
> > obsessed with speed tests. But not to speed test servers that are
> > in-country... they want to test servers that sit in Europe, North
> America,
> > South America and Asia-Pac. With the latency averaging between 140ms
> > - 400ms across all of those regions from source, the amount of
> > energy
> spent
> > explaining to customers that there is no way you can saturate your
> > delivered capacity beyond a couple of Mbps using Ookla and friends
> > is energy I could spend drinking wine and having a medium-rare
> > steak,
> instead.
> >
> > For us, at least, aside from going on a mass education drive in this
> > particular market, the ultimate solution is just getting all that
> content
> > localized in-country or in-region. Once that latency comes down and
> > the resources are available locally, the whole speed test debacle
> > will
> easily
> > fall away, because the source of these speed tests is simply how
> physically
> > far the content is. Is this an easy task - hell no; but slamming
> > your
> head
> > against a wall over and over is no fun either.
> >
> > Mark.
> >
>
>
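Mark's remark in the quoted text — that at 140ms - 400ms of latency a speed test cannot saturate the delivered capacity beyond a couple of Mbps — follows directly from the single-stream TCP ceiling of window divided by RTT. A minimal sketch with illustrative numbers (the classic 64 KB window is an assumption for the example, not a figure from the thread):

```python
# Upper bound on single-stream TCP throughput: the sender can have at
# most one window of unacknowledged data in flight per round trip.

def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

# The RTT range quoted above for intercontinental tests from that market.
for rtt_ms in (140, 200, 400):
    mbps = max_tcp_throughput_mbps(65535, rtt_ms)  # 64 KB window
    print(f"{rtt_ms} ms -> at most {mbps:.2f} Mbps per stream")
```

With window scaling the ceiling rises, but the shape of the problem stays the same: the farther away the test server, the less the result says about the access link itself.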



Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
https://isp.google.com 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "K. Scott Helms"  
To: "Mike Hammett"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 8:45:22 AM 
Subject: Re: Proving Gig Speed 


Mike, 

What portal would that be? Do you have a URL? 


On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett < na...@ics-il.net > wrote: 


Check your Google portal for more information as to what Google can do with BGP 
Communities related to reporting. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message - 

From: "K. Scott Helms" < kscott.he...@gmail.com > 
To: "mark tinka" < mark.ti...@seacom.mu > 
Cc: "NANOG list" < nanog@nanog.org > 
Sent: Wednesday, July 18, 2018 7:40:31 AM 
Subject: Re: Proving Gig Speed 

Agreed, and it's one of the fundamental problems that a speed test is (and 
can only) measure the speeds from point A to point B (often both inside the 
service provider's network) when the customer is concerned with traffic to 
and from point C off in someone else's network altogether. It's one of the 
reasons that I think we have to get more comfortable and more collaborative 
with the CDN providers as well as the large sources of traffic. Netflix, 
Youtube, and I'm sure others have their own consumer facing performance 
testing that is _much_ more applicable to most consumers as compared to the 
"normal" technician test and measurement approach or even the service 
assurance that you get from normal performance monitoring. What I'd really 
like to see is a way to measure network performance from the CO/head 
end/PoP and also get consumer level reporting from these kinds of 
services. If Google/Netflix/Amazon Video/$others would get on board with 
this idea it would make all our lives simpler. 

Providing individual users stats is nice, but if these guys really want to 
improve service it would be great to get aggregate reporting by ASN. You 
can get a rough idea by looking at your overall graph from Google, but it's 
lacking a lot of detail and there's no simple way to compare that to a head 
end/CO test versus specific end users. 

https://www.google.com/get/videoqualityreport/ 
https://fast.com/# 



On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka < mark.ti...@seacom.mu > wrote: 

> 
> 
> On 18/Jul/18 14:00, K. Scott Helms wrote: 
> 
> 
> That's absolutely a concern Mark, but most of the CPE vendors that support 
> doing this are providing enough juice to keep up with their max 
> forwarding/routing data rates. I don't see 10 Gbps in residential Internet 
> service being normal for quite a long time, even if the port itself is 
> capable of 10 Gbps. We have this issue today with commercial customers, but 
> it's generally not as much of a problem because the commercial CPE get 
> their usage graphed and the commercial CPE have more capabilities for 
> testing. 
> 
> 
> I suppose the point I was trying to make is when does it stop being 
> feasible to test each and every piece of bandwidth you deliver to a 
> customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps, 
> or 5.1Gbps... basically, the rabbit hole. 
> 
> Like Saku, I am more interested in other fundamental metrics that could 
> impact throughput such as latency, packet loss and jitter. Bandwidth, 
> itself, is easy to measure with your choice of SNMP poller + 5 minutes. But 
> when you're trying to explain to a simple customer buying 100Mbps that a 
> break in your Skype video cannot be diagnosed with a throughput speed test, 
> they don't/won't get it. 
> 
> In Africa, for example, customers in only one of our markets are so 
> obsessed with speed tests. But not to speed test servers that are 
> in-country... they want to test servers that sit in Europe, North America, 
> South America and Asia-Pac. With the latency averaging between 140ms - 
> 400ms across all of those regions from source, the amount of energy spent 
> explaining to customers that there is no way you can saturate your 
> delivered capacity beyond a couple of Mbps using Ookla and friends is 
> energy I could spend drinking wine and having a medium-rare steak, instead. 
> 
> For us, at least, aside from going on a mass education drive in this 
> particular market, the ultimate solution is just getting all that content 
> localized in-country or in-region. Once that latency comes down and the 
> resources are available locally, the whole speed test debacle will easily 
> fall away, because the source of these speed tests is simply how physically 
> far the content is. Is this an easy task - hell no; but slamming your head 
> against a wall over and over is no fun either. 
> 
> Mark. 
> 






Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
Fast.com will pull from multiple nodes at the same time. I think there were 
four streams on the one I looked at: two to the on-net OCA and two that went 
off-net elsewhere. One of those off-net nodes was in the same country, but 
not very near. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "K. Scott Helms"  
To: "mark tinka"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 8:41:41 AM 
Subject: Re: Proving Gig Speed 

On Wed, Jul 18, 2018 at 9:01 AM Mark Tinka  wrote: 

> 
> Personally, I don't think the content networks and CDNs should focus on 
> developing yet-another-speed-test-server, because then they are just 
> pushing the problem back to the ISP. I believe they should better spend 
> their time: 
> 
> - Delivering as-near-to 100% of all of their services to all regions, 
> cities, data centres as they possibly can. 
> 
> 
> - Providing tools for network operators as well as their consumers 
> that are biased toward the expected quality of experience, rather than how 
> fast their bandwidth is. A 5Gbps link full of packet loss does not a 
> service make - but what does that translate into for the type of service 
> the content network or CDN is delivering? 
> 
> Mark. 
> 
That's why I vastly prefer stats from the actual CDNs and content providers 
that aren't generated by speed tests. They're generated by measuring the 
actual performance of the service they deliver. Now, that won't prevent 
burden shifting, but it does get rid of a lot of the problems you bring 
up. Youtube for example wouldn't rate a video stream as good if the packet 
loss were high because it's actually looking at the bit rate of 
successfully delivered encapsulated video frames. I _think_ the same is true 
of Netflix, though they also offer a real-time test as well which frankly 
isn't as helpful for monitoring but getting a quick test to the Netflix 
node you'd normally use can be nice in some cases. 



Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
Mike,

What portal would that be?  Do you have a URL?

On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett  wrote:

> Check your Google portal for more information as to what Google can do
> with BGP Communities related to reporting.
>
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
>
> Midwest-IX
> http://www.midwest-ix.com
>
> - Original Message -
>
> From: "K. Scott Helms" 
> To: "mark tinka" 
> Cc: "NANOG list" 
> Sent: Wednesday, July 18, 2018 7:40:31 AM
> Subject: Re: Proving Gig Speed
>
> Agreed, and it's one of the fundamental problems that a speed test is (and
> can only) measure the speeds from point A to point B (often both inside
> the
> service provider's network) when the customer is concerned with traffic to
> and from point C off in someone else's network altogether. It's one of the
> reasons that I think we have to get more comfortable and more
> collaborative
> with the CDN providers as well as the large sources of traffic. Netflix,
> Youtube, and I'm sure others have their own consumer facing performance
> testing that is _much_ more applicable to most consumers as compared to
> the
> "normal" technician test and measurement approach or even the service
> assurance that you get from normal performance monitoring. What I'd really
> like to see is a way to measure network performance from the CO/head
> end/PoP and also get consumer level reporting from these kinds of
> services. If Google/Netflix/Amazon Video/$others would get on board with
> this idea it would make all our lives simpler.
>
> Providing individual users stats is nice, but if these guys really want to
> improve service it would be great to get aggregate reporting by ASN. You
> can get a rough idea by looking at your overall graph from Google, but
> it's
> lacking a lot of detail and there's no simple way to compare that to a
> head
> end/CO test versus specific end users.
>
> https://www.google.com/get/videoqualityreport/
> https://fast.com/#
>
>
>
> On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka  wrote:
>
> >
> >
> > On 18/Jul/18 14:00, K. Scott Helms wrote:
> >
> >
> > That's absolutely a concern Mark, but most of the CPE vendors that
> support
> > doing this are providing enough juice to keep up with their max
> > forwarding/routing data rates. I don't see 10 Gbps in residential
> Internet
> > service being normal for quite a long time off even if the port itself
> is
> > capable of 10Gbps. We have this issue today with commercial customers,
> but
> > it's generally not as a much of a problem because the commercial CPE get
> > their usage graphed and the commercial CPE have more capabilities for
> > testing.
> >
> >
> > I suppose the point I was trying to make is when does it stop being
> > feasible to test each and every piece of bandwidth you deliver to a
> > customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or
> 3.2Gbps,
> > or 5.1Gbps... basically, the rabbit hole.
> >
> > Like Saku, I am more interested in other fundamental metrics that could
> > impact throughput such as latency, packet loss and jitter. Bandwidth,
> > itself, is easy to measure with your choice of SNMP poller + 5 minutes.
> But
> > when you're trying to explain to a simple customer buying 100Mbps that a
> > break in your Skype video cannot be diagnosed with a throughput speed
> test,
> > they don't/won't get it.
> >
> > In Africa, for example, customers in only one of our markets are so
> > obsessed with speed tests. But not to speed test servers that are
> > in-country... they want to test servers that sit in Europe, North
> America,
> > South America and Asia-Pac. With the latency averaging between 140ms -
> > 400ms across all of those regions from source, the amount of energy
> spent
> > explaining to customers that there is no way you can saturate your
> > delivered capacity beyond a couple of Mbps using Ookla and friends is
> > energy I could spend drinking wine and having a medium-rare steak,
> instead.
> >
> > For us, at least, aside from going on a mass education drive in this
> > particular market, the ultimate solution is just getting all that
> content
> > localized in-country or in-region. Once that latency comes down and the
> > resources are available locally, the whole speed test debacle will
> easily
> > fall away, because the source of these speed tests is simply how
> physically
> > far the content is. Is this an easy task - hell no; but slamming your
> head
> > against a wall over and over is no fun either.
> >
> > Mark.
> >
>
>


Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
On Wed, Jul 18, 2018 at 9:01 AM Mark Tinka  wrote:

>
> Personally, I don't think the content networks and CDNs should focus on
> developing yet-another-speed-test-server, because then they are just
> pushing the problem back to the ISP. I believe they should better spend
> their time:
>
>- Delivering as-near-to 100% of all of their services to all regions,
>cities, data centres as they possibly can.
>
>
>- Providing tools for network operators as well as their consumers
>that are biased toward the expected quality of experience, rather than how
>fast their bandwidth is. A 5Gbps link full of packet loss does not a
>service make - but what does that translate into for the type of service
>the content network or CDN is delivering?
>
> Mark.
>
That's why I vastly prefer stats from the actual CDNs and content providers
that aren't generated by speed tests.  They're generated by measuring the
actual performance of the service they deliver.  Now, that won't prevent
burden shifting, but it does get rid of a lot of the problems you bring
up.  Youtube for example wouldn't rate a video stream as good if the packet
loss were high because it's actually looking at the bit rate of
successfully delivered encapsulated video frames. I _think_ the same is true
of Netflix, though they also offer a real-time test as well which frankly
isn't as helpful for monitoring but getting a quick test to the Netflix
node you'd normally use can be nice in some cases.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 15:24, Mike Hammett wrote:

> More speedtest and quality reporting sites/services (including internal to 
> big content) seem more about blaming the ISP than providing the ISP usable 
> information to fix it. 

Agreed.

IIRC, this all began with http://www.dslreports.com/speedtest (I can't
think of another speed test resource at the time) back in the late
'90s... of course, it assumed all ISPs and their eyeballs were in
North America.

It's been downhill ever since.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
More speedtest and quality reporting sites/services (including internal to big 
content) seem more about blaming the ISP than providing the ISP usable 
information to fix it. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "K. Scott Helms"  
To: "mark tinka"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 7:40:31 AM 
Subject: Re: Proving Gig Speed 

Agreed, and it's one of the fundamental problems that a speed test is (and 
can only) measure the speeds from point A to point B (often both inside the 
service provider's network) when the customer is concerned with traffic to 
and from point C off in someone else's network altogether. It's one of the 
reasons that I think we have to get more comfortable and more collaborative 
with the CDN providers as well as the large sources of traffic. Netflix, 
Youtube, and I'm sure others have their own consumer facing performance 
testing that is _much_ more applicable to most consumers as compared to the 
"normal" technician test and measurement approach or even the service 
assurance that you get from normal performance monitoring. What I'd really 
like to see is a way to measure network performance from the CO/head 
end/PoP and also get consumer level reporting from these kinds of 
services. If Google/Netflix/Amazon Video/$others would get on board with 
this idea it would make all our lives simpler. 

Providing individual users stats is nice, but if these guys really want to 
improve service it would be great to get aggregate reporting by ASN. You 
can get a rough idea by looking at your overall graph from Google, but it's 
lacking a lot of detail and there's no simple way to compare that to a head 
end/CO test versus specific end users. 

https://www.google.com/get/videoqualityreport/ 
https://fast.com/# 



On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka  wrote: 

> 
> 
> On 18/Jul/18 14:00, K. Scott Helms wrote: 
> 
> 
> That's absolutely a concern Mark, but most of the CPE vendors that support 
> doing this are providing enough juice to keep up with their max 
> forwarding/routing data rates. I don't see 10 Gbps in residential Internet 
> service being normal for quite a long time, even if the port itself is 
> capable of 10 Gbps. We have this issue today with commercial customers, but 
> it's generally not as much of a problem because the commercial CPE get 
> their usage graphed and the commercial CPE have more capabilities for 
> testing. 
> 
> 
> I suppose the point I was trying to make is when does it stop being 
> feasible to test each and every piece of bandwidth you deliver to a 
> customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps, 
> or 5.1Gbps... basically, the rabbit hole. 
> 
> Like Saku, I am more interested in other fundamental metrics that could 
> impact throughput such as latency, packet loss and jitter. Bandwidth, 
> itself, is easy to measure with your choice of SNMP poller + 5 minutes. But 
> when you're trying to explain to a simple customer buying 100Mbps that a 
> break in your Skype video cannot be diagnosed with a throughput speed test, 
> they don't/won't get it. 
> 
> In Africa, for example, customers in only one of our markets are so 
> obsessed with speed tests. But not to speed test servers that are 
> in-country... they want to test servers that sit in Europe, North America, 
> South America and Asia-Pac. With the latency averaging between 140ms - 
> 400ms across all of those regions from source, the amount of energy spent 
> explaining to customers that there is no way you can saturate your 
> delivered capacity beyond a couple of Mbps using Ookla and friends is 
> energy I could spend drinking wine and having a medium-rare steak, instead. 
> 
> For us, at least, aside from going on a mass education drive in this 
> particular market, the ultimate solution is just getting all that content 
> localized in-country or in-region. Once that latency comes down and the 
> resources are available locally, the whole speed test debacle will easily 
> fall away, because the source of these speed tests is simply how physically 
> far the content is. Is this an easy task - hell no; but slamming your head 
> against a wall over and over is no fun either. 
> 
> Mark. 
> 



Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
Check your Google portal for more information as to what Google can do with BGP 
Communities related to reporting. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "K. Scott Helms"  
To: "mark tinka"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 7:40:31 AM 
Subject: Re: Proving Gig Speed 

Agreed, and it's one of the fundamental problems that a speed test measures (and 
can only measure) the speeds from point A to point B (often both inside the 
service provider's network) when the customer is concerned with traffic to 
and from point C off in someone else's network altogether. It's one of the 
reasons that I think we have to get more comfortable and more collaborative 
with the CDN providers as well as the large sources of traffic. Netflix, 
Youtube, and I'm sure others have their own consumer facing performance 
testing that is _much_ more applicable to most consumers as compared to the 
"normal" technician test and measurement approach or even the service 
assurance that you get from normal performance monitoring. What I'd really 
like to see is a way to measure network performance from the CO/head 
end/PoP and also get consumer level reporting from these kinds of 
services. If Google/Netflix/Amazon Video/$others would get on board with 
this idea it would make all our lives simpler. 

Providing individual users' stats is nice, but if these guys really want to 
improve service it would be great to get aggregate reporting by ASN. You 
can get a rough idea by looking at your overall graph from Google, but it's 
lacking a lot of detail and there's no simple way to compare that to a head 
end/CO test versus specific end users. 

https://www.google.com/get/videoqualityreport/ 
https://fast.com/# 



On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka  wrote: 

> 
> 
> On 18/Jul/18 14:00, K. Scott Helms wrote: 
> 
> 
> That's absolutely a concern Mark, but most of the CPE vendors that support 
> doing this are providing enough juice to keep up with their max 
> forwarding/routing data rates. I don't see 10 Gbps in residential Internet 
> service being normal for quite a long time off even if the port itself is 
> capable of 10Gbps. We have this issue today with commercial customers, but 
> it's generally not as much of a problem because the commercial CPE get 
> their usage graphed and the commercial CPE have more capabilities for 
> testing. 
> 
> 
> I suppose the point I was trying to make is when does it stop being 
> feasible to test each and every piece of bandwidth you deliver to a 
> customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps, 
> or 5.1Gbps... basically, the rabbit hole. 
> 
> Like Saku, I am more interested in other fundamental metrics that could 
> impact throughput such as latency, packet loss and jitter. Bandwidth, 
> itself, is easy to measure with your choice of SNMP poller + 5 minutes. But 
> when you're trying to explain to a simple customer buying 100Mbps that a 
> break in your Skype video cannot be diagnosed with a throughput speed test, 
> they don't/won't get it. 
> 
> In Africa, for example, customers in only one of our markets are so 
> obsessed with speed tests. But not to speed test servers that are 
> in-country... they want to test servers that sit in Europe, North America, 
> South America and Asia-Pac. With the latency averaging between 140ms - 
> 400ms across all of those regions from source, the amount of energy spent 
> explaining to customers that there is no way you can saturate your 
> delivered capacity beyond a couple of Mbps using Ookla and friends is 
> energy I could spend drinking wine and having a medium-rare steak, instead. 
> 
> For us, at least, aside from going on a mass education drive in this 
> particular market, the ultimate solution is just getting all that content 
> localized in-country or in-region. Once that latency comes down and the 
> resources are available locally, the whole speed test debacle will easily 
> fall away, because the source of these speed tests is simply how physically 
> far the content is. Is this an easy task - hell no; but slamming your head 
> against a wall over and over is no fun either. 
> 
> Mark. 
> 



Re: NANOG list errors

2018-07-18 Thread Mike Hammett
I got a whole bunch overnight as well. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "Andy Ringsmuth"  
To: "NANOG list"  
Sent: Tuesday, July 17, 2018 11:24:51 PM 
Subject: NANOG list errors 

Fellow list members, 

The last several days, I’ve been receiving mail forwarding loop errors for the 
list. I’ll receive them several hours after sending a message. I’ll paste the 
latest two of them below, separated by % symbols. 

Anyone able to sort this out and fix? 


%%% 

This is the mail system at host mail.nanog.org. 

I'm sorry to have to inform you that your message could not 
be delivered to one or more recipients. It's attached below. 

For further assistance, please send mail to postmaster. 

If you do so, please include this problem report. You can 
delete your own text from the attached returned message. 

The mail system 

: mail forwarding loop for nanog@nanog.org 
Reporting-MTA: dns; mail.nanog.org 
X-Postfix-Queue-ID: 2B72F160040 
X-Postfix-Sender: rfc822; a...@newslink.com 
Arrival-Date: Wed, 18 Jul 2018 03:41:57 + (UTC) 

Final-Recipient: rfc822; nanog@nanog.org 
Original-Recipient: rfc822;nanog@nanog.org 
Action: failed 
Status: 5.4.6 
Diagnostic-Code: X-Postfix; mail forwarding loop for nanog@nanog.org 

From: Andy Ringsmuth  
Subject: Re: Proving Gig Speed 
Date: July 17, 2018 at 9:53:01 AM CDT 
To: NANOG list  



> On Jul 17, 2018, at 9:41 AM, Mike Hammett  wrote: 
> 
> 10G to the home will be pointless as more and more people move away from 
> Ethernet to WiFi where the noise floor for most installs prevents anyone from 
> reaching 802.11n speeds, much less whatever alphabet soup comes later. 
> 
> 
> 
> 
> - 
> Mike Hammett 
> Intelligent Computing Solutions 
> http://www.ics-il.com 
> 
> Midwest-IX 
> http://www.midwest-ix.com 

Well, in a few years when we’re all watching 4D 32K Netflix on our 16-foot 
screens with 5 million DPI, it’ll make all the difference in the world, right? 

Tongue-in-cheek obviously. 


 
Andy Ringsmuth 
a...@newslink.com 
News Link – Manager Technology, Travel & Facilities 
2201 Winthrop Rd., Lincoln, NE 68502-4158 
(402) 475-6397 (402) 304-0083 cellular 

%% 

This is the mail system at host mail.nanog.org. 

I'm sorry to have to inform you that your message could not 
be delivered to one or more recipients. It's attached below. 

For further assistance, please send mail to postmaster. 

If you do so, please include this problem report. You can 
delete your own text from the attached returned message. 

The mail system 

: mail forwarding loop for nanog@nanog.org 
Reporting-MTA: dns; mail.nanog.org 
X-Postfix-Queue-ID: 2F2AA160040 
X-Postfix-Sender: rfc822; a...@newslink.com 
Arrival-Date: Wed, 18 Jul 2018 03:46:02 + (UTC) 

Final-Recipient: rfc822; nanog@nanog.org 
Original-Recipient: rfc822;nanog@nanog.org 
Action: failed 
Status: 5.4.6 
Diagnostic-Code: X-Postfix; mail forwarding loop for nanog@nanog.org 

From: Andy Ringsmuth  
Subject: Re: Proving Gig Speed 
Date: July 17, 2018 at 11:12:22 AM CDT 
To: NANOG list  



> On Jul 17, 2018, at 10:44 AM, Mark Tinka  wrote: 
> 
> 
> 
> On 17/Jul/18 16:41, Mike Hammett wrote: 
> 
>> 10G to the home will be pointless as more and more people move away 
>> from Ethernet to WiFi where the noise floor for most installs prevents 
>> anyone from reaching 802.11n speeds, much less whatever alphabet soup 
>> comes later. 
> 
> Doesn't stop customers from buying it if it's cheap and available, which 
> doesn't stop them from proving they are getting 10Gbps as advertised. 
> 
> Mark. 

I suppose in reality it’s no different than any other utility. My home has 200 
amp electrical service. Will I ever use 200 amps at one time? Highly highly 
unlikely. But if my electrical utility wanted to advertise “200 amp service in 
all homes we supply!” they sure could. Would an electrician be able to test it? 
I’m sure there is a way somehow. 

If me and everyone on my street tried to use 200 amps all at the same time, 
could the infrastructure handle it? Doubtful. But do I on occasion saturate my 
home fiber 300 mbit synchronous connection? Every now and then yes, but rarely. 
Although if I’m paying for 300 and not getting it, my ISP will be hearing from 
me. 

If my electrical utility told me “hey, you can upgrade to 500 amp service for 
no additional charge” would I do it? Sure, what the heck. If my water utility 
said “guess what? You can upgrade to a 2-inch water line at no additional 
charge!” would I do it? Probably yeah, why not? 

Would I ever use all that capacity on $random_utility at one time? Of course 
not. But nice to know it’s there if I ever need it. 


 
Andy Ringsmuth 
a...@newslink.com 
News Link – Manager Technology, Travel & Facilities 
2201 Winthrop Rd., Lincoln, NE 

Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 14:40, K. Scott Helms wrote:

> Agreed, and it's one of the fundamental problems that a speed test measures
> (and can only measure) the speeds from point A to point B (often both
> inside the service provider's network) when the customer is concerned
> with traffic to and from point C off in someone else's network altogether.

In our market, most customers that put all their faith and slippers in
Ookla have no qualms with choosing a random speed test server on the
Ookla network, with no regard as to whether that server is on-net or
off-net for their ISP, how that server is maintained, how much bandwidth
capacity it has, how it was deployed, its hardware sources, how busy it
is, how much of its bandwidth it can actually exhaust, how traffic
routes to/from it, etc.

Whatever the result, the speed test server or the Ookla network is NEVER
at fault. So now, an ISP in the African market has to explain why a
speed test server on some unknown network in Feira de Santana is
claiming that the customer is not getting what they paid for?

Then again, we all need reasons to wake up in the morning :-)...


>   It's one of the reasons that I think we have to get more comfortable
> and more collaborative with the CDN providers as well as the large
> sources of traffic.  Netflix, Youtube, and I'm sure others have their
> own consumer facing performance testing that is _much_ more applicable
> to most consumers as compared to the "normal" technician test and
> measurement approach or even the service assurance that you get from
> normal performance monitoring.  What I'd really like to see is a way
> to measure network performance from the CO/head end/PoP and also get
> consumer level reporting from these kinds of services.  If
> Google/Netflix/Amazon Video/$others would get on board with this idea
> it would make all our lives simpler.
>
> Providing individual users' stats is nice, but if these guys really
> want to improve service it would be great to get aggregate reporting
> by ASN.  You can get a rough idea by looking at your overall graph
> from Google, but it's lacking a lot of detail and there's no simple
> way to compare that to a head end/CO test versus specific end users.
>
> https://www.google.com/get/videoqualityreport/
> https://fast.com/#

Personally, I don't think the content networks and CDN's should focus on
developing yet-another-speed-test-server, because then they are just
pushing the problem back to the ISP. I believe they should better spend
their time:

  * Delivering as-near-to 100% of all of their services to all regions,
cities, data centres as they possibly can.

  * Providing tools for network operators as well as their consumers
that are biased toward the expected quality of experience, rather
than how fast their bandwidth is. A 5Gbps link full of packet loss
does not a service make - but what does that translate into for the
type of service the content network or CDN is delivering?
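
One concrete QoE-oriented metric such tools could report alongside loss is
interarrival jitter. A minimal sketch of the RFC 3550 (RTP) smoothed jitter
estimator, using illustrative (not measured) transit times:

```python
# One QoE-oriented metric such tools could report: interarrival jitter,
# estimated as in RFC 3550 (RTP), section 6.4.1. Transit times below are
# illustrative, not measured.

def update_jitter(jitter: float, transit_prev: float, transit_now: float) -> float:
    """One step of the RFC 3550 smoothed jitter estimator."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16

jitter = 0.0
transits_ms = [50.0, 52.0, 49.0, 60.0, 51.0]  # per-packet one-way transit times
for prev, now in zip(transits_ms, transits_ms[1:]):
    jitter = update_jitter(jitter, prev, now)
print(f"estimated jitter: {jitter:.2f} ms")
```

The 1/16 gain is the smoothing factor RFC 3550 specifies, which damps
single-packet spikes while still tracking sustained variation.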

Mark.



Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
Agreed, and it's one of the fundamental problems that a speed test measures (and
can only measure) the speeds from point A to point B (often both inside the
service provider's network) when the customer is concerned with traffic to
and from point C off in someone else's network altogether.  It's one of the
reasons that I think we have to get more comfortable and more collaborative
with the CDN providers as well as the large sources of traffic.  Netflix,
Youtube, and I'm sure others have their own consumer facing performance
testing that is _much_ more applicable to most consumers as compared to the
"normal" technician test and measurement approach or even the service
assurance that you get from normal performance monitoring.  What I'd really
like to see is a way to measure network performance from the CO/head
end/PoP and also get consumer level reporting from these kinds of
services.  If Google/Netflix/Amazon Video/$others would get on board with
this idea it would make all our lives simpler.

Providing individual users' stats is nice, but if these guys really want to
improve service it would be great to get aggregate reporting by ASN.  You
can get a rough idea by looking at your overall graph from Google, but it's
lacking a lot of detail and there's no simple way to compare that to a head
end/CO test versus specific end users.

https://www.google.com/get/videoqualityreport/
https://fast.com/#



On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka  wrote:

>
>
> On 18/Jul/18 14:00, K. Scott Helms wrote:
>
>
> That's absolutely a concern Mark, but most of the CPE vendors that support
> doing this are providing enough juice to keep up with their max
> forwarding/routing data rates.  I don't see 10 Gbps in residential Internet
> service being normal for quite a long time off even if the port itself is
> capable of 10Gbps.  We have this issue today with commercial customers, but
> it's generally not as much of a problem because the commercial CPE get
> their usage graphed and the commercial CPE have more capabilities for
> testing.
>
>
> I suppose the point I was trying to make is when does it stop being
> feasible to test each and every piece of bandwidth you deliver to a
> customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps,
> or 5.1Gbps... basically, the rabbit hole.
>
> Like Saku, I am more interested in other fundamental metrics that could
> impact throughput such as latency, packet loss and jitter. Bandwidth,
> itself, is easy to measure with your choice of SNMP poller + 5 minutes. But
> when you're trying to explain to a simple customer buying 100Mbps that a
> break in your Skype video cannot be diagnosed with a throughput speed test,
> they don't/won't get it.
>
> In Africa, for example, customers in only one of our markets are so
> obsessed with speed tests. But not to speed test servers that are
> in-country... they want to test servers that sit in Europe, North America,
> South America and Asia-Pac. With the latency averaging between 140ms -
> 400ms across all of those regions from source, the amount of energy spent
> explaining to customers that there is no way you can saturate your
> delivered capacity beyond a couple of Mbps using Ookla and friends is
> energy I could spend drinking wine and having a medium-rare steak, instead.
>
> For us, at least, aside from going on a mass education drive in this
> particular market, the ultimate solution is just getting all that content
> localized in-country or in-region. Once that latency comes down and the
> resources are available locally, the whole speed test debacle will easily
> fall away, because the source of these speed tests is simply how physically
> far the content is. Is this an easy task - hell no; but slamming your head
> against a wall over and over is no fun either.
>
> Mark.
>


Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
I encourage my competitors to not implement those products in their networks. 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "Mark Tinka"  
To: "Mike Hammett"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 7:29:31 AM 
Subject: Re: Proving Gig Speed 




On 18/Jul/18 14:11, Mike Hammett wrote: 



https://www.ignitenet.com/wireless-backhaul/ 
https://www.siklu.com/product/multihaul-series/ 
https://mikrotik.com/product/wireless_wire_dish 
https://mikrotik.com/product/wap_60g_ap 


There is a product for everything; doesn't mean it'll make a commercially 
viable business for whoever chooses to implement it. 

Mark. 



Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 14:11, Mike Hammett wrote:

> https://www.ignitenet.com/wireless-backhaul/ 
> https://www.siklu.com/product/multihaul-series/ 
>
> https://mikrotik.com/product/wireless_wire_dish 
> https://mikrotik.com/product/wap_60g_ap 

There is a product for everything; doesn't mean it'll make a
commercially viable business for whoever chooses to implement it.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 14:00, K. Scott Helms wrote:

>
> That's absolutely a concern Mark, but most of the CPE vendors that
> support doing this are providing enough juice to keep up with their
> max forwarding/routing data rates.  I don't see 10 Gbps in residential
> Internet service being normal for quite a long time off even if the
> port itself is capable of 10Gbps.  We have this issue today with
> commercial customers, but it's generally not as much of a problem
> because the commercial CPE get their usage graphed and the commercial
> CPE have more capabilities for testing.

I suppose the point I was trying to make is when does it stop being
feasible to test each and every piece of bandwidth you deliver to a
customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or
3.2Gbps, or 5.1Gbps... basically, the rabbit hole.

Like Saku, I am more interested in other fundamental metrics that could
impact throughput such as latency, packet loss and jitter. Bandwidth,
itself, is easy to measure with your choice of SNMP poller + 5 minutes.
But when you're trying to explain to a simple customer buying 100Mbps
that a break in your Skype video cannot be diagnosed with a throughput
speed test, they don't/won't get it.
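
The "SNMP poller + 5 minutes" measurement really is just a counter delta. A
sketch, with illustrative ifHCInOctets samples:

```python
# Sketch of the "SNMP poller + 5 minutes" bandwidth measurement: two
# ifHCInOctets (64-bit byte counter) samples taken an interval apart give
# the average bit rate. Counter values below are illustrative.

COUNTER_MAX = 2**64  # ifHCInOctets wraps at 2^64

def avg_mbps(octets_t0: int, octets_t1: int, interval_s: float) -> float:
    """Average utilisation between two counter samples, in Mbps."""
    delta = (octets_t1 - octets_t0) % COUNTER_MAX  # modulo handles a wrap
    return delta * 8 / interval_s / 1_000_000

# Samples taken 300 seconds (5 minutes) apart:
print(f"{avg_mbps(1_000_000_000, 4_750_000_000, 300):.1f} Mbps average")
```

Note this is a 5-minute average, which is exactly why it says nothing about
the sub-second loss and jitter that break a Skype call.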

In Africa, for example, customers in only one of our markets are so
obsessed with speed tests. But not to speed test servers that are
in-country... they want to test servers that sit in Europe, North
America, South America and Asia-Pac. With the latency averaging between
140ms - 400ms across all of those regions from source, the amount of
energy spent explaining to customers that there is no way you can
saturate your delivered capacity beyond a couple of Mbps using Ookla and
friends is energy I could spend drinking wine and having a medium-rare
steak, instead.
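
The claim that 140ms - 400ms of latency caps a speed test at a couple of Mbps
follows from the TCP window/RTT bound. A sketch, assuming a classic unscaled
64 KiB receive window and no loss:

```python
# Sketch: why 140ms - 400ms of RTT caps a single TCP flow at "a couple of
# Mbps", regardless of the size of the delivered pipe. Assumes a classic
# unscaled 64 KiB receive window (no window scaling) and no loss.

def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on one TCP flow's throughput: window / RTT."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

WINDOW = 65_535  # bytes

for rtt_ms in (10, 140, 400):
    mbps = max_tcp_throughput_mbps(WINDOW, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> at most {mbps:6.2f} Mbps per flow")
```

Window scaling and parallel streams raise the bound, but the inverse
relationship with distance is exactly why localising content fixes the
"speed test debacle".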

For us, at least, aside from going on a mass education drive in this
particular market, the ultimate solution is just getting all that
content localized in-country or in-region. Once that latency comes down
and the resources are available locally, the whole speed test debacle
will easily fall away, because the source of these speed tests is simply
how physically far the content is. Is this an easy task - hell no; but
slamming your head against a wall over and over is no fun either.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 18/Jul/18 00:01, Saku Ytti wrote:

> Already fairly common in Finland to have just LTE dongle for Internet,
> especially for younger people. DNA quotes average consumption of 8GB
> per subscriber per month. You can get unlimited for 20eur/month, it's
> much faster than DSL with lower latency. And if your home DSL is down,
> it may affect just you, so MTTR can be days, whereas on mobile the MTTR,
> even without calling anyone, is minutes or an hour.
> Even more strangely, providers, particularly one of them, are printing
> money. It's not immediately obvious to me why the same
> fundamentals do not seem to work elsewhere. In Cyprus I can't buy more
> than 6GB contract and connectivity is spotty even in urban centres,
> which echoes my experience in US and central EU.

Fairly common in Africa, where there is plenty of GSM infrastructure,
and not so much fibre.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mike Hammett
https://www.ignitenet.com/wireless-backhaul/ 
https://www.siklu.com/product/multihaul-series/ 

https://mikrotik.com/product/wireless_wire_dish 
https://mikrotik.com/product/wap_60g_ap 




- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

- Original Message -

From: "Mark Tinka"  
To: "Mike Hammett"  
Cc: "NANOG list"  
Sent: Wednesday, July 18, 2018 6:57:28 AM 
Subject: Re: Proving Gig Speed 




On 17/Jul/18 18:07, Mike Hammett wrote: 



The problem cited is the last 100', not the last mile. 

For ISPs using 60 GHz for the last mile, a wire is run from the outdoor antenna 
to the indoor router. 


Yeah, the question was rhetorical. 

I personally don't see ISP's using 60GHz to deliver to the home at scale. But 
I'll side with Saku and bet against myself for reliably predicting the future. 

Mark. 



Re: deploying RPKI based Origin Validation

2018-07-18 Thread Mark Tinka



On 17/Jul/18 20:33, Michel Py wrote:

> If I understand this correctly, I have a suggestion : update these files at a 
> regular interval (15/20 min) and make them available for download with a 
> fixed name (not containing the date).
> Even better : have a route server that announces these prefixes with a :666 
> community so people could use it as a blackhole.
>
> This would not remove the invalid prefixes from one's router, but at least 
> would prevent traffic from/to these prefixes.
> In other words : a route server of prefixes that are RPKI invalid with no 
> alternative that people could use without having an RPKI setup.
> This would even work with people who have chosen do accept a default route 
> from their upstream.
>
> I understand this is not ideal; blacklisting a prefix that is RPKI invalid 
> may actually help the hijacker, but blacklisting a prefix that is RPKI 
> invalid AND that has no alternative could be useful ?
> Should be considered a bogon.

Hmmh - I suppose if you want to do this in-house, that is fine. But I
would not recommend this at large for the entire BGP community.

At any rate, the result is the same, i.e., the route is taken out of the
FIB. The difference is you are proposing a mechanism that uses existing
infrastructure within almost all ISP's (the BGP Community) in lieu of
deploying RPKI.

I can't quite imagine the effort needed to implement your suggestion,
but I'd rather direct it toward deploying RPKI. At the very least, one
just needs reputable RV software, and router code that supports RPKI RV.
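
As a reference point, the check that RV-capable router code performs is small.
A sketch of the RFC 6811 origin validation algorithm (not any vendor's
implementation; all prefixes and ASNs below are illustrative):

```python
# Sketch of RFC 6811 route origin validation against validated ROA
# payloads. Not any vendor's implementation; prefixes/ASNs illustrative.
import ipaddress

# Validated ROAs as (prefix, maxLength, origin ASN) tuples.
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix: str, origin_asn: int) -> str:
    """Classify a route as valid, invalid, or not-found per RFC 6811."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # at least one ROA covers this route
            if origin_asn == roa_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))   # matching origin and length
print(validate("192.0.2.0/24", 64999))   # covered, but wrong origin
print(validate("192.0.2.0/25", 64500))   # covered, but exceeds maxLength
print(validate("203.0.113.0/24", 64500)) # no covering ROA
```

What a router then does with "invalid" (drop, depref, or tag with a
community such as the :666 blackhole idea above) is local policy.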

Mark.


Re: deploying RPKI based Origin Validation

2018-07-18 Thread Mark Tinka



On 17/Jul/18 19:55, Job Snijders wrote:

> There are ~ 330 IPv6 invalids in the DFZ, and for 70 of those no
> alternative covering prefix exists.

Thanks, Job.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 17/Jul/18 19:44, b...@theworld.com wrote:

> Re: 10gb TTH
>
> Just a thought:
>
> Do they need 10gb? Or do they need multiple 1gb (e.g.) channels which
> might be cheaper and easier to provision?

In my house, for example, I only have a single fibre core coming into my
house (single fibre pair for my neighbors who are on Active-E - I'm on
GPON).

If you're thinking of classic LAG's, not sure how we could do that on
one physical link.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 17/Jul/18 19:45, James Bensley wrote:

> Hi Mark,
>
> Our field engineers have 1G testers, but even at 1G they are costly
> (in 2018!), so none have 10Gbps or higher testers and we also only do
> this for those that demand it (i.e. no 20Mbps EFM customer ever asks
> for a JSDU/EXO test, because iPerf can easily max out such a link,
> only those that pay for say 1G over 1G get it). Hardware testers are
> the best in my opinion right now but it annoys me that this is the
> current state of affairs, in 2018, even for 1Gbps!

Truth.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 17/Jul/18 18:12, Andy Ringsmuth wrote:

> I suppose in reality it’s no different than any other utility. My home has 
> 200 amp electrical service. Will I ever use 200 amps at one time? Highly 
> highly unlikely. But if my electrical utility wanted to advertise “200 amp 
> service in all homes we supply!” they sure could. Would an electrician be 
> able to test it? I’m sure there is a way somehow.
>
> If me and everyone on my street tried to use 200 amps all at the same time, 
> could the infrastructure handle it? Doubtful. But do I on occasion saturate 
> my home fiber 300 mbit synchronous connection? Every now and then yes, but 
> rarely. Although if I’m paying for 300 and not getting it, my ISP will be 
> hearing from me.
>
> If my electrical utility told me “hey, you can upgrade to 500 amp service for 
> no additional charge” would I do it? Sure, what the heck. If my water utility 
> said “guess what? You can upgrade to a 2-inch water line at no additional 
> charge!” would I do it? Probably yeah, why not?
>
> Would I ever use all that capacity on $random_utility at one time? Of course 
> not. But nice to know it’s there if I ever need it.

The difference, of course, between electricity and the Internet is that
there is a lot more information and tools freely available online that
Average Jane can arm herself with to run amok with figuring out whether
she is getting 300Mbps of her 300Mbps from her ISP.

Average Jane couldn't care less about measuring whether she's getting 200
amps of her 200 amps from the power company; likely because there is a
lot more structure with how power is produced and delivered, or more to
the point, a lot less freely available tools and information with which
she can arm herself to run amok with. To her, the power company sucks if
the lights go out. In the worst case, if her power starts a fire, she's
calling the fire department.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread K. Scott Helms
That's absolutely a concern Mark, but most of the CPE vendors that support
doing this are providing enough juice to keep up with their max
forwarding/routing data rates.  I don't see 10 Gbps in residential Internet
service being normal for quite a long time off even if the port itself is
capable of 10Gbps.  We have this issue today with commercial customers, but
it's generally not as much of a problem because the commercial CPE get
their usage graphed and the commercial CPE have more capabilities for
testing.

Scott Helms


On Tue, Jul 17, 2018 at 8:11 AM Mark Tinka  wrote:

>
>
> On 17/Jul/18 14:07, K. Scott Helms wrote:
>
>
> That's absolutely true, but I don't see any real alternatives in some
> cases.  I've actually built automated testing into some of the CPE we've
> deployed and that works pretty well for some models but other devices don't
> seem to be able to fill a ~500 mbps link.
>
>
> So what are you going to do when 10Gbps FTTH into the home becomes the
> norm?
>
> Perhaps laptops and servers of the time won't even see this as a rounding
> error :-\...
>
> Mark.
>


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 17/Jul/18 18:07, Mike Hammett wrote:

> The problem cited is the last 100', not the last mile. 
>
> For ISPs using 60 GHz for the last mile, a wire is run from the outdoor 
> antenna to the indoor router. 

Yeah, the question was rhetorical.

I personally don't see ISP's using 60GHz to deliver to the home at
scale. But I'll side with Saku and bet against myself for reliably
predicting the future.

Mark.


Re: Proving Gig Speed

2018-07-18 Thread Mark Tinka



On 17/Jul/18 17:52, Mike Hammett wrote:

> Most ISPs I know build their own last mile. 

There's a whole world out there...

Mark.


Re: Blizzard, Battle.net connectivity issues

2018-07-18 Thread nop
Out of curiosity, are you using one of those cheap dirty "misused outside of 
region" Afrinic blocks? 

They keep trying (and spamming the crap out of a few forums) to offload them to 
ISPs temporarily for cheap, so that the ISPs get them cleaned up and marked 
as residential, after which they are resold/abused for fraud/VPNs/bots.

On Tue, Jul 17, 2018, at 7:39 PM, Michael Crapse wrote:
> Could I get an off list reply from blizzard engineers. Your email system is
> blocking our emails as spam, and I'm trying to resolve some geolocation
> issues that disallow our mutual customers to access your services. Thank you
> 
> Michael Crapse
> Wi-Fiber, Inc.


SV: Proving Gig Speed

2018-07-18 Thread Gustav Ulander
We use Netrounds for this. 
We make a speed test site available to the customer for their "click and test" 
needs, which is the first step. 
If the customer doesn't achieve their allocated speed, we will send out a probe 
(usually some form of Intel NUC or similar machine) that can run more advanced 
tests automatically, including at different times of day. 
The customer then sends the device back to us when the case is solved. 
It's not without its faults, but it's been a great tool so far. 

//Gustav

-Ursprungligt meddelande-
Från: NANOG  För James Bensley
Skickat: den 17 juli 2018 19:46
Till: Mark Tinka ; North American Network Operators' 
Group 
Ämne: Re: Proving Gig Speed

On 17 July 2018 at 12:50, Mark Tinka  wrote:
> But to answer your questions - for some customers, we insist on JDSU 
> testing for large capacities, but only if it's worth the effort.
>
> Mark.

Hi Mark,

Our field engineers have 1G testers, but even at 1G they are costly (in 2018!), 
so none have 10Gbps or higher testers and we also only do this for those that 
demand it (i.e. no 20Mbps EFM customer ever asks for a JSDU/EXO test, because 
iPerf can easily max out such a link, only those that pay for say 1G over 1G 
get it). Hardware testers are the best in my opinion right now but it annoys me 
that this is the current state of affairs, in 2018, even for 1Gbps!

Cheers,
James.
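
The software testing James contrasts with hardware testers boils down to
blasting a TCP socket and timing it. A toy, iPerf-style memory-to-memory
sketch over localhost (illustrative only; it measures what the host can push,
which is exactly why software maxes out 20 Mbps EFM easily but struggles to
prove 1-10 Gbps):

```python
# Toy iPerf-style memory-to-memory TCP throughput probe over localhost.
# Illustrative only: it is bounded by host CPU and socket copies, not by
# any access link.
import socket
import threading
import time

CHUNK = b"\0" * 65536
DURATION = 1.0  # seconds of sending

def drain(listener: socket.socket) -> None:
    """Accept one connection and discard everything received."""
    conn, _ = listener.accept()
    while conn.recv(65536):
        pass
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=drain, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
sent, deadline = 0, time.monotonic() + DURATION
while time.monotonic() < deadline:
    client.sendall(CHUNK)
    sent += len(CHUNK)
client.close()

mbps = sent * 8 / DURATION / 1_000_000
print(f"~{mbps:.0f} Mbps memory-to-memory")
```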