Re: London Interxion Data Centers

2021-02-26 Thread Matthew Petach
On Fri, Feb 26, 2021 at 4:56 AM Töma Gavrichenkov  wrote:

> Peace
>
> On Fri, Feb 26, 2021, 3:06 PM Rod Beck 
> wrote:
>
>> My understanding is that there are three London Interxion data centers (I
>> thought Equinix was the Borg and had assimilated pretty much everything at
>> this point).
>>
>> Trying to get the address of the facility where the London Metal
>> Exchange houses its trading engine.
>>
>
> Aren't they (LME) in Savvis, though?
>
> --
> Töma
>

That was certainly true in 2003, at least:

https://zynap.com/savvis-gains-ground-in-u-k-managed-hosting-services-market-with-six-new-customer-wins/

and this list seems to corroborate that:

https://trends.builtwith.com/websitelist/Savvis/United-Kingdom/London

Though, it looks like LME has strict limits on which networks they will
allow to connect into it:

https://www.lme.com/Trading/Access-the-market/ISVs-and-connectivity-providers

and it looks like at the moment, Colt is the favoured provider for LMEnet:

https://www.lme.com/Trading/Systems/LMEnet#tabIndex=0

Best of luck getting a toe in the door!^_^;;

Matt


Re: Texas internet connectivity declining due to blackouts

2021-02-15 Thread Matthew Petach
On Mon, Feb 15, 2021 at 8:50 PM Sean Donelan  wrote:

>
>
> On Tue, 16 Feb 2021, Cory Sell via NANOG wrote:
> > adoption. Sure, wind isn’t perfect, but it looks like the solution relied
> > on failed in a massive way.
>
> Strange the massive shortages and failures are only in one state.
>
> The extreme cold weather extends northwards across many states, which
> aren't reporting rolling blackouts.
>

Isn't that a result of ERCOT stubbornly refusing to interconnect with the
rest of the national grid, out of an irrational fear of coming under
federal regulation?

I suspect that trying to be self-sufficient works most of the time--but
when you get to the edges of the bell curve locally, your ability to be
resilient and survive depends heavily upon your ability to be supported by
others around you.  This certainly holds true for individual humans; I
suspect power grids aren't that different.

Matt


Re: 2021.02.10 community meeting unofficial notes

2021-02-11 Thread Matthew Petach
Hm.

They are linked now, but when I looked this morning before the
talk started, there wasn't a link to the slides.  ^_^;;

Thanks for getting them put up!   :)

Matt


On Wed, Feb 10, 2021 at 10:12 AM Valerie Wittkop  wrote:

> Ahem… slides are linked… you must click on the talk title under the
> “Topic” column of the agenda.
>
> Cheers,
>
> Valerie
>
> Valerie Wittkop - NANOG Program Director
> 305 E. Eisenhower Pkwy, Suite 100, Ann Arbor, MI 48108
> Tel: +1 866 902 1336, ext 103
>
> On Feb 10, 2021, at 12:59 PM, Matthew Petach 
> wrote:
>
>
> It was mentioned in the chat this morning that
> there was no link to slides or anything on the agenda
> for the community meeting that happened this morning,
> so I offered to share the notes I was jotting down during
> the meeting, to give an idea of what was covered for those
> in timezones not as friendly to the meeting time, or to those
> with conflicts.  ^_^;
>
> Thanks so much to the staff, Elizabeth, Steve, and Ed
> for the great presentations this morning!
>
> Matt
>


2021.02.10 community meeting unofficial notes

2021-02-10 Thread Matthew Petach
It was mentioned in the chat this morning that
there was no link to slides or anything on the agenda
for the community meeting that happened this morning,
so I offered to share the notes I was jotting down during
the meeting, to give an idea of what was covered for those
in timezones not as friendly to the meeting time, or to those
with conflicts.  ^_^;

Thanks so much to the staff, Elizabeth, Steve, and Ed
for the great presentations this morning!

Matt



2021.02.10 NANOG81 community meeting

Speaker Edward McNair, NANOG
Steve Feldman, ViacomCBS
Elizabeth Culley, Comcast

NOTES:
Michael Voity kicks off the meeting at 0901 hours Pacific time
to go over the agenda for the day.

special announcements:
fill out surveys!  The PC reads all of them!
And if you're a lucky winner, you'll get a
$100 gift card!

Looking for presentation proposals for NANOG82!

check out sponsors at the virtual expo booth;
sponsors have stepped up to the challenge
with interesting offerings.

first talk today is hosted by Edward McNair,
with Elizabeth Culley and Steve Feldman.

Ed McNair has been with NANOG since 2005, and
has been ED since 2018;

Steve has been contributing to NANOG since its
inception in 1994, and has been involved with
the PC for the last 20 years.

Elizabeth Culley is peering coordinator with
Comcast, and is completing her first year on
the PC; this is her first time presenting to
NANOG.

welcome Edward, Steve, and Elizabeth, it's
great to have you here.

Edward takes a few moments to give some updates
on the organization;
strategic plan narrowed to 3 areas:
education,
meeting experience,
online collaboration.

another piece built out is the strategic
timeline;
encompasses from NANOG73 to NANOG85.
purpose is to see what we are aiming to
achieve and what we will put forward.
we always like to hear your comments;
there is a feedback button that goes to
NANOG staff, and he reads them all.

Take a moment to honor the NANOG staff;
counting himself, there are 6.5 staff
members, as Claudia works with them
half-time.
Claudia, Leigh, Valerie, Shawn, Darrieux (Dee), and Brandi
all work very hard to ensure that we can have
a quality program.

Our 2020 annual report will be coming out
soon, to give a linear record of what we've
accomplished over the course of 2020;

we'll do polls at the end of the section;

log into the polls at pollEV.com/nanog


Over to Steve Feldman;
at the end of every NANOG, there is a Program Committee
get-together; after SF, decided to have a retreat in 2020;
planned to do it in Boston or Seattle; but then
things changed, and they couldn't get together in person;
so they had a virtual retreat, 2 days, Dec 16 2020
and Jan 13 2021, to go over various items.

some of the items covered:
PC structure
new member onboarding
 PC handbook update
 mentorship
leadership
 review roles, responsibilities
  need to document them better
subcommittees
 are we optimizing our effort?
 some don't seem to do much.
 will add some new ones.
diversity of skills and perspectives
 need to get people who have different
 industry perspectives.

other topics at the virtual retreat:
meeting format
 how to do hybrid meetings
 we optimize time for training and tutorials
 we focus on human interaction; need to accommodate people
  in different timezones.
talk solicitation and selection
 attracting new speakers
 expanding the audience
 standardize voting
 content review
PC's strategic vision.

Liz, chairing our data analysis effort;
for the past 20 years, we've been building a
repository of tracks, tutorials, keynote speeches,
lightning talks;
we have a long history of hosting the voices of
those building the internet.
what comes up in NANOG is often on the leading edge
of technology trends;
this effort will be to mine the repository,
to help us keep up with the onslaught of demand.

cataloging NANOG;
we are recording the details of every presentation,
tutorial, and keynote;
tagging each record with relevant metadata
 provide intuitive web page of popular topics
 provide data to potential speakers on history of any topic
  which topics are asked for most often
  which videos are most viewed.

use surveys and data on view counts to project
topics that will be of most interest.

looking at the last 5 years of NANOG data;
how many times each topic comes up:
44 talks on DNS
40 requests for automation, but 26 talks on automation
Geoff Huston comes up a lot
look at how many times talks are viewed on YouTube;
27K views of 44 DNS talks on YouTube;
26 automation videos got 77K views.
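As a rough per-talk sketch of what those view counts imply (using only the two figures above; they are approximate archive totals, not authoritative statistics):

```python
# Rough views-per-talk comparison from the figures above:
# 27K total views across 44 DNS talks vs. 77K across 26 automation talks.
dns_views_per_talk = 27_000 / 44      # ~614 views per DNS talk
auto_views_per_talk = 77_000 / 26     # ~2,962 views per automation talk

# Automation talks drew roughly 4.8x the views per talk of DNS talks.
print(round(auto_views_per_talk / dns_views_per_talk, 1))  # 4.8
```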


A lot of people would say they're experts in a field,
but how can they talk about their job when it's 100%
proprietary, and they can't talk about their day
job?
take a look at the request counts versus how many presentations
we get; even doing a 101 intro class is a great way to
get your feet wet.

66% of presentations are on the top 20 subjects.
115 categories in presentations and 
Re: Half Fibre Pair

2021-01-26 Thread Matthew Petach
You can see the terminology getting referenced in articles such as this one
from Telegeography:

https://www.submarinenetworks.com/en/insights/a-new-coming-for-submarine-cable-systems-the-independent-infrastructure-developers

"Further PLCN incorporates Spectrum Manager, that allows the C+L band to be
sliced into blocks of spectrum that can be independently assigned to
separate SLTE that is owned, operated and upgraded by the party leasing the
spectrum. PLCN has productized it in terms of Minimum Spectrum Unit (MSU)
which is 5% of the total C+L band capacity i.e. 5% of 240x100G or 1.2 Tbps.
It’s extended further as virtual fiber pair with Quarter Fiber Pair as
5xMSU or 6 Tbps and Half Fiber Pair as 10xMSU or 12 Tbps. This notably is
not unique to PLCN and is supported by most [of] the cable systems designed
post 2015 or supported with upgrades."
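A quick arithmetic check of the MSU figures in the quoted passage (a sketch; the 240x100G C+L-band design capacity is the PLCN-specific number cited above, not a general constant):

```python
# Verify the Minimum Spectrum Unit (MSU) math quoted above for PLCN:
# total C+L band design capacity of 240 wavelengths x 100 Gbps each.
total_capacity_gbps = 240 * 100               # 24,000 Gbps = 24 Tbps per fiber pair

msu_gbps = 0.05 * total_capacity_gbps         # 1 MSU = 5% of the band = 1.2 Tbps
quarter_fiber_pair_gbps = 5 * msu_gbps        # "Quarter Fiber Pair" = 5 MSU = 6 Tbps
half_fiber_pair_gbps = 10 * msu_gbps          # "Half Fiber Pair" = 10 MSU = 12 Tbps

print(msu_gbps, quarter_fiber_pair_gbps, half_fiber_pair_gbps)
# 1200.0 6000.0 12000.0
```

So a "half fiber pair" here is half of one fiber pair's usable spectrum (12 of 24 Tbps), sold as capacity rather than as a physical strand.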





On Tue, Jan 26, 2021 at 2:49 PM Rod Beck 
wrote:

> Actually it is standard language in the undersea cable world for a large
> spectrum purchase. Sometimes a fiber pair on a system may be too much, but
> the buyer still wants many terabits of capacity. "
>
> The Half Fiber Pair is the same as 10*MSUs in a virtual fiber pair, either
> in C-band or L-band. I believe these are primarily used in transocean
> routes."
>
>
> This is what I have learned so far.
>
>
> Now that deep sea cables are being deployed with as many as 24 pairs,
> there will be more players doing fractional purchases.
>
>
>
>
>
> --
> *From:* William Herrin 
> *Sent:* Tuesday, January 26, 2021 10:56 PM
> *To:* Rod Beck 
> *Cc:* nanog@nanog.org 
> *Subject:* Re: Half Fibre Pair
>
> On Tue, Jan 26, 2021 at 12:52 PM Rod Beck
>  wrote:
> > Can someone explain to me what is a half fibre pair? I took it
> > literally to mean a single fibre strand but someone insisted it
> > was a large quantity of spectrum. Please illuminate.
>
>
> Maybe it's like half a pair of glasses, the perfect accessory for the
> one-eyed man who's king.
>
> Seriously though, it sounds like a bad language construction. If a
> vendor is offering you that, I'd ask for clarification. Are they
> leasing a dedicated strand of fiber end-to-end? Dedicated wavelength
> directions delivered by fiber? Something else?
>
> If you're thinking of offering it, find better words.
>
> Regards,
> Bill Herrin
>


Past policies versus present and future uses

2021-01-24 Thread Matthew Petach
On Sun, Jan 24, 2021 at 4:22 AM JORDI PALET MARTINEZ via NANOG <
nanog@nanog.org> wrote:
[...]

> So, you end up with 2-3 RIRs allocations, not 5. And the real situation is
> that 3 out of 5 RIRs communities, decided to be more relaxed on that
> requirement, so you don’t need actually more than 1 or may be 2
> allocations. Of course, we are talking “in the past” because if we are
> referring to IPv4 addresses, you actually have a different problem trying
> to get them from the RIRs.
>

Hi Jordi,

I've adjusted the subject line to reflect the real thrust of this
discussion.

You're right--if we're trying to get "new" allocations of IPv4 addresses,
we've got bigger problems to solve.

But when it comes to IPv6 address blocks and ASNs, these questions are
still very relevant.

And, going back to the original article that spawned the parent thread, the
problem wasn't about companies requesting *new* blocks, it was about the
usage of old, already granted blocks that were now being reclaimed.

Historically, ISPs have focused on ensuring their usage of IP space
reflected the then-current requirements at the time the blocks were
requested.  This action by Ron, well-intentioned as it is, raises a new
challenge for ISPs:  network numbering decisions that were made in the
past, which may have been done perfectly according to the guidelines in
place at the time the blocks were assigned, may later on violate *newly
added* requirements put in place by RIRs.  How many global networks
allocate manpower and time cycles to potentially renumbering portions of
their network each time a new policy is put in place at an RIR that makes
previously-conforming addressing topologies no longer conforming?
Historically, once addresses were granted by an RIR and the exercise of
ensuring all the requirements were met was complete, that was it;
nobody went back every time a new policy was put in place and
re-audited the network to ensure it was still in compliance, and did the
work to bring it back into compliance if the new policy created violations,
because the RIRs generally didn't go back to see if new policies had been
retroactively applied to all member networks.

Ron's actions have now put every network on notice; it wasn't good enough
to be in compliance at the time you obtained your address space, you MUST
re-audit your network any time new policies are put into force by the RIR
in a region in which you do business, or your address space may be revoked
due to retroactive application of the new policy against addresses you have
already put into use.

This is a bigger deal than I think many people on the list have yet
grasped.

We grow up accustomed to the notion that laws can't be applied
retroactively.  If you smoked pot last year, before it was criminalized,
you can't be arrested this year, under the newly passed law, for smoking
it before that law existed.

In the DDoS-guard case, the address blocks in question seem to have been
granted by LACNIC nearly a decade ago, in 2013, under whatever policies
were in force at the time.  But they're being revoked and reclaimed based
on the policies that are in place *now*, nearly a decade later.

It sends a very clear message--it's not enough to be in compliance with
policies at the time the addresses are granted.  New policies can and will
be applied retroactively, so decisions you made in the past that were valid
and legal, may now be invalid, and subject you to revocation.  It's bad
enough when it's your own infrastructure that you have some control over
that you may need to re-number; woe to you if you assign address blocks to
*customers* in a manner that was valid under previous policy, but is no
longer valid under new policies--you get to go back to your customers, and
explain that *they* now have to redo their network addressing so that it is
in compliance, in order for *you* to be in compliance with the new
policies.  Otherwise, you can *all* end up losing your IP address blocks.

So--while I think Ron's actions were done with the best of intentions, I
think the fallout from those actions should be sending a chill down the
spine of every network operator who obtained address blocks under policies
in place a decade ago that hasn't gone back and re-audited their network
for compliance after every subsequent policy decision.

What if one of *your* customers falls into Ron's spotlight; is the rest of
your network still in compliance with every RIR policy passed in the years
or decades since the addresses were allocated?  Are you at risk of having
chunks of your IP space revoked?

I know this sets a precedent *I* find frightening.  If it isn't scaring
you, either you don't run a network, or I suspect you haven't thought all
the way through how it could impact your business at some unforeseen point
in the future, when a future policy is passed.  :/

Thanks!

Matt


Re: Nice work Ron

2021-01-23 Thread Matthew Petach
On Sat, Jan 23, 2021 at 1:11 AM JORDI PALET MARTINEZ via NANOG <
nanog@nanog.org> wrote:

> When you sign a contract with a RIR (whatever RIR), is always 2 parties,
> so majority of resources operated in the region (so to have the complete
> context) clearly means that you are using in the region >50% of the
> provided IPs.


No.

If you operate a global backbone on six continents,
and obtain a block of addresses to use for building
that backbone, you can easily end up in a situation
where there is no continent with >50% utilization of
resources; it can easily end up with the space being
split 10%, 10%, 20%, 25%, 35%.  Every time I have
gone to an RIR for resources, and have described the
need, explaining that the largest percentage of the
addresses will be used within the primary region
has been sufficient.  No RIR has stated that a global
backbone buildout can only be built in a region if > 50%
of the addresses used on that backbone reside within
their region.  Otherwise, you end up at a stalemate
with no RIR able to allocate addresses for your backbone
in good faith, because no single region accounts for more
than 50% of the usage.

"Mainly" has been interpreted to be "the largest percentage"
every time I have requested space.
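The difference between the two readings can be made concrete (a sketch using the hypothetical regional split from the example above; the region names and percentages are illustrative only):

```python
# Hypothetical per-region utilization of one global backbone allocation,
# matching the illustrative 10/10/20/25/35 split mentioned above.
utilization = {"AFRINIC": 0.10, "LACNIC": 0.10, "APNIC": 0.20,
               "RIPE NCC": 0.25, "ARIN": 0.35}

# Under a strict ">50% in-region" rule, no RIR could serve this network:
meets_majority_rule = any(share > 0.50 for share in utilization.values())
print(meets_majority_rule)  # False

# Under a "largest percentage" (plurality) reading, one RIR clearly can:
primary_region = max(utilization, key=utilization.get)
print(primary_region)  # ARIN
```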

If RIRs start to put a >50% requirement in place, you're
going to see global backbone providers put into the awkward
position of having to lie about their buildout plans--so they're
going to consistently vote against language that explicitly says
">50%" just so that nobody is put into the position of having to
knowingly lie on an attestation.

I understand where you're coming from; but as someone who
has built global infrastructure in the past, I think it would be
good to consider the view from the other side of the table,
and realize why the language is kept a bit more loose, to
allow for the creation of infrastructure that spans multiple
regions.

Thanks!

Matt


Re: not a utility, was Parler

2021-01-11 Thread Matthew Petach
On Mon, Jan 11, 2021 at 4:23 AM Rod Beck 
wrote:

> Declare Facebook a public utility and eliminate advertising by replacing
> with a fee or what you call a tariff. Breaking up does not always work.
> Facebook is like a natural monopoly - people want one site to connect with
> all their 'friends'. No one is going to use several Facebooks as social
> media platform. They want one.
>
> Regards,
>
> Roderick.
>

I think you would quickly find that Facebook became a much emptier place
the moment you started charging standardized tariffs to access the service.

How many people here would shell out $10/month to scroll endlessly
through their timeline, or wall, or whatever facebook calls it these days?

I don't even use Facebook for free these days; charging a tariff?  Yeah,
that's going to result in a ghost town pretty quickly.

People want one *free* site to connect to all their friends.  They've
already learned that it's a non-starter trying to get their friends to
join them on a platform that charges a monthly tariff.

It's only a natural monopoly because the advertising is subsidizing the
free nature of it.  Take away the free aspect, and suddenly it's not a
very natural monopoly at all.

Matt


Re: Parler

2021-01-11 Thread Matthew Petach
On Sun, Jan 10, 2021 at 7:53 PM William Herrin  wrote:

> On Sun, Jan 10, 2021 at 6:58 PM Matthew Petach 
> wrote:
> > Private businesses can engage in prior restraint all they want.
>
> Hi Matt,
>
> You've conflated a couple ideas here. Public accommodation laws were
> passed in the wake of Jim Crow to the effect that any business which
> provides services to the public must provide services to all the
> public. Courts have found such laws constitutional. Not to mention the
> plethora of common-law precedent in this area. You can set rules and
> enforce them but those rules can't arbitrarily exclude whole classes
> of people nor may they be applied capriciously.
>

...unless the higher calling of "religious freedom" is at stake,
in which case, sure, it's OK to exclude entire classes of people
if serving them would go against your religious beliefs, per the
precedent set by *Masterpiece Cakeshop v. Colorado Civil Rights
Commission*, 584 U.S. ___ (2018).

Businesses which post the sign that starts, "we reserve the right,"
> are quite mistaken. If a customer is rejected and removed without good
> cause and thereby injured, a business can find itself on the losing
> end of a lawsuit.
>

But if a customer is simply denied service based on a
category that the business provider claims is against
their religious beliefs, and no injury takes place,
the courts have provided precedent in support of
such exclusion.


> "No shirt, no shoes, no service," on the other hand, is entirely
> enforceable so long as that enforcement is consistent.
>
> The legal term "prior restraint" is even more narrowly focused. It
> refers only to blocking publication on the grounds that the material
> to be published is false or otherwise harmful. The government is
> almost never allowed to do so. Instead, remedies are available only
> after the material is published.
>

Fair enough; I used the phrase "prior restraint" in a completely
amateur and inaccurate way to indicate a business taking action
against a customer prior to actual harm being done.


> With private organizations it gets much more complicated. No
> organization is compelled to publish anything. But then section 230 of
> the CDA comes in and says: if you exercise editorial control over
> what's published then you are liable for any unlawful material which
> is published. More precisely, common law precedent says you're liable
> for what you publish. Section 230 grants immunity to organizations who
> _do not_ exercise editorial control. But what is editorial control?
> The courts have been all over the place on that one.
>

Amazon, Google, and Apple did not exercise editorial control
over the content; they severed a customer relationship, which
is well within the rights of any business.  They didn't keep Parler
on the platform, but say "you can't say the following words in
any of your posts" -- which would have put them on shakier
grounds; they simply said "sorry, we don't want you as a customer
any longer."

If you're my customer, and my terms of service allow me to
terminate my relationship with you at any time for a list of
reasons, then I can terminate my relationship with you at
any time, based on those reasons.

As ISPs, we depend on TOS clauses like that to allow
us to terminate customers that are DDoSing others, are
attacking others, are causing harm to others, are posting
illegal content, etc.

If you're notified of CSEI on your platform, removing
access to it and turning it over to the FBI doesn't put
you in jeopardy of violating section 230 immunity.
You're not acting as a moderator of content, you're
enforcing your terms of service and cooperating with
law enforcement.

If I kick a customer off because their check bounced,
I'm not moderating their content, I'm severing my
relationship with a customer for non-payment.

Of course, I'm still a complete layman, and I bow to
John Levine's *much* more nuanced and accurate
explanation of the difference, which I've hopelessly
mangled in this discussion.   ^_^;;



> Regards,
> Bill Herrin
>

Matt


Re: Parler

2021-01-10 Thread Matthew Petach
Oh, geez...
I was going to ignore this thread, I really was.  :(


On Sun, Jan 10, 2021 at 6:13 PM Keith Medcalf  wrote:

> >The first amendment deals with the government passing laws restricting
> >freedom of speech. It has nothing to do with to whom AWS chooses to sell
> >their services. It is also not absolute (fire, crowded theater, etc.)
>
> You are correct and incorrect.  The First Amendment prohibits the
> Government from passing laws which constitute "prior restraint".  It does
> nothing with respect to anyone other than the "Government" and its agents.
>
> You are also incorrect.  Freedom of Speech is Absolute.  There is no prior
> restraint which precludes you from "(fire, crowded theatre, etc.)" whatever
> that means.  That does not mean that speech does not have "consequences".
> The first amendment only protects against prior restraint, it does not
> protect against the suffering of consequences.  And of course
> "consequences" come AFTER the speech, not BEFORE the speech.
>
> Furthermore your "(fire, crowded theater, etc.)" (whatever the hell that
> means) cannot, as a matter of fact, possibly justify any action taken prior
> to the so-called speech having been made as that would be an assumption of
> fact not in evidence (also known as a hypothetical question) and the courts
> do not rule on hypotheticals.  If you do not understand the difference then
> perhaps you should be sentenced to death since you have a hand, and having
> a hand it could hold a gun, and since it could hold a gun, you could also
> murder someone.  So therefore you should be put to death now as "prior
> restraint" to prevent you from committing murder.
>

You're being dense.

Private businesses can engage in prior restraint all they want.

Airlines, for example, if they suspect you pose a risk to the
other passengers on the flight, can refuse to take off while you
are on the plane, or even turn the plane back around and land,
and have you ejected.

They don't have to wait until you've beaten up another passenger,
tried to open a door mid-flight, or stabbed someone.

To bring it closer to home, an ISP can refuse to provide service to
someone they suspect is a spammer.  They don't have to wait until
the first spam is sent, they can exercise prior restraint and deny the
entity service based simply on the suspicion they may be a spammer,
and therefore not worth providing service to.

I am neither a lawyer nor a yankee doodle and I know these facts to be
> self-evident.
>

I am sorry to say your grasp of facts seems to be tenuous at best.  :(

Better luck in the next reality.

Matt


Re: Parler

2021-01-10 Thread Matthew Petach
On Sun, Jan 10, 2021, 14:06 Keith Medcalf  wrote:

>
> The world is now a different place with the election of the Nazi's.
>


OK, it's now official.

I'm invoking Godwin's Law on this thread.

*plonk*

Matt


Re: Parler

2021-01-10 Thread Matthew Petach
On Sun, Jan 10, 2021 at 12:29 PM Mel Beckman  wrote:

> It’s gratifying to see the many talented engineers here working on a
> solution to the underlying problem: Censorship. Don’t confuse freedom of
> speech (which protects us from government censorship) with freedom of
> commerce, which is a uniquely American aspect of Internet design.
>
> As John Gilmore wisely said, “The Net interprets censorship as damage and
> routes around it.”
>
> Let’s start helping the free market route around the censors!
>
>  -mel
>

I'm sorry, Mel.

The market hasn't been "free" for quite a long time.

There's easy solutions to the problem--hiring really good engineers
to write your own AWS-lookalike where you can host whatever content
you want, hosted in buildings you've built on land you've bought.

It's most definitely not free, though.

I'm available to start immediately--but I charge $2M/year, plus expenses,
including a full security detail to protect me from the types of people who
are likely to use the platform once it's done.

You're "free" to hire me--but I most definitely don't work for free.

You're "free" to buy your own land to build a datacenter on; but it
most definitely isn't available for free.

You're "free" to buy electricity to run the servers you put in that
datacenter you've built--but the electricity is most definitely not
arriving for free.

The part Gilmore missed is that the 'Net only routes around
censorship if someone is willing to foot the bill.

Matt


Re: Parler

2021-01-10 Thread Matthew Petach
On Sun, Jan 10, 2021 at 12:03 PM Michael Thomas  wrote:

>
> On 1/10/21 11:11 AM, Bryan Fields wrote:
> >
> > Anyone hosting with Amazon/Google/the cloud here should be really
> concerned
> > with the timing they gave them, 24 hours notice to migrate.  Industry
> > standards would seem to be at least 30 days notice.  Note this is not the
> > police/courts coming to the host with notice that they are hosting
> illegal
> > content but only the opinion of the provider that they don't want to
> host it.
> >
> Considering that it seems that there continues to be talk/planning of
> armed insurrection, I think we can forgive them for violating
> professional courtesy.
>
> Mike
>

I thought the boot was announced after physical threats were made
against Google and Apple facilities and employees for removing the
app from the app stores?

There's professional courtesy; but the moment you start threatening
to bomb datacenters and kill employees, it's pretty clear professional
courtesy has been forcibly thrown through the reinforced double-glazed
energy-efficient windows and has plummeted straight through the roof
of the classic Cadillac in the parking lot ten stories below.  :(

Matt


Re: Parler

2021-01-10 Thread Matthew Petach
On Sun, Jan 10, 2021 at 7:06 AM  wrote:

> Another interesting angle here is that it was ruled the President couldn’t
> block people, because his Tweets were government communication. So has
> Twitter now blocked government communication?
>
>
They blocked Trump's personal account, not the White House
or the official Twitter account of the President.

They're perfectly within their rights to block the
accounts of individuals violating their terms of service.

Matt


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-05 Thread Matthew Petach
On Tue, Jan 5, 2021 at 4:31 PM Chris Adams  wrote:

> Once upon a time, Matthew Petach  said:
> [...]
>
> I don't know if an unsubscribed cell phone gets the emergency alerts (I
> know you are supposed to be able to call 911 from any cell phone, even
> if not carrying paid service).  If so, that'd be another cheap way to
> get alerts.
>

Now *that* sounds like a good, feasible middle ground;
simply ensure cellular devices don't need an active plan
to get alerts sent, and we can give out older, donated
phones to people who don't have their own to act as
receivers for alerts.

Much easier than trying to stuff receivers into smoke detectors!  :)

Thanks!


> --
> Chris Adams 
>

Matt


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-05 Thread Matthew Petach
On Mon, Jan 4, 2021 at 7:11 PM Billy Crook  wrote:

> Then again how many people would benefit from adding this to online
> streaming, but don't already have cellphones that have emergency alert
> popups that get their attention.  The kind of people who don't have
> smartphones are going to be the ones still watching bunny ears television
> anyway.  In other words, you're not going to reach the people who *don't* have
> smartphones by ADDING more technology.
>

/* begin semi-annoyed and frustrated rant */

That would be incorrect.
My partner is one of the more tech-savvy people on the planet; she's
contributed code to the core sendmail implementation,
she's single-handedly torn down and completely rebuilt the entire
infrastructure for
a silicon valley company in a matter of days following a compromise,
she's written software for monitoring millions of devices within global
networks.
She consumes information voraciously online.
She doesn't wiggle bunny ears on the television.

And she's never bought a cell phone.

If we're going to postulate every citizen of the country having a cell
phone,
then we should first postulate the system whereby the government provides
them free to every citizen, with a minimum level of access provided free to
all users.

*Then* you might be able to start making broad, sweeping claims about
who has cell phones, and who doesn't.

Otherwise, you're kinda talking out your backside, saying "well, I own a
cell phone,
and I can't possibly imagine anyone like me not owning a cell phone,
therefore they
must not exist."

The failure happening here is your ability to imagine the existence of
people unlike yourself,
not the failure of such people to actually exist.

Up to now, they've simply fallen through the cracks.

The Alert Study in question is simply asking "is there a way we can
use technology so they no longer fall through the cracks?"

It's a valid question to ask, and as I'm sitting scarcely a dozen feet
from one of the people that explicitly falls into the category the study
seeks to address, I can vouch that it is a non-zero population, and it
is a population that is indeed missed by our current alerting systems.

We may eventually decide it's too technologically challenging to serve
them, and decide to let them continue falling through the cracks.

But let's at least do the exercise of looking to see if a solution exists,
rather than simply claiming the problem doesn't exist.
Because I can personally attest that yes, there are technically savvy
people who consume online content who have never bought a cell
phone, and have no interest in paying dozens of dollars a month to
a company for a device they have no desire to ever use.

/* end rant */

Thank you for your time and patience in reading,
or at least your silence in your use of the d key,

Matt


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-03 Thread Matthew Petach
On Sun, Jan 3, 2021 at 5:03 PM Keith Medcalf  wrote:

> >I think the challenge here is that there's a category of people
> >who don't have cell phones, who don't have cable TV, but
> >receive content over their internet connection.  I happen to
> >live with someone like that, so I know it's a non-zero portion
> >of the population.
>
> I pay for my Internet connection and I do not want "your shit" to be
> spending "my money".  If you think this is oh so important then *YOU* can
> pay to install at your sole expense, a device which emits your silly
> warnings -- I do not want them.  You will also have to negotiate for
> easement rights on my Private Property and those are not going to be given
> away for cheap.
>

I take it you chant the same diatribe at your television when your
video bandwidth is *stolen* by the emergency broadcast notification
system, as it marches across the top of your screen?

After all, you've paid for your television feed, and you don't
want those "emergency broadcast messages" spending
*your money*.  Dammit, how dare they interrupt those
precious seconds of the nightly news to tell you there's
a flash flood warning for your county?

There's already precedent set.

I think that ship sailed long before you started attempting to drill holes
through the hull.
;P

Matt


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-02 Thread Matthew Petach
On Sat, Jan 2, 2021 at 5:45 PM Max Harmony via NANOG 
wrote:

>
> On 02 Jan 2021, at 19.18, Matthew Petach  wrote:
> > I think the challenge here is that there's a category of people
> > who don't have cell phones, who don't have cable TV, but
> > receive content over their internet connection.  I happen to
> > live with someone like that, so I know it's a non-zero portion
> > of the population.
>
> Emergency alerts are also on OTA TV (and radio), not just cable. People
> whose sole communications device is a computer can subscribe to FEMA'S
> IPAWS feed. People who can't (or don't want to) do that can use a weather
> radio (despite the name, NWS broadcasts all hazards alerts, not just
> weather). The most likely answer to "how do we get streaming services to
> provide emergency alerts?" is to make them redistribute the IPAWS feed and
> update their software to make the updates human-readable. It would probably
> be cheaper to just tell people where to find free IPAWS software instead of
> making every streaming service add the feature, and, as a last resort, give
> people who need them free weather radios.
>

The FEMA IPAWS system doesn't seem well-suited for end-users to
subscribe to it.  Of note is the specific restriction:


   - Providers cannot stress IPAWS servers with excessive requests.


Which might hint that the FEMA servers aren't intended to support
hundreds of thousands of individuals connecting directly to them
to request alert data.

It doesn't look like there's currently any internet-capable way of
consuming the IPAWS feed, at least that a quick search engine
dive turns up.  Wondering if any of the folks here know of providers
that have signed up with FEMA to redistribute the IPAWS feed
for free?  (Yes, I found WeatherMessage, but their pricing and
platform restrictions made them a non-starter).
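For anyone who does get access to the feed: IPAWS distributes alerts in the OASIS Common Alerting Protocol (CAP) XML format, so the "make the alerts human-readable" step is mostly XML parsing. Here's a minimal sketch of pulling the human-readable fields out of a CAP 1.2 alert; the sample payload is invented purely for illustration, not taken from a real IPAWS message.

```python
# Minimal CAP 1.2 parsing sketch -- the SAMPLE document below is
# an invented example, not an actual IPAWS alert.
import xml.etree.ElementTree as ET

CAP_NS = "{urn:oasis:names:tc:emergency:cap:1.2}"

SAMPLE = """<?xml version="1.0"?>
<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <identifier>EXAMPLE-001</identifier>
  <status>Actual</status>
  <msgType>Alert</msgType>
  <info>
    <event>Flash Flood Warning</event>
    <urgency>Immediate</urgency>
    <headline>Flash Flood Warning for Example County</headline>
  </info>
</alert>"""

def summarize_cap(xml_text):
    """Extract the human-readable fields from one CAP alert document."""
    root = ET.fromstring(xml_text)
    info = root.find(f"{CAP_NS}info")
    return {
        "id": root.findtext(f"{CAP_NS}identifier"),
        "event": info.findtext(f"{CAP_NS}event"),
        "urgency": info.findtext(f"{CAP_NS}urgency"),
        "headline": info.findtext(f"{CAP_NS}headline"),
    }
```

A streaming service redistributing the feed would loop something like this over each alert and render the headline as an overlay.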

Thanks!

Matt


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-02 Thread Matthew Petach
On Sat, Jan 2, 2021 at 6:51 AM Sander Steffann  wrote:

> Hi,
> [...]
> Just to be clear: this is talking about IP traffic, not things like
> SMS-CB, right? When there are already cell broadcast alerts, I have the
> feeling that adding alerts to IP traffic (however that would be
> supposed to work) wouldn't add that much coverage…
>
> Cheers,
> Sander
>

I think the challenge here is that there's a category of people
who don't have cell phones, who don't have cable TV, but
receive content over their internet connection.  I happen to
live with someone like that, so I know it's a non-zero portion
of the population.

For them, cell based alerts don't reach them, and cable TV
inserted alerts don't reach them.

Thus, the question of "can we reach them via IP-based
alerting?"

I think it's a good investigation to undertake, as there's
clearly a population that is left out of the current alerting
systems we have in place.

Matt


Re: [External] Re: 10g residential CPE

2020-12-29 Thread Matthew Petach
On Mon, Dec 28, 2020 at 4:26 PM Niels Bakker  wrote:

> * mpet...@netflight.com (Matthew Petach) [Tue 29 Dec 2020, 01:08 CET]:
> >But as far as the physics goes, the conversion of biomatter into
> >petrochemicals in the ground is more "renewable" than the conversion
> >of hydrogen into helium in the sun.
>
> It's not. Where did Mr Metcalf think the energy comes from that is
> necessary for that process? You know, the energy that we can now
> extract by burning it?
>

The same place that provides the energy that gets
water back to the top of the mountains to make
hydroelectric energy "renewable".  The same place
that provides the energy that heats air masses to
different temperatures around the planet, creating
wind currents that move wind turbines to generate
"renewable" electricity.

It's just that water and wind energy cycles work on
shorter time cycles; those cycles are measured in
months and weeks, not in millennia the way the
absorption of solar energy by plants and then
eventual breakdown into petrochemicals underground
takes.

We have short-term renewables, like wind and
hydro; we have longer-term renewables like
oil and coal that take longer than the course
of human history to renew; and then we have
a completely consumable resource called the
sun which powers all the rest, but is itself on a
one-way trip to eventual extinction, albeit on a
much longer time scale.

I'm a huge fan of solar power, of wind power,
and pumped hydro energy storage.  But from
a long enough time horizon, it all depends on
a single, non-renewable energy source--the sun.

We just have the luxury of punting that concern
a few billion years down the road.   ;)

Coming back slightly more on topic--multiple
diverse power sources are always good to have,
but I'm mindful of the fried rodent incident at
Forsythe Hall from the mid-90s.  BARRnet
and SUNet were both impacted when the
datacenter there was taken completely offline
from a power perspective, in spite of having
two different off-campus power providers, plus
a local cogeneration plant and a generator out
in the parking lot.  One rodent in the heart of
the transfer switch made all the different power
feeds completely moot.  From a "single point of
failure" perspective, the transfer switch tends to
be the weakest link in the chain.  Has anyone
developed a distributed transfer switch, split
into different locations in a building, fed at different
entry points, that can withstand one portion of the
transfer system being knocked out?

Thanks!

Matt
(yes, Earth *is* a single point of failure...for now)


Re: [External] Re: 10g residential CPE

2020-12-28 Thread Matthew Petach
On Sun, Dec 27, 2020 at 12:28 PM Mark Tinka  wrote:

>
>
> On 12/27/20 21:56, Keith Medcalf wrote:
>
> > Me too.  On top of that, diesel and gasoline are pretty reliable.
> Though some people may argue about "renewables" the fact is that it is all
> a matter of time-frame.  Solar power, for example, is not renewable.  Once
> it is all used up, it will not "renew" itself -- and this "using up"
> process is quite independent of our usage of it, as it happens.  The time
> to depletion may be somewhat long, but it still has a time to depletion.
> Oil and Gas, however, is a "renewable" resource and as a mere physical and
> chemical process it is occurring at this very moment.
>
> Well, the sun can't be "used up". You just have to wait 12hrs - 15hrs
> before you can see it again :-).
>


Mark,

I think you may have misunderstood Keith's comment about
it being "all a matter of time-frame."

He's right--when the sun consumes all the hydrogen in
the hydrogen-to-helium fusion process and begins to
expand into a red giant, that's it; there's no going
backwards, no putting the genie back into the bottle,
no "renewing" the sun.  It's purely a one-way trip.

Now, as far as humans go, we're far more likely to be
extinct due to other reasons before we come anywhere
near to that point.

But as far as the physics goes, the conversion of biomatter
into petrochemicals in the ground is more "renewable" than
the conversion of hydrogen into helium in the sun.

It's just that we're far more likely to hit the near-term
shortage crunch of petrochemicals in the ground than
we are the longer-term exhaustion of hydrogen in the
core of the sun.   ;)

Matt


Re: Unexplainable router log entries mentioning IPSEC from Yahoo IPs

2020-12-19 Thread Matthew Petach
In this case, however, what's being seen is simply valid traffic
which was most likely erroneously redirected through an
internal encryption device.

I would hazard a guess the folks involved have already jumped
on checking the redirector rules to fix the leakage which allowed
external IPs to be passed through the internal encryption pathway.

I helped build the system that's causing those messages, so I have
a bit of a guess as to what the issue is.  I'm no longer an employee,
however, so I can't fix the issue.  But in this case, those boxes really
aren't trying to attack you--they just aren't supposed to be sending
traffic externally like that.

So, it actually is good to speak up about this traffic--because it's a
fixable issue, and one that should be addressed at the source.

Thanks!

Matt
#notspeakingofficiallyforanyoneoranything


On Fri, Dec 18, 2020 at 9:05 PM Dobbins, Roland 
wrote:

>
>
> On Dec 19, 2020, at 01:19, Frank Bulk  wrote:
>
> Curious if someone can point me in the right direction. In the last three
> days our core router (Cisco 7609) has logged the following events:
>
> Dec 16 19:04:59.027 CST: %CRYPTO-4-RECVD_PKT_INV_SPI: decaps: rec'd IPSEC
> packet has invalid spi for destaddr=, prot=50,
> spi=0xEF7ED795(4018067349), srcaddr=68.180.160.18, input interface=Vlan20
>
>
> It should be noted that attackers will sometimes generate
> non-TCP/-UDP/-ICMP DDoS attack traffic which is intended to bypass ACLs,
> firewall rules, etc. which only take the more common protocols into
> account. They'll often pick ESP (protocol 50, AH (protocol 51), or GRE
> (protocol 47) in order to try & masquerade the attack traffic as legitimate
> VPN or tunneled traffic.
>
> And the source IPs of this attack traffic are frequently spoofed, as well.
>
> 
>
> Roland Dobbins 
>
>
>


Re: Gaming Consoles and IPv4

2020-09-28 Thread Matthew Petach
...I'm guessing someone didn't read "Harrison Bergeron" in middle school,
then?

Crippling everyone down to the lowest common denominator is a wonderful
recipe for creating a service or platform that *nobody* wants to use.

If I connect through an AOL dialup account to an FPS gaming platform,
you really, *really* shouldn't be adding 300ms of latency to everybody
else on that server, just to be fair to me.

I mean, sure, it's *fair*--but it also makes the game far less playable
for everyone else, and they'd be completely right to stop paying for
the service and move over to a different platform that doesn't hobble
their game playing any time someone on a slow connection joins
the game.^_^;;

Matt


On Mon, Sep 28, 2020 at 1:30 PM Matt Hoppes <
mattli...@rivervalleyinternet.net> wrote:

> Correct - but with a server based model you can look at the lag to the
> worst clients and add lag to the other clients so everyone has a level
> playing field.
>



Re: Gaming Consoles and IPv4

2020-09-28 Thread Matthew Petach
The number of times when a decision is *both*
cheaper *and* better is minuscule compared to
when the decision is being made to optimize
one axis relative to the other.  And in an industry
with narrow margins, most often that decision will
run squarely along the "cheaper" axis, at the expense
of the "better" axis.

I'm sure you've faced that same decision in your
business, the same as the rest of us over the years...

Matt

On Mon, Sep 28, 2020 at 8:17 AM Mike Hammett  wrote:

> Yet (apparently) worse?
>


> *From: *"Tom Beecher" 
> *To: *"Mike Hammett" 
> *Cc: *"Justin Wilson (Lists)" , "North American Network
> Operators' Group" 
> *Sent: *Monday, September 28, 2020 9:21:09 AM
> *Subject: *Re: Gaming Consoles and IPv4
>
> Why stray away from how PC games were 20 years ago where there was a
>> dedicated server and clients just spoke to servers?
>
>
> Much cheaper to just let all the game clients talk peer to peer than it is
> to maintain regional dedicated server infrastructure.
>


Re: TCP and UDP Port 0 - Should an ISP or ITP Block it?

2020-08-25 Thread Matthew Petach
On Tue, Aug 25, 2020 at 8:36 AM Mel Beckman  wrote:

> “SHOULD” is not “SHALL”, and thus this doesn’t countermand RFC 768’s
> instruction “ If not used, a value of zero is inserted." So the key
> question is, when is the source port not used? When a reply is not
> requested, is my thinking. Is there an application that implements this in
> UDP? (it’s nonsensical in TCP, which always requires a handshake, after
> all). I don’t recall one, but I can envision one: sending a one-way
> notification that requires no acknowledgement.
>

There are many applications that send UDP streams that don't expect a reply.

Here's one I worked on at previous $DAYJOB:
https://github.com/yahoo/UDPing

It emits a stream of UDP packets to a measurement box,
which collects the data and reports statistics on it.  No replies
to the UDP probes are sent.

But there's another, more common application that many
people on this list use every day, and indeed was likely the
initial trigger for this thread:
netflow collection.

Your routers emit UDP data streams, destined for a netflow collector box;
no reply is expected (and indeed, no reply is desired; the router is busy
enough *sending* the netflow stream, trying to process replies would just
be another burden on the CPU).
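The pattern both UDPing and netflow export rely on can be sketched in a few lines: the sender fires sequenced, timestamped datagrams and never reads a reply; only the receiver computes statistics. This is a toy illustration of that one-way design, not the actual UDPing or netflow code.

```python
# One-way UDP probe sketch: sender never reads a reply; the receiver
# alone computes loss and delay. Illustrative only -- not UDPing itself.
import socket
import struct
import time

def send_probes(dest, count=10, interval=0.0):
    """Emit `count` sequenced, timestamped datagrams; no reply is read."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(count):
        # payload: sequence number + send timestamp, network byte order
        sock.sendto(struct.pack("!Id", seq, time.time()), dest)
        if interval:
            time.sleep(interval)
    sock.close()

def collect_probes(sock, expected, timeout=1.0):
    """Receive probe datagrams; report loss count and mean one-way delay."""
    sock.settimeout(timeout)
    seen, delays = set(), []
    try:
        while len(seen) < expected:
            data, _ = sock.recvfrom(64)
            seq, sent = struct.unpack("!Id", data)
            seen.add(seq)
            delays.append(time.time() - sent)
    except socket.timeout:
        pass
    loss = expected - len(seen)
    mean = sum(delays) / len(delays) if delays else None
    return loss, mean
```

Note the sender never calls recvfrom() at all--exactly the situation where a reply (and hence a meaningful source port) is simply not needed.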

[...]

> I think filtering zero-sourced UDP flies in the face of fundamental
> Internet interoperability.
>
>  -mel
>
>

Indeed.  There are existing applications where the source port of
unidirectional UDP streams
is not used, as no replies are expected, and may be left as zero.

Matt


Re: questions asked during network engineer interview

2020-07-14 Thread Matthew Petach
On Tue, Jul 14, 2020, 11:00 Ahmed elBorno  wrote:

> 15 years ago, I applied to a network admin role at Google, it was for
> their corporate office, not even the production network.
>
> I had less than two years experience.
>
> The interviewer asked me:
> [...]
> 2) If we had a 1GB file that we need to transfer between America and
> Europe, how much time do we need, knowing that we start with a TCP size of
> X?
>


I *love* questions like that, because I can immediately respond back with
"well, that depends; did your sysadmin configure rfc1323 extension support
in your TCP stack?  Is SACK enabled?  What about window scaling?  Does your
OS do dynamic buffer tuning for TCP, or are the values locked in at start
time?"
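Those follow-up questions matter because the answer is window-limited: without rfc1323/7323 window scaling, TCP's 16-bit window field caps the receive window around 64 KiB, and throughput can't exceed one window per RTT. A back-of-the-envelope sketch, with assumed illustrative numbers (100 ms transatlantic RTT, a 1 GB file):

```python
# Window-limited TCP throughput: at most one receive window per RTT.
# RTT and window sizes here are assumptions for illustration.
def tcp_transfer_time(file_bytes, window_bytes, rtt_s):
    throughput = window_bytes / rtt_s   # bytes per second, window-limited
    return file_bytes / throughput      # seconds to move the file

GB = 10**9
RTT = 0.100  # assumed 100 ms transatlantic round-trip

# Classic 64 KiB window, no window scaling: roughly 25 minutes.
t_unscaled = tcp_transfer_time(GB, 64 * 1024, RTT)

# 16 MiB window with scaling enabled: roughly 6 seconds.
t_scaled = tcp_transfer_time(GB, 16 * 1024**2, RTT)
```

Which is the whole point of the pushback: the "right" answer changes by two-plus orders of magnitude depending on how the stack is tuned.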

Depending on how the interviewer responds gives me a pretty good idea how
much clue the people I'd be working with have, and how well they work
collaboratively even with people they don't really know.  If they respond
well on their feet, and give me better inputs, I respond with a better
answer.

If they say "It doesn't matter", then I respond by saying "See, that's why
things aren't working so well for you here; you don't really understand how
far down the rabbit hole goes", and respectfully ask to end the interview
before we waste any more of each other's time.

I *love* teaching--but only with people who are open to learning.

Stay safe!

Matt





Re: Arista Switches rebooting

2020-05-04 Thread Matthew Petach
Just history repeating itself... ;)

https://www.cisco.com/c/en/us/support/docs/field-notices/200/fn25994.html

https://www.networkworld.com/article/3122864/cisco-says-router-bug-could-be-result-of-cosmic-radiation-seriously.html

As the process size in fabrication gets smaller and smaller, it takes less
and less energy hitting a device to cause spurious events like these.

"smaller, faster, cheaper" does come with a few trade-offs.   ^_^;;

Matt


On Mon, May 4, 2020, 08:32 Javier Gutierrez Guerra 
wrote:

> EOS 4.22.0.1F
>
>
>
> But after contacting Support, the issue seems to be related to an ECC issue
> that causes the CPU to reset, so an Aboot upgrade is required
>
> Field Notice 0044 - Arista
> 
>
>
>
> Javier Gutierrez Guerra
>
>
>
> *From:* Ariel Biener 
> *Sent:* Monday, May 4, 2020 9:31 AM
> *To:* Javier Gutierrez Guerra ; nanog@nanog.org
> *Subject:* Re: Arista Switches rebooting
>
>
>
> *CAUTION: *This email is from an external source. Do not click links or
> open attachments unless you recognize the sender and know the content is
> safe.
>
> Eos version?
>
>
> --
>
>
>
> *From:* NANOG  on behalf of Javier Gutierrez
> Guerra 
> *Sent:* Monday, May 4, 2020 5:27 PM
> *To:* nanog@nanog.org
> *Subject:* Arista Switches rebooting
>
>
>
> Hi,
> Has anyone had issues with Arista switches rebooting out of the blue, when
> there isn't even a sufficient load on them to be a CPU or memory issue?
> We have a couple Arista 7280s both SR and CR that have had this behaviour,
> this is the second time we see this issue and just wanted to see if this is
> something anyone else is experiencing with this platform
> Thanks,
>
> Javier Gutierrez Guerra
>


Re: Google peering pains in Dallas

2020-04-30 Thread Matthew Petach
On Thu, Apr 30, 2020 at 11:43 AM Christopher Morrow 
wrote:

> On Thu, Apr 30, 2020 at 2:39 PM Aaron C. de Bruyn 
> wrote:
> >
> > Why isn't there a well-known anycast ping address similar to
> CloudFlare/Google/Level 3 DNS, or sorta like the NTP project?
> > Get someone to carve out some well-known IP and allow every ISP on the
> planet to add that IP to a router or BSD box somewhere on their network?
> Allow product manufacturers to test connectivity by sending pings to it.
> It would survive IoT manufacturers going out of business.
> > Maybe even a second well-known IP that is just a very small webserver
> that responds with {'status': 'ok'} for testing if there's HTTP/HTTPS
> connectivity.
> >
>
> It sounds like, to me anyway, you'd like to copy/paste/sed the AS112
> project's goals, no?
>


Or at least expand on it, to define specific IPs within
192.175.48.0/24
and
2620:4f:8000::/48
as ICMP/ICMPv6 probe destinations

If every manufacturer knew that, say 2620:4f:8000::58
was going to respond to ICMPv6 ping requests (::58 chosen
purely because it matches the IPV6-ICMP protocol number),
it would surely make it easier for them to do "aliveness"
probing without worries that a single company might go out
of business shortly after releasing their product.

Certainly worthy of proposing to the AS112 operators,
I would think.   :)

Matt


Re: Comcast - Significant v4 vs v6 throughput differences, almost stateful.

2020-04-23 Thread Matthew Petach
On Thu, Apr 23, 2020 at 12:45 PM Sabri Berisha 
wrote:

> - On Apr 23, 2020, at 8:06 AM, Nick Zurku 
> wrote:
>
> We’re having serious throughput issues with our AS20326 pushing packets to
> Comcast over v4. Our transfers are either the full line-speed of the
> Comcast customer modem, or they’re seemingly capped at 200-300KB/s. This
> behavior appears to be almost stateful, as if the speed is decided when the
> connection starts. As long as it starts fast it will remain fast for the
> length of the transfer and slow if it starts slow. Traces seem reasonable
> and currently we’ve influenced the path onto GTT both ways. If we prepend
> and reroute on our side, the same exact issue will happen on another
> transit provider.
>
> Have you tried running a test to see if there may be ECMP issues? I wrote
> a rudimentary script once, https://pastebin.com/TTWEj12T, that might help
> here. This script is written to detect packet loss on multiple ECMP paths,
> but you might be able to modify it for througput.
>
> The rationale behind my thinking is that if you have certain ECMP links
> that are oversubscribed, the TCP sessions following that path will stay
> "low" bandwidth. Sessions what win the ECMP lottery and pass through a
> non-congested ECMP path may show better performance.
>
> Thanks,
>
> Sabri
>


And for a slightly more formal package to do this,
there's UDPing, developed by the amazing networking
team at Yahoo; it was written to identify intermittent
issues affecting a single link in an ECMP or L2-hashed
aggregate link pathway.

https://github.com/yahoo/UDPing

It does have the disadvantage of being designed for
one-way measurement in each direction; that decision
was intentional, to ensure each direction was measuring
a completely known, deterministic pathway based on the
hash values in the packets, without the return trip potentially
obscuring or complicating identification of problematic links.

But if you have access to both the source and destination ends
of the connection, it's a wonderful tool to narrow down exactly
where the underlying problem on a hashed ECMP/aggregate
link is.
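The reason a session's fate is "decided when the connection starts" falls out of how per-flow ECMP works: the 5-tuple hash picks one member link for the lifetime of the flow, so a flow that lands on a congested member stays slow. This toy model (not any vendor's actual hash function) shows both the flow pinning and why sweeping source ports, as these measurement scripts do, exercises different member links:

```python
# Toy per-flow ECMP model -- hashlib stands in for a vendor hash.
# Same 5-tuple -> same link every time (flow pinning); varying the
# source port walks the flow across different member links.
import hashlib

def ecmp_link(src_ip, dst_ip, src_port, dst_port, proto, n_links):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# Sweeping the source port over a 4-way bundle hits multiple members:
links = {ecmp_link("192.0.2.1", "198.51.100.7", p, 443, 6, 4)
         for p in range(33000, 33064)}
```

A probe sweep like this is what lets you catch the one bad member link that ordinary single-flow tests keep randomly missing.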

Matt


Re: Route aggregation w/o AS-Sets

2020-04-15 Thread Matthew Petach
I apologize if I wasn't clear.

I don't recommend ever using AS_SET.

So, in rule 3, I use the atomic-aggregate knob
to announce the single covering aggregate with
my backbone ASN as the atomic-aggregate origin
AS, and I don't generate or propagate any AS_SET
information along with the aggregate.

That way, no loop is seen by any of the downstream
networks that are announced the aggregate prefix.

I hope that helps clear up what I meant in my third
rule.  :)

Thanks!

Matt


On Wed, Apr 15, 2020 at 11:26 AM Jakob Heitz (jheitz) via NANOG <
nanog@nanog.org> wrote:

> Suppose you had a set of customers than all announced to you a set of
> routes
> and all those routes complete an aggregate
> and you announce only the aggregate to those customers
> and you include an AS_SET with it
> then those customers will drop your aggregate, thinking there is an AS-loop
> and those customers will not be able to reach each other.
>
> An AS_SET does not prevent routing loops and can prevent correct routing.
>
> But you must include the ATOMIC_AGGREGATE attribute, so that someone else
> does not disaggregate your aggregate that does not have the AS_SET.
>
> Regards,
> Jakob.
>
> -Original Message-
> Date: Tue, 14 Apr 2020 02:32:37 -0700
> From: Matthew Petach 
>
> I generally would use the atomic-aggregate knob to
> generate aggregate routes for blocks I controlled,
> when the downstream ASN information was not
> necessary to propagate outside my network
> (usually cases where I had multiple internal ASNs,
> but all connectivity funneled through a single upstream pathway.)
>
> If you have discrete downstream ASNs with potentially
> different external pathways, you shouldn't be generating
> aggregate routes that cover them; that's just bad routing 101.
>
> Thus, my rules for aggregation always came down to:
> 1) is there more than one external/upstream pathway for the ASN and prefix?
>
> If so, don't aggregate.
> 2) is there redundant, reliable connectivity between all the external gateway
> routers that would be announcing the aggregate?
> If not, don't generate a covering aggregate.
> 3) If there's only a single upstream pathway through you for the ASN and
> prefix,
> and that won't be changing any time soon (eg, you have a collection of
> downstream
> datacenters with their own ASNs and prefixes, but they all route through a
> common
> backbone), then use the atomic-aggregate option to suppress the more
> specific
> AS_PATH information, and simply announce the space as a single aggregate
> coming
> from your backbone ASN.
>
> That way, there's no confusion with RPKI and AS_SETS; all you're ever
> announcing
> are simple AS_PATHs for a given prefix.
>
> Best of luck!
>
> Matt
>
>
>


Re: Constant Abuse Reports / Borderline Spamming from RiskIQ

2020-04-14 Thread Matthew Petach
On Tue, Apr 14, 2020, 18:14 Matt Palmer  wrote:

> [Hideously mangled quoting fixed]
>
> On Tue, Apr 14, 2020 at 02:51:55PM +0530, Kushal R. wrote:
> > Matt Palmer wrote:
> > > On Mon, Apr 13, 2020 at 11:14:11PM +0530, Kushal R. wrote:
> > > > All abuse reports that we receive are dealt within 48 business hours.
> > >
> > > At eight business hours per calendar day, and five business days per
> > > (typical) calendar week, 48 business hours is...  a week and a bit,
> > > calendar wise.
> >
> > We are a 24x7 operation.
>
Then why not just say "within 48 hours", rather than the weaselish "48
> business hours"?  Makes it seem like you're trying to clever-word yourself
> an alibi.
>
> - Matt
>

The Internet never sleeps.

Every hour on the Internet *is* a business hour.

(If you think otherwise, there's a good chance you're not running a global
operation.)

Matt


Re: Route aggregation w/o AS-Sets

2020-04-14 Thread Matthew Petach
On Mon, Apr 13, 2020 at 10:35 AM Lars Prehn  wrote:

> Hi everyone,
>
> how exactly do you aggregate routes? When do you add the AS_SET
> attribute, when do you omit it? How does the latter interplay with RPKI?
>
> Best regards,
>
> Lars
>
>
I generally would use the atomic-aggregate knob to
generate aggregate routes for blocks I controlled,
when the downstream ASN information was not
necessary to propagate outside my network
(usually cases where I had multiple internal ASNs,
but all connectivity funneled through a single upstream pathway.)

If you have discrete downstream ASNs with potentially
different external pathways, you shouldn't be generating
aggregate routes that cover them; that's just bad routing 101.

Thus, my rules for aggregation always came down to:
1) is there more than one external/upstream pathway for the ASN and prefix?

If so, don't aggregate.
2) is there redundant, reliable connectivity between all the external gateway
routers that would be announcing the aggregate?
If not, don't generate a covering aggregate.
3) If there's only a single upstream pathway through you for the ASN and
prefix,
and that won't be changing any time soon (eg, you have a collection of
downstream
datacenters with their own ASNs and prefixes, but they all route through a
common
backbone), then use the atomic-aggregate option to suppress the more
specific
AS_PATH information, and simply announce the space as a single aggregate
coming
from your backbone ASN.

That way, there's no confusion with RPKI and AS_SETS; all you're ever
announcing
are simple AS_PATHs for a given prefix.
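The three rules above amount to a small decision procedure; here is one way to sketch it (the inputs and return labels are my own naming for illustration, not any router's configuration knobs):

```python
# Sketch of the three aggregation rules as a decision helper.
# Parameter names and return labels are illustrative, not vendor syntax.
def aggregation_decision(external_paths: int,
                         redundant_gateway_links: bool,
                         single_upstream_long_term: bool) -> str:
    # Rule 1: more than one external/upstream pathway -> never aggregate.
    if external_paths > 1:
        return "announce-specifics"
    # Rule 2: gateways announcing the aggregate must be reliably
    # interconnected, or don't generate a covering aggregate.
    if not redundant_gateway_links:
        return "announce-specifics"
    # Rule 3: single stable upstream path -> atomic-aggregate from the
    # backbone ASN, suppressing downstream AS_PATH detail (no AS_SET).
    if single_upstream_long_term:
        return "atomic-aggregate"
    return "announce-specifics"
```

The key property is that "atomic-aggregate" is only ever reached when the downstream specifics have no path to the world except through you, which is exactly when hiding their AS_PATH detail is safe.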

Best of luck!

Matt


Re: attribution

2020-04-13 Thread Matthew Petach
Well, according to your router's error message, it *did* work...it ensured
you couldn't propagate that route update, thereby ensuring no traffic from
your neighbors would traverse the prepended path.

Of course, it's a bit of a degenerate case of "working"--but it *did* serve
to shift traffic away.  ^_^;;

Matt



On Mon, Apr 13, 2020, 13:33 Randy Bush  wrote:

> > I’m using CAIDA’s bgpreader and this one looks like it might be an
> > example of what you want.
> >
> > R|R|1586714402.00|routeviews|route-views.eqix|||2914|206.126.236.12|
> 103.148.41.0/24|206.126.236.12|2914
>  58717 134371 134371
> 134371 134371 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076 140076 140076 140076 140076
> 140076 140076 140076 140076 140076 140076|140076|2914:410 2914:1405
> 2914:2406 2914:3400||
>
> aut-num:AS140076
> as-name:MIS-AS-AP
> descr:  Mir Internet Service
> country:BD
> org:ORG-MIS3-AP
> admin-c:MISA2-AP
> tech-c: MISA2-AP
> mnt-by: APNIC-HM
> mnt-irt:IRT-MIS-BD
> mnt-routes: MAINT-MIS-BD
> mnt-lower:  MAINT-MIS-BD
> last-modified:  2020-01-31T06:35:38Z
> source: APNIC
>
> actually, an example of what none of us wants :)
>
> it seems a lot of folk think prepending actually works.
>
> thanks
>
>
>
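As a toy sketch of why a hundred prepends buys nothing: AS_PATH length is only consulted after local-preference in best-path selection, so a path that loses (or wins) on local-pref ignores prepends entirely, and even on length, losing by one hop is the same as losing by a hundred. This is a deliberately simplified model, not any router's actual decision process:

```python
def best_path(paths):
    """Toy BGP decision: higher local-pref wins; then shorter AS_PATH.
    (Real routers have many more tie-breakers; illustrative only.)"""
    return min(paths, key=lambda p: (-p["local_pref"], len(p["as_path"])))

# A peering path with 100 prepends vs. a short transit path.
peer    = {"name": "peer",    "local_pref": 200, "as_path": [2914] + [140076] * 100}
transit = {"name": "transit", "local_pref": 100, "as_path": [701, 140076]}

# Despite the 100 prepends, the peer path still wins on local-pref alone.
print(best_path([peer, transit])["name"])   # -> peer
```

Which is why prepending, past the first hop or two, mostly just bloats the table.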


Re: South Africa On Lockdown - Coronavirus - Update!

2020-04-01 Thread Matthew Petach
On Tue, Mar 24, 2020 at 10:01 AM Randy Bush  wrote:

> > He's a network operator. From North America, on the North American
> Network
> > Operators mailing list. Something you are not, so please stop spouting
> your
> > drivel on a list that has nothing to do with you.
>
> this is not how we should act in under pressure
>
>
Returning late to the fray, so apologies for the thread necromancy

if ever there was a time I wished I could upvote messages on NANOG,
this would be it!  ^_^;

Thank you, Randy, and also to you, Beecher, for calling out the
need for us to be especially mindful of the need to be
liberal in what we accept, and conservative in what we send
during these highly stressful and trying times.

As someone who has been part of the community since the
com-priv days, I would recommend that "Paul Wall" might
want to think carefully about calling on the moderators to take action.
Would you want to risk them subjecting your activity on the list
to the same scrutiny, and hold you to the same standard?

Stay safe, stay sane, stay healthy,

...and stay home, everybody!   ^_^

Matt


Re: interesting troubleshooting

2020-03-22 Thread Matthew Petach
On Sat, Mar 21, 2020 at 12:53 AM Saku Ytti  wrote:

> Hey Matthew,
>
> > There are *several* caveats to doing dynamic monitoring and remapping of
> > flows; one of the biggest challenges is that it puts extra demands on the
> > line cards tracking the flows, especially as the number of flows rises to
> > large values.  I recommend reading
> >
> https://www.juniper.net/documentation/en_US/junos/topics/topic-map/load-balancing-aggregated-ethernet-interfaces.html#id-understanding-aggregated-ethernet-load-balancing
> > before configuring it.
>
> You are confusing two features. Stateful and adaptive. I was proposing
> adaptive, which just remaps the table, which is free, it is not flow
> aware. Amount of flow results is very small bound number, amount of
> states is very large unbound number.
>

Ah, apologies--you are right, I scanned down the linked document too
quickly, thinking it was a single set of configuration notes.

Thanks for setting me straight on that.

Matt


>
> --
>   ++ytti
>
>


Re: interesting troubleshooting

2020-03-20 Thread Matthew Petach
On Fri, Mar 20, 2020 at 3:09 PM Saku Ytti  wrote:

> Hey Nimrod,
>
> > I was contacted by my NOC to investigate a LAG that was not distributing
> traffic evenly among the members to the point where one member was
> congested while the utilization on the LAG was reasonably low. Looking at
> my netflow data, I was able to confirm that this was caused by a single
> large flow of ESP traffic. Fortunately, I was able to shift this flow to
> another path that had enough headroom available so that the flow could be
> accommodated on a single member link.
> >
> > With the increase in remote workers and VPN traffic that won't hash
> across multiple paths, I thought this anecdote might help someone else
> track down a problem that might not be so obvious.
>
> This problem is called elephant flow. Some vendors have solution for
> this, by dynamically monitoring utilisation and remapping the
> hashResult => egressInt table to create bias to offset the elephant
> flow.
>
> One particular example:
>
> https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/adaptive-edit-interfaces-aex-aggregated-ether-options-load-balance.html
>
> Ideally VPN providers would be defensive and would use SPORT for
> entropy, like MPLSoUDP does.
>
> --
>   ++ytti
>
>

There are *several* caveats to doing dynamic monitoring and remapping of
flows; one of the biggest challenges is that it puts extra demands on the
line cards tracking the flows, especially as the number of flows rises to
large values.  I recommend reading
https://www.juniper.net/documentation/en_US/junos/topics/topic-map/load-balancing-aggregated-ethernet-interfaces.html#id-understanding-aggregated-ethernet-load-balancing
before configuring it.

"Although the feature performance is high, it consumes significant amount
of line card memory. Approximately, 4000 logical interfaces or 16
aggregated Ethernet logical interfaces can have this feature enabled on
supported MPCs. However, when the Packet Forwarding Engine hardware memory
is low, depending upon the available memory, it falls back to the default
load balancing mechanism."

What is that old saying?

Oh, right--There Ain't No Such Thing As A Free Lunch.   ^_^;;

Matt
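To illustrate the elephant-flow anecdote above: LAG hashing maps each flow's header fields onto one member link, and ESP (IP protocol 50) carries no layer-4 ports, so an entire VPN tunnel presents exactly one hash input and pins one member. A toy sketch, not any vendor's actual hash function:

```python
import hashlib

def member_for(flow, n_members):
    """Toy LAG hash: map a flow tuple onto one of n member links."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_members

# TCP traffic: source ports vary per connection, so flows spread out.
tcp_members = {member_for(("10.0.0.1", "10.0.0.2", 6, sport, 443), 4)
               for sport in range(10000, 10050)}

# ESP: no ports, so the whole tunnel is ONE flow tuple and always
# lands on the same single member link, no matter how big it grows.
esp_member = member_for(("10.0.0.1", "10.0.0.2", 50), 4)

print(sorted(tcp_members), esp_member)
```

Hence Saku's note that VPN endpoints which add entropy via UDP source port (as MPLSoUDP does) are much kinder to LAGs.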


It's not about the congestion, it's about the profit motive driving the industry

2020-03-20 Thread Matthew Petach
On Tue, Mar 17, 2020 at 10:52 AM Mike Bolitho  wrote:

> >You're facing essentially the same issue as many in non-healthcare do ;
> how to best talk to applications in Magic Cloud Land. Reaching the major
> cloud providers does not require DIA ; they all have presences on the major
> IXes, and direct peering could be an option too depending on your needs and
> traffic.
>
> I totally agree and 99.999% of the time, congestion on the Internet is a
> nuisance, not a critical problem. I'm not sitting here complaining that my
> public internet circuits don't have SLAs or that we run into some packet
> loss and latency here and there under normal operations. That's obviously
> to be expected. But this whole topic is around what to do when a once in a
> lifetime pandemic hits and we're faced with unseen levels of congestion
> across the country's infrastructure. I mean the thread is titled COVID-19
> Vs Our Networks. That's why I brought up the possible application of TSP to
> tell some of the big CDNs that maybe they should limit 4K streaming or big
> DLCs during a pandemic. That's it. And yet I'm getting chastised (not
> necessarily by you) for suggesting that hospitals, governments, water
> treatment plants, power plants, first responders, etc are actually more
> important during times like this.
>
> - Mike Bolitho
>

I think it's time to re-stock on the "The Cloud Is Just Someone Else's
Computer In A Different Building" stickers...

While having streaming services voluntarily ratchet
their bitrates down during the crisis is a nice enough
response, I think the deeper underlying issue is that
any system that is CRITICAL for maintaining health and
safety during a pandemic or other crisis MUST be
capable of operating standalone in case the rest of
the infrastructure has melted down.

X-Ray systems at hospitals that refuse to work when
they can't talk to a license server in the cloud?

Nope.

If there's government intervention and regulation
that comes out of this, it should focus not on TSP
responses during a crisis, but on ensuring that
manufacturers of healthcare devices do not prioritize
making money over saving lives.

IF there is regulation to be made after this, THAT is what
it needs to focus on.

Internet congestion is a symptom, not the cause of this
thread.  Fix the real problem.
CRITICAL health care systems must be capable of operating
on their own during a state of emergency, not held captive to
the profit motives of rich executives.  :/

Matt
who finds it appalling that we consider it more important to make
money than to save lives.  :(


>
>
> On Tue, Mar 17, 2020 at 10:35 AM Tom Beecher  wrote:
>
>> You're facing essentially the same issue as many in non-healthcare do ;
>> how to best talk to applications in Magic Cloud Land. Reaching the major
>> cloud providers does not require DIA ; they all have presences on the major
>> IXes, and direct peering could be an option too depending on your needs and
>> traffic.
>>
>> I don't mean to be dismissive of the issues you face, I apologize if
>> that's how it comes off. What you describe is certainly challenging, but I
>> think that you will have better success with some of the options that are
>> out there already than hoping for any resolution of intermittent congestion
>> issues in the wild west of the DFZ.
>>
>


Re: COVID-19 vs. peering wars

2020-03-20 Thread Matthew Petach
I'm curious;
would people say that fixing peering inefficiencies could have
a bigger impact on service performance than asking that
Netflix, Amazon Prime, Youtube, Hulu, and other video
streaming services cut their bit rates down?

https://www.bbc.com/news/technology-51968302
https://arstechnica.com/tech-policy/2020/03/netflix-and-youtube-cut-streaming-quality-in-europe-to-handle-pandemic/

It seems that perhaps the fingers, and the regulatory
hammer, are being pointed in the wrong direction at
the moment.  ^_^;

Matt
staying safely under the saran-wrap blanket for the next few weeks




On Fri, Mar 20, 2020 at 9:31 AM Adam Thompson 
wrote:

> Every large ISP does this (or rather, doesn't) at every IX in Canada.
> Bell isn't unique by any stretch.
>
> It's not in their economic interest to peer at a local IX, because from
> their perspective, the IX takes away business (Managed L2 point-to-point
> circuits, at the very least) from them.
>
> Don't expect the dominant wireline ISP(s) in any region to join local IXes
> anytime soon, sadly, no matter how much it would benefit their customers.
> After all, the customer is always free to purchase service to the IX and
> join the IX, right???  *grumble*
>
> In my local case, if BellMTS joined MBIX, un-cached DNS resolution times
> could potentially drop by 15msec.  That's HUGE.  But the end-user
> experience is not their primary goal.  Their primary goal is profit, as
> always.
>
> -Adam Thompson
>  Founding member, MBIX (once upon a time)
>
> Adam Thompson
> Consultant, Infrastructure Services
> MERLIN
> 100 - 135 Innovation Drive
> Winnipeg, MB, R3T 6A8
> (204) 977-6824 or 1-800-430-6404 (MB only)
> athomp...@merlin.mb.ca
> www.merlin.mb.ca
>
> > -Original Message-
> > From: NANOG  On Behalf Of Sadiq Saif
> > Sent: Friday, March 20, 2020 9:38 AM
> > To: nanog@nanog.org
> > Subject: Re: COVID-19 vs. peering wars
> >
> > On Fri, 20 Mar 2020, at 10:31, Steve Mikulasik via NANOG wrote:
> > >
> > > In Canada the CRTC really needs to get on Canadian ISPs about peering
> > > very liberally at IXs in each province. I know of one major
> > > institution right now that would have a major work from home issue
> > > resolved if one big ISP would peer with one big tier 1 in the IX they
> > > are both located at in the same province. Instead traffic needs to
> > > flow across the country or to the USA to get back to the same city.
> >
> > **cough** Bell Canada **cough**.
> >
> > --
> >   Sadiq Saif
> >   https://sadiqsaif.com/
>
>


COVID-19 vs. peering wars

2020-03-19 Thread Matthew Petach
On Thu, Mar 19, 2020 at 10:27 AM Mike Bolitho  wrote:

> *Restoration:*
>
> *The repair or returning to service of one or more telecommunications
> services that have experienced a service outage or are unusable for any
> reason, including a damaged or impaired telecommunications facility. Such
> repair or returning to service may be done by patching, rerouting,
> substitution of component parts or pathways, and other means, as determined
> necessary by a service vendor.*
>
>
> https://www.cisa.gov/sites/default/files/publications/OEC%20TSP%20Operations%20Guide%20Final%2012062016_FINAL%20508C.pdf
>
>
> My understanding, and what we did while I worked for a Tier I ISP, was
> that even for degraded circuits we had to do everything in our power to
> restore to full operations. If capacity is an issue and causes TSP coded
> DIA circuits to be unusable then that falls under the "any reason" clause
> of that line.
>
> - Mike Bolitho
>

If you're going to bang that drum, the place you're going to get the most
buck-for-your-bang is using it to force better cooperation between ISPs.

It appears that baking cakes was not sufficient to get recalcitrant players
to work together.

https://www.flickr.com/photos/mpetach/4031195041

Perhaps a global pandemic may be sufficient to have government begin to
*compel* networks to interconnect at locations at which they share common
peering infrastructure?

If you're worried about congestion and performance, that would be the place
to start pushing.

Matt
staying safely at home away from the flame-fest that may ensue from this.
^_^;


Re: Looking for transit with full table bgp cloud options

2020-03-12 Thread Matthew Petach
On Thu, Mar 12, 2020 at 3:04 PM William Herrin  wrote:

> On Thu, Mar 12, 2020 at 2:31 PM Joe Maimon  wrote:
> > I am looking for some cloud services, that would support Transit and
> > full table BGP to the cloud provided vm(s).
>
> Hi Joe,
>
> Vultr has it down pretty solid although for some reason that baffled
> me, they didn't understand that it's not a full routing table if they
> don't send me the routes to their own POPs. So even though I have two
> connections at vultr, my traffic *to* vultr-hosted sites all goes
> through my other provider. Setup was a breeze and except for that one
> bit of weirdness they've been very reliable.
>

Sounds like someone forgot to include ^$ in the list of prefixes sent along
the BGP feed...


Re: COVID-19 vs. our Networks

2020-03-12 Thread Matthew Petach
On Thu, Mar 12, 2020 at 1:04 PM Tom Beecher  wrote:

> I like the topic, but I think we should dispense with comments like 'house
> arrest'.
>

Agreed.

The situation is already plenty serious as it is.

Let's not add any more fuel to the fire.

...though, on a slightly related note, I've been seeing an increase in ads
for "Packet Scrubbing Services" recently.

Has anyone told the sales folks that's not how this spreads?   ;P


Re: Google peering in LAX

2020-03-02 Thread Matthew Petach
It may be worthwhile for you to consider adding 15169 to your "Don't accept
$tier1 prefixes from other peers" policy in your inbound policy chain.

I've found that there's a set of $LARGE_ENOUGH networks that, even though
they're not literal $tier1 providers, benefit from that same level of
filtering.  You wouldn't want to try sending Level3  traffic through a
random peer, as the results would likely be catastrophic; so, make use of
that same filter rule in your inbound policy to filter out hearing 15169
prefixes from other peering sessions.

The caveat to that, of course, is that successful failover will mean
carrying traffic across your backbone when your 15169 prefixes in one
location disappear during an outage/maintenance window, so make sure your
backbone is correctly sized to handle those reroute situations.  It also
means that multi-homed downstream customers are likely to send less
upstream traffic through you to reach Google.

But that *will* mean that no amount of leaking more specific prefixes
through other paths will unexpectedly cause your traffic to shift.

Matt
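The inbound policy suggested above can be sketched as follows. In practice this is an as-path filter in your router's policy language; the function and variable names here are invented for illustration, and the ASN set is an example, not a recommendation:

```python
GOOGLE_ASN = 15169
NEVER_VIA_RANDOM_PEER = {15169, 3356, 701}   # example "large enough" ASNs

def accept_route(as_path, session_peer_asn):
    """Toy inbound policy: reject routes containing a tier1-like ASN
    unless heard on a direct session with that network itself."""
    for asn in NEVER_VIA_RANDOM_PEER:
        if asn in as_path and session_peer_asn != asn:
            return False
    return True

# A Google prefix leaked (even as a more-specific) via a small peer: rejected.
print(accept_route([64512, GOOGLE_ASN], session_peer_asn=64512))  # -> False
# The same prefix on the direct Google session: accepted.
print(accept_route([GOOGLE_ASN], session_peer_asn=GOOGLE_ASN))    # -> True
```

This is what makes the policy immune to more-specific leaks: the match is on the AS_PATH contents, not on prefix length.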



On Mon, Mar 2, 2020 at 5:39 PM Seth Mattinen  wrote:

> On 3/2/20 4:32 PM, Patrick W. Gilmore wrote:
> > That said, I fear this is going to be a problem long term. A blind “no
> /24s” filter is dangerous, plus it might solve all traffic issues. It is
> going to take effort to be sure you don’t get bitten by the Law Of
> Unintended Consequences.
>
>
> As soon as Google un-freezes new peering requests so I can get a direct
> peering that includes appropriate /24's I've been told offlist I should
> get (instead of the route server subset) I'll happily remove the transit
> filters. But I can only work with what I'm given.
>
>


Re: idiot reponse

2020-02-26 Thread Matthew Petach
On Wed, Feb 26, 2020 at 4:15 PM J. Hellenthal via NANOG 
wrote:

> Wtf kinda one word response is that lol
>


You missed the *very* important second line of the response, which makes
the first, one-word line meaningful.

Go back and read it again.  ;)

Matt



>
> --
>  J. Hellenthal
>
> The fact that there's a highway to Hell but only a stairway to Heaven says
> a lot about anticipated traffic volume.
>
> On Feb 26, 2020, at 15:03, Selphie Keller 
> wrote:
>
> 
> postfix =)
>
> /^From: .*@electricforestfestival\.com/ DISCARD
>
> On Wed, 26 Feb 2020 at 09:54, Christopher Morrow 
> wrote:
>
>>
>>
>> On Wed, Feb 26, 2020 at 11:46 AM Mike Hammett  wrote:
>>
>>> I send to nanog-ow...@nanog.org, but I never hear back.
>>>
>>>
>>>
>> I had sent this privately but I thought/think: nanog-admin@
>>
>> I could totally be wrong :)
>>
>


Re: QUIC traffic throttled on AT&T residential

2020-02-21 Thread Matthew Petach
On Fri, Feb 21, 2020, 13:31 Łukasz Bromirski  wrote:

>
> [...]
>
> Now… once we are aware, the only question is — where we go from here?
>
> —
> ./
>


Well, it's clear the UDP 443 experiment wasn't entirely successful.

So clearly, it's time to use the one UDP port that is allowed through at
the top of everyone's ACL rules, and update QUIC in the next iteration to
use UDP/53.

*THAT* should solve the whole problem, once and for all.

;)

Matt


Re: Customer sending blackhole route with another provider's AS

2020-02-11 Thread Matthew Petach
Anyone that is using blackhole communities should have enough Clue-fu
to adjust announcements along each pathway to have the correct sequence
of ASNs.  Passing a route with a different upstream's ASN as the origin,
instead of their own, is just *asking* for "blackhole leakage", where
they inadvertently become a conduit for blackhole prefixes from
provider A getting redistributed to you as provider B.

Push back on them, and indicate they must pass properly-crafted AS-PATH
attributes to you in order to be accepted.  If they don't know how to do
that,
a) they shouldn't be mucking with blackhole communities, and b) they should
consider hiring Clue-fu to bring their network policies up to snuff.   ^_^;

Matt
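The customer-session check described above — accept a blackhole route only if its AS_PATH both starts at and originates in the customer's own ASN (or a declared downstream) — can be sketched like so. Names and ASNs are hypothetical:

```python
def acceptable_blackhole(as_path, customer_asn, customer_downstream_asns=()):
    """Toy customer-session AS_PATH filter: first hop must be the customer,
    and the origin must be the customer or one of its declared downstreams."""
    if not as_path or as_path[0] != customer_asn:
        return False
    origin = as_path[-1]
    return origin == customer_asn or origin in customer_downstream_asns

CUSTOMER = 64512
OTHER_UPSTREAM = 64496   # the customer's *other* provider (hypothetical)

# Route carrying the other upstream's ASN as origin: rejected, since
# accepting it invites "blackhole leakage" between providers.
print(acceptable_blackhole([CUSTOMER, OTHER_UPSTREAM], CUSTOMER))  # -> False
# Properly crafted path originating in the customer's own ASN: accepted.
print(acceptable_blackhole([CUSTOMER], CUSTOMER))                  # -> True
```

That is, the fix belongs on the customer's side: re-originate the blackhole with their own ASN, not on ours by loosening the path filter.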


On Tue, Feb 11, 2020 at 8:31 AM Chris Adams  wrote:

> One of our multihomed customers is set up with some type of security
> system from another upstream that can announce blackhole routes for
> targeted IPs.  They have a BGP policy to take those blackhole routes and
> add our blackhole community string so that we can drop the traffic (and
> we in turn translate to our transit providers).  All good.
>
> However, it doesn't work, because the route the customer sends to us has
> the other upstream's AS as the source, and we have AS path filtering on
> our customer links.
>
> Is this a typical setup?  Should we just accept the route(s) with
> another provider's AS in the path?  That seems... unusual.  Our internal
> blackhole system uses a private AS (so it can be stripped off before
> sending to anyone else).
>
> Just curious what others do... I always assumed AS path filtering to
> customer (and their downstream customers) AS was a standard best
> practice.
>
> --
> Chris Adams 
>
>


Re: "Using Cloud Resources to Dramatically Improve Internet Routing"

2020-01-09 Thread Matthew Petach
Whoa...

So IPv6 is just a segment routing wrapper around IPv4.

!insert mandatory "I know kung fu" meme <-- here

^_^


On Thu, Jan 9, 2020 at 12:07 PM Töma Gavrichenkov  wrote:

> This is the deadliest IPv6 packet structure infographics I've ever seen in
> my life.
>
> https://noia.network/assets/concept-basics.jpg
>
> On Thu, Jan 9, 2020, 7:29 PM Aistis Zenkevičius 
> wrote:
>
>> So, a bit like this then: https://noia.network/technology
>>
>> -Aistis
>>
>>
>> -Original Message-
>> From: NANOG  On Behalf Of Phil Pishioneri
>> Sent: 2019 m. spalio 4 d., penktadienis 22:52
>> To: NANOG list 
>> Subject: "Using Cloud Resources to Dramatically Improve Internet Routing"
>>
>> [Came up in some digest summary I receive]
>>
>> Using Cloud Resources to Dramatically Improve Internet Routing UMass
>> Amherst researchers to use cloud-based ‘logically centralized control’
>>
>>
>> https://www.umass.edu/newsoffice/article/using-cloud-resources-dramatically-improve
>>
>> -Phil
>>
>>


Re: Cost Recovery Surcharge & Va Personal Property Tax Recovery for IP Transit

2020-01-06 Thread Matthew Petach
On Mon, Jan 6, 2020 at 10:17 AM Tom Beecher  wrote:

> Both are quite likely to be negotiable.
>
> FCC Cost Recovery fees are the federally mandated ones they are allowed to
> pass on to you. Most anything else named 'Cost Recovery' is optional, and
> so named to try and confuse you into thinking it's the mandatory stuff.
>


The person getting charged FCC Cost Recovery was in Canada, however.

Good to know the US annexed Canada and brought it under the jurisdiction
of the FCC recently... ^_^;

Matt


Re: 5G roadblock: labor

2019-12-30 Thread Matthew Petach
Unfortunately, Wi-Fi handoffs suck donkey balls compared to
cell tower handoffs when moving.  It's fine when you're
stationary, but walking down the street, and shifting from
one wifi hotspot to the next, you're going to be dropping
and re-establishing connections with a new endpoint IP
address every time.

If we solve the issue of endpoint identity on a connection
independent of the transport, so that your video stream
of the game doesn't have to stop and restart every time
you shift from one access point to the next, I could
definitely see wi-Fi beating 5G.

Otherwise, I think 5G will win, in terms of better
user experience when non-stationary.

Matt


On Mon, Dec 30, 2019 at 12:04 PM Mark Tinka  wrote:

>
>
> On 30/Dec/19 16:50, Shane Ronan wrote:
>
> >
> > Also, keep in mind that 10 years ago, you didn't know you would want
> > or need 25mbits to your phone, but I'd bet that now you'd have a hard
> > time living without it.
>
> Which you can certainly achieve over wi-fi without hassle. I posit that
> in many locations where abundant bandwidth to your phone is required, a
> vast majority of suitable wi-fi options exist, and you (and others) use
> one or more of them.
>
> Wi-fi will beat 5G, over the long term.
>
> Mark.
>
>


Re: restricted hotel block

2019-12-10 Thread Matthew Petach
Which hotel was that?  I might want to go, just to take advantage of the
discount...  ^_^

Matt



On Tue, Dec 10, 2019, 09:36 Randy Bush  wrote:

> is anyone aware of any conference other than nanog which does
>
> Online Reservations: (Open exclusively to NANOG Members only from
> December 2 - December 16)
>
> randy
>
>


Re: Elephant in the room - Akamai

2019-12-04 Thread Matthew Petach
On Wed, Dec 4, 2019, 19:05 Kaiser, Erich  wrote:

> Lets talk Akamai
>

[...]


> The last two nights the traffic levels to them has skyrocketed as well.
>
> Any insight?
>
>
> Erich Kaiser
> The Fusion Network
>

As a CDN, I would usually expect to see traffic *from* Akamai to be the
large direction.

If you're seeing your traffic *to* them skyrocketing, are you sure you
aren't carrying DDoS attack traffic at them?

CDNs aren't known for being large traffic sinks.   ^_^;;

Matt


Re: Disney+ Streaming

2019-11-12 Thread Matthew Petach
My point was that Disney has a lock on much of the content kids love.

Netflix/HBO/AmazonPrime, not so much.

So, the new eyeballs aren't going to be from parents watching different
shows, it'll be from parents watching their adult-ish stuff, while the kids
are happily ensconced with Disney+.

I called out Game of Thrones and Good Omens as shows that are popular with
adults but that aren't terribly family friendly, so you won't be getting
many 12-and-unders watching them.

That's where the new eyeballs come from.

Matt


On Tue, Nov 12, 2019, 13:17 Mark Andrews  wrote:

> They can already stream different content to multiple devices
> simultaneously.
> All this does is make some content that wasn’t available previously now
> available.
>
> People can really only watch one thing at a time.  Net streaming of the
> last mile
> is unlikely to change much.  Just where that content is coming from may
> change.
>
> Mark
>
> > On 13 Nov 2019, at 07:53, Matthew Petach  wrote:
> >
> >
> > Different target audiences.
> >
> > Now the parents can be watching "Good Omens" or "Game of Thrones" on
> Netflix while the kids are streaming "The Lion King" on Disney+ streaming.
> Instead of the whole family watching one show together, now we have
> segmentation in the marketplace.
> >
> > End result is more total overall bandwidth consumption.
> >
> > Matt
> >
> >
> > On Tue, Nov 12, 2019, 12:38 Brian J. Murrell 
> wrote:
> > On Tue, 2019-11-12 at 15:26 -0500, Valdis Klētnieks wrote:
> > >
> > > I can foresee a lot of families subscribing to Netflix *and* Disney+
> > > because neither one has all the content the family wants to watch.
> >
> > Absolutely.  But the time spent watching Disney would *replace* (not be
> > in addition to, or would it?  Would Disney's content result in existing
> > streamers watching more hours of streaming than they did before?)
> > Netflix watching.
> >
> > > Has anybody seen a significant drop in total streaming traffic due to
> > > Netflix
> > > users jumping ship to Amazon/Hulu, or are consumers just biting the
> > > bullet,
> > > coughing up the $$, and streaming more total because across the
> > > services
> > > there's more stuff they want to watch?
> >
> > I actually suspect streaming is going to decline (at least in
> > comparison to where it could have grown to) if this streaming service
> > fragmentation continues.
> >
> > I think people are going to reject the idea that they need to subscribe
> > to a dozen streaming services at $10-$20/mo. each and will be driven
> > back the good old "single source" (piracy) they used to use before 1
> > (or perhaps 2) streaming services kept them happy enough to abandon
> > piracy.
> >
> > The content providers are going to piss in their bed again due to
> > greed.  Again.
> >
> > Cheers,
> > b.
> >
>
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
>
>
>


Re: Disney+ Streaming

2019-11-12 Thread Matthew Petach
Different target audiences.

Now the parents can be watching "Good Omens" or "Game of Thrones" on
Netflix while the kids are streaming "The Lion King" on Disney+ streaming.
Instead of the whole family watching one show together, now we have
segmentation in the marketplace.

End result is more total overall bandwidth consumption.

Matt


On Tue, Nov 12, 2019, 12:38 Brian J. Murrell  wrote:

> On Tue, 2019-11-12 at 15:26 -0500, Valdis Klētnieks wrote:
> >
> > I can foresee a lot of families subscribing to Netflix *and* Disney+
> > because neither one has all the content the family wants to watch.
>
> Absolutely.  But the time spent watching Disney would *replace* (not be
> in addition to, or would it?  Would Disney's content result in existing
> streamers watching more hours of streaming than they did before?)
> Netflix watching.
>
> > Has anybody seen a significant drop in total streaming traffic due to
> > Netflix
> > users jumping ship to Amazon/Hulu, or are consumers just biting the
> > bullet,
> > coughing up the $$, and streaming more total because across the
> > services
> > there's more stuff they want to watch?
>
> I actually suspect streaming is going to decline (at least in
> comparison to where it could have grown to) if this streaming service
> fragmentation continues.
>
> I think people are going to reject the idea that they need to subscribe
> to a dozen streaming services at $10-$20/mo. each and will be driven
> back the good old "single source" (piracy) they used to use before 1
> (or perhaps 2) streaming services kept them happy enough to abandon
> piracy.
>
> The content providers are going to piss in their bed again due to
> greed.  Again.
>
> Cheers,
> b.
>
>


Re: RPKI adoption (was: Re: Corporate Identity Theft: Azuki, LLC -- AS13389, 216.179.128.0/17)

2019-08-14 Thread Matthew Petach
On Tue, Aug 13, 2019 at 5:44 PM John Curran  wrote:

> On 13 Aug 2019, at 9:28 PM, Ronald F. Guilmette 
> wrote:
>
> ...
> The last time I looked, RPKI adoption was sitting at around a grand total
> of 15% worldwide.  Ah yes, here it is...
>
>   https://rpki-monitor.antd.nist.gov/
>
> I've asked many people and many companies why adoption remains so low, and
> why their own companies aren't doing RPKI.  I've gotten the usual
> assortment
> of utterly lame excuses, but the one that I have had the hardest time
> trying to counter is the one where a network engineer says to me "Well,
> ya know, we were GOING to do that, but then ARIN... unlike the other four
> regional authorities... demanded that we sign some silly thing indemnifying
> them in case of something.
>
>
> Interestingly enough, those same indemnification clauses are in the
> registration services agreement that they already signed but apparently
> they were not an issue at all when requesting IP address space or receiving
> a transfer.
> You might want want to ask them why they are now a problem when they
> weren’t before (Also worth noting that many of these ISP's own contracts
> with their customers have rather similar indemnification clauses.)
>

Hi John,

There are things companies will sign
when their backs are up against the wall
that they will balk at signing when it is
for an optional geek-ish extra.

IP addresses are the lifeblood of the
tech industry.  If you don't have an
IP address, you don't exist on the
Internet.  (Apologies to those of us
who still have modems configured
to call and retrieve mail addressed
with UUCP bang paths).

So, companies will grudgingly and with
much hand-wringing sign the RSA
necessary to get IP space.  Without,
they die.  Rather like oxygen; if we
had to sign a license agreement in
order to receive air to breathe, you'd
find most people would sign pretty
horrific terms of service agreements.

Slip those same terms in front of someone
as a requirement for them to buy beer,
and you'll likely discover a whole lot of
people are just fine drinking something
else instead.

So too with the RSA terms versus the
RPKI terms.

As companies, we can't survive without
IP addresses.  We'll sign just about anything
to stay alive.

RPKI is a geek toy.  It's not at all required
for a business to stay alive on the Internet,
so companies feel much safer in saying
"no way will we sign that!".

Now, at the risk of bringing down the ire
of the community on my head...ARIN could
consider tying the elements together, at
least for ARIN members.  Add the RPKI terms
into the RSA document.  You need IP number
resources, congratulations, once you sign the
RSA, you're covered for RPKI purposes as well.

That doesn't solve the issue for out-of-region
folks who don't have an RSA with ARIN; but
that's no worse than you are today; and by
bundling the RPKI terms in with the rest of the
RSA, you at  least get everyone in the ARIN
region that wants^Wneeds to maintain their
IP number resources in order to stay in business
on the Internet covered in terms of being able to
use the RPKI data.

If you've got them by the short and curlies
already, might as well bundle everything in
while they've got the pen in their hand.  ^_^;

Even so, we at ARIN are in the midst of a Board-directed review of the RPKI
> legal framework to see if any improvements can be made <
> https://www.arin.net/vault/participate/meetings/reports/ARIN_43/PDF/PPM/curran_rpki.pdf>
>  – I will provide further updates once it is completed.
>

Best of luck!  I know we'll all be watching carefully to
see how it goes.  :)

Matt


> Thanks!
> /John
>
> John Curran
> President and CEO
> American Registry for Internet Numbers
>
>


Re: User Unknown (WAS: really amazon?)

2019-08-13 Thread Matthew Petach
On Fri, Aug 9, 2019 at 4:31 PM Stephen Satchell  wrote:

> On 8/9/19 4:03 PM, Matthew Petach wrote:
> > ...apparently Amazon has become a public utility
> > now?
> >
> > I look forward with bemusement to the PUC
> > tariff filings for AWS pricing.  ^_^;;
>
> [...]

>
> And it wouldn't be the PUC, as Amazon is a company national in scope.
> It would be something like the FCC.  Public Utility Commissions are at
> the local (usually county) or state level.
>

That was somewhat the point.
Public utilities make some amount
of sense when there's a local natural monopoly.

With a global company, there's no such thing
as a local natural monopoly in play; how would
you assign oversight to a global entity?  Which
"public" would be the ones being protected?
The city of Seattle, WA, where Amazon is
headquartered?  The State of Washington?
The United States, at a federal level?   What
about the "public" that uses Amazon in all
the other countries of the world?

There's no way to make a global entity a
regulated public utility; we don't have an
organization that has that level of oversight
across country boundaries, unless you start
thinking about entities that can enforce *treaties*
between countries.

And I'm not sure I'd want our Ambassadors
being the ones at the table deciding how best
to regulate Amazon.   :/


Re: User Unknown (WAS: really amazon?)

2019-08-09 Thread Matthew Petach
On Mon, Aug 5, 2019 at 2:16 AM Scott Christopher  wrote:

>
>
[...]

> It's not about $BIGCORP having lots of corporate lawyers imposing its will
> on the small guys - it's about Amazon's role as a public utility, upon
> which many many many important things depend.
>
> S.C.
>
>
I must have missed the news amidst all the
interest rate changes and tariff tweets...

...apparently Amazon has become a public utility
now?

I look forward with bemusement to the PUC
tariff filings for AWS pricing.  ^_^;;

 Matt


Re: Did IPv6 between HE and Google ever get resolved?

2019-03-31 Thread Matthew Petach
On Sun, Mar 31, 2019 at 6:40 PM Jay Hennigan  wrote:

> Perhaps you should bake them a cake. :-)
>

The cake was delicious and moist

https://www.flickr.com/photos/mpetach/4031434206

"I'd like to buy a vowel.  Can I get an 'e', pleas?"  ^_^;;


Re: Did IPv6 between HE and Google ever get resolved?

2019-03-30 Thread Matthew Petach
On Sat, Mar 30, 2019 at 4:33 AM Matthew Petach 
wrote:

>
>
> On Thu, Mar 28, 2019 at 12:40 PM David Hubbard <
> dhubb...@dino.hostasaurus.com> wrote:
>
>> Hey all, I’ve been having bad luck searching around, but did IPv6 transit
>> between HE and google ever get resolved?  Ironically, I can now get to them
>> cheaply from a location we currently have equipment that has been
>> Cogent-only, so if it fixes the IPv6 issue I’d like to make the move.
>> Anyone peer with HE in general and want to share their experience offlist?
>> With the price, if they’re a good option, I’d consider rolling them in to
>> other locations where we have redundancy already, so the v6 isn’t as big a
>> deal there.
>>
>>
>>
>> Thanks
>>
>>
>>
>
> I wasn't aware of any issues between HE.net and Google;
> are you sure you don't mean HE.net and Cogent?
>
> Matt
>
>
Ah.  Sorry, the changed subject line didn't thread in with this,
so this showed up as an unreplied singleton in my inbox.

Apologies for the duplicated response; at least this won't
be a lonely singleton in anyone else's inbox now.  ^_^;

Matt


Re: Did IPv6 between HE and Google ever get resolved?

2019-03-30 Thread Matthew Petach
On Thu, Mar 28, 2019 at 12:40 PM David Hubbard <
dhubb...@dino.hostasaurus.com> wrote:

> Hey all, I’ve been having bad luck searching around, but did IPv6 transit
> between HE and google ever get resolved?  Ironically, I can now get to them
> cheaply from a location we currently have equipment that has been
> Cogent-only, so if it fixes the IPv6 issue I’d like to make the move.
> Anyone peer with HE in general and want to share their experience offlist?
> With the price, if they’re a good option, I’d consider rolling them in to
> other locations where we have redundancy already, so the v6 isn’t as big a
> deal there.
>
>
>
> Thanks
>
>
>

I wasn't aware of any issues between HE.net and Google;
are you sure you don't mean HE.net and Cogent?

Matt


Re: GPS rollover

2019-03-11 Thread Matthew Petach
On Sun, Mar 10, 2019 at 8:04 PM Stephen Satchell  wrote:

> So far as I can tell with NTP, there was no issue with time sources
> becoming false-tickers, including my local GPS appliance.  FWIW.
>
>
I believe the rollover is *next* month, in April.   :)
https://ics-cert.us-cert.gov/sites/default/files/documents/Memorandum_on_GPS_2019.pdf

"This paper is intended to provide an understanding of the possible effects
of the April 6, 2019 GPS Week Number Rollover on Coordinated Universal Time
derived from GPS devices."

Matt
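The week-counter arithmetic behind that memo is easy to check: the legacy GPS navigation message carries only a 10-bit week number, so it wraps every 1024 weeks counted from the 1980-01-06 epoch. A quick sketch (the dates shown are the UTC week-boundary dates; the rollover itself lands on the night of April 6-7, 2019):

```python
from datetime import date, timedelta

# GPS time began at the week boundary on 1980-01-06 (week 0).
# The week counter in the legacy navigation message is 10 bits wide,
# so it wraps every 2**10 = 1024 weeks.
GPS_EPOCH = date(1980, 1, 6)
WEEKS_PER_ROLLOVER = 2 ** 10  # 1024

def rollover_date(n):
    """UTC date on which the n-th week-number rollover week begins."""
    return GPS_EPOCH + timedelta(weeks=n * WEEKS_PER_ROLLOVER)

print(rollover_date(1))  # first rollover:  1999-08-22
print(rollover_date(2))  # second rollover: 2019-04-07
```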


Re: 2FA, was A Deep Dive on the Recent Widespread DNS Hijacking

2019-02-26 Thread Matthew Petach
On Tue, Feb 26, 2019 at 9:51 AM  wrote:

> On Tue, 26 Feb 2019 08:36:11 -0800, Seth Mattinen said:
> > On 2/25/19 9:59 PM, Keith Medcalf wrote:
> > > Are you offering an indemnity in case that code is malicious?  What
> are the
> > > terms and the amount of the indemnity?
>
> > Anyone who is that paranoid should read the RFC and write their own TOTP
> > client that lets them indemnify themselves from their own code.
>
> I seem to recall that the 1983 Turing Award lecture referenced a 1974 pen
> test
> of Multics that proved conclusively that level of paranoia isn't
> sufficient
>
>

Well, the OP was probably just speaking in shorthand.

What I'm sure they really meant was after developing your own silicon on
your own hardware, and hand assembling your own compiler and linker, and
then writing your own drivers for your hardware and building your own
operating system, you could easily write your own TOTP implementation on
your hardware running on your silicon with your operating system with your
compiler and your linker...and then you could be sure.

Right?

Matt


Re: DNS Flag Day, Friday, Feb 1st, 2019

2019-01-31 Thread Matthew Petach
On Thu, Jan 31, 2019, 01:27 Radu-Adrian Feurdean <
na...@radu-adrian.feurdean.net wrote:

>
>
> On Thu, Jan 31, 2019, at 03:24, Mark Andrews wrote:
> > You do realise that when the day was chosen it was just the date after
> > which new versions of name servers by the original group of Open Source
> > DNS developers would not have the work arounds incorporated?
>
> I think it's pretty safe to say that the "DNS Flag day" is more like a
> date of "end of support" rather than an "service termination". My guess is
> that some uncompliant servers will be still running long after that date...
>
> --
> R-A.F.
>


(resending from correct address)

Right.

The concern is that it's *also* the date when all the major recursive
lookup servers are changing their behaviour.

New software availability date?
Awesome, go for it.

Google, Cloudflare, Quad9 all changing their codebase/response behaviour on
a Friday before a major sporting and advertising event?

Not sounding like a really great idea from this side of the table.

Are we certain that the changes on the part of the big four recursive DNS
operators won't cause downstream issues?

As someone noted earlier, this mainly affects products from a specific
company, Microsoft, and L7 load balancers like A10s.  I'm going to hope
legal teams from each of the major recursive providers were consulted ahead
of time to vet the effort, and ensure there were no concerns about
collusion or anticompetitive practices, right?

I'm fine with rolling out software that stops supporting bad behaviour.

What I find to be concerning is when supposedly competing entities all band
together in a pact that largely holds the rest of the world hostage to
their arbitrary timeline.

Perhaps it's time to create a new recursive resolver service that
explicitly *is not* part of the cabal...

Matt
(hoping and praying this weekend will go smoothly)


Re: DNS Flag Day, Friday, Feb 1st, 2019

2019-01-30 Thread Matthew Petach
On Wed, Jan 23, 2019 at 4:12 PM Brian Kantor  wrote:

> Quoting from the web site at https://dnsflagday.net/
>
[...]

>   The current DNS is unnecessarily slow and suffers from inability
>   to deploy new features. To remediate these problems, vendors of
>   DNS software and also big public DNS providers are going to
>   remove certain workarounds on February 1st, 2019.
>

I would like to note that there is an entire
segment of the population that does not
interact with technology between sundown
on Friday, all the way through Sunday
morning.

Choosing Friday as a day to carry out an
operational change of this sort does not
seem to have given thought that if things
break, there is a possibility they will have
to stay broken for at least a full day before
the right people can be engaged to work on
the issue.

In the future, can we try to schedule such events
with more consideration on which day the change
will take place?

I will also note that this weekend is the Superbowl
in the US; one of the bigger advertising events of the
year.  Potentially breaking advertising systems that
rely on DNS two days before a major, once-a-year
advertising event is *also* somewhat inconsiderate.

While I understand that no day will work for everyone,
and at some point you just have to pick a day and go
for it, I will note that picking the Friday before the
Superbowl does seem like a very unfortunate random
pick for a day on which to do it.

Any chance this could wait until say the Tuesday
*after* the Superbowl, when we aren't cutting an
entire religion's worth of potential workers out of
the workforce available to fix issues in case it
turns out to be a bigger problem than is expected,
and when we have less chance of annoying the
vast army of football-loving fans of every sort?

Thanks!

Matt


Re: Effects of Cold Front on Internet Infrastructure - U.S. Midwest

2019-01-30 Thread Matthew Petach
On Wed, Jan 30, 2019 at 9:07 AM Christopher Morrow 
wrote:

> And here I always figured it was bespoke knit caps for all the packets in
> cold-weather climes?
> learn something new every day! (also, now I wonder what the people who
> told me they were too busy knitting caps are ACTUALLY doing??)
>

Unfortunately, they're knitting *DATA* caps.

;-P

Sorry, couldn't resist.

Matt


Re: Whats going on at Cogent

2018-10-28 Thread Matthew Petach
On Thu, Oct 25, 2018 at 1:54 PM Kenny Taylor  wrote:

> I wasn't familiar with it, so thanks for sharing!  The Google search for
> 'he cogent cake' was entertaining.  Hard to believe that conflict is going
> on 9+ years..
>
> Kenny
>

I can vouch for it.

The cake was delicious and moist.

And it was not a lie.

;)


Re: Impacts of Encryption Everywhere (any solution?)

2018-05-28 Thread Matthew Petach
On Mon, May 28, 2018 at 7:24 PM, John R. Levine  wrote:

> In article  gmail.com>,
> Matthew Petach   wrote:
>
>> Your 200mbit/sec link that costs you $300 in hardware
>> is going to cost you $4960/month to actually get IP traffic
>> across, in Nairobi.   Yes, that's about $60,000/year.
>>
>
> Nonetheless, Safaricom sells entirely usable data plans.  A one day
> 1GB bundle on a prepaid SIM costs about $1, a monthly 1GB costs about
> $5.  They have 4G, it works, I've used it.
>
> What do they know that Telegeography (who made that slide) doesn't?


Math.  ^_^;

1GB of volume over the course of a month is 3kb/sec sustained
throughput over the month.  (10^9*8/(86400*30))

$5 per 3kbit/sec means that 155mbit link would cost...$251,100/month.
(155*10^6/((10^9*8)/(86400*30))*5)

We call that "Time Division Multiplexing-based profits".

Comparing volumetric pricing with rate-based pricing
is one of the best ways of tucking in *lots* of room for
profit.  :)

Matt
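The parenthesized formulas in the message work out as follows; a quick sketch of the same arithmetic, with the $5/GB-month price and the 155 Mbit/s link taken straight from the thread:

```python
# Volumetric vs. rate-based pricing: what $5 per GB-month implies
# for a continuously loaded 155 Mbit/s circuit.
SECONDS_PER_MONTH = 86400 * 30

def sustained_bps(gigabytes_per_month):
    """Average bit rate needed to move a given monthly volume."""
    return gigabytes_per_month * 1e9 * 8 / SECONDS_PER_MONTH

rate = sustained_bps(1)        # 1 GB/month is only ~3 kbit/s sustained
price_per_bps = 5 / rate       # $5 buys that trickle for a month

link_bps = 155e6               # the 155 Mbit link from the message
monthly_cost = link_bps * price_per_bps

print(round(rate))             # ~3086 bit/s
print(round(monthly_cost))     # $251,100/month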


Re: Impacts of Encryption Everywhere (any solution?)

2018-05-28 Thread Matthew Petach
On Mon, May 28, 2018 at 11:22 AM, Ben Cannon  wrote:

> I’m sorry I simply believe that in 2018 with the advanced and cheap ptp
> radio (ubiquiti anyone? $300 and I have a 200mbit/sec link over 10miles!
> Spend a bit more and go 100km) plus the advancements in cubesats about to
> be launched, even the 3rd world can simply get with the times.
>
> -Ben
>

Hi Ben,

I do not think you adequately understand the economics of the
situation.

https://www.slideshare.net/InternetSociety/international-bandwidth-and-pricing-trends-in-subsahara-africa-79147043

slide 22, IP transit cost.

Your 200mbit/sec link that costs you $300 in hardware
is going to cost you $4960/month to actually get IP traffic
across, in Nairobi.   Yes, that's about $60,000/year.

Could *you* afford to "get with the times" if that's what
your bandwidth was going to cost you?

Please, do a little research on what the real
costs are before telling others they need to
"simply get with the times."

Thanks!

Matt


Peering with abusers...good or bad?

2018-03-02 Thread Matthew Petach
On Tue, Feb 27, 2018 at 4:13 PM, Dan Hollis  wrote:
> OVH does not suprise me in the least.
>
> Maybe this is finally what it will take to get people to de-peer them.
>

If I de-peer them, I pay my upstream to carry the
attack traffic.

If I maintain peering with them, the attack traffic is free.

It would seem the economics work the other way around.

It would be more cost effective for me to identify the largest sources
of attacks, and reach out to directly peer with them, to avoid paying
an upstream to carry the traffic, if I'm going to end up throwing it
away anyhow.
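The trade-off above can be put in toy-model form. Every dollar figure and traffic volume below is an illustrative assumption, not anyone's real pricing:

```python
# Toy model of the de-peering economics: traffic arriving over a
# settlement-free peering port has ~zero marginal cost; the same
# packets arriving via paid transit are billed per Mbps.
TRANSIT_USD_PER_MBPS = 0.50   # assumed blended transit price
attack_traffic_mbps = 20_000  # assumed volume arriving from the peer

cost_peered = 0.0                                       # rides the free port
cost_depeered = attack_traffic_mbps * TRANSIT_USD_PER_MBPS  # now on transit

# Extra monthly spend incurred by de-peering the abusive network:
print(cost_depeered - cost_peered)
```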


NANOG70 tee shirt mystery

2017-06-04 Thread Matthew Petach
So, I've been staring at the NANOG70 tee shirt for
a bit now:

https://flic.kr/p/VejX5y

and I have to admit, I'm a bit stymied.

Usually, the tee-shirts are somewhat referential
to the location or to a particular event; but this
one is leaving me scratching my head.

Is it perhaps a shot of the network engineering
"Ooops (I broke the network again)"  concert
tour?

Or is there some other cultural reference at
play that I'm not aware of?

Enquiring minds want to know!(tm).  :)

Matt


Re: Russian diplomats lingering near fiber optic cables

2017-06-03 Thread Matthew Petach
On Fri, Jun 2, 2017 at 11:40 AM,   wrote:
[...]
>
> Well, I'd be willing to buy that logic, except the specific buildings called
> out look pretty damned big for just drying off a cable.  For example, this
> is claimed to be the US landing point for TAT-14 - looks around 4K square 
> feet?

I think you might be off by an order of magnitude or two
on that.  4,000 sq ft is about the size of the guest bathroom
in Snowhorn's new house, isn't it?

(well, OK, maybe a slight exaggeration...  ;)

Matt


Re: Yahoo Geo Location

2017-04-13 Thread Matthew Petach
Hi hi,

Sorry, a bit behind in my email, apologies for that.

Ping me the /23 in private email and I'll see what we
can do about it.

Thanks!

Matt


On Tue, Apr 11, 2017 at 6:46 PM, Mike Callagy  wrote:
> Does anyone know who Yahoo uses for geo location?  I've got a /23 that has
> moved and I'm unable to find anything definitive regarding Yahoo.  I'm
> already working with Maxmind and Google but Yahoo is the outlier.
>
> Thanks in advance,
> Mike.
>


Peering BOF/Peering social @NANOG69?

2017-02-06 Thread Matthew Petach
I'm squinting at the Guidebook for NANOG69,
and I don't seem to see any peering BOF or
peering social this time around.  Am I being
blind again, and it's on the agenda somewhere
but I'm just overlooking it?
Pointers in the right direction would be appreciated.

Thanks!  :)

Matt


Re: Legislative proposal sent to my Congressman

2016-10-03 Thread Matthew Petach
On Mon, Oct 3, 2016 at 6:15 PM, Lyndon Nerenberg  wrote:
>
[...]
>
> The only way to stop this sort of thing once and for all is to make it 
> punitively costly to the humans at the helm of the corporations selling this 
> crap in the first place.  Under corporate law, this almost always means the 
> directors.  Only when they start losing their homes/yachts/Jaguars, or start 
> spending some quality time in jail, will this problem go away.
>
> Of course, this does require governments to grow some balls :-P
>
> --lyndon


Please, no.

This will put a sword through the heart of open source.

If you hold the executives of the hardware manufacturer
responsible for the software running on their devices,
then the next generation of hardware from every
manufacturer is going to be hardware locked to
ONLY run their software.  No OpenWRT, no Tomato,
no third party software that could be compromised
and leave them holding the liability bag.

If you want a world in which only a handful of companies
make the hardware and software, with commensurately
higher prices, and no freedom to select what software
you'd like to load on it, I suspect this is a good path
towards it.

I think there's got to be solutions that don't drive
us into a closed-software world.  Before we start
asking the government and the lawyers to solve
this in ways we'll come to hate down the road,
let's give it a few more tries ourselves, shall we?

Thanks!

Matt


Re: Optical Wave Providers

2016-09-01 Thread Matthew Petach
(Speaking purely for myself, and thoroughly
demonstrating my relative ignorance on the
topic, but also opening up an opportunity
to become better educated...)

You may find that optical providers don't really
want to mix 1G/10G waves in on systems that
are running Nx100G waves on the fiber.  With
100G coherent systems, optical dispersion
compensation is no longer necessary, as
the DSP on the receiving side takes care of
it.  The 10G waves, on the other hand, would
need dispersion compensation along the run,
which increases the cost of building and
maintaining such a system, because they'd
have to peel off your 10G waves to periodically
do dispersion compensation, then add them
back in.  It's far more cost effective for long
haul providers to carry the traffic as native 100G,
and provide the lower-speed handoff to you at
the endpoints via ethernet framing or MPLS.

It's not so much a matter of "not enough people
owning fiber across the US" as "Oh geez, you want
us to run our system in an inefficient and uneconomical
mode?  Uh...maybe you could call those other guys
down the street instead."   I suspect that if you ran
an experiment by calling for quotes and availability
for 100G waves between your endpoints you'd find
more availability for 100G waves than 10G or 1G
native waves.

(I'm half hoping to get a flurry of replies telling me
I'm completely wrong, and then explaining the real
issues to me.  If nobody replies, it might mean I'm
not entirely wrong).

Thanks!

Matt



On Wed, Aug 31, 2016 at 4:08 PM,   wrote:
> I have been looking at optical wave carriers for some long haul 1G/10G
> across the US. All to major cities and well known POP's.
> I am finding that there are not a lot of carriers who are offering wave
> services, usually just ethernet/MPLS.
> Particularly across the North west.
> Can someone shed some light on who some of the bigger carriers are and any
> challenges you have encountered with services like this?
> Who actually owns the fiber across the US?
>
> Thanks
>
> Tim
>
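The dispersion argument in the reply above can be made concrete with rule-of-thumb numbers. The figures below are common textbook assumptions, not vendor specs: roughly 17 ps/(nm·km) of chromatic dispersion on standard G.652 fiber, around 1000 ps/nm of tolerance for an uncompensated 10G NRZ receiver, and tens of thousands of ps/nm absorbed by the receive-side DSP on a coherent 100G interface:

```python
# Why 10G NRZ needs in-line dispersion compensation on long haul
# routes while 100G coherent does not: compare how far each can go
# before accumulated chromatic dispersion exceeds the receiver budget.
FIBER_DISPERSION = 17.0  # ps/(nm*km), assumed typical SMF figure

def uncompensated_reach_km(tolerance_ps_nm):
    """Distance before accumulated dispersion exceeds the budget."""
    return tolerance_ps_nm / FIBER_DISPERSION

print(round(uncompensated_reach_km(1_000)))   # 10G NRZ: ~59 km
print(round(uncompensated_reach_km(50_000)))  # 100G coherent DSP: ~2941 km
```

A long-haul span runs hundreds of kilometers, so the 10G wave needs periodic compensation along the route while the coherent 100G wave does not, which is the economic asymmetry the message describes.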


Re: NANOG67 - Tipping point of community and sponsor bashing?

2016-06-17 Thread Matthew Petach
On Wed, Jun 15, 2016 at 4:03 PM, Bill Woodcock  wrote:
>[...]  Only then does an IXP produce bandwidth.

Minor nitpick--an IXP never 'produces' bandwidth;
it facilitates movement of data between entities,
but the IXP itself shouldn't be producing bandwidth.
It's the allocation of ports and cross connects from
members into the IXP that produce the bandwidth,
and that would be the case even if the IXP were
removed from the picture and the ports were
cross-connected back-to-back.

(I suppose if an IXP switch fabric were compromised,
someone could use it to generate traffic that did not
originate from any member port, but that would be
a very unusual circumstance indeed...)

Thanks!

Matt


Re: Thinking Methodically about building a PoC

2016-06-13 Thread Matthew Petach
On Sun, Jun 12, 2016 at 9:49 PM, Roland Dobbins  wrote:
>
> On 13 Jun 2016, at 8:52, Kasper Adel wrote:
>
>> 2) Do some planning and research first.
>
> This.
>
> ---
> Roland Dobbins 
>


We never design in a vacuum.  There's always some
target we're  designing towards.  Testing is no different.
Think about what it is you'll need to support.  Look at
historical numbers related to those features/capabilities.
Yes, as the stock market keeps reminding us, past
performance is no guarantee of future results...but at
the same time, those who don't learn from the past
are doomed to re-implement it...poorly.

So, when we test, we look at protocols we've already
been running for years, and then we look at the growth
curves we've seen in those protocols over the past X years,
where X is approximately the estimated lifespan of the
hardware in question.  So, if the current router platform
you're looking to replace has been in place in your
network for 8 years, and you're testing the next
generation for BGP route scaling, look at what
the global BGP table size was 8 years ago,
and look at where it is today; work out the percentage
growth curve for it; then take the current BGP table
size, apply the same compound growth percentage
to it for the next 8 years, and you'll come up with a
reasonable idea of the scale you'll need the box
to handle over its lifetime.  Test that; then, to give
yourself a margin of error, double the number, and
test again.  That way you have a realistic idea of
whether it can support your current growth rate,
and whether it can support your growth if the
growth rate is 1.4x what you expect.

Do those calculations for each of the protocols
under test, and you'll be able to come up with
a reasonable testing profile that's supportable
based on historical information, rather than flights
of fancy.

Hope that helps!

Matt
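The projection recipe described above, as a sketch; the route counts are placeholder examples, not measurements:

```python
# Size the next box from history: derive the compound growth rate over
# one hardware lifetime, project it forward one more lifetime, then
# double the result as the stress-test target.
def projected_size(past, present, years):
    """Project `present` forward by `years` at the historical CAGR."""
    cagr = (present / past) ** (1 / years)  # compound annual growth rate
    return present * cagr ** years

past_routes, present_routes = 260_000, 600_000  # assumed: 8 years ago vs. today
needed = projected_size(past_routes, present_routes, 8)

print(round(needed))      # scale the box must hold at end of life
print(round(needed * 2))  # doubled test target, margin for faster growth
```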


Fwd: Sunday night social?

2016-06-10 Thread Matthew Petach
> I just finished registering for NANOG 67, and answered Yes to "Will you
be attending the sunday evening social" and booked my flight
accordingly...but now i can't seem to find any details on what time it
starts on the website.  Does anyone know what time it starts?
>
> Thanks!
>
> Matt
>
>


Re: announcement of freerouter

2015-12-29 Thread Matthew Petach
On Tue, Dec 29, 2015 at 4:51 AM, Rob Seastrom  wrote:
>> On Dec 29, 2015, at 4:08 AM, Josh Reynolds  wrote:
>>
>> It wasn't about trolling, it was about legitimate prior art and reasonably
>> so. Also, there's potentially a confusing association between the two.
>>
>> I'm glad the terminology was removed.
>
> Since it's an operating system for routing IP, maybe they could call it "IP 
> operating system", styled Ios, to prevent confusion with IOS and iOS.

And not to be confused with IoS,
the Internet of Shit:  ;P

https://youtu.be/soV7-gwxarE


> Lawyers gotta eat too...
>
> -r


Re: de-peering for security sake

2015-12-26 Thread Matthew Petach
On Sat, Dec 26, 2015 at 12:34 PM, Owen DeLong  wrote:
>> On Dec 26, 2015, at 08:14 , Joe Abley  wrote:
>> On Dec 26, 2015, at 10:09, Stephen Satchell  wrote
>>> My gauge is volume of obnoxious traffic.  When I get lots of SSH probes 
>>> from a /32, I block the /32.
[...]
>> With respect to ssh scans in particular -- disable all forms of
>> password authentication and insist upon public key authentication
>> instead. If the password scan log lines still upset you, stop logging
>> them.
>
> This isn’t a bad idea, per se, but it’s not always possible for the guy 
> running the server
> to dictate usage to the people using the accounts.
>
> Also, note that the only difference between a good long passphrase and a 
> private key is,
> uh, wait, um, come to think of it, really not much.
>
> The primary difference is that nobody expects to have to remember a private 
> key so we don’t
> get fussed when they contain lots of entropy. Users aren’t good at choosing 
> good long secure
> passphrases and the automated mechanisms that attempt to enforce strong 
> passwords just
> serve to increase user confusion and actually reduce the entropy in passwords 
> overall.


No, the difference is that a passphrase works
in conjunction with the private key, which is
the "something you have" vs the "something
you know" in two-factor authentication.

With password authentication, there's only a
single solution space for the attacker to
sift through; with private key authentication,
unless you're sloppy about securing your
private key, there's two massive solution spaces
for the attacker to sift through to find the unique
point of intersection.

Massively different solution space volumes
to deal with.  Equating the two only has meaning
in cosmological contexts.

> Owen
>

Matt


Re: de-peering for security sake

2015-12-26 Thread Matthew Petach
On Sat, Dec 26, 2015 at 6:37 PM, Owen DeLong  wrote:
>> On Dec 26, 2015, at 15:54 , Baldur Norddahl  
>> wrote:
>>
[...]

>> The key approach is still better. Even if the password is 123456 the
>> attacker is not going to get in, unless he somehow stole the key file.
>
> Incorrect… It is possible the attacker could brute-force the key file.
>
> A 1024 bit key is only as good as a ~256 character passphrase in terms of 
> entropy.
>
> If you are brute force or otherwise synthesizing the private key, you do not 
> need
> the passphrase for the on-disk key. As was pointed out elsewhere, the 
> passphrase
> for the key file only matters if you already stole the key file.
>
> In terms of guessing the private key vs. guessing a suitably long pass 
> phrase, the
> difficulty is roughly equivalent.

Intriguing point.   I was thinking about it
from the end-user perspective; but you're
right, from the bits-on-the-wire perspective,
it's all just a stream of 1's and 0's, whether
it came from a private key + passphrase
run through an algorithm or not.

Thanks for the reminder to look at it from
multiple perspectives.  ^_^


Matt
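The entropy comparison running through this exchange can be put in numbers. A sketch assuming a uniformly random passphrase (the alphabet sizes are illustrative; note a real 1024-bit RSA key holds far less than 1024 bits of effective entropy, since only certain bit patterns form valid keys):

```python
import math

# How many truly random characters match a given number of raw key bits?
def passphrase_bits(length, alphabet=95):
    """Entropy of a uniformly random passphrase over `alphabet` symbols."""
    return length * math.log2(alphabet)

def chars_needed(target_bits, alphabet=95):
    """Passphrase length needed to reach `target_bits` of entropy."""
    return math.ceil(target_bits / math.log2(alphabet))

# Printable ASCII (95 symbols) carries ~6.57 bits per character...
print(chars_needed(1024))      # ~156 chars to match 1024 raw bits
# ...while a 16-symbol (hex-like) alphabet carries 4 bits per character,
# which is where a "~256 character passphrase" figure comes from.
print(chars_needed(1024, 16))  # 256 chars
```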


Re: Nat

2015-12-20 Thread Matthew Petach
On Sun, Dec 20, 2015 at 9:55 AM, Daniel Corbe  wrote:
>> On Dec 20, 2015, at 11:57 AM, Mike Hammett  wrote:
>>
>> There is little that can be done about much of this now, but at least we can 
>> label some of these past decisions as ridiculous and hopefully a lesson for 
>> next time.
>
> There isn’t going to be a next time.

*points and snickers quietly*

You're either an incredible optimist,
or you're angling to be the next oft-
misquoted "640KB should be enough
for anyone" voice.

We got a good quarter of a century
out of IPv4.  I think we *might* hit
the century mark with IPv6...maybe.
But before we hit that, I suspect we'll
have found enough shortcomings
and gaps that we'll need to start
developing a new addressing format
to go with the newer networking
protocols we'll be designing to
fix those shortcomings.

Until the sun goes poof, there's *always*
going to be a next time.  We're never going
to get it _completely_ right.  You just have
to consider a longer time horizon than our
own careers.

Matt


Re: Nat

2015-12-19 Thread Matthew Petach
On Fri, Dec 18, 2015 at 1:20 PM, Lee Howard <l...@asgard.org> wrote:
>
>
> On 12/17/15, 1:59 PM, "NANOG on behalf of Matthew Petach"
>
>>I'm still waiting for the IETF to come around
>>to allowing feature parity between IPv4 and IPv6
>>when it comes to DHCP.  The stance of not
>>allowing the DHCP server to assign a default
>>gateway to the host in IPv6 is a big stumbling
>>point for at least one large enterprise I'm aware
>>of.
>
>
> Tell me again why you want this, and not routing information from the
> router?

Apologies for the delay in replying, work has
been insanely busy as we come to the end
of the quarter.

The problem is when you have multiple routers
in a common arena, there's no way to associate
a given client with a given router unless the DHCP
server can give out that information.

In an enterprise wireless environment,
you have many different subnets
for different sets of employees.  Unfortunately,
the reality of common RF spectrum dictates
you can't do separate SSIDs for every subnet
your employees belong to; so, you have one
SSID for the company that employees associate
with, and then the DHCP server issues an appropriate
IP to the laptop based on the certificate/credentials
presented.  In the v4 world, you get your IP address
and your router information all from the DHCP server,
you end up on the right subnet in the right building
for your job classification, and all is good.
In the v6 world, your DHCP server hands you an IP
address, but the client sees an entire spectrum of
routers to choose from, with no idea of which one
would be most appropriate to make use of.  Do I
use the one that's here in the same building as me,
or do I use one that's several blocks away in an
entirely different part of the campus?

The wonderful thing about modern wireless setups
for enterprises is that you can allow your employees
to all have their laptops configured to associate with
the same SSID, and handle all the issues of assigning
them to a particular subnet and vlan at the RADIUS/DHCPv4
level; you don't have to have different employees on
different SSIDs for finance vs engineering vs HR
vs sales.  In v4, you can segment the employees
very nicely after they've associated with the AP
and give them all the necessary information for
building in which they're in.  V6 doesn't provide that
ability; so, I associate with the AP, I get my IPv6 address,
and then I look at which possible routers are announcing
information about my subnet, and I see there's one in
building B, one in building F, and one in building W, and
I just randomly pick one, which may be nearby, or may
be across the other side of campus.  Furthermore, I also
see all the announcements from routers for subnets
I'm *not* a part of, cluttering up the spectrum.  Rather
than have routers spewing out "here I am" messages
and taking up RF spectrum, I'd much prefer to explicitly
tell clients "you're in sales, you're in building W, here's
your IP address, and use the upstream router located
in your building."  No extra RF spectrum used up by
routers all over the place saying "here I am", no issues
of clients choosing a less-optimal upstream router and
then complaining about latency and performance.

I can see where in some environments, routers
using RAs to announce their presence to clients
makes sense.  Large-scale enterprise wireless
isn't one of those; so, give us the *ability* to
choose to explicitly give out router information
via DHCPv6 in those situations.  I'm not saying
RAs are bad; I'm simply saying that the IETF
plugging its ears to the needs of enterprises
and claiming that we just don't 'get' how IPv6
is supposed to work and therefore they won't
support assigning router information via DHCPv6
is an impediment to more rapid adoption.

And for those of you who say "but just use
different SSIDs for every subnet in the company",
please go do some reading on how SSIDs
are beaconed, and what the effective limit
is on how many SSIDs you can have within
a given region of RF coverage.  It's generally
best to keep your SSID count in the single
digits; by the time you get more than a dozen
SSIDs, you're using up nearly half of your
RF spectrum just beaconing the SSIDs.
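The v4/v6 asymmetry being described is visible directly in DHCP server configuration. A sketch in ISC dhcpd syntax with illustrative addresses (dhcpd serves v4 and v6 from separate instances/configs; the point is that `subnet6` has no counterpart to v4's `option routers`):

```
# IPv4: dhcpd can hand each class of client its own default router.
subnet 10.20.0.0 netmask 255.255.254.0 {
    range 10.20.0.100 10.20.1.200;
    option routers 10.20.0.1;     # "use the router in *your* building"
}

# IPv6: the same site in dhcpd -6 configuration. There is no standard
# default-gateway option in DHCPv6, so no equivalent line exists here;
# gateway selection is left entirely to Router Advertisements.
subnet6 2001:db8:20::/64 {
    range6 2001:db8:20::100 2001:db8:20::1ff;
    # (no way to express "use the router in your building")
}
```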


>> Right now, the biggest obstacle to IPv6
>>deployment seems to be the ivory-tower types
>>in the IETF that want to keep it pristine, vs
>>allowing it to work in the real world.
>
> There's a mix of people at IETF, but more operator input there would be
> helpful. I have a particular draft in mind that is stuck between "we'd
> rather delay IPv6 than do it wrong" and "be realistic about how people
> will deploy it."
>
> Lee

I agree more operator input would be good;
unfortunately, it's easier for management to
say "let's just delay IPv6 until they get it
working" than it is to justify sending employees

Re: Nat

2015-12-19 Thread Matthew Petach
On Sat, Dec 19, 2015 at 7:17 AM, Sander Steffann  wrote:
> Hi Jeff,
>
>> It's far past time to worry about architectural purity.  We need people
>> deploying IPv6 *NOW*, and it needs to be the job of the IETF, at this
>> point, to fix the problems that are causing people not to deploy.
>
> I partially agree with you. If people have learned how IPv6 works, deployed 
> IPv6 (even if just in a lab) and came to the conclusion that there is an 
> obstacle then I very much want to hear what problems they ran into. That's 
> rarely the case unfortunately. Most of the time I hear "we don't want to 
> learn something new".


Hi Sander,

I have multiple sets of clients on a particular subnet; the subnet
is somewhat geographically distributed; I have multiple routers
on the subnet.  I currently am able to explicitly associate clients
with the most appropriate router for them in v4.
How can I do this using only RAs in IPv6?

I'd be happy to learn something new.  Unfortunately, my
research hasn't shown me that there's something new
to learn, it's shown me that "IPv6 can't do that, sorry."
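
For contrast, this is the kind of per-client router control IPv4 DHCP
already gives you; a minimal ISC dhcpd sketch (the hostname, MAC, and
addresses here are made up for illustration):

```
# dhcpd.conf (IPv4): per-host default router assignment.
# DHCPv6 has no standard equivalent of "option routers".
subnet 10.20.0.0 netmask 255.255.0.0 {
    # a sales client in building W gets the building-W router
    host sales-w-client1 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 10.20.7.10;
        option routers 10.20.7.1;
    }
}
```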

Thanks!

Matt


Re: Nat

2015-12-17 Thread Matthew Petach
On Wed, Dec 16, 2015 at 5:22 PM, Randy Bush  wrote:
>> We need to put some pain onto everyone that is IPv4 only.
>
> this is the oppress the workers so they will revolt theory.

Ah, yes, the workers are quite revolting!

> load of crap.
>
> make ipv6 easier to deploy, especially in enterprise.  repeat the
> previous sentence 42 times.


I'm still waiting for the IETF to come around
to allowing feature parity between IPv4 and IPv6
when it comes to DHCP.  The stance of not
allowing the DHCP server to assign a default
gateway to the host in IPv6 is a big stumbling
point for at least one large enterprise I'm aware
of.  Right now, the biggest obstacle to IPv6
deployment seems to be the ivory-tower types
in the IETF that want to keep it pristine, vs
allowing it to work in the real world.

> what keeps the cows in the pasture is the quality of the grass not
> the height of the fence.
>
> randy

Randy, I would happily appoint you as CIG-Q,
the Chief Inspector of Grass Quality.;)

Matt


Re: IPv6 Cogent vs Hurricane Electric

2015-12-06 Thread Matthew Petach
On Sun, Dec 6, 2015 at 4:24 PM, Max Tulyev  wrote:
> On 04.12.15 01:19, Baldur Norddahl wrote:
>> On 1 December 2015 at 20:23, Max Tulyev  wrote:
>>> I have to change at least one of my uplinks because of it, which one is
>>> better to drop, HE or Cogent?
>>
>> Question: Why would you have to drop one of them? You have no problem if
>> you have both.
>
> Because of money, isn't it? I don't want to pay twice!

Completely makes sense--you want to get the
most value possible for the dollars you spend,
which means you want to choose upstream
providers that give you the most complete
view of the internet possible.

> So as this is not a bug, but a long time story - I relized for me as a
> cutomer connectivity from both Hurricane Electric and Cogent is a crap.
> So people should avoid both, and buy for example from Level3 and NTT,
> which do not have such problem and do not sell me partial connectivity
> without any warning before signing the contract.
>
> I'm just a IP transit customer, and I don't give a something for that
> wars who is the real Tier1. I just want a working service for my money
> instead of answering a hundreds calls from my subscribers!

So, for you, the choice is going to come
down to a comparison of how much each
provider charges vs how much of a headache
they're creating for you in terms of partial
reachability problems.  While bigger entities
like Level 3 and NTT will give you fewer reachability
headaches, they're also likely to charge more; and
you don't want to put all your eggs in one basket.

So, hypothetically speaking, if Level3 and NTT
both charge $2/Mbps/month, and Cogent and
HE charge $0.75/Mbps/month, you might
find that you get a more cost-effective
blend by getting 3 circuits, one each
from Level3 OR NTT, and Cogent,
and HE, for a total cost of $2+$0.75+$0.75,
or $3.50, instead of the other option
of buying two circuits, one each
from Level3 and NTT, which would
be $2+$2, or $4.
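
Worked out in one place (same hypothetical prices as above, per Mbps per
month; the provider names are just placeholders from the example):

```python
# Contrived transit-blend comparison: cost per Mbps/month of a
# three-provider blend vs. buying from two tier-1s only.
TIER1 = 2.00    # hypothetical Level3 or NTT price
BUDGET = 0.75   # hypothetical Cogent or HE price

three_way_blend = TIER1 + BUDGET + BUDGET   # one tier-1 + Cogent + HE
two_tier1s = TIER1 + TIER1                  # Level3 + NTT

print(three_way_blend, two_tier1s)  # 3.5 4.0
```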

Yes, I realize this is a completely contrived
hypothetical set of prices, but the point is
only you have the knowledge of how much
each provider is charging you; take that
information, do a few searches in your
favorite search engine for "$PROVIDER
peering dispute", and see which providers
have the best and worst histories as far
as getting into peering disputes, and then
choose accordingly.

It would be nice if there were a rating
system for ISPs that would make it
easier for smaller companies to know
if they were buying from an "A" rated
ISP vs a "C" or "D" rated ISP, somewhat
like restaurants that have to post their
department of health scores visibly.
However, without any overseeing entity
that would provide such a rating service,
for now it's up to each buyer to do their
own research to decide which ISPs are
safer to work with, and which ones are
riskier.

Best of luck making the right choices!

Thanks!

Matt


Re: IPv6 Cogent vs Hurricane Electric

2015-12-04 Thread Matthew Petach
On Fri, Dec 4, 2015 at 5:43 PM, Randy Bush  wrote:
>> Or, if you feel that Cogent's stubborn insistence on partitioning the
>> global v6 internet
>
> if A does not peer with B,
> then for all A and B
> they are evil partitioners?
>
> can we lower the rhetoric?
>
> randy
>



I thought we already had this conversation
a few years ago, but my memory is short,
so we can have it again.   ^_^;

No, it's not an issue of A not peering
with B, it's A selling "internet transit"
for a known subset of the internet
rather than the whole kit and caboodle.

I rather think that if you're going to put
a sign out saying "we sell internet transit",
it *is* incumbent on you to make a best
effort to ensure you have as complete
a copy of the full routing table as possible;
otherwise, it's potentially a fraudulent claim.
At least, that's what it would be in any other
industry if you sold something under a particular
name while knowing the whole time it didn't
fit the definition of the product.

I know in the service station industry,
I'd get in a lot of trouble if I sold "premium
unleaded gasoline" that was really just the
same as the "regular unleaded" with a
different label.  It's fortunate that we're
not a regulated industry, so there's nobody
checking up on us to make sure that if
we sell "internet transit", it's not really
"internet transit, minus level3, sprint, ATT,
and a bunch of other networks that won't
get your prefixes from me".

It all boils down to 'caveat emptor' -- not all
uses of the word "internet transit" mean the
same thing--check carefully when buying, and
make sure you make informed decisions.

Matt
(now with 50% less rhetoric!)


Re: IPv6 Cogent vs Hurricane Electric

2015-12-03 Thread Matthew Petach
Or, if you feel that Cogent's stubborn insistence on
partitioning the global v6 internet shouldn't be rewarded
with money, pay someone *other* than cogent for
IPv6 transit and also connect to HE.net; that way
you still have access to cogent routes, but you also
send a subtle economic nudge that says "hey cogent--
trying to get into the tier 1 club by partitioning the
internet isn't a good path for long-term success".

Note that this is purely my own opinion, not necessarily
that of my employer, my friends, my family, or even my
cat.  I asked my cat about cogent IPv6, and all I got was
a ghostly hairball as a reply[0].

Matt


[0] https://www.youtube.com/watch?v=6kEME0CxmtY



On Thu, Dec 3, 2015 at 3:19 PM, Baldur Norddahl
 wrote:
> On 1 December 2015 at 20:23, Max Tulyev  wrote:
>
>> Hi All,
>>
>> we got an issue today that announces from Cogent don't reach Hurricane
>> Electric. HE support said that's a feature, not a bug.
>>
>> So we have splitted Internet again?
>>
>> I have to change at least one of my uplinks because of it, which one is
>> better to drop, HE or Cogent?
>>
>
> Question: Why would you have to drop one of them? You have no problem if
> you have both.
>
> Even in the case of a link failure to one of them, you will likely not see
> a big impact since everyone else also keeps multiple transits. You will
> only have trouble with people that are single homed Cogent or HE, in which
> case it is more them having a problem than you.
>
> Regards,
>
> Baldur
>


Re: IPv6 Cogent vs Hurricane Electric

2015-12-03 Thread Matthew Petach
On Thu, Dec 3, 2015 at 6:02 PM, Jared Mauch  wrote:
>
> Looking at the most recent IPv6 data available at CAIDA you can see the 
> customer cone size:
>
> http://as-rank.caida.org/?data-selected-id=15
>
> Be careful as the tool seems fragile when switching from the 2014-09-01 IPv6 
> dataset and trying to sort by options, it seems to switch back to IPv4 
> silently.
>
> Prefixes and/or AS’es in customer cone are likely the best measure, but even 
> there Cogent is 2x HE.net.  The only place where he.net leads is the transit 
> degree with is likely distorted because of what you mention above, full 
> tables, etc.
>
> I find this data interesting and wish there was something more recent than 
> 2014-09-01 to test with.  Perhaps I could do something with all these atlas 
> credits I have.  (or someone could use them for me).
>
> - Jared


Note their analysis is horribly flawed,
as it suffers from a 32-bit limitation
for counting IPv6 addresses.
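
The scale problem is easy to see: even a single /64 holds more addresses
than a 32-bit counter can represent, so any tally kept in 32 bits wraps.
A quick sketch:

```python
# Why a 32-bit counter can't tally IPv6 address space: one /64 already
# overflows it (2**64 addresses vs. a 2**32 - 1 ceiling).
UINT32_MAX = 2**32 - 1

addrs_in_slash64 = 2 ** (128 - 64)
print(addrs_in_slash64 > UINT32_MAX)   # True
print(addrs_in_slash64 % 2**32)        # what a wrapping 32-bit counter reports: 0
```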

I'd love to see them fix their code
and then re-run the analysis.

Matt


Re: Bluehost.com

2015-11-29 Thread Matthew Petach
On Sat, Nov 28, 2015 at 8:13 AM, Bob Evans  wrote:
> I think he means to say the rich get richer on the other side of the
> investment by playing the shorting and the buying of stock in the gambling
> marketplace. As the stock itself can create a new currency so they
> make more money playing with that than the actually investment. They are
> on the inside hence the saying the rich get richer.
> Thank You
> Bob Evans
> CTO


Ah!

So there's two types of value being discussed;
network value, vs dollar value.  While dollar
value is being made, and the rich are getting
richer, the value of the network resources
may indeed be destroyed.

Unfortunately, it's very hard to steer behaviour
when the incentives are not aligned with the
desired outcome, and in these cases, the
incentive (get richer) is often at odds with
what the technical community might desire.
As much as we might wish it to be otherwise,
the primary job of public companies is to make
money, not create network value--at least, as
long as the majority of your voting shares are
held by investors rather than technologists.
I look at companies like Google, Alibaba, and
Facebook as interesting anomalies because
they've structured their corporate ownership
in a way that doesn't cede control over to the
institutional investors the way the vast majority
of public companies have.  It remains to be seen
if that separation allows them to prioritize creating
network value above making money.   (I suspect
Google sidestepped the question when picking their
motto--"Don't be evil" doesn't define the nature of
evil; for investors, not doing everything possible to
make a profit might be seen as 'evil'. )

Thanks!

Matt


>
>
>
>
>> On Wed, Nov 25, 2015 at 5:54 PM, Kiriki Delany 
>> wrote:
>>> [...]
>>>
>>> Bottom line, is the industry needs to be increasing value, because the
>>> flip
>>> side working for no profit, surviving off investment only... there's
>>> no
>>> end-game. You see this cycle time and time again as market share is
>>> grabbed,
>>> then underperforming companies are rolled up. In this process value is
>>> destroyed.
>>>
>>> Ultimately this is also why it's extremely damaging for investors to
>>> constantly invest in companies that don't make a profit, and don't
>>> provide a
>>> successful economical model for the services/products provided. These
>>> companies largely live on investor money, lose money, and in their wake
>>> destroy value for the entire industry. Of course the end-game for the
>>> investors is to make money... I'm always surprised how strong
>>> investment/gambles are for non-profitable companies. I guess there is no
>>> end
>>> to those with too much money that have to place that money somewhere. As
>>> the
>>> rich get richer, there will only be more dumb money cheapening the value
>>> proposition. After all, who needs value when you have willing investors.
>>
>>
>> I'm confused.  If these companies largely live on investor money,
>> lose money, and destroy value...how is it that a scant two sentences
>> later, the rich are getting richer, and there is _more_ dumb money?
>>
>> I would posit the rich get richer because they *do*
>> see value in the investments they make.  That is,
>> value is being created in these deals...just not for
>> everyone.
>>
>> Matt
>>
>
>


Re: OT: BdNOG announces website blocks

2015-11-29 Thread Matthew Petach
On Wed, Nov 18, 2015 at 3:22 PM, Scott Weeks  wrote:
>
> -
> Md. abdullah Al naser mail.naserbd at yahoo.com
> Wed Nov 18 12:56:15 BDT 2015
>
> The service of Facebook, Viber and Whatsapp are
> blocked from now till further notice. It has been
> ordered by Begum Tarana Halim, State Minister, Post
> and Telecommunications.
> --

It occurs to me this is a very pro-competition
move on the part of the government.  By blocking the
major incumbent messaging apps, it opens up
the marketplace for newer, younger startups
to gain marketshare.  Kudos to the State Minister
for such a forward-thinking and pro-competitive-market
strategy!

Matt
(not sure if my tongue should be in my cheek or
not, actually...)


Re: Bluehost.com

2015-11-28 Thread Matthew Petach
On Wed, Nov 25, 2015 at 5:54 PM, Kiriki Delany  wrote:
> [...]
>
> Bottom line, is the industry needs to be increasing value, because the flip
> side working for no profit, surviving off investment only... there's no
> end-game. You see this cycle time and time again as market share is grabbed,
> then underperforming companies are rolled up. In this process value is
> destroyed.
>
> Ultimately this is also why it's extremely damaging for investors to
> constantly invest in companies that don't make a profit, and don't provide a
> successful economical model for the services/products provided. These
> companies largely live on investor money, lose money, and in their wake
> destroy value for the entire industry. Of course the end-game for the
> investors is to make money... I'm always surprised how strong
> investment/gambles are for non-profitable companies. I guess there is no end
> to those with too much money that have to place that money somewhere. As the
> rich get richer, there will only be more dumb money cheapening the value
> proposition. After all, who needs value when you have willing investors.


I'm confused.  If these companies largely live on investor money,
lose money, and destroy value...how is it that a scant two sentences
later, the rich are getting richer, and there is _more_ dumb money?

I would posit the rich get richer because they *do*
see value in the investments they make.  That is,
value is being created in these deals...just not for
everyone.

Matt


Re: route converge time

2015-11-28 Thread Matthew Petach
One thing I notice you don't mention is whether your
BGP sessions to your upstream providers are direct
or multi-hop eBGP.  I know for a while some of the
more bargain-basement providers were doing eBGP
multi-hop feeds for full tables, which will definitely
slow down convergence if the routers have to wait
for hold timers to expire to flush routes, rather than
being able to directly detect link state transitions.
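
The difference in worst-case detection time is stark.  A sketch using the
common default BGP timers (60 s keepalive, 180 s hold; actual defaults
vary by vendor and many operators tune them down or run BFD):

```python
# Worst-case failure detection: a direct eBGP session sees the interface
# go down almost immediately, while a multihop session stays "up" until
# the hold timer expires.
KEEPALIVE_S = 60   # common default keepalive interval
HOLD_S = 180       # common default hold time (3x keepalive)

direct_detect_s = 0          # link-down is signalled by the interface
multihop_detect_s = HOLD_S   # must wait out the hold timer

print(f"direct: ~{direct_detect_s}s, multihop: up to {multihop_detect_s}s")
```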

Matt


On Sat, Nov 21, 2015 at 5:44 AM, Baldur Norddahl
 wrote:
> Hi
>
> I got a network with two routers and two IP transit providers, each with
> the full BGP table. Router A is connected to provider A and router B to
> provider B. We use MPLS with a L3VPN with a VRF called "internet".
> Everything happens inside that VRF.
>
> Now if I interrupt one of the IP transit circuits, the routers will take
> several minutes to remove the now bad routes and move everything to the
> remaining transit provider. This is very noticeable to the customers. I am
> looking into ways to improve that.
>
> I added a default static route 0.0.0.0 to provider A on router A and did
> the same to provider B on router B. This is supposed to be a trick that
> allows the network to move packets before everything is fully converged.
> Traffic might not leave the most optimal link, but it will be delivered.
>
> Say I take down the provider A link on router A. As I understand it, the
> hardware will notice this right away and stop using the routes to provider
> A. Router A might know about the default route on router B and send the
> traffic to router B. However this is not much help, because on router B
> there is no link that is down, so the hardware is unaware until the BGP
> process is done updating the hardware tables. Which apparently can take
> several minutes.
>
> My routers also have multipath support, but I am unsure if that is going to
> be of any help.
>
> Anyone got any tricks or pointers to what can be done to optimize the
> downtime in case of a IP transit link failure? Or the related case of one
> my routers going down or the link between them going down (the traffic
> would go a non-direct way instead if the direct link is down).
>
> Thanks,
>
> Baldur
>


Re: route converge time

2015-11-28 Thread Matthew Petach
Or, better yet, apply a REJECT-ALL type policy
on the neighbor to deny all inbound/outbound
prefixes; that way, you can keep the session
up as long as possible, but gracefully bleed
traffic off ahead of your work.
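
A Junos-flavoured sketch of that kind of drain policy (group and policy
names are made up, and syntax varies by platform):

```
policy-options {
    policy-statement REJECT-ALL {
        then reject;
    }
}
protocols {
    bgp {
        group TRANSIT-A {
            import REJECT-ALL;    /* stop accepting their routes */
            export REJECT-ALL;    /* stop announcing ours */
        }
    }
}
```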

Matt


On Sat, Nov 28, 2015 at 3:46 PM, Jürgen Jaritsch  wrote:
> Hi,
>
> Why you not simply shut down the session upfront (before you turn down the 
> link)?
>
> Best regards
>
>
> Jürgen Jaritsch
> Head of Network & Infrastructure
>
> ANEXIA Internetdienstleistungs GmbH
>
> Telefon: +43-5-0556-300
> Telefax: +43-5-0556-500
>
> E-Mail: j...@anexia.at
> Web: http://www.anexia.at
>
> Anschrift Hauptsitz Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt
> Geschäftsführer: Alexander Windbichler
> Firmenbuch: FN 289918a | Gerichtsstand: Klagenfurt | UID-Nummer: AT U63216601
>
>
> -Original Message-
> From: Baldur Norddahl [baldur.nordd...@gmail.com]
> Received: Sonntag, 29 Nov. 2015, 0:39
> To: nanog@nanog.org [nanog@nanog.org]
> Subject: Re: route converge time
>
> Hi
>
> The IP transit links are direct links (not multihop). It is my impression
> that a link down event is handled with no significant delay by the router
> that has the link. The problem is the other router, the one that has to go
> through the first router to access the link the went down.
>
> The transit links are not unstable and in fact they have never been down
> due to a fault. But we are a young network and still frequently have to
> change things while we build it out. There have been cases where I have had
> to take down the link for various reasons. There seems to be no way to do
> this without causing significant disruption to the network.
>
> Our routers are 2015 hardware. The spec has 2M IPv4 + 1M IPv6 routes in FIB
> and 10M routes in RIB. Route convergence time is specified as 15k
> routes/second. 8 GB ram on the route engines.
>
> Say transit T1 is connected to router R1 and transit T2 is connected to
> router R2.
>
> I believe the underlying problem is that due to MPLS L3VPN the next hop on
> R2 for routes out through T1 is not the transit provider router as usual.
> Instead it is the loopback IP of R1. This means that when T1 goes down, the
> next hop is still valid and R2 is unable to deactivate the invalid routes
> as a group operation due to invalid next hop.
>
> I am considering adding a loopback2 interface that has a trigger on the
> transit interface, such that a shutdown on loopback2 is triggered if the
> transit interface goes down. And then force next hop to be loopback2. That
> way our IGP will signal that the next hop is gone and that should
> invalidate all the routes as a group operation.
>
> Regards,
>
> Baldur
>


Re: IGP choice

2015-10-30 Thread Matthew Petach
On Thu, Oct 22, 2015 at 12:35 PM, Dave Bell  wrote:
> On 22 October 2015 at 19:41, Mark Tinka  wrote:
>> The "everything must connect to Area 0" requirement of OSPF was limiting
>> for me back in 2008.
>
> I'm unsure if this is a serious argument, but its such a poor point
> today. Everything has to be connected to a level 2 in IS-IS. If you
> want a flat area 0 network in OSPF, go nuts. As long as you are
> sensible about what you put in your IGP, both IS-IS and OSPF scale
> very well.

It is rather nice that IS-IS does not require level-2 to be
contiguous, unlike area 0 in OSPF.  It is a valid topology
in IS-IS to have different level-2 areas connected by
level-1 areas, though you do have to be somewhat
careful about what routes you propagate into-and-back-out-of
the intervening level-1 area.

But other than that, yeah, the two protocols are
pretty much homologous.

Matt


Re: IGP choice

2015-10-23 Thread Matthew Petach
On Thu, Oct 22, 2015 at 9:57 AM, marcel.durega...@yahoo.fr
 wrote:
> Hi everyone,
>
> Anybody from Yahoo to share experience on IGP choice ?
> IS-IS vs OSPF, why did you switch from one to the other, for what reason ?
> Same question could apply to other ISP, I'd like to heard some international
> ISP/carriers design choice, please.
>
> Thank in advance,
> Best regards,
> -Marcel

When we decided to go dual-stack many many years
ago, we faced the choice of either running OSPFv2
and OSPFv3 in parallel in the core, or just running
IS-IS.  Several of us on the team had experience
with IS-IS from previous jobs, so we decided to
shift over from OSPF to IS-IS to simplify the
environment by only needing a single IGP for
both address families.

Hope this helps answer your question.

Thanks!

Matt


Re: IGP choice

2015-10-23 Thread Matthew Petach
On Fri, Oct 23, 2015 at 1:41 AM, marcel.durega...@yahoo.fr
 wrote:
> sorry for that, but the only one I've heard about switching his core IGP is
> Yahoo. I've no precision, and it's really interest me.
> I know that there had OSPF in the DC area, and ISIS in the core, and decide
> to switch the core from ISIS to OSPF.

Wait, what?
*checks memory*
*checks routers*

Nope.  Definitely went the other way; OSPF -> IS-IS in the core.

> Why spend so much time/risk to switch from ISIS to OSPF, _in the core_ a not
> so minor impact/task ?
> So I could guess it's for maintain only one IGP and have standardized
> config. But why OSPF against ISIS ? What could be the drivers? People skills
> (more people know OSPF than ISIS) --> operational reason ?

I'm sorry you received the wrong information,
the migration was from OSPF to IS-IS, not
the other way around.

Thanks!

Matt


Re: outlook.com outgoing blacklists?

2015-09-10 Thread Matthew Petach
On Wed, Sep 9, 2015 at 9:49 AM, Todd K Grand  wrote:
> I have an email server which hosts 3 domains.
> I have reason to believe that microsoft maintains an outgoing blacklist and 
> would like confirmation on this.
>
> I have had many a report that people on domains hosted on hotmail/outlook are 
> getting messages bounced back stating that our server was unreachable.
> This only happens for one of the three domains hosted on our server.
>
> I went to outlook.com and setup an account.
> When I create a new message and enter the recipient at that affected domain, 
> the address immediately turns red, and when I hover over it states that
> the address may not be valid.
> This happens without ever sending a packet to our servers.
> The affected domain can send emails to hotmail/outlook accounts just fine.
>
> Anybody have some recommendations on how I resolve this, as Microsoft support 
> seems to be under technical.
>
> Thanks,
>
> Todd K. Grand
>

Certainly looks to be broken to me:

mpetach@hinotori:~> nslookup -q=any gkstream.com
Server: 8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
Name:   gkstream.com
Address: 185.53.179.7
gkstream.com    nameserver = ns1.parkingcrew.net.
gkstream.com    text = "v=spf1 ip6:fd1b:212c:a5f9::/48 -all"
gkstream.com    nameserver = ns2.parkingcrew.net.
gkstream.com
origin = ns1.parkingcrew.net
mail addr = hostmaster.gkstream.com
serial = 144189
refresh = 28800
retry = 7200
expire = 604800
minimum = 86400

Authoritative answers can be found from:

mpetach@hinotori:~>


mpetach@hinotori:~> traceroute gkstream.com
traceroute to gkstream.com (185.53.179.7), 64 hops max, 40 byte packets
 1  ws1 (69.36.244.130)  1 ms  1 ms  1 ms
 2  s0-0-0-2.core1.sjc.layer42.net (69.36.238.33)  4 ms  4 ms  4 ms
 3  ge2-48.core1.sv1.layer42.net (65.50.198.5)  4 ms  4 ms  4 ms
 4  te0-0-0-18.ccr21.sjc04.atlas.cogentco.com (38.104.141.145)  6 ms
41 ms  73 ms
 5  be2015.ccr21.sfo01.atlas.cogentco.com (154.54.7.173)  47 ms
(TOS=40!)  7 ms  7 ms
 6  be2132.ccr21.mci01.atlas.cogentco.com (154.54.30.54)  57 ms  57 ms  57 ms
 7  be2156.ccr41.ord01.atlas.cogentco.com (154.54.6.86)  57 ms  70 ms  57 ms
 8  be2351.ccr21.cle04.atlas.cogentco.com (154.54.44.86)  75 ms  64 ms  67 ms
 9  be2596.ccr21.yyz02.atlas.cogentco.com (154.54.31.54)  71 ms  71 ms  71 ms
10  be2090.ccr21.ymq02.atlas.cogentco.com (154.54.30.206)  84 ms  121 ms  161 ms
11  be2384.ccr21.lpl01.atlas.cogentco.com (154.54.44.138)  150 ms  150
ms  151 ms
12  be2182.ccr41.ams03.atlas.cogentco.com (154.54.77.245)  170 ms  170
ms  169 ms
13  be2261.ccr41.fra03.atlas.cogentco.com (154.54.37.30)  164 ms  164 ms  164 ms
14  be2228.ccr21.muc03.atlas.cogentco.com (154.54.38.50)  174 ms  174 ms  174 ms
15  te0-0-0-2.agr12.muc03.atlas.cogentco.com (154.54.56.222)  173 ms
te0-0-0-2.agr11.muc03.atlas.cogentco.com (154.54.56.206)  191 ms
te0-0-0-2.agr12.muc03.atlas.cogentco.com (154.54.56.222)  174 ms
16  154.25.8.26 (154.25.8.26)  170 ms 154.25.8.22 (154.25.8.22)  175
ms 154.25.8.26 (154.25.8.26)  170 ms
17  149.6.156.195 (149.6.156.195)  175 ms 149.6.156.202
(149.6.156.202)  173 ms  174 ms
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  *^C *
26 ^C
mpetach@hinotori:~>
mpetach@hinotori:~> telnet gkstream.com 25
Trying 185.53.179.7...

telnet: Unable to connect to remote host: Connection timed out
mpetach@hinotori:~>


Matt


Re: BGAN Optimized Laptops

2015-09-10 Thread Matthew Petach
On Thu, Sep 10, 2015 at 6:14 PM, Scott Weeks  wrote:
>
...
>
> Someone told me that there is a way for the browser to say
> to the web server, send me only the parts of the web page I
> request.  For example, send me everything but the flash and
> images.  Being a browser wuss I thought the web server just
> sent everything and the browser decided whether to display
> it or not.  That would mean the data already was transferred
> over the expensive sat link incurring the data costs.
>
> scott

Just wanted to clear one point up...

The web is *not* a "push" model; it's a "pull" model.

The HTML document is nothing but a text document
which has references to other elements that are
available to the browser, should it choose to
request them; but it is incumbent upon the
browser to request each and every one of
those other elements from the server before
they are transferred.  The server will not send
something that was not first requested by the
browser.

It's misunderstandings like this that make content
providers twitch every time an eyeball network
says "well you're *sending* all this data at my
network" -- absolutely nothing is being sent
that was not explicitly requested by the browser
first.   ^_^;
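
The pull model is easy to demonstrate: the HTML is just text naming other
resources, and nothing else moves until the client asks for each one.  A
minimal sketch using Python's stdlib parser (the page content is made up):

```python
from html.parser import HTMLParser

# A toy HTML document: plain text containing *references* to other
# resources.  The server sends only this text; each referenced resource
# requires a separate request from the client before it is transferred.
PAGE = """
<html><body>
  <img src="/images/logo.png">
  <script src="/js/app.js"></script>
  <p>Hello</p>
</body></html>
"""

class RefCollector(HTMLParser):
    """Collect the URLs a browser *could* choose to fetch next."""
    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "src":
                self.refs.append(value)

parser = RefCollector()
parser.feed(PAGE)
print(parser.refs)  # each of these is only fetched if the client asks
```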

Thanks!

Matt


Re: Drops in Core

2015-08-15 Thread Matthew Petach
Quite the inverse, I'd say; most of the capacity
headaches center around the handoff between
networks, and most of the congestion points
I come across are with private peering links
where one party or the other is unwilling or
unable to augment capacity.  The first and
last mile are fine, but the handoff between
the networks is where congestion and drops
occur.
As others have noted, this will vary greatly
depending on the network in question--so
asking a broad community like this is going
to yield a broad range of answers.  You
aren't going to find one single answer, you'll
find a probability curve that represents the
answers from many people running different
networks.
You'll find the location of packet drops tends
to shift depending on where companies are
willing to spend money; some companies
will spend money on the access layer to
ensure no drops happen there, but are
less willing to pay for capacity upgrades
at peering handoffs.  Other networks will
short-change their access, but maintain a
well-connected peering edge.

So--short answer is there is no one answer
to your question.  Collect the different answers,
plot the curve, and decide where along the
curve you want *your* network to land,
and build accordingly.  Nobody has infinite
money, so nobody builds to a level to ensure
zero loss probability to every destination around
the planet.

Matt



On Sat, Aug 15, 2015 at 9:47 AM, Glen Kent glen.k...@gmail.com wrote:
 Hi,

 Is it fair to say that most traffic drops happen in the access layers, or
 the first and the last miles, and the % of packet drops in the core are
 minimal? So, if the packet has made it past the first mile and has
 entered the core then chances are high that the packet will safely get
 across till the exit in the core. Sure once it gets off the core, then all
 bets are off on whether it will get dropped or not. However, the key point
 is that the core usually does not drop too many packets - the probability
 of drops are highest in the access side.

 Is this correct?

 Glen



Re: net neutrality peering dispute between CenturyTel/Qwest and Cogent in Dallas

2015-08-15 Thread Matthew Petach
I dunno, Jim, that sounds almost like you might
think the inevitable outcome will be an everyone
pays model of settlements, the way telcos do
it.  Unfortunately, in that model, the only winners
are the transit networks in the middle, because
no accounting department is going to want to
keep track of settlements for 4,000 other ASNs
that you peer with; their demand will be to reduce
the number of invoices and aggregate through 2 or
3 providers so we only have a small number of
invoices to reconcile.
I can see where you're coming from, but I'm not
sure I like the destination.  :(

Matt


On Sat, Aug 15, 2015 at 10:32 AM, jim deleskie deles...@gmail.com wrote:
 In my 20+ yrs now of playing this game, everyone has had a turn thinking
 their content/eyeballs are special and should get free peering.

 On Sat, Aug 15, 2015 at 1:59 PM, Mike Hammett na...@ics-il.net wrote:

 Arrogance is the only reason I can think of why the incumbents think that
 way. I'd be surprised if any competitive providers (regardless of their
 market dominance) would expect free peering.




 -
 Mike Hammett
 Intelligent Computing Solutions
 http://www.ics-il.com



 Midwest Internet Exchange
 http://www.midwest-ix.com


 - Original Message -

 From: Owen DeLong o...@delong.com
 To: Matthew Huff mh...@ox.com
 Cc: nanog@nanog.org
 Sent: Saturday, August 15, 2015 11:44:57 AM
 Subject: Re: net neutrality peering dispute between CenturyTel/Qwest and
 Cogent in Dallas

 This issue isn’t limited to Cogent.

 There is this bizarre belief by the larger eyeball networks (and CC, VZ,
 and TW are the worst offenders, pretty much in that order) that they are
 entitled to be paid by both the content provider _AND_ the eyeball user for
 carrying bits between the two.

 In a healthy market, the eyeball providers would face competition and the
 content providers would simply ignore these demands and the eyeballs would
 buy from other eyeball providers.

 Unfortunately, especially in the US, we don’t have a healthy market. In
 the best of circumstances, we have oligopolies and in the worst places, we
 have effective (or even actual) monopolies.

 For example, in the area where I live, the claim you will hear is that
 there is competition. With my usage patterns, that’s a choice between
 Comcast (up to 30/7 $100/mo), AT&T DSL (1.5M/384k $40/mo+) and wireless (up
 to 30/15 $500+/month).

 I’m not in some rural backwater or even some second-tier metro. I’m within
 10 miles of the former MAE West and also within 10 miles of Equinix SV1 (11
 Great Oaks). There’s major fiber bundles within 2 miles of my house. I’m
 near US101 and Capitol Expressway in San Jose.

 The reason that things are this way, IMHO, is because we have allowed
 “facilities based carriers” to leverage the monopoly on physical
 infrastructure into a monopoly for services over that infrastructure.

 The most viable solution, IMHO, is to require a separation between
 physical infrastructure providers and those that provide services over that
 infrastructure. Breaking the tight coupling between the two and requiring
 physical infrastructure providers to lease facilities to operators on an
 equal footing for all operators will reduce the barriers to competition in
 the operator space. It will also make limited competition in the facilities
 space possible, though unlikely.

 This model exists to some extent in a few areas that have municipal
 residential fiber services, and in most of those localities, it is working
 well.

 That’s one of the reasons that the incumbent facilities based carriers
 have lobbied so hard to get laws in states where a city has done this that
 prevent other cities from following suit.

 Fortunately, one of the big gains in recent FCC rulings is that these laws
 are likely to be rendered null and void.

 Unfortunately, there is so much vested interest in the status quo that
 achieving this sort of separation is unlikely without a really strong grass
 roots movement. Sadly, the average sound-bite oriented citizen doesn’t know
 (or want to learn) enough to facilitate such a grass-roots movement, so if
 we want to build such a future, we have a long slog of public education and
 recruitment ahead of us.

 In the mean time, we’ll get to continue to watch companies like CC, VZ, TW
 screw over their customers and the content providers their customers want
 to reach for the sake of extorting extra money from both sides of the
 transaction.

 Owen

  On Aug 15, 2015, at 06:40 , Matthew Huff mh...@ox.com wrote:
 
 It's only partially about net neutrality. Cogent provides cheap
 bandwidth for content providers, and sends a lot of traffic to eyeball
 networks. In the past, peering partners expected symmetrical load sharing.
 Cogent feels that eyeball networks should be happy to carry their traffic
 since the customers want their services; the eyeball networks want Cogent
 to pay them extra. When there is congestion, neither side wants to upgrade
 their peering until this is 

Re: Super Core Hardware suggestions

2015-08-08 Thread Matthew Petach
I suspect you might want to look at the QFX10002-36Q series:

http://www.juniper.net/assets/us/en/local/pdf/datasheets/1000531-en.pdf

Matt


On Thu, Aug 6, 2015 at 7:10 PM, Ben Cornish b...@overthewire.com.au wrote:
 Hey All

 We are looking for suggestions for a device to act as a super Core Device / 
 MPLS P router only.
 There seems to be plenty of Chassis based solutions out there that also cater 
 for a lot more.
 We ideally would like a 1RU or 2RU device - Handling MPLS / IGP only

 * Ideally 16 to 48 ports of 10Gig - SFP

 * Non-blocking line rate capable on all ports.

 * MPLS / OSPF / BFD / ISIS / RSVP-TE capable.

 * Deep buffers on the ports would also be nice

 * With a possible option of 40Gig uplinks.

 Thanks
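The P-router-only role described above has a small configuration surface: IGP, MPLS signaling, and BFD, with no BGP on the box. A minimal Junos-style sketch of that role (interface names, metrics, and timers here are placeholder assumptions, not a tested configuration):

```
protocols {
    isis {
        level 2 wide-metrics-only;
        interface xe-0/0/0.0 {
            point-to-point;
            bfd-liveness-detection minimum-interval 300;
        }
        interface lo0.0 passive;
    }
    mpls {
        interface xe-0/0/0.0;
    }
    rsvp {
        interface xe-0/0/0.0;
    }
}
```

The point of the sketch is that a pure P router only needs to swap labels and converge quickly; all service state (VPNs, BGP routes) stays on the PE routers at the edge.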


