Re: DNS resolution for hhs.gov

2023-04-15 Thread Doug Barton

Always love your in-depth analysis. Thanks, Mark.  :)


On 4/14/23 4:40 PM, Mark Andrews wrote:




On 15 Apr 2023, at 02:41, Doug Barton  wrote:

Responses in line below.

Doug


On 4/11/23 8:12 AM, Samuel Jackson wrote:

I wanted to run this by everyone to make sure I am not the one losing my mind 
over this.
A dig +trace cob.cms.hhs.gov fails for me as it looks like 
the NS for hhs.gov does not seem to resolve the hostname.


They shouldn't, since cms.hhs.gov is a delegated subzone. (Also, resolve is the 
wrong term, since those are authoritative servers, not resolvers.) The hhs.gov 
name servers are not authoritative for the cms.hhs.gov zone.

Using `dig +trace cob.cms.hhs.gov` worked for me just now, so it's possible 
that they fixed something in response to Mark's message.


No, they haven’t.

The problem is that QNAME minimisation uses _./A queries to elicit 
referrals, and the servers for hhs.gov don't respond to them.  Optimally we would ask 
NS queries, but there are delegations where the child NS RRset is complete garbage, 
and in this case the hhs.gov servers don't respond to some of them, even over TCP, as was 
shown in the earlier messages.

Telling named to only use TCP with the servers for hhs.gov should help.

e.g.
server 158.74.30.99 { tcp-only yes; };

For `dig +trace` the addresses of the nameservers are looked up, and glue is not 
good enough.  When named attempts to resolve rh202ns2.355.dhhs.gov and similar 
names, the queries it makes do not get responses.

% dig rh202ns2.355.dhhs.gov @158.74.30.99

; <<>> DiG 9.19.11-dev <<>> rh202ns2.355.dhhs.gov @158.74.30.99
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50815
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 2636ce7eeb438b88fe1b0a2d6439dcce550e6799df6049a8 (good)
;; QUESTION SECTION:
;rh202ns2.355.dhhs.gov. IN A

;; ANSWER SECTION:
rh202ns2.355.dhhs.gov. 9000 IN A 158.74.30.99

;; Query time: 328 msec
;; SERVER: 158.74.30.99#53(158.74.30.99) (UDP)
;; WHEN: Sat Apr 15 09:07:58 AEST 2023
;; MSG SIZE  rcvd: 94

% dig _.355.dhhs.gov @158.74.30.99
;; communications error to 158.74.30.99#53: timed out
;; communications error to 158.74.30.99#53: timed out
;; communications error to 158.74.30.99#53: timed out

; <<>> DiG 9.19.11-dev <<>> _.355.dhhs.gov @158.74.30.99
;; global options: +cmd
;; no servers could be reached

% dig 355.dhhs.gov ns @158.74.30.99
;; communications error to 158.74.30.99#53: timed out
;; communications error to 158.74.30.99#53: timed out
;; communications error to 158.74.30.99#53: timed out

; <<>> DiG 9.19.11-dev <<>> 355.dhhs.gov ns @158.74.30.99
;; global options: +cmd
;; no servers could be reached

% dig 355.dhhs.gov ns @158.74.30.99 +tcp

; <<>> DiG 9.19.11-dev <<>> 355.dhhs.gov ns @158.74.30.99 +tcp
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51550
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 86462f55438e987dd7cd37926439dd174d9cf5907438ce51 (good)
;; QUESTION SECTION:
;355.dhhs.gov. IN NS

;; AUTHORITY SECTION:
dhhs.gov. 3600 IN SOA rh120ns1.368.dhhs.gov. hostmaster.psc.hhs.gov. 2023021761 
1200 300 2419200 3600

;; Query time: 351 msec
;; SERVER: 158.74.30.99#53(158.74.30.99) (TCP)
;; WHEN: Sat Apr 15 09:09:11 AEST 2023
;; MSG SIZE  rcvd: 137

% dig _.355.dhhs.gov @158.74.30.99 +tcp

; <<>> DiG 9.19.11-dev <<>> _.355.dhhs.gov @158.74.30.99 +tcp
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 19166
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 22078767daaad75caba70a826439dd1dcc25d44396d38240 (good)
;; QUESTION SECTION:
;_.355.dhhs.gov. IN A

;; AUTHORITY SECTION:
dhhs.gov. 3600 IN SOA rh120ns1.368.dhhs.gov. hostmaster.psc.hhs.gov. 2023021761 
1200 300 2419200 3600

;; Query time: 244 msec
;; SERVER: 158.74.30.99#53(158.74.30.99) (TCP)
;; WHEN: Sat Apr 15 09:09:17 AEST 2023
;; MSG SIZE  rcvd: 139

%
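
The same probes can be scripted against each of the listed servers. A minimal 
sketch, assuming the dnspython package is installed; the server address and 
query name are the ones from the transcripts above:

# Probe an authoritative server over UDP and then TCP to see which transport
# it actually answers on.  Requires dnspython (pip install dnspython).
import dns.exception
import dns.message
import dns.query
import dns.rcode

SERVER = "158.74.30.99"   # one of the dhhs.gov servers from the transcripts above
QUERY = dns.message.make_query("355.dhhs.gov.", "NS")

for label, send in (("UDP", dns.query.udp), ("TCP", dns.query.tcp)):
    try:
        response = send(QUERY, SERVER, timeout=3)
        print(f"{label}: {dns.rcode.to_text(response.rcode())}, "
              f"{len(response.answer)} answer RRset(s)")
    except dns.exception.Timeout:
        print(f"{label}: timed out")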

At this stage I don’t know if the email I sent earlier has even reached the 
administrator responsible.  I haven’t seen a response.  It could still be 
queued on our outbound SMTP servers trying to resolve MX records or their 
targets.

Also, if named times out asking all 8 servers for an in-scope name, why should it 
expect to get an answer for a different in-scope name?  Playing silly games by 
not answering consistently just causes issues.


However dig +trace cms.hhs.go

Re: DNS resolution for hhs.gov

2023-04-14 Thread Doug Barton

Responses in line below.

Doug


On 4/11/23 8:12 AM, Samuel Jackson wrote:
I wanted to run this by everyone to make sure I am not the one losing my 
mind over this.


A dig +trace cob.cms.hhs.gov fails for me as it 
looks like the NS for hhs.gov does not seem to resolve 
the hostname.


They shouldn't, since cms.hhs.gov is a delegated subzone. (Also, resolve 
is the wrong term, since those are authoritative servers, not 
resolvers.) The hhs.gov name servers are not authoritative for the 
cms.hhs.gov zone.


Using `dig +trace cob.cms.hhs.gov` worked for me just now, so it's 
possible that they fixed something in response to Mark's message.


However dig +trace cms.hhs.gov  resolves and so does 


That makes sense, delegated sub zone.  :)


dig +trace eclkc.ohs.acf.hhs.gov 


No delegated sub zones in the path here, so the hhs.gov name servers are 
authoritative for this host.


However if I simply ask my local resolver to resolve cob.cms.hhs.gov, 
it works. Any thoughts on why this is the case?


Because it's getting the answer from the child zone (cms) like it should.

I'm sort of curious about what `dig +trace` results you received 
originally that made you believe that you weren't getting the right 
response. Are you currently seeing what you expect to see?


Re: IoT - The end of the internet

2022-08-10 Thread Doug Barton

On 8/9/22 10:40 PM, b...@theworld.com wrote:


Possibly interesting:

This kind of idea came up w/in ICANN when they were first considering
the idea of adding 1000+ new generic and internationalized TLDs. Will
it cause a melt down?

Money was allocated, studies and simulations were done, reports were
tendered.

The conclusion was: Not likely a problem in terms of stress on the DNS
etc and that seems to have been correct even if there are other, more
social, complaints.

You could dig the studies up if you're interested, they should be on
the ICANN site.

But it's a reasonable approach to the question other than discovering
some structural flaw like we'll run out of IP addresses. Not likely
but just a "for instance" where we wouldn't need simulations to study.


I had the privilege of being part of that discussion in the early-mid 
2000's as IANA GM. Having come out of Yahoo! when it was still 
essentially the largest Internet company, I spent a lot of time 
explaining to folks that while it is important, the root DNS zone is 
still just a zone, and I had zones with tens of thousands of records in 
them at Yahoo! So you tell me how big you want the root zone to be, and 
I'll help scope the project for you.  :)



The studies and simulations were necessary in order to smooth the 
feathers of the non-technologists in the ICANN community, but we were 
just demonstrating what the technologists already knew.


FWIW,

Doug


Re: cf is down?

2022-06-21 Thread Doug Barton

Was someone scanning the Internet for vulnerabilities?


On 6/21/22 12:20 AM, Eric Kuhnke wrote:
Massive spike in consumer facing services reported as broken by 
downdetector, almost all are likely cf customers. See downdetector 
homepage.


Re: Serious Juniper Hardware EoL Announcements

2022-06-17 Thread Doug Barton
I don't want to glorify the idea of converting multicast space by 
commenting on it, however you're wrong in several particulars about the 
relationships around the IANA.


Most notable here is that when it comes to which IP addresses can be 
handed out, to whom, and for what purpose, IANA is at the service 
of the IETF. At the end of the day the IP address registries are not 
that different from any of the other registries that IANA maintains on 
the IETF's behalf.


hope this helps,

Doug (Former IANA GM)


On 6/14/22 8:54 PM, b...@theworld.com wrote:


Just to put a little more flesh on that bone (having spent about a
decade going to ICANN conferences):

Although organized under ICANN, address allocation would generally be
the role of IANA which would assign address blocks to RIRs for
distribution.

It's a useful distinction because IANA and the RIRs act fairly
independently from the umbrella ICANN org unless there's some very
specific reason for, e.g., the ICANN board to interfere like some
notion that the allocation of these addresses would (literally)
threaten the stability and security of the internet, or similar.

Offhand (and following comments by people of competent jurisdiction) I
can't see why IANA or the RIRs would resist this idea in
principle. It's just more stock in trade for them, particularly the
RIRs.

Other than they (IANA, RIRs) wouldn't do this unless the IETF issued a
formal redeclaration of the use of these addresses.

Anyhow, that's roughly how the governance works in practice and has
for over 20 years.

So, I think the first major move would have to be the IETF issuing one
or more RFCs redefining the use of these addresses which would then
put them into the jurisdiction of IANA who could then issue them
(probably piecewise) to the RIRs.

On June 14, 2022 at 13:21 g...@toad.com (John Gilmore) wrote:
  > Dave Taht  wrote:
  > > > Then it was "what can we do with what we can afford" now it's more
  > > > like "What can we do with what we have (or can actually get)"?
  > >
  > > Like, working on better software...
  >
  > Like, deploying the other 300 million IPv4 addresses that are currently
  > lying around unused.  They remain formally unused due to three
  > interlocking supply chain problems: at IETF, ICANN, and vendors.  IETF's
  > is caused by a "we must force everyone to abandon trailing edge
  > technology" attitude.  ICANN's is because nobody is sure how to allocate
  > ~$15B worth of end-user value into a calcified IP address market
  > dominated by government-created regional monopolies doing allocation by
  > fiat.
  >
  > Vendors have leapfrogged the IETF and ICANN processes, and most have
  > deployed the key one-line software patches needed to fully enable these
  > addresses in OS's and routers.  Microsoft is the only major vendor
  > seemingly committed to never doing so.  Our project continues to track
  > progress in this area, and test and document compatibility.
  >
  >  John
  >  IPv4 Unicast Extensions Project 



Re: A way that ARIN can help encourage RPKI adoption

2022-04-12 Thread Doug Barton

On 4/12/22 9:56 PM, John Curran wrote:

Doug, we’re not contracting with these parties to provide any other services…i.e.  
there’s nothing to "add a rider to”.
(Those who have any registration services agreement with ARIN already have 
access to all services incl. RPKI)


Thank you for considering my suggestion. Perhaps I misunderstand the 
current state.


I'm thinking of a scenario where a person holds legacy space, with no 
[L]RSA, but they do have a registered ASN through ARIN (for example). In 
that scenario are they eligible for RPKI for their legacy space?


If so, that's awesome, and I apologize for cluttering everyone's 
mailboxes.  :)


Doug


A way that ARIN can help encourage RPKI adoption

2022-04-12 Thread Doug Barton

On 4/6/22 10:55 AM, John Curran wrote:

Interesting philosophy - historically ARIN customers have asked for simplicity 
in the relationship; i.e. a single fee that encompasses all of the services - 
in this way, an organization can utilize something without having to “get new 
approval” and there’s no financial or service disincentive for deployment of 
IPv6, IRR, RPKI, etc.

Feel free to propose an alternative structure if you think it makes sense - the 
suggestion process would be a good step (but feel free to run for the ARIN 
Board of Trustees if you want to really advocate for a different approach.)


John,

I think you raise an interesting point here. From an outside perspective 
it seems to me that ARIN is using RPKI participation as leverage to get 
legacy space holders to sign an LRSA. You have mentioned in past 
messages that this is at least in part based on the desire to recover 
costs related to providing that service. So let's look creatively at the 
cost issue.


Taking that claim at face value, I wonder if it's possible for ARIN to 
compromise slightly here, in the interest of encouraging the adoption of 
RPKI to the benefit of the Internet community. My suggestion is to open 
participation in RPKI to anyone with legacy space who is paying ARIN a 
fee for service, regardless of LRSA status.


Someone else mentioned creating a lightweight agreement for legacy space 
holders who want RPKI, which I think is a good idea. I'm not up on the 
current contents of the LRSA, but I imagine that there is an 
indemnification clause. I would be surprised if your lawyers didn't want 
that for the situation I'm proposing as well. Being lawyers, I imagine 
that they can come up with other things too.  :)  But given that you're 
already contracting with these parties for other services, a "rider" for 
RPKI should be easily accomplished.


I think that there is broad agreement (although I note not universal 
agreement) that RPKI is a good thing, and that its use should be 
encouraged. I would like to see ARIN do everything in its power to 
support that goal. I think it's also worth noting that there are options 
with at least one other RIR for legacy space holders to get into RPKI 
with a lighter weight mechanism than what ARIN is offering. While on the 
one hand I think that there is some value in the RIR model in that 
services can be tailored to meet the needs of those in their regions, I 
don't think users in the ARIN region should need to "jump the fence" in 
order to help make the Internet more secure.


What do you think?

Doug


Re: "Permanent" DST

2022-03-15 Thread Doug Barton
All of this. The reason that the proposal is always worded "Permanent 
Daylight Saving Time" is that there are a non-trivial number of people 
who genuinely believe that with DST we get more sunlight. Not more 
sunlight during the hours when most people are awake, literally more 
sunlight.


In a world where institutional hours don't change, (schools, workplaces, 
etc.) DST actually makes sense because it more closely aligns the ideas 
of "morning" and "evening" with most people's schedules. For the most 
part people complaining about the change are actually reacting to the 
lengthening and/or shortening daylight hours. The fixed point to change 
the clocks just gives them something to focus on.


Keeping everything on standard time and adjusting schedules makes the 
most sense for letting kids travel to and from school with the most daylight 
possible; but taking just the example of working parents, they would 
need all of their kids' schools to agree to the same change, as well as 
their workplace.


Alas, the true solution is education.


On 3/15/22 3:09 PM, Matthew Huff wrote:

They don't want their names on it when what happened in the 70s happens again. 
The effect of setting everything to DST and staying there is that in the 
winter, especially in the northern latitudes, it will be pitch dark during most of 
the morning when children get picked up at school bus stops. When the tragedy 
happens again, and it will, they will end up undoing this again...

History repeats itself, first as a tragedy, then as a farce...

Matthew Huff | Director of Technical Operations | OTA Management LLC

Office: 914-460-4039
mh...@ox.com | www.ox.com
...

-Original Message-
From: NANOG  On Behalf Of Jay R. Ashworth
Sent: Tuesday, March 15, 2022 5:30 PM
To: Tom Beecher 
Cc: nanog@nanog.org list 
Subject: Re: "Permanent" DST

Oh.  This was "Unanimous Consent"?  AKA "I want to vote for this, but *I do not want 
to be held responsible for having voted for it when it blows up*?"

I'd missed that; thanks.

- Original Message -

From: "Tom Beecher" 
To: "Eric Kuhnke" 
Cc: "nanog@nanog.org list" 
Sent: Tuesday, March 15, 2022 5:04:02 PM
Subject: Re: "Permanent" DST



I would say if something passes the United States Senate in our
current political environment by unanimous consent (which this did) ,
I kinda feel like there won't be a ton of issues with everybody
figuring out how to line themselves up appropriately.

On Tue, Mar 15, 2022 at 5:01 PM Eric Kuhnke  wrote:


That is true but at present everything business related in BC has a
clear expectation of being in the same time zone as WA/OR/CA, and AB
matches US Mountain time.

On Tue, 15 Mar 2022 at 13:35, Paul Ebersman 
wrote:


eric> If Canada doesn't do the same thing at the same time, it'll be
eric> a real hassle, dealing with a change from -8 to -7 crossing
eric> the border between BC and WA, for instance. It has to be done
eric> consistently throughout North America.

You must not have ever dealt with Indiana, where it was DST or not
by choice per county. It wasn't quite the cluster***k you'd think.





Re: S.Korea broadband firm sues Netflix after traffic surge

2021-10-12 Thread Doug Barton

On the cookie issue, I have had very good luck with this in Firefox:

https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/

hope this helps,

Doug


On 10/12/21 6:26 AM, scott wrote:


On 10/12/21 9:15 PM, Matthew Petach wrote:


So, I take it you steadfastly block *all* cookies from being stored
or transmitted from your browser at home?



I used to when Firefox had the "ask me every time" for cookies. They got 
rid of that, so now I clear them out all the time.  Many times a day and 
every time I close the browser... :)


Then I found out about Mozilla Location Services, how they made it so we 
can't block that and realized they only blocked others and not 
themselves from feasting on our data.


https://en.wikipedia.org/wiki/Mozilla_Location_Services

https://location.services.mozilla.com

Bastards!

scott



Re: S.Korea broadband firm sues Netflix after traffic surge

2021-10-10 Thread Doug Barton

[some snipping below]

Also just to be clear, these are my own opinions, not necessarily shared 
by any current or former employers.


On 10/10/21 12:31 PM, Mark Tinka wrote:



On 10/10/21 21:08, Doug Barton wrote:
Given that issue, I have some sympathy for eyeball networks wanting to 
charge content providers for the increased capacity that is needed to 
bring in their content. The cost would be passed on to the content 
provider's customers...


But eyeballs are already paying you a monthly fee for 100Mbps of service 
(for example). So they should pay a surcharge, over-and-above that, that 
determines how they can use that 100Mbps? Seems overly odd, to me.


Yes, I get that. But as you pointed out here and in other comments, the 
ISP market is based entirely on undercutting competitors (with a lot of 
gambling thrown in, as Matthew pointed out).


(in the same way that corporations don't pay taxes, their customers 
do),...



Many a company pays corporate tax, which is separate from the income tax 
they pay for compensation to their staff.


Of course, YMMV depending on where you live.


I didn't say income tax. Corporate taxes are considered an expense by 
the corporation paying them. Like all other expenses, they are factored 
into the cost of goods/services sold.


so the people on that ISP who are creating the increased demand would 
be (indirectly) paying for the increased capacity. That's actually 
fairer for the other customers who aren't Netflix subscribers.


The reason that Netflix doesn't want to do it is the same reason that 
ISPs don't want to charge their customers what it really costs to 
provide them access.


So what rat hole does this lead us down into? People who want to stream 
Youtube should pay their ISP for that? People who want to spend 
unmentionable hours on Linkedin should pay their ISP for that? People who 
want to gawk over Samsung's web site because they love it so much, 
should pay their ISP for that?


First, I'm not saying "should." I'm saying that given the market 
economics, having the content providers who use "a lot" of bandwidth do 
something to offset those costs to the ISPs might be the best/least bad 
option. Whether "something" is a local cache box, peering, money, or 
some other arrangement is something I think that the market should determine.


And to answer Matthew's question, I don't know what "a lot" is. I think 
the market should determine that as well.


And for the record, not only have I never worked for an ISP, I was 
saying all the way back in the late '90s that the oversubscription 
business model (which almost always includes punishing users who 
actually use their bandwidth) is inherently unfair to the customers, and 
when the Internet becomes more pervasive in daily life will come back to 
bite them in the ass. I was laughed at for being hopelessly naive, not 
understanding how the bandwidth business works, etc.




Re: S.Korea broadband firm sues Netflix after traffic surge

2021-10-10 Thread Doug Barton

On 10/1/21 7:45 AM, Mark Tinka wrote:
The reason Google, Facebook, Microsoft, Amazon, e.t.c., all built their 
own global backbones is because of this nonsense that SK Broadband is 
trying to pull with Netflix. At some point, the content folk will get 
fed up, and go build it themselves. What an opportunity infrastructure 
cost itself!


Except that Facebook, Microsoft, and Amazon all caved to SK's demands:

"The popularity of the hit series "Squid Game" and other offerings have 
underscored Netflix's status as the country's second-largest data 
traffic generator after Google's YouTube, but the two are the only ones 
to not pay network usage fees, which other content providers such as 
Amazon, Apple and Facebook are paying, SK said."


Which has emboldened SK to go after the bigger fish.

One incentive I haven't seen anyone mention is that ISPs don't want to 
charge customers what it really costs to provide them access. If you're 
the only one in your market that is doing that, no one is going to sign 
up because your pricing would be so far out of line with your competition.


Given that issue, I have some sympathy for eyeball networks wanting to 
charge content providers for the increased capacity that is needed to 
bring in their content. The cost would be passed on to the content 
provider's customers (in the same way that corporations don't pay taxes, 
their customers do), so the people on that ISP who are creating the 
increased demand would be (indirectly) paying for the increased 
capacity. That's actually fairer for the other customers who aren't 
Netflix subscribers.


The reason that Netflix doesn't want to do it is the same reason that 
ISPs don't want to charge their customers what it really costs to 
provide them access.


Re: DoD IP Space

2021-02-10 Thread Doug Barton

On 2/10/21 5:56 AM, Ca By wrote:
The 3 cellular networks in the usa, 100m subs each, use ipv6 to uniquely 
address customers. And in the case of ims (telephony on a cellular network), it 
is ipv6-only, afaik.


So that answers the question of how to scale networks past what can be 
done with 1918 space. Although why the phones would need to talk 
directly to each other, I can't imagine.


I also reject the premise that any org, no matter how large, needs to 
uniquely number every endpoint. When I was doing IPAM for a living, not 
allowing the workstations in Tucson to talk to the printers in Singapore 
was considered a feature. I even had one customer who wanted the 
printers to all have the same (1918) IP address in every office because 
they had a lot of sales people who traveled between offices who couldn't 
handle reconfiguring every time they visited a new location. I thought 
it was a little too precious personally, but the customer is always 
right.  :)


Sure, it's easier to give every endpoint a unique address, but it is not 
a requirement, and probably isn't even a good idea. Spend a little time 
designing your network so that the things that need to talk to each 
other can, and the things that don't have to, can't. I did a lot of 
large multinational corporations using this type of design and never 
even came close to exhausting 1918 space.


Doug


Re: DoD IP Space

2021-02-05 Thread Doug Barton

Owen,

I am genuinely curious, how would you explain the problem, and describe 
a solution, to an almost exclusively non-technical audience who just 
wants to get the bits flowing again?


Doug
(still not speaking for anyone other than myself)


On 2/5/21 2:25 PM, Owen DeLong wrote:
At the bottom of that page, there is a question “Was this answer 
helpful.” I clicked NO. It gave me a free form text box to explain why I 
felt it was not helpful… Here’s what I typed:


The advice is just bad and the facts are incorrect.
IPv6 is not blocking the Disney application. Either IPv6 is broken
in the users environment (in which case, the user should work with
their network administrator to resolve this) or Disney has failed to
implement IPv6 correctly on their DRM platform.

IPv6 cannot "Block" an application.

Turning off IPv6 will degrade several other services and cause
additional problems. This is simply very bad advice and shame on
Disney for issuing it.


Hopefully if enough people follow suit, Disney will get the idea.

Owen


Re: DoD IP Space

2021-01-22 Thread Doug Barton

The KB indicates that the problem is with the "LG TV WebOS 3.8 or above."

Doug

(not speaking for any employers, current or former)


On 1/22/21 12:42 PM, Mark Andrews wrote:

Disney should hire some proper developers and QA team.

RFC 1123 instructed developers to make sure your products handled multi-homed 
servers properly, and dealing with one of the addresses being unreachable is 
part of that.  It's not like the app can't attempt a stream from the IPv6 
address and, if there is no response in 200ms, start a parallel attempt from the 
IPv4 address.  If the IPv6 stream succeeds, drop the IPv4 stream.  Happy Eyeballs 
is just a specific case of multi-homed servers.

QA should have test scenarios where the app has a dual stack network and the 
servers are silently unreachable over one and then the other transport.  It isn't 
hard to do.  Dealing with broken networks is something every application should 
do.
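
A rough illustration of the approach Mark describes (give IPv6 a short head 
start, race IPv4 behind it, and keep whichever connection completes first), as 
a minimal Python sketch; the host, port, and timers are placeholders, not 
anything Disney or LG actually ships:

import concurrent.futures
import socket
import time

HOST, PORT = "example.com", 443   # placeholder target
HEAD_START = 0.2                  # seconds of preference given to IPv6

def connect(family, delay):
    # Wait out the head start, then try each address returned for this family.
    time.sleep(delay)
    for fam, stype, proto, _, addr in socket.getaddrinfo(
            HOST, PORT, family, socket.SOCK_STREAM):
        sock = socket.socket(fam, stype, proto)
        sock.settimeout(5)
        try:
            sock.connect(addr)
            return sock
        except OSError:
            sock.close()
    raise OSError(f"no usable address in family {family}")

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    attempts = {
        pool.submit(connect, socket.AF_INET6, 0): "IPv6",
        pool.submit(connect, socket.AF_INET, HEAD_START): "IPv4",
    }
    for future in concurrent.futures.as_completed(attempts):
        try:
            sock = future.result()
        except OSError as exc:
            print(f"{attempts[future]} attempt failed: {exc}")
            continue
        print(f"connected over {attempts[future]} to {sock.getpeername()}")
        break
    # A real client would also tear down the losing attempt once it finishes;
    # that bookkeeping is left out to keep the sketch short.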



Re: DoD IP Space

2021-01-22 Thread Doug Barton

Joe,

I haven't done that kind of work for a few years now, but I assume the 
answer to your question in terms of hardware is still yes.


By and large the problem isn't hardware, it's finding the institutional 
will to actually do the thing. That requires a lot of education, 
creating or buying resources that can do the architecture, and 
ultimately the rollout, etc. etc.


And before all of that you have to overcome the fear of things that are 
new and different, and even 20 years later that's still a tough hill to 
climb.


Doug


On 1/21/21 1:01 PM, j k wrote:
Organizations I have worked with for IPv6 transition reduced CAPex and 
OPex by leveraging the IT refresh cycle, and by ensuring their 
investment included leveraging the USGv6 
(https://www.nist.gov/programs-projects/usgv6-program) or IPv6Ready 
(https://www.ipv6ready.org/) programs to mitigate the "We sell IPv6 products, and 
want you to pay for the debugging costs".


Can I assume other organizations don't leverage the IT refresh cycle?

Joe Klein


Re: DoD IP Space

2021-01-22 Thread Doug Barton

Randy,

In one sense I agree with you, but what I was reacting to was the idea 
of an ISP begging IETF to reassign 22/8 as private space because their 
customers won't migrate to IPv6. That's problematic for many reasons, 
and causes the folks who aren't getting with the program to inflict the 
pain caused by their inaction on the rest of the network.


At the same time, I sympathize with the ISP because if they can't meet 
their customer's needs (however dumb those needs are) then the customers 
will leave.


I agree that we don't need a flag day for IPv6, but we have to stop 
creating new accommodations, and we need to be more creative about 
keeping the pain (aka cost) of not moving forward isolated to the folks 
who are creating the problems.


Doug


On 1/21/21 2:22 PM, Randy Bush wrote:

I’m sure we all remember Y2k (well, most of us, there could be some
young-uns on the list). That day was happening whether we wanted it to
or not. It was an unchangeable, unmovable deadline.


but i thought 3gpp was gong to force ipv6 adoption


let me try it a different way

why should i care whether you deploy ipv6, move to dual stack, cgnat,
...?  you will do whatever makes sense to the pointy heads in your c
suite.  why should i give them or some tech religion free rent in my
mind when i already have too much real work to do?

randy



Re: DoD IP Space

2021-01-20 Thread Doug Barton
I used to help large companies rearchitect their addressing, implement 
IPv6, etc. for a living, so no one is more sympathetic than I am about 
how difficult it can be to make these changes. However, I have to ask, 
how far backwards do we want to bend for those that refuse to migrate?


There have already been at least two lines in the sand that the IETF has 
backed down from. Is it even useful for us to keep saying "IPv6 is the 
way forward" any more?



On 1/20/21 7:26 AM, Fred Baker wrote:

I recently had a discussion with an Asian ISP that was asking the IETF to 
PLEASE re-declare DoD space to be private space so that they could use it. This 
particular ISP uses IPv6 extensively (a lot of their services are in fact 
IPv6-only) but has trouble with its enterprise customers. Frankly, enterprise 
use of IPv6 is a problem; they seem to push back pretty hard against using IPv6.

I find this thread highly appropriate.




Re: [EXTERNAL]Re: Don't need someone with clue @ Network Solutions.

2020-12-18 Thread Doug Barton
I'm curious, and my apologies if I missed it, but crocker.com is 
registered at Amazon, and the COM whois shows that it was Amazon's 
registrar that added the host records.


Were you able to work with the Amazon registrar (not AWS), as one of 
their customers, to get the records removed; since crocker.com is not 
delegated to those servers?


If not, that's a pretty big gap in their registrar offering.

Doug

http://registrar.amazon.com/


On 12/18/20 11:03 AM, Matthew Crocker wrote:


At this point I've basically given up and I'm moving the 66.59.48.x IPs to a 
new datacenter over the weekend.  I'll move the DNS servers on the old IPs to 
the new datacenter and call it a day.   We are trying to get all of the 
customers to re-register anyway, then I'll shut all of this down.

Thanks for the help

On 12/17/20, 3:16 PM, "NANOG on behalf of John R. Levine" 
 wrote:



 > a czds dl, however, shows:

 You're right, I checked again.

 > :; zgrep -E ^dns-auth.\.crocker\.com com.txt.gz
 > dns-auth1.crocker.com.172800  in  a   66.59.48.87
 > dns-auth2.crocker.com.172800  in  a   66.59.48.88
 > dns-auth3.crocker.com.172800  in  a   66.59.48.94
 > dns-auth4.crocker.com.172800  in  a   66.59.48.95
 >
 > and leaving off the ^ shows that a large number of zones use those.

 Since crocker.com uses different NS, I still don't see why they're in the
 .COM zone.  Making inquiries.

 Regards,
 John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for 
Dummies",
 Please consider the environment before reading this e-mail. https://jl.ly



Re: CNAME records in place of A records

2020-11-06 Thread Doug Barton

On 11/6/20 2:49 PM, Sabri Berisha wrote:

- On Nov 6, 2020, at 2:07 AM, Dovid Bender  wrote:

Hi,


Sorry if this is a bit OT. Recently several different vendors (in completely
different fields) where they white label for us asked us to remove A records
that we have going to them and replace them with CNAME records. Is there
anything *going around* in the security aranea that has caused this?


Security-wise, you should be good. But make sure you're not attempting to 
deliver e-mail to such a domain; CNAMEs cannot be used in MX records.


Or NS records, since you mentioned it.  :)
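
If you want to sanity-check a zone for this after making the change, here's a 
minimal sketch assuming the dnspython package; the domain is a placeholder 
(RFC 2181 section 10.3 is the rule that forbids aliases as MX or NS targets):

# Flag MX and NS targets that turn out to be CNAMEs.
import dns.resolver

DOMAIN = "example.com"   # placeholder domain to check

def is_cname(name):
    try:
        dns.resolver.resolve(name, "CNAME")
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

for rdtype in ("MX", "NS"):
    try:
        answers = dns.resolver.resolve(DOMAIN, rdtype)
    except dns.resolver.NoAnswer:
        continue
    for rdata in answers:
        target = rdata.exchange if rdtype == "MX" else rdata.target
        verdict = "is a CNAME (broken)" if is_cname(target) else "ok"
        print(f"{rdtype} target {target}: {verdict}")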

Doug


Re: Apple Catalina Appears to Introduce Massive Jitter - SOLVED!

2020-10-30 Thread Doug Barton
I would hesitate to blame BT. I have a macbook pro from ~1 year ago, on 
Catalina, and I use BT extensively ... mouse, keyboard, and headset. I 
do have location services trimmed down to just find my mac.


I ran: ping -c 1000 -i 0.1 

1000 packets transmitted, 998 packets received, 0.2% packet loss
round-trip min/avg/max/stddev = 1.255/2.378/9.095/0.634 ms

One thing that may contribute to blaming BT however is if you are using 
wifi on 2.4G only, and/or preferring it, as BT operates in the same 
frequency range neighborhood. My macbook is connected using 5G.


Happy to compare other settings if there is interest.

Doug


On 10/30/20 12:08 PM, Mark Tinka wrote:

Hi all.

So I may have fixed this for my end, and hopefully others may be able to 
use the same fix.


After a tip from Karl Auerbach and this link:

https://developer.apple.com/forums/thread/97805

... I was able to fix the problem by disabling Bluetooth.

However, disabling Bluetooth was not enough. I also had to disable all 
Location Services.


After that, I re-enabled Location Services and only allowed for two 
features:


     - NetSpot
     - Find My Mac

With just those two location services, as well as Bluetooth disabled, I 
have no more high jitter.


App performance like Zoom and Youtube uploads are now crisp, with 0.0% 
packet loss.


So looks like that Bluetooth is a huge problem. Confirmed by opening the 
"Console" app, and adding "scan" in the filter bar, top right.


A peak latency of 13.5ms after 300 packets:

Marks-MacBook-Pro.local (172.16.0.239) 2020-10-30T21:06:05+0200
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
Packets   Pings
  Host Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 172.16.0.254 0.0%   300    3.1   4.8   2.2  13.5   1.9

Mark.




Re: Disney+ contacts or geolocation ideas

2020-07-22 Thread Doug Barton

I forwarded your message to the appropriate resource.

hope this helps,

Doug


On 7/22/20 4:51 PM, Paul Nash wrote:

I’m looking for a technical contact at Disney regarding geo-location.  I have a 
client (apartment building) with a /24 (one IP per apartment).  We recently 
upgraded out Internet connection to give a much-needed speed boost.  Same 
connectivity provider, same IP addresses, just a bigger pipe.

Since then, a whole bunch of people have been unable to get to Disney+, while 
some can.  Those that fail have existing D+ subscriptions, and get “error code 
73”, which allegedly relates to location and rights management.

The bandwidth provider has checked DNS, reverse DNS and contacted Disney, to no 
avail.  WhoIS shows it as being in Toronto, Canada.

Meanwhile, there is a lynch mob forming, and a scaffold being built in the 
parking lot :-).

Any pointers on how to find someone at Disney with clue, who will be able to 
tickle their geolocation database.

The really irritating part is that everything worked until we had the bandwidth 
upgrade.



Re: RIPE our of IPv4

2019-11-27 Thread Doug Barton

On 11/26/19 12:13 AM, Sabri Berisha wrote:

- On Nov 26, 2019, at 1:36 AM, Doug Barton do...@dougbarton.us wrote:


I get that some people still don't like it, but the answer is IPv6. Or,
folks can keep playing NAT games, etc. But one wonders at what point
rolling out IPv6 costs less than all the fun you get with [CG]NAT.


When the MBAs start realizing the risk of not deploying it.

I have some inside knowledge about the IPv6 efforts of a large eyeball network. 


For what it's worth, I have extensive experience in both eyeball and 
content networks.



In that particular case, the cost of deploying IPv6 internally is not simply 
configuring it on the network gear;


We're rehashing old ground here. Perhaps you weren't on the list the 
last N times this has come up. My short answer, I didn't say it would be 
easy, I said it is less expensive than the alternatives over time.



that has already been done. The cost of fully supporting IPv6 includes (but is 
probably not limited to):

- Support for deploying IPv6 across more than 20 different teams;


I don't understand how you're using "teams" here. For the most part you 
turn it on, and end-user systems pick up the RA and do the right thing. 
If you want something fancier, you can do that with DHCP, static 
addressing, etc. In other words, this works the exact same way that IPv4 
does.



- Modifying old (ancient) internal code;


What code? IPv4 isn't going away on the inside, so what needs to be 
modified? If you're talking monitoring software, etc., if you're still 
using software that doesn't understand IPv6, you're way overdue for an 
upgrade already.



- Modifying old (ancient) database structures (think 16 character fields for IP 
addresses);


Either see above, or much more likely you'd be adding a field, not 
modifying the existing one.



- Upgrading/replacing load balancers and other legacy crap that only support 
IPv4 (yeah, they still exist);


If we're talking about an enterprise that is seriously still using stuff 
this old, it's more likely than not that IPv6 is the least of their 
worries. And I'm not being flippant or disrespectful here. For at least 
the last 10 years or so, and definitely in the last 5, all of the 
enterprise level network gear sold has had support for IPv6. So again, 
way overdue for an update, but if this is all you have available, then 
you likely have bigger fish to fry. (And feel free to save the 
obligatory, "My favorite network widget that I use in my 100% 
enterprise-class network does not support IPv6." Yes, I realize that 
there are exceptions, but they are the exceptions, not the rule.)



- Modifying the countless home-grown tools that automate firewalls etc;


Yes, this is actually a legitimate point.


- Auditing the PCI infrastructure to ensure it is still compliant after 
deploying IPv6;


Also legit, where it applies, although you also have the option of not 
deploying on the network with the PCI data. For internal-only things, 
it's great to have IPv6, and will become increasingly important as time 
goes on, but it's not required.


Execs have bonus targets. IPv6 is not yet important enough to become part of that bonus target: there is no ROI at this point. 


That depends heavily on what enterprise you're talking about.

The point I'm trying to make is that there IS an ROI here. For content 
providers it's the ability to create a stable network architecture 
across all of your sites, and connect directly to the many eyeballs that 
are already on IPv6 (cell networks, many ISPs, etc.). There is also the 
much harder to define ROI for future-proofing the network, but that's 
part of the master class.  :)


For eyeball networks the same stable network architecture argument 
applies. The immediate ROI is harder to define, but similar, in the 
sense that connecting directly to the many content networks that have 
already deployed IPv6, and future-proofing, are both relevant.


Much harder for the eyeball networks to quantify are the savings related 
to NOT having to do [CG]NAT, etc. To create that slide you need an exec 
who truly understands the (rising over time) costs of twiddling around 
with the NATs, as well as the realistic costs involved in rolling out 
IPv6 balanced by the long term support. Then you also need an executive 
team and board that can understand those slides when they see them.


But it's not all in vain. I'm on Spectrum here at home, and I have 
native IPv6 that "just worked" from the moment I plugged my router into 
my cable modem.


So there are a non-trivial number of both eyeball and content networks 
that already get it. The value proposition obviously does exist, we just 
need more people in the right places with the right knowledge and 
experience to make it happen.


Doug




Re: RIPE our of IPv4

2019-11-25 Thread Doug Barton

On 2019-11-25 20:26, Brandon Martin wrote:

On 11/26/19 4:36 AM, Doug Barton wrote:
I get that some people still don't like it, but the answer is IPv6. 
Or, folks can keep playing NAT games, etc. But one wonders at what 
point 
rolling out IPv6 costs less than all the fun you get with [CG]NAT.


If it weren't for the ongoing need to continue to support IPv4
reachability (i.e. if we'd flag-day'd several years ago), I think the
(admittedly non-scientific) answer to that question is that we have
already passed it.

However, in the face of continuing need for IPv4 reachability, I'm
less sure.  I think that the incremental cost to deploy and support
IPv6 is probably no more than the incremental savings of CGNAT
headaches for service providers caused by offloading what traffic you
can to native IPv6.  Those savings from not just from capacity savings
(which can be extreme to totally trivial depending on your size) but
also support for having 3rd party services properly treat an SP
customer as an individual customer rather than the results of multiple
SP customers being lumped onto a small CGNAT target pool.

That is, even if you are 100% committed to needing to run a functional
CGNAT as a service provider and deal with everything that entails, I
think it's probably STILL in your short-term economic best interest to
deploy IPv6 simply due to the reduction in scope of "everything that
entails".


I think this is spot on. The only thing I'd add is that the costs to 
deploy IPv6 will remain fairly constant or perhaps go down some over 
time as economies of scale continue to grow, whereas the costs for 
continuing to prop up IPv4 will only increase.


Doug


Re: RIPE our of IPv4

2019-11-25 Thread Doug Barton

On 2019-11-25 1:47 PM, Valdis Klētnieks wrote:

On Tue, 26 Nov 2019 06:46:52 +1100, Mark Andrews said:

On 26 Nov 2019, at 03:53, Dmitry Sherman  wrote:

 I believe it’s Eyeball network’s matter to free IPv4 blocks and move to 
v6.



It requires both sides to move to IPv6.  Why should the cost of maintaining
working networks be borne alone by the eyeball networks?   That is what is
mostly happening today with CGN.


I believe that Dmitry's point is that we will still require IPv4 addresses for 
new
organizations deploying dual-stack


Right, which is why we started warning folks about this issue 10+ years 
ago, when IPv4 was still plentiful and cheap.


But even content networks have NAT options, and while most of them are 
not pretty, the options become more limited every day that passes.


I get that some people still don't like it, but the answer is IPv6. Or, 
folks can keep playing NAT games, etc. But one wonders at what point 
rolling out IPv6 costs less than all the fun you get with [CG]NAT.


Doug


Re: RIPE our of IPv4

2019-11-25 Thread Doug Barton
The two things feed each other. Big content networks have had IPv6 for 
years now, and the mobile phone networks are primarily, if not 
exclusively IPv6 on the inside.


Adding IPv6 now helps push the cycle forward, whether you are an 
eyeball, content, or other network.


Doug


On 11/25/19 11:50 AM, Dmitry Sherman wrote:
Because we can’t only use ipv6 on the boxes, each box with ipv6 must 
have IPv4 until the last eyeball broadband user has ipv6 support.


Best regards,
Dmitry Sherman
Interhost Networks
www.interhost.co.il
dmi...@interhost.net
Mob: 054-3181182
Sent from Steve's creature


On 25 Nov 2019, at 21:47, Mark Andrews  wrote:

 It requires both sides to move to IPv6.  Why should the cost of 
maintaining working networks be borne alone by the eyeball networks?   
That is what is mostly happening today with CGN.


Every server that offers services to the public should be making them 
available over IPv6.   Most of the CDNs support both transports. Why 
are you scared to tick the box for IPv6?  HTTPS doesn’t care which 
transport is used.


--
Mark Andrews


On 26 Nov 2019, at 03:53, Dmitry Sherman  wrote:

 I believe it’s Eyeball network’s matter to free IPv4 blocks and 
move to v6.



Best regards,
Dmitry Sherman



On 25 Nov 2019, at 18:08, Billy Crook  wrote:


Huh.  I guess we get to go home early today then?  And look into 
that whole "Aye Pee Vee Sicks" thing next week aye boss?


On Mon, Nov 25, 2019 at 8:58 AM Dmitry Sherman wrote:


Just received a mail that RIPE is out of IPv4:

Dear colleagues,

Today, at 15:35 UTC+1 on 25 November 2019, we made our final /22
IPv4 allocation from the last remaining addresses in our
available pool. We have now run out of IPv4 addresses.


Best regards,
Dmitry Sherman
Interhost Networks
www.interhost.co.il 
dmi...@interhost.net 
Mob: 054-3181182
Sent from Steve's creature



Re: IPv6 Pain Experiment

2019-10-04 Thread Doug Barton

On 10/4/19 7:45 AM, Warren Kumari wrote:

On Fri, Oct 4, 2019 at 5:13 AM Masataka Ohta
 wrote:


Doug Barton wrote:


And even
if you do need to change providers, once you have your addressing plan
in place all you have to change is the prefix.




This is the same as saying "If you need to change providers in IPv4,
once you have your addressing plan in place all you have to do is
change the prefix", or "To build the Eiffel Tower, all you have to do
is bolt bits of metal together" -- it's technically correct*, but
handwaves away the actual complexity and scale of work.
Yes, you (clearly) can renumber v6 networks, and it's *probably*
easier than renumbering v4, but "just change the prefix" oversells it.


I was assuming that this audience understands the relative complexity of 
changing the network parts of the address, and leaving the subnet and 
host parts in place.
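
To make that concrete, here's a small sketch using Python's standard ipaddress 
module; the prefixes are documentation examples, not anyone's real plan. It 
moves a host to a new /48 while leaving the subnet and host bits untouched:

# Renumber by swapping the provider prefix: keep everything below the prefix
# length (subnet ID and interface ID) and change only the network bits.
import ipaddress

def rebase(addr, old_prefix, new_prefix):
    old = ipaddress.ip_network(old_prefix)
    new = ipaddress.ip_network(new_prefix)
    assert old.prefixlen == new.prefixlen, "both plans must use the same prefix length"
    low_bits = int(ipaddress.ip_address(addr)) & int(old.hostmask)
    return ipaddress.ip_address(int(new.network_address) | low_bits)

# Same subnet (0x0010) and host (::53), new provider prefix:
print(rebase("2001:db8:1234:10::53", "2001:db8:1234::/48", "2001:db8:abcd::/48"))
# -> 2001:db8:abcd:10::53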


And by and large, it is not true that you can do this with IPv4. You 
might occasionally get lucky with it, but that would be the exception, 
not the rule.


As for your Eiffel Tower example, the design and architecture are the 
pieces that would already be in place as part of your networking plan, 
so in a sense we're talking about RE-building the Eiffel Tower by taking 
off one bit of metal (the old network) and bolting another piece in its 
place. Not building it all over again from scratch.


So you can credibly accuse me of underselling the renumbering effort, 
but don't engage in hyperbole to try to balance the scales.


Renumbering has its pain points regardless of the protocol, but if 
you've designed your network correctly IPv6 renumbering is considerably 
easier than it is in IPv4.


Doug



Re: IPv6 Pain Experiment

2019-10-03 Thread Doug Barton

On 10/3/19 8:41 PM, Masataka Ohta wrote:

Doug Barton wrote:


Automatic renumbering involving DNS was important design goal
of IPv6 with reasons.

Lack of it is still a problem.


Meanwhile, the thing that most people miss about IPv6 is that except 
in edge cases, you never have to renumber. You get a massive address 
block that you can use as long as you pay your bill.


That is called "provider lock-in", which is the primary
reason, when IPng WG was formed, why automatic renumbering
is necessary for IPv6.


... unless you're large enough to have your own address space. And even 
if you do need to change providers, once you have your addressing plan 
in place all you have to change is the prefix.



So, again, stop spreading FUD.


Look at the fact that IPv6 failed badly.


Except that it's not failing, deployment and bits transported go up 
every month. Almost all of the large content providers are accessible 
via IPv6, and all of the major US mobile carriers are using it, some 
exclusively.


I get that you WANT it to fail, and you're entitled to your opinion. 
You're even entitled not to deploy it. But you're not entitled to your 
own facts.


Doug



Re: IPv6 Pain Experiment

2019-10-03 Thread Doug Barton

I'm going to reply in some detail to your points here because they are
very common arguments that have real answers. Those who have heard all 
this before are free to move on.  :)


You sound like someone who doesn't have experience with IPv6. I don't 
intend any criticism, I'm simply saying that once you start working with 
it, you learn it, and almost all of these concerns evaporate. Just like 
what happened when you learned IPv4.


On 10/3/19 8:20 AM, Naslund, Steve wrote:

I don’t think the issue is the readability of the addresses (although
hex does confuse some people), mainly it is the length and ability to
deal with any string of numbers that long for a human,


Coming from the IPv4 world, it's very common that when you're working with 
addresses directly, most, or sometimes even all, of the bits are 
different. By and large this isn't the case with IPv6. If you sized your 
RIR request properly, you'll have the same /32 (or shorter) prefix 
across your entire network. That covers the first two hextets of the 
network portion of the address (that is, half the network portion).


One of the great things about IPv6 is sparse allocation. The next hextet 
(third of four in the network section of the address) covers the bits from 
/33 through /48. Since for all but the largest sites you'll 
assign a /48 per site (65,536 /64s), that hextet will be the same across 
the entire site. For a really large office site, or a medium size data 
center, you might assign a /44 (1,048,576 /64s), so 3/4 of that hextet 
will be stable across that site. For a large data center you might 
assign a /40 (16,777,216 /64s), so half the hextet will be stable across 
that site.


So let's say a site is allocated 2001:0db8:1234::/48. A common practice 
is to use the top of the space for infrastructure, so 
2001:0db8:1234:8000::/49 (32,768 /64s) would be the same prefix used 
everywhere at that site, and every site would have the same admin 
prefix. Hopefully by now you can see the opportunities ...
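
To see those stable prefixes fall out of the math, here's a short sketch with 
Python's standard ipaddress module, using documentation prefixes; the /32 
allocation and /48-per-site sizing are just the assumptions from the text above:

# Sparse allocation in practice: carve per-site /48s out of a /32, then /64s
# out of a site's /48.  Note how the leading hextets stay constant.
import ipaddress
from itertools import islice

org = ipaddress.ip_network("2001:db8::/32")          # the org-wide allocation

for site in islice(org.subnets(new_prefix=48), 4):   # first few of 65,536 sites
    print("site:", site)

site = ipaddress.ip_network("2001:db8:1234::/48")    # the site from the example

for vlan in islice(site.subnets(new_prefix=64), 4):  # first few of 65,536 /64s
    print("  vlan:", vlan)

infra = list(site.subnets(new_prefix=49))[1]         # top half of the site
print("infrastructure:", infra)                      # 2001:db8:1234:8000::/49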



and I do
realize that you can do static addressing in IPv6 (but I sure would
not want to since the manual entry of the addresses is going to be
error prone on both the host and into DNS). 


Copy and paste is your friend. Seriously. If you're typing things in 
you're doing it wrong. Obviously there are a very few situations where 
you don't have a choice, but seriously, copy and paste.  :)



It is just way harder
for a human to deal with hex v6 address than to easily memorize four
decimal numbers in v4.  Most system admins and engineers can rattle
off the IPv4 address of a lot of their systems like gateways, DNS
servers, domain controllers, etc.  Can you imagine keeping those v6
addresses in your head the same way? 


Why would you bother? That's what DNS is for. Also, see above, where you 
can establish patterns that make this easier for the very few situations 
where you actually have to use addresses, and also easier to spot check 
them when you do.



Think about reading them over
the phone, typing them into a support case, typing a configuration
sheet to be entered by some remote hands etc.  I am not saying it is
insurmountable, it is just something people need to get used to.  To
me, that is the biggest reason not to do more manual assignments than
we need to. I do understand why they need to be the way they are but
I can't see anyone thinking IPv6 addresses are easier to read and
handle.


No one said easier. Just different. And not hard to learn as you work 
with them over time.



It is not a misconception that most server guys are used to static
addressing and not auto-assignment. 


See the other messages where we talked about how static addressing for 
services is not a problem for IPv6.



It also takes some time to get
people to stop hardcoding static addressing into system
configurations.  There are lots of applications that have dialog
boxes asking for addresses instead of names.  That needs to stop in
an auto-configured or DHCP environment.


Yes, things are different ... one could even argue better, but yes, 
different.



(yeah, I know all about DHCP reservations but I hate them).


They aren't that bad, but made much easier with a good IPAM. :)


Your comment regarding small networks not needing IPv6 is exactly my
point.  The original post was talking about MANDATING the use of IPv6
to the exclusion of (or taxation of) IPv4.  My point is that there is
not really a need to do so in a lot of use cases.

Is there a huge advantage to stop using RFC1918
addressing within our network?  No, not really.


You've obviously never had to renumber two large internal networks after 
a merger.



Would I build a
completely new enterprise on IPv4... probably not.   Would I recommend
that every global enterprise eradicate the use of IPv4 in the next
couple of years... no.


No serious person is doing that.

hope this helps,

Doug


Re: IPv6 Pain Experiment

2019-10-03 Thread Doug Barton

On 10/3/19 5:35 PM, Masataka Ohta wrote:

Doug Barton wrote:

Not if you configure your services (like DNS) with static 
addresses, which as we've already discussed is not only possible, but 
easy.


That's your opinion. But, as Mark Andrews said:

 > Actually you can do exactly the same thing for glue.


If Mark wants to do that on his network, he can. That doesn't mean that 
you have to.



I show it not so easy.


No, you've shown that there are ways to do things differently than using 
static addresses for your services.



 > Please stop spreading FUD regarding this topic.

Automatic renumbering involving DNS was important design goal
of IPv6 with reasons.

Lack of it is still a problem.


If you're talking clients (system using only outbound connections) then 
they will renumber with SLAAC+DHCPv6 the same way that IPv4 clients 
renumber with DHCP now.


If you're talking about services (with inbound connections) then yes, IF 
you ever have to renumber, AND you use static addresses instead of 
SLAAC, you'll have to do them by hand.


But weren't you just arguing that dynamic addresses for services is a 
bad thing? Which way do you want it? Because you can have it either way. 
In fact, you can have it BOTH ways if you want it, depending on the 
service. I find static addresses for services easier myself, but Mark is 
a lot smarter than I am, so I'm perfectly ready to be proven wrong.


Meanwhile, the thing that most people miss about IPv6 is that except in 
edge cases, you never have to renumber. You get a massive address block 
that you can use as long as you pay your bill. The chance that you'll 
have to renumber, ever, is incredibly small.


So, again, stop spreading FUD.

Doug



Re: IPv6 Pain Experiment

2019-10-03 Thread Doug Barton

On 10/2/19 10:27 PM, Masataka Ohta wrote:


The tricky part is in converting a domain name of a
primary nameserver to IP addresses,  when the IP
addresses of the primary nameserver changes.

If the primary nameserver ask DNS its IP address
to send an update request to itself, it will get
old addresses.


Not if you configure your services (like DNS) with static addresses,  
which as we've already discussed is not only possible, but easy.


Please stop spreading FUD regarding this topic.

Thanks!

Doug


Re: IPv6 Pain Experiment

2019-10-02 Thread Doug Barton
Yes, IPv6 suffers from Second System Syndrome. No, this is not news, 
neither is it malleable (no matter how much whinging about roads not 
taken occurs).



On 10/2/19 6:30 PM, George Michaelson wrote:

A long time ago, in another country, JANET had a mail list to discuss
email, in a world before DNS. And, when DNS emerged, JANET mail list
made a *deliberate* decision to make the domain order of UK email
domains the reverse of every other country worldwide. A DELIBERATE
decision. (I was there, on this list. Others may disagree with my
interpretation of acts done and motivations, but I want to be clear I
didn't "hear this second hand" -I was receiving the mailflows
discussing this in public. I am sure there are other private
conversations I didnt see)

It wasn't a consensus decision. It wasn't an entirely rational
decision. OTOH it was a research network, email was a research
activity, and in some ways, it made sense to find out what happened.
That the decision had repercussions which echoed down the years, and
marginalized some communications Uk and internationally, is perhaps,
the real lesson.

IPv6 had an opportunity to consider designs which were intermediate,
(IPv4-and-a-bit) and backwards compatible. And, like JANET and domain
order, people decided not to do it, believing it was interesting and
research-y.

I too wish we had selected TUBA, or had thought more about interop
with IPv4. I sometimes wish I understood why SRC was the first element
off the wire, and not DST, since rational ASIC/FPGA hardware can latch
early on the SRC and begin routing faster if it appears in natural bit
order first. Or, why we even have SRC in the header: it does not
inform routing. These are heresies.

Counterfactuals dog Historians. Some love them, some hate them. We
don't have time machines. This is the world we live in, we have to
make the best of it we can. IPv6 globally is rising, IPv6 in Asia is
rising. IPv6 in India is basically ubiquitous, IPv6 in America is
ubiquitous. We are going to live in a mixed protocol global internet
for the foreseeable future. We can plan to extend V4, or end V4, or
deprecate V4, or end V6 and favour CGN, but we can't end either V4 or
V6 entirely, easily, soon.

-G



Re: IPv6 Pain Experiment

2019-10-02 Thread Doug Barton
Another misconception. Humans (by and large) count in decimal, base 10. 
IPv4 is not that. It only LOOKS like that. In fact, the similarity to 
familiar decimal numbers is one of the reasons that people who are new 
to networking stumble early on, find CIDR challenging, etc.


I do understand that the hex thing presents a (small) learning curve. 
But work with it for a little while and it will become familiar, just 
like IPv4 did.


In fact, once you get past a few basic concepts, the network'y bits 
should be familiar to you (pun intended). CIDR works the same way, the 
only real differences there are that a /64 is your basic network unit, 
and is roughly equivalent to an IPv4 /22 in the sense that ~1,000 hosts 
per network/VLAN is pretty much your limit. The other thing to keep in 
mind is that due to the massive size of the address space, it's rarely 
useful to allocate on anything other than a nibble boundary (that is, 
divisible by 4). There are two reasons: sparse allocation, and the fact 
that reverse DNS is much easier if you keep things in that framework.
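To make the nibble boundary point concrete, here is a small, purely 
illustrative carve-up using the 2001:db8::/32 documentation prefix. 
Because each hex digit is one label in ip6.arpa, any prefix length 
divisible by 4 maps to exactly one reverse zone:

2001:db8:abcd::/48       a site allocation
                         (reverse zone: d.c.b.a.8.b.d.0.1.0.0.2.ip6.arpa)
2001:db8:abcd:1000::/52  one of 16 /52s inside it
                         (reverse zone: 1.d.c.b.a.8.b.d.0.1.0.0.2.ip6.arpa)
2001:db8:abcd:1000::/64  the first of 4,096 /64 VLANs in that /52

Split on a non-nibble boundary (say a /50) and none of those neat 
one-zone-per-block delegations exist.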


Now I do admit that the whole RA/SLAAC vs. DHCPv6 thing is more 
complicated than it should be. Some of us fought very hard for the 
concept that SLAAC should be optional, and restricted to network and 
gateway; but we lost to the "SLAAC must be the new DHCP!" crowd. Sucks 
that you have to do both, but since you're already doing DHCP for 
end-user hosts anyway, and you're already configuring the router for the 
IPv6 network info, the marginal cost isn't really that high.
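For anyone wondering what "doing both" looks like on the router side, a 
minimal sketch using radvd and the documentation prefix (the interface 
name and prefix are made up, and other platforms spell this differently): 
the RA still announces the prefix and the default gateway, while the M 
and O flags send hosts to DHCPv6 for addresses and DNS.

interface eth0
{
    AdvSendAdvert on;
    # M flag: hosts should get addresses via DHCPv6
    AdvManagedFlag on;
    # O flag: hosts should get DNS and other options via DHCPv6
    AdvOtherConfigFlag on;
    prefix 2001:db8:1:10::/64
    {
        AdvOnLink on;
        # no SLAAC for this prefix; addressing is handled by DHCPv6
        AdvAutonomous off;
    };
};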


Take a look at 
https://dougbarton.us/IPv4_and_IPv6_Network_Structure_Planning-20190519.xls 
if you're interested in learning more. I have some cheat sheets that 
will help you understand assignment strategy, sparse allocation, nibble 
boundaries, etc. It also has handy calculators that allow you to plan 
for IPv4 and IPv6 networking based on the number of different 
types/sizes of offices, data centers, etc. in each region.


Enjoy,

Doug


On 10/2/19 5:54 PM, Matt Hoppes wrote:

I disagree on that. Ipv4 is very human readable. It is numbers.

Ipv6 is not human numbers. It’s hex, which is not how we normally count.

It is all water under the bridge now, but I really feel like ipv6 could have 
been made more human friendly and ipv4 interoperable.


On Oct 2, 2019, at 8:49 PM, Doug Barton  wrote:


On 10/2/19 3:03 PM, Naslund, Steve wrote:
The next largest hurdle is trying to explain to your server guys that you are 
going to go with all dynamically assigned addressing now


Completely false, but a very common misconception. There is nothing about IPv6 
that prevents you from assigning static addresses.


and explaining to your system admin that can’t get a net mask in v4 figured 
out, how to configure their systems for IPv6.


If they only need an outbound connection, they probably don't need any 
configuration. The instructions for assigning a static address for inbound 
connections vary by OS, but I've seen a lot of them, and none of them are more 
than 10 lines long.

Regarding the previous comments about all the drama of adding DNS records, 
etc.; that is what IPAM systems are for. If you're small enough that you don't 
need an IPAM for IPv4, you almost certainly don't for IPv6.

IPv6 is different, but it's not any more difficult to learn than IPv4. (You 
weren't born understanding IPv4 either.)

Doug


Re: IPv6 Pain Experiment

2019-10-02 Thread Doug Barton

On 10/2/19 3:03 PM, Naslund, Steve wrote:
The next largest hurdle is trying to explain to your server guys that 
you are going to go with all dynamically assigned addressing now


Completely false, but a very common misconception. There is nothing 
about IPv6 that prevents you from assigning static addresses.


and 
explaining to your system admin that can’t get a net mask in v4 figured 
out, how to configure their systems for IPv6.


If they only need an outbound connection, they probably don't need any 
configuration. The instructions for assigning a static address for 
inbound connections vary by OS, but I've seen a lot of them, and none of 
them are more than 10 lines long.
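As one concrete example of how short those instructions are, here is a 
hypothetical Debian-style ifupdown stanza with documentation addresses 
(other OSes differ in syntax, not in length):

# /etc/network/interfaces fragment: static IPv6 for a server
iface eth0 inet6 static
    address 2001:db8:1:10::80
    netmask 64
    gateway 2001:db8:1:10::1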


Regarding the previous comments about all the drama of adding DNS 
records, etc.; that is what IPAM systems are for. If you're small enough 
that you don't need an IPAM for IPv4, you almost certainly don't for IPv6.


IPv6 is different, but it's not any more difficult to learn than IPv4. 
(You weren't born understanding IPv4 either.)


Doug


Re: 44/8

2019-08-31 Thread Doug Barton

On 8/27/19 8:52 PM, Owen DeLong wrote:



On Jul 26, 2019, at 21:59, Doug Barton <do...@dougbarton.us> wrote:


Responding to no one in particular, and not representing views of any 
current or former employer ...


I find all of this hullabaloo to be ... fascinating. A little 
background to frame my comments below. I was GM of the IANA in the 
early 2000's, I held a tech license from 1994 through 2004 (I gave it 
up because life changed, and I no longer had time; but I still have 
all my toys, err, I mean, gear); and I have known two of the ARDC 
board members and one of the advisors listed at 
https://www.ampr.org/amprnet/ for over fifteen years. I consider them 
all friends, and trust their judgement explicitly. One of them I've 
known for over 20 years, and consider a close and very dear friend.


There have been a number of points over the past 30 years where anyone 
who genuinely cared about this space could have used any number of 
mechanisms to raise concerns over how it's been managed, and by whom. 
I cannot help but think that some of this current sound and fury is an 
excuse to express righteous indignation for its own sake. The folks 
involved with ARDC have been caring for the space for a long time. 
From my perspective, seeing the writing on the wall regarding the 
upcoming friction around IPv4 space as an asset with monetary value 
increasing exponentially, they took quite reasonable steps to create a 
legal framework to ensure that their ability to continue managing the 
space would be protected. Some of you may remember that other groups, 
like the IETF, were taking similar steps before during and after that 
same time frame. Sure, you can complain about what was done, how it 
was done, etc.; but where were you then? Are you sure that at least 
part of your anger isn't due to the fact that all of these things have 
happened over the last 20 years, and you had no idea they were happening?




Certainly part of my anger is that I did not know some of them were 
happening.


Fair enough.


However, most of my anger is around the fact that:
1. It never in a million years would have occurred to me that these 
people who I also consider friends and also trust explicitly would take 
this particular action without significant prior (and much wider) 
consultation with the amateur radio community.

2. I believe this was done quietly and carefully orchestrated 
specifically to avoid any risk of successful backlash by the time the 
community became aware of this particular intended action.


I have actually been in this exact same position, of knowing that a 
thing is the right thing to do, but also knowing that doing it would 
create a poop-storm. I don't know if your analysis is right or not, but 
if I had been in their shoes I probably would have done the same thing.


If you want to say shame on us for trusting these people and not 
noticing the severe corporate governance problems with ARDC until they 
took this particular action, then I suppose that's a fair comment.


No, I am not attempting to shame anyone (although I admit my message was 
a bit testy). My point is simply that all of this after-the-fact 
griping, in the absence of any proven harm, is probably not as much 
about the thing as it is about self-culpability in what lead up to the 
thing. But as humans it's hard to direct that anger towards ourselves, 
so it gets directed outwardly. So, no shame, as it's a very human 
reaction. But a little more self-awareness would not be out of place.


So let's talk a little about what "stewardship" means. Many folks have 
complained about how ARDC has not done a good job of $X function that 
stewards of the space should perform. Do you think having some money 
in the bank will help contribute to their ability to do that? Has 
anyone looked at how much of the space is actually being used now, and 
what percentage reduction in available space carving out a /10 
actually represents? And nowadays when IPv6 is readily available 
essentially "for free," how much is the amateur community actually 
being affected by this?


All of those are good questions. I don’t have data to answer any of them 


So shouldn't actually looking at the space to determine if any real harm 
was done be the next step?


Doug



Re: Feasibility of using Class E space for public unicast (was re: 44/8)

2019-07-27 Thread Doug Barton

On 2019-07-26 11:01 PM, William Herrin wrote:

On Fri, Jul 26, 2019 at 10:36 PM Doug Barton <do...@dougbarton.us> wrote:
> So I'll just say this ... if you think that the advice I received 
from all of the many people I spoke to (all of whom are/were a lot 
smarter than me on this topic) was wrong, and that putting the same 
LOE into IPv6 adoption that it would have taken to make Class E usable 
was a better investment


Doug,

"Better investment?" What on earth makes you think it's a zero-sum game?


Because for all of us there are only 24 hours in a day, and the people 
who would have needed to do the work to make it happen were telling me 
that they were going to put the work into IPv6 instead, because it has a 
future. As Owen pointed out, no matter how much IPv4 space you added, 
all it would do would be delay the inevitable.


"Same level of effort?" A reasonable level of effort was adding the 
word "unicast" to the word "reserved" in the standards. Seven letters. 
A space. Maybe a comma.

I don't recall seeing your draft on that ... refresh my memory?


That would have unblocked everybody else to apply however much or 
little effort they cared to. Here we are nearly 20 years later and had 
you not fumbled that ball 240/4 might be broadly enough supported to 
usefully replace the word "reserved" with something else.


You give me way too much credit on that. I was the reed tasting the 
wind on this topic. I was not the wind. I (like every other IANA 
manager) had exactly zero authority to say, "You SHALL NOT pursue making 
Class E space usable for anything!" The opportunity existed then, and 
still exists today, for anyone to make it work.


You're right about one thing: you won't be able to convince me that 
your conclusion was rational. No matter how many smart people say a 
stupid thing, it's still a stupid thing.


So as my last word on the topic, I return you to the point above, that 
whatever the discussion was 20 years ago, there is still no workable 
solution.


If you'd like another perspective, here is a reasonably good summary:

https://packetlife.net/blog/2010/oct/14/ipv4-exhaustion-what-about-class-e-addresses/

Doug




Re: Feasibility of using Class E space for public unicast (was re: 44/8)

2019-07-26 Thread Doug Barton


On 2019-07-26 10:07 PM, William Herrin wrote:
On Fri, Jul 26, 2019 at 9:21 PM Doug Barton <do...@dougbarton.us> wrote:
> When I was running the IANA in the early 2000's we discussed this 
issue with many different experts, hardware company reps, etc. Not 
only was there a software issue that was insurmountable for all 
practical purposes (pretty much every TCP/IP stack has "Class E space 
is not unicast" built in), in the case of basically all network 
hardware, this limitation is also in the silicon. So even if it were 
possible to fix the software issue, it would not be possible to fix 
the hardware issue without replacing pretty much all the hardware.


> So the decision was made to start tooting the IPv4 runout horns in 
the hopes that folks would start taking conservation of the space 
seriously (which happened more often than not), and accelerate the 
adoption of IPv6. *cough*


Hi Doug,

That's what you wrote. Here's what I read:

"We decided keep this mile of road closed because you can't drive it 
anywhere unless the toll road operators in the next 10 miles open 
their roads too. What's that you say? Your house is a quarter mile 
down this road?** La la la I can't hear you. Look, just use the shiny 
new road we built over in the next state instead. Move your house 
there. The roads are better."


** Not every unicast use of 240/4 would require broad adoption of the 
change. Your reasoning that it does is so absurd as to merit outright 
mockery.


> So no, there were exactly zero "IPv6 loons" involved in this 
decision. :-)


No, when I said IPv6 loonies, reasoning of this character was pretty 
much what I was talking about.


So leaving aside how interesting I find the fact that you snipped the 
part of my comments that you did, the utter absurdity of your toll road 
analogy shows me that I will not be able to convince you of anything.


So I'll just say this ... if you think that the advice I received from 
all of the many people I spoke to (all of whom are/were a lot smarter 
than me on this topic) was wrong, and that putting the same LOE into 
IPv6 adoption that it would have taken to make Class E usable was a 
better investment because we're not running out of IPv6 any time soon, 
then you have a golden opportunity. Go forth and prove me wrong. Go 
rally support from all of the people and companies that you need in 
order to make any part of Class E usable for any purpose (even, as you 
point out, if it's not for global unicast). If you're right, and I'm 
wrong, your income potential is essentially limitless.


Or, look at it from another perspective. If you're right, then why, in 
the last almost 15 years, has no one figured out how to do it yet? 
Including the companies whose mission is to sell us new hardware, and 
force us into contracts for software upgrades in order to keep said 
hardware on the 'net?


It's easy to sit back in the cheap seats and squawk about how "they" are 
out to get you. I'd be far more impressed if you put your money (or 
time, or effort) where your mouth is.


Doug




Re: 44/8

2019-07-26 Thread Doug Barton
Responding to no one in particular, and not representing views of any 
current or former employer ...


I find all of this hullabaloo to be ... fascinating. A little background 
to frame my comments below. I was GM of the IANA in the early 2000's, I 
held a tech license from 1994 through 2004 (I gave it up because life 
changed, and I no longer had time; but I still have all my toys, err, I 
mean, gear); and I have known two of the ARDC board members and one of 
the advisors listed at https://www.ampr.org/amprnet/ for over fifteen 
years. I consider them all friends, and trust their judgement 
explicitly. One of them I've known for over 20 years, and consider a 
close and very dear friend.


There have been a number of points over the past 30 years where anyone 
who genuinely cared about this space could have used any number of 
mechanisms to raise concerns over how it's been managed, and by whom. I 
cannot help but think that some of this current sound and fury is an 
excuse to express righteous indignation for its own sake. The folks 
involved with ARDC have been caring for the space for a long time. From 
my perspective, seeing the writing on the wall regarding the upcoming 
friction around IPv4 space as an asset with monetary value increasing 
exponentially, they took quite reasonable steps to create a legal 
framework to ensure that their ability to continue managing the space 
would be protected. Some of you may remember that other groups, like the 
IETF, were taking similar steps before during and after that same time 
frame. Sure, you can complain about what was done, how it was done, 
etc.; but where were you then? Are you sure that at least part of your 
anger isn't due to the fact that all of these things have happened over 
the last 20 years, and you had no idea they were happening?


So let's talk a little about what "stewardship" means. Many folks have 
complained about how ARDC has not done a good job of $X function that 
stewards of the space should perform. Do you think having some money in 
the bank will help contribute to their ability to do that? Has anyone 
looked at how much of the space is actually being used now, and what 
percentage reduction in available space carving out a /10 actually 
represents? And nowadays when IPv6 is readily available essentially "for 
free," how much is the amateur community actually being affected by this?


And with all due respect to Jon (and I mean that sincerely), what did 
it/does it really mean that "Jon gave $PERSON the space for $REASON" 30 
years later? Jon was a brilliant guy, but from what I've been told would 
also be one of the first to admit when he made a mistake. One of which, 
and one that he actively campaigned to fix, was the idea of classful 
address space to start with, and particularly the idea that it was OK to 
hand out massive chunks of it to anyone who asked. As a former ham I 
definitely appreciate the concept of them having space to play ... errr, 
experiment with. But did they ever, /really, /need a /8? Historically, 
what percentage of that space has ever actually been used? And as Dave 
Conrad pointed out, given all of the "historical" allocations that have 
been revisited and/or repurposed already, is taking another look at 44/8 
really that far out of line?


Now all that said, if any of my friends had asked me how I thought news 
of this sale should have been handled, I would have told them that this 
reaction that we're seeing now is 100% predictable, and while it could 
never be eliminated entirely it could be limited in scope and ferocity 
by getting ahead of the message. At minimum when the transfer occurred. 
But that doesn't change anything about my opinion that the sale itself 
was totally reasonable, done by reasonable people, and in keeping with 
the concept of being good stewards of the space.


hope this helps,

Doug




Feasibility of using Class E space for public unicast (was re: 44/8)

2019-07-26 Thread Doug Barton

On 2019-07-22 6:09 PM, Owen DeLong wrote:

On Jul 22, 2019, at 12:15, Naslund, Steve  wrote:


I think the Class E block has been covered before.  There were two 
reasons to not re-allocate it.
1. A lot of existing code base does not know how to handle those 
addresses and may refuse to route them or will otherwise mishandle them.
2. It was decided that squeezing every bit of space out of the v4 
allocations only served to delay the desired v6 deployment.



Close, but there is a subtle error…

2. It was decided that the effort to modify each and every IP stack in 
order to facilitate use of this relatively small block (16 /8s being 
evaluated against a global run rate at the time of roughly 2.5 /8s per 
month, mostly to RIPE and APNIC) vs. putting that same effort into 
modifying each and every IP stack to support IPv6 was an equation of 
very small benefit for slightly smaller cost. (Less than 8 additional 
months of IPv4 free pool vs. hopefully making IPv6 deployable before 
IPv4 ran out.)


All of this, plus what Fred Baker said upthread.

When I was running the IANA in the early 2000's we discussed this issue 
with many different experts, hardware company reps, etc. Not only was 
there a software issue that was insurmountable for all practical 
purposes (pretty much every TCP/IP stack has "Class E space is not 
unicast" built in), in the case of basically all network hardware, this 
limitation is also in the silicon. So even if it were possible to fix 
the software issue, it would not be possible to fix the hardware issue 
without replacing pretty much all the hardware.


... and even if some magical forces appeared and gave every open source 
software project, and every company, and every consumer in the developed 
world the means and opportunity to do all of the above; doing so would 
have left the developing world out in the cold, since in many cases 
there is reliance on older, second/third/fourth hand equipment that they 
could never afford to replace.


So the decision was made to start tooting the IPv4 runout horns in the 
hopes that folks would start taking conservation of the space seriously 
(which happened more often than not), and accelerate the adoption of 
IPv6. *cough*


Personally, I also pushed to bring IPv6 support from ICANN up to par, 
including going the last mile on putting the IPv6 addresses of the root 
servers in the zone, creating and socializing a long-term plan for 
allocating to the RIRs, etc.


So no, there were exactly zero "IPv6 loons" involved in this decision. :-)

Doug




Re: 44/8

2019-07-26 Thread Doug Barton

On 2019-07-23 10:43 AM, William Herrin wrote:

On Tue, Jul 23, 2019 at 7:32 AM Naslund, Steve  wrote:


In defense of John and ARIN, if you did not recognize that ARDC
represented an authority for this resource, who would be?


The American Radio Relay League (ARRL) is THE organization which 
represents Hams in regulatory matters in the U.S. and is well known to 
Hams worldwide.


You don't have to look very far. Just ask any ham.


The Internet, and amateur radio, both transcend the US.



Re: any interesting/useful resources available to IPv6 only?

2019-05-05 Thread Doug Barton

On 5/3/19 1:33 PM, Mohammad Khalil wrote:

Hello all
I have prepared something in the past you might find useful (hopefully).


First, it's considered rude to send attachments of any size to a mailing 
list, never mind one that's almost 2 megs in size. Much better to put it 
on a web site somewhere and send a URL.


Second, I normally wouldn't respond to something like this, except that 
there are so many errors and bad ideas in your document that I felt 
compelled to respond lest someone find it in the archives and rely on 
it. I will assume that your intentions were good here, however your 
results are dangerous, in the sense that someone reading your document 
would be worse off than if they had not read it.


Taking one tidbit from one of your early paragraphs, "The IPv6 protocol 
creates a 128-bit address, four times the size of the 32-bit IPv4 
standard." There is, sort of, a sense in which you could say that the 
addresses themselves are four times the size, but it creates a dangerous 
impression that the total address space of IPv6 is only four times the 
size of IPv4; and it's the address space that is the thing actually 
worth talking about.


Many of your other errors also involve math, which suggests a lack of 
understanding of basic networking concepts, binary math, etc. For 
example, "With 264 available addresses per segment, it is highly 
unlikely to see prefix lengths shorter than /64 for segments that host 
end systems." A /64 segment in IPv6 has 2^64 addresses, or the entire IPv4 
address range, squared. Maybe you meant to say 2^64 and forgot the 
exponent indicator? Given that you correctly identify exponents in other 
sections, it's hard to tell.


The document is also out of date in regards to the latest protocol 
changes, deprecations, etc.; and further out of date in regards to how 
operators are actually implementing IPv6.


Again, sorry to pile on ...

If anyone is looking for a pretty good introduction to the basics of 
IPv6 the Wikipedia article is a good start.


hope this helps,

Doug


Re: any interesting/useful resources available to IPv6 only?

2019-05-03 Thread Doug Barton

On 5/3/19 8:14 AM, Brian J. Murrell wrote:

Hi,

I am trying to make a case (to old fuddy-duddies, which is why I even
need to actually make a case) for IPv6 for my own selfish reasons.  :-)

I wonder if anyone has any references to interesting/useful/otherwise
resources that are only available to IPv6 users that they can forward to
me.


This type of marketing approach was pursued doggedly for many of the 
early years of IPv6 rollout. It was as misguided then as it was 
ineffective.


If you have plenty of IPv4 space, you have no case for IPv6. (And I say 
that as one of the most enthusiastic proponents of it.) OTOH, if you 
are/might/will be approach(ing) any kind of IPv4 capacity limitation, 
then you want to start deploying IPv6 ASAP.


The other case that makes business sense is a content provider with a 
lot of traffic. You can get different, and often better, peering 
relationships over IPv6; and there are a lot of eyeball networks, 
especially mobile providers, who are using it natively nowadays.


hope this helps,

Doug


Re: Comcast storing WiFi passwords in cleartext?

2019-04-25 Thread Doug Barton

On 4/25/19 8:04 AM, K. Scott Helms wrote:
Just so you know, if you have an embedded router from a service provider 
all of that data is _already_ being transmitted and has been for a long 
long time.


Responding to a pseudo-random message ...

If you are an average consumer and purchase a managed solution (in this 
case a WAP that comes as part of your package) I think it's perfectly 
reasonable for the vendor to manage it accordingly, even if said 
consumer doesn't fully understand the implications of that decision.


In my mind, the problem here is not that the vendor has access to this 
data, it's that they are STORING it in the first place, and storing it 
in the clear to boot. In the hypothetical service call that we've 
speculated is the driver for this, the extra 15 or 20 seconds that it 
would take to pull the data via SNMP is in the noise.


There are two mindsets that desperately need changing in the tech world:

1. Do not store data that you don't have a legitimate requirement to store
2. Do not store anything even remotely sensitive in the clear

We live in a world of all breaches, all of the time. So we need to start 
thinking not in terms of just protecting said data from the outside, but 
rather in terms of limiting the attack surface to start with, and 
protecting the data at rest. So that WHEN there is a breach, whether 
from within or without, the damage will be minimal.


As many have pointed out, this information is freely available via SNMP, 
so it's a classic example of something that didn't need to be stored in 
the first place.


Doug


Re: DNS Flag Day, Friday, Feb 1st, 2019

2019-01-31 Thread Doug Barton

On 2019-01-31 08:32, James Stahr wrote:


I think the advertised testing tool may be flawed as blocking TCP/53
is enough to receive a STOP from the dnsflagday web site.  It's been
my (possibly flawed) understanding that TCP/53 is an option for
clients but primarily it is a mechanism for the *server* to request
the client communicate using TCP/53 instead - this could be due to UDP
response size, anti-spoofing controls, etc...


That understanding is flawed, but understandable given how deeply 
ingrained this misinformation has become in the zeitgeist. Sections 4.2 
and 4.2.2 of 1035 clearly state that TCP is an expected channel, not 
optional.


This is relevant operationally for two reasons. First, while most folks 
make an effort to keep answers under 512 bytes for response time 
reasons, you cannot guarantee that someone, somewhere in your org won't 
add a record that overflows. Also, you are guaranteed to overflow at 
some point once you roll out DNSSEC. I've even seen seemingly mundane 
things like SRV records overflow 512 bytes.
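If you want to see the effect for yourself, a quick and purely 
illustrative set of checks (example.com is just a placeholder for a zone 
you care about) is to ask for something large, force the same query over 
TCP, and then cap the UDP buffer to watch for truncation:

% dig example.com DNSKEY +dnssec
% dig example.com DNSKEY +dnssec +tcp
% dig example.com DNSKEY +dnssec +ignore +bufsize=512

The +ignore option tells dig not to retry over TCP, so a "tc" in the 
flags line of the last response is your sign that the answer no longer 
fits in a small UDP packet.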


The other reason it's relevant operationally is that there are more and 
more experimental projects in development now that involve using TCP, 
even without the need for truncation, as a way to have more assurance 
that a response is not spoofed. There are also efforts under way to 
evaluate whether "pipelining" DNS requests to servers that you are 
already sending a lot of requests to is ultimately more efficient than 
UDP at high volumes.


So, like lack of EDNS compliance, lack of TCP "compliance" here is going 
to be a limiting factor for the growth of new features, at minimum; and 
could result in actual breakage.


hope this helps,

Doug


Re: Ticketmaster?

2017-12-03 Thread Doug Barton

On 12/02/2017 02:39 PM, Ryan Gard wrote:

*Oh, you must be sharing your IP with everyone else in your area*


CGNAT by any chance?



Re: Last Week's Canadian Fiber Cut

2017-08-18 Thread Doug Barton
Does this sound like a dry run to anyone else? Or did I forget to take 
my anti-paranoia pills today?



On 08/15/2017 06:05 PM, David Charlebois wrote:

Just read this on http://www.ctvnews.ca/business/bell-aliant-says-
double-cable-cut-that-led-to-cell-outages-was-perfect-storm-1.3547018

"Bell spokesman Nathan Gibson says the first cut was by a highway
construction crew near Drummondville, Que.

He says service wasn't impacted in any significant way because of
redundancy in the network until a second major cut near Richibucto, N.B.,
by a logging company in a densely forested location.

He says the second cut was difficult to access and took some time to locate
precisely, and the site's inaccessibility slowed the arrival of heavy
equipment and repair crews."


Re: loc.gov

2017-07-08 Thread Doug Barton

Isn't that a problem that suggests its own solution?


On 7/8/2017 1:43 PM, Joly MacFie wrote:

(sorry I'm not on the outage list)


Re: [NOC] ARIN contact needed: something bad happens with legacy IPv4 block's reverse delegations

2017-03-19 Thread Doug Barton

On 03/18/2017 10:53 PM, John Curran wrote:

On 19 Mar 2017, at 12:50 AM, Doug Barton <do...@dougbarton.us> wrote:

...
Meanwhile, my offer to help y'all fix your DNS was a sincere one. Feel
free to hit me up off list.


Doug -

  You’d want to make that offer to the RIPE NCC


My offer was in response to your assertion that normal DNS techniques of 
delegation were not sufficient for the unique problems ARIN has to deal 
with in regards to the address space you manage delegations for.


Subsequent to our conversation however, Shane Kerr was kind enough to 
explain the problem that the "zonelets" are designed to solve:


https://www.ripe.net/ripe/mail/archives/dns-wg/2017-March/003406.html


Short version, ARIN maintains foo/8, but bar/16 within it is managed by 
RIPE, who wants to delegate it directly to the registered party for that 
block. They use a zonelet to tell ARIN how to do that.


As you have indicated that ARIN will not make any changes to its 
existing practices without specific instructions from RIPE, I will offer 
my suggestions to them instead.  :)


best,

Doug


Re: [NOC] ARIN contact needed: something bad happens with legacy IPv4 block's reverse delegations

2017-03-18 Thread Doug Barton

On 03/18/2017 09:40 PM, John Curran wrote:

On 19 Mar 2017, at 12:27 AM, Doug Barton <do...@dougbarton.us> wrote:

...

Despite the associated risk, we are happy to install such checks if
RIPE requests them, but at this time are processing them as we
agreed to do so – which is whenever we receive correctly formatted
and properly signed requests from them. (You should inquire to RIPE
for more detail regarding their future intentions in this regard.)


Already did, thanks. :)  Meanwhile, one could make a legitimate argument that 
even absent specific guidance from RIPE, ARIN should have a sufficient level of 
concern for the health of the larger Internet to consider unilateral action 
here. At least in the form of delaying implementation until some human 
coordination takes place.


We’ll process RIPE’s requests in whatever manner they direct, as it is their
customers that are affected by whatever decision they make in this regard.


I'll let others chime in on whether they think that's a reasonable 
response. I've already said my piece.


Meanwhile, my offer to help y'all fix your DNS was a sincere one. Feel 
free to hit me up off list.


Doug


Re: [NOC] ARIN contact needed: something bad happens with legacy IPv4 block's reverse delegations

2017-03-18 Thread Doug Barton

Thanks for the response, John. Some thoughts below.

On 03/18/2017 08:58 PM, John Curran wrote:

On 18 Mar 2017, at 9:58 PM, Doug Barton <do...@dougbarton.us> wrote:


My eyebrows reacted to this the same way Bill's did. It sounds
like this is at least a semi-automated system. Such things should
have sanity checks on the receiving side when told to remove large
gobs of data, even if the instructions validate correctly.

More fundamentally, according to the RIPE report they are sending
you something called "zonelets" which you then process into actual
DNS data. Can you say something about the relative merit of this
system, vs. simply delegating the right zones to the right parties
and letting the DNS do what it was intended to do?

At minimum the fact that this automated system was allowed to wipe
out great chunks of important data calls it into question. And
sure, you can all 3 fix the bugs you found this time around, but up
until these bugs were triggered you all thought the system was
functioning perfectly, in spite of it ending up doing something
that obviously was not intended.


Doug -

We could indeed decide to ignore correctly formatted and signed
information if it doesn’t match some heuristics that we put in place
(e.g. empty zone, zone with only 1 entry, zone that changes more than
10% in size, etc.)


I have used the latter type of system with good results, for what it's 
worth. And funny you should mention 10%, as that's what I've found to be 
fairly commonly at least a yellow flag, if not a big red one.
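Purely as an illustration of the kind of check being discussed (file 
names are made up, and counting raw lines is crude; a real deployment 
would compare parsed RRsets):

# refuse to publish an incoming zone that is more than 10% smaller
# than the copy currently being served; hold it for human review
old=$(grep -vc '^;' current.db)
new=$(grep -vc '^;' incoming.db)
if [ "$new" -lt $(( old * 90 / 100 )) ]; then
    echo "incoming zone shrank by more than 10%; holding for review" >&2
    exit 1
fi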



Some downsides with this approach are that: 1) we'd be
establishing heuristics for data that originates with a different
organization, absent knowledge of their business changes,


If you're not already having ongoing discussions about said changes well 
in advance, your system is broken.



and 2)
this would mean that there could be occasions where proper data
cannot be installed without manual intervention (because the changes
happen to be outside of whatever heuristics have previously been
put in place.)


See above. Also, not putting in place *new* changes on a scale 
sufficient to trip the heuristics is far superior to automatically 
putting in place changes that take huge chunks of address space off the 
network. Or am I missing something?


And if you're having regular conversations with your "customers" in this 
scenario about upcoming major changes, tripping the alarm should only 
happen if someone, somewhere, made a mistake. Thus, human intervention 
is required regardless.


But, see below.


Despite the associated risk, we are happy to install such checks if
RIPE requests them, but at this time are processing them as we
agreed to do so – which is whenever we receive correctly formatted
and properly signed requests from them. (You should inquire to RIPE
for more detail regarding their future intentions in this regard.)


Already did, thanks. :)  Meanwhile, one could make a legitimate argument 
that even absent specific guidance from RIPE, ARIN should have a 
sufficient level of concern for the health of the larger Internet to 
consider unilateral action here. At least in the form of delaying 
implementation until some human coordination takes place.


Personally, I don't buy, "They told us to do it!" as a legitimate 
response here.



As to why DNS-native zone operations are not utilized, the challenge
is that reverse DNS zones for IPv4 and DNS operations are on octet
boundaries, but IPv4 address blocks may be aligned on any bit
boundary.


Yes, deeply familiar with that problem. Are you dealing with any address 
blocks smaller than a /24? If the answer is no (which it almost 
certainly is), what challenges are you facing that you haven't figured 
out how to overcome yet? (Even < /24 blocks can be dealt with, 
obviously, but I'd be interested to learn that there are problems with 
/24 and up that are too difficult to solve.)
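For readers who haven't run into it, the standard answer for blocks 
smaller than a /24 is RFC 2317 style classless delegation. A purely 
illustrative example for a /26 out of 192.0.2.0/24 (all names are made 
up):

; in the parent zone 2.0.192.in-addr.arpa
0/26    IN    NS      ns1.customer.example.
1       IN    CNAME   1.0/26.2.0.192.in-addr.arpa.
2       IN    CNAME   2.0/26.2.0.192.in-addr.arpa.
; ... one CNAME per address in the /26 ...

; the customer then serves 0/26.2.0.192.in-addr.arpa and fills it with
; ordinary PTR records:
1       IN    PTR     host1.customer.example.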


Personally, I would be happy to donate my time to help y'all sort this 
out, and I'm sure there are others who would also be willing to help.


Doug


Re: [NOC] ARIN contact needed: something bad happens with legacy IPv4 block's reverse delegations

2017-03-18 Thread Doug Barton

On 03/17/2017 10:42 AM, Mark Kosters wrote:

On 3/17/17, 12:26 PM, "NANOG on behalf of William Herrin"
 wrote:

On Fri, Mar 17, 2017 at 7:52 AM, Romeo Zwart 
wrote:

RIPE NCC have issued a statement about the issue here:

https://www.ripe.net/ripe/mail/archives/dns-wg/2017-March/003394.html







Our apologies for the inconvenience caused.


Hmm. That sounds like an ARIN-side bug too. ARIN's code responded to
corrupted data by zeroing out the data instead of using the last
known good data. That's awfully brittle for such a critical service.

Regards, Bill Herrin


Hi Bill,

The analysis was not yet complete when the notice went out from RIPE.
After doing a post-mortum, there were no bugs in ARIN’s software in
regards to this issue. We followed exactly what RIPE told us to do.
When we noticed an issue with RIPE’s updates yesterday, we notified
them as well.


My eyebrows reacted to this the same way Bill's did. It sounds like this 
is at least a semi-automated system. Such things should have sanity 
checks on the receiving side when told to remove large gobs of data, 
even if the instructions validate correctly.


More fundamentally, according to the RIPE report they are sending you 
something called "zonelets" which you then process into actual DNS data. 
Can you say something about the relative merit of this system, vs. 
simply delegating the right zones to the right parties and letting the 
DNS do what it was intended to do?


At minimum the fact that this automated system was allowed to wipe out 
great chunks of important data calls it into question. And sure, you can 
all 3 fix the bugs you found this time around, but up until these bugs 
were triggered you all thought the system was functioning perfectly, in 
spite of it ending up doing something that obviously was not intended.


Doug


Re: IANA IPv4 Recovered Address Space registry updated

2017-03-04 Thread Doug Barton

Paula,

Thank you for this update. Is there a convenient resource for viewing 
the delta?


Doug

On 03/01/2017 12:15 PM, Paula Wang wrote:

Hi,



An update has been made to the IANA IPv4 Recovered Address Space registry 
according to the Global Policy for Post Exhaustion IPv4 Allocation Mechanisms 
by the IANA 
(https://www.icann.org/resources/pages/allocation-ipv4-post-exhaustion-2012-05-08-en).



The list of allocations can be found at: 
https://www.iana.org/assignments/ipv4-recovered-address-space/



Kind regards,



Paula Wang

IANA Services Specialist

PTI





Re: Wanted: volunteers with bandwidth/storage to help save climate data

2016-12-23 Thread Doug Barton

On 12/21/2016 06:15 PM, Royce Williams wrote:

On Wed, Dec 21, 2016 at 3:49 PM, Ken Chase <m...@sizone.org> wrote:

On Wed, Dec 21, 2016 at 04:41:29PM -0800, Doug Barton said:
 [..]
  >>Everyone has a line at which "I don't care what's in the pipes, I just
  >>work here" changes into something more actionable.
  >
  >Stretched far beyond any credibility. Your argument boils down to, "If it's
  >a political thing that *I* like, it's on topic."


I can see why you've concluded that. My final phrasing was indeed
ambiguous. I would have hoped that the rest of my carefully
non-partisan post would have offset that ambiguity.


There was no ambiguity, your argument was clear. I simply think you were 
wrong. :)



"If it's a politically-generated thing I'll have to deal with at an
operational level, it's on topic."

That work?


That is indeed what I was trying to say - thanks, Ken.


Again, hard to see how the OP asking for assistance with his pet project 
fits any definition of "have to deal with at an operational level."


But now I'm repeating myself, so I'll leave it at that.

Doug




Re: Wanted: volunteers with bandwidth/storage to help save climate data

2016-12-21 Thread Doug Barton

On 12/20/2016 8:08 AM, Royce Williams wrote:

On Sat, Dec 17, 2016 at 6:15 PM, Doug Barton <do...@dougbarton.us> wrote:

On 12/16/2016 1:48 PM, Hugo Slabbert wrote:


This started as a technical appeal, but:

https://www.nanog.org/list

1. Discussion will focus on Internet operational and technical issues as
described in the charter of NANOG.


Hard to see how the OP has anything to do with either of the above.


Actually, it's not that hard ... *if* we can control ourselves from
making them partisan, and focus instead on the operational aspects.
(Admittedly, that's pretty hard!)

The OP's query was a logical combination of two concepts:

- First, from the charter (emphasis mine): "NANOG provides a forum
where people from the network research community, the network operator
community and the network vendor community can come together *to
identify and solve the problems that arise in operating and growing
the Internet*."

- Second, from John Gilmore: "The Net interprets censorship as damage
and routes around it."


[snip]


Everyone has a line at which "I don't care what's in the pipes, I just
work here" changes into something more actionable.


Stretched far beyond any credibility. Your argument boils down to, "If 
it's a political thing that *I* like, it's on topic."




Re: Wanted: volunteers with bandwidth/storage to help save climate data

2016-12-17 Thread Doug Barton

On 12/16/2016 1:48 PM, Hugo Slabbert wrote:

This started as a technical appeal, but:

https://www.nanog.org/list

1. Discussion will focus on Internet operational and technical issues as
described in the charter of NANOG.


Hard to see how the OP has anything to do with either of the above.



Re: Spitballing IoT Security

2016-10-30 Thread Doug Barton

On 10/29/2016 05:32 PM, Ronald F. Guilmette wrote:

you don't need
to be either an ominous "state actor" or even SPECTER to assemble a
truly massive packet weapon.


Please, it's SPECTRE ... show some respect


Re: Domain renawals

2016-09-22 Thread Doug Barton

On 09/21/2016 01:44 PM, Richard Holbo wrote:

FWIW, as I'm in the middle of this right now. It would appear that many of
the less expensive registrars no longer support glue records in any
meaningful way.  They all expect you to host DNS with them. So might want
to check on that before buying the cheapest and hosting your own DNS.


What do you think glue records are, and why do you think you need them? 
:)  (Those are serious questions, btw)
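(For anyone following along at home: glue is the A/AAAA records a parent 
zone publishes for name servers that live inside the zone being 
delegated, so resolvers can escape the chicken-and-egg problem. In a 
referral it shows up in the additional section, roughly like this, with 
all names below invented for illustration:

;; AUTHORITY SECTION:
customer-zone.example.        86400  IN  NS  ns1.customer-zone.example.

;; ADDITIONAL SECTION:
ns1.customer-zone.example.    86400  IN  A   192.0.2.53

If your name servers live under somebody else's domain, there is no glue 
involved at all, which is why the question above matters.)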


Doug



Israeli Online Attack Service ‘vDOS’ Earned $600,000 in Two Years

2016-09-09 Thread Doug Barton
vDOS — a “booter” service that has earned in excess of $600,000 over the 
past two years helping customers coordinate more than 150,000 so-called 
distributed denial-of-service (DDoS) attacks designed to knock Web sites 
offline — has been massively hacked, spilling secrets about tens of 
thousands of paying customers and their targets.


http://krebsonsecurity.com/2016/09/israeli-online-attack-service-vdos-earned-60-in-two-years/


Re: number of characters in a domain?

2016-07-23 Thread Doug Barton

On 07/23/2016 07:07 AM, Matthew Pounsett wrote:

On 23 July 2016 at 14:31, Ryan Finnesey  wrote:


I was hoping someone can help me confirm my research.  I am correct that
domains are now limited to 67 characters in length including the extension?



63 octets per label (the bits between the period separators), 255 octets
per domain name. In ASCII, octets and characters are interchangeable, but as
Stephan noted, it gets slightly more complicated for IDN.


Not really ... the 1035 limits apply to the A-label



Re: Cogent - Google - HE Fun

2016-03-13 Thread Doug Barton

s/IPv6/Cogent/  :)

No one who is serious about IPv6 is single-homed to Cogent. Arguably, no 
one who is serious about "The Internet" is single-homed on either protocol.


Thus your conclusion seems to be more like wishful thinking. :)

Doug


On 03/13/2016 11:20 AM, Matthew Kaufman wrote:

I come to the opposite conclusion - that this situation can persist with 
apparently no business impact to either party shows that IPv6 is still 
unnecessary.

Matthew Kaufman

(Sent from my iPhone)


On Mar 13, 2016, at 7:31 AM, Dennis Burgess  wrote:

In the end, google has made a choice. I think these kinds of choices will delay 
IPv6 adoption.

-Original Message-
From: Damien Burke [mailto:dam...@supremebytes.com]
Sent: Friday, March 11, 2016 2:51 PM
To: Mark Tinka ; Owen DeLong ; Dennis Burgess 

Cc: North American Network Operators' Group 
Subject: RE: Cogent - Google - HE Fun

Just received an updated statement from cogent support:

"We appreciate your concerns. This is a known issue that originates with Google 
as it is up to their discretion as to how they announce routes to us v4 or v6.

Once again, apologies for any inconvenience."

And:

"The SLA does not cover route transit beyond our network. We cannot route to IPs 
that are not announced to us by the IP owner, directly or through a network peer."





Re: .pro whois registry down?

2016-03-09 Thread Doug Barton

On 03/09/2016 04:54 PM, Mark Andrews wrote:

Additionally we should be publishing where the whois server for the
tld is in the DNS.  whois applications could be looking for this
then falling back to other methods.

e.g.

_whois._tcp.pro. srv 0 100 43 whois.afilias.net.

If we want machines to follow referrals we have to provide them in
appropriate forms.


Brilliant, wish I'd thought of it :)

Doug



Re: FW: [tld-admin-poc] Fwd: Re: .pro whois registry down?

2016-03-09 Thread Doug Barton

Joseph,

Thanks for the update. However the current state of things is not good 
... My Ubuntu host tries to use whois.dotproregistry.net, which has no 
address records. FreeBSD by default uses pro.whois-servers.net, which 
resolves to whois.registrypro.pro (which has an A record), but never 
returns with any data (arguably worse than failing immediately with an 
obvious error).


If it were me, I would have done the following:

1. Reach out to the OS vendors and the folks at whois-servers.net with 
information that the proper host name for your whois service is 
changing. Include a drop-dead date of 3 years in the future for the old 
names to stop working.


2. Place a CNAME at the two (or more?) old host names so that the 
service will continue to work in the meantime.


The CNAME costs you nothing, and while I agree that it should be able to 
be removed at some point in the future, having things not work at all in 
the short term is not the right approach.
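To be clear about how cheap step 2 is, it amounts to one record in each 
legacy zone, something along these lines (illustrative only; the TTL is 
whatever the operator already uses):

whois.dotproregistry.net.    86400    IN    CNAME    whois.afilias.net.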


It's also not realistic to expect folks to be able to chase this down on 
their own ... anyone familiar with using whois on the command line has 
most assuredly grown accustomed to the convenience of having it "just 
work," as it has for the last decade or so. While people certainly *can* 
go back to the "good old days" of having to hunt down each registry's 
whois server individually, it's hard to think of that as the best approach.


Is there some reason that the above can't be/hasn't been done that I'm 
missing?


Doug


On 03/09/2016 02:17 PM, Joseph Yee wrote:

Hi Doug,

Afilias had updated .PRO whois host in Jan 2016, and we filed the record
to ICANN & IANA (http://www.iana.org/domains/root/db/pro.html).

The new host is 'whois.afilias.net' and not 'whois.dotproregistry.net' anymore.

Some operating systems may not update their whois configuration yet.
You may need to check and update the configuration manually for the PRO
WHOIS server until an official patch is available.

Best Regards,
Joseph Yee
Afilias

On Wed, Mar 9, 2016 at 4:56 PM, Michael Flanagan <mflana...@afilias.info> wrote:

-Original Message-
From: Doug Barton [mailto:do...@dougbarton.us]
Sent: Wednesday, March 09, 2016 4:54 PM
To: tld-admin-...@afilias.info; tld-tech-...@afilias.info
Subject: [tld-admin-poc] Fwd: Re: .pro whois registry down?

FYI

 Forwarded Message 
Subject: Re: .pro whois registry down?
Date: Wed, 9 Mar 2016 13:51:28 -0800
From: Doug Barton <do...@dougbarton.us>
To: Bryan Holloway <bhollo...@pavlovmedia.com>, NANOG list <nanog@nanog.org>

On 03/09/2016 01:24 PM, Bryan Holloway wrote:
 > Anyone else noticing that the .pro TLD is failing for some
things, and
 > their WHOIS registry appears to be unavailable?

The delegation from the root to PRO, and the PRO name servers
themselves,
seem to be working.

 > I appear to be able to resolve, but whois times out, and we're
getting
 > reports that mail isn't going through for some folks with this TLD.

The address records for whois.dotproregistry.net are missing.

Doug






Re: .pro whois registry down?

2016-03-09 Thread Doug Barton

On 03/09/2016 01:24 PM, Bryan Holloway wrote:

Anyone else noticing that the .pro TLD is failing for some things, and their 
WHOIS registry appears to be unavailable?


The delegation from the root to PRO, and the PRO name servers 
themselves, seem to be working.



I appear to be able to resolve, but whois times out, and we're getting reports 
that mail isn't going through for some folks with this TLD.


The address records for whois.dotproregistry.net are missing.

Doug



Re: The IPv6 Travesty that is Cogent's refusal to peer Hurricane Electric - and how to solve it

2016-01-23 Thread Doug Barton

On 01/23/2016 02:43 AM, Tore Anderson wrote:

William,


Don't get me wrong. You can cure this fraud without going to extremes.
An open peering policy doesn't require you to buy hardware for the
other guy's convenience. Let him reimburse you or procure the hardware
you spec out if he wants to peer. Nor do you have to extend your
network to a location convenient for the other guy. Pick neutral
locations where you're willing to peer and let the other guy build to
them or pay you to build from there to him. Nor does an open peering
policy require you to give the other guy a free ride on your
international backbone: you can swap packets for just the regions of
your network in which he's willing to establish a connection. But not
ratios and traffic minimums -- those are not egalitarian, they're
designed only to exclude the powerless.

Taken in this context, the Cogent/HE IPv6 peering spat is very simple:
Cogent is -the- bad actor. 100%.


I'm curious: How do you know that Cogent didn't offer to peer under
terms such as the ones you mention, but that those were refused by HE?


Because Cogent has repeatedly stated that they refuse to peer, period?

Doug



Re: ICYMI: FBI looking into LA fiber cuts, Super Bowl

2016-01-19 Thread Doug Barton

On 01/19/2016 12:37 PM, Bacon Zombie wrote:

Am I the only one who thinks the below line is BS?

  "...pose a risk of injury to event-goers if an operator loses control."

If there are no safeguards in place for "normal" network issues then
we would have heard of injuries before.


I think that line refers to drone operators ...


Re: de-peering for security sake

2016-01-17 Thread Doug Barton

On 1/17/2016 12:44 PM, b...@theworld.com wrote:

We need an effective forum with effective participation perhaps
eventually leading to signed contractual obligations agreed to by all
parties.


Not gonna help. The same people who have no incentive to do the right 
thing now will still have no incentive to join the group you propose.


I've said it before, and it's an unpopular option, but the only way that 
this will change is to make it more expensive to do the wrong thing than 
it is to do the right thing. That means lawsuits filed by companies that 
have been harmed as a result of those that are not doing the right 
thing. That will produce the incentives which will be recognized and 
understood by all layers of management, and result in real action for 
the better.


As nice as it would be if everyone were to do the right thing because 
it's the right thing, we already have ample evidence that won't happen. 
Time to stop pretending otherwise.


Doug



Real Customer Choice For T-Mobile’s Binge On Requires Transparency, Opt-In

2016-01-16 Thread Doug Barton
If you’ve been paying attention, you probably noticed the recent 
headlines about T-Mobile CEO John Legere and his anti-EFF mini-rant on 
Twitter. Legere was responding to a question we had asked about 
T-Mobile’s Binge On service: “Does Binge On alter the video stream in 
any way, or just limit its bandwidth?” But it apparently made him angry 
enough to drop an f-bomb on us.


http://techcrunch.com/2016/01/16/real-customer-choice-for-t-mobiles-binge-on-requires-transparency-opt-in/


Re: Binge On! - get your umbrellas out, stuff's hitting the fan.

2016-01-11 Thread Doug Barton

T-Mobile CEO Apologizes For “Offending” EFF And Its Supporters

After an aggressive response to his company, T-Mobile, being called out 
for being anti-Net Neutrality on its new “Binge On” product by the EFF, 
CEO John Legere has backtracked a bit. In case you missed it, he 
flippantly asked "Who the fuck is the EFF?" during a Twitter Q&A last week.


http://techcrunch.com/2016/01/11/t-mobile-ceo-apologizes-for-offending-eff-and-its-supporters/


Re: Nat

2016-01-07 Thread Doug Barton

On 12/19/2015 07:17 AM, Sander Steffann wrote:

Hi Jeff,


It's far past time to worry about architectural purity.  We need people
deploying IPv6 *NOW*, and it needs to be the job of the IETF, at this
point, to fix the problems that are causing people not to deploy.


I partially agree with you. If people have learned how IPv6 works, deployed IPv6 (even if 
just in a lab) and came to the conclusion that there is an obstacle then I very much want 
to hear what problems they ran into. That's rarely the case unfortunately. Most of the 
time I hear "we don't want to learn something new".


A) You don't need to deploy something to see that the design is overly 
complex, and not a good fit for existing, well-entrenched workflows.


B) Many people have done this, and provided the exact feedback you 
describe, for well over a decade.


There is no technical reason that IPv6 cannot have full-featured DHCP. 
The only obstacles are the purists like you who insist on the entire 
installed base coming up to speed with their cleverness. All the user 
education in the world will not fix that problem.




Re: Nat

2016-01-07 Thread Doug Barton

On 12/18/2015 01:20 PM, Lee Howard wrote:



On 12/17/15, 1:59 PM, "NANOG on behalf of Matthew Petach"



I'm still waiting for the IETF to come around
to allowing feature parity between IPv4 and IPv6
when it comes to DHCP.  The stance of not
allowing the DHCP server to assign a default
gateway to the host in IPv6 is a big stumbling
point for at least one large enterprise I'm aware
of.



Tell me again why you want this, and not routing information from the
router?


C'mon Lee, stop pretending that you're interested in the answer to this 
question, and wasting everyone's time in the process. You know the 
answers, just as well as the people who would give them.



Right now, the biggest obstacle to IPv6
deployment seems to be the ivory-tower types
in the IETF that want to keep it pristine, vs
allowing it to work in the real world.


There's a mix of people at IETF, but more operator input there would be
helpful. I have a particular draft in mind that is stuck between "we'd
rather delay IPv6 than do it wrong" and "be realistic about how people
will deploy it."


On this topic the operator input has been clear for over a decade, and 
yet the purists have blocked progress this whole time. The biggest 
roadblock to IPv6 deployment is its most ardent "supporters."





NSA/GCHQ Exploits Against Juniper Networking Equipment

2015-12-28 Thread Doug Barton
The Intercept just published a 2011 GCHQ document outlining their 
exploit capabilities against Juniper networking equipment, including 
routers and NetScreen firewalls as part of this article.


https://www.schneier.com/blog/archives/2015/12/nsagchq_exploit.html


Re: [CVE-2015-7755] Backdoor in Juniper/ScreenOS

2015-12-21 Thread Doug Barton

https://www.schneier.com/blog/archives/2015/12/back_door_in_ju.html


Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Doug Barton

On 7/15/15 4:45 PM, Joe Maimon wrote:



Doug Barton wrote:

On 7/15/15 10:24 AM, Joe Maimon wrote:

I suspect a 16 /8 right about now would be very welcome for everybody
other than the ipv6 adherents.


Globally we were burning through about a /8 every month or two in the
good old days. So in the best case scenario we'd get 32 more months of
easy to get IPv4, but at an overwhelming cost to re-implement every
network stack.

This option was considered back in the early 2000's when I was still
involved in the discussion, and rejected as impractical.




Removing experimental status does not equate to the burden of making
it equivalent in use to the rest of the address space.

How about the ARIN burn rate post IANA runout? How long does 16 /8 last
then?

What would be wrong with removing experimental status and allowing one
of the /8s to be used for low-barrier /16 assignments to any party
demonstrating a willingness to coax usability of the space?

Yes, any such effort has to run the gauntlet of IETF/IANA/RIR policy.

CGN /10 managed. This could too, if all the naysayers would just step
out of the way.


Joe,

In this post, and in your many other posts today, you seem to be 
asserting that this would work if $THEY would just get out of the way, 
and let it work. You've also said explicitly that you believe that this 
is an example of top-down dictates. I know you may find this hard to 
believe, but neither of these ideas turns out to be accurate. A little 
history ...


In 2004 I was the manager of the IANA. Tony Hain came to me and said 
that he'd been crunching some numbers and his preliminary research 
indicated that the burn rate on IPv4 was increasing fairly dramatically, 
and that runout was likely to happen a lot sooner than folks expected it 
would. Various people started doing their own research along similar 
lines and confirmed Tony's findings.


So amongst many others, I started taking various steps to get ready 
for IPv4 runout. One of those steps was to talk to folks about the 
feasibility of utilizing Class E space. Now keep in mind that I have no 
dog in this hunt. I've never been part of an RIR, I've never worked for 
a network gear company, I'm a DNS guy. To me, bits are bits.


I was told, universally, that there was no way to make Class E space 
work, in the public Internet or private networks (because the latter was 
being considered as an expansion of 1918). There are just too many 
barriers, not the least of which is the overwhelming number of 
person-years it would take to rewrite all the software that has 
assumptions about Class E space hard coded.


Further, the vendors we spoke to said that they had no intention of 
putting one minute's worth of work into that project, because the ROI 
was basically zero. In order for address space to work the standard is 
universal acceptance ... and that was simply never going to happen. 
There are literally hundreds of millions of devices in active use right 
now that would never work with Class E space because they cannot be 
updated.


Of course it's also true that various folks, particularly the IETF 
leadership, were/are very gung ho that IPv6 is the right answer, so any 
effort put into making Class E space work is wasted effort; which should 
be spent on deploying IPv6. On a *personal* level I agree with that 
sentiment, but (to the extent I'm capable of being objective) I didn't 
let that feeling color my effort to get an honest answer from the many 
folks I talked to about this.


But all that said, nothing is stopping YOU from working on it. :)  The 
IETF can't stop you, the vendors can't stop you, no one can stop you ... 
if you think you can make it work, by all means, prove us all wrong. :) 
 Find some others that agree with you, work on the code, do the 
interoperability tests, and present your work. You never know what might 
happen.


In the meantime, please stop saying that not using this space was 
dictated from the top down, or that any one party/cabal/etc. is holding 
you back, because neither of those are accurate.


Good luck,

Doug


--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Doug Barton

On 7/15/15 10:24 AM, Joe Maimon wrote:

I suspect a 16 /8 right about now would be very welcome for everybody
other than the ipv6 adherents.


Globally we were burning through about a /8 every month or two in the 
good old days. So in the best case scenario we'd get 32 more months of 
easy to get IPv4, but at an overwhelming cost to re-implement every 
network stack.


This option was considered back in the early 2000's when I was still 
involved in the discussion, and rejected as impractical.


--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Doug Barton

On 7/15/15 8:20 AM, George Metz wrote:

Reasonability, like beauty, is in the eye of the beholder, but I thank
you for the compliment. :)


I call them like I see them. :)


The short answer is yes, that constitutes being prudent.


Ok, good news so far. :)


The longer
answer is it depends on what you consider the wildest dreams.

There's a couple of factors playing in. First, look at every /64 that is
assigned as an IPv4 /32 that someone is running NAT behind.


Ok, that's a relatively common analogy, even if it isn't quite 
technically correct.



This is flat
out WRONG from a routing perspective, but from an allocation
perspective, it's very much exactly what's happening because of SLAAC
and the 48-bit MAC address basis for it. Since /64 is the minimum, that
leaves us with less than half of the available bit mask in which to hand
out that 1/8th the address space.


I have my own issues with RA/SLAAC, but let's leave those aside for a 
second. It's probably a more correct analogy (although still not 
completely accurate) to say that a /64 is equivalent to an IPv4 /24, or 
some other small network that would be utilized by an end user with the 
expectation that there are multiple devices running in it. I agree with 
you that you'd never want to route that /64, but you (generally) 
wouldn't want to route a /24, or more accurately something like a /28, 
either.


Also, as Owen pointed out, the original concept for IPv6 networking was 
a 64 bit address space all along. The extra (or some would say, 
wasted) 64 bits were tacked on later.



Still oodles of addresses, but worth
noting and is probably one reason why some of the conservationists
react the way they do.


It's easy to look at the mandatory /64 limit and say "See, the address 
space is cut in half to start with!" but it's not accurate. Depending on 
who's using it a single /64 could have thousands of devices, up to the 
limit of the broadcast domain on the network gear. At minimum even for a 
home user you're going to get several devices.



Next, let's look at the wildest dreams aspect. The current
implementation I'm thinking of in modern pop culture is Big Hero 6
(the movie, not the comics as I've never read them). Specifically,
Hiro's microbots. Each one needs an address to be able to communicate
with the controller device. Even with the numbers of them, can probably
be handled with a /64, but you'd also probably want them in separate
buckets if you're doing separated tasks. Even so, a /48 could EASILY
handle it.


Right, 65k /64s in a /48.


Now make them the size of a large-ish molecule. Or atom. Or protons.
Nanotech or femtotech that's advanced enough gets into Clarke's Law -
any sufficiently advanced technology is indistinguishable from magic -
but in order to do that they need to communicate. If you think that
won't be possible in the next 30 years, you probably haven't been paying
attention.


I do see that as a possibility, however in this world that you're 
positing, how many of those molecules need to talk to the big-I 
Internet? Certainly they need to communicate internally, but do they 
need routable space? Also, stay tuned for some math homework. :)



I wrote my email as a way of pointing out that maybe the concerns (on
both sides) aren't baseless,


Please note that I try very hard not to dismiss anyone's concerns as 
baseless, whether I agree with them or not. As I mentioned in my 
previous message, I believe I have a pretty good understanding of how 
the IPv6 conservationists think. My concern however is that while 
their concerns have a basis, their premise is wrong.



but at the same time maybe there's a way
to split the difference. It's not too much of a stretch to see that,
soon, 256 subnets may not actually be enough to deal with the connected
world and Internet of Things that's currently being developed. But
would 1024? How about 4096? Is there any need in the next 10-15 years
for EVERYONE to be getting handed 65,536 /64 subnets?


So, here's where the math gets to be both fun, and mind-boggling. :) 
There are 32 /8s in 2000::/3. Let's assume for sake of argument that 
we've wasted two whole /8s with various drama. There are 2 to the 40th 
power /48s in a /8, multiply by 30, and divide by 10 billion (to 
represent a fairly future-proof number of people on the planet). That's 
3,298.5 /48s per person.


So you asked an interesting question about whether or not we NEED to 
give everyone a /48. Based on the math, I think the more interesting 
question is, what reason is there NOT to give everyone a /48? You want 
to future proof it to 20 billion people? Ok, that's 1,600+ /48s per 
person. You want to future proof it more to 25% sparse allocation? Ok, 
that's 400+ /48s per person (at 20 billion people).
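
For anyone who wants to check the arithmetic, here is the back-of-the-envelope 
version as a quick shell (bash/zsh) calculation, using the same assumptions as 
above (30 usable /8s out of 2000::/3, and 2^40 /48s per /8):

$ echo $(( 30 * 2**40 / (10 * 10**9) ))      # 10 billion people
3298
$ echo $(( 30 * 2**40 / (20 * 10**9) ))      # 20 billion people
1649
$ echo $(( 30 * 2**40 / (20 * 10**9) / 4 ))  # 20 billion people, 25% sparse allocation
412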


At those levels even if you gave every person's every device a /48, 
we're still not going to run out, in the first 1/8 of the available space.



Split the difference, go with a /52


That's not splitting the difference. 

Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Doug Barton

On 7/15/15 12:43 PM, George Metz wrote:



On Wed, Jul 15, 2015 at 2:11 PM, Doug Barton do...@dougbarton.us
mailto:do...@dougbarton.us wrote:

On 7/15/15 8:20 AM, George Metz wrote:



Snip!

Also, as Owen pointed out, the original concept for IPv6 networking
was a 64 bit address space all along. The extra (or some would
say, wasted) 64 bits were tacked on later.

Still oodles of addresses, but worth
noting and is probably one reason why some of the conservationists
react the way they do.


It's easy to look at the mandatory /64 limit and say "See, the
address space is cut in half to start with!" but it's not accurate.
Depending on who's using it a single /64 could have thousands of
devices, up to the limit of the broadcast domain on the network
gear. At minimum even for a home user you're going to get several
devices.

Allow me to rephrase: A single /32 could have thousands of devices, up
to the limit of a 10/8 NATted behind it. This, plus the fact that it
WAS originally 64-bit and was expanded to include RA/SLAAC, is why I
chose that analogy.


Sure, so in that context it's a valid analogy, but my point still 
stands. We're not talking about routable/PI space for customers, even at 
the /48 level.


Now it is true that the CW seems to be leaning towards /48 being the 
largest routable prefix *for commercial networks*, but that's orthogonal 
to the issue of home users.



I do see that as a possibility, however in this world that you're
positing, how many of those molecules need to talk to the big-I
Internet? Certainly they need to communicate internally, but do they
need routable space? Also, stay tuned for some math homework. :)


So, you're advising that all these trillions of nanites should, what,
use NAT? Unroutable IP space of another kind? Why would we do that when
we've already got virtually unlimited v6 address space?

See what I mean? Personally I'd suspect something involving quantum
states would be more likely for information passage, but who knows what
the end result is?


I very carefully tried to skirt the issue, since NAT is a hot-button 
topic for the most ardent of the IPv6 zealots. You were positing a world 
where we need addressing at a molecular level, my point is simply that 
in that world we may or may not be dealing with publicly routable space; 
but *more importantly*, even if we are, we're still covered.



I wrote my email as a way of pointing out that maybe the concerns (on
both sides) aren't baseless,


Please note that I try very hard not to dismiss anyone's concerns as
baseless, whether I agree with them or not. As I mentioned in my
previous message, I believe I have a pretty good understanding of
how the IPv6 conservationists think. My concern however is that
while their concerns have a basis, their premise is wrong.

I wasn't intending you as the recipient, keep in mind. However, IS
their premise wrong? Is prudence looking at incomprehensible numbers and
saying "we're so unlikely to run out that it just doesn't matter"


Yeah, that's totally not what I'm saying, and I don't think even the 
most ardent IPv6 zealot is saying it either. What I'm saying is that 
there is a very solid, mathematical foundation on which to base the 
conclusion that ISPs handing out /48s to end users is a very reasonable 
thing to do.



or is
prudence "Well, we have no idea what's coming, so let's be a little less
wild-haired in the early periods"? The theory being it's a lot harder to
take away that /48 30 years from now than it is to just assign the rest
of it to go along with the /56 (or /52 or whatever) if it turns out
they're needed. I personally like your idea of reserving the /48 and
issuing the /56.


Thanks. :)  I do recognize that even with all of the math in the world 
we don't know what the world will look like in 20 years, so *some 
degree* of pragmatism is valuable, especially as we're ramping up 
deployment.


But your argument that it'll be hard to take away the /48 is almost 
certainly wrong. This isn't like handing out Class A's and Class 
B's in the early days of IPv4; when we're talking home users we're 
talking about PA space, which can be withdrawn at will.


Even at the RIR level, assuming some unimaginable future where 400+ /48s 
per human on the planet isn't enough, they can simply revise their 
policies to require justification at some other level per user than /48, 
thereby proclaiming that an ISP's existing space is adequate by 
administrative fiat.


In that sense I actually believe that we've learned the lessons from the 
early days of IPv4, and that we've adequately accounted for them in the 
current set of policies.


... and not to flog the expired equine, but we're still only talking 
about 1/8 of the available space. I'm not being snarky when I say that 
we really are dealing with numbers that are so large that it's hard for 
the human brain to conceive of them.

Re: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Doug Barton

On 7/15/15 1:48 PM, valdis.kletni...@vt.edu wrote:

On Wed, 15 Jul 2015 16:23:36 -0400, Ricky Beam said:


What seems like a great idea today becomes tomorrow's what the f*** were
they thinking.


However, this statement doesn't provide any actual guidance, as it's
potentially equally applicable to the "give each end customer a /48" crew and
the "give them all a /56" crew.

Actually, not true - in fact, it's demonstrable that a residential customer
can run through a /56.  Just get a largish house, put up one router using
CeroWRT (or, I suspect, a current/recent OpenWRT) that burns through 6-7 subnet
allocations, and then put a second one at the other end of the house and
it burns through 6-7.  The second one has to dhcp-pd request at least 3 bits
for itself, which leaves the first one only 5 bits, of which *it* will burn
at least 3.  If you create any VLANs at all, you just burned 4 and 4 bits,
and there goes that /56.

And that's burned all the subnets in a /56 *just hooking up 2 plug and play
routers*.  There's none left for doing anything experimental/different.

(And I suspect Dave Taht can provide several CeroWRT config checkboxes that
will each burn another 1-3 bits each if you click on them and hit apply :)


I tend to think that you're correct here, Valdis, which is why I 
suggest reserving the /48 per customer regardless of what they decide to 
assign. I think the problem of expanding the assignment to a more 
reasonable size will happen on its own, since at some point the support 
calls for "hey, I need more space!" will become a burden.


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Dual stack IPv6 for IPv4 depletion

2015-07-14 Thread Doug Barton

On 7/14/15 6:23 AM, George Metz wrote:

It's always easier to be prudent from the get-go than it is to rein in the
insanity at a later date. Just because we can't imagine a world where IPv6
depletion is possible doesn't mean it can't exist, and exist far sooner
than one might expect.


I've been trying to stay out of this Nth repetition of the same 
nonsensical debate, since neither side has anything new to add. However 
George makes a valid point, which is learn from the mistakes of the past.


So let me ask George, who seems like a reasonable fellow ... do you 
think that creating an addressing plan that is more than adequate for 
even the wildest dreams of current users and future growth out of just 
1/8 of the available space (meaning of course that we have 7/8 left to 
work with should we make a complete crap-show out of 2000::/3) 
constitutes being prudent, or not?


And please note, this is not a snark, I am genuinely interested in the 
answer. I used to be one of the people responsible for the prudent use 
of the integers (as the former IANA GM) so this is something I've put a 
lot of thought into, and care deeply about. If there is something we've 
missed in concocting the current plan, I definitely want to know about it.


Even taking into account some of the dubious decisions that were made 20 
years ago, the numbers involved in IPv6 deployment are literally so 
overwhelming that the human brain has a hard time conceiving of them. 
Combine that with the conservation mindset that's been drilled into 
everyone regarding IPv4 resources, and a certain degree of 
over-enthusiasm for conserving IPv6 resources is understandable. But at 
the same time, because the volume of integers is so vast, it could be 
just as easy to slip into the early-days v4 mindset of infinite, which 
is why I like to hear a good reality check now and again. :)


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: REMINDER: LEAP SECOND

2015-06-22 Thread Doug Barton

On 6/19/15 2:58 PM, Harlan Stenn wrote:

Bad idea.

When restarting ntpd your clocks will likely be off by a second, which
will cause a backward step, which will force the problem you claim to be
avoiding.

There are plenty of ways to solve this problem, and you just get to
choose what you want to risk/pay.


You misunderstand the problem. :)  The problem is not "clock skips 
backward one second," because most of the time that's not what happens. 
The problem is that most software does not handle it well when the clock 
ticks ... :59 :60 :00 instead of ticking directly from :59 to :00.


THAT problem is avoided by temporarily turning off NTP and then turning 
it back on again when the coast is clear. Most software can handle the 
"clock skips forward or backwards one second" problem fairly robustly, 
and as Baldur pointed out, by doing the reset in a controlled manner you 
greatly reduce your overall risk.
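
For example (assuming a SysV-style init script named "ntpd"; adjust the 
commands and the timing for your own platform):

$ sudo service ntpd stop     # a comfortable margin before 23:59:60 UTC
  ... let the leap second pass with ntpd off ...
$ sudo service ntpd start    # afterwards; the clock is now ~1 second off,
                             # and ntpd corrects that as an ordinary small
                             # step/slew rather than a :60 tick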


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Doug Barton

On 6/10/15 2:27 PM, Ted Hardie wrote:



On Wed, Jun 10, 2015 at 2:16 PM, Doug Barton do...@dougbarton.us
mailto:do...@dougbarton.us wrote:

On 6/10/15 2:00 PM, Ted Hardie wrote:

Lorenzo has detailed why N=1 doesn't work for devices that need
to use xlat


... and it's been well demonstrated that this is a red herring
argument since the provider has to configure xlat for it to have any
chance of working.

or which might want to tether other devices;


... and this argument has been refuted by the word bridging.


To repeat Valdis' question:

"And the router knows to send to the front address to reach the
back address, how, exactly? Seems like somebody should invent a
way to assign a prefix to the front address that it can delegate to
things behind it.  :)"


I saw that, he was refuted by others, most notably by the simple fact 
that bridging works now in IPv4, so obviously there is a way to make it 
work.


I think PD is the right answer here of course, but that doesn't mean 
that bridging is the wrong answer.



The other option would, of course, be bridging plus IPv6 NAT, and I
assume you see the issues there.


No, actually I don't. I realize that you and Lorenzo are part of the 
rabid NAT-hating crowd, but I'm not. I don't think it's the right answer 
here, but I don't think it's automatically a problem either.



Back to the question I asked before: does "static" solve the stated
problems without "single"?


It *could*, but Lorenzo actually does have a point when he talks about 
not wanting to cripple future application development. I'd also like to 
see a rough outline of an implementation before commenting further.


Meanwhile, DHCPv6 + PD solves all of Lorenzo's stated problems, but he 
won't implement it because DHCP. That's not something you can engineer 
around.


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Doug Barton

On 6/10/15 2:46 PM, Ted Hardie wrote:

That's fair enough, and some variability in what N is depending on
device is as well.  But understanding whether what we're actually
looking for is "static" or "single" is a pretty key piece of the
requirements scoping, and it sounds like "static" is it, at least from
your perspective.  Is that a fair assessment?


Ted,

I honestly can't tell if you're deliberately misrepresenting my 
argument, or if you're just being dense. You snipped the several places 
in my previous message where I stated what I think the best way forward 
is. But just in case it's the latter, not the former:


I think PD is the right answer here of course ...

Meanwhile, DHCPv6 + PD solves all of Lorenzo's stated problems, but he 
won't implement it because DHCP. That's not something you can engineer 
around.


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Doug Barton

On 6/10/15 2:00 PM, Ted Hardie wrote:

Lorenzo has detailed why N=1 doesn't work for devices that need to use xlat


... and it's been well demonstrated that this is a red herring argument 
since the provider has to configure xlat for it to have any chance of 
working.



or which might want to tether other devices;


... and this argument has been refuted by the word bridging.

I'm not a fan of N=1 for IPv6, but none of Lorenzo's arguments hold water.


--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Doug Barton

On 6/9/15 1:27 PM, Joel Maslak wrote:

Agreed - apparently the solution is to implement SLAAC + DNS advertisements
*AND* DHCPv6.  Because you need SLAAC + DNS advertisements for Android, and
you need DHCPv6 for Windows.

Am I the only one that thinks this situation is stupid?


No, you're not. Some of us have been saying that requiring RA is a bad 
idea, and that adding features to it is a bad idea, for over 15 years now.


Unfortunately the anti-DHCP crowd hasn't budged, no matter how many 
operators have told them that they cannot manage an IPv6 network with 
the current state of the protocol.


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Doug Barton

On 6/10/15 8:15 AM, Ray Soucy wrote:

The statement that "Android would still not implement DHCPv6 NA, but it
would implement DHCPv6 PD" is troubling because you're not even willing to
entertain the idea, for reasons that are rooted in idealism rather
than pragmatism.


I was going to respond on this issue in more depth, but others have 
already gotten there ahead of me. I think Ray's paragraph above sums it 
up best.


Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: 192.0.1.0/24?

2015-04-17 Thread Doug Barton
Harley is correct that 192.0.1/24 is mentioned in 1166, but AFAICS after 
cursory examination it has fallen through the cracks since then. (Note, 
this is not the same as 192.0.2/24, which has been updated in several 
RFCs recently, including 6303 by Mark Andrews, cc'ed for his information.)


I've also cc'ed Leo and Michelle from ICANN so that hopefully they can 
see about getting some whois info set up for that network. Michelle, let 
me know if it would be easier for you if I opened a ticket for this 
request.


Doug


On 4/17/15 1:26 PM, Harley H wrote:

It is mentioned in RFC 1166 as BBN-TEST-C. I suppose it's still not
publicly allocated.




On Fri, Apr 17, 2015 at 4:14 PM, Harley H bobb.har...@gmail.com wrote:


Does anyone know the status of this netblock? I've come across a malware
sample configured to callback to an IP in that range but it does not
appear
to be routable. Yet, it is not mentioned in RFC 5735 nor does it have any
whois information.

Thanks,
   Harley




--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: FIXED - Re: Broken SSL cert caused by router?

2015-03-28 Thread Doug Barton

On 3/28/15 9:05 AM, Mike wrote:

I went back to Frank's list and did some additional testing. I have a
different server which was set up the same way as the previous one
discussed, and I thought I would use the above tools and see if my
problem would have been identified by any of them. I am sorry to report
that none of these caught the problem either. Although I still do
not fully understand the dependencies involved, it seems that if my
server was failing to supply the full certificate chain, and the browser
was compensating for it by (attempting?) to load the missing certificate
from elsewhere, and this Meraki router was somehow able to confound
that process, that would be an issue worthy of exploring more. I
certainly don't blame these ssl check sites, but clearly more checks
are needed.


The Qualys site (https://www.ssllabs.com/ssltest/analyze.html) will 
report whether or not the server supplied the intermediate cert. But I 
agree with you that the other tools should make a bigger deal about it 
if the server doesn't supply it.


FWIW, it's been the CW to do this for some time now, as there are 
systems like the one you've run into that were designed before 
intermediate certs were commonplace, and don't know how to handle them.


I've also experienced situations where an enterprise purchases a DV 
certificate to be used on an offline system, and while that system has 
access to the root CA certs, it cannot retrieve the intermediate cert. 
Having the end system supply the intermediate cert as well solves this 
issue.


The method of supplying the intermediate cert is simple: just append the 
intermediate certificate to the end of the file with your server 
certificate (the .crt file). Any reasonably modern software will handle 
that transparently, and provide the intermediate cert along with the 
server cert when doing its business.
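
For example, something along these lines (the file names here are just 
placeholders for your own cert files):

$ cat example.com.crt intermediate.crt > example.com.chained.crt

Then point the web server at the chained file and confirm what it 
actually serves; the chain printed here should now include the 
intermediate:

$ openssl s_client -connect example.com:443 -servername example.com -showcerts < /dev/null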


hope this helps,

Doug

--
I am conducting an experiment in the efficacy of PGP/MIME signatures. 
This message should be signed. If it is not, or the signature does not 
validate, please let me know how you received this message (direct, or 
to a list) and the mail software you use. Thanks!




signature.asc
Description: OpenPGP digital signature


Re: OT: VPS with Routed IP space

2015-02-24 Thread Doug Barton

On 2/24/15 1:42 PM, Michael Helmeste wrote:

ARP Networks: https://www.arpnetworks.com/vps

Routed IP space (v4 and v6) as well as BGP peering.


+1 for Arp, I'm a happy customer (no other affiliation).

FWIW,

Doug




Re: Dynamic routing on firewalls.

2015-02-06 Thread Doug Barton

On 2/6/15 8:39 AM, Bill Thompson wrote:

You can fix a car with a swiss army knife, but why would you want to?


Is it a metric swiss army knife?



Re: OT - Verizon/ATT Cell/4G Signal Booster/Repeater

2014-12-20 Thread Doug Barton

On 12/19/14 8:30 PM, Javier J wrote:

Add T-mobile LTE and to that list.

I need one.


I'm using wifi calling on my T-mobile device now and then 'just 'cuz', 
and it works a treat. Usually my cell coverage is excellent, but I'm 
sure that someday I'll be in a spot where I need it, so I want to keep 
exercising that path occasionally. :)


FWIW,

Doug

(Usually I wouldn't bother speaking about a specific vendor, especially 
one that's arguably off-topic, but given the historical scuzziness of 
most of the mobile vendors, and what T-mobile is doing now to improve 
the situation; albeit with occasionally distasteful marketing theatrics, 
I thought it worth mentioning ...)


Re: Comcast thinks it ok to install public wifi in your house

2014-12-11 Thread Doug Barton

On 12/11/14 10:16 AM, Livingood, Jason wrote:

On 12/11/14, 1:06 PM, Kain, Rebecca (.) bka...@ford.com wrote:



No one who has Comcast, who I've forward this to, knew about this (all US
customers).  Maybe you can send here the notification Comcast sent out,
to your customers.


I emailed you off-list. I am happy to investigate individual cases. The
rollout has been happening since probably 2009 or 2010.


Jason,

While that offer is noble, and appreciated, as are your other responses 
on this thread; personally I would be interested to hear more about how 
customers were notified. Was there a collateral piece included in their 
bill? Were they e-mailed?


And are we correct in assuming that this is strictly opt-out? And is the 
report that if you opt out with your account that you are not then able 
to access the service elsewhere correct?


Completely aside from the fact that other services have done something 
similar, I regard all of this as quite troubling, as it seems others 
here do as well.


Doug




Re: Comcast thinks it ok to install public wifi in your house

2014-12-11 Thread Doug Barton
That's interesting, thanks for that info, Mike. Jason has a good point 
in that a lot of the reporting on this topic so far has been 
ill-informed, and I think it's important to understand the truth.


Re Rodney and Randy's point about this being blown out of proportion, 
the thing I'm most concerned about is not the service itself, which is 
interesting, and has the capability to be a good utilization of 
resources (as in, a cheap way to provide a beneficial service).


My concerns are that apparently customers are not informed about the 
thing before it gets enabled, and the issue of wifi density that was 
raised by several people here. If you have an apartment building for 
example, where a significant majority of the tenants are Comcast 
customers (cuz in 'murica we loves us some monopolies) I see a lot of 
strong xfinity signals stomping on an already crowded 2.4 G spectrum.


So just to be clear, I'm not being critical at this point, I'm simply 
interested in separating the facts from the hype.


Doug


On 12/11/14 12:42 PM, Mike wrote:

Doug,

  I use my own router at home, so I opted out, and I can use the 
service without issue.

Mike


On Dec 11, 2014, at 2:53 PM, Doug Barton do...@dougbarton.us wrote:


On 12/11/14 10:16 AM, Livingood, Jason wrote:
On 12/11/14, 1:06 PM, Kain, Rebecca (.) bka...@ford.com wrote:



No one who has Comcast, who I've forward this to, knew about this (all US
customers).  Maybe you can send here the notification Comcast sent out,
to your customers.


I emailed you off-list. I am happy to investigate individual cases. The
rollout has been happening since probably 2009 or 2010.


Jason,

While that offer is noble, and appreciated, as are your other responses on this 
thread; personally I would be interested to hear more about how customers were 
notified. Was there a collateral piece included in their bill? Were they 
e-mailed?

And are we correct in assuming that this is strictly opt-out? And is the report 
that if you opt out with your account that you are not then able to access the 
service elsewhere correct?

Completely aside from the fact that other services have done something similar, 
I regard all of this as quite troubling, as it seems others here do as well.

Doug






Re: Comcast residential DNS contact

2014-12-03 Thread Doug Barton

On 12/3/14 10:07 AM, Grant Ridder wrote:

Did more digging and found the RFC regarding ANY queries:

3.2.3 - * 255 A request for all records
https://www.ietf.org/rfc/rfc1035.txt


When listing URLs for RFCs it's better to use the tools site, as it 
gives a much better experience:


https://tools.ietf.org/html/rfc1035

Meanwhile, the text is correct, but what you're missing is the nuance of 
authoritative vs. recursive. If you send an ANY query to an 
authoritative server it is naturally going to send you all of the 
related records, since it has them all.


A recursive (or iterative if you prefer) server only has what it has in 
the cache, but it will send you all records that it has. What this 
does not imply is that the recursive server will go out and do its own 
ANY query for the RR you're asking about, unless there is nothing in the 
cache to start with.
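
A quick way to see the difference for yourself (the server names and 
addresses below are placeholders, not real ones):

% dig @ns1.example.net example.com ANY    (authoritative: returns every RRset it holds for that name)
% dig @192.0.2.53 example.com ANY         (recursive: returns only whatever is currently in its cache)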


There are any number of explanations for why some of the recursive 
servers you're querying have more records than others. None of them are 
bugs. :)



However Wikipedia (http://en.wikipedia.org/wiki/List_of_DNS_record_types)
lists this as a request for "All cached records" instead of "A request for
all records" per the RFC.


Wikipedia is good for a lot of things, but standards work is not one of 
them. :)  The text above is a good example of why.


Doug



Re: How do I handle a supplier that delivered a faulty product?

2014-11-25 Thread Doug Barton

On 11/25/14 10:06 PM, Mark Andrews wrote:

Any router/modem that *crashes* when the input rate exceeds the
output rate is broken.  A router/modem shouldn't crash regardless
of the data input rate.  It might drop packets but not crash.


Maybe the bit-bucket got full?


Re: Equinix Virginia - Ethernet OOB suggestions

2014-11-12 Thread Doug Barton

On 11/12/14 11:49 AM, Christopher Morrow wrote:

On Wed, Nov 12, 2014 at 1:17 PM, Randy Bush ra...@psg.com wrote:

I hear the chaps at Hurricane Electric can help you with a nice
tunnel for that...

yea.. because when the sh*t hits the fan I REALLY need a dependency
upon a wonky tunnel server made of cheese and mouse parts to be in the
middle of my work process?


wait a sec!  there's cheese?  where?


I understand that it is ashburn equinix.


randy, who may have to rethink tunnels


:)


cheese++



Re: Reporting DDOS reflection attacks

2014-11-09 Thread Doug Barton

On 11/8/14 6:33 PM, Roland Dobbins wrote:

this is incorrect and harmful, and should be removed:

 iii. Consider dropping any DNS reply packets which are larger
than 512 Bytes – these are commonly found in DNS DoS Amplification attacks.

This *breaks the Internet*.  Don't do it.


+1


Re: NIST NTP Server List

2014-10-29 Thread Doug Barton
Also getting a 404 over IPv6. You can verify what transport we're using 
in Firefox using the SixorNot plugin.


hth,

Doug



Re: NIST NTP Server List

2014-10-29 Thread Doug Barton
Happy Eyeballs has nothing to do with it. This is a server 
misconfiguration plain and simple.


Doug


On 10/29/14 11:30 AM, Christopher Morrow wrote:

On Wed, Oct 29, 2014 at 11:19 AM, Brian Christopher Raaen
mailing-li...@brianraaen.com wrote:

That is interesting as the computer I am using is on dual-stack, and I am
probably using IPv6 to reach it.



happy eyeballs


On Wed, Oct 29, 2014 at 1:56 PM, Stefan Bethke s...@lassitu.de wrote:


Seems to be working over IPv4, not over IPv6.

$ curl -6 http://tf.nist.gov/tf-cgi/servers.cgi 2>/dev/null | head -5
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
$ curl -4 http://tf.nist.gov/tf-cgi/servers.cgi 2>/dev/null | head -5
<html>
 <head>
 <title>NIST Internet Time Service</title>
 <meta http-equiv="content-type" content="text/html;charset=iso-8859-1">
<script language="JavaScript" id="_fed_an_js_tag"
src="/js/federated-analytics.all.min.js?agency=NIST&subagency=tf&pua=UA-42404149-6&yt=true"></script>



Am 29.10.2014 um 18:26 schrieb Brian Christopher Raaen 

mailing-li...@brianraaen.com:


I'm still getting a 404.  I am using a Windstream backbone, is this maybe
path/server specific.  Here is a dig.

dig tf.nist.gov


--
Stefan Bethke s...@lassitu.de   Fon +49 151 14070811








--
Brian Christopher Raaen
Network Architect
Zcorum




Re: NIST NTP Server List

2014-10-29 Thread Doug Barton

On 10/29/14 12:36 PM, Christopher Morrow wrote:

On Wed, Oct 29, 2014 at 11:36 AM, Doug Barton do...@dougbarton.us wrote:

Happy Eyeballs has nothing to do with it. This is a server misconfiguration
plain and simple.



I meant that it seems that v4 is broken, but v6 is not.


Other way around.



Re: Linux: concerns over systemd adoption and Debian's decision to switch

2014-10-23 Thread Doug Barton

On 10/23/14 4:01 PM, Simon Lyall wrote:

On Wed, 22 Oct 2014, Stephen Satchell wrote:

On 10/22/2014 08:20 PM, Simon Lyall wrote:

On Wed, 22 Oct 2014, Miles Fidelman wrote:

And maybe, you should check out some of the upstream bug reports re.
systemd interactions with NTP.


If you think the current situation is all good then maybe you should
look at other bugs for ntp. eg this one I that affected me with Ubuntu
Disktop. They only run time syncing when the network is bounced so if
you have a stable network then your machine will never sync:

https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1178933

[..]

I'm a long-time user of NTP, and what you are asking for is a no-good
way of doing things.  What you are supposed to do is use the ntpdate(8)
utility *ONCE* on boot to initially set the system clock, then you have
ntpd(8) running to do two things for you:  sync up to one or more time
sources, and discipline the local clock.

[..]

That's the SERVER way of running a time synchronization.  So it would
appear that you have a quarrel with GUI support, not with NTP itself.


What my point was is that the simple default for end users [1] is
already significantly broken in Ubuntu (that is just one bug that bit
me, there are plenty of others).

The systemd system seems to offer an improvement on the existing
simple default setup while still enabling experts to run a full ntpd
install if they wish.

[1] - I know how to setup and run ntpd, I didn't expect to need to do it
on my workstation however.


If you are actually arguing that because Ubuntu made a mistake on how 
the Internet time synchronization option is configured, therefore we 
need systemd, you need to rethink your position. :)


FWIW, the problem you're describing with that option is real, and was 
revisited in later versions. As of 14.04 it was still broken, but for a 
different reason having to do with permissions on the ntpd install. 
However, fixing that problem doesn't require systemd, it requires fixing 
*that* problem.


I am not against systemd per se; I honestly don't know enough about it 
to form an intelligent opinion. The line of reasoning (what I believe to 
be) espoused here is quite concerning: "If there is a problem, we need to 
bring the solution into systemd." To the extent that's accurate, it's 
overwhelmingly likely to be wrong.


I could say a lot more about Unix system design philosophies from my 
time in the FreeBSD project, but this thread started off-topic, and has 
only gotten worse. :)


Doug





Re: Why is .gov only for US government agencies?

2014-10-21 Thread Doug Barton

On 10/21/14 8:08 AM, David Conrad wrote:

Folks outside of the US have issues with the US government having a
role in the administration of the root, even if that role is to
ensure ICANN does screw the pooch.


Freudian slip, David? :)

Doug


Re: Why is .gov only for US government agencies?

2014-10-21 Thread Doug Barton

On 10/20/14 10:44 PM, Jared Mauch wrote:

I’ve had operational issues introduced by *TLD operators and choices they made.


When that happens, report them to ICANN's SSAC. They take the 
Stability part of their name seriously.


That said, new TLDs are not going away, so operations needs to take that 
into account.


Doug


