Re: WEBINAR TUESDAY: Can We Make IPv4 Great Again?

2017-03-07 Thread Mike Jones
On 7 March 2017 at 23:27, Dennis Bohn  wrote:

> >
> >
> > In addition, IPv6 has link local addresses.
> > This one seemingly insignificant detail causes so much code churn
> > and is probably responsible for 10 years of the IPv6 drag.
>
> AFAICT, Cisco V6 HSRP (mentioning that brand only because it caused me to
> try to figure something out, a coincidence that this is in reply to Jakob
> from Cisco but is based on what he wrote)  relies on Link Local addresses.
> I didn't understand why link locals should be there in the first place
> seemed klugey and have googled, looked at rfcs and tried to understand why
> link local addresses were baked into V6. The only thing I found was that it
> enabled interfaces on point to point links to be unaddressed in V6. (To
> save address space!??) Can anyone point me in a direction to understand the
> reasoning for link local addressing?
>

So you can print whilst your Internet connection is down. IPv6 allowed
people to rethink IPv4 assumptions, and they realised that a lot of IPv4
things were hacks to work around a lack of functionality in the protocol.
NAT has polluted people's minds when it comes to the distinction between
local and global addressing.

Why would you point a route at a global address, with an extra code check
to make sure it is on a directly attached interface? "Router 2 on
interface B" makes more sense to me than "router with global address 12345"
in this context.
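
To make the distinction concrete, here is a rough sketch (a hypothetical
data structure, not any vendor's implementation) of why a link-local next
hop only makes sense together with an interface - exactly the "Router 2 on
interface B" way of thinking:

    import ipaddress
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Route:
        prefix: ipaddress.IPv6Network
        next_hop: ipaddress.IPv6Address
        interface: Optional[str] = None   # required when next_hop is link-local

        def __post_init__(self):
            # fe80::/10 next hops are only meaningful per link, so insist on an interface
            if self.next_hop.is_link_local and self.interface is None:
                raise ValueError("a link-local next hop needs an interface")

    # "Router 2 on interface B", rather than "the router with global address X"
    route = Route(ipaddress.IPv6Network("2001:db8:100::/48"),
                  ipaddress.IPv6Address("fe80::2"), "ethB")
    print(route)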

I would also have loved it if the all-routers-anycast thing had been better
defined rather than deprecated. One of the potential default behaviours
could have been fe80:: as a default gateway on every segment, with a
logical meaning of "All upstream routers on this interface".

- Mike


Re: Soliciting your opinions on Internet routing: A survey on BGP convergence

2017-01-10 Thread Mike Jones
On 10 January 2017 at 19:58, Job Snijders <j...@instituut.net> wrote:
> On Tue, Jan 10, 2017 at 03:51:04AM +0100, Baldur Norddahl wrote:
>> If a transit link goes, for example because we had to reboot a router,
>> traffic is supposed to reroute to the remaining transit links.
>> Internally our network handles this fairly fast for egress traffic.
>>
>> However the problem is the ingress traffic - it can be 5 to 15 minutes
>> before everything has settled down. This is the time before everyone
>> else on the internet has processed that they will have to switch to
>> your alternate transit.
>>
>> The only solution I know of is to have redundant links to all transits.
>
> Alternatively, if you reboot a router, perhaps you could first shutdown
> the eBGP sessions, then wait 5 to 10 minutes for the traffic to drain
> away (should be visible in your NMS stats), and then proceed with the
> maintenance?
>
> Of course this only works for planned reboots, not surprise reboots.
>
> Kind regards,
>
> Job

If I tear down my eBGP sessions the upstream router withdraws the
route and the traffic just stops. Are your upstreams propagating
withdraws without actually updating their own routing tables?

I believe the simple explanation of the problem can be seen by firing
up an inbound mtr from a distant network and then withdrawing the route
from the path it is taking. It should show either destination
unreachable or a routing loop which "retreats" (under the right
circumstances I have observed it distinctly moving one hop at a time)
until it finds an alternate path.

My observed convergence times for a single withdraw are, however, in the
sub-10-second range for getting all the networks in the original path
pointing at a new one. My view on the problem is that if you are
failing over frequently enough for a customer to notice and report it,
you have bigger problems than convergence times.

- Mike Jones


Re: Forwarding issues related to MACs starting with a 4 or a 6 (Was: [c-nsp] Wierd MPLS/VPLS issue)

2016-12-06 Thread Mike Jones
> MACs that didnt make it through the switch when running 4.12.3.1:
>
> 4*:**:**:**:**:**
> 6*:**:**:**:**:**
> *4:**:**:**:**:**
> *6:**:**:**:**:**
> **:**:*B:**:6*:**
> **:**:*F:**:4*:**

Can anyone explain the last 2 for me?

I was under the impression that this bug was mainly caused by some
optimistic attempt to detect raw IPv4 or IPv6 payloads by checking for
a version number at the start of the frame. That does not explain why it
would be looking at the 5th octet.

I also assume there must be something more to the last 2 examples than
just the B or F and the 4 or 6, because otherwise they would match far
too many addresses to have gone unnoticed before. Perhaps the full MAC
address looks like some other protocol with a 4-byte header?
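
For what it's worth, here is a purely speculative Python sketch of the kind
of naive check being guessed at above; treating the first nibble of the
frame as an IP version field would indeed misfire on any MAC starting with
4 or 6, but it still says nothing about the 3rd or 5th octets:

    # Purely speculative: a naive "is this a raw IP payload?" test that reads the
    # first nibble of the frame (i.e. of the destination MAC) as an IP version field.
    def looks_like_raw_ip(frame: bytes) -> bool:
        return (frame[0] >> 4) in (4, 6)

    print(looks_like_raw_ip(bytes.fromhex("4c5e0c0000010000")))  # True  - misfires on a 4x:... MAC
    print(looks_like_raw_ip(bytes.fromhex("005e0c0000010000")))  # False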

Thanks,
Mike


Re: BCP38 adoption "incentives"?

2016-09-27 Thread Mike Jones
On 27 September 2016 at 15:32, Mikael Abrahamsson <swm...@swm.pp.se> wrote:
> On Tue, 27 Sep 2016, Joe Klein wrote:
>
>> What would it take to test for BCP38 for a specific AS?
>
>
> Well, you can get people to run
> https://www.caida.org/projects/spoofer/#software
>
> I tried to get OpenWrt to include similar software, on by default, but some
> people are afraid that they might incur legal action on themselves by doing
> antispoofing-testing.

Any network operator should know whether their network is blocking spoofed
traffic without having to deploy active probes across their network.

If a network is thinking about it enough to want to block it, they
will probably do so by turning knobs on their routers rather than
deploying another patch to the CPE.

I don't think the CPE is the solution here.

- Mike Jones


Re: IPv6 deployment excuses

2016-07-02 Thread Mike Jones
Thanks guys, this is what I have come up with so far. Next week I'll
put together a web page or something with slightly better write-ups,
but these are my initial ideas for responses to each point. Better
answers would be welcome.

"We have NAT, therefore we don't need IPv6."
"We still have plenty of IPv4 addresses"
"IPv4 works so we don't need IPv6."

They said similar things about IPX, DECnet, and AppleTalk, but they
eventually realised it was easier to move to IP than to keep making
excuses for why their network couldn't connect to anything.

"we want NAT for IPv6 for security reasons"

NAT does not provide any security; typically a NAT will also include a
stateful firewall, which is what provides the security. You can deploy a
stateful firewall for your IPv6 network if you like, however it isn't
required as much as you probably think it is - see below.

"IPv6 is just another way for hackers to get in."

There is no difference between IPv4 and IPv6 when it comes to
firewalls and reachability. It is worth noting that hosts which
support IPv6 are typically a lot more secure than older IPv4-only
hosts. As an example every version of Windows that ships with IPv6
support also ships with the firewall turned on by default.

"End users don't care about IPv6"

Are you saying this in response to someone asking for IPv6? Because
that would be contradictory. I am an end user and I care about IPv6!

"But it isn't a priority and we have other stuff to do"

Reconfiguring every router on your network is not something you want
to rush when you realise you needed IPv6 yesterday. Early adopters
have the advantage that they can gain experience with running IPv6 and
test their infrastructure without worrying about critical traffic
being IPv6-only.

"None of the software vendors support IPv6."

If your software vendors were following best practices and writing
decent code then this would not be a problem; I suggest pushing your
vendors to fix their code. If you only have one or two systems that are
IPv4-only then you can always "special case" them. See NAT64 for
information about one way of reaching IPv4 hosts from an IPv6 network.
If you dual stack then it doesn't matter and you can just use IPv4 for
those few services that require it until you get a fix from the
vendor.

"None of our staff understand IPv6."

Do your staff understand IPv4? Because it's not that different...

"IPv6 addresses are too long to remember"

You shouldn't need to remember IP addresses; that's what DNS is for.
However, I will say that in my experience (and many other people's)
having the extra bits to structure your network in a logical fashion can
make addresses more obvious and easier to remember. You have a single
prefix to remember, then you can address hosts within that subnet however
you like. In IPv4 you rarely have the luxury of being able to number
your DNS server 192.0.2.53 and your web server 192.0.2.80 to make them
easier to remember, whereas in IPv6 you can easily give hosts
easy-to-remember addresses, for example 2001:db8::53 for the DNS server
and 2001:db8::80 for the web server.

"Having to dual stack means I still have to manage a 4 and 6 network."

A fair point. However, if you want to ease your network management and
run an IPv6-only network with IPv4-only services still reachable over
the top of it, there are several ways to do it, the most obvious
being NAT64.
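
As a rough illustration of the NAT64 idea (using the RFC 6052 well-known
prefix and documentation addresses, not any particular deployment), DNS64
synthesises AAAA records by embedding the IPv4 address in the low 32 bits
of a /96:

    import ipaddress

    # RFC 6052 well-known NAT64 prefix; DNS64 synthesises AAAA records by embedding
    # the IPv4 address in the low 32 bits of this /96.
    NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

    def synthesise(v4: str) -> ipaddress.IPv6Address:
        return ipaddress.IPv6Address(int(NAT64_PREFIX) + int(ipaddress.IPv4Address(v4)))

    print(synthesise("192.0.2.80"))   # 64:ff9b::c000:250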

"Our DNS provider won't let us as add  records"

Seriously? Who is your DNS provider? You need to ask them why they
don't support standard record types. If they refuse to add standard
records to your zone, get a new provider; there are plenty out there.

"We'll deploy IPv6 right after we finish deploying DNSSEC"

The two are not mutually exclusive - at a large organisation where this
sort of project would be major work, you probably have different teams
dealing with IP and DNS, so there's no reason one would stop the other.

"But Android doesn't support stateful DHCPv6."

I will admit that the specifications were written a little loosely, so
you have two ways of configuring hosts; however, if you configure both RA
and DHCP then you will cover 100% of IPv6-capable hosts.

"Our legal intercept setup does not work with IPv6"

If your lawful intercept equipment can't see traffic just because it
uses an "unknown" protocol then it has a major flaw!

- Mike Jones


IPv6 deployment excuses

2016-07-01 Thread Mike Jones
Hi,

I am in contact with a couple of network operators, trying to prod them
to deploy IPv6. I figured that 10 minutes to send a couple of emails
was worth the effort to make them "see customer demand" (now none of
them can use the excuse that nobody has asked for it!), but the
replies I got were less than impressive, to say the least.

I was wondering if you guys could summarise your experiences with
people who make excuses for not deploying IPv6. I regularly see a
specific person saying "we can't deploy it because X" followed by you
guys "correcting them" and telling them how to deploy it without the
problems they claim they will have, but that is only a small snapshot
of the people who bother to post about their ignorance in public. I
suspect there is also a lot of selection bias in the NANOG membership
base, but you deal with a lot of enterprise networks off of this list,
so you probably have broader experience than the NANOG archives.

Can we have a thread summarising the most common excuses you've heard,
and whether they are actual problems blocking IPv6 deployment or just
down to ignorance? Perhaps this could be the basis for an FAQ-type page
that I can point people to when they say they don't know how to deploy
IPv6 on their networks? :)

- Mike Jones


Re: Cisco/Level3 takedown

2015-04-11 Thread Mike Jones
On 9 April 2015 at 19:16, Randy Bush ra...@psg.com wrote:
 It does make one wonder why Cisco or Level 3 is involved, why they
 feel they have the authority to hijack someone else's IP space, and
 why they didn't go through law enforcement. This is especially true
 for the second netblock (43.255.190.0/23), announced by a US company
 (AS26484).

 vigilantes always wear white hats.

 randy

It seems to me from reading the article that the defence to this is
to set up a legitimate hosting company in the same IP space, even if
it only has one customer. Then if you get blocked you turn around and
shout and scream that Level3 is abusing its market dominance to
prevent a rival firm's customers (this legitimate hosting company)
from being able to use the Internet.

How screwed would they be in court? I suspect it won't be a US
court that gets to side with a US company and ignore everyone else; I
suspect it would be an EU court case, where there are actual
consequences for a company trying to abuse its market dominance to
force others to do what it wants. This specific group might not have
the balls to try suing Level3, but if they make a habit of blocking
people's access to the Internet then ambulance-chasing lawyers will
likely try to trick them into screwing up and blocking their clients.

- Mike


Re: How our young colleagues are being educated....

2014-12-25 Thread Mike Jones
I am a university student who has just completed the first term of
the first year of a Computer Systems and Networks course. Apart from a
really out-of-place MATH module that did trig but not binary, it has
been reasonably well run so far. The binary is covered in a different
module, just not maths. The worst part of the course is actually the
core networking module, which is based on Cisco material. The Cisco
material is HORRIBLE! Those awkward book-page things with the stupid
hierarchical menu. As for the content... a scalable network is one
you can add hosts to, so what's a non-scalable network? Will the
building collapse if I plug my laptop in?

As I have been following NANOG for years I notice a lot of mistakes
or over-simplifications that show a clear distinction between the
theory in the university books and the reality on NANOG, and
demonstrate the lecturers' lack of real-world exposure. As a simple
example, in IPv4 the goal is to conserve IP addresses, therefore on
point-to-point links you use a /30, which only wastes 50% of the
address space. In the real world: /31s? But a /31 is impossible, I
hear the lecturers say...

The entire campus is not only IPv4-only, but on the wifi network they
actually assign globally routable addresses, then block protocol 41,
so Windows configures broken 6to4! Working IPv6 connectivity would at
least expose students to it a little and let them play with it...

Among the things I have heard so far: MAC addresses are unique, IP
fragments should be blocked for security reasons, and the OSI model
only has 7 layers to worry about. All theoretically correct. All
wrong.

- Mike Jones


On 22 December 2014 at 09:13, Javier J jav...@advancedmachines.us wrote:
 Dear NANOG Members,

 It has come to my attention, that higher learning institutions in North
 America are doing our young future colleagues a disservice.

 I recently ran into a student of Southern New Hampshire University enrolled
 in the Networking/Telecom Management course and was shocked by what I
 learned.

 Not only are they skimming over new technologies such as BGP, MPLS and the
 fundamentals of TCP/IP that run the internet and the networks of the world,
 they were focusing on ATM , Frame Relay and other technologies that are on
 their way out the door and will probably be extinct by the time this
 student graduates. They are teaching classful routing and skimming over
 CIDR. Is this indicative of the state of our education system as a whole?
 How is it this student doesn't know about OSPF and has never heard of RIP?

 If your network hardware is so old you need a crossover cable, it's time to
 upgrade. In this case, it’s time to upgrade our education system.

 I didn't write this email on the sole experience of my conversation with
 one student, I wrote this email because I have noticed a pattern emerging
 over the years with other university students at other schools across the
 country. It’s just the countless times I have crossed paths with a young IT
 professional and was literally in shock listening to the things they were
 being taught. Teaching old technologies instead of teaching what is
 currently being used benefits no one. Teaching classful and skipping CIDR
 is another thing that really gets my blood boiling.

 Are colleges teaching what an RFC is? Are colleges teaching what IPv6 is?

 What about unicast and multicast? I confirmed with one student half way
 through their studies that they were not properly taught how DNS works, and
 had no clue what the term “root servers” meant.

 Am I crazy? Am I ranting? Doesn't this need to be addressed? …..and if not
 by us, then by whom? How can we fix this?


Re: .nyc - here we go...

2013-07-05 Thread Mike Jones
On 5 July 2013 02:02, Eric Brunner-Williams e...@abenaki.wabanaki.net wrote:

 Someone who should know better wrote:

  Well give that .com thingie is IPv6 accessable and has DNSSEC there
  is nothing we need to let you know.  And yes you can get IPv6
  everywhere if you want it.  Native IPv6 is a little bit harder but
  definitely not impossible nor more expensive.

 And this was true when the v6 and DEC requirements entered the DAG?

 Try again, and while you're inventing a better past, explain how
 everyone knew that it would take 6 revisions of the DAG and take until
 3Q2012 before an applicant could predict when capabilities could be
 scheduled.

 The one thing you've got going for you is that in 2009 no one knew
 that almost all of the nearly 2,000 applicants would be forced by
 higher technical and financial requirements to pick one of a universe
 of fewer than 50 service providers, or that nearly all of the
 developing economies would be excluded, or self-exclude, from
 attempting to apply. So the basic diversity assumption was wrong.

 Why are the people who don't follow the shitty process so full of
 confidence they have all the clue necessary?


Why do people who make statements about .com not being IPv6 reachable think
they have all the clue necessary? And what about those people who think
that DNSSEC is about validating the answers from the root/TLD name servers?

At least you avoided the common mistake of citing the 1% end user IPv6
availability figure when claiming that IPv6 wasn't available in data
centres... ;)

- Mike


Re: PRISM: NSA/FBI Internet data mining project

2013-06-08 Thread Mike Jones
On 8 June 2013 12:12, Jimmy Hess mysi...@gmail.com wrote:

 On 6/7/13, Måns Nilsson mansa...@besserwisser.org wrote:
  Subject: Re: PRISM: NSA/FBI Internet data mining project Date: Fri, Jun
 07,
  2013 at 12:25:35AM -0500 Quoting jamie rishaw (j...@arpa.com):
  tinfoilhat
  Just wait until we find out dark and lit private fiber is getting
  vampired.
  /tinfoilhat
  I'm not even assuming it, I'm convinced. In Sweden, we have a law,
  that makes what NSA/FBI did illegal while at the same time legalising,

 Perhaps  strong crypto should be implemented on transceivers  at each
 end of every link,  so users could be protected from that without
 having to implement the crypto themselves at the application layer? :)

 --
 -JH


Encrypted wifi doesn't help if the access point is the one doing the
sniffing. How often are 'wiretaps' done by tapping into a physical line vs
simply requesting that a switch/router copy everything going through it to
another port? The CIA might use physical taps to monitor the Russian
government's traffic, but within the US I imagine they normally just ask the
target's ISP to copy the data to them.

To be automatic and 'just work' would also mean not having to configure the
identity of the devices at the other end of every link. In this case you'll
just negotiate an encrypted link to the CIA's sniffer instead of the switch
you thought you were talking to.

End-to-end encryption with secure automatic authentication is needed; it's
taking a while to gain traction, but DANE looks like the solution. When SSL
requires the overhead of getting a CA to re-sign everything every year, you
only use it when you have a reason to. When SSL is a single copy/paste
operation to set up, with no maintenance, it becomes much harder to justify
why you're not doing it. Unfortunately I haven't come across any good ideas
yet for p2p-type applications where you don't have anywhere to securely
publish your certificates.

- Mike


Re: cannot access some popular websites from Linode, geolocation is wrong, ARIN is to blame?

2013-03-02 Thread Mike Jones
Inline Reply

On 2 March 2013 21:58, Constantine A. Murenin muren...@gmail.com wrote:
 Dear NANOG@,

 I've had a Linode in Fremont, CA (within 173.230.144.0/20 and
 2600:3c01::/32) for over a year, and, in addition to some development,
 I sometimes use it as an ssh-based personal SOCKS-proxy when
 travelling and having to use any kind of public WiFi.

 Since doing so, I have noticed that most geolocation services think
 that I'm located in NJ (the state of the corporate headquarters of
 Linode), instead of Northern California (where my Linode is physically
 from, and, coincidentally or not, where I also happen to live, hence
 renting a Linode from a very specific location).

 Additionally, it seems like both yelp.com and retailmenot.com block
 the whole 173.230.144.0/20 from their web-sites, returning some
 graphical 403 Forbidden pages instead.

 ...

 I would like to point out that 173.230.144.0/20 and 2600:3c01::/32,
 announced out of AS6939, are allocated by Linode from their own
 ARIN-assigned allocations, 173.230.128.0/19 and 2600:3C00::/30, which
 Linode, in turn with their other ARIN-assigned space, allocates to 4
 of their distinct DCs in the US, in Dallas, Fremont, Atlanta and
 Newark.

 However, Linode does not maintain any individual whois records of
 which DC they announce a given sub-allocation from.  They also do not
 document their IPv6 assignments, either: if one of their customers
 misbehaves, the offended network would have no clue how to block just
 one customer, so, potentially, a whole set of customers may end up
 being blocked, through a wrong prefixlen assumption.


 I've tried contacting Linode in regards to whois, giving an example of
 some other smaller providers (e.g. vr.org) that label their own
 sub-allocations within their ARIN-assigned space to contain an address
 of the DC where the subnet is coming from, and asked whether Linode
 could do the same;  however, Linode informed me that they don't have
 any kind of mail service from the DCs they're at, and that their ARIN
 contact, effectively, said that they're already doing everything right
 in regards not having any extra whois entries with the addresses of
 their DC, since that would actually be wrong, as noone will be
 expecting mail for Linode at those addresses.  (In turn, it's unclear
 whether a much smaller vr.org has mail service at nearly a dozen of
 the DCs that they have their servers at, and which they provide as the
 addresses in ARIN's whois, but I would guess that they do not.)

 This would seem like a possible shortcoming of ARIN's policies and the
 whois database:  with RIPE, every `netname` has a `country` associated
 with it, seemingly without any requirements of a mailing address where
 mail could be received; but with ARIN, no state is ever provided, only
 a mailing address.  (I've also just noticed that RIPE whois now has an
 optional `geoloc` field in addition to the non-optional `country`.)

 Now, back to ARIN:  is Linode doing it right?  Is vr.org doing it
 wrong?  Are they both doing it correct, or are they both wrong?

You need to give me what you need to give me, but if you give me more
am I going to complain? What about if you miss something?


 And in regards to yelp and retailmenot; why are they blocking Linode
 customers in 173.230.144.0/20?  I've tried contacting both on multiple

Could be many reasons; I suspect they get little legitimate traffic
from there given their target audiences.

 occasions, and have never received any replies from yelp, but
 retailmenot has replied several times with a blanket someone may have
 tried to scrap, spam or proxy our site from this network.  I have

Quite likely; many geo-restricted sites also block hosting providers.

 repeatedly asked retailmenot if they'd block Verizon or ATT if
 someone tries to scrap or spam their web-site from those networks,
 too, but have never received any replies.  I have also tried

Residential provider: Block millions of users to stop a few scrapers.
Hosting provider: Block a couple of users to block 90% of the
scrapers, and all the places the scrapers go to when you block them.

 contacting Linode regarding this issue, and although they were very
 patient and tried troubleshooting the problem, reporting that it
 appears that other addresses within 173.230.144.0/20 are likewise
 blocked, but some of their other address ranges at another DC are not,
 they have not been able to get in touch with anyone at yelp or
 retailmenot to isolate the problem.


 Now, if you were operating yelp or rmn, would you not block an address
 range with a fishy geoloc like that of Linode?  I'm somewhat convinced
 that 403 Forbidden stems entirely out of some logic that notes that
 the geoloc data is likely fishy, or which [erroneously] concludes that
 the address range is used for anonymity purposes.

Have you done a lot of 'looking up IP addresses'?

Browse around some of the whois records from hosting providers, and
see if you can figure out 

Re: Muni fiber: L1 or L2?

2013-02-13 Thread Mike Jones
On 13 February 2013 12:34, Scott Helms khe...@zcorum.com wrote:
 Using the UK as a model for US and Canadian deployments is a fallacy.

I don't believe anyone was looking at the UK model? But now that you
mention it, the UK has a rather interesting model for fibre deployment:
a significant portion of the country has "fibre optic broadband"
available from multiple providers.

BT Openreach (and others on their infrastructure) offer "Fibre Optic
Broadband" over twisted pair, and Virgin Media offer "Fibre Optic
Broadband" over coax.

The UK's 'just pretend it's fibre' deployment method is cheaper than
both PON and SS. The only requirement is that you have a regulator that
doesn't care when companies flat out lie to customers.

- Mike



Re: Slashdot: UK ISP PlusNet Testing Carrier-Grade NAT Instead of IPv6

2013-01-18 Thread Mike Jones
On 19 January 2013 04:48, Doug Barton do...@dougbarton.us wrote:
 No, because NAT-like solutions to perpetuate v4 only handle the client side
 of the transaction. At some point there will not be any more v4 address to
 assign/allocate to content provider networks. They have seen the writing on
 the wall, and many of the largest (both by traffic and market share) have
 already moved to providing their content over v6.

Potentially another source of IPv4 addresses - every content network
(/hosting provider/etc) that decides they don't want to give their
customers IPv6 reachability is a future bankrupt ISP with a load of
IPv4 to sell off :)

- Mike



Re: Slashdot: UK ISP PlusNet Testing Carrier-Grade NAT Instead of IPv6

2013-01-17 Thread Mike Jones
On 17 January 2013 10:06, . oscar.vi...@gmail.com wrote:
 i am not network engineer, but I follow this list to be updated about
 important news that affect internet stability.

 NAT is already a problem for things like videogames.  You want people
 to be able to host a multiplayer game, and have his friends to join
 the game. A free to play MMO may want to make a ban for a bad person
 permanent, and for this banning a IP is useful,  if a whole range of
 players use a ip, it will be harder to stop these people from
 disrupting other people fun.  Players that can't connect to the other
 players whine on the forums, and ask the game devs to fix the problem,
 costing these people money. People that can't connect to other
 players, for a problem that is not in his side, or under his control,
 get frustrated.  This type of problems are hard to debug for users.

 The people on this list have a influence in how the Internet run, hope
 somebody smart can figure how we can avoid going there, because there
 is frustrating and unfun.

If you follow this list then you should already know the answer:
functional* IPv6 deployments.

- Mike

*Some ISPs have some very weird ideas that I hope never catch on.



Re: For those who may use a projector in the NOC

2013-01-17 Thread Mike Jones
On 18 January 2013 02:19, Eric Adler eapt...@gmail.com wrote:
 This appears to be an Epson / 3LCD marketing campaign.

 whois shows an admin contact at wintergroup.net.  wintergroup.net (on http)
 is the home to a marketing agency, their client links below include Epson
 and 3LCD; clicking 3LCD brings up a still image showing this page.
 Searching for 3LCD finds this Epson page: 
 http://global.epson.com/innovation/projection_technology/3LCD_technology/.
 http://3lcd.com/ has a very familiar 'feel' as well... and has an admin
 contact at Seiko Epson Corporation


 I won't get into display theory on this list (feel free to contact me if
 you want to discuss such)

 - Eric Adler
 Broadcast Engineer

The only thing I can think of that is relevant regarding projectors/monitors
in a NOC situation would be general eye-strain issues, which should be
taken into account in the same way as keyboard/chair positioning etc.
by whoever is responsible for health and safety. Anything beyond eye
strain is probably getting into colour reproduction discussions,
which are largely irrelevant in a NOC.

I, for example, have all my monitors set to a lower colour temperature
and dimmed as much as feasible; colour reproduction is terrible, but it's
great for avoiding eye strain. I switch back to reasonably normal
settings for watching videos and films etc., but during normal NOC
operation I doubt the colour accuracy needs to be able to distinguish
more than green/yellow/red (with maybe some shades in between).

- Mike



Re: Gmail and SSL

2013-01-01 Thread Mike Jones
On 1 January 2013 19:04, Keith Medcalf kmedc...@dessus.com wrote:
 Perhaps Googles other harvesters and the government agents they sell or 
 give user credentials to, don't work against privately (not under the 
 goverment thumb) encryption keys without the surveillance state expending 
 significantly more resources.

 Perhaps the cheapest way to solve this is to apply thumbscrews and have 
 google require the use of co-option freindly keying material by their victims 
 errr customers errr users.

In encryption terms there is no difference between a certificate
signed by an external CA and a self-signed certificate; in either
case only parties with the private key (which you should never send to
the CA) can decrypt messages encrypted with that public key.

Some CAs will offer to generate a key pair for you instead of you managing
your own keys, however that merely demonstrates that those CAs (and
anyone who uses that service) don't know how the certificates they are
issuing are meant to work. If anyone other than the party directly
identified by the public key ever gets a copy of the private key, then
those keys are no longer secure and the certificate should be revoked
immediately, as it no longer has any meaning*.
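
To illustrate the normal workflow (a minimal sketch using the Python
"cryptography" package; the hostname is a placeholder): you generate the key
pair yourself and only ever hand the CA a signing request containing the
public key:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate the key pair locally; this object never needs to leave your server.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The CSR carries only the public key plus identity details - that is all the CA ever sees.
    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mail.example.com")]))
           .sign(key, hashes.SHA256()))

    print(csr.public_bytes(serialization.Encoding.PEM).decode())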

But if you ignore facts (as most conspiracy theories do) and try to
argue it's part of a conspiracy to intercept data: we're talking
about hop-by-hop transport encryption, not end-to-end content
encryption; Google already have a copy of all the messages going
through their service anyway.

- Mike

* A CA signs to say "we have verified this is Google", not "this is
either Google or their CA or some other random person; well, really we
don't have a clue who it is, but someone gave us money to sign here" -
although the latter is probably more accurate in the real world.



Re: Big day for IPv6 - 1% native penetration

2012-11-20 Thread Mike Jones
On 20 November 2012 16:05, Patrick W. Gilmore patr...@ianai.net wrote:
 On Nov 20, 2012, at 08:45 , Owen DeLong o...@delong.com wrote:

 It is entirely possible that Google's numbers are artificially low for a 
 number
 of reasons.

 AMS-IX publishes stats too:
 https://stats.ams-ix.net/sflow/

 This is probably a better view of overall percentage on the Internet than a 
 specific company's content.  It shows order of 0.5%.

 Why do you think Google's numbers are lower than the real total?


They are also different stats, which is why they give such different numbers.

In a theoretical world with evenly distributed traffic patterns, if 1%
of users were IPv6-enabled it would require 100% of content to be IPv6
enabled before your traffic stats would show 1% of traffic going over
IPv6.

If these figures are representative (Google saying 1% of users and
AMS-IX saying 0.5% of traffic) then it would indicate that dual-stacked
users can push ~50% of their traffic over IPv6. If this is even close
to reality then that would be quite an achievement.
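
The back-of-the-envelope arithmetic behind that estimate, assuming
dual-stacked users generate about as much traffic as everyone else:

    v6_capable_users = 0.01    # Google: roughly 1% of users reach them over IPv6
    v6_traffic_share = 0.005   # AMS-IX: roughly 0.5% of exchanged traffic is IPv6

    # If dual-stacked users generate about as much traffic as everyone else, the
    # fraction of their own traffic they push over IPv6 comes out as:
    per_user_v6_fraction = v6_traffic_share / v6_capable_users
    print(per_user_v6_fraction)   # 0.5, i.e. about 50%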

- Mike



Re: Issues encountered with assigning all ones IPv6 /64 address? (Was Re: Issues encountered with assigning .0 and .255 as usable addresses?)

2012-10-23 Thread Mike Jones
On 23 October 2012 14:16, Rob Laidlaw laid...@consecro.com wrote:
 RFC 2526 reserves the last 128 host addresses in each subnet for anycast use.

IPv4 addresses ending in .0 and .255 can't be used either because the
top and bottom addresses of a subnet are unusable.

Why would Hetzner be making such assumptions about what is and is not
a valid address on a remote network? If you have a route to it then it
is a valid address that you should be able to exchange packets with;
any assumptions beyond that are almost certainly going to be wrong
somewhere.

Even if they did happen to correctly guess what sized subnets a remote
network is using and what type of access media that remote network is
using, I am pretty sure it would be wrong to assume that these
addresses can't be accessed remotely considering the only address that
is currently defined :)

I really hope this is down to some kind of bug and not something
someone did deliberately.

- Mike



Re: Throw me a IPv6 bone (sort of was IPv6 ignorance)

2012-09-24 Thread Mike Jones
On 24 September 2012 21:11, Adrian Bool a...@logic.org.uk wrote:

 On 24 Sep 2012, at 17:57, Tore Anderson tore.ander...@redpill-linpro.com 
 wrote:

 * Tore Anderson

 I would pay very close attention to MAP/4RD.

 FYI, Mark Townsley had a great presentation about MAP at RIPE65 today,
 it's 35 minutes you won't regret spending:

 https://ripe65.ripe.net/archives/video/5
 https://ripe65.ripe.net/presentations/91-townsley-map-ripe65-ams-sept-24-2012.pdf

 Interesting video; thanks for posting the link.

 This does seem a strange proposal though.  My understanding from the video is 
 that it is a technology to help not with the deployment of IPv6 but with the 
 scarcity of IPv4 addresses.  In summary; it simply allows a number of users 
 (e.g. 1024) to share a single public IPv4 address.

 My feeling is therefore, why are the IPv4 packets to/from the end user being 
 either encapsulated or translated into IPv6 - why do they not simply remain 
 as IPv4 packets?

 If the data is kept as IPv4, this seems to come down to just two changes,

 * The ISP's router to which the user connects being able to route packets on 
 routes that go beyond the IP address and into the port number field of 
 TCP/UDP.
 * A CE router being instructed to constrain itself to using a limited set of 
 ports on the WAN side in its NAT44 implementation.

 Why all the IPv6 shenanigans complicating matters?

While you could do something similar without the encapsulation, this
would require that every router on your network support routing on
port numbers; by using IPv6 packets it can be routed around your
network by existing routers. And it's not like anyone is going to be
deploying such a system without also deploying IPv6, so it's not
adding any additional requirements to do it that way.

- Mike



Re: Layer2 over Layer3

2012-09-13 Thread Mike Jones
On 12 September 2012 23:23, Philip Lavine source_ro...@yahoo.com wrote:
 To all,

 I am trying to extend a layer2 connection over Layer 3 so I can have 
 redundant Layer connectivity between my HQ and colo site. The reason I need 
 this is so I can give the appeareance that there is one gateway and that 
 both data centers can share the same Layer3 subnet (which I am announcing via 
 BGP to 2 different vendors).

 I have 2 ASR's. Will EoMPLS work or is there another option?

 Philip

Depending on your specific requirements, if you simply want to be able
to address hosts out of the same subnet and don't need actual layer 2
connectivity, you might also want to consider proxy ARP (+NDP). This
trades some issues for others: for example, you don't need to worry
about MTU issues, but it won't forward broadcast packets (which could
be either an advantage or a disadvantage) or non-IP packets, and it
requires you to configure the routers so they know which addresses are
at the other site. Lots of disadvantages, but there are situations
where it might be an option.

- Mike



Re: Finding Name Servers (not NS records) of domain name

2012-08-17 Thread Mike Jones
On 17 August 2012 13:14, Matthew Palmer mpal...@hezmatt.org wrote:
 On Wed, Aug 15, 2012 at 06:10:25PM -0400, Anurag Bhatia wrote:
 Now as you would be knowing if I do regular dig with ns, it provides NS
 records. However I was able to find nameservers by digging gTLD root for
 gTLD based domains. This works for .com/net/org etc but again fails for say
 .us, .in etc. I was wondering if there's an easy way to do it rather then
 running script on thousands of domain names again  again digging registry
 specific nameservers?

 I religiously use http://squish.net/dnscheck/ the moment I suspect *any*
 sort of DNS hinkiness.  Verbose, but *damn* if it doesn't hand me the answer
 practically every time.


It doesn't say anything about both of the servers for your domain
currently being broken ;)

http://nswalk.com/?hostname=hezmatt.org&type=A

- Mike



Re: BGPttH. Neustar can do it, why can't we?

2012-08-06 Thread Mike Jones
On 6 August 2012 16:11, Leo Bicknell bickn...@ufp.org wrote:
 In a message written on Mon, Aug 06, 2012 at 10:05:30AM -0500, Chris Boyd 
 wrote:
 Speaking as someone who does a lot of work supporting small business IT, I 
 suspect the number is much lower.  As a group, these customers tend to be 
 extremely cost averse.  Paying for a secondary access circuit may become 
 important as cloud applications become more critical for the market segment, 
 but existing smart NAT boxes that detect primary upstream failure and switch 
 over to a secondary ISP will work for many cases.  Yes, it's ugly, but it 
 gets them reconnected to the off-site email server and the payment card 
 gateway.

 I don't even think the dual-uplink NAT box is that ugly of a solution.
 Sure it's outbound only, but for a lot of applications that's fine.

 However, it causes me to ask a differnet question, how will this
 work in IPv6?  Does anyone make a dual-uplink IPv6 aware device?
 Ideally it would use DHCP-PD to get prefixes from two upstream
 providers and would make both available on the local LAN.  Conceptually
 it would then be easy to policy route traffic to the correct provider.
 But of course the problem comes down to the host, it now needs to
 know how to switch between source addresses in some meaningful way,
 and the router needs to be able to signal it.

Multiple prefixes are very simple to do without needing a dual-uplink
router: just get two normal routers and have them both advertise their
own prefixes. The problem is the policy routing that you mentioned a
dual-WAN router would do.

A client that sees prefix A from router A and prefix B from router B
should, IMO, prefer router A for traffic from prefix A and router B for
traffic from prefix B. Current implementations seem to abstract the
addressing and the routing into completely separate things, resulting
in the client picking a default router and then using it for all
traffic. This isn't too much of a problem if neither of your upstreams
does any source address filtering, but I would much rather OS vendors
change their implementations than tell ISPs to stop filtering their
customers.
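
As a sketch of the behaviour being argued for (hypothetical logic and
documentation prefixes, not how current host stacks actually behave):

    import ipaddress

    # Hypothetical host behaviour: send traffic sourced from prefix A via router A
    # and traffic sourced from prefix B via router B (prefixes learned from each RA).
    routers = {
        ipaddress.IPv6Network("2001:db8:a::/48"): "fe80::a",   # router A
        ipaddress.IPv6Network("2001:db8:b::/48"): "fe80::b",   # router B
    }

    def pick_next_hop(source_address: str) -> str:
        src = ipaddress.IPv6Address(source_address)
        for prefix, router in routers.items():
            if src in prefix:
                return router
        return next(iter(routers.values()))   # fall back to any default router

    print(pick_next_hop("2001:db8:b::42"))   # fe80::b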

 As messy as IPv4 NAT is, it seems like a case where IPv6 NAT might
 be a relatively clean solution.  Are there other deployable, or nearly
 deployable solutions?

If you have a router that sends out RAs with lifetime 0 when the
prefix goes away, then this should be deployable for poor man's
failover (the same category I put IPv4 NAT in). However, there are
some bugs in client implementations, and some clients might refuse to
use that prefix/router again until they have rebooted, which could be
an issue for infrequently rebooted clients if the other connection
later goes down. A lifetime of 1 instead of 0 should in theory work
around this behaviour, but I haven't seen any implementations that do
this and haven't tested it myself.
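
For anyone wanting to experiment, this is roughly what such a "deprecating"
RA looks like on the wire - a sketch using scapy, with the interface name
and prefix as placeholders, and untested against the client bugs mentioned
above:

    from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo, send

    # Router lifetime 0 tells hosts to stop using this router as a default gateway;
    # a preferred lifetime of 0 deprecates the prefix for new connections.
    ra = (IPv6(dst="ff02::1")
          / ICMPv6ND_RA(routerlifetime=0)
          / ICMPv6NDOptPrefixInfo(prefix="2001:db8:dead::", prefixlen=64,
                                  validlifetime=0, preferredlifetime=0))
    send(ra, iface="eth0")   # needs root; "eth0" is a placeholder interface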

It's a shame that this IPv6 stuff doesn't work properly out of the
box. I fear that there will be a lot of hackery due to early
limitations that will stick around - for example, if NAT becomes
readily available before clients can properly handle multiple prefixes
from multiple routers (and DHCP-PD chaining, and the various other
"we're working on it" things), then even once clients start being able
to do it properly I suspect people will still stick to their beloved
NAT because that's what they are used to.

- Mike



Re: Verizon FiOS - is BGP an option?

2012-08-04 Thread Mike Jones
On 4 August 2012 04:07, Frank Bulk frnk...@iname.com wrote:
 As someone else posted, many FTTH installations are centralized as much as
 possible to avoid having non-passive equipment in the plant, allowing for
 the practicality of onsite generators.  That's what we do.  But for those
 who have powered nodes in the field (distributed/tiered BPON or GPON
 configurations and cable plants), it's not realistic to keep them all
 powered.  Despite what the DOT may be able to do.


If only they had some kind of copper cabling running from some kind of
central location (like perhaps the same place the fiber runs to; I
imagine the same buildings that the old POTS lines ran to) all the
way out to the huts full of powered equipment (which would likely be
next to the old POTS junction boxes) that, as a result of the new
fiber installs, would have a few pairs unused. Then they could have
hooked those up as backup power for when grid power becomes
unavailable over a large area (poor power-distribution efficiency
would probably stop you wanting to power it that way all the time).

It's a shame that there isn't any such copper infrastructure owned by
those same companies already in place, but perhaps they could have
thrown an extra copper cable into the middle of that fiber bundle at
the same time they were running it, at negligible additional cost.

- Mike



Re: using reserved IPv6 space

2012-07-15 Thread Mike Jones
On 15 July 2012 16:58, Grzegorz Janoszka grzeg...@janoszka.pl wrote:
 Allowing 2000::/3 is fine as well. Btw - what are the estimates - how
 long are we going to be within 2000::/3?


I expect it to be long enough that we can enjoy lots of discussions
about how to deal with broken route filtering and broken software that
assumes only 2000::/3 is valid, and we can talk about how we should
have seen this coming and done something differently to prevent it.

- Mike



Re: Cool IPs: 1.234.35.245 brute force SSHing

2012-02-26 Thread Mike Jones
On 26 February 2012 09:46, Richard Barnes richard.bar...@gmail.com wrote:
 While you're in Korea, you could talk to Samsung as well about
 123.32.0.0/12 (including 123.45.67.89).  Closer to home, you could
 also talk to ATT about 12.0.0.0/8 (12.34.56.78).
 --Richard

Or, if you don't mind a little unsolicited traffic, you could always
ask APNIC if you could have 1.2.3.0/24, which is unlikely to ever get
assigned by the normal process (it is currently assigned to the debogon
project, but I don't think they are actively doing anything with it
other than waiting for it to become usable some day).

I also had a look, and it appears that along with 8.8.8.0/24 Google also
got assigned 4.3.2.0/24 by Level3, but they aren't currently using it.
I wonder which company has the best collection of IP assignments...

- Mike



Re: Question regarding anycasting in CDN setup

2012-02-01 Thread Mike Jones
On 1 February 2012 20:25, Anurag Bhatia m...@anuragbhatia.com wrote:
snip
 Now my question here is - why this setup and not simply using having a A
 record for googlehosted.l.googleusercontent.com. which comes from any
 anycasted IP address space? Why not anycasting at CDN itself rather then
 only at DNS layer?

You are confusing anycasting with offering different results.

I can have an anycast DNS setup where all my servers give the same
response (example: most DNS providers). I can also have a single DNS
server give out 192.0.2.80 to queries sourced from a US IP address,
198.51.100.80 to queries sourced from a German IP address, and
203.0.113.80 to queries sourced from a Chinese address (djbdns has a
module for this, for example).
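
A toy illustration of that source-based behaviour (nothing to do with
Google's real logic; all addresses are documentation ranges):

    import ipaddress

    # Toy source-based answers: return a different record depending on the query source.
    answers = [
        (ipaddress.IPv4Network("198.51.100.0/24"), "192.0.2.80"),     # e.g. "US" clients
        (ipaddress.IPv4Network("203.0.113.0/24"),  "198.51.100.80"),  # e.g. "German" clients
    ]
    DEFAULT_ANSWER = "203.0.113.80"

    def answer_for(query_source: str) -> str:
        src = ipaddress.IPv4Address(query_source)
        for prefix, address in answers:
            if src in prefix:
                return address
        return DEFAULT_ANSWER

    print(answer_for("203.0.113.9"))   # 198.51.100.80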

I would guess that Google probably have a highly customised algorithm
which uses a combination of the source IP and the node that your query
arrived at as part of the process for deciding what answer to give
you, along with dozens of other internal factors.

Although I do sometimes wonder why they use CNAME chains in cases
where the same servers are authoritative for the target name anyway.

If you were wondering why they direct you to the unicast addresses of
the local datacentre instead of just giving an anycast address which
your nearest datacentre would answer: their algorithm might decide
that it wants to serve you content from the second-closest datacentre
because the closest one is near capacity, and anycast can't do that.

- Mike



Re: IP addresses are now assets

2011-12-02 Thread Mike Jones
On 2 December 2011 20:01, Henry Yen he...@aegisinfosys.com wrote:
 On Fri, Dec 02, 2011 at 12:37:29PM -0700, joshua sahala wrote:
 On Thu, Dec 1, 2011 at 10:20 PM, John Curran jcur...@arin.net wrote:[cut]
  Your subject line (IP addresses are now assets) could mislead folks,
 [cut]
 ianal, but the treatment of ip addresses by the bankruptcy court would
 tend to agree with the definition of an asset from webster's new world
 law dictionary (http://law.yourdictionary.com/asset):

    Any property or right that is owned by a person or entity and has
    monetary value. See also liability.

    All of the property of a person or entity or its total value;
    entries on a balance sheet listing such property.

    intangible asset
       An asset that is not a physical thing and only evidenced by a
       written document.


 the addresses are being exchanged for money, in order to pay a
 debt...how is this not a sale of an asset?

 I guess I'm in the same minority in that I agree with you.

 Note that Asset !== Property.

 The IP addresses in question are unquestionably Assets (albeit
 Restricted assets or hard-to-value assets), but not so evidently
 Property.  So, the original subject line IP addresses are now assets
 seems accurate; it does not say IP addresses are now property.
 Conflation of the two terms is in the mind of the reader, and perhaps
 that's what John Curran was seeking to clarify.


What about land? It's a public resource that you've paid money to
someone for in exchange for transferring their rights over that public
resource to you.

That said, I think in the case of land shortages there is an argument
that land no longer being used by someone should be freed up for use
by new people. Although I'm not entirely sure how to justify an
"instead of selling it you have to return it so it can be allocated to
whoever has a need for it" policy without also justifying the same for
my house, which has spare rooms that I don't need.

- Mike



Re: AT&T GigE issue on 11/19 in Kansas City

2011-11-30 Thread Mike Jones
On 30 November 2011 17:45, Joe Maimon jmai...@ttec.com wrote:


 Brad Fleming wrote:


 In either case I'm a customer and will likely never be told what went
 wrong. I'm OK with that so long as it doesn't happen again!




 Does being told what happened somehow prevent it from happening it again?

 What is the utilitarian value in an RFO?


"The outage was caused by an engineer turning off the wrong router; it
has been turned back on and service restored."
"The outage appears to have been caused by a bug in the router's
firmware; we are working with the vendor on a fix."
"There was an outage; now service is back up again."

For a brief isolated incident you probably don't care enough to change
providers in any case (if you care about outages that much, you just
divert traffic to your other redundant connections). But say you've
had two outages in a week with that given as the explanation - which one
makes you feel more inclined to go shopping for another
provider?

Technically the first provider knows the cause of the outages and it
has been fixed, while the second one doesn't know for sure what the
problem is and may or may not have fixed it; however, I suspect most
people would not read it that way. As for the third provider, I don't
think there's any way to interpret that response that makes them look
good.

From a utilitarian point of view, the more detail customers get the
less angry they normally are, and I believe "less angry" is a
generally accepted form of "happier" in the ISP world (at least some
ISPs seem to think so). Therefore, for utilitarian reasons you should
write nice long detailed reports - unless the cause is incompetence,
in which case you should probably just shut up and let people assume
incompetence instead of confirming it, as confirming it might make
them less happy. Although one could also argue that by being honest
about incompetence your customers will likely change providers sooner,
causing an overall increase in their level of happiness. This
utilitarian thing is complicated.

- Mike



Link local for P-t-P links? (Was: IPv6 prefixes longer then /64: are they possible in DOCSIS networks?)

2011-11-30 Thread Mike Jones
On 1 December 2011 00:55, Jimmy Hess mysi...@gmail.com wrote:
 Please explain.    What are the better ways that you would propose
 of mitigating ND table overflows?
 If you can show a rational alternative, then it would be persuasive as
 a better option.


Link-Local?

For true P-t-P links I guess you don't need any addresses on the
interfaces (I don't on my 6in4 tunnels), but I assume most people are
referring to Ethernet-type cross connects, so link-local addresses.

As long as each device has at least one address assigned somewhere
(loopback?) that it can use for management/packet generation purposes,
you don't need an address on every link like you used to with IPv4.
Now that there are plenty of addresses you don't need as many :)

I suspect there are probably some practical issues with link-local on
some kit and some network configurations due to the lack of people doing
it this way, but in theory there shouldn't be any reason you
couldn't use link-local for all your links and just have a /128
assigned to each router's loopback for management/packet generation
purposes. This would remove the overhead of worrying about address
assignment for those links; you just need a single /128 per router.
Depending on the network it might be better to statically set the link-
locals rather than using automatic ones (fe80::1 and fe80::2 for
'upstream' and 'downstream', or whatever rules you currently use for
deciding which end is 1 and which is 2).

You could also do something similar for datacentres assigning blocks
to customer servers. Instead of configuring a /64 for each customer,
or a /48 and then giving customers blocks inside that to use, just
configure a single /64 and give each customer a single address from
that block (with unassigned ones filtered), then route a /64 to them for
any extra addresses they might want. Chances are that if they need more
than a couple of addresses they probably want them routed to them
anyway, rather than using ND for them all.

The issue of dynamic assignments to end customers in a non-datacentre
setting is a little more difficult, but I wonder how badly CPEs would
break if you tried using DHCPv6 to give them back their link-local
addresses and then had DHCPv6-PD delegate by routing to their link-local
address - probably a lot worse than if you just used a /112 of global
space. I don't think this area has too many issues though, because DHCP
ensures that the actual addresses on the links are all known, so this
just needs to be used for filtering unknown addresses.

Then there's just the question of what to do about the edge networks
where SLAAC might be used and where you don't have such strict control
over address assignment; I'll pass on that one for now.

Link-locals aren't as useful nearer the edges, but for backbones
and datacentre networks they should be able to solve most of
the biggest problems with ND attacks. On the edge networks where just
using link-locals isn't really an option, you can probably put a
stateful firewall in quite easily, as these will be the edge default
gateways that clients are sending their traffic directly to, rather
than this needing to be done in the core. Although there really should
be an option for users to open the firewall for inbound connections.

Am I a complete idiot missing some obvious major issues with link-
locals, or am I just the only one not thinking IPv4-think? Opinions?
:)

- Mike



Re: Link local for P-t-P links? (Was: IPv6 prefixes longer then /64: are they possible in DOCSIS networks?)

2011-11-30 Thread Mike Jones
On 1 December 2011 02:22, Ray Soucy r...@maine.edu wrote:
 I for one get really irritated when my traceroutes and pings are
 broken and I need to troubleshoot things. ;-)  But I guess something
 has to give.


My home connection gets IPv6 connectivity via a tunnelbroker tunnel. I
didn't use the tunnel interface addresses in the instructions but
configured it without addresses, and traceroutes (in all directions) show
up with the router's single assigned global address.

Routers would still have a single global address assigned to a loopback
(or anywhere) for management/packet generation purposes, so traceroutes
should work fine, although rather than getting a per-interface address
you'll get a per-router address. What addresses do you currently get
in the real world? Some routers give a loopback address, some give the
ingress interface, some give the egress interface; all you can safely
assume from the address is the router it hit.

- Mike



Re: Outgoing SMTP Servers

2011-10-28 Thread Mike Jones
On 28 October 2011 16:41,  valdis.kletni...@vt.edu wrote:
 You *do* realize that for all your nice Thei Internet Is Not A Commons
 ranting, the basic problem is that some people (we'll call them spammers) *do*
 think that (a) it's a commons (or at least the exact ownership of a given
 chunk is irrelevant), and (b) they're allowed to graze their sheep upon it.

If someone keeps putting their animals in my garden then some
countries would permit me to shoot them and sue the owner for the cost
of the bullets. Even those with more restrictive property rights laws
would still permit me to throw them off of my land and sue the owner
for any damage they caused me.

On the Internet, when people start shooting their sheep they think they
are the victims and go to the Dutch police (sorry, I mean the police of
whichever jurisdiction the hypothetical entity is from), complaining
that they are being deprived of their right to abuse people's gardens.

- Mike



Re: Outgoing SMTP Servers

2011-10-26 Thread Mike Jones
On 26 October 2011 05:44, Owen DeLong o...@delong.com wrote:
 Mike recommends a tactic that leads to idiot hotel admins doing bad things.
 You bet I'll criticize it for that.

 His mechanism breaks things anyway. I'll criticize it for that too.


Just to clarify, I was merely pointing out a possible argument behind
someone doing it that way. For a hotel-wifi-type network I would
consider it a valid option that is arguably (to some) better than
straight blocking for the average user; for other types of networks
with more long-term user bases I would be very surprised if there was
any justification for redirecting as opposed to simply blocking. If
someone were asking for my advice on deploying a network like that, I
would have to point out that the extra effort required to
deploy/support it is unlikely to be worth it. Blocking port 25 is
unlikely to cause much of a problem compared to a single incident with
that SMTP server that your hotel now needs to maintain.

In a perfect world we would all have as many static, globally routed IP
addresses as we want with nothing filtered. In the real world, a
residential ISP who gives their customers globally routable IPv4
addresses for each computer (i.e. a CPE that supports multiple
computers without NAT) with no filtering at all is probably going to
have to hire more support staff to deal with it, even before people
from this list start null routing their IP space for being a rogue ISP
that clearly doesn't give a damn, etc. :)

Perhaps our next try with IPv6 can be a perfect world where hosts are
secure enough for open end to end connectivity and infected machines
are rarely a problem? IPv6 enabled systems are more secure than a lot
of the systems we have floating around on IPv4 networks, but I still
think we're going to end up with port blocking becoming reasonably
common on IPv6 as well once that starts getting widely deployed to
residential users.

- Mike



Re: Outgoing SMTP Servers

2011-10-25 Thread Mike Jones
On 25 October 2011 20:52, Alex Harrowell a.harrow...@gmail.com wrote:
 Ricky Beam jfb...@gmail.com wrote:

Works perfectly even in networks where a VPN doesn't and the idiot
hotel
intercepts port 25 (not blocks, redirects to *their* server.)

--Ricky

 Why do they do that?


My home ISP runs an open relay on port 25 with IP-based authentication,
so I might configure my laptop's email client to send email via
smtp.myisp.com port 25. (Many, perhaps most, residential ISPs have
unauthenticated relays; even ISPs that tell you to use authentication
often have another server next to it that doesn't need authentication
for customer IP space.)

If the hotel simply blocks port 25 then my email is broken. If they
allow it through unchanged then my email is still broken, as my ISP's
relay won't accept mail coming from the hotel's IP space. However, if
the hotel redirects port 25 to their own open relay then in theory my
email should work fine.

They could always tell people "there is a relay at 10.0.0.25 so you
can change your settings to use that"; however, by redirecting all port
25 traffic there they are effectively force-auto-configuring anyone
who was already configured to send via an unauthenticated server on
port 25. They are probably acting under the assumption that the only
people using port 25 are using it for unauthenticated access; I believe
most servers that do use authentication tell users to use alternate
ports, so this is probably a reasonable assumption.

Compared to straight blocking of port 25 it's probably better, as long
as the relay it redirects you to works properly so you don't have
to try to diagnose issues. However, considering the quality of the
average hotel network, I suspect most of the hotels trying to do this
probably have it set to redirect to a dead server anyway.

- Mike



Re: Open Letters to Sixxs

2011-09-15 Thread Mike Jones
On 15 September 2011 15:12, Meftah Tayeb tayeb.mef...@gmail.com wrote:
 ok, that's a positive answer.
 but let me ask you a question:
 do HE.NET peer with cogent? level3?

  4   189 ms   134 ms99 ms  10gigabitethernet7-4.core1.nyc4.he.net
[2001:470:0:3e::1]
  5   131 ms   152 ms   111 ms  2001:470:0:202::2
  6   144 ms   147 ms   238 ms  2001:1900:19:7::4
  7   132 ms   241 ms   143 ms  vl-4060.car2.NewYork2.Level3.net
[2001:1900:4:1::fe]
[Jitter is my cable connection, not reflective of the performance of
HE's network]

As for Cogent - does anyone really care? This is only a problem for
reaching a single-homed network behind Cogent, and anyone running such
a network already knows that their IPv6 connectivity doesn't work
properly and that they are the broken ones.

Whatever you think of the issues surrounding the peering dispute (I am
sure at least Comcast agree with Cogent that a Tier 1 network should
pay what is essentially a Tier 2 network for peering!), the fact
remains that HE did get there first with their de facto Tier 1 status,
and for the time being at least "working IPv6" realistically means "a
working IPv6 connection to HE and its peers".

The more users and content behind HE and its peers that are not
reachable from Cogent, the better, as it puts more pressure on Cogent
to start behaving themselves and peering properly like everyone else.

- Mike



Re: Microsoft deems all DigiNotar certificates untrustworthy, releases updates

2011-09-12 Thread Mike Jones
On 12 September 2011 18:39, Robert Bonomi bon...@mail.r-bonomi.com wrote:
 Seriously, about the only way I see to ameliorate this kind of problem is
 for people to use self-signed certificates that are then authenticated
 by _multiple_ 'trust anchors'.  If the end-user world raises warnings
 for a certificate 'authenticated' by, say, fewer than five separate entities,
 then the compromise of any _single_ anchor is of pretty much 'no' value.
 Even better, let the user set the 'paranoia' level -- how many different
 'trusted' authorities have to have authenticated the self-signed certificate
 before the user 'really trusts' it.

So if I want my small website to support encryption, I now have to pay
5 companies, and hope that all my users have those 5 CAs in their
browser? Much better to use the existing DNS infrastructure (that all
5 of them would likely be using for their validation anyway), and not
have to pay anyone anything.

- Mike



Why are we still using the CA model? (Re: Microsoft deems all DigiNotar certificates untrustworthy, releases updates)

2011-09-11 Thread Mike Jones
On 11 September 2011 16:55, Bjørn Mork bj...@mork.no wrote:
 You can rewrite that: Trust is the CA business.  Trust has a price.  If
 the CA is not trusted, the price increases.

 Yes, they may end up out of business because of that price jump, but you
 should not neglect the fact that trust is for sale here.


The CA model is fundamentally flawed: you have CAs whose sole claim to
trustworthiness is the fact that they paid for an audit (that's for
Microsoft's programme; the requirements for others are lower), and they
then issue intermediate certificates to other companies (many web hosts
and other minor companies have them) whose sole claim to
trustworthiness is the fact that they paid for an intermediate
certificate. All of those companies, organisations and people are then
considered trustworthy enough to confirm the identity of my web server,
despite the fact that none of them have any connection at all to me or
my website.

There is already a chain of trust down the DNS tree, and if that is
compromised then my SSL is already compromised (if they control my
domain, they can "verify" they are me and get a certificate). So what
happened to RFC 4398 and other such proposals? EV certificates have a
different status and probably still need the CA model; however, with
standard SSL certificates the only validation done these days is
checking that someone has control over the domain. DNSSEC deployment is
advanced enough now to do that automatically at the client. We just
need browsers to start checking for certificates in DNS when making an
HTTPS connection (and, if one is found, to do client-side DNSSEC
validation - I don't trust my ISP's DNS servers to validate something
like that, considering they are the ones likely to be intercepting my
connections in the first place!).
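
As a rough sketch of the client-side check being described (the domain,
the record name and the use of dnspython are my own illustrative
assumptions; the sketch borrows the TLSA record that later standardised
this idea, ignores its usage/selector/matching-type fields, and does no
DNSSEC validation of its own):

    import hashlib
    import ssl
    import dns.resolver   # dnspython

    host = "www.example.com"   # placeholder domain

    # Hash the certificate the server actually presents...
    pem = ssl.get_server_certificate((host, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    presented = hashlib.sha256(der).hexdigest()

    # ...and compare it against what the domain owner published in DNS.
    # A real client would also insist on a DNSSEC-validated answer.
    answers = dns.resolver.resolve("_443._tcp." + host, "TLSA")
    published = {record.cert.hex() for record in answers}

    if presented in published:
        print("certificate matches the record published in DNS")
    else:
        print("mismatch - treat the connection as untrusted")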

It will take a while to get updated browsers rolled out to enough
users for it do be practical to start using DNS based self-signed
certificated instead of CA-Signed certificates, so why don't any
browsers have support yet? are any of them working on it?

- Mike



Re: NAT444 or ?

2011-09-08 Thread Mike Jones
As HTTP seems to be a major source of short-lived connections, and
several large ISPs have demonstrated that large-scale transparent HTTP
proxies seem to work just fine, you could also move the IPv4 port 80
traffic from the CGN to a transparent HTTP proxy. Beyond any benefit
from caching keeping traffic local, it can also combine 1000 users
trying to load facebook into a handful of persistent connections to
the facebook servers. The proxy can of course also have its own global
IPv4 address rather than going through the NAT. I have no experience
with large-scale HTTP proxy deployments, but I strongly suspect a
single HTTP proxy can handle traffic for a lot more users than the low
hundreds currently being suggested for NAT444, and can be scaled out
separately if required.

As an end user this is probably a little worse, with HTTP coming from
a different IP address than everything else, but not that much worse.
As a provider it may be much easier to scale to larger numbers of
customers. The proxy can also take IPv4-only users to a dual-stacked
site over IPv6; I am under no illusions here - even with IPv6 delivered
to every customer, you will still have customers behind IPv4-only NAT
routers they bought themselves for quite a while. With some DNS tricks
this might be useful for those users reaching IPv6-only sites, although
it would probably be better if they were unable to reach those sites
at all, to give them an incentive to fix their IPv6.
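
As a rough sketch of the IPv4-to-IPv6 bridging idea (just a raw TCP
forwarder rather than a real HTTP proxy; the origin name, listening
port and thread-per-connection layout are placeholders I've picked for
illustration):

    import socket
    import threading

    ORIGIN_HOST = "www.example.com"   # hypothetical dual-stacked origin
    ORIGIN_PORT = 80

    def pipe(src, dst):
        # Copy bytes one way until either side closes.
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        finally:
            src.close()
            dst.close()

    def handle(client):
        # The client arrived over IPv4; reach the origin over IPv6 only.
        info = socket.getaddrinfo(ORIGIN_HOST, ORIGIN_PORT,
                                  socket.AF_INET6, socket.SOCK_STREAM)
        upstream = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        upstream.connect(info[0][4])
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # IPv4 side
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))
    listener.listen(128)
    while True:
        conn, _addr = listener.accept()
        handle(conn)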

On 7 September 2011 21:37, Leigh Porter leigh.por...@ukbroadband.com wrote:
 Other simple tricks such as ensuring that your own internal services such as 
 DNS are available without traversing NAT also help.

As obvious as this probably is, I'm sure someone will overlook it!
Similarly, providers with CDN nodes in their network may want to talk
to the CDN operator about having those nodes reached directly from the
internal addresses to avoid traversing the NAT, and I'm sure there are
other services like this as well.

- Mike



Re: VRF/MPLS on Linux

2011-08-23 Thread Mike Jones
On 23 August 2011 14:45,  na...@rhemasound.org wrote:
 While I have found some information on a project called linux-mpls I am 
 having a hard time finding any solid VRF framework for Linux.  I have a 
 monitoring system that needs to check devices that sit in overlapping private ip 
 space, and I was wondering if there is any way I could use some kind of VRF 
 type solution that would allow me to label the site the traffic is intended 
 for.  The upstream router supports VRF/MPLS, but I need to know how I can get 
 the server to label the traffic.  I would appreciate any input.

I would probably go for the suggestion of (ab)using QoS tags for the
routing table selection, but just to throw this alternate idea out
there:

1.0.0.0/8 1:1 NATed to 10.0.0.0/8 marked to use routing table 1, which
routes to network 1
2.0.0.0/8 1:1 NATed to 10.0.0.0/8 marked to use routing table 2, which
routes to network 2
etc

That way your application layer won't need any additional logic and
can just deal with the sites as separate, non-overlapping IP spaces.
This won't work if you have too many overlapping networks (but then
Linux only supports 252 additional routing tables anyway, AFAIK) or if
you need external connectivity that can't be proxied.
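
A rough sketch of that idea with the usual ip/iptables tools driven
from Python, run as root (the prefixes, table numbers, fwmarks and
next-hop addresses are placeholders, and only two sites are shown):

    import subprocess

    # alias prefix -> (real prefix, routing table, fwmark, next hop to that site)
    SITES = {
        "1.0.0.0/8": ("10.0.0.0/8", "1", "1", "192.0.2.1"),
        "2.0.0.0/8": ("10.0.0.0/8", "2", "2", "192.0.2.2"),
    }

    def run(*cmd):
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    for alias, (real, table, mark, gateway) in SITES.items():
        # Mark locally generated packets by their original (alias) destination...
        run("iptables", "-t", "mangle", "-A", "OUTPUT", "-d", alias,
            "-j", "MARK", "--set-mark", mark)
        # ...rewrite the alias prefix onto the overlapping real prefix...
        run("iptables", "-t", "nat", "-A", "OUTPUT", "-d", alias,
            "-j", "NETMAP", "--to", real)
        # ...and let the mark select the per-site routing table.
        run("ip", "rule", "add", "fwmark", mark, "table", table)
        run("ip", "route", "add", real, "via", gateway, "table", table)

The monitoring system then just addresses site 1's devices as 1.x.x.x
and site 2's as 2.x.x.x; connection tracking maps the replies back.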

In a similar manner, if your tools support IPv6 you could have a /96
that is NAT64'ed onto each different network. I'm not sure about this
for a production setup, although it would have the added benefit that
you could expose these routes to your management network to provide
easier access from your other machines if you wanted to.

- Mike