Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Blair Trosper
That's sort of what I meant to say.  I did not articulate it well.

The problem *with* AWS is that in VPC, the internal network space is local
to each region.  So, in theory, I could get 10.1.2.3 in two regions on two
instances.  In VPC, you can also designate your own subnets, which makes
interconnecting the disparate regions a little tougher.

But, as you say, IPv6 would be an elegant solution to that problem...and
that's what I meant to articulate.  IPv6 as a region unification tool as
well as an Internet-facing protocol.

On Tue, Feb 24, 2015 at 12:27 PM, Luan Nguyen lngu...@opsource.net wrote:

 Shouldn't it be the other way around? IPv6 as the unique universal
 external network, and you define your own IPv4 within your cloud context,
 separate from the cloud provider network and from other customers. So if
 you have contexts in different regions, you can interconnect using layer 3
 or layer 2 through the cloud provider network...bring your own IPv4. If
 you need Internet access, you'll get NATted. If you need connections to
 your branches/HQs, etc., build your own tunnel or use the cloud provider,
 which by the way gives you your own VRF, so no need to worry about
 overlapping anything.
 No one heard of Dimension Data Cloud? :)

 snip

Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Owen DeLong
Amazon is not the only public cloud.

There are several public clouds that can support IPv6 directly.

I have done some work for and believe these guys do a good job:

Host Virtual (vr.org)

In no particular order and I have no relationship with or loyalty or benefit 
associated with any of them. I neither endorse, nor decry any of the following:

Linode
SoftLayer
RackSpace

There are others that I am not recalling off the top of my head.

Owen

 On Feb 23, 2015, at 07:52 , Ca By cb.li...@gmail.com wrote:
 
 snip
 Wouldn't it be nice if Amazon supported IPv6 in VPC?
 
 I have disqualified several projects from using the public cloud and put
 them in the on-premises private cloud because Amazon is missing this key
 scaling feature -- IPv6.  It is odd that Amazon, a company with scale
 deeply in its DNA, fails so hard on IPv6.  I guess they have a lot of
 brittle technical debt they can't upgrade.
 
 I suggest you go with private cloud if possible.
 
 Or, you can double NAT non-unique IPv4 space.
 
 Regarding 100.64.0.0/10, despite what the RFCs may say, this space is just
 an augment of RFC1918, and I have already deployed it as such.
 
 CB



Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Owen DeLong
As one of the authors involved in what eventually became RFC6598, I can say
this isn’t entirely accurate.

100.64/10 is intended as space to be used by service providers for dealing with 
situations where additional shared private address space is required, but it 
must be distinct from the private address space in use by their customers. 
Stacked NAT in a CGN scenario is only the most common example of such a 
situation.

The application described is another example, though if the application 
provider’s customers are with ISPs that start doing CGN and using RFC6598 for 
that process, some difficulties could arise which the application provider 
would have to be prepared to cope with.

Owen

 On Feb 23, 2015, at 10:52 , Benson Schliesser bens...@queuefull.net wrote:
 
 Hi, Eric -
 
 Bill already described the salient points. The transition space is meant to 
 be used for cases where there are multiple stacked NATs, such as CGN with 
 CPE-based NAT. In theory, if the NAT implementations support it, one could 
 use it repeatedly by stacking NAT on top of NAT ad nauseam, but the wisdom of
 doing so is questionable. If one uses it like additional RFC1918 space, then
 routing could become more difficult, specifically in the case where hosts
 (e.g. VPC servers) are numbered with it. This is true because, in theory, you
 don't need the transition space to be routed on the internal network, which
 avoids having NAT devices hold conflicting routes, etc. Even if the edge NAT
 devices don't currently see conflicting routes to 100.64/10, if that changes 
 in the future then client hosts may find themselves unable to reach the VPC 
 hosts at that time.
 
 That being said, if you understand the risks that I described above, then it 
 may work well for a community of interest type of inter-network that hosts 
 non-global resources. From your description it sounds like that might be the 
 situation you find yourself in. To be clear, it's not unwise to do so, but it 
 does carry risk that needs to be evaluated (and documented).
 
 Cheers,
 -Benson
 
 
 snip

Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Blair Trosper
ADDENDUM:  They're taking into consideration my suggestion of using IPv6 as
a universal internal network so that the different regions could be
interconnected without having to give up the region-independent use of
10.0.0.0/8, which I think would be an elegant solution.

On Tue, Feb 24, 2015 at 12:08 PM, Blair Trosper blair.tros...@gmail.com
wrote:

 I have an unimpeachable source at AWS that assures me they're working hard
 to deploy IPv6.  As it was explained to me, since AWS was sort of first to
 the table -- well before IPv6 popped -- they designed everything around
 v4 only.  Granted, you can get an IPv6 ELB, but only in EC2-Classic, which
 they're phasing out.

 But I'm assured they're rushing IPv6 deployment of CloudFront and other
 services as fast as they can.  I'm assured of this.

 But you also have to appreciate the hassle of retrofitting a cloud
 platform of that scale, so I do not envy the task that AWS is undertaking.

 snip




Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Ca By
On Tue, Feb 24, 2015 at 10:08 AM, Blair Trosper blair.tros...@gmail.com
wrote:

 I have an unimpeachable source at AWS that assures me they're working hard
 to deploy IPv6.  As it was explained to me, since AWS was sort of first to
 the table -- well before IPv6 popped -- they designed everything around
 v4 only.  Granted, you can get an IPv6 ELB, but only in EC2-Classic, which
 they're phasing out.

 But I'm assured they're rushing IPv6 deployment of CloudFront and other
 services as fast as they can.  I'm assured of this.


Talk is cheap.  I suggest people use Cloudflare or Akamai for proper IPv6
CDN reach to major IPv6 eyeball networks at AT&T, Verizon, Comcast, and
T-Mobile US -- all of which have major IPv6 deployments.

http://www.worldipv6launch.org/measurements/




 But you also have to appreciate the hassle of retrofitting a cloud
 platform of that scale, so I do not envy the task that AWS is undertaking.


work is hard


 snip





Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Patrick W. Gilmore
I personally find it amusing that companies try to have it both ways.

"We are huge, you should use us instead of $LITTLE_GUY because our resources
and scale make us better able to handle things. Oh, what, you want IPv6?
We're too big to do that quickly."

But hey, I would try the same thing in their position.

-- 
TTFN,
patrick

 On Feb 24, 2015, at 13:15 , Ca By cb.li...@gmail.com wrote:
 
 On Tue, Feb 24, 2015 at 10:08 AM, Blair Trosper blair.tros...@gmail.com
 wrote:
 
 I have an unimpeachable source at AWS that assures me they're working hard
 to deploy IPv6.  As it was explained to me, since AWS was sort of first to
 the table -- well before IPv6 popped -- they designed everything around
 v4 only.  Granted, you can get an IPv6 ELB, but only in EC2-Classic, which
 they're phasing out.
 
 But I'm assured they're rushing IPv6 deployment of CloudFront and other
 services as fast as they can.  I'm assured of this.
 
 
 Talk is cheap.  I suggest people use Cloudflare or Akamai for proper IPv6
 CDN reach to major IPv6 eyeball networks at AT&T, Verizon, Comcast, and
 T-Mobile US -- all of which have major IPv6 deployments.
 
 http://www.worldipv6launch.org/measurements/
 
 
 
 
 But you also have to appreciate the hassle of retrofitting a cloud
 platform of that scale, so I do not envy the task that AWS is undertaking.
 
 
 work is hard
 
 
  snip
 
 
 



Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Luan Nguyen
Shouldn't it be the other way around? IPv6 as the unique universal external
network, and you define your own IPv4 within your cloud context, separate
from the cloud provider network and from other customers. So if you have
contexts in different regions, you can interconnect using layer 3 or layer 2
through the cloud provider network...bring your own IPv4. If you need
Internet access, you'll get NATted. If you need connections to your
branches/HQs, etc., build your own tunnel or use the cloud provider, which
by the way gives you your own VRF, so no need to worry about overlapping
anything.
No one heard of Dimension Data Cloud? :)

On Tue, Feb 24, 2015 at 1:10 PM, Blair Trosper blair.tros...@gmail.com
wrote:

 ADDENDUM:  They're taking into consideration my suggestion of using IPv6 as
 a universal internal network so that the different regions could be
 interconnected without having to give up the region-independent use of
 10.0.0.0/8, which I think would be an elegant solution.

 snip

Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Blair Trosper
I have an unimpeachable source at AWS that assures me they're working hard
to deploy IPv6.  As it was explained to me, since AWS was sort of first to
the table -- well before IPv6 popped -- they designed everything around
v4 only.  Granted, you can get an IPv6 ELB, but only in EC2-Classic, which
they're phasing out.

But I'm assured they're rushing IPv6 deployment of CloudFront and other
services as fast as they can.  I'm assured of this.

But you also have to appreciate the hassle of retrofitting a cloud platform
of that scale, so I do not envy the task that AWS is undertaking.

On Tue, Feb 24, 2015 at 11:35 AM, Owen DeLong o...@delong.com wrote:

 Amazon is not the only public cloud.

 There are several public clouds that can support IPv6 directly.

 I have done some work for and believe these guys do a good job:

 Host Virtual (vr.org)

 In no particular order and I have no relationship with or loyalty or
 benefit associated with any of them. I neither endorse, nor decry any of
 the following:

 Linode
 SoftLayer
 RackSpace

 There are others that I am not recalling off the top of my head.

 Owen

  snip




Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-24 Thread Gino O'Donnell
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

On 2/24/15 10:59 AM, Blair Trosper wrote:
 In VPC, you can also designate
 your own subnets, which makes interconnecting the disparate regions a
 little tougher.
 
 snip

Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Ca By
On Mon, Feb 23, 2015 at 7:02 AM, Eric Germann ekgerm...@cctec.com wrote:

 Currently engaged on a project where they’re building out a VPC
 infrastructure for hosted applications.

 Users access apps in the VPC, not the other direction.

 The issue I'm trying to get around is the customers who need to connect
 have multiple overlapping RFC1918 space (including overlapping what was
 proposed for the VPC networks).  Finding a hole that is big enough and not
 in use by someone else is nearly impossible AND the customers could go
 through mergers which make them renumber even more in to overlapping 1918
 space.

 Initially, I was looking at doing something like (example IP’s):


 Customer A (172.28.0.0/24) -- NAT to 100.127.0.0/28 -- VPN to DC --
 NAT from 100.64.0.0/18 -- VPC Space (was 172.28.0.0/24)

 Classic overlapping subnets on both ends with allocations out of
 100.64.0.0/10 to NAT in both directions.  Each sees the other end in
 100.64 space, but the mappings can get tricky and hard to keep track of
 (especially if you’re not a network engineer).
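
For illustration, a minimal Python sketch (using the standard ipaddress
module) of the per-customer bookkeeping this double NAT implies; all names
and prefixes below are invented:

import ipaddress

VPC_REAL = ipaddress.ip_network("172.28.0.0/24")                 # real VPC space
VPC_AS_SEEN_BY_CUSTOMERS = ipaddress.ip_network("100.64.0.0/18")

# Every customer's real space can overlap the VPC (and each other), so each
# one needs its own "apparent" block out of 100.64.0.0/10 in each direction.
customers = {
    "cust-a": {"real": ipaddress.ip_network("172.28.0.0/24"),
               "as_seen_by_dc": ipaddress.ip_network("100.127.0.0/28")},
    "cust-b": {"real": ipaddress.ip_network("172.28.0.0/24"),
               "as_seen_by_dc": ipaddress.ip_network("100.127.0.16/28")},
}

for name, cust in customers.items():
    # Two translations exist for every flow (customer real -> apparent, and
    # VPC real -> apparent), which is what makes the mappings hard to track.
    print(f"{name}: {cust['real']} appears to the DC as {cust['as_seen_by_dc']}; "
          f"the VPC {VPC_REAL} appears to {name} inside {VPC_AS_SEEN_BY_CUSTOMERS}")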


 In spitballing, the boat hasn’t sailed too far to say “Why not use
 100.64/10 in the VPC?”

 Then, the customer would be allocated a /28 or larger (depending on needs)
 to NAT on their side and NAT it once.  After that, no more NAT for the VPC
 and it boils down to firewall rules.  Their device needs to NAT outbound
 before it fires it down the tunnel, which pfSense and ASAs appear to be
 able to do.
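
A rough sketch of the sanity check the simpler scheme reduces to, with
invented prefixes: the VPC is numbered from 100.64.0.0/10 and each customer
NATs once into a /28 carved from the same /10, so the main thing to verify is
that none of those blocks collide.

import itertools
import ipaddress

vpc_subnets = [ipaddress.ip_network("100.64.0.0/18")]
customer_pools = {
    "cust-a": ipaddress.ip_network("100.127.0.0/28"),
    "cust-b": ipaddress.ip_network("100.127.0.16/28"),
}

blocks = [("vpc", net) for net in vpc_subnets] + list(customer_pools.items())
for (name_a, net_a), (name_b, net_b) in itertools.combinations(blocks, 2):
    # overlaps() catches both partial and full containment
    if net_a.overlaps(net_b):
        raise SystemExit(f"collision: {name_a} {net_a} overlaps {name_b} {net_b}")
print("no overlaps; the rest boils down to firewall rules")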

 I prototyped this up over the weekend with multiple VPCs in multiple
 regions and it “appears” to work fine.

 From the operator community, what are the downsides?

 Customers are businesses on dedicated business services vs. consumer cable
 modems (although there are a few on business class cable).  Others are on
 MPLS and I’m hashing that out.

 The only one I can see is if the customer has a service provider with
 their external interface in 100.64 space.  However, this approach would
 have a more specific route in that space, so it should fire it down the
 tunnel for their allocated customer block (/28) vs. their external side.
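
A small sketch of the longest-prefix-match argument being made here, with
made-up routes: even if the customer's upstream hands them an address out of
100.64.0.0/10, the /28 pointing at the tunnel is more specific and wins.

import ipaddress

routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default via upstream",
    ipaddress.ip_network("100.64.0.0/10"): "upstream (carrier CGN side)",
    ipaddress.ip_network("100.127.0.0/28"): "tunnel to the VPC",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes.items() if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]   # longest prefix wins

print(lookup("100.127.0.5"))   # tunnel to the VPC
print(lookup("100.80.1.1"))    # upstream (carrier CGN side)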

 Thoughts and thanks in advance.

 Eric


Wouldn't it be nice if Amazon supported IPv6 in VPC?

I have disqualified several projects from using the public cloud and put
them in the on-premises private cloud because Amazon is missing this key
scaling feature -- IPv6.  It is odd that Amazon, a company with scale
deeply in its DNA, fails so hard on IPv6.  I guess they have a lot of
brittle technical debt they can't upgrade.

I suggest you go with private cloud if possible.

Or, you can double NAT non-unique IPv4 space.

Regarding 100.64.0.0/10, despite what the RFCs may say, this space is just
an augment of RFC1918, and I have already deployed it as such.

CB


Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread William Herrin
On Mon, Feb 23, 2015 at 10:02 AM, Eric Germann ekgerm...@cctec.com wrote:
 In spitballing, the boat hasn't sailed too far to say Why not
 use 100.64/10 in the VPC?

 The only one I can see is if the customer has a service provider
 with their external interface in 100.64 space.  However, this
  approach would have a more specific route in that space, so it
 should fire it down the tunnel for their allocated customer
 block (/28) vs. their external side.

Hi Eric,

The main risk is more or less as you summarized it.  Customer has no
firewall or originates the VPN directly from their firewall. Customer
buys a non-hosting commodity Internet product that uses carrier NAT to
conserve IP addresses. The customer's assigned address AND NETMASK
combined overlap some of the hosts you're trying to publish to them.
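
A minimal sketch of that failure mode, assuming Python's ipaddress module and
hypothetical addresses: the customer's CGN-assigned address plus netmask puts
some of the published 100.64/10 hosts on what looks like their local subnet.

import ipaddress

customer_wan = ipaddress.ip_interface("100.66.12.9/22")   # assigned by their ISP
published_hosts = [
    ipaddress.ip_address("100.66.12.200"),   # falls inside their on-link /22
    ipaddress.ip_address("100.127.0.5"),     # safely outside it
]

for host in published_hosts:
    if host in customer_wan.network:
        print(f"{host}: shadowed by the customer's local {customer_wan.network}")
    else:
        print(f"{host}: reached via the tunnel route")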



Mitigations for that risk:

Can you insist that the customer originate connections from inside
their firewall (on RFC1918 space)?

Most service providers using 100.64/10 either permit customers to opt
out (getting dynamic globally routable addresses) or offer customers
the opportunity to purchase static global addresses for a nominal fee.
Are you comfortable telling impacted customers that they have to do
so?


A secondary risk comes into play where a customer may wish to
interact with another service provider doing the same thing as you.
That essentially lands you back in the same problem you're having now
with RFC1918.


One more question you should consider: what is the nature of your
customer's networks? Big corps that tend to stretch through 10/8 won't
let their users originate VPN connections in the first place. They
also don't touch 100.64/10 except where someone is publishing a
service like yours.  Meanwhile, home and SOHO users who are at liberty
to originate VPNs might currently hold a 100.64/10 address. But they
just about never use the off-bit /16s in 10/8. By off-bit I mean the
ones with 4 or 5 random 1-bits in the second octet.
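
A quick, hypothetical way to see how many such /16s there are, counting
second octets with 4 or 5 bits set:

octets = [x for x in range(256) if bin(x).count("1") in (4, 5)]
print(len(octets), "off-bit /16s, e.g.", [f"10.{x}.0.0/16" for x in octets[:5]])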


My opinion: The likelihood of collisions in 100.64/10 increases
significantly if you use them on servers. I would confine my use to
client machines and try to put servers providing service to multiple
organizations on globally unique IPs. Confining 100.64/10 to client
machines, you're unlikely to encounter a problem you can't readily
solve.

Regards,
Bill Herrin


-- 
William Herrin  her...@dirtside.com  b...@herrin.us
Owner, Dirtside Systems . Web: http://www.dirtside.com/


Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Luan Nguyen
I put lots of these to good use
http://en.wikipedia.org/wiki/Reserved_IP_addresses
Regarding public cloud with IPv6 support, contact me off-list; I might even
get you a special discount.



On Mon, Feb 23, 2015 at 10:52 AM, Ca By cb.li...@gmail.com wrote:

  snip

 Wouldn't it be nice if Amazon supported IPv6 in VPC?

 I have disqualified several projects from using the public cloud and put
 them in the on-premises private cloud because Amazon is missing this key
 scaling feature -- IPv6.  It is odd that Amazon, a company with scale
 deeply in its DNA, fails so hard on IPv6.  I guess they have a lot of
 brittle technical debt they can't upgrade.

 I suggest you go with private cloud if possible.

 Or, you can double NAT non-unique IPv4 space.

 Regarding 100.64.0.0/10, despite what the RFCs may say, this space is just
 an augment of RFC1918, and I have already deployed it as such.

 CB



Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Benson Schliesser

Hi, Eric -

Bill already described the salient points. The transition space is 
meant to be used for cases where there are multiple stacked NATs, such 
as CGN with CPE-based NAT. In theory, if the NAT implementations support 
it, one could use it repeatedly by stacking NAT on top of NAT ad
nauseam, but the wisdom of doing so is questionable. If one uses it like
additional RFC1918 space, then routing could become more difficult,
specifically in the case where hosts (e.g. VPC servers) are numbered
with it. This is true because, in theory, you don't need the transition
space to be routed on the internal network, which avoids having NAT
devices hold conflicting routes, etc. Even if the edge NAT devices don't
currently see conflicting routes to 100.64/10, if that changes in the 
future then client hosts may find themselves unable to reach the VPC 
hosts at that time.


That being said, if you understand the risks that I described above, 
then it may work well for a community of interest type of 
inter-network that hosts non-global resources. From your description it 
sounds like that might be the situation you find yourself in. To be 
clear, it's not unwise to do so, but it does carry risk that needs to be 
evaluated (and documented).


Cheers,
-Benson



William Herrin b...@herrin.us
February 23, 2015 at 12:58 PM

Hi Eric,

The main risk is more or less as you summarized it. Customer has no
firewall or originates the VPN directly from their firewall. Customer
buys a non-hosting commodity Internet product that uses carrier NAT to
conserve IP addresses. The customer's assigned address AND NETMASK
combined overlap some of the hosts you're trying to publish to them.



Mitigations for that risk:

Can you insist that the customer originate connections from inside
their firewall (on RFC1918 space)?

Most service providers using 100.64/10 either permit customers to opt
out (getting dynamic globally routable addresses) or offer customers
the opportunity to purchase static global addresses for a nominal fee.
Are you comfortable telling impacted customers that they have to do
so?


 A secondary risk comes into play where a customer may wish to
interact with another service provider doing the same thing as you.
That essentially lands you back in the same problem you're having now
with RFC1918.


One more question you should consider: what is the nature of your
customer's networks? Big corps that tend to stretch through 10/8 won't
let their users originate VPN connections in the first place. They
also don't touch 100.64/10 except where someone is publishing a
service like yours. Meanwhile, home and SOHO users who are at liberty
to originate VPNs might currently hold a 100.64/10 address. But they
just about never use the off-bit /16s in 10/8. By off-bit I mean the
ones with 4 or 5 random 1-bits in the second octet.


My opinion: The likelihood of collisions in 100.64/10 increases
significantly if you use them on servers. I would confine my use to
client machines and try to put servers providing service to multiple
organizations on globally unique IPs. Confining 100.64/10 to client
machines, you're unlikely to encounter a problem you can't readily
solve.

Regards,
Bill Herrin


Eric Germann ekgerm...@cctec.com
February 23, 2015 at 10:02 AM
Currently engaged on a project where they’re building out a VPC 
infrastructure for hosted applications.


Users access apps in the VPC, not the other direction.

The issue I'm trying to get around is the customers who need to 
connect have multiple overlapping RFC1918 space (including overlapping 
what was proposed for the VPC networks). Finding a hole that is big 
enough and not in use by someone else is nearly impossible AND the 
customers could go through mergers which make them renumber even more 
in to overlapping 1918 space.


Initially, I was looking at doing something like (example IP’s):


Customer A (172.28.0.0/24) -- NAT to 100.127.0.0/28 -- VPN to DC --
NAT from 100.64.0.0/18 -- VPC Space (was 172.28.0.0/24)


Classic overlapping subnets on both ends with allocations out of 
100.64.0.0/10 to NAT in both directions. Each sees the other end in 
100.64 space, but the mappings can get tricky and hard to keep track 
of (especially if you’re not a network engineer).



In spitballing, the boat hasn’t sailed too far to say “Why not use 
100.64/10 in the VPC?”


Then, the customer would be allocated a /28 or larger (depending on 
needs) to NAT on their side and NAT it once. After that, no more NAT 
for the VPC and it boils down to firewall rules. Their device needs to 
NAT outbound before it fires it down the tunnel, which pfSense and
ASAs appear to be able to do.


I prototyped this up over the weekend with multiple VPCs in multiple
regions and it “appears” to work fine.


From the operator community, what are the downsides?

Customers are businesses on dedicated business services vs. consumer 
cable modems (although there are a few on business class cable). 

Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Måns Nilsson
Subject: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment 
Date: Mon, Feb 23, 2015 at 10:02:44AM -0500 Quoting Eric Germann 
(ekgerm...@cctec.com):
 Currently engaged on a project where they’re building out a VPC 
 infrastructure for hosted applications.

snip

 Thoughts and thanks in advance.

using the wasted /10 for this is pretty much equal to using RFC1918 space. 

IPv6 was invented to do this right. 

-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE +46 705 989668
It's NO USE ... I've gone to CLUB MED!!




Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Jimmy Hess
On Mon, Feb 23, 2015 at 9:02 AM, Eric Germann ekgerm...@cctec.com wrote:

 In spitballing, the boat hasn’t sailed too far to say “Why not use 100.64/10 
 in the VPC?”

Read RFC6598.
If you can assure that the conditions listed in section 4, "Use of Shared
CGN Space", are met, then usage of the 100.64/10 shared space may be
applicable; under other conditions it may be risky.  The proper usage of IP
addresses is in accordance with the standards, or by the registrant under
the right assignment agreements.

If you just need space to squat on regardless of the standardized usage,
then you might do anything you want -- you may as well use 25/8 or
11.0.0.0/8 internally, after taking steps to ensure you will not be leaking
reverse DNS queries, routes, or anything like that.  That space is larger
than a /10 and would provide more expansion flexibility.
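
For what it's worth, a short Python sketch of the distinction being drawn
(prefixes chosen only as examples):

import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
SHARED = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared address space

def classify(prefix: str) -> str:
    net = ipaddress.ip_network(prefix)
    if any(net.subnet_of(p) for p in RFC1918):
        return "RFC1918 private space"
    if net.subnet_of(SHARED):
        return "RFC6598 shared address space"
    return "registered unicast space (squatting if it isn't yours)"

for p in ("172.28.0.0/24", "100.127.0.0/28", "25.0.0.0/16", "11.0.0.0/8"):
    print(p, "->", classify(p))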


 Then, the customer would be allocated a /28 or larger (depending on needs) to 
 NAT on their side and NAT it once.  After that, no more NAT for the VPC and 
 it boils down to firewall rules.  Their device needs to NAT outbound before 
 it fires it down the tunnel which pfSense and ASA’s appear to be able to do.


--
-JH


Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Randy Bush
 Then usage of the 100.64/10  shared space may be applicable,  under
 other conditions it may be risky

about as risky as the rest of private address space.

randy


Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Blair Trosper
Might be ill-advised since AWS uses it themselves for their internal
networking.  Just traceroute to any API endpoint from an EC2/VPC resource
or instance.  :)

On Mon, Feb 23, 2015 at 2:43 PM, Måns Nilsson mansa...@besserwisser.org
wrote:

 Subject: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC
 deployment Date: Mon, Feb 23, 2015 at 10:02:44AM -0500 Quoting Eric Germann
 (ekgerm...@cctec.com):
  Currently engaged on a project where they’re building out a VPC
 infrastructure for hosted applications.

 snip

  Thoughts and thanks in advance.

 using the wasted /10 for this is pretty much equal to using RFC1918 space.

 IPv6 was invented to do this right.

 --
 Måns Nilsson primary/secondary/besserwisser/machina
 MN-1334-RIPE +46 705 989668
 It's NO USE ... I've gone to CLUB MED!!



Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Eric Germann
Mulling over the implications of this.

[root@ip-100-64-0-55 ~]# traceroute s3.amazonaws.com
traceroute to s3.amazonaws.com (54.231.0.64), 30 hops max, 60 byte packets
 1  ec2-79-125-0-202.eu-west-1.compute.amazonaws.com (79.125.0.202)  1.068 ms  0.824 ms  0.787 ms
 2  178.236.1.18 (178.236.1.18)  1.193 ms  1.164 ms  0.869 ms
 3  * * *
 4  54.239.41.133 (54.239.41.133)  76.046 ms  76.029 ms  75.986 ms
 5  54.239.41.166 (54.239.41.166)  76.314 ms  76.281 ms  76.244 ms
 6  72.21.220.77 (72.21.220.77)  76.143 ms  76.054 ms  76.095 ms
 7  205.251.245.224 (205.251.245.224)  76.346 ms 72.21.222.149 (72.21.222.149)  76.261 ms 205.251.245.230 (205.251.245.230)  76.360 ms
 8  * * *
...
30  * * *

but, 

[root@ip-100-64-0-55 ~]# wget https://s3.amazonaws.com
--2015-02-24 04:20:18--  https://s3.amazonaws.com/
Resolving s3.amazonaws.com... 54.231.12.48
Connecting to s3.amazonaws.com|54.231.12.48|:443... connected.
HTTP request sent, awaiting response... 307 Temporary Redirect
Location: http://aws.amazon.com/s3/ [following]
--2015-02-24 04:20:18--  http://aws.amazon.com/s3/
Resolving aws.amazon.com... 54.240.250.195
Connecting to aws.amazon.com|54.240.250.195|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “index.html.1”

[=] 179,606  158K/s   in 1.1s

2015-02-24 04:20:20 (158 KB/s) - “index.html.1” saved [179606]

ICMP would break from the intermediates, but ICMP from the API endpoint should 
still work.  Will have to chew on this a bit overnight.
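
A hypothetical follow-up check along the same lines: resolve an endpoint and
flag any answers that land in 100.64.0.0/10, since those would collide with a
VPC numbered out of the same /10.

import socket
import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")

def shared_space_answers(hostname: str):
    # Gather the IPv4 answers for the name and keep only those in 100.64/10
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    addrs = {ipaddress.ip_address(info[4][0]) for info in infos}
    return sorted(addr for addr in addrs if addr in SHARED)

print(shared_space_answers("s3.amazonaws.com") or "no 100.64/10 answers seen")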

EKG


 On Feb 23, 2015, at 9:03 PM, Blair Trosper blair.tros...@gmail.com wrote:
 
 Might be ill-advised since AWS uses it themselves for their internal 
 networking.  Just traceroute to any API endpoint from an EC2/VPC resource or 
 instance.  :)
 
 On Mon, Feb 23, 2015 at 2:43 PM, Måns Nilsson mansa...@besserwisser.org wrote:
 Subject: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC 
 deployment Date: Mon, Feb 23, 2015 at 10:02:44AM -0500 Quoting Eric Germann
 (ekgerm...@cctec.com):
  Currently engaged on a project where they’re building out a VPC 
  infrastructure for hosted applications.
 
 snip
 
  Thoughts and thanks in advance.
 
 using the wasted /10 for this is pretty much equal to using RFC1918 space.
 
 IPv6 was invented to do this right.
 
 --
 Måns Nilsson primary/secondary/besserwisser/machina
 MN-1334-RIPE +46 705 989668
 It's NO USE ... I've gone to CLUB MED!!