Hi Yoav,

I thought I would answer your questions about DMVPN.   I also think that there 
are a couple places where we need to clarify some concepts in the draft.

More inline.

Thanks,

Mike.

Mike Sullenberger, DSE
[email protected]            .:|:.:|:.
Customer Advocacy          CISCO

> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf
> Of Yoav Nir
> Sent: Wednesday, October 02, 2013 11:44 AM
> To: Paul Hoffman
> Cc: [email protected] WG
> Subject: Re: [IPsec] NUDGE: Reviewing the AD VPN drafts
> 
> Hi
> 
> I have read the DM-VPN draft. I would not call my reading a review, as it was
> quite superficial, but here's some thoughts:
> 
> - I have to admit that I'm still having trouble wrapping my head around some
> of the concepts. I understand domain-based and route-based VPNs, as well
> as IPsec and GRE tunnels. But I haven't thought of all the VPN endpoints as
> actually being on the same network. In fact I always thought of VPN as an
> abstract concept or as a bunch of tunnels, not as a real network. Having said
> that, I'm still trying to get used to the idea of NHRP being used for 
> discovery
> (and to the idea of NHRP itself). To my mind a VPN is not an NBMA, but that
> does not mean an NBMA cannot serve as a good model for a VPN.

It is true that we normally have a single tunnel subnet (IP addresses on the 
tunnel interfaces), but it isn't a requirement; we can support tunnels with 
different subnets in the same DMVPN. Often, within a single domain/area, 
having the tunnels in the same subnet works well for grouping those nodes: 
spokes in region A connect to the region A hub, spokes in region B connect to 
the region B hub, and the region A and B hubs connect to a central hub. This 
could be divided into three tunnel subnets (region A, region B, and central). 
We still support dynamic spoke-spoke tunnels between any pair of nodes (spoke 
in A - spoke in A, spoke in A - spoke in B, spoke in A or B - central hub, 
region A hub - region B hub, spoke in A - region B hub, spoke in B - region A 
hub).

The use of GRE with NHRP was our way of using layering to solve issues.  By 
separating the tunneling function from the encryption function we solve a 
number of issues:
1. Encryption only has to deal with encrypting the GRE/IP tunnel packet, so 
at most one set of encryption selectors is needed between encryption peers, 
no matter how many host data flows or subnets are using the tunnel.
2. We naturally handle two VPN gateways supporting the same hosts/subnets. The 
remote peer can use regular routing/forwarding to send traffic over the two 
tunnels to one or both (load-balancing) of the VPN gateways.
3. It is easy to add other passenger protocols, because we only need to add 
them to GRE (the Protocol Type field in the header) and to NHRP (which is 
designed to handle multiple passenger protocol types). For example, this made 
it easy to add IPv6 passenger support.
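A toy sketch of points 1 and 3, assuming invented helper names: the IPsec selectors match only the outer GRE/IP packet, so one selector set per peer pair covers any number of inner flows, and a new passenger protocol is just a new GRE Protocol Type value:

```python
# Illustrative only; field names and helpers are invented, but the GRE
# Protocol Type values and IP protocol 47 (GRE) are the standard ones.

GRE_PROTO_IPV4 = 0x0800   # GRE Protocol Type for an IPv4 passenger
GRE_PROTO_IPV6 = 0x86DD   # adding IPv6 passenger = a new Protocol Type only

def gre_encapsulate(inner_packet: bytes, proto_type: int,
                    tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap any passenger packet in GRE/IP between two tunnel endpoints."""
    return {
        "ip_src": tunnel_src, "ip_dst": tunnel_dst, "ip_proto": 47,  # GRE
        "gre_protocol_type": proto_type,
        "payload": inner_packet,
    }

def ipsec_selector_for(pkt: dict) -> tuple:
    """Encryption sees only the outer header: one selector set per peer
    pair, no matter how many host flows or subnets ride the tunnel."""
    return (pkt["ip_src"], pkt["ip_dst"], pkt["ip_proto"])

v4 = gre_encapsulate(b"inner-ipv4-flow", GRE_PROTO_IPV4, "192.0.2.1", "192.0.2.2")
v6 = gre_encapsulate(b"inner-ipv6-flow", GRE_PROTO_IPV6, "192.0.2.1", "192.0.2.2")
assert ipsec_selector_for(v4) == ipsec_selector_for(v6)  # one selector set
```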

> - I like how the process described in section 4.3 to 4.6 quickly converges. If
> host a behind gateway A sends packets to host e behind gateway E, and the
> packets travel though the tunnels AB, BC, CD, and DE, then the discovery
> process might go through several hops, but the next tunnel to be set up is
> AE. There is no case of setting up AC or AD.

We worked fairly diligently to make it so we only build the end-to-end 
(ingress-egress) tunnel.

> - Reading section 4.8, I see that within a single DM-VPN, there is a natural
> progression from hub&spoke to mesh. There doesn't seem to be a place for
> policy on whether a shortcut should or should not be established. The
> resolution request is forwarded until the egress node. So if, for example,
> you have two government agencies, each with its own set of gateways, and
> two gateways (one belonging to each agency) have a tunnel between them,
> there are two possible configurations: a single DM-VPN, in which case they
> become a mesh, or two DM-VPNs, so that the inter-domain tunnel endpoints
> are egress nodes on their respective DM-VPNs. Is there a way to implement
> a policy so that some shortcuts are created between those other gateways?

Yes you can.  This would be controlled on the gateways.  They could be 
configured to send, or not to send, indirection messages depending on the 
destination subnet or the routing to the destination subnet. That would 
suppress indirection messages when you don't want a spoke-spoke tunnel at all. 
Otherwise an indirection message would be sent, which would generate a 
resolution request.  The resolution request follows the same path as the data 
packets that triggered the indirection.  At this point the gateways on either 
side of the inter-government tunnel have the choice of answering the 
resolution request themselves (cutting the end-to-end tunnel short at the 
gateway) or forwarding the resolution request (allowing the end-to-end 
tunnel).  Note that the endpoint devices will not know the difference. I think 
this is not very clear in the draft, and we will have to expand the 
explanation.
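The three per-subnet choices above could be modeled with a policy hook like the following; the action names, the default, and the lookup shape are all my own invention, not anything the draft specifies:

```python
# Hypothetical per-destination-subnet policy hook on a gateway.

SUPPRESS = "suppress_indirection"   # no spoke-spoke tunnel at all
ANSWER_SELF = "answer_resolution"   # gateway answers: shortcut ends here
FORWARD = "forward_resolution"      # forward toward egress: end-to-end tunnel

def gateway_action(dst_subnet: str, policy: dict) -> str:
    """Decide what this gateway does for a would-be shortcut toward
    dst_subnet; defaulting to FORWARD allows the end-to-end tunnel."""
    return policy.get(dst_subnet, FORWARD)

# Example policy for a gateway on an inter-agency link:
policy = {"10.1.0.0/16": SUPPRESS,      # never build a direct tunnel
          "10.2.0.0/16": ANSWER_SELF}   # shortcut terminates at this gateway

assert gateway_action("10.1.0.0/16", policy) == SUPPRESS
assert gateway_action("10.9.0.0/16", policy) == FORWARD  # default: allow
```

Either way the endpoint devices see a resolution reply and cannot tell which gateway answered it.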

> - Section 4.10 seems to be missing something. Suppose gateway A is
> forwarding traffic to gateway B, and receives an Indirection notification. So
> A sends a Resolution request. After a while, it receives a Resolution reply
> with TunS2 and PubS2 addresses. So now A should open a tunnel with TunS2
> (right? I'm not clear about the fields).

Two things can happen when B receives the resolution request from A; within 
the request are the TunS1 and PubS1 addresses.
1. B now has sufficient information to initiate the tunnel (PubS2 - PubS1) 
back to A.  Assuming B authenticates with A and they have common crypto 
parameters, B will send the resolution reply through this new tunnel.  When A 
receives the resolution reply it will try to initiate a tunnel back to B 
(PubS1 - PubS2), only to find one already exists, and it only needs to change 
its packet forwarding to send data packets over the new tunnel.
2. B sends the resolution reply back to A via the gateway/hub path.  A 
receives the resolution reply and now has sufficient information to initiate 
the tunnel (PubS1 - PubS2) to B. Once the tunnel is set up, A can change its 
packet forwarding to send data packets over the new tunnel.

Note, this describes the interaction for data traffic going from A to B. It is 
assumed that there will also be corresponding data traffic going from B to A, 
and the same actions described above take place for this "return" traffic. 
But since the tunnels use PubS1 and PubS2 as the endpoints, we get only one 
tunnel/encryption between these two endpoints, which is used for the 
bidirectional traffic between the two data subnets.  Note, if there is only 
one-way data traffic (say A to B) everything still works; it is just that B 
will never send a resolution request nor receive a resolution reply, and will 
not modify its routing/forwarding for corresponding packets going B to A. The 
A to B traffic will still flow over the tunnel.
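The two cases can be sketched as a small state machine on B's side; the class, its fields, and the `can_initiate` flag are invented for illustration:

```python
# Runnable sketch of the two resolution-reply paths described above.

class GatewayB:
    def __init__(self, tun_s2: str, pub_s2: str, can_initiate: bool = True):
        self.tun_s2, self.pub_s2 = tun_s2, pub_s2
        # can_initiate models "B authenticates with A and they have
        # common crypto parameters" succeeding.
        self.can_initiate = can_initiate

    def handle_resolution_request(self, request: dict) -> tuple:
        """Return how the resolution reply travels back to A."""
        pub_s1 = request["PubS1"]
        if self.can_initiate:
            # Case 1: B initiates the tunnel PubS2 -> PubS1 itself and
            # sends the reply through it; A later finds the tunnel is
            # already up and only updates its packet forwarding.
            return ("new_tunnel", self.pub_s2, pub_s1)
        # Case 2: B replies via the gateway/hub path; A initiates the
        # tunnel PubS1 -> PubS2 once the reply arrives.
        return ("hub_path",)

b = GatewayB(tun_s2="10.0.0.2", pub_s2="203.0.113.2")
req = {"TunS1": "10.0.0.1", "PubS1": "198.51.100.1"}
assert b.handle_resolution_request(req)[0] == "new_tunnel"
```

In both cases exactly one tunnel ends up between PubS1 and PubS2, carrying the traffic in both directions.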

> So section 4.10 says that
> authentication between these two nodes will be done using certificates.
> Considering that A has only just heard of TunS2's existence, what fields are
> there going to be in the certificate that will let A know that this is indeed
> TunS2? While a single domain might have some convention for this, AD-VPN
> is supposed to be good for multiple administrative domains, so there
> should be some rule about this.

The assumption has been that the certificates have a common node in the 
certificate chains, or that the nodes have multiple certificate chains 
installed, so that they can authenticate each other.  I agree that this may 
not cover all inter-domain scenarios.  You could add the capability on the 
gateway nodes (those that have the inter-domain interconnect tunnel) for them 
to attach any needed certificate information to the resolution requests and 
replies.  This would mean that for these types of connections you would use 
option 2 in the above forwarding of requests and replies. 
 
> - The same section hints at using certificate fields for filtering. This 
> sounds
> weird. Suppose two gateways learn of each other through the resolution
> process. Now they try to connect, only to find out that the certificate fields
> do not allow them to connect. So we've gone through an entire IKE
> handshake just to discover that policy prohibits forming this tunnel? And
> because caches expire, the gateways will try again and again to form this
> tunnel, all the time failing on authorization.

I covered this above.  If the policy is such that two nodes should not build 
a direct tunnel, then they or their gateways should be configured to block 
sending indirection messages, block sending resolution requests, or block 
sending resolution replies (optionally sending a NACK resolution reply 
instead).  Also, indirection messages must be rate-limited, so if the above 
condition is transitory we don't trigger the end nodes "too often" to attempt 
to set up the direct tunnel.
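One straightforward way to implement that rate-limiting, as a sketch with an invented class name and interval (the draft does not prescribe a mechanism or value):

```python
import time

class IndirectionRateLimiter:
    """Allow at most one indirection message per destination per interval."""

    def __init__(self, min_interval_s: float = 10.0):
        self.min_interval = min_interval_s   # example value, not normative
        self.last_sent = {}                  # destination -> last timestamp

    def allow(self, destination: str, now: float = None) -> bool:
        """Return True (and record the send) only if enough time has
        passed since the last indirection message for this destination."""
        now = time.monotonic() if now is None else now
        last = self.last_sent.get(destination)
        if last is not None and now - last < self.min_interval:
            return False                     # suppress: too soon
        self.last_sent[destination] = now
        return True

rl = IndirectionRateLimiter(min_interval_s=10.0)
assert rl.allow("198.51.100.1", now=0.0)        # first trigger goes out
assert not rl.allow("198.51.100.1", now=5.0)    # suppressed within window
assert rl.allow("198.51.100.1", now=11.0)       # allowed again after window
```

With a transitory block in place, the suppressed messages simply stop the end nodes from retrying the direct tunnel more than once per interval.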

> 
> Yoav
> 
> _______________________________________________
> IPsec mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/ipsec