On Oct 31, 2013, at 9:56 AM, Lucy yong <[email protected]> wrote:

> Hi Petro and other co-authors,
>
> Here are some comments.
>
> The draft provides an IP VPN service solution to end-system virtual
> interface. But, IMHO, it is not a network virtualization solution.
Can you please elaborate? There is one implementation based on this document
that provides a network virtualization solution. You can find additional
information at http://opencontrail.org. To my knowledge there is no
functionality missing in that solution. It uses additional specifications,
but when it comes to IP unicast it does follow this document.

> The network virtualization in NVO3 or industry has more ingredients besides
> providing IP VN. Do you have a plan to extend this solution for the network
> virtualization? If not, suggest distinguishing two.

Solution is different from ingredients. When it comes to the solution, the
OpenContrail solution uses a set of published documents to provide: IP
unicast, IP broadcast/multicast, EVPN, and service enablement across virtual
networks (i.e. automatically managing the connectivity of virtual
appliances). To my knowledge, the solution fulfills the needs of a
self-service data-center offering for both private and public data-centers.
It may be that we use fewer ingredients... if that is the case I would take
it as an advantage.

> This solution essential is having network-based access control, which could
> make VM mobility solution very hard.

I don't understand why you connected the two. The solution I describe above
provides both NAC and VM mobility. Can you please clarify the point you are
trying to make?

> Because the network has to give the access permission first to the new site
> first. Using VPN/Route Target concept provides VPN route path control also
> results in quite complex import/export RT policies configuration.

By complexity are you referring to computational complexity
(http://en.wikipedia.org/wiki/Computational_complexity_theory)? Or are you
referring to any sort of manual process?
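To make the computational-complexity question concrete, here is a minimal,
hypothetical Python sketch of deriving route-target (RT) import/export
policies for one virtual network. This is not the actual schema-transformer
code; the function name, the integer network ids, and the RT numbering
scheme are all illustrative assumptions. The point is that the work is a
single pass over the networks a given network is connected to, i.e. linear
in the number of connections:

```python
# Hypothetical sketch of deriving BGP route-target (RT) import/export
# policies for one virtual network.  NOT the contrail-controller
# schema-transformer code; names and RT numbering are illustrative only.

def derive_rt_policy(asn, net_id, connected_ids):
    """Return (import_rts, export_rts) for the network `net_id`.

    `connected_ids` are the ids of the virtual networks this network's
    routing instance is connected to.  The loop below is the only
    per-network work, so the cost is O(n) in len(connected_ids).
    """
    own_rt = "target:%d:%d" % (asn, net_id)
    # A network always exports its own RT and imports it back.
    import_rts = {own_rt}
    export_rts = {own_rt}
    # One import entry per connected network: a single linear pass.
    for peer_id in connected_ids:
        import_rts.add("target:%d:%d" % (asn, peer_id))
    return import_rts, export_rts

# Example: network 1 connected to networks 2 and 3.
imports, exports = derive_rt_policy(64512, 1, [2, 3])
print(sorted(imports))  # ['target:64512:1', 'target:64512:2', 'target:64512:3']
print(sorted(exports))  # ['target:64512:1']
```

Under these assumptions, connecting a network to m peers touches only the m
affected policies, and no manual configuration step is involved.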
When it comes to computational complexity, you can inspect the source code
here:
https://github.com/Juniper/contrail-controller/blob/master/src/config/schema-transformer/to_bgp.py

As far as I can tell, the algorithmic complexity of calculating the RT
policies appears to be O(n), where n is the number of virtual networks a
given network is connected to. There is no manual process involved.

> People may not realize that yet. The solution further pretty relies on
> egress assigned local label for VN traffic segregation in data plane and
> facilities egress local forwarding process. IMO: this solution principal is
> quite different from industry vision on cloud applications, virtualization,
> enabling a cloud application in a full virtualized environment although it
> may fit some cloud applications. Like to hear your opinion on this.

Egress-assigned labels are incredibly powerful. MPLS has proven this. The
fact that the semantics of the label are assigned by the egress has enabled
many applications that are not possible with a globally assigned label such
as a VLAN tag.

When it comes to the "industry vision" part of your statement, I think this
is just a question of personal opinion.

> Text:
> BGP also optimizes the route distribution for sparse events.
> The Route Target Constraint [RFC4684] extension, builds an optimal
> distribution tree for message propagation based on VPN membership.
>
> Comments: This method optimized the route distribution for interested VPN
> sites, not interested end-system virtual network interface. In a
> virtualization environment, caching interested virtual network interfaces at
> the forwarder is valuable for the scalability.

I'm not sure what you mean. If you are talking about having the full
reachability information vs. a local cache of the destinations that are
currently being used, then this is a tradeoff that can be reasoned about:

- By pre-populating all reachability information one consumes memory, e.g.
a really large routing table may be 20/30M routes... modern servers tend to
start at 128G of memory. Pre-populating the routing information reduces the
latency of new communications and provides a more predictable environment by
having the networking state be control-driven rather than data-driven.

Previous experience with networking technologies points to clear advantages
on the side of technologies that pre-populate routing information at control
plane time vs. data-driven approaches, BGP/MPLS L3VPNs being a clear example
of this.

> What is the point in the given example? “As an example consider a topology
> in which 100 End-System Route Servers are deployed in a network each serving
> a subset of the VPN forwarding elements…”. It is obvious if using more
> End-System Route Servers, each server will serve less number of clients?

Yes. Horizontal scalability of the control plane. It is a feature missing
from the "industry vision" in many cases.

> Text:
> From an IP address assignment point of view, a virtual network
> interface is addressed out of the virtual IP topology and associated
> with a "closed user group" or VPN, while the physical interface of
> the machine is addressed in the network infrastructure topology.
>
> The statement is not clear to me. Does it mean that IP address separation
> between the VN and physical network?

Yes. That is correct.

> Text:
> Both static and dynamic IP address allocation can be supported. The
> later assumes that the VPN Forwarder implements a DHCP relay or DHCP
> proxy functionality.
>
> Does this mean to assign an IP address to virtual network interface?

Yes. The discussion is whether virtual network interface addresses are
statically or dynamically assigned from the perspective of the guest.

> The solution also require IP configuration on the client side of the
> interface. This means that the solution requires the special configuration on
> guest OS, is that right?

No. Guests use three common methods of acquiring IP addresses: 1.
Config file injection from the hypervisor; 2. the metadata service
(http://docs.openstack.org/grizzly/openstack-compute/admin/content//metadata-service.html);
3. DHCP.

My personal observation is that deployments are mostly gravitating towards
DHCP.

> Have any end-system vender implemented this solution?

Juniper Networks. This solution is also included as part of other vendors'
offerings:
http://www.cloudscaling.com/blog/press-releases/ocs-2-5-ga/
http://thoughtsoncloud.com/index.php/2013/09/more-advances-in-open-cloud-architecture/

> Do you also plan to address multicast support in the solution?

It is included. The document is
http://tools.ietf.org/html/draft-marques-l3vpn-mcast-edge-01. Unfortunately
I've not updated it in a while; it is on my TODO list.

> The draft uses both virtual interface and virtual network interface, suggest
> making the consistency.

Good point.

> The draft should define the end-system in the terminology.

Agree.

Pedro.
