Hi Vishwas.

Thanks for the detailed comments.

Vishwas Manral <[email protected]> writes:

> 1. Abstract - I am not happy with the mention of NIC in the abstract, what
> happens in case of LOM's. I think we should just state network hardware
> instead of NIC.

How about changing "NIC" to "virtual network interface", i.e.:

NEW:

    The primary functionality required is provisioning virtual
    networks, associating a virtual machine's virtual network
    interface(s) with the appropriate virtual network, and maintaining
    that association as the virtual machine is activated, migrated
    and/or deactivated.
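
As a rough illustration of that lifecycle, here is a toy controller sketch;
all class and method names are made up for this example, not taken from the
draft:

```python
# Toy sketch of provisioning a virtual network (VN) and keeping a virtual
# NIC's VN association intact as the VM is activated, migrated, and
# deactivated. Names are illustrative, not from any NVO3 document.

class VnController:
    def __init__(self):
        self.vns = set()        # provisioned virtual networks
        self.attachments = {}   # (vm, vnic) -> (vn, host)

    def provision(self, vn):
        self.vns.add(vn)

    def attach(self, vm, vnic, vn, host):
        if vn not in self.vns:
            raise ValueError("unknown VN: %s" % vn)
        self.attachments[(vm, vnic)] = (vn, host)

    def migrate(self, vm, new_host):
        # The VNIC->VN binding survives; only the host location changes.
        for key, (vn, _old_host) in list(self.attachments.items()):
            if key[0] == vm:
                self.attachments[key] = (vn, new_host)

    def deactivate(self, vm):
        for key in [k for k in self.attachments if k[0] == vm]:
            del self.attachments[key]

ctl = VnController()
ctl.provision("vn-blue")
ctl.attach("vm1", "vnic0", "vn-blue", host="hostA")
ctl.migrate("vm1", "hostB")
print(ctl.attachments[("vm1", "vnic0")])  # ('vn-blue', 'hostB')
```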


> 2. Abstract - I agree with the idea of the statement:
> Use of an overlay-based approach enables scalable deployment on large
> network infrastructures.

> From the edge perspective however overlays cause scalability limitations,
> however P2P models work well as the edge is not loaded (take the case of
> IPsec tunnels versus MPLS VPN). So I think you may want to clarify the
> statement.

Not sure what you think I should say instead. I'm not immediately sure
in what dimension you think P2P scales better. I don't know what
assumptions you are making...

> 3. Introduction - We talk about tenant having expectation of
> separation of resources. We however only talk of traffic separation
> in the whole document. Are there other resources we are considering
> here? If so we may want to state them.

Do you have specifics? I'd like to hear from others on this. I can
imagine some things here, but I'm not at all sure I want to tackle
them now. IMO, the primary focus should be on traffic separation.

> 4. In the document the assumption seems to be that each end station can
> connect to "a" virtual network. Is that what we intend to state or do we
> also consider connecting to multiple networks - though for each we could
> have a different MAC address?

Unless others say otherwise, I'd assume that a given interface/VNIC
connects to exactly one VN. A VM can have multiple interfaces
(physical or virtual) if it wants to connect to multiple VNs. This
would be analogous to VPNs today.

In other words, I think connections to multiple VNs are supported by a
simple model. Is there any reason to go further?
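
A minimal sketch of that model (one VN per VNIC, extra VNICs for extra
VNs), with illustrative names that are not from the draft:

```python
# Sketch of the "one VN per VNIC" model: a VM reaches multiple virtual
# networks only by having multiple VNICs, each with its own MAC address
# and exactly one VN binding. All names are made up for illustration.

class Vnic:
    def __init__(self, mac, vn):
        self.mac = mac
        self.vn = vn            # exactly one VN per interface

class Vm:
    def __init__(self, name):
        self.name = name
        self.vnics = []

    def connect(self, mac, vn):
        # Connecting to an additional VN means adding another VNIC,
        # analogous to attaching to multiple VPNs today.
        vnic = Vnic(mac, vn)
        self.vnics.append(vnic)
        return vnic

vm = Vm("vm1")
vm.connect("00:16:3e:00:00:01", "vn-blue")
vm.connect("00:16:3e:00:00:02", "vn-red")
print([v.vn for v in vm.vnics])  # ['vn-blue', 'vn-red']
```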

> 5. Section 2.1 - In my view Multi-tenancy and elasticity impose different
> requirements and should be treated as different problems.

How about I change the name of 2.1 as follows:

Old:

    2.1.  Multi-tenant Environment Scale

New:

    2.1.  Elastic Provisioning

> 6. Section 2.2 - Besides retaining IP/ MAc, we also need to retain the port
> numbers, assume TCP.

Right. There were other comments about this section (see other
message).

Key sentence now reads:

    A key requirement for live migration is that a VM retain critical
    network state at its new location, including its IP and MAC
    address(es).

> 7. Section 2.2 - We talk about "today" IP address based on ToR. I think we
> should state in traditional DC etc. Also it should go before we talk about
> VM/ vMotion etc.

See revised text in separate message. Multiple changes made.

> 8. Section 2.5 - I think another idea should be that we need to allow the
> tenant to do addressing irrespective of the infrastructure, to have clear
> tenant/ provider boundaries.

How about:

      <section title="Separating Tenant Addressing from Infrastructure Addressing">
      <t>
        It is highly desirable to be able to number the data center
        underlay network using whatever addresses make sense for it,
        without having to worry about address collisions between
        addresses used by the underlay and those used by
        tenants. 
      </t>
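
One way to illustrate the point: if forwarding state is keyed on (VN
identifier, tenant address), tenants can reuse address space freely and
never collide with each other or with the underlay. A toy sketch with
made-up identifiers:

```python
# Toy illustration of tenant/underlay address separation: forwarding
# state is keyed on (VN identifier, tenant IP), so overlapping tenant
# addresses never collide with each other or with underlay addresses.

fib = {}  # (vn, tenant_ip) -> underlay address of the hosting NVE

def learn(vn, tenant_ip, underlay_ip):
    fib[(vn, tenant_ip)] = underlay_ip

def lookup(vn, tenant_ip):
    return fib[(vn, tenant_ip)]

# Two tenants use the identical private address...
learn("vn-blue", "10.0.0.1", "192.0.2.10")
learn("vn-red",  "10.0.0.1", "192.0.2.20")

# ...yet lookups stay unambiguous, because the VN id disambiguates.
print(lookup("vn-blue", "10.0.0.1"))  # 192.0.2.10
print(lookup("vn-red", "10.0.0.1"))   # 192.0.2.20
```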


> 9. Section 2.6, another use case is the fact that there will be
> communication required between devices in the DC and end points in the
> branch/ campus network, which may not support the NVO3
> functionality.

Old:

   2.6.  Support Communication Between VMs and Non-virtualized Devices

   Within data centers, not all communication will be between VMs.
   Network operators will continue to use non-virtualized servers for
   various reasons, traditional routers to provide L2VPN and L3VPN
   services, traditional load balancers, firewalls, intrusion detection
   engines and so on.  Any virtual network solution should be capable of
   working with these existing systems.


New:

    Not all communication will be between devices connected to
    virtualized networks.  Devices using overlays will continue to
    access devices and make use of services on traditional,
    non-virtualized networks, whether in the data center, the public
    Internet, or at remote/branch campuses.  Any virtual network
    solution should be capable of interoperating with existing
    routers, VPN services, load balancers, intrusion detection
    services, firewalls, etc.

> 10. Section 2.7, why would sparsely populated members in a DC be highly
> distributed, as a general characteristic.

It's not that I'd expect sparse, highly distributed membership in
every case, but with lots of VM migration, even if you started out
with good locality, you'd lose it over time.

Thus, we need to design for it.

> 11. Section 2.7, though the network infrastructure is administered by a
> single domain, in my view the virtual infrastructure should be independent
> of the physical one and should be operated by the tenant infrastructure.
> Look at the case of Vyatta like routers that can be spun on demand.

Not necessarily disagreeing. But is there a need for specific text to
talk about this? 

> 1. s/ resiliancy/resiliency
> 2. s/ trade offs/ tradeoffs

OK.

Thomas

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3