Pedro Roque Marques <[email protected]> writes:

> I object to the document on the following points:

> 1) It doesn't appear to recognize that there are multiple types of 
> data-center deployments with different requirements.

I'd like to hear more from others on this point.

I don't see right off how discussing different data center
deployment types actually changes the problem statement
document. Yes, there are many different ones. But trying to describe
them in any sort of detail will be challenging, as ARMD discovered.

But more importantly, one of the reasons for doing overlays is to
separate the details of the underlay network from the overlay, i.e.,
to just layer an overlay on top of whatever underlay deployment a
particular DC has chosen to use. That way one can change/tweak the
underlay more easily.

Please provide some examples of differences in DC deployments and
*how* they impact NVO3, either from a problem statement perspective or
from a framework perspective.

In another message:

Pedro Roque Marques <[email protected]> writes:

> On Jun 26, 2012, at 7:25 AM, <[email protected]> wrote:

> > Hi Pedro,
> >
> > Could you provide some examples of the requirements differences
> > between the types of data centers?

> Differences in aggregate bandwidth and locality of data access drive
> very different network designs.

> To give you one example: large scale/large aggregate throughput DC 
> networks typically do not support any form of broadcast / multicast. 
> Solutions that rely on broadcast / IP multicast for learning are not 
> very useful for this category of DCs.

I agree that support for multicast in the underlay cannot be
assumed. Pretty much everyone I've asked on this point says the same
thing. So I think one of the requirements will need to be emulation of
multicast (at the VN level) across underlays that do not support (or
want to enable) multicast. But that is a requirements discussion, not
a problem statement issue.

Section 4.2.3 of the framework draft covers this.
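To make the emulation idea concrete, here is a minimal sketch of head-end replication, one common way to emulate VN-level broadcast/multicast over a unicast-only underlay: the ingress NVE turns one tenant broadcast into N unicast tunnel copies. All names here (NVE, vn_members, unicast_tunnel) are illustrative, not taken from any draft.

```python
# Hypothetical sketch of head-end replication: a tenant broadcast is
# replicated as unicast tunnel copies, so no underlay multicast is needed.

class NVE:
    def __init__(self, underlay_addr, vn_members):
        # vn_members: underlay addresses of the other NVEs in this VN
        self.underlay_addr = underlay_addr
        self.vn_members = vn_members
        self.sent = []  # (dst_underlay_addr, frame) pairs, for illustration

    def unicast_tunnel(self, dst, frame):
        # Stand-in for encapsulating the frame toward one remote NVE.
        self.sent.append((dst, frame))

    def tenant_broadcast(self, frame):
        # One tenant broadcast becomes one unicast copy per remote NVE.
        for dst in self.vn_members:
            self.unicast_tunnel(dst, frame)

nve = NVE("10.0.0.1", ["10.0.0.2", "10.0.0.3", "10.0.0.4"])
nve.tenant_broadcast(b"ARP who-has 192.0.2.7")
print(len(nve.sent))  # one copy per remote NVE, here 3
```

The obvious cost is ingress replication load, which is why this is a requirements trade-off rather than a free substitute for underlay multicast.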

I also agree that solutions that rely on multicast for learning are
only useful for smallish data centers. But we've been saying that all
along. That is, one of the key work areas (as described in the
problem statement) is having a separate oracle that can be queried for
mappings, precisely so NVEs do not have to do learning.
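The oracle model can be sketched in a few lines: rather than flood-and-learn, an NVE asks a mapping service for the underlay address of the NVE hosting a given tenant address. Class and method names below are hypothetical, for illustration only.

```python
# Hypothetical sketch of a mapping "oracle": NVEs register and look up
# (VN, tenant address) -> underlay address mappings instead of learning
# them from flooded traffic.

class MappingOracle:
    def __init__(self):
        self._map = {}  # (vn_id, tenant_addr) -> underlay addr of hosting NVE

    def register(self, vn_id, tenant_addr, nve_addr):
        # Called when a tenant system is attached to an NVE.
        self._map[(vn_id, tenant_addr)] = nve_addr

    def lookup(self, vn_id, tenant_addr):
        # Called by an ingress NVE on a mapping miss; None if unknown.
        return self._map.get((vn_id, tenant_addr))

oracle = MappingOracle()
oracle.register(vn_id=42, tenant_addr="192.0.2.7", nve_addr="10.0.0.3")
print(oracle.lookup(42, "192.0.2.7"))  # 10.0.0.3
```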

> Another example: "traditional / legacy" DCs may require the
> transport of unmodified ethernet frames; large scale/low cost DCs do
> not have any use for ethernet headers since applications are not
> allowed to send / receive non IP packets.


I believe it is perfectly reasonable for an overlay to provide two
almost identical services: an L2 service (which preserves Ethernet
frames at an e2e level) and an L3 service, for those cases where a
deployment is happy with IP-only emulation.

While there clearly are some differences in those two service models,
I think both can be accommodated by the same NVO3 framework (and
problem statement). Indeed, that is IMO very much a goal of this
WG. Earlier versions of the charter had explicit text about this,
which was later removed. But I see no reason why it should not
continue to be a goal of this WG, even though it is not explicitly
called out in the current charter.
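As a rough illustration of how one framework can offer both service models, the difference can be confined to what the encapsulation carries: the full tenant Ethernet frame for the L2 service, or just the IP packet for the L3 service. This is a sketch under assumed names, not anything specified in the drafts.

```python
# Illustrative sketch: one encapsulation path, two service models.
# "l2" preserves the tenant Ethernet frame end to end; "l3" assumes
# IP-only tenants and strips the Ethernet framing.

ETH_HEADER_LEN = 14  # destination MAC + source MAC + EtherType

def encapsulate(service, frame):
    if service == "l2":
        return {"service": "l2", "inner": frame}  # full Ethernet frame
    if service == "l3":
        return {"service": "l3", "inner": frame[ETH_HEADER_LEN:]}  # IP packet only
    raise ValueError(f"unknown service model: {service}")

frame = b"\x00" * ETH_HEADER_LEN + b"IP-PACKET"
print(encapsulate("l2", frame)["inner"] == frame)         # True
print(encapsulate("l3", frame)["inner"] == b"IP-PACKET")  # True
```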

Thomas

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3