Hi,

While I agree with all of the comments Bruno made below, I have a few
somewhat higher-level questions about the applicability of this work.


1. Assume I have a non-blocking Clos DC fabric. Why would I want to
complicate my life by breaking full ECMP, rather than utilizing it as
efficiently as possible?

Two use cases are known: link/node "overload" and weighted ECMP. But a
solution for both is already provided in draft-lapukhov-bgp-sdn-00.
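For concreteness, weighted ECMP amounts to little more than biasing the flow-hash across the available next hops. A minimal sketch (the next-hop names and weights here are hypothetical, not taken from either draft):

```python
import hashlib

def pick_next_hop(flow_key: str, next_hops: dict) -> str:
    """Pick a next hop for a flow, honoring per-hop weights.

    next_hops maps a next-hop name to an integer weight. Each weight
    unit becomes one slot in a hash-indexed table, so a hop with
    weight 2 attracts roughly twice the flows of a hop with weight 1.
    """
    slots = [hop for hop, weight in next_hops.items() for _ in range(weight)]
    digest = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16)
    return slots[digest % len(slots)]

# Example: drain part of the traffic away from an overloaded uplink
# by lowering its weight (hypothetical spine names).
hops = {"spine1": 2, "spine2": 2, "spine3": 1}  # spine3 is "overloaded"
nh = pick_next_hop("10.0.0.1:49152->10.1.0.1:443/tcp", hops)
```

The same flow key always hashes to the same next hop, so ordering within a flow is preserved while the aggregate split follows the weights.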


2. Assume my tenant virtualization service is overlay-based, and the same
goes for appliances and storage clusters. Why is it the transport network's
role to steer packets between my services, rather than a pure overlay
solution (for example, draft-rfernando-bess-service-chaining-00 or anything
similar)?

Proper prefix advertisement already provides a sufficient mechanism to
force flows through the required services.

So if my overlays start on the compute nodes and are orchestrated by
OpenStack plus a network overlay, how would the transport be able to
properly guide the flows, let alone apply source-based headers to the
packets involved?


3. Since we are making the case for an eBGP-based DC fabric, it seems to me
that we can easily use the various BGP traffic-engineering methods, well
known to ISPs and used every day around the globe, to switch traffic to
particular destinations along the paths the operator requires. Simple use
of LPM + eBGP policy gives so much control on shipping hardware today that
I find it really puzzling why we would load a new operational burden on
top.
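To spell out the LPM point: advertising a more-specific prefix toward a different next hop is enough to steer a subset of traffic, since longest-prefix match always wins. A toy lookup illustrating this (the RIB contents and spine names are made up for illustration):

```python
import ipaddress

def lpm_lookup(dest: str, rib: dict):
    """Return the next hop for the longest prefix matching dest.

    rib maps prefix strings to next-hop names. A real FIB would use a
    trie; a linear scan is enough to show the steering behavior.
    """
    addr = ipaddress.ip_address(dest)
    best = None
    for prefix, next_hop in rib.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

# Hypothetical example: the /8 aggregate goes via spine1, but injecting
# a /16 more-specific (e.g. via eBGP policy) pulls that slice to spine2.
rib = {
    "10.0.0.0/8": "spine1",
    "10.1.0.0/16": "spine2",
}
```

Here `lpm_lookup("10.1.2.3", rib)` selects spine2 while the rest of 10/8 stays on spine1, which is exactly the per-destination control the eBGP policy knob buys you on existing hardware.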


Best,
R.


On Sun, Nov 16, 2014 at 7:12 PM, Bruno Rijsman <[email protected]>
wrote:

>
> See >>> below for some comments on
> draft-filsfils-spring-segment-routing-msdc-00
>
> -- Bruno
>
_______________________________________________
spring mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/spring