Hi Andrew, indeed I wrote STP isolation, by which I mean keeping STP inside each DC.
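As far as I understand, OTV does not forward STP BPDUs across the overlay, so each DC keeps its own spanning-tree root, which is exactly that kind of isolation. A minimal N7K sketch of the idea (the site VLAN, site identifier, multicast groups and join interface below are assumed values, not a tested config):

feature otv
otv site-vlan 99
otv site-identifier 0000.0000.0001
!
interface Overlay1
  ! uplink toward the DCI/WAN
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs stretched between the DCs
  otv extend-vlan 100-200
  no shutdown

Each site would get its own otv site-identifier, and only the extended VLANs are carried over the overlay while STP stays local to each DC.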
I'm wondering, since OTV is not an IEEE standard, if I understood it correctly... So what is your suggestion for the LAN and WAN devices? For the WAN I understand 6800 or N7K? And for the LAN?

Rgds

Sent with Mobile

-------- Original message --------
From: Andrew Miehs <[email protected]>
Date:
To: R LAS <[email protected]>
Cc: [email protected]
Subject: Re: [c-nsp] Best design can fit DC to DC

Don't know if you really want L2 spanning across 35 km... The QFX5100s are an extremely cost-effective solution. If you want MPLS you would need C6800s or N7Ks, possibly using OTV - this will become very pricey very quickly...

Sent from a mobile device

> On 1 Nov 2014, at 3:43, R LAS <[email protected]> wrote:
>
> Hi all,
> a customer of mine is thinking of renewing the DC infrastructure and the
> interconnection between the main (DC1) and secondary (DC2) data centres,
> with the possibility of adding another (DC3) in the future.
>
> The main goals are: sub-second convergence in case of a single fault of any
> component between DC1 and DC2 (not DC3), the possibility to extend L2
> and L3 among the DCs, STP isolation among the DCs, and server-facing ports
> at 1/10 Gb/s Ethernet speed.
>
> DC1 and DC2 are 35 km apart; DC3 is around 1000 km away from DC1 and DC2.
>
> The customer would like to see a design with Cisco or Juniper and decide at the end.
>
> Talking about Juniper, my idea was to build an MPLS interconnection with
> MX240 or MX104 in VC between DC1 and DC2 (tomorrow it would be easy to add
> DC3) and to use QFX in a virtual chassis fabric configuration.
>
> And if you would go with Cisco, what would you propose in this scenario?
>
> Rgds
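On the quoted Juniper idea (MX240/MX104 doing MPLS between DC1 and DC2, QFX in a virtual chassis fabric inside each DC), the L2 extension itself could be as simple as a Martini l2circuit between the two MX routers. A minimal Junos sketch for the DC1 side, where the interface names, addresses, neighbor loopback and VC ID are all assumptions:

interfaces {
    ge-0/0/1 {
        description "attachment circuit toward the local QFX fabric";
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
    ge-0/0/0 {
        description "35 km core link toward DC2";
        unit 0 {
            family inet {
                address 10.0.0.1/30;
            }
            family mpls;
        }
    }
}
protocols {
    ospf {
        area 0.0.0.0 {
            interface ge-0/0/0.0;
            interface lo0.0 {
                passive;
            }
        }
    }
    ldp {
        # LDP on the core link and loopback for pseudowire signalling
        interface ge-0/0/0.0;
        interface lo0.0;
    }
    mpls {
        interface ge-0/0/0.0;
    }
    l2circuit {
        # DC2 MX loopback (assumed)
        neighbor 192.0.2.2 {
            interface ge-0/0/1.0 {
                virtual-circuit-id 100;
            }
        }
    }
}

One caveat: a plain pseudowire like this is transparent to BPDUs, so if STP isolation is a hard requirement you would still need BPDU filtering or blocking on the attachment ports (or a VPLS/EVPN design that handles it), whereas OTV gives that behaviour by default.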
