Re: [j-nsp] small multitenant datacenter
Ryan Goldberg rgoldb...@compudyne.net writes:

> Do you see an issue with blowing up ex4200s with all this ospf and
> vrrp? I'm labbing tomorrow and will try to get the boxes to thrash.
> From a routing table size POV I'm not worried (many customers have no
> extra routes, lots have 4-6, a handful have as many as 30 or 40), but
> I'm a little concerned all those processes might upset the RE if
> things get flappy. I can handle a little bump, but if they just freak
> out, that wouldn't be good.

I do not really have much experience with the EX series and layer 3. I
let the MX80s do the VRRP and OSPF, which has not been entirely smooth
sailing lately.

> Good thought. Can you hook up L3 addresses to the inner tags on EX
> boxes? I'll have to play with that.

No, the EX4200 would have to handle a dual tag push, and the hardware
can't do that. Rumour has it that the EX4500 and EX4550 have the
necessary hardware, but even if that turns out to be true, their
software can't do it (yet?).

/Benny

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
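[For context, terminating L3 on the inner tag of a Q-in-Q pair (the
"dual tag" case Benny describes) is the kind of thing MX-class hardware
handles with flexible-vlan-tagging. A minimal sketch only -- the
interface name, VLAN IDs, and address below are invented examples, not
config from this thread:]

```
interfaces {
    ge-1/0/0 {
        flexible-vlan-tagging;
        unit 100 {
            /* L3 bound to outer tag 2000 + inner tag 100;
               all values here are hypothetical examples */
            vlan-tags outer 2000 inner 100;
            family inet {
                address 192.0.2.1/24;
            }
        }
    }
}
```

[The EX4200 has no equivalent of this, which is the point of Benny's
answer.]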
Re: [j-nsp] small multitenant datacenter
Ryan Goldberg rgoldb...@compudyne.net writes:

> I will re-review what we may need/what may be lacking. It seems the
> 3300s are catching up, and we have had good luck in small
> single-tenant deployments (3 VMware hosts + SAN), using them strictly
> as stacked L2 switches, generally in place of a pair of 3750Xs or
> 2960s.

I avoid the EX3300 because it requires a feature license for Q-in-Q
tunneling. Even HP has stopped doing that.

Personally, I find it confusing that feature licenses are so different
across the EX series. It is probably unavoidable that not all hardware
is equally capable feature-wise, but checking which features need a
license on which box is a bit of a nightmare. Not as bad as a certain
other vendor, but still.

/Benny
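[On EX boxes where Q-in-Q is supported and licensed, the tunneling
Benny mentions is typically enabled per S-VLAN. A rough sketch under
assumptions -- the VLAN name, S-VLAN ID, and port below are invented:]

```
# Hypothetical EX-style Q-in-Q config; all names/IDs are examples only
set vlans CUST-A-SVLAN vlan-id 2000
set vlans CUST-A-SVLAN dot1q-tunneling customer-vlans 1-4094
set vlans CUST-A-SVLAN interface ge-0/0/10.0
set ethernet-switching-options dot1q-tunneling ether-type 0x8100
```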
Re: [j-nsp] small multitenant datacenter
Thanks Benny.

>> DMVPN boxes (cisco x8xx), and MPLS boxes (MX80s). All those L3
>> addresses are in a customer-specific routing-instance (or VRF, on
>> cisco) and there's a per-customer ospf instance keeping things
>> knitted together.

> That design is somewhat similar to one that I am familiar with; it
> all looks sane.

Do you see an issue with blowing up ex4200s with all this ospf and
vrrp? I'm labbing tomorrow and will try to get the boxes to thrash.
From a routing table size POV I'm not worried (many customers have no
extra routes, lots have 4-6, a handful have as many as 30 or 40), but
I'm a little concerned all those processes might upset the RE if things
get flappy. I can handle a little bump, but if they just freak out,
that wouldn't be good.

> Will your design hit any problems if a customer already uses
> 10.144.x?

Yeah. I'd have to pick some other subnet for that customer, which would
break the tidiness of everything, but so be it.

> In a green-field deployment today I would move all the special
> traffic to IPv6 and only care about public IP addresses in IPv4. The
> MPLS would still move customer traffic with IPv4 private IPs, and the
> hosted servers and firewalls would still have private IPv4 addresses,
> but all monitoring traffic would be IPv6.

Good thought.

> One thing was different in the design: the equivalents of your VLANs
> 2000-2999 and 3000-3999 are carried inside q-in-q, to make it
> possible to eventually grow beyond 4000 customers and to ensure that
> overlap between customer VLANs and other VLANs would not cause
> problems.

Good thought. Can you hook up L3 addresses to the inner tags on EX
boxes? I'll have to play with that.

Ryan
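[The per-customer routing-instance plus OSPF arrangement described
above might look roughly like this on the MX80 side. Everything here --
instance name, interface, RD/target, and area -- is an invented
example, not config from the thread:]

```
routing-instances {
    CUST-A {
        instance-type vrf;
        interface ge-1/0/0.2001;    /* customer-facing L3 subinterface */
        route-distinguisher 65000:2001;
        vrf-target target:65000:2001;
        protocols {
            ospf {
                area 0.0.0.0 {
                    interface ge-1/0/0.2001;
                }
            }
        }
    }
}
```

[On the EX4200 side, the same idea would presumably use
instance-type virtual-router (VRF-Lite), since there is no MPLS there
to carry the vrf-target anywhere.]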
Re: [j-nsp] small multitenant datacenter
On Mon, Dec 3, 2012 at 11:06 PM, Ryan Goldberg rgoldb...@compudyne.net wrote:
> Do you see an issue with blowing up ex4200s with all this ospf and
> vrrp? I'm labbing tomorrow and will try to get the boxes to thrash.

I'm interested to know your thoughts on RE performance after you have
labbed this scenario. I've read that the EX4200 supports 256 VRF-Lite
instances, but like you, I imagine the control plane may become
sluggish before it gets to that point.

I noticed you include the EX3300 in your design. I also considered this
switch and decided against it once I read the feature table. I would
like to use them once additional features are working, but right now it
lacks critical items like storm-control.
https://www.juniper.net/techpubs/en_US/release-independent/junos/topics/concept/ex-series-software-features-overview.html

Also, you mention both EX4200 virtual-chassis and VRRP. I think it is
unusual to choose BOTH V-C and STP+VRRP as redundancy mechanisms,
because you get the worst of both worlds in terms of potential failure
modes. For example, you get the unknown-unicast problems associated
with ingress traffic arriving on the VRRP non-master and potentially
being flooded out many ports of that switch, because it may never learn
the MAC addresses of downstream servers while it is the non-master. You
also get any problems that you might encounter with virtual-chassis,
meaning bugs.

I think you should pick one: V-C or STP+VRRP, depending on which
technology you are most comfortable with. Mixing the two is IMO not
smart, not because of any unique problems that arise from this
combination, but simply because you have decided to expose yourself to
two sets of gotchas without necessarily gaining anything.

My experience with EX4200 virtual-chassis has been extremely good since
Junos 10.4. Before then, we had problems with file system corruption on
the EX4200, but this was fixed in 10.4. I have not had any serious
stacking-specific bugs since about Junos 9.5.

I rely totally on EX4200 virtual-chassis for redundancy in many
environments, and am very pleased with the results.

Good luck with your project. I hope my comments are constructive and
helpful!

--
Jeff S Wheeler j...@inconcepts.biz
Sr Network Operator / Innovative Network Concepts
Re: [j-nsp] small multitenant datacenter
Thanks Jeff-

>> Do you see an issue with blowing up ex4200s with all this ospf and
>> vrrp? I'm labbing tomorrow and will try to get the boxes to thrash.

> I'm interested to know your thoughts on RE performance after you have
> labbed this scenario. I've read the EX4200 supports 256 VRF-Lite
> instances, but like you, I imagine the control-plane may become
> sluggish before it gets to that point.

I also saw the vrf limits in the docs
http://www.juniper.net/techpubs/en_US/junos11.1/topics/concept/bridging-vrf-ex-series.html
but if you look at 11.2
http://www.juniper.net/techpubs/en_US/junos11.2/topics/concept/bridging-vrf-ex-series.html
poof, limits gone?

I'm working remote right now and just have one MX80 plumbed to one
4200. I brought up 300 virtual-routers with ospf between them. During
the commit on the 4200 that initially fired up the ospf instances, load
went to about 6.5 and then fell off quickly. From a load of .05 to 6.5
and back to .05 was under 2 minutes (roughly). Each of the 300 vrfs has
just the connected routes between the boxes and then another route on
the other side of the MX80. So I loaded 12k static routes onto the MX80
into one of the vrfs, and there was almost no noticeable impact on the
4200; cpu climbed to about 50% for maybe 20 seconds. I then dropped the
link briefly between the boxes, and the impact to the 4200 was about
the same, 50% or so for maybe 20-30 seconds. I'll have more time to
play tomorrow, and will report back findings.

> I noticed you include the EX3300 in your design. I also considered
> this switch and decided against it once I read the feature table. I
> would like to use them once additional features are working, but
> right now it lacks critical items like storm-control.
> https://www.juniper.net/techpubs/en_US/release-independent/junos/topics/concept/ex-series-software-features-overview.html

I will re-review what we may need/what may be lacking. It seems the
3300s are catching up, and we have had good luck in small single-tenant
deployments (3 VMware hosts + SAN), using them strictly as stacked L2
switches, generally in place of a pair of 3750Xs or 2960s.

> Also, you mention both EX4200 virtual-chassis and VRRP. I think it is
> unusual to choose BOTH V-C and STP+VRRP as redundancy mechanisms,
> because you ...
> I think you should pick one: V-C or STP+VRRP, depending on which ...

This has been causing me loss of sleep. On the one hand, I like
independent brains and feel that hinging everything on IRF, VC, VSS,
etc., just puts you in a riskier spot, with invisible magic keeping
things afloat.

> My experience with EX4200 virtual-chassis has been extremely good
> since Junos 10.4. Before then, we had problems with file system
> corruption on the EX4200, but this was fixed in 10.4. I have not had
> any serious stacking-specific bugs since about Junos 9.5.
>
> I rely totally on EX4200 virtual-chassis for redundancy in many
> environments, and am very pleased with the results.

But like you, I've had really good luck with the 4200s. In fact, I have
had zero issues. We didn't start getting them till 10.4, so I think we
escaped some initial ick. The invisible magic might be better than a
relatively delicate and somewhat complex configuration.

> Good luck with your project. I hope my comments are constructive and
> helpful!

Very much so. As I play with the failure modes, and try to balance
performance and manageability with meeting the various business goals
and constraints, I think it will be a bunch of fun.

Thanks-
Ryan
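[One of the ~300 lab instances described above might look like the
following sketch on the EX4200. The instance name, RVI, and area are
invented, and in practice the 300 near-identical stanzas would be
generated with apply-groups or a script rather than typed by hand:]

```
routing-instances {
    VR-001 {
        instance-type virtual-router;   /* VRF-Lite, no MPLS needed */
        interface vlan.2001;            /* RVI toward the MX80 */
        protocols {
            ospf {
                area 0.0.0.0 {
                    interface vlan.2001;
                }
            }
        }
    }
}
```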