> More or less. You can also define an "mdt data" group range. When
> traffic for a particular (s,g) pair exceeds a configurable threshold, a
> group is picked from this range and that (s,g) transitions to the new
> group in the default VRF, and PEs with receivers join this group. This
> means you don't flood high-bandwidth groups to every PE - just to PEs
> which are interested.
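(For the archives, those are the mdt knobs under the VRF - rough sketch only, with the RD, group ranges and threshold invented for illustration:

    ip multicast-routing
    ip multicast-routing vrf CUSTOMER-A
    !
    ip vrf CUSTOMER-A
     rd 65000:1
     mdt default 239.192.0.1
     mdt data 239.192.1.0 0.0.0.255 threshold 100

- i.e. any (s,g) in the VRF running over ~100 kbps gets shifted off the default MDT onto a group picked from 239.192.1.0/24.)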
I just noticed this in SXI (only just getting there - I'm trying to read
through the release notes during morning exercise... could take weeks :( ).
This, however, frightens me, because during that transition it seems as
though you're bound to drop a packet somewhere. That's OK if you're doing
IPTV or some other multimedia streaming, but for financial market data
it's a no-no of the highest order.

> > Plus, GRE encap with MPLS means a recirc through the EARL so you pay the
> > latency penalty twice plus the extra load. And you need jumbo frames to
> > encap at full standard 1500 MTU.

> I was under the impression that there's no recirc in this case, but I
> could be wrong and can't find a reference.

You may be right - GRE for tunnels requires a recirc, GRE for MVPN might
not. IIRC, the recirc in the tunnel case is because the un-encap'ed
packet has to be treated as a new inbound packet on the tunnel "SVI"; for
the MDT they may be able to shortcut that.

> As for MTU - if you're running MPLS you presumably have jumbos enabled
> anyway?

That doesn't mean your WAN provider gave you enough MTU to fit both
headers, though. (One provider I work with has all long-haul capped at
1546, presumably a holdover from Fast Ethernet days.)

> > Hence, don't do it if you don't have a really good reason to. I don't,
> > so I don't.

> Well, the "good reason" for doing this is if you want multicast in MPLS
> L3VPNs, surely ;o)

I don't want it that badly. :)

-bacon
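P.S. For anyone doing the MTU arithmetic at home - back-of-envelope only,
assuming plain Rosen-style GRE with no GRE key/sequence options:

    customer packet           1500
    outer IPv4 header          +20
    GRE header                  +4
    ------------------------------
    core IP MTU needed        1524

Each MPLS label on the unicast side is another 4 bytes (1508 for a
two-label stack), and none of the above counts L2 framing - so whether a
1546 cap actually clears it depends on which layer your provider is
measuring at.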