On Mon, 15 Sept 2025 at 20:11, Havard Eidnes <[email protected]> wrote:
> I think that depends on which variant of IPFIX you choose to use
> on the PTXes and MXes.
>
> Juniper has implemented what they refer to as "inline
> monitoring", which I guess is somewhat of a cross between sFlow
> and IPFIX, as it includes some payload bytes in each sample.

Inline monitoring just means that it is done in hardware. On MX,
"hardware" means Trio (both caching and export); on PTX, export
happens on the LC CPU. This means that on MX an export only has
visibility into $this Trio, whereas on PTX an export has visibility
across all Paradise/Triton chips.

> On the PTX10001-36MR, we found that this collects a (small?)
> number of samples in a single IPFIX export packet. However, when
> we tried applying the same config on our MX routers (at least in
> part in an effort to unify our configurations across platforms,
> IMHO increasingly a lost cause), we ended up with just 1 record
> in each exported IPFIX packet from the MX routers, and when we
> tried reporting this as a bug, we got back "nope, this is as
> designed". Go figure.

Mr. Kostin provided me with some data, and they are also seeing a low
number of flows per packet on MX. Similar to us, around 3. I can't
explain why so few, since even a single Trio should have visibility
into more flows at export time.

I'm now wondering whether there is actually any way to configure the
inline export packet size, and whether they are simply being very
conservative and never export more than 576B, since obviously the
hardware won't react to a PMTUD ICMP message and reduce its packet
size on export. Or maybe it just prefers to trigger small exports
often rather than big exports rarely. (See the first sketch below for
the arithmetic.)

> Ref.
> https://community.juniper.net/blogs/david-roy/2024/03/01/from-sflow-to-imon-sampling-on-mx10k-platforms

This incorrectly equates two separate things: inline export and IPFIX
315. You can do inline export without IPFIX 315. IPFIX 315 is raw
packet headers, like sFlow. You don't need to turn on IPFIX 315 to do
it inline. (See the second sketch below.)
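To put a number on the 576B theory, here is some back-of-the-envelope
arithmetic (plain Python; the record sizes are my assumptions, since a
typical IPv4 flow record runs somewhere around 60-130B depending on
the template):

  # How many IPFIX flow records fit in one export packet if the
  # exporter caps itself at 576B (the classic IPv4 minimum MTU,
  # which needs no PMTUD)?
  IPV4_HEADER = 20       # no options
  UDP_HEADER = 8
  IPFIX_MSG_HEADER = 16  # version, length, export time, seq, domain
  IPFIX_SET_HEADER = 4   # set id + set length

  def records_per_packet(packet_cap, record_size):
      payload = (packet_cap - IPV4_HEADER - UDP_HEADER
                 - IPFIX_MSG_HEADER - IPFIX_SET_HEADER)
      return payload // record_size

  for rec in (60, 90, 130):
      print(rec, "B records ->", records_per_packet(576, rec), "per packet")
  # 60 B records -> 8 per packet
  # 90 B records -> 5 per packet
  # 130 B records -> 4 per packet

Even a hard 576B cap would give roughly 4-8 records per packet for
plausible record sizes, not ~3, so the cap alone doesn't explain what
we see; a timer that fires small exports often would.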
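On the 315 point, to make the distinction concrete: "IPFIX 315"
refers to IANA information element 315, dataLinkFrameSection
(RFC 7133), which carries the first bytes of the sampled frame raw;
that element is what makes it sFlow-like, not the inline export path.
A minimal sketch (plain Python, nothing Juniper-specific) of walking
the sets in an IPFIX message:

  import struct

  def walk_ipfix_sets(msg):
      """Yield (set_id, body) for each set in one IPFIX message."""
      version, length, _time, _seq, _domain = struct.unpack_from("!HHIII", msg, 0)
      assert version == 10, "not an IPFIX message"
      off = 16  # fixed message header size
      while off < length:
          set_id, set_len = struct.unpack_from("!HH", msg, off)
          yield set_id, msg[off + 4 : off + set_len]
          off += set_len

  # Set id 2 = template set, 3 = options template set, >= 256 = data
  # set. Whether a data set carries raw frame sections (IE 315) or
  # ordinary decoded flow fields depends entirely on the template it
  # references; the export path (inline or not) is orthogonal.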
--
  ++ytti
_______________________________________________
juniper-nsp mailing list
[email protected]
https://puck.nether.net/mailman/listinfo/juniper-nsp