On Mon, 2 Oct 2023 at 09:14, Hank Nussbacher via cisco-nsp <[email protected]> wrote:
> Does this make sense to go 1:1 which will only increase the number of
> Netflow record to export? Everyone that does 1:1000 or 1:10000
> sampling, do you also seen a discrepancy between Netflow stats vs SNMP
> stats?

Both 1:1000 and 1:10000 make NetFlow an expensive sFlow: almost every record you export covers exactly one packet. You spend a lot of resources creating and storing flow state, and later exporting it, when you only ever hit the flow once. This is because people have run the same configuration for decades while traffic has grown exponentially, so the probability of sampling two packets of the same flow has dropped exponentially.

As the amount of traffic grows, sampling needs to become more and more aggressive to retain the same resolution. Cache-based in-line NetFlow is therefore becoming massively more expensive over time and is likely dead in the water; it will be replaced by specialised in-line tap devices for the few who can actually justify the cost. Juniper has realised this, and the PTX no longer uses a cache at all, but exports immediately after sampling.

IPFIX has newer sampling entities which allow you to communicate "every N packets, sample C packets". This would let you ensure that once you fire sampling/export, you sample enough packets to fill the MTU on export, for an ideal balance of resource use and data density. Again entirely without a cache, as the cache does nothing unless sampling is very aggressive.

--
  ++ytti
_______________________________________________
cisco-nsp mailing list
[email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
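A back-of-the-envelope sketch of the single-packet-record claim above. With 1:N sampling, the number of sampled packets in a flow of P packets is roughly Binomial(P, 1/N), so the expected packets per *exported* record is E[X | X >= 1]. The flow size of 50 packets is a hypothetical illustration, not a figure from the thread:

```python
def expected_pkts_per_record(flow_pkts: int, rate: int) -> float:
    """Expected sampled packets per exported flow record,
    i.e. E[X | X >= 1] where X ~ Binomial(flow_pkts, 1/rate)."""
    p = 1.0 / rate
    mean = flow_pkts * p                 # E[X]
    p_none = (1.0 - p) ** flow_pkts      # P(flow never sampled at all)
    return mean / (1.0 - p_none)

# Hypothetical short flow of 50 packets at the two rates discussed:
for rate in (1000, 10000):
    print(rate, round(expected_pkts_per_record(50, rate), 3))
# -> 1000 1.025
# -> 10000 1.002
```

In other words, at either rate a flow that gets sampled at all almost always contributes exactly one packet to its record, which is why the cache adds cost without adding data density.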
