> What I'm trying to figure out is this: do I need to try and determine a
> formula for the sizes that should be used for main heap and stat segment
> based on X number of routes and Y number of worker threads? Or is there a
> downside to just setting the main heap size to 32G (which seems like a
> number that is unlikely to ever be exhausted sans memory leaks)?

I do not think that would be a good idea:
 - it depends upon overcommit configuration: 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/vm/overcommit-accounting.rst
 - under the default overcommit setting ("heuristic") this would prevent small 
configs from running VPP by default: think developer VMs or smaller cloud 
instances (e.g. AWS c5n.large instances have 4GB), which are pretty common

Maybe having an (optional) dynamically growing heap could be a better option?

ben