Your first connection limit was 20 million, now it is 300 million - so I am not 
sure what your requirements are, nor what you are optimizing/testing for.

The memory consumption of bihash depends on how the connections are hashed into 
the secondary buckets. If you know that precisely, then you can calculate the 
memory requirements precisely.

If you want the worst case, count one connection per bucket, but that will 
probably be quite a lot.
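
To make that concrete, here is a rough sizing sketch in plain C. It assumes the 
16_8 bihash flavor used by the ACL session table, a 24-byte key/value pair, and 
4 KVPs per bucket page (BIHASH_KVP_PER_PAGE), so a minimal bucket page is 96 
bytes; pages double on every split-and-rehash, and bucket headers plus heap 
overhead are ignored. The 1M-bucket figure is just an illustration, not the 
plugin's default. As a sanity check, the nbytes=98304 allocation in the 
backtrace below is exactly 2^10 pages * 4 KVPs * 24 bytes, i.e. one bucket that 
has already split ten times.

#include <stdio.h>
#include <stdint.h>

#define KVP_SIZE     24ULL   /* sizeof (clib_bihash_kv_16_8_t), assumed */
#define KVP_PER_PAGE  4ULL   /* BIHASH_KVP_PER_PAGE of the 16_8 template, assumed */

/* bytes used by one bucket holding 'entries' colliding keys;
   split_and_rehash doubles the page count until everything fits */
static uint64_t
bucket_bytes (uint64_t entries)
{
  uint64_t pages = 1;
  while (pages * KVP_PER_PAGE < entries)
    pages <<= 1;
  return pages * KVP_PER_PAGE * KVP_SIZE;
}

int
main (void)
{
  uint64_t n_conn = 300000000ULL;    /* 300M sessions, as in the report */
  uint64_t n_buckets = 1ULL << 20;   /* hypothetical 1M-bucket table */

  /* worst case as above: one connection per bucket, each burning a whole page */
  printf ("worst case : ~%llu GB\n",
          (unsigned long long) ((n_conn * KVP_PER_PAGE * KVP_SIZE) >> 30));

  /* perfectly even spread of the connections across the buckets */
  printf ("even spread: ~%llu GB\n",
          (unsigned long long) ((n_buckets * bucket_bytes (n_conn / n_buckets)) >> 30));
  return 0;
}

With those example numbers this prints roughly 26 GB for the worst case and 
12 GB for a perfectly even spread, which already suggests that a 16G main heap 
is unlikely to hold 300M sessions.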

Barring exact knowledge of your traffic, treat the connection count limit as a 
warning sign posted before the brick wall of available memory that you are 
driving towards at high speed.

If your memory is fixed, keep halving the connection count limit until you stop 
running out of memory; if you need to adhere to a certain connection count, 
keep doubling the memory until you reach that count without running out of 
memory.
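
For example, one step in each direction, using the same stanzas as in your 
quoted config below (the numbers are placeholders, not recommendations):

keep the heap, halve the sessions:

memory {
 main-heap-size 16G
}
acl-plugin {
 connection count max 150000000
}

or keep the sessions and double the heap:

memory {
 main-heap-size 32G
}
acl-plugin {
 connection count max 300000000
}

Repeat whichever step applies until the crash in the backtrace below no longer 
shows up under your expected load.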

There are also bihash parameters you can play with - but given that they have 
performance implications, a reader curious enough ought to be able to trace 
them from the code.
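
If you do want to dig, the compile-time knobs referenced in the sketch above 
sit next to the templates themselves, for example (names as of a recent VPP 
tree, please verify against your checkout):

/* src/vppinfra/bihash_16_8.h */
#define BIHASH_TYPE _16_8
#define BIHASH_KVP_PER_PAGE 4

and the per-table bucket count and memory size are whatever nbuckets and 
memory_size the ACL plugin passes to clib_bihash_init_16_8 when it creates its 
session table.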

--a

> On 31 Jul 2021, at 08:22, NetHappy <nethappys...@gmail.com> wrote:
> 
> Hi,
> 
> I have updated the config with 16G and session count 300000000:
> memory {
>  main-heap-size 16G
> }
> acl-plugin {
>  connection count max 300000000
> }
> But I am still seeing the same crash I reported earlier, though now it occurs less often.
> Is there any calculation as to how many sessions should be attempted for 16 GB 
> of memory? What is the max suggested session count for a 64 GB system?
> 
> Here is the crash dump. 
> #8  os_out_of_memory () at /root/code/vpp/src/vppinfra/unix-misc.c:221
> #9  0x00007ff9f0fc56de in clib_mem_alloc_aligned_at_offset (os_out_of_memory_on_failure=1, align_offset=0, align=64, size=98368) at /root/code/vpp/src/vppinfra/mem.h:243
> #10 clib_mem_alloc_aligned (align=64, size=98368) at /root/code/vpp/src/vppinfra/mem.h:263
> #11 alloc_aligned_16_8 (nbytes=98304, h=0x7ff7f05fd998 <acl_main+728>) at /root/code/vpp/src/vppinfra/bihash_template.c:55
> #12 value_alloc_16_8 (h=0x7ff7f05fd998 <acl_main+728>, log2_pages=<optimized out>) at /root/code/vpp/src/vppinfra/bihash_template.c:462
> #13 0x00007ff9f147eefd in split_and_rehash_16_8 (h=h@entry=0x7ff7f05fd998 <acl_main+728>, old_values=old_values@entry=0x7ff92e595240, old_log2_pages=old_log2_pages@entry=9, new_log2_pages=new_log2_pages@entry=10) at /root/code/vpp/src/vppinfra/bihash_template.c:581
> #14 0x00007ff9f148096f in clib_bihash_add_del_inline_with_hash_16_8 (arg=0x0, is_stale_cb=0x0, is_add=1, hash=<optimized out>, add_v=0x7fc01011cb48, h=0x7ff7f05fd998 <acl_main+728>) at /root/code/vpp/src/vppinfra/bihash_template.c:893
> #15 clib_bihash_add_del_inline_16_8 (arg=0x0, is_stale_cb=0x0, is_add=1, add_v=0x7fc01011cb48, h=0x7ff7f05fd998 <acl_main+728>) at /root/code/vpp/src/vppinfra/bihash_template.c:985
> #16 clib_bihash_add_del_16_8 (h=h@entry=0x7ff7f05fd998 <acl_main+728>, add_v=add_v@entry=0x7fc01011cb48, is_add=is_add@entry=1) at /root/code/vpp/src/vppinfra/bihash_template.c:992
> #17 0x00007ff7f01618ae in acl_fa_add_session (current_policy_epoch=<optimized out>, p5tuple=0x7ff7f43f70f8, now=27359859845375580, sw_if_index=<optimized out>, is_ip6=0, is_input=0, am=<optimized out>) at /root/code/vpp/src/plugins/acl/session_inlines.h:577
> #18 acl_fa_inner_node_fn (reclassify_sessions=1, node_trace_on=0, with_stateful_datapath=1, is_l2_path=1, is_input=0, is_ip6=0, frame=0x7ff96e946c00, node=0x7ff96b0b4d00, vm=0x7ff804f20d80) at /root/code/vpp/src/plugins/acl/dataplane_node.c:1144
>
> Thanks,
> Mahamuda
> 
> 
> 