Hi Mercury, 

So for one nginx worker vpp ends up with about 14k sessions. With more nginx 
workers the number will probably increase. 

As for the mq_try_lock_and_alloc_msg messages, those indicate that the shared 
message queue between vpp and nginx is overwhelmed. From the vcl configuration 
lower in the thread it looks like the defaults are used, and those are not 
enough for high-scale testing. For instance, if close messages are lost, 
connections keep accumulating in nginx until nginx times them out. 

Some config recommendations for a large number of sessions: 

- In vcl.conf, use large segments (e.g., segment-size 8g add-segment-size 8g) 
and increase the message queue length (e.g., event-queue-size 500000)
- On the vpp side, increase the length of vpp's worker message queues (e.g., 
under the session stanza in startup.conf add event-queue-length 100000). For 
older vpp releases you also have to specify the size of the segment backing 
those message queues, e.g., evt_qs_seg_size 200m (a sketch of both files 
follows below)
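
To make that concrete, here is a rough sketch of the two files with the values 
above. The heapsize, fifo sizes and api socket are just carried over from the 
vcl.conf you posted lower in the thread, and the numbers are only examples to 
be tuned to your setup:

vcl {
  heapsize 128M
  segment-size 8g
  add-segment-size 8g
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  event-queue-size 500000
  api-socket-name /run/vpp/api.sock
}

And in startup.conf:

session {
  event-queue-length 100000
  # only needed on older vpp releases:
  # evt_qs_seg_size 200m
}

Both knobs size the message queues that carry the connect/accept/close 
notifications between vpp and nginx, which is what the failed-to-alloc-msg 
warnings point at.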

Could you retry the test with these updated configs? Also, if the issues 
persist, would it be possible to try latest master, as it looks like the 
testing was done with 21.06?

Regards,
Florin

> On Nov 30, 2021, at 12:58 AM, mercury noah <mercury124...@gmail.com> wrote:
> 
> Hi Florin,
> 
> Thanks for your quick response
> 
> There is one nginx worker:
> root@ubuntu:~# ps -ef |grep nginx |grep -v grep
> root       46187   46138  0 16:36 pts/6    00:00:00 nginx: master process 
> /usr/sbin/nginx -c /opt/configs/csit_nginx_cps.conf
> nobody     46189   46187 99 16:36 pts/6    00:00:14 nginx: worker process
> 
> Nginx conf is:
> root@ubuntu:~# cat /opt/configs/csit_nginx_cps.conf
> worker_processes 1;
> master_process on;
> daemon off;
> 
> user root;
> worker_rlimit_core 10000m;
> working_directory /var/log/coredump/;
> 
> worker_rlimit_nofile 10240;
> 
> events {
>     use epoll;
>     worker_connections  10240;
>     accept_mutex       off;
>     multi_accept       off;
> }
> 
> http {
>     access_log off;
>     include mime.types;
>     default_type application/octet-stream;
>     sendfile on;
> 
>     ##RPS test
>     keepalive_timeout 0;
>     # keepalive_requests 1000000;
> 
>     server {
>         listen 12345;
>         root   html;
>         index  index.html index.htm;
>         location /return {
>                 return 204;
>         }
>         location /64B.json {
>                 return 200 '{"status":"success","result":"this is a 64Byte 
> json file test!"}';
>         }
>     }
> }
> 
> Here is "show session output"
> vpp# show session 
> Thread 0: 1 sessions
> Thread 1: 14116 sessions
> 
> In addition, during the test vpp output some messages:
> 1: mq_try_lock_and_alloc_msg:105: failed to alloc msg
> 1: mq_try_lock_and_alloc_msg:105: failed to alloc msg
> 1: mq_try_lock_and_alloc_msg:105: failed to alloc msg
> 
> My partner filed a bug; there is an nginx coredump attached to it.
> His env differs from mine, but the result is almost the same: 
> https://jira.fd.io/browse/VPP-2001
> 
> Regards,
> Mercury
> 
> On Tue, 30 Nov 2021 at 16:15, Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Mercury, 
> 
> VCL sessions are allocated on the heap and given that the size of one session 
> is ~200B, 1M will eat up about 200MB of memory. How many are actually 
> allocated in nginx? Just do “show session” in vpp and that should report how 
> many sessions vpp is tracking. 
> 
> Also, how many workers does nginx come up with? By default vcl limits the 
> number of app workers to 16. More can be configured in vcl.conf with 
> "max-workers <n>” although I’d first recommend running nginx with a lower 
> number of workers with “worker_processes <n>”. 
> 
> Regards,
> Florin
> 
>> On Nov 29, 2021, at 11:22 PM, mercury noah <mercury124...@gmail.com> wrote:
>> 
>> Hi,
>> 
>> I want to use nginx with LD_PRELOAD to act as an http web server (1 nginx 
>> worker and 1 nginx master),
>> VCL_CFG=/etc/vpp/vcl.conf
>> LDP_PATH=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so
>> LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG /usr/sbin/nginx -c 
>> /vsap/configs/vcl.conf
>> 
>> traffic generator is ab,
>> test case is CPS(connections per second), 
>> ab -n 5000000 -q -c 500 http://10.0.0.1/
>> when traffic accumulates to a certain amount, the nginx worker process 
>> crashes,
>> I don't know what's wrong with this environment,
>> 
>> below is some env info:
>> [root@abc ~]# nginx -v
>> nginx version: nginx/1.14.2
>> 
>> vpp# show version 
>> vpp v21.10-rc0~240-ga70b015ce built by root on debian at 2021-08-16T06:11:35
>> 
>> The nginx config file is almost the same as the vsap one:
>> https://github.com/FDio/vsap/blob/master/configs/nginx.conf
>> 
>> vcl config is
>> vcl {
>>   heapsize 128M
>>   rx-fifo-size 4000000
>>   tx-fifo-size 4000000
>>   # app-scope-local
>>   # app-scope-global
>>   api-socket-name /run/vpp/api.sock
>> }
>> 
>> backtrace of nginx is:
>> (gdb) bt
>> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
>> #1  0x00007f31d244d535 in __GI_abort () at abort.c:79
>> #2  0x00007f31d233ecb6 in os_panic () from 
>> /lib/x86_64-linux-gnu/libvppinfra.so.21.10
>> #3  0x00007f31d2346a35 in vec_resize_allocate_memory () from 
>> /lib/x86_64-linux-gnu/libvppinfra.so.21.10
>> #4  0x00007f31d23eb2bc in _vec_resize_inline (numa_id=<optimized out>, 
>> data_align=<optimized out>, header_bytes=<optimized out>, 
>> data_bytes=<optimized out>, length_increment=<optimized out>, v=<optimized 
>> out>) at /root/cx/vpp/src/vppinfra/vec.h:172
>> #5  vcl_session_alloc (wrk=<optimized out>) at 
>> /root/cx/vpp/src/vcl/vcl_private.h:383
>> #6  vppcom_epoll_create () at /root/cx/vpp/src/vcl/vppcom.c:2734
>> #7  0x00007f31d23f0c14 in vcl_session_accepted_handler (ls_index=0, 
>> mp=0x7ffe4a6f61e0, wrk=0x7f31ce1ae840) at /root/cx/vpp/src/vcl/vppcom.c:1779
>> #8  vppcom_session_accept (listen_session_handle=0, 
>> ep=ep@entry=0x7ffe4a6f6310, flags=flags@entry=2048) at 
>> /root/cx/vpp/src/vcl/vppcom.c:1779
>> #9  0x00007f31d240c97e in vls_accept (listener_vlsh=listener_vlsh@entry=0, 
>> ep=ep@entry=0x7ffe4a6f6310, flags=flags@entry=2048) at 
>> /root/cx/vpp/src/vcl/vcl_locked.c:368
>> #10 0x00007f31d2c834d2 in ldp_accept4 (flags=2048, addr_len=0x7ffe4a6f63dc, 
>> addr=0x7ffe4a6f63f0, listen_fd=32) at /root/cx/vpp/src/vcl/ldp.c:2140
>> #11 accept4 (fd=32, addr=0x7ffe4a6f63f0, addr_len=0x7ffe4a6f63dc, 
>> flags=2048) at /root/cx/vpp/src/vcl/ldp.c:2178
>> #12 0x0000562dee1952b6 in ngx_event_accept ()
>> #13 0x0000562dee19fab8 in ?? ()
>> #14 0x0000562dee194a3a in ngx_process_events_and_timers ()
>> #15 0x0000562dee19f170 in ngx_single_process_cycle ()
>> #16 0x0000562dee17222b in main ()
>> 
>> I found that the heapsize influences the result:
>> heapsize: 256M, successful cps: 4869103
>> heapsize: 128M, successful cps: 2341828
>> heapsize: 64M, successful cps: 1100925
>> 
>> it seems like a memory leak,
>> 
>> when I turn the nginx configuration's master_process from on to off (only 1 
>> nginx master) and then test for a longer time (6 min), the nginx web server 
>> seems to be normal and does not crash,
>> 
>> I have tested vsap with LD_PRELOAD and things seem to be the same (nginx 
>> crashes), 
>> 
>> I found the performance test results of vsap LD_PRELOAD on the following 
>> website:
>> https://docs.fd.io/csit/master/report/vpp_performance_tests/hoststack_testing/vsap/index.html
>> and the configurations on the following websites:
>> https://git.fd.io/csit/tree/
>> https://docs.fd.io/csit/master/report/introduction/methodology_hoststack_testing/methodology_vsap_ab_with_nginx.html
>> 
>> I hope someone can help me,
>> Thanks,
>> mercury,
>> 
>> 
>> 
> 
