hansedong commented on issue #9343:
URL: https://github.com/apache/apisix/issues/9343#issuecomment-1516089117
I can only offer a reference point based on my own experience.
When I load test an HTTP server written in Go (which only returns "hello world") directly, the QPS is 89,246; when the same traffic is forwarded through APISIX (the APISIX server has 8 cores and 8 GB RAM), the QPS is 72,765. The concurrency during load testing was 500. From my results, there does not seem to be a dramatic difference between load testing through APISIX and load testing the target server directly.
I think the following points are worth checking:
1. The bandwidth of the APISIX nodes: investigate whether there are any bottlenecks.
2. The kernel parameters of the Linux node running APISIX.

Below are the kernel parameters of my APISIX node, which you can use as a reference (do not apply them directly in production, since network environments differ):
```
kernel.msgmnb = 655360
kernel.msgmax = 65536
kernel.msgmni = 16384
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.macv-host.log_martians = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.bond0.log_martians = 0
net.ipv4.ip_nonlocal_bind = 0
fs.inotify.max_queued_events = 16384000
fs.inotify.max_user_instances = 1280000
fs.inotify.max_user_watches = 8192000
net.core.netdev_max_backlog = 200000
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.nf_conntrack_max = 10485760
net.netfilter.nf_conntrack_max = 10485760
net.netfilter.nf_conntrack_buckets = 655360
net.ipv4.neigh.default.gc_thresh1 = 163840
net.ipv4.neigh.default.gc_thresh2 = 327680
net.ipv4.neigh.default.gc_thresh3 = 500000
net.ipv6.neigh.default.gc_thresh1 = 163840
net.ipv6.neigh.default.gc_thresh2 = 327680
net.ipv6.neigh.default.gc_thresh3 = 500000
kernel.pid_max = 1966080
kernel.threads-max = 2062606
vm.max_map_count = 26214400
net.core.somaxconn = 2621440
net.ipv4.tcp_max_syn_backlog = 3276800
net.ipv4.tcp_max_orphans = 2621440
net.ipv6.conf.all.disable_ipv6 = 1
net.core.rmem_default = 212992
net.core.wmem_default = 212992
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_keepalive_time = 150
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
vm.min_free_kbytes = 262144
vm.vfs_cache_pressure = 200
vm.swappiness = 30
net.ipv4.route.max_size = 5242880
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 6
```
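If you want to experiment with individual parameters from the list above on a test node, `sysctl` applies them at runtime and a drop-in file makes them persistent (the parameter and file name below are just examples):

```shell
# Apply one parameter at runtime (does not survive a reboot)
sudo sysctl -w net.core.somaxconn=2621440

# Persist it: write it to a drop-in file and reload all sysctl config
echo 'net.core.somaxconn = 2621440' | sudo tee /etc/sysctl.d/99-apisix.conf
sudo sysctl --system

# Verify the effective value
sysctl net.core.somaxconn
```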
Additionally, you can try increasing the number of worker_connections by editing config.yaml (these keys live under `nginx_config`):
```
nginx_config:
  max_running_timers: 40960  # increase this if you see "lua_max_running_timers are not enough" errors
  event:
    worker_connections: 655350
```