pavankumar-siripurapu opened a new issue, #11442:
URL: https://github.com/apache/apisix/issues/11442
### Description
Hi,
we have deployed APISIX into our production environment via the apisix/apisix
Helm chart (version 2.7.0), overriding the APISIX image version to 3.8.0.
The resource limits are configured as below, and autoscaling is set to a
minimum of 6 and a maximum of 10 pods.
```
# -- Set pod resource requests & limits
resources:
  limits:
    cpu: 500m
    memory: 2Gi
  requests:
    cpu: 300m
    memory: 2Gi
We have enabled 17 routes, 5 of which use a `vars` condition similar to the
one below.
```
name: route-generic
id: route-generic
uri: /*
vars:
  - - uri
    - '~~'
    - '^\/([a-zA-Z][a-zA-Z])\/([a-zA-Z][a-zA-Z])\/.*'
```
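For reference, the `~~` operator in an APISIX `vars` condition performs a regex match on the variable, so the route above should only match URIs whose first two path segments are exactly two letters each. A minimal Python sketch of the same pattern (the helper name `matches_route` is ours, purely for illustration):

```python
import re

# Same regex as in the route's vars condition; '~~' in APISIX applies
# a regex match against the uri variable.
PATTERN = re.compile(r'^/([a-zA-Z][a-zA-Z])/([a-zA-Z][a-zA-Z])/.*')

def matches_route(uri: str) -> bool:
    """Return True if the URI would satisfy the vars condition above."""
    return PATTERN.match(uri) is not None

print(matches_route("/us/en/home"))   # two 2-letter segments: matches
print(matches_route("/usa/en/home"))  # first segment has 3 letters: no match
```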
When we deployed to production, CPU and memory usage on the pods increased
dramatically almost immediately. Our usual traffic is around 3000~4000
requests per minute, yet CPU and memory usage do not come back down
gradually afterwards.
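A rough back-of-envelope on those figures (assuming traffic is spread evenly across the minimum replica count, both numbers taken from above) shows the per-pod load is quite modest, which makes the sustained CPU/memory growth look more like a leak or pathological behavior than raw load:

```python
# Per-pod request rate, assuming even distribution across the
# minimum of 6 pods (both figures from this report).
requests_per_minute = 4000   # upper end of the observed 3000~4000 rpm
min_pods = 6

per_pod_rps = requests_per_minute / min_pods / 60
print(f"~{per_pod_rps:.1f} requests/sec per pod")  # ~11.1
```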

Kong in our same environment used to handle much more requests with less CPU
and memory.

We're planning to enable more traffic in the coming days with additional
route configurations, but with this kind of performance issue we're hesitant
to introduce the new traffic. We need your support in identifying the issue,
and we're happy to connect at whatever time works for you.
@juzhiyuan and @bzp2010 we need your expertise here.
### Environment
- APISIX version (run `apisix version`): 3.8.0
- Operating system (run `uname -a`): Kubernetes-compatible Linux x86
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
- etcd version, if relevant (run `curl
http://127.0.0.1:9090/v1/server_info`): 3.5.0
- APISIX Dashboard version, if relevant:
- Plugin runner version, for issues related to plugin runners:
- LuaRocks version, for installation issues (run `luarocks --version`):