shan307 opened a new issue, #12803: URL: https://github.com/apache/apisix/issues/12803
### Description

I scaled my service from 2 to 4 instances, and the APISIX debug logs show the new targets being picked up:

<img width="1457" height="142" alt="Image" src="https://github.com/user-attachments/assets/7c6ab1a9-6c01-4350-88d6-75427c743d0c" />

But when I scale the service back down to 2 instances, the APISIX health checker still probes the IPs that were scaled down:

<img width="1923" height="141" alt="Image" src="https://github.com/user-attachments/assets/86034287-4ec3-4076-908c-e8c8da49ba52" />

<img width="1461" height="117" alt="Image" src="https://github.com/user-attachments/assets/b1f01a9a-02d9-4484-a2c0-f4a4f8140c3b" />

I also noticed that many unhealthy IPs remain in the health check detection list, even though this route only has two upstream IPs (the commands I use to inspect this are at the end of this report):

<img width="1454" height="184" alt="Image" src="https://github.com/user-attachments/assets/ac3be970-a02a-4fde-a9c2-bfd12e750276" />

The relevant part of my route is:

```
"checks": {
    "active": {
        "concurrency": 10,
        "healthy": {
            "http_statuses": [200, 302],
            "interval": 5,
            "successes": 2
        },
        "http_path": "/ping",
        "timeout": 3,
        "type": "http",
        "unhealthy": {
            "http_failures": 3,
            "http_statuses": [404, 500, 501, 502, 503, 504, 505],
            "interval": 5,
            "tcp_failures": 2,
            "timeouts": 3
        }
    }
},
"scheme": "http",
"discovery_type": "kubernetes",
"pass_host": "pass",
"service_name": "mgb-namespace/my-service:http",
```

### Environment

- APISIX version (run `apisix version`):
- Operating system (run `uname -a`):
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
- etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`):
- APISIX Dashboard version, if relevant:
- Plugin runner version, for issues related to plugin runners:
- LuaRocks version, for installation issues (run `luarocks --version`):
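This is roughly how I inspect the leftover targets, assuming the Control API is listening on its default `127.0.0.1:9090` binding; the service and namespace names come from the `service_name` in the route above:

```
# List the targets APISIX is currently running active health checks against.
curl http://127.0.0.1:9090/v1/healthcheck

# Cross-check against the endpoints Kubernetes actually reports for the service.
kubectl get endpoints my-service -n mgb-namespace
```

If discovery were propagated correctly, the two lists should match; in my case the health check side still contains the scaled-down IPs, while `kubectl get endpoints` only shows the two live pods.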

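Rough steps to reproduce (the Deployment name `my-deployment` is a placeholder for the actual workload behind `mgb-namespace/my-service`):

```
# 1. Scale the backend up; APISIX picks up the 4 endpoints (first screenshot).
kubectl scale deployment my-deployment -n mgb-namespace --replicas=4

# 2. Scale back down; the removed IPs are expected to drop out of the
#    health check target list, but they keep being probed (later screenshots).
kubectl scale deployment my-deployment -n mgb-namespace --replicas=2
```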