AmerDwight opened a new issue, #10721:
URL: https://github.com/apache/apisix/issues/10721
### Description
It had been working fine, but recently I found a problem with health checking.
When I visit "localhost:9092/v1/healthcheck" it shows:
```
{}
```
but I remember that endpoint used to return a health-check table indicating the
health status of each node.
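(As far as I understand, APISIX only starts an upstream's health checker after the first request is proxied through the route, so an idle route reports nothing.) To tell the two cases apart I use a small helper; the populated response shape below is only my assumption of what a working setup returns, not something captured from my environment:

```python
import json

def summarize_healthcheck(body: str) -> str:
    """Summarize a /v1/healthcheck response body from the control API.

    An empty object/array means no active health checker is running yet
    (APISIX seems to start checkers lazily, after the first proxied request).
    """
    data = json.loads(body)
    if not data:
        return "no active health checkers (has the route received traffic yet?)"
    lines = []
    for item in data:  # assumed shape: a list of per-upstream entries
        name = item.get("name", "?")
        for node in item.get("nodes", []):
            lines.append(f'{name}: {node["ip"]}:{node["port"]} -> {node["status"]}')
    return "\n".join(lines)

print(summarize_healthcheck("{}"))  # what I actually get back
sample = json.dumps([{
    "name": "/apisix/upstreams/493186026689266370",
    "nodes": [
        {"ip": "172.18.0.5", "port": 80, "status": "healthy"},
        {"ip": "172.18.0.6", "port": 80, "status": "healthy"},
    ],
}])
print(summarize_healthcheck(sample))  # what I expected to get back
```

The route does receive traffic in my setup, so the lazy-start explanation alone doesn't seem to cover it.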
I'm pretty sure the upstream has two nodes, and that it was being called and was active.
Furthermore, in the Prometheus metrics I can't find "apisix_upstream_status"
either.
I don't know what's going wrong, so I came here for advice. All the other
functions still work fine.
The following is my setup:
**docker-compose**
```yaml
version: "3"

services:
  apisix-dashboard:
    container_name: apisix_dashboard
    image: apache/apisix-dashboard:latest
    restart: always
    volumes:
      - ./dashboard_conf/conf.yaml:/usr/local/apisix-dashboard/conf/conf.yaml
    ports:
      - "9000:9000"
    networks:
      apisix:

  apisix:
    container_name: apisix
    image: apache/apisix:3.2.2-centos
    restart: always
    volumes:
      - ./apisix_log:/usr/local/apisix/logs
      - ./apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
      - ./apisix_conf/debug.yaml:/usr/local/apisix/conf/debug.yaml:ro
    depends_on:
      - etcd
    ports:
      - "9180:9180/tcp"
      - "9080:9080/tcp"
      - "9091:9091/tcp"
      - "9443:9443/tcp"
      - "9092:9092/tcp"
    networks:
      apisix:

  etcd:
    container_name: etcd
    image: bitnami/etcd:3.5
    restart: always
    volumes:
      - etcd_data:/bitnami/etcd
    environment:
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://etcd:2379"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
    ports:
      - "12379:2379/tcp"
    networks:
      apisix:

  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.25.0
    restart: always
    volumes:
      - ./prometheus_conf/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_conf/rules.yml:/etc/prometheus/rules.yml:ro
    ports:
      - "9090:9090"
    networks:
      apisix:

  alertmanager:
    container_name: alert_manager
    image: bitnami/alertmanager:latest
    restart: always
    ports:
      - "9093:9093"
    volumes:
      - ./alertmanager:/config
      - alertmanager-data:/data
    networks:
      apisix:
    command: --config.file=/config/alertmanager.yml --log.level=debug

  grafana:
    container_name: grafana
    image: grafana/grafana:7.3.7
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - "./grafana_conf/provisioning:/etc/grafana/provisioning"
      - "./grafana_conf/dashboards:/var/lib/grafana/dashboards"
      - "./grafana_conf/config/grafana.ini:/etc/grafana/grafana.ini"
    networks:
      apisix:

  web1:
    container_name: nginx_web1
    image: nginx:1.19.0-alpine
    restart: always
    volumes:
      - ./upstream/backend.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9081:80/tcp"
    networks:
      apisix:
    environment:
      - NGINX_PORT=80

  web2:
    container_name: nginx_web2
    image: nginx:1.19.0-alpine
    restart: always
    volumes:
      - ./upstream/sample2.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9082:80/tcp"
    networks:
      apisix:
    environment:
      - NGINX_PORT=80

  syslog_server:
    container_name: syslog_server
    image: admiralobvious/tinysyslog:latest
    restart: always
    ports:
      - "5140:5140/udp"
    networks:
      apisix:

networks:
  apisix:
    driver: bridge

volumes:
  etcd_data:
    driver: local
  alertmanager-data:
    driver: local
```
**config of apisix**
```yaml
apisix:
  node_listen: 9080
  enable_ipv6: false
  enable_control: true
  control:
    ip: "0.0.0.0"
    port: 9092

deployment:
  admin:
    allow_admin:
      - 0.0.0.0/0
    admin_key:
      - name: "admin"
        key: edd1c9f034335f136f87ad84b625c8f1
        role: admin                 # admin: manage all configuration data
      - name: "viewer"
        key: 4054f7cf07e344346cd3f287985e76a2
        role: viewer
  etcd:
    host:
      - "http://etcd:2379"
    prefix: "/apisix"
    timeout: 30

plugin_attr:
  file-logger:
    path: "./logs/file.logger/daily.log"
  prometheus:
    metrics:
      http_status:
        extra_labels:
          - upstream_addr: $upstream_addr
          - upstream_status: $upstream_status
    export_addr:
      ip: "0.0.0.0"
      port: 9091
```
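To double-check the exporter itself rather than Grafana, I grep the raw scrape from `http://127.0.0.1:9091/apisix/prometheus/metrics` for the upstream status series. As far as I can tell, `apisix_upstream_status` is a gauge (1 = healthy, 0 = unhealthy) that only appears once a health checker is running; the scrape excerpt in this sketch is made up to show what I expected to see:

```python
def filter_metric(exposition: str, name: str) -> list[str]:
    """Return the sample lines for one metric family from a
    Prometheus text-format scrape, skipping # HELP / # TYPE comments."""
    return [
        line for line in exposition.splitlines()
        if line.startswith(name) and not line.startswith("#")
    ]

# Made-up scrape excerpt: what I *expected* the exporter to emit.
scrape = """\
# HELP apisix_upstream_status Upstream status from health check
# TYPE apisix_upstream_status gauge
apisix_upstream_status{name="/apisix/upstreams/493186026689266370",ip="172.18.0.5",port="80"} 1
apisix_upstream_status{name="/apisix/upstreams/493186026689266370",ip="172.18.0.6",port="80"} 1
"""
for line in filter_metric(scrape, "apisix_upstream_status"):
    print(line)
```

In my actual scrape the filter matches nothing, which lines up with the empty `/v1/healthcheck` response.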
**upstream:**
```json
{
  "nodes": [
    {
      "host": "nginx_web1",
      "port": 80,
      "weight": 1
    },
    {
      "host": "nginx_web2",
      "port": 80,
      "weight": 1
    }
  ],
  "timeout": {
    "connect": 6,
    "send": 6,
    "read": 6
  },
  "type": "roundrobin",
  "checks": {
    "active": {
      "concurrency": 10,
      "healthy": {
        "http_statuses": [200, 302],
        "interval": 1,
        "successes": 2
      },
      "http_path": "/health",
      "timeout": 1,
      "type": "http",
      "unhealthy": {
        "http_failures": 3,
        "http_statuses": [429, 404, 500, 501, 502, 503, 504, 505],
        "interval": 1,
        "tcp_failures": 2,
        "timeouts": 3
      }
    }
  },
  "scheme": "http",
  "pass_host": "pass",
  "name": "DemoSites",
  "keepalive_pool": {
    "idle_timeout": 60,
    "requests": 1000,
    "size": 320
  }
}
```
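For reference, this is how I read the thresholds above: a node needs 2 consecutive passing probes (HTTP 200/302 on `/health`) to be marked healthy, and 3 consecutive failing statuses to be marked unhealthy. A toy simulation of that reading (my mental model only, not the lua-resty-healthcheck library that APISIX actually uses):

```python
HEALTHY_STATUSES = {200, 302}      # checks.active.healthy.http_statuses
UNHEALTHY_STATUSES = {429, 404, 500, 501, 502, 503, 504, 505}
SUCCESSES_NEEDED = 2               # checks.active.healthy.successes
FAILURES_ALLOWED = 3               # checks.active.unhealthy.http_failures

class Node:
    """Tracks one upstream node's status under consecutive active probes."""

    def __init__(self) -> None:
        self.status = "unknown"
        self.ok = 0
        self.bad = 0

    def probe(self, http_status: int) -> str:
        if http_status in HEALTHY_STATUSES:
            self.ok += 1
            self.bad = 0          # a pass resets the failure streak
            if self.ok >= SUCCESSES_NEEDED:
                self.status = "healthy"
        elif http_status in UNHEALTHY_STATUSES:
            self.bad += 1
            self.ok = 0           # a failure resets the success streak
            if self.bad >= FAILURES_ALLOWED:
                self.status = "unhealthy"
        return self.status

node = Node()
print([node.probe(s) for s in [200, 200, 500, 500, 500]])
# ['unknown', 'healthy', 'healthy', 'healthy', 'unhealthy']
```

With a 1-second probe interval, both nodes should flip to healthy within a couple of seconds of the checker starting, so I'd expect `/v1/healthcheck` to be populated almost immediately.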
**Routes:**
```json
{
  "uri": "/nginx",
  "name": "DemoSitesRoute",
  "methods": [
    "GET", "POST", "PUT", "DELETE", "PATCH",
    "HEAD", "OPTIONS", "CONNECT", "TRACE", "PURGE"
  ],
  "upstream_id": "493186026689266370",
  "status": 1
}
```
One additional issue: with these same configs, the container keeps restarting if
I switch the image to apache/apisix:3.6.0. I don't know why that happens either,
but it is not the main problem here.
### Environment
- APISIX version (run `apisix version`): 3.2.2 (Docker image, centos)
- Operating system (run `uname -a`):Docker
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
- etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`): 3.5
- APISIX Dashboard version, if relevant:
- Plugin runner version, for issues related to plugin runners:
- LuaRocks version, for installation issues (run `luarocks --version`):