jujiale opened a new issue, #7108: URL: https://github.com/apache/apisix/issues/7108
### Description

Hello, I want to reset the Prometheus metrics in some situations while using the prometheus plugin. What I want is: after I reset the metrics and then invoke http://127.0.0.1:9092/apisix/prometheus/metrics, the response should be the same as the first response after restarting APISIX and invoking http://127.0.0.1:9092/apisix/prometheus/metrics.

I added a control API in prometheus.lua; the API finally invokes a method like the one below:

```lua
local function metrics_flush()
    -- flush the shared dict that backs the prometheus plugin
    local dict = ngx.shared["prometheus-metrics"]
    dict:flush_all()      -- mark every stored item as expired
    dict:flush_expired()  -- free the memory held by the expired items
end
```
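The control API itself is wired up roughly like this, following the usual plugin `control_api()` convention; this is a simplified sketch, and the `/v1/plugin/prometheus/reset` URI and the handler name are placeholders, not my exact code:

```lua
-- simplified sketch of the control API added inside apisix/plugins/prometheus.lua
-- (_M is the plugin's module table that is already defined in that file)
local core = require("apisix.core")

local function metrics_flush_handler()
    local dict = ngx.shared["prometheus-metrics"]
    dict:flush_all()      -- mark all stored metric items as expired
    dict:flush_expired()  -- reclaim the memory of the expired items
    core.response.exit(200, {message = "prometheus metrics flushed"})
end

function _M.control_api()
    return {
        {
            methods = {"GET"},
            uris = {"/v1/plugin/prometheus/reset"},
            handler = metrics_flush_handler,
        }
    }
end
```

With this in place, the reset is triggered through the control API address, e.g. `curl http://127.0.0.1:9090/v1/plugin/prometheus/reset`.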
But when I invoke my reset control API, then send a request that matches the route, and then invoke http://127.0.0.1:9092/apisix/prometheus/metrics, the result is sometimes like A, sometimes like B, and sometimes like C. A, B, and C are shown below.

A:

```
apisix_bandwidth{type="egress",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 1003
apisix_bandwidth{type="ingress",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 519
apisix_http_latency_bucket{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.05"} 1
apisix_http_latency_bucket{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.15"} 1
apisix_http_latency_bucket{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.5"} 1
apisix_http_latency_bucket{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.75"} 1
apisix_http_latency_bucket{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="1"} 1
apisix_http_latency_bucket{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="+Inf"} 1
apisix_http_latency_bucket{type="request",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.5"} 1
apisix_http_latency_bucket{type="request",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.75"} 1
apisix_http_latency_bucket{type="request",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="1"} 1
apisix_http_latency_bucket{type="request",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="+Inf"} 1
apisix_http_latency_bucket{type="upstream",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.5"} 1
apisix_http_latency_bucket{type="upstream",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="0.75"} 1
apisix_http_latency_bucket{type="upstream",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="1"} 1
apisix_http_latency_bucket{type="upstream",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215",le="+Inf"} 1
apisix_http_latency_count{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 1
apisix_http_latency_count{type="request",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 1
apisix_http_latency_count{type="upstream",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 1
apisix_http_latency_sum{type="apisix",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 8.2015991220707e-08
apisix_http_latency_sum{type="request",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 0.48900008201599
apisix_http_latency_sum{type="upstream",route="399831007143920568",service="401146765056672696",consumer="",node="18.215.122.215"} 0.489
apisix_http_status{code="200",route="399831007143920568",matched_uri="/get*",matched_host="",service="401146765056672696",consumer="",node="18.215.122.215"}
```

B:

```
apisix_etcd_modify_indexes{key="consumers"} 0
apisix_etcd_modify_indexes{key="global_rules"} 4968
apisix_etcd_modify_indexes{key="max_modify_index"} 82435
apisix_etcd_modify_indexes{key="prev_index"} 82749
apisix_etcd_modify_indexes{key="protos"} 0
apisix_etcd_modify_indexes{key="routes"} 82435
apisix_etcd_modify_indexes{key="services"} 71976
apisix_etcd_modify_indexes{key="ssls"} 0
apisix_etcd_modify_indexes{key="stream_routes"} 0
apisix_etcd_modify_indexes{key="upstreams"} 71975
apisix_etcd_modify_indexes{key="x_etcd_index"} 82752
apisix_etcd_reachable 1
apisix_nginx_http_current_connections{state="accepted"} 65
apisix_nginx_http_current_connections{state="active"} 3
apisix_nginx_http_current_connections{state="handled"} 65
apisix_nginx_http_current_connections{state="reading"} 0
apisix_nginx_http_current_connections{state="total"} 69
apisix_nginx_http_current_connections{state="waiting"} 2
apisix_nginx_http_current_connections{state="writing"} 1
apisix_node_info{hostname="ksshuat01218"} 1
```

C: the result is A and B combined.

**What I find is that result A is the metrics tied to the request, while result B is the metrics that have nothing to do with the request, such as `apisix_etcd_modify_indexes`.**

I want to know how I can reset the metrics, or whether I have made a mistake in flushing the dict.

### Environment

- APISIX version (run `apisix version`): v2.11
- Operating system (run `uname -a`): Linux 3.10.0-957.21.3.el7.x86_64
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`): openresty/1.19.9.1
- etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`): 3.5.0
- APISIX Dashboard version, if relevant:
- Plugin runner version, for issues related to plugin runners:
- LuaRocks version, for installation issues (run `luarocks --version`):
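(For debugging, what is still stored in the shared dict after the flush can be inspected with something like the snippet below; `dump_prometheus_dict_keys` is just an illustrative helper, not APISIX code.)

```lua
-- illustrative debugging helper: list the keys still stored in the
-- "prometheus-metrics" shared dict after flush_all()/flush_expired()
local function dump_prometheus_dict_keys()
    local dict = ngx.shared["prometheus-metrics"]
    local keys = dict:get_keys(0)  -- 0 means return all keys, not just the first 1024
    ngx.log(ngx.WARN, "prometheus-metrics keys left: ", #keys)
    for _, key in ipairs(keys) do
        ngx.log(ngx.WARN, "  ", key)
    end
end
```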
