Sn0rt commented on issue #9327:
URL: https://github.com/apache/apisix/issues/9327#issuecomment-1517300844
> this is my error log file, thanks
>
> From: Sn0rt, 2023-04-21, Re: [apache/apisix] interval between health checks is incorrect (Issue #9327):
>
> @bin-53 try again? Modify the log level:
>
> ```diff
> --- a/conf/config-default.yaml
> +++ b/conf/config-default.yaml
> @@ -136,7 +136,7 @@ nginx_config:  # config for render the template to generate n
>    # the "user" directive makes sense only if the master process runs with super-user privileges.
>    # if you're not root user, the default is current user.
>    error_log: logs/error.log
> -  error_log_level: warn  # warn,error
> +  error_log_level: debug # warn,error
>    worker_processes: auto  # if you want use multiple cores in container, you can inject the number of cpu as environment variable "APISIX_WORKER_PROCESSES"
>    enable_cpu_affinity: false  # disable CPU affinity by default, if APISIX is deployed on a physical machine, it can be enabled and work well.
>    worker_rlimit_nofile: 20480  # the number of files a worker process can open, should be larger than worker_connections
> ```
Sorry, I can't get the log file.

Instead, I will provide a `healthcheck.lua` file to override the installed healthcheck module. You can download the file from the link below:
```shell
https://raw.githubusercontent.com/Sn0rt/lua-resty-healthcheck/sn0rt/try-fix-apisix-issues-9327/lib/resty/healthcheck.lua
```
The target path is:
```shell
$(APISIX_PATH)/deps//share/lua/5.1/resty/healthcheck.lua
```
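As a sketch of the two steps above (download, then overwrite the installed module), something like the following could be used. The helper names and the default `apisix_path` here are illustrative assumptions, not part of APISIX:

```python
import os
import shutil
import urllib.request

# Branch URL from the comment above.
SRC_URL = ("https://raw.githubusercontent.com/Sn0rt/lua-resty-healthcheck/"
           "sn0rt/try-fix-apisix-issues-9327/lib/resty/healthcheck.lua")

def healthcheck_dest(apisix_path):
    """Path of the installed healthcheck module under an APISIX tree."""
    return os.path.join(apisix_path, "deps/share/lua/5.1/resty/healthcheck.lua")

def install_override(apisix_path, url=SRC_URL):
    """Back up the installed module, then download the patched file over it."""
    dest = healthcheck_dest(apisix_path)
    shutil.copy(dest, dest + ".bak")       # keep the original for rollback
    urllib.request.urlretrieve(url, dest)  # overwrite with the patched file
    return dest
```

Remember to restart (or reload) APISIX afterwards so workers pick up the replaced module.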
More detail about this file can be found in the diff block below.
```diff
~/w/lua-resty-healthcheck *sn0rt/try-fix-apisix-issues-9327> diff lib/resty/healthcheck.lua /Users/guohao/workspace/apisix/deps//share/lua/5.1/resty/healthcheck.lua
136,178d135
<
< -- cache timers in "init", "init_worker" phases so we use only a single timer
< -- and do not run the risk of exhausting them for large sets
< -- see https://github.com/Kong/lua-resty-healthcheck/issues/40
< -- Below we'll temporarily use a patched version of ngx.timer.at, until we're
< -- past the init and init_worker phases, after which we'll return to the regular
< -- ngx.timer.at implementation
< local ngx_timer_at do
<   local callback_list = {}
<
<   local function handler(premature)
<     if premature then
<       return
<     end
<
<     local list = callback_list
<     callback_list = {}
<
<     for _, args in ipairs(list) do
<       local ok, err = pcall(args[1], ngx_worker_exiting(), unpack(args, 2, args.n))
<       if not ok then
<         ngx.log(ngx.ERR, "timer failure: ", err)
<       end
<     end
<   end
<
<   ngx_timer_at = function(...)
<     local phase = ngx.get_phase()
<     if phase ~= "init" and phase ~= "init_worker" then
<       -- we're past init/init_worker, so replace this temp function with the
<       -- real-deal again, so from here on we run regular timers.
<       ngx_timer_at = ngx.timer.at
<       return ngx.timer.at(...)
<     end
<
<     local n = #callback_list
<     callback_list[n+1] = { n = select("#", ...), ... }
<     if n == 0 then
<       -- first one, so schedule the actual timer
<       return ngx.timer.at(0, handler)
<     end
<     return true
<   end
180,182d136
< end
<
<
321c275
<   local _, terr = ngx_timer_at(0, run_fn_locked_target_list, self, fn)
---
>   local _, terr = ngx.timer.at(0, run_fn_locked_target_list, self, fn)
576c530
<   local _, terr = ngx_timer_at(0, run_mutexed_fn, self, ip, port, hostname, fn)
---
>   local _, terr = ngx.timer.at(0, run_mutexed_fn, self, ip, port, hostname, fn)
```