zx01-coder opened a new issue, #11639: URL: https://github.com/apache/apisix/issues/11639
### Description

Environment: Mac M2, running via Docker
APISIX image: 3.10.0-debian

APISIX config file:

```yaml
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
apisix:
  node_listen: 9080              # APISIX listening port
  # enable_ipv6: false
  enable_control: true
  control:
    ip: "0.0.0.0"
    port: 9092

plugins:
  - "my-prometheus"
  # - "api-key-signature"
  - "prometheus"

extra_lua_path: "/usr/local/apisix/plugins/?.lua;;"

deployment:
  admin:
    allow_admin:                 # https://nginx.org/en/docs/http/ngx_http_access_module.html#allow
      - 0.0.0.0/0                # We need to restrict ip access rules for security. 0.0.0.0/0 is for test.
    admin_key:
      - name: "admin"
        key: edd1c9f034335f136f87ad84b625c8f1
        role: admin              # admin: manage all configuration data
      - name: "viewer"
        key: 4054f7cf07e344346cd3f287985e76a2
        role: viewer
  etcd:
    host:                        # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
      - "http://etcd:2379"       # multiple etcd address
    prefix: "/apisix"            # apisix configurations prefix
    timeout: 30                  # 30 seconds

# Nginx configuration
nginx_config:
  worker_processes: auto                 # set the number of worker processes automatically
  # enable_access_log: true              # whether to enable the access log
  # access_log_format_escape: default    # escaping scheme for the log format
  # custom Lua search path, used to load custom plugins
  http:
    lua_package_path: "/usr/local/apisix/plugins/?.lua"
    lua_shared_dicts:                    # custom shared memory zones
      prometheus_metrics: 10m            # allocate 10 MB for Prometheus metrics
  # log paths
  error_log_level: "debug"
  error_log: "/usr/local/apisix/logs/error1.log"
  access_log: "/usr/local/apisix/logs/access1.log"

# configuration of the Prometheus monitoring plugin
plugin_attr:
  prometheus:
    export_addr:
      ip: "127.0.0.1"        # address on which Prometheus metrics are exposed
      port: 9091             # port on which Prometheus metrics are exposed
```

The relevant part of the generated nginx configuration inside the image:

```nginx
http {
    # put extra_lua_path in front of the builtin path
    # so user can override the source code
    lua_package_path  "/usr/local/apisix/plugins/?.lua;;$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/usr/local/apisix/?.lua;/usr/local/apisix/?/init.lua;;/usr/local/apisix/?.lua;/usr/local/apisix/deps/share/lua/5.1/?/init.lua;./?.lua;/usr/local/openresty/luajit/share/luajit-2.1/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/openresty/luajit/share/lua/5.1/?.lua;/usr/local/openresty/luajit/share/lua/5.1/?/init.lua;;";
    lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";

    lua_max_pending_timers 16384;
    lua_max_running_timers 4096;
```

The plugin `my-prometheus.lua` is a copy of the built-in `prometheus` plugin; only the plugin name was changed. The custom plugin file has been mounted into the corresponding directory inside the image.

Problem: when the custom plugin `my-prometheus` is enabled, APISIX reports `"error_msg": "unknown plugin [my-prometheus]"`. The nginx log level is set to debug, but no startup errors appear in the log. Any help resolving this would be appreciated.

### Environment

- APISIX version (run `apisix version`): Docker setup from the example in the GitHub repo, image: apache/apisix:3.10.0-debian
- Operating system (run `uname -a`): macOS, M2
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
- etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`):
- APISIX Dashboard version, if relevant:
- Plugin runner version, for issues related to plugin runners:
- LuaRocks version, for installation issues (run `luarocks --version`):
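
For reference, APISIX loads a plugin listed under `plugins` by requiring the Lua module `apisix.plugins.<plugin-name>`, so each `?` template in the Lua search path is expanded with that full dotted module name (dots become slashes). A minimal sketch of how that expansion works for the `extra_lua_path` value in the config above (the concrete paths here are taken from this issue's config, not verified against the running container):

```shell
# Sketch of Lua's package.path matching for the extra_lua_path template
# "/usr/local/apisix/plugins/?.lua" when APISIX requires the module
# "apisix.plugins.my-prometheus": '?' is replaced by the module name
# with dots converted to slashes.
template="/usr/local/apisix/plugins/?.lua"
module="apisix.plugins.my-prometheus"
resolved=$(printf '%s\n' "$template" | sed "s|?|$(printf '%s' "$module" | tr '.' '/')|")
echo "$resolved"
# prints: /usr/local/apisix/plugins/apisix/plugins/my-prometheus.lua
```

If that is accurate for this setup, a file mounted directly as `/usr/local/apisix/plugins/my-prometheus.lua` would not be found by `require`, which would be consistent with an `unknown plugin` error; this is a hypothesis to check against the actual container layout, not a confirmed diagnosis.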
