Copilot commented on code in PR #12445:
URL: https://github.com/apache/apisix/pull/12445#discussion_r2917016146


##########
apisix/discovery/nacos/init.lua:
##########
@@ -408,6 +430,19 @@ function _M.nodes(service_name, discovery_args)
         return nil
     end
     local nodes = core.json.decode(value)
+
+    -- Apply metadata filtering if specified
+    local required_metadata = discovery_args and discovery_args.metadata
+    if required_metadata and next(required_metadata) then
+        local filtered_nodes = {}
+        for _, node in ipairs(nodes) do
+            if metadata_contains(node.metadata, required_metadata) then
+                core.table.insert(filtered_nodes, node)
+            end
+        end
+        return filtered_nodes
+    end

Review Comment:
   `_M.nodes()` always `core.json.decode()`s from shared dict, and when 
metadata filtering is enabled it allocates a new `filtered_nodes` array each 
call. Given `compare_upstream_node()`’s fast path (`old_t == new_t`) and its 
reference-based comparison for `metadata`, consider adding a small per-worker 
cache keyed by (namespace, group, service, raw_json_value, required_metadata) 
so repeated requests can reuse the same nodes table when nothing changed. This 
prevents unnecessary upstream refresh work and keeps node versions stable.
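
A minimal per-worker cache along those lines might be sketched like this (hypothetical: `nodes_cache`, `get_cached_nodes`, and `cache_key` are illustrative names, not part of the PR; only `core.json.decode` is the existing APISIX API):

```lua
-- Hypothetical per-worker cache sketch. Keyed by the service identity
-- plus the raw JSON string from the shared dict, so the same decoded
-- table is reused until the stored value actually changes.
local nodes_cache = {}

local function get_cached_nodes(cache_key, raw_value)
    local entry = nodes_cache[cache_key]
    if entry and entry.raw == raw_value then
        -- unchanged: return the previous table reference so
        -- compare_upstream_node()'s `old_t == new_t` fast path hits
        return entry.nodes
    end

    local nodes = core.json.decode(raw_value)
    nodes_cache[cache_key] = { raw = raw_value, nodes = nodes }
    return nodes
end
```

The `required_metadata` part of the key would need a stable serialization (e.g. encoding it once when the discovery args are parsed) rather than the table reference itself.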



##########
apisix/discovery/nacos/init.lua:
##########
@@ -329,6 +344,13 @@ local function fetch_from_host(base_uri, username, password, services)
                     port = host.port,
                     weight = host.weight or default_weight,
                 }
+                if host.metadata ~= nil then
+                    if type(host.metadata) == 'table' then
+                        node.metadata = host.metadata
+                    else
+                        log.error('nacos host metadata is not a table: ', host.metadata)
+                    end

Review Comment:
   `fetch_from_host` now persists `host.metadata` into each returned node. 
Because `nacos_dict` stores nodes as JSON and `_M.nodes()` decodes it on every 
call, the `metadata` table will be a fresh Lua table each time. 
`upstream_util.compare_upstream_node()` compares `metadata` by reference (not 
deep value), so this will make APISIX treat upstream nodes as changed on every 
request (nodes_ver bumps, frequent resource updates, and potential log spam). 
To avoid this, either (a) don’t include `metadata` in the nodes table returned 
to upstream (keep it only for filtering and strip it before returning), or (b) 
cache the decoded nodes table per service key/value so the same table (and 
metadata sub-tables) are reused when unchanged.
   ```suggestion
                   if host.metadata ~= nil and type(host.metadata) ~= 'table' then
                       log.error('nacos host metadata is not a table: ', host.metadata)
   ```
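
Option (a) could be sketched roughly as follows inside `_M.nodes()` after filtering (hypothetical sketch: variable names mirror the diff above, but the exact placement and shape are up to the author):

```lua
-- Hypothetical sketch of option (a): use metadata for filtering only,
-- then strip it so the tables handed to the upstream layer carry no
-- freshly-allocated `metadata` sub-tables on every call.
local result = {}
for _, node in ipairs(filtered_nodes) do
    core.table.insert(result, {
        host = node.host,
        port = node.port,
        weight = node.weight,
        -- `metadata` intentionally omitted: compare_upstream_node()
        -- compares it by reference, so a fresh table per request
        -- would look like a node change every time
    })
end
return result
```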



##########
ci/pod/nacos/service/Dockerfile:
##########
@@ -25,8 +25,8 @@ ENV GROUP=${GROUP:-DEFAULT_GROUP}
 
ADD https://raw.githubusercontent.com/api7/nacos-test-service/main/spring-nacos-1.0-SNAPSHOT.jar /app.jar

Review Comment:
   The `ADD` of the remote JAR from `https://raw.githubusercontent.com/api7/nacos-test-service/main/spring-nacos-1.0-SNAPSHOT.jar` introduces a supply-chain risk: the image build and runtime depend on code
fetched from a mutable third-party branch without any integrity pinning or 
verification. If that repository or the `main` branch is compromised, a 
malicious JAR could be served and then executed inside your CI/test environment 
with access to any secrets or network resources available to this container. To 
harden this, pin the artifact to an immutable reference (e.g., a specific 
commit hash or release URL) or vendor the JAR into this repository and verify 
its integrity before execution.
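
One hedged way to do that pinning (the `<commit-sha>` and `<expected-sha256>` values below are placeholders to be filled in, not real values):

```dockerfile
# Pin to an immutable commit instead of the mutable `main` branch.
ADD https://raw.githubusercontent.com/api7/nacos-test-service/<commit-sha>/spring-nacos-1.0-SNAPSHOT.jar /app.jar

# Fail the build if the downloaded JAR does not match the recorded digest.
RUN echo "<expected-sha256>  /app.jar" | sha256sum -c -
```

With BuildKit, `ADD --checksum=sha256:<expected-sha256> <url> /app.jar` achieves the same verification in a single instruction.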



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
