tzssangglass commented on issue #7088:
URL: https://github.com/apache/apisix/issues/7088#issuecomment-1136953360
> but from the test result, that is no relation with actice-conn.
I updated case 1 (removed the port sorting):
```
use t::APISIX 'no_plan';

repeat_each(2);
log_level('info');
no_root_location();
worker_connections(1024);
no_shuffle();

add_block_preprocessor(sub {
    my ($block) = @_;

    if (!$block->request) {
        $block->set_value("request", "GET /mysleep?seconds=0.1");
    }

    if ((!defined $block->error_log) && (!defined $block->no_error_log)) {
        $block->set_value("no_error_log", "[error]");
    }

    my $http_config = $block->http_config // <<_EOC_;
    # fake server, only for test
    server {
        listen 1970;
        location / {
            content_by_lua_block {
                ngx.sleep(0.5)
                ngx.say(1970)
            }
        }
    }

    server {
        listen 1971;
        location / {
            content_by_lua_block {
                ngx.sleep(0.5)
                ngx.say(1971)
            }
        }
    }
_EOC_

    $block->set_value("http_config", $http_config);
});

run_tests();

__DATA__

=== TEST 1: same weight
--- config
    location /t {
        content_by_lua_block {
            local t = require("lib.test_admin").test
            local code, body = t('/apisix/admin/upstreams/1',
                ngx.HTTP_PUT,
                [[{
                    "nodes": {
                        "127.0.0.1:1970": 1,
                        "127.0.0.1:1971": 2
                    },
                    "type": "least_conn"
                }]]
            )
            if code >= 300 then
                ngx.status = code
                ngx.say(body)
            end

            local code, body = t('/apisix/admin/routes/1',
                ngx.HTTP_PUT,
                [[{
                    "uri": "/hello",
                    "upstream_id": "1"
                }]]
            )
            if code >= 300 then
                ngx.status = code
                ngx.say(body)
            end

            local http = require "resty.http"
            local uri = "http://127.0.0.1:" .. ngx.var.server_port
                        .. "/hello"
            local threads = {}
            local ports = {}
            for i = 1, 3 do
                local th = assert(ngx.thread.spawn(function(i)
                    local httpc = http.new()
                    local res, err = httpc:request_uri(uri, {method = "GET"})
                    if not res then
                        ngx.log(ngx.ERR, err)
                        return
                    end
                    local port = tonumber(res.body)
                    ports[i] = port
                end, i))
                -- keep the thread handle so it can actually be waited on below
                threads[i] = th
            end
            for _, th in ipairs(threads) do
                ngx.thread.wait(th)
            end
            ngx.sleep(1)
            ngx.say(table.concat(ports, ", "))
        }
    }
--- request
GET /t
--- timeout: 15
--- response_body
1971, 1971, 1970
--- no_error_log
[error]
```
> So case 2 is serially or concurrently?
Correction: case 1 sends the requests concurrently and case 2 sends them serially. To test `least_conn`, I need to base it on case 1.
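To make the difference concrete, here is a minimal sketch of the two dispatch styles (the helpers `fetch`, `send_concurrently` and `send_serially` are my own names, not code from either case; it assumes `lua-resty-http` inside a `content_by_lua_block`, as in case 1):

```
-- hypothetical helpers, only to contrast concurrent vs. serial dispatch
local http = require "resty.http"

local function fetch(uri, ports, i)
    local httpc = http.new()
    local res, err = httpc:request_uri(uri, {method = "GET"})
    if not res then
        ngx.log(ngx.ERR, err)
        return
    end
    ports[i] = tonumber(res.body)
end

-- case 1 style (concurrent): all requests are in flight at the same time, so
-- while the fake servers sleep 0.5s the balancer sees growing active counts
local function send_concurrently(uri, n)
    local ports, threads = {}, {}
    for i = 1, n do
        threads[i] = assert(ngx.thread.spawn(fetch, uri, ports, i))
    end
    for _, th in ipairs(threads) do
        ngx.thread.wait(th)
    end
    return ports
end

-- case 2 style (serial): each request completes before the next one starts,
-- so every selection round sees zero active connections on both nodes
local function send_serially(uri, n)
    local ports = {}
    for i = 1, n do
        fetch(uri, ports, i)
    end
    return ports
end
```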
Let's discuss case 1 based on the update above.

3 requests will be sent, so `least_conn` will do 3 rounds of selection:
round 1:
- 1970 score: (0 active connections + 1) / 1 (weight) = 1
- 1971 score: (0 active connections + 1) / 2 (weight) = 0.5
- 1971 has the smaller score, so it is selected

round 2:
- 1970 score: (0 active connections + 1) / 1 (weight) = 1
- 1971 score: (1 active connection + 1) / 2 (weight) = 1
- 1971 is chosen even though the scores are tied (the tie-breaking rule is a separate detail, not discussed here)

round 3:
- 1970 score: (0 active connections + 1) / 1 (weight) = 1
- 1971 score: (2 active connections + 1) / 2 (weight) = 1.5
- 1970 has the smaller score, so it is selected
Based on this analysis, I think there is no problem with `least_conn`; its selection does take the active connection count into account.
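To make the scoring arithmetic runnable, here is a small standalone Lua sketch (plain Lua, not the APISIX implementation; the node table, `pick()` and the tie-breaking choice are mine) that replays the three rounds, assuming score = (active connections + 1) / weight as above:

```
-- replay the three rounds under the assumption score = (active + 1) / weight
local nodes = {
    { name = "127.0.0.1:1970", weight = 1, active = 0 },
    { name = "127.0.0.1:1971", weight = 2, active = 0 },
}

local function pick()
    local best
    for _, node in ipairs(nodes) do
        local score = (node.active + 1) / node.weight
        -- "<=" breaks the round-2 tie toward 1971 just to reproduce the
        -- observed order; the real tie-breaking rule is the separate detail
        -- mentioned above
        if not best or score <= best.score then
            best = { node = node, score = score }
        end
    end
    return best.node, best.score
end

-- three concurrent requests: each pick raises the chosen node's active count
-- before the next selection happens, because no response has come back yet
for round = 1, 3 do
    local node, score = pick()
    print(("round %d: pick %s (score %.1f)"):format(round, node.name, score))
    node.active = node.active + 1
end
-- prints 1971, 1971, 1970, the same order as the expected response_body
```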