tzssangglass commented on a change in pull request #4559:
URL: https://github.com/apache/apisix/pull/4559#discussion_r681179162
##########
File path: docs/en/latest/plugins/request-id.md
##########
@@ -72,6 +73,60 @@ X-Request-Id: fe32076a-d0a5-49a6-a361-6c244c1df956
......
```
+### Use the snowflake algorithm to generate an ID
+
+> The plugin supports using the Snowflake algorithm to generate IDs.
+> Please read this documentation before deciding to use Snowflake: once the configuration is enabled, it cannot be adjusted arbitrarily afterwards, otherwise duplicate IDs may be generated.
+
+The Snowflake algorithm is not enabled by default and needs to be configured in `conf/config.yaml`.
+
+```yaml
+plugin_attr:
+  request-id:
+    snowflake:
+      enable: true
+      snowflake_epoc: 1609459200000
+      data_machine_bits: 12
+      sequence_bits: 10
+      data_machine_ttl: 30
+      data_machine_interval: 10
+```
+
+#### Configuration parameters
+
+| Name                  | Type    | Requirement | Default       | Valid | Description                                                                     |
+| --------------------- | ------- | ----------- | ------------- | ----- | ------------------------------------------------------------------------------- |
+| enable                | boolean | optional    | false         |       | Set to true to enable the Snowflake algorithm.                                   |
+| snowflake_epoc        | integer | optional    | 1609459200000 |       | Start timestamp (in milliseconds)                                                |
+| data_machine_bits     | integer | optional    | 12            |       | Maximum number of supported machines (processes): `1 << data_machine_bits`       |
+| sequence_bits         | integer | optional    | 10            |       | Maximum number of IDs generated per millisecond per node: `1 << sequence_bits`   |
+| data_machine_ttl      | integer | optional    | 30            |       | Valid time of the `data_machine` registration in `etcd` (unit: seconds)          |
+| data_machine_interval | integer | optional    | 10            |       | Interval between `data_machine` renewals in `etcd` (unit: seconds)               |
+
+- `snowflake_epoc` default start time is `2021-01-01T00:00:00Z`. With the default configuration it can run for approximately `69 years`, until about `2090-09-07T15:47:35Z`.
+- `data_machine_bits` corresponds to the combination of workerID and datacenterID in the snowflake definition. The plugin allocates a unique ID to each process, so the maximum number of supported processes is `pow(2, data_machine_bits)`, which with the default `12 bits` is up to `4096`.
+- `sequence_bits` defaults to `10 bits` and each process generates up to `1024` IDs per second
Review comment:
The `per second` here and the `per millisecond` above confuse me
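For reference, a quick back-of-the-envelope sketch (my own illustration in Lua, not code from this PR), using the defaults shown above:

```lua
-- rough arithmetic only, assuming the defaults in this PR (sequence_bits = 10)
local sequence_bits = 10
local ids_per_ms = 2 ^ sequence_bits       -- 1 << 10 = 1024 IDs per millisecond
local ids_per_s  = ids_per_ms * 1000       -- = 1,024,000 IDs per second
print(ids_per_ms, ids_per_s)
```

If the `per millisecond` in the table is the intended limit, then `1024` per second in the bullet looks too low; the two places should probably use the same unit.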
##########
File path: conf/config-default.yaml
##########
@@ -348,3 +348,12 @@ plugin_attr:
report_ttl: 3600 # live time for server info in etcd (unit: second)
dubbo-proxy:
upstream_multiplex_count: 32
+  request-id:
+    snowflake:
+      enable: false
+      snowflake_epoc: 1609459200000  # the starting timestamp, expressed in milliseconds
+      data_machine_bits: 12          # data machine bits, maximum 31, because Lua cannot do bit operations greater than 31
Review comment:
Should we always check that `data_machine_bits` + `sequence_bits` = 22? Maybe a test case needs to cover it.
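If we do want to enforce it, something like this sketch (a plain Lua check of my own; the table shape is just a guess from the config above) could fail fast:

```lua
-- hypothetical sketch, not code from this PR: reject configurations whose
-- bit widths do not add up to the 22 bits suggested above
local function check_snowflake_bits(snowflake)
    local total = snowflake.data_machine_bits + snowflake.sequence_bits
    if total ~= 22 then
        return nil, "data_machine_bits + sequence_bits must equal 22, got " .. total
    end
    return true
end
```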
##########
File path: t/plugin/request-id.t
##########
@@ -470,3 +470,268 @@ GET /t
X-Request-Id and Custom-Header-Name are different
--- no_error_log
[error]
+
+
+
+=== TEST 12: check for snowflake id
+--- yaml_config
+plugins:
+    - request-id
+plugin_attr:
+    request-id:
+        snowflake:
+            enable: true
+            snowflake_epoc: 1609459200000
+            data_machine_bits: 10
+            sequence_bits: 10
+            data_machine_ttl: 30
+            data_machine_interval: 10
+--- config
+location /t {
+    content_by_lua_block {
+        ngx.sleep(3)
+        local core = require("apisix.core")
+        local key = "/plugins/request-id/snowflake/1"
+        local res, err = core.etcd.get(key)
+        if err ~= nil then
+            ngx.status = 500
+            ngx.say(err)
+            return
+        end
+        if res.body.node.key ~= "/apisix/plugins/request-id/snowflake/1" then
+            ngx.say(core.json.encode(res.body.node))
+        end
+        ngx.say("ok")
+    }
+}
+--- request
+GET /t
+--- response_body
+ok
+--- no_error_log
+[error]
+
+
+
+=== TEST 13: wrong type
+--- config
+    location /t {
+        content_by_lua_block {
+            local plugin = require("apisix.plugins.request-id")
+            local ok, err = plugin.check_schema({algorithm = "bad_algorithm"})
+            if not ok then
+                ngx.say(err)
+            end
+            ngx.say("done")
+        }
+    }
+--- request
+GET /t
+--- response_body
+property "algorithm" validation failed: matches none of the enum values
+done
+--- no_error_log
+[error]
+
+
+
+=== TEST 14: add plugin with algorithm snowflake (default uuid)
+--- config
+    location /t {
+        content_by_lua_block {
+            local t = require("lib.test_admin").test
+            local code, body = t('/apisix/admin/routes/1',
+                ngx.HTTP_PUT,
+                [[{
+                    "plugins": {
+                        "request-id": {
+                            "algorithm": "snowflake"
+                        }
+                    },
+                    "upstream": {
+                        "nodes": {
+                            "127.0.0.1:1982": 1
+                        },
+                        "type": "roundrobin"
+                    },
+                    "uri": "/opentracing"
+                }]],
+                [[{
+                    "node": {
+                        "value": {
+                            "plugins": {
+                                "request-id": {
+                                    "algorithm": "snowflake"
+                                }
+                            },
+                            "upstream": {
+                                "nodes": {
+                                    "127.0.0.1:1982": 1
+                                },
+                                "type": "roundrobin"
+                            },
+                            "uri": "/opentracing"
+                        },
+                        "key": "/apisix/routes/1"
+                    },
+                    "action": "set"
+                }]]
+            )
+            if code >= 300 then
+                ngx.status = code
+            end
+            ngx.say(body)
+        }
+    }
+--- request
+GET /t
+--- response_body
+passed
+--- no_error_log
+[error]
+
+
+
+=== TEST 15: check for snowflake id
+--- yaml_config
+plugins:
+    - request-id
+plugin_attr:
+    request-id:
+        snowflake:
+            enable: true
+--- config
+    location /t {
+        content_by_lua_block {
+            local http = require "resty.http"
+            local t = {}
+            local ids = {}
+            for i = 1, 180 do
+                local th = assert(ngx.thread.spawn(function()
+                    local httpc = http.new()
+                    local uri = "http://127.0.0.1:" .. ngx.var.server_port .. "/opentracing"
+                    local res, err = httpc:request_uri(uri,
+                        {
+                            method = "GET",
+                            headers = {
+                                ["Content-Type"] = "application/json",
+                            }
+                        }
+                    )
+                    if not res then
+                        ngx.log(ngx.ERR, err)
+                        return
+                    end
+                    local id = res.headers["X-Request-Id"]
+                    if not id then
+                        return -- ignore if the data is not synced yet.
+                    end
+                    if ids[id] == true then
+                        ngx.say("ids not unique")
+                        return
+                    end
+                    ids[id] = true
Review comment:
Is there any way to check that the length of `ids` has reached a fixed value, such as 100, or at least 2? I am worried that `ids` may end up with no data because of sync.
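For example, something like this at the end of the block (just a sketch of what I mean, to be adapted to the test):

```lua
-- sketch only: after waiting for all threads, count the unique ids and
-- require a minimum so that an empty table cannot make the test pass
local count = 0
for _ in pairs(ids) do
    count = count + 1
end
if count < 100 then   -- 100 is an arbitrary threshold; even >= 2 would help
    ngx.say("expected at least 100 unique ids, got " .. count)
    return
end
ngx.say("ok")
```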