tokers edited a comment on issue #2853:
URL: https://github.com/apache/apisix/issues/2853#issuecomment-735644179
> > > > > Do we need to update the `lua-resty-etcd`?
> > > >
> > > > Yep, we shall.
> > >
> > > So do we need to add a new API to read etcd by range?
> >
> > I glanced at the description of https://github.com/api7/lua-resty-etcd/blob/master/api_v3.md#readdir and found there is no `start_key` option to support our need to fetch bulk keys in a batched manner. I prefer extending `etcd:readdir` instead of adding a new API; the new API would be so similar to `etcd:readdir` that it doesn't make much sense.
> > So I will try adding `start_key` or `offset` to the `readdir` parameter `opts`.
>
> Another question: should we smooth the traffic inside the `readdir` function, or should we write that code wherever we call `readdir`? In the latter case we would need to repeat it in many places where `readdir` is called.
>
> Another option is to implement `readdir` with a parameter, perhaps called `rate`, which would limit the number of keys we fetch from etcd each time to smooth the network traffic. `readdir` would still return the whole bunch of keys, but hide the detail that the data is fetched over multiple round trips.

Not only is bandwidth a concern: building a huge table is expensive, both because of table growth and memory allocation. So if we still keep all items in a single table and hand them to the caller at one time, the CPU overhead stays high (the iteration loop will be tight). Ideally we would pass in a callback that is invoked whenever we get new items from etcd, but that would conflict with the current `etcdcli:readdir`, so maybe we need to create another method like `etcdcli:walkdir`:

```lua
local ok, err = etcdcli:walkdir(key, function(item)
    -- do something here to process the item
end, opts)
```
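For illustration only, here is a rough sketch of how such a `walkdir` could be layered on top of a batched `readdir`. None of this exists in `lua-resty-etcd` today: the `start_key` and `limit` fields in `opts`, the `walkdir` method itself, and the `res.body.kvs` response shape are all assumptions based on the proposal above and the usual etcd v3 range response.

```lua
-- Hypothetical sketch: walkdir built on batched range reads.
-- Assumes readdir(key, opts) will honour the proposed opts.start_key / opts.limit.
local function walkdir(etcdcli, key, callback, opts)
    opts = opts or {}
    local batch = opts.limit or 100          -- keys fetched per round trip (assumed option)
    local start_key = key

    while true do
        local res, err = etcdcli:readdir(start_key, {
            start_key = start_key,           -- proposed extension, not yet in lua-resty-etcd
            limit     = batch,               -- proposed extension, not yet in lua-resty-etcd
        })
        if not res then
            return nil, err
        end

        -- assumed etcd v3 range response shape: body.kvs is an array of { key, value, ... }
        local kvs = (res.body and res.body.kvs) or {}
        for _, item in ipairs(kvs) do
            callback(item)                   -- hand each item to the caller immediately
        end

        if #kvs < batch then
            return true                      -- no more keys in this range
        end

        -- continue from just after the last key we saw (etcd range keys are byte-ordered)
        start_key = kvs[#kvs].key .. "\0"
    end
end
```

Because each batch is handed to the callback and then dropped, no single table ever holds the whole directory, which addresses both the bandwidth concern and the table-growth/CPU concern above.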
