Hi Kit,

Does ts.fetch support unix domain sockets, or only HTTP?
Thanks,
Di Li

> On Dec 12, 2016, at 9:25 AM, Di Li <di...@apple.com> wrote:
>
> Hi Kit,
>
> Thanks for the advice. I will do some benchmarking with redis and maybe
> write a small Go application to fill our end; I think the stats mechanism
> probably wasn't made for this kind of thing.
>
> Thanks for the help.
>
> Thanks,
> Di Li
>
>> On Dec 9, 2016, at 3:52 PM, Shu Kit Chan <chanshu...@gmail.com> wrote:
>>
>> I honestly don't know if your model can support millions of lines. I guess
>> you just have to stress test to make sure.
>>
>> For the redis way, I find that to be fine for some modest workloads
>> (e.g. 5K rps). Beyond that, you just need to stress test it and
>> perhaps tune it or use other similar solutions. The takeaway is that
>> you need a scalable, fast store for your lookups, one that you can
>> occasionally update.
>>
>> Thanks.
>>
>> Kit
>>
>> On Fri, Dec 9, 2016 at 3:38 PM, Di Li <di...@apple.com> wrote:
>>> Hi Kit,
>>>
>>> Thanks for the suggestion. Do you think it will be a problem to use the
>>> ts.stat way to maintain key/value pairs? I don't need them to be
>>> persistent, as long as they can be shared between requests.
>>>
>>> For example, suppose I just want to do access control and use src_ip +
>>> domain as the key, and I don't act on the values:
>>>
>>> 1.1.1.1_youtube.com = 1
>>> 2.2.2.2_google.com = 1
>>> ...
>>>
>>> Do you think this model will work with millions of lines like this? I
>>> checked the code a little; it eventually uses a hash, so I don't think
>>> it will cause an issue as the rule set grows.
>>>
>>> I like the idea of a local redis, but if I have to call the local redis
>>> for every single request, it probably won't scale?
>>>
>>> Thanks,
>>> Di Li
>>>
>>> On Dec 9, 2016, at 3:20 PM, Shu Kit Chan <chanshu...@gmail.com> wrote:
>>>
>>> Yeah. That's right.
>>> It won't work the way you intended.
>>>
>>> You probably need an external source of truth as your trigger, e.g. a
>>> local redis store.
>>> So you have a cron job that keeps checking some URL and updates a key
>>> in the local redis store.
>>> And then you have a lua script that checks the key in the local redis
>>> store and makes a decision based on that.
>>>
>>> Thanks.
>>>
>>> On Fri, Dec 9, 2016 at 2:18 PM, Di Li <di...@apple.com> wrote:
>>>
>>> Looks like so far the only thing that is not request-lifetime is
>>> ts.stat_create and find.
>>>
>>> Thanks,
>>> Di Li
>>>
>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>> + SHIELD :: Self-Service Load-Balancer (LB) and Web App Firewall (WAF)
>>> + http://shield.apple.com
>>> +
>>> + SHIELD classes:
>>> + http://shield.apple.com/FAQ/doku.php?id=start#shield_classes_trainings_tutorials
>>> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>
>>> On Dec 9, 2016, at 11:21 AM, Di Li <di...@apple.com> wrote:
>>>
>>> I just did a quick test with counter = counter + 1, and it seems that a
>>> global lua table has the same lifetime as the request.
>>>
>>> Is there any way I can store data beyond the request lifetime?
>>>
>>> Thanks,
>>> Di Li
>>>
>>> On Dec 9, 2016, at 10:23 AM, Di Li <di...@apple.com> wrote:
>>>
>>> Hey Kit,
>>>
>>> Thanks for taking the time to respond to my emails. I still have some
>>> confusion; hopefully you can help me understand more about those pieces.
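The cron + local redis pattern Kit describes above could be sketched roughly as below. This is only a sketch, not code from the thread: it assumes the redis-lua client (`require 'redis'`) is available to the lua state, that a cron job elsewhere keeps hypothetical `allow:<host>` keys up to date, and it simply fails closed on any redis error.

```
-- Sketch: per-request allow/deny lookup against a local redis store.
-- Assumes the redis-lua client; the "allow:<host>" key scheme and the
-- fail-closed behavior are illustrative choices, not from the thread.
local redis = require 'redis'

local function check_allowed()
    local host = ts.client_request.get_url_host()
    local ok, allowed = pcall(function()
        local client = redis.connect('127.0.0.1', 6379)
        return client:get('allow:' .. host)
    end)
    if not ok or allowed ~= '1' then
        -- redis unreachable, or host not whitelisted: deny
        ts.http.set_resp(403, 'Forbidden')
    end
    return 0
end

function do_global_read_request()
    ts.hook(TS_LUA_HOOK_READ_REQUEST_HDR, check_allowed)
    return 0
end
```

Note this opens a fresh redis connection per request, which is exactly the scaling concern Di Li raises earlier in the thread; at higher request rates some form of connection reuse (or a different store) would be needed.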
>>>
>>> The background here is that we are building a forward proxy. The reason
>>> I was thinking about ts.schedule is that it could keep fetching the
>>> latest control data from an external endpoint (whatever NoSQL-family
>>> solution) via ts.fetch, so that we don't have to depend on the __init__
>>> function, which would require restarting traffic server to pick up the
>>> latest data. The __init__ function could serve the first fetch of the
>>> data; that was what I had in mind at the beginning.
>>>
>>> It looks like the piece I was missing is a shared lua dict: other lua
>>> scripts won't be able to access whatever the scheduler fetched.
>>>
>>> Now I'm thinking of a different path. Maybe not ideal, but it might
>>> work: what if I have a do_global_read_request (we are a forward proxy,
>>> we have no remap rules) that checks where a request is coming from?
>>> When it matches our control fetch, it fetches the data from the
>>> external endpoint that has our control data and updates the GLOBAL
>>> variable, which is used in the same script.
>>>
>>> ============================
>>>
>>> local control_data = {}
>>>
>>> function __init__()
>>>     -- do whatever logic is needed to fetch an external endpoint and
>>>     -- update the lua table control_data; maybe use luasocket for that
>>> end
>>>
>>> function control_request()
>>>     local url_host = ts.client_request.get_url_host()
>>>
>>>     if url_host == '127.0.0.1' then
>>>         -- for example, this is our local cron call to update the
>>>         -- control_data table; could match on IP or whatever makes sense
>>>         -- ts.fetch our endpoint to get control_data
>>>         local res = ts.fetch(url, {method = 'GET', header = hdr})
>>>         if res.status == 200 then
>>>             -- parse the result and update control_data
>>>         end
>>>         return 0
>>>     else
>>>         -- this is a normal client call; check whether control_data has
>>>         -- logic for it, for example a simple allow-or-not
>>>         if control_data[url_host] == 'allow' then
>>>             return 0
>>>         else
>>>             -- this is not allowed
>>>             ts.http.set_resp(403)
>>>             return 0
>>>         end
>>>     end
>>> end
>>>
>>> function do_global_read_request()
>>>     ts.hook(TS_LUA_HOOK_READ_REQUEST_HDR, control_request)
>>>     return 0
>>> end
>>> ===============================
>>>
>>> If I understand this correctly, __init__ will only be called once at
>>> traffic server startup, and then all later requests will go through the
>>> do_global_read_request logic (we are a forward proxy).
>>>
>>> Two questions here:
>>>
>>> 1. With this workflow, after __init__ I will have a control_data table.
>>> If I update that table via local calls from 127.0.0.1, do subsequent
>>> requests see the control_data with the new data, or still the data
>>> initialized by __init__?
>>>
>>> 2. ts.fetch's context is "after do_remap", as you mentioned yesterday.
>>> I don't have do_remap() but do_global_read_request(), and I call the
>>> fetch inside a hook; should I be OK?
>>>
>>> Thanks,
>>> Di Li
>>>
>>> On Dec 8, 2016, at 4:27 PM, Shu Kit Chan <chanshu...@gmail.com> wrote:
>>>
>>> 1) No. You don't need to do anything in the txn close hook.
>>>
>>> 2) See the example in the documentation. I think we can definitely
>>> improve the text a bit. What it means is that you need to add a hook
>>> inside do_remap, and ts.schedule() can only be called inside that hook
>>> function.
>>> It is similar to
>>> https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSContSchedule.en.html
>>> However, inside ts_lua we only support net and task.
>>> 3) There is an example (the second one) close to the beginning of the
>>> doc -
>>> https://docs.trafficserver.apache.org/en/latest/admin-guide/plugins/ts_lua.en.html
>>>
>>> 4) We don't have this for now. Suggestions/patches are welcome.
>>>
>>> IMHO, you don't need to use ts.schedule(). You can directly use
>>> luasocket inside the __init__ function, since it is run inside
>>> TSPluginInit(). You can use a global variable to store the results you
>>> want, similar to the __init__ example in the document.
>>> However, please be aware that we instantiate multiple lua states, and
>>> we run __init__ for each of those states, so it may result in a slow
>>> startup time for you. See jira -
>>> https://issues.apache.org/jira/browse/TS-4994 for a patch for this.
>>>
>>> Thanks. Let me know if I can provide any more help.
>>>
>>> Kit
>>>
>>> On Thu, Dec 8, 2016 at 3:52 PM, Di Li <di...@apple.com> wrote:
>>>
>>> Hey guys,
>>>
>>> Several questions about ts_lua. I've just started to use it, so some
>>> questions may seem very simple.
>>>
>>> 1. Question about the log line "[globalHookHandler] has txn hook ->
>>> adding txn close hook handler to release resources"
>>>
>>> For example, I'm using the following code and the debug log shows the
>>> line above. Do I need to do anything to handle a txn close hook to
>>> release the resources, or should I just ignore the log?
>>>
>>> function do_some_work()
>>>     -- do some logic
>>>     return 0
>>> end
>>>
>>> function do_global_read_request()
>>>     ts.debug('this is do_global_read_request')
>>>     ts.hook(TS_LUA_HOOK_READ_REQUEST_HDR, do_some_work)
>>>     return 0
>>> end
>>>
>>> 2. Question about ts.schedule
>>>
>>> What does "after do_remap" mean? Is that after TS_HTTP_POST_REMAP_HOOK?
>>> What are the types in "THREAD_TYPE" other than the one in the example,
>>> "TS_LUA_THREAD_POOL_NET", and what is the difference between those
>>> types?
>>>
>>> ts.schedule
>>> syntax: ts.schedule(THREAD_TYPE, sec, FUNCTION, param1?, param2?, ...)
>>> context: after do_remap
>>>
>>> 3. Init function called when traffic_server starts
>>>
>>> Is there an init function that gets called when traffic_server starts,
>>> like the following in nginx?
>>>
>>> https://github.com/openresty/lua-nginx-module#init_worker_by_lua
>>>
>>> 4. Global shared lua dict
>>>
>>> Is there a global shared lua dict that does not have the same lifetime
>>> as ts.ctx, something like lua_shared_dict in nginx?
>>>
>>> What I'm trying to do here: when traffic server starts up, it calls an
>>> init script, which sets up a scheduler to fetch a URL (either internal
>>> or external) and stores the response in a shared lua dict as key/value
>>> pairs. Later, each request that comes to ATS checks the key/values in
>>> that shared lua dict. With that in mind, I need to understand the four
>>> questions above.
>>>
>>> Thanks,
>>> Di Li
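Kit's suggestion from earlier in the thread (fetch with luasocket inside __init__ and keep the result in a global) could be sketched as below. This is an illustrative sketch only: the control endpoint URL and the line-based "host value" response format are made-up placeholders, and per TS-4994, __init__ runs once per lua state, so this fetch happens several times and can slow startup.

```
-- Sketch: seed control_data at plugin init using luasocket, per Kit's
-- suggestion. The endpoint and response format below are hypothetical.
local http = require 'socket.http'

control_data = {}

function __init__(argtb)
    -- http.request returns body, status code (plus headers and status line)
    local body, code = http.request('http://127.0.0.1:8080/control')
    if code == 200 and body then
        -- assumed format: one "host value" pair per line,
        -- e.g. "youtube.com allow"
        for host, value in string.gmatch(body, '(%S+)%s+(%S+)') do
            control_data[host] = value
        end
    end
end
```

Hooks registered later (e.g. from do_global_read_request) can then consult `control_data[host]` per request; refreshing it after startup still needs one of the mechanisms discussed above, since the table is per-lua-state and __init__ runs only once.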