2020-01-08 01:40:01 UTC - Tyson Norris: concurrency is here <https://github.com/apache/openwhisk/blob/master/docs/concurrency.md> thankyou : Rodric Rabbah
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578447601009700
----
2020-01-08 01:40:29 UTC - Tyson Norris: (but I don’t think there are docs for the CLI :disappointed: )
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578447629010100
----
2020-01-08 06:50:51 UTC - Keerthi Kumar S R: @Rodric Rabbah Thank you... I will go through this
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578466251010300?thread_ts=1578293600.123600&cid=C3TPCAQG1
----
2020-01-08 14:59:15 UTC - dan mcweeney: FYI Tech Interchange is starting shortly: <https://zoom.us/my/asfopenwhisk>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578495555010900
----
2020-01-08 17:23:30 UTC - Ali Tariq: Hi guys, can someone clarify two things regarding the OpenWhisk kube deployment? First, there is a parameter `actionsInvokesConcurrent` which I thought limited how many functions can run in parallel. Its description states "The maximum number of concurrent action invocations by a single namespace", which is not the same thing as functions running in parallel, right? I say this because I set up a container memory pool of more than 1200 and set `actionsInvokesConcurrent` to 1000. When I sent a burst of 1200 requests, more than 1000 functions started! (Is there a way to ensure only 1000 functions run in parallel?) Secondly, how do we modify the outstanding request queue length? With a 200-container memory pool, any requests above the limit were queued up and run in turn without any drops. But with a 1000-container memory pool, when I go above the limit, say a burst of 1200 requests, I start getting `500 Internal Server Error`, nginx/1.15.12 (for some of the remaining requests, not all 1200). How can I fix this? Thanks!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578504210020500?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 18:59:02 UTC - Rodric Rabbah: What’s the hold time (function duration)?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578509942000500?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 18:59:14 UTC - Ali Tariq: 15 seconds
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578509954001100?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 18:59:25 UTC - Rodric Rabbah: Is the 500 error from nginx or the controller (is there a “code” attached)?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578509965001500?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 18:59:50 UTC - Ali Tariq: `<Response [500]> b'<html>\r\n<head><title>500 Internal Server Error</title></head>\r\n<body>\r\n<center><h1>500 Internal Server Error</h1></center>\r\n<hr><center>nginx/1.15.12</center>\r\n</body>\r\n</html>\r\n'`
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578509990002500?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:00:21 UTC - Rodric Rabbah: That’s nginx. Did you check its logs?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510021003100?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:00:35 UTC - Rodric Rabbah: What should happen is that the requests get queued, as you expected.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510035003700?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:00:46 UTC - Ali Tariq: No ... let me check!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510046004100?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:00:53 UTC - Rodric Rabbah: Are you using more than one controller?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510053004500?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:02:10 UTC - Ali Tariq: `2020/01/08 17:16:52 [crit] 6#6: accept4() failed (24: Too many open files)` (logs from nginx)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510130005100?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:02:17 UTC - Ali Tariq: Yes ... 5 controllers
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510137005300?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:08:17 UTC - Ali Tariq: I don't see any configuration parameter for increasing max open files in the nginx configuration ... should I increase its replica count?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578510497005700?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:18:38 UTC - Rodric Rabbah: With multiple controllers I believe there is a slop factor to account for the fact that the threshold is partitioned across the controllers and they need time to communicate and converge. So the limit of 1K is raised slightly; that computation is in the load balancer code somewhere.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578511118006000?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:26:24 UTC - Ali Tariq: I was actually talking about increasing the replica count for nginx (just did, and it seems to be working), although I can't make sense of it ... I have a total container memory pool of `1050` (21 invokers with 50 containers (`12800m`) each). When I sent 1200 requests, 1150 finished successfully and 50 returned a rate exceeded error (`<Response [429]> b'{\n "code": "rEuz45ZdeTzOubFbJXnyNDfeH4pDp4VT",\n "error": "Too many concurrent requests in flight (count: 240, allowed: 240)."\n}'`) ... and it's also worth mentioning that I have set `actionsInvokesConcurrent` to 1000.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578511584006200?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:32:37 UTC - Rodric Rabbah: 240 x 5 = 1200; have to check the lb computation for max concurrent
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578511957006400?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 19:38:11 UTC - Ali Tariq: lb is the load balancer? I don't see any configuration for the load balancer in values.yaml (the default configuration for the kube deployment) except for `blackboxFraction` & `timeoutFactor`.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578512291006600?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
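(For context, a burst like the one described above can be reproduced with a small script along these lines. This is a sketch only: the API host, credentials, and the `sleep15` action name are placeholders, and it assumes blocking invocations against the standard `/api/v1/namespaces/_/actions/<name>` endpoint.)

```python
#!/usr/bin/env python3
"""Minimal sketch of a burst test against an OpenWhisk deployment.

Assumptions (not from the thread): APIHOST/AUTH are placeholders, and an
action named 'sleep15' exists that holds for ~15 seconds.
"""
import collections
from concurrent.futures import ThreadPoolExecutor

import requests

APIHOST = "https://openwhisk.example.com"   # placeholder API host
AUTH = ("user-uuid", "user-key")            # placeholder namespace credentials
ACTION = "sleep15"                          # hypothetical 15-second action
BURST = 1200                                # size of the request burst

def invoke(_):
    # Blocking invocation of the action; return only the HTTP status code.
    r = requests.post(
        f"{APIHOST}/api/v1/namespaces/_/actions/{ACTION}",
        params={"blocking": "true"},
        auth=AUTH,
        verify=False,   # self-signed certs are common in test clusters
        timeout=120,
    )
    return r.status_code

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=BURST) as pool:
        statuses = collections.Counter(pool.map(invoke, range(BURST)))
    # Expect a mix of 200s, 429s (concurrency limit exceeded), and possibly
    # 500s if the nginx front end runs out of file descriptors.
    print(statuses)
```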
2020-01-08 19:55:05 UTC - Ali Tariq: Could you also point me towards request queuing docs (extra requests past the max concurrent)? What is the default queue length and how can I modify it?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513305006800?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:01:04 UTC - Rodric Rabbah: It’s in the code - I have to look myself, I don’t remember where it is exactly
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513664007100?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:01:36 UTC - Rodric Rabbah: There is no limit on the queue length - it’s a “feature”: you can keep queuing while you stay below the per-minute API threshold and the max concurrent limit
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513696007300?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:02:06 UTC - Rodric Rabbah: There is a max limit in the system that acts as a global kill switch, beyond which no requests are accepted
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513726007500?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:02:43 UTC - Ali Tariq: So then ... what’s the point of `actionsInvokesConcurrent`?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513763007700?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:03:22 UTC - Rodric Rabbah: That’s the max any user can have active in the system at any given time. Meaning when you hit this threshold your requests get 429s
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513802008900?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:03:44 UTC - Rodric Rabbah: But if you stay below this limit the system is supposed to queue if it needs to
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513824009700?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:04:25 UTC - Ali Tariq: But I am not receiving that ... which is the issue. I am using the default user in the deployment ... I should get 429s after 1000 invocations (which is not happening)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578513865009900?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:40:31 UTC - Ali Tariq: Okay, “But if you stay below this limit the system is supposed to queue if it needs to” - this helped me ensure queuing always happens ... previously both `actionsInvokesConcurrent` and the container pool were 1000, hence queuing was inconsistent. Increasing `actionsInvokesConcurrent` ensures no drops.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578516031010100?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 20:41:35 UTC - Ali Tariq: Although I am still curious about the `240x5=1200` & lb calculations, let me know whenever you get time. It's not very important because now I can resume my work without hindrance. Thanks!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578516095010300?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 21:32:51 UTC - Rodric Rabbah: Look for the function `dilateLimit` in Entitlement.scala
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578519171010600?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 21:33:34 UTC - Rodric Rabbah: It increases the concurrency limit by 20% if the cluster contains more than 1 controller
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578519214010800?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
2020-01-08 21:34:24 UTC - Rodric Rabbah: The error message is misleading - it should say 1K, not 240; that should be fixed +1 : Ali Tariq
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1578519264011000?thread_ts=1578504210.020500&cid=C3TPCAQG1
----
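(Putting the numbers from this thread together: a back-of-the-envelope sketch only, not the actual Entitlement.scala or load-balancer code, and it assumes the dilated namespace limit is split evenly across the controllers.)

```python
# Sketch: why the 429 error said "allowed: 240" with actionsInvokesConcurrent = 1000.
# Assumption (not confirmed in the thread): the dilated limit is partitioned
# evenly across the controller replicas.
import math

namespace_limit = 1000   # actionsInvokesConcurrent
controllers = 5          # controller replicas in Ali's deployment
dilation = 1.2           # dilateLimit adds 20% when there is more than one controller

dilated_limit = math.ceil(namespace_limit * dilation)   # 1200
per_controller = dilated_limit // controllers           # 240, the figure in the error

print(dilated_limit, per_controller)   # -> 1200 240
```

This lines up with Rodric's `240 x 5 = 1200` remark earlier in the thread.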