2022-06-01 17:50:47 UTC - Brendan Doyle: Just wanted to open a discussion about 
the new scheduler. It's heavily optimized for short-running requests, i.e., a few 
milliseconds. However, for very long-running functions, i.e., 10+ seconds, it 
scales out very quickly, since the container throughput calculation effectively 
concludes that a new container is needed for each added level of concurrency. 
This is admittedly a very small fraction of FaaS use cases, but it's supported 
nonetheless. These use cases are much more async than the normal case of a sync 
response expected within a reasonable HTTP request time of a few milliseconds, 
so some latency while waiting for available capacity should be much more 
acceptable. For example, if a function takes 10 seconds to run, its caller won't 
really care if it has to wait 2-3 seconds for available capacity, and both the 
namespace and the operator would likely prefer that latency over uncontrolled 
fan-out of concurrency.

The problem, imo, is that the activation staleness value is a constant for all 
function types (currently 100ms). 100ms definitely makes sense for anything that 
runs within a second, but do we think we could make this value dynamic based on 
the average duration of that function? Or am I on the right track here on how we 
could potentially control fan-out of long-running functions and prefer latency 
over fan-out? A rough sketch of the idea is below.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1654105847846179?thread_ts=1654105847.846179&cid=C3TPCAQG1
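To make the idea concrete, here is a minimal sketch of what a per-action staleness threshold derived from average duration could look like. This is not the actual scheduler code; the names `staleThresholdFor`, `staleFactor`, and `maxStaleThreshold` are all hypothetical, and the 100ms default is the only value taken from the discussion above.

```scala
import scala.concurrent.duration._

// Hypothetical sketch: derive a per-action staleness threshold from the
// action's observed average duration instead of one constant for every function.
object StaleThresholdSketch {

  // Current behaviour: a single constant for all actions.
  val defaultStaleThreshold: FiniteDuration = 100.milliseconds

  // Assumed tunables (not from the scheduler): the fraction of the average
  // duration a queued activation may wait before it counts as stale, and a cap
  // so long-running actions don't wait unbounded time for capacity.
  val staleFactor: Double = 0.25
  val maxStaleThreshold: FiniteDuration = 3.seconds

  /** Pick a staleness threshold for an action given its rolling average duration. */
  def staleThresholdFor(avgDuration: Option[FiniteDuration]): FiniteDuration =
    avgDuration match {
      case Some(d) if d > 1.second =>
        // Scale the threshold with the average duration, clamped between the
        // default and the cap.
        val scaled = (d.toMillis * staleFactor).toLong.milliseconds
        scaled.min(maxStaleThreshold).max(defaultStaleThreshold)
      case _ =>
        // Sub-second actions keep the existing 100 ms behaviour.
        defaultStaleThreshold
    }
}

// Example: with these assumed values, a 10-second action would tolerate up to
// 2.5 s of queueing before its activations count as stale and trigger new
// containers, matching the "2-3 seconds of wait is acceptable" intuition above.
```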