bdoyle0182 commented on issue #5256:
URL: https://github.com/apache/openwhisk/issues/5256#issuecomment-1145172885

   @rabbah things fan out dramatically for functions that run for minutes. Long-running 
functions make intra-container concurrency seem of the utmost importance to me. 
Otherwise you eat into the memory pool very quickly, since much of the per-container 
overhead is the fixed cost of the assets required to run the server / container itself. 
My guess is that in the vast majority of cases the increase in memory usage per 
additional activation within a single container is negligible. However, right now only 
nodejs supports intra-container concurrency, and it requires the user to design their 
function very carefully around it and to do their own benchmarking (see the sketch 
below).
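   To make concrete what "designing the function around it" means, here is a minimal 
sketch of a nodejs action written to be safe under intra-container concurrency. This is 
my own illustration, not from the issue: the endpoint and the assumed concurrency limit 
are hypothetical. The point is that once the action's concurrency is greater than 1, 
several activations share one Node.js process, so module-level mutable state is shared 
too.

```js
// Hypothetical action sketch: with intra-container concurrency enabled
// (i.e. the action deployed with a concurrency limit > 1), multiple
// activations run in this same Node.js process at the same time.

const https = require('https');

// Safe to share across activations: an immutable, connection-pooling agent.
const agent = new https.Agent({ keepAlive: true });

// Unsafe to share: per-activation mutable state at module scope, e.g.
//   let currentUser;   // concurrent activations would overwrite each other

function main(params) {
  // Keep all per-activation state local to the invocation.
  const user = params.user || 'anonymous';

  return new Promise((resolve, reject) => {
    https
      .get({ host: 'example.com', path: '/', agent }, (res) => {
        res.resume(); // drain the body; we only care about the status here
        resolve({ user, status: res.statusCode });
      })
      .on('error', reject);
  });
}

exports.main = main;
```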
   
   @style95: On the topic of 100ms for activation staleness, does that make sense as 
the default? Cold starts in any system, as far as I know, will not be 100ms; 500ms-1s 
is more the norm in real-world use, with plenty of variables that can push it even 
higher. If the latency of creating a new container is far more than 100ms, does it make 
sense to wait only 100ms before deciding to spawn a new one? The activation's latency 
then becomes 100ms + cold start time. Of course the tradeoff is that if you increased 
the threshold to, say, 1s, an activation that still ends up needing a cold start pays 
1s + cold start time, so I'm not sure which is better.
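   To make the tradeoff concrete, here is a back-of-the-envelope sketch with my own 
illustrative numbers (the 700ms cold start is an assumption, not a measurement): the 
worst case under each staleness threshold is waiting out the whole window and then 
still paying the cold start.

```js
// Illustrative arithmetic only: assumed ~700ms cold start, thresholds from above.
const coldStartMs = 700;

for (const stalenessMs of [100, 1000]) {
  // Worst case: the staleness window elapses and a new container is still needed.
  const worstCaseMs = stalenessMs + coldStartMs;
  console.log(`threshold ${stalenessMs}ms -> worst case ${worstCaseMs}ms`);
}
// threshold 100ms  -> worst case  800ms
// threshold 1000ms -> worst case 1700ms
// The larger threshold only pays off when waiting lets an existing container free up.
```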

