2019-03-27 03:21:19 UTC - Rodric Rabbah: i think @Ben Browning would know best
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553656879435300?thread_ts=1553635831.429600&cid=C3TPCAQG1
----
2019-03-27 03:21:33 UTC - Rodric Rabbah: cool!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553656893435500?thread_ts=1553638882.431200&cid=C3TPCAQG1
----
2019-03-27 18:17:58 UTC - Andrei Palade: What is the default auto-scaling 
metric used by OpenWhisk when deployed on top of a Kubernetes cluster?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553710678436700
----
2019-03-27 18:19:03 UTC - Andrei Palade: 
<https://github.com/apache/incubator-openwhisk-deploy-kube>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553710743436900
----
2019-03-27 18:54:39 UTC - Dave Grove: What part of the system are you 
expecting to scale?  The default design is to use a DaemonSet for the invokers, 
so that as you add worker nodes (labelled with `openwhisk-role=invoker`) we’ll 
scale up to use the additional compute.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553712879439400
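Under that default design, adding invoker capacity is a matter of labelling 
more worker nodes (e.g. `kubectl label node worker-2 openwhisk-role=invoker`). 
A minimal sketch of the same step via the official Python Kubernetes client; 
the node name `worker-2` is hypothetical:

```python
# Sketch: label an extra worker node so the invoker DaemonSet schedules onto it.
# Assumes the official `kubernetes` Python client and a reachable kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

# Merge-patch the node's metadata with the label the invoker DaemonSet selects on.
v1.patch_node(
    "worker-2",  # hypothetical node name
    {"metadata": {"labels": {"openwhisk-role": "invoker"}}},
)
```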
----
2019-03-27 19:00:58 UTC - Andrei Palade: As I increase the number of requests 
to some deployed function (action), I can see that the number of pods running 
that action increases. My understanding is that the invoker performs this 
process. What is the default metric used to make this decision? Does what I'm 
asking make sense?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553713258442400
----
2019-03-27 19:21:36 UTC - Dave Grove: Take a look at 
<https://github.com/apache/incubator-openwhisk/blob/master/docs/about.md> if 
you haven’t already. The basic flow is that invocations of actions come into 
the system via nginx/controller. A LoadBalancer (running in the Controller) 
assigns each incoming work item to an Invoker. Heuristically, the LoadBalancer 
tries to pick an Invoker that (a) is likely to have the capacity to run the 
work “right now” and (b) is likely to have run that action in the recent past 
(so it probably has a container it can reuse to execute the action and avoid a 
cold start). Each Invoker simply dequeues actions to run, checks whether it 
already has an available container, and then either (a) uses the available 
container (warm path) or (b) creates a new container to run the action (cold 
path). There’s a blog post 
<https://medium.com/openwhisk/squeezing-the-milliseconds-how-to-make-serverless-platforms-blazing-fast-aea0e9951bd0>
that goes into more detail about how the critical path works.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1553714496448200
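A rough Python sketch of the warm/cold decision described above; the real 
Invoker is written in Scala and also manages memory limits, pausing, and 
eviction, and all names below are invented for illustration:

```python
# Toy model of an Invoker's container-reuse decision, per the description above.

class Container:
    """Stand-in for a docker action container holding one action's code."""
    def __init__(self, action_name):
        # Creating a container corresponds to the cold path: pull image, /init.
        self.action_name = action_name

    def run(self, params):
        # Corresponds to invoking /run on the action container.
        return f"ran {self.action_name} with {params}"


class Invoker:
    def __init__(self):
        self.idle = {}  # action name -> warm container available for reuse

    def invoke(self, action_name, params):
        container = self.idle.pop(action_name, None)
        if container is None:
            # (b) cold path: no reusable container, create a new one.
            container = Container(action_name)
        # (a) warm path (when the pop above succeeded): reuse the container.
        result = container.run(params)
        self.idle[action_name] = container  # keep it warm for the next request
        return result


if __name__ == "__main__":
    inv = Invoker()
    print(inv.invoke("hello", {"name": "world"}))  # cold start
    print(inv.invoke("hello", {"name": "again"}))  # warm reuse
```

Keeping idle containers keyed by action is exactly the reuse the 
LoadBalancer's affinity heuristic tries to exploit: routing repeat invocations 
of an action to the same Invoker makes the warm path far more likely.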
----
