Hi Michael,

Thanks for checking. I wasn't referring to adding/removing VMs, but rather
activation containers. In today's model that is done intrinsically, while
I *think* in what Dragos described, the containers would have to be
monitored somehow, so this new component can decide (based on CPU, memory,
I/O, etc. load within the containers) when to add/remove containers.
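
To illustrate what such a component would have to do (purely a toy
sketch -- the thresholds and the simulated metrics below are made up,
nothing that exists in OpenWhisk today):

  # Toy feedback loop: watch per-container load, add a container when
  # hot, remove one when idle. All numbers are invented.
  import random

  HIGH, LOW = 0.80, 0.20      # utilization thresholds (arbitrary)
  containers = [0.5]          # simulated per-container cpu/mem/io load

  for tick in range(10):
      load = sum(containers) / len(containers)
      if load > HIGH:
          containers.append(0.5)               # add a container
      elif load < LOW and len(containers) > 1:
          containers.pop()                     # remove a container
      # drift the simulated load so the loop has something to react to
      containers[:] = [min(1.0, max(0.0, c + random.uniform(-0.3, 0.3)))
                       for c in containers]
      print(f"tick {tick}: {len(containers)} containers, load {load:.2f}")

Exactly the kind of threshold-tuning we would then have to get right.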


Thanks & best regards
Michael

---------------------------
IBM Distinguished Engineer
Chief Architect, Serverless / FaaS & OpenWhisk
Mobile: +49-170-7993527
michaelbehre...@de.ibm.com |  @michael_beh

IBM Deutschland Research & Development GmbH / Chair of the Supervisory
Board: Martina Koederitz
Management: Dirk Wittkopp
Registered office: Böblingen / Register court: Amtsgericht Stuttgart,
HRB 243294



From:   Michael Marth <mma...@adobe.com.INVALID>
To:     "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
Date:   07/05/2017 08:28 PM
Subject:        Re: Improving support for UI driven use cases



Hi Michael,

To make sure we mean the same thing with the word "autoscaling" in the
context of this thread and in the context of OpenWhisk: I refer to the
(automated) increase/decrease of the number of VMs that run the action
containers. Is that what you also refer to?

If so, then the proposal at hand is orthogonal to autoscaling. At its core
it is about increasing the density of executing actions within one
container, and in that sense it is independent of how many containers,
VMs, etc. there are in the system, or of how the system is shrunk/grown.
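
As a toy illustration of what I mean by density (nothing
OpenWhisk-specific -- just many activations sharing one process instead
of one process each; all names below are made up):

  # One process, many concurrent in-flight activations as coroutines --
  # the "density" idea in miniature.
  import asyncio

  async def activation(params):
      await asyncio.sleep(0.1)    # simulated I/O-bound action work
      return {"echo": params}

  async def main():
      # 100 concurrent activations inside a single container process
      results = await asyncio.gather(
          *(activation({"n": i}) for i in range(100)))
      print(len(results), "activations served by one process")

  asyncio.run(main())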

In practical terms there is still a connection between the proposal and
scaling the VMs: if the density of executing actions is increased by
orders of magnitude, then scaling the VMs becomes a much less pressing
topic (at least for the types of workload I described previously). But
this practical consideration should not be mistaken for this being a
discussion of autoscaling.
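
To put illustrative (entirely made-up) numbers on the density point: with
10 invoker VMs x 16 containers each, the system tops out at 160 in-flight
activations; allowing, say, 100 concurrent activations per container
lifts that ceiling to 16,000 on the very same VMs.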

Please let me know if I misunderstood your use of the term autoscaling,
or if the above does not explain it well.

Thanks!
Michael 




On 05/07/17 16:57, "Michael M Behrendt" <michaelbehre...@de.ibm.com> wrote:

>
>
>Hi Michael/Rodric,
>
>I'm struggling to understand how a separate invoker pool helps us avoid
>implementing traditional autoscaling if we process multiple activations
>as threads within a shared process. Can you please elaborate / provide
>an example?
>
>Sent from my iPhone
>
>> On 5. Jul 2017, at 16:53, Michael Marth <mma...@adobe.com.INVALID> wrote:
>>
>> Michael B,
>> Re your question: exactly what Rodric said :)
>>
>>
>>
>>> On 05/07/17 12:32, "Rodric Rabbah" <rod...@gmail.com> wrote:
>>>
>>> The issue at hand arises precisely because there isn't any autoscaling
>>> of capacity (N invokers provide M containers per invoker). Once all
>>> those slots are consumed, any new requests are queued - as previously
>>> discussed.
>>>
>>> Adding more density per VM is one way of providing additional capacity
>>> over finite resources. This is the essence of the initial proposal.
>>>
>>> As noted in previous discussions on this topic, this should be viewed
>>> as managing a different resource pool (and not the same pool of
>>> containers as ephemeral actions). Once you buy into that,
>>> generalization to other resource pools becomes natural.
>>>
>>> Going further, serverless becomes the new PaaS.
>>>
>>> -r
>>>
>>>> On Jul 5, 2017, at 6:11 AM, Michael M Behrendt
>>>> <michaelbehre...@de.ibm.com> wrote:
>>>>
>>>> Hi Michael,
>>>>
>>>> thanks for the feedback -- glad you like my statement re the value
>>>> prop :-)
>>>>
>>>> I might not yet have fully gotten my head around Steve's proposal --
>>>> what are your thoughts on how this would help avoid reimplementing an
>>>> autoscaling / feedback loop mechanism, as we know it from more
>>>> traditional runtime platforms?
>>>>
>>>>
>>>> Thanks & best regards
>>>> Michael
>>>>
>>>>
>>>>
>>>> From:   Michael Marth <mma...@adobe.com.INVALID>
>>>> To:     "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
>>>> Date:   07/05/2017 11:25 AM
>>>> Subject:        Re: Improving support for UI driven use cases
>>>>
>>>>
>>>>
>>>> Hi Michael,
>>>>
>>>> Totally agree with your statement
>>>> "value prop of serverless is that folks don't have to care about that"
>>>>
>>>> Again, the proposal at hand does not intend to change that at all. On
>>>> the contrary - in our mind it's a requirement that the developer
>>>> experience should not change, nor that internals of the execution
>>>> engines get exposed.
>>>>
>>>> I find Stephen's comment about generalising the runtime behaviour very
>>>> exciting. It could open the door to very different types of workloads
>>>> (like training TensorFlow models or running Spark jobs), but with the
>>>> same value prop: users do not have to care about managing
>>>> resources/servers. And for providers of OW systems all the OW goodies
>>>> would still apply (e.g. running untrusted code). Moreover, if we split
>>>> the Invoker into different specialised Invokers, then those different
>>>> specialised workloads could live independently from each other (in
>>>> terms of code as well as resource allocation in deployments).
>>>> You can probably tell I am really excited about Stephen's idea :) I
>>>> think it would be a great step forward in increasing the use cases
>>>> for OW.
>>>>
>>>> Cheers
>>>> Michael
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 04/07/17 20:15, "Michael M Behrendt" <michaelbehre...@de.ibm.com>
>>>> wrote:
>>>>
>>>>> Hi Dragos,
>>>>>
>>>>>> What stops OpenWhisk from being smart in observing the response
>>>>>> times, CPU consumption, and memory consumption of the running
>>>>>> containers?
>>>>>
>>>>> What are your thoughts on how this approach would be different from
>>>>> the many IaaS- and PaaS-centric autoscaling solutions that have been
>>>>> built over the last years? All of them require relatively complex
>>>>> policies (e.g. scale based on CPU or memory utilization, end-user
>>>>> response time, etc. -- what are the thresholds for when to add/remove
>>>>> capacity?), and a value prop of serverless is that folks don't have
>>>>> to care about that.
>>>>>
>>>>> We should discuss more during the call, but wanted to get this out
>>>>> as food for thought.
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On 4. Jul 2017, at 18:50, Dascalita Dragos <ddrag...@gmail.com> wrote:
>>>>>
>>>>>>> How could a developer understand how many requests per container
>>>>>>> to set
>>>>>>
>>>>>> James, this is a good point, along with the other points in your
>>>>>> email.
>>>>>>
>>>>>> I think the developer doesn't actually need to know this info. What
>>>>>> stops OpenWhisk from being smart in observing the response times,
>>>>>> CPU consumption, and memory consumption of the running containers?
>>>>>> Doing so, it could learn automatically how many concurrent requests
>>>>>> one action can handle. It might be easier to solve this problem
>>>>>> efficiently, instead of the other problem, which pushes the entire
>>>>>> system to its limits when a couple of actions get a lot of traffic.
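>>>>>>
>>>>>> To make that concrete, a rough (entirely made-up, AIMD-style) sketch
>>>>>> of such a learning loop -- raise the learned per-action limit while
>>>>>> latency looks healthy, back off when it degrades; all numbers are
>>>>>> invented:
>>>>>>
>>>>>>   # Toy controller: learn an action's concurrency limit from latency.
>>>>>>   BASELINE_MS = 120.0                  # "healthy" latency (invented)
>>>>>>   limit = 1                            # learned concurrency limit
>>>>>>
>>>>>>   def adjust(observed_ms):
>>>>>>       global limit
>>>>>>       if observed_ms < 1.5 * BASELINE_MS:
>>>>>>           limit += 1                   # additive increase
>>>>>>       else:
>>>>>>           limit = max(1, limit // 2)   # multiplicative decrease
>>>>>>       return limit
>>>>>>
>>>>>>   for ms in [100, 110, 130, 400, 125]: # fake latency samples
>>>>>>       print("limit ->", adjust(ms))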
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Mon, Jul 3, 2017 at 10:08 AM James Thomas <jthomas...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> +1 on Markus' points about "crash safety" and "scaling". I can
>>>>>>> understand the reasons behind exploring this change, but from a
>>>>>>> developer experience point of view this introduces a large amount
>>>>>>> of complexity to the programming model.
>>>>>>>
>>>>>>> If I have a concurrent container serving 100 requests and one of
>>>>>>> the requests triggers a fatal error, how does that affect the other
>>>>>>> requests? Tearing down the entire runtime environment will destroy
>>>>>>> all those requests.
>>>>>>>
>>>>>>> How could a developer understand how many requests per container
>>>>>>> to set without a manual trial-and-error process? It also means you
>>>>>>> have to start considering things like race conditions or other
>>>>>>> challenges of concurrent code execution. This also makes debugging
>>>>>>> and monitoring more challenging.
>>>>>>>
>>>>>>> Looking at the other serverless providers, I've not seen this
>>>>>>> feature requested before. Developers generally ask AWS to raise the
>>>>>>> concurrent invocations limit for their application. This keeps the
>>>>>>> platform doing the hard task of managing resources and being
>>>>>>> efficient, and allows them to use the same programming model.
>>>>>>>
>>>>>>>> On 2 July 2017 at 11:05, Markus Thömmes <markusthoem...@me.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> ...
>>>>>>>>
>>>>>>>> To Rodric's points I think there are two topics to speak about and
>>>>>>>> discuss:
>>>>>>>>
>>>>>>>> 1. The programming model: The current model encourages users to
>>>>>>>> break their actions apart into "functions" that take payload and
>>>>>>>> return payload. Having a deployment model outlined could, as
>>>>>>>> noted, encourage users to use OpenWhisk as a way to rapidly
>>>>>>>> deploy/undeploy their usual webserver-based applications. The
>>>>>>>> current model is nice in that it solves a lot of problems for the
>>>>>>>> customer in terms of scalability and "crash safeness".
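>>>>>>>>
>>>>>>>> To illustrate the shape I mean -- a function from JSON dict to
>>>>>>>> JSON dict (Python flavour shown; names arbitrary):
>>>>>>>>
>>>>>>>>   def main(params):
>>>>>>>>       # payload in, payload out -- no server, no state to manage
>>>>>>>>       name = params.get("name", "world")
>>>>>>>>       return {"greeting": "Hello " + name}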
>>>>>>>>
>>>>>>>> 2. Raw throughput of our deployment model: Setting the concerns
>>>>>>>> aside, I think it is valid to explore concurrent invocations of
>>>>>>>> actions on the same container. This does not necessarily mean that
>>>>>>>> users start to deploy monolithic apps as noted above, but it
>>>>>>>> certainly could. Keeping our JSON-in/JSON-out model, at least for
>>>>>>>> now, could encourage users to continue to think in functions.
>>>>>>>> Having a toggle per action which is disabled by default might be a
>>>>>>>> good way to start here, since many users might need to change
>>>>>>>> action code to support that notion, and for some applications it
>>>>>>>> might not be valid at all. I think it was also already noted that
>>>>>>>> this imposes some of the "old-fashioned" problems on the user,
>>>>>>>> like: how many concurrent requests will my action be able to
>>>>>>>> handle? That kinda defeats the seamless-scalability point of
>>>>>>>> serverless.
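>>>>>>>>
>>>>>>>> (Concretely, such a toggle could simply be an action annotation --
>>>>>>>> e.g. something like "wsk action update myAction -a max-concurrency
>>>>>>>> 16", with the annotation name invented here and the default, when
>>>>>>>> absent, behaving exactly as today: one activation per container at
>>>>>>>> a time.)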
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Markus
>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>> Regards,
>>>>>>> James Thomas
>>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>



