Hi Michael,

Thanks for the feedback -- glad you like my statement re the value prop :-)

I might not yet have fully gotten my head around Steve's proposal -- what 
are your thoughts on how this would help avoid reimplementing an 
autoscaling / feedback-loop mechanism, as we know it from more 
traditional runtime platforms?
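For reference, here is a rough sketch (all names and thresholds are 
hypothetical, not any particular platform's API) of the kind of 
threshold-based feedback loop those platforms make users configure:

    # Hypothetical sketch of a classic threshold-based autoscaler; the
    # policy knobs (thresholds, cooldown) are exactly what serverless
    # users should not have to tune.
    import time

    class ThresholdAutoscaler:
        def __init__(self, min_instances=1, max_instances=50,
                     cpu_high=0.70, cpu_low=0.30, cooldown_s=60):
            self.min = min_instances
            self.max = max_instances
            self.cpu_high = cpu_high      # scale out above this utilization
            self.cpu_low = cpu_low        # scale in below this utilization
            self.cooldown_s = cooldown_s  # damps oscillating decisions
            self.instances = min_instances
            self.last_change = 0.0

        def observe(self, avg_cpu):
            now = time.monotonic()
            if now - self.last_change < self.cooldown_s:
                return self.instances
            if avg_cpu > self.cpu_high and self.instances < self.max:
                self.instances += 1
                self.last_change = now
            elif avg_cpu < self.cpu_low and self.instances > self.min:
                self.instances -= 1
                self.last_change = now
            return self.instances

Every knob in there is a decision we would be handing back to the user.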


Thanks & best regards
Michael



From:   Michael Marth <mma...@adobe.com.INVALID>
To:     "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
Date:   07/05/2017 11:25 AM
Subject:        Re: Improving support for UI driven use cases



Hi Michael,

Totally agree with your statement
"value prop of serverless is that folks don't have to care about that"

Again, the proposal at hand does not intend to change that at all. On the 
contrary -- in our mind it's a requirement that the developer experience 
does not change and that internals of the execution engines do not get 
exposed.

I find Stephen's comment about generalising the runtime behaviour very 
exciting. It could open the door to very different types of workloads 
(like training TensorFlow models or running Spark jobs), but with the same 
value prop: users do not have to care about managing resources/servers. 
And for providers of OW systems all the OW goodies would still apply (e.g. 
running untrusted code). Moreover, if we split the Invoker into different 
specialised Invokers, then those different specialised workloads could 
live independently from each other (in terms of code as well as resource 
allocation in deployments).
You can probably tell I am really excited about Stephen's idea :) I think 
it would be a great step forward in broadening the use cases for OW.
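To sketch the split (pool names hypothetical, purely illustrative):

    # Hypothetical sketch: route activations to specialised invoker
    # pools by workload kind, so each pool can evolve and scale
    # independently in code and in deployment.
    INVOKER_POOLS = {
        "nodejs:default": "pool-functions",    # classic short-lived actions
        "tensorflow:gpu": "pool-ml-training",  # long-running, GPU-backed
        "spark:batch":    "pool-spark",        # cluster-attached batch jobs
    }

    def route(activation):
        # unknown kinds fall back to the general-purpose pool
        return INVOKER_POOLS.get(activation["kind"], "pool-functions")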

Cheers
Michael





On 04/07/17 20:15, "Michael M Behrendt" <michaelbehre...@de.ibm.com> 
wrote:

>Hi Dragos,
>
>> What stops OpenWhisk from being smart about observing the response
>> times, CPU consumption, and memory consumption of the running
>> containers?
>
>What are your thoughts on how this approach would be different from the
>many IaaS- and PaaS-centric autoscaling solutions that have been built
>over the last years? All of them require relatively complex policies (e.g.
>scale based on CPU or memory utilization, end-user response time, etc.;
>what are the thresholds for when to add/remove capacity?), and a value
>prop of serverless is that folks don't have to care about that.
>
>We should discuss more during the call, but wanted to get this out as
>food for thought.
>
>Sent from my iPhone
>
>On 4. Jul 2017, at 18:50, Dascalita Dragos <ddrag...@gmail.com> wrote:
>
>>> How could a developer understand how many requests per container to
>>> set
>> 
>> James, this is a good point, along with the other points in your email.
>> 
>> I think the developer doesn't actually need to know this info. What
>> stops OpenWhisk from being smart about observing the response times, CPU
>> consumption, and memory consumption of the running containers? Doing so,
>> it could learn automatically how many concurrent requests one action can
>> handle. It might be easier to solve this problem efficiently, instead of
>> the other problem, which pushes the entire system to its limits when a
>> couple of actions get a lot of traffic.
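A rough sketch of what that learning could look like (hypothetical,
AIMD-style: raise the limit while latency stays near its single-request
baseline, back off when it degrades):

    # Hypothetical sketch: learn a per-action concurrency limit from
    # observed latency -- additive increase while healthy, multiplicative
    # decrease on contention (AIMD, as in TCP congestion control).
    class ConcurrencyLearner:
        def __init__(self, baseline_ms, max_limit=128):
            self.baseline_ms = baseline_ms  # latency with one request in flight
            self.max_limit = max_limit
            self.limit = 1                  # learned concurrent-request cap

        def observe(self, p99_latency_ms):
            if p99_latency_ms < 1.2 * self.baseline_ms:
                # no visible contention: probe a slightly higher limit
                self.limit = min(self.limit + 1, self.max_limit)
            elif p99_latency_ms > 2.0 * self.baseline_ms:
                # contention detected: back off quickly
                self.limit = max(self.limit // 2, 1)
            return self.limit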
>> 
>> 
>> 
>>> On Mon, Jul 3, 2017 at 10:08 AM James Thomas <jthomas...@gmail.com> 
>>> wrote:
>>> 
>>> +1 on Markus' points about "crash safety" and "scaling". I can
>>> understand the reasons behind exploring this change, but from a
>>> developer-experience point of view this introduces a large amount of
>>> complexity to the programming model.
>>> 
>>> If I have a concurrent container serving 100 requests and one of the
>>> requests triggers a fatal error, how does that affect the other
>>> requests? Tearing down the entire runtime environment will destroy all
>>> those requests.
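A tiny sketch of that failure mode, assuming all in-flight requests
share one runtime process:

    # Sketch: 100 concurrent requests share one process, so a fatal
    # error in a single request tears down all of them.
    import os
    import threading
    import time

    def handle(request_id):
        if request_id == 42:
            os._exit(1)   # fatal, process-level failure -- no cleanup runs
        time.sleep(1)     # the other 99 in-flight requests...
        print(request_id, "done")  # ...never reach this line

    for i in range(100):
        threading.Thread(target=handle, args=(i,)).start()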
>>> 
>>> How could a developer understand how many requests per container to
>>> set without a manual trial-and-error process? It also means you have to
>>> start considering things like race conditions and other challenges of
>>> concurrent code execution. This also makes debugging and monitoring
>>> more challenging.
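Even a trivial per-container counter, for instance, becomes a race once
requests run concurrently (a contrived sketch):

    # Sketch: state that was safe under one-request-per-container
    # becomes a race under concurrent execution.
    request_count = 0  # container-global, previously only touched serially

    def main(params):
        global request_count
        # read-modify-write: two concurrent requests can read the same
        # value, and one increment is silently lost
        request_count += 1
        return {"seen": request_count}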
>>> 
>>> Looking at the other serverless providers, I've not seen this feature
>>> requested before. Developers generally ask AWS to raise the concurrent
>>> invocations limit for their application. This keeps the platform doing
>>> the hard task of managing resources and being efficient, and allows
>>> them to use the same programming model.
>>> 
>>>> On 2 July 2017 at 11:05, Markus Thömmes <markusthoem...@me.com> 
>>>> wrote:
>>>> 
>>>> ...
>>>> 
>>>>
>>>> To Rodric's points I think there are two topics to speak about and
>>>> discuss:
>>>>
>>>> 1. The programming model: The current model encourages users to break
>>>> their actions apart into "functions" that take payload and return
>>>> payload. Having a deployment model like the one outlined could, as
>>>> noted, encourage users to use OpenWhisk as a way to rapidly
>>>> deploy/undeploy their usual webserver-based applications. The current
>>>> model is nice in that it solves a lot of problems for the customer in
>>>> terms of scalability and "crash safeness".
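A minimal Python action, to picture the shape of that model:

    # The current model in a nutshell: an action is a function that
    # takes a JSON-like dict of parameters and returns one.
    def main(params):
        name = params.get("name", "world")
        return {"greeting": "Hello " + name}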
>>>> 
>>>> 2. Raw throughput of our deployment model: Setting the concerns
>>>> aside, I think it is valid to explore concurrent invocations of
>>>> actions on the same container. This does not necessarily mean that
>>>> users start to deploy monolithic apps as noted above, but it certainly
>>>> could. Keeping our JSON-in/JSON-out model, at least for now, could
>>>> encourage users to continue to think in functions. Having a toggle per
>>>> action which is disabled by default might be a good way to start here,
>>>> since many users might need to change action code to support that
>>>> notion, and for some applications it might not be valid at all. I
>>>> think it was also already noted that this imposes some of the
>>>> "old-fashioned" problems on the user, like: how many concurrent
>>>> requests will my action be able to handle? That kinda defeats the
>>>> seamless-scalability point of serverless.
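A rough sketch of that toggle (annotation name and scheduler hook are
hypothetical):

    # Hypothetical sketch: per-action opt-in, defaulting to today's
    # one-request-per-container behaviour.
    from dataclasses import dataclass

    @dataclass
    class Container:
        in_flight: int = 0  # requests currently executing in this container

    def max_concurrency(action):
        # opt-in via an annotation; absent means the current default of 1
        return action.get("annotations", {}).get("max-concurrency", 1)

    def can_reuse(container, action):
        # only dispatch to a warm container below the action's limit
        return container.in_flight < max_concurrency(action)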
>>>> 
>>>> Cheers,
>>>> Markus
>>>> 
>>>> 
>>> --
>>> Regards,
>>> James Thomas
>>> 
>



