To Adrian's definition (which I also like) -- if the instance only lives
half a second, how does that fit with the autoscaling behavior you outlined
below, which I _think_ relies on multi-threaded, long-running processes?

Sent from my iPhone

On 4. Jul 2017, at 23:18, Dascalita Dragos <ddrag...@gmail.com> wrote:

>> how this approach would be different from the many IaaS- and
>> PaaS-centric
>
> I like Adrian Cockcroft's response (
> https://twitter.com/intent/like?tweet_id=736553530689998848 ) to this:
> *"...If your PaaS can efficiently start instances in 20ms that run for
> half a second, then call it serverless..."*
>
> I think none of us here imagines that we're building a PaaS experience
> for developers, nor does the current proposal intend to suggest we
> should. I also assume that none of us expects to run in production a
> scalable system with millions of concurrent users on the current setup,
> which demands an incredible amount of resources.
>
> Quoting Michael M, who said it nicely, the intent is to make "*the
> current problem ... not be a problem in reality anymore (and simply
> remain as a theoretical problem)*".
>
> I think we have a way to be pragmatic about the current limitations, make
> it so that developers don't suffer because of them, and buy ourselves
> enough time to implement the better model that serverless should use --
> one where monitoring, "crash safety", "scaling", and all of the other
> concerns listed previously in this thread are addressed better, while
> performance doesn't have to suffer as much. That is the intent of this
> proposal.
>
>
>
> On Tue, Jul 4, 2017 at 11:15 AM Michael M Behrendt <
> michaelbehre...@de.ibm.com> wrote:
>
>> Hi Dragos,
>>
>>> What stops
>>> OpenWhisk from being smart about observing the response times, CPU
>>> consumption, and memory consumption of the running containers?
>>
>> What are your thoughts on how this approach would be different from the
>> many IaaS- and PaaS-centric autoscaling solutions that have been built
>> over the last few years? All of them require relatively complex policies
>> (e.g., do we scale based on CPU or memory utilization, or on end-user
>> response time? What are the thresholds for when to add/remove
>> capacity?), and a value prop of serverless is that folks don't have to
>> care about that.
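>>
>> To make that concrete, here's a rough sketch (TypeScript, purely
>> illustrative -- none of these names exist in OpenWhisk or in any
>> provider's API) of the kind of policy surface those autoscalers ask
>> operators to fill in:
>>
>> // Illustrative only: the field names and thresholds below are invented.
>> interface AutoscalePolicy {
>>   metric: "cpu" | "memory" | "responseTimeP95";
>>   scaleUpThreshold: number;    // e.g. add capacity above 0.7 utilization
>>   scaleDownThreshold: number;  // e.g. remove capacity below 0.3
>>   cooldownSeconds: number;     // wait between decisions to avoid flapping
>>   minInstances: number;
>>   maxInstances: number;
>> }
>>
>> // One evaluation step of such a policy.
>> function desiredInstances(
>>   p: AutoscalePolicy,
>>   current: number,
>>   observed: number
>> ): number {
>>   if (observed > p.scaleUpThreshold) {
>>     return Math.min(p.maxInstances, current + 1);
>>   }
>>   if (observed < p.scaleDownThreshold) {
>>     return Math.max(p.minInstances, current - 1);
>>   }
>>   return current; // within the band: hold steady
>> }
>>
>> Every one of those knobs is something someone has to reason about and
>> tune, which is exactly the burden serverless is supposed to remove.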
>>
>> We should discuss more during the call, but I wanted to get this out as
>> food for thought.
>>
>> Sent from my iPhone
>>
>> On 4. Jul 2017, at 18:50, Dascalita Dragos <ddrag...@gmail.com> wrote:
>>
>>>> How could a developer understand how many requests per container to set
>>>
>>> James, this is a good point, along with the other points in your email.
>>>
>>> I think the developer doesn't actually need to know this info. What
>>> stops OpenWhisk from being smart about observing the response times,
>>> CPU consumption, and memory consumption of the running containers?
>>> Doing so, it could learn automatically how many concurrent requests one
>>> action can handle. It might be easier to solve this problem efficiently
>>> than the other problem, which pushes the entire system to its limits
>>> when a couple of actions get a lot of traffic.
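>>>
>>> As a purely hypothetical sketch of what that "learning" could look like
>>> (AIMD-style, in TypeScript; the thresholds and names are assumptions,
>>> not existing OpenWhisk code):
>>>
>>> class ConcurrencyLearner {
>>>   private limit = 1; // start conservative: one request per container
>>>
>>>   // Feed one observation window of metrics for a container.
>>>   observe(latencyMs: number, cpuUtil: number, memUtil: number): void {
>>>     const healthy = latencyMs < 500 && cpuUtil < 0.8 && memUtil < 0.8;
>>>     if (healthy) {
>>>       this.limit += 1; // additive increase while the action looks healthy
>>>     } else {
>>>       this.limit = Math.max(1, Math.floor(this.limit / 2)); // back off fast
>>>     }
>>>   }
>>>
>>>   currentLimit(): number {
>>>     return this.limit;
>>>   }
>>> }
>>>
>>> The point is only that the limit could be derived from observation over
>>> time rather than asked of the developer up front.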
>>>
>>>
>>>
>>>> On Mon, Jul 3, 2017 at 10:08 AM James Thomas <jthomas...@gmail.com>
>>>> wrote:
>>>>
>>>> +1 on Markus' points about "crash safety" and "scaling". I can
>>>> understand the reasons behind exploring this change, but from a
>>>> developer experience point of view it introduces a large amount of
>>>> complexity to the programming model.
>>>>
>>>> If I have a concurrent container serving 100 requests and one of the
>>>> requests triggers a fatal error, how does that affect the other
>>>> requests? Tearing down the entire runtime environment will destroy all
>>>> of those requests.
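>>>>
>>>> A toy Node.js-style illustration of that concern (TypeScript; this is
>>>> not OpenWhisk code, just a sketch of a shared runtime process):
>>>>
>>>> import * as http from "http";
>>>>
>>>> const server = http.createServer((req, res) => {
>>>>   if (req.url === "/bad") {
>>>>     // An async fault that escapes the handler's try/catch...
>>>>     setImmediate(() => {
>>>>       throw new Error("fatal error in one request");
>>>>     });
>>>>   } else {
>>>>     // ...while the other in-flight requests wait on slow work.
>>>>     setTimeout(() => res.end("ok"), 5000);
>>>>   }
>>>> });
>>>>
>>>> server.listen(8080);
>>>> // The uncaught exception terminates the whole process, so the slow
>>>> // requests never get a response -- they all go down together.
>>>>
>>>> In the current one-activation-per-container model that failure stays
>>>> isolated to a single request.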
>>>>
>>>> How could a developer understand how many requests per container to
>>>> set without a manual trial-and-error process? It also means you have
>>>> to start considering things like race conditions and other challenges
>>>> of concurrent code execution. This makes debugging and monitoring more
>>>> challenging as well.
>>>>
>>>> Looking at the other serverless providers, I've not seen this feature
>>>> requested before. Developers generally ask AWS to raise the concurrent
>>>> invocations limit for their application. This keeps the platform doing
>>>> the hard task of managing resources efficiently and allows developers
>>>> to keep using the same programming model.
>>>>
>>>>> On 2 July 2017 at 11:05, Markus Thömmes <markusthoem...@me.com>
>>>>> wrote:
>>>>>
>>>>> ...
>>>>>
>>>>> To Rodric's points, I think there are two topics to discuss:
>>>>>
>>>>> 1. The programming model: The current model encourages users to break
>>>>> their actions apart into "functions" that take payload and return
>>>>> payload. Having a deployment model like the one outlined could, as
>>>>> noted, encourage users to use OpenWhisk as a way to rapidly
>>>>> deploy/undeploy their usual webserver-based applications. The current
>>>>> model is nice in that it solves a lot of problems for the customer in
>>>>> terms of scalability and "crash safety".
>>>>>
>>>>> 2. Raw throughput of our deployment model: Setting those concerns
>>>>> aside, I think it is valid to explore concurrent invocations of
>>>>> actions on the same container. This does not necessarily mean that
>>>>> users start to deploy monolithic apps as noted above, but it certainly
>>>>> could. Keeping our JSON-in/JSON-out model, at least for now, could
>>>>> encourage users to continue to think in functions. Having a toggle per
>>>>> action, disabled by default, might be a good way to start here, since
>>>>> many users might need to change action code to support that notion,
>>>>> and for some applications it might not be valid at all. I think it was
>>>>> also already noted that this imposes some of the "old-fashioned"
>>>>> problems on the user, like: how many concurrent requests will my
>>>>> action be able to handle? That kind of defeats the seamless-scalability
>>>>> point of serverless.
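>>>>>
>>>>> One hypothetical shape for that toggle (illustrative TypeScript; these
>>>>> are not existing OpenWhisk limits or fields):
>>>>>
>>>>> interface ActionLimits {
>>>>>   memoryMB: number;
>>>>>   timeoutMs: number;
>>>>>   // proposed and off by default; undefined or 1 = today's behavior
>>>>>   maxConcurrentActivations?: number;
>>>>> }
>>>>>
>>>>> const defaults: ActionLimits = { memoryMB: 256, timeoutMs: 60000 };
>>>>>
>>>>> // A developer who has verified their action is safe to run
>>>>> // concurrently opts in explicitly, per action:
>>>>> const optedIn: ActionLimits = { ...defaults, maxConcurrentActivations: 50 };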
>>>>>
>>>>> Cheers,
>>>>> Markus
>>>>>
>>>>>
>>>> --
>>>> Regards,
>>>> James Thomas
>>>>
>>
>>
