traditional behavior than what you described.
> >
> > Dragos, did I misinterpret you?
> >
> > Thanks & best regards
> > Michael
Michael
>
> From: Rodric Rabbah <rod...@gmail.com>
> To: dev@openwhisk.apache.org
> Date: 07/06/2017 01:04 PM
> Subject: Re: Improving support for UI driven use cases
>
> The prototype PR from Tyson was based on a fixed capacity of
07/13 would work for me, though 9 am would be better.
Would that work for you?
Sent from my iPhone
> On 6. Jul 2017, at 19:05, Tyson Norris wrote:
>
>> On Jul 5, 2017, at 10:44 PM, Tyson Norris wrote:
>>
>> I meant to add: I will work
> On Jul 5, 2017, at 10:44 PM, Tyson Norris wrote:
>
> I meant to add: I will work out with Dragos a time to propose asap, and get
> back to the group so that we can negotiate a meeting time that will work for
> everyone who wants to attend in realtime.
>
From: Rodric Rabbah <rod...@gmail.com>
To: dev@openwhisk.apache.org
Date: 07/06/2017 01:04 PM
Subject: Re: Improving support for UI driven use cases
The prototype PR from Tyson was based on a fixed capacity of concurrent
activations per container. From that, I presume once the limit is reached,
the load
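A minimal sketch of the fixed-capacity idea described above, with invented names (this is not the PR's actual code): the container proxy counts in-flight activations and signals the load balancer to spill over once the cap is reached.

```javascript
// Hypothetical fixed-capacity concurrency gate, one per warm container.
// Class and method names are illustrative, not from the prototype PR.
class ConcurrencyGate {
  constructor(capacity) {
    this.capacity = capacity; // max concurrent activations per container
    this.inFlight = 0;
  }
  // Returns true if the activation may run here; false means the container
  // is at capacity and the load balancer should route elsewhere (or queue).
  tryAcquire() {
    if (this.inFlight >= this.capacity) return false;
    this.inFlight += 1;
    return true;
  }
  release() {
    this.inFlight -= 1;
  }
}

const gate = new ConcurrencyGate(2);
console.log(gate.tryAcquire()); // true  (1 in flight)
console.log(gate.tryAcquire()); // true  (2 in flight, at capacity)
console.log(gate.tryAcquire()); // false (spill over to another container)
gate.release();
console.log(gate.tryAcquire()); // true  (back under the cap)
```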
To: "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
Date: 07/05/2017 08:28 PM
Subject: Re: Improving support for UI driven use cases
Hi Michael,
To make sure we mean the same thing with the word "autoscaling" in the
context of this thread and in the context of OpenWhisk: I refer to the
(autom
Thanks everyone for the feedback.
I’d be happy to join a call -
A couple of details on the proposal that may or may not be clear:
- no changes to existing behavior without explicit adoption by the action
developer or function client (e.g. developer would have to “allow” the function
to
re value prop :-)
>>>>
>>>> I might not yet have fully gotten my head around Steve's proposal -- what
>>>> are your thoughts on how this would help avoid the reimplementation of
>>>> an autoscaling / feedback loop mechanism, as we know it from more
>>>> traditional runtime platforms?
>>>>
>>>> Thanks & best regards
>>>> Michael
>>>>
the feedback -- glad you like my stmt re value prop :-)
>>>>
>>>> I might not yet have fully gotten my head around Steve's proposal -- what
>>>> are your thoughts on how this would help avoid the reimplementation of
>>>> an autoscaling / feedback loop mechanism, as we know it from more
>>>> traditional runtime platforms?
>>>>
Thanks & best regards
Michael
From: Michael Marth <mma...@adobe.com.INVALID>
To: "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
Date: 07/05/2017 11:25 AM
Subject: Re: Improving support for UI driven use cases
Hi Michael,
Totally agree with your statement
“value prop of serverless is that folks don't have to care about that"
Again, the proposal at hand does not intend to change that at all. On the
contrary - in our mind it’s a requirement that the developer should not change
or that internals of the
To Adrian's definition (which I also like) -- if the instance only lives
half a second, how does that fit with the autoscaling behavior you outlined
below, which I _think_ relies on multi-threaded long-running processes?
Sent from my iPhone
On 4. Jul 2017, at 23:18, Dascalita Dragos wrote:
> how this approach would be different from the many IaaS- and PaaS-centric
I like Adrian Cockcroft's response (
https://twitter.com/intent/like?tweet_id=736553530689998848 ) to this:
*"...If your PaaS can efficiently start instances in 20ms that run for half
a second, then call it
Hi Dragos,
> What stops
> OpenWhisk from being smart in observing the response times, CPU consumption,
> memory consumption of the running containers?
What are your thoughts on how this approach would be different from the many
IaaS- and PaaS-centric autoscaling solutions that have been built over
Michael, +1 to how you summarized the problem.
> I’d suggest that the first step is to support “multiple heterogeneous
> resource pools”
I'd like to reinforce Stephen's idea on "multiple resource pools". We've
already been using this idea successfully in production systems in other
setups, with
> How could a developer understand how many requests per container to set
James, this is a good point, along with the other points in your email.
I think the developer doesn't actually need to know this info. What stops
OpenWhisk from being smart in observing the response times, CPU consumption,
I like that approach a lot!
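The "let the platform observe" idea above could be sketched roughly like this, with invented thresholds and names (nothing here is an OpenWhisk default): the scheduler derives the container count from measured latency and CPU rather than from a developer-supplied requests-per-container number.

```javascript
// Hypothetical autoscaling heuristic: size the container pool from observed
// metrics instead of developer configuration. Thresholds are illustrative.
function desiredContainers(metrics, current) {
  const { p95LatencyMs, cpuUtil } = metrics;
  if (p95LatencyMs > 500 || cpuUtil > 0.8) {
    return current + 1; // overloaded: scale out
  }
  if (p95LatencyMs < 100 && cpuUtil < 0.3 && current > 1) {
    return current - 1; // mostly idle: scale in, but keep one warm container
  }
  return current;       // steady state
}

console.log(desiredContainers({ p95LatencyMs: 650, cpuUtil: 0.4 }, 3)); // 4
console.log(desiredContainers({ p95LatencyMs: 80,  cpuUtil: 0.1 }, 3)); // 2
console.log(desiredContainers({ p95LatencyMs: 200, cpuUtil: 0.5 }, 3)); // 3
```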
On 04/07/17 16:05, "Stephen Fink" wrote:
>Hi all,
>
>I’ve been lurking a bit on this thread, but haven’t had time to fully digest
>all the issues.
>
>I’d suggest that the first step is to support “multiple heterogeneous resource
>pools”,
Hi all,
I’ve been lurking a bit on this thread, but haven’t had time to fully digest
all the issues.
I’d suggest that the first step is to support “multiple heterogeneous resource
pools”, where a resource pool is a set of invokers managed by a load balancer.
There are lots of reasons we may
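A rough sketch of the "multiple heterogeneous resource pools" idea, under the assumption that a pool is chosen per action (the pool names and the "allow-concurrent" annotation are invented for illustration, not OpenWhisk's API):

```javascript
// Each pool is a set of invokers behind its own load balancer; scheduling
// policy stays local to a pool. Names and keys here are hypothetical.
const pools = {
  default:    { invokers: ['invoker0', 'invoker1'] }, // one activation per container
  concurrent: { invokers: ['invoker2'] },             // concurrency-enabled containers
};

// Route an activation to a pool based on the action's (hypothetical) annotation.
function selectPool(action) {
  const wantsConcurrency = action.annotations && action.annotations['allow-concurrent'];
  return wantsConcurrency ? pools.concurrent : pools.default;
}

console.log(selectPool({ name: 'ui-render', annotations: { 'allow-concurrent': true } }).invokers);
// ['invoker2']
```

The point of the sketch is that opting an action into a different execution model becomes a routing decision, leaving the default pool's behavior untouched.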
Hi Jeremias, all,
Tyson and Dragos are travelling this week, so I don't know when they will
get to respond. I have worked with them on this topic, so let me jump in and
comment until they are able to reply.
From my POV having a call like you suggest is a really good idea. Let’s wait
for
+1 on Markus' points about "crash safety" and "scaling". I can understand
the reasons behind exploring this change but from a developer experience
point of view this introduces a large amount of complexity to the
programming model.
If I have a concurrent container serving 100 requests and
Right, I think the UI workflows are just an example of apps that are latency
sensitive in general.
I had a discussion with Stephen Fink on the matter of us detecting that
an action is latency sensitive by using the blocking parameter or, as mentioned,
the user's configuration in terms of
The thoughts I shared around how to realize better packing with intrinsic
actions are aligned with your goals: getting more compute density with a
smaller number of machines. This is a very worthwhile goal.
I noted earlier that packing more activations into a single container warrants
a
the solution
> should address *any* (or as many as possible of) such applications.
>
> Regards,
> Alex
>
> From: Tyson Norris <tnor...@adobe.com.INVALID>
> To: "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
> Date: 02/07/2017 01:35 AM
> Subject: Re: Improving support for UI driven use cases
> On Jul 1, 2017, at 2:07 PM, Alex Glikson <glik...@il.ibm.com> wrote:
>
>> a burst of users will quickly exhaust the system, which is only fine for
>> event handling cases, and not fine at
> I'm not sure how you would split out these network vs compute items without
> action devs taking that responsibility (and not using libraries) or how it
> would be done generically across runtimes.
You don't think this is already happening? When you use promises and chain
promises together,
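The point being made above can be sketched in a few lines, assuming a stand-in for the network call (`fetchFn` and the path are invented for illustration): in a Node.js action, a promise chain already separates the network wait from the compute, so the event loop is free to serve other concurrent requests while I/O is pending.

```javascript
// Network and compute are split by the promise chain itself: while the
// injected fetchFn is pending, the event loop can run other requests.
function handleRequest(fetchFn) {
  return fetchFn('/api/data')         // network wait: event loop stays free
    .then(body => body.length * 2);   // compute: short CPU burst at the end
}

// Simulate three concurrent requests against a stubbed network call.
const stubFetch = () => Promise.resolve('abcd');
Promise.all([1, 2, 3].map(() => handleRequest(stubFetch)))
  .then(results => console.log(results)); // [ 8, 8, 8 ]
```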
On Jul 1, 2017, at 3:31 PM, Rodric Rabbah wrote:
>> I’m not sure it would be worth it to force developers down a path of
>> configuring actions based on the network ops of the code within, compared to
>> simply allowing concurrency.
>
> I think it will happen naturally:
> On Jul 1, 2017, at 2:07 PM, Alex Glikson wrote:
>
>> a burst of users will quickly exhaust the system, which is only fine for
>> event handling cases, and not fine at all for UI use cases.
>
> Can you explain why it is fine for event handling cases?
> I would assume that
> I’m not sure it would be worth it to force developers down a path of
> configuring actions based on the network ops of the code within, compared to
> simply allowing concurrency.
I think it will happen naturally: imagine a sequence where the first operation
is a request and the rest is the
> it is quite common, for example to run nodejs applications that happily serve
> hundreds or thousands of concurrent users,
I can see opportunities for treating certain actions as intrinsic for which
this kind of gain can be realized. Specifically actions which are performing
network
> the concurrency issue is currently entangled with the controller
> discussion, because sequential processing is enforced
how so? if you invoke N actions they don't run sequentially - each is its
own activation, unless you actually invoke a sequence. Can you clarify
this point?
-r
to sizing, scaling and
>> fragmentation of resources - nicely avoided with single-tasked containers?
>> Also, I wonder what would be the main motivation to implement such a
>> policy compared to just having a number of hot containers, ready to
>> process incoming requests?
>>
> From: Rodric Rabbah <rod...@gmail.com<mailto:rod...@gmail.com>>
> To: dev@openwhisk.apache.org<mailto:dev@openwhisk.apache.org>
> Cc: Dragos Dascalita Haut <ddas...@adobe.com<mailto:ddas...@adobe.com>>
> Date: 01/07/2017 06:56 PM
> Subject: Re: Improving support for UI driven use cases
From: Rodric Rabbah <rod...@gmail.com<mailto:rod...@gmail.com>>
To: dev@openwhisk.apache.org<mailto:dev@openwhisk.apache.org>
Cc: Dragos Dascalita Haut <ddas...@adobe.com<mailto:ddas...@adobe.com>>
Date: 01/07/2017 06:56 PM
Subject: Re: Improving support for UI driven use cases
Summarizing the wiki notes:
1. separate control and data plane so that data plane is routed directly to
the container
2. desire multiple concurrent function activations in the same container
On 1, I think this is inline with an outstanding desire and corresponding
issues to take the data flow
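Wiki note 1 could look roughly like this inside the container, under the assumption that the container dispatches data-plane requests itself (the /run route and response shape are invented for illustration, not OpenWhisk's actual container API):

```javascript
// Once the data plane is routed directly to the container, the container
// must dispatch requests to the user function with no controller hop.
// Route name and response shape are hypothetical.
function makeDataPlaneHandler(userFunction) {
  return (url, params) => {
    if (url !== '/run') {
      return { status: 404, body: null }; // control plane owns other routes
    }
    return { status: 200, body: userFunction(params) }; // direct invocation
  };
}

const handle = makeDataPlaneHandler(params => ({ greeting: 'hello ' + params.name }));
console.log(handle('/run', { name: 'dev' }));
// { status: 200, body: { greeting: 'hello dev' } }
```

Note 2 (multiple concurrent activations) then amounts to allowing this handler to be entered concurrently rather than serializing requests per container.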