Hi Tzu, 

You are right about Greengrass. AFAIK, they are not using Docker in their 
solution. This does bring some limitations: e.g., they run Python 
lambdas in Greengrass, while OpenWhisk at the edge will be able to run any 
container, just as it does in the cloud, which makes it polyglot. 

Azure Functions on IoT Edge uses containers. So, the approaches differ :) 
In general, I agree: containers are there for isolation. If the edge is 
viewed as a cloud extension, then a typical use case might be migrating a 
user's containers from the cloud to the edge, for example to save 
bandwidth. This includes migrating a serverless workload to the edge more 
or less as is (see the small sketch below). So, at the moment we just want 
to lay a first brick to enable this. 
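
To make this concrete, here is a minimal sketch of what such a workload 
could look like as an OpenWhisk action (Python here, though the polyglot 
point above means any language packaged as a container would do). The 
action and its parameter names are hypothetical; the point is only that 
the same code could be deployed unchanged against the cloud or against an 
OpenWhisk running at the edge.

# edge_filter.py -- hypothetical action that pre-filters sensor readings
# at the edge so that only anomalies are shipped upstream (saving bandwidth)
def main(args):
    reading = args.get("reading", 0)
    threshold = args.get("threshold", 100)
    if reading > threshold:
        # only anomalous readings would be forwarded to the cloud
        return {"forward": True, "reading": reading}
    return {"forward": False}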

Concerning the cold start, I agree that this is a problem, and it's more 
pronounced at the edge than in the cloud. But if we ignore this problem 
for a moment, we still get two of the three benefits that you emphasize: 
autonomy and lower bandwidth, just by allowing OW to run at the edge.
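
Just to put the latency point in perspective, here is a back-of-envelope 
sketch (the 100-200 ms RTT is the figure from your message below; the 
cold-start and warm-invocation numbers are my assumptions, not 
measurements):

# latency_sketch.py -- illustrative numbers only, not measurements
rtt_to_cloud_ms = 150       # device-to-cloud round trip (~100-200 ms, per your note)
edge_cold_start_ms = 1000   # assumed cost of creating a container at the edge
edge_warm_invoke_ms = 10    # assumed cost of reusing an already warm container

print("invoke in cloud      :", rtt_to_cloud_ms, "ms")
print("edge, cold container :", edge_cold_start_ms, "ms  (latency benefit lost)")
print("edge, warm container :", edge_warm_invoke_ms, "ms  (latency benefit kept)")

So the latency benefit only materializes for warm invocations, while 
autonomy and bandwidth savings hold either way.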

I agree that considering alternatives to containers when putting 
serverless at the edge makes a lot of sense in the long run (or maybe even 
the medium term), and I will be happy to discuss this.

Cheers.
 

-- david 




From:   TzuChiao Yeh <su3g4284zo...@gmail.com>
To:     dev@openwhisk.apache.org
Date:   17/07/2018 05:49 PM
Subject:        Re: Proposing Lean OpenWhisk



Hi David,

Looks cool! Glad to see OpenWhisk stepping toward the edge use case.

Simple question: have you considered removing Docker containers entirely
(giving up isolation)?

Since it's closed-source, I'm not sure how AWS Greengrass does it, but it
seems that Docker is not installed at all.

Edge computing offers several advantages:
1. bandwidth reduction.
2. lower latency.
3. offline computing capability (not for all scenarios, but this is indeed
what AWS Greengrass claims).

We can first set aside the use cases that require ultra-low latency (e.g.,
interactive AR/VR, speech and language translation). But even for general
use cases, the cold-start problem in serverless undermines the low-latency
benefit: the RTT from device to cloud is only about 100-200 ms, while
container creation/deletion takes much longer. Besides this, (some) edge
devices are not provided as an IaaS service, so we may not need
multi-tenancy at all, or could accept weaker isolation. What do you think?

Thanks,
Tzu-Chiao Yeh (@tz70s)


On Tue, Jul 17, 2018 at 9:43 PM David Breitgand <davi...@il.ibm.com> 
wrote:

> Sure. Will do directly on Wiki.
> Cheers.
>
> -- david
>
>
>
>
> From:   "Markus Thoemmes" <markus.thoem...@de.ibm.com>
> To:     dev@openwhisk.apache.org
> Date:   17/07/2018 04:31 PM
> Subject:        Re: Proposing Lean OpenWhisk
>
>
>
> Hi David,
>
> I absolutely agree, this should not be held back. It'd be great if you
> could chime in on the discussion I opened on the new proposal regarding
> your use-case though. It might be nice to verify that a topology similar
> to the one you are proposing is still implementable, or maybe even easier
> to implement, when moving to a new architecture, just so we have all the
> requirements on the table.
>
> I agree it's entirely orthogonal though and your proposal can be
> implemented/merged independent of that.
>
> Cheers,
> Markus
>
>
>
>
>
>



