2019-07-22 05:51:15 UTC - chris: Hi guys, I packaged the files (`__main__.py` and 
xxx.py) into a zip file. `action` and `invoke` both print `ok`.
However, when I run `activation get`, the log shows the error `ModuleNotFoundError: 
No module named 'keras'`. Does anybody know why?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563774675008600?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
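For context, an OpenWhisk Python zip action is expected to expose a `main(args)` function in `__main__.py`; a minimal sketch of that entry point (the greeting logic is illustrative, not chris's actual code) looks like this:

```python
# __main__.py -- the OpenWhisk Python runtime calls main(args) for a zip action.
def main(args):
    # args is the dict of invocation parameters; the returned dict becomes
    # the activation result shown by `wsk activation get`.
    name = args.get("name", "world")
    return {"greeting": "Hello " + name}
```

Any third-party import such as `keras` must also be shipped inside the zip (for example via a bundled virtualenv), which is what the rest of this thread works through.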
2019-07-22 05:53:40 UTC - Satwik Kolhe: See if this blog post by @James Thomas 
- <http://jamesthom.as/blog/2017/04/27/python-packages-in-openwhisk/> - helps 
you package your python function
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563774820008900?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 05:56:16 UTC - chris: Thanks for your reply.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563774976009200?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 06:08:05 UTC - Roberto Santiago: @chris, took me a bit to figure out 
how to package up python actions with `virtualenv` and `zip`.  James' tutorial 
is great.  Let me know if you hit any roadblocks.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563775685009600?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 06:11:44 UTC - chris: @Roberto Santiago OK!! Thank you... you guys 
are really awesome :heart_eyes:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563775904009800?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:12:08 UTC - chris: Finally... error again :fearful:
The log shows `ModuleNotFoundError: No module named 'pyjokes'`...
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563783128010000?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:13:03 UTC - chris: But I did install it in the `virtualenv`
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563783183010200?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:13:35 UTC - chris: 
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563783215010400?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:14:04 UTC - chris: 
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563783244010800?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:15:48 UTC - chris: 
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563783348011600?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:17:04 UTC - chris: Does anyone know why it doesn't work for 
me?!! :dizzy_face:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563783424012000?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
2019-07-22 08:51:27 UTC - Satwik Kolhe: Try this, changing the values according 
to your environment:

`zip -r jokes.zip venv/bin/activate_this main.py venv/<path to site packages>/jokes`

The `activate_this` file is important
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563785487012200?thread_ts=1563774675.008600&cid=C3TPCAQG1
----
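Following up on that: the packaging approach from James' post relies on the bundled virtualenv being activated before the third-party imports run. A hedged sketch of doing that manually at the top of `__main__.py`, assuming the zip layout from the command above (a `venv/` directory created with the `virtualenv` tool, which ships `activate_this.py`; the stdlib `venv` module does not):

```python
# __main__.py -- sketch: activate a virtualenv bundled inside the action zip.
import os

# Path assumes the zip contains venv/bin/activate_this.py, as in the zip command above.
ACTIVATE = os.path.join(os.path.dirname(__file__), "venv", "bin", "activate_this.py")
with open(ACTIVATE) as f:
    # Executing activate_this.py points sys.path at the bundled site-packages,
    # so imports like pyjokes (or keras) resolve inside the action container.
    exec(f.read(), {"__file__": ACTIVATE})

import pyjokes  # now found in the bundled virtualenv


def main(args):
    return {"joke": pyjokes.get_joke()}
```

If the virtualenv directory in the zip is named `virtualenv`, the OpenWhisk Python runtime may activate it for you, per James' blog post above; the manual exec is the explicit fallback for other layouts.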
2019-07-22 09:25:14 UTC - Pepi Paraskevoulakou: Hello, anyone experienced with 
testing:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563787514012700
----
2019-07-22 09:25:17 UTC - Pepi Paraskevoulakou: WARNING: The 
DOCKER_COMPOSE_HOST variable is not set. Defaulting to a blank string.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563787517012900
----
2019-07-22 09:25:33 UTC - Pepi Paraskevoulakou: Do I need to do something specific?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563787533013300
----
2019-07-22 10:26:52 UTC - Michael Schmidt: ^This is a lack of experience with 
Docker Compose; I'd read up on Docker Compose itself.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563791212013900
----
2019-07-22 11:33:56 UTC - Roberto Santiago: Looking for some testing advice.  I 
am having success with using triggers to orchestrate actions to fulfill 
functional requirements.  While I am testing the actions individually (i.e. 
unit testing), I am wondering how to go about testing the orchestration of 
those actions, sort of like an integration test.  Any thoughts?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563795236016300?thread_ts=1563795236.016300&cid=C3TPCAQG1
----
2019-07-22 11:49:54 UTC - Satwik Kolhe: How are triggers different from APIs 
exposed using openwhisk/apigateway ? Except that triggers just invoke a 
function without returning the actual result!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563796194016400?thread_ts=1563795236.016300&cid=C3TPCAQG1
----
2019-07-22 12:08:42 UTC - Roberto Santiago: Triggers are like events.  Many 
rules can be attached to a single trigger.  Each of these rules invokes an 
action.  But all of those invocations are independent of one another.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563797322016600?thread_ts=1563795236.016300&cid=C3TPCAQG1
----
2019-07-22 12:29:04 UTC - James Thomas: If you expose those actions as an API - 
I’d write e2e tests with those API endpoints.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563798544016800?thread_ts=1563795236.016300&cid=C3TPCAQG1
----
2019-07-22 12:29:22 UTC - James Thomas: otherwise, write e2e tests: set up 
and fix up the test environment, and wait for the results
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563798562017000?thread_ts=1563795236.016300&cid=C3TPCAQG1
----
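To make the e2e idea concrete, here is a hedged sketch (Python with `requests` against the standard OpenWhisk REST API) of testing the orchestration end to end: fire the trigger, then poll activations until the downstream action has run. The host, auth key, and trigger/action names are placeholders, not anything from this thread:

```python
# Integration-test sketch for trigger-driven orchestration via the OpenWhisk REST API.
import time
import requests

APIHOST = "https://openwhisk.example.com"   # hypothetical API host
AUTH = tuple("uuid:key".split(":", 1))      # wsk auth key split into (user, pass) for basic auth
NS = "_"                                    # default namespace


def fire_trigger(name, payload):
    # POSTing to a trigger fires it; any rules attached to it then invoke their actions.
    r = requests.post(f"{APIHOST}/api/v1/namespaces/{NS}/triggers/{name}",
                      json=payload, auth=AUTH)
    r.raise_for_status()


def wait_for_activation(action_name, since_ms, attempts=30):
    # Poll the activation list, filtered by action name and start time.
    for _ in range(attempts):
        r = requests.get(f"{APIHOST}/api/v1/namespaces/{NS}/activations",
                         params={"name": action_name, "since": since_ms, "limit": 1},
                         auth=AUTH)
        r.raise_for_status()
        items = r.json()
        if items:
            return items[0]
        time.sleep(1)
    raise AssertionError(f"no activation of {action_name} observed")


since = int(time.time() * 1000)
fire_trigger("order-created", {"orderId": "42"})   # hypothetical trigger
act = wait_for_activation("process-order", since)  # hypothetical downstream action
assert act.get("statusCode", 0) == 0, "downstream action did not succeed"
```

The same skeleton works for James' first suggestion: swap the trigger POST for a call to the exposed API endpoint and assert directly on the HTTP response.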
2019-07-22 16:31:12 UTC - Jona: Hey there ! I just discovered Apache OpenWhisk, 
and stumbled upon this on the homepage:
```
Scaling Per-Request & Optimal Utilization
Run your action ten thousand times in a fraction of a second, or once a week. 
Action instances scale to meet demand as needed, then disappear. Enjoy optimal 
utilization where you don't pay for idle resources.
```
I wonder if someone could give more details on the kind of (hardware) setup 
needed to achieve *per-request scaling* while *not paying for idle resources*? 
I can in fact imagine installing OpenWhisk on an Amazon EC2 instance, but I 
would still need to pay for idle resources... am I missing something? 
:slightly_smiling_face:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563813072022500
----
2019-07-22 16:58:16 UTC - Michael Schmidt: You would probably use OpenWhisk as 
an on-prem solution, or OpenWhisk-as-a-service
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814696022800
----
2019-07-22 16:58:37 UTC - Michael Schmidt: AWS Lambda exists, but I haven't 
heard tons of people crazy about it...
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814717023300
----
2019-07-22 16:58:51 UTC - Michael Schmidt: that would be your AWS 
get-charged-by-the-function solution though...
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814731023700
----
2019-07-22 16:59:07 UTC - Michael Schmidt: you probably won't put OpenWhisk on 
AWS instances imo
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814747024300
----
2019-07-22 17:02:02 UTC - James Thomas: <vendor-plug>If you want to just 
use “openwhisk-as-a-service” with those features check out IBM Cloud 
Functions</vendor-plug>
whisking : Upkar Lidder, Michael Schmidt
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814922025000
----
2019-07-22 17:02:38 UTC - Michael Schmidt: lol
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814958025400
----
2019-07-22 17:02:43 UTC - Michael Schmidt: this too
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563814963025600
----
2019-07-22 17:39:02 UTC - Jona: AHahahah
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817142026100
----
2019-07-22 17:40:06 UTC - Jona: Were the homepage claims actually a 
<vendor-plug></vendor-plug> thing?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817206027000
----
2019-07-22 17:40:37 UTC - Jona: AWS EC2 was just an illustration
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817237027600
----
2019-07-22 17:40:39 UTC - Rodric Rabbah: hi Jona - welcome to the community
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817239027800
----
2019-07-22 17:41:10 UTC - Rodric Rabbah: if you’re paying the bill to operate 
the openwhisk deployment, you will have a fixed cost for the control/data 
plane… and depending on setup, a cost for the resources to run containers
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817270029000
----
2019-07-22 17:41:22 UTC - Jona: What I don't understand is how I can install 
OpenWhisk [somewhere] and not pay for idle resources
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817282029600
----
2019-07-22 17:42:22 UTC - Rodric Rabbah: in that sense, we might need to 
clarify the docs… you have to run at least an edge server and a resource 
manager so the costs for a self hosted instance cannot be zero
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817342031100
----
2019-07-22 17:43:03 UTC - Jona: Yeah I totally get the "master" node needing to 
operate 24/7, but my question was more toward subsequent containers 
:slightly_smiling_face:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817383031900
----
2019-07-22 17:43:45 UTC - Jona: Is there any infrastructure/service able to 
instantly scale (and remove) containers in under 125ms?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817425033300
----
2019-07-22 17:43:46 UTC - Rodric Rabbah: the current project’s capabilities fit 
into one of two modes: 1. deploy a fixed capacity, 2. manually add/remove 
capacity.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817426033400
----
2019-07-22 17:44:17 UTC - Rodric Rabbah: by fixed capacity i mean the number of 
VMs for running user containers; a VM may be able to run, say, up to 16 containers 
(at the same time)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817457034000
----
2019-07-22 17:45:20 UTC - Jona: Sorry, I am quite new to serverless and FaaS in 
general, but how is this different from Kubernetes?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817520034800
----
2019-07-22 17:45:55 UTC - Jona: From what I understand, k8s does this management 
out of the box, right?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817555035700
----
2019-07-22 17:46:26 UTC - Rodric Rabbah: kubernetes is a general orchestrator 
for containers
openwhisk’s control and data plane can be deployed on top of kubernetes
but openwhisk does not use kubernetes in production settings for managing the 
user’s function containers because it’s too slow for short running functions
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817586037000
----
2019-07-22 17:46:47 UTC - Rodric Rabbah: some projects like google’s knative 
are trying to unify kube and serverless and faas
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817607037600
----
2019-07-22 17:47:09 UTC - Jona: *because it’s too slow for short running 
functions* You got my attention.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817629038300
----
2019-07-22 17:47:28 UTC - Rodric Rabbah: others here who work on knative and 
openwhisk can tell you more/better
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817648038700
----
2019-07-22 17:47:58 UTC - Jona: Would openwhisk help provision resources 
faster, then?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817678039100
----
2019-07-22 17:48:05 UTC - Rodric Rabbah: _it does_
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817685039300
----
2019-07-22 17:48:28 UTC - Jona: Hmmm
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817708039600
----
2019-07-22 17:48:44 UTC - Michael Behrendt: yep, it bypasses the kube scheduler 
and replaces it with a daemonset running on each node
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817724040300
----
2019-07-22 17:48:47 UTC - Jona: But only on-premise then ?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817727040400
----
2019-07-22 17:49:57 UTC - Jona: What I don't understand is: with actual 
state-of-the-art IaaS cloud providers, openwhisk might not help due to 
limitations of the underlying platforms. Am I correct?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817797041700
----
2019-07-22 17:50:47 UTC - Jona: For example: on AWS, even using openwhisk on 
EKS would not help me reduce resource provisioning latency
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817847043100
----
2019-07-22 17:50:54 UTC - Rodric Rabbah: i don’t understand the q
there are existing production deployments of openwhisk on kube which deliver 
< 10ms time to provision a container for a function
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817854043300
----
2019-07-22 17:51:21 UTC - Rodric Rabbah: if you deploy on VMs or mesos or 
some other platform, the numbers might vary
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817881044000
----
2019-07-22 17:51:31 UTC - Rodric Rabbah: i guess we should ask, what are you 
trying to do?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817891044500
----
2019-07-22 17:52:06 UTC - Jona: Yeah, sorry, my goal is to understand under 
what circumstances I can comply with the homepage claim: "don't pay for  idle 
resources"
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817926045600
----
2019-07-22 17:52:36 UTC - Jona: I would like to get around AWS Lambda 
limitations
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817956046000
----
2019-07-22 17:52:54 UTC - Jona: Like the 15-minute timeout
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817974046400
----
2019-07-22 17:53:06 UTC - Jona: But also use of GPU
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817986046700
----
2019-07-22 17:53:14 UTC - Jona: which is not offered so far
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563817994046900
----
2019-07-22 17:53:33 UTC - Jona: I am basically trying all FaaS solutions on 
earth
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818013047600
----
2019-07-22 17:53:35 UTC - Jona: ahahaha
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818015047800
----
2019-07-22 17:53:49 UTC - Jona: And came across openwhisk a few hours ago
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818029048200
----
2019-07-22 17:54:29 UTC - Jona: All FaaS are quite cool, except... I have to 
size my servers AND PAY FOR IDLE RESOURCES
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818069049100
----
2019-07-22 17:55:07 UTC - Jona: The really good side of AWS Lambda is that it 
just works and scales seamlessly from 0
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818107049900
----
2019-07-22 17:55:42 UTC - Jona: And this is what attracted me here: "don't pay 
for idle resources [as much as possible]"
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818142050800
----
2019-07-22 17:55:59 UTC - Jona: (GPU is really expensive)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818159051600
----
2019-07-22 17:56:05 UTC - Rodric Rabbah: we should soften that claim on the 
website… if you’re operating your own instance, you will incur costs.

openwhisk does not have a built-in VM/infra scaler in the project yet

this may be useful reading 
<https://medium.com/openwhisk/the-serverless-contract-44329fab10fb>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818165051900
----
2019-07-22 17:56:10 UTC - Michael Schmidt: If anyone has any docs on the 
knative vs openwhisk stuff @Rodric Rabbah is talking about, that'd be cool
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818170052200?thread_ts=1563818170.052200&cid=C3TPCAQG1
----
2019-07-22 17:56:27 UTC - Rodric Rabbah: @Markus Thömmes might be a good 
reference.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818187052500?thread_ts=1563818170.052200&cid=C3TPCAQG1
----
2019-07-22 17:56:32 UTC - Michael Schmidt: with the trade-off of running just 
a docker container vs the functions on k8s
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818192052800
----
2019-07-22 17:57:36 UTC - Jona: Like I said, I get that one should pay for 
orchestrator's resources (like an ELB)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818256053500
----
2019-07-22 17:59:19 UTC - Jona: My point is more like: what would be the time 
to fire a new container to compute my job ? and most importantly, how (which 
provider/setup) ?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818359055400
----
2019-07-22 18:01:05 UTC - Jona: If you tell me that openwhisk can provision 100 
instances based on 100 API requests in say 100ms, then execute the job... I'm 
in !!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818465057200
----
2019-07-22 18:02:05 UTC - Jona: The minimum I could get so far was with Amazon 
Fargate, and it provisioned 2 instances in about 1 minute and 30 sec...
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818525058500?thread_ts=1563818525.058500&cid=C3TPCAQG1
----
2019-07-22 18:02:21 UTC - Rodric Rabbah: if you want an experience and 
capabilities as close to aws lambda as possible, i think openwhisk is your answer. 
to get that kind of performance you have to provision your deployment accordingly, 
but certainly ibm’s offering can do that; @Michael Behrendt can say more
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818541058800
----
2019-07-22 18:03:47 UTC - Jona: Hmmm so... let's put it that way: how is IBM 
serving Fn ? Are they actually paying for idle resources ?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818627059900
----
2019-07-22 18:04:40 UTC - Jona: Also, I didn't see GPU on IBM offering 
:sweat_smile:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818680060500
----
2019-07-22 18:04:52 UTC - Michael Behrendt: a recent test created 1000 
instances in 3 seconds, for example
yay : Rodric Rabbah, Jona
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818692060900
----
2019-07-22 18:05:30 UTC - Jona: Nice. Using IBM ?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818730061500
----
2019-07-22 18:05:37 UTC - Michael Behrendt: yep
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818737061700
----
2019-07-22 18:07:11 UTC - Michael Behrendt: of course, just as a data point -- 
depending on what you optimize for, you can even drive it further
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818831062500
----
2019-07-22 18:08:26 UTC - Michael Behrendt: you mean container instances or 
VMs? 1m30s for 2 instances sounds like VMs -- is that correct?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563818906063800?thread_ts=1563818525.058500&cid=C3TPCAQG1
----
2019-07-22 18:10:33 UTC - Jona: What I don't get is: say I open an IBM account 
right now... and push my `helloworld.go` to their FaaS service. Whenever a user 
requests 
"<http://ibm.endpoint.com/my/helloworld|ibm.endpoint.com/my/helloworld>", do 
they actually PROVISION a new resource on the go? Or are they actually load 
balancing, redirecting the user to an already running resource?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819033066400?thread_ts=1563819033.066400&cid=C3TPCAQG1
----
2019-07-22 18:11:08 UTC - Michael Behrendt: re GPUs -- which use case are you 
shooting for (just out of curiosity)? Many of the common use cases i'm seeing 
need multiple GPUs vs just a fraction of one. What's your take on that?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819068067100
----
2019-07-22 18:11:33 UTC - Jona: Computer vision
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819093067400
----
2019-07-22 18:11:45 UTC - Jona: For inference
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819105067600
----
2019-07-22 18:12:11 UTC - Michael Behrendt: you mean RPA, for instance?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819131067800
----
2019-07-22 18:12:19 UTC - Michael Behrendt: or just vision in general?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819139068100
----
2019-07-22 18:12:53 UTC - Jona: No, just inference of computer vision models
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819173068600
----
2019-07-22 18:12:58 UTC - Michael Behrendt: there is no provisioning for each 
and every request. the system is trying to reuse container instances as much as 
possible, for security reasons only within the context of a given user. also, it 
pre-provisions containers with runtimes, to have them ready to go when a request 
comes in
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819178068800?thread_ts=1563819033.066400&cid=C3TPCAQG1
----
2019-07-22 18:13:19 UTC - Jona: But rather advanced ones involving heavy 
computation use
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819199069300
----
2019-07-22 18:13:39 UTC - Jona: CPU would take 1 or 2 minutes to answer
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819219069600
----
2019-07-22 18:14:01 UTC - Jona: GPU takes <1s
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819241070100
----
2019-07-22 18:14:06 UTC - Michael Behrendt: however, when there are 1000 
concurrent requests to be processed, 1000 container instances will be running 
in the extreme case
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819246070200?thread_ts=1563819033.066400&cid=C3TPCAQG1
----
2019-07-22 18:14:28 UTC - Michael Behrendt: cool -- yep, that definitely makes 
sense
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819268070700
----
2019-07-22 18:15:09 UTC - Jona: This is why I crave to find "not pay for your 
idle resources"
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819309071200
----
2019-07-22 18:15:21 UTC - Jona: idle GPUs cost like WAY too much
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819321071500
----
2019-07-22 18:16:20 UTC - Jona: I have yet to find a solution to provision GPU 
instances under 100ms
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819380072400
----
2019-07-22 18:16:29 UTC - Jona: I thought OpenWhisk might help 
:slightly_smiling_face:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819389072700
----
2019-07-22 18:17:02 UTC - Michael Behrendt: yep, it could be what you're 
looking for. Iirc, there were some folks in south korea who had implemented gpu 
support for openwhisk
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819422073400
----
2019-07-22 18:17:11 UTC - Michael Behrendt: let me see if i can still find that
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819431073700
----
2019-07-22 18:17:26 UTC - Jona: That would be awesome !
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819446074000
----
2019-07-22 18:24:21 UTC - Michael Behrendt: i haven't exactly found what i was 
looking for, but as part of that i stumbled across this.... would this be relevant 
to you?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819861074800
----
2019-07-22 18:24:21 UTC - Michael Behrendt: 
<https://github.com/5g-media/incubator-openwhisk-runtime-cuda>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819861074900
----
2019-07-22 18:26:57 UTC - Jona: Totally !!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820017075200
----
2019-07-22 18:27:02 UTC - Jona: Thank you !
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820022075400
----
2019-07-22 18:30:13 UTC - Jona: However, sorry to insist on this, but: I 
understand openwhisk might create containers in under 100ms, but only WITHIN AN 
ALREADY RUNNING environment. How can I scale RESOURCES (hardware/machines) from 0 
to 1 in under 100ms? Is that even feasible?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820213078700
----
2019-07-22 18:31:18 UTC - Jona: Or do you guys use a specific provider that 
bills per-container usage or something like that?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820278079700
----
2019-07-22 18:32:04 UTC - Michael Behrendt: this is what i responded in a 
sub-thread -- does that address what you're looking for?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820324080300
----
2019-07-22 18:32:05 UTC - Michael Behrendt: 
<https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563819178068800?thread_ts=1563819033.066400&cid=C3TPCAQG1>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820325080400
----
2019-07-22 18:32:30 UTC - Michael Behrendt: you can't scale machines in less 
than 100ms
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820350080800
----
2019-07-22 18:32:45 UTC - Michael Behrendt: at least to my knowledge
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820365081100
----
2019-07-22 18:32:57 UTC - Michael Behrendt: there are some techniques that 
come close to that
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820377081600
----
2019-07-22 18:33:09 UTC - Michael Behrendt: however, they in turn rely on 
baremetal machines being available
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820389082000
----
2019-07-22 18:33:17 UTC - Michael Behrendt: so you'd have to pay for them
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820397082500
----
2019-07-22 18:33:24 UTC - Michael Behrendt: ie you wouldn't win anything
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820404082800
----
2019-07-22 18:33:49 UTC - Jona: Yeah that was what I was thinking
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820429083000
----
2019-07-22 18:34:24 UTC - Michael Behrendt: the best way to solve this is by 
economies of scale and the law of large numbers.... but at a certain point, there 
is no magic
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820464083900
----
2019-07-22 18:35:23 UTC - Jona: But as IBM offers FaaS, I was hoping that maybe 
some other service could bill per GB-s usage...
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820523085000
----
2019-07-22 18:35:56 UTC - Jona: But hey, I can dream :slightly_smiling_face:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820556085500
----
2019-07-22 18:36:27 UTC - Jona: Anyway thank you for all your replies, I'll try 
my way with OpenWhisk CUDA
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563820587086200
----
2019-07-22 21:12:45 UTC - Michael Behrendt: :slightly_smiling_face: more than 
happy to help. if you have any follow-up questions, pls don't hesitate to reach 
out. Would be great if you could share your experience with CUDA @Jona
+1 : Jona
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563829965087400
----
