2021-12-06 21:02:33 UTC - Ben Carver: So, it looks like the OS on my Kubernetes 
node is killing my Java processes:
```
$ dmesg | less | grep -i "out of memory"
[ 4552.610130] Memory cgroup out of memory: Kill process 186759 (java) score 2012 or sacrifice child
[10851.979067] Memory cgroup out of memory: Kill process 483710 (java) score 1998 or sacrifice child
[15302.580520] Memory cgroup out of memory: Kill process 204525 (java) score 2011 or sacrifice child
[16201.881584] Memory cgroup out of memory: Kill process 223867 (java) score 2000 or sacrifice child
```
This explains why the OW actions would exit abruptly without any errors in their 
logs, and why I'd see OW spinning up new pods immediately afterward. 
I think I have a configuration problem, but I can't find pertinent documentation.
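
In case it helps, here's roughly how I'm confirming it from the Kubernetes side (the namespace and pod name are just placeholders for my setup):
```
# Placeholder namespace/pod name for my deployment.
# Recently restarted/replaced pods hint at which action containers were killed:
kubectl -n openwhisk get pods --sort-by=.status.startTime

# For a suspect pod, the killed container shows Reason: OOMKilled under
# "Last State", and the "Limits" section shows the cgroup memory limit:
kubectl -n openwhisk describe pod <pod-name>
```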

There are several different places where memory usage can be configured. I can 
set action memory limits via `whisk.limits.actions.memory`, specify the Invoker's 
heap via `invoker.jvmHeapMB`, set per-action memory limits via the `--memory` 
parameter, and specify a `memory` parameter for custom runtimes in the 
`stemCell` configuration. Then there is `whisk.containerPool.userMemory`, which 
I am also confused about. (I have read the documentation for it, but it is still 
unclear to me how this parameter should be set.) I am not sure how exactly all 
of these fit together, but I think my issue arises from improperly configuring 
these values.
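
For concreteness, this is the kind of thing I'm setting right now. The chart name, release name, and exact key paths are just how my deployment happens to look and may differ by chart version, and the values are placeholders:
```
# Placeholder values; key paths mirror the parameters listed above
# and may not match your chart version exactly.
helm upgrade owdev openwhisk/openwhisk -n openwhisk \
  --set whisk.limits.actions.memory.max=2048m \
  --set whisk.containerPool.userMemory=4096m \
  --set-string invoker.jvmHeapMB=512 \
  --reuse-values
```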

Should the invoker's `jvmHeapMB` be the largest of all of these, or does this 
not impact individual actions? How exactly does the `stemCell`'s `memory` 
parameter impact RAM available to individual actions? (e.g., if I set the 
`stemCell`'s `memory` parameter to 512MB and pass `--memory 1024MB` to an 
action using the custom runtime, then what happens?)
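
Concretely, that last scenario looks something like this on my end (the action name and image are placeholders; `--memory` takes an integer number of MB):
```
# The custom runtime's stemCell entry is configured with memory: 512m,
# but the action itself asks for 1024 MB:
wsk action update my-java-action --docker example/my-java-runtime --memory 1024
```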
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1638824553170900