Carlos Alexandro Becker added the comment:
Any updates?
--
Python tracker
<https://bugs.python.org/issue42411>
Carlos Alexandro Becker added the comment:
Just did more tests here:
**on my machine**:
$ docker run --name test -m 1GB fedora:33 python3 -c 'import resource; m = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read()); resource.setrlimit(resource.RLIMIT_AS, (m, m
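Expanded for readability, that one-liner presumably does the following (the command above is cut off after `(m, m`; the closing of the setrlimit call and the allocation test at the end are my reconstruction):
```
import resource

# Read this container's cgroup v1 memory limit, in bytes.
m = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read())

# Cap the process address space at the cgroup limit, so that
# over-allocation raises MemoryError instead of waking the OOM killer.
resource.setrlimit(resource.RLIMIT_AS, (m, m))

# Assumed test step: allocate past the limit and expect MemoryError.
try:
    x = bytearray(2 * m)
except MemoryError:
    print("MemoryError")
```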
Carlos Alexandro Becker added the comment:
FWIW, here, both cases:
```
❯ docker ps -a
CONTAINER ID   IMAGE              COMMAND            CREATED   STATUS   PORTS   NAMES
30fc350a8dbd   python:rc-alpine   "python -c 'x =
```
Carlos Alexandro Becker added the comment:
Maybe you're trying to allocate more memory than the host has available? I found that it raises MemoryError in those cases too (fairly easy to reproduce on Docker for Mac)...
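For example, a request far beyond the host's (or the Docker VM's) memory can fail up front; something like this hypothetical repro:
```
# Hypothetical repro: request far more memory than the host/VM has.
# Inside Docker for Mac's Linux VM this can raise MemoryError directly,
# with no RLIMIT_AS or cgroup limit involved.
x = bytearray(1 << 45)  # ~32 TiB
```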
Carlos Alexandro Becker added the comment:
The problem is that, instead of raising a MemoryError, Python tries to "go out of bounds" and allocate more memory than the cgroup allows, causing Linux to kill the process.
A workaround is to set RLIMIT_AS to the contents of
/sys/fs/cgroup/memory/memory.limit_in_bytes.
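A minimal sketch of that workaround as a reusable helper, assuming the cgroup v1 path quoted above (the cgroup v2 branch and the `cap_rlimit_to_cgroup` name are my additions):
```
import resource

def cap_rlimit_to_cgroup():
    """Set RLIMIT_AS to the container's cgroup memory limit, if one is set."""
    for path in (
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1 (this issue)
        "/sys/fs/cgroup/memory.max",                    # cgroup v2 (assumed)
    ):
        try:
            raw = open(path).read().strip()
        except FileNotFoundError:
            continue
        if raw == "max":  # cgroup v2 reports "max" when unlimited
            return
        limit = int(raw)
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
        return

cap_rlimit_to_cgroup()
```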
New submission from Carlos Alexandro Becker:
A common use case is running Python inside containers, for instance for training models and similar workloads.
The Python process sees the host's memory/CPU and ignores the container's limits, which often leads to OOMKills, for instance:
docker run -m 1G
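To illustrate the mismatch (a sketch; the `docker run` command above is truncated), compare what Python sees with what the cgroup allows inside a container started with `-m 1G`:
```
import os

# What Python (via libc) reports: the host's physical memory.
host = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

# What the container is actually allowed (cgroup v1 path used in this issue).
cgroup = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read())

print(f"host reports {host} bytes, cgroup allows {cgroup} bytes")
```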