Thanks for the analysis; that could theoretically also be the case.
What confuses me is that the compilation works fine on i586. If the
problem you describe were the cause, shouldn't the OOM occur equally
on both i586 and x86_64? Or am I missing something here?
The VM's total memory usage seems to be below 1 GB when the OOM happens
(total-vm in the line below is 891536 kB, i.e. roughly 870 MB):
[ T339] Out of memory: Killed process 4105 (cc1plus) total-vm:891536kB, anon-rss:368340kB, file-rss:0kB, shmem-rss:0kB, UID:399 pgtables:1756kB oom_score_adj:0
so maybe giving the workers a bit more than 1 GB of memory is still
worth trying as a workaround?
Thanks for your time,
Simon
On 30.01.23 Olaf Hering wrote:
Sun, 29 Jan 2023 13:50:14 +0100 Simon Vogl <[email protected]>:
> failing due to out-of-memory
A slightly related anecdote:
Recently I had to deploy a VM which runs git-daemon.service.
No matter how many GB of memory I assigned to the VM, git-daemon was terminated
due to OOM.
It turned out that this drop-in for git-daemon.service was required in
this environment:
[Service]
IOSchedulingClass=idle
CPUSchedulingPolicy=batch
MemoryLow=1000M
MemoryHigh=2000M
MemoryMax=2048M
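(For reference, assuming a standard systemd setup: "systemctl edit
git-daemon.service" creates such a drop-in under
/etc/systemd/system/git-daemon.service.d/, and after "systemctl
daemon-reload" plus a restart of the service, "systemctl show -p
MemoryMax git-daemon.service" shows whether the limit took effect.)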
I think that without such resource restrictions, the process never got
ENOMEM from malloc and assumed unlimited resources were available. With
these restrictions in place, malloc will return an error, and apparently
git is smart enough to work with whatever amount of memory it is allowed
to use.
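If someone wants to observe that failure mode in isolation, here is a
minimal C sketch (my own illustration, not anything git or OBS actually
does). It caps its own address space via setrlimit(RLIMIT_AS), the same
limit that systemd's LimitAS= or the shell's "ulimit -v" impose; note
that MemoryMax= above is a cgroup limit, so I am not certain it surfaces
as ENOMEM rather than as an in-cgroup OOM kill:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the address space at 256 MiB so malloc() fails instead of
     * the kernel OOM killer stepping in (cf. LimitAS= / ulimit -v). */
    struct rlimit lim = { 256UL << 20, 256UL << 20 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    size_t total = 0;
    for (;;) {
        void *p = malloc(16UL << 20);   /* allocate 16 MiB chunks */
        if (p == NULL) {
            /* This is the error a well-behaved program can react to. */
            fprintf(stderr, "malloc failed after %zu MiB: %s\n",
                    total >> 20, strerror(errno));
            break;
        }
        memset(p, 0, 16UL << 20);       /* touch the pages */
        total += 16UL << 20;
    }
    return 0;
}

With such a cap in place, malloc starts returning NULL long before the
kernel's OOM killer would get involved, and the program can decide how
to degrade.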
Under the assumption that gcc is equally smart about dealing with
ENOMEM, someone has to work out what these systemd knobs actually do to
the spawned processes. Then the same needs to be done in the OBS build
script, so that each child process of the build script gets proper
errors from malloc.
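One simple candidate (an assumption on my side, not something I have
tested with OBS): an "ulimit -v" near the top of the build script sets
an address-space limit that every child process inherits, and that is
exactly the kind of limit malloc reports as ENOMEM.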
Good luck.
Olaf