Hi Randy,
On 2018-09-17 06:25, Randy MacLeod wrote:
On 09/16/2018 04:40 PM, Peter Bergin wrote:
Hi,
during the task do_package_write_rpm I get the error "liblzma: Memory
allocation failed". It happens during packaging of binary RPM
packages. The root cause seems to be the host environment used in our
project. We run our builds on a big server with 32 cores and 256GB of
physical RAM, but each user is limited to 32GB of virtual memory
(ulimit -v). The packaging in rpm-native was parallelized in the commit
http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/recipes-devtools/rpm?id=84e0bb8d936f1b9094c9d5a92825e9d22e1bc7e3.
What seems to happen is that rpm-native starts 32 parallel tasks with
'#pragma omp', and each task uses liblzma, which in turn starts 32
threads of its own for
That's OpenMP, right? I haven't played with that at all, but
it looks like you can limit the number of threads using an
environment variable:
OMP_NUM_THREADS num
https://www.openmp.org/wp-content/uploads/OpenMP3.0-SummarySpec.pdf
Doing that would be a little ugly, but for now at least there don't
seem to be that many packages using such a pragma.
Does that work for your case?
Yes, it's OpenMP. I tried '#pragma omp parallel num_threads(4)' and it
worked as a workaround: on the failing server the build succeeded. The
problem is getting a generic solution based on the host settings,
because the #pragma is a compiler directive. But we can certainly make
a bbappend for this to get it working on our host.
the compression work. The memory calculations in liblzma are based on
the amount of physical RAM, but as the user is limited by 'ulimit -v'
we run into an OOM situation in liblzma.
Here is the code snippet from rpm-native/build/pack.c where it happens:
#pragma omp parallel
#pragma omp single
// re-declaring task variable is necessary, or older gcc versions
// will produce code that segfaults
for (struct binaryPackageTaskData *task = tasks; task != NULL;
     task = task->next) {
    if (task != tasks)
#pragma omp task
    {
        task->result = packageBinary(spec, task->pkg, cookie,
                                     cheating, &(task->filename),
                                     buildTime, buildHost);
        rpmlog(RPMLOG_NOTICE, _("Finished binary package job, "
                                "result %d, filename %s\n"),
               task->result, task->filename);
    }
}
Steps to reproduce: set 'ulimit -v' in your shell to, for example,
1/8 of the amount of physical RAM, and then build, for example,
glibc-locale. I have tested this with rocko. If the '#pragma omp'
statements in the code snippet above are removed, the problem is
solved. But that is not good, as the parallel processing speeds up
the packaging.
Is the host environment used here, with restricted virtual memory,
supported by Yocto? If it is, does someone have a suggestion for a
solution to this issue?
This is a little tricky.
From bitbake's point of view, it's almost like you are building
on a 32-core, 32 GB box and running out of RAM/swap.
Clearly we would not fix a build that OOMs in that case
(it does seem odd that 32 GB isn't enough ...).
Are you sure that there isn't something else going on?
I have a 24-core machine with 64 GB RAM that never comes
close to such a problem (so I haven't paid attention to RAM usage).
I'm pretty sure I have narrowed down the root cause to the restriction
of virtual memory combined with the fact that liblzma bases its memory
calculations on physical RAM.
To prove this I added a printout in rpm-native/rpmio/rpmio.c and the
function lzopen_internal.
uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);
The value of memory_usage is the same regardless of which 'ulimit -v'
value I set. On the host with 256GB of physical RAM and 32GB of virtual
memory, memory_usage is ~5.1GB. On another host with 16GB of physical
RAM I get memory_usage of ~660MB.
I guess you have not seen this kind of failure because you have not
restricted virtual memory on your host. If you want to try to
reproduce this, set 'ulimit -v 8388608' (8GB) in your shell and then
run 'bitbake glibc-locale -c package_write_rpm -f'.
Best regards,
/Peter
--
_______________________________________________
yocto mailing list
[email protected]
https://lists.yoctoproject.org/listinfo/yocto