On Mon, Sep 24, 2018 at 9:08 AM Peter Bergin <[email protected]> wrote:
>
> On 2018-09-24 00:06, Andrea Adami wrote:
> > On Fri, Sep 21, 2018 at 1:04 AM Andrea Adami <[email protected]> wrote:
> >> On Thu, Sep 20, 2018 at 1:03 PM Peter Bergin <[email protected]> wrote:
> >>> On 2018-09-17 10:27, Burton, Ross wrote:
> >>>
> >>> On Mon, 17 Sep 2018 at 08:13, Peter Bergin <[email protected]> wrote:
> >>>
> >>> I'm pretty sure I have narrowed down the root cause to the restriction
> >>> of virtual memory and the fact that liblzma bases its memory
> >>> calculations on physical RAM.
> >>>
> >>> To prove this I added a printout in rpm-native's rpmio/rpmio.c, in the
> >>> function lzopen_internal:
> >>>
> >>>     uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
> >>>     rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);
> >>>
> >>>
> >>> The value of memory_usage is the same regardless of which 'ulimit -v'
> >>> value I set. On the host with 256GB of physical RAM and 32GB of virtual
> >>> memory, memory_usage is ~5.1GB. On another host with 16GB of physical
> >>> RAM I get memory_usage of ~660MB.
> >>>
> >>> I guess you have not seen this kind of failure if you have not
> >>> restricted virtual memory on your host. If you want to try to reproduce
> >>> this, set 'ulimit -v 8388608' (8GB) in your shell and then run 'bitbake
> >>> glibc-locale -c package_write_rpm -f'.
> >>>
> >>> Wouldn't a solution be to change lzma to look at free memory, not
> >>> total physical memory?
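> >>>
> >>> (A minimal sketch of that idea, assuming a Linux host; the helper name
> >>> is hypothetical and /proc/meminfo reports MemAvailable in kB:)
> >>>
> >>>     #include <inttypes.h>
> >>>     #include <stdint.h>
> >>>     #include <stdio.h>
> >>>
> >>>     /* Use the kernel's MemAvailable estimate instead of the total
> >>>      * physical RAM figure (cf. lzma_physmem()). */
> >>>     static uint64_t mem_available_bytes(void)
> >>>     {
> >>>         FILE *f = fopen("/proc/meminfo", "r");
> >>>         char line[128];
> >>>         uint64_t kb = 0;
> >>>         if (!f)
> >>>             return 0;
> >>>         while (fgets(line, sizeof line, f))
> >>>             if (sscanf(line, "MemAvailable: %" SCNu64 " kB", &kb) == 1)
> >>>                 break;
> >>>         fclose(f);
> >>>         return kb * 1024;
> >>>     }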
> >>>
> >>> Ross
> >>>
> >>> I have been in contact with the maintainer of liblzma. There is currently
> >>> no way to restrict the memory usage in liblzma during multi-threaded
> >>> compression. He recommended adjusting the number of threads used during
> >>> compression. This kind of check is done for 32-bit but not for 64-bit
> >>> builds (in rpmio/rpmio.c, lzopen_internal). Rewriting liblzma is another
> >>> option, but I took an easier way out.
> >>>
> >>> I have come up with a patch
> >>> (https://patchwork.openembedded.org/patch/155017/) that solves my problem
> >>> and restricts memory usage when 'ulimit -v' is set. The calculation is
> >>> based on the assumption that lzopen_internal runs in parallel with as
> >>> many instances as there are CPU threads, since '#pragma omp parallel' is
> >>> used in build/pack.c.
> >>>
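> >>> For illustration, a minimal sketch of that kind of adjustment (a
> >>> hypothetical helper, not the actual patch):
> >>>
> >>>     #include <sys/resource.h>
> >>>     #include <unistd.h>
> >>>     #include <lzma.h>
> >>>
> >>>     /* Cap the xz thread count so that the estimated encoder memory,
> >>>      * multiplied by the number of parallel packaging threads,
> >>>      * stays below the 'ulimit -v' (RLIMIT_AS) ceiling. */
> >>>     static void adjust_xz_threads(lzma_mt *mt)
> >>>     {
> >>>         struct rlimit rl;
> >>>         if (getrlimit(RLIMIT_AS, &rl) != 0 || rl.rlim_cur == RLIM_INFINITY)
> >>>             return; /* no virtual memory limit set, nothing to do */
> >>>
> >>>         /* build/pack.c starts one compressor per CPU thread */
> >>>         long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
> >>>         uint64_t budget = rl.rlim_cur / (uint64_t)(ncpus > 0 ? ncpus : 1);
> >>>
> >>>         while (mt->threads > 1 &&
> >>>                lzma_stream_encoder_mt_memusage(mt) > budget)
> >>>             mt->threads--;
> >>>     }
> >>>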
> >>> When running tests on my machine with 4 cores, 16GB of physical RAM and
> >>> 'ulimit -v 2097152' (2GB), it works well and the log shows:
> >>>
> >>> XZ: virtual memory restricted to 2147483648 and per CPU thread 536870912
> >>> XZ: Adjusted the number of threads from 4 to 3 to not exceed the memory 
> >>> usage limit of 2147483648 bytes
> >>>
> >>> I didn't get a clear answer on whether this is something Yocto/OE should
> >>> support, but I hope my patch solves the issue; it will not affect normal
> >>> environments where 'ulimit -v' is not set.
> >>>
> >>> /Peter
> >>> --
> >> Peter,
> >> first of all, sorry for butting in again. My bad, I misunderstood the point.
> >>
> >> Looking at it properly, I see the sources apply the restriction only
> >> under #if __WORDSIZE == 32.
> >>
> >> Again, there is another way to reduce the total memory footprint: just
> >> use a lower preset in mt_options.
> >> As far as I can see the default is '#define LZMA_PRESET_DEFAULT UINT32_C(6)',
> >> so you could try modifying mt_options.preset to your needs.
> >> I think more threads are normally better.
> >>
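> >> For example (an untested sketch; 'budget' stands for whatever memory
> >> ceiling you choose):
> >>
> >>     /* Walk the preset down from the default until the multi-threaded
> >>      * encoder estimate fits the budget. */
> >>     uint32_t preset = LZMA_PRESET_DEFAULT;   /* i.e. 6 */
> >>     mt_options.preset = preset;
> >>     while (preset > 0 &&
> >>            lzma_stream_encoder_mt_memusage(&mt_options) > budget)
> >>         mt_options.preset = --preset;
> >>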
> >> Cheers
> >> Andrea
> >>
> >
> > Well,
> > call it luck, but I now get similar problems with
> > -  opkg-utils: use multithreaded xz when building packages
> >     369c43cca4f0
> >
> > I build in tmpfs (8GB) and have 8 cores.
> > Strangely enough, it happens with only one machine of our BSP (akita).
> >
> > Build Configuration:
> > BB_VERSION           = "1.39.1"
> > BUILD_SYS            = "x86_64-linux"
> > NATIVELSBSTRING      = "ubuntu-16.04"
> > TARGET_SYS           = "arm-oe-linux-gnueabi"
> > MACHINE              = "akita"
> > DISTRO               = "nodistro"
> > DISTRO_VERSION       = "nodistro.0"
> > TUNE_FEATURES        = "arm armv5 thumb dsp"
> > TARGET_FPU           = "soft"
> > meta                 = "master:8cc08e44cf3e969c124456d459c6c06a76cad018"
> > meta-handheld        = "master:61e16e253ea870b87592075efd87727e8b5c574d"
> > meta-initramfs
> > meta-oe              = "master:7ac8d82fec470a8c8e66cfc4216381e2364cb4b2"
> >
> > ...
> > Installing kernel-module-nls-cp1251-4.14.62-handheld (4.14) on root
> > Downloading 
> > file:/tmp/build/tmp-glibc/work/akita-oe-linux-gnueabi/core-image-base/1.0-r0/oe-rootfs-repo/akita/kernel-module-nls-cp1251-4.14.62-handheld_4.14-r0_akita.ipk.
> > Installing kernel-mo
> > ERROR: core-image-base-1.0-r0 do_rootfs: Function failed: do_rootfs
> > ERROR: Logfile of failure stored in:
> > /tmp/build/tmp-glibc/work/akita-oe-linux-gnueabi/core-image-base/1.0-r0/temp/log.do_rootfs.28481
> > ERROR: Task 
> > (/oe/oe-core/meta/recipes-core/images/core-image-base.bb:do_rootfs)
> > failed with exit code '1'
> > ...
> >
> > and in dmesg you see opkg segfaulting and the OOM killer kicking in.
> > It is the same issue... it went unnoticed when building for other similar
> > devices (spitz). Maybe it did not happen when using musl as TC_LIBC.
> >
> > Reverting the patch fixes that.
> > Ross, worth opening a bug?
> >
> > Cheers
> > Andrea
> Andrea,
>
> do you have any virtual memory restrictions (ulimit -v) or overcommit
> > settings (vm.overcommit_memory) on your build host?
>

Nothing special, it is just Ubuntu 16.04 on x86_64.

The point is, I have been building for years with this setup:
- TMPDIR in tmpfs  (80% of 16GiB set in fstab)
- 8 cores
- rm_work

Lately I have been testing musl libc, so probably I missed it back in March.
As for the opkg backend, the solution is very easy (if we want to keep
the parallelism):

- compressorargs="--threads 0"
+ compressorargs="--threads 0 -3"

The preset could even be 4 or 5; we need to test and agree on a minimum.
I'll test it on my setup.
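
An alternative worth testing (assuming the host xz is 5.2 or newer, which
can scale the thread count down itself under a memory cap; the 50% value
is only an example):

- compressorargs="--threads 0"
+ compressorargs="--threads 0 --memlimit-compress=50%"

The percentage form is relative to physical RAM, so it will not catch an
'ulimit -v' style cap, but it does bound the multi-threaded encoder on
hosts with many cores.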

> Regarding my problem with rpm and liblzma when virtual memory is
> restricted, I have supplied a patch to oe-core
> https://patchwork.openembedded.org/patch/155017/ with a suggested solution.
>

Yes, I have seen it. IMHO the sources are buggy.

I still think we are giving too much memory (the default preset is 6).
Cheers
Andrea

> Regards,
> /Peter