On 2012-02-21 Gilles Espinasse wrote:
> On 2.6.32, only the lzma option is available, and by default the
> kernel uses lzma -9, which requires much more memory than needed.
> After someone building my distro reported a build failure on a
> 512 MB VM during kernel compression, I started hacking
> scripts/Makefile.lib, removed -9 and added -vv. I then played with
> the information displayed during compression to adjust xz's memory
> requirement.

lzma -9 from LZMA Utils uses a 32 MiB dictionary and requires 311 MiB
of memory. xz -9 uses a 64 MiB dictionary and requires 674 MiB of
memory. The lzma emulation in xz uses the same presets as xz, so
lzma -9 from XZ Utils also needs 674 MiB of memory. So the emulation
isn't very good in this respect, although by default both XZ Utils and
LZMA Utils use an 8 MiB dictionary.
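To see these numbers yourself, the -vv option that Gilles mentioned
makes xz print the selected settings and the compression and
decompression memory requirements to stderr (the exact wording varies
between xz versions). Compressing empty input is enough to see what a
preset would use:

    $ xz -9 -vv < /dev/null > /dev/null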

Using a dictionary bigger than the uncompressed file is a waste of
memory. So if the kernel image is small, switching to a much smaller
dictionary doesn't affect the compression ratio.
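For example, if the uncompressed image is known to fit in 8 MiB, the
dictionary can be capped explicitly (vmlinux is just a placeholder
name here):

    $ xz --check=crc32 --lzma2=preset=9,dict=8MiB \
          < vmlinux > vmlinux.xz   # vmlinux is a placeholder

preset=9 keeps the other -9 settings while dict=8MiB overrides the
dictionary size, which cuts the compressor's memory usage roughly in
proportion.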

> Shouldn't a patch be pushed to LKML to at least remove the -9 part?

I don't know. If -9 is removed, then a kernel bigger than 8 MiB may
compress worse than it does now.
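If someone wants to measure the difference, it's easy to compare the
default preset (8 MiB dictionary) against -9 on the actual image
(again, vmlinux is a placeholder):

    $ xz -vv < vmlinux > vmlinux-def.xz     # default preset (-6)
    $ xz -9 -vv < vmlinux > vmlinux-9.xz    # -9, 64 MiB dictionary
    $ ls -l vmlinux-def.xz vmlinux-9.xz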

The -9 was probably put there back when XZ Utils hadn't yet taken over
from LZMA Utils, so the memory usage was much lower. Using a high
setting is fine from the decompression point of view, because in the
specific case of kernel decompression the dictionary size doesn't
affect the decompressor's memory usage. So from that point of view it
is fine to use a high setting "just in case".

scripts/xz_wrap.sh uses a 32 MiB dictionary (370 MiB of memory) to
compress a kernel image with xz. Maybe that would work on 512 MiB VMs,
but it can still be a bit annoying on them.
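If memory serves, the invocation in that wrapper boils down to
something like this (the BCJ filter, --x86 here, depends on the
target architecture; the kernel's decompressor supports only the
CRC32 integrity check):

    $ xz --check=crc32 --x86 --lzma2=dict=32MiB \
          < vmlinux > vmlinux.xz   # vmlinux is a placeholder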

An alternative to local patching is to set a memory usage limit for xz
when compiling the kernel:

    $ XZ_OPT=-M100MiB make

xz will then scale the dictionary size down to fit the limit. It does
this also when emulating lzma.
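The same works when running xz directly, and with -vv one can see
which dictionary size the limit produced:

    $ xz -9 -M100MiB -vv < /dev/null > /dev/null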

> Secondly, could I trust the decompression memory requirement
> displayed by xz?

It can be trusted but:

  - It's rounded *up* to the nearest MiB, so it's not very precise
    when memory requirements are low. This could be fixed since a
    more accurate number is known internally already.

  - The number assumes that the decompressor needs to allocate
    a separate dictionary buffer. This isn't always the case.
    Linux kernel decompression doesn't need a dictionary buffer
    but initramfs and initrd decompression does.
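For an existing .xz file (this doesn't work with the .lzma format),
the number can also be checked after the fact; the verbose listing
should show the memory needed for decompression, rounded up as
described above (foo.xz is a placeholder):

    $ xz --list --verbose foo.xz   # foo.xz is a placeholder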

> Does the kernel decompressor really require the same amount of
> memory that xz displays during compression?

No. Kernel decompression of an XZ-compressed kernel requires about
30 KiB of memory. The dictionary size doesn't matter, because the
output buffer is used as the dictionary buffer. This is done even when
a BCJ filter is used.

I think with an LZMA-compressed kernel the memory usage is very
similar to XZ's.

-- 
Lasse Collin  |  IRC: Larhzu @ IRCnet & Freenode
