$ time lziprecover -v -Fc2% -n 2048 archive.tar.lz
   archive.tar.lz.fec: 320_594_048 bytes, 320_339_968 fec bytes, 611 blocks

real    132m45.193s
user    194m39.860s
sys     3m44.510s

Hope this helps,
Antonio.


Yes, it helps a lot. Here are my results on the 8 GB file.
With -n 1024:
$ time lziprecover -v -n 1024 -Fc2% -9 archive.tar.lz -o archive.tar.lz.level-9.fec
   archive.tar.lz.level-9.fec: 171_714_048 bytes, 171_442_176 fec bytes, 654 blocks

real    *16m33,838s*
user    50m51,727s
sys     0m25,251s
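As a sanity check, the reported fec bytes are consistent with the requested 2% fraction, assuming the input is roughly 8 GB (the multiplication below just inverts the 2%):

$ echo $((171442176 * 50))
8572108800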

Without:
$ time lziprecover -Fc2% -9 archive.tar.lz -o archive.tar.lz.level-9.fec

real    *130m46,860s*
user    88m53,441s
sys     7m46,270s
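That is roughly an eight-fold difference in wall-clock time for the same file and fec fraction (a quick check with bc, counting both times in seconds):

$ echo 'scale=1; (130*60+46)/(16*60+33)' | bc
7.9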




For a complete picture of this issue, I have run some tests on 6, 7 and 8 GB files. With the 6 GB file I got, after around 30 s to 1 min of intense disk reading at the start, 100% CPU usage (no waiting for the disk), with a total time of 9 min 16 s:

$ time lziprecover -Fc2% -9 crypto.tar.lz -o archive.tar.lz.level-9.fec
real    9m16,551s
user    34m59,735s
sys     0m0,452s

After that, I tried with the 8 GB file and got 130 min, as shown above. To see whether it was the lack of available RAM that triggers this permanent reading of the disk, I started a 2 GB VM (which actually consumed 512 MB) on the server and got this result:
$ time lziprecover -Fc2% -9 archive.tar.lz -o archive.tar.lz.level-9.fec

real    15m39,630s
user    39m52,017s
sys     0m21,826s

While the VM was on, lziprecover was reading the disk at full speed. Almost immediately after I stopped the VM (3 min into the test), lziprecover slowed its reads and reached 100% CPU usage on all cores.
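For anyone who wants to reproduce this observation, one way (assuming the sysstat package is installed) is to watch per-process disk reads in a second terminal while the command runs; pidstat's -C option selects processes by command name:

$ pidstat -d 5 -C lziprecover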

To me this is a conclusive experiment: without -n set equal to the number of fec blocks, when the file is about the size of the available RAM or bigger (oddly, swap seems not to matter), these permanent reads occur.
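As a workaround in the meantime, here is a minimal sketch of a wrapper that only adds -n when the input exceeds the available RAM. It assumes a Linux /proc/meminfo and GNU stat, and uses 1024 purely as an example block count (the right value for a given file appears in lziprecover's own output, as in the runs above):

#!/bin/sh
# Pass -n only when the input file is larger than the available RAM.
file=archive.tar.lz
size=$(stat -c %s "$file")                                     # input size in bytes
avail=$(awk '/MemAvailable/ {print $2 * 1024}' /proc/meminfo)  # available RAM in bytes
if [ "$size" -ge "$avail" ]; then
    lziprecover -v -n 1024 -Fc2% -9 "$file" -o "$file.level-9.fec"
else
    lziprecover -v -Fc2% -9 "$file" -o "$file.level-9.fec"
fi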

Regards.
