On Thu, Jun 20, 2019 at 01:10:53AM +0200, Mark Wielaard wrote:
> On Wed, 2019-06-19 at 02:04 +0200, Mark Wielaard wrote:
> > The patch also contains a testcase. But since it is necessary to create
> > and process a 4GB+ file it is guarded by some checks to make sure the
> > machine has enough disk and memory. Do these checks look reasonable?
> > They probably prevent (make the test SKIP) on all buildbot CI workers.
> 
> We discussed this a bit on irc and it was pointed out that the machine
> also really needs to have a somewhat fast disk. After a bit of
> experimenting I found the following to be a good indicator of the
> testcase being able to run in reasonable time:
> 
> diff --git a/tests/run-large-elf-file.sh b/tests/run-large-elf-file.sh
> index 3d1bdb6..d97eec9 100755
> --- a/tests/run-large-elf-file.sh
> +++ b/tests/run-large-elf-file.sh
> @@ -35,6 +35,15 @@ if test $space_available -lt 10; then
>    exit 77
>  fi
>  
> +# Make sure the disk is reasonably fast, should be able to write 100MB/s
> +fast_disk=1
> +timeout -s9 10s dd conv=fsync if=/dev/urandom of=tempfile bs=1M count=1K \
> +  || fast_disk=0; rm tempfile
Why /dev/urandom?  I suggest using /dev/zero instead.

Also, the check itself is quite expensive: it writes 1G into tempfile.
I suggest moving it after the mem_available check.

> +if test $fast_disk -eq 0; then
> +  echo "Disk not fast enough, need at least 100MB/s"

It isn't necessarily the disk; I'd say the file system is not fast enough.
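For illustration, a rough, untested sketch of the check with both
suggestions applied (assuming it is placed right after the existing
mem_available check, so the expensive 1G write is skipped on machines
that would SKIP the test anyway):

# Make sure the file system is reasonably fast, should be able to
# write 100MB/s.  /dev/zero avoids the cost of generating random bytes.
fast_disk=1
timeout -s9 10s dd conv=fsync if=/dev/zero of=tempfile bs=1M count=1K \
  || fast_disk=0
rm -f tempfile
if test $fast_disk -eq 0; then
  echo "File system not fast enough, need at least 100MB/s"
  exit 77
fi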
-- 
ldv