On 08/10/15 01:33, Dmitry V. Levin wrote:
> On Thu, Oct 08, 2015 at 03:17:58AM +0300, Dmitry V. Levin wrote:
>> On Tue, Sep 22, 2015 at 03:13:54AM +0200, Bernhard Voelker wrote:
>>> On 09/21/2015 02:56 AM, Pádraig Brady wrote:
>>>> On 21/09/15 00:47, Bernhard Voelker wrote:
>>>>> From 1335ab2713aab020564275c49fdb3e92bb9a207b Mon Sep 17 00:00:00 2001
>>>>> From: Bernhard Voelker <[email protected]>
>>>>> Date: Mon, 21 Sep 2015 01:40:33 +0200
>>>>> Subject: [PATCH] maint: use adaptive approach for `ulimit -v` based tests
>>>>>
>>>>> When configured with either 'symlinks' or 'shebangs' as value for
>>>>> the --enable-single-binary option, tests based on `ulimit -v` are
>>>>> skipped.  The reason is that the multicall 'coreutils' binary requires
>>>>> much more memory: it loads additional shared libraries, and it is of
>>>>> course far larger than a standalone tool (~5MiB for the multicall
>>>>> binary versus ~290KiB for 'date').  Finally, in the case of
>>>>> 'shebangs', the starting shell requires more memory, too.
>>>>>
>>>>> Instead of using hard-coded values for the memory limit, use an
>>>>> adaptive approach: first determine the amount of memory for a similar,
>>>>> yet more trivial command, and then do the real test run using that
>>>>
>>>> s/command/invocation of the command/
>>>>
>>>> I can't find any significant issues with the patch at all.
>>>
>>> Thanks, I'll push with that change tomorrow.
>>
>> This approach is very fragile.  It actually failed tests/misc/head-c
>> on an x86 box with the following diagnostics:
>>
>>
>> $ (ulimit -v 2048 && src/head -c1 < /dev/null; echo $?)
>> 0
>> $ (ulimit -v 2048 && src/head --bytes=-2147483647 < /dev/null; echo $?)
>> src/head: memory exhausted
>> 1
>> $ (ulimit -v $((2048+128)) && src/head --bytes=-2147483647 < /dev/null; echo $?)
>> 0
> 
> I've ended up with the following workaround:
> 
> diff --git a/init.cfg b/init.cfg
> index f71f94c..b199302 100644
> --- a/init.cfg
> +++ b/init.cfg
> @@ -156,7 +156,7 @@ get_min_ulimit_v_()
>        prev_v=$v
>        for v in $( seq $(($prev_v-1000)) -1000 1000 ); do
>          ( ulimit -v $v && "$@" ) >/dev/null \
> -          || { echo $prev_v; return 0; }
> +          || { echo $(($prev_v+256)); return 0; }
>          prev_v=$v
>        done
>      fi
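To make the discussion concrete, here is a self-contained sketch of the probing scheme that get_min_ulimit_v_() implements, including the 256 KiB margin added above. The function name, starting point, and step sizes here are illustrative, not the exact init.cfg code:

```shell
#!/bin/sh
# Illustrative sketch of init.cfg's adaptive probing (simplified):
# find a small `ulimit -v` value (in KiB) under which "$@" still
# succeeds, then report it with a 256 KiB safety margin.
min_ulimit_v_sketch() {
  prev_v=
  # Coarse pass: probe downwards in 5000 KiB steps.
  for v in $(seq 50000 -5000 5000); do
    (ulimit -v $v && "$@") >/dev/null 2>&1 || break
    prev_v=$v
  done
  # Give up if the command fails even with the starting limit.
  [ -n "$prev_v" ] || return 1
  # Fine pass: refine in 1000 KiB steps below the last working value,
  # reporting the last success plus margin as soon as a run fails.
  for v in $(seq $((prev_v - 1000)) -1000 1000); do
    (ulimit -v $v && "$@") >/dev/null 2>&1 \
      || { echo $((prev_v + 256)); return 0; }
    prev_v=$v
  done
  echo $((prev_v + 256))
}
```

A test would then capture the result, e.g. vm=$(min_ulimit_v_sketch head -c1 </dev/null), and apply any command-specific headroom on top of it.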

Each test will need to consider further adjustments to
the value returned by the above function.  It's just
that tests/misc/head-c.sh failed to do that.
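For instance, such a per-test adjustment might look like the following simplified, hypothetical sketch (small values for illustration; the real test in the attached patch uses $SSIZE_MAX and the init.cfg helper):

```shell
#!/bin/sh
# Hypothetical sketch of a per-test adjustment, not the actual test:
# first measure the minimum `ulimit -v` (in KiB) for a cheap head
# invocation, then pad the limit for the real run, since GNU head
# allocates additional memory to buffer data for --bytes=-N.
min_v=
for v in $(seq 50000 -1000 1000); do
  (ulimit -v $v && head -c1 </dev/null) >/dev/null 2>&1 || break
  min_v=$v
done

# The real run gets ~1 MiB of headroom beyond the measured minimum,
# analogous to the $(($vm+1000)) change in tests/misc/head-c.sh below.
fail=0
(ulimit -v $((min_v + 1256)) && head --bytes=-1000 </dev/null) || fail=1
```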

I've adjusted things accordingly in the attached,
along with some other improvements.

thanks,
Pádraig.
From c1dba59a79249f8e9f546fd3453da32a17ccaeb1 Mon Sep 17 00:00:00 2001
From: Pádraig Brady <[email protected]>
Date: Thu, 8 Oct 2015 03:41:07 +0100
Subject: [PATCH] tests: adjust recent changes to virtual memory limits

* tests/dd/no-allocate.sh: Account for timeout(1) when
determining the required mem, as timeout has additional shared libs.
This avoids the need for the hardcoded 4M addition to the limit.
* tests/misc/head-c.sh: Increase the base limit, to account for
the fact that head(1) will allocate some additional mem in this case.
* tests/misc/cut-huge-range.sh: Remove mention of specific limits.
* tests/misc/printf-surprise.sh: Likewise.
Reported by Dmitry V. Levin.
---
 tests/dd/no-allocate.sh       | 8 ++++----
 tests/misc/cut-huge-range.sh  | 2 +-
 tests/misc/head-c.sh          | 2 +-
 tests/misc/printf-surprise.sh | 3 +--
 4 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/tests/dd/no-allocate.sh b/tests/dd/no-allocate.sh
index d122e35..f6ce003 100755
--- a/tests/dd/no-allocate.sh
+++ b/tests/dd/no-allocate.sh
@@ -21,9 +21,9 @@ print_ver_ dd
 
 # Determine basic amount of memory needed.
 echo . > f || framework_failure_
-vm=$(get_min_ulimit_v_ dd if=f of=f2 status=none) \
+vm=$(get_min_ulimit_v_ timeout 10 dd if=f of=f2 status=none) \
   || skip_ "this shell lacks ulimit support"
-rm -f f || framework_failure_
+rm f f2 || framework_failure_
 
 # count and skip are zero, we don't need to allocate memory
 (ulimit -v $vm && dd  bs=30M count=0) || fail=1
@@ -43,8 +43,8 @@ check_dd_seek_alloc() {
   timeout 10 dd count=1 if=/dev/zero of=tape&
 
   # Allocate buffer and read from the "tape"
-  (ulimit -v $(($vm+4000)) \
-    && timeout 10 dd $dd_buf=30M $dd_op=1 count=0 $dd_file=tape)
+  (ulimit -v $vm \
+     && timeout 10 dd $dd_buf=30M $dd_op=1 count=0 $dd_file=tape)
   local ret=$?
 
   # Be defensive in case the tape reader is blocked for some reason
diff --git a/tests/misc/cut-huge-range.sh b/tests/misc/cut-huge-range.sh
index 4df2fc0..633ca85 100755
--- a/tests/misc/cut-huge-range.sh
+++ b/tests/misc/cut-huge-range.sh
@@ -50,7 +50,7 @@ subtract_one='
 CUT_MAX=$(echo $SIZE_MAX | sed "$subtract_one")
 
 # From coreutils-8.10 through 8.20, this would make cut try to allocate
-# a 256MiB bit vector.  With a 20MB limit on VM, the following would fail.
+# a 256MiB bit vector.
 (ulimit -v $vm && : | cut -b$CUT_MAX- > err 2>&1) || fail=1
 
 # Up to and including coreutils-8.21, cut would allocate possibly needed
diff --git a/tests/misc/head-c.sh b/tests/misc/head-c.sh
index ab821ac..0c63e3a 100755
--- a/tests/misc/head-c.sh
+++ b/tests/misc/head-c.sh
@@ -42,7 +42,7 @@ esac
 # Only allocate memory as needed.
 # Coreutils <= 8.21 would allocate memory up front
 # based on the value passed to -c
-(ulimit -v $vm && head --bytes=-$SSIZE_MAX < /dev/null) || fail=1
+(ulimit -v $(($vm+1000)) && head --bytes=-$SSIZE_MAX < /dev/null) || fail=1
 
 # Make sure it works on funny files in /proc and /sys.
 
diff --git a/tests/misc/printf-surprise.sh b/tests/misc/printf-surprise.sh
index 8480693..f098bc1 100755
--- a/tests/misc/printf-surprise.sh
+++ b/tests/misc/printf-surprise.sh
@@ -59,8 +59,7 @@ cleanup_() { kill $pid 2>/dev/null && wait $pid; }
 
 head -c 10 fifo > out & pid=$!
 
-# Choosing the virtual memory limit, 11000 is enough, but 10000 is too
-# little and provokes a "memory exhausted" diagnostic on FreeBSD 9.0-p3.
+# Trigger large mem allocation failure
 ( ulimit -v $vm && env $prog %20000000f 0 2>err-msg > fifo )
 exit=$?
 
-- 
2.5.0
