Cornelis de Gier <[EMAIL PROTECTED]> wrote:
> Jim Meyering <[EMAIL PROTECTED]> wrote:
>> Cornelis de Gier <[EMAIL PROTECTED]> wrote:
>
>> Would you please see if the following works on your system?
>> The `$deep' should be `$tmp'.
>> ( ulimit -s 50; du -s $tmp > /dev/null ) || fail=1
>
>> If my system can do it with 8KB of stack, yours should be
>> able to do it in no more than 50KB.
>> If not, please try to find the smallest value larger than 50
>> that works for you.
>
> Values up to around 800 cause du to segfault in 9 out of 10 cases; around
> 1500, du segfaults about 1 time in 10.  From 2000 up, du doesn't seem to
> segfault.

Thanks for the info.
Sounds like something strange is going on.
Is ulimit the bash built-in for you?  Run `type ulimit'.
Can you reproduce this on the bash command line?
I find it hard to believe that you need 1.5MB of stack to run du like that,
so maybe something in libc or your kernel has changed how `ulimit -s' works?
bash's `help ulimit' command says its argument specifies units of 1024 bytes.
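To double-check both points from the command line, something like the following should do.  Here `$tmp` is just a stand-in for the test's scratch directory; a fresh empty one is created for the demonstration:

```shell
# Confirm that ulimit is the bash builtin rather than an external command.
type ulimit

# Reproduce the test by hand: run du under a 50KB stack limit.
# The subshell keeps the lowered limit from leaking into the session.
tmp=$(mktemp -d)                  # stand-in for the test's directory tree
( ulimit -s 50; du -s "$tmp" > /dev/null ) || echo "du failed under a 50KB stack"
rmdir "$tmp"
```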

FYI, this works for me:

  ( ulimit -s 8; du -s $tmp ) || echo fail

with the same version of bash, with either Debian unstable's linux-2.4.24
or stock linux-2.6.2, and with libc6-2.3.2.ds1-11.

In any case, to avoid hassles with this test, I've
temporarily disabled the part that tests du in that file.
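If you do want to probe the threshold on your system, the search for the smallest working limit can be scripted rather than done by hand.  A rough sketch (the `$tmp` assignment here is a stand-in; point it at a suitably deep directory tree instead, and since the failures are intermittent, each limit is tried ten times before it counts as working):

```shell
# Find the smallest stack limit (in KB) at which du survives 10 runs in a row.
tmp=$(mktemp -d)        # stand-in: point this at the deep test tree instead
limit=50
while :; do
  ok=1
  for i in 1 2 3 4 5 6 7 8 9 10; do
    ( ulimit -s "$limit"; du -s "$tmp" > /dev/null 2>&1 ) || { ok=0; break; }
  done
  [ "$ok" -eq 1 ] && break
  limit=$((limit + 50))
  # Safety cap so the loop can't run away if du never succeeds.
  [ "$limit" -gt 10000 ] && { echo "gave up at 10MB"; break; }
done
echo "smallest limit that survived 10 runs: ${limit}KB"
rmdir "$tmp"
```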

>>>Fedora Core 1 on Pentium 4
>>>Linux 2.4.22-1.2149.nptl
>>>bash, version 2.05b.0(1)-release
>>>gcc (GCC) 3.3.2 20031022 (Red Hat Linux 3.3.2-1)
>>>CFLAGS=-O2 -march=pentium4


_______________________________________________
Bug-coreutils mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/bug-coreutils