Bruno Haible <[EMAIL PROTECTED]> wrote:
> Jim Meyering wrote:
>> It would be good for gnulib to detect the bug and to use
>> the replacement snprintf on losing systems.
>
> Does the "checking whether printf survives out-of-memory conditions" test
> from gnulib (part of any of the *printf-posix modules) print "yes" or "no"
> on the two machines you used?
It prints "yes" on each of the latest - rawhide - debian unstable >> I chose to eliminate the shared-libraries: >> >> gcc -static -W -Wall k.c > > Is it necessary? No. > Won't it crash also with dynamic linking? Yes, it does for me, but it is good to eliminate the variable, since using shared libraries adds start-up memory usage, thus potentially invalidating the 5000kB limit in the test below. That's also why I clear the environment with env -i and use zsh's -f option: to minimize the possibility of external perturbations. >> Then run it like this: >> >> env -i -- zsh -f -c \ >> 'ulimit -v 5000; MALLOC_PERTURB_=9 ./a.out %$[5*2**20]d' || dmesg|tail >> -1 > > Is the MALLOC_PERTURB_ essential for the failure or not? It appears to be essential, to ensure that the internal failure is manifested. >> FYI, the libc in freebsd 6.1 and newer has no problem with the above >> snprintf usage. > > But it fails the gl_PRINTF_ENOMEM check that is already in m4/printf.m4. Oh well. This suggests that all gnulib clients should use the replacements until the upstream/vendor implementations improve. Perhaps our standards are too high.
