Hello Pádraig,

> The reason is because xprintf pulls in the whole gnulib vasnprintf
> implementation, and then it does a huge number of reallocs.
Did profiling reveal where in vasnprintf the speed problems are?

> The reason my vprintf from glibc-2.12.90-21.i686 is not used is because:
>
> $ grep gl_cv_func_printf.*no$ config.log
> gl_cv_func_printf_infinite_long_double=no
>
> I had a quick look at that check, and my suspicions are raised
> as it was added in 2007. Is my glibc still not up to spec?
> I extracted the test (attached) which output:
>
> [0.000000e+4922] is not NAN
> [0e+4922] is not NAN
> [0.000000] is not NAN
> [0.000000e+00] is not NAN
> [0] is not NAN
> [1.550000] is not NAN
> [1.550000e+00] is not NAN
> [1.55] is not NAN
> [0.000000] is not NAN
> [8.405258e-4934] is not NAN
> [8.40526e-4934] is not NAN

Yup. This means that glibc does not classify Pseudo-Infinity, Pseudo-Zero,
Unnormalized, and Pseudo-Denormal values as NaN; it even prints them as if
they were ordinary numbers. But at least it appears to no longer segfault
on them.

The issue is a question of reliability, not spec. There is no spec that
mandates how glibc handles these numbers. That's why the glibc bug
<http://sourceware.org/bugzilla/show_bug.cgi?id=4586> was closed as
"RESOLVED INVALID". The point is that these floating-point values are
outside of IEEE 754: they are not finite, not infinite, not zero, and not
"normal" NaNs.

Reliable handling of these values is provided by the gnulib module
'printf-safe', which is a dependency of 'printf-posix'. It is particularly
interesting for the 'od' program, when used as "od -t fL". You could use
the gnulib-tool option '--avoid=printf-safe', but then 'od -t fL' will
likely crash on random input on some platforms. See these threads:
<http://lists.gnu.org/archive/html/bug-gnulib/2007-06/msg00041.html>
<http://lists.gnu.org/archive/html/bug-gnulib/2007-06/msg00046.html>

Bruno

-- 
In memoriam Buddy Holly <http://en.wikipedia.org/wiki/Buddy_Holly>
