In coreutils 9.0:

  $ /usr/bin/printf "%a\n" 0x1.0000000000001p-20
  0x8.0000000000008p-23
  $ c_printf 0x1.0000000000001p-20    # printf("%a\n", strtod(argv[1], NULL));
  0x1.0000000000001p-20

IMO, this normalization inconsistency is worth addressing.  What do you
think?  There's nothing numerically wrong with either; both emitted
strings hew to C99 (which allows any nonzero leading hex digit for
normal values).  But still... it would sure be nice (and helpful for
testing) if they behaved identically, so that direct string comparisons
could be done.

As a 2c aside from the consistency issue itself: if there is a
willingness to make them consistent, the approach used by printf(3)
seems preferable to me, because it has the nice, readily interpretable
semantic that the leading hex digit (prior to the radix point) is the
implied normalization indicator, i.e. either 0 or 1.  So you can tell
right off the bat whether the value is subnormal or not.  Nice feature
for humans.  Admittedly, that choice is somewhat IEEE-754-double
centric, but given its ubiquity, it doesn't seem unreasonable to favor
that approach if there's a decision to make the two consistent.

(Probably this topic has been previously discussed on the list, but I
could not quickly find the thread.  If it has been, and you can point
me to it, I'll educate myself on the background and history.)

Thanks,
Glenn