Sorry, this isn't really a POSIX or a standards question, but does anyone know 
why this was defined this way?   Was it just codification of "historical 
practice" (i.e., a non-fatal bug)?

While we're at it: when formatting integers with printf, are there any
disadvantages to using a precision specification rather than a zero flag
followed by a field width, e.g., "%#.8x" vs. "%#08x"?
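
For what it's worth, here is a small sketch of how the two forms behave
differently in practice (standard C printf; the 0x1234 test value is just
for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* Precision counts hex digits only; the "0x" prefix from the
           '#' flag is added on top, giving 10 characters in total. */
        printf("[%#.8x]\n", 0x1234);   /* prints [0x00001234] */

        /* Field width counts the whole conversion, prefix included,
           so only six digits follow the "0x" here. */
        printf("[%#08x]\n", 0x1234);   /* prints [0x001234] */

        /* For a zero value, '#' adds no prefix with %x at all. */
        printf("[%#.8x]\n", 0);        /* prints [00000000] */
        printf("[%#08x]\n", 0);        /* prints [00000000] */

        return 0;
    }

The visible differences: the precision form widens to make room for the
"0x" prefix, while the width form absorbs it, and the zero flag is ignored
entirely once a precision is given for an integer conversion.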
