On 10/26/07, Larry Hastings <[EMAIL PROTECTED]> wrote:
> His point is that Python has a fixed-point number type called "Decimal",
> and that this will lead to confusion. I can see his point, but we all know
> from years of C programming that "%d" takes an int and formats it in base
> 10--there is no confusion about this.
Sure there is. C isn't the only language where I've used it, but I still
sometimes have to look up whether 'd' is "decimal" or "double". I've found
bugs in C where someone else just assumed it was "double".

If it weren't for backwards compatibility, 'i' would be a much better option,
and saving 'd' for an actual Decimal (which might have a decimal point) would
be good. http://docs.python.org/lib/typesseq-strings.html already allows
both. The question is whether repurposing 'd' would break too much.

That said, I think a Decimal that happens to be an integer probably *should*
print differently from an integer, because the precision is an important part
of a Decimal, and won't always fall conveniently at the decimal point.

-jJ
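P.S. To make both points concrete, a rough interactive sketch (my own
illustration against today's decimal module and %-formatting, not anything
proposed in the thread):

    >>> '%d' % 10, '%i' % 10      # 'd' and 'i' are already interchangeable here
    ('10', '10')
    >>> from decimal import Decimal
    >>> str(Decimal('2.00'))      # the trailing zeros record the precision
    '2.00'
    >>> str(2)                    # a plain int carries no such information
    '2'
    >>> Decimal('2.00') == 2      # numerically equal, yet printed differently
    True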