I have always been puzzled about how to correctly define the Python built-in 
`round(number, ndigits)` when `number` is a binary float and `ndigits` is 
greater than zero.
Apparently CPython and numpy disagree:
        >>> round(2.765, 2)
        2.77
        >>> np.round(2.765, 2)
        2.76
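For context, here is a sketch of where the discrepancy appears to come from, using only the standard library. (The numpy-style result is emulated below via the scale / round-half-to-even / unscale approach that the `np.around` documentation describes; this is an illustration, not numpy's actual C code path.)

```python
from decimal import Decimal

# The double nearest to the literal 2.765 lies slightly *above* the
# exact decimal value 2.765:
print(Decimal(2.765) > Decimal("2.765"))   # True

# CPython rounds the stored binary value to the nearest 2-digit
# decimal, so it rounds up:
print(round(2.765, 2))                     # 2.77

# numpy's documented approach: scale by 10**ndigits, round half to
# even, then unscale.  Here the scaling step happens to land exactly
# on a representable halfway case:
print(2.765 * 100 == 276.5)                # True

# Round-half-to-even sends 276.5 down to 276, hence 2.76:
print(round(276.5) / 100)                  # 2.76
```

So the two results differ not because either rounding rule is wrong, but because they are applied to different intermediate values.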

My questions for the numpy devs are:
- Is there an authoritative source that explains what `round(number, ndigits)` 
means when the digits are counted in a base different from the one used in the 
floating point representation?
- Which was the first programming language to implement an intrinsic function 
`round(number, ndigits)` where ndigits are always decimal, irrespective of the 
representation of the floating point number? (I’m not interested in algorithms 
for printing a decimal representation, but in languages that allow one to store 
and perform computations with the rounded value.)
- Is `round(number, ndigits)` a useful function that deserves a rigorous 
definition, or is its use limited to fuzzy situations, where accuracy can be 
safely traded for speed?

Personally I cannot think of sensible uses of `round(number, ndigits)` for 
binary floats: whenever you positively need `round(number, ndigits)`, you 
should use a decimal floating point representation.
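To illustrate that last point: with the standard-library `decimal` module, the halfway case is exact and the result depends only on the rounding mode you explicitly choose, with no binary representation noise involved. (The variable name `price` is just for illustration.)

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

price = Decimal("2.765")   # exact decimal value, no binary approximation

# The outcome is determined purely by the chosen rounding mode:
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 2.76
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))    # 2.77
```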

Stefano


_______________________________________________
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/