I think that Python's float.__round__ is correct. AIUI it rounds
correctly based on the true value represented by the float:
In [4]: round(1.05, 1)
Out[4]: 1.1
In [5]: import decimal
In [6]: decimal.Decimal(1.1)
Out[6]: Decimal('1.100000000000000088817841970012523233890533447265625')
That's the closest a binary float can get to the decimal value 1.1.
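To see why `round(1.05, 1)` gives `1.1` rather than `1.0`, it helps to look at the true value stored for the literal `1.05` itself (a small demonstration, not from the original post):

```python
from decimal import Decimal

# The literal 1.05 is stored as the nearest binary double, which is
# slightly *above* the decimal value 1.05:
print(Decimal(1.05))
# 1.0500000000000000444089209850062616169452667236328125

# round() operates on that true stored value, which is strictly
# greater than the halfway point, so it rounds up:
print(round(1.05, 1))  # 1.1
```

So the result is not a half-even tie-break at all: the stored value was never exactly 1.05 to begin with.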
The question is whether we can, within the limitations of binary representation,
give a result that is consistent with what we would expect when using base-10
notation. The advantage that SymPy and Decimal have is that they know, by
virtue of the values given at instantiation, what the last significant digit is.