On Monday, 11 March 2019 at 15:23:34 UTC, BoQsc wrote:
If it is unavoidable to use floating point, how can I quickly and simply understand the rules of using float to make the least error, or should I just find a third-party package for that as well?

This is taught in computational mathematics courses. In short, you estimate how errors accumulate across the operations in your algorithm. For example, for a sum `x + y` of values carrying errors, you compute (x±dx)+(y±dy)±r, where r is the rounding error of the operation itself. The range of possible results runs from x-dx+y-dy-r at the minimum to x+dx+y+dy+r at the maximum, so the error bound of the sum is dx+dy+r, and that bound is carried over into subsequent calculations. Other operations work similarly. See for example https://en.wikipedia.org/wiki/Round-off_error#Accumulation_of_roundoff_error
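For illustration, here is a minimal sketch in D of tracking such a bound through one addition. The `Approx` struct is hypothetical, and bounding the rounding term r by half an ulp of the result assumes round-to-nearest IEEE arithmetic; this is just the dx+dy+r rule above, not a recipe from the linked article:

```d
import std.stdio : writefln;
import std.math : nextUp;

// Illustrative only: a value paired with its absolute error bound.
struct Approx
{
    double value; // best estimate
    double err;   // absolute error bound (the "dx" above)
}

// Sum of two approximate values: the bounds dx and dy add, plus a
// rounding term r.  With round-to-nearest IEEE arithmetic, r is at
// most half an ulp of the computed result.
Approx add(Approx x, Approx y)
{
    const s = x.value + y.value;
    const r = (s.nextUp - s) / 2; // ~half an ulp of s
    return Approx(s, x.err + y.err + r);
}

void main()
{
    // 0.1 and 0.2 are not exactly representable in binary; the bounds
    // here are rough upper estimates of that representation error.
    auto a = Approx(0.1, 6e-18);
    auto b = Approx(0.2, 2e-17);
    auto c = add(a, b);
    writefln("sum = %.17g, error bound = %g", c.value, c.err);
}
```

Doing this by hand for a whole algorithm is tedious, which is why interval-arithmetic libraries exist; the point of the exercise is seeing how each operation's rounding error feeds into the next bound.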
