On Sat, Jun 28, 2014 at 03:31:36PM +0000, Alex_Dovhal via Digitalmars-d wrote:
> On Saturday, 28 June 2014 at 10:42:19 UTC, Walter Bright wrote:
> > It happens with both numerical integration and inverting matrices.
> > Inverting matrices is commonplace for solving N equations with N
> > unknowns.
> >
> > Errors accumulate very rapidly and easily overwhelm the significance
> > of the answer.
>
> If one wants better precision when solving linear equations, he/she
> would at least use a QR decomposition.
Yeah, inverting matrices is generally not the preferred method for solving linear equations, precisely because of accumulated roundoff error. Usually one would use a linear algebra library with dedicated algorithms for solving linear systems, which extract the solution(s) using more numerically stable methods than brute-force matrix inversion. These are also more efficient than inverting the matrix and then multiplying by the inverse to get the solution vector. Mathematically they are equivalent to matrix inversion, but numerically they are more stable and less prone to precision loss.

Having said that, though, added precision is always welcome, particularly when studying mathematical objects (as opposed to more practical applications like engineering, where 6-8 digits of precision in the result is generally more than good enough).

Of course, the most ideal implementation would use algebraic representations that can represent quantities exactly, but exact representations are not always practical: they are too slow for very large inputs, or existing libraries only support hardware floating-point types, or existing code would require a lot of effort to port to software arbitrary-precision floats. In such cases, squeezing as much precision out of your hardware as possible is a good first step towards a solution.

T

-- 
Time flies like an arrow. Fruit flies like a banana.
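[A small illustration of the point above, not from the original post: numpy's `linalg.solve` dispatches to a dedicated LAPACK routine (LU factorization with partial pivoting) rather than forming the inverse explicitly. The Hilbert matrix below is a standard ill-conditioned test case; the exact solution vector of ones is an assumption of the demo, not anything from the thread.]

```python
import numpy as np

# Build a 10x10 Hilbert matrix, a classic ill-conditioned system
# (entry (i, j) is 1 / (i + j + 1)).
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# Manufacture a right-hand side whose exact solution is all ones.
x_true = np.ones(n)
b = A @ x_true

# Preferred: a dedicated linear solver (LU with partial pivoting).
x_solve = np.linalg.solve(A, b)

# Brute force: explicit inversion followed by a matrix-vector product.
x_inv = np.linalg.inv(A) @ b

# The solver's residual is tiny (it is backward stable); the explicit
# inverse generally gives a larger residual on ill-conditioned systems.
print("solve residual:", np.linalg.norm(A @ x_solve - b))
print("inv   residual:", np.linalg.norm(A @ x_inv - b))
```

Note that neither approach rescues the forward error when the matrix is this badly conditioned; the point is only that the dedicated solver does not add avoidable roundoff on top of what the conditioning already costs.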
