On Sun, Feb 5, 2012 at 3:36 AM, Fraser Jackson <fraser.jack...@xtra.co.nz> wrote:
> I would reiterate what Henry has said about this problem. If you are trying
> it you should at least have a very careful read of the material in Numerical
> Recipes. You may find enough there to solve some problems.
>
> If you are going to persist you should begin by clarifying what class of
> matrices you want to develop your calculations for. NR provides some groups
> of interest. Then explore the detailed transformations which are required,
> and ways of ensuring that they do not suffer from serious rounding problems
He has explained this already, in the text you quoted. He is trying to
develop a mental model of what eigenvectors (and eigenvalues) are. I have
my own model, for example, but I do not know if it will be as rich as the
model he is generating for himself.

> The difficulty with most of the methods is that they depend on a sequence of
> steps operating on small subsets of the cells and they do not generalise
> readily to the larger scale array operations for which J is especially well
> suited.

That's a performance issue (or, if floating point values are used instead
of arbitrary precision rationals, a precision issue). And, even there, it
helps to see the problems arise before alternative approaches can be
properly appreciated.

But when working to develop a mental model of what things mean, you are
generally limited to small arrays. Large masses of numbers do little, by
themselves, for comprehension purposes. (Though, of course, they can
represent something else which might be helpful -- for example, a large
mass of numbers representing pixels might be useful when viewed as an
image.)

Advising the use of LAPACK here is like advising the use of +/ .* when a
person is asking for a multiplication table. It's a good solution to the
wrong problem.

--
Raul

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
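(As a concrete illustration of that analogy, here is a small J sketch; the 2-by-2 matrix and its eigenvector are just made-up illustrative values, not anything from the discussion above:)

```j
NB. a multiplication table wants the table adverb, not matrix product:
*/~ 1 2 3 4 5          NB. 5x5 multiplication table

NB. +/ .* is matrix product -- the right tool for a different job:
A =: 2 2 $ 2 0 0 3     NB. small diagonal matrix, eigenvalues 2 and 3
A +/ .* 1 0            NB. gives 2 0, i.e. 2 * (1 0): an eigenvector check
```

A small matrix like this is exactly the scale at which an eigenvector mental model can be checked by hand.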