Well, assuming your data type is single-precision float, that's 4 bytes (32 bits) per element, so a 16000 * 3000 matrix is 48 million elements, or roughly 190 MB of RAM just for the matrix itself. On top of that you'll need RAM for the temporary work areas of the diagonalization (my numerical analysis fundamentals are somewhat sketchy, but if you're diagonalizing with Householder-type transformations plus Jacobi rotations, the extra storage grows as some power of the matrix order), so budget well beyond the raw matrix. Most distributed grids can certainly take that load, if that's where you're doing it. A quick back-of-the-envelope calculation is below.
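Here is a tiny sketch of that estimate in C; the 16000 x 3000 dimensions come from your mail, and the byte counts just use sizeof, so nothing here is GSL-specific:

#include <stdio.h>

int main(void)
{
    /* Dense 16000 x 3000 matrix from the original question. */
    const size_t rows = 16000, cols = 3000;
    const size_t n = rows * cols;   /* 48 million elements */

    /* sizeof(float) is typically 4 bytes, sizeof(double) typically 8. */
    printf("elements   : %zu\n", n);
    printf("as floats  : %.1f MB\n", n * sizeof(float)  / 1048576.0);
    printf("as doubles : %.1f MB\n", n * sizeof(double) / 1048576.0);
    return 0;
}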
If you're using doubles, double those figures: roughly 380 MB before workspace, which gets very tight on standard boxen. Either way you are hitting your machine's memory, not an intrinsic limit of the library (see the allocation check sketched below the quoted message).

On 1/26/06, Emmanuel Eckard <[EMAIL PROTECTED]> wrote:
>
> Greetings!
>
> Is there any known maximum size of matrices for the library to still be
> useful? I am trying things with matrices on the order of 16000 by 3000,
> and it fails to allocate the blocks. Is this dependent on the memory of
> the computer, or is it an intrinsic limitation of the library?
>
> Thank you very much in advance.
> --
> Emmanuel Eckard
> Physics Engineer, École Polytechnique de Lausanne
> Artificial Intelligence Laboratory (LIA), EPFL
> LIA/IC 1014 Ecublens, Switzerland
> +41 21 693 66 97
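To answer the allocation question directly, a minimal sketch for checking whether the allocation itself is what fails. gsl_matrix_alloc, gsl_matrix_free and gsl_set_error_handler_off are standard GSL calls; by default GSL's error handler aborts the program on allocation failure, so the handler has to be switched off to get a testable null pointer back:

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_matrix.h>

int main(void)
{
    /* With the handler off, gsl_matrix_alloc returns NULL on failure
       instead of invoking the (aborting) default error handler. */
    gsl_set_error_handler_off();

    gsl_matrix *m = gsl_matrix_alloc(16000, 3000);
    if (m == NULL) {
        fprintf(stderr, "allocation of 16000 x 3000 doubles failed\n");
        return 1;
    }

    printf("allocation succeeded\n");
    gsl_matrix_free(m);
    return 0;
}

If this little program already fails on your machine, the matrix simply does not fit in your process's memory, and the library itself is not the limit.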
