On Mon, Mar 4, 2019 at 08:44, Bruno Blais <[email protected]> wrote:
> I'm using the wrapper, so I guess by default that means it is using the
> AztecOO stack of solvers?
Yes, that's right. You won't get any speedup using OpenMP with AztecOO; you need to switch to the Tpetra stack and Belos to use OpenMP (but we don't have wrappers for the whole Tpetra stack).

>> 2) Why do you think that OpenMP would be faster than MPI? MPI is usually
>> faster than OpenMP unless you are very careful about your data management.

> My original idea was that since in shared memory parallelism you could
> precondition a larger chunk of the matrix as a whole, the ILU
> preconditioning would be more efficient in a shared-memory context than in a
> distributed one. Thus you would need fewer GMRES iterations to solve your
> system. It seems I am wrong :) ?

Using larger blocks for ILU preconditioning will decrease the number of GMRES iterations, but you will spend more time in ILU, so it is hard to say whether it is worth it.

Best,

Bruno

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
