>>>> I said the Hypre setup cost is not scalable,

I'd be a little careful here. Scaling for the matrix triple product is
hard, and hypre does put effort into scaling it. I don't have any data,
however. Do you?
> but it can be amortized over the iterations. You can quantify this
> just by looking at the PCSetUp time as you increase the number of
> processes. I don't think they have a good model for the memory usage,
> and if they do, I do not know what it is. However, generally Hypre
> takes more memory than agglomeration MG like ML or GAMG.

Agglomeration methods tend to have lower "grid complexity", that is,
smaller coarse grids, than classic AMG like in hypre. This is more of a
constant-factor issue than a scaling issue, though. You can address it
with parameters to some extent. But for elasticity, you want to at
least try, if not start with, GAMG or ML.

> Thanks,
>
>    Matt
>
>> Giang
>>
>> On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <[email protected]> wrote:
>>
>>> Hoang Giang Bui <[email protected]> writes:
>>>
>>> > Why P2/P2 is not for co-located discretization?
>>>
>>> Matt typed "P2/P2" when he meant "P2/P1".
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
>    -- Norbert Wiener
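For what it's worth, a minimal sketch of the comparison discussed above: run the same problem with each preconditioner and look at the PCSetUp event in the log output. The option names (-pc_type gamg, -pc_type hypre, -pc_hypre_type boomeramg, -log_view) are standard PETSc options; the executable name and mpiexec launcher here are placeholders for your own application.

```
# Hypothetical app binary; substitute your own. Repeat at increasing
# process counts and compare the "PCSetUp" row of the -log_view table.

# Smoothed-aggregation AMG (GAMG), usually a good first try for elasticity:
mpiexec -n 8 ./my_app -ksp_type cg -pc_type gamg -log_view

# Classical AMG via hypre BoomerAMG, for comparison:
mpiexec -n 8 ./my_app -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg -log_view
```

If setup time grows faster than solve time as you scale, but the number of iterations stays flat, more iterations per setup (e.g. reusing the preconditioner across solves) amortizes the setup cost, as noted above.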
