UMFPACK is quite good on a single node, but: is this 2D or 3D? How many equations (a single PDE or a coupled system)? How large a mesh? How much memory does your machine have?
It's possible you're hitting the limits of a direct method on your machine and need to explore iterative methods, which can be fussy but powerful, and use far less memory.

Sent from my iPhone

> On Jul 22, 2014, at 6:18 AM, "Jan Blechta" <[email protected]> wrote:
>
> [please, keep [email protected] in CC]
>
> On Tue, 22 Jul 2014 12:15:45 +0200
> Maria Cristina Colombo <[email protected]> wrote:
>
>> Is there a way to continue using
>>
>>> problem = MyNonlinearProblem(L,a,bc)
>>> solver = NewtonSolver()
>>> solver.parameters["linear_solver"] = "lu"
>>> solver.parameters["convergence_criterion"] = "incremental"
>>> solver.parameters["relative_tolerance"] = 1e-6
>>
>> without having that problem?
>
> Well, I don't know. My point was that LU solvers tend to have problems
> when the underlying equations are stiff/difficult to solve
> (well-posedness is close to being lost). So you could try tweaking the
> parameters of your problem (including spatial/time resolution, the
> time-stepping scheme, and regularization parameters, if any) to make
> the problem more numerically stable for LU factorization.
>
> Nevertheless, I would recommend trying
>
>   solver.parameters["linear_solver"] = "mumps"
>
> I don't see a reason why this would be unacceptable for you.
>
> Jan
>
>> I'm not understanding what you mean :(
>>
>>
>> 2014-07-22 11:51 GMT+02:00 Jan Blechta <[email protected]>:
>>
>>> On Tue, 22 Jul 2014 11:19:58 +0200
>>> Maria Cristina Colombo <[email protected]> wrote:
>>>
>>>> Dear all,
>>>>
>>>> I'm trying to solve a nonlinear problem on a very big mesh using the
>>>> Newton solver. These are the relevant lines of my code:
>>>>
>>>> problem = MyNonlinearProblem(L,a,bc)
>>>> solver = NewtonSolver()
>>>> solver.parameters["linear_solver"] = "lu"
>>>> solver.parameters["convergence_criterion"] = "incremental"
>>>> solver.parameters["relative_tolerance"] = 1e-6
>>>>
>>>> I encountered this error:
>>>>
>>>> UMFPACK V5.4.0 (May 20, 2009): ERROR: out of memory
>>>>
>>>> Traceback (most recent call last):
>>>>   File "CH_BC_Tdip.py", line 170, in
>>>>     solver.solve(problem, u.vector())
>>>> RuntimeError:
>>>>
>>>> *** -------------------------------------------------------------------------
>>>> *** DOLFIN encountered an error. If you are not able to resolve this issue
>>>> *** using the information listed below, you can ask for help at
>>>> ***
>>>> ***     [email protected]
>>>> ***
>>>> *** Remember to include the error message listed below and, if possible,
>>>> *** include a *minimal* running example to reproduce the error.
>>>> *** -------------------------------------------------------------------------
>>>> *** Error:   Unable to successfully call PETSc function 'KSPSolve'.
>>>> *** Reason:  PETSc error code is: 76.
>>>> *** Where:   This error was encountered inside
>>>> ***          /build/buildd/dolfin-1.4.0+dfsg/dolfin/la/PETScLUSolver.cpp.
>>>> *** Process: unknown
>>>>
>>>> How can I fix the problem? I have found out that I should switch to
>>>> MUMPS as the linear solver, but I would prefer to keep using LU. Is
>>>> there a way to
>>>
>>> MUMPS is also an LU/Cholesky solver.
>>>
>>>> save memory? Is it related to the 4 GB limit of UMFPACK? I'm new to
>>>> DOLFIN and I don't know how to solve my model.
>>>
>>> I don't know much about UMFPACK, but with an LU solver you usually
>>> run out of memory when the problem is stiff and a lot of pivoting is
>>> required for accuracy, so the fill-in is large. Try fixing the
>>> stiffness of your problem.
>>>
>>> I'd recommend using MUMPS, where one can set plenty of MUMPS options
>>> (see the MUMPS manual) from DOLFIN by
>>>
>>>   PETScOptions.set("mat_mumps_icntl_foo", bar)
>>>   PETScOptions.set("mat_mumps_cntl_foo", bar)
>>>
>>> But generally, one should switch to Cholesky, or even
>>> positive-definite Cholesky, when the problem is symmetric or SPD,
>>> respectively. The theory of factorization is much stronger there and
>>> the solvers' robustness reflects that.
>>>
>>> Jan
>>>
>>>>
>>>> Thanks
>>>>
>>>> Cristina
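For reference, a minimal sketch of the MUMPS route suggested above, assuming a DOLFIN 1.4-era Python interface with PETSc built against MUMPS. The specific option shown (mat_mumps_icntl_14, MUMPS's working-space relaxation percentage) is an illustrative choice, and MyNonlinearProblem, u are the user's own objects from the original code:

    from dolfin import NewtonSolver, PETScOptions

    # Pass controls to MUMPS through PETSc before solving; ICNTL(14) is the
    # percentage increase of the estimated working space, which can help when
    # the factorization runs out of memory (illustrative value).
    PETScOptions.set("mat_mumps_icntl_14", 40)

    solver = NewtonSolver()
    solver.parameters["linear_solver"] = "mumps"   # still an LU factorization, just a different backend
    solver.parameters["convergence_criterion"] = "incremental"
    solver.parameters["relative_tolerance"] = 1e-6

    # problem = MyNonlinearProblem(L, a, bc)   # as in the original code
    # solver.solve(problem, u.vector())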

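If a direct factorization still does not fit in memory, the iterative route mentioned at the top of the thread would look roughly as below. This is only a sketch: the solver and preconditioner names ("gmres", "ilu") and the nested "krylov_solver" parameter set are assumed from the DOLFIN 1.4-era NewtonSolver, and a Krylov method typically needs a problem-specific preconditioner to converge on a stiff system:

    from dolfin import NewtonSolver

    solver = NewtonSolver()
    solver.parameters["linear_solver"] = "gmres"   # Krylov method instead of a direct LU
    solver.parameters["preconditioner"] = "ilu"    # simple choice; problem-dependent
    solver.parameters["convergence_criterion"] = "incremental"
    solver.parameters["relative_tolerance"] = 1e-6

    # Tolerances for the inner linear solves (nested parameter set).
    solver.parameters["krylov_solver"]["relative_tolerance"] = 1e-8
    solver.parameters["krylov_solver"]["maximum_iterations"] = 1000

    # solver.solve(problem, u.vector())   # 'problem' and 'u' as in the original code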