I finally got a chance to get back to this problem. 

I played around with the different orderings available in PETSc through 
-pc_factor_mat_ordering_type, and Quotient Minimum Degree (qmd) did the 
trick: I can now solve the system with a memory footprint of about 7.5 GB in 
about 4.5 minutes, whereas every other ordering type fails with an 
out-of-memory error. 
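
In case it helps anyone searching the archives, the options I ended up with 
look roughly like this (the executable name is a placeholder, and the 
ksp/pc settings are just the usual direct-solve configuration, not 
necessarily exactly what your application needs):

    ./myapp -ksp_type preonly -pc_type lu \
            -pc_factor_mat_ordering_type qmd

The same ordering can also be requested in code with 
PCFactorSetMatOrderingType(pc, MATORDERINGQMD) before the solve is set up.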

-Manav


> On Feb 6, 2015, at 10:41 PM, Jed Brown <j...@jedbrown.org> wrote:
> 
> Manav Bhatia <bhatiama...@gmail.com> writes:
> 
>> Ok. Just so that I understand your point:  you are saying that because
>> petsc (and other advanced direct solvers) will calculate a new
>> ordering for the matrix factorization, it does not help to calculate
>> it in advance. 
> 
> Correct.  Though the PETSc native LU is probably the most basic solver
> you'll ever use.  The third party packages (-pc_type umfpack, mumps,
> superlu_dist, etc.) are much more advanced.
> 
>> However, for a more basic direct solver, which does not calculate the
>> reordering as a part of its factorization, it may be beneficial to
>> calculate this reordering in advance, correct?
> 
> You'll probably never find such an implementation unless you write it
> yourself and either ignore the literature or intentionally stop halfway
> on the implementation.
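
PS: Following up on Jed's point about the third-party packages, switching to 
one of them is also just a runtime option. Something like the following 
should work, though check the docs for your PETSc version for the exact 
option name and for which packages your PETSc build was configured with:

    ./myapp -ksp_type preonly -pc_type lu \
            -pc_factor_mat_solver_package mumps

(superlu_dist and umfpack are selected the same way; MUMPS and SuperLU_DIST 
can also run in parallel, while UMFPACK is serial.)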

