Ben had the brilliant idea of doing boundary data output by creating a 
boundary mesh, solving a local system to project functions of interior 
data onto that mesh, then outputting the results.

Unfortunately, with certain partitionings, this can put *all* your 
boundary dofs on a single processor.  This processor wants to create 
matrices and vectors.  It now creates an MPI matrix (thanks to a bugfix 
of Ben's and one of my own; bad things happen if you try to do parallel 
operations on sequential matrices or mix them)... but how does it know 
to create an MPI vector?  All it sees is n_local == n_global, and it 
knows that we sometimes like creating sequential vectors on parallel 
runs, so it has no way to tell the two cases apart...

I think we need another argument to init.  Probably a boolean with a 
sane default value so as not to break API compatibility, but I'm not 
sure of the details yet.

Ben, you can see the current state of our merged code in 
~roystgnr/fins.devel *on the ICES filesystem* - but I'm posting to 
libmesh-devel because anyone's opinions would be welcome.
---
Roy

_______________________________________________
Libmesh-devel mailing list
Libmesh-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-devel