On Mar 23, 2011, at 9:29 AM, Roy Stogner wrote:

> On Wed, 23 Mar 2011, Vetter Roman wrote:
> 
>> I'm solving a transient nonlinear implicit system in parallel. In
>> each time step, a list of interacting element pairs is built, based
>> on which the residual and jacobian are constructed. These element
>> interactions spread over arbitrary processors, e.g. element 0
>> (processor 0) interacts with element 24 (processor 1), and in the
>> next step the interaction happens between element 0 and 30
>> (processors 0 and 2), and so on. In other words, any node may
>> interact with any other node on any processor at any time. The
>> interaction pairs are recalculated after every nonlinear iteration
>> step.

Right.  As Roy mentioned, we are doing something similar here at INL.  We can 
recalculate which dofs we need to get from off-processor... and if there were 
an API, we could add those to the send list.  But...

>> How can I implement this in libMesh? It works fine in serial mode,
>> where I can just fill in ANY entries in the residual vector and
>> jacobian matrix. The program fails to converge however in parallel
>> mode, and I suppose this is because entries outside the (initially
>> set) sparsity pattern are set, which then don't get communicated,
>> because libMesh is not expecting arbitrary long range interactions.

This is not something we're currently doing (adding more entries into the 
Jacobian beyond the normal sparsity pattern).

> Try in debug mode - I suspect you won't just see convergence failure,
> you'll see an assertion thrown when you try to access nonexistent data
> from a ghosted vector.

Yes - you should.

>> Will I need to deal with updated send_lists? How does this work?
> 
> Right now, it might not work at all - there's no provision or official
> support for user-updated send_lists.  We've been talking about an API
> for doing that nicely, but mostly discussing off-list since the
> application is sensitive.  Derek, would you mind going back through
> that discussion and reposting (censored if necessary) to the list
> where possible?

I don't really feel like that last discussion went anywhere useful.  We were 
trying to go for something very general that even the internal machinery in 
libMesh (like ParallelMesh) could use... but I don't think that's the right 
approach currently.

I've outlined some changes with John here: we're going to add a callback for 
tacking extra entries onto the send_list.  We'll see how that interface plays 
out and evolve it from there.
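
Roughly, the flavor we have in mind is something like the sketch below.  The 
name and signature are illustrative only, not a committed interface (it needs 
<vector> and <algorithm>); "extra_interaction_dofs" stands in for whatever 
your element-pair search produced this step:

  // Illustrative only: a user callback that DofMap would invoke after
  // building its own send_list, so the application can append the
  // global indices of extra off-processor dofs it wants ghosted.
  void augment_send_list (std::vector<unsigned int> & send_list,
                          const std::vector<unsigned int> & extra_interaction_dofs)
  {
    send_list.insert(send_list.end(),
                     extra_interaction_dofs.begin(),
                     extra_interaction_dofs.end());

    // The library would then sort and unique the list before
    // re-ghosting current_local_solution:
    std::sort(send_list.begin(), send_list.end());
    send_list.erase(std::unique(send_list.begin(), send_list.end()),
                    send_list.end());
  }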

BTW: if you want to do this (access off-processor solution entries) _now_, 
just use solution->localize() and pass in a std::vector... that will give you 
a full copy of the solution vector on every processor.  You can then index 
into it with off-processor dof indices and everything will work fine.  
Unfortunately, you pay a price for this: the memory for a full copy of the 
solution vector on every processor AND a lot of parallel communication (an 
all-to-all of the whole vector).  But we've found that for moderate processor 
counts (<100) it's not that big of a deal.
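
A minimal sketch, assuming "system" is your system object and 
"remote_dof_index" came out of your interaction search:

  std::vector<Number> full_solution;
  system.solution->localize(full_solution);  // all-to-all: every processor
                                             // ends up with a full copy

  // Any global dof index is now valid, whether or not we own it:
  Number u_remote = full_solution[remote_dof_index];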

But that's only half of what Roman is asking for... adding to the send_list 
ensures that off-processor entries in the solution vector get ghosted onto the 
correct processors in current_local_solution.  That is necessary for 
assembling residual / Jacobian entries for on-processor dofs... but it doesn't 
address the second part: new entries in the Jacobian itself.
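
To make the distinction concrete (the dof index variables here are 
illustrative):

  // Reading a ghosted off-processor value: works once far_dof is on the
  // send_list and current_local_solution has been re-ghosted.
  Number u_far = (*system.current_local_solution)(far_dof);

  // Writing a coupling between an on-processor dof and that far dof:
  // with PETSc this (local_dof, far_dof) entry must already exist in
  // the preallocated sparsity pattern -- that's the part not handled.
  system.matrix->add(local_dof, far_dof, coupling_value);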

To achieve that you will need to modify the SparsityPattern objects.  It's 
been a while since I've been in that system... and I don't really know where 
you would start.  You could think about adding some sort of callback to 
collect extra dof pairs, though... and see if you can get them put into the 
sparsity pattern.
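
For the sake of discussion, here's a sketch of what such a callback might 
have to do to play well with PETSc preallocation.  All the names are 
hypothetical, and duplicate couplings would need to be filtered out first:

  // Illustrative only: bump the on-/off-diagonal nonzero counts for
  // locally owned rows, so preallocation covers the long-range entries.
  void augment_sparsity (std::vector<unsigned int> & n_nz,  // diagonal-block nonzeros per local row
                         std::vector<unsigned int> & n_oz,  // off-diagonal nonzeros per local row
                         unsigned int first_local_dof,
                         unsigned int end_local_dof,
                         const std::vector<std::pair<unsigned int, unsigned int> > & extra_couplings)
  {
    for (std::size_t k = 0; k < extra_couplings.size(); ++k)
      {
        const unsigned int i = extra_couplings[k].first;
        const unsigned int j = extra_couplings[k].second;

        // Only rows we own get counted; a column inside our local range
        // lands in the diagonal block, anything else is off-diagonal.
        if (i >= first_local_dof && i < end_local_dof)
          {
            if (j >= first_local_dof && j < end_local_dof)
              n_nz[i - first_local_dof]++;
            else
              n_oz[i - first_local_dof]++;
          }
      }
  }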

> But the discussion may be academic for you: I don't have time or
> direct motivation to implement the result right now, so unless you or
> the INL people want to dig in we won't have a nice API any time soon.
> In the meantime, you'll probably need to hack into the system reinit
> code to allow you to manually add dofs to each send_list in between
> where DofMap generates the list and where reinit uses it to
> reinitialize vectors.

I think we will commit an interim API soon that allows you to add to the 
send_list.  The SparsityPattern is beyond our scope for now... although I 
would be happy to look through any patches for it.

Our previous discussions centered on trying to create ONE way to tell libMesh 
that dofs are related... and then having all the subsystems that need this 
kind of information (SparsityPattern, send_list, ParallelMesh, etc.) use that 
one data structure.  I still think this _could_ be possible... but there are 
some issues (like the fact that we have some dofs that need to be "linked" to 
every other dof in the simulation for the send_list but _not_ for the 
SparsityPattern).  Coming up with the "one true API" here is a bit beyond 
reach at the moment... I think we should take some baby steps first and see 
what falls out.

Derek