On Wed, 9 Feb 2011, Vetter Roman wrote:
> VTK output crashes when run in parallel (mpirun -n 2 ...). The following
> assertion fails:
>
> Assertion `it!=_global_to_local_map.end()' failed.
> [...]/libmesh/include/numerics/petsc_vector.h, line 1073, [...]
Yup - the VTK support was a contribution
On Tue, 8 Feb 2011, Derek Gaston wrote:
> On Feb 8, 2011, at 1:20 PM, Derek Gaston wrote:
>
>> So while I have you looking at this piece of code, how would I go about
>> injecting things into the send_list? I won't know which dofs to inject
>> until after distribute_dofs... but that
Just to follow up this point:
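One way to handle that is a user callback which appends extra dof indices to
the send_list; it is invoked after distribute_dofs() has assigned indices,
which is exactly when the dofs to inject become known. A minimal sketch,
assuming a DofMap hook along the lines of attach_extra_send_list_function()
with a (std::vector<unsigned int> &, void *) callback signature (the exact
name and signature are an assumption here and should be checked against
dof_map.h):

  #include <vector>

  // Hypothetical callback: 'context' is assumed to point at a
  // std::vector<unsigned int> the application fills with the dof
  // indices it wants ghosted on this processor.
  void augment_send_list (std::vector<unsigned int> & send_list,
                          void * context)
  {
    const std::vector<unsigned int> & extra_dofs =
      *static_cast<const std::vector<unsigned int> *>(context);

    // Append the extra entries to the send_list.
    send_list.insert (send_list.end (),
                      extra_dofs.begin (), extra_dofs.end ());
  }

  // Registration sketch (method name is an assumption):
  //
  //   sys.get_dof_map ().attach_extra_send_list_function
  //     (&augment_send_list, &extra_dofs);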
>> I would like to use a parallel direct solver from PETSc, which I have
>> compiled with a full set of solvers (UMFPACK, SuperLU, etc.), but the
>> command-line arguments -mat_type superlu etc. are all rejected by PETSc
>> 3.1.4 as unrecognized, so what is the co
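For what it is worth, in the PETSc 3.x series the third-party direct solvers
are no longer selected with -mat_type. Under PETSc 3.1 the usual pattern
(option names quoted from memory, so please verify against the PETSc
documentation for your exact version) is to keep the default matrix type and
choose the factorization package on the preconditioner, e.g.

  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist

where superlu_dist is the parallel package; superlu and umfpack only factor
on a single process.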
All,
Currently our adjoints framework assembles the discrete adjoint
problem by transposing the matrix representing the discrete forward
problem. This is simple and works as long as we have (or only desire)
the discrete adjoint for an adjoint-consistent formulation. Of course,
this might not b
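To make the construction concrete (the notation here is generic; this is a
sketch of the standard discrete-adjoint setup, not the exact code path):
given a discrete forward problem

  A u = f    (A the forward matrix, or the Jacobian in the nonlinear case)

and a functional J(u), the discrete adjoint z comes from the transposed
system

  A^T z = (dJ/du)^T.

That z is always the adjoint of the discrete operator, but it only
corresponds to a discretization of the continuous adjoint when the
formulation is adjoint-consistent, which is the limitation being raised here.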
On Fri, 4 Feb 2011, Saurabh Srivastava wrote:
I looked into the file and am quite worried about this implementation. Have
you followed any particular implementation of Hierarchics on triangles? I
presume the triangles are not trivially tensor-product based, and so
analytical derivatives are a good effort
Dear libMesh developers,
VTK output crashes when run in parallel (mpirun -n 2 ...). The following
assertion fails:
Assertion `it!=_global_to_local_map.end()' failed.
[...]/libmesh/include/numerics/petsc_vector.h, line 1073, [...]
The problem is reproducible with the following minimal working example:
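A minimal sketch of such a reproducer (a reconstruction, not the reporter's
exact code; the header names, Mesh construction, and VTKIO call follow common
libMesh usage and may need adjusting for a given version):

  #include "libmesh.h"
  #include "mesh.h"
  #include "mesh_generation.h"
  #include "equation_systems.h"
  #include "explicit_system.h"
  #include "vtk_io.h"

  using namespace libMesh;

  int main (int argc, char ** argv)
  {
    LibMeshInit init (argc, argv);

    // A small distributed 2D mesh.
    Mesh mesh;
    MeshTools::Generation::build_square (mesh, 10, 10);

    // One explicit system with a single first-order variable; no solve
    // is required to reproduce the crash.
    EquationSystems es (mesh);
    es.add_system<ExplicitSystem> ("test").add_variable ("u", FIRST);
    es.init ();

    // Writing the solution in parallel (mpirun -n 2 ...) is what trips
    // the PetscVector assertion quoted above.
    VTKIO (mesh).write_equation_systems ("out.pvtu", es);

    return 0;
  }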