On May 25, 2011, at 12:54 PM, Roy Stogner wrote:

> I really wish you'd stopped after "reproducible"...

Heh... me too!

>> 2.  Using the old System::update() with a solution->close() at the
>> beginning is _not_ sufficient!  It still segfaults!
> 
> This is astonishing.
> 
> This is on PETSc 2.3.3 still?  Any chance you could give it a shot
> with PETSc 3.1, and/or a debug-compiled PETSc?

Nope - this is with PETSc 3.1.  I haven't tried a debug PETSc yet... but I
did do a run of the case above with a debug libMesh... and it wasn't helpful.
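
For reference, item 2 above amounts to something like this (just a sketch in
libMesh terms - header path and spellings may not match your tree - but the
old System::update() is basically just the localize() call, and the close()
is what I prepended):

    #include "libmesh/system.h"

    // "Old System::update() with a solution->close() at the beginning":
    void update_with_close(libMesh::System & sys)
    {
      sys.solution->close();  // flush any pending parallel sets/adds first

      // ...and this localize() is where it still segfaults:
      sys.solution->localize(*sys.current_local_solution,
                             sys.get_dof_map().get_send_list());
    }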

> I've been thinking of adding a libMeshInit handler for SIGSEGV, which
> would do a libmesh_write_traceout() and then hand off to any
> previously registered handler.  Would this be helpful, or do you
> already have a stack trace from the segfault?

I like the idea of adding a handler for segfaults that writes a stack trace!
Especially if it overrides the PETSc error handlers.  It wouldn't have been
helpful in this particular case, because I already got a stack trace from a
core dump... but it would definitely be helpful in the future.
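
Something like this would do it, I think (just a sketch: it assumes the
libmesh_write_traceout() you mentioned is callable from a handler, which
isn't strictly async-signal-safe):

    #include <csignal>

    extern void libmesh_write_traceout();  // the trace writer mentioned above

    namespace {

    void (*old_segv_handler)(int) = SIG_DFL;  // handler installed before ours

    void segv_handler(int sig)
    {
      libmesh_write_traceout();  // dump the stack trace to a traceout file

      // Hand off to any previously registered handler (e.g. PETSc's); if
      // there wasn't one, restore the default and re-raise so we still get
      // a core dump.
      if (old_segv_handler != SIG_DFL && old_segv_handler != SIG_IGN)
        old_segv_handler(sig);
      else
        {
          std::signal(SIGSEGV, SIG_DFL);
          std::raise(SIGSEGV);
        }
    }

    } // anonymous namespace

    // Called from LibMeshInit; std::signal() returns the previous handler,
    // which is exactly what we need to chain to later.
    void install_segv_handler()
    {
      old_segv_handler = std::signal(SIGSEGV, segv_handler);
    }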

> If you couldn't compile with Trilinos and Ghosted both on, most of our
> regression tests would fail.  libMesh doesn't just support
> PETSc or Trilinos, it supports PETSc-and-Trilinos, then defaults
> factory-built linear algebra to the former when both are enabled.
> The way our Trilinos interface is supposed to work is by implementing
> GHOSTED vectors as SERIAL - i.e. the old inefficient way we used to do
> all current_local_solution type vectors.
> 
> The trouble is that while most operations you'd want to perform on a
> GHOSTED vector work fine (just less efficiently) on a SERIAL vector,
> operator=(PARALLEL vector) is not yet one of them.
> 
> Anyway, that's why I haven't worried about the Trilinos problem: I
> think I understand the missing feature that's causing it, that would
> be an easy enough feature to add if we needed it, and anyway it'll be
> fixed automatically when we figure out how to fix (and can thus again
> safely use) localize().
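
(To make the missing operation concrete: operator=(PARALLEL vector) on a
SERIAL vector would boil down to a localize-everywhere, something like the
sketch below.  Variable names are made up - and this also shows why fixing
localize() fixes it for free.)

    // Gather every global entry onto every processor, then store it locally.
    std::vector<libMesh::Number> all_values;
    parallel_vec.localize(all_values);  // collective gather of the full vector

    for (unsigned int i = 0; i != all_values.size(); ++i)
      serial_vec.set(i, all_values[i]);
    serial_vec.close();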

I see...  Another good fix would be to actually make the Trilinos vectors
ghosted.  That can be done with the proper manipulations of the Epetra maps,
roughly along the lines of the sketch below.  I just don't think anyone has
put in the time...
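
A minimal sketch of the idea (all names here are hypothetical, but the
overlapping-map-plus-Import pattern is standard Epetra):

    #include <Epetra_MpiComm.h>
    #include <Epetra_Map.h>
    #include <Epetra_Vector.h>
    #include <Epetra_Import.h>

    void fill_ghosted(Epetra_MpiComm & comm,
                      int n_owned,   int * owned_gids,    // GIDs we own
                      int n_ghosted, int * ghosted_gids)  // owned + ghost GIDs
    {
      // A one-to-one map over the owned dofs, plus an overlapping map that
      // appends the ghost dofs.  (-1 lets Epetra sum the local counts; for
      // the overlapping map that total double-counts ghosts, which Epetra
      // tolerates for import purposes.)
      Epetra_Map owned_map  (-1, n_owned,   owned_gids,   0, comm);
      Epetra_Map ghosted_map(-1, n_ghosted, ghosted_gids, 0, comm);

      Epetra_Vector owned_vec  (owned_map);
      Epetra_Vector ghosted_vec(ghosted_map);

      // The importer maps source (owned) entries onto the target (ghosted)
      // layout, so one Import() fills our ghost slots from remote owners.
      Epetra_Import importer(ghosted_map, owned_map);
      ghosted_vec.Import(owned_vec, importer, Insert);
    }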

It's something that's been on my personal todo list for years....

> The Petsc-noMPI problem is much more troubling, but the bad news is
> that I've been swamped with other stuff and haven't looked into it
> closely yet.
> 
> The good news is that "other stuff" includes ParallelMesh, which is
> now starting to pass tests with adaptive coarsening.  The catch is
> that redistribute() still needs work, so you have to partition in
> serial (or read from a partitioned file, I guess?) and you're stuck
> without load-rebalancing.  There are probably bugs I haven't run into
> yet (in fact, I think there are bugs with redistribute() that go beyond
> the DofObject communication work we previously discussed), and if you
> guys have any time to play with it I'd appreciate that.  I intend to
> have some large adaptive ParallelMesh results by mid-October, but for
> now we're just working on it to enable some finer-grid uniform runs.

That's really great news!  We're starting to use ParallelMesh more and more
these days (we have another set of runs with 20 million nodes coming up this
weekend)... so any improvements there would be much appreciated!

We haven't yet needed adaptivity with ParallelMesh... but that's coming
pretty soon...

Derek

