>> Along these lines, I am going to take a hard look at equation_systems_io.C. I
>> think it should be possible to write a solution restart file without
>> synchronizing the mesh. If so, then the user can happily write restart
>> files which can then be post-processed later to create GMV/Tecplot
>> output files.
> I guess that violates my "don't make users change their code" ideal,
> though, doesn't it? Even if the change is as simple as saying
> "prepend p to your file extensions".
Yeah, well, my philosophy on external API changes is a little more flexible
than yours. Basically, I believe the extent t
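Anyway, to make the restart idea above concrete, the usage I am picturing
is roughly this (a sketch only -- the flag and method names are from my
memory of the current tree, so don't hold me to the exact signatures):

  #include "equation_systems.h"
  #include "gmv_io.h"
  #include "mesh.h"

  // Sketch: write a restart file from every processor without
  // first serializing the distributed mesh.
  void write_restart (EquationSystems & es)
  {
    es.write ("restart.xda", libMeshEnums::WRITE);
  }

  // Sketch: a later (possibly serial) post-processing step that
  // turns the restart file into a GMV plot.
  void restart_to_gmv (Mesh & mesh, EquationSystems & es)
  {
    es.read ("restart.xda", libMeshEnums::READ);
    GMVIO(mesh).write_equation_systems ("plot.gmv", es);
  }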
On Wed, 7 Nov 2007, Benjamin Kirk wrote:
> Thanks primarily to Roy's heroic efforts we are a lot closer to having a
> truly parallel mesh class. It has taken a lot less time to get to this
> point than I thought.
No kidding. In our Sandia talk, Derek estimated 2 man-months (of
focused work, not calendar time).
On Wed, 7 Nov 2007, Benjamin Kirk wrote:
> I've taken a look at what you submitted last night and like pretty much all
> of it. The only changes I would like to make, if you agree, are
>
> (1) getting rid of the const_cast<> in mesh_output.C and
> equation_systems_io.C. I'd rather replace them
> Getting regular svn updates of header files that every file depends on
> can't be helping, huh? There's an updated dof_map.h and parallel.h in
> the pipe; I've enabled parallel constraint calculations when
> !mesh.is_serial(). Of course, we never hit that code path because we
> can't yet adaptively refine a distributed mesh.
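Regarding (1), the shape of the replacement I'd picture is something
like this -- hypothetical names, not the actual mesh_output.C code,
just to show the idea of taking a writable reference up front instead
of laundering one away inside a const member:

  #include "mesh_base.h"
  #include <string>

  class OutputSketch
  {
  public:
    // Accept the non-const reference at construction time, so a
    // mutation (e.g. temporarily gathering a distributed mesh
    // before the write) is visible in the interface.
    explicit OutputSketch (MeshBase & mesh) : _mesh(mesh) {}

    void write (const std::string & /*name*/)
    {
      // was: const_cast<MeshBase &>(mesh()).allgather();
      _mesh.allgather();  // assuming an allgather()-style call
      // ... write the temporarily-serialized mesh ...
    }

  private:
    MeshBase & _mesh;
  };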
Thanks primarily to Roy's heroic efforts we are a lot closer to having a
truly parallel mesh class. It has taken a lot less time to get to this
point than I thought.
The only major hurdle at this point to running non-AMR on meshes larger than
the memory available on any given node is the I/O problem.
On Wed, 7 Nov 2007, Benjamin Kirk wrote:
>> I guess that violates my "don't make users change their code" ideal,
>> though, doesn't it? Even if the change is as simple as saying
>> "prepend p to your file extensions".
>
>
> Yeah, well, my philosophy on external API changes is a little more flexible
> than yours.
Hi
Great work going on!
Roy Stogner writes:
> A parallel-friendly format (whether that means wacky multi-file
> partitioning or just eliminating element blocks) is different enough
> to merit if not require a new file extension.
Why not base the new file format on HDF5/PHDF5? That library is extremely
fast, versatile, and is designed for parallel I/O. And it is standard.
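The parallel write pattern is compact, too. Something like this
(an untested sketch from memory of the HDF5 docs, using the 1.6-era
names -- newer releases spell H5Dcreate with extra property lists):

  #include <hdf5.h>
  #include <mpi.h>

  // Collectively write one double-valued dataset, each rank
  // contributing an equal-sized contiguous slice.
  void write_slices (MPI_Comm comm, const double * buf,
                     hsize_t n_local, int rank, int nranks)
  {
    hid_t fapl = H5Pcreate (H5P_FILE_ACCESS);
    H5Pset_fapl_mpio (fapl, comm, MPI_INFO_NULL);
    hid_t file = H5Fcreate ("solution.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, fapl);

    hsize_t dims[1]  = { n_local * nranks };
    hsize_t start[1] = { n_local * rank };
    hsize_t count[1] = { n_local };

    hid_t filespace = H5Screate_simple (1, dims, NULL);
    hid_t dset = H5Dcreate (file, "solution", H5T_NATIVE_DOUBLE,
                            filespace, H5P_DEFAULT);
    H5Sselect_hyperslab (filespace, H5S_SELECT_SET, start,
                         NULL, count, NULL);
    hid_t memspace = H5Screate_simple (1, count, NULL);

    // Collective data transfer is where PHDF5 earns its keep.
    hid_t dxpl = H5Pcreate (H5P_DATASET_XFER);
    H5Pset_dxpl_mpio (dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite (dset, H5T_NATIVE_DOUBLE, memspace, filespace,
              dxpl, buf);

    H5Pclose (dxpl);  H5Sclose (memspace);  H5Sclose (filespace);
    H5Dclose (dset);  H5Fclose (file);      H5Pclose (fapl);
  }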
Although from a Unix-standards point of view we ought to be using
stderr for error messages, I don't really like seeing 4 or 8 copies of
every error when it's something like the messages in examples/ex*.C
that are guaranteed to be printed on all processors. Would anyone
mind if I changed those to only print from processor 0?
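Concretely, I mean something along these lines in the examples
(a sketch -- I'm assuming the libMesh::processor_id() helper here):

  #include <iostream>
  #include "libmesh.h"

  void usage_error (const char * progname)
  {
    // Keep the message on stderr per Unix convention, but emit it
    // once from processor 0 instead of once per MPI rank.
    if (libMesh::processor_id() == 0)
      std::cerr << "Usage: " << progname << " -d <dim>" << std::endl;
  }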
Looks like it has an LGPL-like license
http://hdf.ncsa.uiuc.edu/HDF5/doc/Copyright.html
I don't know if we want to depend on external libraries for core
functionality of libmesh though. PETSc is enough of a headache...
Depending on the size of the source and the complexity of the build process, we
could consider bundling it the way we do our other contributed packages.
Roy Stogner writes:
>
> Reading xdr_io.C scared me, and adding yet-another-case to it scares
> me too. But it does seem like getting rid of the element blocks would
> make processor-wise block I/O easier.
XdrIO is a good example of how code gets cobbled together over time.
It might be possible to clean it up while we are adding the new case.
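Without the element blocks, the access pattern becomes the usual
MPI-IO one: every processor writes its own contiguous run at an
offset computed from the counts below it. A bare sketch (not XdrIO
code, and the buffer layout is assumed, not the real format):

  #include <mpi.h>

  void write_my_block (MPI_Comm comm, const char * fname,
                       const double * buf, long long n_local)
  {
    int rank;
    MPI_Comm_rank (comm, &rank);

    // Exclusive prefix sum: how many entries precede mine?
    long long n_below = 0;
    MPI_Exscan (&n_local, &n_below, 1, MPI_LONG_LONG, MPI_SUM, comm);
    if (rank == 0)
      n_below = 0;  // Exscan leaves rank 0's output undefined

    MPI_File fh;
    MPI_File_open (comm, (char *) fname,  // older bindings want char*
                   MPI_MODE_CREATE | MPI_MODE_WRONLY,
                   MPI_INFO_NULL, &fh);
    MPI_File_write_at_all (fh, (MPI_Offset)(n_below * sizeof(double)),
                           (void *) buf, (int) n_local, MPI_DOUBLE,
                           MPI_STATUS_IGNORE);
    MPI_File_close (&fh);
  }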
On Wed, 7 Nov 2007, Martin Lüthi wrote:
> Why not base the new file format on HDF5/PHDF5? That library is extremely
> fast, versatile, and is designed for parallel I/O. And it is standard.
My only objection is that I really like having an ASCII version of our
file format. That's made debugging easier
Hi,
I am trying to compile libMesh 0.6.1 with PETSc 2.3.3 and OpenMPI. I do:
$ PETSC_DIR=/usr/lib/petsc PETSC_ARCH=linux-gnu-c-opt ./configure
--prefix=/usr --disable-laspack --disable-slepc --disable-sfc
--disable-gzstreams --disable-tecplot --disable-metis
--disable-parmetis --disable-tetgen --d
I'm using essentially an identical configuration. OpenMPI uses an ambitious
set of shared libraries that defy typical PETSc snooping.
The easy answer is to use the compiler wrappers provided with OpenMPI:
$ ./configure --with-cxx=mpicxx --with-cc=mpicc --with-f77=mpif77
...and all the other options.
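If you want to see exactly what the wrappers invoke (handy whenever
library snooping fails), OpenMPI's wrappers will tell you:

$ mpicxx --showme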