On Sat, Jul 19, 2008 at 5:42 PM, Nasser Mohieddin Abukhdeir
<[EMAIL PROTECTED]> wrote:
> Hi:
>I got a nasty surprise after I tried to use Hermite elements in my
> simulation. I got this error when I reran in debug mode:
>
> src/solvers/system_projection.C, line 665, compiled Jul 14 2008 at 22:
Having thought about it a little more, the stuff about needing to have the
inner domain boundaries line up across processor boundaries is clearly wrong.
(I should draw things on the white board first.)
Given that, I think a 2-level partitioning should work just fine.
Bill.
--
Bill Barth, Ph.D.,
Hi:
I got a nasty surprise after I tried to use Hermite elements in my
simulation. I got this error when I reran in debug mode:
src/solvers/system_projection.C, line 665, compiled Jul 14 2008 at 22:27:45
terminate called after throwing an instance of 'libMesh::LogicError'
and figured out th
Dear Developers and Users,
Firstly, thank you for this piece of software! I have been using the Debian
package for a month now,
and have been impressed with its capabilities. However, since I need to
enable VTK and SLEPc, I have
been trying to compile libmesh from sources. I need SLEPc since I am
Ah, well. That clarifies things a bunch. If you've got a decent partition for
all the cores (ignoring on-node and off-node distinctions) from metis/parmetis,
perhaps the right way to deal with things is to partition the graph of the
subdomains for distribution to the compute nodes in a way th
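To make that coarse step concrete, here is a toy illustration of partitioning a small "graph of subdomains" across compute nodes with METIS. It uses the current METIS 5 C API; the graph, sizes, and variable names are purely illustrative and not taken from the thread.

    #include <metis.h>
    #include <cstdio>

    int main ()
    {
      // Toy "graph of subdomains": 4 subdomains connected in a ring,
      // given in the CSR (xadj/adjncy) form METIS expects.
      idx_t nvtxs    = 4;                  // number of subdomains
      idx_t ncon     = 1;                  // one balance constraint
      idx_t xadj[]   = {0, 2, 4, 6, 8};
      idx_t adjncy[] = {1, 3, 0, 2, 1, 3, 0, 2};

      idx_t nparts = 2;                    // number of compute nodes
      idx_t objval = 0;                    // edge cut on return
      idx_t part[4];                       // node assignment per subdomain

      int status = METIS_PartGraphKway (&nvtxs, &ncon, xadj, adjncy,
                                        NULL, NULL, NULL,      // no weights
                                        &nparts, NULL, NULL, NULL,
                                        &objval, part);

      if (status == METIS_OK)
        for (idx_t i = 0; i < nvtxs; ++i)
          std::printf ("subdomain %d -> compute node %d\n",
                       (int)i, (int)part[i]);

      return 0;
    }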
Derek,
The higher latency doesn't really surprise me. Things often get fast in
MVAPICH1 before they do in MVAPICH2.
Without more details about what Ben suggests, I'm not sure I can offer any
constructive comments. NUMA and other architectural quirks have effects that
are weird and hard to predict. Also, it
On Tue, Jul 22, 2008 at 10:56 AM, Benjamin Kirk <[EMAIL PROTECTED]> wrote:
> Ultimately I would like to generalize the partitioner interface to work on
> an input iterator range.
>
> You could then do something similar to this...
>
> // partition into NNodes
> Partition(mesh.active_elements_begin()
> It's also unclear whether it's worth doing this sort of thing at all,
> given that MVAPICH is already multi-core aware and the on-node communications
> are done via shared memory.
Yeah, that is what I would like to take advantage of. My thought process is
that you may want to group "nearest
Ultimately I would like to generalize the partitioner interface to work on
an input iterator range.
You could then do something similar to this...
// partition into NNodes
Partition(mesh.active_elements_begin(),
          mesh.active_elements_end(),
          n_nodes);
// partition each subdomai
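For concreteness, here is a rough sketch of what an iterator-range interface and the two passes could look like. Everything below (ToyElem, partition_range, the round-robin assignment) is made up for illustration and is not the libMesh API; a real implementation would hand each subdomain's element connectivity graph to Metis/ParMetis rather than dealing elements out modulo the part count.

    #include <iostream>
    #include <vector>

    struct ToyElem
    {
      unsigned int id;
      unsigned int pid;   // current part assignment (node, then final rank)
    };

    // Assign the elements in [first, last) to n_parts parts.  Round robin
    // stands in for a real graph partitioner to keep the sketch short.
    template <typename Iter>
    void partition_range (Iter first, Iter last, unsigned int n_parts)
    {
      unsigned int k = 0;
      for (Iter it = first; it != last; ++it, ++k)
        it->pid = k % n_parts;
    }

    int main ()
    {
      const unsigned int n_nodes        = 4;  // compute nodes
      const unsigned int cores_per_node = 2;

      std::vector<ToyElem> elems (16);
      for (unsigned int i = 0; i < elems.size(); ++i)
        elems[i].id = i;

      // Pass 1: partition the whole range into one part per compute node.
      partition_range (elems.begin(), elems.end(), n_nodes);

      // Remember the pass-1 (node) assignment before overwriting pid.
      std::vector<unsigned int> node_of (elems.size());
      for (unsigned int i = 0; i < elems.size(); ++i)
        node_of[i] = elems[i].pid;

      // Pass 2: split each node's subdomain into one part per core.
      for (unsigned int node = 0; node < n_nodes; ++node)
        {
          unsigned int k = 0;
          for (unsigned int i = 0; i < elems.size(); ++i)
            if (node_of[i] == node)
              elems[i].pid = node*cores_per_node + (k++ % cores_per_node);
        }

      for (unsigned int i = 0; i < elems.size(); ++i)
        std::cout << "elem " << elems[i].id
                  << " -> rank " << elems[i].pid << "\n";
    }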
Definitely interesting numbers. What I find most interesting is that
MVAPICH2 has higher latency than MVAPICH1... any ideas about that?
Do you have an idea about how you would actually implement this using
Metis / ParMetis?
Derek
On Jul 22, 2008, at 9:03 AM, Benjamin Kirk wrote:
> Check ou
Check out attached...
I've been doing some MPI profiling on my 4-socket, dual-core per node
Opteron cluster. I've been curious for a while about "multilevel domain
decomposition" for this class of architectures - e.g.
(1) partition into the number of nodes
(2) partition each subdomain into the nu
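Filling in the counts for a machine like that (my numbers, purely illustrative): with, say, 8 such 8-core nodes, step (1) yields 8 subdomains and step (2) splits each into 8 pieces, 64 parts in all. Assuming consecutive MPI ranks are packed onto the same node (block placement, not something stated in the thread), the node and core a rank belongs to fall out of simple integer arithmetic:

    #include <iostream>
    #include <utility>

    // Block placement assumed: ranks 0..cores_per_node-1 on node 0, etc.
    inline std::pair<unsigned int, unsigned int>
    node_and_core (unsigned int rank, unsigned int cores_per_node)
    {
      return std::make_pair (rank / cores_per_node,   // which compute node
                             rank % cores_per_node);  // which core on that node
    }

    int main ()
    {
      const std::pair<unsigned int, unsigned int> p = node_and_core (13, 8);
      std::cout << "rank 13 -> node " << p.first
                << ", core " << p.second << "\n";   // node 1, core 5
    }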