> >> etc., while the MPI programs are running.
> >> This tends to oversubscribe the cores, and may lead
> >> to crashes.
> >>
> >> 2) RAM:
> >> Can you mon
It should be easy to make a test application, but you'll need to have
OpenFOAM installed.
Mattijs
--
Mattijs Janssens
OpenCFD Ltd.
9 Albert Road,
Caversham,
Reading RG4 7AN.
Tel: +44 (0)118 9471030
Email: m.janss...@opencfd.co.uk
URL: http://www.OpenCFD.co.uk
> I can see some ways that might work, but they are pretty complex - for
> example, I could create an intercept library that loads a real MPI library
> explicitly and does whatever needs to be done (for example, translating
> MPI_Comm parameters). Does anyone know of anything that might help?
--
> Brian M. Adams, PhD (bria...@sandia.gov)
> Optimization and Uncertainty Estimation
> Sandia National Laboratories, Albuquerque, NM
> http://www.sandia.gov/~briadam
--
Mattijs Janssens
Sounds like a typical deadlock situation: all processors are waiting for one
another.
I'm not a specialist, but from what I know, if the messages are small enough
they'll be offloaded to the kernel/hardware and there is no deadlock. That's
why it might work for small messages and/or certain MPI implementations.