Dear Mark Abraham and all,
We tried other benchmarking systems, such as d.dppc on 4 processors,
but we have the same problem (one process uses about 100% CPU, the others 0%).
After a while we receive the following error:
Working directory is /localuser/armen/d.dppc
Running on host
This seems to be a problem with your MPI library. Test whether other
MPI programs have the same problem. If it is not GROMACS-specific,
please ask on the mailing list of your MPI library. If it only happens with
GROMACS, be more specific about what your setup is (what MPI library, what
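A minimal launcher check of this kind, independent of GROMACS, might look like the following; the --hostfile option and the file name "hosts" assume an Open MPI-style mpirun, and hello_mpi.c stands for any trivial MPI example program, so adjust to whatever MPI library is actually installed:

# every rank should report a different host/slot, not just one:
mpirun -np 4 --hostfile hosts hostname

# likewise, compile and run a trivial MPI "hello world" the same way:
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 --hostfile hosts ./hello_mpi

If these already put all processes on one core, the problem is in the MPI setup rather than in GROMACS.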
Dear Roland,
We need to run GROMACS across the nodes of our cluster (in order to
use all the computational resources of the cluster); that is why we
need MPI (instead of using threads or OpenMP within a single SMP node).
I can run simple MPI examples, so I guess the problem is on the
Dear all,
I would like to inform you that I have installed the GROMACS 4.0.7
package on the cluster (the nodes of the cluster are 8-core Intel machines,
OS: RHEL4 Scientific Linux) with the following steps:
yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi
Also I
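For completeness, the usual remaining steps after a configure line like that would be roughly the following; the binary name mdrun_mpi (which depends on the --program-suffix setting), the process count of 4, and the topol.tpr input file are assumptions for illustration, not taken from the original setup:

make
make install
# launch the MPI-enabled mdrun on 4 processes (Open MPI-style syntax):
mpirun -np 4 /localuser/armen/gromacs/bin/mdrun_mpi -s topol.tpr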