[gmx-users] GROMACS 4.6v - Myrinet2000

2013-04-08 Thread Hrachya Astsatryan

Dear all,

We have installed the latest version of GROMACS (version 4.6) on our
cluster with the following command:


 * cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/backup/sicnas/gromacs
   -DGMX_BUILD_OWN_FFTW=ON -DGMX_PREFER_STATIC_LIBS=ON

The cluster interconnect is Myrinet 2000 with the MX driver
(and MPICH 1.3.1).
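
In case CMake did not pick up the MX-enabled MPICH compiler wrapper on its
own, the build could be pointed at it explicitly, for example (a sketch only;
the wrapper path is an assumption based on the mpirun location used in the
job script below):

cmake .. -DGMX_MPI=ON -DCMAKE_C_COMPILER=/opt/mpi/bin/mpicc \
    -DCMAKE_INSTALL_PREFIX=/backup/sicnas/gromacs \
    -DGMX_BUILD_OWN_FFTW=ON -DGMX_PREFER_STATIC_LIBS=ON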


The job script (see below) and several different simulations show that the
performance is inadequate (we would expect it to be about 2-3 times higher).


Could you please help us overcome this problem so that we get better
performance?


With regards,
Hrach

#PBS -l nodes=8:ppn=2
##PBS -l walltime=360:00
#PBS -q armcluster
#PBS -e 33mM_16_new.err
#PBS -o 33mM_16_new.log
## Specify the shell to be bash
#PBS -S /bin/bash

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/backup/sicnas/gromacs/lib:/opt/mpi/lib


/opt/mpi/bin/mpirun -machinefile /backup/sicnas/npt101/machine -np 16 \
    /backup/sicnas/gromacs/bin/mdrun_mpi -s /backup/sicnas/npt101/topol.tpr -v
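
Two quick checks that might help narrow the problem down (a sketch; run in
the directory where the job writes md.log):

# confirm which MPI library mdrun_mpi is actually linked against
ldd /backup/sicnas/gromacs/bin/mdrun_mpi | grep -i mpi
# the timing and performance summary is printed at the end of the log
tail -n 40 md.log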






--
Hrachya Astsatryan
Head of HPC Laboratory,
Institute for Informatics and Automation Problems,
National Academy of Sciences of the Republic of Armenia
1, P. Sevak str., Yerevan 0014, Armenia
t: 374 10 284780
f: 374 10 285812
e: hr...@sci.am
skype: tighra
--


Re: [gmx-users] gmx 4.6 mpi installation through openmpi?

2013-04-08 Thread Hrachya Astsatryan
Dear Zhikun Cai,

Thank you for your quick response.


On 4/8/13 11:15 AM, Zhikun Cai wrote:
 Hi, see the installation instructions with CMake here:

 http://www.gromacs.org/Documentation/Installation_Instructions

 I guess that maybe you need to specify your OpenMPI and FFTW installation
 directories using the option CMAKE_PREFIX_PATH.
 For example, my OpenMPI and FFTW were first installed in
 /home/ucaizk/ComTools/openmpi and /home/ucaizk/ComTools/fftw.
 Then I installed GROMACS with the command lines below:

  $ tar -xzvf gromacs-4.6.1.tar.gz
  $ mkdir build
  $ cd build
  $ CMAKE_PREFIX_PATH=/home/ucaizk/ComTools/openmpi:/home/ucaizk/ComTools/fftw \
    cmake -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/ucaizk/ComTools/gromacs ../gromacs-4.6.1
  $ make
  $ make install

I followed the equivalent steps here. I downloaded the sources with

wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.1.tar.gz

then unpacked them and created the build directory as above. Since I let
GROMACS download and build FFTW itself with -DGMX_BUILD_OWN_FFTW=ON, I only
pointed CMAKE_PREFIX_PATH at our MPI installation and configured with:

export CMAKE_PREFIX_PATH=/opt/mpi
cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/backup/sicnas/gromacs -DGMX_BUILD_OWN_FFTW=ON

(I want to install into /backup/sicnas/gromacs, which is shared by the nodes.)
make and make install complete without errors, but when I start mdrun_mpi I
get the following error:

/backup/sicnas/gromacs/bin/mdrun_mpi: error while loading shared
libraries: libblas.so.3: cannot open shared object file: No such file or
directory
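
(A check along these lines would show whether BLAS is simply missing from the
library search path on the compute nodes; a sketch only, the directory shown
is just an example:)

# locate libblas.so.3 on a compute node
ldconfig -p | grep libblas
# add the directory found above to the job environment, alongside the
# GROMACS and MPI library paths
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64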

 When the above is done, cd to your home directory and add one line to your
 .bashrc file:
  export PATH=$PATH:/home/ucaizk/ComTools/gromacs/bin

 Restart the bash shell, and you are all done!

 Hope it helps!

 Zhikun


 On Fri, Apr 5, 2013 at 11:12 PM, 라지브간디 ra...@kaist.ac.kr wrote:

 Dear gmx users,


 I was able to install gmx 4.6.1 without the MPI option on my cluster, whereas
 the MPI build fails and gives the following error (I used the command line
 cmake .. -DGMX_MPI=ON -DGMX_BUILD_OWN_FFTW=ON):


 CMake Error at cmake/gmxManageMPI.cmake:161 (message):
   MPI support requested, but no MPI compiler found.  Either set the
   C-compiler (CMAKE_C_COMPILER) to the MPI compiler (often called mpicc),
 or
   set the variables reported missing for MPI_C above.
 Call Stack (most recent call first):
   CMakeLists.txt:494 (include)




 I have also installed OpenMPI version 1.5. Running 'which mpirun' shows
 /usr/bin/mpirun, and OpenMPI itself seems to be installed in /usr/bin/openmpi/.


 I don't know how to make CMake find this OpenMPI installation.


 I would appreciate any suggestions. Thanks.
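
 (Following the hint in the CMake error above, one possible fix, sketched here
 under the assumption that the OpenMPI compiler wrapper is called mpicc and is
 on the PATH, is to tell CMake to use it as the C compiler:)

  cmake .. -DGMX_MPI=ON -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=mpicc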




-- 
Hrachya Astsatryan
Head of HPC Laboratory,
Institute for Informatics and Automation Problems,
National Academy of Sciences of the Republic of Armenia
1, P. Sevak str., Yerevan 0014, Armenia
t: 374 10 284780
f: 374 10 285812
e: hr...@sci.am
skype: tighra


Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Hrachya Astsatryan

Dear Mark Abraham and all,

We used other benchmark systems, such as d.dppc on 4 processors, but we see
the same problem (one process uses about 100% CPU, the others 0%).

After a while we receive the following error:

Working directory is /localuser/armen/d.dppc
Running on host wn1.ysu-cluster.grid.am
Time is Fri Apr 22 13:55:47 AMST 2011
Directory is /localuser/armen/d.dppc
START
Start: Fri Apr 22 13:55:47 AMST 2011
p2_487:  p4_error: Timeout in establishing connection to remote process: 0
rm_l_2_500: (301.160156) net_send: could not write to fd=5, errno = 32
p2_487: (301.160156) net_send: could not write to fd=5, errno = 32
p0_32738:  p4_error: net_recv read:  probable EOF on socket: 1
p3_490: (301.160156) net_send: could not write to fd=6, errno = 104
p3_490:  p4_error: net_send write: -1
p3_490: (305.167969) net_send: could not write to fd=5, errno = 32
p0_32738: (305.371094) net_send: could not write to fd=4, errno = 32
p1_483:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_1_499: (305.167969) net_send: could not write to fd=5, errno = 32
p1_483: (311.171875) net_send: could not write to fd=5, errno = 32
Fri Apr 22 14:00:59 AMST 2011
End: Fri Apr 22 14:00:59 AMST 2011
END

We tried a newer version of GROMACS, but receive the same error.
Please help us overcome the problem.


With regards,
Hrach

On 4/22/11 1:41 PM, Mark Abraham wrote:

On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote:

Dear all,

I would like to inform you that I have installed the GROMACS 4.0.7
package on the cluster (the nodes of the cluster are 8-core Intel, OS:
RHEL4 Scientific Linux) with the following steps:


yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi

Also, I have downloaded the gmxbench-3.0 package and tried to run d.villin
to test it.


Unfortunately, it works fine for np = 1, 2, or 3; if I use more than 3
processes, I see poor CPU load balancing and the process hangs.


Could you, please, help me to overcome the problem?


Probably you have only four physical cores (hyperthreading is not 
normally useful), or your MPI is configured to use only four cores, or 
these benchmarks are too small to scale usefully.
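
(As a quick check, something along these lines, run on one of the compute
nodes, shows the real core count; a sketch only, and the /proc/cpuinfo field
names can vary with the kernel version:)

grep -c "^processor" /proc/cpuinfo                    # logical CPUs, including hyperthreading
grep "physical id" /proc/cpuinfo | sort -u | wc -l    # number of CPU sockets
grep "cpu cores" /proc/cpuinfo | sort -u              # cores per socket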


Choosing to do a new installation of a GROMACS version that is several
years old is normally less productive than installing the latest version.


Mark







Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Hrachya Astsatryan

Dear Roland,

We need to run GROMACS across the nodes of our cluster (in order to use all
of its computational resources), which is why we need MPI (rather than
threads or OpenMP within a single SMP node).
I can run simple MPI examples, so I suspect the problem is in the GROMACS
installation.
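
A minimal point-to-point test of this kind might look as follows (a sketch
only; the machinefile path and file names are examples):

# write a tiny MPI program that passes a token around a ring, so every
# pair of neighbouring ranks has to communicate
cat > mpi_ring.c << 'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        MPI_Send(&token, 1, MPI_INT, 1 % size, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD, &status);
        printf("ring completed on %d ranks\n", size);
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
EOF
mpicc mpi_ring.c -o mpi_ring
mpirun -machinefile machines -np 4 ./mpi_ring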



Regards,
Hrach

On 4/27/11 11:29 PM, Roland Schulz wrote:
This seems to be a problem with your MPI library. Test whether other MPI
programs have the same problem. If it is not GROMACS specific, please ask on
the mailing list of your MPI library. If it only happens with GROMACS, be
more specific about what your setup is (what MPI library, what hardware, ...).


Also, you could use the latest GROMACS 4.5.x. It has built-in thread support
and doesn't need MPI as long as you only run on n cores within one SMP node.
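
(On a single 8-core node that would be something like the following; a
sketch, assuming a default thread-enabled 4.5.x build whose binary is plain
mdrun:)

mdrun -nt 8 -s topol.tpr -v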


Roland

On Wed, Apr 27, 2011 at 2:13 PM, Hrachya Astsatryan hr...@sci.am wrote:


Dear Mark Abraham and all,

We used other benchmark systems, such as d.dppc on 4 processors, but we
see the same problem (one process uses about 100% CPU, the others 0%).
After a while we receive the following error:

Working directory is /localuser/armen/d.dppc
Running on host wn1.ysu-cluster.grid.am
Time is Fri Apr 22 13:55:47 AMST 2011
Directory is /localuser/armen/d.dppc
START
Start: Fri Apr 22 13:55:47 AMST 2011
p2_487:  p4_error: Timeout in establishing connection to remote
process: 0
rm_l_2_500: (301.160156) net_send: could not write to fd=5, errno = 32
p2_487: (301.160156) net_send: could not write to fd=5, errno = 32
p0_32738:  p4_error: net_recv read:  probable EOF on socket: 1
p3_490: (301.160156) net_send: could not write to fd=6, errno = 104
p3_490:  p4_error: net_send write: -1
p3_490: (305.167969) net_send: could not write to fd=5, errno = 32
p0_32738: (305.371094) net_send: could not write to fd=4, errno = 32
p1_483:  p4_error: net_recv read:  probable EOF on socket: 1
rm_l_1_499: (305.167969) net_send: could not write to fd=5, errno = 32
p1_483: (311.171875) net_send: could not write to fd=5, errno = 32
Fri Apr 22 14:00:59 AMST 2011
End: Fri Apr 22 14:00:59 AMST 2011
END

We tried a newer version of GROMACS, but receive the same error.
Please help us overcome the problem.


With regards,
Hrach


On 4/22/11 1:41 PM, Mark Abraham wrote:

On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote:

Dear all,

I would like to inform you that I have installed the
GROMACS 4.0.7 package on the cluster (the nodes of the cluster
are 8-core Intel, OS: RHEL4 Scientific Linux) with the
following steps:

yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi

Also, I have downloaded the gmxbench-3.0 package and tried to run
d.villin to test it.

Unfortunately, it works fine for np = 1, 2, or 3; if I use more
than 3 processes, I see poor CPU load balancing and the process
hangs.

Could you, please, help me to overcome the problem?


Probably you have only four physical cores (hyperthreading is
not normally useful), or your MPI is configured to use only
four cores, or these benchmarks are too small to scale usefully.

Choosing to do a new installation of a GROMACS version that is
several years old is normally less productive than installing the
latest version.

Mark









--
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309



[gmx-users] Help: Gromacs Installation

2011-04-22 Thread Hrachya Astsatryan

Dear all,

I would like to inform you that I have installed the GROMACS 4.0.7
package on the cluster (the nodes of the cluster are 8-core Intel, OS: RHEL4
Scientific Linux) with the following steps:


yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi

Also, I have downloaded the gmxbench-3.0 package and tried to run d.villin
to test it.
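
(The benchmark is prepared and launched roughly like this; a sketch, assuming
the usual grompp.mdp, conf.gro and topol.top inputs shipped with the
benchmark and an MPI-enabled mdrun binary:)

cd d.villin
grompp                              # reads grompp.mdp, conf.gro and topol.top; writes topol.tpr
mpirun -np 4 mdrun -s topol.tpr -v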


Unfortunately, it works fine for np = 1, 2, or 3; if I use more than 3
processes, I see poor CPU load balancing and the process hangs.


Could you, please, help me to overcome the problem?


Regards,
Hrach