Re: [gmx-users] Issues running Gromacs with MPI/OpenMP in cpu cluster

2014-02-13 Thread Szilárd Páll
On Thu, Feb 13, 2014 at 6:39 PM, Thomas Schlesier wrote: > Hi, I'm no expert for this stuff, but could it be that you generate about 40 of the #my_mol.log.$n# files (probably only 39)? It could be that the 'mpirun' starts 40 'mdrun' jobs and each generates its own output. For GROMACS 4.
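
A quick way to confirm this symptom from the shell, assuming the my_mol.log name used in the thread (GROMACS renames an existing output file to #name.N# before a new process writes its own, so roughly 39 numbered backups plus the live my_mol.log would match 40 independent mdrun processes):

    # count the numbered backup copies left behind in the run directory
    ls -1 \#my_mol.log.*\# 2>/dev/null | wc -l
    # the most recently started process still owns the unnumbered file
    ls -l my_mol.log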

[gmx-users] Issues running Gromacs with MPI/OpenMP in cpu cluster

2014-02-13 Thread Thomas Schlesier
Hi, I'm no expert for this stuff, but could it be that you generate about 40 of the #my_mol.log.$n# files (probably only 39)? It could be that the 'mpirun' starts 40 'mdrun' jobs and each generates its own output. For GROMACS 4.6.x I always used mdrun -nt X ... to start a parallel run (where X
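
For reference, a minimal sketch of the thread-parallel launch mentioned above, assuming a single-node run and a my_mol.tpr input file (in 4.6.x, -nt sets the total number of threads for the built-in thread-MPI and is not combined with an external mpirun):

    # single-node run with GROMACS 4.6.x built-in thread-MPI;
    # -nt 8 is just an example thread count
    mdrun -nt 8 -deffnm my_mol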

Re: [gmx-users] Issues running Gromacs with MPI/OpenMP in cpu cluster

2014-02-13 Thread Carsten Kutzner
Hi Mousumi, from the fact that you get lots of backup files right at the beginning, I suspect that your mdrun is not MPI-enabled. This behavior is exactly what one would get when launching a number of serial mdruns on the same input file. Maybe you need to look for an mdrun_mpi executable. The
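
A rough sketch of how one might check for an MPI-enabled build, assuming the binaries are on PATH; the _mpi suffix is the common default when GROMACS is configured with -DGMX_MPI=ON, but the exact name is site-dependent:

    # is there a separately installed MPI-enabled binary?
    which mdrun_mpi

    # the version/build info reports which MPI library the binary was built with;
    # "thread_mpi" or "none" means it cannot be driven by an external mpirun
    mdrun -version 2>&1 | grep -i "MPI library"

    # if mdrun_mpi exists, a real 40-rank MPI launch would look roughly like this
    # (the file name is an assumption)
    mpirun -np 40 mdrun_mpi -deffnm my_mol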

[gmx-users] Issues running Gromacs with MPI/OpenMP in cpu cluster

2014-02-13 Thread Mousumi Bhattacharyya
Dear GROMACS users, I am facing a strange situation when running Gromacs (v4.6.3) on our local CPU cluster using MPI/OpenMP parallelization. I am trying to simulate a big heterogeneous aqueous-polymer system in an octahedron box. I use the following command to run my simulations and use SGE
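
The command itself is cut off in this archive excerpt; for context, a typical hybrid MPI/OpenMP submission under SGE for GROMACS 4.6.x might look like the sketch below. The parallel environment name, slot count, and file names are assumptions and have to match the local queue setup.

    #!/bin/bash
    #$ -N my_mol                  # job name (assumed)
    #$ -pe mpi 40                 # parallel environment and slot count are site-specific
    #$ -cwd

    # one OpenMP thread per MPI rank; raise -ntomp (and lower the rank count)
    # to trade MPI ranks for OpenMP threads
    export OMP_NUM_THREADS=1
    mpirun -np $NSLOTS mdrun_mpi -ntomp $OMP_NUM_THREADS -deffnm my_mol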