Hi,
As I said yesterday, you can't tell that about any program unless you look
at its output and how it performs with different amounts of resources.
Mark
On Mon, Oct 17, 2016 at 2:31 PM Mahmood Naderan
wrote:
> The problem is that I cannot tell whether gromacs (or MPI) is using the
> resources correctly. Is there any way to find the bottleneck behind such
> low utilization?
The problem is that I cannot tell whether gromacs (or MPI) is using the
resources correctly. Is there any way to find the bottleneck behind such
low utilization?
Regards,
Mahmood
On Mon, Oct 17, 2016 at 11:30 AM, Mahmood Naderan
wrote:
> It is interesting that I specified the Verlet scheme, but the log still
> warns about the group scheme.
It is interesting that I specified the Verlet scheme, but the log still
warns about the group scheme.
mahmood@cluster:LPN$ grep -r cut-off .
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:; nblist cut-off
./mdout.mdp:; long-range cut-off for switched potentials
.
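For what it's worth, the lines grep found all start with ';' — they are comment lines that mdout.mdp carries regardless of which scheme is active, so matching them proves nothing about the setting in effect. Checking the option itself is more telling; a sketch, simulated here against a hypothetical two-line file so it runs anywhere:

```shell
# The matched lines above are comments; look at the actual option instead.
# Simulated mdout.mdp fragment (hypothetical contents):
printf '%s\n' \
  '; cut-off scheme (group: using charge groups, Verlet: particle based cut-offs)' \
  'cutoff-scheme            = Verlet' > mdout_sample.mdp
# On the real file this would be: grep -i '^cutoff-scheme' mdout.mdp
grep -i '^cutoff-scheme' mdout_sample.mdp
```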
Here is what I did...
I changed the cut-off scheme to Verlet as suggested by
http://www.gromacs.org/Documentation/Cut-off_schemes#How_to_use_the_Verlet_scheme
Then I followed two scenarios:
1) On the frontend, where gromacs and openmpi have been installed, I ran
mahmood@cluster:LPN$ date
Mon
Hi,
None, but with one of them you decided to ask cmake to build only
mdrun_mpi, so you don't get the other gmx tools. Since none of them are
aware of MPI, it's perfectly normal to build gmx without MPI and mdrun_mpi
with MPI.
Mark
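The two builds Mark describes can be configured roughly like this; a sketch, assuming the cmake flag names of the GROMACS 5.x/2016 build system and a hypothetical out-of-tree build directory:

```shell
# Full tool suite, no MPI: installs the single `gmx` binary.
cmake .. -DGMX_MPI=OFF

# MPI-enabled build of the suite: installs `gmx_mpi`
# (run as: mpirun gmx_mpi mdrun).
cmake .. -DGMX_MPI=ON

# MPI-enabled build of mdrun only: installs `mdrun_mpi` and none of the
# other tools (run as: mpirun mdrun_mpi).
cmake .. -DGMX_BUILD_MDRUN_ONLY=ON -DGMX_MPI=ON
```

Either MPI-enabled binary runs the same mdrun; the difference is only which other tools get built alongside it.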
On Sun, Oct 16, 2016 at 11:36 PM Mahmood Naderan
wrote:
> Hi Mark,
Hi Mark,
There is a question here: what is the difference between
mpirun gmx_mpi mdrun
and
mpirun mdrun_mpi
?
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
Hi,
GROMACS is compute bound when it is not network bound, but the output of ps
is barely informative. Looking inside the md.log file for the helpful
diagnostics mdrun prints is a good start. Also, check out
http://manual.gromacs.org/documentation/2016/user-guide/mdrun-performance.html
for the basics.
Not sure; I do not use OpenMPI. You could try compiling and running the
following simple MPI program to see whether you get the proper node
allocation.
--Masrul
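The "simple MPI program" itself did not survive in the archive; a minimal sketch of such a node-allocation check (hypothetical file name mpi_hello.c, assuming OpenMPI's mpicc and mpirun are on the PATH) could be:

```shell
# Each rank reports the host it landed on; with -l nodes=2:ppn=10 you
# expect 20 ranks spread evenly over two hosts.
cat > mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);
    printf("rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
EOF
mpicc mpi_hello.c -o mpi_hello
# Under PBS, OpenMPI takes the rank count and host list from the job's
# allocation, so no -np is needed:
mpirun ./mpi_hello | awk '{print $NF}' | sort | uniq -c
```

If the final count shows all ranks on one host, the MPI launch is not picking up the allocation and the low utilization has nothing to do with GROMACS.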
On Sun, Oct 16, 2016 at 1:15 PM, Mahmood Naderan
wrote:
> Well that is provided by nodes=2:ppn=10 in the PBS script.
>
> Regards,
> Mahmood
>
>
>
> On Sun
> Where is the -np option in mpirun?
Please see this:
https://mail-archive.com/users@lists.open-mpi.org/msg30043.html
Regards,
Mahmood
Well that is provided by nodes=2:ppn=10 in the PBS script.
Regards,
Mahmood
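For anyone puzzled by the missing -np: under PBS, -l nodes=2:ppn=10 allocates 20 slots, listed one per line in $PBS_NODEFILE, and an OpenMPI built with Torque/PBS support takes its rank count and hosts from that file. A quick sanity check is to count the slots per host; sketched here against a simulated node file (node01/node02 are hypothetical host names) so it runs outside a job:

```shell
# Simulate the $PBS_NODEFILE that -l nodes=2:ppn=10 would produce:
# one line per allocated slot, 10 per host.
nodefile=$(mktemp)
for i in $(seq 10); do echo node01; done >> "$nodefile"
for i in $(seq 10); do echo node02; done >> "$nodefile"
# Count slots per host, exactly as you would inside a job with:
#   sort "$PBS_NODEFILE" | uniq -c
sort "$nodefile" | uniq -c
# expected: 10 slots on each of the two hosts
```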
On Sun, Oct 16, 2016 at 9:26 PM, Parvez Mh wrote:
> Hi,
>
> Where is the -np option in mpirun?
>
> --Masrul
>
> On Sun, Oct 16, 2016 at 12:45 PM, Mahmood Naderan
> wrote:
>
> > Hi,
> > A PBS script for a gromacs job has been submitted with the following
> > content:
Hi,
Where is the -np option in mpirun?
--Masrul
On Sun, Oct 16, 2016 at 12:45 PM, Mahmood Naderan
wrote:
> Hi,
> A PBS script for a gromacs job has been submitted with the following
> content:
>
> #!/bin/bash
> #PBS -V
> #PBS -q default
> #PBS -j oe
> #PBS -l nodes=2:ppn=10
> #PBS -N LPN
> #PBS -