Sorry, but I got confused. I tried to follow the correct options to specify
the location of libraries, but failed...
mahmood@cluster:build$ cmake ..
-DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
OK. I ran
/share/apps/computer/cmake-3.2.3-Linux-x86_64/bin/cmake ..
-DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
-DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1
-DBUILD_SHARED_LIBS=off
Please note that I got an error regarding fftw3. It may not be related to
GROMACS itself, but I would appreciate any comments on that.
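A possible way around the fftw3 error, rather than a definitive fix: GROMACS can
be told to download and build its own FFTW, or an existing FFTW installation can
be added to CMake's search path. A minimal sketch (the /path/to/fftw3 location is
only a placeholder):
$ cmake .. -DGMX_BUILD_OWN_FFTW=ON
or, with a pre-installed FFTW:
$ cmake .. -DCMAKE_PREFIX_PATH=/path/to/fftw3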
root@cluster:build# /share/apps/computer/cmake-3.2.3-Linux-x86_64/bin/cmake
.. -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
Excuse me, what I understood from the manual is that
-DCMAKE_INSTALL_PREFIX is the same as --prefix in a ./configure script. Do
you mean that I can give multiple locations with that option? One for
GROMACS itself and the other for the MPI?
I mean
-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs-5.1
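For what it's worth, CMAKE_INSTALL_PREFIX does play the role of --prefix and only
controls where GROMACS itself ends up; it does not take multiple locations. The
MPI installation is pointed to separately, typically by giving the MPI compiler
wrappers, while CMAKE_PREFIX_PATH (which may hold a semicolon-separated list)
tells CMake where to look for other dependencies. A sketch using the paths
already mentioned in this thread:
$ cmake .. -DCMAKE_INSTALL_PREFIX=/share/apps/gromacs-5.1 \
  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++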
Hi,
I am trying to install gromacs-5.1 from source. What are the proper
cmake options for the following things:
1- Installing to a custom location and not /usr/local
2- Using a customized installation of MPI and not /usr/local
Regards,
Mahmood
OMPI-2.0.1 is installed on the system. I want to tell GROMACS that mpifort
(or the other wrappers) are in /share/apps/openmpi-2.0.1/bin and the libraries are
in /share/apps/openmpi-2.0.1/lib.
How can I tell that to cmake?
Regards,
Mahmood
Hi,
Users issue the command "mdrun -v" and that will automatically read the input
files in the working directory. There are two issues with that for which I do
not know the solution.
1- How can the number of cores be changed?
2- Looking at the output of the "top" command, it says that mdrun uses 400%
for
> dependencies).
>
> Mark
>
> On Sun, Oct 9, 2016 at 12:20 PM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > Hi mark,
> > Thank you very much. In fact the following commands did the job
> >
> >
> > $ cmake .. -DCMAKE_C_COMPILER=/share/ap
Hi,
What is the clear difference among nt, ntmpi and ntomp? I have built
GROMACS with MPI support. Then I run
mpirun gmx_mpi mdrun
I simply want to know: is it a good idea to use 'nt' with such a command? Are
there conflicts among them which degrade the performance? For example,
using 'nt'
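Roughly, and hedging a little: -nt sets the total number of threads and lets
mdrun split them itself, -ntmpi sets the number of thread-MPI ranks, and -ntomp
sets the OpenMP threads per rank. -nt and -ntmpi only apply to the built-in
thread-MPI; when gmx_mpi is started under mpirun, the rank count comes from
mpirun and only -ntomp remains meaningful. Illustrative forms, assuming 8 cores:
$ gmx mdrun -nt 8                       # thread-MPI build: 8 threads, split automatically
$ gmx mdrun -ntmpi 2 -ntomp 4           # thread-MPI build: 2 ranks x 4 OpenMP threads
$ mpirun -np 2 gmx_mpi mdrun -ntomp 4   # real MPI: ranks from mpirun, 4 OpenMP threads per rank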
Well, with such message flooding written to the log file at every step,
the network will become a bottleneck and affect the performance of the other jobs.
Regards,
Mahmood
On Sat, Oct 22, 2016 at 3:42 PM, Mark Abraham
wrote:
> Hi,
>
> This is an energy minimization.
Hi Mark,
So I changed the code (gromacs-5.1/src/gromacs/mdlib/minimize.cpp) like
this:
if (MASTER(cr))
{
    /* print progress only when the step counter is a multiple of the chosen interval (1 here) */
    if (bVerbose && ((++myCounter) % 1 == 0))
    {
        fprintf(stderr, "Step=%5d, Dmax= %6.1e nm, Epot= %12.5e Fmax= %11.5e, atom= %d%c",
ach. That's 1.6 MB,
> which you could also suppress by piping terminal output to /dev/null. The
> minimization ran in around 100 seconds, so the load on the infrastructure
> was under 20 kb/sec. Can you name any workload on the cluster that produces
> less traffic?
>
> Mark
>
>
It is interesting to me that I specified Verlet, but the log warns about
Group.
mahmood@cluster:LPN$ grep -r cut-off .
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:; nblist cut-off
./mdout.mdp:; long-range cut-off for switched potentials
Here is what I did...
I changed the cut-off scheme to Verlet as suggested by
http://www.gromacs.org/Documentation/Cut-off_schemes#How_to_use_the_Verlet_scheme
Then I followed two scenarios:
1) On the frontend, where GROMACS and Open MPI are installed, I ran
mahmood@cluster:LPN$ date
Mon
OK. I verified that the cutoff parameter inside the tpr file is Group
mahmood@cluster:gromacs-5.1$ ./bin/gmx_mpi dump -s ~/LPN/topol.tpr | grep
cutoff
...
Note: file tpx version 83, software tpx version 103
cutoff-scheme = Group
Now, according to this reply (by you Mark)
ntion ;-)
>
> Mark
>
> On Fri, Oct 21, 2016 at 2:49 PM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > OK. I verified that the cutoff parameter inside the tpr file is Group
> >
> > mahmood@cluster:gromacs-5.1$ ./bin/gmx_mpi dump -s ~/LPN/topol.tpr |
>
e in gmx_mpi that isn't served by mdrun_mpi.
>
> Mark
>
> On Fri, Oct 21, 2016 at 3:14 PM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > Meanwhile, I have been confused with one thing!
> > If I build Gromacs with -DGMX_BUILD_MDRUN_ONLY=on, then I cannot see
>
One more question.
Currently, GROMACS prints output at every step!
Step= 159, Dmax= 3.8e-03 nm, Epot= -8.39592e+05 Fmax= 2.45536e+03, atom=
2111
Step= 160, Dmax= 4.6e-03 nm, Epot= -8.39734e+05 Fmax= 4.30685e+03, atom=
2111
Step= 161, Dmax= 5.5e-03 nm, Epot= -8.39913e+05 Fmax= 3.78613e+03,
Hi,
I have specified Verlet in the mdp files according to the manual. However,
when I run mdrun_mpi with the -ntomp switch, it says that the cut-off scheme is
Group.
mahmood@cluster:LPN$ ls *.mdp
grompp.mdp md100.mdp mdout.mdp rest.mdp
mahmood@cluster:LPN$ grep -r Verlet .
./grompp.mdp:cutoff-scheme
Excuse me... this is a better output showing the inconsistency
mahmood@cluster:LPN$ grep -r cutoff .
./grompp.mdp:cutoff-scheme = Verlet
./rest.mdp:cutoff-scheme = Verlet
./md100.mdp:cutoff-scheme = Verlet
./md.log: cutoff-scheme = Group
Verlet
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
I read that document on the web site but did not understand what the
issue is!
Thanks
Regards,
Mahmood
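A likely explanation, for the record: mdrun takes the cut-off scheme from the
.tpr file rather than from the .mdp files on disk, so after editing the .mdp the
run input has to be regenerated with grompp before the change shows up in
md.log. A sketch, where conf.gro and topol.top are only assumed file names:
$ gmx_mpi grompp -f md100.mdp -c conf.gro -p topol.top -o topol.tpr
$ mpirun mdrun_mpi -v -s topol.tpr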
On Mon, Oct 10, 2016 at 2:29 PM, Mahmood Naderan
OK. I understood the documents.
What I want is to see two processes (for example), each consuming 100%
CPU. The command for that is
mpirun -np 2 mdrun -v -nt 1
Thanks Mark.
Regards,
Mahmood
waiting for communication, but what does top think about that?
>
> Mark
>
> On Mon, Oct 10, 2016 at 11:47 AM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > OK. I understood the documents.
> > Thing that I want is to see two processes (for example) ea
Hi Mark,
Thank you very much. In fact, the following command did the job:
$ cmake .. -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
-DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1
-DGMX_BUILD_OWN_FFTW=ON
Hi,
A PBS script for a gromacs job has been submitted with the following
content:
#!/bin/bash
#PBS -V
#PBS -q default
#PBS -j oe
#PBS -l nodes=2:ppn=10
#PBS -N LPN
#PBS -o /home/dayer/LPN/mdout.out
cd $PBS_O_WORKDIR
mpirun gromacs-5.1/bin/mdrun_mpi -v
As I ssh'ed to the nodes and saw mdrun_mpi
Well that is provided by nodes=2:ppn=10 in the PBS script.
Regards,
Mahmood
On Sun, Oct 16, 2016 at 9:26 PM, Parvez Mh <parvezm...@gmail.com> wrote:
> Hi,
>
> Where is -np option in mpirun ?
>
> --Masrul
>
> On Sun, Oct 16, 2016 at 12:45 PM, Mahmood Naderan <
>Where is -np option in mpirun ?
Please see this
https://mail-archive.com/users@lists.open-mpi.org/msg30043.html
Regards,
Mahmood
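In other words, Open MPI started inside a Torque/PBS job reads the allocation
(here 2 nodes x 10 cores) from the environment, so -np can simply be omitted. An
explicit equivalent, purely for illustration, would take the count from the node
file that PBS provides:
mpirun -np $(wc -l < $PBS_NODEFILE) gromacs-5.1/bin/mdrun_mpi -v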
Hi Mark,
There is a question here... What is the difference between
mpirun gmx_mpi mdrun
And
mpirun mdrun_mpi
?
The problem is that I cannot find out whether GROMACS (or MPI) is using the
resources correctly. Is there any way to see whether there is a bottleneck
causing such low utilization?
Regards,
Mahmood
On Mon, Oct 17, 2016 at 11:30 AM, Mahmood Naderan <mahmood...@gmail.com>
wrote:
> it is interesti
Hi,
Following the Lysozyme tutorial, I face an error at the step of generating
ions.tpr, which says there are too many warnings.
$ gmx grompp -f ions.mdp -c 1AKI_solv.gro -p topol.top -o ions.tpr
NOTE 1 [file ions.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the
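If the warnings have been read and judged harmless for this tutorial system,
grompp can be allowed to proceed despite a given number of them with -maxwarn
(it does not silence real errors). For example, reusing the files above:
$ gmx grompp -f ions.mdp -c 1AKI_solv.gro -p topol.top -o ions.tpr -maxwarn 1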
I had wrongly downloaded v5.0. It seems that the 2018 version is better! That error
is now solved.
Regards,
Mahmood
On Friday, February 23, 2018, 4:10:15 PM GMT+3:30, Mahmood Naderan
<nt_mahm...@yahoo.com> wrote:
Hi,While I set -DGMX_GPU=on for a M2000 card, the make returned an
Hi, While I set -DGMX_GPU=on for an M2000 card, make returned an error which
says compute_20 is not supported. So, where in the options can I drop the
compute_20 capability?
Regards,
Mahmood
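One option, sketched here rather than prescribed: the GMX_CUDA_TARGET_SM cache
variable (used elsewhere in these threads) restricts which CUDA architectures
are compiled, so listing only the card's own architecture drops compute_20. For
a Maxwell-class M2000 that would presumably be sm_52:
$ cmake .. -DGMX_GPU=on -DGMX_CUDA_TARGET_SM=52
As noted above, moving to the 2018 release also made the error go away.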
Hi,
While the cut-off is set to Verlet and I run "gmx mdrun -nb gpu -deffnm
input_md", I see that 9 threads out of the 16 logical threads are running on
the CPU while the GPU is utilized. gmx also says
No option -multi
Using 1 MPI thread
Using 16 OpenMP threads
I want to know why 9
Szilárd,
So, the following commands have the same meaning (8 MPI threads, each with 2 OpenMP
threads, as GROMACS reports) on an 8-core (16-thread) Ryzen CPU with one M2000.
mpirun -np 8 gmx_mpi mdrun -v -deffnm nvt
mpirun -np 8 gmx_mpi mdrun -v -ntomp 2 -deffnm nvt
Both have 8 GPU tasks. However,
Hi
Has anyone run gmx_mpi with MPS? Even with small input files (which work
fine when MPS is turned off), I get an out-of-memory error from the GPU device.
I don't know if there is a bug inside CUDA or GROMACS. I see some other related
topics for other programs. So, it sounds like a CUDA
>Assertion failed:
>Condition: cudaSuccess == cudaPeekAtLastError()
>We promise to return with clean CUDA state!
Hi,
I had some runtime problems with CUDA 9.1 which were solved by 9.2! So, I
suggest you first update 9.0 to 9.2 and only then spend time on any remaining
errors.
Regards,
Mahmood
It is stated that
mpirun -np 4 gmx mdrun -ntomp 6 -nb gpu -gputasks 00
Starts gmx mdrun on a machine with two nodes, using four total ranks, each rank
with six OpenMP threads, and both ranks on a node sharing the GPU with ID 0.
Questions are:
1- Why is gmx_mpi not used?
2- How were two nodes
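A possible reading of the first question: the manual's examples use the default
binary name gmx, while a build against an external MPI library is installed with
the _mpi suffix (as elsewhere in these threads), so the equivalent launch here
would presumably be:
mpirun -np 4 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00
How the four ranks end up on two nodes is then decided by mpirun and its host
file, not by GROMACS itself.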
No idea? For those who use a GPU, which command do they use: gmx or gmx_mpi?
Regards,
Mahmood
On Wednesday, July 11, 2018, 11:46:06 AM GMT+4:30, Mahmood Naderan
wrote:
Hi, Although I have read the manual and I have written programs with MPI, the
GROMACS use of MPI is confusing
Hi
It seems that changing the number of ntmpi and ntomp affects the number of
steps it takes to calculate the optimal PME grid. Is that correct?
Please see the following output
gmx mdrun -nb gpu -ntmpi 1 -ntomp 16 -v -deffnm nvt
Using 1 MPI thread
Using 16 OpenMP threads
step 2400: timed
application clocks manually.
Is the behavior I see related to this note? I doubt it, but if someone has a
comment, I would appreciate it.
Regards,
Mahmood
On Monday, July 9, 2018, 3:13:20 PM GMT+4:30, Mahmood Naderan
wrote:
Hi,
When I run "-nt 16 -nb cpu", I see nearly
Hi,
When I run mdrun with "-nb gpu", I see the following output
starting mdrun 'Protein'
2 steps, 40.0 ps.
step 200: timed with pme grid 64 80 60, coulomb cutoff 1.000: 3340.0 M-cycles
step 400: timed with pme grid 60 72 56, coulomb cutoff 1.075: 3742.2 M-cycles
step 600: timed with
Hi,
The manual says:
GROMACS can run in parallel on multiple cores of a single workstation using its
built-in thread-MPI. No user action is required in order to enable this.
However, that may not be correct because I get this error
Command line:
gmx_mpi mdrun -v -ntmpi 2 -ntomp 4 -nb gpu
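One reading of this error: the quoted paragraph describes the thread-MPI build
(binary gmx), whereas a gmx_mpi binary built against an external MPI has no
thread-MPI, so -ntmpi is not accepted there and the rank count has to come from
mpirun. Roughly, the two equivalent forms would be:
$ gmx mdrun -ntmpi 2 -ntomp 4 -nb gpu            # thread-MPI build
$ mpirun -np 2 gmx_mpi mdrun -ntomp 4 -nb gpu    # external-MPI build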
Hi, Although I have read the manual and I have written programs with MPI, the
GROMACS use of MPI is confusing.
Is it mandatory to use mpirun before gmx_mpi or not?
Can someone shed some light on that?
Regards,
Mahmood
Hi,
When I run "-nt 16 -nb cpu", I see nearly 1600% cpu utilization. However, when
I run "-nt 16 -nb gpu", I see about 600% cpu utilization. Is there any reason
about that? I want to know with the cpu threads in a gpu run is controllable.
Regards,
Mahmood
Hi,
I want to do some tests on the lysozyme tutorial. Assume that the tutorial with
the default parameters, run for 10 ps, takes X seconds of wall clock time.
If I want to increase the wall clock time, I can simply run for 100 ps. However,
that is not what I want.
I want to increase the
>Additionally, you still have not provided the *mdrun log file* I requested.
>top output is not what I asked for.
See the attached file.
Regards,
Mahmood
>The list does not accept attachments, so please use a file sharing or content
>sharing website so everyone can see your data and has the context.
I uploaded here
https://pastebin.com/RCkkFXPx
Regards,
Mahmood
:50 PM András Ferenc WACHA <wacha.and...@ttk.mta.hu>
wrote:
Dear Mahmood,
as far as I know, each command supports the "-nobackup" command line
switch...
Best regards,
Andras
On 02/28/2018 04:46 PM, Mahmood Naderan wrote:
> Hi,How can I disable the backup feature? I mean backed up
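For example, as used later in these threads, backups can be switched off per run
with -nobackup; setting the GMX_MAXBACKUP environment variable to -1 is, if
memory serves, another way to achieve the same:
$ gmx mdrun -nobackup -deffnm md_0_1
$ export GMX_MAXBACKUP=-1    # assumed alternative: disable backups for all subsequent runs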
>(try the other parallel modes)
Do you mean OpenMP and MPI?
>- as noted above try offloading only the nonbondeds (or possibly the hybrid
>PME mode -pmefft cpu)
May I know how? Which part of the documentation talks about that?
Regards,
Mahmood
>- as noted above try offloading only the nonbondeds (or possibly the hybrid
>PME mode -pmefft cpu)
So, with "-pmefft cpu", I don't see any good impact!See the log at
https://pastebin.com/RTYaKSne
I will use other options to see the effect.
Regards,
Mahmood
>Again, first and foremost, try running PME on the CPU, your 8-core Ryzen will
>be plenty fast for that.
Since I am a computer guy and not a chemist, this may be a noob question!
What do you mean exactly by running PME on the CPU?
Do you mean "-nb cpu"? Or do you mean setting the cut-off to Group instead of
No idea? Any feedback is appreciated.
Regards,
Mahmood
On Friday, March 9, 2018, 9:47:33 PM GMT+3:30, Mahmood Naderan
<nt_mahm...@yahoo.com> wrote:
Hi,
I want to do some tests on the lysozyme tutorial. Assume that the tutorial with
the default parameters which is run for 10ps,
KI) and follow the same procedure?
Generally, my questions are of that type.
Regards,
Mahmood
On Tuesday, March 13, 2018, 2:48:35 PM GMT+3:30, Justin Lemkul
<jalem...@vt.edu> wrote:
On 3/13/18 2:49 AM, Mahmood Naderan wrote:
> No idea? Any feedback is apprec
Hi, How can I disable the backup feature? I mean the backed-up files which start and
end with the # character.
Regards,
Mahmood
Yes, you are right. Thank you very much.
Regards,
Mahmood
On Wednesday, February 28, 2018, 7:19:55 PM GMT+3:30, András Ferenc WACHA
wrote:
Dear Mahmood,
as far as I know, each command supports the "-nobackup" command line
switch...
Best regards,
Andras
application clocks of the detected Quadro M2000 GPU to improve
performance.
Regards,
Mahmood
On Wednesday, February 28, 2018, 7:15:13 PM GMT+3:30, Mahmood Naderan
<nt_mahm...@yahoo.com> wrote:
By running
gmx mdrun -nb gpu -deffnm md_0_1
I see the following outputs
$ top -b | h
By running
gmx mdrun -nb gpu -deffnm md_0_1
I see the following outputs
$ top -b | head -n 10
top - 19:14:10 up 7 min, 1 user, load average: 4.54, 1.40, 0.54
Tasks: 344 total, 1 running, 343 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.1 us, 0.5 sy, 0.0 ni, 91.9 id, 0.4 wa, 0.0 hi, 0.0
Command is "gmx mdrun -nobackup -pme cpu -nb gpu -deffnm md_0_1" and the log
says
R E A L C Y C L E A N D T I M E A C C O U N T I N G
On 1 MPI rank, each using 16 OpenMP threads
Computing: Num Num Call Wall time Giga-Cycles
Ranks
If you mean [1], then yes, I read that, and it recommends using Verlet for the
new algorithm depicted in the figures. At least that is my understanding of
offloading. If I read the wrong document, or you mean there are also some other
options, please let me know.
[1]
Sorry for the confusion. My fault...
I saw my previous post and found that I missed something. In fact, I couldn't
run "-pme gpu".
So, once again, I ran all the commands and uploaded the log files
gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1
https://pastebin.com/RNT4XJy8
gmx mdrun
Hi,
I set GMX_PRINT_DEBUG_LINES before the mdrun command; however, I don't see any
debug messages.
$ GMX_PRINT_DEBUG_LINES=1
$ gmx mdrun -nb gpu -ntmpi 8 -ntomp 1 -v -deffnm nvt
...NOTE: DLB can now turn on, when beneficial
step 1100, will finish Fri Jan 18 19:24:07 2019  imb F 8%
step 1200 Turning
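One thing to note: assigned on a line of its own like that, the variable stays
local to the shell and is not seen by gmx; it has to be exported, or set on the
same command line:
$ export GMX_PRINT_DEBUG_LINES=1
$ gmx mdrun -nb gpu -ntmpi 8 -ntomp 1 -v -deffnm nvt
or, equivalently:
$ GMX_PRINT_DEBUG_LINES=1 gmx mdrun -nb gpu -ntmpi 8 -ntomp 1 -v -deffnm nvt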
Hi
Where should I set the flag in order to see the fprintf statements like
if (debug)
{
    fprintf(debug, "PME: number of ranks = %d, rank = %d\n",
            cr->nnodes, cr->nodeid);
}
Any idea?
Regards,
Mahmood
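As far as I can tell, those fprintf(debug, ...) statements only fire when mdrun
is started with its hidden -debug option, which opens the debug output file that
the debug handle points to; GMX_PRINT_DEBUG_LINES then only changes how that
output is annotated. A sketch:
$ gmx mdrun -debug 1 -nb gpu -v -deffnm nvt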
Hi
With the following config command
cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=`pwd`/../single
-DGMX_BUILD_OWN_FFTW=ON
I get the following error for "gmx mdrun -nb gpu -v -deffnm inp_nvp"
Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none
detected.
Hi,
Although I have specified a custom CC and CXX path, the cmake command fails
with an error.
$ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single61
-DGMX_BUILD_OWN_FFTW=ON -DGMX_CUDA_TARGET_SM=61
-DCMAKE_C_COMPILER=/home/mahmood/tools/gcc-6.1.0/bin/gcc
Hi,
I see this line in the cmake output
-- Found CUDA: /usr/local/cuda (found suitable version "10.0", minimum required
is "7.0")
and I would like to change that default path to somewhere else. May I know how
to do that?
Regards,
Mahmood
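The toolkit location picked up by CMake's FindCUDA can usually be overridden on
the command line; a sketch, where /opt/cuda-10.0 is only an example path:
$ cmake .. -DGMX_GPU=on -DCUDA_TOOLKIT_ROOT_DIR=/opt/cuda-10.0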
Hi
I have built 2018.3 in order to test it with a C2075 GPU.
I used this command to build it
$ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single
-DGMX_BUILD_OWN_FFTW=ON
$ make
$ make install
I have to say that the device is detected according to deviceQuery. However,
when I run
$ gmx
Hi
I have built 2018.3 with the following command in order to test it with a C2075
$ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single
-DGMX_BUILD_OWN_FFTW=ON
While deviceQuery shows the device properly, when I run
$ gmx mdrun -nb gpu -v -deffnm nvt
I get this error
Fatal error:
Cannot
>Did you install the CUDA toolbox and drivers ?
>What is the output from "nvidia-smi" ?
Yes it is working. Please see the full output below
$ nvidia-smi
Mon Nov 25 08:53:22 2019
+--+
| NVIDIA-SMI 352.99
Hi,
I would like to know what the last GROMACS version that supports sm_20 is. I can
find that recursively by trial and error, but maybe it is pointed out
somewhere.
Regards,
Mahmood
-02 at 14:25, Mahmood Naderan wrote:
> Hi,
> Although I have specified a custom CC and CXX path, the cmake command fails
> with an error.
>
> $ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single61
> -DGMX_BUILD_OWN_FFTW=ON -DGMX_CUDA_TARGET_SM=61
> -DCMAKE_C_COMPILER=/h
Hi
Although I have built GROMACS for a 1080 Ti and the device is working properly, I
get this error when running the gmx command
$ ./gromacs-2019.4-1080ti/single/bin/gmx mdrun -nb gpu -v -deffnm nvt_5k
.
GROMACS: gmx mdrun, version 2019.4
Executable:
Hi
How can I disable MKL while building GROMACS? With this configure command
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on -DGMX_FFT_LIBRARY=fftw3
I see
-- The GROMACS-managed build of FFTW 3 will configure with the following
optimizations: --enable-sse2;--enable-avx;--enable-avx2
-- Using