On Fri, Apr 2, 2021, 13:10 Satyen Dhamankar <[email protected]> wrote:
> Christoph,
>
> Thank you for the advice.
> I have added -nt 12 in the following manner:
>
> <inverse>
>   <!-- 200*0.00831451 gromacs units -->
>   <kBT>1.66290</kBT>
>   <!-- use gromacs as simulation program -->
>   <program>gromacs</program>
>   <!-- gromacs specific options -->
>   <gromacs>
>     <!-- trash so many frames at the beginning -->
>     <equi_time>20</equi_time>
>     <!-- grid for table*.xvg -->
>     <table_bins>0.002</table_bins>
>     <!-- cut the potential at this value (gromacs bug) -->
>     <pot_max>1000000</pot_max>
>     <!-- extend the tables to this value -->
>     <table_end>2.5</table_end>
>     <mdrun.opts>-nt 12</mdrun.opts>

Try

<mdrun>
  <opts>-nt 12</opts>
</mdrun>

here.

>   </gromacs>
>
> However, I am getting the same message:
> "Running on 1 node with total 28 cores, 28 logical cores"
> and it also runs on them:
> "Initializing Domain Decomposition on 28 ranks"
>
> I have attached my settings.xml file to this message (the only change I
> have made is the -nt 12 part).
>
> Speaking of which, I made the same change with -nt 8 in the methanol
> tutorial, and that one worked fine...
>
> On Friday, April 2, 2021 at 10:20:55 AM UTC-4 Christoph Junghans wrote:
>
>> On Fri, Apr 2, 2021 at 7:16 AM Satyen Dhamankar <[email protected]> wrote:
>> >
>> > The gromacs run. It is strange because the spce IBI did not take as
>> > long. I have attached my md.log file. After 2.5 hours, only 11999 steps
>> > were completed for a 3000-molecule simulation...
>>
>> Hmm, that might be more a question for the gromacs user mailing list.
>>
>> From your md.log, I see that Gromacs detected 28 cores
>> "Running on 1 node with total 28 cores, 28 logical cores"
>> and also runs on them
>> "Initializing Domain Decomposition on 28 ranks"
>>
>> However, you only requested 12 cores in your batch script:
>> #SBATCH --ntasks-per-node=12
>>
>> If the other cores are used by another user, that could lead to
>> performance problems, as gromacs always runs as slow as the slowest
>> core.
>> gromacs has an option to limit the number of cores, I believe "-nt 12",
>> which you can hand over to gromacs from the xml file using
>> cg.inverse.gromacs.mdrun.opts.
>>
>> Christoph
>>
>> >
>> > On Friday, April 2, 2021 at 8:47:18 AM UTC-4 Christoph Junghans wrote:
>> >>
>> >> On Thu, Apr 1, 2021 at 21:57 Satyen Dhamankar <[email protected]> wrote:
>> >>>
>> >>> Hello,
>> >>>
>> >>> I have been running the IBI tutorial for propane. The IBI for propane
>> >>> takes quite some time to run. Step_001 has taken more than an hour
>> >>> to complete.
>> >>>
>> >>> This is the slurm script I have used to run this job:
>> >>>
>> >>> #!/bin/bash
>> >>> #
>> >>> #SBATCH --job-name=R1.5-CG            # create a short name for your job
>> >>> #SBATCH --qos=vshort                  # quality of service
>> >>> #SBATCH --nodes=1                     # node count
>> >>> #SBATCH --ntasks-per-node=12          # number of tasks per node
>> >>> #SBATCH --cpus-per-task=1             # cpu-cores per task (>1 if multi-threaded tasks)
>> >>> #SBATCH --mem=10GB                    # total memory requested
>> >>> ##SBATCH --mem-per-cpu=4G             # memory per cpu-core (4G per cpu-core is default)
>> >>> #SBATCH --gres=gpu:1                  # number of gpus per node
>> >>> #SBATCH --time=2:30:00                # total run time limit (HH:MM:SS)
>> >>> #SBATCH --mail-type=all               # send email on job start, end, and fail
>> >>> module purge
>> >>> module load intel/19.0/64/19.0.5.281  # for running gromacs
>> >>> module load rh/devtoolset/7
>> >>> module load intel-mkl/2019.5/5/64
>> >>> module load cmake/3.x
>> >>> # run the job
>> >>> set -e
>> >>> # keep track of the last executed command
>> >>> trap 'last_command=$current_command; current_command=$BASH_COMMAND' DEBUG
>> >>> # echo an error message before exiting
>> >>> #trap 'echo "\"${last_command}\" command failed with exit code $?."' EXIT
>> >>> bash run.sh
>> >>>
>> >>> Is taking such a long time for a sample IBI run normal?
>> >>
>> >> Where does it get stuck? In the gromacs run or after that?
>> >>
>> >> Christoph
>> >>>
>> >>> --
>> >>> Join us on Slack: https://join.slack.com/t/votca/signup
>> >>> ---
>> >>> You received this message because you are subscribed to the Google
>> >>> Groups "votca" group.
>> >>> To unsubscribe from this group and stop receiving emails from it,
>> >>> send an email to [email protected].
>> >>> To view this discussion on the web visit
>> >>> https://groups.google.com/d/msgid/votca/dadf2e09-20dd-4e63-8eaf-f748a3d390a3n%40googlegroups.com.
>> >>
>> >> --
>> >> Christoph Junghans
>> >> Web: http://www.compphys.de
>> >
>> > To view this discussion on the web visit
>> > https://groups.google.com/d/msgid/votca/0847a33c-636b-4a09-82c0-4fdd5a63c1e5n%40googlegroups.com.
>>
>> --
>> Christoph Junghans
>> Web: http://www.compphys.de
>
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/votca/aeaccc1d-3236-409e-8dcf-2377c079f2dbn%40googlegroups.com.

--
Join us on Slack: https://join.slack.com/t/votca/signup
---
You received this message because you are subscribed to the Google Groups "votca" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/votca/CAHG27e7HBLFOvAfFO-9EZYz27L4bBSvUG9%3DbZEM%3D86e3FRK2JA%40mail.gmail.com.
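[For reference, the <gromacs> section with the suggested <mdrun><opts> nesting applied would look roughly like this. This is a sketch reconstructed from the fragment quoted in the thread; the attached settings.xml may contain additional options not shown here:]

```xml
<inverse>
  <!-- 200*0.00831451 gromacs units -->
  <kBT>1.66290</kBT>
  <!-- use gromacs as simulation program -->
  <program>gromacs</program>
  <!-- gromacs specific options -->
  <gromacs>
    <!-- trash so many frames at the beginning -->
    <equi_time>20</equi_time>
    <!-- grid for table*.xvg -->
    <table_bins>0.002</table_bins>
    <!-- cut the potential at this value (gromacs bug) -->
    <pot_max>1000000</pot_max>
    <!-- extend the tables to this value -->
    <table_end>2.5</table_end>
    <!-- the dotted option path cg.inverse.gromacs.mdrun.opts
         maps to this nested element, not to <mdrun.opts> -->
    <mdrun>
      <opts>-nt 12</opts>
    </mdrun>
  </gromacs>
</inverse>
```

[The key point from the thread: a dotted option name like cg.inverse.gromacs.mdrun.opts denotes a path of nested XML elements, so each dot becomes one more level of nesting rather than a literal tag name.]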
