Christoph,
Thank you for the advice.
I have added -nt 12 in the following manner:
<inverse>
<!-- 200*0.00831451 gromacs units -->
<kBT>1.66290</kBT>
<!-- use gromacs as simulation program -->
<program>gromacs</program>
<!-- gromacs specific options -->
<gromacs>
<!-- trash so many frames at the beginning -->
<equi_time>20</equi_time>
<!-- grid for table*.xvg -->
<table_bins>0.002</table_bins>
<!-- cut the potential at this value (gromacs bug) -->
<pot_max>1000000</pot_max>
<!-- extend the tables to this value -->
<table_end>2.5</table_end>
<mdrun.opts>-nt 12</mdrun.opts>
</gromacs>
however, I am getting the same message:
"Running on 1 node with total 28 cores, 28 logical cores"
and also runs on them
"Initializing Domain Decomposition on 28 ranks"
I have attached my settings.xml file to this message (the only change I
have made is the -nt 12 part).
Incidentally, I made the same change with -nt 8 in the methanol tutorial,
and that one worked fine...
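For reference, the two md.log lines I quoted above can be pulled out with a
quick grep. This is only a sketch: it reads a sample log fragment via a
here-doc, and for a real run you would point grep at the log itself (e.g.
step_001/md.log; that path is an assumption based on the step naming in this
thread).

```shell
# Extract the core/rank lines GROMACS prints near the top of md.log.
# A sample fragment is fed via a here-doc; for a real run use e.g.:
#   grep -E "Running on|Domain Decomposition" step_001/md.log
grep -E "Running on|Domain Decomposition" <<'EOF'
Running on 1 node with total 28 cores, 28 logical cores
Initializing Domain Decomposition on 28 ranks
Some other log line
EOF
```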
On Friday, April 2, 2021 at 10:20:55 AM UTC-4 Christoph Junghans wrote:
> On Fri, Apr 2, 2021 at 7:16 AM Satyen Dhamankar <[email protected]> wrote:
> >
> > The gromacs run. It is strange, because the spce IBI did not take as long.
> > I have attached my md.log file. After 2.5 hours, only 11999 steps were
> completed for a 3000 molecule simulation...
> Hmm, that might be more a question for the gromacs user mailing list.
>
> From your md.log, I see that Gromacs detected 28 cores
> "Running on 1 node with total 28 cores, 28 logical cores"
> and also runs on them
> "Initializing Domain Decomposition on 28 ranks"
>
> However, you only requested 12 cores in your batch script:
> #SBATCH --ntasks-per-node=12
>
> If the other cores are used by another user, that could lead to
> performance problems, as gromacs always runs as slow as the slowest
> core.
> gromacs has an option to limit the number of cores, I believe "-nt 12",
> which you can hand over to gromacs from the xml file using
> cg.inverse.gromacs.mdrun.opts.
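> In settings.xml that dotted path corresponds to nested elements, so the
> option would look roughly like this (a sketch, assuming the usual VOTCA
> convention that dots in cg.* option paths denote XML nesting):
>
> <inverse>
>   <gromacs>
>     <mdrun>
>       <opts>-nt 12</opts>
>     </mdrun>
>   </gromacs>
> </inverse>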
>
> Christoph
>
> >
> > On Friday, April 2, 2021 at 8:47:18 AM UTC-4 Christoph Junghans wrote:
> >>
> >> On Thu, Apr 1, 2021 at 21:57 Satyen Dhamankar <[email protected]>
> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I have been running the IBI tutorial for propane. The IBI for propane
> takes quite some time to run; Step_001 has taken more than an hour to
> complete.
> >>>
> >>> This is the slurm script I have used to run this job:
> >>>
> >>> #!/bin/bash
> >>> #
> >>> #SBATCH --job-name=R1.5-CG # create a short name for your job
> >>> #SBATCH --qos=vshort # quality of service
> >>> #SBATCH --nodes=1 # node count
> >>> #SBATCH --ntasks-per-node=12 # number of tasks per node
> >>> #SBATCH --cpus-per-task=1 # cpu-cores per task (>1 if multi-threaded
> tasks)
> >>> #SBATCH --mem=10GB # total memory requested
> >>> ##SBATCH --mem-per-cpu=4G # memory per cpu-core (4G per cpu-core is
> default)
> >>> #SBATCH --gres=gpu:1 # number of gpus per node
> >>> #SBATCH --time=2:30:00 # total run time limit (HH:MM:SS)
> >>> #SBATCH --mail-type=all # send email on job start, end, and fail
> >>> module purge
> >>> module load intel/19.0/64/19.0.5.281 # for running gromacs
> >>> module load rh/devtoolset/7
> >>> module load intel-mkl/2019.5/5/64
> >>> module load cmake/3.x
> >>> # run the job
> >>> set -e
> >>> # keep track of the last executed command
> >>> trap 'last_command=$current_command; current_command=$BASH_COMMAND'
> DEBUG
> >>> # echo an error message before exiting
> >>> #trap 'echo "\"${last_command}\" command failed with exit code $?."'
> EXIT
> >>> bash run.sh
> >>>
> >>> Is it normal for a sample IBI run to take this long?
> >>
> >> Where does it get stuck? In the gromacs run or after that?
> >>
> >> Christoph
> >>>
> >>
> >> --
> >> Christoph Junghans
> >> Web: http://www.compphys.de
> >
>
>
>
> --
> Christoph Junghans
> Web: http://www.compphys.de
>
--
Join us on Slack: https://join.slack.com/t/votca/signup
---
You received this message because you are subscribed to the Google Groups
"votca" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/votca/aeaccc1d-3236-409e-8dcf-2377c079f2dbn%40googlegroups.com.
<cg>
<!-- example for a non-bonded interaction entry -->
<non-bonded>
<!-- name of the interaction -->
<name>A-A</name>
<!-- types involved in this interaction -->
<type1>A</type1>
<type2>A</type2>
<!-- dimension + grid spacing of tables for calculations -->
<min>0.31</min>
<max>1.36</max>
<step>0.01</step>
<inverse>
<!-- target distribution (rdf), just give gromacs rdf.xvg -->
<target>A-A.dist.tgt</target>
<!-- update cycles -->
<do_potential>1 0 0</do_potential>
<!-- additional post processing of dU before added to potential -->
<post_update></post_update>
<!-- additional post processing of U after dU added to potential -->
<post_add></post_add>
<!-- name of the table for gromacs run -->
<gromacs>
<table>table_A_A.xvg</table>
</gromacs>
</inverse>
</non-bonded>
<non-bonded>
<!-- name of the interaction -->
<name>B-B</name>
<!-- types involved in this interaction -->
<type1>B</type1>
<type2>B</type2>
<!-- dimension + grid spacing of tables for calculations -->
<min>0.31</min>
<max>1.38</max>
<step>0.01</step>
<inverse>
<!-- target distribution (rdf), just give gromacs rdf.xvg -->
<target>B-B.dist.tgt</target>
<!-- update cycles -->
<do_potential>0 1 0</do_potential>
<!-- additional post processing of dU before added to potential -->
<post_update></post_update>
<!-- additional post processing of U after dU added to potential -->
<post_add></post_add>
<!-- inverse monte carlo specific stuff -->
<!-- name of the table for gromacs run -->
<gromacs>
<table>table_B_B.xvg</table>
</gromacs>
</inverse>
</non-bonded>
<non-bonded>
<!-- name of the interaction -->
<name>A-B</name>
<!-- types involved in this interaction -->
<type1>A</type1>
<type2>B</type2>
<!-- dimension + grid spacing of tables for calculations -->
<min>0.31</min>
<max>1.36</max>
<step>0.01</step>
<inverse>
<!-- target distribution (rdf), just give gromacs rdf.xvg -->
<target>A-B.dist.tgt</target>
<!-- update cycles -->
<do_potential>0 0 1</do_potential>
<!-- additional post processing of dU before added to potential -->
<post_update></post_update>
<!-- additional post processing of U after dU added to potential -->
<post_add></post_add>
<!-- inverse monte carlo specific stuff -->
<!-- name of the table for gromacs run -->
<gromacs>
<table>table_A_B.xvg</table>
</gromacs>
</inverse>
</non-bonded>
<!-- general options for inverse script -->
<inverse>
<!-- 200*0.00831451 gromacs units -->
<kBT>1.66290</kBT>
<!-- use gromacs as simulation program -->
<program>gromacs</program>
<!-- gromacs specific options -->
<gromacs>
<!-- trash so many frames at the beginning -->
<equi_time>20</equi_time>
<!-- grid for table*.xvg -->
<table_bins>0.002</table_bins>
<!-- cut the potential at this value (gromacs bug) -->
<pot_max>1000000</pot_max>
<!-- extend the tables to this value -->
<table_end>2.5</table_end>
<mdrun.opts>-nt 12</mdrun.opts>
</gromacs>
<!-- these files are copied for each new run -->
<filelist>grompp.mdp topol.top table.xvg table_a1.xvg table_b1.xvg index.ndx</filelist>
<!-- do so many iterations -->
<iterations_max>5</iterations_max>
<!-- ibi: inverse boltzmann, imc: inverse monte carlo -->
<method>ibi</method>
<!-- directory for user scripts -->
<scriptpath>$PWD</scriptpath>
<!-- write log to this file -->
<log_file>inverse.log</log_file>
<!-- write restart step to this file -->
<restart_file>restart_points.log</restart_file>
<!-- imc specific stuff -->
</inverse>
</cg>