I apologize for the confusion, but I found my error: I was failing to
request a certain number of cpus-per-task, and the scheduler was having
trouble assigning the threads because of this. Speed is now at ~400 ns/day
with a 3 fs timestep, which seems reasonable.
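(For anyone who finds this thread later: the fix was in the job script's
resource request, not in the mdrun line. A minimal sketch, assuming SLURM
and the 6 threads mentioned earlier in the thread; adjust for your scheduler:)

#SBATCH --ntasks=1          # one MPI rank
#SBATCH --cpus-per-task=6   # reserve all 6 cores so mdrun can pin its OpenMP threads
srun gmx_gpu mdrun -deffnm my_tpr -ntomp 6 -pin on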
Thanks for all the help,
Dan
On Wed,
Szilárd,
I think I must be misunderstanding your advice. If I remove the domain
decomposition and set pin on as suggested by Mark, using:
gmx_gpu mdrun -deffnm my_tpr -dd 1 -pin on
then I get very poor performance and the following error:
NOTE: Affinity setting for 6/6 threads failed. This can
Hi,
Replying to say thanks and add a few details in case anyone comes across this
thread in a search.
Setting LD_LIBRARY_PATH did the trick. On Ubuntu 17.04 that’s
/usr/lib/gcc/x86_64-linux-gnu.
Also, I had to install gcc-5 and g++-5 again because the CUDA 8 toolkit doesn’t
like version 6,
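(A sketch of the two pieces, for future searchers; paths assume a stock
Ubuntu 17.04 layout and a standard out-of-source CMake build of GROMACS:)

# make the matching libstdc++ visible at run time
export LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu:$LD_LIBRARY_PATH
# point CUDA 8 at gcc-5, since it rejects gcc/g++ 6 as the host compiler
cmake .. -DCMAKE_C_COMPILER=gcc-5 -DCMAKE_CXX_COMPILER=g++-5 \
      -DCUDA_HOST_COMPILER=/usr/bin/g++-5 -DGMX_GPU=ON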
Hi,
I'm interested in simulating a DNA ligase with an AMP bound covalently via
its P atom to the side chain of a lysine residue. I cannot find any
parameters for that. I would really appreciate any help on how to achieve this.
thanks in advance,
Gilberto
Plus, let me emphasize again what Mark said: do not use
domain-decomposition with such a small system! All the overhead you
see comes from the communication you force mdrun to do by running
multiple ranks.
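(As a concrete sketch, assuming your gmx_gpu is a thread-MPI, i.e. non-MPI,
build: forcing a single rank removes domain decomposition and all of its
communication; the thread count just reflects the 6 cores from your log:)

gmx_gpu mdrun -deffnm my_tpr -ntmpi 1 -ntomp 6 -pin on   # one rank, hence no DD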
BTW, the 1.1 us/day number you quote is from a ~6000-atom simulation with a
4 or 5 fs time-step (so
You're choosing 240 ranks for 20 simulations, so that is 12 ranks per
simulation. Start 20 ranks (one per simulation) for your position-restrained
runs, to avoid this historical implementation quirk.
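(A sketch of such a launch for mdrun 5.0.x; the file names are assumptions,
since -multi appends the simulation index to the -s name, so this reads
nvt0.tpr through nvt19.tpr:)

mpirun -np 20 gmx_mpi mdrun -multi 20 -s nvt.tpr   # one rank per simulation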
Mark
On Wed, 24 May 2017 17:15 Debdip Bhandary
wrote:
>
> Dear Users,
> I am
Hi,
I'm wondering why you want 8 ranks on the 14 or 28 cores. The log reports
that something else is controlling thread affinity, which is the easiest
thing to screw up if you are doing node sharing. The job manager has to
give you cores that are solely yours, and you/it need to set the
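(For later searchers, a sketch of one way to arrange this under SLURM; the
batch system in this thread wasn't named, so the numbers are illustrative:)

#SBATCH --ntasks=8
#SBATCH --cpus-per-task=3   # 8 ranks x 3 cores, all exclusively yours
srun gmx_mpi mdrun -ntomp 3 -pin on   # add -pinoffset N if your cores do not start at 0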
Thanks so much for the quick reply. That seems to have fixed the wait time
issues. Unfortunately, I'm still only getting ~300 ns/day for the benchmark
system (villin vsites, http://www.gromacs.org/GPU_acceleration), while the
website claims over 1000 ns/day.
I'm running on an NVIDIA Tesla
Dear Gromacs users,
I have a question on RMSF calculation:
I am not sure how -res works in gmx rmsf, and I could not find any useful
explanation for it.
It will give the average fluctuations per residue, fine, but how exactly?
1) it calculates the RMSF for each atom in that residue first. Then, it
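(For reference, the command in question would look something like the line
below; the file names are placeholders:)

gmx rmsf -s topol.tpr -f traj.xtc -res -o rmsf.xvg   # -res averages over each residue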
Try just using your equivalent of:
mpirun -n 2 -npernode 2 gmx_mpi mdrun (your run stuff here) -ntomp 4 -gpu_id 00
That may speed it up.
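(In that line, -gpu_id 00 maps both ranks on the node to GPU 0, one digit
per rank; with two GPUs per node it would become -gpu_id 01.)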
===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular
Hello,
I'm using GROMACS 5.1.4 on 8 CPUs and 1 GPU for a system of ~8000 atoms in
a dodecahedron box, and I'm having trouble getting good performance out of
the GPU. Specifically, it appears that there is significant performance loss
to wait times ("Wait + Comm. F" and "Wait GPU nonlocal"). I have
Thanks Justin! It worked. All bonds are now in place.
I’ll play with the branching a bit and will open a new ticket if I have a
problem.
Thanks again,
MH
> On May 24, 2017, at 1:04 PM, Justin Lemkul wrote:
>
>
>
> On 5/24/17 12:04 PM, Mohammad Hassan Khatami wrote:
>> Sorry
On 5/24/17 12:04 PM, Mohammad Hassan Khatami wrote:
> Sorry for that. Here are the types.
> atom type
> C1 CC3162
> O4 OC311

O4 should be OC301 for all 1-4 linkages (PRES 14aa, 14ab, 14ba, 14bb in
top_all36_carb.rtf).

-Justin

> C4 CC3161
> C5 CC3163
> C3 CC3161
> H4
Sorry for that. Here are the types.
atom type
C1 CC3162
O4 OC311
C4 CC3161
C5 CC3163
C3 CC3161
H4 HCA1
I checked them, but I was not able to find the exact combinations of atoms
for the angles and dihedrals associated with them in ffbonded.itp.
For example,
On 5/24/17 11:35 AM, Mohammad Hassan Khatami wrote:
On May 24, 2017, at 11:31 AM, Mohammad Hassan Khatami wrote:
Somehow I figured out the problem and fixed it! I changed "O4 +C1" to "O4
2C1" and it worked. I hope it is correct.
Nope! The bonds are not made this
> On May 24, 2017, at 11:31 AM, Mohammad Hassan Khatami wrote:
>
> Somehow I figured out the problem and fixed it! I changed "O4 +C1" to
> "O4 2C1" and it worked. I hope it is correct.
Nope! The bonds are not made this time! That’s why grompp did not complain!
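(Background for anyone who lands here from a search: in a GROMACS .rtp
entry, a leading "+" means "this atom in the next residue", so the original
line is exactly what creates the inter-residue bond. A minimal sketch:)

[ bonds ]
  O4  +C1  ; O4 of this residue bonded to C1 of the following residue

(Renaming +C1 to 2C1 makes pdb2gmx look for an atom literally named 2C1
instead, so the link between residues is never built, which is why grompp
had nothing to complain about.)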
> Now I
Somehow I figured out the problem and fixed it! I changed "O4 +C1" to "O4
2C1" and it worked. I hope it is correct.
Now I have to move on to making the more complicated form of my polymer, and
add a 1->6 connection to the system, like below:
Dear Users,
I am trying to run n simulations in parallel using the -multi
option with mdrun (version 5.0.4). I want to simulate the system at
different conditions in parallel using the '-multi' option. The process
involves an NVT step followed by an NpT step. The job script is as follows:
That should not be needed if this is a default Ubuntu 17.04 install that
comes with gcc 6 out of the box, I think.
--
Szilárd
On Wed, May 24, 2017 at 2:13 PM, Mark Abraham
wrote:
> Hi,
>
> Using a gcc version also entails linking to the standard library that comes
> along with
Hi Justin,
Here are all the errors, with the (atom indices) and the "atom names". Based
on the order of the errors, I think I am making mistakes when I try to modify
the AGLC units to connect them through O4-2C1 (O4 to the C1 of the next
residue). I am somehow sure that I am making
On 5/24/17 8:52 AM, fatemeh ramezani wrote:
Hi dear gmx-users, I am simulating gold surface-protein interaction with the
GOLP-CHARMM force field. After 30 ps of equilibration, I started mdrun with
some frozen atoms in the gold surface and an NVT ensemble at temp=300 K, but
when mdrun starts, the temperature
On 5/23/17 11:06 PM, Sailesh Bataju wrote:
> Making bonds...
> Warning: Short Bond (5-1 = 0.025 nm)
> Warning: Long Bond (5-6 = 0.99084 nm)
> Warning: Long Bond (1-2 = 0.944378 nm)
> Warning: Long Bond (1-3 = 0.895661 nm)
> Warning: Long Bond (1-4 = 0.866986 nm)
Hi dear gmx-users, I am simulating gold surface-protein interaction with the
GOLP-CHARMM force field. After 30 ps of equilibration, I started mdrun with
some frozen atoms in the gold surface and an NVT ensemble at temp=300 K, but
when mdrun starts, the temperature reaches 5.50769e+05 K and mdrun fails. md.mdp
This kind of issue can mean that you moved a compiled binary to quite a
different machine, though I am not sure what you did.
Compile on the same machine you are going to use, or at least one as similar
as possible (same Linux version with the same updates).
On Wednesday, May 24, 2017 3:13 PM, abhisek
Hi,
You should ask the cluster maintainers how they intend the compiler to be
used. You need to find the same infrastructure at run time as you used at
compile time, e.g. by loading the same modules.
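(A sketch of what that means in practice; the module names below are
hypothetical and site-specific:)

# load the same modules in the build shell and in the job script
module load gcc/5.4.0 cuda/8.0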
Mark
On Wed, May 24, 2017 at 2:13 PM abhisek Mondal
wrote:
> Hi,
>
>
Hmm, it might be a problem that you removed the compilers. I wonder what you
have now. If you still have problems, please post the output of this command:
g++ -v && gcc -v
On Wednesday, May 24, 2017 3:14 PM, Mark Abraham
wrote:
Hi,
Using a gcc version also entails linking to
Hi,
Using a gcc version also entails linking to the standard library that comes
along with it, ie libstdc++.so. By default, it will link at run time to the
version that shipped with the package. For any sufficiently new compiler,
that won't work (by design) because the library and the compiler
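(A quick way to check what actually gets resolved at run time; this assumes
a Linux system and a binary named gmx:)

ldd $(which gmx) | grep libstdc++   # prints the libstdc++.so.6 the loader picks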
Hi,
I have been trying to install the latest version of GROMACS (5.1.4)
on my cluster.
The installation went without any error, but whenever I give the command:
gmx_mpi
an error report appears:
gmx_mpi: /lib64/libz.so.1: no version information available (required by
Hi Vytautas,
I should have mentioned that I had deleted the ‘build’ directory and tried the
cmake command again, but I received the same error.
I had installed a large number of different compilers (clang-3.8, clang-4.0,
gcc-5) to get my CUDA installation to work but thought I had removed them
Mark,
I'm not really sure what you mean by technique. I assume NEMD stands
for non-equilibrium MD? That is the case in my simulation.
About my simulation: I am trying to simulate fluid flow between
surfaces. Two atomistic surfaces of gold atoms have been created at
Z=0 and Z=z, where z is the
Ubuntu 17.04 should give you a quite up-to-date gcc and co.
Make sure you try this in a clean folder: remove everything you downloaded
and try again. Maybe some old files are interfering.
On Wednesday, May 24, 2017 4:14 AM, Steffen Graether
wrote:
Hi,
I’ve tried installing