at. GROMACS automatically looks for the right version of g++ to find the
proper version of libstdc++.
If you are referring to a different error, please be more specific and post the
exact error.
Roland
BS”D
Dear All,
I would like to compile Gromacs 2020 with an intel compiler; I set the
following:
-DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DGMX_MPI=on
-DGMX_BUILD_OWN_FFTW=OFF -DGMX_FFT_LIBRARY=mkl
-DMKL_LIBRARIES=$MKLROOT/lib/intel64 -DMKL_INCLUDE_DIR=$MKLROOT/include
Despite
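For context, assembled into a single configure command (run from a separate build directory; the ".." path is illustrative), those settings correspond to:
cmake .. \
  -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DGMX_MPI=on \
  -DGMX_BUILD_OWN_FFTW=OFF -DGMX_FFT_LIBRARY=mkl \
  -DMKL_LIBRARIES=$MKLROOT/lib/intel64 -DMKL_INCLUDE_DIR=$MKLROOT/include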
BS”D
Seems that a bash formatted line has made it into the csh version of the GMXRC
file:
setenv GMXTOOLCHAINDIR=${GMXPREFIX}/share/cmake
The “=” should be a space.
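i.e. the line should read:
setenv GMXTOOLCHAINDIR ${GMXPREFIX}/share/cmake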
Harry
Harry M. Greenblatt
Associate Staff Scientist
BS”D
Dear All,
I have now run into this issue with two very different systems (one with 762
protein and 60 DNA residues, the other with 90 protein residues). If I try and
carry over the velocities from the final equilibration step into a production
run, and try to use more than one MPI
mdrun -nsteps 0 -nb cpu -pme cpu a few times).
--
Szilárd
On Mon, Feb 11, 2019 at 11:25 AM Harry Mark Greenblatt
<harry.greenbl...@weizmann.ac.il> wrote:
BS”D
Dear All,
Trying to run a system with about 70,000 atoms, including waters, of a
trimeric protein. Went through minimization, PT and NPT equilibration.
Most of the time, it starts and runs fine. But once in about 5 tries, I get:
12500 steps, 25.0 ps.
BS”D
Dear Michael,
Thermaltake PSUs are better than many out there, but they aren’t the best.
Here is a link discussing how the theoretical wattage rating of a PSU may not
actually translate into enough watts in reality, especially under sustained
load.
BS”D
I had a similar issue with a workstation, and it turned out to be the power
supply, which is why it seemed that someone pulled the plug (at least it seems
to be the PSU, since having replaced it we don’t see failures). The problem
was very intermittent, and did not require any great
BS”D
The newest and most powerful AMD Threadripper, the 2990WX with 32 cores,
coupled with two powerful GPUs, seems quite attractive, but 2 of the CPU dies
are not directly connected to the system memory. Some reviews have warned that
this will impact performance under certain workloads, as
BS”D
Dear Mark,
did the .tpr actually match the trajectory file?
Sorry, that was indeed the problem (mismatch of -f and -s files).
Thanks for the hint
Harry
Harry M. Greenblatt
Associate Staff Scientist
Dept of
BS”D
In a given system with several chains, after minimisation the chains are
split up by PBC. When using trjconv on this file to put all the chains back into
a unified complex, the ions are converted to water molecules.
The input file (-f) and the reference file (-s) had waters and ions
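For reference, a typical call for making the complex whole again looks something like this (file names are illustrative; the key point, as noted elsewhere in this thread, is that the -s .tpr must describe exactly the same system as the -f file):
gmx trjconv -f em.gro -s em.tpr -pbc mol -ur compact -o em_whole.gro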
BS”D
Ok, thanks for the advice,
Harry
On 11 Apr 2018, at 11:21 AM, Alex wrote:
Screening effects in Gromacs come in a rather non-straightforward manner in
terms of data extraction: they certainly exist within the simulations in the
form of
If you plan to extract anything explicitly related to water from your reruns --
very much so. Basically unusable trajectories.
Alex
On 4/11/2018 1:48 AM, Harry Mark Greenblatt wrote:
B”SD
In an effort to reduce the size of output xtc files of simulations of large
systems, we thought of saving
B”SD
In an effort to reduce the size of output xtc files of simulations of large
systems, we thought of saving these files without water molecules.
It occurred to us, however, that upon subsequent cpu-only reruns in order to
do energy calculations, these results would be adversely affected,
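For reference, restricting the compressed output to a non-water group is usually done through the mdp file, along these lines (the group name and output interval are illustrative; the group itself would come from an index file):
compressed-x-grps  = Protein_DNA_Ion
nstxout-compressed = 5000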
BS”D
Someone has pointed out to me that the “Silver” line is not meant for HPC.
For HPC, you need to go with the “Gold” series, even if you don’t want 4
sockets.
The difference presumably lies in the fact that the Gold series has 2 FMA units,
and the Silver series has 1.
Harry
On 20 Feb 2018,
BS”D
I am currently running simulations of a protein-RNA complex. However, I have
to include one Zn ion which is coordinated by 4 cysteine residues. When I
performed energy minimization, the zinc itself displaces. How can I restrain
Zn, or freeze Zn, during the simulations?
You probably should use
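One common way to keep an ion in place (whether or not it is what the truncated reply above goes on to recommend) is a freeze group in the mdp file, with the Zn atom put in its own index group (the group name ZN here is just an example):
freezegrps = ZN
freezedim  = Y Y Y
Position restraints on the Zn atom in the topology are the other usual option.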
BS”D
Did you build with or without hwloc?
I did use hwloc.
—
Gromacs 2018 rc1 (using gcc 4.8.5)
—
Using AVX_256
You should be using AVX2_128 or AVX2_256 on Zen! The former will be fastest
in CPU-only runs, the latter can often be (a bit) faster in GPU
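The SIMD level can be set explicitly at configure time, e.g.:
cmake .. -DGMX_SIMD=AVX2_128
(or AVX2_256), rather than accepting the auto-detected AVX_256.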
BS”D
In case anybody is interested, we have tested Gromacs on a Threadripper machine
with two GPUs.
Hardware:
Ryzen Threadripper 1950X 16 core CPU (multithreading on), with Corsair H100i V2
Liquid cooling
Asus Prime X399-A M/B
2 x GeForce GTX 1080 GPUs
32 GB of 3200MHz memory
Samsung 850 Pro
BS”D
On 17 Oct 2017, at 1:03 PM, Bernhard Reuter wrote:
Dear Harry,
I would assume you are right with your argument that 6/8/10 cores per GPU are
better than, e.g., 5 cores. If I were you I would still carefully test whether you
really have
BS”D
Dear Bernhard and Szilárd,
Since your replies to me are related, I’ll combine my replies to one letter.
You are right for the dual Xeon 16-core setup you (and we) use, but for the
single Core i9 setup our experience was that we only gained between 5-10%
in performance by adding a
BS”D
Dear All,
The new Skylake-X CPUs have very high core counts, and could, in theory,
take the place of a two-socket 16-core Xeon system. One could create a machine
as follows:
1 x Core i9 7960X with 16 cores
2 x GeForce 1080 graphics cards,
and run a simulation with 2 MPI ranks,
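On a single node like that, the two ranks would typically be thread-MPI ranks, each driving one GPU, e.g. (the -deffnm name is illustrative):
gmx mdrun -ntmpi 2 -ntomp 8 -gpu_id 01 -deffnm prod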
BS”D
We have successfully used ZAFF:
Peters, M., et al., J. Chem. Theory Comput. 2010, 6, 2935-2947.
I have the input files from Martin Peters necessary to create additions for the
Amber force fields in Gromacs. Which configuration of Zn do you have?
Harry
On 24 Jul 2017, at 3:18 PM, Erik
BS”D
I am trying to simulate a multichain amino acid structure. PDB ID 2BEG.
I am experiencing problems with the C- and N-terminal residues, i.e. the
C-terminal ALA and the N-terminal LEU.
An abnormal non-bonded interaction keeps forming between atoms C from ALA42
and N from LEU17.
Do you mean from Ala42 of Chain A to
BSD
Dear David and Szilárd
Here are the results from our tests:
Gromacs 5.0 with AVX2 on E5-2699v3
nodes  cores  cores/socket  ns/day  wall time (s)  scaling  ideal scaling  ideal ns/day
1      12     6             20.023  4315.029       1.00     1.00           20.023
1      16     8             24.912  3468.233       1.24     1.33           26.697
1      20
BSD
Dear David,
Hey Harry,
Thanks for the caveat. Carsten Kutzner posted these results a few days ago.
This is what he said :
I never benchmarked 64-core AMD nodes with GPUs. With an 80 k atom test
system using a 2 fs time step I get:
24 ns/d on 64 AMD 6272 cores
16 ns/d on 32 AMD cores
BSD
Dear All,
I was wondering how people deal with the issue of generating special amino
acids in Gaussian, then being left with residual charge upon removing the
blocking groups. The process I follow is:
1. Build the modified amino acid (side chain is non-standard) in Gaussian,
with
BSD
Gromacs is telling you that the -h option you supplied is not valid for this
command. You can see all the valid arguments for each
command here
http://manual.gromacs.org/online.html
Harry M. Greenblatt
Associate Staff Scientist
Dept of Structural Biology
BSD
Hi,
binutils package from this century is required ;-)
Mark
Thank you. Installation of binutils 2.24 has taken care of the issue. Perhaps
mention of the minimum release level of binutils required
on the Installation Instructions page is in order.
Harry
BSD
Dear All,
While trying to run cmake under CentOS 6.6 on a Haswell CPU for Gromacs 5.0.3
using gcc 4.8.3, cmake fails claiming the compiler doesn't support AVX2, but as
far as I know it should...
Cannot find AVX2 compiler flag. Use a newer compiler, or choose AVX SIMD
(slower).
Some
BSD
Dear Mark,
Here is where it fails:
-- Try C++ compiler AVX2 flag = [-mavx2]
-- Performing Test CXX_FLAG_mavx2
-- Performing Test CXX_FLAG_mavx2 - Success
-- Performing Test CXX_SIMD_COMPILES_FLAG_mavx2
-- Performing Test CXX_SIMD_COMPILES_FLAG_mavx2 - Failed
-- Try C++ compiler AVX2 flag
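As the binutils exchange above suggests, this pattern (the -mavx2 flag test passes but the SIMD compile test fails) usually means the assembler rather than the compiler is too old for AVX2; a quick sanity check is something like:
as --version | head -1
and comparing that binutils release against what the GCC version needs.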
BSD
Dear Justin,
The new water molecules are scattered throughout the box. The density of the
system behaves in the following manner:
1. First run of solvate: 7025 waters added, density 994.192 g/l (a bit low??)
2. Run solvate on result of 1: 20 waters added, density 996.828 g/l
BSD
Dear All,
While setting up a system with a small DNA binding domain (30 residues) in
close proximity to 9 bp of B DNA, in a dodecahedron box,
I ran gmx solvate which added about 7,000 water molecules (spce), followed by
addition of 0.125M NaCl.
I then minimized this setup, and ran 100ps
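For context, that setup stage would have been along these lines (file names, and the .tpr used for ion insertion, are illustrative):
gmx solvate -cp boxed.gro -cs spc216.gro -p topol.top -o solvated.gro
gmx genion -s ions.tpr -p topol.top -conc 0.125 -neutral -o solvated_ions.gro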
BSD
Dear Justin,
Yes, this is weird. Where are the new waters added? If they're at the
periphery, there's probably just some incorrect accounting for PBC, in which
case any subsequent run likely shouldn't be stable.
The new water molecules are scattered throughout the box. The density
, 2014 at 10:16 AM, Harry Mark Greenblatt <harry.greenbl...@weizmann.ac.il> wrote:
BSD
Dear All,
In an attempt to save disk space while using longer simulations of a
DNA + Protein + solvent system,
I had mdrun create an xtc file with only the DNA, Protein, and ions,
excluding the water
BSD
Dear Szilárd,
Thanks very much for your reply.
This compiler is very outdated, you should use at least gcc 4.7 or 4.8
for best performance - especially the CPU-only runs should get quite a
bit faster.
Thanks, point taken. The person who compiled this on the test machine just
used what
BSD
Dear All,
I was given access to a test machine with
2 x E5-2630 2.3 GHz 6-core processors
2 x Tesla K20x GPUs
Gromacs 5.0, compiled (gcc 4.4.7) with support for Intel MPI.
Ran a 1ns simulation on a 3-domain (all in one chain) DNA binding protein,
dsDNA, waters, and ions (~32,600 atoms).
BSD
Dear Szilárd,
Thanks for your reply.
Hence, to get a balanced hardware combination (assuming the same input
system and settings), you would need a GPU that's about 2x faster than
the K5000.
Is that a typo? We used a K4000, with half the number of CUDA cores (768) from
what we are
BSD
So a job should run the same or faster on 10 cores at 2.5GHz relative to 6
cores at 3.5GHz?
Thanks for letting me know
Harry
Btw, the 10-core 2.5 GHz Xeon (2670?) will be better price/performance - for
the same price - than the E5-2643V2, at least for GROMACS and in general codes
BSD
Dear All,
I was asked to provide some examples of what we are doing to assess whether
my proposal for a GPU compute node is reasonable
(2 x 3.5 GHz E5-2643V2 hexacore, with 2 x GeForce GTX 770; run two jobs, each
with six cores and 1 GPU). I did some tests on a workstation some time ago
BSD
Dear Mark,
OK. The GROMACS developers intended the user to follow normal practice,
such as doing the build in user space in the file system, and then
installing as root, and installing to somewhere other than the source
tarball location. Unpacking a source tarball in /opt and building from
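In practice that pattern looks something like the following (paths and the -j value are illustrative):
mkdir -p ~/src && tar xzf gromacs-4.6.5.tar.gz -C ~/src
cd ~/src/gromacs-4.6.5 && mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/share/apps/gromacs-4.6.5
make -j 8
sudo make install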
BSD
Hi All,
I am trying to build 4.6.5 for testing, on a Rocks 5.3 cluster (yes it needs
upgrading!). I am using one of the compute nodes to do the compiling, using
the following:
CMAKE_PREFIX_PATH=/share/apps/fftw-3.3.2 /share/apps/cmake-2.8.12.2/bin/cmake ..
BSD
Dear All
I was thinking of configuring some GPU compute nodes with the following:
2 x E5-2643v2 (6 cores each, 12 cores total)
2 x GTX 770.
The idea is to run two Gromacs jobs on each node, each using 6 cores and 1 GPU card.
1) Does this sound reasonable?
2) If so, I am not clear how
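For what it's worth, the usual way to run two concurrent jobs like this is to give each one its own GPU and a non-overlapping, pinned set of cores, e.g. (output names are illustrative):
mdrun -ntmpi 1 -ntomp 6 -pin on -pinoffset 0 -gpu_id 0 -deffnm job1 &
mdrun -ntmpi 1 -ntomp 6 -pin on -pinoffset 6 -gpu_id 1 -deffnm job2 &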