Hi,
My objective is to use a Poisson-equation-based mean-field approximation for
the electrostatic interactions, so I need to figure out how to achieve this
in GROMACS.
I understand PME solves the Poisson equation for the long-range component, in
addition to the direct Coulomb sum for the pairs within a cutoff.
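For reference, a typical PME setup in the .mdp file looks roughly like the
following; the cutoff and grid-spacing values here are only placeholders, not
a recommendation:
  coulombtype     = PME
  rcoulomb        = 1.0     ; real-space cutoff (nm)
  fourierspacing  = 0.12    ; reciprocal-space grid spacing (nm)
  pme-order       = 4       ; B-spline interpolation order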
Hi,
I am wondering which non-bonded kernel is used for water simulations when
GROMACS is built with the SIMD=None option. Is it the plain-C kernel or a
water-specific, highly optimized kernel? I understand that there are highly
optimized kernels specific to water; are they still used even if SIMD=None?
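By the SIMD=None option I mean a build configured roughly like this, with all
other options left at their defaults:
  cmake .. -DGMX_SIMD=None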
Thanks,
Hi,
I am trying to simulate a confined water system with GROMACS 5.0.5; however,
I am getting a segmentation fault. Below are the gdb backtrace and the build
info.
Thanks,
Sikandar
Backtrace:
#0 0x7767d8dd in _mm_loadu_pd (tab_coul_F=0x7fffe6217100,
tab_coul_V=0x7fffe621b560,
Hi,
I need to set up a simulation of an LJ system such that the neighbor list is
built using the grid algorithm, with a cut-off of lj_cut_off + 0.1, every 20
timesteps.
So I am wondering what the correct settings in the grompp.mdp file should be.
After reading the manual, I understand rvdw must be =
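Based on that, the settings I have in mind are roughly the following. This is
only my guess from the manual, using the group cutoff scheme, with
lj_cut_off = 1.0 nm just as a placeholder:
  cutoff-scheme = group
  ns-type       = grid   ; grid-based neighbor search
  nstlist       = 20     ; rebuild the neighbor list every 20 steps
  rlist         = 1.1    ; neighbor-list cutoff = lj_cut_off + 0.1
  rvdw          = 1.0    ; LJ cutoff (lj_cut_off)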
Thanks Mark. That helps.
--
Sikandar
On Mon, Jul 28, 2014 at 1:58 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:
On Mon, Jul 28, 2014 at 8:27 PM, Sikandar Mashayak symasha...@gmail.com
wrote:
Hi,
I need to set up a simulation of an LJ system such that the neighbor list is
built using
, Sikandar Mashayak symasha...@gmail.com wrote:
Thanks Szilárd.
I am a bit confused about the -resethway and -resetstep options. Do they
exclude the time spent on initialization and load-balancing from the total
time reported in the log file, i.e., is the time reported the total time
spent
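In other words, for a benchmark run I would invoke something like the
following (the file name and step number are placeholders):
  gmx mdrun -deffnm bench -resethway
  # or reset the counters at an explicit step instead:
  gmx mdrun -deffnm bench -resetstep 5000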
Hi
I am running a benchmark test with the GPU. The system consists of simple
LJ atoms, and I am running only a very basic simulation with the NVE
ensemble, without writing any trajectories or energy values. My grompp.mdp
file is attached below.
However, in the time accounting table in the md.log, I
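Roughly, the attached .mdp is along these lines (these values are
placeholders, not the exact attachment):
  integrator = md
  dt         = 0.002
  nsteps     = 50000
  nstxout    = 0      ; no coordinate output
  nstvout    = 0      ; no velocity output
  nstfout    = 0      ; no force output
  nstenergy  = 0      ; no energy output
  tcoupl     = no     ; NVE: no thermostat
  pcoupl     = no     ; NVE: no barostat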
Thanks Mark. The -noconfout option helps.
--
Sikandar
On Thu, Jul 24, 2014 at 3:25 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:
On Fri, Jul 25, 2014 at 12:12 AM, Sikandar Mashayak symasha...@gmail.com
wrote:
Hi
I am running a benchmark test with the GPU. The system consists of simple
Hi
I am wondering: if I have a system of simple LJ atoms with no charge on
them, will GROMACS still compute the Coulomb interactions?
Thanks,
Sikandar
Hi,
I am checking out the GPU performance of GROMACS 5.0 on a single node of a
cluster.
The node has two 8-core Sandy Bridge Xeon E5-2670 CPUs and two NVIDIA K20X
GPUs.
My question is: is there a restriction on how many MPI ranks can be used per
GPU?
I observe that I could only perform mdrun
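For example, what I am trying is along these lines (the rank and thread
counts here are just an illustration):
  # 4 thread-MPI ranks, 4 OpenMP threads each, two ranks mapped to each GPU
  gmx mdrun -ntmpi 4 -ntomp 4 -gpu_id 0011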
Hi
I am trying to run a simulation with more than 99,999 atoms in a conf.gro
file.
However, when I run grompp, I get an error:
Fatal error:
Something is wrong in the coordinate formatting of file conf.gro. Note that
gro is fixed format (see the manual)
As suggested in the online manual, I am
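For reference, each atom line of a .gro file follows the fixed C format given
in the manual for the position part,
  %5d%-5s%5s%5d%8.3f%8.3f%8.3f
i.e. five characters each for the residue number, residue name, atom name,
and atom number, followed by the x, y, z coordinates, so atom numbers above
99,999 no longer fit their five-character column.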
Thanks Chandan, it worked!
On Tue, Jul 15, 2014 at 8:56 AM, Chandan Choudhury iitd...@gmail.com
wrote:
Hi Sikandar,
Did you try resetting your atom number to 1 as it reaches 100,000?
I hope this would resolve the issue.
Chandan
On Tue, Jul 15, 2014 at 3:43 PM, Sikandar Mashayak symasha
-number field. Is that the
question you meant to ask? :-) In practice, in making such a file, you
should wrap from 99999 or whatever to 0, because GROMACS only ever checks
to see that the value changes.
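Schematically, the wrapped column then reads down the file as
  ... 99998 99999     0     1     2 ...
since only the change between consecutive values matters.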
Mark
On Mon, Jul 14, 2014 at 10:13 PM, Sikandar Mashayak symasha...@gmail.com
wrote:
Hi,
I am trying to build with GPU acceleration, and I get the following build
error. However, if I set GMX_GPU=OFF and keep MPI=ON, it compiles.
Could anyone please suggest how to resolve the issue?
Thanks,
Sikandar
/home/projects/cuda/6.0.37/bin/nvcc
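The configuration I am using is along these lines; the CUDA path is the one
on my cluster and the rest is illustrative:
  cmake .. -DGMX_MPI=ON -DGMX_GPU=ON \
           -DCUDA_TOOLKIT_ROOT_DIR=/home/projects/cuda/6.0.37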