Re: [gmx-users] extra gro file generation

2013-11-05 Thread Mirco Wahab

On 05.11.2013 10:04, sarah k wrote:

I'm going to perform a molecular dynamics simulation on a protein. By
default the simulation gives one final *.gro file. I need to get a .gro
file after each, say, 500 ps of my simulation, in addition to the final file.
How can I do so?

Riccardo already gave the important hints in another posting;
here are some additional explanations.

# first, generate an empty subdirectory in order to keep
# the simulation directory clean. The rm command is
# important if you repeat these steps

$ mkdir -p GRO/ ; rm -rf GRO/*.gro

# then, decide which part of the system you need:
# 0 - everything
# 1 - the protein
# 2 - the cofactor (if any)
# Remember: these numbers correspond to the order of molecules
# named in the .top-file. If your protein is 1 and you
# need only that, do a

echo 1 | trjconv -b 500 -noh -novel -skip 2 -sep -nzero 5 -o GRO/out.gro

# this will dump system part 1 (the protein or whatever),
# starting from 500 ps (-b) and saving every 2nd trajectory snapshot.

For each option to trjconv (-noh, -novel), please read the manual 
(where all of this can be found).
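A quick sanity check of what was written might look like this (a sketch;
it assumes the names used above and that trjconv writes the frame time
into the .gro title line):

 $ ls GRO/out*.gro | wc -l       # number of dumped frames
 $ head -1 GRO/out00000.gro      # title line contains 't= <time>'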


Regards

M.




Re: [gmx-users] Fwd: installing Gromacs4.6.3 on cygwin

2013-09-11 Thread Mirco Wahab

On 11.09.2013 09:38, shahid nayeem wrote:

I checked the folder; the file
/cygdrive/c/packages/gromacs-4.6.3/build/src/contrib/fftw/gmxfftw-prefix/lib/libfftw3.a
exists, but perhaps `src/gmxlib/cyggmx_d-8.dll' is not able to
locate it.


Did you 'cmake' with -DGMX_PREFER_STATIC_LIBS=ON?

BTW, from time to time I install cygwin (out of curiosity)
and install gromacs in it. The current cygwin/64 (gcc 4.8)
combined with gromacs 4.6.3 happened to be the first gromacs
installation on cygwin in many years that is really usable.

I used the fftw3 package that came with cygwin (I didn't
use -DGMX_BUILD_OWN_FFTW=ON) and everything built fine
(you'll need -DGMX_PREFER_STATIC_LIBS=ON). This fftw3 was
built without any optimizations, but if you don't need
PME electrostatics, you don't have to care.

If you need PME, then you'll have to download fftw3
into your cygwin home, build it manually (as Mark proposed),
and install it to /usr/local where gromacs will find it.
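A minimal manual build might look like this sketch (the version number
and download URL are assumptions; the flags give the single-precision,
SSE2-enabled library discussed in this thread):

 $ wget http://www.fftw.org/fftw-3.3.3.tar.gz
 $ tar xzf fftw-3.3.3.tar.gz && cd fftw-3.3.3
 $ ./configure --enable-float --enable-sse2 --prefix=/usr/local
 $ make -j4 && make install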

Regards

M.



Re: [gmx-users] Fwd: installing Gromacs4.6.3 on cygwin

2013-09-10 Thread Mirco Wahab

On 10.09.2013 08:20, shahid nayeem wrote:

I am installing gromacs-4.6.3 on cygwin with the following commands:
tar -xvzf gromacs-4.6.3.tar.gz
cd gromacs-4.6.3
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_DOUBLE=on
It runs fine and writes files into the build directory.
When I run make, it gives the following error:
...
/cygdrive/c/packages/gromacs-4.6.3/src/gmxlib/thread_mpi/impl.h:504:20:
error: field ‘timer_init’ has incomplete type
  struct timeval timer_init;
 ^
src/gmxlib/CMakeFiles/gmx.dir/build.make:3070: recipe for target


The Gromacs file gmxlib/thread_mpi/impl.h is missing the
correct #define for the unixish Cygwin pseudo-OS. You can
add it by inserting

 #define HAVE_SYS_TIME_H

at the very top of the file gmxlib/thread_mpi/impl.h.
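If you prefer doing this from the shell, a GNU sed one-liner should work
(run from the top of the source tree; path as in your error message):

 $ sed -i '1i #define HAVE_SYS_TIME_H' src/gmxlib/thread_mpi/impl.h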

Then the package will probably compile and link, but
mdrun's thread-mpi (tMPI) will not work on Cygwin
(didn't work last time I tried).

So you could do the following: 1) install the Gromacs
package with a normal compilation, and 2) build and
install the OpenMPI version of mdrun (mdrun_mpi).

(1) cmake-options for package:
...
  -DGMX_GPU=OFF \
  -DGMX_PREFER_STATIC_LIBS=ON   \
...

make -j4 install

(delete all files from the build path)

(2) cmake options for mdrun_mpi
...
  -DGMX_GPU=OFF\
  -DGMX_MPI=ON \
  -DGMX_PREFER_STATIC_LIBS=ON  \
...

make -j4 install-mdrun
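Spelled out, step (2) might look like this (the build directory and
install prefix are my assumptions; the options are the ones listed above):

 $ mkdir build-mdrun && cd build-mdrun
 $ cmake ../gromacs-4.6.3 \
     -DGMX_GPU=OFF \
     -DGMX_MPI=ON \
     -DGMX_PREFER_STATIC_LIBS=ON \
     -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs463
 $ make -j4 install-mdrun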

The OpenMPI version (mdrun_mpi) runs reasonably on
Cygwin/64 1.7.25, but not as fast as the native
Windows version (compiled with Visual Studio 10 or 12).
The Windows-compiled version of 4.6.3 is very robust and
allows linking mdrun against CUDA 5.0 (but not 5.5 (+VC12),
for unknown reasons). Then you'll have full GPU support
under Windows.

Regards

M.



Re: [gmx-users] OpenSuse 12.1 + CUDA Installation Error

2013-07-25 Thread Mirco Wahab

On 25.07.2013 04:25, Carlos Bueno wrote:

I added all the repositories and installed everything you told me,
and that solved the problem for one of the computers.
The others still have an error in make:

[ 55%] Building NVCC (Device) object
src/mdlib/nbnxn_cuda/./nbnxn_cuda_generated_nbnxn_cuda.cu.o
/home/cuda2/Programas/gromacs-4.6.3/include/types/nbnxn_pairlist.h(216):
error: identifier "nbnxn_alloc_t" is undefined

/home/cuda2/Programas/gromacs-4.6.3/include/types/nbnxn_pairlist.h(217):
error: identifier "nbnxn_free_t" is undefined

2 errors detected in the compilation of
/tmp/tmpxft_76ad_-11_nbnxn_cuda.compute_20.cpp2.i.
CMake Error at CMakeFiles/nbnxn_cuda_generated_nbnxn_cuda.cu.o.cmake:256
(message):


Can you look in the first part of your cmake log to find out which
gcc version has been used? What does `g++ -v` say?
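Two quick ways to check (CMakeCache.txt is the standard CMake cache file
in your build directory):

 $ grep -E 'CMAKE_C(XX)?_COMPILER' CMakeCache.txt
 $ g++ -v 2>&1 | tail -1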

M.



[gmx-users] Any chance to get mkl linked in 4.6.3 by any wizardry? (Linux/Intel64)

2013-07-25 Thread Mirco Wahab

I read the report (http://redmine.gromacs.org/issues/1110)
and tried some combinations. This was my last failing
attempt:


---8<-------------------------------------------------------
#!/bin/sh
export GMXVERSION=gromacs-4.6.3
export GMXTARGET=/opt/gromacs463

MINC=/opt/intel/mkl/include
MLIB=/opt/intel/mkl/lib/intel64
ILIB=/opt/intel/composerxe/lib/intel64
#
cmake ../${GMXVERSION} \
   -DGMX_FFT_LIBRARY=mkl \
   -DMKL_LIBRARIES="${MLIB}/libmkl_intel_ilp64.so;${MLIB}/libmkl_core.so;${MLIB}/libmkl_intel_thread.so;${ILIB}/libiomp5.so" \
   -DCMAKE_CXX_COMPILER=icpc \
   -DCMAKE_C_COMPILER=icc \
   -DMKL_INCLUDE_DIR=${MINC} \
   -DCMAKE_INSTALL_PREFIX=${GMXTARGET} \
   -DGMX_X11=OFF
---8<-------------------------------------------------------

ICC version is 13.1.1 (with mkl included).

4.6.2 worked. Does anybody have a spell for 4.6.3?

Thanks & regards

M.


Re: [gmx-users] Any chance to get mkl linked in 4.6.3 by any wizardry? (Linux/Intel64)

2013-07-25 Thread Mirco Wahab

On 25.07.2013 12:28, Mark Abraham wrote:

What doesn't work about the install guide instructions: 'Using MKL
with icc 11 or higher is very simple. Set up your compiler environment
correctly, perhaps with a command like source /path/to/compilervars.sh
intel64 (or consult your local documentation). Then set
-DGMX_FFT_LIBRARY=mkl when you run CMake.'


Wow! I just checked this and, what a nice surprise, it works.

Since when has it been unnecessary to specify the MKL libraries
explicitly in order to get the correct linking?
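For the archive, the working route is then just (the compilervars.sh
path is an assumption for my setup; the rest is as quoted above from
the install guide):

 $ source /opt/intel/composerxe/bin/compilervars.sh intel64
 $ cmake ../gromacs-4.6.3 -DCMAKE_C_COMPILER=icc \
       -DCMAKE_CXX_COMPILER=icpc -DGMX_FFT_LIBRARY=mkl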

Thank you

M.


Re: [gmx-users] OpenSuse 12.1 + CUDA Installation Error

2013-07-24 Thread Mirco Wahab

On 24.07.2013 04:08, Carlos Bueno wrote:

Hi,
I keep getting errors when I try to install gromacs in OpenSuse 12.1.
I have installed cuda 5.0 and the nvidia cards. I have tried with
different parameters for cmake:


How did you install Cuda5? What did you install and how?

I have one OpenSuSE 12.1 box that works as a cluster head
and serves two gpu-clusters. All software except the
nvidia gpu driver is installed on this box for use by
the nodes.

Installing a gpu-ready OpenSuSE 12.1 involves (for example; YMMV):
 * add community and extra repositories through yast/repositories,
   these involve (here):
   (a) included in yast/repositories (activate only)
Packman Repository
openSUSE BuildService - devel:languages:perl
openSUSE BuildService - devel:languages:python
Education
science
   (b) extra (add manually or by script)
devel:/tools
devel:/libraries:/c_c++
devel:/gcc
   you can do the latter by script:
 #
 #!/bin/sh
 Uri[1]=http://download.opensuse.org/repositories/devel:/tools/openSUSE_12.1/
 Name[1]=devel:/tools
 Uri[2]=http://download.opensuse.org/repositories/devel:/libraries:/c_c++/openSUSE_12.1/
 Name[2]=devel:/libraries:/c_c++
 Uri[3]=http://download.opensuse.org/repositories/devel:/gcc/openSUSE_12.1/
 Name[3]=devel:/gcc
 #
 for i in 1 2 3; do
   zypper --gpg-auto-import-keys ar ${Uri[i]} ${Name[i]}
   zypper modifyrepo --refresh ${Name[i]}
 done
 #

 * install gcc 4.6 / g++ 4.6 through yast
 * install fftw 3.3.3 through yast, look for the following packages:
   gpuclu:~ # rpm -qa |grep fftw
libfftw3-3-3.3.3-5.1.x86_64
fftw3-devel-3.3.3-5.1.x86_64
fftw3-3.3-18.1.3.x86_64
fftw3-threads-3.3-18.1.3.x86_64
fftw3-threads-devel-3.3.3-5.1.x86_64
libfftw3_threads3-3.3.3-5.1.x86_64
 * install blas-devel, lapack-devel, gsl-devel through yast
 * Important: remove all Nvidia-related stuff through yast, then reboot
 * download and install (compile) the gpu driver
   NVIDIA-Linux-x86_64-319.32.run
   and check that it functions properly (after reboot)
 * download and install CUDA from Nvidia:
   cuda_5.0.35_linux_64_suse12.1-1.run

If everything works, the command
  nvidia-smi
should display some meaningful output.



my € 0.05

M.



Re: [gmx-users] Squishing or Stretching Membranes

2013-07-09 Thread Mirco Wahab

On 09.07.2013 16:11, Neha wrote:

I had a question about trjconv. After one of my simulations has ended, I want
to use the final structure file to run some other simulations. However, what
I want to do is run an NVT run using the average box size of the earlier
run. Since the final structure file will most likely not be at the exact
average, I was wondering if I could use trjconv -pbc mol to put all the atoms
in a box either smaller or bigger than the original.


You could dump many structures from the last part of the trajectory, and:
 - take one structure that has all box vectors almost at, but *below*,
   your target size,
 - edit the bottom line of the coordinate file to the desired size,
 - run a few steps of steepest-descent minimization to correct
   overstretched bonds of molecules crossing the PBC,
 - run.

The absolute last configuration of your run has no specific meaning
in a statistical sense. You can use any other configuration from
the equilibrium part of the run.
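A sketch of the first two steps (the start time and file names are
assumptions; the last line of a .gro file holds the box vectors, which
is the line you'd edit):

 $ mkdir -p FRAMES
 $ echo 0 | trjconv -b 8000 -sep -nzero 4 -o FRAMES/f.gro
 $ for f in FRAMES/f*.gro; do echo "$f: $(tail -1 "$f")"; done   # inspect box vectors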

my € 0.02



Re: [gmx-users] Gromacs GPU system question

2013-06-22 Thread Mirco Wahab

On 22.06.2013 17:31, Mare Libero wrote:

I am assembling a GPU workstation to run MD simulations, and I was wondering if
anyone has any recommendation regarding the GPU/CPU combination.
From what I can see, the GTX690 could be the best bang for my buck in terms of
number of cores, memory, and clock rate. But being a dual-GPU card, I was
wondering if there is any latency issue that could make its performance less
favorable with respect to a GTX Titan.
Also, which motherboard/CPU is recommendable for this system?


The most important aspect to consider (by far) is, in my humble
opinion, *your specific workload*:

 - size of the simulation box / number of atoms,
 - specific force field/required integrator (verlet?),
 - handling of long-range electrostatics (PME/RF/plain coulomb).

Furthermore, the effect of the CPU is, imho, much more
pronounced. Remember, mdrun-gpu doesn't 'run' on the
GPU (as, e.g., HOOMD does) but uploads work sets to
the GPU, runs them, and loads the results back. For example: in
one box, I have an AMD FX-8350 and a GTX-660Ti available
for tests, and I didn't see the GPU load going much over 60%,
even with millions of atoms. Here, small differences in the
potential/force field used by the model will probably change
the performance of the GPU-related parts significantly (due
to cut-offs and buffering schemes).

Regards

M.




Re: [gmx-users] Gromacs GPU system question

2013-06-22 Thread Mirco Wahab

On 22.06.2013 22:18, Mare Libero wrote:

The vendor I contacted was pushing for one of the
high-end i7 processors with hyper-threading. But from what I can read,
most MD software doesn't make any use of it. So, using the
multi-core AMD (like your FX-8350) could be a cheaper and more
advantageous option.


Your vendor is, in my opinion, right. The AMD consumer multicores
(Piledriver) aren't true eight-core cpus, but rather resemble
4-core cpus (pairs of cores share a 'module').

For testing a user-defined potential, I once compiled performance
figures over a range of current commodity hardware (available to me).
These are all workstations, usually overclocked somehow by the
students (but only if there's no crash at all in a year ;-)
This is all *without GPU*; only the plain, raw CPU processing
power for Gromacs is measured (last column).

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

 Test case:
 Two coarse-grained implicit-solvent vesicles bumping into each other
 SD integrator
 480,000 particles
 Box (110nm)³
 User-defined potential (rc=0.8225nm)
 dt=0.020ps
  
  CPU                 Arch     Cores    ns/day
  ----------------------------------------------
  - X6/1090T;3.3GHz   SSE2     6C/6T    19.130
  - FX-8350;4.5GHz    AVX_FMA  4M/8T    34.175
  - i7/2600K;4.2GHz   AVX_256  4C/8T    39.073
  - i7/3770K;4.4GHz   AVX_256  4C/8T    41.931
  - i7/3930K;4.2GHz   AVX_256  6C/12T   56.891

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

You can see here that, for pure CPU performance, you can't
really choose anything other than the 6-core i7/3930K.
It costs a few bucks more than the 4-core CPUs but will run
significantly faster for the whole time you use it.


Most of what we do is protein-protein interactions and protein stability
studies with explicit water/ions. One of our projects now has 100,000
atoms in a 100 Ang water box (7,800 protein atoms + 67,000 water). It's
difficult to be more specific on the parameters since each project is
different, but in general we do not deviate much from a standard NPT run.


A 10nm box/75K atoms is not very large. I guess you'd use a time step
of 0.002 ps and a united-atom model + spc or spc/e water? 100ns/day
seems possible with any GPU from the GTX-660 up. If you buy a mighty
GPU (Titan), the question will be: can your n-core CPU saturate such a
fast GPU monster? A good compromise would probably be the GTX-780,
which is a slightly cut-down Titan for half the price, with all options
open.

my € 0.02

M.



Re: [gmx-users] Re: Membrane Runs Crashing

2013-06-19 Thread Mirco Wahab

On 19.06.2013 15:25, Neha wrote:

Here's the full mdp file. Please let me know if you need any more information,
and thank you so much for helping!

dt   = 0.02


Should be 0.03 according to S. Marrink's own remarks
(though that wouldn't change your experience).


nstcomm  = 1


This looks much too conservative ;-)


Pcoupltype   = semiisotropic
tau_p= 1.0  1.0
compressibility  = 5e-6 0.0
ref_p= 0.0  0.0


This (the two compressibility values for XY and Z) means the height
of your box (Z) is not allowed to change, since the Z compressibility
is zero. Is this the intended behaviour?

The ref_p of zero should not pose any real problem here;
MARTINI W is a strongly associating fluid and will freeze
through your box in no time below 300K, even at negative pressure.
In newer Gromacs versions (from 4.6) there seems to be an
additional shift (?) in the potential, so the water phase
transition took another hit and went up to over 320K (with
verlet) and to about 315K (with shift).

I could provide you (*) with a large Martini-DPPC bilayer (~25nm²,
2 x 1330 DPPC), fully solvated in W (98654 W), which is
stable at dt = 0.03 ps even with verlet integration and
semiisotropic Parrinello-Rahman coupling at a ref_p of {0, 0},
{1, 1}, or whatever you like. I made this some time ago for
a student's experiment.

It's (at least for me) remarkably complicated to get such a
simple thing as a Martini bilayer right. You have to
equilibrate the pure-water box (exactly the same box size
as your target system), and you have to make sure NO
waters are left within the bilayer afterwards (after genbox
solvation) - I had to write a script to remove the waters
at the bilayer core explicitly.

Regards

M.



(*) drop me an e-mail




Re: [gmx-users] Re: Membrane Runs Crashing

2013-06-19 Thread Mirco Wahab

Addendum:

On 19.06.2013 23:16, Mirco Wahab wrote:

...
MARTINI W is a strongly associating fluid and will freeze
through your box in no time below 300K, even at negative pressure.
In newer Gromacs versions (from 4.6) there seems to be an
additional shift (?) in the potential, so the water phase
transition took another hit and went up to over 320K (with
verlet) and to about 315K (with shift).
...


I can't leave this without further comment. After reading my own post
(and starting to think about it), I *checked back the results* of
freezing MARTINI water that allegedly changed its freezing point
across different Gromacs versions.

These freezing events really did occur - but they were exclusively
related to equilibration problems, where undercooled (unequilibrated)
membrane patches caused violent density fluctuations in the
coarse-grained water (W) while expanding on the way to their target
temperature.

These high local density fluctuations probably caused the Martini W
to freeze instantaneously on contact with the membrane, which blocked
further membrane expansion. This could be averted by choosing
higher temperatures (+20K).

**
 Checked again with well-equilibrated membranes in water, the Martini
 water (W) *does not freeze* above 300K with Gromacs 4.6.x. The membrane
 properties are in accord with expectations from the model documentation,
 even with verlet integration and GPU usage. The only significant
 difference is the speed, which is now *much* higher.
**

Sorry about the rant in my former posting; my information was
not correct/complete when I wrote it.

M.



Re: [gmx-users] Unable to download Gromacs source tar file

2013-06-08 Thread Mirco Wahab

On 08.06.2013 10:46, Bhamy Maithry Shenoy wrote:


Hi,
Thanks for your reply, but the problem still persists: even now I am not
able to download.


Can you download from this link?

ftp://ftp.gromacs.org/pub/gromacs/




Re: [gmx-users] Unable to download Gromacs source tar file

2013-06-08 Thread Mirco Wahab

On 08.06.2013 14:39, Bhamy Maithry Shenoy wrote:

Can you download from this link?
ftp://ftp.gromacs.org/pub/gromacs/

I could not download from the mentioned link.


Then your provider/router and/or your network
configuration is blocking FTP traffic (port 21),
but not HTTP traffic (port 80).
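One thing worth trying (just a guess; routers often break active-mode FTP
while passive mode still gets through; the file name is an example only):

 $ wget --passive-ftp ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.2.tar.gz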

M.



Re: [gmx-users] Creating a monolayer from Martini bilayer

2013-06-03 Thread Mirco Wahab

On 03.06.2013 18:43, Neha wrote:

I am a new user of Gromacs and am working on lipid simulations with the
Martini forcefield. The Martini website provides a pre-equilibrated DPPC
bilayer, and I was wondering if there is any tool that would allow me to
convert this bilayer into a monolayer. For periodic boundary conditions to
work, I was thinking of a stack that would go like

Water
Lipid Heads
Tails
Vacuum
Tails
Lipid Heads
Water


Is this the 128 DPPC system from
http://md.chem.rug.nl/cgmartini/index.php/downloads/example-applications/66-dppc-membrane
(2x 64 DPPC + 2000 W)?


Is there any way to split apart the provided bilayer using some combination
of Gromacs tools and introduce a space of at least 10 nm which is apparently
the distance needed for the tails to not interact with each other?


The layers are, afaics, somewhat interdigitated and placed in a box
normal to z, with a z height of 10nm and centered at about 5nm. They
are not sorted (i.e., it's not simply first 64 lower, second 64 upper layer).


If there is no way of simply splitting the bilayer, what would you recommend
for creating a simulation of lipid monolayers from a single DPPC molecule? I
feel like I could use genconf but it might require too much equilibration. I
am hopeful that there is some way of simply working with the provided
bilayer.


What I would do: write a simple script that reads 12 lines (one molecule)
at a time and checks the z coordinate of the NC3 bead (first entry of each
12-line record). Then decide:
 - if NC3[z] > 6, add 1.0 to all 12 z coordinates (nm)
 - if NC3[z] < 4, keep them as they are
Write the 12 lines back to another file, either modified or unmodified,
then proceed with the next 12 lines/molecule (128 times in total).
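A rough awk sketch of that 12-line-record pass (the thresholds, the fixed
.gro z columns 37-44, and the file names are assumptions from this thread;
it also assumes only the lipid block is piped through, with the waters
getting their own pass as described next):

 awk '
   NR <= 2 { print; next }                 # keep .gro title and atom-count lines
   { rec[++n] = $0 }
   n == 12 {                               # one DPPC = 12 beads, NC3 first
     z = substr(rec[1], 37, 8) + 0         # z coordinate of the NC3 bead
     for (i = 1; i <= 12; i++)
       if (z > 6) {
         zi = substr(rec[i], 37, 8) + 1.0
         printf "%s%8.3f%s\n", substr(rec[i], 1, 36), zi, substr(rec[i], 45)
       } else print rec[i]
     n = 0
   }
   END { for (i = 1; i <= n; i++) print rec[i] }  # flush the trailing box line
 ' dppc_lipids.gro > dppc_split.gro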

Then, the waters. This is somewhat arbitrary. Maybe you could check
each water's (W) z coordinate: if it is below 5, keep it; if above 5,
move it up. This will, of course, require some steepest-descent
minimization afterwards.






Re: [gmx-users] About Compilation error in gromacs 4.6

2013-05-28 Thread Mirco Wahab

On 28.05.2013 13:39, vidhya sankar wrote:

cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs4.6 -DGMX_DOUBLE=ON 
-DGMX_BINARY_SUFFIX=_d


please try

cmake ..  -DGMX_CPU_ACCELERATION=SSE4.1 
-DCMAKE_INSTALL_PREFIX=/usr/local/gromacs4.6 -DGMX_DOUBLE=ON 
-DGMX_BINARY_SUFFIX=_d


I remember having encountered a problem with older gccs and AVX.

M.



Re: [gmx-users] About Warnings in Mdrun

2013-05-28 Thread Mirco Wahab

On 28.05.2013 17:11, vidhya sankar wrote:

As you mailed me, I have compiled gromacs 4.6.
I have installed it using the command as posted in the mail.
But I have an AMD 8-core Black Edition.
When I run mdrun I see a warning:
Note: file tpx version 73, software tpx version 83
Using 8 MPI threads
Compiled acceleration: SSE4.1 (Gromacs could use AVX_128_FMA on this machine,
which is better)

What is the meaning of the above statement?
How can I raise the performance?


1) The meaning of the above statement:
Your machine is a newer AMD FX with an FMA (fused multiply/add)
execution unit. Your Gromacs is compiled for older architectures
with SSE (streaming SIMD extensions) version 4.1.
You cannot use the AVX unit properly unless you upgrade your
gcc compiler to 4.7.x (maybe 4.6.x will work too); the older
gcc is not able to generate the AVX instructions properly.
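After a compiler upgrade, a rebuild selecting the FMA kernels might look
like this (the acceleration option is the one used elsewhere in this
thread; the other options mirror your original command):

 cmake .. -DGMX_CPU_ACCELERATION=AVX_128_FMA \
   -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs4.6 -DGMX_DOUBLE=ON -DGMX_BINARY_SUFFIX=_d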

2) How to raise the performance:
The performance difference between AVX and SSE4.1 on an
AMD FX-8350 is marginal (I have an 8350 too) and can be
ignored for most real-world scenarios. The speedup would
be 3%-5% with 4.6.1 for now, depending on the scenario
(chosen integrator, system size, electrostatics).

Much higher performance gains are to be expected from
properly configuring your system (cut-offs, integration
method, etc.). This is an advanced topic in itself; please
read this
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
and this
http://www.gromacs.org/Documentation/Cut-off_schemes

Regards

M.



Re: [gmx-users] Fwd: Static compilation of gromacs

2013-05-15 Thread Mirco Wahab

On 15.05.2013 06:41, Андрей Гончар wrote:

I know, but on the target machine there is gcc version 4.1, and on
the gromacs site they say that this version is broken and 4.5 should be
used instead. So I try to compile it on a machine with gcc 4.5.


Андрей, if there is *no fftw3f on the target machine* and
if it's an old system that will keep running as-is until
thrown out, then you could go for the /'tough admin'/ solution.

Contact the admin/root of the target machine and have him copy
the files that show up after:

   $ cd /usr
   $ du -a | grep lib64/libfftw3

to the same location (/usr/lib64) on the target machine. This
will most probably work fine (I did so in many cases).

Another variant: Put these files into your user directory
on the target machine (/home/andrey/fftw3) and point
LD_LIBRARY_PATH to this directory by issuing (bash):

$ LD_LIBRARY_PATH=/home/andrey/fftw3:$LD_LIBRARY_PATH  mdrun -v

Regards,

M.



Re: [gmx-users] where can be obtain circled lipids bilayer?

2013-05-02 Thread Mirco Wahab

On 02.05.2013 18:32, Albert wrote:

I've got a question: where can one obtain a circled lipid bilayer,
like the one shown here:
http://wwwuser.gwdg.de/~ggroenh/membed/vesicle.png


As has already been said by others, this is not really a circled
lipid bilayer but rather a lipid vesicle of very small size
(maybe 10nm-15nm diameter, guessing from the layer thickness).

There are indeed circular lipid bilayer structures possible, but they
are not found very frequently. See here for example:
  Seifert, U. Vesicles of toroidal topology. Phys Rev Lett 1991, 66:2404-2407
  Fourcade, B., Mutz, M., Bensimon, D. Experimental and theoretical
  study of toroidal vesicles. Phys Rev Lett 1992, 68:2551-2554


To build a (spherical) vesicle, you would usually start from one lipid
molecule of your choice, which can be downloaded from one of the lipid
libraries (e.g. http://www.nyu.edu/pages/mathmol/library/lipids/ or
elsewhere). You'd orient it in some defined way, say parallel to
the z axis of your coordinate system. After this, you write a small
script that creates points distributed on an outer sphere (outer
vesicle layer) and on an inner sphere (inner vesicle layer), separated
by the length of about two molecules. Onto these points, you'd copy
your single molecule, rotated by the angles from your z axis to the
direction of the point. For the inner sphere, you'd add another 180°
(so that the lipid tails point inwards). That's it. This structure has
to be minimized and then simulated with caution (meaning: a very small
time step for the first round).
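For the 'points distributed on a sphere' part, a golden-angle spiral is
a simple recipe. A small sketch that just prints N roughly uniform points
scaled to radius R (N, R, and the output format are placeholders):

 awk -v N=500 -v R=8.0 'BEGIN {
   pi = atan2(0, -1)
   ga = pi * (3 - sqrt(5))                 # golden angle in radians
   for (k = 0; k < N; k++) {
     z = 1 - 2.0 * (k + 0.5) / N           # z evenly spaced in (-1, 1)
     r = sqrt(1 - z * z)                   # ring radius at that z
     t = ga * k
     printf "%10.4f %10.4f %10.4f\n", R*r*cos(t), R*r*sin(t), R*z
   }
 }'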

M.







Re: [gmx-users] Re: cygwin_mpi_gmx installation

2013-04-13 Thread Mirco Wahab

On 12.04.2013 20:20, Szilárd Páll wrote:

On Fri, Apr 12, 2013 at 3:45 PM, 라지브간디 ra...@kaist.ac.kr wrote:

Can cygwin recognize the CUDA installed in Win 7? If so, how do I link them?


Good question; I've no idea whether it can, as I myself have never
built GROMACS with CUDA on cygwin, nor have I heard of anyone else
doing that. What I can safely state is that the native Win builds with
non-cygwin CMake and MSVC as the compiler do work with a variety of
generators: nmake, ninja, and VS.

However, it would be very useful to know whether/how it is possible to
detect CUDA with CMake and build GROMACS with GPU acceleration on
cygwin. Perhaps someone else on the list with more cygwin experience
could help out with tips or even try to build with CUDA.


CUDA compilation cannot use cygwin's gcc, afaik, for now. It *might*
be possible to *link* something Win64-CUDA into Cygwin64, but to my
knowledge, nobody has succeeded at this so far.

Résumé: no CUDA acceleration for Gromacs under Cygwin32/Cygwin64
(which corresponds to -DGMX_GPU=OFF).


cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=C:\Program Files\NVIDIA GPU 
Computing Toolkit\CUDA


The whitespaces will need special treatment; AFAIR, putting quotes
around the path (or simply copying the directory to C:\CUDA) should
work. Alternatively, you could try to put nvcc in your path, then
CMake should be able to do the path-handling magic.


gcc and spaces in paths are a show stopper. The toolchain usually
doesn't work well (if at all) if spaces are present in paths.

Today I checked the new Cygwin64, the distribution that contains gcc
4.8.0 and fftw3f 3.3.3, with gromacs. *It builds fine* (aside from the
known warnings from gcc 4.8.0). This is the first time I've seen a
working 64-bit gromacs under Cygwin. The provided fftw3f doesn't have
a SIMD mode, but hey ..

cmake -DCMAKE_INSTALL_PREFIX=/opt/gromacs461 -DGMX_PREFER_STATIC_LIBS=ON \
      -DGMX_GPU=OFF ../gromacs-4.6.1


Regards

M.



Good News for Cygwin users (was: Re: [gmx-users] gromacs 4.6.1 on win7?)

2013-04-02 Thread Mirco Wahab

On 01.04.2013 14:58, 라지브간디 wrote:

I tried to install version 4.6.1 through cygwin and got the following
error using this command:


In the last weeks of March 2013, significant progress has been
made on the cygwin packages. Since April 1st, there is even a
64-bit build including gcc 4.8 (!) available:
ftp://ftp.gwdg.de/pub/linux/sources.redhat.com/cygwin/64bit/
(other mirrors: http://cygwin.com/mirrors.html)

In the usual 32-bit build, gcc 4.7.2, which compiles
gromacs fine, has been available since mid-March. You can choose
from gcc 4.3.x, 4.5.x, and 4.7.2 under gcc-4.

If I get some time left, I'll test the 64-bit gcc-4.8
system and report back.

Regards

M.



Re: [gmx-users] Gromacs-4.6 installation on cygwin problem

2013-03-13 Thread Mirco Wahab

On 13.03.2013 20:11, Mark Abraham wrote:

I think that post is saying you want

cmake -DCMAKE_INSTALL_PREFIX=/usr/local/lib -DGMX_GPU=OFF
-DGMX_BUILD_OWN_FFTW=OFF ../gromacs-4.6

and to make sure your PATH contains /usr/local/lib. I haven't tried it. I
don't know whether source-ing GMXRC in the usual way takes care of the
latter for you.


I refreshed the related information and instructions today and attach
them to this posting (avoiding line breaks in commands).

Sourcing GMXRC doesn't add /usr/local/lib to the PATH.

Under cygwin, you'll have to craft your own path, partly because
you have to make sure that no installed windows tools of the
same name are picked up during compilation and installation.
I ended up with the following path, set up in ~/.bashrc

[.bashrc]
 ...
 export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/cygdrive/c/Windows/system32:/cygdrive/c/Windows:/cygdrive/c/Windows/System32/Wbem:/cygdrive/c/Windows/System32/
 # set some other useful vars
 export LD_LIBRARY_PATH=/usr/local/lib
 export LIBRARY_PATH=/usr/local/lib
 export CPATH=/usr/local/include
 # prepare for Gromacs
 if [ -e /opt/gromacs461/bin/GMXRC.bash ] ; then
   source /opt/gromacs461/bin/GMXRC.bash
 fi
 ...
 
Log in/out to activate the changes.

Then, in order to get a working gromacs, you'll have to
clear two more hurdles: one significant, one simple.
The built-in gcc compiler in cygwin is 4.5.3, which will
compile Gromacs, but only without AVX instructions. The
fftw3 in cygwin is double precision; you'll need a
single-precision, SSE2-optimized static library (anything
using the dll crashed).
 
Install gcc 4.7.2 - follow the instructions at
http://matpack.de/cygwin/index.html
very closely. This will give you a working gcc 4.7.2 on cygwin.
 ** for the installation of GMP, MPFR, and MPC,
 follow the instructions at http://matpack.de/cygwin/index.html
 ** BUT use /different/ options for the gcc configuration and compilation;
 do the following:
   cd ~
   tar -xf gcc-4.7.2.tar.bz2
   rm -rf gcc-build/; mkdir gcc-build/; cd gcc-build/
   ../gcc-4.7.2/configure --enable-languages=c,c++,fortran --enable-static \
     --enable-shared --enable-shared-libgcc --disable-__cxa_atexit --with-gnu-ld \
     --with-gnu-as --enable-libgomp --enable-threads=posix
   make -j 4
   make install
  
Before installing fftw3f, you'll have to change
the make version from 3.82 to 3.81 (do this in the
cygwin setup interface; both versions are available).
Then, do the following:
  cd ~
  mkdir -p fftw3; cd fftw3
  tar xzf fftw-3.3.3.tar.gz
  rm -rf fftw-3-build/; mkdir fftw-3-build/; cd fftw-3-build/
  ../fftw-3.3.3/configure --enable-float --enable-shared --enable-openmp \
    --enable-sse2 --with-our-malloc
  make -j 4
  make install

If all went fine, you may try to install gromacs with static linkage
(otherwise it will crash on dll calls, e.g. of fftw3f):
  cd ~
  tar xzf gromacs-4.6.1.tar.gz
  rm -rf gromacs-build/; mkdir gromacs-build/; cd gromacs-build/
  cmake -DGMX_OPENMP=ON -DGMX_PREFER_STATIC_LIBS=ON \
    -DCMAKE_PREFIX_PATH=/usr/local -DCMAKE_INSTALL_PREFIX=/opt/gromacs461 \
    -DGMX_GPU=OFF ../gromacs-4.6.1
  make -j 4
  make install

Then, after re-login, the command `which mdrun` should show
  /opt/gromacs461/bin/mdrun

By chance, I installed 4.6.1 today and, so far, it just works. If
anybody is interested, I could post performance figures in comparison
to a Win64/VC10/SDK7.1 build.

Regards

M.

Re: [gmx-users] Gromacs-4.6 installation on cygwin problem

2013-03-13 Thread Mirco Wahab

On 13.03.2013 19:31, neshat haq wrote:

It would be a great help if you could elaborate the steps.
I am installing gromacs-4.6 on my desktop, mostly for analysis
purposes.


Neshat, if you have an x64 windows machine, I'd recommend building
a windows-64 version of Gromacs. You can do this with Visual
Studio 10 Express (no cost) plus the Visual Studio SDK 7.1 (no cost).

This will give you
- a stable and fast 64-bit executable
- a working GPU-computation-capable (verlet) mdrun,
  if you happen to have a recent Nvidia graphics card
  and CUDA5 installed.

If you have the SDK 7.1 installed, open the native x64 command
prompt (Windows SDK 7.1 x64 Release Win7), cd to the directory
above the unpacked Gromacs directory, and type:

mkdir gromacs-build
cd gromacs-build

cmake -G "Visual Studio 10 Win64" ^
 -DCMAKE_INSTALL_PREFIX=D:/Gromacs461 ^
 -DCMAKE_PREFIX_PATH=D:/Usr/x64 ^
 -DFFTWF_LIBRARY=D:/Usr/x64/lib/libfftwf-3.3.lib ^
 -DGMX_GPU=ON ^
  ..\gromacs-4.6.1

followed (on success) by:

devenv Gromacs.sln /build Release ^
   /project ALL_BUILD /projectconfig Release ^
   /project INSTALL

The last line of the build process should read "0 errors" or similar.
Then, you should set up the environment variables necessary
for Gromacs - this works via a registry script:

File Gromacs461-reg-D.reg

---8<--- cut here -----------------------------------------
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Environment]
"GMXBIN"="D:\\Gromacs461\\bin"
"GMXLIB"="D:\\Gromacs461\\share\\gromacs\\top"
"GMXDATA"="D:\\Gromacs461\\share\\gromacs"
"GMXLDLIB"="D:\\Gromacs461\\lib"
"GMXMAN"="D:\\Gromacs461\\share\\man"
---8<-------------------------------------------------------

And the Gromacs bin directory has to be added
to your windows PATH:

  PATH=%PATH%;D:\Gromacs461\bin

This can be done conveniently with a tool (look up RapidEE).

Of course, you'll need a Windows-built fftw3f lib there
too (e.g., located in D:/Usr/x64/lib). That is another story,
but I could mail you one ;-)

Regards

M.


Re: [gmx-users] GPU warnings

2012-12-11 Thread Mirco Wahab

On 11.12.2012 16:04, Szilárd Páll wrote:

It looks like some gcc 4.7-s don't work with CUDA, although I've been using
various Ubuntu/Linaro versions, most recently 4.7.2 and had no
issues whatsoever. Some people seem to have bumped into the same problem
(see http://goo.gl/1onBz or http://goo.gl/JEnuk) and the suggested fix is
to put
#undef _GLIBCXX_ATOMIC_BUILTINS
#undef _GLIBCXX_USE_INT128
in a header and pre-include it for nvcc by calling it like this:
nvcc --pre-include undef_atomics_int128.h


The same problem occurs in SuSE 12.2/x64 with its default 4.7.2
(20120920).

Another possible fix on SuSE 12.2: install the (older) gcc repository
from 12.1/x64 (with lower priority), install gcc/g++ 4.6 from there
as an alternative compiler, and select the active gcc through the
update-alternatives --config gcc mechanism. This works very well.
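For the record, the mechanism looks roughly like this (paths and
priorities are assumptions for a typical setup):

 update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 40
 update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 70
 update-alternatives --config gcc    # interactively pick 4.6 for the CUDA build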

Regards

M.



Re: [gmx-users] some hardware questions

2012-05-02 Thread Mirco Wahab

Hello Peter,

On 02.05.2012 17:44, Peter C. Lai wrote:


You can wait for Ivy Bridge, then stick some Kepler GPUs (nvidia gtx
680) in it. That should max out the performance. asm has been pretty much
stagnant for general-purpose procs since core2 came out.


Is this true?

To my knowledge, the Fermi GPU (GF-110, e.g. GTX-580) is
a 16-processor (streaming multiprocessor, SM) system, each
processor having 32 cores running at 1.5 GHz, while the
Kepler-1 (GK-104, e.g. GTX-680) is an 8-processor system
with 192 cores per processor at 1GHz.
Because each SM on the Fermi has more L1 cache than
each SM on the Kepler-1, and because it might be harder
to saturate 192 cores in a compute scenario, I'd expect
the 580 (GF-110) to be significantly(?) faster in the
next Gromacs (4.6). Maybe somebody has tested this already.

(Here in Germany, I can buy a GTX-580/3GB for ~325€ + tax.)

Regards,

M.



Re: [gmx-users] Regarding trajectory file

2012-01-18 Thread Mirco Wahab

On 18.01.2012 07:31, Ravi Kumar Venkatraman wrote:

Dear All,
I have run a 10 ns production run for chloranil in a 500-methanol solvent box. I
want to get the coordinates of solvent and solute at different time steps from
the trajectory file (*.xtc). Can anybody tell me how to extract the details
using


In the directory of the xtc file, the (2nd) command

 $  mkdir GRO
 $  trjconv -noh -novel -skip 1 -sep -nzero 4 -o GRO/out.gro

will save every single frame of your simulation as
a .gro file in the subdirectory GRO/.

If you don't want every single frame, use
-skip 2 in the above command to get every
second frame.

mwa


[gmx-users] Number/name of the last frame in a trajectory

2012-01-11 Thread Mirco Wahab

For checking the id of the last frame contained
in a trajectory, I usually apply ugly command-line
hacks like:

 echo 0 | trjconv -dump 1e10 -o /tmp/x.gro 2>&1 >/dev/null | perl -lne 'print $1
if /t=(\d+)$/'

which might look awe-inspiring at first but is of course tiresome.
Did I overlook the appropriate trjconv option for this task?

Why would you need that? E.g. for saving the last snapshot
from a trajectory under the name of its time step.
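(A possible alternative, since gmxcheck also walks the whole trajectory:
something like

 gmxcheck -f traj.xtc 2>&1 | grep -i 'last frame'

might do, though I'm citing the 'Last frame' output line from memory.)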

Thanks in advance,

M.


Re: [gmx-users] CygWin and Gromacs 4.5.5

2011-11-13 Thread Mirco Wahab

On 13.11.2011 02:10, Mirco Wahab wrote:

I was able to build Gromacs 4.5.5 (32 bit) on Cygwin (gcc 4.5.3)
w/o threads


For a better understanding of the usability of
cygwin-compiled Gromacs under Windows, I performed
some more tests on my desktop box.

First, I tried to install recent versions of MPICH2 or
OpenMPI in cygwin to create an mdrun_mpi binary. Neither
MPI implementation compiles under cygwin alone
anymore (both require MSVC tools). Therefore, I tested
the single-threaded mdrun version under cygwin. The result
is compared with Gromacs compiled w/other toolchains on
the same desktop machine. To compare architectures,
runtime results on a cluster node (i7) are provided
for the same simulated system.

A. Simulated system
--
50nm Box w/coarse grained solvent (MARTINI W) and
10K coarse grained lipids (MARTINI DPPC), total
number of particles ~ 1 Mio. No Ewald or PME, simple
electrostatics w/shifted coulomb and rlist=1.3nm,
large dt=0.04, v-rescale and isotropic Berendsen
pcoupl. Large, but simple system. Gromacs 4.5.5
has been compiled from source as good as possible.
The GFlops values have been copied from mdrun output
ending after 2000 steps (or 200 steps for cygwin-mdrun).

B. Box@Home: PhenomII/X6 3.4GHz, 4GB DDR3, Windows 7 Pro/x64

No  GFlops  System

1 | 11.3 | Win64 native (MSVC 2010 SP1 + SDK 7.1 SP1), 6 Threads, 3x2x1
2 | 11.1 | SuSE 12.1rc (GCC 4.6) as VMWare Guest OS, 6 Threads, 3x2x1
3 | 0.49 | CygWin 1.7.9-1/Setup 2.738 (GCC 4.5.3), Single Thread, no DD

For the cygwin run, the mdrun.exe process was pinned
to one cpu core and its priority was set to 'high'. These
optimizations didn't change the outcome at all. Maybe
on a Win32 box (instead of Win64) the situation is different?

C. For comparison: i7/2600K @4.2GHz, 4GB DDR3, SuSE 11.3/x64

No  GFlops  System

 1 | 14.0 | GCC  4.5.1 (4 Threads, 4 x 1 x 1)
 2 | 14.3 | ICC 12.0.4 (4 Threads, 4 x 1 x 1)
 3 | 19.6 | GCC  4.5.1 (8 Threads, 4 x 2 x 1)
 4 | 19.9 | ICC 12.0.4 (8 Threads, 4 x 2 x 1)


Regards,

M.


Re: [gmx-users] CygWin and Gromacs 4.5.5

2011-11-12 Thread Mirco Wahab

On 10.11.2011 15:28, Szilárd Páll wrote:

On Tue, Nov 8, 2011 at 11:59 PM, Mark Abraham mark.abra...@anu.edu.au wrote:

Actually I don't think this issue has been addressed. Some NUMA-aware
thread_mpi stuff does not work under Cygwin, and code added since 4.5.4
assumes that it does. I can find no reason to support that assumption.

To work around, use configure --disable-threads.


This seems to be correct for the current situation
(CygWin 2.738 + Gromacs 4.5.5).

I was able to build Gromacs 4.5.5 (32 bit) on Cygwin (gcc 4.5.3)
w/o threads:

 $ tar xzf gromacs-4.5.5.tar.gz
 $ mkdir build; cd build
 $ LDFLAGS="-L/usr/local/lib/ -llapack -lblas -lgfortran" \
   ../gromacs-4.5.5/configure \
   --disable-threads --with-fft=fftw3 --with-gsl \
   --with-external-lapack --with-external-blas
 $ make install

This worked *after* I compiled and installed fftw3:

 $  ./configure --with-our-malloc16 --enable-threads \
--enable-float --enable-sse
 $ make -j 4

and Lapack 3.3.1 from source on this cygwin installation
(the packaged version won't work due to missing s* linkage,
as far as I understood).

Performance on the system I tested is rather poor.
I used a membrane system w/10^6 particles (Martini/NPT,
coulombtype=Shift); the main memory used was ~300MB.
This was tested on a PhenomII/X6 @3400MHz.
(The same system is also running under Linux
on an i7/2600K @4.2GHz):

4.5.5/32b | Cygwin version:   0.5 GFlops (single process only)
4.5.5/64b | Windows version: ~10 GFlops  (6 threads)
4.5.5/64b | Linux/icc:       ~20 GFlops  (8 threads)



I'd be surprised. Why should MSVC outperform gcc?


That statement was based on my previous experience which, admittedly, might
be outdated. I don't remember the exact details, but from what I recall,
I had to fiddle quite a lot with gcc optimizations to get the
performance close to MSVC. One detail might be important: the code I was
working on is a C/C++ mix with quite a lot of ++ in it.

Anyway, to get a better picture, it would be nice if people running GROMACS
on Windows could share their experience/performance numbers.


The Linux-compiled (both gcc and icc) versions *and* the
native Windows-64 report after startup:
...
Configuring nonbonded kernels...
Configuring standard C nonbonded kernels...
Testing x86_64 SSE2 support... present.
...

whereas, the Cygwin-compiled version says:
...
Configuring nonbonded kernels...
Configuring standard C nonbonded kernels...
Testing ia32 SSE2 support... present.
...

So I have no idea why it would be that slow
(even scaled by 6, or with processor affinity set) compared
to the msvc-64 (2010) compiled version. There's enough
memory and no disk activity during the tests.


Regards,

M.


Re: [gmx-users] Regarding Gromacs Algorithms

2011-11-02 Thread Mirco Wahab

On 02.11.2011 13:38, Ravi Kumar Venkatraman wrote:

Could anybody suggest some books, notes, or other material
that would help me understand how the gromacs algorithms work,
i.e. how neighbor group search works, etc.? I kindly request
suggestions other than the gromacs manual.


There are many good books and papers on the topic, as Justin already noted;
it'd be hard to give a comprehensive overview.

IMHO, a rather in-depth treatment of practical problems
related to how neighbor group search works (and more) is given
in:
http://www.springer.com/mathematics/computational+science+%26+engineering/book/978-3-540-68094-9
(Numerical Simulation in Molecular Dynamics, Griebel, Knapek, Zumbusch)

In the TOC of this book:
http://www.springer.com/cda/content/document/cda_downloaddocument/9783540680949-t1.pdf?SGWID=0-0-45-554207-p173713766
you'll find your question treated in Chap. 3


Regards

M.


Re: [gmx-users] Link to Intel MKL (fftw) via cmake options

2011-10-31 Thread Mirco Wahab

On 24.10.2011 23:23, Szilárd Páll wrote:

I've just realized that both you and the similar report you linked to
were using CMake 2.8.3. If you don't succeed could you try another
CMake version?


I could replicate the error with the simple cmake invocation you proposed in
your reply:


cmake ../gromacs-4.5.5 -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/tmp/gromacs-4.5 &&
make mdrun -j4 && make install-mdrun


This fails w/cmake 2.8.3 as before.

Then I installed cmake 2.8.6 on the same system, cleaned the
build path, and re-ran the build.

Your suspicion was correct: *it now works* (w/2.8.6).

So 2.8.3 messes up the build process independent
of the specific tool chain; maybe this could be
added as a warning to the compilation instructions.

BTW, I even managed to get a win64 (multithreaded,
non-MPI) executable displaying respectable performance
by using windows-cmake, Visual Studio 2010 SP1,
the Visual Studio SDK 7.1 SP1 (x64), and Nasm-win.

Thank you very much for your help.

M.


Re: [gmx-users] Link to Intel MKL (fftw) via cmake options

2011-10-18 Thread Mirco Wahab

On 17.10.2011 05:18, Mark Abraham wrote:

On 17/10/2011 7:04 AM, Mirco Wahab wrote:

On 10/16/2011 2:25 PM, Mark Abraham wrote:

On 15/10/2011 9:02 PM, Mirco Wahab wrote:

On 10/15/2011 1:15 AM, Mark Abraham wrote:

I use
...

...

...

OK, I can understand that. But if the options (-mtune ***, -msse2)
are no longer available with the current free Intel compiler suites,
shouldn't the cmake definitions be adapted to this fact in order
to avoid loads of compiler warnings?


If someone can identify a way to detect old and new, free and non-free compiler
suites, then we might consider it. The reality is that GROMACS spends the heavy
majority of its time in loops written mostly in assembly code (using SSE or
SSE2 if applicable), or in
the FFT library. Compiler performance makes a negligible contribution to the
performance of the rest, so it is really not worthwhile maintaining complex
compiler tweaks.


OK,


I'll attach the error messages (err.msg, 5.9 KB).
- cmake version 2.8.3
- gcc 4.5.1 x64 (Linux) (20101208)


With cmake 2.8.2 and icc 12.1.0 20110811, using the aforementioned flags, I get
a smooth installation of the git version of 4.5.5. Moreover, I don't even get
.../build/src/gmxlib/CMakeFiles/CMakeRelink.dir
being created. So I think there is something idiosyncratic about your tool
chain that is at the root of this.


I replicated the error on another machine and have already answered this
in a response to Szilárd. I really don't know what the reason is; maybe
I'm doing something really wrong. The only ray of hope: w/autoconf,
everything works fine, even giving one the subjective impression of
'closely controlling' the build process ;-)

Thanks & regards

M.


Re: [gmx-users] Link to Intel MKL (fftw) via cmake options

2011-10-16 Thread Mirco Wahab

On 10/16/2011 2:25 PM, Mark Abraham wrote:

On 15/10/2011 9:02 PM, Mirco Wahab wrote:

On 10/15/2011 1:15 AM, Mark Abraham wrote:

I use
...

...
//Flags used by the compiler during all build types
CMAKE_CXX_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99 '

//Flags used by the compiler during release builds.
CMAKE_CXX_FLAGS_RELEASE:STRING=-mtune=itanium2 -mtune=core2 -O3 -DNDEBUG

//Flags used by the compiler during all build types
CMAKE_C_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99 '

//Flags used by the compiler during release builds.
CMAKE_C_FLAGS_RELEASE:STRING=-mtune=itanium2 -mtune=core2 -O3 -DNDEBUG

...
These are obviously the wrong flags for the detected architecture;
sse2 is no longer available, and so are the mtune architectures.
The correct options for the actual compiler for Intel64 would read:
CMAKE_CXX_FLAGS:STRING=' -msse3 -ip -funroll-all-loops -std=gnu99 '
CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
with some option-warnings but without error.


Sure. GROMACS will not benefit noticeably from the SSE3 instruction set,
so we don't bother looking for it.


OK, I can understand that. But if the options (-mtune ***, -msse2)
are no longer available with the current free Intel Compiler suites,
shouldn't the cmake definitions be adapted to this fact in order
to avoid loads of compiler warnings?


But the install is broken. On `make install-mdrun`, the scripts would
remove any library from src/gmxlib/CMakeFiles/CMakeRelink.dir
and bail out with the error below. Even if you copy the libraries
by hand to CMakeRelink.dir/, they'll get removed by make install-mdrun
before trying to link with them.
...

That looks very weird. What cmake version? What does `make install-mdrun
VERBOSE=1` say?


I'll attach the error messages (err.msg, 5.9 KB).
 - cmake version 2.8.3
 - gcc 4.5.1 x64 (Linux) (20101208)

Thanks & regards

Mirco





/usr/bin/cmake -H/home/carlo/Gromacs/gromacs-4.5.5 -B/home/carlo/Gromacs/build 
--check-build-system CMakeFiles/Makefile.cmake 0
make -f CMakeFiles/Makefile2 install-mdrun
make[1]: Entering directory `/home/carlo/Gromacs/build'
/usr/bin/cmake -H/home/carlo/Gromacs/gromacs-4.5.5 -B/home/carlo/Gromacs/build 
--check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /home/carlo/Gromacs/build/CMakeFiles 68
make -f CMakeFiles/Makefile2 src/kernel/CMakeFiles/install-mdrun.dir/all
make[2]: Entering directory `/home/carlo/Gromacs/build'
make -f src/gmxlib/CMakeFiles/gmx.dir/build.make 
src/gmxlib/CMakeFiles/gmx.dir/depend
make[3]: Entering directory `/home/carlo/Gromacs/build'
cd /home/carlo/Gromacs/build && /usr/bin/cmake -E cmake_depends "Unix 
Makefiles" /home/carlo/Gromacs/gromacs-4.5.5 
/home/carlo/Gromacs/gromacs-4.5.5/src/gmxlib /home/carlo/Gromacs/build 
/home/carlo/Gromacs/build/src/gmxlib 
/home/carlo/Gromacs/build/src/gmxlib/CMakeFiles/gmx.dir/DependInfo.cmake 
--color=
make[3]: Leaving directory `/home/carlo/Gromacs/build'
make -f src/gmxlib/CMakeFiles/gmx.dir/build.make 
src/gmxlib/CMakeFiles/gmx.dir/build
make[3]: Entering directory `/home/carlo/Gromacs/build'
make[3]: Nothing to be done for `src/gmxlib/CMakeFiles/gmx.dir/build'.
make[3]: Leaving directory `/home/carlo/Gromacs/build'
/usr/bin/cmake -E cmake_progress_report /home/carlo/Gromacs/build/CMakeFiles  
14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 
40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61
[ 70%] Built target gmx
make -f src/mdlib/CMakeFiles/md.dir/build.make 
src/mdlib/CMakeFiles/md.dir/depend
make[3]: Entering directory `/home/carlo/Gromacs/build'
cd /home/carlo/Gromacs/build && /usr/bin/cmake -E cmake_depends "Unix 
Makefiles" /home/carlo/Gromacs/gromacs-4.5.5 
/home/carlo/Gromacs/gromacs-4.5.5/src/mdlib /home/carlo/Gromacs/build 
/home/carlo/Gromacs/build/src/mdlib 
/home/carlo/Gromacs/build/src/mdlib/CMakeFiles/md.dir/DependInfo.cmake --color=
make[3]: Leaving directory `/home/carlo/Gromacs/build'
make -f src/mdlib/CMakeFiles/md.dir/build.make src/mdlib/CMakeFiles/md.dir/build
make[3]: Entering directory `/home/carlo/Gromacs/build'
make[3]: Nothing to be done for `src/mdlib/CMakeFiles/md.dir/build'.
make[3]: Leaving directory `/home/carlo/Gromacs/build'
/usr/bin/cmake -E cmake_progress_report /home/carlo/Gromacs/build/CMakeFiles  
85 86 87 88 89 90 91 92 93 94 95 96 97
[ 89%] Built target md
make -f src/kernel/CMakeFiles/gmxpreprocess.dir/build.make 
src/kernel/CMakeFiles/gmxpreprocess.dir/depend
make[3]: Entering directory `/home/carlo/Gromacs/build'
cd /home/carlo/Gromacs/build && /usr/bin/cmake -E cmake_depends "Unix 
Makefiles" /home/carlo/Gromacs/gromacs-4.5.5 
/home/carlo/Gromacs/gromacs-4.5.5/src/kernel /home/carlo/Gromacs/build 
/home/carlo/Gromacs/build/src/kernel 
/home/carlo/Gromacs/build/src/kernel/CMakeFiles/gmxpreprocess.dir/DependInfo.cmake
 --color=
make[3]: Leaving directory `/home/carlo/Gromacs/build'
make -f src/kernel/CMakeFiles

[gmx-users] oops, s/gcc 4.5.1/icc 12.1.0/

2011-10-16 Thread Mirco Wahab

On 10/16/2011 10:04 PM, Mirco Wahab wrote:

- gcc 4.5.1 x64 (Linux) (20101208)


with the said gcc, all builds work fine; the problem
arises with the Intel suite (icc/icpc 12.1.0).

Sorry for the typo.

M.


Re: [gmx-users] Link to Intel MKL (fftw) via cmake options

2011-10-15 Thread Mirco Wahab

On 10/15/2011 1:15 AM, Mark Abraham wrote:

I use

ccmake .. \
  -DGMX_FFT_LIBRARY=mkl \
  -DMKL_LIBRARIES="${MKL}/lib/em64t/libmkl_intel_thread.so;${MKL}/lib/em64t/libmkl_lapack.so;${MKL}/lib/em64t/libmkl_core.so;${MKL}/lib/em64t/libmkl_em64t.a;${MKL}/lib/em64t/libguide.so;/usr/lib64/libpthread.so" \
  -DMKL_INCLUDE_DIR=${MKL}/include \
  -DGMX_MPI=ON \
  -DGMX_THREADS=OFF


Thanks for your hints, I made it now through `cmake` with:

--- 8< --- [cut here] ---

GMXVERSION=gromacs-4.5.5
GMXTARGET=/opt/gromacs455

MINC=/opt/intel/composerxe/mkl/include
MLIB=/opt/intel/composerxe/mkl/lib/intel64
ILIB=/opt/intel/composerxe/lib/intel64
LLIB=/usr/lib64

export CXX=icpc
export CC=icc

cmake ../$GMXVERSION \
  -DGMX_FFT_LIBRARY=mkl \
  -DMKL_LIBRARIES="${MLIB}/libmkl_intel_ilp64.so;${MLIB}/libmkl_core.so;${MLIB}/libmkl_intel_thread.so;${ILIB}/libiomp5.so;${LLIB}/libpthread.so" \
  -DMKL_INCLUDE_DIR=${MINC} \
  -DGMX_MPI=OFF \
  -DGMX_THREADS=ON \
  -DCMAKE_INSTALL_PREFIX=${GMXTARGET} \
  -DGMX_X11=OFF \
  -DGMX_BINARY_SUFFIX=_t

---

On `make`, there are still some compiler-flag-related
problems. The cmake scripts invoked by the command
above identified the Intel64 compiler and guessed
'almost correct' optimization and code-generation options:
(Compiler: /opt/intel/composer_xe_2011_sp1.6.233/bin/intel64/ic*)

--- [CMakeCache.txt] -

...

//Flags used by the compiler during all build types
CMAKE_CXX_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99  '

//Flags used by the compiler during release builds.
CMAKE_CXX_FLAGS_RELEASE:STRING=-mtune=itanium2 -mtune=core2  -O3 -DNDEBUG

//Flags used by the compiler during all build types
CMAKE_C_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99  '

//Flags used by the compiler during release builds.
CMAKE_C_FLAGS_RELEASE:STRING=-mtune=itanium2 -mtune=core2  -O3 -DNDEBUG

...



These are obviously the wrong flags for the detected architecture;
sse2 is no longer available, and so are the mtune architectures.

The correct options for the actual compiler for Intel64 would read:

   CMAKE_CXX_FLAGS:STRING=' -msse3 -ip -funroll-all-loops -std=gnu99 '
   CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG

   CMAKE_C_FLAGS:STRING=' -msse3 -ip -funroll-all-loops -std=gnu99  '
   CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
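
(In principle, the corrected flags can also be injected on the cmake
command line instead of editing CMakeCache.txt by hand; an untested
sketch, reusing exactly the values above:

$ cmake ../gromacs-4.5.5 \
    -DCMAKE_C_FLAGS=' -msse3 -ip -funroll-all-loops -std=gnu99 ' \
    -DCMAKE_C_FLAGS_RELEASE='-O3 -DNDEBUG' \
    -DCMAKE_CXX_FLAGS=' -msse3 -ip -funroll-all-loops -std=gnu99 ' \
    -DCMAKE_CXX_FLAGS_RELEASE='-O3 -DNDEBUG'
)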


Even with the wrong options, the `make` would eventually succeed
with some option-warnings but without error.

But the install is broken. On `make install-mdrun`, the scripts would
remove any library from src/gmxlib/CMakeFiles/CMakeRelink.dir
and bail out with the error below. Even if you copy the libraries
by hand to CMakeRelink.dir/, they'll get removed by make install-mdrun
before trying to link with them.

--- [cat errmsg.txt] -

[ 70%] Built target gmx
[ 89%] Built target md
[ 98%] Built target gmxpreprocess
[100%] Built target mdrun
[100%] Installing mdrun
-- Install configuration: Release
-- Install component: libraries
CMake Error at /x/y/Gromacs/build/src/gmxlib/cmake_install.cmake:38 (FILE):
   file INSTALL cannot find
   /x/y/Gromacs/build/src/gmxlib/CMakeFiles/CMakeRelink.dir/libgmx.so.6.



Still broken with the new Intel Compiler but probably close ;-)

Thanks and regards,

r^b



[gmx-users] Link to Intel MKL (fftw) via cmake options

2011-10-14 Thread Mirco Wahab

Dear Gromacs users,

I'm trying to build an Intel-MKL-linked version of
gromacs 4.5.5 and can't figure out how to do that
using cmake.

There are already some (new) instructions on the GROMACS
web site regarding Intel icc, but none on how to use the
corresponding MKL.

On my system, the Intel 2011 (non commercial) compiler suite
is installed in:
  CCDIR=/opt/intel/composer_xe_2011_sp1.6.233/bin/intel64

and its MKL is located here:
  MKL_FFTW_INCLUDE=$MKLROOT/include/fftw
  MKL_FFTW_LIBDIR=$MKLROOT/lib/intel64

From the examples given in the Gromacs docs (web), I came up
with the following (non-working) cmake invocation:

--- 8< --- [cut here] ---

export GMXVERSION=gromacs-4.5.5
export CCDIR=/opt/intel/composer_xe_2011_sp1.6.233/bin/intel64
export MKL_FFTW_INCLUDE=$MKLROOT/include/fftw
export MKL_FFTW_LIBDIR=$MKLROOT/lib/intel64
export CXX=icpc
export CC=icc

cmake ../$GMXVERSION \
  -DGMX_FFT_LIBRARY=mkl  \
  -DFFTW3F_INCLUDE_DIR=$MKL_FFTW_INCLUDE \
  -DMKL_LIBRARIES=$MKL_FFTW_LIBDIR   \
  -DCMAKE_INSTALL_PREFIX=/opt/gromacs455 \
  -DGMX_X11=OFF  \
  -DCMAKE_CXX_COMPILER=${CCDIR}/icpc \
  -DCMAKE_C_COMPILER=${CCDIR}/icc\
  -DGMX_MPI=OFF  \
  -DGMX_PREFER_STATIC_LIBS=OFF

-- [ereh tuc] --- >8 ---

Of course, the setting *-DMKL_LIBRARIES*=$MKL_FFTW_LIBDIR
is wrong; the specific libraries are expected here.
But how? I didn't find a way to set the
LDFLAGS correctly.
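
(As it turned out later in this thread, the expected form is a
semicolon-separated list of the individual library files, quoted
so the shell doesn't eat the semicolons; e.g., with MLIB, ILIB
and LLIB set as in the follow-ups:

-DMKL_LIBRARIES="${MLIB}/libmkl_intel_ilp64.so;${MLIB}/libmkl_core.so;${MLIB}/libmkl_intel_thread.so;${ILIB}/libiomp5.so;${LLIB}/libpthread.so"
)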

Before cmake was prioritized, a simple 'configure' call
would have done it:

--- 8< --- [cut here] ---

MINC=/opt/intel/composerxe/mkl/include
MLIB=/opt/intel/composerxe/mkl/lib/intel64
ILIB=/opt/intel/composerxe/compiler/lib/intel64

../gromacs-4.5.4/configure \
  CC=icc CXX=icpc \
  CPPFLAGS="-I${MINC}" \
  LDFLAGS="-L${ILIB} -L${MLIB} -lmkl_intel_ilp64 -lmkl_core \
           -lmkl_lapack95_ilp64 -lmkl_blas95_ilp64 \
           -lmkl_intel_thread -liomp5 -lpthread" \
  --prefix=/opt/gromacs454 \
  --with-fft=mkl \
  --with-external-blas \
  --with-external-lapack \
  --without-x \
  --program-suffix=_t

-- [ereh tuc] --- >8 ---


How can this be done using cmake?


Thank you in advance.

r^b


Re: [gmx-users] On multi-core PCs and gromacs installation

2011-07-04 Thread Mirco Wahab

I will be installing gromacs 4.5.x in another computer but this time with four 
cores. The PC runs in windows and I will be using cygwin.
... Do I still need to install MPI using cygwin?


Probably not, but I haven't tested threading on Cygwin.


I just did a test for fun and it worked remarkably well,
even on a Cygwin 1.7 + Win7/x64 box. It's very simple
using 'make'; I didn't check cmake (seems more complicated
here).

The following sequence will lead to a fully functional
Gromacs 4.5.4 for the CYGWIN_i686 target on Windows:

Install Cygwin 1.7.9 from its page, add
 + gcc/g++ 4.x and 3.x (but not mingw-gcc variants)
 + make/cmake
 + lapack, lapack-devel, lapack-libs
 + fftw3, *-devel, *-libs
 + gsl, *-devel, *-libs
 + wget, tar, vim
 + bash, rxvt
(do not install the 'pthread library stub' entry)
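
(The same selection can usually be scripted via setup.exe in quiet
mode; a sketch only -- the package names below are guessed from the
list above and may not match the repository exactly:

$ ./setup.exe -q -P gcc4,gcc4-g++,make,cmake,lapack,liblapack-devel,fftw3,libfftw3-devel,gsl,wget,tar,vim,bash,rxvt
)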

Open a cygwin bash shell, create a gromacs base directory
and change to it(!):
$ mkdir gromacs; cd gromacs

Download source and unpack it:
$ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.5.4.tar.gz
$ tar xzf gromacs-4.5.4.tar.gz

Write a fancy build control file:
$ cat > mk_gromacs.sh
#!/bin/sh
../gromacs-4.5.4/configure CC=gcc CXX=g++ \
LDFLAGS="-L/usr/lib64 -llapack -lblas -lpthread" \
--prefix=/usr/local/gromacs454 \
--with-fft=fftw3 \
--with-external-blas \
--with-external-lapack \
--with-gsl

  if [ $? -eq 0 ]; then
     make -j 4
     if [ $? -eq 0 ]; then
         echo ""
         echo "Success!"
         echo ""
         echo "now: ==> make install (as root) / exit"
     fi
  fi

Modify the number in 'make -j 4' to match the number of
your processor cores, which is the output of:
$ echo $NUMBER_OF_PROCESSORS
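
(Where GNU coreutils' nproc is available, the same number can be
obtained without the environment variable; just a convenience sketch
for the make line in the script above:

$ make -j $(nproc)
)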

Create a build directory and change to it(!):
$ mkdir build; cd build

Run your control script (see above):
$ sh ../mk_gromacs.sh

Wait and check error messages during configuration (if any);
otherwise, go get a large cup of coffee and lie back.

If it's ready (without errors), install it by:
$ make install

Initialize Gromacs environment variables in
your shell by modifying your .bashrc file:

$ vi .bashrc
 - go some lines down, at the start of an empty line
 - press 'i' (insert)
 - insert the following text

  # GROMACS
  if [ -e /usr/local/gromacs454/bin/GMXRC.bash ] ; then
   source /usr/local/gromacs454/bin/GMXRC.bash
  fi

 - press 'Escape' ':' 'x' (to write and close the file)

After leaving the shell and opening a new one,
mdrun, grompp and friends should be available
and working fine.
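
A quick smoke test (nothing more than checking that the binaries
resolve and print their help):

$ which grompp mdrun
$ mdrun -h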


Nothing will do MPI for you. Threading and MPI are complementary approaches to 
achieving parallelism, and which is better depends on your execution 
environment.


On Cygwin, OpenMPI wouldn't even work anymore, as
it requires MS Visual Studio library linkage
nowadays. One could try MPICH2, which compiled
on Cygwin the last time I tried (2 years ago?),
but why? Gromacs threading works perfectly
on Cygwin.
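
E.g., a thread-parallel run on four cores -- a sketch, assuming a
prepared md.tpr in the current directory:

$ mdrun -nt 4 -deffnm md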

Regards

M.