Re: [gmx-users] Error detecting AMD GPU in GROMACS 2019.1

2019-02-25 Thread Michael Williams
Hi Szilárd,

I think these two files (from the complex/nbnxn-ljpme-LB test, just as an 
example) fit under the 50 kB size limit. The only additional notice I see (in 
mdrun.out) is: "WARNING: While sanity checking device #1, clBuildProgram did 
not succeed -11: CL_BUILD_PROGRAM_FAILURE"

I’m not sure if it is relevant, but the AMD GPU is elsewhere identified as #0 
while the Intel integrated chip is called GPU #1. Thanks again, and please let 
me know if these were not the output files you were looking for. 

Mike



> On Feb 26, 2019, at 1:56 AM, Szilárd Páll  wrote:
> 
> Michael,
> 
> Can you please post the full standard and log outputs (preferably through
> an external service)? Having looked at the code, there must be additional
> output that tells what type of error occurred during the sanity checks that
> produce the result you show.
> 
> In general, this likely means that the Apple compiler is not happy about
> the new sanity checks we implemented to improve the robustness of the
> OpenCL support.
> 
> --
> Szilárd
> 
> 
> On Sun, Feb 24, 2019 at 7:28 AM Michael Williams <
> michael.r.c.willi...@gmail.com> wrote:

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Residue BGC not found in the database

2019-02-25 Thread Justin Lemkul



On 2/25/19 4:26 PM, mary ko wrote:

  Thank you Justin. I am actually following the protein-ligand tutorial to do 
simulations of my protein-ligand system. I skipped the python 
cgenff_charmm2gmx.py step because the CGenFF version-mismatch error could not 
be fixed. I have BGC, which is beta-D-glucose; I built its .gro file as a 
ligand along with protein-processed.gro, and I copied the ligand into 
protein-processed.gro to get complex.gro. I also use charmm36-nov2018 and 
added it to the force fields, with the BGLC-to-BGC change in merged.rtp. Now 
when I try to build topology.top with pdb2gmx or even x2top, I get an error: 
"atom ot1 in residue Phe was not found in rtp entry Phe with 20 atoms while 
sorting atoms" from pdb2gmx, and x2top could only find a force field type for 
19078 out of 21750 atoms. What do you think I should do to build the 
topology? Thanks.


If you have a protein with BGLC as the ligand, use pdb2gmx and make sure 
there is a TER between the protein and glucose, or otherwise a change in 
chain identifier. Do not use x2top and do not modify merged.rtp (modify 
the residue names in the PDB file instead).
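
For illustration, a minimal sketch of that workflow (the file name and water 
model below are placeholders, not taken from this thread): put a TER record 
(or a new chain identifier) between the last protein residue and the glucose 
in the coordinate file, then let pdb2gmx build the topology for the whole 
complex in one pass:

    gmx pdb2gmx -f complex.pdb -o complex.gro -p topol.top \
        -ff charmm36-nov2018 -water tip3p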


-Justin


 On Saturday, February 23, 2019, 6:50:39 PM EST, Justin Lemkul 
 wrote:
  
  


On 2/20/19 11:19 AM, mary ko wrote:

Dear all,
I face the 'Residue BGC not found in the database' error when I try to build 
the topology file of my protein-ligand system with pdb2gmx -f pro.pdb. I used 
the CHARMM force field, but I tried all the other force fields as well to see 
if the problem could be solved, without success. I searched the list and the 
errors page on the GROMACS website and found that the simplest approach is to 
change the name of this residue. As I am not familiar with protein structure, 
I do not know what changes may help me get past this error. Also, it is said 
that one way is to parameterize the residue, which I do not know how to do. 
Your help would be highly appreciated. Thanks!

If "BGC" is a synonym for beta-D-glucose, that is called BGLC in CHARMM,
so you just need to rename the residue accordingly. If it is some other
unknown species, then yes, you need to parametrize it yourself.
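
(As a concrete but purely illustrative sketch, assuming a plain-text PDB in 
which BGC only occurs as a residue name, the rename could be done with 
something like:

    # replace the residue name BGC with BGLC; the .bak copy keeps the original
    sed -i.bak 's/ BGC / BGLC/g' complex.pdb

Check the edited lines afterwards, since BGLC is one character longer than 
BGC and PDB columns are fixed-width.)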

-Justin



--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] Residue BGC not found in the database

2019-02-25 Thread mary ko
 Thank you Justin. I am actually following the protein-ligand tutorial to do 
simulations of my protein-ligand system. I skipped the python 
cgenff_charmm2gmx.py step because the CGenFF version-mismatch error could not 
be fixed. I have BGC, which is beta-D-glucose; I built its .gro file as a 
ligand along with protein-processed.gro, and I copied the ligand into 
protein-processed.gro to get complex.gro. I also use charmm36-nov2018 and 
added it to the force fields, with the BGLC-to-BGC change in merged.rtp. Now 
when I try to build topology.top with pdb2gmx or even x2top, I get an error: 
"atom ot1 in residue Phe was not found in rtp entry Phe with 20 atoms while 
sorting atoms" from pdb2gmx, and x2top could only find a force field type for 
19078 out of 21750 atoms. What do you think I should do to build the 
topology? Thanks.
On Saturday, February 23, 2019, 6:50:39 PM EST, Justin Lemkul 
 wrote:  
 
 

On 2/20/19 11:19 AM, mary ko wrote:
> Dear all,
> I face the 'Residue BGC not found in the database' error when I try to build 
> the topology file of my protein-ligand system with pdb2gmx -f pro.pdb. I used 
> the CHARMM force field, but I tried all the other force fields as well to see 
> if the problem could be solved, without success. I searched the list and the 
> errors page on the GROMACS website and found that the simplest approach is to 
> change the name of this residue. As I am not familiar with protein structure, 
> I do not know what changes may help me get past this error. Also, it is said 
> that one way is to parameterize the residue, which I do not know how to do. 
> Your help would be highly appreciated. Thanks!

If "BGC" is a synonym for beta-D-glucose, that is called BGLC in CHARMM, 
so you just need to rename the residue accordingly. If it is some other 
unknown species, then yes, you need to parametrize it yourself.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] Error in energy minimization step/could anyone help me

2019-02-25 Thread Dallas Warren
Fatemeh,

Best to keep this on the mailing list. Then others who have a similar issue
can find it later, and others can provide information.

> I want to investigate the adsorption of H2 and also CO on MOFs (metal
> organic frameworks). In the .top file I introduce CO and its number, which
> is 1, but it gives the error "no such molecule type CO". As you suggested, I
> went to the atomtypes.atp file to find atom types for C and H, but as far as
> I can see, the atom type for carbonyl oxygen is O and for bare carbon is C,
> and the others are not related. When I introduce CO this way it gives an
> error, and when I introduce the atoms one by one, C separately and O
> separately, it also gives the error that there is no such atomtype C or O. I
> do not know how I should introduce CO. Could you help me with my problem?

What it sounds like you are missing is the topology file that provides
information on how the molecules are constructed, which is what the first
error you obtained is telling you. You have to provide the program with
information on how the molecule is put together: bonds, forces, etc. That is
done either within the .top file, or via a separate .itp file that is
subsequently included in the .top file.

http://manual.gromacs.org/documentation/current/reference-manual/topologies/topology-file-formats.html#molecule-itp-file
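
As a purely illustrative sketch (the atom type names, charge, mass and bond 
parameters below are placeholders that must be replaced by values from a real 
force field, and the types would have to exist in its atom type definitions), 
a minimal molecule definition and its use in the system topology look roughly 
like this:

    ; co.itp -- hypothetical two-site CO molecule
    [ moleculetype ]
    ; name  nrexcl
    CO      1

    [ atoms ]
    ;  nr  type  resnr  residue  atom  cgnr  charge    mass
        1  C_co      1       CO     C     1   0.000  12.011
        2  O_co      1       CO     O     1   0.000  15.999

    [ bonds ]
    ;  ai  aj  funct   b0      kb
        1   2      1   0.1128  250000.0

    ; in topol.top:
    ; #include "co.itp"
    ;
    ; [ molecules ]
    ; CO   1

The name "CO" in [ moleculetype ] is what grompp matches against the entry in 
[ molecules ], which is exactly where the "no such molecule type" error comes 
from when the two do not agree.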

I would highly recommend that you step back for a moment and do some
tutorials using GROMACS.  They won't be with the system you are going to
perform the calculations on, but you need to develop a fundamental
understanding of how these simulations work, what information you need to
provide to them, why etc.  Otherwise you are going to waste a lot of time,
and more than likely generate something that cannot be used.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Mon, 25 Feb 2019 at 08:34, Dallas Warren 
wrote:

> The molecule name you have in  your .top file, in the [ molecules ]
> section, does not match the molecule name in your .itp (could also be
> within your .top) file, in the [ moleculetype ] section.
>
> Catch ya,
>
> Dr. Dallas Warren
> Drug Delivery, Disposition and Dynamics
> Monash Institute of Pharmaceutical Sciences, Monash University
> 381 Royal Parade, Parkville VIC 3052
> dallas.war...@monash.edu
> -
> When the only tool you own is a hammer, every problem begins to resemble a
> nail.
>
>
> On Mon, 25 Feb 2019 at 02:24, banijamali_fs 
> wrote:
>
>> Hi there,
>>
>> I'm simulating the adsorption of H2 on a structure (metal organic
>> framework). After making the pdb structure and running the first command of
>> gromacs to get the gro file, I added two atoms of CU and one hydrogen
>> molecule to the gro file and also changed the number of atoms at the top of
>> the gro file. Then I ran the subsequent commands up to the energy
>> minimization command, where I got the error "No such molecule type
>> hydrogen". I want to ask: gromacs knows the H2 molecule, so why does it say
>> no such molecule type? Does it mean that I should introduce it again in the
>> aminoacids.rtp file?


Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-25 Thread Schulz, Roland
That's not my experience. In my experience, for single-node runs it is usually
faster to use 72 OpenMP threads than 72 (t)MPI threads.
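
For example (only a sketch; the optimum depends on the hardware and the
system), the two single-node launch configurations being compared are roughly:

    # all parallelism from OpenMP threads within a single thread-MPI rank
    gmx mdrun -ntmpi 1 -ntomp 72 -deffnm md

    # versus one thread-MPI rank per core
    gmx mdrun -ntmpi 72 -ntomp 1 -deffnm md

(-deffnm md is just a placeholder for the usual input/output naming.)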

Roland

> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
> Szilárd Páll
> Sent: Monday, February 25, 2019 7:31 AM
> To: Discussion list for GROMACS users 
> Subject: Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS
> 
> What are you trying to do? Using 72 threads per rank will not be efficient
> at all in most cases (except some extreme and unusual ones). If you are sure
> that you still want to do that, you can override this at compile time using
> cmake -DGMX_OPENMP_MAX_THREADS=
> 
> 
> 
> --
> Szilárd
> 
> 
> On Thu, Feb 21, 2019 at 5:58 AM Lalehan Ozalp 
> wrote:
> 
> > Hello all,
> > I'd been running simulation with GROMACS 2018 using 72 open mpi
> > threads without problem until (I assume) it was updated to 2019
> > version. When I execute mdrun with option -nt 72 (which is the number
> > of cores of my
> > terminal) it says:
> >
> > "you are using 72 openmp threads, which is larger than
> > GMX_OPENMP_MAX_THREADS (64). Decrease the number of OpenMP
> threads or
> > rebuild GROMACS with a larger value for GMX_OPENMP_MAX_THREADS."
> >
> > In the documentation of GROMACS 2019 release, it's written:  "mdrun
> > can run with more than GMX_OPENMP_MAX_THREADS threads."
> >
> > Could you please help me how to get around this issue?
> >
> > Thank you in advance.
> > Lalehan

Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-25 Thread Lalehan Ozalp
Dear Szilárd,
Thank you for your response. I'm basically simulating an enzyme with a
cofactor and a ligand of interest, and I'm trying to observe the ligand's
behaviour over a 30 ns trajectory.

I'm confused by what you said. I used to set the -nt option according to the
older versions' manuals, which basically advised entering the number of cores
in the computer. That reminds me that things must have changed in the new
versions (e.g. CUDA compiling). I would be very glad to receive some guidance,
as I am only now learning how to employ GPUs efficiently.

Thank you, best regards,

Lalehan

Re: [gmx-users] Pull code errors

2019-02-25 Thread Berk Hess
Yes, this is due to an improvement.
Set the pbcatom (as the error message tries to say) for the pull groups 
mentioned. The options are described in the mdp section of the manual.
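
As an illustration (the atom indices here are placeholders; pick an atom near
the geometric centre of each group), the relevant mdp settings are of the form:

    ; reference atoms used for periodic-boundary treatment of the pull groups
    pull-group1-pbcatom = 1234
    pull-group2-pbcatom = 9876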

Berk


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Sunday, February 24, 2019 6:55 AM
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Pull code errors

Hi,

There have been some improvements to the pull code, but I don't know
offhand whether they apply here. Do consult the 2019 release notes for
clues.

Mark

On Sat., 23 Feb. 2019, 06:11 Ayesha Fatima, 
wrote:

> Dear All,
> I am using GROMACS 2019 and running umbrella sampling for two proteins.
> It is a simple pulling simulation. It is giving me the following error
>
>
> ERROR 1 [file md_pull.mdp]:
>   When the maximum distance from a pull group reference atom to other atoms
>   in the group is larger than 0.5 times half the box size a centrally
>   placed atom should be chosen as pbcatom. Pull group 1 is larger than that
>   and does not have a specific atom selected as reference atom.
>
>
> ERROR 2 [file md_pull.mdp]:
>   When the maximum distance from a pull group reference atom to other atoms
>   in the group is larger than 0.5 times half the box size a centrally
>   placed atom should be chosen as pbcatom. Pull group 2 is larger than that
>   and does not have a specific atom selected as reference atom.
>
> Pull group  natoms  pbc atom  distance at start  reference at t=0
>          1    9157      4579
>          2     589      5113           0.300 nm          0.300 nm
> Estimate for the relative computational load of the PME mesh part: 0.14
>
>
> --
> Is there anything to be added in the pull code?
>
> Thank you
> Regards


Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-25 Thread Szilárd Páll
What are you trying to do? Using 72 threads per rank will not be efficient at
all in most cases (except some extreme and unusual ones). If you are sure
that you still want to do that, you can override this at compile time using
cmake -DGMX_OPENMP_MAX_THREADS=
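
For example (only a sketch; 128 is an arbitrary value and the usual build
directory layout is assumed), the override is applied when configuring the
build:

    cmake .. -DGMX_OPENMP_MAX_THREADS=128
    make -j
    make install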



--
Szilárd


On Thu, Feb 21, 2019 at 5:58 AM Lalehan Ozalp 
wrote:

> Hello all,
> I'd been running simulations with GROMACS 2018 using 72 OpenMP threads
> without problems until (I assume) it was updated to the 2019 version. When I
> execute mdrun with the option -nt 72 (which is the number of cores of my
> machine) it says:
>
> "you are using 72 openmp threads, which is larger than
> GMX_OPENMP_MAX_THREADS (64). Decrease the number of OpenMP threads or
> rebuild GROMACS with a larger value for GMX_OPENMP_MAX_THREADS."
>
> In the GROMACS 2019 release documentation, it is written: "mdrun can run
> with more than GMX_OPENMP_MAX_THREADS threads."
>
> Could you please help me get around this issue?
>
> Thank you in advance.
> Lalehan

Re: [gmx-users] Error detecting AMD GPU in GROMACS 2019.1

2019-02-25 Thread Szilárd Páll
Michael,

Can you please post the full standard and log outputs (preferably through
an external service)? Having looked at the code, there must be additional
output that tells what type of error occurred during the sanity checks that
produce the result you show.

In general, this likely means that the Apple compiler is not happy about
the new sanity checks we implemented to improve the robustness of the
OpenCL support.

--
Szilárd


On Sun, Feb 24, 2019 at 7:28 AM Michael Williams <
michael.r.c.willi...@gmail.com> wrote:

> Hi Mark,
>
> Thanks for the reply. I definitely intended to compile 2019.1 identically
> to 2018.5. I made sure the environmental variables were the same and I
> copy-pasted the cmake command I’d used before (just changed the install
> prefix). By the way, since my first message, I built a version of 2018.6
> and it seems to behave the same as the working version I have of 2018.5
> (finds the AMD GPU and appears to utilize it as expected). I don’t have any
> immediate need to use 2019.1, but I thought I’d post about the issue in
> case anyone else has seen it. Thanks again for the reply, and I’d be happy
> to try recompiling with any suggestions you might have.
>
> Mike
>
> > On Feb 24, 2019, at 4:51 PM, Mark Abraham 
> wrote:
> >
> > Hi,
> >
> > That's pretty mysterious. Normally only a driver mismatch could give that
> > error, but you have a 2018 version working. Was the 2018.x version built
> > exactly the same way?
> >
> > Mark
> >
> > On Sun., 24 Feb. 2019, 06:08 Michael Williams, <
> > michael.r.c.willi...@gmail.com> wrote:
> >
> >> Hello all, I’ve just tried compiling GROMACS 2019.1 and although I
> didn’t
> >> have any errors during the build, GROMACS no longer detects a GPU that a
> >> copy of 2018.5 was detecting and using without trouble. I will include
> my
> >> system info and the error message I’ve been getting below. Thanks very
> much
> >> for any suggestions as well as for your time. Have a good one,
> >>
> >> Mike
> >>
> >>
> >> (1) System: MacBook Pro with OS X 10.14.3 Mojave, AMD Radeon Pro 560
> GPU.
> >> (2) Cmake command used to build Gromacs 2019.1 (tabs added here for
> >> clarity):
> >>
> >> cmake .. \
> >>
> >>
> -DCMAKE_INSTALL_PREFIX=/Users/michael/.local/apps/gromacs-2019.1-apple-clang-omp-ocl
> >> \
> >>-DCMAKE_LIBRARY_PATH=/Users/michael/.local/lib \
> >>-DCMAKE_INCLUDE_PATH=/Users/michael/.local/include \
> >>-DCMAKE_C_COMPILER=/usr/bin/clang \
> >>-DCMAKE_CXX_COMPILER=/usr/bin/clang++ \
> >>-DCMAKE_C_FLAGS="-Xpreprocessor -fopenmp -lomp
> >> -L/Users/michael/.local/lib -I/Users/michael/.local/include" \
> >>-DCMAKE_CXX_FLAGS="-Xpreprocessor -fopenmp -lomp
> >> -L/Users/michael/.local/lib -I/Users/michael/.local/include" \
> >>-DGMX_FFT_LIBRARY=fftw3 \
> >>-DGMX_GPU=ON \
> >>-DGMX_USE_OPENCL=ON
> >>
> >> The path "/Users/michael/.local/“ is the prefix I used to build hwloc
> >> (1.11.12), libomp (7.0.1), and fftw3 (3.3.8) with the system default
> (apple
> >> clang) compiler. I used the same command (only changing the install
> prefix)
> >> to build GROMACS 2018.5, which is detecting and utilizing the GPU as
> >> expected.
> >>
> >> (3) The error that I see in 2019.1 (for any mdrun input file) is:
> >>
> >> Number of GPUs detected: 2
> >>#0: N/A, stat: insane
> >>#1: name: Intel(R) HD Graphics 630, vendor: Intel Inc., device
> >> version: OpenCL 1.2 , stat: incompatible (please recompile with
> >> GMX_OPENCL_NB_CLUSTER_SIZE=4)
> >>
> >> Whereas in 2018.5, in the same place I see:
> >>
> >> Number of GPUs detected: 2
> >>#0: name: AMD Radeon Pro 560 Compute Engine, vendor: AMD, device
> >> version: OpenCL 1.2 , stat: compatible
> >>#1: name: Intel(R) HD Graphics 630, vendor: Intel Inc., device
> >> version: OpenCL 1.2 , stat: incompatible
> >>
> >> (4) As a final note, GROMACS 2019.1 continues to run the calculation, it
> >> just no longer uses the GPU.
> >>

Re: [gmx-users] Compile Gromacs with OpenCL for MacBook Pro with AMD Radeon Pro 560 GPU

2019-02-25 Thread Szilárd Páll
Hi Michael,

Thanks for the feedback. It seems like on your Apple system compilation of
OpenCL kernels fails due to include path issues, but only when running from
the build tree. I've filed an issue in our tracker (
https://redmine.gromacs.org/issues/2868).

--
Szilárd


On Sun, Feb 24, 2019 at 5:53 AM Michael Williams <
michael.r.c.willi...@gmail.com> wrote:

> Hi again Szilárd, I just wanted to follow up on my last message. I found
> no difference after modifying the file you suggested in Gromacs 2018.5.
> However, I did find that if I compiled the unmodified 2018.5 code and ran
> “make install” then (after sourcing the GMXRC file) I could run "./
> gmxtest.pl all" in the regression test folder and all of those tests
> passed. (It doesn’t seem like that runs all of the same tests as “make
> check”, though). In any case, even the tests that fail if doing “make
> check” directly after compiling only fail because a header file couldn’t be
> found; none of the tests failed due to returning a value outside of
> tolerance. Also, mdrun seems to be utilizing the GPU as expected. For the
> record, here is the cmake command I used:
>
> cmake ..
> -DCMAKE_INSTALL_PREFIX=/Users/michael/.local/apps/gromacs-2018.5-apple-clang-omp-ocl
> -DCMAKE_LIBRARY_PATH=/Users/michael/.local/lib
> -DCMAKE_INCLUDE_PATH=/Users/michael/.local/include
> -DCMAKE_C_COMPILER=/usr/bin/clang -DCMAKE_CXX_COMPILER=/usr/bin/clang++
> -DCMAKE_C_FLAGS="-Xpreprocessor -fopenmp -lomp -L/Users/michael/.local/lib
> -I/Users/michael/.local/include" -DCMAKE_CXX_FLAGS="-Xpreprocessor -fopenmp
> -lomp -L/Users/michael/.local/lib -I/Users/michael/.local/include"
> -DGMX_FFT_LIBRARY=fftw3 -DGMX_GPU=ON -DGMX_USE_OPENCL=ON
>
> My $HOME/.local directory is the prefix I used to install appropriate
> versions of hwloc (1.11.12), libomp (7.0.1), and fftw3 (3.3.8) that I
> compiled with the system’s default clang (in OSX 10.14.3, Mojave). Thanks
> again,
>
>
> Mike
>
>
> > On Feb 21, 2019, at 11:35 PM, Szilárd Páll 
> wrote:
> >
> > Michael,
> >
> > That was my copy-paste mistake, I meant to suggest editing the following
> > file:
> > src/gromacs/gpu_utils/ocl_compiler.cpp
> >
> > Please try this and run make check to see if it can now find included
> files.
> >
> > --
> > Szilárd
> >
> >
> > On Tue, Feb 19, 2019 at 10:59 PM Michael Williams <
> > michael.r.c.willi...@gmail.com >
> wrote:
> >
> >> Hi Szilárd, thank you for the suggestion. I modified the file
> >> "src/gromacs/gpu_utils/gpu_utils_ocl.cpp” and replaced both instances of
> >> “#ifdef __APPLE__” with “#if 0”. I then checked that the build and
> >> regression tests worked without enabling the GPU support. All of the
> tests
> >> passed. Then, in a new build folder, I added -DGMX_GPU=ON and
> >> -DGMX_USE_OPENCL=ON to the Cmake instructions (see below). I did this
> >> separately for the unmodified version of GROMACS 2018.5 as well as
> inside a
> >> separate copy where I’d made the above changes. When I ran “make check”
> for
> >> both the unmodified and modified versions, I got the same error as
> before
> >> (logs pasted below). However, it seems to happen at different parts of
> the
> >> test suite: the unmodified version passed the MdrunTests, then failed on
> >> one of the MdrunMpiTests. The modified version of GROMACS failed during
> >> MdrunTests and then passed the MdrunMpiTests.
> >>
> >> During both runs of the tests I got a system popup asking if I wanted to
> >> allow the test executable that was running to accept incoming network
> >> connections. The window was only visible for a few seconds and I didn’t
> >> click either option (allow / deny) before the window went away again.
> >>
> >> Thanks again for your help, and if you have any other ideas I’d be quite
> >> willing to try them out and let you know the results.
> >>
> >>
> >> Mike
> >>
> >>
> >> Build settings on MacBook Pro (OSX 10.14.3, Mojave) using system clang
> and
> >> OpenMP library (from LLVM 7.0.1) built in custom path (no other parts of
> >> LLVM 7.0.1 are installed in this path):
> >>
> >> cmake ..
> >>
> -DCMAKE_INSTALL_PREFIX=/Users/michael/.local/apps/gromacs-2018.5-apple-clang-omp
> >> \
> >>-DCMAKE_PREFIX_PATH=/Users/michael/.local \
> >>-DCMAKE_C_COMPILER=/usr/bin/clang \
> >>-DCMAKE_CXX_COMPILER=/usr/bin/clang++ \
> >>-DCMAKE_C_FLAGS="-Wno-deprecated-declarations
> >> -Wno-unused-command-line-argument -Xpreprocessor -fopenmp -lomp
> >> -I/Users/michael/.local/include -L/Users/michael/.local/lib" \
> >>-DCMAKE_CXX_FLAGS="-Wno-deprecated-declarations
> >> -Wno-unused-command-line-argument -Xpreprocessor -fopenmp -lomp
> >> -I/Users/michael/.local/include -L/Users/michael/.local/lib" \
> >>-DGMX_FFT_LIBRARY=fftw3 \
> >>-DGMX_GPU=ON \
> >>-DGMX_USE_OPENCL=ON \
> >>
> >>
> -DREGRESSIONTEST_PATH=/Users/michael/.local/source/regressiontests-2018.5
> >>

[gmx-users] Simulation crashed - Large VCM, Pressure scaling more than 1%, Bond length not finite

2019-02-25 Thread zeineb SI CHAIB
Dear GMX users,

I am running a coarse-grained simulation using the Martini force field. My 
system consists of a protein inserted in a lipid bilayer containing POPC, 
POPE, and CHOL molecules, in the presence of water and ions.

I followed the conventional steps to prepare my system for the production run:
1- Generating Coarse-Grained structure and topology files
2- Insertion of the protein in the lipid bilayer
3- Adding neutralizing counterions
4- Energy minimization with backbone position restraints for 1 ns.
5- NVT equilibration with backbone position restraints for 50 ns with 
v-rescale.
6- NPT equilibration with backbone position restraints for 50 ns with the 
Parrinello-Rahman barostat.

The simulation runs for ~ 2 microseconds before crashing:
Large VCM (group Protein_POPC_POPE_CHOL): 965789.43750, -1086946.5, 
135001.78125, Temp-cm:  9.21574e+14
Large VCM (group W_ION): -396609.0, 446514.28125, -55416.26953, Temp-cm:  
2.38187e+13

Step 123438101  Warning: Pressure scaling more than 1%. This may mean your 
system is not yet equilibrated. Use of Parrinello-Rahman pressure coupling 
during equilibration can lead to simulation instability and is discouraged.

Program: gmx mdrun, version 2018.3
Source file: src/gromacs/mdlib/clincs.cpp (line 2252)
Fatal error: Bond length not finite.

I used the following parameters in the mdp file (they were recommended by 
the Martini developers):
integrator   = md
dt = 0.02
nsteps = 5

nstxout   = 100
nstvout   = 100
nstfout= 0
nstlog = 1000
nstenergy  = 100
nstxout-compressed = 1000
compressed-x-precision   = 100

continuation   = yes

cutoff-scheme= Verlet
nstlist   = 20
ns_type   = grid
pbc   = xyz
verlet-buffer-tolerance = 0.005

coulombtype= reaction-field
rcoulomb  = 1.1
epsilon_r  = 15
epsilon_rf = 0
vdw_type = cutoff
vdw-modifier   = Potential-shift-verlet
rvdw  = 1.1

tcoupl = v-rescale
tc-grps= Protein POPC_POPE_CHOL W_ION
tau_t   = 1.0  1.0 1.0
ref_t= 315 315 315

Pcoupl= parrinello-rahman
Pcoupltype= semiisotropic
tau_p  = 12.0
compressibility = 3e-4  3e-4  3e-4
ref_p   = 1.0  1.0  1.0

gen_vel = no
gen_temp= 315
gen_seed = 473529

constraints   = none
constraint_algorithm = Lincs

nstcomm   = 100
comm-grps   = Protein_POPC_POPE_CHOL W_ION

I had a look at the GROMACS mailing list for similar problems and probable 
causes:

In one post it was said that the large VCM might be related to an 
un-equilibrated system, but in my case the system was equilibrated for 50 ns 
(NVT) and another 50 ns (NPT), so I don't think that is the case here.

Another suggestion for the VCM issue is that it might be related to the force 
field and the topology. Since the last 5 residues of my protein have not been 
resolved, I thought it better to keep the actual C-terminal residue neutral. 
I would have liked to use capping residues, but they are not parameterized in 
the Martini force field. Maybe this is why the simulation is crashing? But it 
ran for more than 2 microseconds before it crashed!
I would like to have your opinion on this point, please.

The error messages are really confusing and I can't really understand them: 
large VCM / pressure scaling more than 1% / bond length not finite.

I would really appreciate your help on this.

Zeineb






Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-02-25 Thread Tafelmeier, Stefanie
Many thanks, Páll, for your reply.

As you suggested, we have now installed the most up-to-date versions: GROMACS 
2019.1, CUDA 10 with driver version 415.27, and gcc 7.3.0.

If we run gmx mdrun, the error is still there:
Assertion failed:
Condition: stat == cudaSuccess
Asynchronous H2D copy failed

In order to get more detailed information about the source of the error, I 
ran the regression tests.
Tests 42 and 46 failed.
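
For reference (only a sketch of the standard CMake test driver, assuming the 
tests are run from the build directory), a single failing suite can be re-run 
on its own:

    cd build
    ctest -R regressiontests/complex --output-on-failure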

I am trying to understand what the tests do and why they did not pass, but I 
would very much appreciate it if someone could send me an explanation of what 
could have caused the failures, or if someone has faced the same problem 
before and knows a remedy.
The failing output is given below.

Many thanks in advance for your help.

Best regards,
Steffi


-
42/46 Test #42: regressiontests/complex .***Failed   88.99 sec
  :-) GROMACS - gmx mdrun, 2019.1 (-:

GROMACS is written by:
 Emile Apol  Rossen Apostolov  Paul Bauer Herman J.C. Berendsen
Par Bjelkmar  Christian Blau   Viacheslav Bolnykh Kevin Boyd
 Aldert van Buuren   Rudi van Drunen Anton Feenstra   Alan Gray
  Gerrit Groenhof Anca HamuraruVincent Hindriksen  M. Eric Irrgang
  Aleksei Iupinov   Christoph Junghans Joe Jordan Dimitrios Karkoulis
Peter KassonJiri Kraus  Carsten Kutzner  Per Larsson
  Justin A. LemkulViveca LindahlMagnus Lundborg Erik Marklund
Pascal Merz Pieter MeulenhoffTeemu Murtola   Szilard Pall
Sander Pronk  Roland Schulz  Michael ShirtsAlexey Shvetsov
   Alfons Sijbers Peter Tieleman  Jon Vincent  Teemu Virolainen
 Christian WennbergMaarten Wolf
   and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2018, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2019.1
Executable:   /home/pcm-mess/gromacs-2019.1/build/bin/gmx
Data prefix:  /home/pcm-mess/gromacs-2019.1 (source tree)
Working dir:  /home/pcm-mess/gromacs-2019.1/build/tests/regressiontests-2019.1
Command line:
  gmx mdrun -h


Thanx for Using GROMACS - Have a Nice Day

Mdrun cannot use the requested (or automatic) number of ranks, retrying with 8.

Abnormal return value for ' gmx mdrun -nb cpu -notunepme >mdrun.out 2>&1' 
was 1
Retrying mdrun with better settings...

Abnormal return value for ' gmx mdrun -ntmpi 1  -notunepme >mdrun.out 2>&1' 
was -1
FAILED. Check mdrun.out, md.log file(s) in distance_restraints for 
distance_restraints
FAILED. Check checkpot.out (24 errors), checkforce.out (1706 errors) file(s) in 
nbnxn-free-energy for nbnxn-free-energy
FAILED. Check checkpot.out (23 errors), checkforce.out (1913 errors) file(s) in 
nbnxn-free-energy-vv for nbnxn-free-energy-vv

Abnormal return value for ' gmx mdrun   -notunepme >mdrun.out 2>&1' was -1
FAILED. Check mdrun.out, md.log file(s) in nbnxn-vdw-force-switch for 
nbnxn-vdw-force-switch

Abnormal return value for ' gmx mdrun   -notunepme >mdrun.out 2>&1' was -1
FAILED. Check mdrun.out, md.log file(s) in nbnxn-vdw-potential-switch for 
nbnxn-vdw-potential-switch
Re-running nbnxn-vdw-potential-switch using CPU-based PME

Abnormal return value for ' gmx mdrun   -notunepme >mdrun.out 2>&1' was -1
FAILED. Check mdrun.out, md.log file(s) in nbnxn_pme for nbnxn_pme
Re-running nbnxn_pme using CPU-based PME

Abnormal return value for ' gmx mdrun -ntmpi 6  -notunepme >mdrun.out 2>&1' 
was 1
Retrying mdrun with better settings...

Abnormal return value for ' gmx mdrun   -notunepme >mdrun.out 2>&1' was -1
FAILED. Check mdrun.out, md.log file(s) in octahedron for octahedron
Re-running octahedron using CPU-based PME

Abnormal return value for ' gmx mdrun -ntmpi 1  -notunepme >mdrun.out 2>&1' 
was -1
FAILED. Check mdrun.out, md.log file(s) in orientation-restraints for 
orientation-restraints
Re-running orientation-restraints using CPU-based PME

Abnormal return value for ' gmx mdrun -ntmpi 1 -pme cpu -notunepme 
>mdrun.out 2>&1' was -1
FAILED. Check mdrun.out, md.log file(s) in orientation-restraints/pme-cpu for 
orientation-restraints-pme-cpu

Abnormal return value for ' gmx mdrun   -notunepme >mdrun.out 2>&1' was -1
FAILED. Check mdrun.out, md.log file(s) in position-restraints for 
position-restraints
FAILED.