Can I also ask: I am running this on a high-performance computing (HPC) cluster. Would the CPU request (-pe mpi 12) or the memory request (-l mem=2G) affect its ability to handle a very long index file?


When the job is submitted, requesting more resources means a longer wait in the queue, so I would like to request just enough CPU and memory to keep the queue time to a minimum.
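

For reference, a minimal sketch of the kind of submission script I mean. The #$ lines are the two requests asked about above; the module name, input files, index group name and output file are placeholders, not my actual script.

#!/bin/bash
#$ -pe mpi 12        # the CPU request in question (12 slots in the "mpi" parallel environment)
#$ -l mem=2G         # the memory request in question
#$ -cwd              # run from the submission directory

module load gromacs  # placeholder module name for the cluster

# the analysis itself; topol.tpr, traj.xtc, index.ndx and the group "pairs" are placeholders
gmx distance -s topol.tpr -f traj.xtc -n index.ndx \
    -select 'group "pairs"' -oav dist_av.xvg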


Thank you!


------------------ Original ------------------
From:  "ZHANG Cheng"<272699...@qq.com>;
Date:  Tue, Feb 19, 2019 05:13 AM
To:  "gromacs.org_gmx-users"<gromacs.org_gmx-users@maillist.sys.kth.se>;

Subject:  Is there a more efficient way to calculate the "gmx distance" with a 
very long index?



My coarse-grained system has 10 proteins, each with 442 residues. After a period 
of time, the proteins aggregate. I want to use "gmx distance" to find out which 
residues are most likely to be involved in contacts with other proteins.


I prepared an index.ndx file containing 442*442*(9+8+7+...+1) = 8791380 pairs of 
atom indices. But this index is far too long for GROMACS to handle in one go, so 
I have to split it into shorter pieces.
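

For concreteness, here is a sketch of one way to do that split. It assumes the pairs are kept two atom indices per line in a plain text file pairs.txt; the chunk size of 100000 pairs and all file and group names are placeholders.

# split the pair list into chunks (two indices per line keeps each pair intact)
split -l 100000 -d pairs.txt chunk_

for f in chunk_*; do
    name="pairs_${f#chunk_}"
    # wrap each chunk in a one-group .ndx file
    { echo "[ ${name} ]"; cat "$f"; } > "${f}.ndx"
    # -oav writes the per-frame average; -oall would keep every individual distance
    gmx distance -s topol.tpr -f traj.xtc -n "${f}.ndx" \
        -select "group \"${name}\"" -oav "dist_${name}.xvg"
done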


But is there a more efficient way to achieve this?


Thank you!


Cheng