600 GB of memory? I highly doubt that you have that much memory available. Are you sure this is not a typo? Can you please post evidence that you have >= 600 GB of memory available? It is common for clusters to disallow an individual process from using more than about 10% of the total memory on a head node, which makes 600 MB more likely; in that case you can try submitting your job to a compute node.
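For reference, here is one way to check how much memory is actually available on the node you are running on (a minimal sketch; the exact figures and any per-process limits depend on your distribution and site policy):

free -g
grep MemTotal /proc/meminfo
ulimit -v

If your cluster uses a batch scheduler, something along these lines would move the analysis off the head node (this assumes PBS/Torque and an illustrative script name, run_msd.sh; adjust the resource request for your own site):

qsub -l nodes=1:ppn=1,mem=4gb run_msd.sh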

-- original message --

Hello,
thank you for your reply. I used the following command:

g_msd  -n POPC.ndx  -lateral z -o POPC_msd.xvg -mol POPC_diff.xvg

The trajectory has 10000 frames, and the system it was run on is Fedora Red Hat 5.4.
Indeed, my network administrator was very unhappy about the consumed memory.

Regards,
   Slawomir





Message written by Tsjerk Wassenaar on 2011-06-27, at 15:40:

Hi Slawomir,

That's quite a usage of memory! Can you provide more information? Like
the number of frames in the trajectory, the command line you used, and
the system you ran on?

Cheers,

Tsjerk

2011/6/27 Sławomir Stachura <stachura.slawo...@gmail.com>:
Hi GMX Users,
I am writing this email because I think the g_msd program in Gromacs 4.5.4 has a problem. I was calculating the MSD of the center of mass of POPC in a membrane (the system contains 274 POPC lipid molecules in an all-atom force field) from a 50 ns trajectory, and it seems to consume a great amount of memory. As the calculation progresses, the memory is gradually devoured, in my case to over 600 GB (at which point my cluster administrator killed the process). It seems that g_msd does not release memory and keeps piling up results in memory with every step. Have you heard of such a case?
Best wishes,
