Hi Jack:
Thanks for the help.
Just to add insult to injury, I installed lammpi with DarwinPorts,
and the fink package I am making works with it right out of the box.
So I am afraid there might be something amiss with the fink lammpi
package.
The main difference I can see is that the DarwinPorts version builds
only the static libraries. When I
googled the error message I get in the fink version, I came up with
this, which is consistent with the
above observation:
http://www.lam-mpi.org/MailArchives/lam/2003/02/5372.php
It says:
"the problem does not occur when we use static libraries instead of
shared libraries. So as a temporary solution you can consider
compiling LAM without shared library support."
I'm guessing this is why the DarwinPorts package builds only the
lammpi static library.
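If that is what the fink build is hitting, the workaround would
presumably be to disable shared libraries at configure time. A sketch,
assuming LAM's configure accepts the standard autoconf/libtool flags
(I have not tested this against the fink package):

    ./configure --disable-shared --enable-static
    make

In fink terms that would probably just mean adding --disable-shared
to the ConfigureParams in the lammpi .info file.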
So is the lammpi package broken, and if so, can we replace it with
one along the lines of what is in DarwinPorts?
Thanks.
Bill
On Nov 15, 2005, at 3:17 PM, Jack Howarth wrote:
William,
It's been a while since I played with lammpi when I was creating the
gromacs-mpi packages. I don't have any instructions lying around
that are specifically for Mac OS X. However, here are some notes I
wrote up for using gromacs-mpi under Fedora Core Linux (a consolidated
example session follows the numbered steps)...
1) Make sure ssh is password-less between all the LAM node machines
a) for each node of the lam, on that machine do...
ssh-keygen -t dsa
...accepting the default file location and entering a passphrase.
b) append the contents of the $HOME/.ssh/id_dsa.pub public key
generated above to $HOME/.ssh/authorized_keys.
c) Fedora's openssh runs ssh-agent automatically, so you can
just execute 'ssh-add $HOME/.ssh/id_dsa' and enter the
passphrase.
d) sftp the .ssh directory to all the nodes in the cluster
e) now ssh should no longer request a password.
2) edit ~/lamhosts to include all hostnames to be used as nodes for
LAM and the number of cpus (cpu=N) to be used on each.
3) verify that the chosen cluster is bootable with 'recon -v ~/lamhosts'
4) start LAM on the specified cluster with 'lamboot -v ~/lamhosts'
5) verify the cluster is active with 'tping -c1 N'
6) run a LAM-MPI program with 'mpirun -v -np 2 foo'
7) to monitor the status of the LAM use 'mpitask'
8) to clean out the current LAM without rebooting use 'lamclean -v'
9) to remove all traces of the LAM and shut it down do 'lamhalt'
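Putting steps 1-9 together, a minimal session looks roughly like
this (the program name 'foo' is a placeholder, as above):

    ssh-keygen -t dsa                                        # step 1a, on each node
    cat $HOME/.ssh/id_dsa.pub >> $HOME/.ssh/authorized_keys  # step 1b
    ssh-add $HOME/.ssh/id_dsa                                # step 1c
    recon -v ~/lamhosts                                      # step 3
    lamboot -v ~/lamhosts                                    # step 4
    tping -c1 N                                              # step 5
    mpirun -v -np 2 foo                                      # step 6
    mpitask                                                  # step 7
    lamclean -v                                              # step 8
    lamhalt                                                  # step 9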
Note that if localhost in the default /etc/lam/lam-bhost.def is
set for cpu=N, ~/lamhosts can be omitted from the steps above,
creating a single-node N-cpu LAM on the workstation.
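For example, for a single dual-cpu workstation, the cpu=N form of
the boot schema would be just one line:

    # single node, two cpus
    localhost cpu=2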
To start an md run on the cluster, use commands of the form...
grompp -np 1
mpirun n0 -c 2 mdrun
Note that -np in grompp refers to the number of nodes, so
it should be 1 for a single SMP workstation. Also, for mpirun
we need to name the node in use and use -c to set the
number of available cpus.
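With typical gromacs 3.x input files, that pair of commands might
look like the following; the file names here are illustrative, not
part of the notes above:

    grompp -np 1 -f run.mdp -c conf.gro -p topol.top -o topol.tpr
    mpirun n0 -c 2 mdrun -v -s topol.tpr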
The lamhosts file on a dual processor machine looks like...
# two processor single node LAM
graphics.msbb.uc.edu
graphics.msbb.uc.edu
I'll rebuild gromacs-mpi on my dual G5 and verify that this
works here with a machine as localhost.
Jack