Scott,
molpro checks that the machine name returned by gethostname is in the node list. There is a --mpirun-nolocal flag to molpro, but I'm not sure it is the right way to disable this check. If you can't get gethostname to return the same name that PBS uses, then you can either use the -N flag to specify the node list explicitly, or use sed to edit $PBS_NODEFILE in your job script.
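If the mismatch is just a missing domain suffix, the sed approach can be a one-liner in the job script. This is only a sketch: the domain suffix comes from the error message quoted below, and the fallback file name is made up so the script can be tried outside the queue.

```shell
#!/bin/sh
# Sketch, assuming PBS lists short names (n06) while gethostname
# returns fully-qualified ones (n06.priv0.unt.edu, the name from the
# error message).  Adjust the suffix for your site.

# PBS sets PBS_NODEFILE inside a job; fall back to a sample file here
# so the script can be tested interactively.
NODEFILE=${PBS_NODEFILE:-./nodefile}
[ -f "$NODEFILE" ] || printf 'n06\nn06\n' > "$NODEFILE"

# Append the domain to every line that lacks one, writing a new file
# rather than editing the original in place.
sed '/\./!s/$/.priv0.unt.edu/' "$NODEFILE" > "$NODEFILE.fqdn"
PBS_NODEFILE="$NODEFILE.fqdn"
export PBS_NODEFILE
cat "$PBS_NODEFILE"
```

Anything started after this point in the job script (molpro included) inherits the edited node file.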
Opteron build instructions are here: http://www.molpro.net/supported/opteron.php
The Portland compiler (5.1 or 5.2) works fully in parallel; 5.2 is quicker than 5.1. If you're using version 5.2 of the Portland compiler, make sure the same compiler flags are used when building both Global Arrays and molpro.
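One way to keep the two builds consistent is to define the flag string once in the build script and reuse it. Everything below is illustrative: the flag value and the make variable names are assumptions, so check the Global Arrays makefile and molpro's generated CONFIG for the names your versions actually use.

```shell
#!/bin/sh
# Sketch only: one flag string shared by both builds so Global Arrays
# and molpro can't drift apart.  PGI_FLAGS and the commented commands
# are placeholders, not the verified invocations for any GA release.
PGI_FLAGS="-O2"

# Global Arrays (hypothetical invocation -- consult your GA makefile):
#   make FC=pgf90 FOPT="$PGI_FLAGS"
# Molpro: after configure, confirm the generated CONFIG carries the
# same flags, e.g.:
#   grep -- "$PGI_FLAGS" CONFIG

echo "building both with: $PGI_FLAGS"
```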
Version 8.1 of the Intel compiler has just been released on the Intel Premier Support page. It supports EM64T, which is analogous to x86_64, so it can produce 64-bit code that runs on Opteron. I've produced a working serial executable, but configure doesn't work yet and you have to tweak the CONFIG file by hand. I haven't yet tried to build Global Arrays with this compiler.
The pathscale compiler http://www.pathscale.com also works in serial but not yet in parallel. I will backport the configuration for this onto 2002.6 soon.
Some benchmark numbers here: http://www.molpro.net/benchmark/bench.cgi?fields=wall&fields=total&fields=xhost&nompp=1&nothroughput=1
Nick Wilson
Scott Yockel wrote:
Molpro users,
I am having a shared memory problem on my dual-proc Opteron based cluster under Linux. Currently I'm running the latest 32-bit Athlon molpro 2002.6 mpp builds. It runs fine in dual-proc when run interactively, submitting straight from the command line. However, when submitting through a batch system (OpenPBS), I get the following output:
Nolocal= 1 Parlib= 0 Local machine n06.priv0.unt.edu not in node list!
What I assume is different is that the tty control from the queuing system has
led molpro to look for distributed memory and other nodes, when we only want it
to run a local 2-proc SMP-type job. If "Nolocal=1" is a boolean (true/false)
switch, can it be turned off, and how? Or is there some other environment
variable that I'm not aware of that needs to be set to fix this problem? Also,
this system came prebuilt with quite a few MPI- and PVM-type libraries intended
for distributed memory. Is it possible that they share something in common with
molpro that might foul up running 2-proc SMP-type jobs on these Opterons
locally?
Also, have there been any completely successful 64-bit compiles of Molpro 2002.6 on an Opteron-based cluster, and if so, are there any makefiles for this?
Thanks so much for the help,
Scott Yockel
University of North Texas
Department of Chemistry
New CHEM building, Room 262C
