On Jun 16, 2007, at 3:22 AM, Francesco Pietra wrote:

The question is whether, in compiling Open MPI, the
--with-libnuma flag is needed or simply useful in the
special arrangement of the Tyan S2895 Thunder K8WE
with two dual-core Opterons and eight memory modules
of 2 GB each.

At worst, it is not harmful.  At best, it is helpful.

We're still playing/experimenting with the memory affinity controls and don't have great support for them at the moment. They work, but will likely mainly end up smoothing your results over repeated runs rather than yielding dramatic performance improvements. Note that you need to enable processor affinity for OMPI to use memory affinity; see the FAQ for more information.
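For example, a minimal sketch of enabling processor affinity at run time via the mpi_paffinity_alone MCA parameter (the application name my_app is just a placeholder):

mpirun --mca mpi_paffinity_alone 1 -np 4 ./my_app

With processor affinity enabled, a libnuma-enabled build should then use memory affinity automatically.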

If so (this being my first time compiling an MPI, and
being a nonexpert single user/administrator), I would
be much obliged if you could check the series of
commands below (run as superuser) for Linux Debian
amd64 etch:

cd /usr/local

bunzip2 openmpi-1.2.2.tar.bz2

tar xvf openmpi-1.2.2.tar

cd /usr/local/openmpi-1.2.2

FC=/opt/intel/fce/9.1.036/bin/ifort; export FC

CC=/opt/intel/cce/9.1.042/bin/icc; export CC

CXX=/opt/intel/cce/9.1.042/bin/icpc; export CXX

./configure --with-libnuma=/full/path/to/libnuma
(libnuma-dev 0.9.11-4 or 0.9.11-3, not yet installed)

make

make install

This all looks fine. You might also want to set F77 to the same value as FC. Alternatively, you can run:

./configure CC=/opt/intel/... CXX=/opt/intel/... (etc.)

and not set the variables in the environment. In terms of end results, the two techniques are identical. The only difference is that if you put the CC=... settings on the configure command line, the compilers you specifically chose will be recorded in the config.log file, which can be handy for later reference and/or debugging problems.
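For example, a sketch of the whole configure invocation with the compiler paths from your message on the command line (the --with-libnuma value of /usr is an assumption; on Debian etch, the libnuma-dev package installs its library and headers under /usr):

./configure CC=/opt/intel/cce/9.1.042/bin/icc \
            CXX=/opt/intel/cce/9.1.042/bin/icpc \
            FC=/opt/intel/fce/9.1.036/bin/ifort \
            F77=/opt/intel/fce/9.1.036/bin/ifort \
            --with-libnuma=/usr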

followed by setting, as a user, in my .bashrc:

MPI_HOME=/usr/local; export MPI_HOME

Note that Open MPI does not use the MPI_HOME environment variable.

If you already have /usr/local/bin in your PATH and /usr/local/lib in your LD_LIBRARY_PATH, you're set.
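For example, a minimal sketch of the .bashrc lines to use instead of MPI_HOME (assuming the default /usr/local prefix from your commands above):

export PATH=/usr/local/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

After that, running ompi_info should show the compilers and options your Open MPI build was configured with.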

____
The MPI is for a computational application that is
best compiled with Intel. On my system those Intel
compilers already furnish the runtime libraries

/opt/intel/fce/9.1.036/lib/libimf.so
/opt/intel/cce/9.1.042/lib/libimf.so

to a QM code (NWChem 5.0) that is parallelized with
its built-in TCGMSG.

Thanks

francesco pietra





--
Jeff Squyres
Cisco Systems
