Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
THANKS JEFF..!!

On Tue, 2011-03-22 at 14:20 -0400, Jeff Squyres wrote:

> Try this URL:
> 
> http://www.citutor.org/login.php
> 
> 
> On Mar 22, 2011, at 2:19 PM, Abdul Rahman Riza wrote:
> 
> > Thanks Jeff,
> > 
> > How can I get a free account? It requires a username and password:
> > 
> > http://hpcsoftware.ncsa.illinois.edu/Software/user/show_all.php?deploy_id=989=NCSA%20=247ec50d90ddc9b3e8d7e1631bc1efa1
> > A username and password are being requested by 
> > https://internal.ncsa.uiuc.edu. The site says: "Secure (SSL) Kerberos Login"
> > 
> > 
> > 
> > On Tue, 2011-03-22 at 10:42 -0400, Jeff Squyres wrote:
> >> There's lots of good MPI tutorials on the web.
> >> 
> >> My favorites are at the NCSA web site; if you get a free account, you can 
> >> login and see their course listings.
> >> 
> >> 
> >> On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
> >> 
> >> > Dear All,
> >> > 
> >> > I am a newbie in parallel computing and would like to ask a question.
> >> > 
> >> > I have a switch and 2 laptops:
> >> >  • Dell Inspiron 640, dual core, 2 GB RAM
> >> >  • Dell Inspiron 1010, Intel Atom, 1 GB RAM
> >> > 
> >> > Both laptops run Ubuntu 10.04 on a wireless network using a TP-LINK 
> >> > access point.
> >> > 
> >> > I am wondering if you have a tutorial and source code demonstrating simple 
> >> > parallel computing on 2 laptops performing a simultaneous computation.
> >> > 
> >> > Riza
> >> > ___
> >> > users mailing list
> >> > us...@open-mpi.org
> >> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> > 
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Jeff Squyres
Look in Open MPI's examples/ directory.
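For readers without the source tree handy, a minimal MPI "hello world" in the spirit of Open MPI's examples/hello_c.c looks roughly like this (a sketch, not the exact file shipped with Open MPI; the hostfile name is made up):

```c
/* hello.c - minimal MPI program; each rank prints its identity.
 * Build and run (assuming Open MPI is installed and both laptops
 * are listed in a hostfile named "myhosts"):
 *   mpicc hello.c -o hello
 *   mpirun -np 2 --hostfile myhosts ./hello
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start up the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's number */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down MPI */
    return 0;
}
```

With both laptops in the hostfile, mpirun starts one process per entry and each prints its rank, which is usually the first thing to verify before trying real computations.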


On Mar 22, 2011, at 2:15 PM, Abdul Rahman Riza wrote:

> Thank you guys for information.
> 
> I don't know where I should start. This is my first experience using 
> Open MPI. Is there any simple calculation I can run on my 2 laptops? 
> Please point me to a very simple beginner's tutorial if one exists...
> 
> On Tue, 2011-03-22 at 13:34 -0400, Prentice Bisbal wrote:
>> I'd like to point out that nothing special needs to be done because
>> you're using a wireless network. As long as you're using TCP for your
>> message passing, it doesn't matter what the underlying network is, as long as
>> you have TCP/IP configured correctly.
>> 
>> On 03/22/2011 10:42 AM, Jeff Squyres wrote:
>> > There's lots of good MPI tutorials on the web.
>> > 
>> > My favorites are at the NCSA web site; if you get a free account, you can 
>> > login and see their course listings.
>> > 
>> > 
>> > On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
>> > 
>> >> Dear All,
>> >>
>> >> I am a newbie in parallel computing and would like to ask a question.
>> >>
>> >> I have a switch and 2 laptops:
>> >>   • Dell Inspiron 640, dual core, 2 GB RAM
>> >>   • Dell Inspiron 1010, Intel Atom, 1 GB RAM
>> >>
>> >> Both laptops run Ubuntu 10.04 on a wireless network using a TP-LINK 
>> >> access point.
>> >>
>> >> I am wondering if you have a tutorial and source code demonstrating simple 
>> >> parallel computing on 2 laptops performing a simultaneous computation.
>> >>
>> >> Riza
>> >> ___
>> >> users mailing list
>> >> us...@open-mpi.org
>> >> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> > 
>> > 
>> 
>> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
Thanks Jeff,

How can I get a free account? It requires a username and password:

http://hpcsoftware.ncsa.illinois.edu/Software/user/show_all.php?deploy_id=989=NCSA%20=247ec50d90ddc9b3e8d7e1631bc1efa1
A username and password are being requested by
https://internal.ncsa.uiuc.edu. The site says: "Secure (SSL) Kerberos
Login"



On Tue, 2011-03-22 at 10:42 -0400, Jeff Squyres wrote:

> There's lots of good MPI tutorials on the web.
> 
> My favorites are at the NCSA web site; if you get a free account, you can 
> login and see their course listings.
> 
> 
> On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
> 
> > Dear All,
> > 
> > I am a newbie in parallel computing and would like to ask a question.
> > 
> > I have a switch and 2 laptops:
> > • Dell Inspiron 640, dual core, 2 GB RAM
> > • Dell Inspiron 1010, Intel Atom, 1 GB RAM
> > 
> > Both laptops run Ubuntu 10.04 on a wireless network using a TP-LINK access 
> > point.
> > 
> > I am wondering if you have a tutorial and source code demonstrating simple 
> > parallel computing on 2 laptops performing a simultaneous computation.
> > 
> > Riza
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
Thank you guys for the information.

I don't know where I should start. This is my first experience
using Open MPI. Is there any simple calculation I can run on my 2 laptops? 
Please point me to a very simple beginner's tutorial if one exists...

On Tue, 2011-03-22 at 13:34 -0400, Prentice Bisbal wrote:

> I'd like to point out that nothing special needs to be done because
> you're using a wireless network. As long as you're using TCP for your
> message passing, it doesn't matter what the underlying network is, as long as
> you have TCP/IP configured correctly.
> 
> On 03/22/2011 10:42 AM, Jeff Squyres wrote:
> > There's lots of good MPI tutorials on the web.
> > 
> > My favorites are at the NCSA web site; if you get a free account, you can 
> > login and see their course listings.
> > 
> > 
> > On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:
> > 
> >> Dear All,
> >>
> >> I am a newbie in parallel computing and would like to ask a question.
> >>
> >> I have a switch and 2 laptops:
> >> • Dell Inspiron 640, dual core, 2 GB RAM
> >> • Dell Inspiron 1010, Intel Atom, 1 GB RAM
> >>
> >> Both laptops run Ubuntu 10.04 on a wireless network using a TP-LINK 
> >> access point.
> >>
> >> I am wondering if you have a tutorial and source code demonstrating simple 
> >> parallel computing on 2 laptops performing a simultaneous computation.
> >>
> >> Riza
> >> ___
> >> users mailing list
> >> us...@open-mpi.org
> >> http://www.open-mpi.org/mailman/listinfo.cgi/users
> > 
> > 
> 




Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3

2011-03-22 Thread Jeff Squyres
On Mar 21, 2011, at 8:21 AM, ya...@adina.com wrote:

> The issue is that I am trying to build Open MPI 1.4.3 with the Intel 
> compiler libraries statically linked into it, so that when we run 
> mpirun/orterun, it does not need to dynamically load any Intel 
> libraries. But mpirun always asks for some Intel library 
> (e.g. libsvml.so) if I do not put the Intel library path on the library 
> search path ($LD_LIBRARY_PATH). I checked the Open MPI user 
> archive; it seems someone mentioned using
> "-i-static" (in my case) or "-static-intel" in LDFLAGS, which is what I did,
> but it does not seem to work, and I found no confirmation in the 
> archive that this works for anyone else. Could 
> anyone help me with this? Thanks!

Is it Open MPI's executables that require the Intel shared libraries at run 
time, or your application?  Keep in mind the difference:

1. Compile/link flags that you specify to OMPI's configure script are used to 
compile/link Open MPI itself (including executables such as mpirun).

2. mpicc (and friends) use a similar-but-different set of flags to compile and 
link MPI applications.  Specifically, we try to use the minimal set of flags 
necessary to compile/link, and let the user choose to add more flags if they 
want to.  See this FAQ entry for more details:

http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0

> (2) After compiling and linking our in-house codes  with open mpi 
> 1.4.3, we want to make a minimal list of executables for our codes 
> with some from open mpi 1.4.3 installation, without any dependent 
> on external setting such as environment variables, etc.
> 
> I organize my directories as follows:
> 
> parent---
>|
>   package
>   |
>   bin  
>   |
>   lib
>   |
>   tools
> 
> The package/ directory holds executables from our codes. bin/ has 
> mpirun and orted, copied from the Open MPI installation. lib/ includes 
> the Open MPI libraries and the Intel libraries. tools/ includes some C-shell 
> scripts to launch MPI jobs, which use the mpirun in bin/.

FWIW, you can pass the following options to OMPI's configure to eliminate all the 
OMPI plugins (i.e., fold all of that code into libmpi and friends, instead of building 
standalone DSOs):

--disable-shared --enable-static

This will build libmpi.a (vs. libmpi.so and a bunch of plugins), which your 
application can statically link against, though it does make a larger executable. 
Alternatively, you can use:

--disable-dlopen

(instead of disable-shared/enable-static) which will make a giant libmpi.so 
(vs. libmpi.so and all the plugin DSOs).  So your MPI app will still 
dynamically link against libmpi, but all the plugins will be physically located 
in libmpi.so vs. being dlopen'ed at run time.
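Concretely, the two configure variants described above could be invoked roughly like this (the install prefixes are made up for illustration):

```shell
# Variant 1: fully static libmpi.a, no plugin DSOs on disk
./configure --prefix=/opt/openmpi-1.4.3-static \
    --disable-shared --enable-static

# Variant 2: one big libmpi.so with all plugins folded in;
# apps still link dynamically, but nothing is dlopen'ed at run time
./configure --prefix=/opt/openmpi-1.4.3-nodlopen \
    --disable-dlopen
```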

> The parent/ directory is on an NFS share mounted by all nodes of the 
> cluster. In ~/.bashrc (shared by all nodes too), I clear PATH and 
> LD_LIBRARY_PATH so they do not point to any directory of the Open MPI 
> 1.4.3 installation. 
> 
> First, if I add the above bin/ directory to PATH and lib/ to 
> LD_LIBRARY_PATH in ~/.bashrc, our parallel codes (started by the 
> C shell script in tools/) run AS EXPECTED without any problem, so 
> I know the other settings are right.
> 
> Then again, to avoid modifying ~/.bashrc or ~/.profile, I set bin/ to 
> PATH and lib/ to LD_LIBRARY_PATH in the C shell script under the 
> tools/ directory, as:
> 
> setenv PATH /path/to/bin:$PATH
> setenv LD_LIBRARY_PATH /path/to/lib:$LD_LIBRARY_PATH

Instead, you might want to try:

   /path/to/mpirun ...

which will do the same thing as mpirun's --prefix option (see mpirun(1) for 
details here), and/or use the --enable-mpi-prefix-by-default configure option.  
This option, as is probably pretty obvious :-), makes mpirun behave as if the 
--prefix option was specified on the command line, with an argument equal to 
the $prefix from configure.
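As a sketch, using the install prefix from the configure line quoted earlier in the thread, that would look like the following (`./our_code` is a hypothetical application binary):

```shell
# Invoking mpirun by its absolute path implies --prefix, so the remote
# orted daemons pick up the right PATH/LD_LIBRARY_PATH without any
# changes to ~/.bashrc on the remote nodes.
/home/yiguang/dmp-setup/openmpi-1.4.3/bin/mpirun -np 8 ./our_code
```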

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Jeff Squyres
There's lots of good MPI tutorials on the web.

My favorites are at the NCSA web site; if you get a free account, you can login 
and see their course listings.


On Mar 22, 2011, at 7:30 AM, Abdul Rahman Riza wrote:

> Dear All,
> 
> I am newbie in parallel computing and would like to ask.
> 
> I have switch and 2 laptops:
>   • Dell inspiron 640, dual core 2 gb ram
>   • Dell inspiron 1010 intel atom 1 gb ram
> 
> Both laptop running Ubuntu 10.04 under wireles network using TP-LINK access 
> point.
> 
> I am wondering if you have tutorial and source code as demo of simple 
> parallel computing for  2 laptops to perform simultaneous computation.
> 
> Riza
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




[OMPI users] "Re: RoCE (IBoE) & OpenMPI"

2011-03-22 Thread Eli Cohen
Hi,
this discussion has been brought to my attention so I joined this
mailing list to try to help.
As you already stated that the SL maps correctly to PCP when using
ibv_rc_pingpong, I assume Open MPI works over rdma_cm. In that case,
please note the following:
1. If you're using OFED-1.5.2, then if the rdma_cm socket is bound
to a VLAN net device, all egress traffic will bear a default priority of
3.
2. The default priority is controlled by a module parameter to
rdma_cm.ko named def_prec2sl.
3. You may change the priority on a per socket basis (overriding the
module parameter) by using setsockopt() to set the option
RDMA_OPTION_ID_TOS to the required value of the TOS.
4. The TOS is mapped to SL according to the following formula: SL = TOS >> 5

I hope that clears things up.

> Late yesterday I did have a chance to test the patch Jeff provided
> (against 1.4.3 - testing 1.5.x is on the docket for today). While it
> works, in that I can specify a gid_index, it doesn't do everything
> required - my traffic won't match a lossless CoS on the ethernet
> switch. Specifying a GID is only half of it; I really need to also
> specify a service level.
> The bottom 3 bits of the IB SL are mapped to ethernet's PCP bits in
> the VLAN tag. With a non-default gid, I can select an available VLAN
> (so RoCE's packets will include the PCP bits), but the only way to
> specify a priority is to use an SL. So far, the only RoCE-enabled app
> I've been able to make work correctly (such that traffic matches a
> lossless CoS on the switch) is ibv_rc_pingpong - and then, I need to
> use both a specific GID and a specific SL.
> The slides Pavel found seem a little misleading to me. The VLAN isn't
> determined by the bound netdev; all VLAN netdevs map to the same IB
> adapter for RoCE. The VLAN is determined by the GID index. Also, the SL
> interfaces. As near as I can tell from Mellanox's documentation, OFED
> test apps, and the driver source, a RoCE adapter is an Infiniband card
> in almost all respects (even more so than an iWARP adapter).


Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3 (Tim Prince)

2011-03-22 Thread Ralph Castain
On a beowulf cluster? So you are using bproc?

If so, you have to use the OMPI 1.2 series - we discontinued bproc support at 
the start of 1.3. Bproc will take care of the envars.

If not bproc, then I assume you will use ssh for launching? Usually, the 
environment is taken care of by setting up your .bashrc (or equiv for your 
shell) on the remote nodes (which usually have a shared file system so all 
binaries are available on all nodes).


On Mar 22, 2011, at 7:00 AM, ya...@adina.com wrote:

> 
> 
> Thank you very much for the comments and hints. I will try to 
> upgrade our Intel compiler collection. As for my second issue: 
> with Open MPI, is there any way to propagate environment variables 
> of the current process on the master node to the slave nodes, 
> such that the orted daemon can run on the slave nodes too?
> 
> Thanks,
> Yiguang
> 
>> On 3/21/2011 5:21 AM, ya...@adina.com wrote:
>> 
>>> I am trying to compile our codes with open mpi 1.4.3, by intel
>>> compilers 8.1.
>>> 
>>> (1) For open mpi 1.4.3 installation on linux beowulf cluster, I use:
>>> 
>>> ./configure --prefix=/home/yiguang/dmp-setup/openmpi-1.4.3
>>> CC=icc
>>> CXX=icpc F77=ifort FC=ifort --enable-static LDFLAGS="-i-static -
>>> static-libcxa" --with-wrapper-ldflags="-i-static -static-libcxa"
>>> 2>&1 | tee config.log
>>> 
>>> and
>>> 
>>> make all install 2>&1 | tee install.log
>>> 
>>> The issue is that I am trying to build open mpi 1.4.3 with intel
>>> compiler libraries statically linked to it, so that when we run
>>> mpirun/orterun, it does not need to dynamically load any intel
>>> libraries. But what I got is mpirun always asks for some intel
>>> library(e.g. libsvml.so) if I do not put intel library path on
>>> library search path($LD_LIBRARY_PATH). I checked the open mpi user
>>> archive, it seems only some kind user mentioned to use
>>> "-i-static"(in my case) or "-static-intel" in ldflags, this is what
>>> I did, but it seems not working, and I did not get any confirmation
>>> whether or not this works for anyone else from the user archive.
>>> could anyone help me on this? thanks!
>>> 
>> 
>> If you are to use such an ancient compiler (apparently a 32-bit one),
>> you must read the docs which come with it, rather than relying on
>> comments about a more recent version.  libsvml isn't included
>> automatically at link time by that 32-bit compiler, unless you specify
>> an SSE option, such as -xW. It's likely that no one has verified
>> OpenMPI with a compiler of that vintage.  We never used the 32-bit
>> compiler for MPI, and we encountered run-time library bugs for the
>> ifort x86_64 which weren't fixed until later versions.
>> 
>> 
>> -- 
>> Tim Prince
>> 
>> 
>> --
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Is there an mca parameter equivalent to -bind-to-core?

2011-03-22 Thread Ralph Castain

On Mar 21, 2011, at 9:27 PM, Eugene Loh wrote:

> Gustavo Correa wrote:
> 
>> Dear OpenMPI Pros
>> 
>> Is there an MCA parameter that would do the same as the mpiexec switch 
>> '-bind-to-core'?
>> I.e., something that I could set up not in the mpiexec command line,
>> but for the whole cluster, or for an user, etc.
>> 
>> In the past I used '-mca mpi mpi_paffinity_alone=1'.

Must be a typo here - the correct command is '-mca mpi_paffinity_alone 1'

>> But that was before '-bind-to-core' came along.
>> However, my recollection of some recent discussions here in the list
>> is that the latter would not do the same as '-bind-to-core',
>> and that the recommendation was to use '-bind-to-core' in the mpiexec 
>> command line.

Just to be clear: mpi_paffinity_alone=1 still works and will cause the same 
behavior as bind-to-core.
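For completeness, since the original question was how to set this outside the mpiexec command line: Open MPI also reads MCA parameters from per-user and system-wide files and from the environment, so (as a sketch) any of the following should have the same effect as the command-line switch:

```shell
# 1. Per-user default: MCA params file in the home directory
echo "mpi_paffinity_alone = 1" >> ~/.openmpi/mca-params.conf

# 2. Cluster-wide default: put the same line in the system-wide file
#    under the install prefix: $prefix/etc/openmpi-mca-params.conf

# 3. Per-shell: environment variable, named OMPI_MCA_<param>
export OMPI_MCA_mpi_paffinity_alone=1
```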


>> 
> A little awkward, but how about
> 
> --bycore          rmaps_base_schedule_policy  core
> --bysocket        rmaps_base_schedule_policy  socket
> --bind-to-core    orte_process_binding        core
> --bind-to-socket  orte_process_binding        socket
> --bind-to-none    orte_process_binding        none
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3 (Tim Prince)

2011-03-22 Thread yanyg


Thank you very much for the comments and hints. I will try to 
upgrade our Intel compiler collection. As for my second issue: 
with Open MPI, is there any way to propagate environment variables 
of the current process on the master node to the slave nodes, 
such that the orted daemon can run on the slave nodes too?

Thanks,
Yiguang

> On 3/21/2011 5:21 AM, ya...@adina.com wrote:
> 
> > I am trying to compile our codes with open mpi 1.4.3, by intel
> > compilers 8.1.
> >
> > (1) For open mpi 1.4.3 installation on linux beowulf cluster, I use:
> >
> > ./configure --prefix=/home/yiguang/dmp-setup/openmpi-1.4.3
> > CC=icc
> > CXX=icpc F77=ifort FC=ifort --enable-static LDFLAGS="-i-static -
> > static-libcxa" --with-wrapper-ldflags="-i-static -static-libcxa"
> > 2>&1 | tee config.log
> >
> > and
> >
> > make all install 2>&1 | tee install.log
> >
> > The issue is that I am trying to build open mpi 1.4.3 with intel
> > compiler libraries statically linked to it, so that when we run
> > mpirun/orterun, it does not need to dynamically load any intel
> > libraries. But what I got is mpirun always asks for some intel
> > library(e.g. libsvml.so) if I do not put intel library path on
> > library search path($LD_LIBRARY_PATH). I checked the open mpi user
> > archive, it seems only some kind user mentioned to use
> > "-i-static"(in my case) or "-static-intel" in ldflags, this is what
> > I did, but it seems not working, and I did not get any confirmation
> > whether or not this works for anyone else from the user archive.
> > could anyone help me on this? thanks!
> >
> 
> If you are to use such an ancient compiler (apparently a 32-bit one),
> you must read the docs which come with it, rather than relying on
> comments about a more recent version.  libsvml isn't included
> automatically at link time by that 32-bit compiler, unless you specify
> an SSE option, such as -xW. It's likely that no one has verified
> OpenMPI with a compiler of that vintage.  We never used the 32-bit
> compiler for MPI, and we encountered run-time library bugs for the
> ifort x86_64 which weren't fixed until later versions.
> 
> 
> -- 
> Tim Prince
> 
> 
> --



[OMPI users] Parallel Computation under WiFi for Beginners

2011-03-22 Thread Abdul Rahman Riza
Dear All,

I am a newbie in parallel computing and would like to ask a question.

I have a switch and 2 laptops: 

 1. Dell Inspiron 640, dual core, 2 GB RAM
 2. Dell Inspiron 1010, Intel Atom, 1 GB RAM


Both laptops run Ubuntu 10.04 on a wireless network using a TP-LINK
access point.

I am wondering if you have a tutorial and source code demonstrating simple
parallel computing on 2 laptops performing a simultaneous computation.

Riza


Re: [OMPI users] Building OpenMPI on Windows 7

2011-03-22 Thread Shiqing Fan

Hi Hiral,

You have to add "OMPI_IMPORTS" as a preprocessor definition in your 
project configuration. An easier way is to use the mpicc command line.


Please also take a look at the output of "mpicc --showme"; it will 
show you the complete compile options.



Regards,
Shiqing

On 3/22/2011 10:36 AM, hi wrote:

Hi Shiqing,
While building my application (on Windows 7, Visual Studio 2008 
32-bit application) with openmpi-1.5.2, I get the following errors...

util.o : error LNK2001: unresolved external symbol _ompi_mpi_byte
util.o : error LNK2001: unresolved external symbol _ompi_mpi_op_max
util.o : error LNK2001: unresolved external symbol _ompi_mpi_int
util.o : error LNK2001: unresolved external symbol _ompi_mpi_char
util.o : error LNK2001: unresolved external symbol _ompi_mpi_comm_world
util.o : error LNK2001: unresolved external symbol _ompi_mpi_op_sum
Linking options...
/LIBPATH:""c:\openmpi-1.5.2\installed"/lib/" libmpi_cxxd.lib 
libmpid.lib libmpi_f77d.lib libopen-pald.lib libopen-rted.lib

It seems that 'dllexport' is missing for the above symbols.
Thank you.
-Hiral

On Fri, Mar 18, 2011 at 1:53 AM, Shiqing Fan wrote:


Hi Hiral,



> There are no f90 bindings at the moment for Windows.
Any idea when this will be available?

At the moment, no; only if there are strong requirements.



Regards,
Shiqing


Thank you.
-Hiral
On Thu, Mar 17, 2011 at 5:21 PM, Shiqing Fan wrote:



I tried building openmpi-1.5.2 on Windows 7 (in the environment
described below) with OMPI_WANT_F77_BINDINGS_ON and
OMPI_WANT_F90_BINDINGS_ON using "ifort".
I observed that it generated mpif77.exe but didn't
generate mpif90.exe; any idea?


There are no f90 bindings at the moment for Windows.



BTW: while using the generated mpif77.exe to compile
hello_f77.f, I got the following errors...

c:\openmpi-1.5.2\examples> mpif77 hello_f77.f
Intel(R) Visual Fortran Compiler Professional for
applications running on IA-32,
 Version 11.1Build 20100414 Package ID:
w_cprof_p_11.1.065
Copyright (C) 1985-2010 Intel Corporation.  All rights
reserved.
C:/openmpi-1.5.2/installed/include\mpif-config.h(91):
error #5082: Syntax error,
 found ')' when expecting one of: ( 
...
  parameter (MPI_STATUS_SIZE=)
-^
compilation aborted for hello_f77.f (code 1)


It seems MPI_STATUS_SIZE is not set. Could you please send
your CMakeCache.txt to me off the mailing list, so that I can
check what is going wrong? A quick workaround would be to just set
it to 0.


Regards,
Shiqing


Thank you.
-Hiral
On Wed, Mar 16, 2011 at 8:11 PM, Damien wrote:

Hiral,
To add to Shiqing's comments, 1.5 has been running great
for me on Windows for over 6 months since it was in
beta.  You should give it a try.
Damien
On 16/03/2011 8:34 AM, Shiqing Fan wrote:

Hi Hiral,

> it's only experimental in the 1.4 series. And there are
only F77 bindings on Windows, no F90 bindings.
Can you please provide steps to build 1.4.3 with
experimental f77 bindings on Windows?

Well, I highly recommend using the 1.5 series, but I can
also take a look and probably provide you a patch for
1.4.

BTW: Do you have any idea on: when next stable release
with full fortran support on Windows would be available?

There is no plan yet.
Regards,
Shiqing

Thank you.
-Hiral
On Wed, Mar 16, 2011 at 6:59 PM, Shiqing Fan wrote:

Hi Hiral,
1.3.4 is quite old; please use the latest version.
As Damien noted, full Fortran support is in the
1.5 series; it's only experimental in the 1.4 series.
And there are only F77 bindings on Windows, no F90
bindings. Another choice is to use the released
binary installers to avoid compiling everything by
yourself.
Best Regards,
Shiqing
On 3/16/2011 11:47 AM, hi wrote:


Greetings!!!

I am trying to build openmpi-1.3.4 and
openmpi-1.4.3 on Windows 7 (64-bit OS), but am
running into some difficulty...

My build environment:

OS : Windows 7 (64-bit)

C/C++ compiler : Visual Studio 2008 and Visual
Studio 2010

   

Re: [OMPI users] Building OpenMPI on Windows 7

2011-03-22 Thread hi
Hi Shiqing,

While building my application (on Windows 7, Visual Studio 2008 32-bit
application) with openmpi-1.5.2, I get the following errors...

util.o : error LNK2001: unresolved external symbol _ompi_mpi_byte
util.o : error LNK2001: unresolved external symbol _ompi_mpi_op_max
util.o : error LNK2001: unresolved external symbol _ompi_mpi_int
util.o : error LNK2001: unresolved external symbol _ompi_mpi_char
util.o : error LNK2001: unresolved external symbol _ompi_mpi_comm_world
util.o : error LNK2001: unresolved external symbol _ompi_mpi_op_sum
Linking options...
/LIBPATH:""c:\openmpi-1.5.2\installed"/lib/" libmpi_cxxd.lib libmpid.lib
libmpi_f77d.lib libopen-pald.lib libopen-rted.lib

It seems that 'dllexport' is missing for the above symbols.

Thank you.
-Hiral


On Fri, Mar 18, 2011 at 1:53 AM, Shiqing Fan  wrote:

> Hi Hiral,
>
>
>
> > There are no f90 bindings at the moment for Windows.
> Any idea when this will be available?
>
> At the moment, no; only if there are strong requirements.
>
>
>
> Regards,
> Shiqing
>
>
> Thank you.
> -Hiral
>
> On Thu, Mar 17, 2011 at 5:21 PM, Shiqing Fan  wrote:
>
>>
>>  I tried building openmpi-1.5.2 on Windows 7 (in the environment
>> described below) with OMPI_WANT_F77_BINDINGS_ON and
>> OMPI_WANT_F90_BINDINGS_ON using "ifort".
>>
>> I observed that it generated mpif77.exe but didn't generate
>> mpif90.exe; any idea?
>>
>>
>> There are no f90 bindings at the moment for Windows.
>>
>>
>>  BTW: while using the generated mpif77.exe to compile hello_f77.f, I got
>> the following errors...
>>
>> c:\openmpi-1.5.2\examples> mpif77 hello_f77.f
>> Intel(R) Visual Fortran Compiler Professional for applications running on
>> IA-32,
>>  Version 11.1Build 20100414 Package ID: w_cprof_p_11.1.065
>> Copyright (C) 1985-2010 Intel Corporation.  All rights reserved.
>> C:/openmpi-1.5.2/installed/include\mpif-config.h(91): error #5082: Syntax
>> error,
>>  found ')' when expecting one of: (  
>> > _KIND_PARAM>   ...
>>   parameter (MPI_STATUS_SIZE=)
>> -^
>> compilation aborted for hello_f77.f (code 1)
>>
>> It seems MPI_STATUS_SIZE is not set. Could you please send your
>> CMakeCache.txt to me off the mailing list, so that I can check what is going
>> wrong? A quick workaround would be to just set it to 0.
>>
>>
>> Regards,
>> Shiqing
>>
>>  Thank you.
>> -Hiral
>>
>>
>> On Wed, Mar 16, 2011 at 8:11 PM, Damien  wrote:
>>
>>
>>> Hiral,
>>>
>>> To add to Shiqing's comments, 1.5 has been running great for me on
>>> Windows for over 6 months since it was in beta.  You should give it a try.
>>>
>>> Damien
>>>
>>> On 16/03/2011 8:34 AM, Shiqing Fan wrote:
>>>
>>> Hi Hiral,
>>>
>>>
>>>
>>> > it's only experimental in the 1.4 series. And there are only F77 bindings
>>> on Windows, no F90 bindings.
>>> Can you please provide steps to build 1.4.3 with experimental f77
>>> bindings on Windows?
>>>
>>> Well, I highly recommend using the 1.5 series, but I can also take a look
>>> and probably provide you a patch for 1.4.
>>>
>>>
>>>
>>> BTW: Do you have any idea on: when next stable release with full fortran
>>> support on Windows would be available?
>>>
>>> There is no plan yet.
>>>
>>>
>>> Regards,
>>> Shiqing
>>>
>>>
>>>
>>>
>>> Thank you.
>>> -Hiral
>>>
>>> On Wed, Mar 16, 2011 at 6:59 PM, Shiqing Fan  wrote:
>>>
>>>
 Hi Hiral,

 1.3.4 is quite old; please use the latest version. As Damien noted,
 full Fortran support is in the 1.5 series; it's only experimental in the 1.4 
 series.
 And there are only F77 bindings on Windows, no F90 bindings. Another choice
 is to use the released binary installers to avoid compiling everything by
 yourself.


 Best Regards,
 Shiqing

 On 3/16/2011 11:47 AM, hi wrote:

  Greetings!!!



 I am trying to build openmpi-1.3.4 and openmpi-1.4.3 on Windows 7
 (64-bit OS), but am running into some difficulty...



 My build environment:

 OS : Windows 7 (64-bit)

 C/C++ compiler : Visual Studio 2008 and Visual Studio 2010

 Fortran compiler: Intel "ifort"



 Approach: followed the "First Approach" described in README.WINDOWS
 file.



 1) Using openmpi-1.3.4:

 Observed a build-time error in version.cc(136). This error is related
 to getting SVN version information, as described in
 http://www.open-mpi.org/community/lists/users/2010/01/11860.php. As we
 are using the openmpi-1.3.4 stable version on the Linux platform, is there any
 fix for this compile-time error?



 2) Using openmpi-1.4.3:

 Builds properly without F77/F90 support (i.e., skipping the MPI F77
 interface).

 Now, to get the "mpif*.exe" for Fortran programs, I provided the proper
 "ifort" path and enabled the "OMPI_WANT_F77_BINDINGS=ON" and/or
 OMPI_WANT_F90_BINDINGS=ON flags, but got the following 

Re: [OMPI users] Is there an mca parameter equivalent to -bind-to-core?

2011-03-22 Thread Eugene Loh

Gustavo Correa wrote:


Dear OpenMPI Pros

Is there an MCA parameter that would do the same as the mpiexec switch 
'-bind-to-core'?
I.e., something that I could set up not in the mpiexec command line,
but for the whole cluster, or for an user, etc.

In the past I used '-mca mpi mpi_paffinity_alone=1'.
But that was before '-bind-to-core' came along.
However, my recollection of some recent discussions here in the list
is that the latter would not do the same as '-bind-to-core',
and that the recommendation was to use '-bind-to-core' in the mpiexec command 
line.


A little awkward, but how about

  --bycore          rmaps_base_schedule_policy  core
  --bysocket        rmaps_base_schedule_policy  socket
  --bind-to-core    orte_process_binding        core
  --bind-to-socket  orte_process_binding        socket
  --bind-to-none    orte_process_binding        none