Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-20 Thread Damien
If you look around on Ebay, you can find old 16-core Opteron servers for 
a few hundred dollars.  It's not screaming performance, but 16 cores is 
enough to get you started on scaling and parallelism in MPI.  It's a 
cheap cluster in a box.


Damien

On 5/20/2016 12:40 PM, MM wrote:

Hello,

Say I don't have access to an actual cluster; I'm ultimately considering cloud 
compute solutions for my MPI program, but such a cost may 
be highly prohibitive at the moment.
In terms of middle ground, if I am interested in compute only, no 
storage, what are possible hardware solutions out there to deploy my 
MPI program?
By no storage, I mean that my control Linux box, which runs the frontend 
of the program but is also part of the MPI communicator, always 
gathers all results and stores them locally.

At the moment, I have a second box over ethernet.

I am looking at something like the Intel Compute Stick (is it possible at 
all to buy a few, does Linux run on them, the arch seems to be the 
same x86-64, and is there a possible setup with TCP for those to run 
Open MPI over TCP)?


Is it more cost-effective to look at extra regular Linux commodity boxes?
If a box with no hard drive is possible, can the executables of my MPI 
program be sent over the wire before running them?


If we exclude GPU or other non-MPI solutions, with cost being a primary 
factor, what is the progression path from 2 boxes to a cloud-based solution 
(Amazon and the like)?


Regards,
MM


___
users mailing list
us...@open-mpi.org
Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2016/05/29257.php




Re: [OMPI users] Why do I need a C++ linker while linking in MPI C code with CUDA?

2016-03-20 Thread Damien Hocking

Durga,

The CUDA libraries use the C++ standard library.  That's where the std::ios_base 
errors come from; you need the C++ linker to bring those symbols in.
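
For example, taking the link line from the post below, either of these
(a sketch, untested) should resolve the std::ios_base symbols:

  # Option 1: link with the C++ wrapper
  mpic++ -L/usr/local/cuda/lib64 -lcudart -lm -o ../bin/jacobi_cuda_normal_mpi \
      jacobi.o input.o host.o device.o cuda_normal_mpi.o

  # Option 2: keep mpicc but pull in the C++ standard library explicitly
  mpicc -L/usr/local/cuda/lib64 -lcudart -lstdc++ -lm -o ../bin/jacobi_cuda_normal_mpi \
      jacobi.o input.o host.o device.o cuda_normal_mpi.o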


Damien


On March 20, 2016 9:15:47 AM "dpchoudh ." <dpcho...@gmail.com> wrote:


Hello all

I downloaded some code samples from here:

https://github.com/parallel-forall/code-samples/

and tried to build the subdirectory

posts/cuda-aware-mpi-example/src

in my CentOS 7 machine.

I had to make several changes to the Makefile before it would build. The
modified Makefile is attached (the make targets I am talking about are the
3rd and 4th from the bottom). Most of the modifications can be explained as
possible platform-specific variations (such as path differences between 
Ubuntu and CentOS), except the following:

I had to use a C++ linker (mpic++) to link in the object files that were
produced with C host compiler (mpicc) and CUDA compiler (nvcc). If I did
not do this, (i.e. I stuck to mpicc for linking), I got the following link
error:

mpicc -L/usr/local/cuda/lib64 -lcudart -lm -o ../bin/jacobi_cuda_normal_mpi
jacobi.o input.o host.o device.o  cuda_normal_mpi.o
device.o: In function `__static_initialization_and_destruction_0(int, int)':
tmpxft_4651_-4_Device.cudafe1.cpp:(.text+0xd1e): undefined
reference to `std::ios_base::Init::Init()'
tmpxft_4651_-4_Device.cudafe1.cpp:(.text+0xd2d): undefined
reference to `std::ios_base::Init::~Init()'
collect2: error: ld returned 1 exit status

Can someone please explain why I would need a C++ linker for object files
that were generated using a C compiler? Note that if I use mpic++ for both
compiling and linking, there are no errors either.

Thanks in advance
Durga

Life is complex. It has real and imaginary parts.



--
___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2016/03/28760.php


Re: [OMPI users] single CPU vs four CPU result differences, is it normal?

2015-10-28 Thread Damien

Diego,

There aren't many linear solvers that are bit-consistent, where the 
answer is the same no matter how many cores or processes you use. 
Intel's version of Pardiso is bit-consistent and I think MUMPS 5.0 might 
be, but that's all.  You should assume your answer will not be exactly 
the same as you change the number of cores or processes, although you 
should reach the same overall error tolerance in approximately the same 
number of iterations.
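
As a tiny illustration of why (this example is mine, not from the solver
in question), the same three numbers summed in a different order already
give different answers in double precision:

  #include <stdio.h>

  int main(void)
  {
      double a = 1.0e16, b = -1.0e16, c = 1.0;
      printf("%.1f\n", (a + b) + c);   /* prints 1.0 */
      printf("%.1f\n", a + (b + c));   /* prints 0.0: c is lost below the ulp of b */
      return 0;
  }

A parallel reduction changes the order of the additions, so differences
like this are expected and can grow over many iterations.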


Damien

On 2015-10-28 3:51 PM, Diego Avesani wrote:

dear Andreas, dear all,
The code is quite long. It is a conjugate gradient algorithm to solve 
a complex system.


I have noticed that when a do cycle is small, let's say
do i=1,3

enddo

the results are identical. If the cycle is big, let's say do i=1,20, 
the results are different and the difference increases with the number 
of iterations.


What do you think?



Diego


On 28 October 2015 at 22:32, Andreas Schäfer <gent...@gmx.de 
<mailto:gent...@gmx.de>> wrote:


On 22:03 Wed 28 Oct , Diego Avesani wrote:
> When I use a single CPU a get a results, when I use 4 CPU I get
another
> one. I do not think that very is a bug.

Sounds like a bug to me, most likely in your code.

> Do you think that these small differences are normal?

It depends on what small means. Floating-point operations in a
computer are generally not associative, so parallelization may indeed
lead to different results.

> Is there any way to get the same results? is some align problem?

Impossible to say without knowing your code.

Cheers
-Andreas


--
==
Andreas Schäfer
HPC and Grid Computing
Department of Computer Science 3
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
+49 9131 85-27910 <tel:%2B49%209131%2085-27910>
PGP/GPG key via keyserver
http://www.libgeodecomp.org
==

(\___/)
(+'.'+)
(")_(")
This is Bunny. Copy and paste Bunny into your
signature to help him gain world domination!

___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post:
http://www.open-mpi.org/community/lists/users/2015/10/27933.php




___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/10/27934.php




Re: [OMPI users] OpenMPI on Windows without Cygwin

2015-05-13 Thread Damien

Depending on what you need, the old 1.6 version might still work.

Damien

On 2015-05-13 2:19 PM, Walt Brainerd wrote:

No, I hadn't received any response.
That is too bad.
Knowing that earlier would have saved some hours.

Some day I'll look again at extracting some set of stuff
from Cygwin that will make it work. Maybe even that
is not possible. But Cygwin is huge. OTOH, maybe anybody
who is contemplating using Coarrays would be somebody
who has Cygwin anyway.

On Wed, May 13, 2015 at 8:55 AM, Damien <dam...@khubla.com 
<mailto:dam...@khubla.com>> wrote:


Walt,

I don't remember seeing a response to this.  OpenMPI isn't
supported on native Windows anymore.  The last version for Windows
was the 1.6 series.

Damien


On 2015-05-11 3:07 PM, Walt Brainerd wrote:

Is it possible to build OpenMPI for Windows
not running Cygwin?

I know it uses /dev/shm, so there would have to
be something equivalent to that not in Cygwin.

TIA.

-- 
Walt Brainerd



___
users mailing list
us...@open-mpi.org  <mailto:us...@open-mpi.org>
Subscription:http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this 
post:http://www.open-mpi.org/community/lists/users/2015/05/26855.php



___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post:
http://www.open-mpi.org/community/lists/users/2015/05/26862.php




--
Walt Brainerd


___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/05/26863.php




Re: [OMPI users] OpenMPI on Windows without Cygwin

2015-05-13 Thread Damien

Walt,

I don't remember seeing a response to this.  OpenMPI isn't supported on 
native Windows anymore.  The last version for Windows was the 1.6 series.


Damien

On 2015-05-11 3:07 PM, Walt Brainerd wrote:

Is it possible to build OpenMPI for Windows
not running Cygwin?

I know it uses /dev/shm, so there would have to
be something equivalent to that not in Cygwin.

TIA.

--
Walt Brainerd


___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/05/26855.php




Re: [OMPI users] OpenMPI is not using free processors, but overloading other processors already at 100%

2015-04-10 Thread Damien

Namu,

Hyperthreads aren't real cores, they're really just another hardware 
thread (two per physical core).  You have two CPUs with 6 physical cores 
each.  If you start 2 simulations, each with 6 MPI processes running, 
your 12 physical cores will each be running at 100%. Adding another 
simulation with another 6 MPI processes will oversubscribe your physical 
cores (you're asking for 150%), which is why you're still seeing 12 
processors at 100%, and everything else very low.  Your physical cores 
are switching hardware threads, but each core can't go any faster.  
Hyperthreads only help when your software doesn't load a core to 100%.  
Then the other hyperthread on that core can switch in and use leftover 
capacity.  Hardware thread switching is much faster than software thread 
switching, which is why it's there.


Most simulation software will load cores to 100% (even if it doesn't use 
that 100% wisely, which is a whole other flame war) and hyperthreading 
doesn't help you.
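
If you want to see exactly where each rank lands, one option (a sketch;
--bind-to and --report-bindings are available in the 1.8 series) is to
launch each simulation with something like:

  mpiexec -np 6 --bind-to core --report-bindings ./your_simulation

mpiexec then prints each rank's binding at startup.  Separate mpiexec
invocations don't coordinate with each other, so you can watch the third
job landing on cores the first two are already using (./your_simulation
is just a placeholder for your own binary).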


Damien

On 2015-04-10 2:22 PM, namu patel wrote:

Hello All,

I am using OpenMPI 1.8.4 on my workstation with 2 CPUs, each with 12 
logical processors (6 physical cores with hyper-threading). When I run 
simulations using mpiexec, I'm noticing a strange performance issue. If I run two 
simulations, each with 6 processes, then everything is fine and 12 
processors are under 100% load. When I start a 3rd simulation with 6 
processes, I notice throttling in all 3 simulations and only 12 
processors are at 100%, the rest are below 10%. My guess is that 
somehow the processes from the 3rd simulation are doubling up onto the 
already busy processors. How can I be certain that this is the case?


Thanks,
namu


___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/04/26671.php




Re: [OMPI users] monitoring the status of processors

2015-03-17 Thread Damien

Ganglia might help:

http://ganglia.sourceforge.net/

Could be too high-level though.

Damien

On 2015-03-17 8:59 AM, Ralph Castain wrote:
Not at the moment - at least, not integrated into OMPI at this time. 
We used to have sensors for such purposes in the OMPI code itself, but 
they weren’t used and so we removed them.


The resource manager generally does keep track of such things - see 
for example ORCM:


https://github.com/open-mpi/orcm/wiki

Some of us are working on an extended version of PMI (called PMIx) 
that will include support for requesting such info from the resource 
manager in its upcoming version 2.0 release (sometime this summer). So 
that might help, and would be portable across environments.


https://github.com/open-mpi/pmix/wiki


On Mar 17, 2015, at 7:38 AM, etcamargo <etcama...@inf.ufpr.br 
<mailto:etcama...@inf.ufpr.br>> wrote:


Hi, All

I would like to know if there is an (MPI) tool for monitoring the 
status of a processor (and its cores) at runtime, i.e., while I am 
running an MPI application.


Let's suppose that some physical processors become overloaded while an 
MPI application is running. I am looking for a way to know which are 
the "busy" or the "slow" processors.


Thanks in advance!

Edson
___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/03/26484.php




___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/03/26485.php




Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Damien
Visual Studio can link libs compiled with Intel.  We used to make 
binaries available for Windows with Open-MPI.


On 2014-07-17 12:58 PM, Ralph Castain wrote:

Well, yeah - I dig that. However, barring doing something stupid, I am hopeful 
we'll be okay if I continue OMPI's practice of staying within the C99 standard. 
If MSVC won't support that, then we're out of luck anyway because there is no 
way OMPI is going back to pre-C99 at this stage.


On Jul 17, 2014, at 11:52 AM, Jed Brown  wrote:





Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Damien
Is this something that could be funded by Microsoft, and is it time to 
approach them perhaps?  MS MPI is based on MPICH, and if mainline MPICH 
isn't supporting Windows anymore, then there won't be a whole lot of 
development in an increasingly older Windows build.  With the Open-MPI 
roadmap, there's a lot happening.  Would it be a better business model 
for MS to piggy-back off of Open-MPI ongoing innovation, and put their 
resources into maintaining a Windows build of Open-MPI instead?


Damien

On 2014-07-17 11:42 AM, Jed Brown wrote:

Rob Latham <r...@mcs.anl.gov> writes:

Well, I (and dgoodell and jsquyers and probably a few others of you) can
say from observing disc...@mpich.org traffic that we get one message
about Windows support every month -- probably more often.

Seems to average at least once a week.  We also see regular petsc
support emails wondering why --download-{mpich,openmpi} does not work on
Windows.  (These options are pretty much only used by beginners for whom
PETSc is their first encounter with MPI.)


___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2014/07/24797.php




Re: [OMPI users] latest stable and win7/msvc2013

2014-07-16 Thread Damien

Guys,

Don't do it.  It doesn't work at all.  I couldn't pick up maintenance of 
it either, and the majority of the Windows support is removed as Ralph 
said.  Just use MPICH for Windows work and save yourself the pain.


Cheers,

Damien

On 2014-07-16 9:57 AM, Nathan Hjelm wrote:

It likely won't build because last I check the Microsoft toolchain does
not fit the minimum requirements (C99 or higher). You will have better
luck with either gcc or intel's compiler.

-Nathan

On Wed, Jul 16, 2014 at 04:52:53PM +0100, MM wrote:

hello,
I'm about to try to build 1.8.1 with the win msvc2013 toolkit in 64-bit mode.
I know the win binaries were dropped after failure to find someone to
pick them up (following Shiqing's departure), and I'm afraid I wouldn't
volunteer due to lack of time, but is there any general advice before
I start?

rds,
MM
___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2014/07/24787.php


___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2014/07/24789.php




Re: [OMPI users] calling a parallel solver from sequential code

2013-11-18 Thread Damien

Florian,

There's two ways.  You can make your whole app MPI-based, but only have 
the master process do any of the sequential work, while the others spin 
until the parallel part.  That's the easiest, but you then have MPI 
everywhere in your app.  The other way is to have the MPI processes 
exist totally outside your main sequential process. This keeps you 
isolated from the MPI, but it's a lot more work.
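
A minimal sketch of that first approach (illustration only; the two
function names are placeholders, not anyone's real code):

  #include <mpi.h>
  #include <stdio.h>

  /* Hypothetical stand-ins for the existing sequential code and the solver. */
  static int run_sequential_part(void)
  {
      static int step = 0;
      ++step;
      printf("sequential work, step %d\n", step);
      return (step < 3) ? 1 : -1;          /* 1 = do a parallel solve, -1 = finished */
  }

  static void parallel_solve(void)
  {
      /* every rank takes part in the MPI solve here */
  }

  int main(int argc, char **argv)
  {
      int rank, action = 0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      for (;;) {
          if (rank == 0)
              action = run_sequential_part();              /* only the master runs the sequential code */
          MPI_Bcast(&action, 1, MPI_INT, 0, MPI_COMM_WORLD); /* workers wait here */
          if (action < 0)
              break;                                       /* everyone shuts down together */
          parallel_solve();                                 /* the parallel section */
      }

      MPI_Finalize();
      return 0;
  }

The workers sit in the broadcast between solves, which is the "spinning"
mentioned above.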


I've done the MPI on the outside with the MUMPS linear solver.  You need 
to spin up the MPI process group separately, so your sequential process 
isn't doing any work while they're running.  You also need to send data 
to the MPI processes, which I used Boost's Shared-Memory library for (if 
you can't use C++ in your project this won't work for you at all).  You 
also have to keep the MPI processes and the main process synchronised 
and you need your main process to surrender its core while the MPI 
solve is going on, so you end up with a bunch of Sleep or sched_yield 
calls so that everything plays nicely.  The whole thing takes a *lot* of 
tweaking to get right. Honestly, it's a total pig and I'd recommend 
against this path (we don't use it anymore in our software).


If you just need a good parallel, direct linear solver (I'm making an 
assumption here) to run in one memory space, go grab SuperLU-MT from here:


http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_mt

or use the LiS solver package from here if you want an iterative solver:

http://www.ssisc.org/lis/

Both of these can handle very large problems in shared-memory and have 
good scale-up.


Damien


On 18/11/2013 7:09 AM, Florian Bruckner wrote:

hi,

How can I call an MPI-parallelized solver routine from sequential 
code? The sequential code already exists and the structure looks 
like the following:


int main()
{
   do {
 x = rand();

 sequential_code(); // this sequential code should only be 
executed on the master node

 if (x == 2345) MPIsolve(); // this should be run in parallel
   } while(x == 1234);
}

I'm wondering how the MPI-parallelized solver routine can be 
called without parallelizing the whole existing sequential code. At a 
certain point of the code path of the sequential code, the parallel 
execution should be started, but how can this be achieved?


When starting the application with mpirun there must be some code 
running on each node, and the same code path needs to be followed by 
each process. But this would mean that exactly the same sequential 
code needs to be executed on each node!?


What am I missing?
Thanks in advance
Florian

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Get your Open MPI schwag!

2013-10-23 Thread Damien Hocking

Heheheheh.

Chuck Norris has zero latency and infinite bandwidth.
Chuck Norris is a hardware implementation only.  Software is for sissys.
Chuck Norris's version of MPI_IRecv just gives you the answer.
Chuck Norris has a 128-bit memory space.
Chuck Norris's Law says Chuck Norris gets twice as amazing every 18 months.
Chuck Norris is infinitely scalable.
MPI_COMM_WORLD is only a small part of Chuck Norris's mind.
Chuck Norris can power Exascale.  Twice.

:-)

Damien

On 23/10/2013 4:26 PM, Shamis, Pavel wrote:

+1 for Chuck Norris

Pavel (Pasha) Shamis
---
Computer Science Research Group
Computer Science and Math Division
Oak Ridge National Laboratory






On Oct 23, 2013, at 1:12 PM, Mike Dubman 
<mi...@dev.mellanox.co.il<mailto:mi...@dev.mellanox.co.il>> wrote:

maybe to add some nice/funny slogan on the front under the logo, and cool 
picture on the back.
some of community members are still in early twenties (and counting) .  :)

shall we open a contest for good slogan to put? and mid-size pict to put on the 
back side?

- living the parallel world
- iOMPI
- OpenMPI - breaking the barriers!
...
for the mid-sized back-side picture, I suggest chuck norris, you can`t beat it.





On Wed, Oct 23, 2013 at 7:48 PM, John Hearns 
<hear...@googlemail.com<mailto:hear...@googlemail.com>> wrote:

OpenMPI aprons. Nice! Good to wear when cooking up those Chef recipes. (Did I 
really just say that...)

___
users mailing list
us...@open-mpi.org<mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org<mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Unexpected behavior: MPI_Comm_accept, MPI_Comm_connect, and MPI_THREAD_MULTIPLE

2013-05-14 Thread Damien Kick

On May 14, 2013, at 3:02 PM, Ralph Castain <r...@open-mpi.org>
 wrote:

>
> On May 14, 2013, at 12:56 PM, Damien Kick <dk...@shoretel.com> wrote:
>
>>
>> On May 14, 2013, at 1:46 PM, Ralph Castain <r...@open-mpi.org>
>> wrote:
>>
>>> Problem is that comm_accept isn't thread safe in 1.6 series - we have a 
>>> devel branch that might solve it, but is still under evaluation
>>
>> So then probably the only way to implement an MPI server which handles 
>> multiple concurrent clients with Open MPI 1.6.4 is to use multiple processes 
>> to handle each client connection, yes?
>
> Or introduce your own thread protection around the comm_accept call

Hmm … but won't that cause other problems when the call to MPI_Comm_accept 
blocks while we're still holding the mutex?  There doesn't seem to be an option 
to MPI_Comm_accept to timeout nor does there seem to be a variant of MPI_Probe 
to allow for a non-blocking accept.  Am I missing something?  One more 
question, too (and thanks for all your help), I don't see anything in the man 
page for MPI_Comm_accept which mentions interactions with signals.  Is there a 
reliable behavior in this context, i.e. return an error from MPI_Comm_accept 
and set errno to EINTR?  Would the C++ binding throw an exception?

 I suppose that one could use MPI_Comm_join to have "normal socket 
code" handle connections without worrying about blocking the rest of MPI and 
then only introduce a mutex when we know we're ready to call MPI_Comm_join.
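
For what it's worth, a rough, untested sketch of that MPI_Comm_join idea
(C API, single-rank server side; the port number and the missing error
handling are purely illustrative):

  #include <mpi.h>
  #include <pthread.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static pthread_mutex_t mpi_mutex = PTHREAD_MUTEX_INITIALIZER;

  static MPI_Comm accept_one_client(int listen_fd)
  {
      int fd = accept(listen_fd, NULL, NULL);   /* blocks here, outside MPI; select() timeouts are possible */
      MPI_Comm client;
      pthread_mutex_lock(&mpi_mutex);           /* serialize only the MPI call itself */
      MPI_Comm_join(fd, &client);               /* the peer must call MPI_Comm_join on its end of the socket */
      pthread_mutex_unlock(&mpi_mutex);
      close(fd);                                /* the socket is quiescent again once the join returns */
      return client;
  }

  int main(int argc, char **argv)
  {
      int provided, listen_fd;
      struct sockaddr_in addr = { 0 };

      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

      listen_fd = socket(AF_INET, SOCK_STREAM, 0);
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = INADDR_ANY;
      addr.sin_port = htons(5678);              /* illustrative port */
      bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
      listen(listen_fd, 8);

      MPI_Comm client = accept_one_client(listen_fd);
      /* ... talk to the client over the new intercommunicator ... */
      MPI_Comm_disconnect(&client);

      close(listen_fd);
      MPI_Finalize();
      return 0;
  }

The accept (and any timeout logic) happens on an ordinary socket outside
MPI, so only the brief MPI_Comm_join call needs to hold the mutex.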



This e-mail and any attachments are confidential. If it is not intended for 
you, please notify the sender, and please erase and ignore the contents.



Re: [OMPI users] Unexpected behavior: MPI_Comm_accept, MPI_Comm_connect, and MPI_THREAD_MULTIPLE

2013-05-14 Thread Damien Kick

On May 14, 2013, at 1:46 PM, Ralph Castain 
wrote:

> Problem is that comm_accept isn't thread safe in 1.6 series - we have a devel 
> branch that might solve it, but is still under evaluation

So then probably the only way to implement an MPI server which handles multiple 
concurrent clients with Open MPI 1.6.4 is to use multiple processes to handle 
each client connection, yes?




This e-mail and any attachments are confidential. If it is not intended for 
you, please notify the sender, and please erase and ignore the contents.



[OMPI users] Unexpected behavior: MPI_Comm_accept, MPI_Comm_connect, and MPI_THREAD_MULTIPLE

2013-05-14 Thread Damien Kick
) const;

operator MPI::Intercomm() const;
};

} // namespace seed
} // namespace shor

#include "seed/mpi_intercomm.cc.hh"

#endif
$ cat include/seed/mpi_intercomm.cc.hh
#ifndef INCLUDE_SEED_MPI_INTERCOMM_CC_HH
#define INCLUDE_SEED_MPI_INTERCOMM_CC_HH

#include 

inline MPI::Intercomm*
shor::seed::Mpi_intercomm::operator -> ()
{
return &(*impl_);
}

inline const MPI::Intercomm*
shor::seed::Mpi_intercomm::operator -> () const
{
return &(*impl_);
}

inline
shor::seed::Mpi_intercomm::operator MPI::Intercomm() const
{
return *impl_;
}

#endif
$ cat src/mpi_intercomm.cc
#include "seed/mpi_intercomm.hh"

shor::seed::Mpi_intercomm::Mpi_intercomm(
MPI::Intercomm impl)
: impl_(impl)
{ }

shor::seed::Mpi_intercomm::Mpi_intercomm(
Mpi_intercomm&& that)
: impl_(that.impl_)
{
that.impl_ = boost::none;
}

shor::seed::Mpi_intercomm::~Mpi_intercomm()
{
if (impl_
&& (*impl_ != MPI::COMM_WORLD) && (*impl_ != MPI::COMM_SELF))
{
std::clog << "Disconnecting intercomm\n";
impl_->Disconnect();
impl_ = boost::none;
}
}

shor::seed::Mpi_intercomm&
shor::seed::Mpi_intercomm::operator = (
Mpi_intercomm&& that)
{
impl_ = that.impl_;
that.impl_ = boost::none;
}
$ cat include/seed/mpi_info.hh
#ifndef INCLUDE_SEED_MPI_INFO_HH
#define INCLUDE_SEED_MPI_INFO_HH

#include 

#include 

namespace shor {
namespace seed {
class Mpi_info {
MPI::Info impl_;

public:
typedef std::pair Key_value;
typedef std::initializer_list Init_list;

Mpi_info();
explicit Mpi_info(const Init_list& some_values);
Mpi_info(const Mpi_info& that) = delete;
Mpi_info(Mpi_info&&);
~Mpi_info();

Mpi_info& operator = (const Mpi_info& that) = delete;
Mpi_info& operator = (Mpi_info&& that);

operator MPI::Info() const;
};

} // namespace seed
} // namespace shor

#include "seed/mpi_info.cc.hh"

#endif
$ cat include/seed/mpi_info.cc.hh
#ifndef INCLUDE_SEED_MPI_INFO_CC_HH
#define INCLUDE_SEED_MPI_INFO_CC_HH

#include "seed/mpi_info.hh"

inline shor::seed::Mpi_info::operator MPI::Info() const
{
return impl_;
}

#endif
$ cat src/mpi_info.cc
#include "seed/mpi_info.hh"

#include 
#include 

shor::seed::Mpi_info::Mpi_info()
: impl_(MPI::Info::Create())
{ }

shor::seed::Mpi_info::Mpi_info(
const Init_list& some_values)
: impl_(MPI::Info::Create())
{
std::for_each(
std::begin(some_values), std::end(some_values),
[this] (const Key_value& one_value) {
std::clog
<< "MPI_Info_set(\"" << std::get<0>(one_value)
<< "\", \"" << std::get<1>(one_value)
<< "\")\n";
impl_.Set(std::get<0>(one_value), std::get<1>(one_value));
});
}

shor::seed::Mpi_info::Mpi_info(Mpi_info&& that)
: impl_(that.impl_)
{ }

shor::seed::Mpi_info::~Mpi_info()
{
impl_.Free();
}

shor::seed::Mpi_info&
shor::seed::Mpi_info::operator = (Mpi_info&& that)
{
impl_ = that.impl_;
return *this;
}
$ cat include/seed/scope_exit.hh
#ifndef INCLUDE_SEED_SCOPE_EXIT_HH
#define INCLUDE_SEED_SCOPE_EXIT_HH

#include 

namespace shor {
namespace seed {
class Scope_exit {
std::function<void()> lambda_;

public:
Scope_exit(std::function<void()> lambda) : lambda_(lambda) { }
Scope_exit(const Scope_exit& that) = delete;
~Scope_exit() { lambda_(); }

Scope_exit& operator = (const Scope_exit& that) = delete;
};

} // namespace seed
} // namespace shor

#endif
$

And here is the output of ompi_info

$ ompi_info
 Package: Open MPI dkick@Damien-Kicks-MacBook-Pro.local
  Distribution
Open MPI: 1.6.4
   Open MPI SVN revision: r28081
   Open MPI release date: Feb 19, 2013
Open RTE: 1.6.4
   Open RTE SVN revision: r28081
   Open RTE release date: Feb 19, 2013
OPAL: 1.6.4
   OPAL SVN revision: r28081
   OPAL release date: Feb 19, 2013
 MPI API: 2.1
Ident string: 1.6.4
  Prefix: ${PREFIX}
 Configured architecture: x86_64-apple-darwin12.3.0
  Configure host: Damien-Kicks-MacBook-Pro.local
   Configured by: dkick
   Configured on: Thu May  9 21:36:29 CDT 2013
  Configure host: Damien-Kicks-MacBook-Pro.local
Built by: dkick
Built on: Thu May  9 21:53:32 CDT 2013
  Built host: Damien-Kicks-MacBook-Pro.local
  C bindings: yes
C++ bindings: yes
  Fortran77 bindings: yes (single underscore)
  Fortran90 bindings: yes
 Fortran90 bindings size: small
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
  C compiler

Re: [OMPI users] Windows C++ Linker Error "unresolved symbol" for MPI::Datatype::Free

2013-02-21 Thread Damien Hocking
Found it.  The MPI::Datatype class isn't exported in a Win dll (no 
dllexport wrappers on the class), so on a shared-libs build it's not in 
the library symbols for anything else to see.  The Windows CMAKE 
"BUILD_SHARED_LIBS" option is therefore busted.  On a static lib build 
everything's in there, a dumpbin shows all the MPI::Datatype symbols.  
Those symbols are missing all the way back into 1.5 shared-lib builds as 
well.
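
For reference, the usual Windows export pattern looks roughly like this
(an illustration only, with made-up macro names, not the actual Open MPI
headers):

  #if defined(_WIN32) && defined(BUILDING_MY_DLL)   /* defined while compiling the DLL itself */
  #  define MY_DLL_API __declspec(dllexport)
  #elif defined(_WIN32)
  #  define MY_DLL_API __declspec(dllimport)        /* consumers of the DLL */
  #else
  #  define MY_DLL_API                              /* no-op on other platforms */
  #endif

  class MY_DLL_API Datatype {
  public:
      virtual void Free();
      // ...
  };

Without that decoration on the class (or on its out-of-line members such
as Free()), the symbols never reach the import library, which is exactly
what the dumpbin output shows.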


Damien

On 21/02/2013 12:19 PM, Jeff Squyres (jsquyres) wrote:

On Feb 21, 2013, at 10:59 AM, Damien Hocking <dam...@khubla.com> wrote:


Well this is interesting.  The linker can't find that because 
MPI::Datatype::Free isn't implemented on the Windows build (in 
datatype_inln.h).  It's declared in datatype.h though.  It's not there in the 
Linux version either, so I don't know where the Linux build is getting that 
symbol from, that link should fail too.  Is the C++ version of OpenMPI actually 
broken overall?

It's implemented in Datatype.cc.  I don't remember offhand why we didn't put 
it in the inline versions.  But it's definitely in the generated libmpi_cxx.so:

--
% nm -C libmpi_cxx.so | grep MPI::Datatype::Free
00016ed8 T MPI::Datatype::Free()
%
-





Re: [OMPI users] Windows C++ Linker Error "unresolved symbol" for MPI::Datatype::Free

2013-02-21 Thread damien
More or less.  There's just not enough critical mass to keep it going. 

Damien 

Sent from my android device.



-Original Message-
From: "Hartman, Todd W." <thart...@mst.edu>
To: 'Open MPI Users' <us...@open-mpi.org>
Sent: Thu, 21 Feb 2013 10:13 AM
Subject: Re: [OMPI users] Windows C++ Linker Error "unresolved symbol" for 
MPI::Datatype::Free



Re: [OMPI users] Windows C++ Linker Error "unresolved symbol" for MPI::Datatype::Free

2013-02-21 Thread Damien Hocking
Well this is interesting.  The linker can't find that because 
MPI::Datatype::Free isn't implemented on the Windows build (in 
datatype_inln.h).  It's declared in datatype.h though.  It's not there 
in the Linux version either, so I don't know where the Linux build is 
getting that symbol from, that link should fail too.  Is the C++ version 
of OpenMPI actually broken overall?


The Windows support is another issue.  I think it's semi-officially 
deprecated.


Damien

On 20/02/2013 11:20 PM, Hartman, Todd W. wrote:

I'm trying to build a simple Open MPI application for Windows. I've installed 
the binaries for OpenMPI-v1.6.2 (64-bit). I've also installed Visual Studio 
2010. The machine(s) are Windows 7 x64.


When I attempt to compile a simple program that uses MPI::Send(), I get a 
linker error saying that it cannot resolve MPI::Datatype::Free().

Here's a minimal example:

---
#include <mpi.h>
#include <iostream>

int main( int argc, char** argv ) {
    MPI::Init(argc,argv);

    // Meant to run with 2 processes.
    if (MPI::COMM_WORLD.Get_rank() == 0) {
        int data;
        MPI::COMM_WORLD.Recv(&data,1,MPI_INT,1,0);
        std::cout << "received " << data << std::endl;
    } else {
        int data = 0xdead;
        std::cout << "sending " << data << std::endl;
        MPI::COMM_WORLD.Send(&data,1,MPI_INT,0,0);
    }

    MPI::Finalize();
}
---

When I compile it:

mpic++ send_compile.cpp -o send_compile.exe -DOMPI_IMPORTS -DOPAL_IMPORTS 
-DORTE_IMPORTS


---
Microsoft (R) C/C++ Optimizing Compiler Version 16.00.40219.01 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.




cl : Command line warning D9035 : option 'o' has been deprecated and will be 
removed in a future release
send_compile.cpp
Microsoft (R) Incremental Linker Version 10.00.40219.01
Copyright (C) Microsoft Corporation.  All rights reserved.

/out:send_compile.exe
/out:send_compile.exe
"/LIBPATH:C:\Program Files (x86)\OpenMPI_v1.6.2-x64/lib"
libmpi_cxx.lib
libmpi.lib
libopen-pal.lib
libopen-rte.lib
advapi32.lib
Ws2_32.lib
shlwapi.lib
send_compile.obj
send_compile.obj : error LNK2001: unresolved external symbol "public: virtual void 
__cdecl MPI::Datatype::Free(void)" (?Free@Datatype@MPI@@UEAAXXZ)
send_compile.exe : fatal error LNK1120: 1 unresolved externals
---

This program compiles and runs without complaint on an Ubuntu machine around 
here. I don't know what the problem is. Open MPI's documentation didn't say 
anything about adding the CPP defines (OMPI_IMPORTS, OPAL_IMPORTS, 
ORTE_IMPORTS) whose absence were causing other linker errors similar to this. 
Google found some items in the mailing list archive. I cannot find any 
information about this particular problem, though.

I tried using dumpbin to get symbols that were in the .lib files installed by 
MPI, but didn't find any reference to that function name. I didn't find any 
answers looking in the MPI headers, either.

I have a similar program in C that compiles and runs fine on this Windows 
machine. I don't know what I'm doing wrong with C++. Can someone point me in 
the right direction? Is there some documentation regarding getting things to 
work on Windows? The release notes don't address this problem, and I can't find 
any other documentation related to what might be different from *nix to Windows 
(WRT to Open MPI).

Thanks.


todd.

P.S. This is copied from a StackOverflow question I posted 
(http://stackoverflow.com/questions/14988099/open-mpi-c-link-error-mpidatatypefree-on-windows).
 Forgive the cross-posting.
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] openmpi-1.9a1r27674 on Cygwin-1.7.17

2012-12-18 Thread Damien
It's a historical and emotional decision that also used to have a 
business driver.  I learned MPI with LAM on Linux (minute's silence...) 
and switched to OpenMPI when LAM went to join the big supercomputer in 
the sky.  Shortly after OpenMPI launched, we had some discussions about 
a Windows version, but it took until 1.5 before there was one because 
Shiqing did the heavy lifting.


At the time, MS was still heavily into HPC and I used OpenMPI on Windows 
(and Linux) because I like the team that develops it and wanted to 
support the product.  In 2011 when MS changed the direction of their HPC 
team I think OpenMPI on Windows lost the actual business driver for a 
critical mass of users, and that's what we're seeing now.  I'll still 
use OpenMPI on Linux, but I think on Windows it will be HPC Pack or MPICH.


Damien

On 18/12/2012 10:20 AM, JR Cary wrote:

So a question - why do *you* use (native) OpenMPI on Windows, when
you could just download HPC Pack?  Was it for any reason related
to implementation?

(I may have been one of those 2-3 candidate users, but I actually
just download HPC Pack.)

Back to the point of why OpenMPI might be desirable: I agree with
Jeff that it is not about on-node performance, nor use of the network
stack.  It would have to be better or more implementations above that
layer, such as OpenMPI having implementations for some advanced MPI
methods that are absent in HPC Pack (which I understand has forked
from MPICH).

But, yeah, it does seem like the coffin is pretty well shut, otherwise.

Thx...John

On 12/18/12 9:00 AM, Damien wrote:
Proper Windows support of OpenMPI is likely around 20 hours a week. 
That can be maintained by a small group, but it's probably too much 
for one person unless they're working in Windows HPC every day. When 
I posted a couple of weeks back, there were three people (maybe two?) 
who responded that they used OpenMPI on Windows regularly, other than 
me.


I hate to say it, but against MPICH and the Microsoft and Intel MPICH 
versions with probably a few thousand regular users, I think OpenMPI 
on native Windows is dead in the water.


Damien

On 18/12/2012 8:06 AM, JR Cary wrote:

On 12/18/12 6:29 AM, Jeff Squyres wrote:

This brings up the point again, however, of Windows support.

Open MPI recently lost its only Windows developer (he moved on to 
non-HPC things).  This has been discussed on the lists a few times 
(I honestly don't remember if it was this users list or the devel 
list), and there hasn't really been anyone who volunteered their 
time to support Open MPI on Windows.


Definitely this list.

We're seriously considering removing all Windows support for 1.7 
and beyond (keep in mind that the native Windows support on the SVN 
trunk and v1.7 branch is very, very out of date and needs some 
serious work to get working again -- the last working native 
Windows version is on the v1.6 branch).


Sounds appropriate.  My conversations with Microsoft
went no where.  Spoke last night with another good
friend there who worked in their HPC unit when that
existed.  Microsoft has their own implementation, and
they see no need for another.

So, IMO, OpenMPI would have to turn to a different
group for support.  E.g., Microsoft compatible HPC
application vendors.  And for that one would need a
compelling case of being better in, e.g., performance.

Can this case be made?

Perhaps there is another way?

John







On Dec 18, 2012, at 3:04 AM, Siegmar Gross wrote:


Hi,

I tried to install openmpi-1.9a1r27674 on Cygwin-1.7.17 and
got the following error (gcc-4.5.3).

...
  CC   path.lo
../../../openmpi-1.9a1r27668/opal/util/path.c: In function
  'opal_path_df':
../../../openmpi-1.9a1r27668/opal/util/path.c:578:18: error:
  'buf' undeclared (first use in this function)
../../../openmpi-1.9a1r27668/opal/util/path.c:578:18: note:
  each undeclared identifier is reported only once for each
  function it appears in
Makefile:1669: recipe for target `path.lo' failed
make[3]: *** [path.lo] Error 1
...


The reason is that "buf" is only declared for some operating
systems. I added "defined(__CYGWIN__)" in some places and
was able to compile "path.c".


hermes util 41 diff path.c path.c.orig
452c452
< #elif defined(__linux__) || defined(__CYGWIN__) || defined (__BSD) || (defined(__APPLE__) && defined(__MACH__))
---
> #elif defined(__linux__) || defined (__BSD) || (defined(__APPLE__) && defined(__MACH__))
480c480
< #elif defined(__linux__) || defined(__CYGWIN__) || defined (__BSD) || (defined(__APPLE__) && defined(__MACH__))
---
> #elif defined(__linux__) || defined (__BSD) || (defined(__APPLE__) && defined(__MACH__))
517c517
< #elif defined(__linux__) || defined(__CYGWIN__)
---
> #elif defined(__linux__)
549c549
< #elif defined(__linux__) || defined(__CYGWIN__) || defined (__BSD) || \
---
> #elif defined(__linux__) || defined (__BSD) ||\
562c

Re: [OMPI users] OpenMPI-1.6.3 MinGW64 buildup on Windows 7

2012-12-12 Thread Damien Hocking
I know 1.6.3 is broken for Win builds with VS2012 and Intel.  I'm not a 
MinGW expert by any means, I've hardly ever used it.  I'll try and look 
at this on the weekend.  If you can post on Friday to jog my memory that 
would help.  :-)


Damien

On 12/12/2012 3:31 AM, Ilias Miroslav wrote:

Ad: http://www.open-mpi.org/community/lists/users/2012/12/20865.php

Thanks for your efforts, Damien;

however, in the meantime I realized that this standalone Windows OpenMPI is built 
from ifort.exe + cl.exe, and I have only the MinGW suite at my disposal...

For that reason I tried to build-up OpenMPI on Windows 7 (both 64 and 32-bits), 
but failed, see:
http://www.open-mpi.org/community/lists/users/2012/12/20921.php

Best, Miro
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Windows support for OpenMPI

2012-12-07 Thread Damien Hocking
I can probably fix the 1.6.3 build.  I think it's just bumping CMake 
support and tweaks so that VS2012 works.  But yeah, it looks a bit grim 
going forward.


Damien

On 07/12/2012 8:28 AM, Jeff Squyres wrote:

Sorry for my late reply; I've been in the MPI Forum and Open MPI engineering 
meetings all week.  Some points:

1. Yes, it would be a shame to lose all the Windows support that Shiqing did.

2. Microsoft has told me that they're of the mindset "the more, the merrier" 
for their platform (i.e., they'd love to have more than one MPI on Windows, but probably 
can't help develop/support Open MPI on windows).  Makes perfect sense to me.

3. I see that we have 2 volunteers to keep the build support going for the v1.6 
series, and another volunteer to do continued development for v1.7 and beyond.  
But all of these would need good reasons to go forward (active Open MPI Windows 
users, financial support, etc.).  It doesn't look like there is much support.

4. I'm bummed to hear that Windows building is broken in 1.6.x.  $%#$%#@!!  If 
anyone wants to take a gander at fixing it, I'd love to see your patches, for 
nothing other than just maintaining Windows support for the remainder of the 
1.6.x series.  But per #3, it may not be worth it.

5. Based on this feedback, it seems like we should remove the Windows support 
from the OMPI SVN trunk and all future versions.  It can always be resurrected 
from SVN history if someone wants to pick up this effort again in the future.


On Dec 6, 2012, at 11:07 AM, Damien wrote:


So far, I count three people interested in OpenMPI on Windows.  That's not a 
case for ongoing support.

Damien

On 04/12/2012 11:32 AM, Durga Choudhury wrote:

All

Since I did not see any Microsoft/other 'official' folks pick up the ball, let 
me step up. I have been lurking in this list for quite a while and I am a 
generic scientific programmer (i.e. I use many frameworks such as OpenCL/OpenMP 
etc, not just MPI)
Although I am primarily a Linux user, I do own multiple versions of Visual 
Studio licenses and have a small cluster that dual boots to Windows/Linux (and 
more nodes can be added on demand). I cannot do any large scale testing on 
this, but I can build and run regression tests etc.

If the community needs the Windows support to continue, I can take up that 
responsibility, until a more capable person/group is found at least.

Thanks
Durga


On Mon, Dec 3, 2012 at 12:32 PM, Damien <dam...@khubla.com> wrote:
All,

I completely missed the message about Shiqing departing as the OpenMPI Windows 
maintainer.  I'll try and keep Windows builds going for 1.6 at least, I have 
2011 and 2013 Intel licenses and VS2008 and 2012, but not 2010.  I see that the 
1.6.3 code base already doesn't build on Windows in VS2012  :-(.

While I can try and keep builds going, I don't have access to a Windows cluster 
right now, and I'm flat out on two other projects. I can test on my 
workstation, but that will only go so far. Longer-term, there needs to be a 
decision made on whether Windows gets to be a first-class citizen in OpenMPI or 
not.  Jeff's already told me that 1.7 is lagging behind on Windows.  It would 
be a shame to have all the work Shiqing put in gradually decay because it can't 
be supported enough.  If there's any Microsoft/HPC/Azure folks observing this 
list, or any other vendors who run on Windows with OpenMPI, maybe we can see 
what can be done if you're interested.

Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




___
users mailing list

us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






Re: [OMPI users] Windows support for OpenMPI

2012-12-06 Thread Damien
So far, I count three people interested in OpenMPI on Windows. That's 
not a case for ongoing support.


Damien

On 04/12/2012 11:32 AM, Durga Choudhury wrote:

All

Since I did not see any Microsoft/other 'official' folks pick up the 
ball, let me step up. I have been lurking in this list for quite a 
while and I am a generic scientific programmer (i.e. I use many 
frameworks such as OpenCL/OpenMP etc, not just MPI)
Although I am primarily a Linux user, I do own multiple versions of 
Visual Studio licenses and have a small cluster that dual boots to 
Windows/Linux (and more nodes can be added on demand). I cannot do any 
large scale testing on this, but I can build and run regression tests etc.


If the community needs the Windows support to continue, I can take up 
that responsibility, until a more capable person/group is found at least.


Thanks
Durga


On Mon, Dec 3, 2012 at 12:32 PM, Damien <dam...@khubla.com 
<mailto:dam...@khubla.com>> wrote:


All,

I completely missed the message about Shiqing departing as the
OpenMPI Windows maintainer.  I'll try and keep Windows builds
going for 1.6 at least, I have 2011 and 2013 Intel licenses and
VS2008 and 2012, but not 2010.  I see that the 1.6.3 code base
already doesn't build on Windows in VS2012  :-(.

While I can try and keep builds going, I don't have access to a
Windows cluster right now, and I'm flat out on two other projects.
I can test on my workstation, but that will only go so far.
Longer-term, there needs to be a decision made on whether Windows
gets to be a first-class citizen in OpenMPI or not.  Jeff's
already told me that 1.7 is lagging behind on Windows.  It would
be a shame to have all the work Shiqing put in gradually decay
because it can't be supported enough.  If there's any
Microsoft/HPC/Azure folks observing this list, or any other
vendors who run on Windows with OpenMPI, maybe we can see what can
be done if you're interested.

Damien
___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users





___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Windows support for OpenMPI

2012-12-03 Thread Damien
This is a good start.  Stepping up a level and without wanting to start 
a bunfight with other MPI implementations, what are the advantages of 
OpenMPI over those other various MPI implementations, irrespective of 
platform?  There must be some advantages, or OpenMPI wouldn't exist.  Do 
those advantages apply on Windows and would they justify ongoing Windows 
support?


Damien


On 03/12/2012 11:59 AM, John R. Cary wrote:

Dear OpenMPI community,

This email is about whether a commercial version of OpenMPI for Windows
could be successful.  I hesitated before sending this, but upon asking
some others (notably Jeff) on this list, it seemed appropriate.

We at Tech-X have been asking whether a commercial/freemium support 
model for a Windows

version of OpenMPI would work.  We are currently working on this for some
other products, notably PETSc, which is discussed at
http://www.txcorp.com/home/cosml.

We see some downsides - in particular, with Microsoft's HPC Pack, 
Windows users

have free access to an MPI solution.  This has to be balanced by some
particular advantages of OpenMPI such that there would be a group of
users who would pay for it for anyone to make this work.

We would be very interested in hearing from folks on this list who either
(1) help define the competitive advantage of having OpenMPI on Windows or
(2) would be interested in a commercial solution, were it available.

Naturally, any solution should benefit the OpenMPI community as well to
be a success.

I would be glad to hear from folks on list or off.

ThxJohn Cary







On 12/3/2012 10:32 AM, Damien wrote:

All,

I completely missed the message about Shiqing departing as the 
OpenMPI Windows maintainer.  I'll try and keep Windows builds going 
for 1.6 at least, I have 2011 and 2013 Intel licenses and VS2008 and 
2012, but not 2010.  I see that the 1.6.3 code base already doesn't 
build on Windows in VS2012  :-(.


While I can try and keep builds going, I don't have access to a 
Windows cluster right now, and I'm flat out on two other projects. I 
can test on my workstation, but that will only go so far. 
Longer-term, there needs to be a decision made on whether Windows 
gets to be a first-class citizen in OpenMPI or not. Jeff's already 
told me that 1.7 is lagging behind on Windows. It would be a shame to 
have all the work Shiqing put in gradually decay because it can't be 
supported enough.  If there's any Microsoft/HPC/Azure folks observing 
this list, or any other vendors who run on Windows with OpenMPI, 
maybe we can see what can be done if you're interested.


Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




[OMPI users] Windows support for OpenMPI

2012-12-03 Thread Damien

All,

I completely missed the message about Shiqing departing as the OpenMPI 
Windows maintainer.  I'll try and keep Windows builds going for 1.6 at 
least, I have 2011 and 2013 Intel licenses and VS2008 and 2012, but not 
2010.  I see that the 1.6.3 code base already doesn't build on Windows 
in VS2012  :-(.


While I can try and keep builds going, I don't have access to a Windows 
cluster right now, and I'm flat out on two other projects. I can test on 
my workstation, but that will only go so far. Longer-term, there needs 
to be a decision made on whether Windows gets to be a first-class 
citizen in OpenMPI or not.  Jeff's already told me that 1.7 is lagging 
behind on Windows.  It would be a shame to have all the work Shiqing put 
in gradually decay because it can't be supported enough.  If there's any 
Microsoft/HPC/Azure folks observing this list, or any other vendors who 
run on Windows with OpenMPI, maybe we can see what can be done if you're 
interested.


Damien


Re: [OMPI users] 0xc000007b error exit on 64-bit Windows 7

2012-12-03 Thread Damien
I just tried it on a clean VM, the 64-bit OpenMPI installer does install 
to Program Files (x86).  That's not the end of the world, but you have 
to watch your paths.


Miroslav, when you ran the installer did you say yes to adding OpenMPI 
to the system path?  If you installed both 32 and 64-bit binaries, and 
added both to the system path, it will typically just append the paths.  
So if you installed 32-bit first, then 64-bit, whenever you run 
something it will load the 32-bit OpenMPI runtime first, even running 
64-bit, which will cause that bad image error. I think that's why your 
32-bit run works and 64-bit doesn't.


I suggest uninstalling both 32 and 64-bit OpenMPIs, make sure they're 
removed from the path, then reinstall them *without* putting them into 
the system path, and try again from there.  You'll have to set your 
paths manually, but you'll be running with the right binaries each time.
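
For example, from a plain command prompt (the directory below is the
default install location mentioned in this thread; your_app.exe is a
placeholder):

  rem put the 64-bit build first on the path for a 64-bit run
  set PATH=C:\Program Files (x86)\OpenMPI_v1.6.1-x64\bin;%PATH%
  mpirun -np 2 your_app.exe

That way the loader picks up the 64-bit OpenMPI runtime rather than
whichever install happens to come first in the system path.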


Damien

On 03/12/2012 9:55 AM, Iliev, Hristo wrote:

Hi,

0xC000007B is STATUS_INVALID_IMAGE_FORMAT. It mostly means that some of the
dynamic link libraries (DLLs) that the executable is linked against are of
different "bitness", e.g. 32-bit. It could be a packaging error in Open MPI,
or it could be a messed-up installation. You could use the Dependency Walker
tool to examine the list of DLLs that the executable depends upon and see
which one is the culprit. Dependency Walker is available here:

http://www.dependencywalker.com/
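
From a Visual Studio command prompt, something like

  dumpbin /dependents your_app.exe

(where your_app.exe is a placeholder) lists the DLLs the executable will
try to load, and dumpbin /headers on each of those DLLs shows whether it
is x86 or x64.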

Which brings me to the question: why is the win64 version of Open MPI
installed in "Program Files (x86)", where 32-bit things go?!

Hope that helps.

Kind regards,
Hristo

--
Hristo Iliev, Ph.D. -- High Performance Computing
RWTH Aachen University, Center for Computing and Communication
Rechen- und Kommunikationszentrum der RWTH Aachen
Seffenter Weg 23,  D 52074  Aachen (Germany)



-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On Behalf Of Ilias Miroslav
Sent: Monday, December 03, 2012 3:40 PM
To: us...@open-mpi.org
Subject: [OMPI users] 0xc000007b error exit on 64-bit Windows 7

Dear experts,

I just installed http://www.open-
mpi.org/software/ompi/v1.6/downloads/OpenMPI_v1.6.1-1_win64.exe on
our Intel i7 64-bit Windows 7 system.


When I try to run  some executable, I am getting error "Application Error

The

application was unable to start correctly (0xc000007b)..."

Any help please ? The "C:\Program Files (x86)\OpenMPI_v1.6.1-x64\bin"
string is in my %Path% variable.

Yours, Miro

PS: On 32-bit Windows 7 the 32-bit OpenMPI application works fine.
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] 0xc000007b error exit on 64-bit Windows 7

2012-12-03 Thread damien
Ignore what I posted,  Hristo is right.   On my phone screen the C looked like 
a 0.  Sorry. 

Damien

Sent from my android device.



-Original Message-
From: "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
To: Open MPI Users <us...@open-mpi.org>
Cc: "us...@open-mpi.org" <us...@open-mpi.org>
Sent: Mon, 03 Dec 2012 9:01 AM
Subject: Re: [OMPI users] 0xc000007b error exit on 64-bit Windows 7

I'm afraid we've lost the open MPI community windows developer. So I don't know 
if you'll get a good answer to this question. 

Sorry!  :(

Sent from my phone. No type good. 

On Dec 3, 2012, at 6:40 AM, "Ilias Miroslav" <miroslav.il...@umb.sk> wrote:

> Dear experts,
> 
> I just installed 
> http://www.open-mpi.org/software/ompi/v1.6/downloads/OpenMPI_v1.6.1-1_win64.exe
>  on our Intel i7 64-bit Windows 7 system.
> 
> 
> When I try to run  some executable, I am getting error "Application Error The 
> application was unable to start correctly (0xc000007b)..."
> 
> Any help please ? The "C:\Program Files (x86)\OpenMPI_v1.6.1-x64\bin" string 
> is in my %Path% variable.
> 
> Yours, Miro
> 
> PS: On 32-bit Windows 7 the 32-bit OpenMPI application works fine.
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] 0xc000007b error exit on 64-bit Windows 7

2012-12-03 Thread damien
That's a null pointer access.   It's not necessarily OpenMPI causing that.   
You'll need to supply more info about what you're running. 

Damien 

Sent from my android device.



-Original Message-
From: "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
To: Open MPI Users <us...@open-mpi.org>
Cc: "us...@open-mpi.org" <us...@open-mpi.org>
Sent: Mon, 03 Dec 2012 9:01 AM
Subject: Re: [OMPI users] 0xc000007b error exit on 64-bit Windows 7

I'm afraid we've lost the open MPI community windows developer. So I don't know 
if you'll get a good answer to this question. 

Sorry!  :(

Sent from my phone. No type good. 

On Dec 3, 2012, at 6:40 AM, "Ilias Miroslav" <miroslav.il...@umb.sk> wrote:

> Dear experts,
> 
> I just installed 
> http://www.open-mpi.org/software/ompi/v1.6/downloads/OpenMPI_v1.6.1-1_win64.exe
>  on our Intel i7 64-bit Windows 7 system.
> 
> 
> When I try to run  some executable, I am getting error "Application Error The 
> application was unable to start correctly (0xc000007b)..."
> 
> Any help please ? The "C:\Program Files (x86)\OpenMPI_v1.6.1-x64\bin" string 
> is in my %Path% variable.
> 
> Yours, Miro
> 
> PS: On 32-bit Windows 7 the 32-bit OpenMPI application works fine.
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-30 Thread Damien Hocking

I've never seen that, but someone else might have.

Damien

On 30/10/2012 1:43 AM, Mathieu Gontier wrote:

Hi Damien,

The only message I have is:
[vs2010:09300] [[56007,0],0]-[[56007,1],0] mca_oob_tcp_msg_recv: readv 
failed: Unknown error (108)
[vs2010:09300] 2 more processes have sent help message 
help-odls-default.txt / odls-default:could-not-kill


Does it mean something for you?





Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Damien
Is there a series of error messages or anything at all that you can post 
here?


Damien

On 29/10/2012 2:30 PM, Mathieu Gontier wrote:

Hi guys.

Finally, I compiled with /O: the option is deprecated and, like I did 
previously, I used /Od instead... unsuccessfully.


I also compiled my code from a script in order to call mpicc.exe / 
mpiCC.exe / mpif77.exe instead of directly calling cl.exe and 
ifort.exe. Only the linkage is done without mpicc.exe because I did 
not find how to call the linker from mpicc.exe (if it can change 
something, just let me know). So, the purpose is to compile with the 
default OpenMPI options (if there are any). But my solver still crashes.


So, if anybody has an idea...

Thanks for your help.

On Mon, Oct 29, 2012 at 7:33 PM, Mathieu Gontier 
<mathieu.gont...@gmail.com <mailto:mathieu.gont...@gmail.com>> wrote:


It crashes in the Fortran routine calling an MPI function. When I
run the debugger, the crash seems to be in libmpi_f77.lib, but I
cannot go further since the lib is not built in debug mode.

Attached to this email are the files of my small case. But with
less aggressive options, it works.

I did not know the lowest optimization level is /O; I am going to try.


On Mon, Oct 29, 2012 at 5:08 PM, Damien <dam...@khubla.com
<mailto:dam...@khubla.com>> wrote:

Mathieu,

Where is the crash?  Without that info, I'd suggest turning
off all the optimisations and just compile it without any
flags other than what you need to compile it cleanly (so no /O
flags) and see if it crashes.

Damien


On 26/10/2012 10:27 AM, Mathieu Gontier wrote:

Dear all,

I would like to use OpenMPI on Windows for a CFD solver instead of
MPICH2. My solver is developed in Fortran77 and piloted by a
C++ interface; both levels call MPI functions.

So, I installed OpenMPI-1.6.2-x64 on my system and compiled
my code successfully. But at runtime it crashed.
I reproduced the problem in a small C application calling a
Fortran function using MPI_Allreduce; when I removed
some aggressive optimization options from the Fortran, it worked:
 * Optimization: Disable (/Od)
 * Inline Function Expansion: Any Suitable (/Ob2)
 * Favor Size or Speed: Favor Fast Code (/Ot)

So, I removed the same options from the Fortran parts of my
solver, but it still crashes. I tried some others, but it
still continues crashing. Does anybody have an idea? Should I
(de)activate some compilation options? Are there some
properties to build and link against libmpi_f77.lib?

Thanks for your help.
Mathieu.

-- 
Mathieu Gontier

- MSN: mathieu.gont...@gmail.com
<mailto:mathieu.gont...@gmail.com>
- Skype: mathieu_gontier


___
users mailing list
us...@open-mpi.org  <mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users




-- 
Mathieu Gontier

- MSN: mathieu.gont...@gmail.com <mailto:mathieu.gont...@gmail.com>
- Skype: mathieu_gontier




--
Mathieu Gontier
- MSN: mathieu.gont...@gmail.com <mailto:mathieu.gont...@gmail.com>
- Skype: mathieu_gontier


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-29 Thread Damien

Mathieu,

Where is the crash?  Without that info, I'd suggest turning off all the 
optimisations and just compile it without any flags other than what you 
need to compile it cleanly (so no /O flags) and see if it crashes.


Damien

On 26/10/2012 10:27 AM, Mathieu Gontier wrote:

Dear all,

I am willing to use OpenMPI on Windows for a CFD solver instead of MPICH2. 
My solver is developed in Fortran77 and piloted by a C++ interface; 
both levels call MPI functions.


So, I installed OpenMPI-1.6.2-x64 on my system and compiled my code 
successfully. But at runtime it crashed.
I reproduced the problem in a small C application calling a Fortran 
function that uses MPI_Allreduce; when I removed 
some aggressive optimization options from the Fortran build, it worked:

  * Optimization: Disable (/Od)
  * Inline Function Expansion: Any Suitable (/Ob2)
  * Favor Size or Speed: Favor Fast Code (/Ot)

So, I removed the same options from the Fortran parts of my solver, 
but it still crashes. I tried some others, but it still keeps 
crashing. Does anybody have an idea? Should I (de)activate some 
compilation options? Are there some properties to set in order to 
build and link against libmpi_f77.lib?


Thanks for your help.
Mathieu.
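
For context, the reduction being exercised here is a plain MPI_Allreduce.
A minimal all-C sketch of such a call (an illustration only, not Mathieu's
actual test case) looks like this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    double local, global;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = (double)(rank + 1);
    /* every rank contributes 'local'; every rank receives the sum in 'global' */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: global sum = %f\n", rank, global);
    MPI_Finalize();
    return 0;
}

In Mathieu's case the equivalent call sits in a Fortran77 routine linked
against libmpi_f77.lib, which is the part the optimization flags appear to
upset.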

--
Mathieu Gontier
- MSN: mathieu.gont...@gmail.com <mailto:mathieu.gont...@gmail.com>
- Skype: mathieu_gontier


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] openmpi shared memory feature

2012-10-27 Thread Damien

Mahmood,

To build on what Jeff said, here's a short summary of how diskless 
clusters work:


A diskless node gets its operating system through a physical network 
(say gig-E), including the HPC applications and the MPI runtimes, from a 
master server.  That master server isn't the MPI head node, it's a 
separate OS/Network boot server.  That's completely separate from how 
the MPI applications run.  The MPI-based HPC applications on the nodes 
communicate through a dedicated, faster physical network (say 
Infiniband).  There's two separate networks, one for starting and 
running nodes and one for doing HPC work.  On the same node, MPI 
processes use shared-memory to communicate, regardless of whether it's 
diskless or not, it's just part of MPI.  Between nodes, MPI processes 
use that faster, dedicated network, and that's regardless of whether 
it's diskless or not, it's just part of MPI. The networks are separate 
because it's more efficient.


Damien

On 27/10/2012 11:00 AM, Jeff Squyres wrote:

On Oct 27, 2012, at 12:47 PM, Mahmood Naderan wrote:


Because communicating through shared memory when sending messages between 
processes on the same server is far faster than going through a network stack.
  
I see... But that is not good for diskless clusters, am I right? Assume processes are on a node (which has no disk). In this case, their communication goes through the network (from computing node to server), then I/O, and then the network again (from server to computing node).

I don't quite understand what you're saying -- what exactly is your distinction between 
"server" and "computing node"?

For the purposes of my reply, I use the word "server" to mean "one computational server, 
possibly containing multiple processors, a bunch of RAM, and possibly one or more disks."  For example, 
a 1U "pizza box" style rack enclosure containing the guts of a typical x86-based system.

You seem to be relating two orthogonal things: whether a server has a disk and 
how MPI messages flow from one process to another.

When using shared memory, the message starts in one process, gets copied to 
shared memory, and then gets copied to the other process.  If you use the knem 
Linux kernel module, we can avoid shared memory in some cases and copy the 
message directly from one process's memory to the other.

It's irrelevant as to whether there is a disk or not.
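
To make the copy path concrete, here is a minimal sketch (an illustration,
not code from this thread) of two ranks exchanging a message.  Whether the
copy goes through shared memory, knem, or a network BTL is decided by Open
MPI at run time and is invisible to the program:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* on the same node this send is typically serviced by the sm BTL */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}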





Re: [OMPI users] Linking failure on Windows

2012-10-02 Thread Damien

No worries.  It's good to see that it compiles.

Damien

On 02/10/2012 2:25 PM, Gib Bogle wrote:

Hi Shiqing,

Your post made me realize my mistake!  I was thinking only of the 
preprocessor definitions for compiling cvAdvDiff_non_p.c, forgetting 
about the previously built library sundials_nvecparallel.lib, which is 
of course where nvector_parallel.c was compiled.  When I rebuild that 
library with OMPI_IMPORTS my problem disappears.


Thanks Shiqing, and sorry Damien!

Gib




Re: [OMPI users] Linking failure on Windows

2012-10-02 Thread Damien

OK, I give.  I think this is a Shiqing question.

Damien

On 02/10/2012 12:25 AM, Gib Bogle wrote:

They don't make any difference.  I had them in, but dropped them when I found 
that the mpicc build didn't need them.

Gib

From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of 
Damien Hocking [dam...@khubla.com]
Sent: Tuesday, 2 October 2012 7:21 p.m.
To: Open MPI Users
Subject: Re: [OMPI users] Linking failure on Windows

There's two imports missing from there, OPAL_IMPORTS and ORTE_IMPORTS.
That might be part of it.

Damien

On 01/10/2012 10:20 PM, Gib Bogle wrote:

I guess it's conceivable that one of these Sundials include files is
doing something:

#include <cvode/cvode.h>                /* prototypes for CVODE fcts. */
#include <nvector/nvector_parallel.h>   /* definition of N_Vector and macros */
#include <sundials/sundials_types.h>    /* definition of realtype */
#include <sundials/sundials_math.h>     /* definition of EXP */

I am a complete beginner with Sundials, so I have no idea how it might
interfere with the preprocessor definitions.

Here is the compile command line from VS:

/O2 /Ob2 /I "E:\sundials-2.5.0\include" /I "E:\Sundials-Win32\include"
/I "c:\Program Files (x86)\OpenMPI_v1.6.2-win32\include" /D "WIN32" /D
"_WINDOWS" /D "NDEBUG" /D "OMPI_IMPORTS" /D "_CRT_SECURE_NO_WARNINGS"
/D "CMAKE_INTDIR=\"Release\"" /D "_MBCS" /FD /MD
/Fo"cvAdvDiff_non_p.dir\Release\\"
/Fd"E:\Sundials-Win32\examples\cvode\parallel\Release/cvAdvDiff_non_p.pdb"
/W3 /c /TC /errorReport:prompt

Gib


On 2/10/2012 5:06 p.m., Damien Hocking wrote:

So mpicc builds it completely?  The only thing I can think of is look
closely at both the compile and link command lines and see what's
different.  It might be going sideways at the compile from something
in an include with a preprocessor def.

Damien

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Linking failure on Windows

2012-10-02 Thread Damien Hocking
There's two imports missing from there, OPAL_IMPORTS and ORTE_IMPORTS.  
That might be part of it.


Damien

On 01/10/2012 10:20 PM, Gib Bogle wrote:
I guess it's conceivable that one of these Sundials include files is 
doing something:


#include <cvode/cvode.h>                /* prototypes for CVODE fcts. */
#include <nvector/nvector_parallel.h>   /* definition of N_Vector and macros */
#include <sundials/sundials_types.h>    /* definition of realtype */
#include <sundials/sundials_math.h>     /* definition of EXP */

I am a complete beginner with Sundials, so I have no idea how it might 
interfere with the preprocessor definitions.


Here is the compile command line from VS:

/O2 /Ob2 /I "E:\sundials-2.5.0\include" /I "E:\Sundials-Win32\include" 
/I "c:\Program Files (x86)\OpenMPI_v1.6.2-win32\include" /D "WIN32" /D 
"_WINDOWS" /D "NDEBUG" /D "OMPI_IMPORTS" /D "_CRT_SECURE_NO_WARNINGS" 
/D "CMAKE_INTDIR=\"Release\"" /D "_MBCS" /FD /MD 
/Fo"cvAdvDiff_non_p.dir\Release\\" 
/Fd"E:\Sundials-Win32\examples\cvode\parallel\Release/cvAdvDiff_non_p.pdb" 
/W3 /c /TC /errorReport:prompt


Gib


On 2/10/2012 5:06 p.m., Damien Hocking wrote:
So mpicc builds it completely?  The only thing I can think of is look 
closely at both the compile and link command lines and see what's 
different.  It might be going sideways at the compile from something 
in an include with a preprocessor def.


Damien



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Linking failure on Windows

2012-10-02 Thread Damien Hocking
So mpicc builds it completely?  The only thing I can think of is look 
closely at both the compile and link command lines and see what's 
different.  It might be going sideways at the compile from something in 
an include with a preprocessor def.


Damien

On 01/10/2012 9:57 PM, Gib Bogle wrote:

Hi Damien,

I've checked and double-checked, and I can't see anything not 32-bit.  
In fact my VS2005 only knows about 32-bit.


I just tested copying the source code with appropriate include 
directories to another directory and built the executable successfully 
with mpicc.  But I can't see that there is anything in the mpicc link 
(with --showme:link) that is not in VS.  The command line in VS has a 
lot more stuff in it, to be sure.


Gib

On 2/10/2012 3:55 p.m., Damien Hocking wrote:

Gib,

If you have OMPI_IMPORTS set that usually removes those symbol 
errors.  Are you absolutely sure you have everything set to 32-bit in 
Visual Studio?


Damien

On 01/10/2012 7:55 PM, Gib Bogle wrote:
I am building the Sundials examples, with MS Visual Studio 2005 
version 8 (i.e. 32-bit) on Windows 7 64-bit.  The OpenMPI version is 
OpenMPI_1.6.2-win32.
All the parallel examples fail with the same linker errors.  I have 
added the preprocessor definitions OMPI_IMPORTS, OPAL_IMPORTS and 
ORTE_IMPORTS.  The libraries that are being linked are: libmpi.lib, 
libmpi_cxx.lib, libopen-pal.lib, libopen-rte.lib.


Here are the errors:

1>Linking...
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_op_sum referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_op_max referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_double referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_op_min referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_long referenced in function 
_N_VNewEmpty_Parallel
1>E:\Sundials-Win32\examples\cvode\parallel\Release\cvDiurnal_kry_bbd_p.exe 
: fatal error LNK1120: 5 unresolved externals


What am I missing?

Thanks
Gib
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users







Re: [OMPI users] Linking failure on Windows

2012-10-01 Thread Damien Hocking

Gib,

If you have OMPI_IMPORTS set that usually removes those symbol errors.  
Are you absolutely sure you have everything set to 32-bit in Visual Studio?


Damien

On 01/10/2012 7:55 PM, Gib Bogle wrote:
I am building the Sundials examples, with MS Visual Studio 2005 
version 8 (i.e. 32-bit) on Windows 7 64-bit.  The OpenMPI version is 
OpenMPI_1.6.2-win32.
All the parallel examples fail with the same linker errors.  I have 
added the preprocessor definitions OMPI_IMPORTS, OPAL_IMPORTS and 
ORTE_IMPORTS.  The libraries that are being linked are: libmpi.lib, 
libmpi_cxx.lib, libopen-pal.lib, libopen-rte.lib.


Here are the errors:

1>Linking...
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_op_sum referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_op_max referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_double referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_op_min referenced in function 
_VAllReduce_Parallel
1>sundials_nvecparallel.lib(nvector_parallel.obj) : error LNK2019: 
unresolved external symbol _ompi_mpi_long referenced in function 
_N_VNewEmpty_Parallel
1>E:\Sundials-Win32\examples\cvode\parallel\Release\cvDiurnal_kry_bbd_p.exe 
: fatal error LNK1120: 5 unresolved externals


What am I missing?

Thanks
Gib
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] compilation on windows 7 64-bit

2012-07-26 Thread Damien

Do you have

OMPI_IMPORTS, OPAL_IMPORTS and ORTE_IMPORTS

defined in your preprocessor flags?  You need those.
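
For illustration, that usually means every translation unit that includes
mpi.h is compiled with something along these lines (the path and source
file name are examples, not taken from your project):

cl /c /MD /I "C:\OpenMPI\openmpi-1.6\installed\include" ^
   /D "OMPI_IMPORTS" /D "OPAL_IMPORTS" /D "ORTE_IMPORTS" myfile.cpp

In Visual Studio the same three macros go under Configuration Properties ->
C/C++ -> Preprocessor -> Preprocessor Definitions.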

Damien


On 26/07/2012 3:56 PM, Sayre, Alan N wrote:


I'm trying to replace the usage of Platform MPI with Open MPI. I am 
trying to compile on Windows 7 64-bit using Visual Studio 2010. I have 
added the paths to the Open MPI include and library directories and 
added libmpid.lib and libmpi_cxxd.lib to the linker input. The 
application compiles (it finds the MPI headers). When it tries to link I 
get the following output:


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_remote_size referenced in function "struct MpComm_s * 
__cdecl MpCommSpawn(char const *,char const * * const,int,enum 
Bool_e)" (?MpCommSpawn@@YAPAUMpComm_s@@PBDQAPBDHW4Bool_e@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_spawn referenced in function "struct MpComm_s * __cdecl 
MpCommSpawn(char const *,char const * * const,int,enum Bool_e)" 
(?MpCommSpawn@@YAPAUMpComm_s@@PBDQAPBDHW4Bool_e@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_info_null


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_comm_self


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_comm_null


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_op_sum


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_op_min


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_op_max


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Intercomm_create referenced in function "int __cdecl 
MpCommCreateCommunicators(struct MpComm_s * *,struct MpComm_s * *)" 
(?MpCommCreateCommunicators@@YAHPAPAUMpComm_s@@0@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_split referenced in function "int __cdecl 
MpCommCreateCommunicators(struct MpComm_s * *,struct MpComm_s * *)" 
(?MpCommCreateCommunicators@@YAHPAPAUMpComm_s@@0@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_rank referenced in function "int __cdecl 
MpCommCreateCommunicators(struct MpComm_s * *,struct MpComm_s * *)" 
(?MpCommCreateCommunicators@@YAHPAPAUMpComm_s@@0@Z)


como_mplib.lib(mpenv.obj) : error LNK2001: unresolved external symbol 
_MPI_Comm_rank


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_size referenced in function "int __cdecl 
MpCommCreateCommunicators(struct MpComm_s * *,struct MpComm_s * *)" 
(?MpCommCreateCommunicators@@YAHPAPAUMpComm_s@@0@Z)


como_mplib.lib(mpenv.obj) : error LNK2001: unresolved external symbol 
_MPI_Comm_size


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world


como_mplib.lib(mpenv.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_get_parent referenced in function "struct MpComm_s * __cdecl 
MpCommNewChild(void)" (?MpCommNewChild@@YAPAUMpComm_s@@XZ)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Comm_free referenced in function "void __cdecl MpCommFree(struct 
MpComm_s *)" (?MpCommFree@@YAXPAUMpComm_s@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Send referenced in function "int __cdecl MpCommSend(struct 
MpComm_s *,void const *,int,enum Dtype_e,int,int)" 
(?MpCommSend@@YAHPAUMpComm_s@@PBXHW4Dtype_e@@HH@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Isend referenced in function "int __cdecl MpCommISend(struct 
MpComm_s *,void const *,int,enum Dtype_e,int,int,struct MpRequest_s 
*)" (?MpCommISend@@YAHPAUMpComm_s@@PBXHW4Dtype_e@@HHPAUMpRequest_s@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Get_count referenced in function "int __cdecl MpCommRecv(struct 
MpComm_s *,void *,int,enum Dtype_e,int,int,struct MpStatus_s *)" 
(?MpCommRecv@@YAHPAUMpComm_s@@PAXHW4Dtype_e@@HHPAUMpStatus_s@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Recv referenced in function "int __cdecl MpCommRecv(struct 
MpComm_s *,void *,int,enum Dtype_e,int,int,struct MpStatus_s *)" 
(?MpCommRecv@@YAHPAUMpComm_s@@PAXHW4Dtype_e@@HHPAUMpStatus_s@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2019: unresolved external symbol 
_MPI_Irecv referenced in function "int __cdecl MpCommIRecv(struct 
MpComm_s *,void *,int,enum Dtype_e,int,int,struct MpRequest_s *)" 
(?MpCommIRecv@@YAHPAUMpComm_s@@PAXHW4Dtype_e@@HHPAUMpRequest_s@@@Z)


como_mplib.lib(mpcomm.obj) : error LNK2001: unresolved external symbol 
_ompi_mpi_char


como_mplib.lib(mpcomm.obj) : error LNK2019: u

Re: [OMPI users] Fortran90 Bindings

2012-07-25 Thread Damien
Hmmm.  My 64-bit builds create mpif77.exe, libmpi_f77.lib and 
libmpi_f77.dll, and they work.


Damien

On 25/07/2012 10:11 AM, Kumar, Sudhir wrote:


Hi

I am new to Open MPI, so please pardon my ignorance. I just came across 
an article which refers to F77 bindings being available for 32-bit Windows 
only; it was as of June. Has something changed since then?


http://www.open-mpi.org/community/lists/users/2012/06/19525.php

Thanks so much.

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Wednesday, July 25, 2012 10:52 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Fortran90 Bindings

Sudhir,

F77 works on both.

Damien

On 25/07/2012 8:55 AM, Kumar, Sudhir wrote:

Hi

I have one more related question. Are the F77 bindings available
for both 64-bit and 32-bit Windows environments, or just for the 32-bit
environment?

Thanks

*From:*users-boun...@open-mpi.org
<mailto:users-boun...@open-mpi.org>
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien
*Sent:* Wednesday, July 18, 2012 10:11 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Fortran90 Bindings

Hmmm.  6 months ago there weren't F90 bindings in the Windows
version (the F90 bindings are large and tricky).  It's an option
you can select when you compile it yourself, but looking at the
one I just did a month ago, there's still no mpif90.exe built, so
I'd say that's still not supported on Windows. :-(

Damien

On 18/07/2012 9:00 AM, Kumar, Sudhir wrote:

Hi, I had meant to ask if Fortran90 bindings are available for Windows

*Sudhir Kumar*

*From:*users-boun...@open-mpi.org
<mailto:users-boun...@open-mpi.org>
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien
*Sent:* Wednesday, July 18, 2012 9:56 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Fortran90 Bindings

Yep.

On 18/07/2012 8:53 AM, Kumar, Sudhir wrote:

Hi

Just wondering if Fortran90 bindings are available for
OpenMPI 1.6

Thanks

*Sudhir Kumar*






___

users mailing list

us...@open-mpi.org  <mailto:us...@open-mpi.org>

http://www.open-mpi.org/mailman/listinfo.cgi/users





___

users mailing list

us...@open-mpi.org  <mailto:us...@open-mpi.org>

http://www.open-mpi.org/mailman/listinfo.cgi/users




___

users mailing list

us...@open-mpi.org  <mailto:us...@open-mpi.org>

http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Fortran90 Bindings

2012-07-25 Thread Damien

Sudhir,

F77 works on both.

Damien


On 25/07/2012 8:55 AM, Kumar, Sudhir wrote:


Hi

I have one more related question. Are the F77 bindings available for 
both 64-bit and 32-bit Windows environments, or just for the 32-bit 
environment?


Thanks

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Wednesday, July 18, 2012 10:11 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Fortran90 Bindings

Hmmm.  6 months ago there weren't F90 bindings in the Windows version 
(the F90 bindings are large and tricky).  It's an option you can 
select when you compile it yourself, but looking at the one I just did 
a month ago, there's still no mpif90.exe built, so I'd say that's 
still not supported on Windows.  :-(


Damien

On 18/07/2012 9:00 AM, Kumar, Sudhir wrote:

Hi, I had meant to ask if Fortran90 bindings are available for Windows

*Sudhir Kumar*

*From:*users-boun...@open-mpi.org
<mailto:users-boun...@open-mpi.org>
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien
*Sent:* Wednesday, July 18, 2012 9:56 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Fortran90 Bindings

Yep.

On 18/07/2012 8:53 AM, Kumar, Sudhir wrote:

Hi

Just wondering if Fortran90 bindings are available for OpenMPI 1.6

Thanks

*Sudhir Kumar*





___

users mailing list

us...@open-mpi.org  <mailto:us...@open-mpi.org>

http://www.open-mpi.org/mailman/listinfo.cgi/users




___

users mailing list

us...@open-mpi.org  <mailto:us...@open-mpi.org>

http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Fortran90 Bindings

2012-07-18 Thread Damien
Hmmm.  6 months ago there weren't F90 bindings in the Windows version 
(the F90 bindings are large and tricky).  It's an option you can select 
when you compile it yourself, but looking at the one I just did a month 
ago, there's still no mpif90.exe built, so I'd say that's still not 
supported on Windows.  :-(


Damien

On 18/07/2012 9:00 AM, Kumar, Sudhir wrote:


Hi, I had meant to ask if Fortran90 bindings are available for Windows

*Sudhir Kumar*

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Wednesday, July 18, 2012 9:56 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Fortran90 Bindings

Yep.

On 18/07/2012 8:53 AM, Kumar, Sudhir wrote:

Hi

Just wondering if Fortran90 bindings are available for OpenMPI 1.6

Thanks

*Sudhir Kumar*




___

users mailing list

us...@open-mpi.org  <mailto:us...@open-mpi.org>

http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Fortran90 Bindings

2012-07-18 Thread Damien

Yep.

On 18/07/2012 8:53 AM, Kumar, Sudhir wrote:


Hi

Just wondering if Fortran90 bindings are available for OpenMPI 1.6

Thanks

*Sudhir Kumar*



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] VS2008 : linking against OpenMPI: unresolved external symbols

2012-06-26 Thread Damien
Really fast off the top of my head, LNK4248 and LNK2020 are 
Microsoft-specific C++/CLI diagnostics for managed C++.  Are you intending 
to use managed C++ in your app?  That can do funny things to linker symbols.


Also, you might need to have all three of OMPI_IMPORTS, OPAL_IMPORTS and 
ORTE_IMPORTS defined.


Also, make sure you're not set to a 64-bit project type using 32-bit 
OpenMPI.


Damien

On 25/06/2012 8:57 PM, Dr AD wrote:

Hello,
I installed the Windows binaries by running OpenMPI_v1.6-1_win32.exe.
In VS2008 Professional I set the following project preferences:

Configuration -> Properties -> Debugging : MPI Cluster Debugger
MPIRun Working Directory : localhost/NUM PROCS TO LAUNCH
MPIRun Command: C:\Program 
Files\OpenMPI_v1.6-win32\bin\mpiexec.exe


C/C++ -> Additional Include Directories: C:\Program 
Files\OpenMPI_v1.6-win32\include

C/C++ -> Preprocessor-> Preprocessor Definitions:
OMPI_IMPORTS

Linker -> Additioanl Library Directories: C:\Program 
Files\OpenMPI_v1.6-win32\lib

Linker -> Additional Dependencies:libmpid.lib
libopen-rted.lib
libopen-pald.lib
libmpi_cxxd.lib

I get unresolved external symbols link errors, below:

: warning LNK4248: unresolved typeref token (0115) for 
'ompi_datatype_t'; image may not run
 warning LNK4248: unresolved typeref token (0116) for 
'ompi_request_t'; image may not run
f warning LNK4248: unresolved typeref token (0117) for 
'ompi_group_t'; image may not run
 warning LNK4248: unresolved typeref token (0118) for 
'ompi_communicator_t'; image may not run
 warning LNK4248: unresolved typeref token (0119) for 
'ompi_win_t'; image may not run
 warning LNK4248: unresolved typeref token (011B) for 
'ompi_errhandler_t'; image may not run
 warning LNK4248: unresolved typeref token (011C) for 
'ompi_info_t'; image may not run
 warning LNK4248: unresolved typeref token (011D) for 'ompi_op_t'; 
image may not run
 warning LNK4248: unresolved typeref token (0122) for 
'ompi_predefined_communicator_t'; image may not run


 error LNK2020: unresolved token (0A0003B5) *ompi_mpi_comm_null*
 error LNK2020: unresolved token (0A000486) *ompi_mpi_comm_world*
 error LNK2028: unresolved token (0A0004AF) "public: __thiscall 
MPI::Comm::Comm(void)" (??0Comm@MPI@@$$FQAE@XZ) referenced in function 
"public: __thiscall MPI::Intracomm::Intracomm(struct 
ompi_communicator_t *)" 
(??0Intracomm@MPI@@$$FQAE@PAUompi_communicator_t@@@Z)
 error LNK2001: unresolved external symbol "public: virtual void 
__thiscall MPI::Datatype::Free(void)" (?Free@Datatype@MPI@@UAEXXZ)
 error LNK2001: unresolved external symbol "public: virtual void 
__thiscall MPI::Win::Free(void)" (?Free@Win@MPI@@UAEXXZ)

 error LNK2001: unresolved external symbol _ompi_mpi_comm_null
 error LNK2019: unresolved external symbol "public: __thiscall 
MPI::Comm::Comm(void)" (??0Comm@MPI@@$$FQAE@XZ) referenced in function 
"public: __thiscall MPI::Intracomm::Intracomm(struct 
ompi_communicator_t *)" 
(??0Intracomm@MPI@@$$FQAE@PAUompi_communicator_t@@@Z)

 error LNK2001: unresolved external symbol _ompi_mpi_cxx_op_intercept
 error LNK2001: unresolved external symbol _ompi_mpi_comm_world

Does anyone know how to fix this ? Thank you.



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users





Re: [OMPI users] Using OpenMPI on a network

2012-06-19 Thread Damien
There's something else wrong, if that's the Supercomputing Blog tutorial 
1 you're running.  It works happily without a hostfile.  I think you 
have some borked paths there.


I don't know why a Windows version is looking for an etc directory for a 
hostfile, unless some of your previous Cygwin builds are lying 
around.  The etc directory is a *Nix thing.  Please make sure you've 
completely deleted all your old failed OpenMPI builds, code, binaries, 
everything.  Uninstall any other MPI versions you have tried, OpenMPI, 
MPICH, whatever.  You need to make absolutely sure you only have one 
version.  Check your paths in your environment after doing all that and 
remove any remaining path references to other MPI versions.  You 
shouldn't be getting that network error either; if you're running 
locally it won't matter whether you have a network cable or not.  That has to 
be fixed before you can do anything on a cluster.
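
One quick way to sanity-check which installation is actually being picked
up (a suggestion, not something from the original report) is to open a
command prompt and run:

where mpiexec
ompi_info | findstr Prefix

The first command lists every mpiexec on your PATH in search order; the
second prints the install prefix of the Open MPI that ompi_info itself
belongs to.  If the two point at different places, the paths still need
cleaning up.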


Damien


On 19/06/2012 10:53 AM, vimalmat...@eaton.com wrote:


Damien, Shiqing, Jeff?

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *vimalmat...@eaton.com

*Sent:* Monday, June 18, 2012 3:32 PM
*To:* us...@open-mpi.org
*Subject:* [OMPI users] Using OpenMPI on a network

So I configured and compiled a simple MPI program.

Now the issue is when I try to do the same thing on my computer on a 
corporate network, I get this error:


C:\OpenMPI\openmpi-1.6\installed\bin>mpiexec MPI_Tutorial_1.exe

--

*Open RTE was unable to open the hostfile:*

*C:\OpenMPI\openmpi-1.6\installed\bin/../etc/openmpi-default-hostfile*

*Check to make sure the path and filename are correct.*

*--*

*[SOUMIWHP5003567:01884] [[37936,0],0] ORTE_ERROR_LOG: Not found in 
file C:\OpenM*


*PI\openmpi-1.6\orte\mca\ras\base\ras_base_allocate.c at line 200*

*[SOUMIWHP5003567:01884] [[37936,0],0] ORTE_ERROR_LOG: Not found in 
file C:\OpenM*


*PI\openmpi-1.6\orte\mca\plm\base\plm_base_launch_support.c at line 99*

*[SOUMIWHP5003567:01884] [[37936,0],0] ORTE_ERROR_LOG: Not found in 
file C:\OpenM*


*PI\openmpi-1.6\orte\mca\plm\process\plm_process_module.c at line 996*

**

What network settings should I be using? I'm sure this is because of 
the network because when I unplug the network cable, I get the error 
message I got below.


Thanks,

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Friday, June 15, 2012 3:15 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

OK, that's what orte_rml_base_select failed means, no TCP connection.  
But you should be able to make OpenMPI & mpiexec work without a 
network if you're just running in local memory.  There's probably a 
runtime parameter to set but I don't know what it is.  Maybe Jeff or 
Shiqing can weigh in with what that is.


Damien

On 15/06/2012 1:10 PM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Just figured it out.

The only thing different between when it ran yesterday and today was that I was 
connected to a network. So I connected my laptop to a network and it 
worked again.


Thanks for all your help, Damien!

I'm sure I'm gonna get stuck more along the way so hoping you can help.

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Friday, June 15, 2012 2:57 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Hmmm.  Two things.  Can you run helloworldMPI.exe on its own?  It 
should output "Number of threads = 1, My rank = 0"


Also, can you post the output of ompi_info ?  I think you might still 
have some path mixups.  A successful OpenMPI build with this simple 
program should just work.


If you still have the other OpenMPIs installed from the binaries, you 
might want to try uninstalling all of them and rebooting.  Also if you 
rebuilt OpenMPI and helloworldMPI with VS 2010, make sure that 
helloworldMPI is actually linked to those VS2010 OpenMPI libs by 
setting the right lib path in the Linker options. Linking to VS2008 
libs and trying to run with VS2010 dlls/exes could cause problems too.


Damien

On 15/06/2012 11:44 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Hi Damien,

I installed MS Visual Studio 2010 and tried the whole procedure again 
and it worked!


That's the great news.

Now the bad news is that I'm trying to run the program again using 
mpiexec and it won't!


I get these error messages:

orte_rml_base_select failed

orte_ess_set_name failed, with a bunch of text saying it could be due 
to configuration or environment problems and will make sense only to 
an OpenMPI developer.


Help

Re: [OMPI users] Building MPI on Windows

2012-06-15 Thread Damien
OK, that's what orte_rml_base_select failed means, no TCP connection.  
But you should be able to make OpenMPI & mpiexec work without a network 
if you're just running in local memory.  There's probably a runtime 
parameter to set but I don't know what it is.  Maybe Jeff or Shiqing can 
weigh in with what that is.


Damien

On 15/06/2012 1:10 PM, vimalmat...@eaton.com wrote:


Just figured it out.

The only thing different between when it ran yesterday and today was that I was 
connected to a network. So I connected my laptop to a network and it 
worked again.


Thanks for all your help, Damien!

I'm sure I'm gonna get stuck more along the way so hoping you can help.

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Friday, June 15, 2012 2:57 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Hmmm.  Two things.  Can you run helloworldMPI.exe on its own?  It 
should output "Number of threads = 1, My rank = 0"


Also, can you post the output of ompi_info ?  I think you might still 
have some path mixups.  A successful OpenMPI build with this simple 
program should just work.


If you still have the other OpenMPIs installed from the binaries, you 
might want to try uninstalling all of them and rebooting.  Also if you 
rebuilt OpenMPI and helloworldMPI with VS 2010, make sure that 
helloworldMPI is actually linked to those VS2010 OpenMPI libs by 
setting the right lib path in the Linker options.  Linking to VS2008 
libs and trying to run with VS2010 dlls/exes could cause problems too.


Damien

On 15/06/2012 11:44 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Hi Damien,

I installed MS Visual Studio 2010 and tried the whole procedure again 
and it worked!


That's the great news.

Now the bad news is that I'm trying to run the program again using 
mpiexec and it won't!


I get these error messages:

orte_rml_base_select failed

orte_ess_set_name failed, with a bunch of text saying it could be due 
to configuration or environment problems and will make sense only to 
an OpenMPI developer.


Help!

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 4:55 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

You did build the project, right?  The helloworldMPI.exe is in the 
Debug directory?


On 14/06/2012 1:49 PM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


No luck.

Output:

*Microsoft Windows [Version 6.1.7601]*

*Copyright (c) 2009 Microsoft Corporation.  All rights reserved.*

**

*C:\Users\...>cd "C:\Users\C9995799\Downloads\helloworldMPI\Debug"*

**

*C:\Users\...\Downloads\helloworldMPI\Debug>mpiexec -n 2 
helloworldMPI.exe*


*--*

*mpiexec was unable to launch the specified application as it could 
not find an e*


*xecutable:*

**

*Executable: helloworldMPI.exe*

*Node: SOUMIWHP5003567*

**

*while attempting to start process rank 0.*

*--*

*2 total processes failed to start*

**

*C:\Users\...\Downloads\helloworldMPI\Debug>*

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 3:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Here's a MPI Hello World project based on your code.  It runs fine on 
my machine.  You'll need to change the include and lib paths as we 
discussed before to match your paths, and copy those bin files over to 
the Debug directory.


Run it with this to start:  "mpiexec -n 1 helloworldMPI.exe".  Then 
change the -n 1 to -n x where x is the number of cores you have.  Say 
yes to allowing mpiexec firewall access if that comes up.


If this bombs out, there's something wrong on your machine.

Damien




___
users mailing list
us...@open-mpi.org  <mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Building MPI on Windows

2012-06-15 Thread Damien
Hmmm.  Two things.  Can you run helloworldMPI.exe on its own?  It 
should output "Number of threads = 1, My rank = 0"


Also, can you post the output of ompi_info ?  I think you might still 
have some path mixups.  A successful OpenMPI build with this simple 
program should just work.


If you still have the other OpenMPIs installed from the binaries, you 
might want to try uninstalling all of them and rebooting.  Also if you 
rebuilt OpenMPI and helloworldMPI with VS 2010, make sure that 
helloworldMPI is actually linked to those VS2010 OpenMPI libs by setting 
the right lib path in the Linker options.  Linking to VS2008 libs and 
trying to run with VS2010 dlls/exes could cause problems too.


Damien

On 15/06/2012 11:44 AM, vimalmat...@eaton.com wrote:


Hi Damien,

I installed MS Visual Studio 2010 and tried the whole procedure again 
and it worked!


That's the great news.

Now the bad news is that I'm trying to run the program again using 
mpiexec and it won't!


I get these error messages:

orte_rml_base_select failed

orte_ess_set_name failed, with a bunch of text saying it could be due 
to configuration or environment problems and will make sense only to 
an OpenMPI developer.


Help!

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 4:55 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

You did build the project, right?  The helloworldMPI.exe is in the 
Debug directory?


On 14/06/2012 1:49 PM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


No luck.

Output:

*Microsoft Windows [Version 6.1.7601]*

*Copyright (c) 2009 Microsoft Corporation.  All rights reserved.*

**

*C:\Users\...>cd "C:\Users\C9995799\Downloads\helloworldMPI\Debug"*

**

*C:\Users\...\Downloads\helloworldMPI\Debug>mpiexec -n 2 
helloworldMPI.exe*


*--*

*mpiexec was unable to launch the specified application as it could 
not find an e*


*xecutable:*

**

*Executable: helloworldMPI.exe*

*Node: SOUMIWHP5003567*

**

*while attempting to start process rank 0.*

*--*

*2 total processes failed to start*

**

*C:\Users\...\Downloads\helloworldMPI\Debug>*

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 3:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Here's a MPI Hello World project based on your code.  It runs fine on 
my machine.  You'll need to change the include and lib paths as we 
discussed before to match your paths, and copy those bin files over to 
the Debug directory.


Run it with this to start:  "mpiexec -n 1 helloworldMPI.exe".  Then 
change the -n 1 to -n x where x is the number of cores you have.  Say 
yes to allowing mpiexec firewall access if that comes up.


If this bombs out, there's something wrong on your machine.

Damien



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
You did build the project, right?  The helloworldMPI.exe is in the Debug 
directory?


On 14/06/2012 1:49 PM, vimalmat...@eaton.com wrote:


No luck.

Output:

*Microsoft Windows [Version 6.1.7601]*

*Copyright (c) 2009 Microsoft Corporation.  All rights reserved.*

**

*C:\Users\...>cd "C:\Users\C9995799\Downloads\helloworldMPI\Debug"*

**

*C:\Users\...\Downloads\helloworldMPI\Debug>mpiexec -n 2 
helloworldMPI.exe*


*--*

*mpiexec was unable to launch the specified application as it could 
not find an e*


*xecutable:*

**

*Executable: helloworldMPI.exe*

*Node: SOUMIWHP5003567*

**

*while attempting to start process rank 0.*

*--*

*2 total processes failed to start*

**

*C:\Users\...\Downloads\helloworldMPI\Debug>*

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 3:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Here's a MPI Hello World project based on your code.  It runs fine on 
my machine.  You'll need to change the include and lib paths as we 
discussed before to match your paths, and copy those bin files over to 
the Debug directory.


Run it with this to start:  "mpiexec -n 1 helloworldMPI.exe".  Then 
change the -n 1 to -n x where x is the number of cores you have.  Say 
yes to allowing mpiexec firewall access if that comes up.


If this bombs out, there's something wrong on your machine.

Damien




Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
Here's a MPI Hello World project based on your code.  It runs fine on my 
machine.  You'll need to change the include and lib paths as we 
discussed before to match your paths, and copy those bin files over to 
the Debug directory.


Run it with this to start:  "mpiexec -n 1 helloworldMPI.exe".  Then 
change the -n 1 to -n x where x is the number of cores you have.  Say 
yes to allowing mpiexec firewall access if that comes up.


If this bombs out, there's something wrong on your machine.
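
For example, on a quad-core machine the sequence would look roughly like
this (the output order can differ from run to run):

mpiexec -n 1 helloworldMPI.exe
Number of threads = 1, My rank = 0

mpiexec -n 4 helloworldMPI.exe
Number of threads = 4, My rank = 2
Number of threads = 4, My rank = 0
Number of threads = 4, My rank = 3
Number of threads = 4, My rank = 1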

Damien

On 14/06/2012 1:07 PM, vimalmat...@eaton.com wrote:


Anything else that you can think off that could be causing this?

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *vimalmat...@eaton.com

*Sent:* Thursday, June 14, 2012 2:10 PM
*To:* us...@open-mpi.org
*Subject:* Re: [OMPI users] Building MPI on Windows

Yes, I'm trying to do this in C++.

The file is named .cpp.

I changed printf to cout. Still no change in the output.

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 2:03 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Hmmm.  I think that tutorial might be slightly broken.  Calling printf 
without #include <stdio.h> causes all kinds of random runtime problems, 
and printf in a C++ program is generally not awesome.


Is your long-term goal a C++ project?  If you want C++, rename your 
main.c to main.cpp and update your project.  And change


printf ("Number of threads = %d, My rank = %d\n", nTasks, rank);

to

std::cout << "Number of threads = " << nTasks << ", My rank = " << 
rank << "\n";


Damien

On 14/06/2012 11:52 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Yeah, that's the only output.

Here's the code:

#include <iostream>
#include "mpi.h"

using namespace std;

int main(int argc, char* argv[])
{
    //cout << "Hello World\n";
    int nTasks, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nTasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf ("Number of threads = %d, My rank = %d\n", nTasks, rank);

    MPI_Finalize();
    return 0;
}

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:49 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

OK.  We might need to see the code for the program you're trying to 
run with mpiexec to help with that one.  Is that the full output?


Damien

On 14/06/2012 11:41 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Did the copy paste.

Now I get a message saying: *mpiexec noticed that the job aborted, but 
has no info as to the process that caused that situation*


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:36 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Ah.  You installed the OpenMPI binaries too right?  Those are built 
with VS2010.  I bet the way your paths are set up it's using the 
installed version of mpiexec that wants VS2010 runtimes, not the 
mpiexec you compiled today that will use the VS2008 runtimes that you 
have.


OK, here's a quick fix you can try so you don't have to mess with 
paths.  Go into the installed\bin directory from your OpenMPI version, 
select everything, and copy it to the directory where your new project 
puts its executables (probably the Debug directory).  Then run your 
mpiexec  from within that Debug directory and it will use all your 
OpenMPI exes and dlls because it will look there first.


Welcome to path pain.  Happens on every operating system.

Damien

On 14/06/2012 11:19 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


No, it's the VS 2008 Express edition.

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:17 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That's odd.  That's the standard MS C++ runtime for VS 2010.  I 
thought you built with VS 2008 Express though.  Or is your VS Express 
the 2010 version?


On 14/06/2012 11:10 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks.

When I try to run the program with mpiexec.exe, I get an error message 
saying "*The Program can't start because MSVCR100.dll is missing from 
your computer*".


What did I miss?

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf O

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
Hmmm.  I think that tutorial might be slightly broken.  Calling printf 
without #include <stdio.h> causes all kinds of random runtime problems, 
and printf in a C++ program is generally not awesome.


Is your long-term goal a C++ project?  If you want C++, rename your 
main.c to main.cpp and update your project.  And change


printf ("Number of threads = %d, My rank = %d\n", nTasks, rank);

to

std::cout << "Number of threads = " << nTasks << ", My rank = " << rank 
<< "\n";


Damien

On 14/06/2012 11:52 AM, vimalmat...@eaton.com wrote:


Yeah, that's the only output.

Here's the code:

#include <iostream>
#include "mpi.h"

using namespace std;

int main(int argc, char* argv[])
{
    //cout << "Hello World\n";
    int nTasks, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nTasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf ("Number of threads = %d, My rank = %d\n", nTasks, rank);

    MPI_Finalize();
    return 0;
}

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:49 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

OK.  We might need to see the code for the program you're trying to 
run with mpiexec to help with that one.  Is that the full output?


Damien

On 14/06/2012 11:41 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Did the copy paste.

Now I get a message saying: *mpiexec noticed that the job aborted, but 
has no info as to the process that caused that situation*


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:36 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Ah.  You installed the OpenMPI binaries too right?  Those are built 
with VS2010.  I bet the way your paths are set up it's using the 
installed version of mpiexec that wants VS2010 runtimes, not the 
mpiexec you compiled today that will use the VS2008 runtimes that you 
have.


OK, here's a quick fix you can try so you don't have to mess with 
paths.  Go into the installed\bin directory from your OpenMPI version, 
select everything, and copy it to the directory where your new project 
puts its executables (probably the Debug directory).  Then run your 
mpiexec  from within that Debug directory and it will use all your 
OpenMPI exes and dlls because it will look there first.


Welcome to path pain.  Happens on every operating system.

Damien

On 14/06/2012 11:19 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


No, it's the VS 2008 Express edition.

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:17 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That's odd.  That's the standard MS C++ runtime for VS 2010.  I 
thought you built with VS 2008 Express though.  Or is your VS Express 
the 2010 version?


On 14/06/2012 11:10 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks.

When I try to run the program with mpiexec.exe, I get an error message 
saying "*The Program can't start because MSVCR100.dll is missing from 
your computer*".


What did I miss?

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 12:11 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That goes in Configuration Properties - C/C++ - Preprocessor - 
Preprocessor Definitions.


Damien

On 14/06/2012 10:07 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks a lot Damien.

When I compile the code that they've used in the link you sent, I get 
this error: *error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world*


I looked this up in the mail archives and Shiqing said "OMPI_IMPORTS" 
needs to be added as a preprocessor definition in the project 
configuration. Where specifically do I add this?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 11:42 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Start with this:

http://supercomputingblog.com/mpi/getting-started-with-mpi-using-visual-studio-2008-express/

The only difference is that wherever this tutorial says "HPC Pack 
2008 SDK directory" you should go to "c:\ompi\openmpi-1.6\installed" 
and use the include, lib and bin directories from there.


This will give you a simple VS project that you can use to start 
building your own stuff.
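
For reference, the tutorial's settings map onto a self-built install
roughly as follows (a sketch; adjust the paths to wherever your own build
landed):

C/C++  -> General -> Additional Include Directories:  c:\ompi\openmpi-1.6\installed\include
Linker -> General -> Additional Library Directories:  c:\ompi\openmpi-1.6\installed\lib
Linker -> Input   -> Additional Dependencies:         libmpi.lib libmpi_cxx.lib
C/C++  -> Preprocessor -> Preprocessor Definitions:   OMPI_IMPORTS;OPAL_IMPORTS;ORTE_IMPORTS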

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
OK.  We might need to see the code for the program you're trying to run 
with mpiexec to help with that one.  Is that the full output?


Damien

On 14/06/2012 11:41 AM, vimalmat...@eaton.com wrote:


Did the copy paste.

Now I get a message saying: *mpiexec noticed that the job aborted, but 
has no info as to the process that caused that situation*


--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:36 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Ah.  You installed the OpenMPI binaries too right?  Those are built 
with VS2010.  I bet the way your paths are set up it's using the 
installed version of mpiexec that wants VS2010 runtimes, not the 
mpiexec you compiled today that will use the VS2008 runtimes that you 
have.


OK, here's a quick fix you can try so you don't have to mess with 
paths.  Go into the installed\bin directory from your OpenMPI version, 
select everything, and copy it to the directory where your new project 
puts its executables (probably the Debug directory).  Then run your 
mpiexec  from within that Debug directory and it will use all your 
OpenMPI exes and dlls because it will look there first.


Welcome to path pain.  Happens on every operating system.

Damien

On 14/06/2012 11:19 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


No, it's the VS 2008 Express edition.

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:17 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That's odd.  That's the standard MS C++ runtime for VS 2010.  I 
thought you built with VS 2008 Express though.  Or is your VS Express 
the 2010 version?


On 14/06/2012 11:10 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks.

When I try to run the program with mpiexec.exe, I get an error message 
saying "*The Program can't start because MSVCR100.dll is missing from 
your computer*".


What did I miss?

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 12:11 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That goes in Configuration Properties - C/C++ - Preprocessor - 
Preprocessor Definitions.


Damien

On 14/06/2012 10:07 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks a lot Damien.

When I compile the code that they've used in the link you sent, I get 
this error: *error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world*


I looked this up in the mail archives and Shiqing said "OMPI_IMPORTS" 
needs to be added as a preprocessor definition in the project 
configuration. Where specifically do I add this?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 11:42 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Start with this:

http://supercomputingblog.com/mpi/getting-started-with-mpi-using-visual-studio-2008-express/

The only difference is that wherever this tutorial says "HPC Pack 
2008 SDK directory" you should go to "c:\ompi\openmpi-1.6\installed" 
and use the include, lib and bin directories from there.


This will give you a simple VS project that you can use to start 
building your own stuff.


Damien

On 14/06/2012 8:55 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Everything went as you expected.

No errors except that I don't have Fortran compilers so checking 
OMPI_WANT_F77_BINDINGS and OMPI_WANT_F90_BINDINGS threw some error 
messages.


One more question -- in which project out of the 16 do I include code 
that I want to run?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Wednesday, June 13, 2012 5:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


5)  Look at the gene

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
Ah.  You installed the OpenMPI binaries too right?  Those are built with 
VS2010.  I bet the way your paths are set up it's using the installed 
version of mpiexec that wants VS2010 runtimes, not the mpiexec you 
compiled today that will use the VS2008 runtimes that you have.


OK, here's a quick fix you can try so you don't have to mess with 
paths.  Go into the installed\bin directory from your OpenMPI version, 
select everything, and copy it to the directory where your new project 
puts its executables (probably the Debug directory).  Then run your 
mpiexec  from within that Debug directory and it will use all your 
OpenMPI exes and dlls because it will look there first.


Welcome to path pain.  Happens on every operating system.
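
From a command prompt, that copy step might look like this (both paths are
examples; use your own installed\bin location and your project's Debug
directory):

xcopy "c:\ompi\openmpi-1.6\installed\bin\*" "C:\Users\you\Downloads\helloworldMPI\Debug\" /Y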

Damien

On 14/06/2012 11:19 AM, vimalmat...@eaton.com wrote:


No, it's the VS 2008 Express edition.

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 1:17 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That's odd.  That's the standard MS C++ runtime for VS 2010.  I 
thought you built with VS 2008 Express though.  Or is your VS Express 
the 2010 version?


On 14/06/2012 11:10 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks.

When I try to run the program with mpiexec.exe, I get an error message 
saying "*The Program can't start because MSVCR100.dll is missing from 
your computer*".


What did I miss?

--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 12:11 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That goes in Configuration Properties - C/C++ - Preprocessor - 
Preprocessor Definitions.
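
As background, and just as a sketch (this isn't the tutorial's exact sample): MPI_COMM_WORLD in Open MPI's mpi.h refers to the exported ompi_mpi_comm_world object, which is the symbol the linker is complaining about, so even a minimal program like the one below should only link once OMPI_IMPORTS is defined -- in the project settings as above, or (which should be equivalent) before mpi.h is included.

/* hello_imports.c -- minimal sketch, not the tutorial's sample.
 * MPI_COMM_WORLD is what pulls in the ompi_mpi_comm_world symbol from
 * the Open MPI DLL; without OMPI_IMPORTS the header doesn't mark it as
 * a DLL import and the link fails with LNK2001 as shown above. */
/* #define OMPI_IMPORTS */  /* defining it here, before mpi.h, should be
                               equivalent to the project-level setting */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}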


Damien

On 14/06/2012 10:07 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks a lot Damien.

When I compile the code that they've used in the link you sent, I get 
this error: *error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world*


I looked this up in the mail archives and Shiqing said "OMPI_IMPORTS" 
needs to be added as a preprocessor definition in the project 
configuration. Where specifically do I add this?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 11:42 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Start with this:

http://supercomputingblog.com/mpi/getting-started-with-mpi-using-visual-studio-2008-express/

The only difference is that wherever this tutorial says "HPC Pack 
2008 SDK directory" you should go to "c:\ompi\openmpi-1.6\installed" 
and use the include, lib and bin directories from there.


This will give you a simple VS project that you can use to start 
building your own stuff.


Damien

On 14/06/2012 8:55 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Everything went as you expected.

No errors except that I don't have Fortran compilers so checking 
OMPI_WANT_F77_BINDINGS and OMPI_WANT_F90_BINDINGS threw some error 
messages.


One more question -- in which project out of the 16 do I include code 
that I want to run?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Wednesday, June 13, 2012 5:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right 
compilers.


In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer 
XE 2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you 
have.


Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


Fir

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
That's odd.  That's the standard MS C++ runtime for VS 2010.  I thought 
you built with VS 2008 Express though.  Or is your VS Express the 2010 
version?


On 14/06/2012 11:10 AM, vimalmat...@eaton.com wrote:


Thanks.

When I try to run the program with mpiexec.exe, I get an error message 
saying "*The Program can't start because MSVCR100.dll is missing from 
your computer*".


What did I miss?

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 12:11 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

That goes in Configuration Properties - C/C++ - Preprocessor - 
Preprocessor Definitions.


Damien

On 14/06/2012 10:07 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Thanks a lot Damien.

When I compile the code that they've used in the link you sent, I get 
this error: *error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world*


I looked this up in the mail archives and Shiqing said "OMPI_IMPORTS" 
needs to be added as a preprocessor definition in the project 
configuration. Where specifically do I add this?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] *On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 11:42 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Start with this:

http://supercomputingblog.com/mpi/getting-started-with-mpi-using-visual-studio-2008-express/

The only difference is that wherever this tutorial says "HPC Pack 
2008 SDK directory" you should go to "c:\ompi\openmpi-1.6\installed" 
and use the include, lib and bin directories from there.


This will give you a simple VS project that you can use to start 
building your own stuff.


Damien

On 14/06/2012 8:55 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Everything went as you expected.

No errors except that I don't have Fortran compilers so checking 
OMPI_WANT_F77_BINDINGS and OMPI_WANT_F90_BINDINGS threw some error 
messages.


One more question -- in which project out of the 16 do I include code 
that I want to run?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Wednesday, June 13, 2012 5:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right 
compilers.


In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer 
XE 2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you 
have.


Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


First things first.  If you want a Release build, you have to change a 
CMake setting.  The 5th line down in the red window will say 
CMAKE_BUILD_TYPE.  Change the text (type it in) to say Release if you 
want a Release build, otherwise the final install step won't work.


Also, further down the red window there's some options you should 
change.  Scroll down through that window, there's a lot to choose 
from.  I usually check OMPI_RELEASE_BUILD, OMPI_WANT_F77_BINDINGS and 
OMPI_WANT_F90_BINDINGS.  OMPI_WANT_CXX_BINDINGS should already be 
checked.  (Note to Jeff & Shiqing: We should probably work out a good 
set of standard choices if there are others on top of these).


6)  Press Configure again, and CMake will go through identifying the 
Fortran compiler if you asked for Fortran bindings and a few other 
things.  It should work fine with the options above.


7)  Assuming that it was fine, press Generate.  That produces an 
OpenMPI.sln project for Visual Studio, it's in whatever directory you 
specified as your build directory.


8)  Open the sln in Visual Studio.  Open the Properties of "Solution 
'OpenMPI'".  Look at C

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien
That goes in Configuration Properties - C/C++ - Preprocessor - 
Preprocessor Definitions.


Damien

On 14/06/2012 10:07 AM, vimalmat...@eaton.com wrote:


Thanks a lot Damien.

When I compile the code that they've used in the link you sent, I get 
this error: *error LNK2001: unresolved external symbol 
_ompi_mpi_comm_world*


I looked this up in the mail archives and Shiqing said "OMPI_IMPORTS" 
needs to be added as a preprocessor definition in the project 
configuration. Where specifically do I add this?


--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Damien

*Sent:* Thursday, June 14, 2012 11:42 AM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Start with this:

http://supercomputingblog.com/mpi/getting-started-with-mpi-using-visual-studio-2008-express/

The only difference is that wherever this tutorial says "HPC Pack 
2008 SDK directory" you should go to "c:\ompi\openmpi-1.6\installed" 
and use the include, lib and bin directories from there.


This will give you a simple VS project that you can use to start 
building your own stuff.


Damien

On 14/06/2012 8:55 AM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


Everything went as you expected.

No errors except that I don't have Fortran compilers so checking 
OMPI_WANT_F77_BINDINGS and OMPI_WANT_F90_BINDINGS threw some error 
messages.


One more question -- in which project out of the 16 do I include code 
that I want to run?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Wednesday, June 13, 2012 5:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right 
compilers.


In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer 
XE 2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you 
have.


Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


First things first.  If you want a Release build, you have to change a 
CMake setting.  The 5th line down in the red window will say 
CMAKE_BUILD_TYPE.  Change the text (type it in) to say Release if you 
want a Release build, otherwise the final install step won't work.


Also, further down the red window there's some options you should 
change.  Scroll down through that window, there's a lot to choose 
from.  I usually check OMPI_RELEASE_BUILD, OMPI_WANT_F77_BINDINGS and 
OMPI_WANT_F90_BINDINGS.  OMPI_WANT_CXX_BINDINGS should already be 
checked.  (Note to Jeff & Shiqing: We should probably work out a good 
set of standard choices if there are others on top of these).


6)  Press Configure again, and CMake will go through identifying the 
Fortran compiler if you asked for Fortran bindings and a few other 
things.  It should work fine with the options above.


7)  Assuming that it was fine, press Generate.  That produces an 
OpenMPI.sln project for Visual Studio, it's in whatever directory you 
specified as your build directory.


8)  Open the sln in Visual Studio.  Open the Properties of "Solution 
'OpenMPI'".  Look at Configuration Properties - Configuration.  Check 
the Configuration button at the top, it might say Debug, but it should 
say Release if you changed CMAKE_BUILD_TYPE earlier.  If it says 
Debug, change the drop-down to Release.  Click OK.  Then open the 
Properties again and make sure what you selected is right, otherwise 
change it, press OK again.  Visual Studio does that sometimes.


9)  Moment of Truth.  Right-click on "Solution 'OpenMPI'" and select 
Build Solution.  The compile should start.


10)  Wait.

11)  Wait some more.

12)  Grab a snack (or a beer.), this will take a while, 15-20 minutes.

13)  If the build was successful (it should be), there's one last 
step.  Right-click 

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien

Vimal,

Start with this:

http://supercomputingblog.com/mpi/getting-started-with-mpi-using-visual-studio-2008-express/

The only difference is that wherever this tutorial says "HPC Pack 2008 
SDK directory" you should go to "c:\ompi\openmpi-1.6\installed" and use 
the include, lib and bin directories from there.


This will give you a simple VS project that you can use to start 
building your own stuff.


Damien

On 14/06/2012 8:55 AM, vimalmat...@eaton.com wrote:


Everything went as you expected.

No errors except that I don't have Fortran compilers so checking 
OMPI_WANT_F77_BINDINGS and OMPI_WANT_F90_BINDINGS threw some error 
messages.


One more question -- in which project out of the 16 do I include code 
that I want to run?


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Wednesday, June 13, 2012 5:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right 
compilers.


In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer 
XE 2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you 
have.


Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


First things first.  If you want a Release build, you have to change a 
CMake setting.  The 5th line down in the red window will say 
CMAKE_BUILD_TYPE.  Change the text (type it in) to say Release if you 
want a Release build, otherwise the final install step won't work.


Also, further down the red window there's some options you should 
change.  Scroll down through that window, there's a lot to choose 
from.  I usually check OMPI_RELEASE_BUILD, OMPI_WANT_F77_BINDINGS and 
OMPI_WANT_F90_BINDINGS.  OMPI_WANT_CXX_BINDINGS should already be 
checked.  (Note to Jeff & Shiqing: We should probably work out a good 
set of standard choices if there are others on top of these).


6)  Press Configure again, and CMake will go through identifying the 
Fortran compiler if you asked for Fortran bindings and a few other 
things.  It should work fine with the options above.


7)  Assuming that it was fine, press Generate.  That produces an 
OpenMPI.sln project for Visual Studio, it's in whatever directory you 
specified as your build directory.


8)  Open the sln in Visual Studio.  Open the Properties of "Solution 
'OpenMPI'".  Look at Configuration Properties - Configuration.  Check 
the Configuration button at the top, it might say Debug, but it should 
say Release if you changed CMAKE_BUILD_TYPE earlier.  If it says 
Debug, change the drop-down to Release.  Click OK.  Then open the 
Properties again and make sure what you selected is right, otherwise 
change it, press OK again.  Visual Studio does that sometimes.


9)  Moment of Truth.  Right-click on "Solution 'OpenMPI'" and select 
Build Solution.  The compile should start.


10)  Wait.

11)  Wait some more.

12)  Grab a snack (or a beer.), this will take a while, 15-20 minutes.

13)  If the build was successful (it should be), there's one last 
step.  Right-click on the INSTALL sub-project and click Build.  That 
will organise the header files, libraries and binaries into a set of 
directories, under whatever directory you said your source is in with 
CMake.  On mine it was C:\projects6\openmpi-1.6\installed.  In there 
you'll see bin, include, lib and share directories.  That's a complete 
OpenMPI build with everything you need.
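
A quick sanity check afterwards (just a sketch, nothing official): build a small test program against the include and lib directories under installed, then launch it with the mpiexec from installed\bin, e.g. "mpiexec -np 2 ring_check.exe".  The file name ring_check.c below is arbitrary; any minimal MPI send/receive test will do.

/* ring_check.c -- sketch of a two-rank sanity test for a fresh build.
 * Rank 0 sends a token to rank 1, which sends it straight back. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, token = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size >= 2) {
        if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 got the token back: %d\n", token);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    } else if (rank == 0) {
        printf("run with at least 2 processes for the round trip\n");
    }
    MPI_Finalize();
    return 0;
}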


If you'd like to try this and provide feedback, we can tweak the 
instructions until they're bulletproof.  I can help you build with 
whatever compilers you have on your system, just post back to the 
list.  I don't do Cygwin though.  Doing HPC on Windows is weird 
enough.  :-)


Damien

On 13/06/2012 1:35 PM, vimalmat...@eaton.com 
<mailto:vimalmat...@eaton.com> wrote:


What do I do after 

Re: [OMPI users] Building MPI on Windows

2012-06-14 Thread Damien

I was just typing that... :-)

On 14/06/2012 8:19 AM, vimalmat...@eaton.com wrote:


Sorry Damien, my mistake.

Selected the wrong version of VS.

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *vimalmat...@eaton.com

*Sent:* Thursday, June 14, 2012 10:10 AM
*To:* us...@open-mpi.org
*Subject:* Re: [OMPI users] Building MPI on Windows

Hi Damien,

Thanks for the detailed instructions!

Here's my progress:

1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


*Done*

2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


*Extracted to C:\OMPI\openmpi-1.6*


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


*Does it matter where I run CMake from? I've put Cmake-2.8.8-win32-x86 
in C:\OMPI\openmpi-1.6*


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


*Done*

5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right 
compilers.


*I chose Visual Studio 9 2008 Win64.*

In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


*I have "C:/Program Files (x86)/Microsoft Visual Studio 9.0/VC/bin/cl.exe"
*
In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer 
XE 2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you 
have.


*I do not have a Fortran compiler*

Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


*I get an error message saying: Error in configuration process. 
Project files may be invalid*


*I've attached the text that the CMake GUI generated for the error.*


--

Vimal

*From:*users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org> 
[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]> *On Behalf Of *Damien

*Sent:* Wednesday, June 13, 2012 5:38 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this: 
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to 
C:\projects6\openmpi-1.6-64.  You can use 7-Zip to extract tgz 
archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build 
directory.


5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right 
compilers.


In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer 
XE 2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you 
have.


Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


First things first.  If you want a Release build, you have to change a 
CMake setting.  The 5th line down in the red window will say 
CMAKE_BUILD_TYPE.  Change the text (type it in) to say Release if you 
want a Release build, otherwise the final install step won't work.


Also, further down the red window there's some options you should 
change.  Scroll down through that window, there's a lot to choose 
from.  I usually check OMPI_RELEASE_BUILD, OMPI_WANT_F77_BINDINGS and 
OMPI_WANT_F90_BINDINGS.  OMPI_WANT_CXX_BINDINGS should already be 
checked.  (Note to Jeff & Shiqing: We should probably work out a good 
set of standard choices if there are others on top of these).


6)  Press Configure again, and CMake will go through identifying the 
Fortran compiler if you asked for Fortran bindings and a few other 
things.  It should work fine with the options above.


7)  Assuming that it was fine, press Generate.  That produces an 
OpenMPI.sln project for Visual Studio, it's in whatever directory you 
specified as your build directory.


8)  Open the sln in Visual Studio.  Open the Properties of "Solution 
'OpenMPI'".  Look at Configuration Properties - Configuration.  Check 
the Configuration button at the top, it might say Debug, but it should 
say Release if yo

Re: [OMPI users] Building MPI on Windows

2012-06-13 Thread Damien

Vimal,

Shiqing is right, that's a bad way to do it.  This is slightly off-topic 
for the list, but you have to tell VS where header files (.h, .hpp) are, 
which is in Configuration Properties - C/C++ - General - Additional 
Include Directories.  You have to tell VS where additional libraries 
are, which is in Configuration Properties - Linker - General - 
Additional Library Directories.  You have to tell VS which libraries to 
link to, which is in Configuration Properties - Linker - Input - 
Additional Dependencies.  Check out the OpenMPI sln and look at these 
settings in the sub-projects it contains.


VS is a project and build system that drives the configuration for the 
compiler, just like autotools and make are on Linux.  Nothing's done for 
you, you have to set them up for yourself.


Damien

On 13/06/2012 4:21 PM, Shiqing Fan wrote:


This is definitely NOT a good solution. Just setting up the VS 
properties correctly is the direction people should go.



Shiqing

On Wed, 13 Jun 2012 21:51:48 +0200, Trent Creekmore <tcr...@gmail.com> 
wrote:


I find the easiest way to know if LIB and DLL function correctly, and 
to avoid confusion over the correct setup, is to just drop them all in 
the root directory of your project. VS should see them upon load of that 
project.



From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
On Behalf Of vimalmat...@eaton.com

Sent: Wednesday, June 13, 2012 2:47 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Building MPI on Windows


Yes, and then I added the libraries folder in Visual Studio under 
Project Properties>Linker>General>Additional Library Directories.


I tried compiling simple 'Hello World' code and I get an error 
message saying 'Cannot open : No such file or directory'.



What step am I missing?


--

Vimal


From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
On Behalf Of Damien

Sent: Wednesday, June 13, 2012 3:43 PM
To: Open MPI Users
Subject: Re: [OMPI users] Building MPI on Windows


Once you've run the installer, you'll have a set of OpenMPI debug and 
release dlls, libraries to link to and the necessary include files.  
If you're installing the 64-bit version, it will end up here by default:


C:\Program Files (x86)\OpenMPI_v1.6-x64

Damien

On 13/06/2012 1:35 PM, vimalmat...@eaton.com wrote:

What do I do after I run it?


--

Vimal


From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
On Behalf Of Ralph Castain

Sent: Wednesday, June 13, 2012 3:32 PM
To: Open MPI Users
Subject: Re: [OMPI users] Building MPI on Windows


I'm not a Windozer, so I can't speak to the port for that platform. 
However, the conversation here seems strange to me. Have you actually 
read the instructions on the open-mpi.org web site?



Looks pretty simple to me. You download the .exe installer for either 
32 or 64 bits, and run it. You don't build OMPI from source - the 
distro contains everything you need to just run.



See:


http://www.open-mpi.org/software/ompi/v1.6/


for the software and some Windows notes.



On Jun 13, 2012, at 1:20 PM, Trent Creekmore wrote:





I just gave up and stuck with Unix/Linux.  Eclipse IDE offers a very 
nice plugin for developing and debugging MPI code named Parallel 
Tools Platform. Something not available in Visual Studio, except for 
a similar one made by Intel, but I believe you have to use their compiler.



You could always run Eclipse remotely from any Windows OS using a 
Secure Shell client and Xming (a Windows-based X server). That is 
what I do, and no more wasting time trying to get OMPI to run 
on Windows.



From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
On Behalf Of vimalmat...@eaton.com

Sent: Wednesday, June 13, 2012 2:09 PM
To: us...@open-mpi.org; us...@open-mpi.org
Subject: Re: [OMPI users] Building MPI on Windows


I've tried the Cygwin way.
Been hitting roadblocks for a week now. I've just uninstalled 
everything and started from scratch again.


--
Vimal


-Original Message-
From: users-boun...@open-mpi.org on behalf of Trent Creekmore
Sent: Wed 6/13/2012 2:47 PM
To: 'Open MPI Users'
Subject: Re: [OMPI users] Building MPI on Windows

This may or may not be helpful, but I have tried the Windows 
offerings. I have never gotten anything to function as expected, 
whether compiling or using the available binaries. I think they just 
don't work at all.




My suggestion, which I feel would be easier and less of a headache, 
would be to install something like CygWin, which would give you a 
Unix/Linux-like environment running under Windows.


You would only need to compile it in CygWin just like the Linux/Unix 
docs say to do.




I don't know if anyone else has done it this way or not.





From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
On Behalf Of vimalmat...@eaton.com

Sent: Wednesday, June 13, 2012 1:32 PM
To: us...@open-mpi.org
Subject: [OMPI users] Building MPI on Windows



Hi,



I'm t

Re: [OMPI users] Building MPI on Windows

2012-06-13 Thread Damien

Vimal,

Here's how to build OpenMPI with Visual Studio and CMake.  These are 
exact steps.


1)  Download this:  
http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.tar.gz


2)  Extract that to somewhere on your hard drive.  My path was 
C:\projects6\openmpi-1.6.  I renamed it to C:\projects6\openmpi-1.6-64.  
You can use 7-Zip to extract tgz archives on Windows.


3)  Start the CMake GUI.  Set the source and build directories.  Mine 
were C:/projects6/openmpi-1.6-64 and C:/projects6/openmpi-1.6-64/build


4)  Press Configure.  Say Yes if it asks you to create the build directory.

5)  Look at the generator view that comes up.  I chose Visual Studio 9 
2008 Win64 but you can select whatever you have on your system.  Click 
Specify Native Compilers.  This will make sure you get the right compilers.


In the C and C++ compiler, I put "C:\Program Files (x86)\Microsoft 
Visual Studio 9.0\VC\bin\amd64\cl.exe".  You can navigate to which one 
you have.


In the Fortran compiler, I put "C:/Program Files (x86)/Intel/Composer XE 
2011 SP1/bin/intel64/ifort.exe".  You can navigate to which one you have.


Press Finish once you have selected the compilers and the config will 
start.  Takes a couple of minutes on my laptop.


First things first.  If you want a Release build, you have to change a 
CMake setting.  The 5th line down in the red window will say 
CMAKE_BUILD_TYPE.  Change the text (type it in) to say Release if you 
want a Release build, otherwise the final install step won't work.


Also, further down the red window there's some options you should 
change.  Scroll down through that window, there's a lot to choose from.  
I usually check OMPI_RELEASE_BUILD, OMPI_WANT_F77_BINDINGS and 
OMPI_WANT_F90_BINDINGS.  OMPI_WANT_CXX_BINDINGS should already be 
checked.  (Note to Jeff & Shiqing: We should probably work out a good 
set of standard choices if there are others on top of these).


6)  Press Configure again, and CMake will go through identifying the 
Fortran compiler if you asked for Fortran bindings and a few other 
things.  It should work fine with the options above.


7)  Assuming that it was fine, press Generate.  That produces an 
OpenMPI.sln project for Visual Studio, it's in whatever directory you 
specified as your build directory.


8)  Open the sln in Visual Studio.  Open the Properties of "Solution 
'OpenMPI'".  Look at Configuration Properties - Configuration.  Check 
the Configuration button at the top, it might say Debug, but it should 
say Release if you changed CMAKE_BUILD_TYPE earlier.  If it says Debug, 
change the drop-down to Release.  Click OK.  Then open the Properties 
again and make sure what you selected is right, otherwise change it, 
press OK again.  Visual Studio does that sometimes.


9)  Moment of Truth.  Right-click on "Solution 'OpenMPI'" and select 
Build Solution.  The compile should start.


10)  Wait.

11)  Wait some more.

12)  Grab a snack (or a beer.), this will take a while, 15-20 minutes.

13)  If the build was successful (it should be), there's one last step.  
Right-click on the INSTALL sub-project and click Build.  That will 
organise the header files, libraries and binaries into a set of 
directories, under whatever directory you said your source is in with 
CMake.  On mine it was C:\projects6\openmpi-1.6\installed.  In there 
you'll see bin, include, lib and share directories.  That's a complete 
OpenMPI build with everything you need.


If you'd like to try this and provide feedback, we can tweak the 
instructions until they're bulletproof.  I can help you build with 
whatever compilers you have on your system, just post back to the list.  
I don't do Cygwin though.  Doing HPC on Windows is weird enough.  :-)


Damien

On 13/06/2012 1:35 PM, vimalmat...@eaton.com wrote:


What do I do after I run it?

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Ralph Castain

*Sent:* Wednesday, June 13, 2012 3:32 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

I'm not a Windozer, so I can't speak to the port for that platform. 
However, the conversation here seems strange to me. Have you actually 
read the instructions on the open-mpi.org <http://open-mpi.org> web site?


Looks pretty simple to me. You download the .exe installer for either 
32 or 64 bits, and run it. You don't build OMPI from source - the 
distro contains everything you need to just run.


See:

http://www.open-mpi.org/software/ompi/v1.6/

for the software and some Windows notes.

On Jun 13, 2012, at 1:20 PM, Trent Creekmore wrote:



I just gave up and stuck with Unix/Linux.  Eclipse IDE offers a very 
nice plugin for developing and debugging MPI code named Parallel Tools 
Platform. Something not available in Visual Studio, except for a similar 
one made by Intel, but I believe you have to use their compiler.


You could always run Eclipse remotely from any

Re: [OMPI users] Building MPI on Windows

2012-06-13 Thread Damien
Funny you should ask for that, I'm doing that right now.  First pass 
I'll post right here as specific instructions for Vimal in a few 
minutes, then you and Shiqing and I can assemble something complete 
off-list.


Damien

On 13/06/2012 2:56 PM, Jeff Squyres wrote:

Yes, I guess it is fair to say that Windows is definitely a secondary platform 
for Open MPI.  :-(

What would be great is if some people could write up a set of cohesive docs for 
the Windows stuff.  Someone mentioned some prereq's earlier in this thread that 
are probably not well documented because all of us POSIX-ish people just assume 
them (e.g., having ssh installed).

And then provide some guidance on how to compile and link with the OMPI 
binaries in Visual Studio (or whatever the normal thing is on the Windows 
platform).  And if you want to build OMPI from source, guidance on how to do so 
(it sounded like someone cited something long out of date earlier in this 
thread -- are those out-of-date docs still hanging around somewhere?  If so, we 
should eliminate/update them!).

I'm not a Windows-based developer myself, so I really have little to offer here.

But if some of you Windows people can get together and write up a good README 
and/or set of FAQ questions, I'll be happy to put them together and put the 
README in SVN and/or the FAQ questions on the web site.



On Jun 13, 2012, at 4:40 PM, Trent Creekmore wrote:


Well, you can now see why I gave up on trying to get it to function with 
Windows.
I would say work with Linux using the guide I did, to at least get you 
started on doing some work instead of wasting a lot of time on that.
If you want it to function with Windows that badly just keep working on it in 
spare time. Meanwhile get to coding now using the method I told you.


From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of vimalmat...@eaton.com
Sent: Wednesday, June 13, 2012 3:23 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Building MPI on Windows

Yes, did that too.

--
Vimal
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Trent Creekmore
Sent: Wednesday, June 13, 2012 4:21 PM
To: 'Open MPI Users'
Subject: Re: [OMPI users] Building MPI on Windows

I meant the actual files, not including folders.
But you won’t need the Bin files.

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of vimalmat...@eaton.com
Sent: Wednesday, June 13, 2012 3:13 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Building MPI on Windows

I put all the folders (bin, include, etc, lib and share) in the root folder of 
the project. No success.
Tried adding all the .h files in include in the Header files folder under the 
Project in VS. Still no go.

--Vimal

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Trent Creekmore
Sent: Wednesday, June 13, 2012 3:52 PM
To: 'Open MPI Users'
Subject: Re: [OMPI users] Building MPI on Windows

I find the easiest way to know if LIB and DLL function correctly, and to avoid 
confusion over the correct setup, is to just drop them all in the root directory of 
your project. VS should see them upon load of that project.

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of vimalmat...@eaton.com
Sent: Wednesday, June 13, 2012 2:47 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Building MPI on Windows

Yes, and then I added the libraries folder in Visual Studio under Project 
Properties>Linker>General>Additional Library Directories.
I tried compiling simple ‘Hello World’ code and I get an error message saying 
‘Cannot open: No such file or directory’.

What step am I missing?

--
Vimal

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Damien
Sent: Wednesday, June 13, 2012 3:43 PM
To: Open MPI Users
Subject: Re: [OMPI users] Building MPI on Windows

Once you've run the installer, you'll have a set of OpenMPI debug and release 
dlls, libraries to link to and the necessary include files.  If you're 
installing the 64-bit version, it will end up here by default:

C:\Program Files (x86)\OpenMPI_v1.6-x64

Damien

On 13/06/2012 1:35 PM, vimalmat...@eaton.com wrote:
What do I do after I run it?

--
Vimal

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Ralph Castain
Sent: Wednesday, June 13, 2012 3:32 PM
To: Open MPI Users
Subject: Re: [OMPI users] Building MPI on Windows

I'm not a Windozer, so I can't speak to the port for that platform. However, 
the conversation here seems strange to me. Have you actually read the 
instructions on the open-mpi.org web site?

Looks pretty simple to me. You download the .exe installer for either 32 or 64 
bits, and run it. You don't build OMPI from source - the distro contains 
everything you need to just run.

See:

http://www.open-mpi.org/software/ompi/v1.6/

for the software and some Windows notes.


On Jun 13, 2012, at 1:20 

Re: [OMPI users] Building MPI on Windows

2012-06-13 Thread Damien
Once you've run the installer, you'll have a set of OpenMPI debug and 
release dlls, libraries to link to and the necessary include files.  If 
you're installing the 64-bit version, it will end up here by default:


C:\Program Files (x86)\OpenMPI_v1.6-x64

Damien

On 13/06/2012 1:35 PM, vimalmat...@eaton.com wrote:


What do I do after I run it?

--

Vimal

*From:*users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] 
*On Behalf Of *Ralph Castain

*Sent:* Wednesday, June 13, 2012 3:32 PM
*To:* Open MPI Users
*Subject:* Re: [OMPI users] Building MPI on Windows

I'm not a Windozer, so I can't speak to the port for that platform. 
However, the conversation here seems strange to me. Have you actually 
read the instructions on the open-mpi.org <http://open-mpi.org> web site?


Looks pretty simple to me. You download the .exe installer for either 
32 or 64 bits, and run it. You don't build OMPI from source - the 
distro contains everything you need to just run.


See:

http://www.open-mpi.org/software/ompi/v1.6/

for the software and some Windows notes.

On Jun 13, 2012, at 1:20 PM, Trent Creekmore wrote:



I just gave up and stuck with Unix/Linux.  Eclipse IDE offers a very 
nice plugin for developing and debugging MPI code named Parallel Tools 
Platform. Something not available in Visual Studio, except for a similar 
one made by Intel, but I believe you have to use their compiler.


You could always run Eclipse remotely from any Windows OS using a 
Secure Shell client and Xming (a Windows-based X server). That is what 
I do, and no more wasting time trying to get OMPI to run on 
Windows.


*From:*users-boun...@open-mpi.org 
<mailto:users-boun...@open-mpi.org>[mailto:users-boun...@open-mpi.org] 
<mailto:[mailto:users-boun...@open-mpi.org]>*On Behalf 
Of*vimalmat...@eaton.com <mailto:vimalmat...@eaton.com>

*Sent:*Wednesday, June 13, 2012 2:09 PM
*To:*us...@open-mpi.org <mailto:us...@open-mpi.org>;us...@open-mpi.org 
<mailto:us...@open-mpi.org>

*Subject:*Re: [OMPI users] Building MPI on Windows

I've tried the Cygwin way.
Been hitting roadblocks for a week now. I've just uninstalled 
everything and started from scratch again.


--
Vimal


-Original Message-
From:users-boun...@open-mpi.org <mailto:users-boun...@open-mpi.org>on 
behalf of Trent Creekmore

Sent: Wed 6/13/2012 2:47 PM
To: 'Open MPI Users'
Subject: Re: [OMPI users] Building MPI on Windows

This may or may not be helpful, but I have tried the Windows 
offerings. I have never gotten anything to function as expected, 
whether compiling or using the available binaries. I think they just don't work at all.




My suggestion, which I feel would be easier and less of a headache, 
would be to install something like CygWin, which would give you a 
Unix/Linux-like environment running under Windows.


You would only need to compile it in CygWin just like the Linux/Unix 
docs say to do.




I don't know if anyone else has done it this way or not.





From:users-boun...@open-mpi.org 
<mailto:users-boun...@open-mpi.org>[mailto:users-boun...@open-mpi.org] 
On Behalf ofvimalmat...@eaton.com <mailto:vimalmat...@eaton.com>

Sent: Wednesday, June 13, 2012 1:32 PM
To:us...@open-mpi.org <mailto:us...@open-mpi.org>
Subject: [OMPI users] Building MPI on Windows



Hi,



I'm trying to follow the ReadMe file to build OpenMPI on Windows:



Step 1: Untar the contrib/platform/win32/ompi-static.tgz tarball in 
the root directory of the Open MPI distribution.


I do not have ompi-static.tgz in the mentioned path.



Step 2: Go in the ompi/datatype subdirectory in the Open MPI 
distribution and copy the following:


datatype_pack.c   to datatype_pack_checksum.c

datatype_unpack.c to datatype_unpack_checksum.c

I do not see these files in the mentioned path.



Step 4: Open the Open MPI project (.sln file) from the root directory 
of the distribution.


I don't have a .sln file anywhere



Help anyone? Shiqing?



Thanks,

Vimal



From:users-boun...@open-mpi.org 
<mailto:users-boun...@open-mpi.org>[mailto:users-boun...@open-mpi.org] 
On Behalf ofvimalmat...@eaton.com <mailto:vimalmat...@eaton.com>

Sent: Wednesday, June 13, 2012 11:21 AM
To:f...@hlrs.de <mailto:f...@hlrs.de>
Cc:us...@open-mpi.org <mailto:us...@open-mpi.org>
Subject: Re: [OMPI users] Help with buidling MPI(Error: mpi.h not found)



I did make uninstall. I also deleted the folders of the other 
implementation.


I ran ./configure and make all install.

At the end of the make I saw a bunch of errors for the makefiles. I've 
attached the .log and .out files.




Please tell me if I'm on the right track.



Thanks,

Vimal



From: Shiqing Fan [mailto:f...@hlrs.de]
Sent: Wednesday, June 13, 2012 9:37 AM
To: Mathew, Vimal
Cc: Open MPI Users
Subject: Re: [OMPI users] Help with buidling MPI(Error: mpi.h not found)



Hi Vimal,

I'm not sure how you can uninstall  the other one, may be 'make 
uninstall' from the source? O

Re: [OMPI users] Help with buidling MPI(Error: mpi.h not found)

2012-06-12 Thread Damien
Hey guys, that "Device or resource busy" error looks suspiciously like 
an overzealous antivirus package.  That happens on Windows when linkers 
run and change the timestamps and sizes of exes and dlls (like, say, 
conftest.exe).  The antivirus software then chucks a hissy fit and locks 
the executable because it thinks it's a virus.  A few seconds later when 
nothing bad happens, it releases it again, but often too late for other 
users of the exe.


If you're running antivirus software, turn it off or add an exception 
for your build directory, and try again.


Damien

On 12/06/2012 12:49 PM, Jeff Squyres wrote:

Now I pass you off to Shiqing, our Windows guy...  :-)


On Jun 12, 2012, at 2:44 PM,<vimalmat...@eaton.com>  wrote:


I ran OpenMPI_v1.6-1_win64.exe.
Now I get this message:
C9995799@SOUMIWHP5003567 ~/openmpi-1.6
$ mpicc hello.c -o hello
WARNING: mpicc expected to find liblammpio.* in /usr/local/lib
WARNING: MPI-2 IO support will be disabled
gcc: hello.c: No such file or directory
mpicc: No such file or directory
--
Vimal


-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Jeff Squyres
Sent: Tuesday, June 12, 2012 2:30 PM
To: Open MPI Users
Subject: Re: [OMPI users] Help with buidling MPI(Error: mpi.h not found)

Probably easier to just run the Open MPI binary installer.


On Jun 12, 2012, at 2:24 PM,<vimalmat...@eaton.com>  wrote:


So I simply download and run OpenMPI_v1.6-1_win64.exe?
Or is there a way to fix the Fortran compiler?

--
Vimal


-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On Behalf Of Jeff Squyres
Sent: Tuesday, June 12, 2012 2:20 PM
To: Open MPI Users
Subject: Re: [OMPI users] Help with buidling MPI(Error: mpi.h not
found)

It does not look like you successfully built Open MPI -- it looks like
Open MPI's configure script aborted because your Fortran compiler
wasn't
behaving:

-
checking if Fortran 77 compiler supports COMPLEX*16... yes checking
size of Fortran 77 COMPLEX*16... 16 checking alignment of Fortran
COMPLEX*16... 8 checking if Fortran 77 compiler supports COMPLEX*32...
no checking for max Fortran MPI handle index... ( 0x7fff<
2147483647 ? 0x7fff : 2147483647 ) checking Fortran value for

.TRUE.

logical type... configure: error: Could not determine value of Fortran
.TRUE..  Aborting.
-

Anything that happened after that is somewhat irrelevant because Open
MPI didn't configure properly.

Looking in config.log, I see why:

-
configure:44290: checking Fortran value for .TRUE. logical type
configure:44386: gcc -DNDEBUG -g -O2 -finline-functions
-fno-strict-aliasing -I. -c conftest.c
configure:44393: $? = 0
configure:44403: gfortran  -o conftest conftest.o conftestf.f
/usr/lib/gcc/i686-pc-cygwin/4.5.3/../../../../i686-pc-cygwin/bin/ld:
reopening conftest.exe: Device or resource busy

/usr/lib/gcc/i686-pc-cygwin/4.5.3/../../../../i686-pc-cygwin/bin/ld:
final link failed: Device or resource busy
collect2: ld returned 1 exit status
configure:44410: $? = 1
configure:44427: error: Could not determine value of Fortran .TRUE..
Aborting.
-

All this may be irrelevant, though, because it looks like you're
building on Windows.

In that case, you might well want to just download the OMPI Windows
binaries.  I don't know offhand if we support building on Windows with
the normal configure / make methodology; we normally use cmake to
build from source on Windows.



On Jun 12, 2012, at 1:25 PM,<vimalmat...@eaton.com>  wrote:


Hi,

I was directed to the OpenMPI website from the Boost Libraries page
to

install an MPI Installation.

I've followed all the steps in the installation guide to configure
and

build MPI. When I try to compile the hello.c program, which contains
<mpi.h>, I get an error message saying mpi.h does not exist. I've attached the
config.log, config.out, make.out , ompi_info all and make-install.out

files.

Any help will be greatly appreciated!

Thanks,
Vimal Mathew

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

___
users mailing list
us...@open-mpi.org

Re: [OMPI users] Problem with prebuilt ver 1.5.3 for windows

2011-06-24 Thread Damien
Yeah, and I'm wrong too, InterlockedCompareExchange64 is available on 
32-bit.  I think this is one for Shiqing.


You could build OpenMPI yourself if you have VS2008.  It's pretty easy 
to do.


Damien

On 24/06/2011 1:51 PM, Jeffrey A Cummings wrote:

Damien -

I'm using the 32 bit version of OpenMPI.  I think the 64 refers to the 
size of integer that the function works on, not the operating system 
version.  I didn't have this problem with VS 2008, so I think they've 
changed something in VS 2010.  I just don't know how to fix it.


- Jeff




From: Damien <dam...@khubla.com>
To: Open MPI Users <us...@open-mpi.org>
Date: 06/24/2011 02:35 PM
Subject: Re: [OMPI users] Problem with prebuilt ver 1.5.3 for windows
Sent by: users-boun...@open-mpi.org




Jeff,

InterlockedCompareExchange64 is a 64-bit-only instruction.  Are you 
running XP 32-bit (I think you are b/c I don't think there was a XP64 
SP3...).  You need the 32-bit OpenMPI version.  If you are running a 
64-bit OS, but building a 32-bit executable, that instruction isn't 
available in 32-bit and you still need to link with 32-bit OpenMPI.


Damien

On 24/06/2011 12:16 PM, Jeffrey A Cummings wrote:
I'm having a problem using the prebuilt Windows version 1.5.3 with my 
app built with MS VisualStudio 2010.  I get an error message (for each 
node) that says: "The procedure entry point 
InterlockedCompareExchange64 could not be located in the dynamic link 
library KERNEL32.dll".  I'm running Windows XP, sp 3.


- Jeff Cummings


___
users mailing list
_users@open-mpi.org_ <mailto:us...@open-mpi.org>
_http://www.open-mpi.org/mailman/listinfo.cgi/users
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Problem with prebuilt ver 1.5.3 for windows

2011-06-24 Thread Damien

Jeff,

InterlockedCompareExchange64 is a 64-bit-only instruction.  Are you 
running XP 32-bit (I think you are b/c I don't think there was a XP64 
SP3...).  You need the 32-bit OpenMPI version.  If you are running a 
64-bit OS, but building a 32-bit executable, that instruction isn't 
available in 32-bit and you still need to link with 32-bit OpenMPI.
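
For what it's worth, that entry point is the standard Win32 64-bit atomic compare-and-swap, and the error message shows the prebuilt OpenMPI trying to resolve it from KERNEL32.dll when your app loads.  A trivial sketch of what the function itself does, nothing OpenMPI-specific:

/* cas64_sketch.c -- illustration only.  InterlockedCompareExchange64
 * atomically replaces *target with the new value if *target still
 * equals the expected value, and returns whatever was there before. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    volatile LONGLONG target = 42;
    LONGLONG previous = InterlockedCompareExchange64(&target, 100, 42);
    printf("previous=%I64d, target is now %I64d\n",
           previous, (LONGLONG)target);
    return 0;
}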


Damien

On 24/06/2011 12:16 PM, Jeffrey A Cummings wrote:
I'm having a problem using the prebuilt Windows version 1.5.3 with my 
app built with MS VisualStudio 2010.  I get an error message (for each 
node) that says: "The procedure entry point 
InterlockedCompareExchange64 could not be located in the dynamic link 
library KERNEL32.dll".  I'm running Windows XP, sp 3.


- Jeff Cummings


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] v1.5.3-x64 does not work on Windows 7 workgroup

2011-05-20 Thread Damien

MPI can get through your firewall, right?

Damien

On 20/05/2011 12:53 PM, Jason Mackay wrote:
I have verified that disabling UAC does not fix the problem. xhlp.exe 
starts, threads spin up on both machines, CPU usage is at 80-90% but 
no progress is ever made.


From this state, Ctrl-break on the head node yields the following output:

[REMOTEMACHINE:02032] [[20816,1],0]-[[20816,0],0] 
mca_oob_tcp_msg_recv: readv failed: Unknown error (108)
[REMOTEMACHINE:05064] [[20816,1],1]-[[20816,0],0] 
mca_oob_tcp_msg_recv: readv failed: Unknown error (108)
[REMOTEMACHINE:05420] [[20816,1],2]-[[20816,0],0] 
mca_oob_tcp_msg_recv: readv failed: Unknown error (108)
[REMOTEMACHINE:03852] [[20816,1],3]-[[20816,0],0] 
mca_oob_tcp_msg_recv: readv failed: Unknown error (108)
[REMOTEMACHINE:05436] [[20816,1],4]-[[20816,0],0] 
mca_oob_tcp_msg_recv: readv failed: Unknown error (108)
[REMOTEMACHINE:04416] [[20816,1],5]-[[20816,0],0] 
mca_oob_tcp_msg_recv: readv failed: Unknown error (108)
[REMOTEMACHINE:02032] [[20816,1],0] routed:binomial: Connection to 
lifeline [[20816,0],0] lost
[REMOTEMACHINE:05064] [[20816,1],1] routed:binomial: Connection to 
lifeline [[20816,0],0] lost
[REMOTEMACHINE:05420] [[20816,1],2] routed:binomial: Connection to 
lifeline [[20816,0],0] lost
[REMOTEMACHINE:03852] [[20816,1],3] routed:binomial: Connection to 
lifeline [[20816,0],0] lost
[REMOTEMACHINE:05436] [[20816,1],4] routed:binomial: Connection to 
lifeline [[20816,0],0] lost
[REMOTEMACHINE:04416] [[20816,1],5] routed:binomial: Connection to 
lifeline [[20816,0],0] lost




> From: users-requ...@open-mpi.org
> Subject: users Digest, Vol 1911, Issue 1
> To: us...@open-mpi.org
> Date: Fri, 20 May 2011 08:14:13 -0400
>
> Send users mailing list submissions to
> us...@open-mpi.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
> users-requ...@open-mpi.org
>
> You can reach the person managing the list at
> users-ow...@open-mpi.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
> 1. Re: Error: Entry Point Not Found (Zhangping Wei)
> 2. Re: Problem with MPI_Request, MPI_Isend/recv and
> MPI_Wait/Test (George Bosilca)
> 3. Re: v1.5.3-x64 does not work on Windows 7 workgroup (Jeff Squyres)
> 4. Re: Error: Entry Point Not Found (Jeff Squyres)
> 5. Re: openmpi (1.2.8 or above) and Intel composer XE 2011 (aka
> 12.0) (Jeff Squyres)
> 6. Re: Openib with > 32 cores per node (Jeff Squyres)
> 7. Re: MPI_COMM_DUP freeze with OpenMPI 1.4.1 (Jeff Squyres)
> 8. Re: Trouble with MPI-IO (Jeff Squyres)
> 9. Re: Trouble with MPI-IO (Tom Rosmond)
> 10. Re: Problem with MPI_Request, MPI_Isend/recv and
> MPI_Wait/Test (David Büttner)
> 11. Re: Trouble with MPI-IO (Jeff Squyres)
> 12. Re: MPI_Alltoallv function crashes when np > 100 (Jeff Squyres)
> 13. Re: MPI_ERR_TRUNCATE with MPI_Allreduce() error, but only
> sometimes... (Jeff Squyres)
> 14. Re: Trouble with MPI-IO (Jeff Squyres)
>
>
> --
>
> Message: 1
> Date: Thu, 19 May 2011 09:13:53 -0700 (PDT)
> From: Zhangping Wei <zhangping_...@yahoo.com>
> Subject: Re: [OMPI users] Error: Entry Point Not Found
> To: us...@open-mpi.org
> Message-ID: <101342.7961...@web111818.mail.gq1.yahoo.com>
> Content-Type: text/plain; charset="gb2312"
>
> Dear Paul,
>
> I checked the way 'mpirun -np N ' you mentioned, but it was the 
same

> problem.
>
> I guess it may be related to the system I used, because I have used it 
correctly in

> another XP 32 bit system.
>
> I look forward to more advice.Thanks.
>
> Zhangping
>
>
>
>
> 
>  "users-requ...@open-mpi.org" <users-requ...@open-mpi.org>
>  us...@open-mpi.org
> ?? 2011/5/19 () 11:00:02 
> ??  users Digest, Vol 1910, Issue 2
>
> Send users mailing list submissions to
> us...@open-mpi.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
> users-requ...@open-mpi.org
>
> You can reach the person managing the list at
> users-ow...@open-mpi.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
> 1. Re: Error: Entry Point Not Found (Paul van der Walt)
> 2. Re: Openib with > 32 cores per node (Robert Horton)
> 3. Re: Openib with > 32 cores per node (Samuel K. Gutierrez)
>
>
> --

Re: [OMPI users] BUILDING OPENMPI ON UBUNTU WITH INTEL 11.1

2011-05-03 Thread Damien
That last error is because you don't have permission to install to /opt 
as a regular user.  You need to run that command as  "sudo make install".


Damien

On 03/05/2011 1:55 PM, Steph Bredenhann wrote:

I think you are a genius!

The new result is attached, it was only the last step make install that
looked suspect.

I'll appreciate if you can look at these results?

While I am at it, thank you a million times for making this available to the
public! Without openmpi I would not have been able to complete my PhD!!!

Thanks

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Jeff Squyres
Sent: Tuesday, May 03, 2011 21:27
To: Open MPI Users
Subject: Re: [OMPI users] BUILDING OPENMPI ON UBUNTU WITH INTEL 11.1

Ah, I see why your output is munged -- there's a bunch of ^M's in there.

It looks like OMPI's configure script got mucked up somehow.  Did you expand
the tarball on a windows machine and copy it over to a Linux box, perchance?
If so, try expanding it directly on your Linux machine.



On May 3, 2011, at 2:15 PM, Steph Bredenhann wrote:


Thanks for the speedy reply. The required file with information is
attached.

I first thought I must send the file to openmpi again, sorry if that was
wrong.

Thanks


--
Steph Bredenhann Pr.Eng Pr.CPM


Quoting Jeff Squyres<jsquy...@cisco.com>:


Your output appears jumbled.  Can you send all the data listed here:

http://www.open-mpi.org/community/help/

On May 3, 2011, at 1:36 PM, Steph Bredenhann wrote:


Dear Sir/Madam

I want to build openmpi for use with INTEL compilers (version 11.1)
on my Ubuntu 10.10 x64 system. I am using the guidelines from


http://software.intel.com/en-us/articles/performance-tools-for-software-developers-building-open-mpi-with-the-intel-compilers/

and specifically the following instructions:


./configure --prefix=/usr/local CC=icc CXX=icpc F77=ifort FC=ifort
... output of configure ...
make all install
... output of build and installation ...

The result is shown below. As can be seen, it was unsuccessful. I'd
appreciate some guidance here as I am nearing the deadline for a project
that is part of my research for my PhD.

Thanks in advance.

steph@sjb-linux:/src/openmpi-1.4.3$ ./configure
--prefix=/opt/openmpi-1.4.3 CC=icc CXX=icpc F77=ifort FC=ifort
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
: command not foundconfig/missing: line 3:
: command not foundconfig/missing: line 5:
: command not foundconfig/missing: line 9:
: command not foundconfig/missing: line 14:
: command not foundconfig/missing: line 19:
: command not foundconfig/missing: line 24:
: command not foundconfig/missing: line 29:
/src/openmpi-1.4.3/config/missing: line 49: syntax error near
unexpected

token

`'n
'src/openmpi-1.4.3/config/missing: line 49: `case $1 in
configure: WARNING: `missing' script is too old or missing checking
for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk...
gawk checking whether make sets $(MAKE)... yes checking how to
create a ustar tar archive... gnutar



=
===

== Configuring Open MPI


=
===

*** Checking versions
: integer expression expected 3
: integer expression expected 0
.4ecking Open MPI version... 1
checking Open MPI release date... Oct 05, 2010 checking Open MPI
Subversion repository version... r23834
: integer expression expected 3
: integer expression expected 0
.4ecking Open Run-Time Environment version... 1 checking Open
Run-Time Environment release date... Oct 05, 2010 checking Open
Run-Time Environment Subversion repository version... r23834
: integer expression expected 3
: integer expression expected 0
.4ecking Open Portable Access Layer version... 1 checking Open
Portable Access Layer release date... Oct 05, 2010 checking Open
Portable Access Layer Subversion repository version... r23834
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found
: command not found

*** Initialization, setup
configure: builddir: /src/openmpi-1.4.3
configure: srcdir: /src/openmpi-1.4.3
configure: error: cannot run /bin/sh config/config.sub
steph@sjb-linux:/src/openmpi-1.4.3$ make all install
make: *** No rule to make target `all'.  Stop.
steph@sjb-linux:/src/openmpi-1.4.3$ make install
make: *** No rule to make target `install'.  Stop.
steph@sjb-linux:/src/openmpi-1.4.3$


Regards

Steph Bredenhann






--
This message was sent by Adept Internet's webmail.
http:/

Re: [OMPI users] missing symbols in Windows 1.5.3 binaries?

2011-04-16 Thread Damien

Shiqing,

I'm using Composer XE2011 and Visual Studio 2008.  VS2008 is doing the 
linking.  I'll do a build of 1.5.3 myself and see how the symbols turn out.


Damien

On 16/04/2011 1:50 PM, Shiqing Fan wrote:

Hi Damien,

Which version of Intel MPI do you use? The only difference between 
1.5.3 and .1.52 I can tell is that 1.5.3 was built with Intel Fortran 
Composer XE 2011. That might be the reason of the problem.


Shiqing

On 4/16/2011 4:01 AM, Damien wrote:

Hiya,

I just tested the 1.5.3 binaries and my link pass broke.  Using 1.5.3 
I get unresolved externals on things like _MPI_NULL_COPY_FN.  On 
1.5.2.2 it's fine.  I did a dumpbin on libmpi.lib for both versions, 
and in 1.5.3 there's upper-case symbols for _OMPI_C_MPI_NULL_COPY_FN, 
but not _MPI_NULL_COPY_FN.  In the 1.5.2.2 libmpi.lib there's symbols 
for both.


Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] missing symbols in Windows 1.5.3 binaries?

2011-04-15 Thread Damien

Hiya,

I just tested the 1.5.3 binaries and my link pass broke.  Using 1.5.3 I 
get unresolved externals on things like _MPI_NULL_COPY_FN.  On 1.5.2.2 
it's fine.  I did a dumpbin on libmpi.lib for both versions, and in 
1.5.3 there's upper-case symbols for _OMPI_C_MPI_NULL_COPY_FN, but not 
_MPI_NULL_COPY_FN.  In the 1.5.2.2 libmpi.lib there's symbols for both.
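
For context, the kind of user code that can end up referencing those 
symbols is old-style keyval creation -- this is just an illustrative 
sketch I'm adding here, not the actual code that broke on the link:

#include <mpi.h>

int main(int argc, char* argv[])
{
  int key;
  MPI_Init(&argc, &argv);
  // MPI_NULL_COPY_FN / MPI_NULL_DELETE_FN are the predefined no-op
  // callbacks from MPI-1; passing them to MPI_Keyval_create is the sort
  // of call that makes the linker go looking for a _MPI_NULL_COPY_FN
  // (or _OMPI_C_MPI_NULL_COPY_FN) symbol in libmpi.lib.
  MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &key, (void*)0);
  MPI_Keyval_free(&key);
  MPI_Finalize();
  return 0;
}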


Damien


Re: [OMPI users] Building OpenMPI on Windows 7

2011-03-16 Thread Damien

Hiral,

To add to Shiqing's comments, 1.5 has been running great for me on 
Windows for over 6 months since it was in beta.  You should give it a try.


Damien

On 16/03/2011 8:34 AM, Shiqing Fan wrote:

Hi Hiral,

> it's only experimental in the 1.4 series. And there are only F77 
bindings on Windows, no F90 bindings.
Can you please provide steps to build 1.4.3 with experimental f77 
bindings on Windows?
Well, I highly recommend to use 1.5 series, but I can also take a look 
and probably provide you a patch for 1.4 .


BTW: Do you have any idea on: when next stable release with full 
fortran support on Windows would be available?

There is no plan yet.


Regards,
Shiqing


Thank you.
-Hiral
On Wed, Mar 16, 2011 at 6:59 PM, Shiqing Fan <f...@hlrs.de 
<mailto:f...@hlrs.de>> wrote:


Hi Hiral,

1.3.4 is quite old, please use the latest version. As Damien
noted, the full Fortran support is in the 1.5 series; it's only
experimental in the 1.4 series. And there are only F77 bindings on
Windows, no F90 bindings. Another choice is to use the released
binary installers to avoid compiling everything by yourself.


Best Regards,
Shiqing


On 3/16/2011 11:47 AM, hi wrote:


Greetings!!!

I am trying to build openmpi-1.3.4 and openmpi-1.4.3 on Windows
7 (64-bit OS), but am running into some difficulty...

My build environment:

OS : Windows 7 (64-bit)

C/C++ compiler : Visual Studio 2008 and Visual Studio 2010

Fortran compiler: Intel "ifort"

Approach: followed the "First Approach" described in
README.WINDOWS file.

*1) Using openmpi-1.3.4:***

Observed build time error in version.cc(136). This error is
related to getting SVN version information as described in
http://www.open-mpi.org/community/lists/users/2010/01/11860.php.
As we are using this openmpi-1.3.4 stable version on Linux
platform, is there any fix to this compile time error?

*2) Using openmpi-1.4.3:***

Builds properly without F77/F90 support (i.e. Skipping
MPI F77 interface).

Now to get the "mpif*.exe" for fortran programs, I provided
proper "ifort" path and enabled "OMPI_WANT_F77_BINDINGS=ON"
and/or OMPI_WANT_F90_BINDINGS=ON flag; but getting following
errors...

*   2.a) "ifort" with OMPI_WANT_F77_BINDINGS=ON gave following
errors... *

Check ifort external symbol convention...

Check ifort external symbol convention...single underscore

Check if Fortran 77 compiler supports LOGICAL...

Check if Fortran 77 compiler supports LOGICAL...done

Check size of Fortran 77 LOGICAL...

CMake Error at
contrib/platform/win32/CMakeModules/f77_get_sizeof.cmake:76
(MESSAGE):

Could not determine size of LOGICAL.

Call Stack (most recent call first):

contrib/platform/win32/CMakeModules/f77_check.cmake:82
(OMPI_F77_GET_SIZEOF)

contrib/platform/win32/CMakeModules/ompi_configure.cmake:1123
(OMPI_F77_CHECK)

CMakeLists.txt:87 (INCLUDE)

Configuring incomplete, errors occurred!

*2.b) "ifort" with OMPI_WANT_F90_BINDINGS=ON gave following
errors... *

Skipping MPI F77 interface

CMake Error: File

C:/openmpi-1.4.3/contrib/platform/win32/ConfigFiles/mpif90-wrapper-data.txt.cmake
does not exist.

CMake Error at ompi/tools/CMakeLists.txt:93 (CONFIGURE_FILE):

configure_file Problem configuring file

CMake Error: File

C:/openmpi-1.4.3/contrib/platform/win32/ConfigFiles/mpif90-wrapper-data.txt.cmake
does not exist.

CMake Error at ompi/tools/CMakeLists.txt:97 (CONFIGURE_FILE):

configure_file Problem configuring file

looking for ccp...

looking for ccp...not found.

looking for ccp...

looking for ccp...not found.

Configuring incomplete, errors occurred!

*2.c) "ifort" with OMPI_WANT_F77_BINDINGS=ON and
OMPI_WANT_F90_BINDINGS=ON gave following errors... *

Check ifort external symbol convention...

Check ifort external symbol convention...single underscore

Check if Fortran 77 compiler supports LOGICAL...

Check if Fortran 77 compiler supports LOGICAL...done

Check size of Fortran 77 LOGICAL...

CMake Error at
contrib/platform/win32/CMakeModules/f77_get_sizeof.cmake:76
(MESSAGE):

Could not determine size of LOGICAL.

Call Stack (most recent call first):

contrib/platform/win32/CMakeModules/f77_check.cmake:82
(OMPI_F77_GET_SIZEOF)

contrib/platform/win32/CMakeModules/ompi_configure.cmake:1123
(OMPI_F77_CHECK)

CMakeLists.txt:87 (INCLUDE)

Configuring incomplete, errors occurred!

Any idea on resolving above errors to get mpif*.exe generated on
Windows platform using "ifort"?

Please let me know if more information is required.
Thank you in advance.

-Hiral


___
u

Re: [OMPI users] Building OpenMPI on Windows 7

2011-03-16 Thread Damien

Hi Hiral,

The 1.4 series doesn't have Fortran support on Windows.  You need to use 
1.5.


Damien

On 16/03/2011 4:47 AM, hi wrote:


Greetings!!!

I am trying to build openmpi-1.3.4 and openmpi-1.4.3 on Windows 7 
(64-bit OS), but am running into some difficulty...


My build environment:

OS : Windows 7 (64-bit)

C/C++ compiler : Visual Studio 2008 and Visual Studio 2010

Fortran compiler: Intel "ifort"

Approach: followed the "First Approach" described in README.WINDOWS file.

*1) Using openmpi-1.3.4:***

Observed build time error in version.cc(136). This error is 
related to getting SVN version information as described in 
http://www.open-mpi.org/community/lists/users/2010/01/11860.php. As we 
are using this openmpi-1.3.4 stable version on Linux platform, is 
there any fix to this compile time error?


*2) Using openmpi-1.4.3:***

Builds properly without F77/F90 support (i.e. Skipping MPI 
F77 interface).


Now to get the "mpif*.exe" for fortran programs, I provided proper 
"ifort" path and enabled "OMPI_WANT_F77_BINDINGS=ON" and/or 
OMPI_WANT_F90_BINDINGS=ON flag; but getting following errors...


*   2.a) "ifort" with OMPI_WANT_F77_BINDINGS=ON gave following errors... *

Check ifort external symbol convention...

Check ifort external symbol convention...single underscore

Check if Fortran 77 compiler supports LOGICAL...

Check if Fortran 77 compiler supports LOGICAL...done

Check size of Fortran 77 LOGICAL...

CMake Error at 
contrib/platform/win32/CMakeModules/f77_get_sizeof.cmake:76 (MESSAGE):


Could not determine size of LOGICAL.

Call Stack (most recent call first):

contrib/platform/win32/CMakeModules/f77_check.cmake:82 
(OMPI_F77_GET_SIZEOF)


contrib/platform/win32/CMakeModules/ompi_configure.cmake:1123 
(OMPI_F77_CHECK)


CMakeLists.txt:87 (INCLUDE)

Configuring incomplete, errors occurred!

*2.b) "ifort" with OMPI_WANT_F90_BINDINGS=ON gave following errors... *

Skipping MPI F77 interface

CMake Error: File 
C:/openmpi-1.4.3/contrib/platform/win32/ConfigFiles/mpif90-wrapper-data.txt.cmake 
does not exist.


CMake Error at ompi/tools/CMakeLists.txt:93 (CONFIGURE_FILE):

configure_file Problem configuring file

CMake Error: File 
C:/openmpi-1.4.3/contrib/platform/win32/ConfigFiles/mpif90-wrapper-data.txt.cmake 
does not exist.


CMake Error at ompi/tools/CMakeLists.txt:97 (CONFIGURE_FILE):

configure_file Problem configuring file

looking for ccp...

looking for ccp...not found.

looking for ccp...

looking for ccp...not found.

Configuring incomplete, errors occurred!

*2.c) "ifort" with OMPI_WANT_F77_BINDINGS=ON and 
OMPI_WANT_F90_BINDINGS=ON gave following errors... *


Check ifort external symbol convention...

Check ifort external symbol convention...single underscore

Check if Fortran 77 compiler supports LOGICAL...

Check if Fortran 77 compiler supports LOGICAL...done

Check size of Fortran 77 LOGICAL...

CMake Error at 
contrib/platform/win32/CMakeModules/f77_get_sizeof.cmake:76 (MESSAGE):


Could not determine size of LOGICAL.

Call Stack (most recent call first):

contrib/platform/win32/CMakeModules/f77_check.cmake:82 
(OMPI_F77_GET_SIZEOF)


contrib/platform/win32/CMakeModules/ompi_configure.cmake:1123 
(OMPI_F77_CHECK)


CMakeLists.txt:87 (INCLUDE)

Configuring incomplete, errors occurred!

Any idea on resolving above errors to get mpif*.exe generated on 
Windows platform using "ifort"?


Please let me know if more information is required.
Thank you in advance.

-Hiral


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] OpenMPI Binaries on Windows XP with MinGW

2011-02-27 Thread Damien Hocking

Manoj,

Those binaries were built for use with Visual Studio 2008, not MinGW.  I 
don't know if OpenMPI has been built with MinGW before, maybe someone on 
the list knows.


Damien

On 27/02/2011 4:42 AM, Manoj Vaghela wrote:

Hi All,

I have downloaded the latest version of the OpenMPI binaries for Windows and
installed them on the machine with all the environment variables as set by the
binary. From an MSYS window, when I just type the command mpic++, it gives
the following error:
--
The Open MPI wrapper compiler was unable to find the specified compiler
cl.exe in your PATH.

Note that this compiler was either specified at configure time or in
one of several possible environment variables.
---

I checked the system's environment variables, PATH etc., which look fine.

To run a basic MPI code, this is the first step. Can you tell me how
I can sort out this problem?

Thank you.

--
Manoj
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Building OpenMPI on Ubuntu

2011-01-28 Thread Damien Hocking

Tom,

Changing the path to icc is done in that configure file:

#!/bin/bash
CC=icc CXX=icpc F77=ifort FC=ifort ./configure 
--prefix=/usr/local/OpenMPI-intel --enable-static --enable-shared

becomes

#!/bin/bash
CC=/usr/local/intel/Compiler/11.0/083/bin/intel64/icc CXX=icpc F77=ifort 
FC=ifort ./configure --prefix=/usr/local/OpenMPI-intel --enable-static 
--enable-shared


You might have to do the same full path for CXX, F77, FC flags too.

Damien

On 28/01/2011 7:18 PM, Greef, T.F.A. de wrote:


Hi everybody,

I am trying to compile openmpi with the intel compilers on ubuntu 10.10.
Everything configures and compiles OK. I used the following configure file:

The configure file looks like this:

#!/bin/bash
CC=icc CXX=icpc F77=ifort FC=ifort ./configure 
--prefix=/usr/local/OpenMPI-intel --enable-static --enable-shared


However, when doing make install I receive the following error:

libtool: line 7847: icc: command not found
libtool: install: error: relink `libopen-rte.la' with the above command before 
installing it

This was previously reported in another thread 
(http://www.open-mpi.org/community/lists/users/2009/05/9452.php).
I made sure that icc is in my path by sourcing the environment variables in 
.bashrc

source /opt/intel/Compiler/11.1/059/bin/ifortvars.sh intel64
source /opt/intel/Compiler/11.1/073/bin/iccvars.sh intel64

The output of my printenv is:

MANPATH=/opt/intel/Compiler/11.1/073/man/en_US:/opt/intel/Compiler/11.1/059/man/en_US:/usr/local/man:/usr/local/share/man:/usr/share/man
ORBIT_SOCKETDIR=/tmp/orbit-tom
SSH_AGENT_PID=1778
INTEL_LICENSE_FILE=/opt/intel/Compiler/11.1/059/licenses:/opt/intel/licenses:/home/tom/intel/licenses:/opt/intel/Compiler/11.1/073/licenses:/opt/intel/licenses:/home/tom/intel/licenses
TERM=xterm
SHELL=/bin/bash
XDG_SESSION_COOKIE=e1fd4608d199162d19432797-1296259084.406953-2001076340
LIBRARY_PATH=/opt/intel/Compiler/11.1/073/lib/intel64:/opt/intel/Compiler/11.1/059/lib/intel64
WINDOWID=56833742
GNOME_KEYRING_CONTROL=/tmp/keyring-3LAaIy
GTK_MODULES=canberra-gtk-module
USER=tom
LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/073/lib/intel64:/opt/intel/Compiler/11.1/059/lib/intel64
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.x!
  
wd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:
SSH_AUTH_SOCK=/tmp/keyring-3LAaIy/ssh
DEFAULTS_PATH=/usr/share/gconf/gnome.default.path
SESSION_MANAGER=local/tom-MS-7586:@/tmp/.ICE-unix/1742,unix/tom-MS-7586:/tmp/.ICE-unix/1742
USERNAME=tom
XDG_CONFIG_DIRS=/etc/xdg/xdg-gnome:/etc/xdg
NLSPATH=/opt/intel/Compiler/11.1/073/lib/intel64/locale/%l_%t/%N:/opt/intel/Compiler/11.1/073/idb/intel64/locale/%l_%t/%N:/opt/intel/Compiler/11.1/059/lib/intel64/locale/%l_%t/%N:/opt/intel/Compiler/11.1/059/idb/intel64/locale/%l_%t/%N
DESKTOP_SESSION=gnome
PATH=/opt/intel/Compiler/11.1/073/bin/intel64:/opt/intel/Compiler/11.1/059/bin/intel64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
PWD=/home/tom
GDM_KEYBOARD_LAYOUT=us
LANG=en_US.utf8
GNOME_KEYRING_PID=1723
MODULEPATH=/usr/share/Modules/modulefiles:/etc/modulefiles
MANDATORY_PATH=/usr/share/gconf/gnome.mandatory.path
GDM_LANG=en_US.utf8
LOADEDMODULES=
GDMSESSION=gnome
SHLVL=1
HOME=/home/tom
GNOME_DESKTOP_SESSION_ID=this-is-deprecated
LOGNAME=tom
XDG_DATA_DIRS=/usr/share/gnome:/usr/local/share/:/usr/share/
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-bRYGCPNoS9,guid=4002f3c04511f4e74a36b27f001c
MODULESHOME=/usr/share/Modules
LESSOPEN=| /usr/bin/lesspipe %s
WINDOWPATH=7
DISPLAY=:0.0
LESSCLOSE=/usr/bin/lesspipe %s %s
XAUTHORITY=/var/run/gdm/auth-for-tom-Z65SUo/database
COLORTERM=gnome-terminal
module=() {  eval `/usr/bin/modulecmd bash $*`
}

In that particular thread it was suggested to use the full path of CC 
(CC=/usr/local/intel/Compiler/11.0/083/bin/intel64/icc) on the build line but I 
have no idea where to do this (i.e

[OMPI users] Windows installers of 1.5.1 - No Fortan ?

2010-12-29 Thread Damien Hocking

Jeff, Shiqing, anyone...

I notice there's no Fortan support in the Windows binary versions of 
1.5.1 on the website.  Is that a deliberate decision?


Damien


Re: [OMPI users] Help!!!!!!!!!!!!Openmpi instal for ubuntu 64 bits

2010-11-29 Thread Damien

icc is the Intel C/C++ compiler.  Do you have it installed on your system?

Damien

On 29/11/2010 10:43 AM, Maurício Rodrigues wrote:
Hi, I need to install openmpi 1.4.2 on Ubuntu 4.10 64-bit, and it gives 
this error all the time ... I would like some help.

below follows the lines of the error.

Thank you!

/home/fjustino/fjustino/openmpi-1.4.2/libtool: line 7847: icc: command 
not found
libtool: install: error: relink `libopen-rte.la 
<http://libopen-rte.la/>' with the above command before installing it

make[3]: *** [install-libLTLIBRARIES] Error 1
make[3]: Leaving directory `/home/fjustino/fjustino/openmpi-1.4.2/orte'
make[2]: *** [install-am] Error 2
make[2]: Leaving directory `/home/fjustino/fjustino/openmpi-1.4.2/orte'
make[1]: *** [install-recursive] Error 1
make[1]: Leaving directory `/home/fjustino/fjustino/openmpi-1.4.2/orte'
make: *** [install-recursive] Error 1



--
Maurício Paulo Rodrigues
Bacharelando em Física
Universidade Federal de Viçosa
Mobile- (32)-9972 2239
e-mail alternativo mauricio.pa...@ufv.br <mailto:mauricio.pa...@ufv.br>
Brazil


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] [Open MPI Announce] Open MPI v1.5 released

2010-10-10 Thread Damien Hocking
 You didn't mention complete Fortran support on Windows, thanks to 
Shiqing.  :-)


Damien

On 10/10/2010 5:50 PM, Jeff Squyres wrote:

The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 1.5.  
This release represents over a year of research, development, and testing.

Although the version 1.5 release marks the advent of a new "feature release" series for 
Open MPI, the v1.4 series remains relevant as the "super stable" series (e.g., there will 
likely be a v1.4.4 someday).  To explain what I mean, here's an excerpt from our release 
methodology:

 o Even minor release numbers are part of "super-stable"
   release series (e.g., v1.4.0). Releases in super stable series
   are well-tested, time-tested, and mature. Such releases are
   recomended for production sites. Changes between subsequent
   releases in super stable series are expected to be fairly small.
 o Odd minor release numbers are part of "feature" release
   series (e.g., v1.5.0). Releases in feature releases are
   well-tested, but they are not necessarily time-tested or as
   mature as super stable releases. Changes between subsequent
   releases in feature series may be large.

The v1.5 series will eventually morph into the next "super stable" series, v1.6 -- at 
which time, we'll start a new "feature" series (v1.7).

Version 1.5 can be downloaded from the main Open MPI web site or any of its 
mirrors (mirrors will be updating shortly).

The following is an abbreviated list of changes in v1.5 (note that countless 
other smaller improvements and enhancements are not shown below):

- Added "knem" support: direct process-to-process copying for shared
   memory message passing.  See http://runtime.bordeaux.inria.fr/knem/
   and the README file for more details.
- Updated shared library versioning scheme and linking style of MPI
   applications.  The MPI application ABI has been broken from the
   v1.3/v1.4 series.  MPI applications compiled against any prior
   version of Open MPI will need to, at a minimum, re-link.  See the
   README file for more details.
- Added "fca" collective component, enabling MPI collective offload
   support for Voltaire switches.
- Fixed MPI one-sided operations with large target displacements.
   Thanks to Brian Price and Jed Brown for reporting the issue.
- Fixed MPI_GET_COUNT when used with large counts.  Thanks to Jed
   Brown for reporting the issue.
- Made the openib BTL safer if extremely low SRQ settings are used.
- Fixed handling of the array_of_argv parameter in the Fortran
   binding of MPI_COMM_SPAWN_MULTIPLE (** also to appear: 1.4.3).
- Fixed malloc(0) warnings in some collectives.
- Fixed a problem with the Fortran binding for
   MPI_FILE_CREATE_ERRHANDLER.  Thanks to Secretan Yves for identifying
   the issue (** also to appear: 1.4.3).
- Updates to the LSF PLM to ensure that the path is correctly passed.
   Thanks to Teng Lin for the patch (** also to appear: 1.4.3).
- Fixes for the F90 MPI_COMM_SET_ERRHANDLER and MPI_WIN_SET_ERRHANDLER
   bindings.  Thanks to Paul Kapinos for pointing out the issue
   (** also to appear: 1.4.3).
- Fixed extra_state parameter types in F90 prototypes for
   MPI_COMM_CREATE_KEYVAL, MPI_GREQUEST_START, MPI_REGISTER_DATAREP,
   MPI_TYPE_CREATE_KEYVAL, and MPI_WIN_CREATE_KEYVAL.
- Fixes for Solaris oversubscription detection.
- If the PML determines it can't reach a peer process, print a
   slightly more helpful message.  Thanks to Nick Edmonds for the
   suggestion.
- Make btl_openib_if_include/exclude function the same way
   btl_tcp_if_include/exclude works (i.e., supplying an _include list
   overrides supplying an _exclude list).
- Apply more scalable reachability algorithm on platforms with more
   than 8 TCP interfaces.
- Various assembly code updates for more modern platforms / compilers.
- Relax restrictions on using certain kinds of MPI datatypes with
   one-sided operations.  Users beware; not all MPI datatypes are valid
   for use with one-sided operations!
- Improve behavior of MPI_COMM_SPAWN with regards to --bynode.
- Various threading fixes in the openib BTL and other core pieces of
   Open MPI.
- Various help file and man pages updates.
- Various FreeBSD and NetBSD updates and fixes.  Thanks to Kevin
   Buckley and Aleksej Saushev for their work.
- Fix case where freeing communicators in MPI_FINALIZE could cause
   process failures.
- Print warnings if shared memory state files are opened on what look
   like networked filesystems.
- Update libevent to v1.4.13.
- Allow propagating signals to processes that call fork().
- Fix bug where MPI_GATHER was sometimes incorrectly examining the
   datatype on non-root processes.  Thanks to Michael Hofmann for
   investigating the issue.
- Various Microsoft Windows fixes.
- Various Catamount fixes.
- V

[OMPI users] minor glitch in 1.5-rc5 Windows build - has workaround

2010-08-06 Thread Damien

 Hi all,

There's a small hiccup in building a Windows version of 1.5-rc5.  When 
you configure in the CMake GUI, you can ask for a Debug or Release 
project before you hit Generate.  If you ask for a Debug project, you 
can still change it to Release in Visual Studio, and it will build 
successfully.  BUT: the Install project will fail, because it tries to 
install libopen-pald.pdb (possibly others too, I didn't check).  It's a 
minor thing, only nuisance value.  If you set a Release project in 
CMake, everything works fine.


Damien



Re: [OMPI users] Ok, I've got OpenMPI set up, now what?!

2010-07-19 Thread Damien Hocking
It does.  The big difference is that MUMPS is a 3-minute compile, and 
PETSc, erm, isn't.  It's..longer...


D

On 19/07/2010 12:56 PM, Daniel Janzon wrote:

Thanks a lot! PETSc seems to be really solid and integrates with MUMPS
suggested by Damien.

All the best,
Daniel Janzon

On 7/18/10, Gustavo Correa<g...@ldeo.columbia.edu>  wrote:
   

Check PETSc:
http://www.mcs.anl.gov/petsc/petsc-as/

On Jul 18, 2010, at 12:37 AM, Damien wrote:

 

You should check out the MUMPS parallel linear solver.

Damien
Sent from my iPhone

On 2010-07-17, at 5:16 PM, Daniel Janzon<jan...@gmail.com>  wrote:

   

Dear OpenMPI Users,

I successfully installed OpenMPI on some FreeBSD machines and I can
run MPI programs on the cluster. Yippie!

But I'm not patient enough to write my own MPI-based routines. So I
thought maybe I could ask here for suggestions. I am primarily
interested in general linear algebra routines. The best would be to
for instance start Octave and just use it as normal, only that all
matrix operations would run on the cluster. Has anyone done that? The
octave-parallel package seems to be something different.

I installed scalapack and the test files ran successfully with mpirun
(except a few of them). But the source code examples of scalapack
looks terrible. Is there no higher-level library that provides an API
with matrix operations, which have all MPI parallelism stuff handled
for you in the background? Certainly a smart piece of software can
decide better than me how to chunk up a matrix and pass it out to the
available processes.

All the best,
Daniel
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
 

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
   

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

 

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
   


Re: [OMPI users] Ok, I've got OpenMPI set up, now what?!

2010-07-18 Thread Damien

You should check out the MUMPS parallel linear solver.

Damien
Sent from my iPhone

On 2010-07-17, at 5:16 PM, Daniel Janzon <jan...@gmail.com> wrote:


Dear OpenMPI Users,

I successfully installed OpenMPI on some FreeBSD machines and I can
run MPI programs on the cluster. Yippie!

But I'm not patient enough to write my own MPI-based routines. So I
thought maybe I could ask here for suggestions. I am primarily
interested in general linear algebra routines. The best would be to
for instance start Octave and just use it as normal, only that all
matrix operations would run on the cluster. Has anyone done that? The
octave-parallel package seems to be something different.

I installed scalapack and the test files ran successfully with mpirun
(except a few of them). But the source code examples of scalapack
looks terrible. Is there no higher-level library that provides an API
with matrix operations, which have all MPI parallelism stuff handled
for you in the background? Certainly a smart piece of software can
decide better than me how to chunk up a matrix and pass it out to the
available processes.

All the best,
Daniel
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Install OpenMPI on Win 7 machine

2010-07-12 Thread Damien Hocking
Cool.  If you're building OpenMPI on 32-bit Windows as well, you won't 
have any 64-bit switches to sort out.  This part of my instructions:


Visual Studio command prompt: "Start, All Programs, Visual Studio 2008, 
Visual Studio Tools, Visual Studio 2008 Win64 x64 Command Prompt" is 
slightly wrong for 32-bit Windows, there won't be a Win64 x64 prompt.  
There will be only one command prompt option on a 32-bit install (use 
that), and CMake will have set you up with a 32-bit build by default, so 
you'll be fine.  Post back if you need help.


Damien

On 12/07/2010 5:47 PM, Alexandru Blidaru wrote:
I am running 32 bit Windows. The actual cluster is 64 bit and the OS 
is CentOS


On Mon, Jul 12, 2010 at 7:15 PM, Damien Hocking <dam...@khubla.com 
<mailto:dam...@khubla.com>> wrote:


You don't need to check anything else in the red window, OpenMPI
doesn't know it's in a virtual machine.  If you're running Windows
in a virtual cluster, are you running as 32-bit or 64-bit?

Damien


On 12/07/2010 5:05 PM, Alexandru Blidaru wrote:

Wow thanks a lot guys. I'll try it tomorrow morning. I'll admit
that this time when I saw that there are some header files "not
found" I didn't even bother going through the whole process as I
did previously. Could have had it installed by today. Well I'll
give it a try tomorrow and come back to you with a confirmation
of whether it works or not. For the "virtual cluster", should I
check any of the checkboxes in the red window?

Either way, thanks a lot guys, you've been of great help to me. I
really want to do my project well, as not many almost-18 year
olds get to work with clusters and I'd like to take full
advantage of the experience


    On Mon, Jul 12, 2010 at 5:38 PM, Damien <dam...@khubla.com
<mailto:dam...@khubla.com>> wrote:

Alex,

That red window is what you should see after the first
Configure step in CMake.  You need to do the next few steps
in CMake and Visual Studio to get a Windows OpenMPI build
done.  That's how CMake works.  It's complicated because
CMake has to be able to build on multiple OSes so what you do
on each OS is different.  Here's what to do:

As part of your original CMake setup, it will have asked you
where to put the CMake binaries.  That's in "Where to build
the binaries" line in the main CMake window, at the top.
 Note that these binaries aren't the OpenMPI binaries,
they're the Visual Studio project files that Visual Studio
uses to build the OpenMPI binaries.

See the CMAKE_BUILD_TYPE line?  It says Debug.  Change Debug
to Release if you want a Release build (you probably do).
 Press the Configure button again and let it run.  That
should be all clean.  Now press the Generate button.  That
will build the Visual Studio project files for you.  They'll
go to the "Where to build the binaries" directory.  From here
you're done with CMake.

Next you have two options.  You can build from a command
line, or from within Visual Studio itself.  For command-line
instructions, read this:

https://www.open-mpi.org/community/lists/users/2010/02/12013.php

Note that you need to execute the devenv commands in that
post from within a Visual Studio command prompt: Start, All
Programs, Visual Studio 2008, Visual Studio Tools, Visual
Studio 2008 Win64 x64 Command Prompt.  I'm assuming you want
a 64-bit build.  You need to be in that "Where to build the
binaries" directory as well.

To use Visual Studio directly, start Visual Studio, and open
the OpenMPI.sln project file that's in your "Where to build
the binaries" directory.  In the Solution Explorer you'll see
a list of sub-projects.  Right-click the top heading:
Solution 'Open MPI' and select Configuration Manager.  You
should get a window that says at the top Active Solution
Configuration, with Release below it.  If it says Debug, just
change that to Release and it will flip all the sub-projects
over as well.  Note that in the list of projects the INSTALL
project will not be checked.  Check that now and close the
window.   Now right-click Solution 'Open MPI' again and hit
Build Solution.  It takes a while to compile everything.  If
you get errors about error code -31 and mt.exe at the end of
the build, that's your virus scanner locking the new exe/dll
files and the install project complains.  Keep right-clicking
and Build Solution until it goes through.  The final Open MPI
include files and binaries are in the
C:\Users\Alex's\Downloads..\installed directory.

Re: [OMPI users] Install OpenMPI on Win 7 machine

2010-07-12 Thread Damien Hocking
You don't need to check anything else in the red window, OpenMPI doesn't 
know it's in a virtual machine.  If you're running Windows in a virtual 
cluster, are you running as 32-bit or 64-bit?


Damien

On 12/07/2010 5:05 PM, Alexandru Blidaru wrote:
Wow thanks a lot guys. I'll try it tomorrow morning. I'll admit that 
this time when I saw that there are some header files "not found" I 
didn't even bother going through the whole process as I did previously. 
Could have had it installed by today. Well I'll give it a try tomorrow 
and come back to you with a confirmation of whether it works or not. 
For the "virtual cluster", should I check any of the checkboxes 
in the red window?


Either way, thanks a lot guys, you've been of great help to me. I 
really want to do my project well, as not many almost-18 year olds get 
to work with clusters and I'd like to take full advantage of the 
experience



On Mon, Jul 12, 2010 at 5:38 PM, Damien <dam...@khubla.com 
<mailto:dam...@khubla.com>> wrote:


Alex,

That red window is what you should see after the first Configure
step in CMake.  You need to do the next few steps in CMake and
Visual Studio to get a Windows OpenMPI build done.  That's how
CMake works.  It's complicated because CMake has to be able to
build on multiple OSes so what you do on each OS is different.
 Here's what to do:

As part of your original CMake setup, it will have asked you where
to put the CMake binaries.  That's in "Where to build the
binaries" line in the main CMake window, at the top.  Note that
these binaries aren't the OpenMPI binaries, they're the Visual
Studio project files that Visual Studio uses to build the OpenMPI
binaries.

See the CMAKE_BUILD_TYPE line?  It says Debug.  Change Debug to
Release if you want a Release build (you probably do).  Press the
Configure button again and let it run.  That should be all clean.
 Now press the Generate button.  That will build the Visual Studio
project files for you.  They'll go to the "Where to build the
binaries" directory.  From here you're done with CMake.

Next you have two options.  You can build from a command line, or
from within Visual Studio itself.  For command-line instructions,
read this:

https://www.open-mpi.org/community/lists/users/2010/02/12013.php

Note that you need to execute the devenv commands in that post
from within a Visual Studio command prompt: Start, All Programs,
Visual Studio 2008, Visual Studio Tools, Visual Studio 2008 Win64
x64 Command Prompt.  I'm assuming you want a 64-bit build.  You
need to be in that "Where to build the binaries" directory as well.

To use Visual Studio directly, start Visual Studio, and open the
OpenMPI.sln project file that's in your "Where to build the
binaries" directory.  In the Solution Explorer you'll see a list
of sub-projects.  Right-click the top heading: Solution 'Open MPI'
and select Configuration Manager.  You should get a window that
says at the top Active Solution Configuration, with Release below
it.  If it says Debug, just change that to Release and it will
flip all the sub-projects over as well.  Note that in the list of
projects the INSTALL project will not be checked.  Check that now
and close the window.   Now right-click Solution 'Open MPI' again
and hit Build Solution.  It takes a while to compile everything.
 If you get errors about error code -31 and mt.exe at the end of
the build, that's your virus scanner locking the new exe/dll files
and the install project complains.  Keep right-clicking and Build
Solution until it goes through.  The final Open MPI include files
and binaries are in the C:\Users\Alex's\Downloads..\installed
directory.

HTH

Damien

PS OpenMPI 1.4.2 doesn't have Fortran support on Windows.  You
need the dev 1.5 series for that and a Fortran compiler.


On 12/07/2010 11:35 AM, Alexandru Blidaru wrote:

Hey,

I installed a 90 day trial of Visual Studio 2008, and I am
pretty sure I am getting the exact same thing. The log and the
picture are attached just as last time. Any new ideas?

Regards,
Alex

___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Install OpenMPI on Win 7 machine

2010-07-12 Thread Damien

Alex,

That red window is what you should see after the first Configure step in 
CMake.  You need to do the next few steps in CMake and Visual Studio to 
get a Windows OpenMPI build done.  That's how CMake works.  It's 
complicated because CMake has to be able to build on multiple OSes so 
what you do on each OS is different.  Here's what to do:


As part of your original CMake setup, it will have asked you where to 
put the CMake binaries.  That's in "Where to build the binaries" line in 
the main CMake window, at the top.  Note that these binaries aren't the 
OpenMPI binaries, they're the Visual Studio project files that Visual 
Studio uses to build the OpenMPI binaries.


See the CMAKE_BUILD_TYPE line?  It says Debug.  Change Debug to Release 
if you want a Release build (you probably do).  Press the Configure 
button again and let it run.  That should be all clean.  Now press the 
Generate button.  That will build the Visual Studio project files for 
you.  They'll go to the "Where to build the binaries" directory.  From 
here you're done with CMake.


Next you have two options.  You can build from a command line, or from 
within Visual Studio itself.  For command-line instructions, read this:


https://www.open-mpi.org/community/lists/users/2010/02/12013.php

Note that you need to execute the devenv commands in that post from 
within a Visual Studio command prompt: Start, All Programs, Visual 
Studio 2008, Visual Studio Tools, Visual Studio 2008 Win64 x64 Command 
Prompt.  I'm assuming you want a 64-bit build.  You need to be in that 
"Where to build the binaries" directory as well.


To use Visual Studio directly, start Visual Studio, and open the 
OpenMPI.sln project file that's in your "Where to build the binaries" 
directory.  In the Solution Explorer you'll see a list of sub-projects.  
Right-click the top heading: Solution 'Open MPI' and select 
Configuration Manager.  You should get a window that says at the top 
Active Solution Configuration, with Release below it.  If it says Debug, 
just change that to Release and it will flip all the sub-projects over 
as well.  Note on the the list of projects the INSTALL project will not 
be checked.  Check that now and close the window.   Now right-click 
Solution 'Open MPI' again and hit Build Solution.  It takes a while to 
compile everything.  If you get errors about error code -31 and mt.exe 
at the end of the build, that's your virus scanner locking the new 
exe/dll files and the install project complains.  Keep right-clicking 
and Build Solution until it goes through.  The final Open MPI include 
files and binaries are in the C:\Users\Alex's\Downloads..\installed 
directory.


HTH

Damien

PS OpenMPI 1.4.2 doesn't have Fortran support on Windows.  You need the 
dev 1.5 series for that and a Fortran compiler.


On 12/07/2010 11:35 AM, Alexandru Blidaru wrote:

Hey,

I installed a 90 day trial of Visual Studio 2008, and I am pretty sure 
I am getting the exact same thing. The log and the picture are 
attached just as last time. Any new ideas?


Regards,
Alex



Re: [OMPI users] Fortran issues on Windows and 1.5 Trunk version

2010-05-12 Thread Damien Hocking

Absolutely.  I'll get a package of stuff put together.

Damien

On 12/05/2010 2:24 AM, Shiqing Fan wrote:


Hi Damien,

I know there will be more problems, and your feedback is always 
helpful.  :-)


Could you please provide me a Visual Studio solution file for MUMPS? I 
would like to test it a little.



Thanks,
Shiqing

On 2010-5-12 6:11 AM, Damien wrote:

Hi all,

Me again (poor Shiqing, I know...).  I've been trying to get the 
MUMPS solver running on Windows with Open-MPI.  I can only use the 
1.5 branch because that has Fortran support on Windows and 1.4.2 
doesn't.  There's a couple of things going wrong:


First, calls to MPI_Initialized from Fortran report that MPI isn't 
initialised (MUMPS has a MPI_Initialized check).  If I call 
MPI_Initialized from C or C++, it is initialized.  I'm not sure what 
this means for MPI calls from Fortran, but it could be the cause of 
the second problem, which is:  If I bypass the MPI_Initialized check 
in MUMPS, I can get the solver to start and run in one process.  If I 
try and run 2 or more processes, all the processes ramp to 100% CPU 
in the first parallel section, and sit there with no progress.  If I 
break in with the debugger, I can usually land on some MPI_IProbe 
calls, presumably looking for receives that don't exist, possibly 
because the Fortran MPI environment really isn't initialised.  After 
many debugger break-ins, I end up in a small group of calls, so it's 
a loop waiting for something.


For reference, it was yesterday's 1.5 svn trunk, MUMPS 4.9.2, and 
Intel Math libraries, and a 32-bit build.  MUMPS is Fortran 90/95 but 
uses the F77 MPI interfaces.  It does run with MPICH2.  I realise 
that 1.5 is a dev branch, so it might just be too early for this to 
work.  I'd be grateful for suggestions though.  I can build and test 
this on Linux if that would help narrow this down.


Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] Fortran issues on Windows and 1.5 Trunk version

2010-05-12 Thread Damien

Hi all,

Me again (poor Shiqing, I know...).  I've been trying to get the MUMPS 
solver running on Windows with Open-MPI.  I can only use the 1.5 branch 
because that has Fortran support on Windows and 1.4.2 doesn't.  There's 
a couple of things going wrong:


First, calls to MPI_Initialized from Fortran report that MPI isn't 
initialised (MUMPS has a MPI_Initialized check).  If I call 
MPI_Initialized from C or C++, it is initialized.  I'm not sure what 
this means for MPI calls from Fortran, but it could be the cause of the 
second problem, which is:  If I bypass the MPI_Initialized check in 
MUMPS, I can get the solver to start and run in one process.  If I try 
and run 2 or more processes, all the processes ramp to 100% CPU in the 
first parallel section, and sit there with no progress.  If I break in 
with the debugger, I can usually land on some MPI_IProbe calls, 
presumably looking for receives that don't exist, possibly because the 
Fortran MPI environment really isn't initialised.  After many debugger 
break-ins, I end up in a small group of calls, so it's a loop waiting 
for something.
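
To be concrete, the C++ side of the check is just this (a minimal 
sketch of my own, not the MUMPS source -- MUMPS does the equivalent 
call from Fortran through MPI_INITIALIZED):

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
  int flag = 0;
  MPI_Initialized(&flag);          // legal before MPI_Init; flag should be 0
  std::cout << "\n before MPI_Init: initialized = " << flag;

  MPI_Init(&argc, &argv);
  MPI_Initialized(&flag);          // flag should be 1 here; this is what
                                   // the MUMPS check expects to see
  std::cout << "\n after MPI_Init:  initialized = " << flag;
  std::cout.flush();

  MPI_Finalize();
  return 0;
}

A plain C++ build of this should print 0 and then 1; it's the 
equivalent Fortran call that's coming back as not initialised.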


For reference, it was yesterday's 1.5 svn trunk, MUMPS 4.9.2, and Intel 
Math libraries, and a 32-bit build.  MUMPS is Fortran 90/95 but uses the 
F77 MPI interfaces.  It does run with MPICH2.  I realise that 1.5 is a 
dev branch, so it might just be too early for this to work.  I'd be 
grateful for suggestions though.  I can build and test this on Linux if 
that would help narrow this down.


Damien


Re: [OMPI users] Fortran support on Windows Open-MPI

2010-05-10 Thread Damien
I ended up using the SVN Trunk from today, everything is working fine on 
that.


Damien

On 10/05/2010 2:33 PM, Shiqing Fan wrote:

Hi Damien,

That's a known problem. see this ticket 
https://svn.open-mpi.org/trac/ompi/ticket/2404 . It will be applied 
into 1.5 branch very soon. But if you apply the patch by yourself, it 
should also work.


Thanks,
Shiqing




Re: [OMPI users] Fortran support on Windows Open-MPI

2010-05-10 Thread Damien

Ah.  So it is.  I'll try to remember to look there first.

Damien

On 10/05/2010 2:33 PM, Shiqing Fan wrote:

Hi Damien,

That's a known problem. see this ticket 
https://svn.open-mpi.org/trac/ompi/ticket/2404 . It will be applied 
into 1.5 branch very soon. But if you apply the patch by yourself, it 
should also work.


Thanks,
Shiqing



Re: [OMPI users] Fortran support on Windows Open-MPI

2010-05-10 Thread Damien
Interesting.  If I add the Fortran compiler as a new entry through the 
GUI, CMake wipes it.  If I use the option to specify the compiler paths 
manually on Configure, I can add the Fortran compiler in that way and it 
works.


Then there's a compiler error.  In 
orte\mca\odls\process\odls_process_module.c, right at the top, there's


static bool odls_process_child_died( pid_t pid, unsigned int timeout,
 int* exit_status )
{
int error;
HANDLE handle = OpenProcess( PROCESS_TERMINATE | SYNCHRONIZE, FALSE,
 (DWORD)pid );
if( 0 == child->pid || INVALID_HANDLE_VALUE == handle ) {
error = GetLastError();
/* Let's suppose that the process dissapear ... by now */
return true;
}
CloseHandle(handle);
/* The child didn't die, so return false */
return false;
}

This line "0 == child->pid" causes a compiler error that tanks the 
build, because child doesn't exist in that scope.  Should that just be 
"0 == pid", seeing as pid is the argument passed to the function 
anyway?  The build seems fine with this fix.
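
For clarity, here's the function with that one-line change applied (my 
local workaround only; someone who knows the odls code should confirm 
it's the right fix):

static bool odls_process_child_died( pid_t pid, unsigned int timeout,
                                     int* exit_status )
{
    int error;
    HANDLE handle = OpenProcess( PROCESS_TERMINATE | SYNCHRONIZE, FALSE,
                                 (DWORD)pid );
    if( 0 == pid || INVALID_HANDLE_VALUE == handle ) {  /* was child->pid */
        error = GetLastError();
        /* Let's suppose that the process dissapear ... by now */
        return true;
    }
    CloseHandle(handle);
    /* The child didn't die, so return false */
    return false;
}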


Finally, there's an installation error on mpi_portable_platform.h.  That 
file isn't generated as part of the build, and the installation command 
is around line 150 of ompi/CMakeLists.txt.  If you comment out the 
installation of that file the installation works correctly.


I used the 1.5a1r23092 snapshot for this.

Now to make sure it works...

Damien

On 10/05/2010 4:50 AM, Shiqing Fan wrote:



Hi,

Normally, that means a wrong path or incompatible compiler version, 
e.g. 32 bit vs 64 bit.



Shiqing

On 2010-5-7 6:54 PM, Damien wrote:
I tried the 1.5a1r23092 snapshot and I used CMAKE 2.6.4 and 2.8.1.  In the CMake GUI, I checked the OMPI_WANT_F77_BINDINGS 
option, and added a FilePath for CMAKE_Fortran_COMPILER of C:/Program 
Files (x86)/Intel/Compiler/11.1/065/bin/ia32/ifort.exe.  When I 
re-run the Configure, CMake wipes the CMAKE_Fortran_COMPILER variable 
and complains about a missing Fortran compiler.  Any suggestions? 





Re: [OMPI users] Fortran support on Windows Open-MPI

2010-05-07 Thread Damien

Hi,

I tried the 1.5a1r23092 snapshot and I used CMAKE 2.6.4 and 2.8.1.  In 
the CMake GUI, I checked the OMPI_WANT_F77_BINDINGS option, and added a 
FilePath for CMAKE_Fortran_COMPILER of C:/Program Files 
(x86)/Intel/Compiler/11.1/065/bin/ia32/ifort.exe.  When I re-run the 
Configure, CMake wipes the CMAKE_Fortran_COMPILER variable and complains 
about a missing Fortran compiler.  Any suggestions?


Damien

On 07/05/2010 3:09 AM, Shiqing Fan wrote:


Hi Damien,

Currently only Fortran 77 bindings are supported in Open MPI on 
Windows. You could set the Intel Fortran compiler with 
CMAKE_Fortran_COMPILER variable in CMake (the full path to ifort.exe), 
and enable OMPI_WANT_F77_BINDINGS option for Open MPI, then everything 
should be compiled. I recommend to use Open MPI trunk or 1.5 branch 
version.


I have successfully compiled/ran NPB benchmark with f77 bindings on 
Windows. If you want to compile f90 programs, this should also be 
possible, but it needs a little modification in the config file. 
Please let me know if I can help.



Regards,
Shiqing

On 2010-5-7 5:52 AM, Damien wrote:

Hi all,

Can anyone tell me what the plans are for Fortran 90 support on 
Windows, with say the Intel compilers?  I need to get MUMPS built and 
running using Open-MPI, with Visual Studio and Intel 11.1.  I know 
Fortran isn't part of the regular CMake build for Windows.  If 
someone's working on this I'm happy to test or help out.


Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






Re: [OMPI users] Fortran support on Windows Open-MPI

2010-05-07 Thread Damien Hocking
Thanks Shiqing.  I'll try that.  I'm not sure which bindings MUMPS uses, 
I'll post back if I need F90.


My apologies for not asking a clearer question, when I said Fortran 90 
support on Windows, I meant Open MPI, not compilers.


Damien

On 07/05/2010 3:09 AM, Shiqing Fan wrote:


Hi Damien,

Currently only Fortran 77 bindings are supported in Open MPI on 
Windows. You could set the Intel Fortran compiler with 
CMAKE_Fortran_COMPILER variable in CMake (the full path to ifort.exe), 
and enable OMPI_WANT_F77_BINDINGS option for Open MPI, then everything 
should be compiled. I recommend to use Open MPI trunk or 1.5 branch 
version.


I have successfully compiled/ran NPB benchmark with f77 bindings on 
Windows. If you want to compile f90 programs, this should also be 
possible, but it needs a little modification in the config file. 
Please let me know if I can help.



Regards,
Shiqing

On 2010-5-7 5:52 AM, Damien wrote:

Hi all,

Can anyone tell me what the plans are for Fortran 90 support on 
Windows, with say the Intel compilers?  I need to get MUMPS built and 
running using Open-MPI, with Visual Studio and Intel 11.1.  I know 
Fortran isn't part of the regular CMake build for Windows.  If 
someone's working on this I'm happy to test or help out.


Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] Fortran support on Windows Open-MPI

2010-05-07 Thread Damien

Hi all,

Can anyone tell me what the plans are for Fortran 90 support on Windows, 
with say the Intel compilers?  I need to get MUMPS built and running 
using Open-MPI, with Visual Studio and Intel 11.1.  I know Fortran isn't 
part of the regular CMake build for Windows.  If someone's working on 
this I'm happy to test or help out.


Damien


Re: [OMPI users] Open MPI performance on Amazon Cloud

2010-03-20 Thread Damien Hocking

A few people have looked at EC2 for this lately.  This one's a good read.

http://insidehpc.com/2009/08/03/comparing-hpc-cluster-amazons-ec2-nas-benchmarks-linpack/

There was another paper published too, if I can find it again I'll post 
the link.


Damien

On 19/03/2010 9:17 PM, Joshua Bernstein wrote:

Hi Hammad,

Before we launched the Penguin Computing On-Demand service we 
conducted several tests that compared the latencies of EC2 with a 
traditional HPC type setup (much like we have with our POD service). I 
have a whole suite of tests that I'd be happy to share with you, but 
to sum it up, the EC2 latencies were absolutely terrible. For starters, 
the EC2 PingPong latency for a zero-byte message was around ~150ms, 
compared to a completely untuned Gigabit Ethernet link at 32ms. For 
something actually useful, say a packet of 4K, EC2 was roughly ~265ms, 
whereas a standard GigE link was a more reasonable (but still high) 
71ms. One "real-world" application that was very sensitive to latency 
took almost 30 times longer to run on EC2 than on a real cluster 
configuration such as POD.
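
For anyone who wants to reproduce a rough zero-byte number themselves 
without pulling in IMB, a minimal ping-pong sketch along these lines 
works -- this is just an illustration added here, not the code behind 
the numbers above:

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2)
    {
        if (rank == 0) std::cout << "needs exactly 2 ranks" << std::endl;
        MPI_Finalize();
        return 1;
    }

    const int iters = 1000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; ++i)
    {
        if (rank == 0)
        {
            MPI_Send(&byte, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        else if (rank == 1)
        {
            MPI_Recv(&byte, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        std::cout << "one-way latency: "
                  << (t1 - t0) / (2.0 * iters) * 1.0e6 << " usec" << std::endl;

    MPI_Finalize();
    return 0;
}

Run it across two nodes (substitute your own hostnames) with something 
like "mpirun -np 2 -host nodeA,nodeB ./pingpong" and the reported number 
is the average one-way latency.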


I have benchmarks from several complete IMB runs, as well as other 
types of benchmarks such as STREAM and some iobench. If you are 
interested in any particular type, please let me know as I'd be happy 
to share.


If you really need an on-demand type system where latency is an 
issue, you should look towards our POD offering. We even offer 
Inifniband! On the compute side nothing is virtualized so your 
application runs on the hardware without the overhead of a VM.


-Joshua Bernstein
Senior Software Engineer
Penguin Computing


On Mar 19, 2010, at 11:19 AM, Jeff Squyres wrote:

Yes, it is -- sometimes we get so caught up in other issues that user 
emails slip through the cracks.  Sorry about that!


I actually have little experience with EC2 -- other than knowing that 
it works, I don't know much about the performance that you can 
extract from it.  I have heard issues about non-uniform latency 
between MPI processes because you really don't know where the 
individual MPI processes may land (network- / VM-wise).  It suggests 
to me that EC2 might be best suited for compute-bound jobs (vs. 
latency-bound jobs).


Amusingly enough, the first time someone reported an issue with Open 
MPI on EC2, I tried to submit a help ticket to EC2 support saying, 
"I'm one of the Open MPI developers ... blah blah blah ... is there 
anything I can do to help?" The answer I got back was along the lines 
of, "You need to have a paid EC2 support account before we can help 
you." I think they missed the point, but oh well.  :-)




On Mar 12, 2010, at 12:10 AM, Hammad Siddiqi wrote:


Dear All,
Is this the correct forum for sending these kinds of emails? Please 
let me know if there is some other mailing list.

Thank
Best Regards,
Hammad Siddiqi
System Administrator,
Centre for High Performance Scientific Computing,
School of Electrical Engineering and Computer Science,
National University of Sciences and Technology,
H-12, Islamabad.
Office : +92 (51) 90852207
Web: http://hpc.seecs.nust.edu.pk/~hammad/


On Sat, Feb 27, 2010 at 10:07 PM, Hammad Siddiqi 
<hammad.sidd...@seecs.edu.pk> wrote:

Dear All,

I am facing very weird results with OpenMPI 1.4.1 on Amazon EC2. I have
used a Small Instance and a High CPU Medium Instance for benchmarking
latency and bandwidth. OpenMPI was configured with the default
options. When the code is run in cluster mode, the latency and
bandwidth of the Amazon EC2 Small Instance are much lower than those of
the Amazon EC2 High CPU Medium Instance. To my understanding the
difference should not be that much. The following are the links to the
graphs and their data:

Data: 
http://hpc.seecs.nust.edu.pk/~hammad/OpenMPI,Latency-BandwidthData.jpg
Graphs: 
http://hpc.seecs.nust.edu.pk/~hammad/OpenMPI,Latency-Bandwidth.jpg



Please have a look on them.

Is anyone else facing the same problem. Any guidance in this regard
will highly be appreciated.

Thank you.


--
Best Regards,
Hammad Siddiqi
System Administrator,
Centre for High Performance Scientific Computing,
School of Electrical Engineering and Computer Science,
National University of Sciences and Technology,
H-12, Islamabad.
Office : +92 (51) 90852207

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] noob warning - problems testing MPI_Comm_spawn

2010-03-04 Thread Damien Hocking

Thanks Shiqing.  I'll checkout a trunk copy and try that.

Damien

On 04/03/2010 7:29 AM, Shiqing Fan wrote:


Hi Damien,

Sorry for the late reply; I was digging into the code and have some 
information.


First of all, in your example, it's not correct to define the MPI_Info 
as a pointer; it will cause an initialization violation at run time. 
The message "LOCAL DAEMON SPAWN IS CURRENTLY UNSUPPORTED" is just a 
warning which won't block the execution. In order to make the 
master-slave example work, you have to disable the CCP support; it seems to 
conflict with the comm_spawn operation, and I'm still checking it.


To disable CCP in Open MPI 1.4.1, you have to exclude the source files 
manually, i.e. exclude the ccp files in orte/mca/plm/ccp and 
orte/mca/ras/ccp, and then also remove the ccp-related lines in 
orte/mca/plm/base/static-components.h and 
orte/mca/ras/base/static-components.h. There is an option to do so in 
the trunk version, but not for 1.4.1. Sorry for the inconvenience.


For the "singleton" run with master.exe, it's still not working under 
Windows.



Best Regards,
Shiqing


Damien Hocking wrote:

Hi all,

I'm playing around with MPI_Comm_spawn, trying to do something simple 
with a master-slave example.  I get a LOCAL DAEMON SPAWN IS CURRENTLY 
UNSUPPORTED error when it tries to spawn the slave.  This is on 
Windows, OpenMPI version 1.4.1, r22421.


Here's the master code:

int main(int argc, char* argv[])
{
  int myid, ierr;
  MPI_Comm maincomm;
  ierr = MPI_Init(&argc, &argv);
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0)
  {
 std::cout << "\n Hello from the boss " << myid;
 std::cout.flush();
  }

  MPI_Info* spawninfo;
  MPI_Info_create(spawninfo);
  MPI_Info_set(*spawninfo, "add-host", "127.0.0.1");

  if (myid == 0)
  {
 std::cout << "\n About to MPI_Comm_spawn." << myid;
 std::cout.flush();
  }
  MPI_Comm_spawn("slave.exe", MPI_ARGV_NULL, 1, *spawninfo, 0, 
MPI_COMM_SELF, , MPI_ERRCODES_IGNORE);

  if (myid == 0)
  {
 std::cout << "\n MPI_Comm_spawn successful." << myid;
 std::cout.flush();
  }
  ierr = MPI_Finalize();
  return 0;
}

Here's the slave code:

int main(int argc, char* argv[])
{
  int myid, ierr;

  MPI_Comm parent;

  ierr = MPI_Init(&argc, &argv);
  MPI_Comm_get_parent(&parent);

  if (parent == MPI_COMM_NULL)
  {
 std::cout << "\n No parent.";
  }
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  std::cout << "\n Hello from a worker " << myid;
  std::cout.flush();  ierr = MPI_Finalize();

  return 0;
}

Also, this only starts up correctly if I kick it off with orterun.  
Ideally I'd like to run it as "master.exe" and have it initialise the 
MPI environment from there.  Can anyone tell me what setup I need to 
do that?

Thanks in advance,

Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] noob warning - problems testing MPI_Comm_spawn

2010-03-03 Thread Damien Hocking

Hi all,

I'm playing around with MPI_Comm_spawn, trying to do something simple 
with a master-slave example.  I get a LOCAL DAEMON SPAWN IS CURRENTLY 
UNSUPPORTED error when it tries to spawn the slave.  This is on Windows, 
OpenMPI version 1.4.1, r22421.


Here's the master code:

int main(int argc, char* argv[])
{
  int myid, ierr;
  MPI_Comm maincomm;
  ierr = MPI_Init(&argc, &argv);
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  if (myid == 0)
  {
 std::cout << "\n Hello from the boss " << myid;
 std::cout.flush();
  }

  MPI_Info* spawninfo;
  MPI_Info_create(spawninfo);
  MPI_Info_set(*spawninfo, "add-host", "127.0.0.1");

  if (myid == 0)
  {
 std::cout << "\n About to MPI_Comm_spawn." << myid;
 std::cout.flush();
  }
  MPI_Comm_spawn("slave.exe", MPI_ARGV_NULL, 1, *spawninfo, 0, 
MPI_COMM_SELF, , MPI_ERRCODES_IGNORE);

  if (myid == 0)
  {
 std::cout << "\n MPI_Comm_spawn successful." << myid;
 std::cout.flush();
  }
  ierr = MPI_Finalize();
  return 0;
}

Here's the slave code:

int main(int argc, char* argv[])
{
  int myid, ierr;

  MPI_Comm parent;

  ierr = MPI_Init(&argc, &argv);
  MPI_Comm_get_parent(&parent);

  if (parent == MPI_COMM_NULL)
  {
 std::cout << "\n No parent.";
  }
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  std::cout << "\n Hello from a worker " << myid;
  std::cout.flush();   


  ierr = MPI_Finalize();

  return 0;
}

Also, this only starts up correctly if I kick it off with orterun.  
Ideally I'd like to run it as "master.exe" and have it initialise the 
MPI environment from there.  Can anyone tell me what setup I need to do 
that? 


Thanks in advance,

Damien


Re: [OMPI users] Using dynamic process management without mpirun/mpiexec

2010-02-24 Thread Damien Hocking
Yes, that's right.  It will launch a singleton, and then add slaves as 
required.  Thank you.


Damien

On 24/02/2010 6:17 PM, Ralph Castain wrote:

Let me see if I understand your question. You want to launch an initial MPI code using 
mpirun or as a singleton. This code will then determine available resources and use 
MPI_Comm_spawn to launch the "real" MPI job.

Correct?

If so, then yes - you can do that. When you do the comm_spawn, you need to include an MPI_Info key 
of "add-host" that specifies the host(s) (comma-delimited list) to be used for launching 
the specified app. Or you can do "add-hostfile" - either or both are supported.
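
As a minimal sketch of that pattern (host names, process count, and the "worker" 
executable are placeholders; only the add-host / add-hostfile keys come from the 
description above):

#include <mpi.h>

int main(int argc, char* argv[])
{
  // Started as a singleton, e.g. ./master, without mpirun.
  MPI_Init(&argc, &argv);

  // Tell the runtime about the extra hosts, then spawn the "real" job there.
  MPI_Info info;
  MPI_Info_create(&info);
  MPI_Info_set(info, "add-host", "node1,node2");         // comma-delimited list
  // or: MPI_Info_set(info, "add-hostfile", "hosts.txt");

  MPI_Comm children;
  MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, info, 0,
                 MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);

  MPI_Info_free(&info);
  MPI_Finalize();
  return 0;
}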



On Feb 24, 2010, at 5:39 PM, Damien Hocking wrote:


Hi all,

Does OpenMPI support dynamic process management without launching through 
mpirun or mpiexec?  I need to use some MPI code in a shared-memory environment 
where I don't know the resources in advance.

Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




[OMPI users] Using dynamic process management without mpirun/mpiexec

2010-02-24 Thread Damien Hocking

Hi all,

Does OpenMPI support dynamic process management without launching 
through mpirun or mpiexec?  I need to use some MPI code in a 
shared-memory environment where I don't know the resources in advance.


Damien


Re: [OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-04 Thread Damien Hocking
I started again from the beginning to sort out exactly what was going 
on.  Here's what I found.


If I use the CMake GUI, and set CMAKE_BUILD_TYPE to Release, 
re-configure and then generate, and then do the following build command:


"devenv OpenMPI.sln /build"

I get the following:

1>-- Build started: Project: libopen-pal, Configuration: Debug x64

etc.  It still builds debug versions, and pdbs.  The install project 
doesn't try to install the pdbs though.


If I do a "devenv OpenMPI.sln /build release", I get a proper release 
build, no pdbs and the install project works.


If I use the CMake GUI, and leave CMAKE_BUILD_TYPE at Debug, and do
"devenv OpenMPI.sln /build release", I get a release build, but the 
install fails because it goes looking for pdbs that aren't there.


So, in order to get a Release build that installs using the CMake GUI, 
you need to have CMAKE_BUILD_TYPE set to Release *and* do a


"devenv OpenMPI.sln /build release" at the command line, followed by
"devenv OpenMPI.sln /build release /project INSTALL".

To get a Release build from the command-line without using the CMake 
GUI, you need to have SET(CMAKE_BUILD_TYPE Release) in the top-level 
CMakeFiles.txt, and then


"devenv OpenMPI.sln /build release" at the command-line, followed by 
"devenv OpenMPI.sln /build release /project INSTALL"


As Shiqing said in another post, you need to do completely independent 
setups for 32 and 64-bit builds.  I set mine up with build32 and build64 
directories, and install32 and install64 directories to keep everything 
separate.


HTH,

Damien



On 04/02/2010 7:34 AM, Marcus G. Daniels wrote:

Hmmm.  I did try setting release and I think I still got pdbs.  I'll try
again from a totally clean source tree and post back.


Another datapoint:

I tried Cmake's Generate after setting CMAKE_BUILD_TYPE and building.
I have the same sort of build problems with setting x64 in the VS 2008
configuration manager.

Marcus


Re: [OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-04 Thread Damien Hocking
Hmmm.  I did try setting release and I think I still got pdbs.  I'll try 
again from a totally clean source tree and post back.


Damien

On 10-02-04 4:41 AM, Shiqing Fan wrote:


Hi Damien,

I did a clean build on my 64 bit Windows 7, but I didn't see the same 
problem. Could you please make sure that the CMAKE_BUILD_TYPE variable 
in the CMake-GUI is set to "release"? Setting "release" in Visual 
Studio will not change the CMake install scripts.



Thanks,
Shiqing



Damien Hocking wrote:

Hi all,

There might be some minor bugs in the 64-bit CMake Visual Studio 
Install project on Windows (say that 3 times fast...).  When I build 
a 64-bit release version, the install is still set up for installing 
pdbs, even though it's a release build.  This is for VS2008 on 
Windows 7, CMake 2.6.4.  The offending sections are below, and the 
install works if you delete these sections yourself.  It doesn't 
happen on 32-bit release installs.


opal cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libopen-pald.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")



ompi cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libmpid.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")



ompi/mpi/cxx cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libmpi_cxxd.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")


orte cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libopen-rted.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")



Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-03 Thread Damien Hocking

Hi all,

There might be some minor bugs in the 64-bit CMake Visual Studio Install 
project on Windows (say that 3 times fast...).  When I build a 64-bit 
release version, the install is still set up for installing pdbs, even 
though it's a release build.  This is for VS2008 on Windows 7, CMake 
2.6.4.  The offending sections are below, and the install works if you 
delete these sections yourself.  It doesn't happen on 32-bit release 
installs.


opal cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" STREQUAL 
"Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libopen-pald.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")



ompi cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" STREQUAL 
"Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libmpid.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")



ompi/mpi/cxx cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" STREQUAL 
"Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libmpi_cxxd.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")


orte cmake.install.cmake

IF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" STREQUAL 
"Unspecified")
  FILE(INSTALL DESTINATION "${CMAKE_INSTALL_PREFIX}/bin" TYPE FILE 
FILES "C:/projects2/openmpi-1.4.1/build64/Debug/libopen-rted.pdb")
ENDIF(NOT CMAKE_INSTALL_COMPONENT OR "${CMAKE_INSTALL_COMPONENT}" 
STREQUAL "Unspecified")



Damien


[OMPI users] CMake-Windows build of 1.41 with Fortran bindings

2010-02-03 Thread Damien Hocking

Can anyone tell me how to enable Fortran bindings on a Windows build?

Damien


Re: [OMPI users] OpenMPI 1.4.2a snapshots on Windows

2010-02-03 Thread Damien Hocking

OK, thanks, I'll do that.

Damien

Shiqing Fan wrote:


Hi Damien,

r22405 was the fix for the trunk; it hasn't been patched into the 1.4 
branch yet. See the open ticket for the v1.4 branch: 
https://svn.open-mpi.org/trac/ompi/ticket/2169. It's RM-approved, so 
it will be moved very soon. Please note that the patch will be in 
1.4.2, but not in the 1.4.1 release, which means you can update your CMake 
to 2.8 for the upcoming Open MPI 1.4.2 release.



Thanks,
Shiqing

Damien Hocking wrote:

Hi all,

I notice in the last couple of weeks there was a patch with 
ALL_DEPENDENCIES to fix CMake 2.8 builds on Windows.  With CMake 
2.8 I'm getting exactly the same build errors in r22504 as in the 
1.4.1 release. Has that patch made it into the snapshots yet, or is 
there a regression?  I can keep going with CMake 2.6.4 if need be.


Damien
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






[OMPI users] OpenMPI 1.4.2a snapshots on Windows

2010-02-02 Thread Damien Hocking

Hi all,

I notice in the last couple of weeks there was a patch with 
ALL_DEPENDENCIES to fix CMake 2.8 builds on Windows.  With CMake 
2.8 I'm getting exactly the same build errors in r22504 as in the 1.4.1 
release. Has that patch made it into the snapshots yet, or is there a 
regression?  I can keep going with CMake 2.6.4 if need be.


Damien


Re: [OMPI users] MPI_INIT failure while building coinor-ipopt

2009-10-24 Thread Damien Hocking

Roberto,

Ipopt doesn't use MPI.  It can use the MUMPS parallel linear solver in 
sequential mode, but nothing is set up in IPOPT to use the parallel MPI 
version.  For sequential mode, MUMPS dummies out the MPI headers.  The 
dummy headers are part of the MUMPS distribution in the libseq 
directory.  You're probably getting the OpenMPI headers instead of the 
dummy ones.  I can't open your bz2, my machine tells me it's borked, so 
I can't read your log and see exactly where it's going wrong.


Damien

Roberto C. Sánchez wrote:

Hi,

I am in the process of packaging coinor-ipopt for Debian.  The build
process fails during the 'make test' phase.  The error messages reference
orte_init, ompi_mpi_init and MPI_INIT.  I have already asked on the
ipopt mailing list [0].  However, that query has not received any
replies.  I thought that perhaps I would ask here and see if anyone on
this list could offer some insight.

I have attached the following:

- the output of the Debian package build 
- the specific output from the 'make test' step

- the output of 'ompi_info --all'

Any pointers would be much appreciated.

Regards,

-Roberto

[0] http://list.coin-or.org/pipermail/ipopt/2009-October/001730.html





___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] "An error occurred in MPI_Recv" with more than 2 CPU

2009-05-27 Thread Damien Hocking
I've seen this behaviour with MUMPS on shared-memory machines as well 
using MPI.  I use the iterative refinement capability to sharpen the 
last few digits of the solution (2 or 3 iterations are usually enough).  
If you're not using that, give it a try; it will probably reduce the 
noise you're getting in your results.  The quality of the answer from a 
direct solve is highly dependent on the matrix scaling and pivot order, 
and it's easy to get differences in the last few digits.  MUMPS itself 
is also asynchronous, and might not be completely deterministic in how 
it solves if MPI processes can run in a different order.
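
For anyone who wants to try the iterative refinement suggestion, a rough sketch 
through the MUMPS C interface follows. It is only illustrative: the toy 2x2 matrix 
stands in for a real problem, and field names can differ between MUMPS versions 
(e.g. nz vs. nnz).

#include <mpi.h>
#include <dmumps_c.h>

#define ICNTL(I) icntl[(I)-1]    // MUMPS control array uses 1-based Fortran numbering

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  DMUMPS_STRUC_C id;
  id.job = -1;  id.par = 1;  id.sym = 0;
  id.comm_fortran = -987654;              // MUMPS convention for "use MPI_COMM_WORLD"
  dmumps_c(&id);                          // initialise the instance

  // Toy 2x2 diagonal system: [2 0; 0 3] x = [2 3]^T, so x = [1 1]^T.
  int    irn[] = {1, 2}, jcn[] = {1, 2};
  double a[]   = {2.0, 3.0}, rhs[] = {2.0, 3.0};
  id.n = 2;  id.nz = 2;
  id.irn = irn;  id.jcn = jcn;  id.a = a;  id.rhs = rhs;

  id.ICNTL(10) = 3;                       // allow up to 3 steps of iterative refinement

  id.job = 6;                             // analyse + factorise + solve
  dmumps_c(&id);

  id.job = -2;  dmumps_c(&id);            // release MUMPS data
  MPI_Finalize();
  return 0;
}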


Damien 


George Bosilca wrote:
This is a problem of numerical stability, and there is no solution for 
such a problem in MPI. Usually, preconditioning the input matrix 
improves the numerical stability.


If you read the MPI standard, there is a __short__ section about what 
guarantees the MPI collective communications provide. There is only 
one: if you run the same collective twice, on the same set of nodes 
with the same input data, you will get the same output. In fact the 
main problem is that MPI considers all default operations (MPI_OP) as 
being commutative and associative, which is usually the case in the real 
world but not when floating-point rounding is involved. When you 
increase the number of nodes, the data will be spread into smaller 
pieces, which means more operations will have to be done in order to 
achieve the reduction, i.e. more rounding errors might occur, and so on.
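
A tiny stand-alone illustration of that rounding effect (not from the original 
mail): the same three numbers summed in two different groupings give two different 
answers.

#include <cstdio>

int main()
{
  // Floating-point addition is not associative: regrouping changes the result.
  double a = 1.0e16, b = -1.0e16, c = 1.0;
  std::printf("(a + b) + c = %.17g\n", (a + b) + c);   // prints 1
  std::printf("a + (b + c) = %.17g\n", a + (b + c));   // prints 0
  return 0;
}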


  Thanks,
george.

On May 27, 2009, at 11:16 , vasilis wrote:


Rank 0 accumulates all the res_cpu values into a single array, res.  It
starts with its own res_cpu and then adds all other processes.  When
np=2, that means the order is prescribed.  When np>2, the order is no
longer prescribed and some floating-point rounding variations can start
to occur.


Yes, you are right. Now, the question is: why would these floating-point rounding
variations occur for np>2? It cannot be due to an unprescribed order!!


If you want results to be more deterministic, you need to fix the order
in which res is aggregated.  E.g., instead of using MPI_ANY_SOURCE, loop
over the peer processes in a specific order.
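
A small sketch of that fixed-order aggregation (res and res_cpu follow the names 
used in the thread; the array length and values are placeholders):

#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int N = 1000;
  std::vector<double> res_cpu(N, 1.0 / (rank + 1));    // each rank's partial result

  if (rank == 0) {
    std::vector<double> res(res_cpu), buf(N);
    for (int src = 1; src < size; ++src) {             // fixed order: 1, 2, 3, ...
      MPI_Recv(buf.data(), N, MPI_DOUBLE, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      for (int i = 0; i < N; ++i) res[i] += buf[i];    // additions always in rank order
    }
  } else {
    MPI_Send(res_cpu.data(), N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
  }

  MPI_Finalize();
  return 0;
}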



P.S.  It seems to me that you could use MPI collective operations to
implement what you're doing.  E.g., something like:

I could use these operations for the res variable (will it make the
summation any faster?). But I cannot use them for the other 3 variables.
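
The collective example referred to above is not in the archive; for summing res, 
the kind of call most likely meant is an MPI_Reduce, sketched below with 
placeholder names. Note George's caveat: the order of additions inside the 
reduction is still up to the implementation, so the last digits can vary with the 
number of processes.

#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int N = 1000;
  std::vector<double> res_cpu(N, 1.0 / (rank + 1));    // this rank's partial result
  std::vector<double> res(N, 0.0);                     // summed result, valid on rank 0

  MPI_Reduce(res_cpu.data(), res.data(), N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  MPI_Finalize();
  return 0;
}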
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] Open MPI 2009 released

2009-04-01 Thread Damien Hocking

Outstanding.  I'll have two.

Damien

George Bosilca wrote:

The Open MPI Team, representing a consortium of bailed-out banks, car
manufacturers, and insurance companies, is pleased to announce the
release of the "unbreakable" / bug-free version Open MPI 2009,
(expected to be available by mid-2011).  This release is essentially a
complete rewrite of Open MPI based on new technologies such as C#,
Java, and object-oriented Cobol (so say we all!).  Buffer overflows
and memory leaks are now things of the past.  We strongly recommend
that all users upgrade to Windows 7 to fully take advantage of the new
powers embedded in Open MPI.

This version can be downloaded from the The Onion web site or from
many BitTorrent networks (seeding now; the Open MPI ISO is
approximately 3.97GB -- please wait for the full upload).

Here is an abbreviated list of changes in Open MPI 2009 as compared to
the previous version:

- Dropped support for MPI 2 in favor of the newly enhanced MPI 11.7
 standard.  MPI_COOK_DINNER support is only available with additional
 equipment (some assembly may be required).  An experimental PVM-like
 API has been introduced to deal with the current limitations of the
 MPI 11.7 API.
- Added a Twitter network transport capable of achieving peta-scale
 per second bandwidth (but only on useless data).
- Dropped support for the barely-used x86 and x86_64 architectures in
 favor of the most recent ARM6 architecture.  As a direct result,
 several Top500 sites are planning to convert from their now obsolete
 peta-scale machines to high-reliability iPhone clusters using the
 low-latency AT&T 3G network.
- The iPhone iMPI app (powered by iOpen MPI) is now downloadable from
 the iTunes Store.  Blackberry support will be included in a future
 release.
- Fix all compiler errors related to the PGI 8.0 compiler by
 completely dropping support.
- Add some "green" features for energy savings.  The new "--bike"
 mpirun option will only run your parallel jobs only during the
 operation hours of the official Open MPI biking team.  The
 "--preload-result" option will directly embed the final result in
 the parallel execution, leading to more scalable and reliable runs
 and decreasing the execution time of any parallel application under
 the real-time limit of 1 second.  Open MPI is therefore EnergyStar
 compliant when used with these options.
- In addition to moving Open MPI's lowest point-to-point transports to
 be an external project, limited support will be offered for
 industry-standard platforms.  Our focus will now be to develop
 highly scalable transports based on widely distributed technologies
 such as SMTP, High Performance Gopher (v3.8 and later), OLE COMM,
 RSS/Atom, DNS, and Bonjour.
- Opportunistic integration with Conflicker in order to utilize free
 resources distributed world-wide.
- Support for all Fortran versions prior to Fortran 2020 has been
 dropped.

Make today an Open MPI day!


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


