Re: [OMPI users] successful story of building openmpi on cygwin?

2011-10-28 Thread Yue Guan
Hi, Shiqing

Thanks for the info. Usually I code in Cygwin on my PC and move the
code to clusters later, so I am trying to set up Open MPI on Cygwin.

Best

-- Yue

On Fri, Oct 28, 2011 at 6:09 AM, Shiqing Fan  wrote:
> Hi Yue,
>
> If you want to build Open MPI on Windows, there is another way, using
> CMake and Visual Studio; take a look at the Windows README file in the
> source root.
>
> If you want to use the GNU compilers, you might also use CMake with MinGW,
> but the MinGW support is only basic.
>
>
> Regards,
> Shiqing
>
> On 2011-10-27 6:14 AM, Yue Guan wrote:
>>
>> Hi, there
>>
>> Are there any success stories of building Open MPI on Cygwin? I ran the
>> default build process and got stuck at /opal/mca/maffinity with "gcc:
>> vfork: Resource temporarily unavailable".
>>
>> Best
>>
>> --Yue
>
>
> --
> ---
> Shiqing Fan
> High Performance Computing Center Stuttgart (HLRS)
> Tel: ++49(0)711-685-87234      Nobelstrasse 19
> Fax: ++49(0)711-685-65832      70569 Stuttgart
> http://www.hlrs.de/organization/people/shiqing-fan/
> email: f...@hlrs.de
>
>



Re: [OMPI users] How to override default hostfile to specify host

2011-10-28 Thread Ralph Castain
On Oct 28, 2011, at 11:16 AM, Saurabh T wrote:

> 
> Hi,
> 
> If I use "orterun -H <host>" and <host> does not belong in the default
> hostfile ("etc/openmpi-default-hostfile"), Open MPI gives an error. Is there
> an easy way to get the aforementioned command to work without specifying a
> different hostfile with <host> in it? Thank you.

Not currently, I'm afraid.
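
For reference, the usual fallback is the explicit hostfile the question hoped
to avoid; a minimal sketch, where "node5", "myhosts", and ./app are
placeholder names:

   # one-line hostfile listing the extra host
   echo "node5 slots=4" > myhosts
   orterun --hostfile myhosts -np 4 ./app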






[OMPI users] How to override default hostfile to specify host

2011-10-28 Thread Saurabh T

Hi,

If I use "orterun -H <host>" and <host> does not belong in the default
hostfile ("etc/openmpi-default-hostfile"), Open MPI gives an error. Is there an
easy way to get the aforementioned command to work without specifying a
different hostfile with <host> in it? Thank you.



Re: [OMPI users] successful story of building openmpi on cygwin?

2011-10-28 Thread John R. Cary
I have been trying to build with mingw, including mingw32-gfortran, with 
no luck.

Using mingw32-4.5.1, openmpi-1.5.4.

Has anyone gotten mingw32 with gfortran to work with Open MPI?

Thx, John



On 10/28/11 4:09 AM, Shiqing Fan wrote:

Hi Yue,

If you want to build Open MPI on Windows, there is another way, using
CMake and Visual Studio; take a look at the Windows README file in the
source root.


If you want to use the GNU compilers, you might also use CMake with
MinGW, but the MinGW support is only basic.



Regards,
Shiqing

On 2011-10-27 6:14 AM, Yue Guan wrote:

Hi, there

Are there any success stories of building Open MPI on Cygwin? I ran the
default build process and got stuck at /opal/mca/maffinity with "gcc:
vfork: Resource temporarily unavailable".

Best

--Yue








Re: [OMPI users] Problem launching application on windows

2011-10-28 Thread Alex van 't Veer
Hi Shiqing,

 

Unfortunately that did not solve the problem.

Can you tell me something more about how the sockets work and how they could
get corrupted? Maybe I can figure out what is going wrong.

 

Thanks

 



From: Shiqing Fan [mailto:f...@hlrs.de] 
Sent: Friday, October 28, 2011 12:16 PM
To: Open MPI Users
Cc: Alex van 't Veer
Subject: Re: [OMPI users] Problem launching application on windows

 

Hi,

This doesn't look normal; this error is mainly caused by improper sockets.
I don't have any clue at the moment, as I can't reproduce it.

Could you try reinstalling Open MPI, and make sure there is no other
installation on your system? If that still doesn't work, try Open MPI 1.5.3.
Please let me know whether either of these works for you.

Regards,
Shiqing

On 2011-10-27 11:35 AM, Alex van 't Veer wrote: 

Hi

 

I've installed the Open MPI 1.5.4-1 64-bit binaries on Windows 7. When I run
mpirun.exe without any options, I get the help text and everything seems to
work fine, but when I try to actually run an application, I get the following
error:

..\..\..\openmpi-1.5.4\opal\event\event.c: ompi_evesel->dispatch() failed.

I get the error when running any application; to rule out my own application I
tried the hello world example, and it returns the same error. (The command I
used is "mpirun.exe helloworld.exe".)
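
For reference, a minimal MPI hello world of the kind being tested looks
roughly like this (a sketch; not necessarily the exact example shipped with
Open MPI):

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char **argv)
   {
       int rank, size;
       MPI_Init(&argc, &argv);               /* start the MPI runtime */
       MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
       MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
       printf("Hello from rank %d of %d\n", rank, size);
       MPI_Finalize();                       /* shut the runtime down */
       return 0;
   }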

Searching for the error in the list and looking at event.c didn't get me much
further. Can anyone point me in the right direction for solving this problem?

 

Thanks












-- 
---
Shiqing Fan
High Performance Computing Center Stuttgart (HLRS)
Tel: ++49(0)711-685-87234  Nobelstrasse 19
Fax: ++49(0)711-685-65832  70569 Stuttgart
http://www.hlrs.de/organization/people/shiqing-fan/
email: f...@hlrs.de
 
 
 


Re: [OMPI users] Problem launching application on windows

2011-10-28 Thread Shiqing Fan

Hi,

This doesn't look normal; this error is mainly caused by improper
sockets. I don't have any clue at the moment, as I can't reproduce it.


Could you try reinstalling Open MPI, and make sure there is no other
installation on your system? If that still doesn't work, try
Open MPI 1.5.3. Please let me know whether either of these works for you.


Regards,
Shiqing

On 2011-10-27 11:35 AM, Alex van 't Veer wrote:


Hi

I've installed the Open MPI 1.5.4-1 64-bit binaries on Windows 7. When I
run mpirun.exe without any options, I get the help text and everything
seems to work fine, but when I try to actually run an application, I get
the following error:


..\..\..\openmpi-1.5.4\opal\event\event.c: ompi_evesel->dispatch() failed.

I get the error when running any application; to rule out my own
application I tried the hello world example, and it returns the same
error. (The command I used is "mpirun.exe helloworld.exe".)


Searching for the error in the list and looking at event.c didn't get
me much further. Can anyone point me in the right direction for
solving this problem?


Thanks






--
---
Shiqing Fan
High Performance Computing Center Stuttgart (HLRS)
Tel: ++49(0)711-685-87234  Nobelstrasse 19
Fax: ++49(0)711-685-65832  70569 Stuttgart
http://www.hlrs.de/organization/people/shiqing-fan/
email: f...@hlrs.de






Re: [OMPI users] successful story of building openmpi on cygwin?

2011-10-28 Thread Shiqing Fan

Hi Yue,

If you want to build Open MPI on Windows, there is another way, using
CMake and Visual Studio; take a look at the Windows README file in the
source root.


If you want to use the GNU compilers, you might also use CMake with
MinGW, but the MinGW support is only basic.
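
For reference, a rough sketch of the CMake flow described above; the
generator names depend on the installed CMake and Visual Studio versions,
so treat them as examples rather than the exact invocation from the
Windows README:

   mkdir build
   cd build
   # MSVC route (generator name varies by VS version):
   cmake -G "Visual Studio 10" ..
   # or the only-basically-supported MinGW route:
   cmake -G "MinGW Makefiles" ..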



Regards,
Shiqing

On 2011-10-27 6:14 AM, Yue Guan wrote:

Hi, there

Are there any success stories of building Open MPI on Cygwin? I ran the
default build process and got stuck at /opal/mca/maffinity with "gcc:
vfork: Resource temporarily unavailable".

Best

--Yue




--
---
Shiqing Fan
High Performance Computing Center Stuttgart (HLRS)
Tel: ++49(0)711-685-87234  Nobelstrasse 19
Fax: ++49(0)711-685-65832  70569 Stuttgart
http://www.hlrs.de/organization/people/shiqing-fan/
email: f...@hlrs.de



Re: [OMPI users] Hybrid MPI/Pthreads program behaves differently on two different machines with same hardware (solved)

2011-10-28 Thread 吕慧伟
After trying several kernel versions, the problem is narrowed down to the
change from kernel 2.6.22 to 2.6.23. Finally I found the big change in the
process scheduler in 2.6.23: the Completely Fair Scheduler.
http://kernelnewbies.org/Linux_2_6_23#head-f3a847a5aace97932f838027c93121321a6499e7

It says:
Applications that depend *heavily* on sched_yield()'s behaviour (like, f.e.,
many benchmarks) can suffer from huge performance gains/losses due to the
very very subtle semantics of what sched_yield() should do and how CFS
changes them. There's a sysctl at /proc/sys/kernel/sched_compat_yield that
you can set to "1" to change the sched_yield() behaviour that you should try
in those cases.

After setting /proc/sys/kernel/sched_compat_yield to "1", my hybrid
application performs well again.
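
For anyone hitting the same problem, the switch can be flipped like this (as
root; a sketch based on the sysctl path quoted above, valid only on kernels
that still provide it):

   # restore the legacy sched_yield() behaviour under CFS
   echo 1 > /proc/sys/kernel/sched_compat_yield
   # equivalently:
   sysctl -w kernel.sched_compat_yield=1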

--
Huiwei Lv
http://asg.ict.ac.cn/lhw/

On Tue, Oct 25, 2011 at 10:26 PM, Ralph Castain  wrote:

> My best guess is that you are seeing differences in scheduling behavior
> with respect to memory locale. I notice that you are not binding your
> processes, and so they are free to move around the various processors on the
> node. I would guess that your thread is winding up on a processor that is
> non-local to your memory in one case, but local to your memory in the other.
> This is an OS-related scheduler decision.
>
> You might try binding your processes to see if it helps. With threads, you
> don't really want to bind to a core, but binding to a socket should help.
> Try adding --bind-to-socket to your mpirun cmd line (you can't do this if
> you run it as a singleton - have to use mpirun).
>
>
> On Oct 25, 2011, at 2:45 AM, 吕慧伟 wrote:
>
> Thanks, Ralph. Yes, I have taken that into account. The problem is not
> comparing two procs with one proc, but the "multi-threading effect":
> multi-threading helps on the first machine for one and two procs, but on
> the second machine the benefit disappears for two procs.
>
> To narrow down the problem, I reinstalled the operating system on the
> second machine from SUSE 11 (kernel 2.6.32.12, gcc 4.3.4) to Red Hat 5.4
> (kernel 2.6.18, gcc 4.1.2), which is similar to the first machine (CentOS
> 5.3, kernel 2.6.18, gcc 4.1.2). Then the problem disappears, so it must
> lie somewhere in the OS kernel or GCC version. Any suggestions? Thanks.
>
> --
> Huiwei Lv
>
> On Tue, Oct 25, 2011 at 3:11 PM, Ralph Castain  wrote:
>
>> Okay - thanks for testing it.
>>
>> Of course, one obvious difference is that there isn't any communication
>> when you run only one proc, but there is when you run two or more, assuming
>> your application has MPI send/recv (or calls collective and other functions
>> that communicate) calls in it. Communication to yourself is very fast as no
>> bits actually move - sending messages to another proc is considerably
>> slower.
>>
>> Are you taking that into account?
>>
>>
>> On Oct 24, 2011, at 8:47 PM, 吕慧伟 wrote:
>>
>> No. There's a difference between "mpirun -np 1 ./my_hybrid_app..."
>> and "mpirun -np 2 ./...".
>>
>> Running "mpirun -np 1 ./my_hybrid_app..." increases performance with
>> more threads, but running "mpirun -np 2 ./..." decreases performance.
>>
>> --
>> Huiwei Lv
>>
>> On Tue, Oct 25, 2011 at 12:00 AM,  wrote:
>>
>>>
>>> Date: Mon, 24 Oct 2011 07:14:21 -0600
>>> From: Ralph Castain 
>>> Subject: Re: [OMPI users] Hybrid MPI/Pthreads program behaves
>>> differently on two different machines with same hardware
>>> To: Open MPI Users 
>>> Message-ID: <42c53d0b-1586-4001-b9d2-d77af0033...@open-mpi.org>
>>> Content-Type: text/plain; charset="utf-8"
>>>
>>> Does the difference persist if you run the single process using mpirun?
>>> In other words, does "mpirun -np 1 ./my_hybrid_app..." behave the same as
>>> "mpirun -np 2 ./..."?
>>>
>>> There is a slight difference in the way procs start when run as
>>> singletons. It shouldn't make a difference here, but worth testing.
>>>
>>> On Oct 24, 2011, at 12:37 AM, 吕慧伟 wrote:
>>>
>>> > Dear List,
>>> >
>>> > I have a hybrid MPI/Pthreads program named "my_hybrid_app"; this
>>> program is memory-intensive and takes advantage of multi-threading to
>>> improve memory throughput. I run "my_hybrid_app" on two machines, which
>>> have the same hardware configuration but different OS and GCC versions.
>>> The problem is: when I run "my_hybrid_app" with one process, the two
>>> machines behave the same: the more threads, the better the performance.
>>> However, when I run "my_hybrid_app" with two or more processes, the first
>>> machine still increases performance with more threads, while the second
>>> machine degrades in performance with more threads.
>>> >
>>> > Since running "my_hybrid_app" with one process behaves correctly, I
>>> suspect my linking to the MPI library has some problem. Would somebody
>>> point me in the right direction? Thanks
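
For reference, the socket binding Ralph suggests earlier in the thread would
look roughly like this on the mpirun command line (a sketch using the
--bind-to-socket flag named in his reply, with "my_hybrid_app" as above):

   mpirun -np 2 --bind-to-socket ./my_hybrid_app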