Re: [OMPI users] Directed to Undirected Graph

2012-06-05 Thread Reuti
Hi,

On 05.06.2012, at 15:39, Mudassar Majeed wrote:

> Let's say there are N MPI processes. Each MPI process
> has to communicate with some T processes, where T < N. This information is a
> directed graph (and every process knows only about its own edges). I need to
> convert it to an undirected graph, so that each process informs its T target
> processes about itself. Every process will update this information (which may
> be stored in an array of maximum size N). What is the best way to exchange
> this information among all MPI processes? MPI_Allgather and MPI_Allgatherv
> do not solve my problem.

I'm not sure whether I understand the problem correctly: in principle you
want to gather all the information from each process (say, at the rank 0
process) and then broadcast the complete information to all processes.

Why doesn't MPI_Allgather work for you then? Each element would be a vector
of length N in which the indices of the T targeted processes are flagged.
Each process then has to look into the received vectors (from all processes)
to see whether it is mentioned there. Can each process be targeted by more
than one of the N processes?
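
A minimal sketch of this flag-vector scheme (variable names are mine, and
filling the vector from the locally known edges is left to the application):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* out[j] = 1 if this rank has a directed edge to rank j */
    int *out = calloc(nprocs, sizeof(int));
    /* room for the full N x N adjacency matrix */
    int *all = malloc((size_t)nprocs * nprocs * sizeof(int));

    /* ... set out[j] = 1 for each locally known target j ... */

    /* every rank contributes its row; afterwards every rank
       holds the complete matrix */
    MPI_Allgather(out, nprocs, MPI_INT, all, nprocs, MPI_INT,
                  MPI_COMM_WORLD);

    /* column 'rank' lists who targets me, i.e. my incoming edges */
    for (int i = 0; i < nprocs; i++)
        if (all[i * nprocs + rank]) {
            /* record the reverse edge i -> rank in the local view */
        }

    free(out);
    free(all);
    MPI_Finalize();
    return 0;
}

Note that this moves N*N integers, which is fine for moderate N but grows
quadratically; the Allgather/Allgatherv variant in the next reply scales
with the number of edges instead.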

-- Reuti


Re: [OMPI users] Directed to Undirected Graph

2012-06-05 Thread Michael Raymond
  You need to use two calls. One option is an Allgather followed by an
Allgatherv (sketched below):


Allgather() with one integer, which is the number of nodes the rank is
linked to
Allgatherv() with a variable-size array of integers, where each entry is
a connected-to node
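
A minimal sketch of this two-call scheme (a ring of right-neighbour edges
stands in for the real graph; all names are assumptions):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* example edge list: every rank targets its right neighbour */
    int nout = 1;
    int myedges[1] = { (rank + 1) % nprocs };

    /* call 1: how many edges does each rank contribute? */
    int *counts = malloc(nprocs * sizeof(int));
    MPI_Allgather(&nout, 1, MPI_INT, counts, 1, MPI_INT, MPI_COMM_WORLD);

    /* displacements for the variable-length gather */
    int *displs = malloc(nprocs * sizeof(int));
    int total = 0;
    for (int i = 0; i < nprocs; i++) {
        displs[i] = total;
        total += counts[i];
    }

    /* call 2: gather everybody's edge list */
    int *alledges = malloc(total * sizeof(int));
    MPI_Allgatherv(myedges, nout, MPI_INT,
                   alledges, counts, displs, MPI_INT, MPI_COMM_WORLD);

    /* rank i's targets are alledges[displs[i]] ..
       alledges[displs[i] + counts[i] - 1]; scanning them for my own
       rank yields my incoming edges */

    free(counts);
    free(displs);
    free(alledges);
    MPI_Finalize();
    return 0;
}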



On 06/05/2012 08:39 AM, Mudassar Majeed wrote:


Dear people,
Let's say there are N MPI processes. Each MPI process has to communicate
with some T processes, where T < N. This information is a directed graph
(and every process knows only about its own edges). I need to convert it to
an undirected graph, so that each process informs its T target processes
about itself. Every process will update this information (which may be
stored in an array of maximum size N). What is the best way to
exchange this information among all MPI processes? MPI_Allgather and
MPI_Allgatherv do not solve my problem.


best regards,


-- Mudassar





--
Michael A. Raymond
SGI MPT Team Leader
(651) 683-3434



Re: [OMPI users] problem with sctp.h on Solaris

2012-06-05 Thread TERRY DONTJE
This looks like a missing check in the sctp configure.m4.  I am working 
on a patch.
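
For reference, "present but cannot be compiled" usually means the header
was compile-tested without its prerequisites. Judging from the undefined
types in the log (sockaddr_storage and socklen_t come from <sys/socket.h>,
ipaddr_t and in6_addr_t from <netinet/in.h>), a test along these lines
should pass; this is my reading of the log, not the actual patch:

/* guessed prerequisites for <netinet/sctp.h> on Solaris 10 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

int main(void) { return 0; }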


--td

On 6/5/2012 10:10 AM, Siegmar Gross wrote:

Hello,

I compiled "openmpi-1.6" on "Solaris 10 sparc" and "Solaris 10 x86"
with "gcc-4.6.2" and "Sun C 5.12". Today I searched my log-files for
"WARNING" and found the following message.

WARNING: netinet/sctp.h: present but cannot be compiled
WARNING: netinet/sctp.h: check for missing prerequisite headers?
WARNING: netinet/sctp.h: see the Autoconf documentation
WARNING: netinet/sctp.h: section "Present But Cannot Be Compiled"
WARNING: netinet/sctp.h: proceeding with the compiler's result
WARNING: ## ------------------------------------------------------- ##
WARNING: ## Report this to http://www.open-mpi.org/community/help/  ##
WARNING: ## ------------------------------------------------------- ##

Looking in "config.log" showed that some types are undefined.

tyr openmpi-1.6-SunOS.sparc.64_cc 323 grep sctp config.log
configure:119568: result: elan, mx, ofud, openib, portals, sctp, sm, tcp, udapl
configure:125730: checking for MCA component btl:sctp compile mode
configure:125752: checking --with-sctp value
configure:125862: checking --with-sctp-libdir value
configure:125946: checking netinet/sctp.h usability
"/usr/include/netinet/sctp.h", line 228:
   incomplete struct/union/enum sockaddr_storage: spc_aaddr
"/usr/include/netinet/sctp.h", line 530: syntax error before or at: socklen_t
"/usr/include/netinet/sctp.h", line 533: syntax error before or at: socklen_t
"/usr/include/netinet/sctp.h", line 537: syntax error before or at: socklen_t
"/usr/include/netinet/sctp.h", line 772: syntax error before or at: ipaddr_t
"/usr/include/netinet/sctp.h", line 779: syntax error before or at: in6_addr_t
| #include
...

The missing types are defined via. In which files must
I include this header file to avoid the warning? Thank you very much
for any help in advance.


Kind regards

Siegmar



--
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.don...@oracle.com 





Re: [OMPI users] seg fault with intel compiler

2012-06-05 Thread Edmund Sumbar
First of all, thanks to everyone who took the trouble to offer suggestions.

The solution seems to be to upgrade the Intel compilers. However, I'm not
the cluster admin, so other crucial changes may have been implemented. For
example, I know that ssh was reconfigured over the weekend (but that
shouldn't impact OMPI in a Torque environment).

In any case, I went from version 12.1.0.233 (Build 20110811) to 12.1.4.319
(Build 20120410), and rebuilt Open MPI 1.6. After that, all tests worked,
for any number of tasks.

-- 
Edmund Sumbar
University of Alberta
+1 780 492 9360


[OMPI users] MPI and gprof

2012-06-05 Thread TAY wee-beng

Hi,

I am trying to use gprof with my MPI code. I googled and saw this message:

Open-MPI and gprof:

Yes, you can profile MPI applications by compiling with -pg. However, by
default each process will produce an output file called "gmon.out",
which is a problem if all processes are writing to the same global file
system (i.e. all processes will try to write to the same file).

There is an undocumented feature of gprof that allows you to specify the
filename for profiling output via the environment variable
GMON_OUT_PREFIX. For example, one can set this variable in the .bashrc
file for every node to ensure unique profile filenames, i.e.:

export GMON_OUT_PREFIX='gmon.out-'`/bin/uname -n`

The filename will appear as GMON_OUT_PREFIX.pid, where pid is the
process id on a given node (so this will work when multiple processes
are contained in a single host).


However, this message was written in 2009. I wonder whether it is still the
current method. Also, in that case, if I run on 10 CPUs, I will have 10 such
output files. Is it possible to get an averaged result instead of individual
results?


I ran the MPI code without using the above instructions and got just one
file, gmon.out. Does that mean this result is only for the current node?
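
For the merging part, gprof itself can sum several profile files with its
-s option; a sketch, assuming per-host files named as in the
GMON_OUT_PREFIX example above and a binary called my_mpi_app:

gprof -s ./my_mpi_app gmon.out-*    # sums all profiles into gmon.sum
gprof ./my_mpi_app gmon.sum         # report over the merged data

This gives summed rather than averaged counts, but the relative
proportions across functions are the same.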


Thank you!

--
Yours sincerely,

TAY wee-beng



[OMPI users] problem with sctp.h on Solaris

2012-06-05 Thread Siegmar Gross
Hello,

I compiled "openmpi-1.6" on "Solaris 10 sparc" and "Solaris 10 x86"
with "gcc-4.6.2" and "Sun C 5.12". Today I searched my log-files for
"WARNING" and found the following message.

WARNING: netinet/sctp.h: present but cannot be compiled
WARNING: netinet/sctp.h: check for missing prerequisite headers?
WARNING: netinet/sctp.h: see the Autoconf documentation
WARNING: netinet/sctp.h: section "Present But Cannot Be Compiled"
WARNING: netinet/sctp.h: proceeding with the compiler's result
WARNING: ## ------------------------------------------------------- ##
WARNING: ## Report this to http://www.open-mpi.org/community/help/  ##
WARNING: ## ------------------------------------------------------- ##

Looking in "config.log" showed that some types are undefined.

tyr openmpi-1.6-SunOS.sparc.64_cc 323 grep sctp config.log
configure:119568: result: elan, mx, ofud, openib, portals, sctp, sm, tcp, udapl
configure:125730: checking for MCA component btl:sctp compile mode
configure:125752: checking --with-sctp value
configure:125862: checking --with-sctp-libdir value
configure:125946: checking netinet/sctp.h usability
"/usr/include/netinet/sctp.h", line 228:
  incomplete struct/union/enum sockaddr_storage: spc_aaddr
"/usr/include/netinet/sctp.h", line 530: syntax error before or at: socklen_t
"/usr/include/netinet/sctp.h", line 533: syntax error before or at: socklen_t
"/usr/include/netinet/sctp.h", line 537: syntax error before or at: socklen_t
"/usr/include/netinet/sctp.h", line 772: syntax error before or at: ipaddr_t
"/usr/include/netinet/sctp.h", line 779: syntax error before or at: in6_addr_t
| #include 
...

The missing types are defined via . In which files must
I include this header file to avoid the warning? Thank you very much
for any help in advance.


Kind regards

Siegmar



[OMPI users] Directed to Undirected Graph

2012-06-05 Thread Mudassar Majeed

Dear people, 
   Let's say there are N MPI processes. Each MPI process has
to communicate with some T processes, where T < N. This information is a
directed graph (and every process knows only about its own edges). I need to
convert it to an undirected graph, so that each process informs its T target
processes about itself. Every process will update this information (which may
be stored in an array of maximum size N). What is the best way to exchange this
information among all MPI processes? MPI_Allgather and MPI_Allgatherv do not
solve my problem.


best regards,


-- Mudassar


Re: [OMPI users] problems compiling openmpi-1.6 on some platforms

2012-06-05 Thread Jeff Squyres
We'll get this applied for 1.6.1.

Thanks!


On Jun 5, 2012, at 5:29 AM, Siegmar Gross wrote:

> Hello,
> 
>> the patch below fixes the build issue on Solaris 10. Please apply it to Open 
>> MPI 1.6 as follows:
>> 
>> $ cd openmpi-1.6
>> $ patch -p1 < ...
>> $ make ; make install
> 
> Thank you for the patch. Now it works.
> 
> 
> Kind regards
> 
> Siegmar
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] problems compiling openmpi-1.6 on some platforms

2012-06-05 Thread Siegmar Gross
Hello,

> the patch below fixes the build issue on Solaris 10. Please apply it to Open 
> MPI 1.6 as follows:
> 
> $ cd openmpi-1.6
> $ patch -p1 < ...
> $ make ; make install

Thank you for the patch. Now it works.


Kind regards

Siegmar



Re: [OMPI users] problems compiling openmpi-1.6 on some platforms

2012-06-05 Thread Matthias Jurenz
Hello,

the patch below fixes the build issue on Solaris 10. Please apply it to Open 
MPI 1.6 as follows:

$ cd openmpi-1.6
$ patch -p1 < ...
$ make ; make install

> From: Siegmar Gross
> Subject: [OMPI users] problems compiling openmpi-1.6 on some platforms
> Date: May 30, 2012 7:29:31 AM EDT
> To: 
> Reply-To: Siegmar Gross , Open MPI Users
> 
> Hi,
> 
> I tried to compile "openmpi-1.6" on "Solaris 10" and Linux
> (openSUSE 12.1) with "gcc-4.6.2" and "Sun C 5.12" (Oracle Solaris
> Studio 12.3) with mainly the following configuration for a 64- and
> 32-bit installation. "-L/usr/local/..." was necessary because "gcc"
> didn't find its 64-bit libraries without this option.
> 
> ../openmpi-1.6/configure --prefix=/usr/local/openmpi-1.6_64_gcc \
>  --libdir=/usr/local/openmpi-1.6_64_gcc/lib64 \
>  LDFLAGS="-m64 -L/usr/local/gcc-4.6.2/lib/sparcv9" \
>  CC="gcc" CPP="cpp" CXX="g++" CXXCPP="cpp" F77="gfortran" \
>  CFLAGS="-m64" CXXFLAGS="-m64" FFLAGS="-m64" FCFLAGS="-m64" \
>  CXXLDFLAGS="-m64" CPPFLAGS="" \
>  C_INCL_PATH="" C_INCLUDE_PATH="" CPLUS_INCLUDE_PATH="" \
>  OBJC_INCLUDE_PATH="" MPIHOME="" \
>  --without-udapl --without-openib \
>  --enable-mpi-f90 --with-mpi-f90-size=small \
>  --enable-heterogeneous --enable-cxx-exceptions \
>  --enable-shared --enable-orterun-prefix-by-default \
>  --with-threads=posix --enable-mpi-thread-multiple \
>  --with-hwloc=internal --with-ft=LAM --enable-sparse-groups \
>  |& tee log.configure.$SYSTEM_ENV.$MACHINE_ENV.64_gcc
> 
> 
> For "cc" I used 'CC="cc" CXX="CC" F77="f77" FC="f95"'. With "gcc"
> I got for example the following error so that I had to add the option
> "--disable-vt" to the gcc-configuration.
> 
> 
> tail -n 20 log.make.SunOS.sparc.64_gcc
> 
> make[5]: Leaving directory `.../ompi/contrib/vt/vt/rfg'
> Making all in vtlib
> make[5]: Entering directory `.../ompi/contrib/vt/vt/vtlib'
>  CC vt_comp_gnu.lo
>  CC vt_iowrap.lo
>  CC vt_iowrap_helper.lo
>  CC vt_libwrap.lo
> ../../../../../../openmpi-1.6/ompi/contrib/vt/vt/vtlib/vt_libwrap.c: In function 'get_libc_errno_ptr':
> ../../../../../../openmpi-1.6/ompi/contrib/vt/vt/vtlib/vt_libwrap.c:106:20: error: called object 'libc_errno' is not a function
> make[5]: *** [vt_libwrap.lo] Error 1
> ...
> 
> 
> With these options I was able to install OpenMPI on some of my
> platforms.
> 
> ls -d /export2/prog/*/openmpi-1.6*
> 
> /export2/prog/Linux_x86/openmpi-1.6_32_gcc
> /export2/prog/Linux_x86_64/openmpi-1.6_32_gcc
> /export2/prog/Linux_x86_64/openmpi-1.6_64_gcc
> /export2/prog/SunOS_sparc/openmpi-1.6_32_cc
> /export2/prog/SunOS_sparc/openmpi-1.6_32_gcc
> /export2/prog/SunOS_sparc/openmpi-1.6_64_cc
> /export2/prog/SunOS_sparc/openmpi-1.6_64_gcc
> /export2/prog/SunOS_x86_64/openmpi-1.6_32_cc
> /export2/prog/SunOS_x86_64/openmpi-1.6_32_gcc
> /export2/prog/SunOS_x86_64/openmpi-1.6_64_cc
> /export2/prog/SunOS_x86_64/openmpi-1.6_64_gcc
> 
> 
> Unfortunately "cc" on Linux creates the following error.
> 
> ln -s "../../../openmpi-1.6/opal/asm/generated/atomic-ia32-linux-nongas.s" atomic-asm.S
>  CPPAS  atomic-asm.lo
> :19:0: warning: "__FLT_EVAL_METHOD__" redefined [enabled by default]
> :110:0: note: this is the location of the previous definition
> cpp: fatal error: -fuse-linker-plugin, but liblto_plugin.so not found
> compilation terminated.
> cc: cpp failed for atomic-asm.S
> make[2]: *** [atomic-asm.lo] Error 1
> make[2]: Leaving directory `/.../opal/asm'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory `/.../opal'
> make: *** [all-recursive] Error 1
> 
> 
> Adding the option "--with-libltdl=internal" (should be the default
> anyway) didn't solve the problem so that I tried to add the options
> "--without-libltdl --disable-dlopen" to the cc-configuration on
> Linux. Unfortunately I still get the above error although I started
> everything in a new directory.
> 
> ln -s "../../../openmpi-1.6/opal/asm/generated/atomic-ia32-linux-nongas.s" atomic-asm.S
>  CPPAS  atomic-asm.lo
> :19:0: warning: "__FLT_EVAL_METHOD__" redefined [enabled by default]
> :110:0: note: this is the location of the previous definition
> cpp: fatal error: -fuse-linker-plugin, but liblto_plugin.so not found
> compilation terminated.
> cc: cpp failed for atomic-asm.S
> make[2]: *** [atomic-asm.lo] Error 1
> make[2]: Leaving directory `/.../opal/asm'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory `/.../opal'
> make: *** [all-recursive] Error 1
> 
> linpc1 openmpi-1.6-Linux.x86.32_cc 94 more config.log 
> This file contains any messages produced by compilers while
> running configure, to aid debugging if configure makes a mistake.
> 
> It was created by Open MPI configure 1.6, which was
> generated by GNU Autoconf 2.68.  Invocation command line was
> 
>  $ ../openmpi-1.6/configure --prefix=/usr/local/openmpi-1.6_32_cc LDFLAGS=-m32
> CC=cc CXX=CC F77=f77 FC=f95 CFLAGS=-m32

Re: [OMPI users] checkpointing/restart of hpl

2012-06-05 Thread Ifeanyi
Thanks Constantinos.

I have gone through the sites you sent me; however, whenever I try to
enable FT, it is not enabled.

This is what I got:

/openmpi-1.6# ./configure --enable-ft-thread --with-ft=cr \
    --with-blcr=/usr/src/blcr-0.8.2
...
FT Checkpoint support: no (checkpoint thread: no)

Please, is there a special way to enable FT?

I also want to test with a real application that runs for about 30 minutes,
but I cannot easily lay my hands on one.
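
(For the eventual test, the basic checkpoint/restart workflow from the
ompi-cr page linked below boils down to roughly the following; the HPL
binary name and the PID are placeholders:

mpirun -np 4 -am ft-enable-cr ./xhpl            # run with C/R enabled
ompi-checkpoint <PID_of_mpirun>                 # take a global snapshot
ompi-restart ompi_global_snapshot_<PID>.ckpt    # resume from it
)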

Please help.

Thanks in advance.

Regards,
Ifeanyi

On Tue, Jun 5, 2012 at 1:44 AM, Constantinos Makassikis
<cmakassi...@gmail.com> wrote:

> Hi,
>
> you may start by looking at http://www.open-mpi.org/faq/?category=ft
> which leads you to
> https://svn.open-mpi.org/trac/ompi/wiki/ProcessFT_CR
> and
> http://osl.iu.edu/research/ft/ompi-cr/
>
> The latter is the most up-to-date link and probably what you are looking
> for.
>
>
> HTH,
>
> --
> Constantinos
>
> On Mon, Jun 4, 2012 at 3:24 AM, Ifeanyi  wrote:
>
>> Dear,
>>
>> I am a new user of Open MPI. I have already installed Open MPI and built
>> HPL. I want to checkpoint/restart HPL and compare its performance.
>>
>> Please can you point me to a useful link that will guide me through
>> checkpointing HPL?
>>
>> thanks in advance.
>>
>> Regards,
>> Ifeanyi