Re: [OMPI users] configuration openMPI problem

2012-11-26 Thread Diego Avesani
Dear all, Dear Tom,

you are right, sorry. I was just too excited: I had finally managed to
configure Open MPI after many hours spent in the FAQ, on Google, and in the
mailing list. I will read more carefully.

Thanks again to you all.

Diego




On 26 November 2012 15:36, Elken, Tom  wrote:

> > Now I would like to test it with a simple hello project. Ralph Castain
> > suggested the following web site:
> > https://wiki.mst.edu/nic/examples/openmpi-intel-fortran90-example
> >
> > This is the result of my simulation:
> >  Hello World! I am0  of1
> >
> > However, I have a quad-core processor, I believe (I ran cat /proc/cpuinfo).
>
> [Tom]
> What was your mpirun command?
>
> Did it have a '-np 4' in it to tell mpirun that you want 4 processes to
> be spawned?
>
> Don't be afraid to read the FAQ on running MPI programs:
> http://www.open-mpi.org/faq/?category=running
>
> -Tom
>
> Thanks a lot
>
> Diego
>
> [...]


Re: [OMPI users] configuration openMPI problem

2012-11-26 Thread Elken, Tom

Now I would like to test it with a simple hello project. Ralph Castain
suggested the following web site:
https://wiki.mst.edu/nic/examples/openmpi-intel-fortran90-example

This is the result of my simulation:
 Hello World! I am0  of1

However, I have a quad-core processor, I believe (I ran cat /proc/cpuinfo).
[Tom]
What was your mpirun command?
Did it have a '-np 4' in it to tell mpirun that you want 4 processes to be
spawned?

Don't be afraid to read the FAQ on running MPI programs:
http://www.open-mpi.org/faq/?category=running

-Tom

Thanks a lot

Diego



On 26 November 2012 13:49, Gus Correa wrote:
Hi Diego

[...]



Re: [OMPI users] configuration openMPI problem

2012-11-26 Thread Diego Avesani
Dear all,
Now it seems to work: the configuration completed, and I also ran "make all
install".

Here is what I did:
1) sudo bash (to put the openmpi folder in /opt)
2) ./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
   N.B. I did not run the configure step itself under sudo (sudo ./configure
   --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort does not work,
   presumably because root's environment does not have the Intel compilers
   on its PATH).
3) After that: make all install

4) I changed my bash as:
source /opt/intel/bin/compilervars.sh intel64
source /opt/intel/mkl/bin/mklvars.sh intel64 mod lp64
export PATH
#openMPI
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
export PATH

And here is what I now get in my bash (tab-completing "mpi"):

laptop:~$ mpi
mpic++ mpiCC-vt   mpicxx-vt  mpif77 mpirun
mpicc  mpicleanup mpiexecmpif77-vt
mpiCC  mpic++-vt  mpiexec.hydra  mpif90
mpicc-vt   mpicxx mpiexec.py mpif90-vt

It seems the installation has worked properly.
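
(A quick way to double-check that these wrappers really sit on top of the
Intel compilers is, for example:

which mpif90                   # should print /opt/openmpi/bin/mpif90
ompi_info | grep -i compiler   # lists the compilers Open MPI was built with
)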

Now I would like to test it with a simple hello project. Ralph Castain
suggested the following web site:
https://wiki.mst.edu/nic/examples/openmpi-intel-fortran90-example
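
(The linked page shows an Intel-compiled Fortran 90 MPI example; a minimal
sketch of that kind of program, reconstructed here for reference -- the
actual code on the page may differ:

program hello
  use mpi
  implicit none
  integer :: rank, nprocs, ierr
  call MPI_Init(ierr)                              ! start the MPI runtime
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! this process's rank
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr) ! total process count
  print *, 'Hello World! I am', rank, ' of', nprocs
  call MPI_Finalize(ierr)                          ! shut down cleanly
end program hello

built and run with something like:

mpif90 hello.f90 -o hello
mpirun -np 4 ./hello
)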

This is the result of my simulation:
 Hello World! I am0  of1

However, I have a quad-core processor, I believe (I ran cat /proc/cpuinfo).

Thanks a lot

Diego




On 26 November 2012 13:49, Gus Correa  wrote:

> Hi Diego
>
> [...]


Re: [OMPI users] Maximum number of MPI processes on a node + discovering faulty nodes

2012-11-26 Thread Jeff Squyres
On Nov 26, 2012, at 4:02 AM, George Markomanolis wrote:

> Another more generic question is about discovering nodes with faulty memory.
> Is there any way to identify them? [...] Initially I thought about memtester
> but it takes a lot of time.

You really do want something like a memory tester.  MPI applications *might* 
beat on your memory to identify errors, but that's really just a side effect of 
HPC access patterns.  You really want a dedicated memory tester.

If such a memory tester takes a long time, you might want to use mpirun to 
launch it on multiple nodes simultaneously to save some time...?
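
(A hedged sketch of that idea, assuming memtester is installed on every
node and that all_nodes is a hostfile listing them -- exact mpirun options
vary by Open MPI version:

mpirun --hostfile all_nodes --pernode memtester 4G 1

--pernode starts one copy per node, testing 4 GB for one iteration; any
node whose memtester reports errors is a candidate for closer inspection.)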

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] configuration openMPI problem

2012-11-26 Thread Gus Correa

Hi Diego

> Dear all, dear Gustavo,
>
> This is my bash.bashrc in ubuntu 12.04:
>
> ##
> /PATH="/opt/intel/bin/compilervars.sh intel64$PATH"/
> /source /opt/intel/bin/compilervars.sh intel64/
> /source /opt/intel/mkl/bin/mklvars.sh intel64 mod lp64/
> /export PATH/
> ##

This is not an OpenMPI problem, but about Linux environment setup.

Anyway, my guess is that all you
need in your .bashrc are these two lines (2 and 3):

source /opt/intel/bin/compilervars.sh intel64
source /opt/intel/mkl/bin/mklvars.sh intel64 mod lp64

The first line is probably messing up your PATH, and the fourth line
may be redundant with the Intel compilervars.sh script.
Try commenting out lines 1 and 4 (with a leading # character),
and leave only lines 2 and 3.

(Note, no '/' in the beginning or at the end of the lines, not sure
if the '/'s are part of your .bashrc or just part of your email.)

After you make the change, then login again, or open
a new terminal/shell window and try these commands:

which icc
which icpc
which ifort
printenv

to make sure your environment is pointing
to the correct Intel compilers.
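
(For example, with the environment set up, "which icc" should print a path
inside your Intel installation -- something like /opt/intel/bin/icc, though
the exact path depends on the Intel version -- while no output, or a
"no icc in ..." reply, means compilervars.sh did not take effect.)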

I hope this helps,
Gus Correa

On 11/26/2012 09:42 AM, Diego Avesani wrote:

[...]




Re: [OMPI users] Maximum number of MPI processes on a node + discovering faulty nodes

2012-11-26 Thread Ralph Castain
What version of OMPI are you using?

On Nov 26, 2012, at 1:02 AM, George Markomanolis wrote:

> Dear all,
> 
> Initially I would like some advice on how to identify the maximum number of
> MPI processes that can be executed on a node with oversubscription. When I
> try to execute an application with 4096 MPI processes on a 24-core node with
> 48GB of memory, I get the error "Unknown error: 1" while memory usage is not
> even at half. I can execute the same application with 2048 MPI processes in
> less than one minute. I have checked the Linux setting for the maximum
> number of processes, and it is much larger than 4096.
>
> [...]
>
> Best regards,
> George Markomanolis




Re: [OMPI users] configuration openMPI problem

2012-11-26 Thread Diego Avesani
Dear all, dear Gustavo,

This is my bash.bashrc in ubuntu 12.04:

##
PATH="/opt/intel/bin/compilervars.sh intel64$PATH"
source /opt/intel/bin/compilervars.sh intel64
source /opt/intel/mkl/bin/mklvars.sh intel64 mod lp64
export PATH
##
I think that is correct according to your mail, so I do not think this is
the problem.
I checked the config.log file. It says:
 checking for gcc
##
configure:5133: result: icc
configure:5362: checking for C compiler version
configure:5371: icc --version >&5
./configure: line 5373: icc: command not found
configure:5382: $? = 127
configure:5371: icc -v >&5
##
When I copy the simple test program from config.log into a new .c file
##
 int
 main ()
 {

   ;
   return 0;
}
##

it works when I compile it with icc

Do I perhaps also need to change the .csh files?
My current Intel version is 13.0; when I installed it, the instructions said
to set compilervars.sh. Moreover, I checked iccvars.sh, ifortvars.sh and
compilervars.sh, and they are the same.

I do not know what to do. Could I compile Open MPI with gcc, g++ and
gfortran, and then use it with Intel Fortran?
Do you think this is an Open MPI problem? Has someone compiled it with
Intel's icc on Linux? Which distro did you use?
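
(For comparison, a GNU-toolchain build would be configured along these
lines:

./configure --prefix=/opt/openmpi CC=gcc CXX=g++ F77=gfortran FC=gfortran

but note that using a gfortran-built Open MPI from Intel Fortran is not
recommended: Fortran module files produced by one compiler are generally
not readable by another.)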

Thanks all

Diego




On 25 November 2012 22:21, Gustavo Correa  wrote:

> source compilervars.sh intel64


Re: [OMPI users] Multiple RPM build fails

2012-11-26 Thread Jeff Squyres
On Nov 26, 2012, at 7:41 AM, Jakub Nowacki wrote:

> Thanks for the information. Sorry that I'm posting it here but bugs mailing 
> list/Trac rejected my e-mail.

No worries; the bugs list is *only* for mails from Trac.

>  I've tested the SRPM and I was able to compile it correctly on both RHEL 5 
> and CentOS 6 with multiple packages on, i.e:
> 
> rpmbuild -bb --define 'build_all_in_one_rpm 0' --define 'configure_options
> --with-sge' /usr/src/redhat/SPECS/openmpi-1.6.3.spec
> 
> Hence, the modified spec-file seems to fix the issue.

Great!  Thanks for the update.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Multiple RPM build fails

2012-11-26 Thread Jakub Nowacki
Hi Jeff,

Thanks for the information. Sorry that I'm posting it here but bugs mailing
list/Trac rejected my e-mail.

 I've tested the SRPM and I was able to compile it correctly on both RHEL 5
and CentOS 6 with multiple packages on, i.e:

rpmbuild -bb --define 'build_all_in_one_rpm 0' --define 'configure_options
--with-sge' /usr/src/redhat/SPECS/openmpi-1.6.3.spec

Hence, the modified spec-file seems to fix the issue.
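
(For anyone hitting the same error before the fixed spec file is released:
the cause is rpmbuild compressing the man pages while the %files list still
names the uncompressed ones, so the usual remedy is to glob the suffix in
the spec, e.g. something along the lines of

%{_mandir}/man3/MPI_*.3*

so that both .3 and .3.gz match -- though the actual change in the Open MPI
spec file may differ.)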

Cheers,

Jakub

On 21 November 2012 12:47, Jeff Squyres  wrote:

> Greetings Jakub; thanks for the bug report.
>
> I've replicated your error.  Off the top of my head, I don't see why this
> is happening.  I see that rpmbuild has compressed the man pages in the
> multi-build scenario (e.g., BUILDROOT contains
> /usr/share/man/man3/MPI_.3.gz -- NOT .../man3/MPI_.3).  So I see
> that the error is *correct*, but I'm not sure offhand why it's not picking
> up the .3.gz files instead of the .3 files.
>
> I've opened https://svn.open-mpi.org/trac/ompi/ticket/3410 to track this
> issue.
>
>
> On Nov 21, 2012, at 4:06 AM, Jakub Nowacki wrote:
>
> > Hi,
> >
> > I tried to build OpenMPI 1.6.3 RPM on RHEL 5.5 and CentOS 6.3 for usage
> with SGE (--with-sge) but the build of multiple RPMs fail with the error:
> >
> > Processing files: openmpi-runtime-1.6.3-1.x86_64
> > error: File not found:
> > /root/rpmbuild/BUILDROOT/openmpi-1.6.3-1.x86_64/usr/share/man/man3/MPI_Comm_remote_size.3
> > error: File not found:
> > /root/rpmbuild/BUILDROOT/openmpi-1.6.3-1.x86_64/usr/share/man/man3/MPI_Comm_remote_group.3
> > Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.0HfCky
> >
> > [...]
> >
> > RPM build errors:
> > File not found:
> > /root/rpmbuild/BUILDROOT/openmpi-1.6.3-1.x86_64/usr/share/man/man3/MPI_Comm_remote_size.3
> > File not found:
> > /root/rpmbuild/BUILDROOT/openmpi-1.6.3-1.x86_64/usr/share/man/man3/MPI_Comm_remote_group.3
> > *** FAILURE BUILDING MULTIPLE RPM!
> >
> > Indeed, these man pages do not seem to be there, but there are gzipped
> > files there:
> >
> > -rw-r--r-- 1 root root 884 Nov 20 15:29
> > /root/rpmbuild/BUILDROOT/openmpi-1.6.3-1.x86_64/usr/share/man/man3/MPI_Comm_remote_group.3.gz
> > -rw-r--r-- 1 root root 904 Nov 20 15:29
> > /root/rpmbuild/BUILDROOT/openmpi-1.6.3-1.x86_64/usr/share/man/man3/MPI_Comm_remote_size.3.gz
> >
> > Interestingly, single RPM build is successful. I get the same error on
> both RHEL 5.5 and CentOS 6.3 using SRPM and tar package along with
> buildrpm.sh script. I have tried to find a solution but most of the sources
> I have found use single RPM build.
> >
> > Thank you very much for the help.
> >
> > Regards,
> >
> > Jakub Nowacki
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>


[OMPI users] Maximum number of MPI processes on a node + discovering faulty nodes

2012-11-26 Thread George Markomanolis

Dear all,

Initially I would like some advice on how to identify the maximum number
of MPI processes that can be executed on a node with oversubscription.
When I try to execute an application with 4096 MPI processes on a 24-core
node with 48GB of memory, I get the error "Unknown error: 1" while memory
usage is not even at half. I can execute the same application with 2048
MPI processes in less than one minute. I have checked the Linux setting
for the maximum number of processes, and it is much larger than 4096.
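
(Besides the process limit, per-user resource limits such as the maximum
number of open file descriptors are a plausible culprit at this scale,
since the runtime opens pipes and sockets for each local process. They can
be checked with, for example:

ulimit -u   # max user processes
ulimit -n   # max open file descriptors
)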


Another, more generic, question is about discovering nodes with faulty
memory. Is there any way to identify them? I found accidentally that a
node with exactly the same hardware couldn't execute an MPI application
when it was using more than 12GB of RAM, while the second one could use
all of its 48GB of memory. If I have 500+ nodes it is difficult to check
all of them, and I am not familiar with any efficient solution. Initially
I thought about memtester, but it takes a lot of time. I know that this
does not belong exactly on this mailing list, but I thought that maybe an
Open MPI user knows something about it.



Best regards,
George Markomanolis