On 5/1/08, Mukesh K Srivastava <srimk...@gmail.com> wrote:
>
> Hi Lenny.
>
> Thanks for responding. To clarify further, I would like to know a few things.
>
> (a) I did modify the make_mpich makefile in the IMB-3.1/src folder, giving
> the path to Open MPI. I am using the mpirun built from Open MPI (v1.2.5),
> and its directories are set in PATH and LD_LIBRARY_PATH.
>
> (b) What is the console command to run an additional source file containing
> MPI API calls? Do I need to add it to Makefile.base in the IMB-3.1/src
> folder, or can it simply be run from the console alongside "$mpirun
> IMB-MPI1"?
>
> (c) Does IMB-3.1 need InfiniBand or TCP support to complete its benchmark
> routines, i.e. do I need to configure and build Open MPI with the
> InfiniBand stack as well?
>

IMB is a set of benchmarks that can be run on one or more machines.
It calls the MPI API, which does all the communication.
MPI decides how to communicate ( IB, TCP, or shared memory ) according to
its priorities and the possible ways to connect to the other host.
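
For example, if you want to force a particular transport you can pass MCA
parameters to mpirun; which BTL components (openib, tcp, sm, self) are
actually available depends on how your Open MPI was built, so take these
lines as a sketch:

#mpirun --mca btl tcp,self -np 2 -H host1,host2 IMB-MPI1
#mpirun --mca btl openib,sm,self -np 2 -H host1,host2 IMB-MPI1

The first line runs over TCP only, the second over InfiniBand plus shared
memory. You can also name a single IMB benchmark on the command line instead
of running the whole suite, e.g. "#mpirun -np 2 -H host1,host2 IMB-MPI1 PingPong".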

You can write your own benchmark or test program, compile it with mpicc, and
run it, for example:
#mpicc -o hello_world hello_world.c
#mpirun -np 2 -H host1,host2 ./hello_world


#cat hello_world.c
/*
* Hewlett-Packard Co., High Performance Systems Division
*
* Function: - example: simple "hello world"
*
* $Revision: 1.1.2.1 $
*/

#include <stdio.h>
#include <stdlib.h>   /* atoi() */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    int to_wait = 0, sleep_diff = 0, max_limit = 0;
    double sleep_start = 0.0, sleep_now = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Get_processor_name(name, &len);

    /* optional first argument: number of seconds to spin */
    if (argc > 1)
    {
        to_wait = atoi(argv[1]);
    }

    /* busy loop for debugging needs */
    if (to_wait)
    {
        sleep_start = MPI_Wtime();
        while (1)
        {
            max_limit++;
            if (max_limit > 100000000)
            {
                fprintf(stdout, "-------- exit loop, to_wait: %d\n", to_wait);
                break;
            }

            sleep_now = MPI_Wtime();
            sleep_diff = (int)(sleep_now - sleep_start);
            if (sleep_diff >= to_wait)
            {
                break;
            }
        }
    }

    if (rank == 0) /* only the first rank prints this message */
    {
        printf("Hello world! I'm %d of %d on %s\n", rank, size, name);
    }

    MPI_Finalize();
    return 0;
}
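
If you later want to time something in your own code rather than run IMB
(your Query#2 below), a minimal ping-pong sketch could look like the
following. The message size, iteration count, and tag are arbitrary choices
of mine, not anything IMB prescribes:

#cat pingpong.c
/* minimal round-trip timing sketch; run with exactly 2 ranks */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define NITER  1000    /* number of round trips (arbitrary) */
#define MSGLEN 1024    /* message size in bytes (arbitrary) */
#define TAG    17

int main(int argc, char *argv[])
{
    int rank, size, i;
    char *buf;
    double t_start, t_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2)
    {
        if (rank == 0)
            fprintf(stderr, "run this with exactly 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    buf = (char *) malloc(MSGLEN);

    MPI_Barrier(MPI_COMM_WORLD);
    t_start = MPI_Wtime();

    for (i = 0; i < NITER; i++)
    {
        if (rank == 0)
        {
            MPI_Send(buf, MSGLEN, MPI_BYTE, 1, TAG, MPI_COMM_WORLD);
            MPI_Recv(buf, MSGLEN, MPI_BYTE, 1, TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        else
        {
            MPI_Recv(buf, MSGLEN, MPI_BYTE, 0, TAG, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSGLEN, MPI_BYTE, 0, TAG, MPI_COMM_WORLD);
        }
    }

    t_end = MPI_Wtime();

    if (rank == 0)
        printf("avg round-trip time: %f usec over %d iterations\n",
               (t_end - t_start) * 1e6 / NITER, NITER);

    free(buf);
    MPI_Finalize();
    return 0;
}

Compile and run it the same way as hello_world above:

#mpicc -o pingpong pingpong.c
#mpirun -np 2 -H host1,host2 ./pingpong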






> (d) I don't see any README in IMB-3.1 or any user guide which tells how to
> execute it; it simply describes each of the 17 benchmarks and the flags to be used.
>
> BR
>
>
> On 4/30/08, Lenny Verkhovsky <lenny.verkhov...@gmail.com> wrote:
> >
> >
> >
> >
> > On 4/30/08, Mukesh K Srivastava <srimk...@gmail.com> wrote:
> > >
> > > Hi.
> > >
> > > I am using IMB-3.1, the Intel MPI Benchmark tool, with Open MPI (v1.2.5).
> > > In the IMB-3.1/src/make_mpich file, I only set the declaration of
> > > MPI_HOME, which takes care of CC, OPTFLAGS & CLINKER. Building IMB-MPI1,
> > > IMB-EXT & IMB-IO succeeds.
> > >
> > > I get proper results from the IMB benchmark with "mpirun -np 1
> > > IMB-MPI1", but with "-np 2" I get the errors below:
> > >
> > > -----
> > > [mukesh@n161 src]$ mpirun -np 2 IMB-MPI1
> > > [n161:13390] *** Process received signal ***
> > > [n161:13390] Signal: Segmentation fault (11)
> > > [n161:13390] Signal code: Address not mapped (1)
> > > [n161:13390] Failing at address: (nil)
> > > [n161:13390] [ 0] /lib64/tls/libpthread.so.0 [0x399e80c4f0]
> > > [n161:13390] [ 1]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_btl_sm.so [0x2a9830f8b4]
> > > [n161:13390] [ 2]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_btl_sm.so [0x2a983109e3]
> > > [n161:13390] [ 3]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_btl_sm.so(mca_btl_sm_component_progress+0xbc)
> > > [0x2a9830fc50]
> > > [n161:13390] [ 4]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_bml_r2.so(mca_bml_r2_progress+0x4b)
> > > [0x2a97fce447]
> > > [n161:13390] [ 5]
> > > /home/mukesh/openmpi/prefix/lib/libopen-pal.so.0(opal_progress+0xbc)
> > > [0x2a958fc343]
> > > [n161:13390] [ 6]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_oob_tcp.so(mca_oob_tcp_msg_wait+0x22)
> > > [0x2a962e9e22]
> > > [n161:13390] [ 7]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_oob_tcp.so(mca_oob_tcp_recv+0x677)
> > > [0x2a962f1aab]
> > > [n161:13390] [ 8]
> > > /home/mukesh/openmpi/prefix/lib/libopen-rte.so.0(mca_oob_recv_packed+0x46)
> > > [0x2a9579d243]
> > > [n161:13390] [ 9]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_gpr_proxy.so(orte_gpr_proxy_put+0x2f3)
> > > [0x2a96508c8f]
> > > [n161:13390] [10]
> > > /home/mukesh/openmpi/prefix/lib/libopen-rte.so.0(orte_smr_base_set_proc_state+0x425)
> > > [0x2a957c391d]
> > > [n161:13390] [11]
> > > /home/mukesh/openmpi/prefix/lib/libmpi.so.0(ompi_mpi_init+0xa1e)
> > > [0x2a9559f042]
> > > [n161:13390] [12]
> > > /home/mukesh/openmpi/prefix/lib/libmpi.so.0(PMPI_Init_thread+0xcb)
> > > [0x2a955e1c5b]
> > > [n161:13390] [13] IMB-MPI1(main+0x33) [0x403543]
> > > [n161:13390] [14] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
> > > [0x399e11c3fb]
> > > [n161:13390] [15] IMB-MPI1 [0x40347a]
> > > [n161:13390] *** End of error message ***
> > > [n161:13391] *** Process received signal ***
> > > [n161:13391] Signal: Segmentation fault (11)
> > > [n161:13391] Signal code: Address not mapped (1)
> > > [n161:13391] Failing at address: (nil)
> > > [n161:13391] [ 0] /lib64/tls/libpthread.so.0 [0x399e80c4f0]
> > > [n161:13391] [ 1]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_btl_sm.so [0x2a9830f8b4]
> > > [n161:13391] [ 2]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_btl_sm.so [0x2a983109e3]
> > > [n161:13391] [ 3]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_btl_sm.so(mca_btl_sm_component_progress+0xbc)
> > > [0x2a9830fc50]
> > > [n161:13391] [ 4]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_bml_r2.so(mca_bml_r2_progress+0x4b)
> > > [0x2a97fce447]
> > > [n161:13391] [ 5]
> > > /home/mukesh/openmpi/prefix/lib/libopen-pal.so.0(opal_progress+0xbc)
> > > [0x2a958fc343]
> > > [n161:13391] [ 6]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_oob_tcp.so(mca_oob_tcp_msg_wait+0x22)
> > > [0x2a962e9e22]
> > > [n161:13391] [ 7]
> > > /home/mukesh/openmpi/prefix/lib/openmpi/mca_oob_tcp.so(mca_oob_tcp_recv+0x677)
> > > [0x2a962f1aab]
> > > [n161:13391] [ 8]
> > > /home/mukesh/openmpi/prefix/lib/libopen-rte.so.0(mca_oob_recv_packed+0x46)
> > > [0x2a9579d243]
> > > [n161:13391] [ 9] /home/mukesh/openmpi/prefix/lib/libopen-rte.so.0
> > > [0x2a9579e910]
> > > [n161:13391] [10]
> > > /home/mukesh/openmpi/prefix/lib/libopen-rte.so.0(mca_oob_xcast+0x140)
> > > [0x2a9579d824]
> > > [n161:13391] [11]
> > > /home/mukesh/openmpi/prefix/lib/libmpi.so.0(ompi_mpi_init+0xaf1)
> > > [0x2a9559f115]
> > > [n161:13391] [12]
> > > /home/mukesh/openmpi/prefix/lib/libmpi.so.0(PMPI_Init_thread+0xcb)
> > > [0x2a955e1c5b]
> > > [n161:13391] [13] IMB-MPI1(main+0x33) [0x403543]
> > > [n161:13391] [14] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
> > > [0x399e11c3fb]
> > > [n161:13391] [15] IMB-MPI1 [0x40347a]
> > > [n161:13391] *** End of error message ***
> > >
> > > -----
> > >
> > > Query#1: Any clue about the above?
> >
> >
> > It worked for me.
> >
> > 1. Maybe mpirun belongs to another MPI installation.
> > 2. Try defining the hosts ( -H host1,host2 ).
> >
> >
> >
> >
> > > Query#2: How can I include a separate executable and run IMB for it,
> > > e.g. writing a hello.c with elementary MPI API calls, compiling it with
> > > mpicc, and benchmarking the resulting executable?
> >
> >
> > You have all the sources;
> > maybe you can find something in IMB's README.
> >
> > Best Regards,
> > Lenny
> >
> >
> > > BR
> > >
> > > _______________________________________________
> > > devel mailing list
> > > de...@open-mpi.org
> > > http://www.open-mpi.org/mailman/listinfo.cgi/devel
> > >
> >
> >
>
