Re: [OMPI users] Reducing libmpi.so size....

2016-11-07 Thread Dave Love
Mahesh Nanavalla writes:

> Hi all,
>
> I am using openmpi-1.10.3.
>
> openmpi-1.10.3 compiled for ARM (cross-compiled on x86_64 for OpenWrt
> Linux) gives a libmpi.so.12.0.3 of 2.4 MB, but if I compile on x86_64
> (Linux), libmpi.so.12.0.3 is 990.2 KB.
>
> Can anyone tell me how to reduce the size of libmpi.so.12.0.3 compiled
> for ARM?

Do what Debian does for armel?

  du -h lib/openmpi/lib/libmpi.so.20.0.1
  804K  lib/openmpi/lib/libmpi.so.20.0.1
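
Debian's packaging strips its shared libraries; a minimal sketch of doing the
same by hand, assuming the OpenWrt cross binutils follow the compiler prefix
used elsewhere in this thread:

  # strip debug info and unneeded symbols from the cross-built library
  arm-openwrt-linux-muslgnueabi-strip --strip-unneeded libmpi.so.12.0.3
  du -h libmpi.so.12.0.3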

[What's ompi useful for on an OpenWrt system?]


Re: [OMPI users] Reducing libmpi.so size....

2016-11-02 Thread George Bosilca
Gilles is right: the script shows only what is used right after MPI_Init,
and it will disregard some of the less mainstream types of modules, the
ones that are dynamically loaded as needed during the execution. It also
shows only what is related to libmpi, and ignores everything related
to ORTE that is not in use inside the MPI library. However, it does allow
you to define a list of necessary modules that you can then use during
configure to limit the size of your MPI library.

1. If your goal is to limit the size of the library for a limited set of
applications, you can do the following. Instead of generating an app, use
the output of the script to generate a function. You can then link it with
your application(s). Calling the function right before your MPI_Finalize
will allow you to dump the entire list of modules used in your
application(s).

2. During configure, use the option --enable-mca-no-build="list" to remove
all unnecessary modules from the build process. The configure will ignore
them, and therefore they will not end up in your libmpi.so (see the sketch
after this list).

3. Some of the frameworks are dynamically selected for each communicator or
peer process (e.g. collective and BTL), so it might be difficult and
error-prone to trim them down further.
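
As a hedged sketch of step 2 (the excluded components below are placeholders,
not a recommended list; substitute whatever your dump reports as unused):

  # omit the named framework-component pairs from the build entirely
  ./configure --enable-mca-no-build=coll-ml,btl-portals4 [other configure options]
  make all install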

  George.



On Wed, Nov 2, 2016 at 12:28 AM, Gilles Gouaillardet wrote:

> Did you strip the libraries already?
>
>
> The script will show the list of frameworks and components used by MPI
> helloworld.
>
> From that, you can deduce a list of components that are not required,
> exclude them via the configure command line, and rebuild a trimmed Open MPI.
>
> Note this is pretty painful and incomplete. For example, the ompi/io
> components are not explicitly required by MPI helloworld, but they are
> required if your app uses MPI-IO (e.g. MPI_File_xxx).
>
> Some more components might be dynamically required by a real-world MPI app.
>
>
> May I ask why you are focusing on reducing the lib size?
>
> Reducing the lib size by excluding (allegedly) useless components is a
> long and painful process, and you might end up having to debug new
> problems on your own ...
>
> As far as I am concerned, if a few MB of libs is too big (filesystem?
> memory?), I do not see how a real-world application can even run on your
> ARM node.
>
>
> Cheers,
>
>
> Gilles
> On 11/2/2016 12:49 PM, Mahesh Nanavalla wrote:
>
> Hi George,
> Thanks for the reply.
>
> Using the above script, how can I reduce the libmpi.so size?
>
>
>
> On Tue, Nov 1, 2016 at 11:27 PM, George Bosilca wrote:
>
>> Let's try to coerce OMPI to dump all modules that are still loaded after
>> MPI_Init. We will still get a superset of the needed modules, but at
>> least everything unnecessary in your particular environment will have
>> been trimmed, as during a normal OMPI run.
>>
>> George.
>>
>> PS: It's a shell script that needs ag to run. You need to provide the
>> OMPI source directory. You will get a C file (named tmp.c) in the current
>> directory that contains the code necessary to dump all active modules. You
>> will have to fiddle with the compile line to get it to work, as you will
>> need to specify both the source and build header file directories. For the
>> sake of completeness, here is my compile line:
>>
>> mpicc -o tmp -g tmp.c -I. -I../debug/opal/include -I../debug/ompi/include
>> -Iompi/include -Iopal/include -Iopal/mca/event/libevent2022/libevent
>> -Iorte/include -I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
>> -Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include -I../debug/
>> -lopen-rte -lopen-pal
>>
>>
>>
>> On Tue, Nov 1, 2016 at 7:12 AM, Jeff Squyres (jsquyres) <
>> jsquy...@cisco.com> wrote:
>>
>>> Run ompi_info; it will tell you all the plugins that are installed.
>>>
>>> > On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla <
>>> mahesh.nanavalla...@gmail.com> wrote:
>>> >
>>> > Hi Jeff Squyres,
>>> >
>>> > Thank you for your reply...
>>> >
>>> > My problem is I want to reduce the library size by removing unwanted
>>> plugins.
>>> >
>>> > Here libmpi.so.12.0.3 is 2.4 MB.
>>> >
>>> > How can I know which plugins were included in the build of
>>> libmpi.so.12.0.3, and how can I remove them?
>>> >
>>> > Thanks,
>>> > Mahesh N
>>> >
>>> > On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
>>> jsquy...@cisco.com> wrote:
>>> > On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
>>> mahesh.nanavalla...@gmail.com> wrote:
>>> > >
>>> > > I have configured as below for ARM:
>>> > >
>>> > > ./configure --enable-orterun-prefix-by-default
>>> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>>> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
>>> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
>>> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
>>> --disable-java --disable-libompitrace --disable-static
>>> >
>>> > Note that there is a tradeoff here: --enable-dlopen will reduce the
>>> size of libmpi.so by splitting out all the 

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread Gilles Gouaillardet

Did you strip the libraries already?


The script will show the list of frameworks and components used by MPI
helloworld.


From that, you can deduce a list of components that are not required,
exclude them via the configure command line, and rebuild a trimmed Open MPI.


Note this is pretty painful and incomplete. For example, the ompi/io
components are not explicitly required by MPI helloworld, but they are
required if your app uses MPI-IO (e.g. MPI_File_xxx).

Some more components might be dynamically required by a real-world MPI app.


May I ask why you are focusing on reducing the lib size?

Reducing the lib size by excluding (allegedly) useless components is a
long and painful process, and you might end up having to debug new
problems on your own ...

As far as I am concerned, if a few MB of libs is too big (filesystem?
memory?), I do not see how a real-world application can even run on
your ARM node.



Cheers,


Gilles

On 11/2/2016 12:49 PM, Mahesh Nanavalla wrote:

Hi George,
Thanks for the reply.

Using the above script, how can I reduce the libmpi.so size?



On Tue, Nov 1, 2016 at 11:27 PM, George Bosilca wrote:


Let's try to coerce OMPI to dump all modules that are still loaded
after MPI_Init. We will still get a superset of the needed
modules, but at least everything unnecessary in your particular
environment will have been trimmed, as during a normal OMPI run.

George.

PS: It's a shell script that needs ag to run. You need to provide
the OMPI source directory. You will get a C file (named tmp.c) in
the current directory that contains the code necessary to dump all
active modules. You will have to fiddle with the compile line to
get it to work, as you will need to specify both the source and
build header file directories. For the sake of completeness, here
is my compile line:

mpicc -o tmp -g tmp.c -I. -I../debug/opal/include
-I../debug/ompi/include -Iompi/include -Iopal/include
-Iopal/mca/event/libevent2022/libevent -Iorte/include
-I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
-Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include
-I../debug/ -lopen-rte -lopen-pal



On Tue, Nov 1, 2016 at 7:12 AM, Jeff Squyres (jsquyres) wrote:

Run ompi_info; it will tell you all the plugins that are
installed.

> On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla wrote:
>
> Hi Jeff Squyres,
>
> Thank you for your reply...
>
> My problem is I want to reduce the library size by removing
unwanted plugins.
>
> Here libmpi.so.12.0.3 is 2.4 MB.
>
> How can I know which plugins were included in the build of
libmpi.so.12.0.3, and how can I remove them?
>
> Thanks,
> Mahesh N
>
> On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) wrote:
> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla wrote:
> >
> > I have configured as below for ARM:
> >
> > ./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
CC=arm-openwrt-linux-muslgnueabi-gcc
CXX=arm-openwrt-linux-muslgnueabi-g++
--host=arm-openwrt-linux-muslgnueabi
--enable-script-wrapper-compilers --disable-mpi-fortran
--enable-dlopen --enable-shared --disable-vt --disable-java
--disable-libompitrace --disable-static
>
> Note that there is a tradeoff here: --enable-dlopen will
reduce the size of libmpi.so by splitting out all the plugins
into separate DSOs (dynamic shared objects -- i.e., individual
.so plugin files).  But note that some of the plugins are quite
small in terms of code.  I mention this because when you
dlopen a DSO, it will load in DSOs in units of pages.  So even
if a DSO only has 1KB of code, it will use a full page of
bytes in your running process (e.g., 4KB -- or whatever the
page size is on your system).
>
> On the other hand, if you --disable-dlopen, then all of Open
MPI's plugins are slurped into libmpi.so (and friends).
Meaning: no DSOs, no dlopen, no page-boundary-loading
behavior.  This allows the compiler/linker to pack all the
plugins into memory more efficiently (because they'll be
compiled as part of libmpi.so, and all the code is packed in
there -- just like any other library).  Your total memory
usage in the process may be smaller.
>
> Sidenote: if you run more than one MPI process per node,
  

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread Mahesh Nanavalla
Hi George,
Thanks for the reply.

Using the above script, how can I reduce the libmpi.so size?



On Tue, Nov 1, 2016 at 11:27 PM, George Bosilca wrote:

> Let's try to coerce OMPI to dump all modules that are still loaded after
> MPI_Init. We will still get a superset of the needed modules, but at
> least everything unnecessary in your particular environment will have
> been trimmed, as during a normal OMPI run.
>
> George.
>
> PS: It's a shell script that needs ag to run. You need to provide the OMPI
> source directory. You will get a C file (named tmp.c) in the current
> directory that contains the code necessary to dump all active modules. You
> will have to fiddle with the compile line to get it to work, as you will
> need to specify both the source and build header file directories. For the
> sake of completeness, here is my compile line:
>
> mpicc -o tmp -g tmp.c -I. -I../debug/opal/include -I../debug/ompi/include
> -Iompi/include -Iopal/include -Iopal/mca/event/libevent2022/libevent
> -Iorte/include -I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
> -Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include -I../debug/
> -lopen-rte -lopen-pal
>
>
>
> On Tue, Nov 1, 2016 at 7:12 AM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> Run ompi_info; it will tell you all the plugins that are installed.
>>
>> > On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> >
>> > Hi Jeff Squyres,
>> >
>> > Thank you for your reply...
>> >
>> > My problem is I want to reduce the library size by removing unwanted
>> plugins.
>> >
>> > Here libmpi.so.12.0.3 is 2.4 MB.
>> >
>> > How can I know which plugins were included in the build of
>> libmpi.so.12.0.3, and how can I remove them?
>> >
>> > Thanks,
>> > Mahesh N
>> >
>> > On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
>> jsquy...@cisco.com> wrote:
>> > On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> > >
>> > > I have configured as below for ARM:
>> > >
>> > > ./configure --enable-orterun-prefix-by-default
>> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
>> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
>> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
>> --disable-java --disable-libompitrace --disable-static
>> >
>> > Note that there is a tradeoff here: --enable-dlopen will reduce the
>> size of libmpi.so by splitting out all the plugins into separate DSOs
>> (dynamic shared objects -- i.e., individual .so plugin files).  But note
>> that some of the plugins are quite small in terms of code.  I mention this
>> because when you dlopen a DSO, it will load in DSOs in units of pages.  So
>> even if a DSO only has 1KB of code, it will use a full page of bytes in
>> your running process (e.g., 4KB -- or whatever the page size is on your
>> system).
>> >
>> > On the other hand, if you --disable-dlopen, then all of Open MPI's
>> plugins are slurped into libmpi.so (and friends).  Meaning: no DSOs, no
>> dlopen, no page-boundary-loading behavior.  This allows the compiler/linker
>> to pack all the plugins into memory more efficiently (because they'll be
>> compiled as part of libmpi.so, and all the code is packed in there -- just
>> like any other library).  Your total memory usage in the process may be
>> smaller.
>> >
>> > Sidenote: if you run more than one MPI process per node, then libmpi.so
>> (and friends) will be shared between processes.  You're assumedly running
>> in an embedded environment, so I don't know if this factor matters (i.e., I
>> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>> >
>> > On the other hand (that's your third hand, for those at home
>> counting...), you may not want to include *all* the plugins.  I.e., there
>> may be a bunch of plugins that you're not actually using, and therefore if
>> they are compiled in as part of libmpi.so (and friends), they're consuming
>> space that you don't want/need.  So the dlopen mechanism might actually be
>> better -- because Open MPI may dlopen a plugin at run time, determine that
>> it won't be used, and then dlclose it (i.e., release the memory that would
>> have been used for it).
>> >
>> > On the other (fourth!) hand, you can actually tell Open MPI to *not*
>> build specific plugins with the --enable-dso-no-build=LIST configure
>> option.  I.e., if you know exactly what plugins you want to use, you can
>> negate the ones that you *don't* want to use on the configure line, use
>> --disable-static and --disable-dlopen, and you'll likely use the least
>> amount of memory.  This is admittedly a bit clunky, but Open MPI's
>> configure process was (obviously) not optimized for this use case -- it's
>> much more optimized to the "build everything possible, and figure out which
>> to use at run time" use case.
>> >
>> > If you really want to hit rock bottom on MPI 

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread George Bosilca
Let's try to coerce OMPI to dump all modules that are still loaded after
MPI_Init. We will still get a superset of the needed modules, but at
least everything unnecessary in your particular environment will have
been trimmed, as during a normal OMPI run.

George.

PS: It's a shell script that needs ag to run. You need to provide the OMPI
source directory. You will get a C file (named tmp.c) in the current
directory that contains the code necessary to dump all active modules. You
will have to fiddle with the compile line to get it to work, as you will
need to specify both the source and build header file directories. For the
sake of completeness, here is my compile line:

mpicc -o tmp -g tmp.c -I. -I../debug/opal/include -I../debug/ompi/include
-Iompi/include -Iopal/include -Iopal/mca/event/libevent2022/libevent
-Iorte/include -I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
-Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include -I../debug/
-lopen-rte -lopen-pal



On Tue, Nov 1, 2016 at 7:12 AM, Jeff Squyres (jsquyres) wrote:

> Run ompi_info; it will tell you all the plugins that are installed.
>
> > On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
> >
> > Hi Jeff Squyres,
> >
> > Thank you for your reply...
> >
> > My problem is I want to reduce the library size by removing unwanted
> plugins.
> >
> > Here libmpi.so.12.0.3 is 2.4 MB.
> >
> > How can I know which plugins were included in the build of
> libmpi.so.12.0.3, and how can I remove them?
> >
> > Thanks,
> > Mahesh N
> >
> > On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
> > On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
> > >
> > > I have configured as below for ARM:
> > >
> > > ./configure --enable-orterun-prefix-by-default
> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" 
> CC=arm-openwrt-linux-muslgnueabi-gcc
> CXX=arm-openwrt-linux-muslgnueabi-g++ --host=arm-openwrt-linux-muslgnueabi
> --enable-script-wrapper-compilers --disable-mpi-fortran --enable-dlopen
> --enable-shared --disable-vt --disable-java --disable-libompitrace
> --disable-static
> >
> > Note that there is a tradeoff here: --enable-dlopen will reduce the size
> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
> shared objects -- i.e., individual .so plugin files).  But note that some
> of the plugins are quite small in terms of code.  I mention this because when
> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
> only has 1KB of code, it will use a full page of bytes in your running
> process (e.g., 4KB -- or whatever the page size is on your system).
> >
> > On the other hand, if you --disable-dlopen, then all of Open MPI's
> plugins are slurped into libmpi.so (and friends).  Meaning: no DSOs, no
> dlopen, no page-boundary-loading behavior.  This allows the compiler/linker
> to pack all the plugins into memory more efficiently (because they'll be
> compiled as part of libmpi.so, and all the code is packed in there -- just
> like any other library).  Your total memory usage in the process may be
> smaller.
> >
> > Sidenote: if you run more than one MPI process per node, then libmpi.so
> (and friends) will be shared between processes.  You're assumedly running
> in an embedded environment, so I don't know if this factor matters (i.e., I
> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
> >
> > On the other hand (that's your third hand, for those at home
> counting...), you may not want to include *all* the plugins.  I.e., there
> may be a bunch of plugins that you're not actually using, and therefore if
> they are compiled in as part of libmpi.so (and friends), they're consuming
> space that you don't want/need.  So the dlopen mechanism might actually be
> better -- because Open MPI may dlopen a plugin at run time, determine that
> it won't be used, and then dlclose it (i.e., release the memory that would
> have been used for it).
> >
> > On the other (fourth!) hand, you can actually tell Open MPI to *not*
> build specific plugins with the --enable-dso-no-build=LIST configure
> option.  I.e., if you know exactly what plugins you want to use, you can
> negate the ones that you *don't* want to use on the configure line, use
> --disable-static and --disable-dlopen, and you'll likely use the least
> amount of memory.  This is admittedly a bit clunky, but Open MPI's
> configure process was (obviously) not optimized for this use case -- it's
> much more optimized to the "build everything possible, and figure out which
> to use at run time" use case.
> >
> > If you really want to hit rock bottom on MPI process size in your
> embedded environment, you can do some experimentation to figure out exactly
> which components you need.  You can use repeated runs with "mpirun --mca
> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
> names ("framework" = collection of plugins 

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread Jeff Squyres (jsquyres)
Run ompi_info; it will tell you all the plugins that are installed.
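
A hedged sketch of that (output details vary across releases):

  ompi_info | grep "MCA"                 # one line per installed component
  ompi_info --parsable | grep "mca:btl"  # machine-readable, one framework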

> On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla wrote:
> 
> Hi Jeff Squyres,
> 
> Thank you for your reply...
> 
> My problem is I want to reduce the library size by removing unwanted plugins.
> 
> Here libmpi.so.12.0.3 is 2.4 MB.
> 
> How can I know which plugins were included in the build of libmpi.so.12.0.3,
> and how can I remove them?
> 
> Thanks,
> Mahesh N
> 
> On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) wrote:
> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla wrote:
> >
> > I have configured as below for ARM:
> >
> > ./configure --enable-orterun-prefix-by-default  
> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" 
> > CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++ 
> > --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers 
> > --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt 
> > --disable-java --disable-libompitrace --disable-static
> 
> Note that there is a tradeoff here: --enable-dlopen will reduce the size of
> libmpi.so by splitting out all the plugins into separate DSOs (dynamic shared
> objects -- i.e., individual .so plugin files).  But note that some of the
> plugins are quite small in terms of code.  I mention this because when you
> dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO only
> has 1KB of code, it will use a full page of bytes in your running process
> (e.g., 4KB -- or whatever the page size is on your system).
> 
> On the other hand, if you --disable-dlopen, then all of Open MPI's plugins 
> are slurped into libmpi.so (and friends).  Meaning: no DSOs, no dlopen, no 
> page-boundary-loading behavior.  This allows the compiler/linker to pack
> all the plugins into memory more efficiently (because they'll be compiled as 
> part of libmpi.so, and all the code is packed in there -- just like any other 
> library).  Your total memory usage in the process may be smaller.
> 
> Sidenote: if you run more than one MPI process per node, then libmpi.so (and 
> friends) will be shared between processes.  You're assumedly running in an 
> embedded environment, so I don't know if this factor matters (i.e., I don't 
> know if you'll run with ppn>1), but I thought I'd mention it anyway.
> 
> On the other hand (that's your third hand, for those at home counting...), 
> you may not want to include *all* the plugins.  I.e., there may be a bunch of 
> plugins that you're not actually using, and therefore if they are compiled in 
> as part of libmpi.so (and friends), they're consuming space that you don't 
> want/need.  So the dlopen mechanism might actually be better -- because Open 
> MPI may dlopen a plugin at run time, determine that it won't be used, and 
> then dlclose it (i.e., release the memory that would have been used for it).
> 
> On the other (fourth!) hand, you can actually tell Open MPI to *not* build 
> specific plugins with the --enable-dso-no-build=LIST configure option.  I.e., 
> if you know exactly what plugins you want to use, you can negate the ones 
> that you *don't* want to use on the configure line, use --disable-static and 
> --disable-dlopen, and you'll likely use the least amount of memory.  This is 
> admittedly a bit clunky, but Open MPI's configure process was (obviously) not 
> optimized for this use case -- it's much more optimized to the "build 
> everything possible, and figure out which to use at run time" use case.
> 
> If you really want to hit rock bottom on MPI process size in your embedded 
> environment, you can do some experimentation to figure out exactly which 
> components you need.  You can use repeated runs with "mpirun --mca 
> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework names 
> ("framework" = collection of plugins of the same type).  This verbose output 
> will show you exactly which components are opened, which ones are used, and 
> which ones are discarded.  You can build up a list of all the discarded 
> components and --enable-mca-no-build them.
> 
> > While I am running using mpirun,
> > I am getting the following error:
> > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1 
> > /usr/bin/openmpiWiFiBulb
> > --
> > Sorry!  You were supposed to get help about:
> > opal_init:startup:internal-failure
> > But I couldn't open the help file:
> > 
> > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
> >  No such file or directory.  Sorry!
> 
> So this is really two errors:
> 
> 1. The help message file is not being found.
> 2. Something is obviously going wrong during opal_init() (which is one of 
> Open MPI's startup functions).
> 
> For #1, when I do a default build of Open MPI 1.10.3, that file *is* 
> installed.  Are you trimming the installation tree, 

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread Mahesh Nanavalla
Hi all,

Thank you for your reply...

My problem is I want to reduce the library size by removing unwanted plugins.

Here libmpi.so.12.0.3 is 2.4 MB.

How can I know which plugins were included in the build of libmpi.so.12.0.3,
and how can I remove them?

Thanks,
Mahesh N

On Tue, Nov 1, 2016 at 11:43 AM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Jeff Squyres,
>
> Thank you for your reply...
>
> My problem is I want to reduce the library size by removing unwanted
> plugins.
>
> Here libmpi.so.12.0.3 is 2.4 MB.
>
> How can I know which plugins were included in the build of
> libmpi.so.12.0.3, and how can I remove them?
>
> Thanks,
> Mahesh N
>
> On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
>> mahesh.nanavalla...@gmail.com> wrote:
>> >
>> > I have configured as below for ARM:
>> >
>> > ./configure --enable-orterun-prefix-by-default
>> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
>> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
>> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
>> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
>> --disable-java --disable-libompitrace --disable-static
>>
>> Note that there is a tradeoff here: --enable-dlopen will reduce the size
>> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
>> shared objects -- i.e., individual .so plugin files).  But note that some
>> of the plugins are quite small in terms of code.  I mention this because when
>> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
>> only has 1KB of code, it will use a full page of bytes in your running
>> process (e.g., 4KB -- or whatever the page size is on your system).
>>
>> On the other hand, if you --disable-dlopen, then all of Open MPI's
>> plugins are slurped into libmpi.so (and friends).  Meaning: no DSOs, no
>> dlopen, no page-boundary-loading behavior.  This allows the compiler/linker
>> to pack all the plugins into memory more efficiently (because they'll be
>> compiled as part of libmpi.so, and all the code is packed in there -- just
>> like any other library).  Your total memory usage in the process may be
>> smaller.
>>
>> Sidenote: if you run more than one MPI process per node, then libmpi.so
>> (and friends) will be shared between processes.  You're assumedly running
>> in an embedded environment, so I don't know if this factor matters (i.e., I
>> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>>
>> On the other hand (that's your third hand, for those at home
>> counting...), you may not want to include *all* the plugins.  I.e., there
>> may be a bunch of plugins that you're not actually using, and therefore if
>> they are compiled in as part of libmpi.so (and friends), they're consuming
>> space that you don't want/need.  So the dlopen mechanism might actually be
>> better -- because Open MPI may dlopen a plugin at run time, determine that
>> it won't be used, and then dlclose it (i.e., release the memory that would
>> have been used for it).
>>
>> On the other (fourth!) hand, you can actually tell Open MPI to *not*
>> build specific plugins with the --enable-dso-no-build=LIST configure
>> option.  I.e., if you know exactly what plugins you want to use, you can
>> negate the ones that you *don't* want to use on the configure line, use
>> --disable-static and --disable-dlopen, and you'll likely use the least
>> amount of memory.  This is admittedly a bit clunky, but Open MPI's
>> configure process was (obviously) not optimized for this use case -- it's
>> much more optimized to the "build everything possible, and figure out which
>> to use at run time" use case.
>>
>> If you really want to hit rock bottom on MPI process size in your
>> embedded environment, you can do some experimentation to figure out exactly
>> which components you need.  You can use repeated runs with "mpirun --mca
>> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
>> names ("framework" = collection of plugins of the same type).  This verbose
>> output will show you exactly which components are opened, which ones are
>> used, and which ones are discarded.  You can build up a list of all the
>> discarded components and --enable-mca-no-build them.
>>
>> > While I am running using mpirun,
>> > I am getting the following error:
>> > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
>> /usr/bin/openmpiWiFiBulb
>> > 
>> --
>> > Sorry!  You were supposed to get help about:
>> > opal_init:startup:internal-failure
>> > But I couldn't open the help file:
>> > 
>> > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
>> No such file or directory.  Sorry!
>>
>> So this is really two errors:
>>
>> 1. The help message file is not being found.
>> 2. 

Re: [OMPI users] Reducing libmpi.so size....

2016-11-01 Thread Mahesh Nanavalla
Hi Jeff Squyres,

Thank you for your reply...

My problem is I want to reduce the library size by removing unwanted plugins.

Here libmpi.so.12.0.3 is 2.4 MB.

How can I know which plugins were included in the build of libmpi.so.12.0.3,
and how can I remove them?

Thanks,
Mahesh N

On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres) wrote:

> On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla <
> mahesh.nanavalla...@gmail.com> wrote:
> >
> > I have configured as below for ARM:
> >
> > ./configure --enable-orterun-prefix-by-default  
> > --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
> --disable-java --disable-libompitrace --disable-static
>
> Note that there is a tradeoff here: --enable-dlopen will reduce the size
> of libmpi.so by splitting out all the plugins into separate DSOs (dynamic
> shared objects -- i.e., individual .so plugin files).  But note that some
> of the plugins are quite small in terms of code.  I mention this because when
> you dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO
> only has 1KB of code, it will use a full page of bytes in your running
> process (e.g., 4KB -- or whatever the page size is on your system).
>
> On the other hand, if you --disable-dlopen, then all of Open MPI's plugins
> are slurped into libmpi.so (and friends).  Meaning: no DSOs, no dlopen, no
> page-boundary-loading behavior.  This allows the compiler/linker to pack
> all the plugins into memory more efficiently (because they'll be compiled
> as part of libmpi.so, and all the code is packed in there -- just like any
> other library).  Your total memory usage in the process may be smaller.
>
> Sidenote: if you run more than one MPI process per node, then libmpi.so
> (and friends) will be shared between processes.  You're assumedly running
> in an embedded environment, so I don't know if this factor matters (i.e., I
> don't know if you'll run with ppn>1), but I thought I'd mention it anyway.
>
> On the other hand (that's your third hand, for those at home counting...),
> you may not want to include *all* the plugins.  I.e., there may be a bunch
> of plugins that you're not actually using, and therefore if they are
> compiled in as part of libmpi.so (and friends), they're consuming space
> that you don't want/need.  So the dlopen mechanism might actually be better
> -- because Open MPI may dlopen a plugin at run time, determine that it
> won't be used, and then dlclose it (i.e., release the memory that would
> have been used for it).
>
> On the other (fourth!) hand, you can actually tell Open MPI to *not* build
> specific plugins with the --enable-dso-no-build=LIST configure option.
> I.e., if you know exactly what plugins you want to use, you can negate the
> ones that you *don't* want to use on the configure line, use
> --disable-static and --disable-dlopen, and you'll likely use the least
> amount of memory.  This is admittedly a bit clunky, but Open MPI's
> configure process was (obviously) not optimized for this use case -- it's
> much more optimized to the "build everything possible, and figure out which
> to use at run time" use case.
>
> If you really want to hit rock bottom on MPI process size in your embedded
> environment, you can do some experimentation to figure out exactly which
> components you need.  You can use repeated runs with "mpirun --mca
> ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework
> names ("framework" = collection of plugins of the same type).  This verbose
> output will show you exactly which components are opened, which ones are
> used, and which ones are discarded.  You can build up a list of all the
> discarded components and --enable-mca-no-build them.
>
> > While I am running using mpirun,
> > I am getting the following error:
> > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
> /usr/bin/openmpiWiFiBulb
> > 
> --
> > Sorry!  You were supposed to get help about:
> > opal_init:startup:internal-failure
> > But I couldn't open the help file:
> > 
> > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
> No such file or directory.  Sorry!
>
> So this is really two errors:
>
> 1. The help message file is not being found.
> 2. Something is obviously going wrong during opal_init() (which is one of
> Open MPI's startup functions).
>
> For #1, when I do a default build of Open MPI 1.10.3, that file *is*
> installed.  Are you trimming the installation tree, perchance?  If so, if
> you can put at least that one file back in its installation location (it's
> in the Open MPI source tarball), it might reveal more information on
> exactly what is failing.
>
> Additionally, I wonder if shared memory is 

Re: [OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Jeff Squyres (jsquyres)
On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla wrote:
> 
> I have configured as below for ARM:
> 
> ./configure --enable-orterun-prefix-by-default  
> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" 
> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++ 
> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers 
> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt 
> --disable-java --disable-libompitrace --disable-static

Note that there is a tradeoff here: --enable-dlopen will reduce the size of
libmpi.so by splitting out all the plugins into separate DSOs (dynamic shared
objects -- i.e., individual .so plugin files).  But note that some of the
plugins are quite small in terms of code.  I mention this because when you
dlopen a DSO, it will load in DSOs in units of pages.  So even if a DSO only
has 1KB of code, it will use a full page of bytes in your running process
(e.g., 4KB -- or whatever the page size is on your system).

On the other hand, if you --disable-dlopen, then all of Open MPI's plugins are 
slurped into libmpi.so (and friends).  Meaning: no DSOs, no dlopen, no 
page-boundary-loading behavior.  This allows the compiler/linker to pack all
the plugins into memory more efficiently (because they'll be compiled as part 
of libmpi.so, and all the code is packed in there -- just like any other 
library).  Your total memory usage in the process may be smaller.

Sidenote: if you run more than one MPI process per node, then libmpi.so (and 
friends) will be shared between processes.  You're assumedly running in an 
embedded environment, so I don't know if this factor matters (i.e., I don't 
know if you'll run with ppn>1), but I thought I'd mention it anyway.

On the other hand (that's your third hand, for those at home counting...), you 
may not want to include *all* the plugins.  I.e., there may be a bunch of 
plugins that you're not actually using, and therefore if they are compiled in 
as part of libmpi.so (and friends), they're consuming space that you don't 
want/need.  So the dlopen mechanism might actually be better -- because Open 
MPI may dlopen a plugin at run time, determine that it won't be used, and then 
dlclose it (i.e., release the memory that would have been used for it).

On the other (fourth!) hand, you can actually tell Open MPI to *not* build 
specific plugins with the --enable-dso-no-build=LIST configure option.  I.e., 
if you know exactly what plugins you want to use, you can negate the ones that 
you *don't* want to use on the configure line, use --disable-static and 
--disable-dlopen, and you'll likely use the least amount of memory.  This is 
admittedly a bit clunky, but Open MPI's configure process was (obviously) not 
optimized for this use case -- it's much more optimized to the "build 
everything possible, and figure out which to use at run time" use case.
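
A hedged sketch of that combination, spelled with the --enable-mca-no-build
option used in the next paragraph (the excluded components are placeholders):

  # build everything into libmpi.so except the listed framework-component pairs
  ./configure --disable-dlopen --disable-static \
      --enable-mca-no-build=coll-ml,btl-portals4 \
      [cross-compile options as earlier in this thread]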

If you really want to hit rock bottom on MPI process size in your embedded 
environment, you can do some experimentation to figure out exactly which 
components you need.  You can use repeated runs with "mpirun --mca 
ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework names 
("framework" = collection of plugins of the same type).  This verbose output 
will show you exactly which components are opened, which ones are used, and 
which ones are discarded.  You can build up a list of all the discarded 
components and --enable-mca-no-build them.
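
A sketch of that experiment (the framework names below are just examples;
ompi_info lists the full set):

  # one verbose run per framework; the logs show which components are
  # opened, selected, and discarded
  for f in btl coll pml shmem; do
      mpirun --mca ${f}_base_verbose 100 -np 2 ./mpi_helloworld > ${f}.log 2>&1
  done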

> While I am running using mpirun,
> I am getting the following error:
> root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1 
> /usr/bin/openmpiWiFiBulb
> --
> Sorry!  You were supposed to get help about:
> opal_init:startup:internal-failure
> But I couldn't open the help file:
> 
> /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt: 
> No such file or directory.  Sorry!

So this is really two errors:

1. The help message file is not being found.
2. Something is obviously going wrong during opal_init() (which is one of Open 
MPI's startup functions).

For #1, when I do a default build of Open MPI 1.10.3, that file *is* installed. 
 Are you trimming the installation tree, perchance?  If so, if you can put at 
least that one file back in its installation location (it's in the Open MPI 
source tarball), it might reveal more information on exactly what is failing.
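
If the tree was trimmed, a hedged sketch of restoring just that file (the
source-tree location is assumed from a 1.10-era tarball and may differ):

  cp openmpi-1.10.3/opal/runtime/help-opal-runtime.txt \
     /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/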

Additionally, I wonder if shared memory is not getting set up right.  Try
running with "mpirun --mca shmem_base_verbose 100 ..." and see if it's 
reporting an error.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Mahesh Nanavalla
Hi Gilles,

Thanks for the reply.

I have configured as below for ARM:

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
--host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
--disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
--disable-java --disable-libompitrace --disable-static

While I am running using mpirun,
I am getting the following error:
root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
/usr/bin/openmpiWiFiBulb
--
Sorry!  You were supposed to get help about:
opal_init:startup:internal-failure
But I couldn't open the help file:
/home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
No such file or directory.  Sorry!


Kindly guide me...

On Fri, Oct 28, 2016 at 5:34 PM, Mahesh Nanavalla <
mahesh.nanavalla...@gmail.com> wrote:

> Hi Gilles,
>
> Thanks for the reply.
>
> I have configured as below for ARM:
>
> ./configure --enable-orterun-prefix-by-default  
> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
> --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
> --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
> --disable-java --disable-libompitrace --disable-static
>
> While I am running using mpirun,
> I am getting the following error:
> root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
> /usr/bin/openmpiWiFiBulb
> --
> Sorry!  You were supposed to get help about:
> opal_init:startup:internal-failure
> But I couldn't open the help file:
> 
> /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
> No such file or directory.  Sorry!
>
>
> Kindly guide me...
>
> On Fri, Oct 28, 2016 at 4:36 PM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Hi,
>>
>> I do not know if you can expect the same lib size on x86_64 and ARM.
>> x86_64 uses variable-length instructions, and since ARM is RISC, I
>> assume instructions are fixed-length, so more instructions are
>> required to achieve the same result.
>> Also, 2.4 MB does not seem huge to me.
>>
>> Anyway, make sure you did not compile with -g, and that you use similar
>> optimization levels on both arches.
>> You also have to be consistent with respect to the --disable-dlopen option
>> (by default, it is off, so all components are in
>> /.../lib/openmpi/mca_*.so; if you configure with --disable-dlopen, all
>> components are slurped into lib{open-pal,open-rte,mpi}.so,
>> and this obviously increases lib size).
>> Depending on your compiler, you might be able to optimize for code
>> size (vs performance) with the appropriate flags.
>>
>> Last but not least, strip your libs before you compare their sizes.
>>
>> Cheers,
>>
>> Gilles
>>
>> On Fri, Oct 28, 2016 at 3:17 PM, Mahesh Nanavalla wrote:
>> > Hi all,
>> >
>> > I am using openmpi-1.10.3.
>> >
>> > openmpi-1.10.3 compiled for ARM (cross-compiled on x86_64 for OpenWrt
>> Linux)
>> > gives a libmpi.so.12.0.3 of 2.4 MB, but if I compile on x86_64 (Linux),
>> > libmpi.so.12.0.3 is 990.2 KB.
>> >
>> > Can anyone tell me how to reduce the size of libmpi.so.12.0.3 compiled
>> > for ARM?
>> >
>> > Thanks,
>> > Mahesh.N
>> >
>>
>
>
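
A hedged sketch of the quoted advice (the exact size-optimization flags depend
on the cross toolchain):

  # rebuild optimized for size, without -g, then strip before comparing
  ./configure CFLAGS="-Os" CXXFLAGS="-Os" [cross-compile options as above]
  make all install
  arm-openwrt-linux-muslgnueabi-strip --strip-unneeded libmpi.so.12.0.3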

[OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Mahesh Nanavalla
Hi all,

I am using openmpi-1.10.3.

openmpi-1.10.3 compiled for ARM (cross-compiled on x86_64 for OpenWrt
Linux) gives a libmpi.so.12.0.3 of 2.4 MB, but if I compile on x86_64
(Linux), libmpi.so.12.0.3 is 990.2 KB.

Can anyone tell me how to reduce the size of libmpi.so.12.0.3 compiled
for ARM?

Thanks,
Mahesh.N