Did you already strip the libraries?

The script will show the list of frameworks and components used by MPI helloworld.

From that, you can deduce a list of components that are not required, exclude them via the configure command line, and rebuild a trimmed Open MPI.
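
For example (the component names below are only placeholders; the real
list should come out of the script output and your own app), a trimmed
rebuild could look something like:

    ./configure --enable-mca-no-build=coll-ml,btl-usnic [...your other configure options...]
    make && make install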

Note this is pretty painful and incomplete. For example, the ompi/io components are not explicitly required by MPI helloworld, but they are required if your app uses MPI-IO (e.g., MPI_File_xxx).

Some additional components might be dynamically required by a real-world MPI app.


May I ask why you are focusing on reducing the lib size?

Reducing the lib size by excluding (allegedly) useless components is a long and painful process, and you might end up having to debug new problems on your own ...

As far as I am concerned, if a few MB of libs is too big (filesystem? memory?), I do not see how a real-world application can even run on your ARM node.


Cheers,


Gilles

On 11/2/2016 12:49 PM, Mahesh Nanavalla wrote:
Hi George,
Thanks for the reply.

Using the above script, how can I reduce the *libmpi.so* size?



On Tue, Nov 1, 2016 at 11:27 PM, George Bosilca <bosi...@icl.utk.edu> wrote:

    Let's try to coerce OMPI to dump all modules that are still loaded
    after MPI_Init. We will still have a superset of the needed
    modules, but at least everything unnecessary in your particular
    environment will have been trimmed, as during a normal OMPI run.

    George.

    PS: It's a shell script that needs ag (the silver searcher) to run. You need to provide
    the OMPI source directory. You will get a C file (named tmp.c) in
    the current directory that contains the code necessary to dump all
    active modules. You will have to fiddle with the compile line to
    get it to work, as you will need to specify both source and build
    header files directories. For the sake of completeness here is my
    compile line

    mpicc -o tmp -g tmp.c -I. -I../debug/opal/include
    -I../debug/ompi/include -Iompi/include -Iopal/include
    -Iopal/mca/event/libevent2022/libevent -Iorte/include
    -I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
    -Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include
    -I../debug/ -lopen-rte -lopen-pal
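
    Once that compiles, running the resulting binary like any other MPI
    program should print the surviving modules, e.g. (assuming a single
    process is representative of your environment):

    mpirun -np 1 ./tmp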



    On Tue, Nov 1, 2016 at 7:12 AM, Jeff Squyres (jsquyres)
    <jsquy...@cisco.com> wrote:

        Run ompi_info; it will tell you all the plugins that are
        installed.
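
        For example, to see only the component list (each "MCA" line
        names a framework and one component that was built for it),
        something like this works:

            ompi_info | grep MCA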

        > On Nov 1, 2016, at 2:13 AM, Mahesh Nanavalla
        <mahesh.nanavalla...@gmail.com> wrote:
        >
        > Hi Jeff Squyres,
        >
        > Thank you for your reply...
        >
        > My problem is I want to reduce the library size by removing
        unwanted plugins.
        >
        > Here libmpi.so.12.0.3 size is 2.4MB.
        >
        > How can I know which plugins were included when building
        libmpi.so.12.0.3, and how can I remove them?
        >
        > Thanks & Regards,
        > Mahesh N
        >
        > On Fri, Oct 28, 2016 at 7:09 PM, Jeff Squyres (jsquyres)
        <jsquy...@cisco.com> wrote:
        > On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla
        <mahesh.nanavalla...@gmail.com> wrote:
        > >
        > > I have configured as below for ARM:
        > >
        > > ./configure --enable-orterun-prefix-by-default
        --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
        CC=arm-openwrt-linux-muslgnueabi-gcc
        CXX=arm-openwrt-linux-muslgnueabi-g++
        --host=arm-openwrt-linux-muslgnueabi
        --enable-script-wrapper-compilers --disable-mpi-fortran
        --enable-dlopen --enable-shared --disable-vt --disable-java
        --disable-libompitrace --disable-static
        >
        > Note that there is a tradeoff here: --enable-dlopen will
        reduce the size of libmpi.so by splitting out all the plugins
        into separate DSOs (dynamic shared objects -- i.e., individual
        .so plugin files).  But note that some of the plugins are quite
        small in terms of code.  I mention this because when you
        dlopen a DSO, it will load in DSOs in units of pages.  So even
        if a DSO only has 1KB of code, it will use <page_size> of
        bytes in your running process (e.g., 4KB -- or whatever the
        page size is on your system).
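        >
        > (If you want to check the actual page size on your target, a
        command like "getconf PAGESIZE" will print it; 4096 bytes is
        typical on ARM Linux.)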
        >
        > On the other hand, if you --disable-dlopen, then all of Open
        MPI's plugins are slurped into libmpi.so (and friends).
        Meaning: no DSOs, no dlopen, no page-boundary-loading
        behavior.  This allows the compiler/linker to pack in all the
        plugins into memory more efficiently (because they'll be
        compiled as part of libmpi.so, and all the code is packed in
        there -- just like any other library).  Your total memory
        usage in the process may be smaller.
        >
        > Sidenote: if you run more than one MPI process per node,
        then libmpi.so (and friends) will be shared between
        processes.  You're assumedly running in an embedded
        environment, so I don't know if this factor matters (i.e., I
        don't know if you'll run with ppn>1), but I thought I'd
        mention it anyway.
        >
        > On the other hand (that's your third hand, for those at home
        counting...), you may not want to include *all* the plugins.
        I.e., there may be a bunch of plugins that you're not actually
        using, and therefore if they are compiled in as part of
        libmpi.so (and friends), they're consuming space that you
        don't want/need.  So the dlopen mechanism might actually be
        better -- because Open MPI may dlopen a plugin at run time,
        determine that it won't be used, and then dlclose it (i.e.,
        release the memory that would have been used for it).
        >
        > On the other (fourth!) hand, you can actually tell Open MPI
        to *not* build specific plugins with the
        --enable-mca-no-build=LIST configure option.  I.e., if you
        know exactly what plugins you want to use, you can negate the
        ones that you *don't* want to use on the configure line, use
        --disable-static and --disable-dlopen, and you'll likely use
        the least amount of memory.  This is admittedly a bit clunky,
        but Open MPI's configure process was (obviously) not optimized
        for this use case -- it's much more optimized to the "build
        everything possible, and figure out which to use at run time"
        use case.
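        >
        > As a rough sketch (the components named here are only examples
        of what such a list might contain, not a recommendation), that
        configure invocation could look like:
        >
        >     ./configure --disable-dlopen --disable-static \
        >         --enable-mca-no-build=coll-ml,btl-usnic [...]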
        >
        > If you really want to hit rock bottom on MPI process size in
        your embedded environment, you can do some experimentation to
        figure out exactly which components you need.  You can use
        repeated runs with "mpirun --mca ABC_base_verbose 100 ...",
        where "ABC" is each of Open MPI's framework names ("framework"
        = collection of plugins of the same type).  This verbose
        output will show you exactly which components are opened,
        which ones are used, and which ones are discarded.  You can
        build up a list of all the discarded components and
        --enable-mca-no-build them.
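        >
        > Concretely, that means repeated runs along these lines ("btl"
        and "coll" are just two of the framework names; ompi_info prints
        the full set), checking the verbose output of each run:
        >
        >     mpirun --mca btl_base_verbose 100 -np 1 ./your_mpi_app
        >     mpirun --mca coll_base_verbose 100 -np 1 ./your_mpi_app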
        >
        > > While running with mpirun, I am getting the following error:
        > > root@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
        /usr/bin/openmpiWiFiBulb
        > >
        
        > > --------------------------------------------------------------------------
        > > Sorry!  You were supposed to get help about:
        > >     opal_init:startup:internal-failure
        > > But I couldn't open the help file:
        > > /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
        No such file or directory.  Sorry!
        >
        > So this is really two errors:
        >
        > 1. The help message file is not being found.
        > 2. Something is obviously going wrong during opal_init()
        (which is one of Open MPI's startup functions).
        >
        > For #1, when I do a default build of Open MPI 1.10.3, that
        file *is* installed.  Are you trimming the installation tree,
        perchance?  If so, if you can put at least that one file back
        in its installation location (it's in the Open MPI source
        tarball), it might reveal more information on exactly what is
        failing.
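        >
        > For instance, copying it from the source tree back into the
        install tree (the exact path inside the tarball may vary between
        versions) would look roughly like:
        >
        >     cp openmpi-1.10.3/opal/runtime/help-opal-runtime.txt \
        >        /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/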
        >
        > Additionally, I wonder if shared memory is not getting setup
        right.  Try running with "mpirun --mca shmem_base_verbose 100
        ..." and see if it's reporting an error.
        >
        > --
        > Jeff Squyres
        > jsquy...@cisco.com
        > For corporate legal information go to:
        > http://www.cisco.com/web/about/doing_business/legal/cri/


        --
        Jeff Squyres
        jsquy...@cisco.com
        For corporate legal information go to:
        http://www.cisco.com/web/about/doing_business/legal/cri/







