Update of /cvsroot/boost/boost/libs/parallel/doc
In directory sc8-pr-cvs3.sourceforge.net:/tmp/cvs-serv657/libs/parallel/doc

Added Files:
        Jamfile.v2 mpi.qbk 
Log Message:
Import Boost.MPI with the beginnings of a BBv2-based build system

--- NEW FILE: Jamfile.v2 ---
# Copyright (C) 2005-2006 Douglas Gregor <[EMAIL PROTECTED]>
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying 
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt.)
project boost/parallel/mpi ;

using quickbook ;
using doxygen ;

doxygen mpi_autodoc 
  : ../../../boost/parallel/mpi.hpp
    ../../../boost/parallel/mpi/allocator.hpp
    ../../../boost/parallel/mpi/collectives.hpp
    ../../../boost/parallel/mpi/collectives_fwd.hpp
    ../../../boost/parallel/mpi/communicator.hpp
    ../../../boost/parallel/mpi/config.hpp
    ../../../boost/parallel/mpi/datatype.hpp
    ../../../boost/parallel/mpi/datatype_fwd.hpp
    ../../../boost/parallel/mpi/environment.hpp
    ../../../boost/parallel/mpi/exception.hpp
    ../../../boost/parallel/mpi/nonblocking.hpp
    ../../../boost/parallel/mpi/operations.hpp
    ../../../boost/parallel/mpi/packed_iarchive.hpp
    ../../../boost/parallel/mpi/packed_oarchive.hpp
    ../../../boost/parallel/mpi/skeleton_and_content.hpp
    ../../../boost/parallel/mpi/skeleton_and_content_fwd.hpp
    ../../../boost/parallel/mpi/status.hpp
    ../../../boost/parallel/mpi/request.hpp
    ../../../boost/parallel/mpi/timer.hpp
    ../../../boost/parallel/mpi/python.hpp
  : <doxygen:param>MACRO_EXPANSION=YES
    <doxygen:param>MACRO_ONLY_PREDEF=YES
    <doxygen:param>"PREDEFINED=BOOST_MPI_HAS_MEMORY_ALLOCATION= MPI_VERSION=2 
BOOST_MPI_DOXYGEN="
  ;

boostbook mpi : mpi.qbk mpi_autodoc 
              : <xsl:param>boost.root=http://www.boost.org
              # Uncomment this line when generating PDF output
              # <xsl:param>max-columns=66
              ;


--- NEW FILE: mpi.qbk ---
[library Boost.MPI
    [authors [Gregor, Douglas], [Troyer, Matthias] ]
    [copyright 2005 2006 Douglas Gregor, Matthias Troyer]
    [purpose
        A generic, user-friendly interface to MPI, the Message
        Passing Interface.
    ]
    [id parallel/mpi]
    [dirname parallel]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        <ulink url="http://www.boost.org/LICENSE_1_0.txt";>
            http://www.boost.org/LICENSE_1_0.txt
        </ulink>)
    ]
]

[/ Links ]
[...1613 lines suppressed...]
The higher-level abstractions provided for convenience must not have
an impact on the performance of the application. For instance, sending
an integer via `send` must be as efficient as a call to `MPI_Send`,
which means that it must be implemented by a simple call to
`MPI_Send`; likewise, an integer [funcref boost::parallel::mpi::reduce
`reduce()`] using `std::plus<int>` must be implemented with a call to
`MPI_Reduce` on integers using the `MPI_SUM` operation: anything less
will impact performance. In essence, this is the "don't pay for what
you don't use" principle: if the user is not transmitting strings,
s/he should not pay the overhead associated with strings. 
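
As an illustration of this principle, sending a single integer and reducing integers with `std::plus<int>` should compile down to exactly the corresponding C calls. The sketch below is illustrative only: it assumes the `environment`, `communicator::send`, `communicator::recv`, and free `reduce()` signatures match those of later Boost.MPI releases, so the exact parameter order should not be read as a guarantee of this initial import.

    #include <boost/parallel/mpi.hpp>
    #include <functional>
    namespace mpi = boost::parallel::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);   // wraps MPI_Init/MPI_Finalize
      mpi::communicator world;            // corresponds to MPI_COMM_WORLD

      // Sending one int should lower to a single MPI_Send of MPI_INT.
      if (world.rank() == 0)
        world.send(1, 0, 17);
      else if (world.rank() == 1) {
        int value;
        world.recv(0, 0, value);
      }

      // reduce() over int with std::plus<int> should lower to
      // MPI_Reduce(..., MPI_INT, MPI_SUM, root, ...).
      int sum = 0;
      reduce(world, world.rank(), sum, std::plus<int>(), 0);
      return 0;
    }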

Sometimes, achieving maximal performance means forgoing convenient
abstractions and implementing certain functionality using lower-level
primitives. For this reason, it is always possible to extract enough
information from the abstractions in Boost.MPI to minimize
the amount of effort required to interface between Boost.MPI
and the C MPI library.
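
For instance, a program that needs an MPI feature not wrapped by the library should still be able to recover the raw handles. The following sketch assumes the `communicator` is convertible to the underlying `MPI_Comm`, as in later Boost.MPI releases; that conversion is an assumption here, not a documented guarantee of this initial import.

    #include <boost/parallel/mpi.hpp>
    namespace mpi = boost::parallel::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      // Assumed: the communicator converts to the raw MPI_Comm handle,
      // so any C MPI routine can operate on it directly.
      MPI_Comm raw = static_cast<MPI_Comm>(world);
      int size = 0;
      MPI_Comm_size(raw, &size);
      return 0;
    }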

[endsect]


