I’ve patched the Make.inc file. Does it look correct?
[Attachment: Make.inc (binary data)]
Mark Brethen
[email protected]

> On Nov 16, 2018, at 8:20 PM, Mark Brethen <[email protected]> wrote:
>
> So is this port a good candidate for the mpi port group? If not then I'll
> just whitelist a compiler if the mpi variant is chosen.
>
>
> Mark Brethen
> [email protected]
>
>
>> On Nov 16, 2018, at 5:08 PM, Mark Brethen <[email protected]> wrote:
>>
>> From the SPOOLES documentation:
>>
>> The SPOOLES library operates in serial, multithreaded and MPI environments.
>> The code for these three environments is fairly segregated. The MPI
>> directory contains all source and driver code for MPI programs. The MT
>> directory contains all source and driver code for multithreaded programs.
>> All other directories contain serial code. The MPI source code is compiled
>> into a spoolesMPI.a library. The multithreaded source code is compiled into
>> a spoolesMT.a library. The serial code is compiled into a spooles.a library.
>>
>> I would like to offer the user the option of MT or MPI. The build phase
>> would look something like this:
>>
>> if defined(WITH_MPI)
>>     cd ${WRKSRC}/MPI/src ; ${SETENV} ${MAKE_ENV} ${MAKE_CMD} -f makeGlobalLib
>>     cd ${WRKSRC}_SHARED/MPI/src ; ${SETENV} ${MAKE_ENV} ${MAKE_CMD} -f makeGlobalLib
>>     cd ${WRKSRC}_SHARED ; ld -Bshareable -o libspooles.so.1 -x -soname libspooles.so.1 --whole-archive spooles.a
>> else
>>     cd ${WRKSRC} ; ${SETENV} ${MAKE_ENV} ${MAKE_CMD} global -f makefile
>>     cd ${WRKSRC}/MT/src ; ${SETENV} ${MAKE_ENV} ${MAKE_CMD} -f makeGlobalLib
>>     cd ${WRKSRC}_SHARED ; ${SETENV} ${MAKE_ENV} ${MAKE_CMD} global -f makefile
>>     cd ${WRKSRC}_SHARED/MT/src ; ${SETENV} ${MAKE_ENV} ${MAKE_CMD} -f makeGlobalLib
>>     cd ${WRKSRC}_SHARED ; ld -Bshareable -o libspooles.so.1 -x -soname libspooles.so.1 --whole-archive spooles.a
>> endif
>>
>> There's probably an easy way to do this using the MPI portgroup. Can you
>> suggest some ports to look at as an example?
>>
>>
>> Mark Brethen
>> [email protected]
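For reference, here is a rough Portfile sketch of how I imagine the mpi 1.0 portgroup could replace the WITH_MPI conditional above. This is only an assumption about how the port might look, not the actual Portfile: mpi.setup comes from the portgroup, and I am assuming mpi_variant_isset is the right check (please verify against the portgroup source). The _SHARED work tree and the shared-library link step are left out.

    PortGroup           mpi 1.0

    # creates the usual +mpich/+openmpi variants and selects matching compilers
    mpi.setup

    # SPOOLES ships plain makefiles, no configure script
    use_configure       no

    build {
        if {[mpi_variant_isset]} {
            # MPI library (spoolesMPI.a) built from the MPI subtree
            system -W ${worksrcpath}/MPI/src "${build.cmd} -f makeGlobalLib"
        } else {
            # serial (spooles.a) plus multithreaded (spoolesMT.a) libraries
            system -W ${worksrcpath} "${build.cmd} global"
            system -W ${worksrcpath}/MT/src "${build.cmd} -f makeGlobalLib"
        }
    }

Compiler and environment flags (the ${SETENV} ${MAKE_ENV} part) are omitted here; they would presumably be passed through build.env or the patched Make.inc. Note also that the ld -Bshareable/-soname/--whole-archive invocation is GNU ld syntax, so the shared-library step would need the macOS linker's equivalents on this platform.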
