[section:getting_started Getting started]

Getting started with Boost.MPI requires a working MPI implementation,
a recent version of Boost, and some configuration information.

[section:implementation MPI Implementation]

To get started with Boost.MPI, you will first need a working
MPI implementation. There are many conforming _MPI_implementations_
available. Boost.MPI should work with any of the
implementations, although it has only been tested extensively with:

* [@http://www.open-mpi.org Open MPI]
* [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH2]
* [@https://software.intel.com/en-us/intel-mpi-library Intel MPI]

You can test your implementation using the following simple program,
which passes a message from one processor to another. Each processor
prints a message to standard output.
  #include <mpi.h>
  #include <iostream>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
      int value = 17;
      int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      if (result == MPI_SUCCESS)
        std::cout << "Rank 0 OK!" << std::endl;
    } else if (rank == 1) {
      int value;
      int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                            MPI_STATUS_IGNORE);
      if (result == MPI_SUCCESS && value == 17)
        std::cout << "Rank 1 OK!" << std::endl;
    }
    MPI_Finalize();
    return 0;
  }
You should compile and run this program on two processors. To do this,
consult the documentation for your MPI implementation. With LAM/MPI, for
instance, you compile with the `mpiCC` or `mpic++` compiler, boot the
LAM/MPI daemon, and run your program via `mpirun`. For instance, if
your program is called `mpi-test.cpp`, use the following commands:

[pre
mpiCC -o mpi-test mpi-test.cpp
lamboot
mpirun -np 2 ./mpi-test
lamhalt
]
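
With _OpenMPI_ no daemon needs to be booted, so the equivalent steps are
shorter (this assumes the `mpic++` wrapper and the `mpirun` launcher are
in your path):

[pre
mpic++ -o mpi-test mpi-test.cpp
mpirun -np 2 ./mpi-test
]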
When you run this program, you will see both `Rank 0 OK!` and `Rank 1
OK!` printed to the screen. However, they may be printed in any order
and may even overlap each other. The following output is perfectly
legitimate for this MPI program:

[pre
Rank Rank 1 OK!
0 OK!
]

If your output looks something like the above, your MPI implementation
appears to be working with a C++ compiler and we're ready to move on.

[endsect]
[section:config Configure and Build]

Like the rest of Boost, Boost.MPI uses version 2 of the
[@http://www.boost.org/doc/html/bbv2.html Boost.Build] system for
configuring and building the library binary.

Please refer to the general Boost installation instructions for
[@http://www.boost.org/doc/libs/release/more/getting_started/unix-variants.html#prepare-to-use-a-boost-library-binary Unix variants]
(including Unix, Linux and MacOS) or
[@http://www.boost.org/doc/libs/1_58_0/more/getting_started/windows.html#prepare-to-use-a-boost-library-binary Windows].
The simplified build instructions should apply on most platforms with a few specific modifications described below.

[section:bootstrap Bootstrap]

As explained in the Boost installation instructions, running the bootstrap script (`./bootstrap.sh` for Unix variants or `bootstrap.bat` for Windows) from the Boost root directory will produce a `project-config.jam` file. You need to edit that file and add the following line:

  using mpi ;

Alternatively, you can explicitly provide the list of Boost libraries you want to build.
Please refer to the `--help` option of the `bootstrap` script.
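
After that edit, a typical `project-config.jam` might look something like
the sketch below (the toolset line is whatever `bootstrap` detected on
your machine; only the last line is added by hand):

[pre
# Generated by bootstrap
import option ;

using gcc ;

# Added manually to enable Boost.MPI
using mpi ;
]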
[endsect:bootstrap]

[section:setup Setting up your MPI Implementation]

First, you need to scan the =include/boost/mpi/config.hpp= file and check if some
settings need to be modified for your MPI implementation or preferences.
In particular, you will need to comment out the [macroref BOOST_MPI_HOMOGENEOUS] macro
if you plan to run on a heterogeneous set of machines. See the [link mpi.tutorial.performance_optimizations.homogeneous_machines optimization] notes below.
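
For a heterogeneous cluster, the relevant edit is a one-line change in
=config.hpp= (a sketch; the exact surrounding comments in your copy may
differ):

  // Comment this line out if the machines in your cluster do not share
  // a common binary representation for the transmitted data types.
  //#define BOOST_MPI_HOMOGENEOUS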
Most MPI implementations require specific compilation and link options.
In order to hide these details from the user, most MPI implementations provide
wrappers which silently pass those options to the compiler.

Depending on your MPI implementation, some work might be needed to tell Boost which
specific MPI options to use. This is done through the `using mpi ;` directive in the `project-config.jam` file, whose general form is (do not forget to leave spaces around each *:* and before the *;*):

[pre
using mpi
   : \[<MPI compiler wrapper>\]
   : \[<compilation and link options>\]
   : \[<mpi runner>\] ;
]
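
For example, a fully spelled-out directive could look like the following
(the wrapper path, option and runner here are purely illustrative; yours
will differ):

[pre
using mpi : /usr/local/bin/mpic++
          : <find-shared-library>mpi
          : mpirun -np ;
]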
Depending on your installation and MPI distribution, the build system might be able to find all the required information, in which case you just need to specify:

[pre
using mpi ;
]
[section:troubleshooting Troubleshooting]

Most of the time, especially with production HPC clusters, some work will need to be done.
Here is a list of the most common issues and suggestions on how to fix them.

* [*Your wrapper is not in your path or does not have a standard name]

You will need to tell the build system how to call it using the first parameter:

[pre
using mpi : /opt/mpi/bullxmpi/1.2.8.3/bin/mpicc ;
]

[warning
Boost.MPI only uses the C interface, so specifying the C wrapper should be enough. But some implementations will insist on importing the C++ bindings.
]
* [*Your wrapper is really eccentric or does not exist]

With some implementations, or with some specific integrations[footnote Some HPC clusters will insist that users use their own in-house interface to the MPI system.], you will need to provide the compilation and link options through the second parameter using 'jam' directives.
The following type of configuration used to be required for some specific Intel MPI implementations (in such a case, the name of the wrapper can be left blank):

[pre
using mpi : mpiicc :
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib/release_mt
      <include>/softs/intel/impi/5.0.1.035/intel64/include
      <find-shared-library>mpifort
      <find-shared-library>mpi_mt
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt ;
]
As a convenience, MPI wrappers usually have an option that prints the required information; its name usually starts with `--show`. You can use it to find out the required jam directives:

[pre
$ mpiicc -show
icc -I/softs/...\/include ... -L/softs/...\/lib ... -Xlinker -rpath -Xlinker \/softs/...\/lib .... -lmpi -ldl -lrt -lpthread
$
]
[pre
$ mpicc --showme
icc -I/opt/...\/include -pthread -L/opt/...\/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpicc --showme:compile
-I/opt/mpi/bullxmpi/1.2.8.3/include -pthread
$ mpicc --showme:link
-pthread -L/opt/...\/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$
]

To see the results of MPI auto-detection, pass `--debug-configuration` on
the bjam command line.
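
For instance, from the Boost root directory:

[pre
$ ./b2 --debug-configuration
]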
* [*The launch syntax cannot be detected]

[note This is only used when [link mpi.getting_started.config.tests running the tests].]

If you need to use a special command to launch an MPI program, you will need to specify it through the third parameter of the `using mpi` directive.
So, assuming you launch the `all_gather_test` program with:

[pre
$ mpiexec.hydra -np 4 all_gather_test
]

The directive will look like:

[pre
using mpi : mpiicc :
      \[<compilation and link options>\]
    : mpiexec.hydra -n ;
]

[endsect:troubleshooting]
[endsect:setup]
[section:build Build]

To build the whole Boost distribution:

[pre
$ cd <boost distribution>
$ ./b2
]

To build the Boost.MPI library and its dependencies:

[pre
$ cd <boost distribution>\/libs/mpi/build
$ ..\/../../b2
]

[endsect:build]

[section:tests Tests]

You can run the regression tests with:

[pre
$ cd <boost distribution>\/libs/mpi/test
$ ..\/../../b2
]

[endsect:tests]

[section:installation Installation]

To install the whole Boost distribution:

[pre
$ cd <boost distribution>
$ ./b2 install
]

[endsect:installation]

[endsect:config]
[section:using Using Boost.MPI]

To build applications based on Boost.MPI, compile and link them as you
normally would for MPI programs, but remember to link against the
`boost_mpi` and `boost_serialization` libraries, e.g.,

[pre
mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi-gcc-mt-1_35 -lboost_serialization-gcc-mt-1_35
]
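
As a quick sanity check that the libraries link correctly, you can build
a minimal Boost.MPI program such as the following (a variant of the
tutorial's introductory example, printing each process's rank):

  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <iostream>

  int main(int argc, char* argv[])
  {
    boost::mpi::environment env(argc, argv); // initializes MPI; finalizes on destruction
    boost::mpi::communicator world;          // corresponds to MPI_COMM_WORLD
    std::cout << "I am process " << world.rank()
              << " of " << world.size() << "." << std::endl;
    return 0;
  }

Launch it with your MPI runner as before, e.g. `mpirun -np 2 ./my_application`.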
If you plan to use the [link mpi.python Python bindings] for
Boost.MPI in conjunction with the C++ Boost.MPI, you will also need to
link against the boost_mpi_python library, e.g., by adding
`-lboost_mpi_python-gcc-mt-1_35` to your link command. This step will
only be necessary if you intend to [link mpi.python.user_data
register C++ types] or use the [link
mpi.python.skeleton_content skeleton/content mechanism] from
within Python.

[endsect:using]

[endsect:getting_started]