[/============================================================================
  Boost.odeint
  Copyright 2013 Karsten Ahnert
  Copyright 2013 Pascal Germroth
  Copyright 2013 Mario Mulansky
  Use, modification and distribution is subject to the Boost Software License,
  Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt)
=============================================================================/]

[section Parallel computation with OpenMP and MPI]

Parallelization is a key feature of modern numerical libraries, given the wide
availability of multi-core hardware nowadays, even in laptops.
odeint currently supports parallelization with OpenMP and MPI, as described in
the following sections.
However, it should be made clear from the beginning that the difficulty of
efficiently distributing ODE integration across many cores/machines lies in the
parallelization of the system function, which remains the user's
responsibility.
Simply using a parallel odeint backend without parallelizing the system function
will bring you almost no performance gains.

[section OpenMP]

[import ../examples/openmp/phase_chain.cpp]

odeint's OpenMP support is implemented as an external backend, which needs to be
included manually. Depending on the compiler, some additional flags may be
needed, e.g. [^-fopenmp] for GCC.

[phase_chain_openmp_header]
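
For orientation, here is a minimal sketch of the includes and compile line; the
backend header path `boost/numeric/odeint/external/openmp/openmp.hpp` and the
GCC invocation are assumptions that may differ with your Boost version and
compiler:

```
// main odeint header plus the external OpenMP backend
#include <boost/numeric/odeint.hpp>
#include <boost/numeric/odeint/external/openmp/openmp.hpp>

// compile e.g. with:
//   g++ -O2 -fopenmp phase_chain.cpp -o phase_chain
```
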
In the simplest parallelization approach with OpenMP we use a standard `vector`
as the state type:

[phase_chain_vector_state]

We initialize the state with some random data:

[phase_chain_init]

Now we have to configure the stepper to use the OpenMP backend.
This is done by explicitly providing the `openmp_range_algebra` as a template
parameter to the stepper.
This algebra requires the state type to be a model of Random Access Range and
will access the state from multiple threads.

[phase_chain_stepper]
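
As a rough sketch (the snippet above is the authoritative version; the remaining
`runge_kutta4` template parameters are left at their usual value/derivative/time
defaults), the stepper declaration could look like this:

```
typedef std::vector< double > state_type;

// explicitly select the OpenMP-parallel range algebra
typedef boost::numeric::odeint::runge_kutta4<
    state_type , double , state_type , double ,
    boost::numeric::odeint::openmp_range_algebra
> stepper_type;
```
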
In addition to providing the stepper with OpenMP parallelization, we also need
a parallelized system function to exploit the available cores.
Here this is shown for a simple one-dimensional chain of phase oscillators with
nearest neighbor coupling:

[phase_chain_rhs]
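
A minimal sketch of such a right-hand side, with the coupling and boundary
handling simplified compared to the actual example (it builds on the
`state_type` above and needs `<cmath>` and `<cstddef>`):

```
struct phase_chain
{
    void operator()( const state_type &x , state_type &dxdt , double /*t*/ ) const
    {
        const std::size_t n = x.size();
        // distribute the bulk of the chain over the available threads
        #pragma omp parallel for schedule(runtime)
        for( std::size_t i = 1 ; i < n - 1 ; ++i )
            dxdt[i] = std::sin( x[i+1] - x[i] ) + std::sin( x[i-1] - x[i] );
        // free ends of the chain have only one neighbour
        dxdt[0]   = std::sin( x[1] - x[0] );
        dxdt[n-1] = std::sin( x[n-2] - x[n-1] );
    }
};
```
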
[note In the OpenMP backends the system function will always be called
sequentially from the thread used to start the integration.]

Finally, we perform the integration using one of odeint's integrate functions.
As you can see, the parallelization is completely hidden in the stepper and the
system function.
OpenMP will take care of distributing the work among the threads and joining
them automatically.

[phase_chain_integrate]
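
A sketch of the integration call, assuming `integrate_n_steps` with a fixed step
size and the `state_type`, `phase_chain` functor and initialized state `x` from
above (the actual example may use different parameters):

```
using namespace boost::numeric::odeint;

stepper_type stepper;
// 1024 steps of size dt = 0.01, starting at t = 0
integrate_n_steps( stepper , phase_chain() , x , 0.0 , 0.01 , 1024 );
```
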
After integrating, the data can be accessed immediately and processed
further.
Note that you can specify the OpenMP scheduling by calling `omp_set_schedule`
at the beginning of your program:

[phase_chain_scheduling]
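
For instance, a static schedule could be requested at program start. This uses
the standard OpenMP runtime API from `<omp.h>`, not an odeint facility, and
takes effect for loops declared with `schedule(runtime)`:

```
#include <omp.h>

int main()
{
    // chunk size 0 lets the runtime pick a default for the static schedule
    omp_set_schedule( omp_sched_static , 0 );
    // ... set up state and stepper, then integrate as shown above ...
}
```
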
See [github_link examples/openmp/phase_chain.cpp openmp/phase_chain.cpp]
for the complete example.

[heading Split state]

[import ../examples/openmp/phase_chain_omp_state.cpp]

For advanced cases odeint offers another approach to using OpenMP that allows
more precise control of the parallelization.
For example, for odd-sized data where OpenMP's thread boundaries don't match
cache lines and hurt performance, it might be advisable to copy the data from
the contiguous `vector<T>` into separate, individually aligned, vectors.
For this, odeint provides the `openmp_state<T>` type, essentially an alias for
`vector<vector<T>>`.
Here, the initialization is done with a `vector<double>`, but then we use
odeint's `split` function to fill an `openmp_state`.
The splitting is done such that the sizes of the individual regions differ by at
most 1, making the computation as uniform as possible.

[phase_chain_state_init]
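
A sketch of this step, assuming one chunk per OpenMP thread
(`omp_get_max_threads` comes from `<omp.h>`, and the vector size is only an
example):

```
using namespace boost::numeric::odeint;

std::vector< double > x( 1024 );             // contiguous input data
// ... fill x with (random) initial values ...

openmp_state< double > x_split( omp_get_max_threads() );
split( x , x_split );                        // chunk sizes differ by at most 1
```
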
Of course, the system function has to be changed to deal with the
`openmp_state`.
Note that each sub-region of the state is computed in a single task, but at the
borders read access to the neighbouring regions is required.

[phase_chain_state_rhs]

Using the `openmp_state<T>` state type automatically selects `openmp_algebra`,
which executes odeint's internal computations in parallel regions.
Hence, no manual configuration of the stepper is necessary.
At the end of the integration, we use `unsplit` to concatenate the sub-regions
back into a single vector.

[phase_chain_state_integrate]
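
Putting it together, the integrate-and-unsplit step might look roughly like
this; `phase_chain_omp` stands for a system functor written against
`openmp_state` (a name made up here, not taken from the example), and `x` and
`x_split` are those from the sketch above:

```
// openmp_state<double> selects openmp_algebra automatically
runge_kutta4< openmp_state< double > > stepper;
integrate_n_steps( stepper , phase_chain_omp() , x_split , 0.0 , 0.01 , 1024 );

// gather the chunks back into the contiguous vector for further processing
unsplit( x_split , x );
```
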
[note You don't actually have to use `openmp_state<T>` for advanced use cases:
`openmp_algebra` is simply an alias for `openmp_nested_algebra<range_algebra>`;
it accepts any model of Random Access Range as the outer, parallel state type
and will use the given algebra on its elements.]

See [github_link examples/openmp/phase_chain_omp_state.cpp
openmp/phase_chain_omp_state.cpp] for the complete example.

[endsect]

[section MPI]

[import ../examples/mpi/phase_chain.cpp]

To expand the parallel computation across multiple machines we can use MPI.
The system function implementation is similar to the OpenMP variant with split
data. The main difference is that while OpenMP uses a spawn/join model, where
everything not explicitly parallelized runs only in the main thread, in
MPI's model each node enters the `main()` method independently, diverging based
on its rank and synchronizing through message passing and explicit barriers.
odeint's MPI support is implemented as an external backend, too.
Depending on the MPI implementation, the code might need to be compiled with
e.g. [^mpic++].

[phase_chain_mpi_header]

Instead of reading another thread's data, we asynchronously send and receive the
relevant data from neighbouring nodes, performing some computation in the
interim to hide the latency.

[phase_chain_mpi_rhs]

Analogous to `openmp_state<T>` we use `mpi_state< InnerState<T> >`, which
automatically selects `mpi_nested_algebra` and the appropriate MPI-oblivious
inner algebra (since our inner state is a `vector`, the inner algebra will be
`range_algebra` as in the OpenMP example).

[phase_chain_state]
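
A sketch of the corresponding type definitions, assuming a plain
`vector<double>` as the inner state:

```
typedef std::vector< double > inner_state_type;
// mpi_nested_algebra< range_algebra > is selected automatically for this type
typedef boost::numeric::odeint::mpi_state< inner_state_type > state_type;
```
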
In the main program we construct a `communicator`, which tells us the `size` of
the cluster and the current node's `rank` within it.
We generate the input data on the master node only, avoiding unnecessary work on
the other nodes.
Instead of simply copying chunks, `split` acts as an MPI collective function
here and sends/receives regions from the master to each slave.
The input argument is ignored on the slaves, but the master node receives
a region in its output and will participate in the computation.

[phase_chain_mpi_init]
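
A rough sketch of this setup, using Boost.MPI for the environment and
communicator and the `state_type` sketched above; `generate_initial_data` is a
hypothetical helper, not part of the example:

```
#include <boost/mpi.hpp>

int main( int argc , char **argv )
{
    using namespace boost::numeric::odeint;

    boost::mpi::environment env( argc , argv );
    boost::mpi::communicator world;

    std::vector< double > x;
    if( world.rank() == 0 )
        x = generate_initial_data();   // master node only

    state_type x_split( world );
    split( x , x_split );              // collective: scatters chunks to all nodes
    // ... integrate, see below ...
}
```
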
Now that `x_split` contains (only) the local chunk for each node, we start the
integration.
To print the result on the master node, we send the processed data back using
`unsplit`.

[phase_chain_mpi_integrate]
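
Continuing the sketch above, the integration and the final gathering step could
look like this (`std::cout` needs `<iostream>`); `phase_chain_mpi` again stands
for a hypothetical system functor written against `mpi_state`:

```
runge_kutta4< state_type > stepper;   // picks mpi_nested_algebra automatically
integrate_n_steps( stepper , phase_chain_mpi() , x_split , 0.0 , 0.01 , 1024 );

unsplit( x_split , x );               // collective: gathers chunks on the master
if( world.rank() == 0 )
    std::cout << x[0] << "\n";        // the full result exists only on rank 0
```
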
[note `mpi_nested_algebra::for_each`[~N] doesn't use any MPI constructs; it
simply calls the inner algebra on the local chunk, and the system function is
not guarded by any barriers either. So if you don't place any manually (for
example in parameter studies where the elements are completely independent),
you might see the nodes diverging, returning from this call at different times.]

See [github_link examples/mpi/phase_chain.cpp mpi/phase_chain.cpp]
for the complete example.

[endsect]

[section Concepts]

[section MPI State]

As used by `mpi_nested_algebra`.

[heading Notation]
[variablelist
[[`InnerState`] [The inner state type]]
[[`State`] [The MPI-state type]]
[[`state`] [Object of type `State`]]
[[`world`] [Object of type `boost::mpi::communicator`]]
]

[heading Valid Expressions]
[table
[[Name] [Expression] [Type] [Semantics]]
[[Construct a state with a communicator]
 [`State(world)`] [`State`] [Constructs the State.]]
[[Construct a state with the default communicator]
 [`State()`] [`State`] [Constructs the State.]]
[[Get the current node's inner state]
 [`state()`] [`InnerState`] [Returns a (const) reference.]]
[[Get the communicator]
 [`state.world`] [`boost::mpi::communicator`] [See __boost_mpi.]]
]

[heading Models]
* `mpi_state<InnerState>`

[endsect]

[section OpenMP Split State]

As used by `openmp_nested_algebra`, essentially a Random Access Container with
`ValueType = InnerState`.

[heading Notation]
[variablelist
[[`InnerState`] [The inner state type]]
[[`State`] [The split state type]]
[[`state`] [Object of type `State`]]
]

[heading Valid Expressions]
[table
[[Name] [Expression] [Type] [Semantics]]
[[Construct a state for `n` chunks]
 [`State(n)`] [`State`] [Constructs underlying `vector`.]]
[[Get a chunk]
 [`state[i]`] [`InnerState`] [Accesses underlying `vector`.]]
[[Get the number of chunks]
 [`state.size()`] [`size_type`] [Returns size of underlying `vector`.]]
]

[heading Models]
* `openmp_state<ValueType>` with `InnerState = vector<ValueType>`

[endsect]

[section Splitter]

[heading Notation]
[variablelist
[[`Container1`] [The contiguous-data container type]]
[[`x`] [Object of type `Container1`]]
[[`Container2`] [The chunked-data container type]]
[[`y`] [Object of type `Container2`]]
]

[heading Valid Expressions]
[table
[[Name] [Expression] [Type] [Semantics]]
[[Copy chunks of input to output elements]
 [`split(x, y)`] [`void`]
 [Calls `split_impl<Container1, Container2>::split(x, y)`, splits `x` into
 `y.size()` chunks.]]
[[Join chunks of input elements to output]
 [`unsplit(y, x)`] [`void`]
 [Calls `unsplit_impl<Container2, Container1>::unsplit(y, x)`, assumes `x`
 is of the correct size ['__sigma `y[i].size()`], does not resize `x`.]]
]

[heading Models]
* defined for `Container1` = __boost_range and `Container2 = openmp_state`
* and `Container2 = mpi_state`.

To implement splitters for containers incompatible with __boost_range,
specialize the `split_impl` and `unsplit_impl` types:

```
template< class Container1 , class Container2 , class Enabler = void >
struct split_impl {
    static void split( const Container1 &from , Container2 &to );
};

template< class Container2 , class Container1 , class Enabler = void >
struct unsplit_impl {
    static void unsplit( const Container2 &from , Container1 &to );
};
```
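
For example, a hypothetical contiguous container `my_buffer<T>` (with `size()`
and `data()` members) that does not model __boost_range could be split into an
`openmp_state` roughly as follows; the enclosing namespace and the chunk layout
are assumptions for illustration:

```
namespace boost { namespace numeric { namespace odeint {

template< class T >
struct split_impl< my_buffer< T > , openmp_state< T > , void >
{
    static void split( const my_buffer< T > &from , openmp_state< T > &to )
    {
        const std::size_t chunks = to.size();
        for( std::size_t i = 0 ; i < chunks ; ++i )
        {
            // chunk i covers [ i*n/chunks , (i+1)*n/chunks ),
            // so the chunk sizes differ by at most 1
            const std::size_t begin = ( from.size() * i ) / chunks;
            const std::size_t end   = ( from.size() * ( i + 1 ) ) / chunks;
            to[i].assign( from.data() + begin , from.data() + end );
        }
    }
};

} } }
```

An analogous `unsplit_impl` specialization would copy each chunk back into the
`my_buffer`.
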
[endsect]

[endsect]

[endsect]