[/===========================================================================
  Copyright (c) 2017 Steven Ross, Francisco Tapia, Orson Peters
  Distributed under the Boost Software License, Version 1.0
  See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt
=============================================================================/]

[section:block_indirect_sort 3.1- block_indirect_sort]

[section:block_description Description]
[:
[*BLOCK_INDIRECT_SORT] is a new unstable parallel sort, conceived and implemented by Francisco Jose Tapia for the Boost Library.
The most important characteristics of this algorithm are its *speed* and its *low memory consumption*.
[*[teletype]
``
                      |        |                          |                               |
 Algorithm            | Stable |    Additional memory     | Best, average, and worst case |
----------------------+--------+--------------------------+-------------------------------+
 block_indirect_sort  |   no   | block_size * num_threads |       N, N LogN, N LogN       |
                      |        |                          |                               |
``
]
The block_size is an internal parameter of the algorithm which, in order to achieve the
highest speed, changes according to the size of the objects to sort, as shown in the next table. Strings use a block_size of 128.
[*[teletype]
``
                                 |        |         |         |          |           |           |        |
 object size (bytes)             | 1 - 15 | 16 - 31 | 32 - 63 | 64 - 127 | 128 - 255 | 256 - 511 | 512 -  |
---------------------------------+--------+---------+---------+----------+-----------+-----------+--------+
 block_size (number of elements) |  4096  |  2048   |  1024   |   768    |    512    |    256    |  128   |
                                 |        |         |         |          |           |           |        |
``
]
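
For illustration only, the table above can be expressed as a small selection function. This is a sketch of the published table; the real block_size is an internal implementation detail of boost::sort and is not user-configurable, and the function name here is hypothetical.
[c++]
``
#include <cstddef>
#include <cstdint>

// Hypothetical helper reproducing the table above; not part of the
// boost::sort public API.
constexpr std::uint32_t block_size_for (std::size_t object_size)
{
    return (object_size < 16)  ? 4096
         : (object_size < 32)  ? 2048
         : (object_size < 64)  ? 1024
         : (object_size < 128) ? 768
         : (object_size < 256) ? 512
         : (object_size < 512) ? 256 : 128;
}
``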
]
[endsect]
[br]

[section:block_benchmark Benchmark]
[:
Sorting 100 000 000 64-bit numbers, randomly generated, the measured times and memory used were:
[*[teletype]
``
                                  |             |             |
 Algorithm                        | Time (secs) | Memory used |
----------------------------------+-------------+-------------+
 Open MP Parallel Sort            |   1.1990    |   1564 MB   |
 Threading Building Blocks (TBB)  |   1.6411    |    789 MB   |
 Block Indirect Sort              |   0.9270    |    790 MB   |
                                  |             |             |
``
]
]
[endsect]
[br]

[section:block_programming Programming]
[:
[h4[_Thread specification]]
[:
This algorithm has an integer parameter indicating the *number of threads* to use in the sorting process,
which is always the last value in the call. The default value (if left unspecified) is the number of HW threads of
the machine where the program is running, as provided by std::thread::hardware_concurrency().
If the number is 1 or 0, the algorithm runs with only 1 thread.
The number of threads is not a fixed number, but is calculated in each execution. The number of threads passed can be greater
than the number of hardware threads on the machine: we can pass 100 threads on a machine with 4 HW threads,
and in the same way we can pass an expression such as (std::thread::hardware_concurrency() / 4).
If this value is 0, the program is executed with 1 thread.
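
A minimal sketch of these thread-count options (the vector v and its contents are only for the illustration):
[c++]
``
#include <boost/sort/sort.hpp>
#include <thread>
#include <vector>

int main ()
{
    std::vector<int> v { 4, 1, 3, 2 };

    // default: uses std::thread::hardware_concurrency() threads
    boost::sort::block_indirect_sort (v.begin (), v.end ());

    // more threads than the hardware provides is allowed
    boost::sort::block_indirect_sort (v.begin (), v.end (), 100);

    // a computed value; if it evaluates to 0, the sort runs with 1 thread
    boost::sort::block_indirect_sort (v.begin (), v.end (),
                                      std::thread::hardware_concurrency () / 4);
    return 0;
}
``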
]
[h4[_Programming]]
[:
You only need to include the file boost/sort/sort.hpp to use this algorithm.
The algorithm runs in the namespace boost::sort.
[c++]
``
#include <boost/sort/sort.hpp>

template <class iter_t>
void block_indirect_sort (iter_t first, iter_t last);

template <class iter_t, typename compare>
void block_indirect_sort (iter_t first, iter_t last, compare comp);

template <class iter_t>
void block_indirect_sort (iter_t first, iter_t last, uint32_t num_thread);

template <class iter_t, typename compare>
void block_indirect_sort (iter_t first, iter_t last, compare comp, uint32_t num_thread);
``
This algorithm needs a *C++11 compliant compiler*; you don't need any other code or library. With older compilers correct operation is not guaranteed.
If the number of threads is unspecified, the result of std::thread::hardware_concurrency() is used.
This algorithm uses a *comparison object*, in the same way as the standard library sort
algorithms. If not defined, the comparison object is std::less, which uses
the < operator internally.
This algorithm is [*exception safe], meaning that the exceptions generated by the algorithm
guarantee the integrity of the objects to sort, but not their relative order. If an exception
is thrown inside the objects (in the move or in the copy constructor ...) the results can be
unpredictable.
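
As an illustration, a complete program using a comparison object and an explicit thread count (the vector v, its contents, and the value 8 are only for the example):
[c++]
``
#include <boost/sort/sort.hpp>
#include <functional>
#include <vector>

int main ()
{
    std::vector<double> v { 2.5, 0.5, 1.5 };

    // sort in descending order using 8 threads
    boost::sort::block_indirect_sort (v.begin (), v.end (),
                                      std::greater<double> (), 8);
    return 0;
}
``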
]
]
[endsect]
[br]

[section:block_internal Internal Description]
[:
There are two primary categories of parallelization in sorting algorithms.
[h4[_Subdivision Algorithms]]
[:
Filter the data and generate two or more parts. Each part obtained is
filtered and divided by other threads. This filter and division process is repeated
until the size of each part is smaller than a predefined limit; such a part is then sorted by a single thread.
The algorithm most frequently used in the filtering and sorting is quicksort.
These algorithms are fast with a small number of threads, but are inefficient
with a great number of threads. Examples of this category are listed below, followed by a toy sketch of the idea:

* Intel Threading Building Blocks (TBB)

* Microsoft PPL Parallel Sort
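
This two-thread sketch illustrates the subdivision idea only; it is not the Boost implementation, and real subdivision sorts recurse until the parts are small:
[c++]
``
#include <algorithm>
#include <iterator>
#include <thread>

// Toy subdivision sort: partition around a pivot, then sort each side
// in its own thread.
template <class iter_t>
void two_part_subdivision_sort (iter_t first, iter_t last)
{
    typedef typename std::iterator_traits<iter_t>::value_type value_t;
    value_t pivot = *(first + (last - first) / 2);
    iter_t mid = std::partition (first, last,
                                 [pivot] (const value_t &x) { return x < pivot; });
    std::thread t ([=] { std::sort (first, mid); }); // left part, second thread
    std::sort (mid, last);                           // right part, this thread
    t.join ();
}
``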
]
[h4[_Merging Algorithms]]
[:
Divide the data into parts, and each part is sorted by a thread. When
the parts are sorted, they are merged to obtain the final result. These algorithms need additional memory for the
merge, usually of the same size as the data.
With a small number of threads, these algorithms have a speed similar to that of
the subdivision algorithms, but with many threads they are much faster.
Examples of this category are listed below, followed by a toy sketch of the idea:

* GCC Parallel Sort (based on OpenMP)

* Microsoft PPL Parallel Buffered Sort
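
This two-thread sketch illustrates the merging idea only; it is not the Boost implementation. Note that std::inplace_merge may allocate an auxiliary buffer, which is the extra memory cost described above:
[c++]
``
#include <algorithm>
#include <thread>

// Toy merging sort: sort each half in its own thread, then merge.
template <class iter_t>
void two_part_merge_sort (iter_t first, iter_t last)
{
    iter_t mid = first + (last - first) / 2;
    std::thread t ([=] { std::sort (first, mid); }); // first half, second thread
    std::sort (mid, last);                           // second half, this thread
    t.join ();
    std::inplace_merge (first, mid, last);           // may use an auxiliary buffer
}
``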
This generates an *undesirable duality*: the algorithm which is optimal for a small number of threads is not optimal for a big number of threads.
For this reason, the SW designed for a *small machine* is *inadequate* for a *big machine*, and vice versa.
Using only *merging algorithms* has the *problem of the additional memory* used, usually of the same size as the data.
]
[h4[_New Parallel Sort Algorithm (Block Indirect Sort)]]
[:
This algorithm, named Block Indirect Sort, created for processors connected with shared memory, is a hybrid algorithm:

* with a small number of threads, it is a subdivision algorithm;

* with many threads, it is a merging algorithm with a small auxiliary memory (block_size * number of threads).

This algorithm *eliminates the duality*: the same code has *optimal performance* with a small and with a big number of threads.
The number of threads to use is evaluated in each execution.
When the program runs with a *small number of threads* the algorithm
internally uses a *subdivision algorithm* and has performance similar to TBB, and when run with *many threads*,
it internally uses the *new algorithm* and has the performance of GCC Parallel Sort, with the additional advantage of *reduced memory consumption*.
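
Conceptually, the dispatch can be pictured by reusing the two toy sketches above; the threshold value here is invented for the illustration and is not the one used by the library, whose merging phase works on indirect blocks rather than whole halves:
[c++]
``
#include <cstdint>

// Conceptual dispatch only, reusing the toy sketches above; not the
// library's real control flow.
template <class iter_t>
void hybrid_sort (iter_t first, iter_t last, std::uint32_t num_threads)
{
    const std::uint32_t threshold = 8;           // illustrative cut-off
    if (num_threads <= threshold)
        two_part_subdivision_sort (first, last); // few threads: subdivide
    else
        two_part_merge_sort (first, last);       // many threads: merge
}
``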
]
]
[endsect]
[br]

[section:design_process Design Process]
[:
[h4[_Initial Idea]]
[:
The *initial idea* was to build a *merge algorithm* which is *fast with many threads and needs only a low additional memory*.
This algorithm is *not based on any other idea or algorithm*. The technique used in the algorithm (indirect blocks) *is new, and was conceived and implemented for this algorithm*.
As a sample of its results, we can see the sorting of 100 000 000 64-bit numbers, randomly generated,
on a machine with 12 threads.
[*[teletype]
``
                                  |             |             |
 Algorithm                        | Time (secs) | Memory used |
----------------------------------+-------------+-------------+
 Open MP Parallel Sort            |   1.1990    |   1564 MB   |
 Threading Building Blocks (TBB)  |   1.6411    |    789 MB   |
 Block Indirect Sort              |   0.9270    |    790 MB   |
                                  |             |             |
``
]
The best words about this algorithm are expressed by its [*[link sort.parallel.linux_parallel Benchmark results]].
]
[h4[_Design process]]
[:
The process was *long and very hard*, mainly because of the uncertainty about whether the ideas were correct and would run
as fast as needed to be useful. This was complicated by the fact that we couldn't be sure of the efficiency until the last part
of the code was done and the first benchmark had run.
But it was a *very exciting process*: each time a problem was resolved, a new internal algorithm was designed and
tested ..., and we saw, step by step, the advance of the process.
I discovered *many new problems* during this process, unknown until then, which forced me to design *new internal algorithms* to resolve them,
and to divide the work into many parts to execute in parallel mode. Due to this, you can find many nice algorithms inside the sorting algorithm,
written to resolve and parallelize the internal problems.
If you are interested in a detailed description of the algorithm, you can find it here: [* [@../papers/block_indirect_sort_en.pdf block_indirect_sort_en.pdf]].
]
]
[endsect]
[endsect]