
[/
Copyright 2015 Paul A. Bristow.
Copyright 2015 John Maddock.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]

[section:root_comparison Comparison of Root Finding Algorithms]

[section:cbrt_comparison Comparison of Cube Root Finding Algorithms]
In the table below, the cube root of 28 was computed for three __fundamental_types floating-point types,
and one __multiprecision type, __cpp_bin_float at 50 decimal digit precision, using four algorithms.

The 'exact' answer was computed using a 100 decimal digit type:

   cpp_bin_float_100 full_answer ("3.036588971875662519420809578505669635581453977248111123242141654169177268411884961770250390838097895");

Times were measured with __boost_timer, using `class cpu_timer`.

* ['Its] is the number of iterations taken to find the root.
* ['Times] is the CPU time taken, in arbitrary units.
* ['Norm] is the time normalized to that of the quickest algorithm (which therefore has value 1.00).
* ['Dis] is the distance, in bits, from the nearest representation of the 'exact' root.

Distance from the 'exact' answer is measured using function __float_distance.
One or two bits of distance means that all results are effectively 'correct'.
Zero means 'exact' - the nearest __representable value for the floating-point type.
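As an illustration of how the ['Dis] column might be computed, here is a minimal sketch using
__float_distance. This is an illustration only, not the timed program itself, which repeats the
measurement for every type and algorithm:

    #include <boost/math/special_functions/cbrt.hpp>
    #include <boost/math/special_functions/next.hpp> // float_distance
    #include <boost/multiprecision/cpp_bin_float.hpp>
    #include <iostream>

    int main()
    {
       using boost::multiprecision::cpp_bin_float_100;
       // The 'exact' cube root of 28, to 100 decimal digits.
       cpp_bin_float_100 full_answer("3.036588971875662519420809578505669635581453977248111123242141654169177268411884961770250390838097895");
       double found = boost::math::cbrt(28.);              // Root found at double precision.
       double expected = static_cast<double>(full_answer); // Nearest double to the 'exact' root.
       // Signed distance between the two, in bits (units in the last place); 0 means 'exact'.
       std::cout << boost::math::float_distance(found, expected) << std::endl;
    }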
The cube-root function is a simple function, and so is a somewhat contrived example for root-finding,
but it does allow us to investigate some of the factors controlling efficiency that may be extrapolated to
more complex functions.

The program used was [@../../example/root_finding_algorithms.cpp root_finding_algorithms.cpp].
100000 evaluations of each floating-point type and algorithm were used, and the CPU times were
judged from repeat runs to have an uncertainty of 10%. Comparing the MSVC results for `double` and `long double`
(which are identical types on this platform) may give a guide to the uncertainty of the timings.

The requested precision was set as follows:
[table
[[Function][Precision Requested]]
[[TOMS748][numeric_limits<T>::digits - 2]]
[[Newton][floor(numeric_limits<T>::digits * 0.6)]]
[[Halley][floor(numeric_limits<T>::digits * 0.4)]]
[[Schr'''&#xf6;'''der][floor(numeric_limits<T>::digits * 0.4)]]
]
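To make these requested precisions concrete, here is a minimal sketch of how such a digit count
might be passed to __newton. The functor, initial guess and bracketing bounds are illustrative
assumptions (valid for x > 0), not the code of the timed program:

    #include <boost/math/tools/roots.hpp>
    #include <cmath>
    #include <limits>
    #include <utility>

    // Illustrative functor returning f(x) = x^3 - a and f'(x) = 3x^2.
    struct cbrt_functor_deriv
    {
       explicit cbrt_functor_deriv(double a) : a(a) {}
       std::pair<double, double> operator()(double x) const
       {
          return std::make_pair(x * x * x - a, 3 * x * x);
       }
       double a;
    };

    double cbrt_newton(double x) // x > 0
    {
       int exponent;
       std::frexp(x, &exponent);                    // Extract the binary exponent of x.
       double guess = std::ldexp(1., exponent / 3); // Rough guess: 2^(exponent/3).
       // Requested precision: 60% of the type's binary digits, as in the table above.
       int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.6);
       return boost::math::tools::newton_raphson_iterate(
          cbrt_functor_deriv(x), guess, guess / 2, guess * 2, digits);
    }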
* The C++ Standard cube root function [@http://en.cppreference.com/w/cpp/numeric/math/cbrt std::cbrt]
is defined only for built-in or fundamental types,
so it cannot be used with any user-defined floating-point type such as __multiprecision.
This, and the fact that the cube function is so impeccably well-behaved,
allows the implementer to use many tricks to achieve a fast computation.
On some platforms, `std::cbrt` appeared several times as quick as the more general `boost::math::cbrt`;
on other platforms / compiler options, `boost::math::cbrt` is noticeably faster. In general, the results are highly
dependent on the code-generation / processor-architecture compiler options used. One can
assume that the standard library will have been compiled with options ['nearly] optimal for the platform
it was installed on, whereas the user has more choice over the options used for Boost.Math. Pick something
too general or conservative and performance suffers, while selecting options that make use of the latest
instruction-set opcodes speeds things up noticeably.
* Two compilers in optimise mode were compared: GCC 4.9.1 (using the Netbeans IDE)
and Microsoft Visual Studio 2013 (Update 1), on the same hardware.
The number of iterations was consistent, but the relative run-times were surprisingly different.
* `boost::math::cbrt` allows use with ['any user-defined floating-point type], conveniently
__multiprecision. It too can take some advantage of the good behaviour of the cube function,
compared to the more general implementation in the nth root-finding examples. For example,
it uses a polynomial approximation to generate a better initial guess than simply dividing the exponent by three,
and it can avoid the complex checks in __newton required to prevent the
search going wildly off-track. For a known precision, it may also be possible to
fix the number of iterations, allowing inlining and loop unrolling. It also
algebraically simplifies the Halley steps, leading to a big reduction in the
number of floating-point operations required compared to a "black box" implementation
that calculates the derivatives separately and then combines them in the Halley code
(a sketch of this simplification follows this list).
Typically, it was found that computation using type `double`
took a few times longer when using the various root-finding algorithms directly rather
than the hand-coded/optimized `cbrt` routine.
* The importance of getting a good initial guess can be seen from the iteration count for the multiprecision case:
here we "cheat" a little and use the cube root calculated at double precision as the initial guess
(also sketched below). The limitation of this tactic is that the range of possible (exponent) values may be less than that of the multiprecision type.
* For __fundamental_types, there was little to choose between the three derivative-based methods,
but for __cpp_bin_float, __newton was twice as fast. Note that the cube root is an extreme
test case, as the cost of calling the functor is so cheap that the runtimes are largely
dominated by the complexity of the iteration code.
* Compiling with optimisation halved computation times, and any differences between algorithms
became nearly negligible. The optimisation speed-up of __TOMS748 was especially noticeable.
* Using a multiprecision type like `cpp_bin_float_50` for a precision of 50 decimal digits
took a lot longer, as expected, because most of the computation
uses software rather than 64-bit floating-point hardware.
Speeds are often more than 50 times slower.
* Using `cpp_bin_float_50`, __TOMS748 was much slower, showing the benefit of using derivatives.
__newton was found to be twice as quick as either of the second-derivative methods.
This is an extreme case though: the function and its derivatives are so cheap to compute that we are
really measuring the complexity of the boilerplate root-finding code.
* For multiprecision types, only one or two extra ['iterations] are needed to get the remaining 35 digits, whatever the algorithm used.
(The time taken was of course much greater for these types.)
* Using a 100 decimal-digit type only doubled the time and required only a very few more iterations,
so the cost of extra precision is mainly the underlying cost of computing more digits,
not in the way the algorithm works. This confirms previous observations using __NTL high-precision types.
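As an illustration of two of the tricks just described, the algebraic simplification of the Halley
step and the double-precision initial guess, consider this minimal sketch (an illustration, not the
library's actual `cbrt` implementation):

    #include <boost/math/special_functions/cbrt.hpp>
    #include <boost/multiprecision/cpp_bin_float.hpp>

    // For f(x) = x^3 - a, the generic Halley step
    //    x - 2*f*f' / (2*f'*f' - f*f'')
    // simplifies algebraically to x * (x^3 + 2a) / (2*x^3 + a):
    // a few multiplications and one division, with no separate
    // evaluation and recombination of the two derivatives.
    double halley_cbrt_step(double x, double a)
    {
       double x3 = x * x * x;
       return x * (x3 + 2 * a) / (2 * x3 + a);
    }

    // Seeding a 50-digit iteration with the cube root computed at double
    // precision: only the last ~35 of the 50 digits remain to be found,
    // provided the exponent of x is within double's range.
    boost::multiprecision::cpp_bin_float_50 cbrt_guess(double x)
    {
       return boost::multiprecision::cpp_bin_float_50(boost::math::cbrt(x));
    }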
[include root_comparison_tables_msvc.qbk]
[include root_comparison_tables_gcc.qbk]

[endsect] [/section:cbrt_comparison Comparison of Cube Root Finding Algorithms]

[section:root_n_comparison Comparison of Nth-root Finding Algorithms]
A second example compares four generalized nth-root finding algorithms for various nth roots (5, 7 and 13)
of a single value 28.0, for four floating-point types: `float`, `double`,
`long double` and the __multiprecision type `cpp_bin_float_50`.

In each case the target accuracy was set using our "recommended" accuracy limits
(or at least limits that make a good starting point, and which are likely to give
close to full accuracy without resorting to unnecessary iterations).
[table
[[Function][Precision Requested]]
[[TOMS748][numeric_limits<T>::digits - 2]]
[[Newton][floor(numeric_limits<T>::digits * 0.6)]]
[[Halley][floor(numeric_limits<T>::digits * 0.4)]]
[[Schr'''&#xf6;'''der][floor(numeric_limits<T>::digits * 0.4)]]
]
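For illustration, a functor supplying the value and both derivatives of f(x) = x[super N] - a, as
required by __halley, might look like the following minimal sketch; the guess and bracket are
simplifying assumptions (valid for x >= 1), not the code of the timed program linked below:

    #include <boost/math/special_functions/pow.hpp> // pow<N>, compile-time integer power
    #include <boost/math/tools/roots.hpp>
    #include <cmath>
    #include <limits>
    #include <tuple>

    // Functor for f(x) = x^N - a, returning f(x), f'(x) and f''(x) as a tuple.
    template <int N>
    struct nth_root_functor_2deriv
    {
       explicit nth_root_functor_2deriv(double a) : a(a) {}
       std::tuple<double, double, double> operator()(double x) const
       {
          double fx = boost::math::pow<N>(x) - a;                // f(x)
          double dx = N * boost::math::pow<N - 1>(x);            // f'(x)
          double d2x = N * (N - 1) * boost::math::pow<N - 2>(x); // f''(x)
          return std::make_tuple(fx, dx, d2x);
       }
       double a;
    };

    template <int N>
    double nth_root_halley(double x) // x >= 1
    {
       double guess = std::pow(x, 1. / N); // A deliberately good initial guess.
       // Requested precision: 40% of the type's binary digits, as in the table above.
       int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.4);
       return boost::math::tools::halley_iterate(
          nth_root_functor_2deriv<N>(x), guess, 1., x, digits);
    }

For example, `nth_root_halley<5>(28.)` finds the fifth root of 28.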
Tests used Microsoft Visual Studio 2013 (Update 1) and GCC 4.9.1, using source code
[@../../example/root_n_finding_algorithms.cpp root_n_finding_algorithms.cpp].

The timing uncertainty (especially using MSVC) is at least 5% of the normalized time 'Norm'.
To pick them out, the 'best' and 'worst' algorithms are highlighted in blue and red.
More than one result can be 'best' when normalized times are indistinguishable
within the uncertainty.
[/include roots_table_100_msvc.qbk]
[/include roots_table_75_msvc.qbk]
[/include roots_table_75_msvc_X86.qbk]
[/include roots_table_100_msvc_X86.qbk]
[/include roots_table_100_msvc_AVX.qbk]
[/include roots_table_75_msvc_AVX.qbk]
[/include roots_table_75_msvc_X86_SSE2.qbk]
[/include roots_table_100_msvc_X86_SSE2.qbk]
[/include roots_table_100_gcc_X64_SSE2.qbk]
[/include roots_table_75_gcc_X64_SSE2.qbk]
[/include type_info_table_100_msvc.qbk]
[/include type_info_table_75_msvc.qbk]

[include roots_table_100_msvc_X86_SSE2.qbk]
[include roots_table_100_msvc_X64_AVX.qbk]
[include roots_table_100_gcc_X64_SSE2.qbk]
Some tentative conclusions can be drawn from this limited exercise.

* Perhaps surprisingly, there is little difference between the various algorithms for __fundamental_types floating-point types.
Using the first derivative (__newton) is usually the best; while its improvement over the derivative-free
__TOMS748 is considerable in the number of iterations, the gain in execution time is small. This reflects the fact
that the function whose root we are finding is trivial to evaluate, so runtimes are dominated by the time taken
by the boilerplate code in each method.
* The extra cost of evaluating the second derivatives (__halley or __schroder) is usually too much for any net benefit:
as with the cube root, these functors are so cheap to evaluate that the runtime is largely dominated by the
complexity of the root-finding method.
* For a __multiprecision floating-point type, __newton is the clear winner, with a several-fold gain over __TOMS748,
and again no improvement from the second-derivative algorithms.
* The run-time of the 50 decimal-digit __multiprecision type is about 30-fold greater than that of `double`.
* The column ['Dis] shows the number of bits distance from the correct result.
The Newton-Raphson algorithm shows a bit or two better accuracy than __TOMS748.
* The goodness of the initial 'guess' is especially crucial for __multiprecision.
Separate experiments show that evaluating the 'guess' using `double` allows
convergence to the final exact result in one or two iterations.
So in this contrived example, rather than crudely dividing the exponent by N for a 'guess',
it would be far better to use a `pow<double>` (or,
if more precision is needed, a `pow<long double>`) function to estimate a 'guess'
(see the sketch after this list).
The limitation of this tactic is that the range of possible (exponent) values may be less than that of the multiprecision type.
* Using the floating-point extension __SSE2 gave a modest ten-percent speedup.
* Using MSVC, there was some improvement using 64-bit, markedly so for __multiprecision.
* The GCC compiler 4.9.1 using 64-bit was at least five-fold faster than 32-bit,
apparently reflecting better optimization.
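A minimal sketch of that `pow`-based tactic for a 50 decimal-digit type, subject to the
exponent-range caveat above:

    #include <boost/multiprecision/cpp_bin_float.hpp>
    #include <cmath>

    using boost::multiprecision::cpp_bin_float_50;

    // Estimate the Nth root at double precision, then widen: roughly 16 of the
    // 50 decimal digits are already correct before any iteration begins.
    cpp_bin_float_50 nth_root_guess(cpp_bin_float_50 const& x, int n)
    {
       double xd = static_cast<double>(x); // Only valid while x fits double's exponent range.
       return cpp_bin_float_50(std::pow(xd, 1. / n));
    }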
Clearly, your mileage [*will vary], but in summary, __newton seems the first choice of algorithm,
and effort to find a good 'guess' the first speed-up target, especially for __multiprecision.
And of course, compiler optimisation is crucial for speed.

[endsect] [/section:root_n_comparison Comparison of Nth-root Finding Algorithms]
[section:elliptic_comparison Comparison of Elliptic Integral Root Finding Algorithms]

A third example compares four root-finding algorithms for locating
the second radius of an ellipse with first radius 28 and arc length 300,
for four floating-point types: `float`, `double`,
`long double` and the __multiprecision type `cpp_bin_float_50`.

That is to say, we are solving:

[pre 4x E(sqrt(1 - 28[super 2] / x[super 2])) - 300 = 0]
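For illustration, a derivative-free solution of this equation using __TOMS748 might look like the
following minimal sketch. The bracket of 28 to 75 is my own reasoning, not taken from the example
program: the root must exceed the first radius 28 (a circle of radius 28 has perimeter well under
300), and since E(k) >= 1 the arc length 4x E(k) is at least 4x, so the root lies below 300/4 = 75:

    #include <boost/math/special_functions/ellint_2.hpp> // E(k)
    #include <boost/math/tools/toms748_solve.hpp>
    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <limits>
    #include <utility>

    int main()
    {
       // f(x) = 4x E(sqrt(1 - 28^2/x^2)) - 300: zero when the ellipse with
       // radii 28 and x has arc length 300.
       auto f = [](double x)
       {
          double k = std::sqrt(1 - (28. * 28.) / (x * x)); // Eccentricity.
          return 4 * x * boost::math::ellint_2(k) - 300;
       };
       // Requested precision, as in the table below: digits - 2.
       boost::math::tools::eps_tolerance<double> tol(std::numeric_limits<double>::digits - 2);
       std::uintmax_t max_iter = 50; // Updated to the number of iterations actually taken.
       std::pair<double, double> r = boost::math::tools::toms748_solve(f, 28., 75., tol, max_iter);
       std::cout << (r.first + r.second) / 2 << std::endl; // The second radius.
    }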
In each case the target accuracy was set using our "recommended" accuracy limits
(or at least limits that make a good starting point, and which are likely to give
close to full accuracy without resorting to unnecessary iterations).

[table
[[Function][Precision Requested]]
[[TOMS748][numeric_limits<T>::digits - 2]]
[[Newton][floor(numeric_limits<T>::digits * 0.6)]]
[[Halley][floor(numeric_limits<T>::digits * 0.4)]]
[[Schr'''&#xf6;'''der][floor(numeric_limits<T>::digits * 0.4)]]
]
Tests used Microsoft Visual Studio 2013 (Update 1) and GCC 4.9.1, using source code
[@../../example/root_elliptic_finding.cpp root_elliptic_finding.cpp].

The timing uncertainty (especially using MSVC) is at least 5% of the normalized time 'Norm'.
To pick them out, the 'best' and 'worst' algorithms are highlighted in blue and red.
More than one result can be 'best' when normalized times are indistinguishable
within the uncertainty.
[include elliptic_table_100_msvc_X86_SSE2.qbk]
[include elliptic_table_100_msvc_X64_AVX.qbk]
[include elliptic_table_100_gcc_X64_SSE2.qbk]

Remarks:
* The function being solved is now moderately expensive to call, and twice as expensive to call
when obtaining the derivative as when not. Consequently there is very little improvement in moving
from a derivative-free method to Newton iteration. However, once you have calculated the first derivative
the second comes almost for free; consequently the third-order method (Halley) does much the best
(a sketch of such a functor follows these remarks).
* Of the two second-derivative methods, Halley does best, as would be expected: the Schroder method offers better
guarantees of ['quadratic] convergence, while Halley relies on a smooth function with a single root to
give ['cubic] convergence. It is not entirely clear why Schroder iteration often does worse than Newton.
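To illustrate why obtaining the derivative doubles the cost of a call, here is a minimal sketch of
a value-plus-first-derivative functor for this equation, suitable for __newton. The derivative
formula is my own derivation from dE/dk = (E(k) - K(k))/k, not code taken from the linked example:

    #include <boost/math/special_functions/ellint_1.hpp> // K(k)
    #include <boost/math/special_functions/ellint_2.hpp> // E(k)
    #include <cmath>
    #include <utility>

    // f(x) = 4x E(k) - 300 with k = sqrt(1 - 28^2/x^2). The value alone needs one
    // elliptic integral, E(k); the derivative needs a second, K(k), since
    //    f'(x) = 4 E(k) + 4 * 28^2 * (E(k) - K(k)) / (x^2 - 28^2).
    struct elliptic_root_functor_deriv
    {
       std::pair<double, double> operator()(double x) const
       {
          double c2 = 28. * 28.;
          double k = std::sqrt(1 - c2 / (x * x));
          double E = boost::math::ellint_2(k); // Complete elliptic integral E(k).
          double K = boost::math::ellint_1(k); // Complete elliptic integral K(k).
          return std::make_pair(4 * x * E - 300,
                                4 * E + 4 * c2 * (E - K) / (x * x - c2));
       }
    };

Both integrals are then already to hand, so the second derivative needed by __halley and
__schroder comes almost for free, as the first remark above observes.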
[endsect] [/section:elliptic_comparison Comparison of Elliptic Integral Root Finding Algorithms]

[endsect] [/section:root_comparison Comparison of Root Finding Algorithms]