
[/
Copyright (c) 2017 Nick Thompson
Use, modification and distribution are subject to the
Boost Software License, Version 1.0. (See accompanying file
LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
]
[section:double_exponential Double-exponential quadrature]

[section:de_overview Overview]

[heading Synopsis]
``
    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <boost/math/quadrature/exp_sinh.hpp>
    #include <boost/math/quadrature/sinh_sinh.hpp>

    namespace boost{ namespace math{ namespace quadrature{

    template<class Real>
    class tanh_sinh
    {
    public:
        tanh_sinh(size_t max_refinements = 15, const Real& min_complement = tools::min_value<Real>() * 4);

        template<class F>
        auto integrate(const F f, Real a, Real b,
                       Real tolerance = tools::root_epsilon<Real>(),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       std::size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;

        template<class F>
        auto integrate(const F f,
                       Real tolerance = tools::root_epsilon<Real>(),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       std::size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;
    };

    template<class Real>
    class exp_sinh
    {
    public:
        exp_sinh(size_t max_refinements = 9);

        template<class F>
        auto integrate(const F f, Real a, Real b,
                       Real tol = sqrt(std::numeric_limits<Real>::epsilon()),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;

        template<class F>
        auto integrate(const F f,
                       Real tol = sqrt(std::numeric_limits<Real>::epsilon()),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;
    };

    template<class Real>
    class sinh_sinh
    {
    public:
        sinh_sinh(size_t max_refinements = 9);

        template<class F>
        auto integrate(const F f,
                       Real tol = sqrt(std::numeric_limits<Real>::epsilon()),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;
    };

    }}}
``
These three integration routines provide robust general purpose quadrature, each having a "native" range over which
the quadrature is performed.
For example, the `sinh_sinh` quadrature integrates over the entire real line, the `tanh_sinh` over (-1, 1),
and the `exp_sinh` over (0, [infin]).
The latter two integrators also have auxiliary ranges which are handled via a change of variables on the function being integrated,
so that the `tanh_sinh` can handle integration over /(a, b)/, and `exp_sinh` over /(a, [infin])/ and /(-[infin], b)/.
Like the other quadrature routines in Boost, these routines support both real and complex-valued integrands.
The `integrate` methods which do not specify a range always integrate over the native range of the method, and are generally
the most efficient and produce the smallest code; the methods which do specify the bounds of integration, on the other hand,
are the most general, and use argument transformations which are generally very robust.
The following table summarizes the ranges supported by each method (a short usage sketch follows the table):
[table
[[Integrator][Native range][Other supported ranges][Comments]]
[[tanh_sinh] [(-1,1)] [(a,b)[br](a,[infin])[br](-[infin],b)[br](-[infin],[infin])]
[Special care is taken for endpoints at or near zero to ensure that abscissa values are calculated without the loss of precision
that would normally occur. Likewise, when transforming to an infinite endpoint, the additional information which tanh_sinh has
internally on abscissa values is used to ensure no loss of precision during the transformation.]]
[[exp_sinh] [(0,[infin])] [(a,[infin])[br](-[infin],0)[br](-[infin],b)] []]
[[sinh_sinh] [(-[infin],[infin])] [][]]
]
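The following is a minimal sketch (the integrands and bounds are arbitrary, chosen only for illustration) contrasting the
native-range overloads with the explicit-bounds overloads described above:

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <boost/math/quadrature/exp_sinh.hpp>
    #include <cmath>

    using namespace boost::math::quadrature;

    auto f = [](double x) { return std::cos(x); };

    tanh_sinh<double> ts;
    double Q1 = ts.integrate(f);            // native range (-1, 1)
    double Q2 = ts.integrate(f, 0.0, 2.5);  // explicit bounds (0, 2.5), handled via an argument transformation

    auto g = [](double x) { return std::exp(-x); };
    exp_sinh<double> es;
    double Q3 = es.integrate(g);            // native range (0, [infin])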
[endsect] [/section:de_overview Overview]

[section:de_tanh_sinh tanh_sinh]
    template<class Real>
    class tanh_sinh
    {
    public:
        tanh_sinh(size_t max_refinements = 15, const Real& min_complement = tools::min_value<Real>() * 4);

        template<class F>
        auto integrate(const F f, Real a, Real b,
                       Real tolerance = tools::root_epsilon<Real>(),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       std::size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;

        template<class F>
        auto integrate(const F f,
                       Real tolerance = tools::root_epsilon<Real>(),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       std::size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;
    };
The `tanh-sinh` quadrature routine provided by Boost is a rapidly convergent numerical integration scheme for holomorphic integrands.
By this we mean that the integrand is the restriction to the real line of a complex-differentiable function which is bounded on the interior of the unit disk /|z| < 1/,
so that it lies within the so-called [@https://en.wikipedia.org/wiki/Hardy_space Hardy space].
If your integrand obeys these conditions, it can be shown that `tanh-sinh` integration is optimal,
in the sense that it requires the fewest function evaluations for a given accuracy of any quadrature algorithm for a random element from the Hardy space.
A basic example of how to use the `tanh-sinh` quadrature is shown below:
    tanh_sinh<double> integrator;
    auto f = [](double x) { return 5*x + 7; };

    // Integrate over native bounds of (-1,1):
    double Q = integrator.integrate(f);

    // Integrate over (0,1.1) instead:
    Q = integrator.integrate(f, 0.0, 1.1);
The basic idea of `tanh-sinh` quadrature is that a variable transformation can cause the endpoint derivatives to decay rapidly.
When the derivatives at the endpoints decay much faster than the Bernoulli numbers grow,
the Euler-Maclaurin summation formula tells us that simple trapezoidal quadrature converges faster than any power of /h/.
That means the number of correct digits of the result should roughly double with each new level of integration (halving of /h/),
and hence the default termination condition for integration is usually set to the square root of machine epsilon.
Most well-behaved integrals should converge to full machine precision with this termination condition,
in 6 or fewer levels at double precision, or 7 or fewer levels at quad precision.
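For reference, the transformation underlying this scheme, in its standard textbook form (written here in LaTeX notation; the
library's internal scaling details may differ), maps the integration variable via [^x = tanh((pi/2) sinh t)], after which the
trapezoidal rule with step /h/ is applied:

    \int_{-1}^{1} f(x)\,dx \approx h \sum_{k=-N}^{N} w_k f(x_k), \qquad
    x_k = \tanh\left(\tfrac{\pi}{2}\sinh(kh)\right), \qquad
    w_k = \frac{\tfrac{\pi}{2}\cosh(kh)}{\cosh^2\left(\tfrac{\pi}{2}\sinh(kh)\right)}

Both the abscissas x[sub k] and the weights w[sub k] decay double-exponentially towards the endpoints, which is what makes the
trapezoidal sum converge so quickly and what allows the endpoint behavior described below.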
One very nice property of tanh-sinh quadrature is that it can handle singularities at the endpoints of the integration domain.
For instance, the following integrand, singular at both endpoints, can be efficiently evaluated to 100 binary digits:
    auto f = [](Real x) { return log(x)*log1p(-x); };
    Real Q = integrator.integrate(f, (Real) 0, (Real) 1);
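As a concrete sketch of the above, `Real` could for example be Boost.Multiprecision's `cpp_bin_float_quad` (an assumption made
here purely for illustration; any type with roughly 100 or more bits of precision would do), and the result can be checked
against the known closed form 2 - [pi][super 2]/6:

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <boost/math/constants/constants.hpp>
    #include <boost/math/special_functions/log1p.hpp>
    #include <boost/multiprecision/cpp_bin_float.hpp>

    using Real = boost::multiprecision::cpp_bin_float_quad;   // 113-bit significand

    boost::math::quadrature::tanh_sinh<Real> integrator;
    auto f = [](Real x) { return log(x) * boost::math::log1p(-x); };
    Real Q = integrator.integrate(f, Real(0), Real(1));
    // The exact value of this integral is 2 - pi^2/6, so the error is easy to check:
    Real exact = 2 - boost::math::constants::pi_sqr_div_six<Real>();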
Now onto the caveats: as stated before, the integrands must lie in a Hardy space to ensure rapid convergence.
Attempting to integrate a function which is not bounded on the unit disk by tanh-sinh can lead to very slow convergence.
For example, take the Runge function:

    auto f1 = [](double t) { return 1/(1+25*t*t); };
    Q = integrator.integrate(f1, (double) -1, (double) 1);

This function has poles at ± ⅈ/5, and as such it is not bounded on the unit disk.
However, the related function

    auto f2 = [](double t) { return 1/(1+0.04*t*t); };
    Q = integrator.integrate(f2, (double) -1, (double) 1);

has poles outside the unit disk (at ± 5ⅈ), and is therefore in the Hardy space.
Our benchmarks show that the second integration is performed 22x faster than the first!
If you do not understand the structure of your integrand in the complex plane, do performance testing before deployment.
Like the trapezoidal quadrature, the tanh-sinh quadrature produces an estimate of the L[sub 1] norm of the integral along with the requested integral.
This is used to establish a scale against which to measure the tolerance, and to provide an estimate of the condition number of the summation.
It can be queried as follows:

    tanh_sinh<double> integrator;
    auto f = [](double x) { return 5*x + 7; };
    double termination = std::sqrt(std::numeric_limits<double>::epsilon());
    double error;
    double L1;
    size_t levels;
    double Q = integrator.integrate(f, 0.0, 1.0, termination, &error, &L1, &levels);
    double condition_number = L1/std::abs(Q);

If the condition number is large, the computed integral is worthless: typically one can assume that Q has lost around /k/ digits of precision
when the condition number is of order 10[super k].
The returned error term is not the actual error in the result, but merely an ['a posteriori] error estimate.
It is the absolute difference between the last two approximations, and for well-behaved integrals, the actual error should be very much smaller than this.
The following table illustrates how the errors and conditioning vary for a few sample integrals; in each case the termination condition was set
to the square root of epsilon, and all tests were conducted in double precision:
[table
[[Integral][Range][Error][Actual measured error][Levels][Condition Number][Comments]]
[[`5 * x + 7`][(0,1)][3.5e-15][0][5][1][This trivial case shows just how accurate these methods can be.]]
[[`log(x) * log(x)`][(0, 1)][0][0][5][1][This is an example of an integral that Gaussian integrators fail to handle.]]
[[`exp(-x) / sqrt(x)`][(0,+[infin])][8.0e-10][1.1e-15][5][1][Gaussian integrators typically fail to handle the singularities at the endpoints of this one.]]
[[`x * sin(2 * exp(2 * sin(2 * exp(2 * x))))`][(-1,1)][7.2e-16][4.9e-17][9][1.89][This is a truly horrible integral that oscillates wildly and
unpredictably, with some very sharp "spikes" in its graph. The higher number of levels used reflects the difficulty of sampling the more extreme features.]]
[[`x == 0 ? 1 : sin(x) / x`][(-[infin], [infin])][3.0e-1][4.0e-1][15][159][This highly oscillatory integral isn't handled at all well by tanh-sinh quadrature: there is so much
cancellation in the sum that the result is essentially worthless. The argument transformation of the infinite integral behaves somewhat badly as well; in fact
we do ['slightly] better integrating over 2 symmetrical and large finite limits.]]
[[`sqrt(x / (1 - x * x))`][(0,1)][1e-8][1e-8][5][1][This is an example of an integral that has all its area close to a non-zero endpoint. The problem here is that
the function being integrated returns "garbage" values for x very close to 1. We can easily fix this issue by passing a 2-argument functor to the integrator:
the second argument gives the distance to the nearest endpoint, and we can use that information to return accurate values, and thus fix the integral calculation.]]
[[`x < 0.5 ? sqrt(x) / sqrt(1 - x * x) : sqrt(x / ((x + 1) * (xc)))`][(0,1)][0][0][5][1][This is the 2-argument version of the previous integral; the second
argument ['xc] is `1-x` in this case, and we use 1-x[super 2] == (1-x)(1+x) to calculate 1-x[super 2] with greater accuracy.]]
]
Although the `tanh-sinh` quadrature can compute integrals over infinite domains by variable transformations, these transformations can create a very poorly behaved integrand.
For this reason, double-exponential variable transformations have been provided that allow stable computation over infinite domains: these are the exp-sinh and sinh-sinh quadratures.

[h4 Complex integrals]

The `tanh_sinh` integrator supports integration of functions which return complex results; for example, the sine integral `Si(z)` has the integral representation:

[equation sine_integral]

which we can code up directly as:
    template <class Complex>
    Complex Si(Complex z)
    {
        typedef typename Complex::value_type value_type;
        using std::sin;  using std::cos;  using std::exp;
        auto f = [&z](value_type t) { return -exp(-z * cos(t)) * cos(z * sin(t)); };
        boost::math::quadrature::tanh_sinh<value_type> integrator;
        return integrator.integrate(f, 0, boost::math::constants::half_pi<value_type>()) + boost::math::constants::half_pi<value_type>();
    }
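A quick, purely illustrative usage sketch, assuming `Complex` is instantiated as `std::complex<double>` (include `<complex>`):

    std::complex<double> s = Si(std::complex<double>(1.0, 0.0));
    // Si(1) is roughly 0.946083, so s should be approximately (0.946083, 0).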
[endsect] [/section:de_tanh_sinh tanh_sinh]

[section:de_tanh_sinh_2_arg Handling functions with large features near an endpoint with tanh-sinh quadrature]

Tanh-sinh quadrature has a unique feature which makes it well suited to handling integrals with either singularities or large features of interest
near one or both endpoints: when we calculate and store the abscissa values at which we will be evaluating the function, we can equally
well calculate the difference between the abscissa value and the nearest endpoint.
This makes it possible to perform quadrature arbitrarily close to an endpoint, without suffering cancellation error.
Note however, that we never actually reach the endpoint, so any endpoint singularity will always be excluded from the quadrature.
The tanh_sinh integration routine will use this additional information internally when performing range transformation, so that, for example,
if one end of the range is zero (or infinite) then our transformations will get arbitrarily close to the endpoint without precision loss.
However, there are some integrals which may have all of their area near ['both] endpoints, or else near the non-zero endpoint, and in that situation
there is a very real risk of loss of precision. For example:
    tanh_sinh<double> integrator;
    auto f = [](double x) { return sqrt(x / (1 - x * x)); };
    double Q = integrator.integrate(f, 0.0, 1.0);
This results in very low accuracy, as all the area of the integral is near 1, and the `1 - x * x` term suffers from cancellation error there.
However, both of tanh_sinh's integration routines will automatically handle 2-argument functors: in this case the first argument is the abscissa value as
before, while the second is the distance to the nearest endpoint, i.e. `a - x` or `b - x` if we are integrating over (a,b).
You can always differentiate between these 2 cases because the second argument will be negative if we are nearer to the left endpoint.
Knowing this, we can rewrite our lambda expression to take advantage of this additional information:
    tanh_sinh<double> integrator;
    auto f = [](double x, double xc) { return x <= 0.5 ? sqrt(x) / sqrt(1 - x * x) : sqrt(x / ((x + 1) * (xc))); };
    double Q = integrator.integrate(f, 0.0, 1.0);
Not only is this form accurate to full machine-precision, but it converges to the result faster as well.

[endsect] [/section:de_tanh_sinh_2_arg Handling functions with large features near an endpoint with tanh-sinh quadrature]
[section:de_sinh_sinh sinh_sinh]

    template<class Real>
    class sinh_sinh
    {
    public:
        sinh_sinh(size_t max_refinements = 9);

        template<class F>
        auto integrate(const F f,
                       Real tol = sqrt(std::numeric_limits<Real>::epsilon()),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;
    };
The sinh-sinh quadrature allows computation over the entire real line, and is called as follows:

    sinh_sinh<double> integrator;
    auto f = [](double x) { return exp(-x*x); };
    double tolerance = std::sqrt(std::numeric_limits<double>::epsilon());
    double error;
    double L1;
    double Q = integrator.integrate(f, tolerance, &error, &L1);

Note that the limits of integration are understood to be (-[infin], +[infin]).
Complex-valued integrands are supported as well; for example, the [@https://en.wikipedia.org/wiki/Dirichlet_eta_function Dirichlet eta function]
can be represented via:

[equation complex_eta_integral]

which we can directly code up as:
    template <class Complex>
    Complex eta(Complex s)
    {
        typedef typename Complex::value_type value_type;
        using std::pow;  using std::exp;
        Complex i(0, 1);
        value_type pi = boost::math::constants::pi<value_type>();
        auto f = [&, s, i](value_type t) { return pow(0.5 + i * t, -s) / (exp(pi * t) + exp(-pi * t)); };
        boost::math::quadrature::sinh_sinh<value_type> integrator;
        return integrator.integrate(f);
    }
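As an illustrative sanity check (again assuming `Complex` is `std::complex<double>`), the value at /s/ = 1 is known in closed form:

    std::complex<double> e = eta(std::complex<double>(1.0, 0.0));
    // eta(1) == ln(2), roughly 0.693147, so e should be approximately (0.693147, 0).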
[endsect] [/section:de_sinh_sinh sinh_sinh]

[section:de_exp_sinh exp_sinh]

    template<class Real>
    class exp_sinh
    {
    public:
        exp_sinh(size_t max_refinements = 9);

        template<class F>
        auto integrate(const F f, Real a, Real b,
                       Real tol = sqrt(std::numeric_limits<Real>::epsilon()),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;

        template<class F>
        auto integrate(const F f,
                       Real tol = sqrt(std::numeric_limits<Real>::epsilon()),
                       Real* error = nullptr,
                       Real* L1 = nullptr,
                       size_t* levels = nullptr)->decltype(std::declval<F>()(std::declval<Real>())) const;
    };
For half-infinite intervals, the `exp-sinh` quadrature is provided:

    exp_sinh<double> integrator;
    auto f = [](double x) { return exp(-3*x); };
    double termination = sqrt(std::numeric_limits<double>::epsilon());
    double error;
    double L1;
    double Q = integrator.integrate(f, termination, &error, &L1);
The native integration range of this integrator is (0, [infin]), but we also support /(a, [infin])/, /(-[infin], 0)/ and /(-[infin], b)/ via argument transformations.
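As an illustrative sketch of the transformed ranges (assuming, as elsewhere in the library, that an infinite endpoint is
expressed by passing `std::numeric_limits<double>::infinity()` to the two-bound overload):

    exp_sinh<double> integrator;
    auto f = [](double x) { return exp(-3*x); };
    // Passing an infinite upper bound selects the transformed (a, [infin]) range:
    double Q = integrator.integrate(f, 1.0, std::numeric_limits<double>::infinity());
    // For this integrand the result should be close to exp(-3.0)/3, roughly 0.0166.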
Endpoint singularities and complex-valued integrands are supported by `exp-sinh`.
For example, the modified Bessel function K can be represented via:

[equation complex_bessel_k_integral]

which we can code up as:
    template <class Complex>
    Complex bessel_K(Complex alpha, Complex z)
    {
        typedef typename Complex::value_type value_type;
        using std::cosh;  using std::exp;
        auto f = [&, alpha, z](value_type t)
        {
            value_type ct = cosh(t);
            if (ct > log(std::numeric_limits<value_type>::max()))
                return Complex(0);
            return exp(-z * ct) * cosh(alpha * t);
        };
        boost::math::quadrature::exp_sinh<value_type> integrator;
        return integrator.integrate(f);
    }
The only wrinkle in the above code is the need to check for large `cosh(t)`, in which case we assume that
`exp(-z cosh(t))` tends to zero faster than `cosh(alpha t)` tends to infinity and return `0`. Without that
check we end up with `0 * Infinity` as the result (a NaN).

[endsect] [/section:de_exp_sinh exp_sinh]
[section:de_tol Setting the Termination Condition for Integration]

The integrate method for all three double-exponential quadratures supports a ['tolerance] argument that acts as the
termination condition for integration.
The tolerance is met when two subsequent estimates of the integral have absolute error less than `tolerance*L1`.
It is highly recommended that the tolerance be left at the default value of [radic][epsilon], or something similar.
Since double-exponential quadrature converges exponentially fast for functions in Hardy spaces, once the routine has *proved* that the error is ~[radic][epsilon],
the error should in fact be ~[epsilon].
If you request that the error be ~[epsilon], this tolerance might never be achieved (as the summation is not stabilized à la Kahan),
and the routine will simply flounder,
dividing the interval in half in order to increase the precision of the integrand, only to be thwarted by floating-point roundoff.
If, for some reason, the default value doesn't quite achieve full precision, then you could try something a little smaller, such as
[radic][epsilon]/4 or [epsilon][super 2/3].
More likely, however, you need to check that the function being integrated is able to return accurate values, and that there are no other issues with your integration scheme.
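For example (an illustrative sketch only; the integrand and the tightened tolerance are arbitrary), a non-default tolerance is
passed immediately after the integration bounds:

    tanh_sinh<double> integrator;
    auto f = [](double x) { return exp(-x) / sqrt(x); };
    double error, L1;
    // Request a slightly tighter termination condition than the default sqrt(epsilon):
    double tolerance = sqrt(std::numeric_limits<double>::epsilon()) / 4;
    double Q = integrator.integrate(f, 0.0, 1.0, tolerance, &error, &L1);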
[endsect] [/section:de_tol Setting the Termination Condition for Integration]
[section:de_levels Setting the Maximum Interval Halvings and Memory Requirements]

The maximum number of interval halvings is the maximum number of times the interval can be cut in half before giving up.
If the requested accuracy is not met at that point, the routine returns its best estimate, along with the `error` and `L1` values,
which allows the user to decide whether another quadrature routine should be employed.
An example of this is:

    double tol = std::sqrt(std::numeric_limits<double>::epsilon());
    size_t max_halvings = 12;
    tanh_sinh<double> integrator(max_halvings);
    auto f = [](double x) { return 5*x + 7; };
    double error, L1;
    double Q = integrator.integrate(f, (double) 0, (double) 1, tol, &error, &L1);
    if (error*L1 > 0.01)
    {
        Q = some_other_quadrature_method(f, (double) 0, (double) 1);
    }
It's important to remember that the number of sample points doubles with each new level, as does the memory footprint
of the integrator object. Further, if the integral is smooth, then the precision will be doubling with each new level,
so that, for example, many integrals can achieve 100 decimal digit precision after just 7 levels. That said, abscissa-weight
pairs for new levels are computed only when a new level is actually required (see thread safety); nonetheless,
you should avoid setting the maximum arbitrarily high "just in case", as the time and space requirements for a large
number of levels can quickly grow out of control.

[endsect] [/section:de_levels Setting the Maximum Interval Halvings and Memory Requirements]
[section:de_thread Thread Safety]

All three of the double-exponential integrators are thread safe as long as BOOST_MATH_NO_ATOMIC_INT is not set. Since the
integrators store a large amount of fairly hard-to-compute data, it is recommended that these objects be stored and reused
as much as possible.
Internally, all three of the double-exponential integrators use the same caching strategy: they allocate all the vectors needed
to store the maximum permitted levels, but only populate the first few levels when constructed. This means a minimal amount of memory
is actually allocated when the integrator is first constructed; already-populated levels can be accessed via a lock-free
atomic read, and only populating new levels requires a thread lock.
In addition, the three built-in types (plus `__float128` when available) have the first 7 levels pre-computed: this is generally sufficient for the vast majority
of integrals - even at quad precision - and means that integrators for these types are relatively cheap to construct.
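The following sketch (illustrative only, and assuming atomic integer support is available as described above) shows the
intended usage pattern of constructing one integrator and sharing it between threads:

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <thread>
    #include <cmath>

    boost::math::quadrature::tanh_sinh<double> integrator;   // constructed once, shared

    auto work = [&integrator](double a, double b, double* result)
    {
        auto f = [](double x) { return std::exp(-x * x); };
        *result = integrator.integrate(f, a, b);              // concurrent calls on the shared object
    };

    double r1, r2;
    std::thread t1(work, 0.0, 1.0, &r1);
    std::thread t2(work, 1.0, 2.0, &r2);
    t1.join();
    t2.join();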
[endsect] [/section:de_thread Thread Safety]
[section:de_caveats Caveats]

A few things to keep in mind while using the tanh-sinh, exp-sinh, and sinh-sinh quadratures:

These routines are *very* aggressive about approaching the endpoint singularities.
This allows lots of significant digits to be extracted, but also has another problem: roundoff error can cause the function to be evaluated at the endpoints.
A few ways to avoid this: narrow up the bounds of integration to, say, [a + [epsilon], b - [epsilon]], make sure (a+b)/2 and (b-a)/2 are representable, and finally,
if you think the compromise between accuracy and usability has gone too far in the direction of accuracy, file a ticket.

Both exp-sinh and sinh-sinh quadratures evaluate the functions they are passed at *very* large arguments.
You might understand that x[super 12]exp(-x) should be zero when x[super 12] overflows, but IEEE floating point arithmetic does not.
Hence `std::pow(x, 12)*std::exp(-x)` is an indeterminate form whenever `std::pow(x, 12)` overflows.
So make sure your functions have the correct limiting behavior; for example:
    auto f = [](double x) {
        double t = exp(-x);
        if (t == 0)
        {
            return 0.0;
        }
        return t*pow(x, 12);
    };

has the correct behavior for large /x/, but `auto f = [](double x) { return exp(-x)*pow(x, 12); };` does not.
Oscillatory integrals, such as the sinc integral, are poorly approximated by double-exponential quadrature.
Fortunately the error estimates and L1 norm are massive for these integrals, but nonetheless, oscillatory integrals require different techniques.

A special mention should be made about integrating through zero: while our range adaptors preserve precision when one endpoint is zero,
things get harder when the origin is neither in the center of the range, nor at an endpoint. Consider integrating:

[expression 1 / (1 + x^2)]

over (a, [infin]). As long as `a >= 0` both the tanh_sinh and the exp_sinh integrators will handle this just fine: in fact they provide
a rather efficient method for this kind of integral. However, if we have `a < 0` then we are forced to adapt the range in a way that
produces abscissa values near zero that have an absolute error of [epsilon], and since all of the area of the integral is near zero,
both integrators thrash around trying to reach the target accuracy, but never actually get there for `a << 0`. On the other hand, the
simple expedient of breaking the integral into two domains, (a, 0) and (0, b), and integrating each separately
works just fine (see the sketch below).
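A minimal sketch of that splitting (the lower bound is arbitrary, and exp_sinh is used here for the (0, [infin]) piece purely for
convenience; tanh_sinh with explicit bounds works equally well):

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <boost/math/quadrature/exp_sinh.hpp>

    auto f = [](double x) { return 1 / (1 + x * x); };
    double a = -5.0;   // arbitrary negative lower bound

    boost::math::quadrature::tanh_sinh<double> ts;
    boost::math::quadrature::exp_sinh<double> es;

    // Split at the origin: (a, 0) handled by tanh_sinh, (0, [infin]) by exp_sinh's native range.
    double Q = ts.integrate(f, a, 0.0) + es.integrate(f);
    // For this integrand expect roughly atan(5) + pi/2, about 2.944.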
Finally, some endpoint singularities are too strong to be handled by `tanh_sinh` or equivalent methods. For example, consider integrating
the function:

    double p = some_value;
    tanh_sinh<double> integrator;
    auto f = [&](double x){ return pow(tan(x), p); };
    auto Q = integrator.integrate(f, 0, constants::half_pi<double>());

The first problem with this function is that the singularity is at [pi]/2, so if we're integrating over (0, [pi]/2) then we can never
approach closer to the singularity than [epsilon], and for p less than but close to 1, we need to get ['very] close to the singularity
to find all the area under the function. If we recall the identity [^tan([pi]/2 - x) == 1/tan(x)] then we can rewrite the function like this:

    auto f = [&](double x){ return pow(tan(x), -p); };

And now the singularity is at the origin and we can get much closer to it when evaluating the integral: all we have done is swap the
integral endpoints over.
This actually works just fine for p < 0.95, but after that the `tanh_sinh` integrator starts thrashing around and is unable to
converge on the integral. The problem is actually a lack of exponent range: if we simply swap type double for something
with a greater exponent range (an 80-bit long double or a quad precision type), then we can get to at least p = 0.99. If we want to go
beyond that, or stick with type double, then we have to get smart.

The easiest method is to notice that for small x, [^tan(x) [cong] x], and so we are simply integrating x[super -p]. Therefore we can use
this approximation over (0, small) and integrate numerically over (small, [pi]/2); using [epsilon] as the crossover point
seems sensible:

    double p = some_value;
    double crossover = std::numeric_limits<double>::epsilon();
    tanh_sinh<double> integrator;
    auto f = [&](double x){ return pow(tan(x), -p); };
    auto Q = integrator.integrate(f, crossover, constants::half_pi<double>()) + pow(crossover, 1 - p) / (1 - p);
There is an alternative, more complex method, which is applicable when we are dealing with expressions which can be simplified
by evaluating by logs. Let's suppose that, as in this case, all the area under the graph is infinitely close to zero. Now imagine
that we could expand that region out over a much larger range of abscissa values: that's exactly what happens if we perform
argument substitution, replacing `x` by `exp(-x)` (note that we must also multiply by the derivative of `exp(-x)`).
Now the singularity at zero is moved to +[infin], and the [pi]/2 bound to
-log([pi]/2). Initially our argument-substituted function looks like:

    auto f = [&](double x){ return exp(-x) * pow(tan(exp(-x)), -p); };

This is hardly any better, as we still run out of exponent range just as before. However, if we replace `tan(exp(-x))` by `exp(-x)` for suitably
small `exp(-x)`, and therefore [^x > -log([epsilon])], we can greatly simplify the expression and evaluate by logs:

    auto f = [&](double x)
    {
        static const double crossover = -log(std::numeric_limits<double>::epsilon());
        return x > crossover ? exp((p - 1) * x) : exp(-x) * pow(tan(exp(-x)), -p);
    };

This form integrates just fine over (-log([pi]/2), +[infin]) using either the `tanh_sinh` or `exp_sinh` classes.
[endsect] [/section:de_caveats Caveats]
[section:de_refes References]

* Hidetosi Takahasi and Masatake Mori, ['Double Exponential Formulas for Numerical Integration], Publ. Res. Inst. Math. Sci., 9 (1974), pp. 721-741.
* Masatake Mori, ['An IMT-Type Double Exponential Formula for Numerical Integration], Publ. RIMS, Kyoto Univ. 14 (1978), 713-729.
* David H. Bailey, Karthik Jeyabalan and Xiaoye S. Li, ['A Comparison of Three High-Precision Quadrature Schemes], Office of Scientific & Technical Information Technical Reports.
* Tanaka, Ken'ichiro, et al., ['Function classes for double exponential integration formulas], Numerische Mathematik 111.4 (2009): 631-655.

[endsect] [/section:de_refes References]
[endsect] [/section:double_exponential Double-exponential quadrature]