
[section:lanczos The Lanczos Approximation]

[h4 Motivation]

['Why base gamma and gamma-like functions on the Lanczos approximation?]

First of all I should make clear that for the gamma function
over real numbers (as opposed to complex ones)
the Lanczos approximation (See [@http://en.wikipedia.org/wiki/Lanczos_approximation Wikipedia or ]
[@http://mathworld.wolfram.com/LanczosApproximation.html Mathworld])
appears to offer no clear advantage over more traditional methods such as
[@http://en.wikipedia.org/wiki/Stirling_approximation Stirling's approximation].
__pugh carried out an extensive comparison of the various methods available
and discovered that they were all very similar in terms of complexity
and relative error. However, the Lanczos approximation does have a couple of
properties that make it worthy of further consideration:
* The approximation has an easy-to-compute truncation error that holds for
all /z > 0/. In practice that means we can use the same approximation for all
/z > 0/, and be certain that no matter how large or small /z/ is, the truncation
error will /at worst/ be bounded by some finite value.
* The approximation has a form that is particularly amenable to analytic
manipulation; in particular, ratios of gamma or gamma-like functions
are easy to compute without resorting to logarithms.

It is the combination of these two properties that makes the approximation
attractive: Stirling's approximation is highly accurate for large /z/, and
has some of the same analytic properties as the Lanczos approximation, but
can't easily be used across the whole range of /z/.
As the simplest example, consider the ratio of two gamma functions: one could
compute the result via lgamma:

    exp(lgamma(a) - lgamma(b));

However, even if lgamma is uniformly accurate to 0.5ulp, the worst case
relative error in the above (measured in units of machine epsilon) can easily
be shown to be:

    Erel > a * log(a)/2 + b * log(b)/2
For small /a/ and /b/ that's not a problem, but to put the relationship another
way: ['each time a and b increase in magnitude by a factor of 10, at least one
decimal digit of precision will be lost.]

In contrast, by analytically combining like power
terms in a ratio of Lanczos approximations, these errors can be virtually eliminated
for small /a/ and /b/, and kept under control for very large (or very small
for that matter) /a/ and /b/. Of course, computing large powers is itself a
notoriously hard problem, but even so, analytic combinations of Lanczos
approximations can make the difference between obtaining a valid result and
obtaining garbage. Refer to the implementation notes for the __beta function for
an example of this method in practice. The incomplete
[link math_toolkit.sf_gamma.igamma gamma_p gamma] and
[link math_toolkit.sf_beta.ibeta_function beta] functions
use similar analytic combinations of power terms to combine gamma and beta
functions divided by large powers into single (simpler) expressions.
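
Returning to the ratio example above, the following minimal sketch contrasts the
naive lgamma-based computation with `boost::math::tgamma_ratio`, which (for
suitable arguments) combines the two Lanczos approximations analytically before
any exponentiation takes place. The argument values are arbitrary and chosen
purely for illustration:

    #include <boost/math/special_functions/gamma.hpp>
    #include <cmath>
    #include <iostream>

    int main()
    {
       double a = 550.5, b = 552.5;   // arbitrary, moderately large arguments

       // Naive approach: subtract two large lgamma values, then exponentiate.
       double naive = std::exp(std::lgamma(a) - std::lgamma(b));

       // tgamma_ratio combines the two approximations analytically, so the
       // large like-power terms cancel before they can do any damage.
       double combined = boost::math::tgamma_ratio(a, b);

       std::cout.precision(17);
       std::cout << naive << "\n" << combined << std::endl;
    }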
[h4 The Approximation]

The Lanczos Approximation to the Gamma Function is given by:

[equation lanczos0]

Where S[sub g](z) is an infinite sum that is convergent for all z > 0,
and /g/ is an arbitrary parameter that controls the "shape" of the
terms in the sum, which is given by:

[equation lanczos0a]

With individual coefficients defined in closed form by:

[equation lanczos0b]

However, evaluation of the sum in that form can lead to numerical instability
in the computation of the ratios of rising and falling factorials (effectively
we're multiplying by a series of numbers very close to 1, so roundoff errors
can accumulate quite rapidly).
The Lanczos approximation is therefore often written in partial fraction form
with the leading constants absorbed by the coefficients in the sum:

[equation lanczos1]

where:

[equation lanczos2]

Again, parameter /g/ is an arbitrarily chosen constant, and /N/ is an arbitrarily chosen
number of terms to evaluate in the "Lanczos sum" part.
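
As a concrete (if low-precision) illustration of the partial fraction form, the
sketch below evaluates lgamma using the widely published /g = 5/, /N = 6/
coefficient set originally due to Lanczos, which keeps the error in the sum down
to around 2e-10. Note that this is ['not] the coefficient set used by this
library, the shift convention differs slightly from the equations above, and no
argument checking is performed:

    #include <cmath>
    #include <iostream>

    // Lanczos approximation to ln(gamma(z)) for z > 0, using the classic
    // g = 5, N = 6 coefficient set.
    double lgamma_lanczos(double z)
    {
       static const double c[6] = {
           76.18009172947146,      -86.50532032941677,
           24.01409824083091,       -1.231739572450155,
            0.1208650973866179e-2,  -0.5395239384953e-5
       };
       double sum = 1.000000000190015;       // the leading constant term
       for (int k = 0; k < 6; ++k)
          sum += c[k] / (z + k + 1);         // the partial fraction terms
       double t = z + 5.5;                   // z + g + 1/2
       // ln(gamma(z)) = (z + 1/2)ln(t) - t + ln(sqrt(2*pi) * sum / z)
       return (z + 0.5) * std::log(t) - t
          + std::log(2.5066282746310005 * sum / z);
    }

    int main()
    {
       std::cout.precision(15);
       std::cout << lgamma_lanczos(10.0) << std::endl;  // ln(9!) ~= 12.801827480081
    }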
[note
Some authors
choose to define the sum from k=1 to N, and hence end up with N+1 coefficients.
This can confuse both the following discussion and the code (since C++
deals with half-open array ranges, rather than the closed range of the sum).
This convention is consistent with __godfrey, but not __pugh, so take care
when referring to the literature in this field.]
[h4 Computing the Coefficients]

The coefficients C0..CN-1 need to be computed from /N/ and /g/
at high precision, and then stored as part of the program.
Calculation of the coefficients is performed via the method of __godfrey;
let the constants be contained in a column vector P, then:

    P = D B C F

where B is an NxN matrix:

[equation lanczos4]

D is an NxN matrix:

[equation lanczos3]

C is an NxN matrix:

[equation lanczos5]

and F is an N element column vector:

[equation lanczos6]

Note that the matrices B, D and C consist entirely of integer terms and depend
only on /N/, so their product should be computed first, and the result then
multiplied by /F/ as the last step.
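
The overall structure of that calculation is sketched below using
Boost.Multiprecision for the high-precision arithmetic. The routines that fill
in B, D, C and F from the equations above are not shown (the matrices are simply
taken as arguments), and the 50-digit working precision is an arbitrary choice
for the purposes of the sketch:

    #include <boost/multiprecision/cpp_dec_float.hpp>
    #include <vector>

    typedef boost::multiprecision::cpp_dec_float_50 mp_t;
    typedef std::vector<std::vector<mp_t> > matrix;   // row-major, N x N

    // Naive high-precision matrix product: adequate for the small N involved.
    matrix multiply(const matrix& a, const matrix& b)
    {
       std::size_t n = a.size();
       matrix r(n, std::vector<mp_t>(n, mp_t(0)));
       for (std::size_t i = 0; i < n; ++i)
          for (std::size_t k = 0; k < n; ++k)
             for (std::size_t j = 0; j < n; ++j)
                r[i][j] += a[i][k] * b[k][j];
       return r;
    }

    // Given B, D, C and F filled in from the equations above, form P = D B C F:
    // the all-integer, g-independent product D B C is evaluated first, and only
    // then multiplied by the g-dependent vector F.
    std::vector<mp_t> lanczos_coefficients(
       const matrix& B, const matrix& D, const matrix& C,
       const std::vector<mp_t>& F)
    {
       matrix DBC = multiply(multiply(D, B), C);
       std::size_t n = F.size();
       std::vector<mp_t> P(n, mp_t(0));
       for (std::size_t i = 0; i < n; ++i)
          for (std::size_t j = 0; j < n; ++j)
             P[i] += DBC[i][j] * F[j];
       return P;
    }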
[h4 Choosing the Right Parameters]

The trick is to choose
/N/ and /g/ to give the desired level of accuracy: choosing a small value for
/g/ leads to a strictly convergent series, but one which converges only slowly.
Choosing a larger value of /g/ causes the terms in the series to be large
and\/or divergent for about the first /g-1/ terms, and to then suddenly converge
with a "crunch".

__pugh has determined the optimal
value of /g/ for /N/ in the range /1 <= N <= 60/: unfortunately, in practice, choosing
these values leads to cancellation errors in the Lanczos sum, as the largest
term in the (alternating) series is approximately 1000 times larger than the result.
These optimal values appear not to be useful in practice unless the evaluation
can be done with a number of guard digits /and/ the coefficients are stored
at higher precision than that desired in the result. These values are best
reserved for, say, computing to float precision with double precision arithmetic.
[table Optimal choices for N and g when computing with guard digits (source: Pugh)
[[Significand Size] [N] [g] [Max Error]]
[[24] [6] [5.581] [9.51e-12]]
[[53] [13] [13.144565] [9.2213e-23]]
]
The alternative described by __godfrey is to perform an exhaustive
search of the /N/ and /g/ parameter space to determine the optimal combination for
a given /p/ digit floating-point type. Repeating this work found a good
approximation for double precision arithmetic (close to the one __godfrey found),
but failed to find really
good approximations for 80 or 128-bit long doubles. Further, it was observed
that the approximations obtained tended to be optimised for the small values
of z (1 < z < 200) used to test the implementation against the factorials.
Computing ratios of gamma functions with large arguments was observed to
suffer from errors resulting from the truncation of the Lanczos series.
__pugh identified all the locations where the theoretical error of the
approximation is at a minimum, but unfortunately has published only the largest
of these minima. However, he makes the observation that the minima
coincide closely with the locations where the first neglected term (a[sub N]) in the
Lanczos series S[sub g](z) changes sign. These locations are quite easy to
locate, albeit at the expense of considerable computer time. These "sweet spots" need
only be computed once, tabulated, and then searched when required for an
approximation that delivers the required precision for some fixed precision
type.
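
A minimal sketch of the kind of search involved is shown below: it assumes a
routine (represented here by any suitable callable) that evaluates the first
neglected coefficient a[sub N] as a function of /g/ via the closed form given
earlier, and simply bisects for the sign change. In reality the search has to
be carried out at high precision and repeated for each candidate /N/:

    #include <functional>

    // Bisect for the value of g at which f(g) changes sign; here f would
    // evaluate the first neglected Lanczos coefficient a[N] for a fixed N.
    // Assumes f(lo) and f(hi) have opposite signs.
    double find_sign_change(const std::function<double(double)>& f,
                            double lo, double hi, double tol)
    {
       double flo = f(lo);
       while (hi - lo > tol)
       {
          double mid = (lo + hi) / 2;
          double fmid = f(mid);
          if ((flo < 0) == (fmid < 0))
          {
             lo = mid;     // the sign change lies in the upper half
             flo = fmid;
          }
          else
             hi = mid;     // the sign change lies in the lower half
       }
       return (lo + hi) / 2;
    }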
Unfortunately, following this path failed to find a really good approximation
for 128-bit long doubles, and those found for 64 and 80-bit reals required an
excessive number of terms. There are two competing issues here: high precision
requires a large value of /g/, but avoiding cancellation errors in the evaluation
requires a small /g/.

At this point note that the Lanczos sum can be converted into rational form
(a ratio of two polynomials, obtained from the partial-fraction form using
polynomial arithmetic),
and doing so changes the coefficients so that /they are all positive/. That
means that the sum in rational form can be evaluated without cancellation
error, albeit with double the number of coefficients for a given N. Repeating
the search of the "sweet spots", this time evaluating the Lanczos sum in
rational form, and testing only those "sweet spots" whose theoretical error
is less than the machine epsilon for the type being tested, yielded good
approximations for all the types tested. The optimal values found were quite
close to the best cases reported by __pugh (just slightly larger /N/ and slightly
smaller /g/ for a given precision than __pugh reports), and even though converting
to rational form doubles the number of stored coefficients, it should be
noted that half of them are integers (and therefore require less storage space),
and the approximations need a smaller /N/ than would otherwise be the case,
so fewer floating point operations may be required overall.
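
The sketch below illustrates the conversion to rational form using the same
/g = 5/, /N = 6/ coefficient set as the earlier example (again, not the set used
by this library): the denominator is the all-integer product (z+1)(z+2)...(z+6),
the numerator is built up with simple polynomial arithmetic, and both are then
evaluated by Horner's scheme. For the coefficient sets actually selected for the
library, the resulting coefficients are all positive, so no cancellation occurs
for /z > 0/:

    #include <iostream>
    #include <vector>

    // The classic g = 5, N = 6 partial fraction coefficients:
    // sum(z) = c0 + c[0]/(z+1) + ... + c[5]/(z+6).
    static const double c0 = 1.000000000190015;
    static const double c[6] = {
        76.18009172947146,      -86.50532032941677,
        24.01409824083091,       -1.231739572450155,
         0.1208650973866179e-2,  -0.5395239384953e-5
    };

    // Multiply polynomial p (coefficients in ascending order) by (z + a).
    std::vector<double> mul_linear(const std::vector<double>& p, double a)
    {
       std::vector<double> r(p.size() + 1, 0.0);
       for (std::size_t i = 0; i < p.size(); ++i)
       {
          r[i]     += a * p[i];
          r[i + 1] += p[i];
       }
       return r;
    }

    // Horner evaluation, coefficients in ascending order.
    double eval(const std::vector<double>& p, double z)
    {
       double r = 0;
       for (std::size_t i = p.size(); i-- > 0; )
          r = r * z + p[i];
       return r;
    }

    int main()
    {
       // Denominator Q(z) = (z+1)(z+2)...(z+6): all integer coefficients.
       std::vector<double> Q(1, 1.0);
       for (int k = 1; k <= 6; ++k)
          Q = mul_linear(Q, k);

       // Numerator P(z) = c0 * Q(z) + sum over k of c[k] * Q(z)/(z+k+1).
       std::vector<double> P(Q.size(), 0.0);
       for (std::size_t i = 0; i < Q.size(); ++i)
          P[i] = c0 * Q[i];
       for (int k = 0; k < 6; ++k)
       {
          std::vector<double> q(1, 1.0);   // Q(z) with the (z+k+1) factor removed
          for (int j = 1; j <= 6; ++j)
             if (j != k + 1)
                q = mul_linear(q, j);
          for (std::size_t i = 0; i < q.size(); ++i)
             P[i] += c[k] * q[i];
       }

       // The partial fraction and rational forms agree at an arbitrary test point.
       double z = 3.25;
       double partial = c0;
       for (int k = 0; k < 6; ++k)
          partial += c[k] / (z + k + 1);
       std::cout.precision(17);
       std::cout << partial << "\n" << eval(P, z) / eval(Q, z) << std::endl;
    }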
The following table shows the optimal values for /N/ and /g/ when computing
at fixed precision. These should be taken as work in progress: there are no
values for 106-bit significand machines (Darwin long doubles & NTL quad_float),
and further optimisation of the values of /g/ may be possible.
The errors given in the table
are estimates of the error due to truncation of the Lanczos infinite series
to /N/ terms. They are calculated from the sum of the first five neglected
terms, and are known to be rather pessimistic estimates, although it is noticeable
that the best combinations of /N/ and /g/ occurred when the estimated truncation error
almost exactly matches the machine epsilon for the type in question.
[table Optimum value for N and g when computing at fixed precision
[[Significand Size][Platform/Compiler Used][N][g][Max Truncation Error]]
[[24][Win32, VC++ 7.1][6][1.428456135094165802001953125][9.41e-007]]
[[53][Win32, VC++ 7.1][13][6.024680040776729583740234375][3.23e-016]]
[[64][Suse Linux 9 IA64, gcc-3.3.3][17][12.2252227365970611572265625][2.34e-024]]
[[116][HP Tru64 Unix 5.1B \/ Alpha, Compaq C++ V7.1-006][24][20.3209821879863739013671875][4.75e-035]]
]
Finally, note that the Lanczos approximation can be written as follows
by removing a factor of exp(g) from the denominator, and then dividing
all the coefficients by exp(g):

[equation lanczos7]

This form is more convenient for calculating lgamma, but for the gamma
function the division by /e/ turns a possibly exact quantity into an
inexact value: this reduces accuracy in the common case that
the input is exact, and so isn't used for the gamma function.
[h4 References]

# [#godfrey]Paul Godfrey, [@http://my.fit.edu/~gabdo/gamma.txt "A note on the computation of the convergent
Lanczos complex Gamma approximation"].
# [#pugh]Glendon Ralph Pugh,
[@http://bh0.physics.ubc.ca/People/matt/Doc/ThesesOthers/Phd/pugh.pdf
"An Analysis of the Lanczos Gamma Approximation"],
PhD Thesis, November 2004.
# Viktor T. Toth,
[@http://www.rskey.org/gamma.htm "Calculators and the Gamma Function"].
# Mathworld, [@http://mathworld.wolfram.com/LanczosApproximation.html
The Lanczos Approximation].

[endsect][/section:lanczos The Lanczos Approximation]
[/
Copyright 2006 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]