
[/ def names all end in distrib to avoid clashes with names of functions]

[def __binomial_distrib [link math_toolkit.dist_ref.dists.binomial_dist Binomial Distribution]]
[def __chi_squared_distrib [link math_toolkit.dist_ref.dists.chi_squared_dist Chi Squared Distribution]]
[def __normal_distrib [link math_toolkit.dist_ref.dists.normal_dist Normal Distribution]]
[def __F_distrib [link math_toolkit.dist_ref.dists.f_dist Fisher F Distribution]]
[def __students_t_distrib [link math_toolkit.dist_ref.dists.students_t_dist Students t Distribution]]
[def __handbook [@http://www.itl.nist.gov/div898/handbook/
NIST/SEMATECH e-Handbook of Statistical Methods.]]

[section:stat_tut Statistical Distributions Tutorial]
This library is centred around statistical distributions. This tutorial
gives you an overview of what they are and how they can be used,
and provides a few worked examples of applying the library to statistical tests.

[section:overview Overview of Statistical Distributions]

[section:headers Headers and Namespaces]
All the code in this library is inside `namespace boost::math`.

In order to use a distribution /my_distribution/ you will need to include
either the specific header `<boost/math/distributions/my_distribution.hpp>` (quicker compiles), or
the "include all the distributions" header: `<boost/math/distributions.hpp>`.

For example, to use the Students-t distribution include either
`<boost/math/distributions/students_t.hpp>` or
`<boost/math/distributions.hpp>`.

You also need to bring distribution names into scope,
perhaps with a `using namespace boost::math;` declaration,
or specific `using` declarations like `using boost::math::normal;` (*recommended*).

[caution Some math function names are also used in `namespace std`, so including `<random>` could cause ambiguity!]
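
For example, a minimal translation unit that uses just the Students-t distribution
might look like the following sketch (the 4 degrees of freedom are purely illustrative):

    #include <boost/math/distributions/students_t.hpp> // Just the Students-t distribution.

    using boost::math::students_t; // Preferred: a specific using declaration.

    int main()
    {
       students_t dist(4); // Construct with 4 degrees of freedom.
       return 0;
    }
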
[endsect] [/ section:headers Headers and Namespaces]

[section:objects Distributions are Objects]

Each kind of distribution in this library is a class type - an object, with member functions.
[tip If you are familiar with statistics libraries using functions,
and 'Distributions as Objects' seems alien, see
[link math_toolkit.stat_tut.weg.nag_library the comparison to
other statistics libraries.]
] [/tip]
[link policy Policies] provide optional fine-grained control
of the behaviour of these classes, allowing the user to customise
behaviour such as how errors are handled, or how the quantiles
of discrete distributions behave.

Making distributions class types does two things:

* It encapsulates the kind of distribution in the C++ type system;
so, for example, Students-t distributions are always a different C++ type from
Chi-Squared distributions.
* The distribution objects store any parameters associated with the
distribution: for example, the Students-t distribution has a
['degrees of freedom] parameter that controls the shape of the distribution.
This ['degrees of freedom] parameter has to be provided
to the Students-t object when it is constructed.

Although the distribution classes in this library are templates, there
are typedefs on type /double/ that mostly take the usual name of the
distribution
(except where there is a clash with a function of the same name: beta and gamma,
in which case using the default template arguments - `RealType = double` -
is nearly as convenient).
Probably 95% of uses are covered by these typedefs:

    // using namespace boost::math; // Avoid potential ambiguity with names in std <random>
    // Safer to declare specific functions with using statement(s):
    using boost::math::beta_distribution;
    using boost::math::binomial_distribution;
    using boost::math::students_t;

    // Construct a students_t distribution with 4 degrees of freedom:
    students_t d1(4);

    // Construct a double-precision beta distribution
    // with parameters a = 10, b = 20
    beta_distribution<> d2(10, 20); // Note: _distribution<> suffix !

If you need to use the distributions with a type other than `double`,
then you can instantiate the template directly: the names of the
templates are the same as the `double` typedef but with `_distribution`
appended, for example __students_t_distrib or __binomial_distrib:

    // Construct a students_t distribution, of float type,
    // with 4 degrees of freedom:
    students_t_distribution<float> d3(4);

    // Construct a binomial distribution, of long double type,
    // with probability of success 0.3
    // and 20 trials in total:
    binomial_distribution<long double> d4(20, 0.3);

The parameters passed to the distributions can be accessed via getter member
functions:

    d1.degrees_of_freedom(); // returns 4.0

This is all well and good, but not very useful so far. What we often want
is to be able to calculate the /cumulative distribution functions/ and
/quantiles/ etc for these distributions.

[endsect] [/ section:objects Distributions are Objects]

[section:generic Generic operations common to all distributions are non-member functions]

Want to calculate the PDF (Probability Density Function) of a distribution?
No problem, just use:

    pdf(my_dist, x); // Returns PDF (density) at point x of distribution my_dist.

Or how about the CDF (Cumulative Distribution Function):

    cdf(my_dist, x); // Returns CDF (integral from -infinity to point x)
                     // of distribution my_dist.

And quantiles are just the same:

    quantile(my_dist, p); // Returns the value of the random variable x
                          // such that cdf(my_dist, x) == p.
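
For example, putting these three operations together for a concrete distribution
(a small sketch only; the 5 degrees of freedom and the evaluation points are chosen
purely for illustration):

    using boost::math::students_t;

    students_t dist(5);               // 5 degrees of freedom.
    double d = pdf(dist, 1.0);        // Density at t = 1.
    double p = cdf(dist, 1.0);        // Probability that t <= 1.
    double t = quantile(dist, 0.975); // t such that cdf(dist, t) == 0.975.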

If you're wondering why these aren't member functions, it's to
make the library more easily extensible: if you want to add additional
generic operations - let's say the /n'th moment/ - then all you have to
do is add the appropriate non-member functions, overloaded for each
implemented distribution type.

[tip
[*Random numbers that approximate Quantiles of Distributions]

If you want random numbers that are distributed in a specific way,
for example drawn from a uniform, normal or triangular distribution,
see [@http://www.boost.org/libs/random/ Boost.Random].

Whilst in principle there's nothing to prevent you from using the
quantile function to convert a uniformly distributed random
number to another distribution (sketched just after this tip),
in practice there are much more
efficient algorithms available that are specific to random number generation.
] [/tip Random numbers that approximate Quantiles of Distributions]
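
In case it is not obvious what that quantile-based conversion would look like, here is
an outline sketch only (the generator, the seed and the choice of a standard normal
distribution are arbitrary illustrations, not a recommendation over Boost.Random):

    #include <random>
    #include <boost/math/distributions/normal.hpp>

    double normal_variate(std::mt19937& gen)
    {  // Inverse-transform sampling: map a uniform variate through the quantile.
       std::uniform_real_distribution<double> u01(0., 1.);
       boost::math::normal norm;    // Standard normal distribution.
       double u = u01(gen);
       while (u <= 0. || u >= 1.)   // quantile is infinite at 0 and 1, so guard the endpoints.
          u = u01(gen);
       return quantile(norm, u);    // Approximately normally distributed.
    }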

For example, the binomial distribution has two parameters:
n (the number of trials) and p (the probability of success on any one trial).

The `binomial_distribution` constructor therefore has two parameters:

`binomial_distribution(RealType n, RealType p);`

For this distribution the __random_variate is k: the number of successes observed.
The probability density\/mass function (pdf) is therefore written as ['f(k; n, p)].

[note
[*Random Variates and Distribution Parameters]

The concept of a __random_variable is closely linked to the term __random_variate:
a random variate is a particular value (outcome) of a random variable.
Random variates and [@http://en.wikipedia.org/wiki/Parameter distribution parameters]
are conventionally distinguished (for example in Wikipedia and Wolfram MathWorld)
by placing a semi-colon (or vertical bar)
/after/ the __random_variable (whose value you 'choose'),
to separate the variate from the parameter(s) that define the shape of the distribution.

For example, the binomial distribution probability distribution function (PDF) is written as
[role serif_italic ['f(k| n, p)] = Pr(K = k|n, p) = ] probability of observing k successes out of n trials.
K is the __random_variable, k is the __random_variate,
the parameters are n (trials) and p (probability).
] [/note Random Variates and Distribution Parameters]

[note By convention, a __random_variate is lower case, usually k if integral and x if real, and
a __random_variable is upper case, K if integral and X if real. But this implementation treats
all as floating-point values `RealType`, so if you really want an integral result,
you must round: see the note on Discrete Probability Distributions below for details.]

As noted above, the non-member function `pdf` has one parameter for the distribution object,
and a second for the random variate. So taking our binomial distribution
example, we would write:

`pdf(binomial_distribution<RealType>(n, p), k);`
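
Concretely, with (purely illustrative) values of n = 20 trials and success fraction
p = 0.3, the probability of observing exactly k = 6 successes might be computed as:

    using boost::math::binomial_distribution;

    binomial_distribution<> b(20, 0.3); // n = 20 trials, success fraction p = 0.3.
    double prob = pdf(b, 6);            // Probability of observing exactly k = 6 successes.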

The ranges of __random_variate values that are permitted and are supported can be
tested by using the two functions `range` and `support`, as sketched below.

The distribution (effectively the __random_variate) is said to be 'supported'
over a range that is
[@http://en.wikipedia.org/wiki/Probability_distribution
"the smallest closed set whose complement has probability zero"].
MathWorld uses the word 'defined' for this range.
Non-mathematicians might say it means the 'interesting' smallest range
of random variate x that has the cdf going from zero to unity.
Outside are uninteresting zones where the pdf is zero, and the cdf zero or unity.

For most distributions, with probability distribution functions one might describe
as 'well-behaved', we have decided that it is most useful for the supported range
to *exclude* random variate values like exact zero *if the end point is discontinuous*.
For example, the Weibull (scale 1, shape 1) distribution smoothly heads for unity
as the random variate x declines towards zero.
But at x = zero, the value of the pdf is suddenly exactly zero, by definition.
If you are plotting the PDF, or otherwise calculating,
zero is not the most useful value for the lower limit of supported, as we discovered.
So for this, and similar distributions,
we have decided it is most numerically useful to use
the closest value to zero, `min_value`, for the lower limit of the supported range.
(The `range` remains from zero, so you will still get `pdf(weibull, 0) == 0`).
(Exponential and gamma distributions have similarly discontinuous functions).

Mathematically, the functions may make sense with an (+ or -) infinite value,
but except for a few special cases (in the Normal and Cauchy distributions)
this implementation limits random variates to finite values, between the `min`
and `max` for the `RealType`.
(See [link math_toolkit.sf_implementation.handling_of_floating_point_infin
Handling of Floating-Point Infinity] for the rationale).
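
For example, a short sketch of querying these two properties for the Weibull
distribution discussed above (shape and scale of 1, as in the example in the text):

    #include <boost/math/distributions/weibull.hpp>
    #include <utility>

    using boost::math::weibull;

    weibull w(1., 1.);                        // Shape 1, scale 1.
    std::pair<double, double> r = range(w);   // Permitted random-variate values.
    std::pair<double, double> s = support(w); // The supported range described above.
    // r.first is zero, while s.first is the smallest representable positive value,
    // as discussed in the text above.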

[note
[*Discrete Probability Distributions]

Note that the [@http://en.wikipedia.org/wiki/Discrete_probability_distribution
discrete distributions], including the binomial, negative binomial, Poisson & Bernoulli,
are all mathematically defined as discrete functions:
that is to say the functions `cdf` and `pdf` are only defined for integral values
of the random variate.

However, because the method of calculation often uses continuous functions,
it is convenient to treat them as if they were continuous functions,
and permit non-integral values of their parameters.

Users wanting to enforce a strict mathematical model may use `floor`
or `ceil` functions on the random variate prior to calling the distribution
function, as sketched in the example after this note.

The quantile functions for these distributions are hard to specify
in a manner that will satisfy everyone all of the time. The default
behaviour is to return an integer result, that has been rounded
/outwards/: that is to say, lower quantiles - where the probability
is less than 0.5 - are rounded down, while upper quantiles - where
the probability is greater than 0.5 - are rounded up. This behaviour
ensures that if an X% quantile is requested, then /at least/ the requested
coverage will be present in the central region, and /no more than/
the requested coverage will be present in the tails.

This behaviour can be changed so that the quantile functions are rounded
differently, or return a real-valued result, using
[link math_toolkit.pol_overview Policies]. It is strongly
recommended that you read the tutorial
[link math_toolkit.pol_tutorial.understand_dis_quant
Understanding Quantiles of Discrete Distributions] before
using the quantile function on a discrete distribution. The
[link math_toolkit.pol_ref.discrete_quant_ref reference docs]
describe how to change the rounding policy
for these distributions.

For similar reasons, continuous distributions with parameters like
"degrees of freedom"
that might appear to be integral are treated as real values
(and are promoted from integer to floating-point if necessary).
In this case, however, there are a small number of situations where non-integral
degrees of freedom do have a genuine meaning.
]
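
For instance, here is a short sketch of the points made in the note above
(the binomial parameters, the observation and the probabilities are purely illustrative):

    #include <boost/math/distributions/binomial.hpp>
    #include <cmath>

    using boost::math::binomial;

    binomial b(20, 0.3);               // 20 trials, success fraction 0.3.

    double k = 5.7;                    // A non-integral "observation".
    double p1 = cdf(b, k);             // Permitted: treated as a continuous function.
    double p2 = cdf(b, std::floor(k)); // Strict discrete interpretation: cdf at k = 5.

    // By default, quantiles of discrete distributions are integers rounded outwards:
    double lo = quantile(b, 0.05);             // Lower quantile, rounded down.
    double hi = quantile(complement(b, 0.05)); // Upper quantile, rounded up.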

[endsect] [/ section:generic Generic operations common to all distributions are non-member functions]

[section:complements Complements are supported too - and when to use them]

Often you don't want the value of the CDF, but its complement, which is
to say `1-p` rather than `p`. It is tempting to calculate the CDF and subtract
it from `1`, but if `p` is very close to `1` then cancellation error
will cause you to lose accuracy, perhaps totally.

[link why_complements See below ['"Why and when to use complements?"]]

In this library, whenever you want to receive a complement, just wrap
all the function arguments in a call to `complement(...)`, for example:

    students_t dist(5);
    cout << "CDF at t = 1 is " << cdf(dist, 1.0) << endl;
    cout << "Complement of CDF at t = 1 is " << cdf(complement(dist, 1.0)) << endl;

But wait, now that we have a complement, we have to be able to use it as well.
Any function that accepts a probability as an argument can also accept a complement
by wrapping all of its arguments in a call to `complement(...)`, for example:

    students_t dist(5);

    for(double i = 10; i < 1e10; i *= 10)
    {
       // Calculate the quantile for a 1 in i chance:
       double t = quantile(complement(dist, 1/i));
       // Print it out:
       cout << "Quantile of students-t with 5 degrees of freedom\n"
               "for a 1 in " << i << " chance is " << t << endl;
    }

[tip
[*Critical values are just quantiles]

Some texts talk about quantiles, or percentiles or fractiles,
others about critical values; the basic rule is:

['Lower critical values] are the same as the quantile.

['Upper critical values] are the same as the quantile from the complement
of the probability.

For example, suppose we have a Bernoulli process, giving rise to a binomial
distribution with success ratio 0.1 and 100 trials in total. The
['lower critical value] for a probability of 0.05 is given by:

`quantile(binomial(100, 0.1), 0.05)`

and the ['upper critical value] is given by:

`quantile(complement(binomial(100, 0.1), 0.05))`

which return 4.82 and 14.63 respectively (see also the sketch after this tip).
]
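
The same critical-value pattern applies to any distribution. As a purely
illustrative sketch with a continuous distribution (Students-t with 5 degrees
of freedom, so no discrete quantile rounding is involved):

    using boost::math::students_t;

    students_t dist(5);
    double alpha = 0.05;                              // Significance level (illustrative).
    double lower = quantile(dist, alpha);             // Lower critical value.
    double upper = quantile(complement(dist, alpha)); // Upper critical value.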

[#why_complements]
[tip
[*Why bother with complements anyway?]

It's very tempting to dispense with complements, and simply subtract
the probability from 1 when required. However, consider what happens when
the probability is very close to 1: let's say the probability expressed at
float precision is `0.999999940f`, then `1 - 0.999999940f = 5.96046448e-008`,
but the result is actually accurate to just ['one single bit]: the only
bit that didn't cancel out!

Or to look at this another way: consider that we want the risk of falsely
rejecting the null-hypothesis in the Student's t test to be 1 in 1 billion,
for a sample size of 10,000.
This gives a probability of 1 - 10[super -9], which is exactly 1 when
calculated at float precision. In this case calculating the quantile from
the complement neatly solves the problem, so for example:

`quantile(complement(students_t(10000), 1e-9))`

returns the expected t-statistic `6.00336`, whereas:

`quantile(students_t(10000), 1-1e-9f)`

raises an overflow error, since it is the same as:

`quantile(students_t(10000), 1)`

which has no finite result.

With all distributions, even for more reasonable probabilities,
the loss of accuracy quickly becomes significant if you simply calculate the probability from 1 - p
(because the result will be mostly garbage digits for p ~ 1),
unless the value of p can be represented exactly in the floating-point type.

So always avoid, for example, using a probability near to unity like 0.99999:

`quantile(my_distribution, 0.99999)`

and instead use

`quantile(complement(my_distribution, 0.00001))`

since 1 - 0.99999 is not exactly equal to 0.00001 when using floating-point arithmetic.
This assumes that the 0.00001 value is either a constant,
or can be computed by some means other than subtracting 0.99999 from 1.
] [/ tip Why bother with complements anyway?]
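
As a small, purely illustrative demonstration of the points in the tip above
(the comments repeat the values quoted in the text, and the error-handling path
assumes the default error-handling policy, which throws):

    #include <boost/math/distributions/students_t.hpp>
    #include <iostream>
    #include <stdexcept>

    using boost::math::students_t;

    int main()
    {
       float p = 0.999999940f;
       std::cout << 1 - p << std::endl; // ~5.96e-008, but accurate to only one bit.

       students_t dist(10000);
       // Passing the small tail probability directly avoids forming 1 - 1e-9 at all:
       std::cout << quantile(complement(dist, 1e-9)) << std::endl; // ~6.00336

       try
       {  // 1 - 1e-9f rounds to exactly 1 at float precision,
          // and the quantile at probability 1 has no finite result:
          std::cout << quantile(dist, 1 - 1e-9f) << std::endl;
       }
       catch(const std::overflow_error& e)
       {
          std::cout << "Error: " << e.what() << std::endl;
       }
    }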

[endsect] [/ section:complements Complements are supported too - and when to use them]

[section:parameters Parameters can be calculated]

Sometimes it's the parameters that define the distribution that you
need to find. Suppose, for example, you have conducted a Students-t test
for equal means and the result is borderline. Maybe your two samples
differ from each other, or maybe they don't; based on the result
of the test you can't be sure. A legitimate question to ask then is
"How many more measurements would I have to take before I would get
an X% probability that the difference is real?" Parameter finders
can answer questions like this, and are necessarily different for
each distribution. They are implemented as static member functions
of the distributions, for example:

    students_t::find_degrees_of_freedom(
       1.3,    // difference from true mean to detect
       0.05,   // maximum risk of falsely rejecting the null-hypothesis.
       0.1,    // maximum risk of falsely failing to reject the null-hypothesis.
       0.13);  // sample standard deviation

Returns the number of degrees of freedom required to obtain a 95%
probability that the observed difference in means is not down to
chance alone. In the case that a borderline Students-t test result
was previously obtained, this can be used to estimate how large the sample size
would have to become before the observed difference was considered
significant. It assumes, of course, that the sample mean and standard
deviation are invariant with sample size.
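
For completeness, here is how the call above might be wrapped into a small program
(a sketch only; rounding the returned real-valued degrees of freedom up to a whole
number is the only addition):

    #include <boost/math/distributions/students_t.hpp>
    #include <cmath>
    #include <iostream>

    using boost::math::students_t;

    int main()
    {
       double df = students_t::find_degrees_of_freedom(
          1.3,    // difference from true mean to detect
          0.05,   // maximum risk of falsely rejecting the null-hypothesis.
          0.1,    // maximum risk of falsely failing to reject the null-hypothesis.
          0.13);  // sample standard deviation

       // The result is real-valued; round up to the next whole number of degrees of freedom:
       std::cout << "Degrees of freedom required: " << std::ceil(df) << std::endl;
    }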

[endsect] [/ section:parameters Parameters can be calculated]

[section:summary Summary]

* Distributions are objects, which are constructed from whatever
parameters the distribution may have.
* Member functions allow you to retrieve the parameters of a distribution.
* Generic non-member functions provide access to the properties that
are common to all the distributions (PDF, CDF, quantile etc).
* Complements of probabilities are calculated by wrapping the function's
arguments in a call to `complement(...)`.
* Functions that accept a probability can accept a complement of the
probability as well, by wrapping the function's
arguments in a call to `complement(...)`.
* Static member functions allow the parameters of a distribution
to be found from other information.

Now that you have the basics, the next section looks at some worked examples.

[endsect] [/ section:summary Summary]

[endsect] [/ section:overview Overview of Statistical Distributions]

[section:weg Worked Examples]

[include distribution_construction.qbk]
[include students_t_examples.qbk]
[include chi_squared_examples.qbk]
[include f_dist_example.qbk]
[include binomial_example.qbk]
[include geometric_example.qbk]
[include negative_binomial_example.qbk]
[include normal_example.qbk]
[/include inverse_gamma_example.qbk]
[/include inverse_gaussian_example.qbk]
[include inverse_chi_squared_eg.qbk]
[include nc_chi_squared_example.qbk]
[include error_handling_example.qbk]
[include find_location_and_scale.qbk]
[include nag_library.qbk]
[include c_sharp.qbk]

[endsect] [/ section:weg Worked Examples]

[include background.qbk]

[endsect] [/ section:stat_tut Statistical Distributions Tutorial]

[/ dist_tutorial.qbk
  Copyright 2006, 2010, 2011 John Maddock and Paul A. Bristow.
  Distributed under the Boost Software License, Version 1.0.
  (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt).
]