Bayesian statistics

Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after a large number of trials.[1]

Bayesian statistical methods use Bayes' theorem to compute and update probabilities after obtaining new data. Bayes' theorem describes the conditional probability of an event based on data as well as prior information or beliefs about the event or conditions related to the event. For example, in Bayesian inference, Bayes' theorem can be used to estimate the parameters of a probability distribution or statistical model. Since Bayesian statistics treats probability as a degree of belief, Bayes' theorem can directly assign a probability distribution that quantifies the belief to the parameter or set of parameters.[1]

Bayesian statistics was named after Thomas Bayes, who formulated a specific case of Bayes' theorem in his paper published in 1763. In several papers spanning from the late 1700s to the early 1800s, Pierre-Simon Laplace developed the Bayesian interpretation of probability and used methods that would now be considered Bayesian to solve a number of statistical problems. Many Bayesian methods were developed by later authors, but the term was not commonly used to describe such methods until the 1950s. During much of the 20th century, Bayesian methods were viewed unfavorably by many statisticians due to philosophical and practical considerations: many Bayesian methods required a great deal of computation to complete. However, with the advent of powerful computers and new algorithms like Markov chain Monte Carlo, Bayesian methods have seen increasing use within statistics in the 21st century and are now frequently applied.[1][2]

Bayes' theorem

Bayes' theorem is a fundamental theorem in Bayesian statistics, as it is used by Bayesian methods to update probabilities, which are degrees of belief, after obtaining new data. Given two events A and B, the conditional probability of A given that B is true is expressed as follows:[3]

    P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

where P(B) ≠ 0. Although Bayes' theorem is a fundamental result of probability theory, it has a specific interpretation in Bayesian statistics. In the above equation, A usually represents a proposition (such as the statement that a coin lands on heads fifty percent of the time) and B represents the evidence, or new data that is to be taken into account (such as the result of a series of coin flips). P(A) is the prior probability of A, which expresses one's beliefs about A before evidence is taken into account. The prior probability may also quantify prior knowledge or information about A. P(B | A) is the likelihood function, which can be interpreted as the probability of the evidence B given that A is true. The likelihood quantifies the extent to which the evidence B supports the proposition A. P(A | B) is the posterior probability, the probability of the proposition A after taking the evidence B into account. Essentially, Bayes' theorem updates one's prior beliefs P(A) after considering the new evidence B.[1]
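
As a minimal numeric sketch of this update (the coin, the hypotheses, and all numbers below are illustrative assumptions, not taken from a source), consider a coin that is either fair or biased toward heads, with the evidence B being a single observed head; the denominator P(B) is obtained by summing over the two hypotheses, anticipating the law of total probability discussed next:

    # Minimal numeric sketch of a discrete Bayesian update.
    # Proposition A: "the coin is fair"; alternative: "the coin is biased,
    # landing heads 80% of the time". Evidence B: one observed head.
    prior_fair = 0.5        # P(A): belief before seeing the evidence
    prior_biased = 0.5      # P(not A)

    lik_fair = 0.5          # P(B | A): chance of heads if the coin is fair
    lik_biased = 0.8        # P(B | not A)

    # P(B) via the law of total probability over the partition {A, not A}
    evidence = lik_fair * prior_fair + lik_biased * prior_biased

    posterior_fair = lik_fair * prior_fair / evidence   # P(A | B)
    print(posterior_fair)   # ~0.385: one head shifts belief toward "biased"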

The probability of the evidence P(B) can be calculated using the law of total probability. If {A_1, ..., A_n} is a partition of the sample space, which is the set of all outcomes of an experiment, then,[1][3]

    P(B) = P(B \mid A_1)\, P(A_1) + \cdots + P(B \mid A_n)\, P(A_n) = \sum_{i} P(B \mid A_i)\, P(A_i)

When there are an infinite number of outcomes, it is necessary to integrate over all outcomes to calculate P(B) using the law of total probability. Often, P(B) is difficult to calculate as the calculation would involve intractable sums or integrals, so often only the product of the prior and likelihood is considered, since the evidence does not change in the same analysis. The posterior is proportional to this product:[1]

    P(A \mid B) \propto P(B \mid A)\, P(A)
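
For a continuous parameter, the same calculation can be carried out numerically. The sketch below (assuming, for illustration, a coin with unknown heads probability theta, a uniform prior, and 7 heads observed in 10 flips) evaluates the unnormalized product of likelihood and prior on a grid and divides by a numerical integral standing in for P(B):

    import numpy as np

    # Grid approximation of a posterior over a coin's heads probability theta.
    # Assumed data: 7 heads in 10 flips; uniform prior on [0, 1].
    theta = np.linspace(0.0, 1.0, 1001)
    prior = np.ones_like(theta)                 # uniform prior density
    likelihood = theta**7 * (1 - theta)**3      # binomial likelihood, up to a constant

    unnormalized = likelihood * prior           # the posterior is proportional to this
    dtheta = theta[1] - theta[0]
    posterior = unnormalized / (unnormalized.sum() * dtheta)  # numerical stand-in for P(B)

    print(theta[np.argmax(posterior)])          # posterior mode near 0.7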

The maximum a posteriori, which is the mode of the posterior and is often computed in Bayesian statistics using mathematical optimization methods, remains the same. The posterior can be approximated even without computing the exact value of P(B) with methods such as Markov chain Monte Carlo or variational Bayesian methods.[1]
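
As a sketch of the sampling idea, the following random-walk Metropolis algorithm, one of the simplest Markov chain Monte Carlo methods, approximates the same illustrative coin posterior; only the product of prior and likelihood appears, so P(B) is never evaluated (the model and numbers are again assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def log_prior_times_likelihood(theta, heads=7, flips=10):
        """Log of prior * likelihood for a coin's heads probability theta.
        Uniform prior on [0, 1], binomial likelihood; P(B) is never needed."""
        if not 0.0 < theta < 1.0:
            return -np.inf                      # zero prior density outside [0, 1]
        return heads * np.log(theta) + (flips - heads) * np.log(1 - theta)

    # Random-walk Metropolis: propose a nearby theta and accept it with
    # probability min(1, posterior ratio); the unknown normalizing constant
    # P(B) cancels in the ratio, so it is never computed.
    samples, theta = [], 0.5
    for _ in range(20000):
        proposal = theta + rng.normal(scale=0.1)
        log_ratio = log_prior_times_likelihood(proposal) - log_prior_times_likelihood(theta)
        if np.log(rng.uniform()) < log_ratio:
            theta = proposal
        samples.append(theta)

    draws = np.array(samples[5000:])            # discard burn-in
    print(draws.mean())                         # close to the exact Beta(8, 4) mean, 2/3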

Outline

The general set of statistical techniques can be divided into a number of activities, many of which have special Bayesian versions.

Statistical inference

Bayesian inference is an approach to statistical inference that is distinct from frequentist inference. It is specifically based on the use of Bayesian probability to summarize evidence.

Statistical modeling

The formulation of statistical models using Bayesian statistics has the identifying feature of requiring the specification of prior distributions for any unknown parameters. Indeed, parameters of prior distributions may themselves have prior distributions, leading to Bayesian hierarchical modeling, or may be interrelated, leading to Bayesian networks.
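
A minimal generative sketch of the hierarchical structure (all distributions and numbers are illustrative assumptions): a hyperprior generates a group-level parameter, group-specific parameters are drawn around it, and data are drawn within each group:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hyperprior: the parameter of the groups' prior has a prior of its own.
    mu = rng.normal(loc=0.0, scale=5.0)         # overall mean

    # Prior: each of three groups gets its own mean, centred on mu.
    group_means = rng.normal(loc=mu, scale=1.0, size=3)

    # Likelihood: observations within each group scatter around its mean.
    data = [rng.normal(loc=m, scale=0.5, size=10) for m in group_means]

    print(round(mu, 2), [round(d.mean(), 2) for d in data])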

Design of experiments

The Bayesian design of experiments includes a concept called the 'influence of prior beliefs'. This approach uses sequential analysis techniques to include the outcome of earlier experiments in the design of the next experiment. This is achieved by updating 'beliefs' through the use of prior and posterior distributions. This allows the design of experiments to make good use of resources of all types. An example of this is the multi-armed bandit problem, sketched below.
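
A common concrete instance is Thompson sampling for a Bernoulli multi-armed bandit, sketched here with made-up arm probabilities: each arm keeps a Beta posterior over its success rate, each pull goes to the arm whose posterior draw is highest, and the outcome updates that posterior, so earlier results shape the next experiment:

    import numpy as np

    rng = np.random.default_rng(2)

    true_rates = [0.3, 0.5, 0.7]                # unknown to the algorithm
    successes = np.zeros(3)                     # each arm's posterior is
    failures = np.zeros(3)                      # Beta(1 + successes, 1 + failures)

    for _ in range(1000):
        # Sample a plausible success rate for each arm from its posterior ...
        draws = rng.beta(1 + successes, 1 + failures)
        arm = int(np.argmax(draws))             # ... and pull the most promising arm.
        reward = rng.uniform() < true_rates[arm]
        # The outcome updates that arm's posterior, so earlier results
        # shape the design of the next "experiment".
        successes[arm] += reward
        failures[arm] += 1 - reward

    print(successes + failures)                 # most pulls go to the best arm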

Statistical graphics

Statistical graphics includes methods for data exploration, model validation, and so on. The use of certain modern computational techniques for Bayesian inference, specifically the various types of Markov chain Monte Carlo techniques, has led to the need for checks, often made in graphical form, on the validity of such computations in expressing the required posterior distributions.
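
One standard numerical check underlying such graphics is the Gelman-Rubin statistic (R-hat), which compares between-chain and within-chain variance and should be close to 1 when independent chains agree; the sketch below uses synthetic chains as stand-ins for real MCMC output:

    import numpy as np

    def r_hat(chains):
        """Gelman-Rubin diagnostic for an (m, n) array of m chains of length n."""
        m, n = chains.shape
        w = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
        b = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
        var_est = (n - 1) / n * w + b / n        # pooled variance estimate
        return np.sqrt(var_est / w)

    rng = np.random.default_rng(3)
    good = rng.normal(size=(4, 2000))            # four chains targeting the same distribution
    print(r_hat(good))                           # ~1.0: the chains agree

    bad = good + np.arange(4)[:, None]           # chains stuck in different regions
    print(r_hat(bad))                            # ~1.6: clearly above 1, flags a problem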

References

  1. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis (3rd ed.). Chapman and Hall/CRC. ISBN 978-1-4398-4095-5.
  2. Fienberg, Stephen E. (2006). "When Did Bayesian Inference Become "Bayesian"?". Bayesian Analysis. 1 (1): 1–40.
  3. Grinstead, Charles M.; Snell, J. Laurie (2006). Introduction to Probability (2nd ed.). Providence, RI: American Mathematical Society. ISBN 978-0-8218-9414-9.

Further reading

  • Think Bayes, Allen B. Downey
  • Bayesian Statistics: Why and How
  • Puga JL, Krzywinski M, Altman N (May 2015). "Bayesian Statistics". Points of Significance. Nature Methods. 12 (5): 377–8. doi:10.1038/nmeth.3368. Retrieved 31 May 2016.
  • Eliezer S. Yudkowsky. "An Intuitive Explanation of Bayes' Theorem" (webpage). Retrieved 2015-06-15.
  • Theo Kypraios. "A Gentle Tutorial in Bayesian Statistics" (PDF). Retrieved 2013-11-03.
  • Jordi Vallverdu. "Bayesians Versus Frequentists: A Philosophical Debate on Statistical Reasoning".
  • David Spiegelhalter; Kenneth Rice. "Bayesian statistics". Scholarpedia. 4 (8): 5230. doi:10.4249/scholarpedia.5230.
  • Bayesian modeling book and examples available for downloading.
  • Rens Van De Schoot. "A Gentle Introduction to Bayesian Analysis" (PDF).
  • Bayesian A/B Testing Calculator Dynamic Yield