Inverse-gamma distribution

Inverse-gamma
Plots: probability density function; cumulative distribution function
Parameters: \alpha > 0 shape (real), \beta > 0 scale (real)
Support: x \in (0, \infty)
PDF: \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha-1} \exp\left(-\frac{\beta}{x}\right)
CDF: \frac{\Gamma(\alpha, \beta/x)}{\Gamma(\alpha)}
Mean: \frac{\beta}{\alpha-1} for \alpha > 1
Mode: \frac{\beta}{\alpha+1}
Variance: \frac{\beta^2}{(\alpha-1)^2(\alpha-2)} for \alpha > 2
Skewness: \frac{4\sqrt{\alpha-2}}{\alpha-3} for \alpha > 3
Ex. kurtosis: \frac{6(5\alpha-11)}{(\alpha-3)(\alpha-4)} for \alpha > 4
Entropy: \alpha + \ln(\beta\,\Gamma(\alpha)) - (1+\alpha)\psi(\alpha) (see digamma function)
MGF: Does not exist.
CF: \frac{2(-i\beta t)^{\alpha/2}}{\Gamma(\alpha)} K_\alpha\!\left(\sqrt{-4i\beta t}\right)

In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution. Perhaps the chief use of the inverse gamma distribution is in Bayesian statistics, where the distribution arises as the marginal posterior distribution for the unknown variance of a normal distribution, if an uninformative prior is used, and as an analytically tractable conjugate prior, if an informative prior is required.

However, it is common among Bayesians to consider an alternative parametrization of the normal distribution in terms of the precision, defined as the reciprocal of the variance, which allows the gamma distribution to be used directly as a conjugate prior. Other Bayesians prefer to parametrize the inverse gamma distribution differently, as a scaled inverse chi-squared distribution.

Characterization

Probability density function

The inverse gamma distribution's probability density function is defined over the support x > 0:

f(x; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} \left(\frac{1}{x}\right)^{\alpha+1} \exp\left(-\frac{\beta}{x}\right)

with shape parameter \alpha and scale parameter \beta.[1] Here \Gamma(\cdot) denotes the gamma function.

Unlike the Gamma distribution, which contains a somewhat similar exponential term, here \beta is a scale parameter, since the density satisfies:

f(x; \alpha, \beta) = \frac{1}{\beta}\, f\!\left(\frac{x}{\beta}; \alpha, 1\right)

Cumulative distribution function

The cumulative distribution function is the regularized gamma function

F(x; \alpha, \beta) = \frac{\Gamma(\alpha, \beta/x)}{\Gamma(\alpha)} = Q\!\left(\alpha, \frac{\beta}{x}\right)

where the numerator is the upper incomplete gamma function and the denominator is the gamma function. Many math packages allow direct computation of Q, the regularized gamma function.
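For example, in SciPy the regularized upper incomplete gamma function is available as scipy.special.gammaincc, which makes the identity above directly checkable (parameter values are illustrative):

```python
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a, z)
from scipy.stats import invgamma

# CDF of Inverse-Gamma(alpha, beta) at x equals Q(alpha, beta/x)
def invgamma_cdf(x, alpha, beta):
    return gammaincc(alpha, beta / x)

alpha, beta, x = 3.0, 2.0, 1.5
print(invgamma_cdf(x, alpha, beta))
print(invgamma.cdf(x, a=alpha, scale=beta))  # agrees with the formula above
```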

Characteristic function

The characteristic function is

\varphi(t) = \frac{2(-i\beta t)^{\alpha/2}}{\Gamma(\alpha)} K_\alpha\!\left(\sqrt{-4i\beta t}\right)

where K_\alpha(\cdot) in the expression is the modified Bessel function of the 2nd kind.

Properties

For \alpha > 0 and \beta > 0,

\operatorname{E}[\ln X] = \ln\beta - \psi(\alpha)

and

\operatorname{E}[X^{-s}] = \frac{\Gamma(\alpha+s)}{\Gamma(\alpha)\,\beta^{s}}

The information entropy is

H = \alpha + \ln\!\big(\beta\,\Gamma(\alpha)\big) - (1+\alpha)\,\psi(\alpha)

where \psi(\alpha) is the digamma function.
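These moments and the entropy can be verified against SciPy's frozen distribution, which exposes mean(), var(), and entropy() (the shape/scale values below are an arbitrary example with \alpha > 2 so that both moments exist):

```python
import math
from scipy.special import gammaln, digamma
from scipy.stats import invgamma

alpha, beta = 5.0, 2.0   # alpha > 2 so mean and variance both exist

mean = beta / (alpha - 1)                                    # beta/(alpha-1)
var = beta**2 / ((alpha - 1)**2 * (alpha - 2))               # beta^2/((alpha-1)^2 (alpha-2))
entropy = alpha + math.log(beta) + gammaln(alpha) - (1 + alpha) * digamma(alpha)

dist = invgamma(a=alpha, scale=beta)
print(mean, dist.mean())        # both 0.5
print(var, dist.var())
print(entropy, dist.entropy())
```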

The Kullback–Leibler divergence of Inverse-Gamma(\alpha_p, \beta_p) from Inverse-Gamma(\alpha_q, \beta_q) is the same as the KL-divergence of Gamma(\alpha_p, \beta_p) from Gamma(\alpha_q, \beta_q):

D_{KL}(\alpha_p, \beta_p\,;\, \alpha_q, \beta_q) = \operatorname{E}\!\left[\log\frac{p(x)}{q(x)}\right] = \operatorname{E}\!\left[\log\frac{p(1/y)}{q(1/y)}\right]

where p, q are the pdfs of the Inverse-Gamma distributions, p^*, q^* are the pdfs of the Gamma distributions, and Y is Gamma(\alpha_p, \beta_p) distributed.
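This equality can be checked numerically: below, the standard closed form for the KL divergence between two Gamma distributions (in the shape/rate parametrization) is compared against a direct numerical integration of the KL integral between the two inverse-gamma densities. The parameter values are arbitrary illustrations:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, digamma
from scipy.stats import invgamma

ap, bp = 3.0, 2.0   # example parameters of the "p" distribution
aq, bq = 4.0, 1.0   # example parameters of the "q" distribution

# Closed-form KL divergence of Gamma(ap, rate bp) from Gamma(aq, rate bq)
kl_gamma = ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
            + aq * (np.log(bp) - np.log(bq)) + ap * (bq - bp) / bp)

# Direct numerical KL divergence between the two inverse-gamma densities
integrand = lambda x: invgamma.pdf(x, ap, scale=bp) * (
    invgamma.logpdf(x, ap, scale=bp) - invgamma.logpdf(x, aq, scale=bq))
kl_invgamma, _ = quad(integrand, 0, np.inf)

print(kl_gamma, kl_invgamma)   # the two values agree
```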

Derivation from Gamma distribution

The pdf of the gamma distribution (shape \alpha, rate \beta) is

f_Y(y) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, y^{\alpha-1} e^{-\beta y}, \quad y > 0

Define the transformation x = 1/y; then the pdf of the transformed variable is

f_X(x) = f_Y\!\left(\frac{1}{x}\right)\left|\frac{d}{dx}\!\left(\frac{1}{x}\right)\right| = \frac{\beta^\alpha}{\Gamma(\alpha)}\left(\frac{1}{x}\right)^{\alpha-1} e^{-\beta/x}\,\frac{1}{x^2} = \frac{\beta^\alpha}{\Gamma(\alpha)}\left(\frac{1}{x}\right)^{\alpha+1} e^{-\beta/x}
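The derivation can be illustrated by simulation: reciprocals of Gamma(\alpha, rate \beta) draws should be indistinguishable from Inverse-Gamma(\alpha, scale \beta) draws. A small sketch with illustrative parameter values (note SciPy's gamma takes scale = 1/rate):

```python
import numpy as np
from scipy import stats

alpha, beta = 3.0, 2.0   # example shape and (rate/scale) parameter
rng = np.random.default_rng(42)

# Sample Y ~ Gamma(alpha, rate beta); SciPy's gamma is parametrized by scale = 1/rate
y = stats.gamma.rvs(a=alpha, scale=1.0 / beta, size=100_000, random_state=rng)

# X = 1/Y should follow Inverse-Gamma(alpha, scale beta)
x = 1.0 / y
print(x.mean(), beta / (alpha - 1))   # sample mean vs. theoretical mean beta/(alpha-1) = 1

# Kolmogorov-Smirnov test against the inverse-gamma CDF
ks = stats.kstest(x, stats.invgamma(a=alpha, scale=beta).cdf)
print(ks.pvalue)   # large p-value: consistent with the inverse-gamma law
```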

See also

  • Gamma distribution
  • Scaled inverse chi-squared distribution

References

  1. "InverseGammaDistribution—Wolfram Language Documentation". reference.wolfram.com. Retrieved 9 April 2018.
  • V. Witkovsky (2001). "Computing the distribution of a linear combination of inverted gamma variables". Kybernetika 37(1), 79–90.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.