Minimum description length

The minimum description length (MDL) principle is a formalization of Occam's razor in which the best hypothesis (a model and its parameters) for a given set of data is the one that leads to the best compression of the data. MDL was introduced by Jorma Rissanen in 1978.[1] It is an important concept in information theory and computational learning theory.[2][3][4]

Overview

Any set of data can be represented by a string of symbols from a finite (say, binary) alphabet.

[The MDL Principle] is based on the following insight: any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally. (Grünwald, 1998)[5]

MDL is a theory of inductive and statistical inference that starts out with this idea: all statistical learning is about finding regularities in data, and the best hypothesis to describe the regularities in data is also the one that is able to compress the data most. Like other statistical methods, it can be used for learning the parameters of a model using some data. Usually, though, standard statistical methods assume that the general form of a model is fixed. MDL's main strength is that it can also be used for selecting the general form of a model and its parameters. The quantity of interest (sometimes just a model, sometimes just parameters, sometimes both at the same time) is called a hypothesis. The basic idea is then to consider the (lossless) two-stage code that encodes data D by first encoding a hypothesis H in the set of considered hypotheses 𝓗 and then coding D "with the help of" H; in the simplest context this just means "encoding the deviations of the data from the predictions made by H":

    min_{H ∈ 𝓗} [ L(H) + L(D | H) ],

where L(H) is the length, in bits, of the description of H and L(D | H) is the length, in bits, of the description of the data when encoded with the help of H.

The H achieving this minimum is then viewed as the best explanation of data D. As a simple example, take a regression problem: the data D could consist of a sequence of points D = ((x_1, y_1), …, (x_n, y_n)), and the set 𝓗 could be the set of all polynomials from x to y. To describe a polynomial H of degree (say) k, one would first have to discretize the parameters to some precision; one would then have to describe this precision (a natural number); next, one would have to describe the degree k (another natural number); and in the final step, one would have to describe the k + 1 parameters; the total length would be L(H). One would then describe the points in D using some fixed code for the x-values and then using a code for the deviations y_i − H(x_i).
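
The following sketch makes the two-part code for the polynomial example concrete. It is illustrative only: the integer code for the degree and precision, the coefficient discretization, and the fixed-variance Gaussian code for the deviations are simplifying assumptions, not the codes prescribed by any particular MDL reference.

```python
# A minimal sketch of the two-part code for the polynomial regression
# example above. The integer code, the coefficient discretization and the
# fixed-variance Gaussian code for the deviations are illustrative
# assumptions, not the codes prescribed by any particular MDL reference.
import numpy as np

def integer_code_length(n):
    """Rough code length (in bits) for a natural number n >= 0 (assumption)."""
    return 2 * np.log2(n + 2)

def two_part_code_length(x, y, degree, precision_bits=8):
    """L(H) + L(D | H) for a degree-`degree` polynomial hypothesis H."""
    # L(H): describe the precision, the degree, and (degree + 1) coefficients
    # discretized to `precision_bits` fractional bits each.
    L_H = (integer_code_length(precision_bits)
           + integer_code_length(degree)
           + (degree + 1) * precision_bits)

    # Fit the polynomial and quantize its coefficients to the stated precision.
    coeffs = np.polyfit(x, y, degree)
    scale = 2.0 ** precision_bits
    coeffs = np.round(coeffs * scale) / scale

    # L(D | H): encode the deviations y_i - H(x_i) with a fixed-variance
    # Gaussian code, i.e. codelength = -log2 of the Gaussian density.
    residuals = y - np.polyval(coeffs, x)
    sigma = 1.0
    L_D_given_H = np.sum(0.5 * np.log2(2 * np.pi * sigma ** 2)
                         + residuals ** 2 / (2 * sigma ** 2 * np.log(2)))
    return L_H + L_D_given_H

# Pick the degree whose two-part code is shortest.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 3 * x ** 2 - x + rng.normal(0, 1, size=x.size)   # data from a degree-2 polynomial
best_degree = min(range(8), key=lambda k: two_part_code_length(x, y, k))
print("degree selected by the two-part code:", best_degree)
```

With data of this kind, low degrees pay a price in L(D | H) while high degrees pay for extra coefficients in L(H), so the minimization tends to land near the true degree.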

In practice, one often (but not always) uses a probabilistic model. For example, one associates each polynomial H with the corresponding conditional distribution expressing that, given X, Y is normally distributed with mean H(X) and some variance σ², which could either be fixed or added as a free parameter. Then the set of hypotheses 𝓗 reduces to the assumption of a linear model, Y = H(X) + Z, with H a polynomial and Z zero-mean Gaussian noise.

Furthermore, one is often not directly interested in specific parameter values, but just, for example, the degree of the polynomial. In that case, one sets 𝓗 to be 𝓗 = {𝓗_1, 𝓗_2, …}, where each 𝓗_j represents the hypothesis that the data are best described by a j-th degree polynomial. One then codes data D given hypothesis 𝓗_j using a one-part code designed such that, whenever some hypothesis H ∈ 𝓗_j fits the data well, the codelength L(D | 𝓗_j) is short. The design of such codes is called universal coding. There are various types of universal codes one could use, often giving similar lengths for long data sequences but differing for short ones. The 'best' such codes (in the sense that they have a minimax optimality property) are the normalized maximum likelihood (NML) or Shtarkov codes. A quite useful class of codes is the class of Bayesian marginal likelihood codes. For exponential families of distributions, when Jeffreys prior is used and the parameter space is suitably restricted, these asymptotically coincide with the NML codes; this brings MDL theory into close contact with objective Bayes model selection, in which one also sometimes adopts Jeffreys prior, albeit for different reasons.
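
As a rough illustration of these universal codes, the sketch below compares, for the Bernoulli model class, the NML (Shtarkov) codelength with the Bayesian marginal likelihood codelength under Jeffreys prior (a Beta(1/2, 1/2) distribution). The particular sample sizes and counts are arbitrary; the point is only that the two codelengths differ somewhat for short sequences and nearly coincide for long ones.

```python
# Compare two universal one-part codes for the Bernoulli model class:
# the NML (Shtarkov) code and the Bayesian marginal likelihood code with
# Jeffreys prior Beta(1/2, 1/2). The sequence lengths and counts below
# are arbitrary, chosen only for illustration.
from math import comb, exp, lgamma, log, log2

def nml_codelength(n, k):
    """Bits to encode one particular binary sequence of length n containing
    k ones, under the NML code for the Bernoulli model class."""
    def max_lik(j):
        # maximized likelihood of a single sequence with j ones
        if j in (0, n):
            return 1.0
        p = j / n
        return exp(j * log(p) + (n - j) * log(1 - p))
    # Shtarkov normalizer: maximized likelihood summed over all 2^n sequences
    normalizer = sum(comb(n, j) * max_lik(j) for j in range(n + 1))
    return -log2(max_lik(k)) + log2(normalizer)

def jeffreys_codelength(n, k):
    """Bits under the Bayesian marginal likelihood code with Jeffreys prior:
    -log2 [ B(k + 1/2, n - k + 1/2) / B(1/2, 1/2) ]."""
    log_marginal = (lgamma(k + 0.5) + lgamma(n - k + 0.5) - lgamma(n + 1.0)
                    - 2.0 * lgamma(0.5))
    return -log_marginal / log(2)

# For the short sequence the two codelengths differ visibly;
# for the long one they nearly coincide.
for n, k in [(10, 7), (1000, 700)]:
    print(n, k, round(nml_codelength(n, k), 3), round(jeffreys_codelength(n, k), 3))
```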

MDL vs. Solomonoff's Theory of Inductive Inference

To select the hypothesis that captures the most regularity in the data, scientists look for the hypothesis with which the best compression can be achieved. In order to do this, a code is fixed to compress the data. Arguably, the most general code one could use is a (Turing-complete) computer language. A program to output the data is written in that language; thus the program effectively represents the data. The length of the shortest program that outputs the data is called the Kolmogorov complexity of the data. This is the central idea of Ray Solomonoff's idealized theory of inductive inference, which has been a source of inspiration for MDL.

Inference

However, this mathematical theory does not provide a practical way of reaching an inference. The most important reasons for this are:

  • Kolmogorov complexity is uncomputable: there exists no algorithm that, when input an arbitrary sequence of data, outputs the shortest program that produces the data.
  • Kolmogorov complexity depends on what computer language is used. This is an arbitrary choice, but it does influence the complexity up to some constant additive term. For that reason, constant terms tend to be disregarded in Kolmogorov complexity theory. In practice, however, where often only a small amount of data is available, such constants may have a very large influence on the inference results: good results cannot be guaranteed when one is working with limited data.

MDL attempts to remedy these by:

  • Restricting the set of allowed codes in such a way that it becomes possible (computable) to find the shortest codelength of the data, relative to the allowed codes.
  • Choosing codes that are reasonably efficient, whatever the data at hand. The idea of "reasonable efficiency" is encapsulated by the idea of a "universal code".

One of the important properties of MDL methods is that they provide a natural safeguard against overfitting, because they implement a tradeoff between the complexity of the hypothesis (model class) and the complexity of the data given the hypothesis.[3] An illustration is given in the following example.

Example of MDL

A coin is flipped 1000 times, and the numbers of heads and tails are recorded. Consider two model classes:

  • The first is a code that represents outcomes with a 0 for heads or a 1 for tails. This code represents the hypothesis that the coin is fair. The code length according to this code is always exactly 1000 bits.
  • The second consists of all codes that are efficient for a coin with some specific bias, representing the hypothesis that the coin is not fair. Say that we observe 510 heads and 490 tails. Then the code length according to the best code in the second model class is shorter than 1000 bits.

For this reason, a naive statistical method might choose the second model as a better explanation for the data. However, an MDL approach would construct a single code for the entire second model class, instead of just using the code of its best element. This code could be the normalized maximum likelihood code or a Bayesian code. If such a code is used, then the total codelength based on the second model class would be larger than 1000 bits. Therefore, the conclusion when following an MDL approach is inevitably that there is not enough evidence to support the hypothesis of the biased coin, even though the best element of the second model class provides a better fit to the data.
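
As a numeric check of this example, the sketch below computes the three codelengths involved: the fair-coin code, the best individual code in the biased-coin class, and a single code for the whole class (here, the NML code, one of the options mentioned above). The helpers are illustrative assumptions, not a prescribed implementation.

```python
# Codelengths for the coin example: fair-coin code, best biased-coin code,
# and the NML (Shtarkov) code for the whole biased-coin model class.
from math import comb, exp, log, log2

n, k = 1000, 510          # 1000 flips, 510 heads observed

# Model class 1: the fair-coin code always costs exactly n bits.
fair_bits = float(n)

# Best individual code in model class 2: the code matched to the observed
# frequency k/n, i.e. n times the binary entropy of k/n.
p = k / n
best_biased_bits = -(k * log2(p) + (n - k) * log2(1 - p))

# NML code for the whole biased-coin class: maximized likelihood penalized
# by the Shtarkov normalizer (the parametric complexity of the class).
def max_lik(j):
    if j in (0, n):
        return 1.0
    q = j / n
    return exp(j * log(q) + (n - j) * log(1 - q))

complexity_bits = log2(sum(comb(n, j) * max_lik(j) for j in range(n + 1)))
nml_bits = best_biased_bits + complexity_bits

print(f"fair-coin code:        {fair_bits:.2f} bits")
print(f"best biased-coin code: {best_biased_bits:.2f} bits")   # slightly below 1000
print(f"NML code for class 2:  {nml_bits:.2f} bits")           # above 1000
```

The best element of the biased-coin class beats 1000 bits by a fraction of a bit, but the parametric complexity of the class pushes the single-code total above 1000 bits, matching the conclusion stated above.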

MDL notation

Central to MDL theory is the one-to-one correspondence between code length functions and probability distributions (this follows from the Kraft–McMillan inequality). For any probability distribution P, it is possible to construct a code C such that the length (in bits) of C(x) is equal to -log_2(P(x)); this code minimizes the expected code length. Conversely, given a code C, one can construct a probability distribution P such that the same holds. (Rounding issues are ignored here.) In other words, searching for an efficient code is equivalent to searching for a good probability distribution.
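
A minimal sketch of this correspondence, using an arbitrary toy distribution: codelengths are set to -log_2 of the probabilities, the Kraft inequality confirms that a prefix code with the (rounded-up) lengths exists, and, conversely, any set of codelengths defines a (sub-)probability distribution.

```python
# Codelength <-> probability correspondence on a toy four-symbol alphabet.
from math import ceil, log2

P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # illustrative distribution

ideal_lengths = {x: -log2(p) for x, p in P.items()}          # L(x) = -log2 P(x)
integer_lengths = {x: ceil(l) for x, l in ideal_lengths.items()}

# Kraft inequality: if the sum of 2^(-L(x)) is at most 1, a prefix code with
# these lengths exists (it can be built with Shannon or Huffman coding).
kraft_sum = sum(2.0 ** -l for l in integer_lengths.values())
print(ideal_lengths)          # {'a': 1.0, 'b': 2.0, 'c': 3.0, 'd': 3.0}
print(kraft_sum <= 1.0)       # True

# Conversely, any codelength function L defines a (sub-)probability Q(x) = 2^(-L(x)).
Q = {x: 2.0 ** -l for x, l in integer_lengths.items()}
print(Q)
```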

MDL is very strongly connected to probability theory and statistics through the correspondence between codes and probability distributions mentioned above. This has led some researchers to view MDL as equivalent to Bayesian inference: the code length of the model and the code length of the data given the model correspond, respectively, to the prior probability and the marginal likelihood in the Bayesian framework.[6]

While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov normalized maximum likelihood code, which plays a central role in current MDL theory but has no equivalent in Bayesian inference. Furthermore, Rissanen stresses that we should make no assumptions about the true data-generating process: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense.[7][8] In the last-mentioned reference, Rissanen bases the mathematical underpinning of MDL on the Kolmogorov structure function.

According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe priors that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called objective Bayesian analysis; there, however, the motivation is usually different.[9]

Other systems

MDL was not the first information-theoretic approach to learning; as early as 1968 Wallace and Boulton pioneered a related concept called minimum message length (MML). The difference between MDL and MML is a source of ongoing confusion. Superficially, the methods appear mostly equivalent, but there are some significant differences, especially in interpretation:

  • MML is a fully subjective Bayesian approach: it starts from the idea that one represents one's beliefs about the data-generating process in the form of a prior distribution. MDL avoids assumptions about the data-generating process.
  • Both methods make use of two-part codes: the first part always represents the information that one is trying to learn, such as the index of a model class (model selection) or parameter values (parameter estimation); the second part is an encoding of the data given the information in the first part. The difference between the methods is that, in the MDL literature, it is advocated that unwanted parameters should be moved to the second part of the code, where they can be represented with the data by using a so-called one-part code, which is often more efficient than a two-part code. In the original description of MML, all parameters are encoded in the first part, so all parameters are learned.
  • Within the MML framework, each parameter is stated to exactly the precision that results in the optimal overall message length: the preceding example might arise if some parameter was originally considered "possibly useful" to a model but was subsequently found to be unable to help explain the data (such a parameter will be assigned a code length corresponding to the (Bayesian) prior probability that the parameter would be found to be unhelpful). In the MDL framework, the focus is more on comparing model classes than models, and it is more natural to approach the same question by comparing the class of models that explicitly include such a parameter against some other class that doesn't. The difference lies in the machinery applied to reach the same conclusion.

References

  1. Rissanen, J. (1978). "Modeling by shortest data description". Automatica. 14 (5): 465–471. doi:10.1016/0005-1098(78)90005-5.
  2. "Minimum Description Length". University of Helsinki. Archived from the original on February 18, 2010. Retrieved 2010-07-03.
  3. Grünwald, P. (June 2007). "The Minimum Description Length Principle". MIT Press. Archived from the original on 2010-06-16. Retrieved 2010-07-03.
  4. Grünwald, P. (April 2005). "Advances in Minimum Description Length: Theory and Applications". MIT Press. Retrieved 2010-07-03.
  5. Grünwald, Peter. "MDL Tutorial" (PDF). Retrieved 2010-07-03.
  6. MacKay, David (2003). "Information Theory, Inference, and Learning Algorithms". Cambridge University Press. Retrieved 2010-07-03.
  7. Rissanen, Jorma. "Homepage of Jorma Rissanen". Retrieved 2010-07-03.
  8. Rissanen, J. (2007). "Information and Complexity in Statistical Modeling". Springer. Retrieved 2010-07-03.
  9. Nannen, Volker. "A short introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length" (PDF). Retrieved 2010-07-03.

Further reading

  • Minimum Description Length on the Web, by the University of Helsinki. Features readings, demonstrations, events and links to MDL researchers.
  • Homepage of Jorma Rissanen, containing lecture notes and other recent material on MDL.
  • Advances in Minimum Description Length, MIT Press, ISBN 0-262-07262-9.