Hindley–Milner type system

A Hindley–Milner (HM) type system is a classical type system for the lambda calculus with parametric polymorphism. It is also known as Damas–Milner or Damas–Hindley–Milner. It was first described by J. Roger Hindley[1] and later rediscovered by Robin Milner.[2] Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis.[3][4]

Among HM's more notable properties are its completeness and its ability to infer the most general type of a given program without programmer-supplied type annotations or other hints. Algorithm W is an efficient type inference method that performs in almost linear time with respect to the size of the source, making it practically useful to type large programs.[note 1] HM is most commonly used for functional languages. It was first implemented as part of the type system of the programming language ML. Since then, HM has been extended in various ways, most notably with type class constraints like those in Haskell.

Introduction

One and the same thing can be used for many purposes. A chair might be used to support a sitting person but also as a ladder to stand on while changing a light bulb or as a clothes valet. Beside having particular material qualities, which make a chair usable as such, it also has the particular designation for its use. When no chair is at hand, other things might be used as a seat, and so the designation of a thing can be changed as fast as one can turn an empty bottle crate upside down to change its purpose from a container to that of a support.

Different uses of physically near-identical things are usually accompanied by giving those things different names to emphasize the intended purpose. Depending on the use, seamen have a dozen or more words for a rope, though it might materially be the same thing. The same happens in everyday language, where a leash indicates a use different from a line.

In computer science, this practice of naming things by their intended use is taken to an extreme called typing, and the names or expressions are called types:

  • Contrary to the richly varying raw materials in the physical world, computing has only one raw material, bytes, and much like letters in a book they are used to build up and express everything in a computer's memory.
  • Programmers not only have many words indicating the different uses for bytes, but rather a plethora of formal type languages each having their own grammar.
  • There are branches of research in computer science and mathematics dedicated to developing and enhancing such languages. Their theoretical foundations have type systems and type theory as their subjects.
  • Types are not only used in specifications and documentation, but are also an integral part of programming languages. There, they are mechanised to strongly support the programmers' tasks.

Beside structuring objects, (data) types serve as a means to validate that these objects are used as intended. Much like the crate, which at any one time can be used only as either a support or a container, a particular arrangement of bytes designated for one purpose might exclude other possible uses.

In programming, these uses are expressed as functions or procedures, which serve the role of verbs in natural language. As an example for typing verbs, an English dictionary might define gift as "to give so. sth.", indicating that the indirect object must be a person and the direct object a physical thing. In programming, "so." (someone) and "sth." (something) would be called types, and using a thing in the place of "so." would be flagged as a programming error by a type checker.

Beside checking, one can use the types in this example to gain knowledge about an unknown word. Reading the sentence "Mary gifts John a bilber", the types could be used to conclude that a "bilber" is likely a physical thing. This activity and its conclusion are called type inference. As the story unfolds, more and more information about the unknown "bilber" may be gained, and eventually enough details become known to form a complete image of that kind of thing.

The type inference method designed by Hindley and Milner does just this for programming languages. The advantage of type inference over type checking is that it allows a more natural and dense style of programming. Instead of starting a program text with a glossary defining what a bilber and everything else is, one can distribute this information over the text simply by using the yet undefined words and let a program collect all the details about them. The method works both for nouns (data types) and for verbs (function types). As a consequence, a programmer can proceed without ever mentioning types at all while still having the full support of a type checker that validates their writing. When reading a program, the programmer can use type inference to query the full definition of anything named in the program whenever needed.

History of type inference

Historically, type inference to this extent was developed for a particular group of programming languages, called functional languages. These started in 1958 with Lisp, a programming language based on the lambda calculus that compares well with modern scripting languages like Python or Lua. Lisp was mainly used for computer science research, often for symbol manipulation purposes where large, tree-like data structures were common.

Data in Lisp is dynamically typed and the types are only available to some degree while running a program. Debugging type errors was no less of a concern than it is with modern scripting languages. Because programs were written completely untyped, i.e. without any explicit type information, maintaining large programs written in Lisp soon became a problem: the many complicated types of the data were mentioned, at best, only in the program documentation and comments.

Thus the need to have a Lisp-like language with machine-checkable types became more and more pressing. At some point, programming language development faced two challenges:

  1. Polymorphism. Some kinds of data are very generic. In particular, Lisp is a "list programming language" where lists are data whose type can be a "list of something", e.g. a list of numbers. Functions for such generic data types are often themselves generic. For instance, counting the number of items in a list is independent of the type of its items. However, a generic function that adds another item to a given list needs type checking to ensure that the list remains consistent with respect to the type of its items, for example that only numbers are added to a list of numbers. Types for such "generic" data and functions are called polymorphic, meaning that they can be used for more than one type. The polymorphic function for adding an item can be used for a list of numbers as well as for a list of words or even a list of anything. More precisely, this kind of polymorphism is called parametric polymorphism, where "something" is the parameter in "list of something". More formally, "list of something" may be written List T with T being the type parameter. The type of a function that adds a new item to a list is forall T . T -> List T -> List T, meaning that for any and all types T the adding function takes an item of type T and a list of Ts, and produces a resulting (new) list of Ts. Thus, the first challenge was to design a type system that properly expresses parametric polymorphism.
  2. Type inference. Unfortunately, polymorphic functions requiring type checking must be continuously informed of the types. The above function would need the type T as an additional first parameter, resulting in program text so cluttered with type information that it becomes unreadable. Additionally, when keying in a program, a programmer would spend a significant portion of time keying in types.

As an example, polymorphically constructing the list "(1 2)" of two numbers would mean writing:

  (addItem Number 1 (addItem Number 2 (emptyList Number)))

This example is quite typical: every third word is a type, monotonously serving the type checker at every step. This worsens as the types become more complex; the methods to be expressed in code then become buried in types.

To handle this issue, effective methods for type inference were the subject of research, and the Hindley–Milner method was one of them. Their method was first used in ML (1973) and is also used in an extended form in Haskell (1990). The HM type inference method is strong enough to infer types not only for expressions but for whole programs, including procedures and local definitions, providing a type-less style of programming.

The following text gives an impression of the resulting programming style for the quicksort procedure in Haskell:

quickSort []     = []                           -- Sort an empty list
quickSort (x:xs) = quickSort (filter (<x) xs)   -- Sort the left part of the list
                   ++ [x] ++                    -- Insert pivot between two sorted parts
                   quickSort (filter (>=x) xs)  -- Sort the right part of the list

Though all of the functions in the above example need type parameters, types are nowhere mentioned. The code is statically type-checked even though the type of the function defined is unknown and must be inferred to type-check the applications in the body.
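For instance, loading the definition above into a Haskell compiler such as GHC and asking for the type of quickSort yields the inferred, most general signature (the Ord constraint stems from Haskell's type-class extension of HM, discussed under Overloading below):

-- Inferred by the compiler, not written by the programmer:
quickSort :: Ord a => [a] -> [a]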

Over the years other programming languages added their own versions of parametric types. C++ templates were standardized in 1998 and Java introduced generics in 2004. As programming with type parameters became more common, problems similar to the ones sketched for Lisp surfaced in imperative languages too, though perhaps not as pressing as for the functional languages. As a consequence these languages obtained support for some type inference techniques, for instance "auto" in C++11 (2011). Unfortunately, the stronger type inference methods developed for functional programming cannot simply be integrated into the imperative languages, as their type systems' features are in part incompatible, and a programming language must be designed from the ground up when pervasive type inference is wanted.

Features of the Hindley–Milner method

Before presenting the HM type system and related algorithms, the following sections make some features of HM more formal and precise.

Type-checking vs. type-inference

In a typing, an expression E is paired with a type T, formally written as E : T. Usually a typing only makes sense within some context, which is omitted here.

In this setting, the following questions are of particular interest:

  1. E : T? In this case, both an expression E and a type T are given. Now, is E really a T? This scenario is known as type-checking.
  2. E : _? Here, only the expression is known. So what type is E? If there is a way to derive a type for E, then we have accomplished type inference.
  3. _ : T? The other way round. Given only a type, is there any expression for it or does the type have no values? Is there any example of a T? And in light of the Curry–Howard isomorphism, is there a proof for T?

For the simply typed lambda calculus, all three questions are decidable. The situation is not as comfortable when more expressive types are allowed. Additionally, the simply typed lambda calculus makes the types of the parameters of each function explicit, while they are not needed in HM. While HM is a method for type inference, it can also be used for type checking, and so answer the first question: a type is first inferred from E and then compared with the type wanted. The third question becomes of interest when looking at recursively defined functions at the end of this article.
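A small Haskell sketch of the first two questions (the names are illustrative): a signature states a type to be checked, while omitting it asks the compiler to infer one.

-- Type checking (question 1): both the expression and the type Int are given.
checked :: Int
checked = 3 + 4

-- Type inference (question 2): only the expression is given; the compiler
-- infers the most general type, here Num a => a -> a (the Num constraint
-- comes from Haskell's type classes, an extension of plain HM).
inferred x = x + 1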

Monomorphism vs. polymorphism

In the simply typed lambda calculus, types are either atomic type constants or function types of the form T -> T'. Such types are monomorphic. Typical examples are the types used in arithmetic values:

 3       : Number
 add 3 4 : Number
 add     : Number -> Number -> Number

Contrary to this, the untyped lambda calculus is entirely neutral to typing, and many of its functions can be meaningfully applied to all types of arguments. The trivial example is the identity function

id = λ x . x

which simply returns whatever value it is applied to. Less trivial examples include parametric types like lists.

While polymorphism in general means that operations accept values of more than one type, the polymorphism used here is parametric. One finds the notation of type schemes in the literature, too, emphasizing the parametric nature of the polymorphism. Additionally, constants may be typed with (quantified) type variables. E.g.:

 cons : forall a . a -> List a -> List a
 nil  : forall a . List a.
 id   : forall a . a -> a

Polymorphic types can become monomorphic by consistent substitution of their variables. Examples of monomorphic instances are:

id'  : String -> String
nil' : List Number

More generally, types are polymorphic when they contain type variables, while types without them are monomorphic.
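For illustration, the monomorphic instances above can be written as ordinary Haskell declarations (using [] for List and Int in place of Number; the names are taken from the example):

id' :: String -> String
id' = \x -> x       -- the identity function, fixed to strings

nil' :: [Int]
nil' = []           -- the empty list, fixed to lists of numbers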

Contrary to the type systems used for example in Pascal (1970) or C (1972), which only support monomorphic types, HM is designed with emphasis on parametric polymorphism. The successors of the languages mentioned, like C++ (1985), focused on different types of polymorphism, namely subtyping in connection with object-oriented programming and overloading. While subtyping is incompatible with HM, a variant of systematic overloading is available in the HM-based type system of Haskell.

Let-polymorphism

When extending the simply typed lambda calculus towards polymorphism, one has to define when deriving an instance of a value is admissible. Ideally, this would be allowed with any use of a bound variable, as in:

 (λ id .  ... (id 3) ... (id "text") ... ) (λ x . x)

Unfortunately, type inference in such a system is not decidable. Instead, HM provides let-polymorphism of the form

 let id = λ x . x
  in ... (id 3) ... (id "text") ...

restricting the binding mechanism in an extension of the expression syntax. Only values bound in a let construct are subject to instantiation, i.e. are polymorphic, while the parameters in lambda-abstractions are treated as being monomorphic.
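The same restriction can be observed in Haskell, whose type system is based on HM; the following is a small illustrative sketch (names are hypothetical):

-- The let-bound ident is polymorphic and may be used at two different types:
okay :: (Int, String)
okay = let ident = \x -> x
       in (ident 3, ident "text")

-- A lambda-bound parameter stays monomorphic; the analogous expression is rejected:
-- bad = (\ident -> (ident 3, ident "text")) (\x -> x)   -- type error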

Overview

The remainder of the article is more technical as it has to present the HM method as it is handled in the literature. It proceeds as follows:

  • The HM type system is defined. This is done by describing a deduction system that makes precise what expressions have what type, if any.
  • From there, it works towards an implementation of the type inference method. After introducing a syntax-directed variant of the above deductive system, it sketches an efficient implementation (algorithm J), appealing mostly to the reader's metalogical intuition.
  • Because it remains open whether algorithm J indeed realises the initial deduction system, a less efficient implementation (algorithm W) is introduced and its use in a proof is hinted at.
  • Finally, further topics related to the algorithm are discussed.

The same description of the deduction system is used throughout, even for the two algorithms, to make the various forms in which the HM method is presented directly comparable.

The Hindley–Milner type system

The type system can be formally described by syntax rules that fix a language for the expressions, types, etc. The presentation here of such a syntax is not too formal, in that it is written down not to study the surface grammar, but rather the depth grammar, and leaves some syntactical details open. This form of presentation is usual. Building on this, type rules are used to define how expressions and types are related. As before, the form used is a bit liberal.

Syntax

Expressions:         e ::= x | e1 e2 | λ x . e | let x = e1 in e2
Types (monotypes):   τ ::= α | C τ1 ... τn   (type variables and applications of type functions C, including ->)
Types (polytypes):   σ ::= τ | forall α . σ
Context:             Γ ::= ε | Γ, x : σ
Typing:              Γ ⊢ e : σ
Free type variables: the type variables occurring in a type or context that are not bound by a forall quantifier

The expressions to be typed are exactly those of the lambda calculus extended with a let-expression as shown in the adjacent table. Parentheses can be used to disambiguate an expression. The application is left-binding, i.e. e1 e2 e3 reads as (e1 e2) e3, and binds stronger than abstraction or the let-in construct.

Types are syntactically split into two groups, monotypes and polytypes.[note 2]

Monotypes

Monotypes always designate a particular type. Monotypes are syntactically represented as terms.

Examples of monotypes include type constants like Number or String, and parametric types like List Number. The latter types are examples of applications of type functions, for example from a set like { List¹, Number⁰, String⁰, ->² }, where the superscript indicates the number of type parameters. The complete set of type functions C is arbitrary in HM,[note 3] except that it must contain at least ->, the type of functions. It is often written in infix notation for convenience; for example, a function mapping numbers to strings has type Number -> String. Again, parentheses can be used to disambiguate a type expression. The application binds stronger than the infix arrow, which is right-binding.

Type variables are admitted as monotypes. Monotypes are not to be confused with monomorphic types, which exclude variables and allow only ground terms.

Two monotypes are equal if they have identical terms.

Polytypes

Polytypes (or type schemes) are types containing variables bound by one or more for-all quantifiers, e.g. forall a . a -> a.

A function with polytype forall a . a -> a can map any value of the same type to itself, and the identity function is a value for this type.

As another example, forall a . Set a -> Number is the type of a function mapping all finite sets to numbers. A function that counts the members of a set is a value for this type.

Note that quantifiers can only appear at the top level: a type with a quantifier nested inside a function type, for instance, is excluded by the syntax of types. Note also that monotypes are included in the polytypes, thus a type has the general form forall a1 ... an . τ, where n may be zero and τ is a monotype.

Equality of polytypes is up to reordering the quantification and renaming the quantified variables (alpha-conversion). Further, quantified variables not occurring in the monotype can be dropped.

Context and Typing

To meaningfully bring together the still disjoint parts (syntax expressions and types) a third part is needed: context. Syntactically, a context is a list of pairs x : σ, called assignments, assumptions or bindings, each pair stating that value variable x has type σ. All three parts combined give a typing judgment of the form Γ ⊢ e : σ, stating that under assumptions Γ, the expression e has type σ.

Free type variables

In a type forall a1 ... an . τ, the symbol ∀ is the quantifier binding the type variables a1 ... an in the monotype τ. These variables are called quantified, any occurrence of a quantified type variable in τ is called bound, and all unbound type variables in τ are called free. In addition to the quantification in polytypes, type variables can also be bound by occurring in the context, but with the inverse effect on the right hand side of the ⊢: such variables then behave like type constants there. Finally, a type variable may legally occur unbound in a typing, in which case it is implicitly all-quantified.

The presence of both bound and unbound type variables is a bit uncommon in programming languages. Often, all type variables are implicitly treated as all-quantified. For instance, one does not have clauses with free variables in Prolog. Likewise in Haskell, in the absence of the ScopedTypeVariables language extension, all type variables implicitly occur quantified, i.e. a Haskell type a -> a means forall a . a -> a here.
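A small Haskell sketch of this convention (the ExplicitForAll extension merely allows the quantifier to be written out; the names f and g are illustrative):

{-# LANGUAGE ExplicitForAll #-}

-- Both signatures mean the same thing: without ScopedTypeVariables, a free
-- type variable in a signature is implicitly quantified at the top level.
f :: a -> a
f x = x

g :: forall a. a -> a
g x = x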

Type order

Polymorphism means that one and the same expression can have (perhaps infinitely) many types. But in this type system, these types are not completely unrelated, but rather orchestrated by the parametric polymorphism.

As an example, the identity λ x . x can have forall a . a -> a as its type, as well as String -> String or Number -> Number and many others, but not Number -> String. The most general type for this function is forall a . a -> a; the others are more specific and can be derived from the general one by consistently replacing another type for the type parameter, i.e. the quantified variable a. The counter-example fails because the replacement is not consistent.

The consistent replacement can be made formal by applying a substitution S to the term of a type σ, written S σ. As the example suggests, substitution is not only strongly related to an order that expresses that a type is more or less special, but also to the all-quantification which allows the substitution to be applied.

Specialization Rule

Formally, in HM, a type σ is more general than a type σ' if some quantified variable in σ can be consistently substituted such that one gains σ', as shown in the side bar. This order is part of the definition of the type system.

While substituting a monomorphic (ground) type for a quantified variable is straightforward, substituting a polytype has some pitfalls caused by the presence of free variables. Most particularly, unbound variables must not be replaced; they are treated as constants here. Additionally, quantifications can only occur top-level: substituting a parametric type, one has to lift its quantifiers. The table on the right makes the rule precise.

Alternatively, consider an equivalent notation for the polytypes without quantifiers, in which quantified variables are represented by a different set of symbols. In such a notation, the specialization reduces to plain consistent replacement of such variables.

The specialisation relation is a partial order, and forall a . a is its smallest element.

Principal type

While specialization of a type scheme is one use of the order, it plays a crucial second role in the type system. Type inference with polymorphism faces the challenge of summarizing all possible types an expression may have. The order guarantees that such a summary exists as the most general type of the expression.

Substitution in typings

The type order defined above can be extended to typings because the implied all-quantification of typings enables consistent replacement:

Contrary to the specialisation rule, this is not part of the definition but, like the implicit all-quantification, rather a consequence of the type rules defined next. Free type variables in a typing serve as placeholders for possible refinement. The binding effect of the environment on free type variables on the right hand side of the ⊢, which prohibits their substitution in the specialisation rule, is again that a replacement has to be consistent and would need to include the whole typing.

Deductive system

The Syntax of Rules

The syntax of HM is carried forward to the syntax of the inference rules that form the body of the formal system, by using the typings as judgments. Each of the rules defines what conclusion can be drawn from what premises. In addition to the judgments, some extra conditions introduced above may be used as premises, too.

A proof using the rules is a sequence of judgments such that all premises are listed before a conclusion. The examples below show a possible format of proofs. From left to right, each line shows the conclusion, the name of the rule applied and the premises, either by referring to an earlier line (number) if the premise is a judgment or by making the predicate explicit.

Typing rules

Declarative Rule System

The side box shows the deduction rules of the HM type system. One can roughly divide the rules into two groups:

The first four rules [Var] (variable or function access), [App] (application, i.e. function call with one parameter), [Abs] (abstraction, i.e. function declaration) and [Let] (variable declaration) are centered around the syntax, presenting one rule for each of the expression forms. Their meaning is obvious at first glance, as they decompose each expression, prove their sub-expressions and finally combine the individual types found in the premises to the type in the conclusion.

The second group is formed by the remaining two rules [Inst] and [Gen]. They handle specialization and generalization of types. While the rule [Inst] should be clear from the section on specialization above, [Gen] complements the former, working in the opposite direction. It allows generalization, i.e. quantifying monotype variables that are not bound in the context.

The following two examples exercise the rule system in action. Since both the expression and the type are given, they are a type-checking use of the rules.

Example: A proof for where , could be written

Example: To demonstrate generalization, is shown below:

Let-polymorphism

Not immediately visible, the rule set encodes a regulation under which circumstances a type may be generalized or not, by a slightly varying use of mono- and polytypes in the rules [Abs] and [Let]. Remember that σ and τ denote poly- and monotypes respectively.

In rule [Abs], the value variable of the parameter of the function is added to the context with a monomorphic type τ through the premise, while in the rule [Let], the variable enters the environment in polymorphic form σ. Though in both cases the presence of the variable in the context prevents the use of the generalisation rule for any free variable in its assignment, this regulation forces the type of a parameter in a λ-expression to remain monomorphic, while in a let-expression, the variable can be introduced polymorphic, making specializations possible.

As a consequence of this regulation, λ x . (x x) cannot be typed, since the parameter x is in a monomorphic position, while let y = λ x . x in (y y) has type forall a . a -> a, because y has been introduced in a let-expression and is therefore treated as polymorphic.
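In Haskell, again, this difference is directly observable (the names below are illustrative):

-- The let-bound y is polymorphic, so it can be applied to itself:
selfApp :: a -> a
selfApp = let y = \x -> x in y y

-- A lambda-bound x is monomorphic; self-application fails the occurs check:
-- bad = \x -> x x    -- rejected under HM (and by GHC without extensions)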

Generalization rule

The generalisation rule is also worth a closer look. Here, the all-quantification implicit in the premise is simply made explicit on the right hand side of the ⊢ in the conclusion. This is possible since the quantified variable does not occur free in the context. Again, while this makes the generalisation rule plausible, it is not really a consequence. Vice versa, the generalisation rule is part of the definition of HM's type system and the implicit all-quantification a consequence.

An inference algorithm

Now that the deduction system of HM is at hand, one could present an algorithm and validate it with respect to the rules. Alternatively, it might be possible to derive it by taking a closer look at how the rules interact and proofs are formed. This is done in the remainder of this article, focusing on the possible decisions one can make while proving a typing.

Degrees of freedom choosing the rules

Isolating the points in a proof where no decision is possible at all, the first group of rules centered around the syntax leaves no choice, since to each syntactical rule corresponds a unique typing rule which determines a part of the proof. Between the conclusion and the premises of these fixed parts, however, chains of [Inst] and [Gen] could occur, and such a chain could also exist between the conclusion of the proof and the rule for the topmost expression. All proofs must have the shape so sketched.

Because the only choice in a proof with respect to rule selection is where the [Inst] and [Gen] chains are placed, the form of the proof suggests the question whether it can be made more precise where these chains might be needed. This is in fact possible and leads to a variant of the rule system with no such rules.

Syntax-directed rule system

Syntactical Rule System
Generalization

A contemporary treatment of HM uses a purely syntax-directed rule system due to Clement[5] as an intermediate step. In this system, the specialization is located directly after the original [Var] rule and merged into it, while the generalization becomes part of the [Let] rule. There the generalization is also determined to always produce the most general type, by introducing a generalisation function which quantifies all monotype variables not bound in Γ.

Formally, to validate that this new rule system S is equivalent to the original system D, one has to show that both derive the same typings, which falls apart into two sub-proofs, consistency and completeness.

While consistency can be seen by decomposing the rules [Inst] and [Gen] of D into proofs in S, it is likely visible that S is incomplete, as one cannot show, for instance, that λ x . x has type forall a . a -> a in S, but only the type a -> a. An only slightly weaker version of completeness is provable,[6] though: whenever an expression can be given a type in D, the type derived for it in S generalises to one at least as general.

This implies that one can derive the principal type for an expression in S, allowing us to generalize the proof in the end.

Comparing D and S, note that now only monotypes appear in the judgments of all rules. Additionally, the shape of any possible proof with the deduction system is now identical to the shape of the expression (both seen as trees). Thus the expression fully determines the shape of the proof. In D the shape would likely be determined with respect to all rules except [Inst] and [Gen], which allow building arbitrarily long branches (chains) between the other nodes.

Degrees of freedom instantiating the rules

Now that the shape of the proof is known, one is already close to formulating a type inference algorithm. Because any proof for a given expression must have the same shape, one can assume the monotypes in the proof's judgements to be undetermined and consider how to determine them.

Here, the substitution (specialisation) order comes into play. Although at first glance one cannot determine the types locally, the hope is that it is possible to refine them with the help of the order while traversing the proof tree, additionally assuming, because the resulting algorithm is to become an inference method, that the type in any premise will be determined as the best possible. And in fact, one can, as looking at the rules of S suggests:

  • [Abs]: The critical choice is the type for the parameter. At this point, nothing is known about the parameter, so one can only assume the most general type, which is forall a . a. The plan is to specialize the type if it should become necessary. Unfortunately, a polytype is not permitted in this place, so some monotype variable has to do for the moment. To avoid unwanted captures, a type variable not yet in the proof is a safe choice. Additionally, one has to keep in mind that this monotype is not yet fixed, but might be further refined.
  • [Var]: The choice is how to refine the polytype of the variable taken from the context. Because any choice of a type here depends on the usage of the variable, which is not locally known, the safest bet is the most general one. Using the same method as above one can instantiate all quantified variables with fresh monotype variables, again keeping them open to further refinement.
  • [Let]: The rule does not leave any choice. Done.
  • [App]: Only the application rule might force a refinement to the variables "opened" so far.

The first premise forces the outcome of the inference to be of the form τ -> τ'.

  • If it is, then fine. One can later pick its result type τ' for the result.
  • If not, it might be an open variable. Then this can be refined to the required form with two new variables as before.
  • Otherwise, the type checking fails because the first premise inferred a type which is not a function type and cannot be made into one.

The second premise requires that the inferred type of the argument is equal to the τ of the first premise. Now there are two possibly different types, perhaps with open type variables, at hand to compare and to make equal if possible. If it is, a refinement is found, and if not, a type error is detected again. An effective method is known to "make two terms equal" by substitution: Robinson's unification in combination with the so-called union-find algorithm.

To briefly summarize the union-find algorithm: given the set of all types in a proof, it allows one to group them together into equivalence classes by means of a union procedure and to pick a representative for each such class using a find procedure. Emphasizing the word procedure in the sense of side effect, we're clearly leaving the realm of logic in order to prepare an effective algorithm. The representative of a union(a, b) is determined such that, if both a and b are type variables then the representative is arbitrarily one of them, but while uniting a variable and a term, the term becomes the representative. Assuming an implementation of union-find at hand, one can formulate the unification of two monotypes as follows:

unify(ta,tb):
  ta = find(ta)
  tb = find(tb)
  if both ta,tb are terms of the form D p1..pn with identical D,n then
    unify(ta[i],tb[i]) for each corresponding ith parameter
  else
  if at least one of ta,tb is a type variable then
    union(ta,tb)
  else
    error 'types do not match'
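For comparison, a minimal unifier can also be sketched in Haskell. The sketch below is substitution-based rather than union-find-based: instead of updating equivalence classes by side effect, it returns an explicit most general unifier. All names (Ty, Subst, unify, ...) are illustrative.

import qualified Data.Map as Map
import Control.Monad (foldM)

-- A monotype is a type variable or an application D t1 .. tn of a type function D.
data Ty = TVar String | TCon String [Ty] deriving (Eq, Show)

type Subst = Map.Map String Ty

apply :: Subst -> Ty -> Ty
apply s (TVar a)    = Map.findWithDefault (TVar a) a s
apply s (TCon d ts) = TCon d (map (apply s) ts)

compose :: Subst -> Subst -> Subst            -- apply s2 first, then s1
compose s1 s2 = Map.map (apply s1) s2 `Map.union` s1

freeVars :: Ty -> [String]
freeVars (TVar a)    = [a]
freeVars (TCon _ ts) = concatMap freeVars ts

-- Unify two monotypes, yielding the most general unifier or an error message.
unify :: Ty -> Ty -> Either String Subst
unify (TVar a) t = bindVar a t
unify t (TVar a) = bindVar a t
unify (TCon d ts) (TCon d' ts')
  | d == d' && length ts == length ts' = foldM step Map.empty (zip ts ts')
  where
    step s (a, b) = do
      s' <- unify (apply s a) (apply s b)
      pure (compose s' s)
unify t1 t2 = Left ("types do not match: " ++ show t1 ++ " vs " ++ show t2)

bindVar :: String -> Ty -> Either String Subst
bindVar a t
  | t == TVar a         = Right Map.empty
  | a `elem` freeVars t = Left "occurs check failed (recursive type)"
  | otherwise           = Right (Map.singleton a t)

-- Example: unifying  a -> Number  with  Bool -> b  yields {a := Bool, b := Number}.
main :: IO ()
main = print (unify (fn (TVar "a") num) (fn bool (TVar "b")))
  where fn t u = TCon "->" [t, u]
        num    = TCon "Number" []
        bool   = TCon "Bool"   []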

Now having a sketch of an inference algorithm at hand, a more formal presentation is given in the next section. It is described in Milner[2] P. 370 ff. as algorithm J.

Algorithm J

Algorithm J

The presentation of Algorithm J is a misuse of the notation of logical rules, since it includes side effects, but it allows a direct comparison with S while expressing an efficient implementation at the same time. The rules now specify a procedure with the context and an expression as parameters, yielding a monotype in the conclusion, where the execution of the premises proceeds from left to right.

The procedure inst(σ) specializes the polytype σ by copying the term and replacing the bound type variables consistently by new monotype variables. 'newvar' produces a new monotype variable. Likewise, the generalisation has to copy the type, introducing new variables for the quantification to avoid unwanted captures. Overall, the algorithm now proceeds by always making the most general choice, leaving the specialization to the unification, which by itself produces the most general result. As noted above, the final result has to be generalized in the end, to gain the most general type for a given expression.

Because the procedures used in the algorithm have nearly O(1) cost, the overall cost of the algorithm is close to linear in the size of the expression for which a type is to be inferred. This is in strong contrast to many other attempts to derive type inference algorithms, which often came out to be NP-hard, if not undecidable with respect to termination. Thus the HM performs as well as the best fully informed type-checking algorithms can. Type-checking here means that an algorithm does not have to find a proof, but only to validate a given one.

Efficiency is slightly reduced because the binding of type variables in the context has to be maintained to allow computation of the generalisation and to enable an occurs check that prevents the building of recursive types during unification. An example of such a case is λ x . (x x), for which no type can be derived using HM. Practically, types are only small terms and do not build up expanding structures. Thus, in complexity analysis, one can treat comparing them as a constant, retaining O(1) costs.

Proving the algorithm

In the previous section, while sketching the algorithm its proof was hinted at with metalogical argumentation. While this leads to an efficient algorithm J, it is not clear whether the algorithm properly reflects the deduction systems D or S which serve as a semantic base line.

The most critical point in the above argumentation is the refinement of monotype variables bound by the context. For instance, the algorithm boldly changes the context while inferring the type of an expression whose lambda-bound parameter is used as a function, because the monotype variable added to the context for the parameter later needs to be refined to a function type when handling the application. The problem is that the deduction rules do not allow such a refinement. Arguing that the refined type could have been added earlier instead of the monotype variable is an expedient at best.

The key to reaching a formally satisfying argument is to properly include the context within the refinement. Formally, typing is compatible with substitution of free type variables.

To refine the free variables thus means to refine the whole typing.

Algorithm W

Algorithm W

From there, a proof of algorithm J leads to algorithm W, which only makes the side effects imposed by the unification procedure explicit, by expressing its serial composition by means of substitutions. The presentation of algorithm W in the sidebar still makes use of side effects in the operations set in italic, but these are now limited to generating fresh symbols. The form of the judgement now pairs the inferred monotype with a substitution: it denotes a function with a context and an expression as parameters, producing a monotype together with a substitution. The procedure mgu is a side-effect free version of the unification, producing a substitution which is the most general unifier.

While algorithm W is normally considered to be the HM algorithm and is often directly presented after the rule system in literature, its purpose is described by Milner[2] on P. 369 as follows:

As it stands, W is hardly an efficient algorithm; substitutions are applied too often. It was formulated to aid the proof of soundness. We now present a simpler algorithm J which simulates W in a precise sense.

While he considered W more complicated and less efficient, he presented it in his publication before J. It has its merits when side effects are unavailable or unwanted. By the way, W is also needed to prove completeness, which is factored by him into the soundness proof.
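The following self-contained Haskell sketch follows the overall structure of algorithm W: substitutions are returned and composed explicitly, generating fresh type variables is the only side effect, and generalisation happens at let. It is an illustration under simplifying assumptions, not the presentation from the original papers; all names are hypothetical, and error handling is reduced to calls of error.

import qualified Data.Map as Map
import qualified Data.Set as Set
import Control.Monad.State (State, evalState, get, put)

-- Abstract syntax of the HM expression language.
data Exp = EVar String | EApp Exp Exp | EAbs String Exp | ELet String Exp Exp

data Ty     = TVar String | TCon String | TFun Ty Ty deriving (Eq, Show)
data Scheme = Forall [String] Ty              -- a polytype  forall a1 .. an . tau

type Subst = Map.Map String Ty                -- map from type variables to monotypes
type Env   = Map.Map String Scheme            -- the context Gamma

applyTy :: Subst -> Ty -> Ty
applyTy s t@(TVar a) = Map.findWithDefault t a s
applyTy _ t@(TCon _) = t
applyTy s (TFun a b) = TFun (applyTy s a) (applyTy s b)

applyScheme :: Subst -> Scheme -> Scheme
applyScheme s (Forall vs t) = Forall vs (applyTy (foldr Map.delete s vs) t)

compose :: Subst -> Subst -> Subst            -- apply s2 first, then s1
compose s1 s2 = Map.map (applyTy s1) s2 `Map.union` s1

ftv :: Ty -> Set.Set String
ftv (TVar a)   = Set.singleton a
ftv (TCon _)   = Set.empty
ftv (TFun a b) = ftv a `Set.union` ftv b

-- Robinson unification; for brevity, failures simply call 'error'.
unify :: Ty -> Ty -> Subst
unify (TFun a b) (TFun a' b') =
  let s1 = unify a a'
      s2 = unify (applyTy s1 b) (applyTy s1 b')
  in s2 `compose` s1
unify (TVar a) t = bindVar a t
unify t (TVar a) = bindVar a t
unify (TCon a) (TCon b) | a == b = Map.empty
unify t1 t2 = error ("cannot unify " ++ show t1 ++ " and " ++ show t2)

bindVar :: String -> Ty -> Subst
bindVar a t
  | t == TVar a          = Map.empty
  | a `Set.member` ftv t = error "occurs check: cannot build a recursive type"
  | otherwise            = Map.singleton a t

fresh :: State Int Ty                         -- the 'newvar' of the text
fresh = do
  n <- get
  put (n + 1)
  pure (TVar ("t" ++ show n))

instantiate :: Scheme -> State Int Ty         -- replace quantified variables by fresh ones
instantiate (Forall vs t) = do
  vs' <- mapM (const fresh) vs
  pure (applyTy (Map.fromList (zip vs vs')) t)

generalize :: Env -> Ty -> Scheme             -- quantify variables free in t but not in Gamma
generalize env t = Forall (Set.toList (ftv t `Set.difference` inEnv)) t
  where inEnv = Set.unions [ ftv ty `Set.difference` Set.fromList vs
                           | Forall vs ty <- Map.elems env ]

-- Algorithm W style inference: from a context and an expression to a substitution and a monotype.
infer :: Env -> Exp -> State Int (Subst, Ty)
infer env expr = case expr of
  EVar x -> case Map.lookup x env of
    Just sc -> do { t <- instantiate sc; pure (Map.empty, t) }
    Nothing -> error ("unbound variable " ++ x)
  EAbs x e -> do
    tv       <- fresh
    (s1, t1) <- infer (Map.insert x (Forall [] tv) env) e
    pure (s1, TFun (applyTy s1 tv) t1)
  EApp e1 e2 -> do
    (s1, t1) <- infer env e1
    (s2, t2) <- infer (Map.map (applyScheme s1) env) e2
    tv       <- fresh
    let s3 = unify (applyTy s2 t1) (TFun t2 tv)
    pure (s3 `compose` s2 `compose` s1, applyTy s3 tv)
  ELet x e1 e2 -> do
    (s1, t1) <- infer env e1
    let env1 = Map.map (applyScheme s1) env
    (s2, t2) <- infer (Map.insert x (generalize env1 t1) env1) e2
    pure (s2 `compose` s1, t2)

-- Example: the monotype inferred for  let id = \x -> x in id id  is  t2 -> t2,
-- which generalizes to  forall t2 . t2 -> t2.
main :: IO ()
main = print (snd (evalState (infer Map.empty demo) 0))
  where demo = ELet "id" (EAbs "x" (EVar "x")) (EApp (EVar "id") (EVar "id"))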

Proof obligations

Before formulating the proof obligations, a deviation between the rules systems D and S and the algorithms presented needs to be emphasized.

While the development above sort of misused the monotypes as "open" proof variables, the possibility that proper monotype variables might be harmed was sidestepped by introducing fresh variables and hoping for the best. But there's a catch: One of the promises made was that these fresh variables would be "kept in mind" as such. This promise is not fulfilled by the algorithm.

Having a context in which a value variable a is assumed to have a free monotype variable α as its type, an application such as a 1 cannot be typed in either D or S, but the algorithms come up with a type for it, where W additionally delivers a substitution refining α to a function type, meaning that the algorithm fails to detect all type errors. This omission can easily be fixed by more carefully distinguishing proof variables and monotype variables.

The authors were well aware of the problem but decided not to fix it. One might assume a pragmatic reason behind this. While more properly implementing the type inference would have enabled the algorithm to deal with abstract monotypes, they were not needed for the intended application, where none of the items in a preexisting context have free variables. In this light, the unneeded complication was dropped in favor of a simpler algorithm. The remaining downside is that the proof of the algorithm with respect to the rule system is less general and can only be made for contexts without free type variables as a side condition.

The side condition in the completeness obligation addresses how the deduction may give many types, while the algorithm always produces one. At the same time, the side condition demands that the type inferred is actually the most general.

To properly prove the obligations one needs to strengthen them first, to allow activating the substitution lemma that threads the substitution through S and W. From there, the proofs are by induction over the expression.

Another proof obligation is the substitution lemma itself, i.e. the substitution of the typing, which finally establishes the all-quantification. The latter cannot formally be proven, since no such syntax is at hand.

Extensions

Recursive definitions

To make programming practical, recursive functions are needed. A central property of the lambda calculus is that recursive definitions are not directly available, but can instead be expressed with a fixed point combinator. Unfortunately, the fixpoint combinator cannot be formulated in a typed version of the lambda calculus without having a disastrous effect on the system, as outlined below.

Typing rule

The original paper[4] notes that recursion can be realized by a combinator fix : forall a . (a -> a) -> a. A possible recursive definition of a function f with body e could thus be formulated as let f = fix (λ f . e).
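In Haskell, where recursive definitions are built in, such a combinator can be written down directly and recursive functions can (though need not) be expressed through it; a small sketch with illustrative names:

-- The fixed-point combinator; its inferred type is (a -> a) -> a.
fix :: (a -> a) -> a
fix f = f (fix f)

-- A recursive function expressed via fix instead of direct recursion:
lengthRec :: [a] -> Int
lengthRec = fix (\rec xs -> case xs of
                              []     -> 0
                              (_:ys) -> 1 + rec ys)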

Alternatively an extension of the expression syntax and an extra typing rule is possible:

where

basically merging [Abs] and [Let] while including the recursively defined variables in monotype positions where they occur to the left of the in, but as polytypes to the right of it. This formulation perhaps best summarizes the essence of let-polymorphism.

Consequences

While the above is straightforward it does come at a price.

Type theory connects lambda calculus computation and logic. The easy modification above has effects on both:

Programs in the simply typed lambda calculus are guaranteed to always terminate. Moreover, they are even guaranteed to terminate under any evaluation strategy, be it top down, bottom up, breadth first, whatever. The same is true for expressions that have types in HM. It is well known that separating terminating from non-terminating programs is most difficult, especially in lambda calculus, which is so expressive that it can formulate recursion with just a few symbols. Thus the initial inability of HM to provide recursive functions was not an omission, but a feature. Adding recursion enables normal programming, but the guarantee is no longer valid.

Another reading of the typing is given by the Curry–Howard isomorphism. Here the types are interpreted as logical propositions. Read this way, the type forall a . (a -> a) -> a of the fixpoint combinator asserts that for every proposition a, from a -> a one may conclude a.

But this is invalid. Adding an invalid axiom will break the logic in the sense that every formula can then be shown to be true in it. Thus the ability to distinguish even two simple things is no longer given; everything is the same and collapses into 42. The fixpoint combinator that came in so handy above also plays a role in Curry's paradox.

Logic aside, does this matter for typing programs? It does. Since one is now able to formulate non-terminating functions, one can make a function that would return whatever one wants[note 4] but never really returns:

 fix (λ x . x) : forall a . a

In practical programming such a function can come in handy when breaking out of a computation, like with exit(1) in C, while silencing the type checker in the current branch by returning essentially nothing but with a suitable type.

Less desirable is that the type checker (type inferencer) now succeeds with a type for a function that in fact never returns any value, such as the type forall a . a above. The function claims to return a value of this type, but it cannot, because no terminating function with this type exists. The type checker's claim that everything is ok thus has to be taken with a grain of salt: the types might be claimed to be checked, but the program can still be typed wrong. Only if all functions are terminating do the types in the logical reading above have a true value, and only then do the assertions of the type checker become strong again.
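In Haskell, such a value can be written down directly (compare the built-in undefined); it type-checks at every type precisely because it never produces a result:

-- A value claimed to have every type; evaluating it loops forever.
diverge :: a
diverge = diverge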

Overloading

Overloading means that different functions can still be defined and used with the same name. Most programming languages at least provide overloading for the built-in arithmetic operations (+, <, etc.), to allow the programmer to write arithmetic expressions in the same form, even for different numerical types like int or real. Because a mixture of these different types within the same expression also demands implicit conversion, overloading especially of these operations is often built into the programming language itself. In some languages, this feature is generalized and made available to the user, e.g. in C++.

While ad-hoc overloading has been avoided in functional programming for its computation costs both in type checking and inference, a means to systematise overloading has been introduced that resembles, both in form and naming, object-oriented programming, but works one level upwards. "Instances" in this system are not objects (i.e. on the value level), but rather types. The quicksort example mentioned in the introduction uses the overloading of the order relations, having the following type annotation in Haskell:

quickSort :: Ord a => [a] -> [a]

Herein, the type a is not only polymorphic, but also restricted to be an instance of some type class Ord, which provides the order predicates < and >= used in the function's body. The proper implementations of these predicates are then passed to quickSort as additional parameters as soon as quickSort is used on more concrete types, providing a single implementation of the overloaded function quickSort.
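A hypothetical example: defining an Ord instance for a new type makes the overloaded quickSort from the introduction usable at that type, the instance supplying the implementations of the order predicates.

-- quickSort from the introduction, with its inferred signature written out.
quickSort :: Ord a => [a] -> [a]
quickSort []     = []
quickSort (x:xs) = quickSort (filter (< x) xs) ++ [x] ++ quickSort (filter (>= x) xs)

-- A user-defined type with its own ordering; the Ord instance acts as the
-- "dictionary" of comparison functions passed to quickSort behind the scenes.
data Color = Red | Green | Blue deriving (Eq, Show)

instance Ord Color where
  compare a b = compare (rank a) (rank b)
    where rank :: Color -> Int
          rank Red   = 0
          rank Green = 1
          rank Blue  = 2

main :: IO ()
main = print (quickSort [Blue, Red, Green, Red])   -- [Red,Red,Green,Blue]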

Because the "classes" only allow a single type as their argument, the resulting type system can still provide inference. Additionally, the type classes can then be equipped with some kind of overloading order allowing one to arrange the classes as a lattice.

Meta types

Parametric polymorphism implies that types themselves are passed as parameters as if they were proper values. Passing them as arguments to proper functions, as in the introduction, but also to "type functions", as in the "parametric" type constants, leads to the question of how to more properly type types themselves. A meta type,[note 5] the "type of types", would be useful to create an even more expressive type system.

Though this would be a straightforward extension, unfortunately unification is no longer decidable in the presence of meta types, rendering type inference impossible in this extent of generality. Additionally, assuming a type of all types that includes itself as a type leads into a paradox, as in the set of all sets, so one must proceed in steps of levels of abstraction. Research in second-order lambda calculus, one step upwards, showed that type inference is undecidable in this generality.

Part of one extra level has been introduced into Haskell under the name kind, where it helps to type monads. Kinds are left implicit, working behind the scenes in the inner mechanics of the extended type system.

Subtyping

Attempts to combine subtyping and type inference have caused quite some frustration. While type inference is needed in object-oriented programming for the same reason as in functional languages, methods like HM cannot be made to work for this purpose. It is not difficult to set up a type system with subtyping enabling an object-oriented style, as e.g. Cardelli[7] shows in his system F<:.

  • The type equivalence can be broken up into a subtyping relation "<:".
  • Extra type rules define this relation e.g. for the functions.
  • A suitable record type is then added whose values represent the objects.

Such objects would be immutable in a functional language context, but the type system would enable an object-oriented programming style, and the type inference method could be reused in imperative languages.

The subtyping rule for the record types is:

Syntactically, record expressions would have form

and have a type rule leading to the above type. Such record values could then be used the same way as objects in object oriented programming.

Notes

  1. Hindley–Milner is DEXPTIME-complete. Non-linear behaviour does manifest itself, yet only on pathological inputs. Thus the complexity theoretic proofs by (Mairson 1990) and (Kfoury, Tiuryn & Urzyczyn 1990) came as a surprise to the research community. When the depth of nested let-bindings is bounded—which is the case in realistic programs—Hindley–Milner type inference becomes polynomial.
  2. Polytypes are called "type schemes" in the original article.
  3. The parametric types were not present in the original paper on HM and are not needed to present the method. None of the inference rules below will take care of or even note them. The same holds for the non-parametric "primitive types" in said paper. All the machinery for polymorphic type inference can be defined without them. They have been included here for the sake of examples but also because the nature of HM is all about parametric types. This comes from the function type ->, hard-wired in the inference rules below, which already has two parameters and has been presented here as only a special case.
  4. To make a function that works just like Aladdin's wonderful lamp, one can define: and formulate the wish in a slightly more expressive type language like: "(iwisha (flying carpet))" passing the type explicitly as in the introduction. When the function returns, it delivers the wish, but no longer as a type (i.e. specification), but as something real, a value of that type. This implementation at least does not return. Many have dreamed of such a useful device, and if it exists it could do wonders even if only in computing. The type is so general, that its values belong to the realm of imagination.
  5. The term "higher-order" has been so overused, that "meta type" is preferred here.

References

  1. Hindley, J. Roger (1969). "The Principal Type-Scheme of an Object in Combinatory Logic". Transactions of the American Mathematical Society. 146: 29–60. doi:10.2307/1995158. JSTOR 1995158.
  2. Milner, Robin (1978). "A Theory of Type Polymorphism in Programming". Journal of Computer and System Sciences. 17: 348–374. CiteSeerX 10.1.1.67.5276. doi:10.1016/0022-0000(78)90014-4.
  3. Damas, Luis (1985). Type Assignment in Programming Languages (PhD thesis). University of Edinburgh (CST-33-85).
  4. Damas, Luis; Milner, Robin (1982). Principal type-schemes for functional programs (PDF). 9th Symposium on Principles of programming languages (POPL'82). ACM. pp. 207–212.
  5. Clement (1986). A Simple Applicative Language: Mini-ML. LFP'86. ACM.
  6. Vaughan, Jeff (July 23, 2008) [May 5, 2005]. "A proof of correctness for the Hindley–Milner type inference algorithm" (PDF). Archived from the original (PDF) on 2012-03-24.
  7. Cardelli, Luca; Martini, Simone; Mitchell, John C.; Scedrov, Andre (1994). "An extension of system F with subtyping". Information and Computation, vol. 9. North Holland, Amsterdam. pp. 4–56. doi:10.1006/inco.1994.1013.

  • Mairson, Harry G. (1990). "Deciding ML typability is complete for deterministic exponential time". Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages. POPL '90. ACM: 382–401. doi:10.1145/96709.96748. ISBN 0-89791-343-4.
  • Kfoury, A. J.; Tiuryn, J.; Urzyczyn, P. (1990). "ML typability is dexptime-complete". Lecture Notes in Computer Science. CAAP '90. 431: 206–220. doi:10.1007/3-540-52590-4_50.