Understanding Algorithm W

Posted on March 25, 2018

I found it really hard to fully understand Algorithm W, so I wrote this as a way of helping myself understand it better. Hopefully you find it useful, too.

This article is aimed at someone who already has some familiarity with a language that has type inference (such as Haskell, OCaml, or Rust) and is curious about how that inference works. Familiarity with one of those languages probably isn’t strictly necessary for understanding this article, but it would help.

I won’t cover the theory or how to prove that Algorithm W works. I’m more interested in understanding what it actually does and why.

See the notes at the bottom for a glossary and links to other resources (including the original paper that introduced Algorithm W).

The Language

Algorithm W works on a simple programming language. It is so minimal that it doesn’t even have things like if statements or data structures. But it is just complicated enough to exercise the different techniques involved in type inference. It’s possible to extend the language with more features and still infer types using those same techniques.

The language is a purely functional one, in a similar style to the ML family of languages. Everything (even function definitions) in the language is an expression. In fact, programs are just a single expression.

The kinds of expressions that are possible are:

  • A literal value, like 123 or false.
  • A variable.
  • A function application (AKA a function call).
  • A function abstraction (AKA a function declaration).
  • A let binding.

Pretty minimal, right?

Syntax

Parentheses can be added around any expression to show the proper grouping.

Literal values have the syntax you would expect, looking like 55 or true. The same goes for variables, like x or foobar.

Function application is a bit different. Say you had a function plus. Calling it with x and y would simply look like plus x y. If you want to call a function g with x, then call f with the result of that, you have to use parentheses to write that: f (g x). Writing (f (g x)) would also be valid. In a certain sense, plus x y actually groups as (plus x) y, but more on that later.

A function abstraction names a single argument (why a single argument? again, more on that later) followed by a period, then the expression that forms the function body. For example: x.plus 1 x would be a function that adds 1 to the given number. By itself, this doesn’t give the function a name. It’s common to see function definitions wrapped in parentheses. The body of a function is allowed to be another function, like x.y.plus x y (which groups as (x.(y.(plus x y)))).

Let bindings give a variable name, an expression for the value that variable should have, and an expression to evaluate (which can involve the variable that was just bound). Example: let square = (x.times x x) in square 10.

Here’s roughly the BNF for the syntax:

PROGRAM := EXPR

EXPR
  := LITERAL
  |  VARIABLE
  |  '(' EXPR ')'
  |  ABSTRACTION
  |  APPLICATION
  |  LET

LITERAL := INTEGER | BOOLEAN

INTEGER := DIGIT+
DIGIT := '0'..'9'

BOOLEAN := 'true' | 'false'

VARIABLE := LETTER+
LETTER := 'a'..'z' | 'A'..'Z'

ABSTRACTION := VARIABLE '.' EXPR

APPLICATION := EXPR EXPR

LET := 'let' VARIABLE '=' EXPR 'in' EXPR

Semantics

Variables may refer to either values or functions. The two ways a variable can be introduced are with a let binding or with the argument of a function. A variable declaration may shadow an existing one. If that is the case, then the variable name in the subexpression refers to the newer variable, but outside of that expression the old variable is still accessible. Note that in a let binding like let x = plus x 1 in x, the plus x 1 uses the older definition of x, whereas in x refers to the newer one. (In this example and future ones, I’m going to assume a handful of built-in functions such as plus for the sake of examples that make sense.)

Functions in this language close over the variables that are in scope when they are declared. This is how multiple-argument functions are achieved even though the language technically only contains single-argument functions. For example,
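
let hypotenuse = (x.y.sqrt (plus (times x x) (times y y)))
in hypotenuse 5 10

(This assumes a built-in sqrt in addition to plus and times.)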

The function hypotenuse is a function that takes a number (as the argument x) and returns another function. That second function also takes a number as an argument (y), but it returns a number. When this function is called like hypotenuse 5 10, this actually means “apply the function hypotenuse to 5, then apply the result of that to 10.” It groups like (hypotenuse 5) 10, and the part in parentheses evaluates first.

Making all functions take a single argument like this makes type inference simpler (because it doesn’t need separate logic to check the number of arguments passed to a function). It also means that you can partially apply functions, e.g.
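
let double = times 2
in double 55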

Since applying 2 to times results in another function, nothing prevents us from binding that function to a name and using it some time later.

One final thing to point out about the semantics: recursive calls are made impossible by the syntax (a function’s name isn’t bound inside the function definition, so there is no way for a function to refer to itself). However, it is possible to transform a non-recursive function into a recursive one using the fixed-point combinator, though this type system can’t actually handle the fixed-point combinator.

Type system

The set of types contains a few base types, like int and bool.

There’s also a function type, which combines an input type and an output type with an infix ->. For example: int -> int is a function that takes an int and returns an int. (int -> int) -> int is a function that takes a function from int to int and returns an int. int -> (int -> int) is effectively a function that takes two ints and returns another int, but to be technically accurate it is a function that takes an int and returns a function from int to int.

In addition to concrete types like those, there are also type variables. (These are usually written as a single letter, a, b, etc, or sometimes a letter and a number, like a0 or v2.) Think of these less like variables in the programming sense and more like variables in the algebra sense: a variable ends up having some value (which will be another type), but we just don’t know what it is yet. For example, a -> a is a function type that takes something (we don’t have anything specifying what) and returns something of the same type (whatever that ends up being).

So a type is one of:

  • A concrete type like int or bool.
  • A type variable like a or b.
  • A function type t1 -> t2 containing two other types.

That’s it. This language doesn’t include any classes or user-defined structures, so there is no way to represent their type.

Type variables lead to one detail that makes type inference more interesting: Generics. Let’s take a look at the identity function (a function that just returns its argument unchanged): (x. x). Its type would be a -> a. We have to write that using the type variable a because we can’t tell what type x has from just reading the definition of the function. That said, in any actual call to this function, x will be a value that has some type. Say we call it with an integer:
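
let id = (x. x) in id 123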

The input type becomes an integer here. Let’s say we want to use that integer result in some other computation:
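
let id = (x. x) in square (id 123)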

How do we know that (id 123) evaluates to something of the right type? Here’s where we make use of the type variable in a -> a. We know the input type is int, so we match that up with the function’s type and see that things work if we replace a with int. When we replace a type variable with a type, it replaces that type variable everywhere. In this case, that means replacing both copies of a in the type with int. The result is int -> int, meaning that we know that the function returns an int.

However, what I just said is slightly wrong: we don’t replace the type variable a directly. Let’s start with an example of where this goes wrong, then I’ll show you what actually happens.
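
let id = (x. x) in (id square) (id 123)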

Here, we pass square to id (which doesn’t change it) and pass 123 to id (which also doesn’t change it). In terms of the output of the program, this is equivalent to let id = (x. x) in square 123 (or even just square 123).

However, the type is interesting. Assuming square is int -> int, this means that the first call to id uses it with the type (int->int) -> (int->int) (a function taking a function of int->int and returning the same), yet the second call uses it with the type int -> int. If we replace a in a -> a, which of the two should we replace it with? Either way, it would give us the wrong type for the other use of id.

What we actually do is to “instantiate” the type of the id function everywhere it is used. This means replacing certain type variables in the type with new type variables (type variable names that haven’t been used yet). So instead of trying to replace a with two conflicting types, we first instantiate id’s type twice to get b -> b for the first call and c -> c for the second. Then, we replace b with (int->int) so that the type works in (id square) and replace c with int so that the type works in (id 123). The exact details of how this works are discussed below in “Schemes and Generalization.”

Another thing worth noting: Unlike languages like Java or C++, this type system doesn’t have any concept like subclassing. It turns out that subclassing causes a lot of trouble for type inference algorithms.

Alright, now that we know how the language works, we can start thinking about how to infer types for it.

Type Inference

Before describing how to figure out the type of expressions in that language, there are two groups of operations on types that I need to cover first.

Unification and Substitution

Unification is a small amount of code, yet it is the heart of the type inference algorithm. It’s where we find type errors and where we learn what the types of things are. It would be fair to say that the rest of the type inference algorithm exists to prepare data for calling the unification function and for using its result.

So what is it? It is an algorithm that figures out what to change in order to make two types the same. If it turns out to be impossible, it means there is a type error. The only thing it is allowed to do to the types in order to make them the same is to replace a type variable with something else (possibly another type variable). This means that if the two types are different, there had better be some type variables we can change!

Let’s look at a few examples. Unifying types that are already the same, like int with int, doesn’t do anything. Unifying types that are definitely different, like int and bool or bool and a -> bool, results in a type error (no amount of monkeying with type variables will ever make them be the same type).

The actual variable replacement happens when one side has a variable and the other side has something else. For example, unifying int with a results in replacing a with int. We call this a substitution; we are substituting int for a. If the input had been the other way around (a and int), the result would have been the same. If both parts are type variables, like a and b, then the substitution maps the left one to the right one (replacing a with b in this example).

What about function types? I’ll come back to those after saying a little more about substitutions.

Substitutions are a mapping from type variables to things to replace that variable with. For example, we might replace a with b -> b and c with int. From now on, I’ll write that substitution like: {a: (b -> b), c: int}. We can apply substitutions to types, which means looking in that type for any type variables that need to be replaced and replacing them. Applying that example substitution to a type a -> (b -> c) results in the type (b -> b) -> (b -> int).

The unification algorithm (assuming it succeeds) returns the substitution that can make two types the same. To put it another way, if the unification algorithm returns some substitution S for types t1 and t2, then apply(S, t1) == apply(S, t2).

OK, as promised, back to function types. Let’s say the two types are int -> a and b -> bool. The unification algorithm first unifies the left side (int and b). That gives the substitution {b: int}. Then, it unifies the right side (a and bool), which results in the substitution {a: bool}. It then combines those two results, and returns {a: bool, b: int}. I hope it makes sense at this point that applying that substitution to either of our two input types results in the same type, int -> bool.

With me so far? OK, then it’s time to look at a more advanced case. What if we unify a -> a with int -> c? (We’d see a unification like this if we applied the identity function to an integer, for example. The type variable c is there to capture the resulting type.) As before, we start by unifying the left hand side: a and int. That results in the substitution {a: int}. Here’s the subtle bit: We have to substitute the other a (on the right hand side of the function type) with int before continuing. That’s because when you replace a type variable with something else, you have to replace it everywhere [1]. So, after unifying the left hand side, the two types become int -> int and int -> c. Now we can unify the right hand side: int and c. That gives the substitution {c: int}. Finally, we can compose the two substitutions and get the result: {a: int, c: int}. Congratulations, you just figured out that applying the identity function to an integer results in an integer!

There are two special cases of unification that I should mention.

First: Normally, when you unify a type variable with something, it results in a substitution replacing that type variable with the other type (even if it’s another type variable). The one exception to this is when it’s the same type variable on both sides. In that case, the result is just an empty substitution {} (since nothing needs to change to make the types be the same).

Second: You aren’t allowed to replace a type variable with a type containing that variable. For example, it is not allowed to replace a with a -> b (though replacing it with c -> b would have been fine). Cases where this would happen (e.g. unifying the left-hand sides of a -> d and (a -> b) -> int) result in a type error. If it was somehow allowed, this would result in an infinitely large type. In various literature, you’ll often see that called simply an “infinite type.”
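
To make this concrete, here’s a minimal sketch of unification in Python (my own illustration, with an assumed representation for types; the apply_sub and compose_subs helpers it uses are described in the next section):

from dataclasses import dataclass

@dataclass(frozen=True)
class TCon:
    name: str        # a concrete type like int or bool

@dataclass(frozen=True)
class TVar:
    name: str        # a type variable like a or b

@dataclass(frozen=True)
class TFun:
    arg: object      # the input type
    ret: object      # the output type

def unify(t1, t2):
    """Return a substitution (a dict from type variable names to types)
    that makes t1 and t2 the same, or raise if that's impossible."""
    if isinstance(t1, TCon) and isinstance(t2, TCon):
        if t1.name == t2.name:
            return {}                  # already the same; nothing to change
        raise TypeError(f"cannot unify {t1.name} with {t2.name}")
    if isinstance(t1, TVar):
        return bind(t1, t2)
    if isinstance(t2, TVar):
        return bind(t2, t1)
    if isinstance(t1, TFun) and isinstance(t2, TFun):
        sub1 = unify(t1.arg, t2.arg)
        # apply what we just learned everywhere before unifying the right side
        sub2 = unify(apply_sub(sub1, t1.ret), apply_sub(sub1, t2.ret))
        return compose_subs(sub1, sub2)
    raise TypeError(f"cannot unify {t1} with {t2}")

def bind(var, t):
    if t == var:
        return {}                      # same type variable on both sides
    if occurs(var, t):
        raise TypeError("infinite type")   # the occurs check
    return {var.name: t}

def occurs(var, t):
    """Does the type variable var appear anywhere inside t?"""
    if isinstance(t, TVar):
        return t.name == var.name
    if isinstance(t, TFun):
        return occurs(var, t.arg) or occurs(var, t.ret)
    return False

For example, unify(TFun(TCon("int"), TVar("a")), TFun(TVar("b"), TCon("bool"))) returns {"b": TCon("int"), "a": TCon("bool")}, matching the walkthrough above.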

Infinite types

(On your first read through this blog post, you might want to skip this section and come back to it later)

A simple way to cause an infinite type is (f.f f). That’s not the clearest thing to read in this language, so I’ll re-express it in python:
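
def apply_to_self(f):
    # this is (f.f f); the name is just for readability
    return f(f)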

Examples like this are interesting because even though we can’t express a (finite) type for f, there are actually values that would run without crashing if we passed them as f (for example, the identity function would work fine here).

Let’s go on a rabbit trail and see why that type becomes infinite. Suppose we’ve figured out that the type of the function is:
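
type_of_f -> type_f_returns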

Since we know f must also be a function, we can break up type_of_f into argument_of_f and type_f_returns, giving us:
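
(argument_of_f -> type_f_returns) -> type_f_returns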

Let’s see if we can figure out what argument_of_f is. We are passing f as an argument to f, so the argument’s type must be the same type as f. OK, let’s try that:
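
((argument_of_f -> type_f_returns) -> type_f_returns) -> type_f_returns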

Wait, that didn’t get rid of argument_of_f. Luckily, we still know it must be the same type as f:
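
(((argument_of_f -> type_f_returns) -> type_f_returns) -> type_f_returns) -> type_f_returns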

Oh dear. We aren’t ever going to remove argument_of_f, are we? And that’s why the type is infinite.

Factoid: infinite types are the problem you run into when trying to type the fixed-point combinator.

Applying and Composing Substitutions

So far I’ve only informally described what it means to apply a substitution. Let me try to be a little bit more precise. I’ll split this up by what sort of type you might be applying a substitution to:

Applying a substitution to a concrete type (like int or bool) just returns that type unchanged.

Applying a substitution to a type variable does one of two things:

  1. If there is a substitute for that type variable, return that substitute. For example, if your type variable is c and there is a substitution {c: bool}, return bool.
  2. On the other hand, if the substitution doesn’t say what to do with your type variable, just return the type variable unchanged. For example, if your type variable was b and there is a substitution {c: bool} it would return b.

Applying a substitution to a function type is done by applying the substitution to the left and right side, then building a new function type out of that. (This applies recursively.)
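
In the Python sketch from the unification section, that description translates directly:

def apply_sub(sub, t):
    """Apply a substitution (a dict from type variable names to types) to a type."""
    if isinstance(t, TCon):
        return t                      # concrete types are returned unchanged
    if isinstance(t, TVar):
        return sub.get(t.name, t)     # replace it if the substitution has an entry
    if isinstance(t, TFun):
        # apply to both sides, then rebuild the function type
        return TFun(apply_sub(sub, t.arg), apply_sub(sub, t.ret))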

Now I can describe more precisely what it means to “compose” or “combine” substitutions. Let’s say I have two substitutions S1 and S2. The substitution composed from those two substitutions would do the same thing as applying those two substitutions, one after the other. That is to say, if S3 = compose_substitutions(S1, S2), that means that apply(S3, t) does the same thing as apply(S2, apply(S1, t)).

Example: let’s say that t is a -> b, S1 is {b: c -> int}, and S2 is {a: bool, c: float}. Applying S1 to t gives a -> (c -> int). Applying S2 to that results in bool -> (float -> int). Armed with that example, let’s take a stab at defining how to compose two substitutions:

(this is not correct) Compose two substitutions by combining the two mappings of things to substitute. If they both provide a substitute for the same type variable, the left substitution’s replacement wins.

That definition is close, but missing something. To see what, let’s use it to construct our S3. There aren’t any conflicting variables, so we can simply combine the two substitutions, giving us S3 = {b: c -> int, a: bool, c: float}. When we apply that substitution to t (which was a -> b), we get bool -> (c -> int). Oops, that’s not quite right. We missed replacing c with float. This is because it needs to have applied the substitution {b: c -> int} first, then applied {c: float} on top of that.

Can we bake this behavior into our substitution S3? How about S3 = {b: float -> int, a: bool, c: float}? That looks like it will work. We changed the replacement for b to take the replacement for c into account. In effect, we “already applied” the substitution for c to the substitution to b. Actually that’s exactly what we do:

Compose two substitutions S1 and S2 by applying S2 to all the substitute types in S1, then add in any substitutions in S2 unless S1 already has a substitute for that variable.
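
In the sketch, that rule is only a few lines:

def compose_subs(sub1, sub2):
    """Build a substitution equivalent to applying sub1 first, then sub2."""
    # apply sub2 to every replacement type in sub1 ...
    result = {var: apply_sub(sub2, t) for var, t in sub1.items()}
    # ... then add sub2's entries for variables sub1 doesn't already cover
    for var, t in sub2.items():
        result.setdefault(var, t)
    return result

Tracing the earlier example: applying S2 = {a: bool, c: float} to the replacement types in S1 = {b: c -> int} turns b’s replacement into float -> int, and a and c then come along unchanged, giving exactly the S3 from above.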

In most cases, applying S2 to the types in S1 won’t change anything, but you can construct examples where it does. One expression whose inference exercises this is (f.x.f x) (y.y) 5: unifying the outer applications produces substitutions that have to be applied inside the substitutions learned earlier.

After you read the algorithm, come back to this example and see if you can walk through the algorithm on paper to see how it works (expect it to take about an hour or so; it’s pretty tedious, but it gives you a better sense of the mechanics of the algorithm).

Schemes and Generalization

A scheme (also known as a type scheme) is a type with an extra piece of information attached. That extra piece of information is a list of type variables. The type variables in the list are a subset of the type variables that appear in the type (you may see the type variables that appear in the type referred to as the variables that are “free” in the type or as “free type variables”). Often, this list is empty, though sometimes for function types it will contain some type variables.

To see what that looks like, let’s once again turn to our trusty identity function. We get a lot of mileage out of that little guy. I said before that it has the type a -> a. That actually isn’t always totally accurate. When we store the identity function in a variable, like
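
let id = (x. x) in ...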

the type of the variable id is actually a type scheme: Scheme([a], a -> a). In fact, the type of any variable is always represented as a type scheme (regardless of whether it is a function).

OK, so we’ve wrapped a scheme around the type a -> a, and used it to add the list of type variables [a]. As promised, all the type variables in that list appear in the type. You may now be asking yourself, so what is this good for?

Back to the example. I’ll expand it to show some uses of the identity function. As in other examples, I’ll assume that there is a built-in function square that I can make use of.
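
let id = (x. x) in (id square) (id 123)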

(You may notice that this is an example that also came up in the Type System section.) Here, the id function effectively needs to take on two different types. It needs to be (int -> int) -> (int -> int) when applied to the square function, but it also needs to be int -> int when applied to 123. How do we manage this trick? With a technique called instantiation.

Both times we use the id function, we take the type scheme we have for id (which was Scheme([a], a -> a)), and replace any type variable in the type with a new type variable IF that type variable appears in the list in the scheme. This process results in a type (rather than a scheme). Here, a appears in the list, so we replace it with a new type variable. Note that we do this for both times the id function is used, and the new type variables are different each time. So the first use of id (where it is applied to square) might be instantiated to the type a0 -> a0, and the second use might be instantiated to a1 -> a1.

Now, we can replace a0 with (int -> int) without impacting the type of the other use of id. Likewise, we can replace a1 with int to make that instance have the right type, too. (How do we know to replace a0 with (int -> int)? That substitution is one that the unification function generates. We’ll see in a moment how it gets called.)

Now you know what a scheme is, and how it is instantiated to create a new type, but we haven’t yet covered how to create a type scheme in the first place. Type schemes are created using a type (which is just the type that is in the type scheme) and using information about the rest of the environment. The type is just copied into the type scheme, so what we really need to know is how to pick that list of type variables. The way to do that is simple: It’s the list of all type variables appearing in the type that don’t appear in the type environment.

Um, excuse me, type environment? What?

The type environment is basically just the things we already have a type for (and that are still in scope).

This is easier to explain starting from an example:
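
(x. let f = (y. x) in f 123)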

Look specifically at the type of f. It’s a function, so the type must be a function type. What is its argument type? It could be anything, so we use a type variable to represent that. To make this easy to read, I’ll call that type variable t_y. What about the return type? That’s whatever type x has, which I’ll call t_x. If we use those variable names, the type of f is then t_y -> t_x.

When we use f by applying it to 123, that changes the argument type from t_y to int, but it doesn’t do anything to the return type. That remains t_x (whatever that is). That means that when we instantiate the type of f to use it, we only replace t_y with a new type variable, and we leave t_x alone. To say the same thing in terms of a type scheme, we say that the type of f is Scheme([t_y], t_y -> t_x).

Back to the question at hand: how do we know that we only list t_y and not both t_y and t_x? Because t_x appears in the type environment. What’s the type environment? It’s the types of the variables that are currently defined. x is defined outside of f and its type is t_x, so t_x is excluded from the list of variables in the type scheme even though it is one of the variables in the type t_y -> t_x.

It’s worth noting at this point that only variables created by let bindings are generalized. Arguments of functions aren’t generalized (their type scheme always has an empty list of variables).

To summarize:

  • Schemes contain a type and a list of some of the type variables in that type.
  • When variables are declared, their type is generalized.
  • Generalization means creating a scheme out of a type.
  • The type variables in a scheme are those in the type that aren’t also in the type environment.
  • When a variable is used, it is instantiated.
  • Instantiating a type scheme produces a type.
  • That type is the one from the type scheme, with the type variables listed in the scheme replaced with brand new type variables.
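
Continuing the Python sketch, schemes, instantiation, and generalization might look like this (fresh_type_var, a counter-based helper for inventing unused type variables, is an assumption of mine):

from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Scheme:
    vars: tuple      # the type variable names this scheme generalizes
    type: object     # the type inside the scheme

_counter = count()

def fresh_type_var():
    """Create a type variable that isn't used anywhere else."""
    return TVar(f"v{next(_counter)}")

def free_type_vars(t):
    """All type variable names appearing in a type."""
    if isinstance(t, TVar):
        return {t.name}
    if isinstance(t, TFun):
        return free_type_vars(t.arg) | free_type_vars(t.ret)
    return set()

def instantiate(scheme):
    """Scheme -> type: replace each listed variable with a brand new one."""
    sub = {name: fresh_type_var() for name in scheme.vars}
    return apply_sub(sub, scheme.type)

def generalize(env, t):
    """Type -> scheme: generalize the variables that aren't in the environment."""
    env_vars = set()
    for scheme in env.values():          # env maps program variables to schemes
        env_vars |= free_type_vars(scheme.type) - set(scheme.vars)
    return Scheme(tuple(free_type_vars(t) - env_vars), t)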

Inferring Types of Expressions

Now we can finally describe the full algorithm!

The type inference function takes two arguments: the expression to find the type of and the current type environment. The type environment is a map from variable names (variables in the language, not type variables) to type schemes. It has the variables that are currently in scope. When we start finding the type of a program, the type environment starts out empty. The algorithm adds things to it as it goes along.

The function also returns two things: the type of the expression and a substitution. Think of the substitution as a way of returning what the algorithm learns about other parts of the program (besides just the expression).

This algorithm recursively walks the tree of expressions. When one expression contains other expressions (e.g. function application contains an expression for the function and an expression for the argument’s value), this algorithm recursively calls itself on both subexpressions. I’ll get to the exact details in a moment, but I wanted to point out something now: sometimes the algorithm passes a modified copy of the type environment to the recursive calls. This modified copy is in fact a copy; the original remains unchanged.

It probably comes as no surprise that the algorithm needs to do a different thing for each sort of expression in the language. That said, there is a general pattern they mostly follow: recurse down the tree of sub-expressions, adding information to the type environment on the way down. On the way back up, combine the returned substitutions to get the full picture of what types things have.

Let’s go through each sort of expression one by one:

Literals. e.g. 123.

These are super easy. We just return the type of whatever the value is (e.g. return bool for false or int for 123) and an empty substitution.
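
In the running Python sketch (assuming literal expressions carry their value), this case is tiny:

def infer_literal(env, value):
    # check bool first: in Python, True is also an instance of int
    t = TCon("bool") if isinstance(value, bool) else TCon("int")
    return {}, t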

Variables. e.g. foo.

These aren’t bad either. We look up the name of the variable in the type environment and get the scheme for that variable. Once we have the type scheme, it is instantiated to get our type. We return that type along with an empty substitution. If the type environment doesn’t have anything for the variable, this means the variable isn’t defined, which is an error.

It’s worth pointing out that instantiation happens when a variable is used, not when a function is applied.
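
Continuing the sketch:

def infer_variable(env, name):
    if name not in env:
        raise TypeError(f"undefined variable: {name}")
    # instantiation happens at every use of the variable
    return {}, instantiate(env[name])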

Function abstraction. e.g. x. times 2 x.

Remember that functions have just one argument in this language. A new type variable is created to represent that argument. Let’s say that the argument is x and the type variable is v0. Using that, a new type environment is created by adding x to the type environment that is passed in (remember, this doesn’t modify the existing type environment value). When x is added, its type is set to the scheme Scheme([], v0). Basically, that is the new type variable just wrapped in a type scheme.

Using that updated type environment, this makes a recursive call to find the type of the function body. That returns a substitution and a type.

Next, that substitution we just got is applied to that type variable we created earlier (v0). That way, if finding the type of the body of the function learns anything about what the type of the argument needs to be, that will be reflected by what the type variable gets replaced with. It might be the case that applying the substitution to it returns it unchanged.

Now we can create the type of the function. It is a function type (big surprise). The argument type is whatever the result of applying the substitution to the type variable was. The return type is just the type of the function body. And that’s it. The algorithm returns that function type along with the substitution we got when typing the function body.

Time for an example. Let’s say that the recursive call to find the type of the function body returned the type bool and the substitution {v0: int}. When we apply that substitution to our type variable, that gives int. Putting those together in a function type gives int -> bool. We return that along with the substitution.
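
Continuing the sketch (infer is the assumed top-level function that dispatches on the kind of expression):

def infer_abstraction(env, arg_name, body):
    v = fresh_type_var()                # stands for the argument's type
    new_env = dict(env)                 # a copy; the caller's environment is unchanged
    new_env[arg_name] = Scheme((), v)   # function arguments are never generalized
    sub, body_type = infer(new_env, body)
    # whatever we learned about the argument shows up when we apply the substitution
    return sub, TFun(apply_sub(sub, v), body_type)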

Function application. e.g. foo 55.

When figuring out the type of a function application, we can’t use the function’s return type as the type of the whole expression. Consider passing a number (type int) to the identity function (type a -> a). Since the identity function returns the input unchanged, the return value will clearly be an int, not type a. What we actually need to do is to figure out how to make the left-hand side of the function type become the same as the type of the argument that is being passed to it (which is to say, find a substitution that makes them become the same type). Once we do that, we’ll have enough information to figure out the return type.

Step 1: This also starts off by creating a new type variable. It is going to be used to “capture” the return type of the function. We actually don’t use it in either of the recursive calls; that comes later. The type variable is created at this step just so that the order in which the type variables get created makes sense (which makes debugging easier).

Step 2: Do a recursive call to find the type of the function, resulting in a substitution (let’s call it sub1) and a type (type1). If the function is referred to by a variable (like foo), this recursive call handles instantiating it for us.

Right now, we know the type of the left hand side (it’s type1).

Step 3: Create a new type environment (let’s call it env1) by applying that substitution (sub1) to all of the types in the environment. Why do this? Well, when we found the type of the left hand side, we might have also discovered something about the type of some variable that also gets used somewhere in the right hand side (imagine an expression like (times x) (id x), where x appears on both sides), so we should update the type environment with what we learned before continuing on. That’s important for getting the right type when we find the type of the right hand side.

Step 4: Speaking of finding the type of the right hand side, that’s what happens next. This recursive call uses that new type environment env1 when running. Let’s call the results of this recursive call sub2 and type2.

Step 5: Apply sub2 to type1 (let’s call the result type1_sub). Like before, this is because we may have learned something when typing the right hand side that influences the type of the left hand side.

Right now, we know the type of the left and right hand side, but not the type of applying those two things.

Step 6: Create a function type with type2 as the argument type and the type variable we created in step 1 as the return type. This is going to be how we use the type variable to capture that return type. Let’s call this type3.

Step 7: Call the unification algorithm on type1_sub (the type of the function) and type3. Let’s call the resulting substitution sub3.

This will unify the type variable in type3 with whatever the return type ends up being, giving us a convenient way to find out the return type (just see what the type variable gets replaced with).

This call to the unification function is the only time that things get added to the substitution. That means that if you see an expression that doesn’t contain any function applications, you can be sure that the substitution you’ll get from typing it will be empty.

Step 8: We are almost done. All that is left to do is prepare the two values to return. First up: we need to make the substitution to return. This is done by composing the substitutions sub1, sub2, and sub3 to get a single substitution. When the resulting substitution is applied to a type, it will do the same thing as applying sub1, then sub2, and finally sub3.

Step 9: Finally, we apply sub3 to the type variable from step 1. That’s how we actually figure out what the type resulting from the function application is.

That was a lot of steps, so I’ll try to summarize. Say you figure out that the type of the function is t1 -> t2 and the type of the argument is t3. Using a new type variable v4, unify t1 -> t2 with t3 -> v4. Then, figure out what v4 gets replaced with, and that gives the type of the expression.
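
Here are those nine steps in the running sketch (apply_env, which applies a substitution to every scheme in an environment, is an assumed helper):

def apply_env(sub, env):
    """Apply a substitution to the type inside every scheme in the environment."""
    return {name: Scheme(s.vars, apply_sub(sub, s.type))
            for name, s in env.items()}

def infer_application(env, fn_expr, arg_expr):
    v = fresh_type_var()                     # step 1: will capture the return type
    sub1, type1 = infer(env, fn_expr)        # step 2: type the function
    env1 = apply_env(sub1, env)              # step 3: fold what we learned into the env
    sub2, type2 = infer(env1, arg_expr)      # step 4: type the argument
    type1_sub = apply_sub(sub2, type1)       # step 5: update the function's type
    type3 = TFun(type2, v)                   # step 6
    sub3 = unify(type1_sub, type3)           # step 7
    sub = compose_subs(compose_subs(sub1, sub2), sub3)   # step 8
    return sub, apply_sub(sub3, v)           # step 9: read off the return type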

Let bindings. e.g. let x = 123 in x.

Last one! We’re almost there.

Step 1: Find the type of the first expression (which is 123 in the example above). Let’s call the results sub1 and type1.

Step 2: Create a new type environment by applying sub1 to all the types in the current environment, and use that to generalize type1 into a type scheme. We don’t use this newly created type environment for anything else.

Step 3: Create a new type environment with that scheme for the variable (which was x in the example above).

Step 4: Create (yet another!) new type environment by applying sub1 to all the types in the environment from step 3. This is because when we figured out the type of the first expression, we might have learned something about the type of some variable that we are going to use again in the second expression.

Step 5: Use that new type environment to find the type of the second expression. Let’s call the results sub2 and type2.

The result of the function is type2, and sub1 composed with sub2.

Again, I’ll summarize. Start by figuring out the type of the first expression (t1), then generalize it into a scheme, add that scheme to the type environment, and find the type of the second expression.
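
And the let case, in the same sketch:

def infer_let(env, name, value_expr, body_expr):
    sub1, type1 = infer(env, value_expr)                # step 1
    scheme = generalize(apply_env(sub1, env), type1)    # step 2
    env2 = dict(env)
    env2[name] = scheme                                 # step 3
    env2 = apply_env(sub1, env2)                        # step 4
    sub2, type2 = infer(env2, body_expr)                # step 5
    return compose_subs(sub1, sub2), type2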

There is something important to notice here. In step 2, we generalize a type when creating a type scheme. If you look back to the section on function abstractions, you’ll see that we also created type schemes there (for the function arguments), but their list of type variables was always empty. This means that you can get different types by instantiating things introduced in a let binding, but not things introduced as function arguments. For example, this works:
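
let id = (x. x) in (id square) (id 123)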

But this doesn’t work:
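
(id. (id square) (id 123)) (x. x)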

Function arguments are required to take on the same type everywhere they are used, whereas variables created with let bindings can be instantiated to different types. This is the case in most languages with type inference (though in Haskell you can enable a compiler feature called “Rank-N Polymorphism” which allows you to tell the compiler a type for the function where that would work.)

It’s also worth pointing out how this avoids self-referential/recursive bindings. (If we had recursive bindings, we could make recursive functions because functions would be allowed to refer to themselves.) The way this works is pretty simple: We find the type of the first expression before the variable name has been added to the type environment. That means that any references to the variable name in the first expression would actually refer to a previous binding of that variable name, not the variable that the let expression is introducing.

For example, the expression
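
let x = 5 in
let x = times x x in
x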

evaluates to 25.

Conclusion

So that’s how the algorithm works. It uses a different set of the type operations we introduced for each different sort of expression.

Handling literals didn’t involve any type operations.

Variables required a lookup in the type environment and the instantiation operation.

Function abstractions involved creating a new type variable, modifying the type environment, and applying a substitution.

Typing function applications also involved creating a new type variable and applying substitutions. Additionally, it used the unification algorithm and it composed substitutions.

Let bindings involved the generalization operation as well as modifying the type environment and both applying and composing substitutions.

Finding the type for the latter 3 expressions also involved recursively calling the algorithm on the sub-expressions.

Hopefully it made sense how the individual operations like generalization or unification work and what they are for. Perhaps, with enough re-reading, it will even make sense how they are used on each type of expression.

Notes

If anything above doesn’t make sense or is incorrect, please ping me on Twitter.

Most General Unifier

Unification in this algorithm is said to return the “most general unifier.” That basically means that it returns the most general type possible (or rather, a substitution that results in the most general type possible). Think of it as changing the fewest things possible to make the two types match.

For example, let’s say we are unifying the types a -> a and b -> c. One possible unification could be {a: int, b: int, c: int}. That would successfully unify the two types (resulting in int -> int for both), but it wouldn’t be the most general. One possible most general unifier is {b: a, c: a} (resulting in a -> a for both). Another is {a: c, b: c} (resulting in c -> c for both).

One interesting property of the most general unifier is that you can get from the type it gives (e.g. c -> c) to any other unification of the two types by simply applying a second substitution (e.g. {c: a} to get back to a -> a, or {c: int} to get int -> int).

Glossary

  • Abstraction: Creating a function.
  • Application (of substitutions): replacing some type variables in a type with other types.
  • Application: Calling a function.
  • BNF: Backus-Naur form
  • Binding: Creating a variable.
  • Generalize: To create a type scheme out of a type.
  • Identity function: A function that just returns its argument
  • Instantiate: To create a type out of a type scheme.
  • New type variable: Creating a type variable that isn’t used anywhere else.
  • Scheme: See “Type scheme”
  • Substitution: A mapping from type variables to the types they should be replaced with.
  • Type environment: The type schemes we currently have for all the variables in scope.
  • Type scheme: A type with a list of type variables attached.
  • Type variable: A variable in the type system that holds some type.
  • Unifier: A substitution that can make two types become the same.
  • Variable: A variable in the language that holds some value.

See also
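
  • Robin Milner, “A Theory of Type Polymorphism in Programming” (1978): the original paper that introduced Algorithm W.
  • Luis Damas and Robin Milner, “Principal type-schemes for functional programs” (1982): the follow-up paper proving the algorithm sound and complete.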

Footnotes


  1. That is, replace it everywhere in what the type inference algorithm is currently working on. It trusts the caller to apply the substitution it returns where needed.