Type inference for Haskell, part 2
This is part 2 of the series on type inference for Haskell.
In order to tell whether a type is correct for a given expression, you have to know what that expression does. Instead of looking at all of the possible expressions in Haskell, we'll focus on the behavior of a subset of the language. That subset is small, but it can describe a surprisingly large amount of what Haskell does. Later on in this series, the set of expressions we can consider will be expanded to cover basically everything that Haskell can do¹.
From here on out, I’ll try to note the relevant section of Typing Haskell in Haskell, like so:
(THIH Section 11.1, Literals)
The first kind of syntax form to consider is literal values. These include things like numbers, characters, and strings. Their representation is rather straightforward.
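Here's a sketch of that representation, following THIH's definition:

```haskell
-- One constructor per kind of literal (THIH Section 11.1).
data Literal
  = LitInt  Integer   -- numeric literals like 123
  | LitChar Char      -- character literals like 'a'
  | LitRat  Rational  -- fractional literals like 1.5
  | LitStr  String    -- string literals like "hello"
  deriving (Eq, Show)
```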
Notice that this doesn't include any recursive structure or any option for user-defined structures. You couldn't express `Just 123` as a literal (but you could express `123`), because that is an expression made out of smaller expressions. Literals are already values, so evaluating a literal results in the value of the literal itself (as you would expect).
(THIH Section 11.3, Expressions)
There are also variables. To represent the syntax for these, all we need to store is the variable name. When a variable is evaluated, the result is whatever value the variable has been bound to.
There are function applications (more commonly called "function calls," but researchers in the programming languages field like their own names for things). Because all functions actually take just a single argument, an application applies one expression (the function) to another (the argument). As you would expect, evaluating a function application results in whatever value the function returns. This is represented as a structure that contains the two inner expressions.
The two expressions on either side could be any sort of expression, including other function applications. We assume that any parentheses in the original code were already taken into account when the expression we are given was built. Parentheses might be necessary to make function expressions group the way we want (in particular, `a b c` groups differently from `a (b c)`), but they aren't included in this syntax definition.
Next up is a function "abstraction." You might know this by the name closure, function literal, function expression, lambda expression, or anonymous function. I guess naming things really is one of the hard problems in computer science. These correspond to the Haskell syntax `(\x -> foo x 123)`. The definition of an abstraction contains the name of the (yes, just one) argument (like `x`), plus the expression for the function body (like `foo x 123`). Not much happens when you evaluate this kind of expression. It just gives back a function that can be called later. The expression inside isn't evaluated until the function is actually called.
Another thing that can act like a function is a constructor. `Just` is a constructor that, when applied to another value, produces a `Maybe T`. Another constructor is `Nothing`; that one doesn't take any arguments. The representation for this needs to contain the type that the constructor acts as. In the case of `Nothing`, that would be `Maybe a`. In the case of `Just`, that would be `a -> Maybe a` (it has to be a function type in order for it to be valid to apply the `Just` constructor as a function). `True` is a constructor of type `Bool`.
Finally, there are let bindings. For now, we’ll only consider let bindings that declare just one variable. You can simulate most let bindings (those that don’t have mutually recursive definitions) with multiple nested bindings, writing
```haskell
let a = 1
in let b = 2
   in foo
```
instead of
```haskell
let a = 1
    b = 2
in foo
```
The general form is `let v1 = e1 in e2`. When these are evaluated, the first step is to evaluate the expression `e1`. Once that is done, the resulting value is bound to the variable `v1`. Then the expression `e2` is evaluated with that variable added to the scope. The result of the whole expression is the result of evaluating `e2`. These need to store the name of the variable introduced, the expression it is bound to, and the expression that is then evaluated.
Here’s how to represent these syntax forms as a data structure:
```haskell
type Id = String

-- An assumption pairs a name with its type.
data Assump
  = Assump Id Type
  deriving (Eq, Show)

data Expr
  = Var Id            -- variable reference
  | Lit Literal       -- literal value
  | Const Assump      -- constructor, carrying the type it acts as
  | Ap Expr Expr      -- function application
  | Abs Id Expr       -- function abstraction (lambda)
  | Let Binding Expr  -- let binding

-- A binding pairs a variable name with the expression bound to it.
data Binding = Binding Id Expr
```
(The `Binding` and `Assump` structures will become more useful later on; the `Type` structure will be defined in the next part in this series.)
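To make the encoding concrete, here's a sketch of how one small expression maps onto this structure. The function `plus` is just an assumed variable in scope, and `LitInt` comes from the `Literal` sketch earlier:

```haskell
-- Encodes:  let inc = \x -> plus x 1 in inc 2
sample :: Expr
sample =
  Let (Binding "inc"
        (Abs "x" (Ap (Ap (Var "plus") (Var "x")) (Lit (LitInt 1)))))
      (Ap (Var "inc") (Lit (LitInt 2)))
```

Note how the curried application `plus x 1` becomes two nested `Ap` nodes, matching the one-argument-at-a-time view of functions described above.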
Compared to hugely complicated languages like C++, it's interesting that you can represent syntax for most of the functionality of Haskell in so little code².
There are a few limitations to the language as we’ve defined it so far.
For one thing, let expressions only allow defining a single variable at once. They also don’t allow that variable to be in scope in the expression that gets bound to the variable, meaning that recursive definitions (including recursive functions!) are not presently possible. Those two facts together mean that you also cannot have mutually recursive functions. We’ll extend the language to address both of these issues, but it’s easier to understand the core mechanics of type inference with simpler let expressions.
There is also no support for pattern matching right now. This means definitions like the following are not yet supported:
```haskell
isJust (Just _) = True
isJust _        = False
```
It also means that there is no `case` expression. This too will be addressed after covering the basics of type inference.
There is another missing feature known as “let polymorphism” that I’ll discuss in the next post. This is the first extension that will get added to the type system once basic type inference is working.
At this point, you might think I've been misleading in saying that those six forms - literals, variables, abstractions, applications, constructors, and let bindings - cover all the other features of Haskell. You've definitely seen syntax that doesn't look like any of these.
What about, for example, infix operators like `+`? How does type inference work for them? It turns out that those act like functions from the perspective of type inference. It doesn't matter that evaluating those expressions might not involve a function call and a new stack frame; the only thing that matters is that they take inputs and produce outputs. For example, the type of `&&` is `Bool -> Bool -> Bool`.
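As a sketch, an infix expression can be encoded as a curried application using the `Expr` type above (the variable names here are just illustrative):

```haskell
-- a && b  is, to the type checker, the curried application  (&&) a b
andExpr :: Expr
andExpr = Ap (Ap (Var "&&") (Var "a")) (Var "b")
```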
Ok, what about `if .. then .. else ..` expressions? Those, too, behave like a function from the perspective of a type system. The type system doesn't care that only one of the "then" and "else" parts will be evaluated; they have to be the right type regardless. You can imagine it as an "if function" like `if e1 e2 e3`. It takes in a `Bool` (`e1`) and two things of the same type (`e2` and `e3`), and returns something of that type. It would have the type `Bool -> a -> a -> a`.
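Here's a minimal sketch of what that function could look like in ordinary Haskell (the name `if'` is made up, since `if` is a keyword):

```haskell
-- Both branches must share one type, even though only one is evaluated.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e
```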
The same goes for structure expressions like `Just 123`. The `Just` constructor acts like a function (again, from the perspective of the type system). Its type is `a -> Maybe a`.
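As a sketch, such an expression maps onto the `Expr` type like this. Since the `Type` structure isn't defined until the next part, the sketch takes the constructor's type as a parameter rather than spelling it out:

```haskell
-- Represents  Just 123 , given some justType standing for  a -> Maybe a
justExpr :: Type -> Expr
justExpr justType = Ap (Const (Assump "Just" justType)) (Lit (LitInt 123))
```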
List literals (such as `[1, 2, 3]`) are desugared³ into calls to the function `:` (which prepends an element to a list) and to the constructor `[]` (which creates an empty list). The list above becomes `1 : 2 : 3 : []`. Range literals (like `[1..2]`) can be converted into a call to a function that produces that range (in standard Haskell, `[1..2]` becomes `enumFromTo 1 2`).
Statements in do-notation are handled as function calls (there seems to be a theme here). They are desugared into plain old function calls by introducing lambda expressions and adding the `>>=` operator between them.
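For example, here is the standard desugaring, shown with `Maybe` for concreteness:

```haskell
-- This do-block...
example :: Maybe Int
example = do
  x <- Just 1
  y <- Just 2
  return (x + y)

-- ...desugars into lambdas chained with (>>=):
exampleDesugared :: Maybe Int
exampleDesugared =
  Just 1 >>= \x ->
  Just 2 >>= \y ->
  return (x + y)
```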
Lastly, list comprehensions desugar to do-notation (in the list monad) which itself desugars to function calls.
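A quick sketch of that two-step desugaring:

```haskell
doubled :: [Int]
doubled = [x * 2 | x <- [1, 2, 3]]

-- Step 1: the comprehension becomes do-notation in the list monad...
doubledDo :: [Int]
doubledDo = do
  x <- [1, 2, 3]
  return (x * 2)

-- Step 2: ...which becomes plain function calls, as before.
doubledCalls :: [Int]
doubledCalls = [1, 2, 3] >>= \x -> return (x * 2)
```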
One parting note: type inference algorithms tend to expect pretty heavy preprocessing of the syntax tree to have been done before they run. For example, the algorithm expects that the syntax for `Nothing` has been converted into a structure that contains the type that the `Nothing` constructor is from. As we go on, you'll see more places where it is expected that some tedious preprocessing has already been applied.
In the next post, I’ll go over some examples of what types are inferred for some expressions to give you a taste of what type inference needs to be able to accomplish.
1. Pattern matching is the biggest feature presently left out.
2. Ignoring details like data type definitions and import statements.
3. Desugaring means removing syntactic sugar.