Type inference for Haskell, part 22
This is part 22 of the series on type inference for Haskell.
(THIH Section 11.6.2, Implicitly Typed Bindings)
At the very beginning of this series of posts, I said that you could think of a function defined like:
foo x y z = ...
as being equivalent to:
foo = (\x -> (\y -> (\z -> ...)))
The point was that foo is actually a single-argument function which only takes a value for x. It just happens to return another function which takes a value for y, and so on.
But, I also said that they are not completely equivalent.
For a concrete example of where there is a difference, consider these two implementations of a function for adding two numbers:
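The two definitions aren't shown here, so here is a reconstruction based on the types discussed below (the argument names are assumptions; only the shape of each definition matters):

```haskell
plus1 x y = x + y          -- argument patterns to the left of the =

plus2 = (\x y -> x + y)    -- a lambda bound to a plain variable
```

With the monomorphism restriction in effect (as it is by default in compiled modules), only plus1 keeps the general Num type.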
It looks like plus1 and plus2 should both be able to handle any kind of number, right?
plus1 has the type you’d expect:
plus1 :: Num a => a -> a -> a
plus2 has a less general type:
plus2 :: Integer -> Integer -> Integer
Why on earth is that the case?
This is due to an odd feature of Haskell called the “Monomorphism Restriction.”
What is it?
In short, it’s where Haskell sometimes uses a more restricted type for bindings that look like they might not be functions. It considers a binding to look like a function when there are argument patterns to the left of the
= in the declaration.
In what way does it restrict the types? It makes them monomorphic (hence the name, monomorphism restriction). What does monomorphic mean? It means that there are no type variables in the type that can be used to change the type. It’s the opposite of polymorphic - it means that the type can only take on one “shape.” The type
a -> a is polymorphic, but the type
Int -> Int is monomorphic. The monomorphism restriction only makes certain parts of the type monomorphic. There can still be type variables left in the type after the restriction has been applied.
How does it make a type monomorphic? It applies a defaulting substitution. In the previous post, I said that this function would be used later:
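In THIH, defaultSubst is a thin wrapper over the same withDefaults helper that defaultedPreds is built on. For reference, it reads like this (the excerpt relies on definitions from the previous post, so it isn't standalone):

```haskell
defaultSubst :: Monad m => ClassEnv -> [Tyvar] -> [Pred] -> m Subst
defaultSubst  = withDefaults (\vps ts -> zip (map fst vps) ts)
```

Instead of collecting the defaulted predicates like defaultedPreds does, it pairs each ambiguous type variable with its chosen default type, which is exactly a substitution.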
This chooses default types in the same way as
defaultedPreds did. The only difference is that now instead of returning predicates that can be removed, it returns a substitution to apply to the types. That substitution replaces polymorphic type variables with monomorphic types.
In the plus2 example above, the type might originally have been (Num a) => a -> a -> a. When the monomorphism restriction is applied, the substitution changes it to Integer -> Integer -> Integer.
When does it apply?
First of all, it doesn’t apply to explicitly typed bindings. For example, it does not apply to this function (even though it is otherwise the same as plus2):
plus3 :: Num a => a -> a -> a
plus3 = (\x y -> x + y)
It also doesn’t apply when the type doesn’t involve any type classes. For example, it does not apply to the following function:
identity = (\x -> x)
The type for that is still inferred to be
a -> a.
The most interesting of the rules is that it only applies to bindings with no patterns. That is to say, it only applies when a binding looks like a variable, not a function. This is the reason why it applied to
plus2 but not
plus1 in the example above.
plus1 has argument patterns (which become PVar patterns, if you recall the post on pattern matching).
Lastly, it only picks default types when there are predicates for certain classes. It’s only able to pick a default type for a type variable if the predicates for that type variable are all standard type classes and at least one of them is a number class. That worked for
plus2 since the only class was
Num. If you create an ambiguous declaration with some other type class or one without a number class, inference will fail:
someFn = (\x -> show x)
Type inference fails for definitions like these because it attempts to find a default type that matches, but the predicates don’t meet the requirements for defaulting.
Why does it exist?
This may seem like a very strange feature to have. Why make the types narrower for seemingly no good reason? It turns out that there actually is a good reason for this.
Let’s say you bind
x to the result of some expensive computation (maybe it computes the Ackermann function for a large value):
x = someReallyExpensiveNumericComputation 12345
And let’s say that the function this calls has the type
(Num a) => a -> a. That function could be called with an integer and return an integer, or called with a float and return a float. It could be used with any type that is an instance of
Num. When the value of
x is computed, what type will it be?
Without the monomorphism restriction, code like this puts us in an awkward position. It looks like a variable, not a function, so we wouldn’t expect it to be re-computed every time it is used. Yet, it can be used by code wanting it to be a float and by code wanting it to be an int. We cannot have both at once: either you can use it as different types and accept recomputing it, or you use it as a single type and only compute it once. Haskell’s designers chose to go with the latter option.
I think this was a good design decision since it follows the principle of least astonishment. If you assumed that it would only be computed once but it was actually recomputed at every usage, that could lead to performance bugs that are really difficult to track down. On the other hand, if you assume that it is usable as different types, the compiler will catch the error and tell you what went wrong.
The function to ask if a binding is restricted is fairly simple (no pun intended):
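A self-contained sketch of that function, following THIH's definition (Pat and Expr are reduced to minimal stand-ins here; in the real code they are the full types from the earlier posts):

```haskell
type Id = String

data Pat  = PVar Id        -- stand-in: only variable patterns
data Expr = EUnit          -- stand-in for the real expression type

type Alt  = ([Pat], Expr)  -- argument patterns plus a body
type Impl = (Id, [Alt])    -- an implicitly typed binding

-- A group is restricted if any binding in it is "simple":
-- it has at least one alternative with no argument patterns.
restricted :: [Impl] -> Bool
restricted bs = any simple bs
  where simple (_, alts) = any (null . fst) alts
```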
Remember that an
Impl is an implicitly typed binding. (An
Impl is a
(Id, [Alt]), and an
Alt is a
([Pat], Expr).) This function is given an entire binding group (a set of mutually recursive functions). It asks if any of the bindings in that group are “simple.” A binding is simple if it has no patterns/arguments. For example,
f is simple but g and h are not:
f = (\x -> 123)
g x = 123
h Nothing = 123
The restricted function is called from one place: the end of the tiImpls function. The tiImpls function returns a tuple containing a list of predicates and a list of assumptions. In the code below,
bs is the list of implicitly typed bindings in a single binding group.
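THIH's version of that code looks like this (the tail end of tiImpls; it depends on the rest of the inference machinery, so it isn't standalone):

```haskell
(ds, rs) <- split ce fs (foldr1 intersect vss) ps'
if restricted bs then
    let gs'  = gs \\ tv rs
        scs' = map (quantify gs' . ([] :=>)) ts'
    in return (ds ++ rs, zipWith (:>:) is scs')
  else
    let scs' = map (quantify gs . (rs :=>)) ts'
    in return (ds, zipWith (:>:) is scs')
```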
When the group is restricted, two things change.
First, the retained predicates rs are returned in addition to the deferred predicates ds.
Second, the schemes in the assumptions change. They no longer have any predicates of their own ([] :=> instead of rs :=>), and the set of type variables they are quantified over changes.
gs is the set of type variables that appear only in the types of the bindings in this group, and not in the types of other bindings. Instead of quantifying over
gs, it quantifies over the variables in
gs that don’t also appear in the retained predicates
rs. Quantifying over fewer type variables results in a less general type.
Specifically, the type is no longer generalized over a type variable that is mentioned in one of the predicates that defaulting will remove. In the
plus2 example above, this would be the type variable
a in the type
(Num a) => a -> a -> a. It isn’t generalizing over a because a is going to be replaced with Integer later on.
So far we’ve moved the predicates around and made the scheme less general, but haven’t actually replaced type variables with monomorphic types. That happens as a last step at the end of type inference for the whole program.
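In THIH that last step lives in tiProgram, which runs inference over every binding group and then applies the defaulting substitution to the final results (again an excerpt that depends on the surrounding code):

```haskell
tiProgram :: ClassEnv -> [Assump] -> Program -> [Assump]
tiProgram ce as bgs = runTI $
  do (ps, as') <- tiSeq tiBindGroup ce as bgs
     s  <- getSubst
     rs <- reduce ce (apply s ps)
     s' <- defaultSubst ce [] rs
     return (apply (s' @@ s) as')
```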
The call to
defaultSubst finds the remaining ambiguities and resolves them through defaulting. If it can’t resolve them with defaulting, type inference fails. The example above that defined
someFn demonstrated this.
At this point I’d encourage you to open a new Haskell file and play around with the monomorphism restriction.
- The Haskell Wiki page about it.
- A StackOverflow post with a good explanation.
- The definition in the Haskell 2010 report.
This wraps up the discussion of how type class predicates are handled. All that remains is to show how this connects to the rest of the type inference code.