Using type-classes to model the expressivity of build systems

Posted on Jan 11, 2020

I first came across the paper Build Systems à la Carte by Mokhov, Mitchell and Peyton Jones over a year ago. I’ve recently been re-reading it, this time working through all the details and trying to really understand it. I love that it is an extremely readable paper analyzing a practical aspect of software engineering. The twist: it uses Haskell, leaning on its type system to enforce strong guarantees about certain aspects of the build. I don’t know Haskell, so I’ve been using this as an excuse to learn some of the concepts and syntax.

I’m currently wrapping my head around Section 3. The amazing ideas presented here (and clarified in Sections 3.4 and 3.7) are:

  1. By placing an Applicative bound on tasks, we can only create tasks with statically known dependencies. Their dependencies do not change based on inspecting the value of other dependencies. In addition, by passing a specific instance of Applicative to the tasks, that same build system can actually determine the dependencies without having to run any tasks. Static analysis for “free”!

  2. If we replace that bound by Monad, we instead get a build system that can have dynamic dependencies, but cannot be analyzed without running it.

  3. And finally, crazy as it sounds, the compiler can actually enforce that we never pass a Monadic set of tasks, that can have dynamic dependencies, to a scheduler that is incapable of handling them correctly.

These are brilliant insights. However, I spent a lot of time figuring out why they hold and how it is implemented. This post is an effort to explain that.

This post assumes you already have some familiarity with a build system like Make, Bazel, or Microsoft Excel(!). There is a notion of tasks and dependencies between tasks.

Figuring out how the authors’ claims map to the Haskell code is a matter of understanding what capabilities the Functor subclasses in use provide.

Now, this is not a monad tutorial. Instead I recommend reading these resources:

  1. Maybe Haskell by Pat Brisbin walks through the Functor, Applicative, Monad hierarchy and what it means for expressiveness.
  2. Stephen Diehl helps untangle the things that lead to a profusion of monad tutorials.
  3. Applicative Programming with Effects by McBride and Paterson introduces the Applicative type-class as something that was discovered after Functor and Monad were already baked into the language. Very readable, at least in the beginning.

To model build systems, we have a Task: essentially a function that does “something” and produces a value. To keep things simple, I’m going to use concrete, primitive types: every task is identified by a String key and produces an Integer as the value, or output, of running the task. A simple task would just return the number 4. Of course, a build system where tasks cannot have dependencies isn’t very useful, so we introduce the notion of Tasks, which is a mapping from a String key to a Task. In a real build system, a key would be foo.exe, and its task would consume dependencies (other keys) like lib.o and main.c and produce as output the executable file for foo.exe. Our String -> Integer mapping is good enough to model a basic version of Excel.

To allow tasks to retrieve the value of dependencies (i.e. the artifact produced by the dependent task), the build system passes a function, fetch, to every task.

Here is the code from Section 3 we will be looking at. I’ve taken the liberty of replacing some types, so this may not actually be valid Haskell. In addition, the i (information) from Store is removed.

-- This type declaration also generates an accessor function:
-- run :: Task c String Integer -> (String -> f Integer) -> f Integer
newtype Task c String Integer =
  Task { run :: forall f. c f => (String -> f Integer) -> f Integer }

-- Tasks is a mapping from keys to potential Tasks
type Tasks c String Integer = String -> Maybe (Task c String Integer)

-- A build system takes a set of tasks, a key we want to build, and a
-- persistent store, and returns a store in which the key has been built. The new
-- store has the value in it.
type Build c String Integer =
  Tasks c String Integer -> String -> Store String Integer -> Store String Integer

The authors present an Applicative set of Tasks, sprsh1:

sprsh1 :: Tasks Applicative String Integer
sprsh1 "B1" = Just $ Task $ \fetch -> ((+) <$> fetch "A1" <*> fetch "A2")
sprsh1 "B2" = Just $ Task $ \fetch -> ((*2) <$> fetch "B1")
sprsh1 _ = Nothing

and a Monadic set of Tasks, sprsh2:

sprsh2 :: Tasks Monad String Integer
sprsh2 "B1" = Just $ Task $ \fetch -> do
    c1 <- fetch "C1"
    if c1 == 1 then fetch "B2" else fetch "A2"
sprsh2 "B2" = Just $ Task $ \fetch -> do
    c1 <- fetch "C1"
    if c1 == 1 then fetch "A1" else fetch "B1"
sprsh2 _ = Nothing

and go on to say in Section 3.4 that this affects what kind of build system these tasks can run as part of. The Applicative bound implies and enforces that tasks must statically declare all their dependencies. This is seen in sprsh1 "B1" always fetching A1 and A2.

Similarly, the Monad bound allows sprsh2 "B1" to decide whether to fetch B2 or A2 based on the value of C1.

In Section 4.1, they propose that the choice of bound affects which scheduler we can use, and how optimally that scheduler can decide which tasks need to be run.

Why does Applicative imply static dependencies?

The first observation is that fetch hands the task “boxed” dependencies: we can only operate on their values by passing functions into the resulting Functor, never by taking the values out. This means we are limited by the capabilities the Functor sub-class (specified by the constraint c) provides.

A naive Functor (c is Functor), as they point out, can apply a function to the value (via an operation called fmap or <$>) and then it is done. There is no way to pass another Functor to it, so we can never combine dependencies in any way.
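For reference, the Functor class gives us essentially just fmap:

class Functor f where
    fmap :: (a -> b) -> f a -> f b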

An Applicative functor adds <*> (commonly pronounced “ap”). This means we can take a wrapped value, take another wrapped value, and if we want to use both values to compute a new value, we can! This is what sprsh1 does. We can keep chaining values, which is exactly what “take all these .c files, and produce a single object file” is! The Applicative implementation takes care of combining and propagating the contexts in a reasonable way.

compile <$> (Just "foo.c") <*> (Just "bar.c") <*> (Just "baz.c")

is approximately the same as

Just (compile("foo.c", "bar.c", "baz.c"))

while sparing us all the error handling.

Notice what Applicative doesn’t let us do. It does not let the function injected into the context influence the context. Or in this case, it is easier to think of the context as the computation. Specifically, an Applicative only supports these operations:

class (Functor f) => Applicative f where  
    pure :: a -> f a  
    (<*>) :: f (a -> b) -> f a -> f b

That is, an Applicative does not know what to do with a function that returns a value with a context. How does it smush [1] the returned context with its own context? You end up with a doubly wrapped type, f (f Integer), which is going to fail the type checker somewhere. In our case it fails at the Task boundary, where the build system expects an f Integer, not a wrapped wrapped value.
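Concretely, the closest we can get to sprsh2-style branching with only a Functor or Applicative is something like this hypothetical helper (choose is my name, not the paper’s):

-- This typechecks on its own, but its result type is f (f Integer),
-- not f Integer, so it can never be wrapped up as a Task.
choose :: Functor f => (String -> f Integer) -> f (f Integer)
choose fetch =
  (\c1 -> if c1 == 1 then fetch "B2" else fetch "A2") <$> fetch "C1"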

What does this mean? We know that tasks can no longer look at values and decide which dependency to fetch next. We know that all their dependencies have to be specified up front!

How can we use this to determine dependencies?

Realize that due to the powers of traits/type-classes, we haven’t said anything about what that Applicative needs to be. That is, we haven’t said anything about the context. This means we don’t have to provide a store from which real values are fetched and real-world effects are enacted. We could pass a fetch function that returns wrapped dummy values and the tasks would be none the wiser.

dependencies :: Task Applicative k v -> [k]
dependencies task = getConst $ run task (\k -> Const [k])

The authors leverage this to define the function dependencies (Section 3.7), which does exactly that. Const carries a list of keys as its context and holds no value at all. Combining two Consts with <*> concatenates their lists, so we can collect every key a task asked fetch for, which is exactly its list of dependencies! How cool is that?
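To convince myself, here is a self-contained sketch using the paper’s polymorphic Task and Tasks (so it actually compiles, unlike my simplified types above):

{-# LANGUAGE ConstraintKinds, RankNTypes #-}

import Data.Functor.Const (Const (..))

-- The paper's polymorphic definitions.
newtype Task c k v = Task { run :: forall f. c f => (k -> f v) -> f v }
type Tasks c k v = k -> Maybe (Task c k v)

sprsh1 :: Tasks Applicative String Integer
sprsh1 "B1" = Just $ Task $ \fetch -> (+) <$> fetch "A1" <*> fetch "A2"
sprsh1 "B2" = Just $ Task $ \fetch -> (* 2) <$> fetch "B1"
sprsh1 _    = Nothing

-- Run the task against a fetch that records keys instead of fetching values.
dependencies :: Task Applicative k v -> [k]
dependencies task = getConst $ run task (\k -> Const [k])

main :: IO ()
main = do
  print (dependencies <$> sprsh1 "B1")  -- Just ["A1","A2"]
  print (dependencies <$> sprsh1 "B2")  -- Just ["B1"]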

Why does Monad allow dynamic dependencies?

If we constrain the return values to Monads, we get one extra operation – and_then (also known as bind or >>= and hidden behind do notation). I’m going to use and_then because using a named, prefix function makes everything more readable for non-Haskell programmers.

and_then :: m a -> (a -> m b) -> m b

Or in sacrilegious form:

impl<A> M<A> {
  fn and_then<B>(self, f: impl FnOnce(A) -> M<B>) -> M<B>
}

Concretely for Rustaceans:

impl<A> Option<A> {
  fn and_then<B, F>(self, f: F) -> Option<B>
    where F: FnOnce(A) -> Option<B>
}

Yep, Option::and_then is bind in disguise. Option knows how to smush itself.

None.and_then(anything)         -> None
Some(x).and_then(|x| None)      -> None
Some(x).and_then(|x| Some(y))   -> Some(y)

Types implementing and_then have some sensible operation for combining two contexts into another context. For example, Maybe will collapse, State will compose/stack, and lists will flatten themselves: in Haskell, [] will concat; for Rustaceans, think of flattening a Vec<Vec<T>>.
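In Haskell, the same flattening looks like this (GHCi expressions, results in comments):

-- Lists: bind is concatMap, so nested lists flatten away
[1, 2, 3] >>= \x -> [x, x * 10]   -- [1,10,2,20,3,30]

-- Maybe: a Nothing anywhere in the chain collapses the whole thing
Just 8 >>= \x -> if even x then Just (x `div` 2) else Nothing   -- Just 4
Just 3 >>= \x -> if even x then Just (x `div` 2) else Nothing   -- Nothing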

This ability to smush allows the intermediate functions acting on values to yield computations which can depend on the value. In our build system, this means a task can read the value produced by a dependency and use it to determine whether to then build another dependency, or do nothing.

This lets us have build systems where tasks can have dynamic dependencies. Most build systems out there don’t really support this, but Excel does, as does Shake. Other sections of the paper discuss how to handle correctness and minimality in the face of dynamic dependencies. For now, we understand that this can be expressed by the Monad constraint.

What Monad giveth, it taketh away. If a task can inspect values, we can no longer have a generic dependencies function that gives out dummy values. This means the only way to determine dependencies is to actually run the task in a valid environment. In addition, subsequent runs of the task may have different dependencies. This decreases the efficiency of rebuilds.
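The best we can do is record dependencies while a monadic task actually runs. The paper pairs dependencies with a track function that does this using a writer transformer; roughly, and reusing the polymorphic Task from the sketch above:

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Writer (WriterT (..), tell)

-- Run a monadic task, recording every key/value pair it fetched along the way.
track :: Monad m => Task Monad k v -> (k -> m v) -> m (v, [(k, v)])
track task fetch = runWriterT $ run task trackingFetch
  where
    trackingFetch k = do
      v <- lift (fetch k)  -- really fetch the dependency
      tell [(k, v)]        -- and remember that we did
      return v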

How do we get the compiler to enforce this?

My understanding is that there are two critical properties that allow the compiler to enforce these constraints, and, more than monads, this highlights the benefits of a strong type system and encapsulation.

  1. Functors [2] wrap a value in a computational context and DO NOT allow ever getting the value out again. [3]

  2. The type-class constrains what operations you can do on the Functor, and this can be trivially checked.

Think of it as placing a trait bound in Rust. The Task definition:

newtype Task c String Integer =
  Task { run :: forall f. c f => (String -> f Integer) -> f Integer }

says “the function run takes a fetch function as its argument and returns an f Integer”. In Haskell, f Integer is a type-level application, so f can stand for any type constructor; concretely, it is some kind of container or context wrapping an Integer. The forall syntax means f can be any such type, as long as it satisfies the bound c, and c itself is a type parameter of Task, filled in as Applicative or Monad when we write Tasks Applicative or Tasks Monad. As far as I know, this late binding of the bound c isn’t really expressible in most other languages.
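As a small illustration of how the enforcement only cuts one way, consider this conversion, again against the polymorphic Task from the sketch above (relax and restrict are my names, not the paper’s):

-- A task with statically known dependencies can be handed to a monadic
-- build system, because every Monad is also an Applicative...
relax :: Task Applicative k v -> Task Monad k v
relax t = Task (\fetch -> run t fetch)

-- ...but the other direction does not typecheck: an Applicative fetch
-- does not provide the Monad that run t demands.
-- restrict :: Task Monad k v -> Task Applicative k v
-- restrict t = Task (\fetch -> run t fetch)   -- rejected by GHC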

While spending several hours trying to figure all these ideas out, I realized there are two ways of looking at them.

The top-down way, where we have these abstract concepts and can use them to enforce behavior, if we guarantee that instances of these concepts satisfy certain properties.

The bottom-up way, where we find that some patterns keep popping up repeatedly (see McBride and Paterson). Wouldn’t it be nice if we could generalize them?

Here the Applicative bound is imposed so that dependency calculations can be done without executing the task, not the other way round. We don’t pick Applicative out of thin air, we pick it so we can do cool things without changing the task. Similarly, a Monad bound could be imposed, but the static analysis would not be possible then. That is the top-down way. But based on the summary post, I’m guessing the authors arrived at this idea by implementing all their build systems, and noticing the patterns (bottom-up).

So, when you come across someone on the Internet telling you to “just use a Monad to solve this”, they are probably approaching it from the other direction based on past experience.

What prevents us from doing “monadic things” in the Applicative constraint? Consider a tweaked sprsh2 with an Applicative constraint.

sprsh2 :: Tasks Applicative String Integer
sprsh2 "B1" = Just $ Task $ \fetch -> do
    c1 <- fetch "C1"
    if c1 == 1 then fetch "B2" else fetch "A2"
sprsh2 "B2" = Just $ Task $ \fetch -> do
    c1 <- fetch "C1"
    if c1 == 1 then fetch "A1" else fetch "B1"
sprsh2 _ = Nothing

The do syntax obfuscates things, so let’s rewrite it in terms of and_then.

sprsh2 :: Tasks Applicative String Integer
sprsh2 "B1" = Just $ Task $ \fetch -> do
    fetch "C1" and_then (\c1 ->
        if c1 == 1 then fetch "B2" else fetch "A2")
...

Now it becomes very clear. The type signature only permits fetch to return an Applicative, which has no notion of and_then!

Is Haskell a requirement for the ideas in this paper?

I thought about this as much as I could while having a life. My understanding is, not really, but it sure helps. You do want a strong type system. I think any language with some kind of trait/interfaces system will allow you to enforce what your tasks can do with values. They may not give you the full higher-kinded type (HKT) goodness of Applicative and Monad, but you should be able to land in the ballpark. You would do something similar by wrapping values into these “opaque” types and providing analogs to fmap, ap and bind. I have yet to find out if true HKTs are necessary for the purposes of this paper.

Haskell is the cherry on top because the paper can assume referential transparency of tasks. In the Haskell implementation it is much more difficult for a task to depend on the state of the world. It is impossible for IO to happen in the task, because the task never gets hold of the IO monad: it only ever sees the abstract f. It seems like a task could still use a pseudo-random number generator and fetch dependencies based on the results, but that is going out of your way to shoot yourself in the foot. There is no equivalent for this kind of enforcement in a language like Rust or Python. One of the ways Bazel gets close is by having a custom language (Starlark) with limited capabilities and APIs, and sandboxing execution as much as possible.

As a systems programmer, I have been poking around at whether it is even possible to implement the basic ideas in Rust. It is certainly going to be a lot more verbose, particularly if I try to be pedantic, instead of just avoiding writing accidentally monadic tasks. However, that is for another post, if ever.


  1. Totally valid technical term.

  2. All Monads are Applicatives and all Applicatives are Functors.

  3. Specific functors let you get the value out, but the type-class does not, and that is the key here.